New article in OBHDP (May 22nd, 2025)
The transparency dilemma: How AI disclosure erodes trust
Oliver Schilke, Martin Reimann
https://www.sciencedirect.com/science/article/pii/S0749597825000172
As generative artificial intelligence (AI) has found its way into various work tasks, people have begun to question whether its usage should be disclosed. On the one hand, disclosure heightens perceptions of transparency, which should make a positive impression on others. On the other hand, norms around AI usage are still developing, and some applications could be considered inappropriate, which may contribute to a negative impression of the discloser. Speaking to these dynamics, this article explores whether disclosing AI usage affects others' trust in those users. The authors examine the impact of AI disclosure on trust across diverse tasks, from communications to analytics to artistry, and across different actors, such as supervisors, subordinates, professors, analysts, creatives, and investment funds. Thirteen experiments consistently demonstrate that actors who disclose their AI usage are trusted less than those who do not. Drawing on micro-institutional theory, the authors argue that this reduction in trust can be explained by reduced perceptions of legitimacy. Moreover, they demonstrate that this negative effect holds across different disclosure framings, above and beyond algorithm aversion, regardless of whether AI involvement is known, and regardless of whether disclosure is voluntary or mandatory. This article contributes to research on trust, AI, transparency, and legitimacy by showing that AI disclosure can harm social perceptions. These results emphasize that transparency is not straightforwardly beneficial, while also highlighting legitimacy's central role in trust formation.
P.S. If you can't access the full text, let us know (m-kouchaki@kellogg.northwestern.edu or mikebaer@asu.edu) and we'd be happy to share a copy.