OBHDP Editorial: Our framework for responsible AI use in research and reviewing
Responsible Collaboration with Artificial Intelligence in Organizational Scholarship: OBHDP’s Governance Framework for Authors and Reviewers
https://www.sciencedirect.com/science/article/pii/S0749597826000142
Direct download: https://osf.io/fw2nz/download
Generative artificial intelligence (AI) has rapidly evolved into a crucial tool for researchers. It has moved beyond simply extending a user’s capabilities and can now function as a virtual collaborator, offering creative suggestions and partnering on ideas. To ensure responsible AI integration in scholarly work, clear guidelines are essential for differentiating appropriate from inappropriate use. In the editorial linked above, we propose that when AI is used appropriately, it can be a legitimate collaborator, helping scholars refine ideas, stress-test arguments, and enhance the clarity of their writing. This responsible collaboration should complement, not replace, a scholar’s own reasoning and direct engagement with primary sources. As with any collaborator or research tool, all AI-generated suggestions must undergo independent evaluation and verification.
The editorial explains that authors may use AI for many tasks typically performed by human coauthors, including:
Discovery and Theorizing: Authors can use AI to surface alternative mechanisms, identify knowledge gaps, explore adjacent literatures, and clarify the contribution.
Methodology: Authors can use AI to gain efficiency in data cleaning, coding, and instrument feedback. AI is effective at writing code and debugging user-provided code, and it can help identify themes in large datasets, craft experimental manipulations, and provide feedback on survey items.
Writing: Authors can benefit from using AI to improve the clarity, flow, and internal consistency of their logic. Authors might also use AI to tighten their structure, eliminate typos, and ensure correct formatting for a particular journal.
The editorial also provides a strong cautionary note against specific AI practices. For example, using AI to generate substantial portions of the manuscript is discouraged, as this practice borders on “ghostwriting” and significantly raises the risk of accidental plagiarism. Instead, authors should treat AI-generated content as suggestions, verifying them meticulously before they are used to inform the final writing. In every instance of AI use, authors retain full responsibility and must exercise their own expertise, critical judgment, and due diligence.
Disclosure
To promote transparency and encourage authors to carefully consider whether they are using AI appropriately, OBHDP requires an AI disclosure during submission. During the review process, this disclosure is visible only to the editors; after acceptance, the disclosure will be published with the article.
Most AI uses can be disclosed through the form alone (e.g., writing edits, code debugging, organizing materials). However, when AI plays a methodologically meaningful role—such as functioning as a coder/rater for qualitative research—AI use should also be described in the manuscript so reviewers and readers can evaluate the method’s validity and reliability.
The policy also emphasizes within-team transparency. Authors must affirm that all coauthors have shared any consequential AI use with the rest of the team (e.g., what tool was used, for what purpose, and what verification steps were taken). This disclosure requirement is intended to promote a norm of collective responsibility for the work and promote thoughtful engagement with AI tools.
AI in the Review Process
Whereas our policy allows (and even encourages) AI usage by authors, our policy for reviewers is far more restrictive. Reviewers cannot upload any portion of a manuscript to an AI tool, nor should they rely on AI to evaluate any component of a manuscript.
Reviewers may, however, use AI as a learning resource to gain a clearer understanding of unfamiliar theories or methods, provided they do not upload any manuscript content. AI can also help reviewers refine their own written feedback, such as ensuring a constructive tone, identifying and correcting internal inconsistencies and redundancies, and serving as a check against potential biases in their assessment.
The Path Forward
Our goal is to realize AI’s upside by encouraging responsible collaboration that complements human expertise. We believe OBHDP’s framework will help ensure that scholars’ use of AI is a beneficial force in the production of high-quality science, rather than a hindrance.
We encourage you to read the full editorial to learn more.
