
AI Wrote This (Maybe): Highlights From the Webinar on Equipping Peer Reviewers to Detect AI

What do we do when it seems like AI might take over how peer review is conducted? Is it a threat to the future of academia, or can it be harnessed to empower global research? IGI Global Scientific Publishing, along with eContent Pro and Reviewer Credits, took on this year’s theme for Peer Review Week by hosting the webinar, “Human in the Loop: Equipping Peer Reviewers to Detect AI-Generated Content.” This event, sponsored by The Open Science Education Institute (OSEI), explored these very questions, along with the idea of training reviewers, editors, and copyeditors to detect AI in manuscripts.
I’d like to take this moment to thank the incredible panelists for this event: Dr. Gareth Dyke, Dr. Tim Vines, Dr. Haseeb Irfanullah, and Dr. Duncan Vinson. The insights shared during the webinar were truly invaluable to the scholarly community.
Hear from the Peer Review and AI Experts Themselves
During the webinar, the panelists answered carefully selected questions that opened a dialogue around AI use in research and guided participants through identifying AI use in submitted manuscripts. Below, we recap key points from the discussion and highlight takeaways that merit further conversation.

Question 1: “What specific skills or knowledge do you believe peer reviewers and editors need today to responsibly work with or evaluate AI-generated content?”
Dr. Dyke led the first question by emphasizing that reviewers need to understand what AI is and how it generates content, including how it writes text and produces images. Reviewers first need to know which tools their editors and colleagues are using, because they cannot spot AI-generated content without understanding how it is produced in the first place. He also pointed out that the sheer number of tools makes it extremely confusing to know what is out there and who is using what.
Dr. Irfanullah re-emphasized that reviewers need to work with AI themselves to recognize it, while acknowledging that the AI tools in use may differ around the globe. He noted that it is the publisher’s role to run AI detection before a manuscript is passed to reviewers and editors, an opinion that many of the panelists would go on to echo.
Question 2: “What are the most common indicators that a manuscript may have been partially or fully generated by AI?”
Dr. Vines reminded us that LLMs are essentially “autocomplete on steroids”: they are remarkably good at predicting the most likely next word when given a string of text. What is crucial to remember is that when AI writes articles, it writes very well, in clear, concise prose, but it has no idea what it is writing about. He offered a memorable analogy to drive this point home: intelligence is knowing a tomato is a fruit, while wisdom is knowing not to put a tomato in a fruit salad. AI lacks wisdom. Thus, AI-generated articles will read well but will also be average and follow predictable patterns.
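To make the “autocomplete on steroids” idea concrete, here is a toy sketch in Python (our own illustration, not something shown in the webinar) of next-word prediction with a simple bigram model. Real LLMs condition on far longer contexts with billions of parameters, but the core task, guessing a likely next word, is the same:

```python
# Toy "autocomplete": given the preceding word, pick the word that most
# often follows it in a tiny corpus. This is vastly simpler than an LLM,
# but it illustrates next-word prediction without any understanding.
from collections import Counter, defaultdict

corpus = ("the results suggest that the proposed method outperforms "
          "the baseline and that the proposed approach is robust").split()

# Count which words follow each word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))       # -> 'proposed' (follows 'the' twice)
print(predict_next("proposed"))  # -> 'method' (ties with 'approach'; first seen wins)
```

Scaled up enormously, this is why LLM prose can be fluent and statistically average at the same time: the model always leans toward the most probable continuation.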
Dr. Vinson elaborated on specific examples of ways in which he has spotted AI in manuscripts he copyedits, including:
- Sudden shift in voice
- Tells in punctuation
- Burstiness (LLM-generated English sentences tend to be uniformly long, whereas human writing mixes short, concise sentences with more complex clauses; a rough illustration follows this list)
- Compound sentences with overused phrases (such as “not only… but also” and “so as to”)
- Fake references (an entry may list a real author, a title built from keywords found in that author’s research, and a real journal, yet the DOI links to a different article or to no article at all)
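To illustrate the burstiness point above, here is a rough Python sketch (our own heuristic, not a detection tool discussed in the webinar) that quantifies how uniform a text’s sentence lengths are. A high mean with a low spread can be one weak hint, never proof, of LLM-generated prose:

```python
import re
import statistics

def sentence_length_stats(text):
    """Split text into rough sentences and report the mean length and
    spread (standard deviation) in words. Uniformly long sentences
    (high mean, low spread) are one weak hint of LLM prose."""
    sentences = [s for s in re.split(r"[.!?]+\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    mean = statistics.mean(lengths)
    spread = statistics.stdev(lengths) if len(lengths) > 1 else 0.0
    return mean, spread

sample = ("It failed. We tried again, adjusting the buffer concentration, "
          "and the second run produced a clean signal. Odd.")
mean, spread = sentence_length_stats(sample)
print(f"mean: {mean:.1f} words, spread: {spread:.1f}")  # bursty, human-style text
```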
Dr. Vinson voiced particular concern over fake references, as they can look perfectly fine to the casual eye and are often only uncovered by experts in that particular field or through close scrutiny.
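Fake references are also one of the few tells that can be spot-checked semi-automatically. As a minimal sketch, assuming the third-party `requests` library and a hypothetical reference entry (the DOI and title below are placeholders), one could query the public Crossref API to confirm that a cited DOI actually resolves to the claimed article:

```python
import requests  # third-party: pip install requests

def check_doi(doi, claimed_title_fragment):
    """Look up a DOI on the public Crossref API and compare the
    registered title with what the reference list claims. A mismatch
    or an unresolvable DOI warrants a closer look, not a verdict."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return f"DOI {doi} not found in Crossref"
    registered = resp.json()["message"].get("title", [""])[0]
    if claimed_title_fragment.lower() in registered.lower():
        return f"OK: DOI resolves to {registered!r}"
    return f"MISMATCH: DOI resolves to {registered!r}"

# Hypothetical entry from a reference list (placeholder DOI and title):
print(check_doi("10.1234/example.5678", "peer review"))
```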
Both Dr. Vines and Dr. Vinson expressed concern that humans won’t be able to keep up with spotting AI-generated text and that, as new tools are developed to detect AI, AI users will continue to take countermeasures to avoid being recognized.
Question 3: “What are common indicators for determining if images/figures have been AI-generated?”
Dr. Vinson explained that the first step he takes is to look for discrepancies between the body text and the figures, checking whether the figures accurately reflect what the text describes. He noted that sometimes the text can be garbled or unrelated to the article’s main content. He also pointed out that inconsistencies among figures—such as differences in typeface, color schemes, or layout—can be red flags. Overall, he emphasized that detecting AI-generated figures is significantly harder than identifying LLM-generated text, as figures lack the distinct linguistic patterns or “tells” often present in AI-written prose.
Dr. Dyke echoed the importance of ensuring alignment between the figures and the text, as well as verifying the sources of the data and the origins of each figure. He also stressed that responsibility should not fall solely on peer reviewers or editors; publishers, he argued, ought to use the wide range of tools available to them to screen for these issues in advance.
Question 4: “How effective are current AI detection tools in identifying AI-authored content, and what are their limitations? Should they be used in the peer review process?”
Dr. Vines opened the discussion by asking where we draw the line on AI usage in manuscript writing. In his opinion, it is absolutely fine to use AI to polish text, especially if English is not the author’s first language. He emphasized that detecting this kind of assistive use and then punishing researchers for it would be wrong. What matters, he said, is catching those who use AI to create the impression that actual research has been done. He called for AI detection tools that are good at detecting AI-generated text without falsely accusing people who haven’t used it.
Dr. Irfanullah backed Dr. Vines’ answer but also asked how we can tackle detection as AI output becomes ever more human-like. False positives are abundant, he noted, and using one’s own judgment remains the best course of action.
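A quick back-of-envelope calculation shows why abundant false positives are so corrosive. The numbers below are purely illustrative assumptions, not figures from the webinar:

```python
# Illustrative assumptions: the detector flags 95% of truly AI-generated
# manuscripts, wrongly flags 2% of human-written ones, and 1 in 10
# submissions is actually AI-generated.
submissions = 10_000
ai_written = 0.10 * submissions            # 1,000 manuscripts
human_written = submissions - ai_written   # 9,000 manuscripts

true_flags = 0.95 * ai_written             # 950 correctly flagged
false_flags = 0.02 * human_written         # 180 wrongly flagged

wrongly_accused = false_flags / (true_flags + false_flags)
print(f"{wrongly_accused:.0%} of flagged manuscripts are human-written")
# Even with these generous rates, roughly 1 in 6 flagged papers is innocent.
```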
Question 5: “When AI-generated content is suspected, what is the appropriate course of action for a peer reviewer, editor, or copy editor to take?”
Dr. Dyke advised first going back to the journal to see what its guidelines on AI use in manuscripts are. If the publisher is perfectly fine with AI use in manuscripts, then no further action is needed. If not, talk to the editor and, as a final step, go back to the authors to ask for clarification.
Publishers have wildly different guidelines on the use of AI, including whether authors must declare AI use in the generation of a manuscript, which is confusing for researchers. Thus, according to Dr. Dyke, publishers need to unify in guiding researchers on how much AI can be used, which tools are acceptable, and how to declare them.
Dr. Irfanullah answered that suspicions about text will keep arising and that these are hints, not verdicts. Authors should be spoken with to hear their version of events. Training is needed from publishers, organizations, or associations of publishers. Moreover, authors’ institutions also share responsibility for educating them on how to use AI.
Question 6: “How can publishers and editorial leadership work together to build effective AI training programs for reviewers?”
Dr. Vinson and Dr. Dyke answered that the key is everyone working together in the same space. Dr. Vinson emphasized that we need to use AI tools and play with them to get a sense of what they can do. Dr. Dyke reiterated that publishers need to unify guidelines for peer reviewers and build better training programs. He also noted that publishers tend to view AI more as a threat than an opportunity, and that for peer review the reverse might be true: used well in the peer review process, AI presents a great opportunity to cut down on the work required of the humans in the loop.
Question 7: “What best practices can authors follow to ensure that when they do utilize AI tools in their research and writing, they are doing so ethically and not risking rejection or retraction of their work?”
For Dr. Vinson, authors must remember that authorship carries responsibility for making decisions; simply put, no AI tool can be an author. He advised against using LLMs for literature reviews and against relying on them for anything to do with citations, or, at the very least, he urged authors to check the references thoroughly. Most importantly, authors need to be sure they are comfortable with the text, as it is their name on the work.
Sometimes, said Dr. Vines, it seems as though authors let AI rewrite their articles and then never read them afterward, which seems absurd after all the work they put in. He advised reading the article before submitting it and not using AI for literature reviews, and he reminded authors that they must take responsibility for the text.
Question 8: “Do you believe it’s possible for AI-generated content to meet academic standards if disclosed transparently? Why or why not?”
Dr. Irfanullah questioned what we mean by content meeting academic standards, as academic standards span a huge spectrum. He argued that the current human-dependent system is already “exploited,” and said he would be 100% in favor of AI use in the peer review process.
If AI is making up the author’s data, Dr. Vines said, it is not possible for the work to meet academic standards. AI should never be used to make figures, unless they are purely illustrative, because figures are based on data. AI should absolutely be used to improve text, however, provided that use is disclosed upon submission.
“Being transparent,” he reminded everyone, “is actually a very powerful signal of research integrity.” If authors don’t disclose AI use, it makes people suspicious and leaves them wondering what the author is trying to hide.
Q & A
During the Question & Answer portion, one participant raised an important question: “What should authors do if they catch a peer reviewer using AI to write the review?”
The panelists agreed that, first, such a review should never reach the author; if it does, the editor is not doing their job. Of course, the editor may not have been trained to detect that the review was done with AI. Having AI perform the review is unacceptable in any case, because the reviewer would have had to upload the manuscript to the AI tool without the author’s permission. The author should get back in touch with the editor, who can in turn bring the matter to the attention of the publisher.

Key Takeaways
The webinar generated fascinating dialogue and insights on AI detection and responsibility. While the webinar has passed, the discussions that arose from it do not end there. As we walk away from this panel, we should remain curious and ask ourselves:
- Should publishers have AI detection implemented into the workflow, and where in the workflow should this be placed? Many seem to have incorporated it after the peer review process. Is this too late?
- Can AI detection tools even be trusted, or will false positives continue to flag perfectly acceptable AI use, while false negatives increase as AI users find ways to circumvent the detectors?
- Is there a missed opportunity for AI to assist with the peer review process?
- Should there be one standard, unified guideline that all publishers follow when it comes to AI use in publishing?
- How can better training programs be built and by whom?
After participating in this webinar, it is my opinion that publishers, institutions, peer reviewers, and editors will all have to work together in a continual process of checks and balances. Institutions need to educate their authors on how to use AI ethically for writing assistance, publishers need to find better ways of building checks into publishing workflows, and peer reviewers and editors will need to serve as a safeguard that catches what detection tools cannot. Overall, we need more overarching and definitive guidelines on what constitutes acceptable AI use and on what steps to take when AI misuse is detected. It is clear that only by working collaboratively can we preserve research integrity.
By the way, AI wrote one section of this article. Were you able to spot it?
Answer: AI rewrote the recap for Question 3 (the content was checked to ensure its meaning wasn’t altered, and it was not).
Interested in becoming a peer reviewer for an IGI Global Scientific Publishing publication? Please fill out the form here.
Discover open access articles about Peer Review on the AGOSR database:
Purposes of peer review: A qualitative study of stakeholder expectations and perceptions
Peer Review; Critical Process of a Scholarly Publication
Use of ChatGPT to Explore Gender and Geographic Disparities in Scientific Peer Review