Vice President – Management
“Certainly, here’s a blog post highlighting the challenges a scholarly publisher could face if authors use GenAI to write research papers.”
I didn’t use ChatGPT to write this post, but I was curious to see what it would say. And if I had used it without thinking to edit it, I’d be facing an irked marketing team.
So what happens when authors writing research papers don’t remember to erase signs of AI help?
Over the last year or so, our copy editors have been finding an increasing amount of AI-generated content in the manuscripts we receive from scholarly publishers. While it’s nearly impossible to tell exactly what was written by a human and what by ChatGPT or other tools (they are getting very good, very fast), certain phrases are dead giveaways.
When this problem began to come to light, we put a flagging process in place and began highlighting these AI-laced papers to our customers, who then took the necessary action to ensure research integrity within their publications.
But we needed to make this more efficient and ensure nothing slipped through the cracks.
So we wrote a simple script.
Neel Sinha, our Head of Technology, says: “We built a script that plugs into our completeness and usability check tool, to scan for phrases that follow a repetitive pattern and are telltale signs of AI-generated content. The tool is already looking for missing details and checking the integrity of all elements in the manuscript; it now also gleans identifiable AI-generated copy.”
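A phrase check of this kind can be sketched in a few lines of Python. The patterns below are hypothetical examples chosen for illustration (one echoes the “Certainly, here’s…” opener quoted at the top of this post); the actual list our script uses is not reproduced here.

```python
import re

# Hypothetical giveaway phrases for illustration only; the production
# script maintains its own, regularly updated list.
AI_PHRASES = [
    r"certainly,\s+here(?:'|’)s",
    r"as an ai language model",
    r"it is important to note that",
    r"i hope this helps",
]

PATTERNS = [re.compile(p, re.IGNORECASE) for p in AI_PHRASES]

def flag_ai_phrases(text):
    """Return (matched_phrase, character_offset) pairs, in document order."""
    hits = []
    for pattern in PATTERNS:
        for match in pattern.finditer(text):
            hits.append((match.group(0), match.start()))
    return sorted(hits, key=lambda hit: hit[1])

manuscript = "Certainly, here's a revised abstract. It is important to note that results vary."
for phrase, offset in flag_ai_phrases(manuscript):
    print(f"Possible AI phrasing at offset {offset}: {phrase!r}")
```

In practice a check like this would run as one step inside the completeness and usability tool, reporting flagged passages for a human editor to review rather than rejecting anything automatically.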
Some of the phrases include:
We follow a three-step process in our effort to ensure research integrity, and as we identify more phrases, we will continue to update the script. We also keep our teams up to date on what to look out for through workshops and knowledge sharing.
Let’s face it: GenAI and all its trimmings are here to stay. Every now and then, authors will inadvertently leave in AI-generated content. And while we are nowhere close to solving the problem of sophisticated AI manipulation, simple solutions can address some of the simpler challenges.
Guidelines and Training:
Deciding on your policy around the use of AI, then communicating it clearly across your channels, is step one. Focus on training and reinforcing the ethical behaviour expected from your editors and authors, insisting on things like disclosure statements. This must be woven into the submission process.
Ethical Oversight and Compliance:
A dedicated committee should oversee ethics in relation to AI, make decisions on violations, and ensure compliance and continuous updates to policies and procedures.
Using Technology Tools:
While there is no silver bullet, simple and innovative tools can bring in new layers of scrutiny. These tools need to be in place throughout your publishing workflow – right from submission, all the way through to proofing and publication.
This is what ChatGPT thinks I should have included in the blog post (which I asked it to generate after I had already written it!).