Lately, there have been lots of well-researched, insightful and technologically rigorous pieces on AI in the scholarly publishing industry. This isn’t one of them.

The starting point for this personal piece is an internal document from IBM, dating from sometime in the late 1970s, which was doing the rounds on social media a little earlier this year. People didn’t seem to be sure whether this ancient artefact from ye olden days was genuine:

I decided to consult an expert source in the field – my dad, Barry De Boer, a highly respected software industry professional whose long career started with IBM in the 1960s, involved selling business management systems to IBM in the 1970s, and went on to senior IT project management roles with the likes of Oracle from the 1980s through to the late noughties. Now retired, Barry confirmed that the edict in that 1970s IBM document was indeed genuine. The computer was only ever intended to be a decision support machine, and absolutely not to be trusted as a decision maker. As the edict says, who would be held accountable if the decision was wrong?

In fact, computing engineers back in the day went further. In the 1960s, Barry recalls, staffers at IBM mused on whether computers should be referred to as TOMs: Thoroughly Obedient Morons.

Fast forward to where we are now, both within the scholarly publishing industry and more broadly across society, and that notion of the obedient computer is up for serious debate.

Even at a fundamental level, ask a digital native in their 20s or teens what they are using to check messages, consume information or watch entertainment, and I doubt many would call their device a ‘computer’. And most of them probably wouldn’t question the use of AI in their devices to deliver content or shape their user experience. In fact, many of them may be oblivious to it.

Here in France, a government-backed AI tool trumpeted as challenging the likes of OpenAI – LUCIE – had to be hastily withdrawn earlier this year after it advised the French public to eat cow’s eggs as a ‘source of protein and nutrients’.1 Most of us (I would hope!) know how nonsensical this advice is. But would a modern-day teenager be so quick to spot the idiocy, if they are so dependent on their devices and 30-second video reels for their information?

Beyond the smartphone (another term destined for the history books) and consumer apps, adoption of AI still seems to be a largely conscious act on the part of most of us – certainly in our professional activities. With the proliferation of GenAI engines, more and more of us are using prompts to search for information or to help generate content. We are asking the machine to help us create better outputs and (hopefully) make better decisions.

For some time now, scholarly publishers have recognised the need for automated tools to help inform decision making, especially when trying to triage manuscripts or check for integrity issues at scale. There also seems to be a growing acceptance within the scholarly publishing community of GenAI as a tool to assist researchers in writing their papers (particularly authors for whom English is not their native language) and, a little more contentiously perhaps, to assist peer reviewers in writing their feedback.

But we are at a tipping point, both in scholarly publishing and societally. In our industry, technology vendors are actively promoting AI-assisted peer review. Not AI-assisted writing; AI-assisted review. One vendor I met with recently posited the idea of a ‘third peer review’ that is entirely AI generated, to provide a counterpoint to the two human reviewers.

We are getting into the realm of machines making decisions – and serious questions about accountability. How big is the leap from AI-generated peer reviews to a dystopian AI-powered government? If that sounds far-fetched, take a look at recent developments in the US since January 20th. If federal government employees are being laid off at scale, apparently to be replaced by more efficient AI technologies fed by data sharing across federal agencies, who will be making the decisions that affect Americans’ everyday lives? Is big tech watering down its AI ethics pledges so that it can bid for government contracts down the line?2

Where is the ‘human in the loop’ that most people in the scholarly publishing industry believe is critical to the successful and ethical adoption of AI technologies?

And will those humans left in the loop actually be empowered to make decisions informed – but not directed – by what the machines are telling them? In the event of a catastrophic failure of technology, there is usually a witch-hunt for a human scapegoat to be held accountable. It’s rarely the fault of the tech, except perhaps when cow’s eggs are being recommended as a dietary supplement.

I hope I am wrong about where we are headed with AI. I hope that these are just the musings of a 52-year-old Luddite hunkering down in rural France. I must admit, I have yet to (knowingly) prompt a GenAI engine to help me write anything (including this piece). I have no idea what Copilot wanted to do when I opened Microsoft Word to start writing this post, but I ignored it. At least I hope I did.

At the other end of my family story, I have two very young, very brilliant nieces back in the UK, who are kept busy with a diverse range of real-world hobbies, and who thankfully don’t spend all day on their devices. At least not yet!

But I wonder whether they will grow up not even questioning the idea that the technology embedded in their daily lives tells them what to do, rather than helping them make better decisions. They won’t go to the shops asking for cow’s eggs, but I hope TOM won’t be staring back at them in the mirror.

~~~

1 https://edition.cnn.com/2025/01/27/tech/lucie-ai-chatbot-france-scli-intl/index.html

2 https://www.cnbc.com/2025/02/04/google-removes-pledge-to-not-use-ai-for-weapons-surveillance.html

Featured image is Michael Caine as Harry Palmer (not TOM) in the 1967 movie Billion Dollar Brain