Reflections on APE 2026 and the ALPSP Berlin Seminar

One year ago, a few days before the inauguration of the 47th President of the United States and on the back of a period dominated by pandemic and economic instability, the scholarly publishing community gathered in Berlin for APE 2025 with a sense of trepidation about what lay ahead, and how it might affect the challenges we face as an industry.

Fast forward twelve months, and at this year’s APE 2026 conference – and the ALPSP seminar which followed – we found ourselves comparing notes and sharing a general feeling of bewilderment at what has transpired. We are living in an increasingly unpredictable and unfair world, with a sense that things have regressed in recent years – particularly in the last twelve months – rather than progressed. This year’s APE theme of ‘Scholarly communications at a turning point’ was entirely appropriate. Unfortunately, other commitments meant I was only able to attend the first day of APE.

The key challenges to scholarly communications haven’t fundamentally changed in the last year – namely, trying to keep pace with GenAI and maintain the integrity of the scholarly record – but the sense of urgency in the face of new realities is palpable. At both APE, once again hosted by ESMT Berlin, and at the ALPSP seminar, hosted by Wiley at its Berlin offices, there was a sense that now, more than ever, collaboration and fairness need to be at the centre of how we respond.

The opening keynote at APE was a powerful call to action by Caroline Sutton (STM), urging collaboration by actors within the scholarly ecosystem around their legitimate interests, and warning against monopolisation of power by any one actor. Checks and balances are integral to how science and research should operate – it’s why we traditionally rely on peer review, for instance. Caroline’s ‘thought experiment’ around the risks of one institutional actor – be that a government, funder or research institution – controlling employment, funding, agenda setting, evaluation and validation poses some very interesting questions.

The coercive risks of political power were explicitly discussed in the panel chaired by Lou Peck (The International Bunch), in which Ilyas Saliba (University of Erfurt) shared his research on academic freedom index indicators, including global trends showing a reversion in 2025 to 1973 levels of freedom (or lack thereof) compared with the progress seen up to 2006. This also demonstrates that, despite an acceleration driven by the dramatic events of the last year in the U.S. and repressive actions in other regions, the decline in academic freedom is not a new phenomenon.

Equity, collaboration and trust were themes developed at the ALPSP seminar, which was convened on the topic ‘Does AI help or hinder research integrity?’ The discussion focused on the practical and ethical considerations of using AI, rather than on the technical details of the AI tools themselves. The seminar was intentionally limited to a relatively small group of participants (around 25 by my reckoning) to stimulate open conversation, and the format worked really well. It was moderated by Jude Perera (Wiley) with insightful contributions from panellists Christian Kohl (PLOS), Thomas Metcalf (Institut für Wissenschaft und Ethik), Simone Ragavooloo (Frontiers) and Lukas Pollmann (Google Cloud).

Christian opened the discussion by making the point that the tasks for which scholarly publishers are increasingly using AI tools to help manage integrity risks at scale – such as checking patterns in networks and behaviour, and handling large amounts of data – rely on existing machine learning capabilities rather than advances in GenAI. Although he represents Google, Lukas was very open in his view that we (i.e. society) increasingly expect AI to solve complex problems which we as humans – with all the critical thinking faculties that AI currently lacks – have hitherto been unable to solve ourselves. Are we expecting (or fearing) too much from AI? After all, GenAI isn’t a sentient intelligence; it’s a data model and prompt engine prone to biases and hallucinations. As Christian pointed out, science typically needs systems which are logical and deterministic, whilst LLMs are, by their very nature, the polar opposite.

The ALPSP seminar raised a number of interesting – and potentially contentious – ideas. Firstly, complacency on the part of humans. There seems to be a degree of consensus within scholarly publishing that AI should be used to support humans, with humans kept in the loop and holding final responsibility for decision making, as opposed to AI making decisions independently. Simone gave an example of editors over-relying on a relatively basic ethics indicator checking tool, in some cases allowing papers through without examining the results in sufficient detail. This isn’t a problem with the technology; it’s a problem with human behaviour, and Simone acknowledged the need for ongoing training and editorial best practices. The group also discussed the (hopefully hypothetical) situation posed by Stephanie Dawson (formerly ScienceOpen) in which an editor working to KPIs based on annual publication targets *might* be less rigorous in responding to AI warnings on a potentially problematic paper as the end of year approaches. In such cases, should we perhaps be asking whether humans help or hinder research integrity, rather than AI?

Secondly, the idea of a presumption of guilt in cases of problematic manuscripts and potential malpractice. The core tenet of most justice systems is a presumption of innocence until proven guilty; but in scholarly publishing, as we try to keep up with the growing scale of junk submissions and new methods of fraud, the presumption seems to be one of guilt on the part of the author, albeit with a right of appeal through some form of review process or – as a last resort – through the courts if authors believe themselves to have been defamed. The group discussed how watchlists or blacklists could move the focus from tracking an overwhelming volume of problematic papers to a more manageable group of problematic authors. This makes sense for identifying persistent, repeat offenders whose activities merit suspicion and investigation. However, the problem with this approach of naming and shaming, whether publicly or privately amongst research integrity specialists, is the risk of a lasting reputational stain on the author – even when the accusation is later rescinded, as may happen for those who have made a genuine error through lack of training (or perhaps laziness in some cases). Thomas argued that presumptions of likely guilt – and the sanctions imposed if guilt is proven – should vary according to the subject discipline. Medical research fraud, for instance, could have life-or-death implications.

Thirdly, challenging the assumption – particularly at C-level – that adopting AI at scale reduces costs (usually by reducing headcount). As audience member Niamh O’Connor (PLOS) pointed out, the costs in scholarly publishing lie not in the output format (for example, the idea that online is cheaper than print and requires fewer resources, as one panellist suggested) but in the assessment of the work: manuscript triage (including integrity checks), peer review, and editorial decision making. Simone shared that the successful adoption of an in-house AI integrity checking tool across all publications at Frontiers, supporting their team of human integrity specialists, has actually resulted in additional headcount – staff are needed to deal with the tool’s outputs and to keep it trained on emerging risks, avoiding what Simone describes as ‘drift’, a gradual loss of the tool’s effectiveness.

The most immediate and serious AI risk to research integrity identified by the group, beyond intentional malpractice at scale through the misuse of GenAI, is a reliance by all actors in the ecosystem on the largest, most generic GenAI tools – such as ChatGPT, Claude, Gemini or Copilot – rather than on tools developed specifically to support research and scholarly writing. Researchers, in their roles as authors and peer reviewers, are already using these generic tools without disclosure to scholarly journals – regardless of the policies in place. If organisations such as publishers and institutions start to embed these generic commercial tools in their infrastructure, are the risks posed by poorly trained LLMs and a lack of attribution for cited works also embedded in the scholarly publishing process?

Also, if the large commercial GenAI vendors differentiate their products between paid-for tools, which affluent actors (typically in Western nations) can afford to use, and less well-trained free versions, on which researchers in low- and middle-income countries (LMICs) must rely instead, AI will contribute to growing inequity in scholarly publishing instead of being the democratising force it could be. As Godwyns Onwuchekwa (Global Tapestry Consulting) passionately argued, a reliance on GenAI by researchers and scholarly publishers could further disadvantage researchers in LMICs, who are already unfairly criticised for the quality of their submissions to journals due to a lack of access to AI technology, a lack of training in the scholarly publishing process, and the need to navigate that process in a second or third language (i.e. English).

Overall, the ALPSP seminar was an excellent discussion and a great opportunity to forge connections with some of the best people in our industry. In tandem with the APE conference, which had its largest turnout since the pandemic, these two events have cemented Berlin as the place to be in January for scholarly publishers. Congratulations to Ingo Rother (BISP), Wayne Sime (ALPSP) and their organising teams. If you are planning to be in Berlin next January, just remember to bring your woolly hat and thermals!

Next stop: Researcher to Reader 2026 in London on 24-25 February. For more details, visit https://r2rconf.com/ (Full disclosure: I am co-manager of the workshops at this year’s R2R event.)

Disclaimer: The views expressed in this blog post are those of the author, unless otherwise attributed to a third party, and are in no way intended to represent the views of any client of De Boer Consultancy SARL / EURL.

Kriyadocs

Jason attended the APE conference and the ALPSP seminar in his capacity as Growth Director with Kriyadocs, a leading publishing technology provider supporting pre-submission, peer review and post-acceptance production workflows for a growing number of scholarly publishers worldwide. With a focus on supporting equitable approaches for researchers and leveraging technology to provide a next-generation platform for publishers, Kriyadocs is committed to meeting the needs of the scholarly publishing community. For more details, please visit https://www.kriyadocs.com