From 5-8 August 2025, we attended and spoke at the International Society of Managing and Technical Editors (ISMTE) 2025 annual conference in the vibrant city of Montréal, Canada. The event brought together managing and technical editors from across the scholarly publishing industry, the people who manage the peer review process for publishers and scholarly societies.
There were three of us in attendance from Crossref: Madhura Amdekar (Community Engagement Manager), Kathleen Luschek (Technical Support Specialist), and Dima Safonov (Senior Software Developer). The event was particularly special for us because it was the first time we had met Dima in person. That convivial spirit carried through the conference as we met colleagues from across the industry and struck up conversations with them.
Given the impact of AI on editors and editorial offices, it was no surprise that the first plenary talk of the conference was given by Robert Siemens and Giovanni Cacciamani on “AI in Scholarly Publishing: policies, pitfalls, and possibilities”. They outlined the challenges journals face in this context, such as the increasing use of AI by authors and reviewers, and asked whether open peer review could help combat this and promote transparency. Journals’ AI guidelines also vary widely, and the CANGARU initiative aims to provide a standardized set of guidelines the scholarly community can use to report the use of ChatGPT and other LLMs.
Another LLM-related session was “Detecting and Dealing with AI Data Manipulation” by Alice Meadows and Aashi Chaturvedi, which explored the current state of fraud detection in images. Image manipulation in scientific research has long been a concern, but in the last few years LLMs have made it easier to perpetrate fraud at scale. It is difficult, if not impossible, to reliably detect manipulation in images, so one potential approach is to have instruments cryptographically sign their results before the data ever leaves the instrument, making any subsequent tampering easy to detect.
Several sessions offered practical tips to support the work of editors. The breakout session “That Journal Article is Published. Now What?” covered how authors and editors can promote published work by leveraging social media, using video abstracts, and adopting other innovative methods such as podcasts to build community connections. Another session, “Reporting and Analysis”, covered the data and reports most frequently requested by a journal’s stakeholders. The speakers suggested using a standard framework for designing reports and incorporating key variables.
We organised a breakout session on “Preserving the Integrity of the Scholarly Record: The Role of Post-Publication Updates”. We gave an overview of the role of scholarly metadata as trust signals. By telling us who authored a work, who funded it, whether it was updated after publication, and more, metadata can be very informative. Retractions and other post-publication updates have repeatedly been highlighted as key metadata signalling integrity. Crossref provides infrastructure, including the Crossmark service, that allows its members to record retractions and other types of updates. Since the acquisition of the Retraction Watch database, retractions from both Crossref members and Retraction Watch can be accessed via Crossref’s REST API.
Caitlin Bakker, co-chair of NISO’s CREC working group, highlighted how publications are not consistently marked when they are retracted, which means they continue to be cited and to underpin future research. To promote consistency in the dissemination of retraction information, the CREC working group released a recommended practice (RP) containing best practices for the creation, transfer, and display of metadata on corrections, retractions, and expressions of concern. Since the release of the RP, the group has been engaging with the community to encourage discussion of the processes publishers and vendors currently use to update and deposit this metadata.
Kathleen rounded off the session by sharing best practices and methods for registering retractions with Crossref. She showed how to update the metadata for the original DOI, and shared several examples of API queries for retrieving retraction metadata, e.g. retractions created in a specific year or registered under a specific prefix.
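As a flavour of what such queries look like, here is a minimal sketch in Python that builds Crossref REST API `/works` URLs using the documented `update-type`, `from-update-date`, `until-update-date`, and `prefix` filters; the `10.5555` prefix below is a placeholder, not a real member prefix, and the exact queries Kathleen demonstrated may have differed.

```python
# Sketch: composing Crossref REST API queries for retraction metadata.
# Filter names follow the public Crossref /works filter documentation;
# the example prefix (10.5555) is hypothetical.
from urllib.parse import urlencode

BASE = "https://api.crossref.org/works"

def retraction_query(year=None, prefix=None, rows=20):
    """Build a /works URL filtering for retraction update records."""
    filters = ["update-type:retraction"]
    if year is not None:
        # Restrict to updates recorded within the given calendar year.
        filters.append(f"from-update-date:{year}-01-01")
        filters.append(f"until-update-date:{year}-12-31")
    if prefix is not None:
        # Restrict to works registered under a specific DOI prefix.
        filters.append(f"prefix:{prefix}")
    params = {"filter": ",".join(filters), "rows": rows}
    # Keep ':' and ',' unescaped so the filter string stays readable.
    return f"{BASE}?{urlencode(params, safe=':,')}"

# Retractions recorded in a specific year:
print(retraction_query(year=2024))
# Retractions registered under a (hypothetical) prefix:
print(retraction_query(prefix="10.5555"))
```

The URLs can then be fetched with any HTTP client; the response is JSON, with each item carrying an `update-to` relationship pointing at the DOI of the work being retracted.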
When the audience was polled about which metadata they consider important for signalling trust, author names and affiliations, ORCID iDs, funder information, the CRediT taxonomy, and provenance information featured prominently. Attendees also told us they appreciated hearing about the infrastructure that supports registering retraction information, a key piece of metadata for signalling trustworthiness. We would like to continue engaging with the editorial community about which metadata matter to them as trust signals, and we are especially interested in how metadata is being leveraged as part of editorial workflows. Keep watching this space for more on this!

