Room: TBD


09:30-09:45 Welcome and introduction

09:50-10:20 Interest is all you need: rethinking peer review for ML conferences
Lars Kotthoff, University of St Andrews and Sorbonne Université

10:25-10:55 Peer review in the age of LLMs: evidence and interventions from AISTATS 2026
Arno Solin, Aalto University

11:00-11:30 Coffee Break

11:35-12:25 Group discussion on the future of peer reviewing

12:30-14:30 Lunch Break

14:35-15:05 Imprecise Markov semigroups and their ergodicity: a testbed for AI-assisted publishing best practices
Michele Caprio, The University of Manchester

15:05-15:35 From LaTeX to PubMed
Hoel Kervadec, University of Amsterdam

15:40-16:30 Group discussion on AI-assisted publishing

Abstracts


Interest is all you need: rethinking peer review for ML conferences

Lars Kotthoff

The ever-growing number of submissions to ML conferences puts more and more strain on the peer review system that is at the core of how we publish. Reviews are late, uninformative, or fraudulent; reviewers are stressed, overburdened, and have to review papers outside their areas of expertise. I propose to take a small step towards common practice in other scientific disciplines, where conferences are not the primary means of publication but a means of disseminating new ideas. In particular, I propose to use the bidding data when considering whether a paper should be accepted. Is there a lot of interest in the topic of the paper in the community, i.e. did many people bid on it, even though the work itself may not be mature? Or was nobody interested in the paper, even though the reviews indicate that it should be published? Moving towards a more lightweight review process where interest is one of the criteria for acceptance has the potential to address some of the current headaches in ML publishing.



Peer review in the age of LLMs: evidence and interventions from AISTATS 2026

Arno Solin

Large language models (LLMs) have been reshaping machine learning, not only in methodology and research focus but also in how ideas are written up, reviewed by the community, and published. The 29th International Conference on Artificial Intelligence and Statistics (AISTATS 2026) received 2102 submissions, with ~30% accepted for publication in the Proceedings of Machine Learning Research series. In this talk, we share the organizers' experiences of unwanted effects of LLMs on the reviewing and publication process: increased volumes of low-quality submissions generated by LLMs ("AI slop"), and cases where reviewers outsourced substantial parts of their reviews to LLMs. During the reviewing process, the organizers conducted an experiment in which LLM-generated reviews were automatically flagged and disregarded during decision-making. While LLM-assisted reviewing was explicitly forbidden at AISTATS 2026, the organizers recognize the benefits of controlled, fair, and systematic use of contemporary tools to improve the review process.



Imprecise Markov semigroups and their ergodicity: a testbed for AI-assisted publishing best practices

Michele Caprio

We introduce the concept of an imprecise Markov semigroup Q, a framework for representing ambiguity in both the initial distribution and the transition mechanism of a continuous-time Markov process. Rather than committing to a single model, Q encodes a collection of Markov semigroups -- each potentially arising from a different underlying process. Using tools from topology, geometry, and probability, we study the long-run behavior of Q and show that, under conditions that also reflect the geometry of the state space, the ambiguity can vanish asymptotically. We call this property ergodicity of the imprecise Markov semigroup, relate it to classical ergodicity, and provide sufficient conditions in settings ranging from Euclidean spaces and Riemannian manifolds to arbitrary measurable spaces. This contribution also serves as a case study for the future of publishing in machine learning, where AI systems increasingly participate in the production of theoretical results. In our project, an AI assistant played a material role in helping overcome a central technical obstacle: removing a compactness assumption on Q that previously limited the generality of the arguments. This experience highlights concrete publishing questions that are not yet standardized in ML venues: What constitutes adequate disclosure of AI assistance in proofs and derivations? What should be archived to support reproducibility when models and toolchains evolve (e.g., prompts, intermediate AI outputs, model/version metadata, and human-vetted proof outlines)? How should peer review adapt to AI-assisted reasoning, especially for critical intermediate lemmas where errors can be subtle, while preserving accountability and clear attribution? We close by proposing lightweight, review-friendly reporting practices (provenance notes, "AI contribution statements," and a minimal artifact package for theoretical work) aimed at improving transparency without burdening authors or reviewers.



From LaTeX to PubMed

Hoel Kervadec

In recent decades, the role of traditional scientific publishers has evolved: typesetting is now done in LaTeX, online publishing has replaced printing, and many open-access journals have appeared. However, in some areas traditional publishers have not yet been displaced, notably for the public archival of medical scientific literature on PubMed, maintained by the US NIH. Many US grants require that the published papers be uploaded there. This is currently a limitation for many open-access journals: archiving papers on PubMed is a difficult and time-consuming enterprise, but many authors cannot submit to journals that are not on PubMed. In this abstract, we present the work undertaken at the MELBA journal (Machine Learning for Biomedical Imaging) [1] over the past few years. Getting a journal accepted on PubMed proceeds in two stages:
- a scientific evaluation, where the scientific rigour and relevance of the journal is assessed;
- a technical evaluation, where the format of the papers (a custom JATS/XML flavour) is verified.
The scientific evaluation requires the journal to be a few years old, with a sufficient number of published papers, and with policies on retraction and other ethical aspects clearly defined. Failing this evaluation requires the journal to wait two years before re-applying. The technical evaluation ensures that the articles are properly formatted in an archival format: the Journal Article Tag Suite (JATS), defined in XML [2]. Converting articles originally written in LaTeX into this format, without inducing extra labour for each published article, is challenging. LaTeX being a macro-based programming language, it lacks the structure of XML, and automatic transpiling is far from trivial. The freedom given to authors in how they implement their prose and figures creates many corner cases that simple pattern matching cannot handle.
Over the past two years, a custom end-to-end solution was developed, re-using existing open-source tools together with custom code. We are currently in the last phase of validating the converted articles before publication. Our current solution (to be open-sourced) works in several stages:
- the LaTeX sources of the authors are collected (not the PDF);
- LaTeXML [3] does a first conversion from LaTeX to HTML; due to incompatibilities with TeX Live, it is version-pinned;
- Pandoc [4] converts the HTML to JATS; the resulting JATS is not acceptable as-is by the PubMed style-checker [5];
- a post-processing script cleans, fixes, and fills in the missing metadata, while also packaging the figures and graphics of the article. Some of the errors fixed at this stage actually stem from improper use of LaTeX by the authors. This script fully reconstructs the XML tree, discarding, fusing, and editing tags and their attributes as needed;
- a final proof-check is done through a preview tool provided by PubMed.
From a technical perspective, this complex conversion pipeline is far from satisfactory, but perfection should not be the enemy of the good enough: once the journal is formally published on PubMed, a rewrite and simplification of the conversion can be planned and carried out without time pressure.

[1] https://www.melba-journal.org/
[2] https://jats.nlm.nih.gov/publishing/tag-library/1.2/attribute/ref-type.html
[3] https://github.com/brucemiller/LaTeXML
[4] https://pandoc.org/
[5] https://pmc.ncbi.nlm.nih.gov/tools/stylechecker/
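To make the post-processing stage concrete, here is a minimal, hypothetical sketch of the kind of XML-tree surgery it performs: filling in journal metadata that Pandoc cannot know about and discarding empty tags left over from the LaTeX-to-HTML-to-JATS chain. The function name, the chosen fixes, and the "melba" identifier are illustrative assumptions; the actual MELBA script is more extensive.

```python
# Hypothetical sketch of a JATS post-processing step; the real MELBA
# script reconstructs the whole tree and handles far more cases.
import xml.etree.ElementTree as ET

def postprocess_jats(xml_text: str, journal_id: str = "melba") -> str:
    """Fill in missing journal metadata and drop empty <p> elements
    from a Pandoc-produced JATS document."""
    root = ET.fromstring(xml_text)

    # Fill in journal metadata that the generic converters cannot know.
    journal_meta = root.find("front/journal-meta")
    if journal_meta is not None and journal_meta.find("journal-id") is None:
        jid = ET.Element("journal-id", {"journal-id-type": "publisher-id"})
        jid.text = journal_id
        journal_meta.insert(0, jid)  # journal-id comes first in JATS order

    # Discard empty <p> tags, a common artifact of the conversion chain.
    for parent in list(root.iter()):
        for child in list(parent):
            if child.tag == "p" and not (child.text or "").strip() \
                    and len(child) == 0:
                parent.remove(child)

    return ET.tostring(root, encoding="unicode")
```

A real pipeline would apply dozens of such passes (reference `ref-type` attributes, figure packaging, affiliation tags) before handing the result to the PubMed style-checker.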