Sir, recent correspondence demonstrated that the ubiquitous artificial intelligence (AI) chatbot ChatGPT can be used to create submissions to journals and to pass written exams in the health professions.1

ChatGPT's success in passing exams has re-energised long-standing conversations about authentic assessment in health professional education;2 its authorship capabilities should similarly invite us to re-examine long-standing issues within academic publishing. AI bots such as ChatGPT can only produce text based on the texts available for them to learn from, as evidenced by examples of bots producing racist, misogynistic and vaccine-hesitant text, depending on the data they were trained on.3 This implies that the more extensive and stereotypical the set of texts an AI bot learns from, the more easily it can emulate that genre.

Published clinical research is a vast corpus, increasing in size every year, whilst at the same time displaying characteristics that limit its utility and relevance. It is this context, rather than the potential for plagiarism, that AI should force us to confront. AI can produce texts that emulate what it learns from, but it cannot think and it cannot innovate. It cannot develop new experimental methods, and it cannot imagine experimental results that differ from what it has previously seen. In academic publishing, including in dentistry, all AI can do is echo the prevailing publication bias. If publications produced with the aid of AI are a concern, then the best way to address this is to prioritise the publication of more diverse, controversial and innovative methodologies and results.