This issue of the BDJ contains a first. The article entitled 'Artificial intelligence in healthcare and education'1 has been written by the artificial intelligence (AI) software ChatGPT. There are many matters that we need to discuss surrounding the use of AI, both in dentistry and in the wider context of society, and this really is a beginning.


As with any new technology, it presents threats and opportunities. Notwithstanding, the overriding impression I have is that much of the fear and bewilderment is born of people in general (myself included) not fully understanding the extent of AI, what it can do and what it can't do. For the article in this issue, the authors have been completely transparent in that they have utilised the software to produce the text. The point is that they could have written the article themselves but chose (in this case as an exercise) to use AI as a tool to make the task easier (perhaps) and to demonstrate its capabilities. They have not edited it at all and neither have we. So, for example, it is written in US English (e.g. 'utilized' instead of 'utilised'), which in a very minor way illustrates the principle of garbage in, garbage out (GIGO), as touched upon by Professor Bornstein, also in this issue, in a BDJ Perspectives view.2 Not that I am suggesting the current AI article is garbage, only that it represents the output from the data sets available to it and the task which the authors asked of it. They could, for example, presumably have asked it to create the piece in UK English to align with the BDJ's style guide. In any event, the most important aspect is that the result is the AI software's unmodified view of the role of AI in healthcare and education and not necessarily that of the authors - or of the BDJ.

For me this is a central point. The application of AI as a tool is to be welcomed, but a danger is that its use is not made transparent, and the fear is that the tool gets out of control, going beyond its brief by apparently thinking for itself. The servant becomes the master. How likely this is to happen I do not know or understand, but the possibility of such a scenario needs further exploration. It may be that only time will tell. What we can predict with certainty is that whatever technology is invented and developed, humans will manipulate it both for good and for evil. Drones have revolutionised aerial photography and cinematography and are also dropping bombs on people. The internet has thrown up electronic fraud as never before experienced, but has simultaneously provided innumerable and hitherto unimaginable benefits from an apparently nerdy starting point of 'how interesting would it be to link lots of computers together?' It is the use to which we put innovation that can prove to be the threat, rather than the technology itself.

Here too there is a rabbit hole of misunderstanding. In recent years, the term 'digital dentistry' has emerged as a shorthand catch-all, and yet in many ways it is so loose a definition as to be of little value. AI stands on the threshold of a similar fate; part of the confusion is that no one really understands what is actually being discussed. We need to receive greater clarification, and we need to give enhanced detail about what we are describing and using, and how and why. In publishing, there are safeguards. We have for many years now used software to detect possible plagiarism in submissions. This has alerted us to cases in which our suspicions have led us to, at the very least, raise queries with authors, and to other instances where we have declined to proceed with any further consideration of the paper. We believe it has also acted as a deterrent, and so it is reassuring that there is now also software that can detect AI plagiarism.3


Further, in its guidance to editors, Springer Nature advises the following: 'Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. We thus ask that the use of an LLM be properly documented in the Methods section (and acknowledgements, if appropriate) of the manuscript. If a Methods section is not available, the introduction section (or another appropriate section) can be used to document the use of the LLM.' The world evolves.

While we have already begun our own AI collection (https://www.nature.com/collections/bjgaajjebd), one further matter is certAIn - we need to chat.