Sir, artificial intelligence (AI) is arguably one of the greatest inventions of the twenty-first century and has already affected every aspect of our lives, from driving to healthcare. Increasingly, medical equipment incorporates AI that allows smart monitoring and/or management of medical conditions (eg diabetes,1 dental monitoring2) with little or no human intervention.

ChatGPT (Chat Generative Pre-trained Transformer), an AI chatbot developed by OpenAI, was introduced in November 2022. ChatGPT is trained to engage in human-like conversation and can answer questions, including clinical questions that clinicians encounter in their day-to-day practice. This raises the question of whether we can trust ChatGPT to produce scientific evidence for clinical decision making. In other words, can ChatGPT replace standard evidence synthesis practice, which involves literature searching, critical appraisal, data collection and combining the findings of several studies to derive an evidence-based answer to a clinical question?

We put this exact question to ChatGPT, and the response was:

'As an AI language model, I am designed to generate text based on the patterns and relationships that I have learned from the input data I was trained on. While I can generate text that is related to scientific evidence and clinical decision-making, it is important to note that I am not a substitute for professional medical advice, diagnosis, or treatment.

'When it comes to using text generated by AI language models for clinical decision-making, it is important to exercise caution and carefully evaluate the information presented. While language models like GPT have shown great promise in natural language processing tasks, they are not perfect and can make errors or generate misleading or inaccurate information.

'Therefore, it is recommended to always consult with qualified medical professionals and rely on peer-reviewed scientific literature when making clinical decisions. AI language models like GPT can be used as a tool to help inform clinical decision-making, but they should not be the sole source of information or guidance.'

The AI-generated response to this important question is rational. Clinicians should consider the limitations of AI-powered tools such as ChatGPT and use them in conjunction with, not as a replacement for, standard evidence synthesis practice. In addition, AI can only make judgements based on its input data, whereas the human brain can generate innovative ideas to resolve simple or complex problems, which is the basis of most human inventions. In future, we may see more powerful chatbots with more advanced capabilities, such as GPT-4. However, they seem unlikely to replace the human brain, and human input and intervention will still be needed.