Introduction

Recent years have witnessed increasing interest in the application of Artificial Intelligence (AI) and deep learning in Ophthalmology [1]. Large language models (LLMs) have become a popular area of research in this field and have been integrated into publicly available chatbots such as ChatGPT-3.5 and ChatGPT-4.0 (OpenAI, CA, US), Google Bard (Alphabet Inc., CA, US), and Bing Chat (Microsoft Corporation, WA, US) [2,3,4,5]. LLMs are trained on vast amounts of data, enabling them to generate human-like text and answer complex questions, a capability with the potential to revolutionise clinical practice and assessment [2, 6, 7].

We evaluated the performance of LLM-driven AI chatbots on the Fellowship of the Royal College of Ophthalmologists (FRCOphth) examinations required for autonomous Ophthalmology practice in the UK, focusing on the Part 1 and Part 2 FRCOphth Written examinations. These advanced postgraduate exams consist of multiple-choice questions and cover the learning outcomes of the Ophthalmology Specialty Training curriculum for the first two years of training and towards the end of training, respectively.

Methods

We obtained sample multiple-choice questions covering both the Part 1 and Part 2 examinations from the Royal College of Ophthalmologists website [8, 9]. After excluding image-based questions, 48 Part 1 and 43 Part 2 questions remained, which we categorised by topic. Specialty trainees who had recently passed the exams rated the difficulty of each question on a scale of 1–5, with 1 being “not at all difficult” and 5 being “extremely difficult” (Supplementary Materials). Mean difficulty ratings were consistent across respondents.

We tested each LLM-chatbot on the sample questions three times, at different timepoints. Additionally, for Part 2 questions, we evaluated ChatGPT-4.0 with various prompting strategies, such as asking the chatbot to answer from the perspective of a pharmacist or statistician. When an LLM-chatbot could not answer a question, the response was recorded as incorrect. We provided no additional instruction or training data.
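As a minimal sketch of this scoring rule (the answer key and run data below are invented for illustration, not drawn from the study), an unanswered question is simply coded as incorrect and accuracy is averaged over the three runs:

  # Score one run: 'selected' holds the chatbot's chosen options (NA when it
  # declined to answer) and 'key' holds the correct options.
  score_run <- function(selected, key) {
    correct <- !is.na(selected) & selected == key  # unanswered counts as incorrect
    mean(correct)
  }

  key  <- c("A", "C", "B", "D")                  # illustrative answer key
  runs <- list(c("A", "C", "B", "D"),            # run 1
               c("A", NA,  "B", "D"),            # run 2: one question unanswered
               c("A", "C", "D", "D"))            # run 3
  per_run <- sapply(runs, score_run, key = key)  # accuracy of each run
  mean(per_run)                                  # overall accuracy for this chatbot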

We analysed the association between accuracy and LLM-chatbot using Chi-squared tests and multilevel (mixed-effects) logistic regression, with difficulty and topic included as fixed effects and question ID as a random effect. Models with the lowest Akaike information criterion (AIC) were retained. Part 1 and Part 2 data were analysed separately. All statistical analyses were conducted in R.
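An illustrative sketch of this analysis is shown below; the data frame responses and its columns correct, chatbot, difficulty, topic and question_id are assumed names rather than the study data. The mixed-effects models can be fitted with the lme4 package, compared by AIC, and summarised as odds ratios by exponentiating the fixed-effect estimates:

  library(lme4)

  # Association between accuracy and chatbot, ignoring question structure
  chisq.test(table(responses$chatbot, responses$correct))

  # Multilevel logistic regression: chatbot, difficulty and topic as fixed
  # effects, and question ID as a random intercept
  fit_full <- glmer(correct ~ chatbot + difficulty + topic + (1 | question_id),
                    data = responses, family = binomial)
  fit_min  <- glmer(correct ~ chatbot + (1 | question_id),
                    data = responses, family = binomial)

  # Retain the specification with the lowest Akaike information criterion
  AIC(fit_full, fit_min)

  # Odds ratios with Wald 95% confidence intervals for the fixed effects of
  # the selected model (fit_min shown here as an example)
  est <- fixef(fit_min)
  se  <- sqrt(diag(as.matrix(vcov(fit_min))))
  exp(cbind(OR = est, lower = est - 1.96 * se, upper = est + 1.96 * se))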

Results

The LLM-chatbots achieved overall accuracies of 65.5% and 67.6% on Part 1 and Part 2 questions, respectively (Fig. 1). On Part 1 and Part 2 questions respectively, ChatGPT-3.5 scored 55.1% and 49.6%, Google Bard 62.6% and 51.9%, and Bing Chat 78.9% and 82.9%. ChatGPT-4.0 achieved an accuracy of 79.1% on Part 2 questions, which increased to 88.4% with prompting. Accuracy differed significantly between the LLM-chatbots on both question sets (Chi-squared tests, P < 0.001). Although mean accuracy varied by 4% between iterations, no statistically significant differences in performance were observed across the repeated runs of any individual LLM-chatbot.

Fig. 1: Performance of LLM-chatbots on FRCOphth examinations.

The chart displays the average scores obtained by each LLM-chatbot on the Part 1 (left) and Part 2 (right) FRCOphth written examinations. The x-axis denotes the LLM-chatbot and the y-axis the average score.

On multilevel testing, Bing Chat outperformed ChatGPT-3.5 (OR 6.37, 95% CI 3.16–12.83, P < 0.001) and Google Bard (OR 3.73, 95% CI 1.88–7.37, P < 0.001) on Part 1 questions. No significant associations were found between accuracy and question difficulty or topic. On Part 2 questions, ChatGPT-3.5 was outperformed by both ChatGPT-4.0 and Bing Chat, regardless of whether prompting was used (Table 1). LLM accuracy was significantly higher for questions on the “Cornea & External Eye” topic (Table 1); we found no other significant associations between LLM-chatbot accuracy and the remaining covariates.

Table 1 Comparing the accuracy of responses to FRCOphth Part 2 written questions with different LLM-chatbots.

Discussion

This study is the first to demonstrate that publicly available LLM-driven chatbots can consistently provide accurate responses to postgraduate Ophthalmology specialty examinations, achieving an impressive accuracy of up to 82.9% without prompting or instruction tuning. This performance was largely independent of question topic and difficulty. Notably, most LLMs performed well enough to pass the high standards of these exams, which typically require a score of between 58% and 66% [10, 11]. Previous reports have shown that LLMs can achieve accuracies of up to 67.6% in generalist medical examinations with the use of different training data and instruction prompt tuning [7, 12].

We observed variation in accuracy between LLM-chatbots (Fig. 1), but each provided consistent accuracy across iterations. Curated prompting strategies enhanced performance. The LLM-chatbots were equally proficient at basic science and clinical questions and performed similarly across difficulties and topics, with the exception of Part 2 Cornea & External Eye questions, which were answered correctly 96% of the time (Table 1). This may reflect differences in the training data used by each LLM, given that our analyses accounted for question difficulty and characteristics. The limited number of officially available sample questions precluded definitive topic-based comparisons (Supplementary Materials).

Our study has broad implications for the field of Ophthalmology, where large-scale medical AI models are being developed to aid clinical decision-making through free-text explanations, spoken recommendations, or image annotations [2]. The LLMs exceeded the pass standards of our specialist examinations, raising questions about the adequacy of traditional assessments in measuring clinical competence. Alternative assessment methods, such as simulations or objective structured clinical examinations, may be needed to better capture the multifaceted skills and knowledge required for clinical practice.

Medical AI technology has great potential, but it also poses limitations and challenges. Clinicians may hold AI systems to a high standard of accuracy, creating barriers to effective human-machine collaboration. Responsibility for answers generated by these technologies in a clinical setting remains unclear; our testing revealed that LLMs could provide incorrect explanations and answers without recognising their own limitations [6]. Additionally, the use of LLMs for clinical purposes is constrained by inherent biases in the underlying data and algorithms, raising major concerns [2, 6]. Ensuring the explainability of AI systems is a potential solution to this problem and an interesting avenue for research. Issues related to validation, computational expense, data procurement, and accessibility must also be addressed [2].

AI systems will become increasingly integrated into online learning and clinical practice, highlighting the need for ophthalmologists to develop AI literacy. Future research should focus on building open-access LLMs trained specifically with truthful Ophthalmology data to improve accuracy and reliability. Overall, LLMs offer significant opportunities to advance ophthalmic education and care.

Summary

What was known before

  • Large-scale medical AI models such as Large Language Models (LLMs) are being developed to aid clinical decision-making through free-text explanations, spoken recommendations, or image annotations.

  • Previous studies have shown that LLMs can achieve accuracies of up to 67.6% in generalist medical examinations using different training data and instruction prompt tuning.

What this study adds

  • This study is the first to demonstrate that LLMs can consistently provide accurate responses to postgraduate Ophthalmology specialty examinations, achieving an impressive accuracy rate of up to 82.9% without prompting or instruction tuning.

  • LLMs exceeded the pass standards of these specialist examinations, indicating that traditional assessments may not adequately measure clinical competence.

  • Issues related to validation, computational expenses, data procurement, and accessibility must be addressed to ensure the safe and effective integration of AI systems into online learning and clinical practice.