Danesh A, Pazouki H, Danesh F, Danesh A, Vardar-Sengul S. Artificial intelligence in dental education: ChatGPT's performance on the periodontic in-service examination. J Periodontol 2024; DOI: 10.1002/JPER.23-0514.

Both chatbot models leave considerable room for misinformation in their responses relating to periodontology.

ChatGPT3.5 and ChatGPT4 were evaluated on 311 multiple-choice questions from the 2023 Periodontics In-Service Written Examination administered by the American Academy of Periodontology (AAP). The dataset of in-service examination questions was accessed through Nova Southeastern University's Department of Periodontology. ChatGPT3.5 and ChatGPT4 answered 57.9% and 73.6% of the in-service questions correctly, respectively. While ChatGPT4 showed higher proficiency than ChatGPT3.5, both chatbot models leave considerable room for misinformation in their responses relating to periodontology. The findings of the study encourage residents to scrutinise periodontic information generated by ChatGPT, to account for the chatbot's current limitations.