Accuracy and Consistency of Chatbots versus Clinicians for Answering Pediatric Dentistry Questions: A Pilot Study.
Journal of Dentistry 2024 March 17
OBJECTIVES: Artificial intelligence applications such as large language models (LLMs) can simulate human-like conversation, but their potential in healthcare has not been fully evaluated. This pilot study assessed the accuracy and consistency of chatbots and clinicians in answering common questions in pediatric dentistry.
METHODS: Two expert pediatric dentists developed thirty true/false questions covering different aspects of pediatric dentistry. Publicly accessible chatbots (Google Bard, ChatGPT-4, ChatGPT-3.5, Llama, Sage, Claude 2 100k, Claude-instant, Claude-instant-100k, and Google Palm) were asked the questions in three independent new conversations. Three groups of clinicians (general dentists, pediatric specialists, and students; n = 20 per group) also answered. Responses were graded by two pediatric dentistry faculty members and a third independent pediatric dentist. Accuracies (percentage of correct responses) were compared using analysis of variance (ANOVA), with post-hoc pairwise group comparisons corrected by Tukey's HSD method. Cronbach's alpha was calculated to determine consistency.
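As an illustration of the statistical workflow described above, the following Python sketch shows how accuracy scores could be compared with a one-way ANOVA and Tukey's HSD, and how Cronbach's alpha could be computed across a chatbot's repeated conversations. The group means, standard deviations, data, and variable names below are hypothetical placeholders, not the study's actual dataset.

```python
# Minimal sketch of the analysis described in METHODS, using assumed data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical accuracy scores (% correct of 30 questions), 20 responders per group.
scores_by_group = {
    "pediatric_dentists": rng.normal(96.7, 4.3, 20),
    "general_dentists":   rng.normal(88.0, 6.1, 20),
    "students":           rng.normal(80.8, 6.9, 20),
}

# One-way ANOVA across groups.
f_stat, p_value = stats.f_oneway(*scores_by_group.values())

# Tukey's HSD for post-hoc pairwise comparisons.
all_scores = np.concatenate(list(scores_by_group.values()))
labels = np.repeat(list(scores_by_group.keys()), 20)
tukey = pairwise_tukeyhsd(all_scores, labels)
print(f_stat, p_value)
print(tukey.summary())

# Cronbach's alpha over one chatbot's repeated conversations
# (rows = questions, columns = repeated runs), computed from its definition.
def cronbach_alpha(item_scores: np.ndarray) -> float:
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                          # number of repeated runs
    run_vars = item_scores.var(axis=0, ddof=1).sum()  # sum of per-run variances
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of per-question totals
    return k / (k - 1) * (1 - run_vars / total_var)

# Hypothetical item-level scores: 30 questions x 3 runs, 1 = correct, 0 = incorrect.
runs = rng.integers(0, 2, size=(30, 3))
print(cronbach_alpha(runs))
```

This is only one way the reported statistics could be reproduced; the original analysis software and data handling are not specified in the abstract.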
RESULTS: Pediatric dentists were significantly more accurate (mean ± SD, 96.67% ± 4.3%) than the other clinicians and the chatbots (p < .001). General dentists (88.0% ± 6.1%) also demonstrated significantly higher accuracy than the chatbots (p < .001), followed by students (80.8% ± 6.9%). ChatGPT showed the highest accuracy (78% ± 3%) among the chatbots. All chatbots except ChatGPT-3.5 showed acceptable consistency (Cronbach's alpha > 0.7).
CONCLUSION: In this pilot study, chatbots showed lower accuracy than dentists. Chatbots may not yet be recommended for clinical pediatric dentistry.