Talking technology: exploring chatbots as a tool for cataract patient education.
Clinical & Experimental Optometry: Journal of the Australian Optometrical Association, 2024 January 10.
CLINICAL RELEVANCE: Worldwide, millions suffer from cataracts, which impair vision and quality of life. Cataract education improves outcomes, satisfaction, and treatment adherence. Lack of health literacy, language and cultural barriers, personal preferences, and limited resources may all impede effective communication.
BACKGROUND: AI can improve patient education by providing personalised, interactive, and accessible information tailored to patient understanding, interest, and motivation. AI chatbots can have human-like conversations and give advice on numerous topics.
METHODS: This study investigated the efficacy of chatbots in cataract patient education relative to traditional resources such as the American Academy of Ophthalmology (AAO) website, focusing on information accuracy, understandability, actionability, and readability. A descriptive comparative design was used to analyse quantitative data from frequently asked questions about cataracts answered by ChatGPT, Bard, Bing AI, and the AAO website. The SOLO taxonomy, PEMAT, and the Flesch-Kincaid reading ease score were used to collect and analyse the data.
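The Flesch reading-ease metric used in the methods is a fixed formula over sentence, word, and syllable counts: 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words), with higher scores indicating easier text. The sketch below illustrates the computation; the syllable counter is a naive vowel-group heuristic for demonstration, whereas published readability tools typically use pronunciation dictionaries.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores indicate easier text (60-70 is roughly plain English).
    """
    # Count sentence-ending punctuation runs as sentence boundaries.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))

    def count_syllables(word: str) -> int:
        # Naive heuristic (an assumption, not the official method):
        # count vowel groups, then drop one for a trailing silent 'e'.
        groups = re.findall(r"[aeiouy]+", word.lower())
        n = len(groups)
        if word.lower().endswith("e") and n > 1:
            n -= 1
        return max(1, n)

    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)
```

For example, a short monosyllabic sentence such as "The cat sat on the mat." scores above 100, while long multi-clause sentences with polysyllabic medical vocabulary score far lower, which is the pattern the study reports for ChatGPT relative to Bard.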
RESULTS: Chatbots scored higher than the AAO website on cataract-related questions in terms of accuracy (mean SOLO score: ChatGPT 3.1 ± 0.31, Bard 2.9 ± 0.72, Bing AI 2.65 ± 0.49, AAO website 2.4 ± 0.6; p < 0.001). For understandability, the AAO website scored higher than the chatbots (mean PEMAT-U score: AAO website 0.89 ± 0.04, ChatGPT 0.84 ± 0.02, Bard 0.84 ± 0.02, Bing AI 0.81 ± 0.02; p < 0.001), whereas for actionability ChatGPT and Bard scored highest (mean PEMAT-A score: ChatGPT 0.86 ± 0.03, Bard 0.85 ± 0.06, Bing AI 0.81 ± 0.05, AAO website 0.81 ± 0.06; p < 0.001). Flesch-Kincaid reading ease analysis showed that Bard (55.5 ± 8.48) had the highest mean score, followed by the AAO website (51.96 ± 12.46), Bing AI (41.77 ± 9.53), and ChatGPT (34.38 ± 9.75; p < 0.001).
CONCLUSION: Chatbots have the potential to provide more detailed and accurate data than the AAO website. On the other hand, the AAO website has the advantage of providing information that is more understandable and practical. When patient preferences are not taken into account, generalised or biased information can decrease reliability.