ChatGPT (GPT-4) passed the Japanese National License Examination for Pharmacists in 2022, answering all items including those with diagrams: a descriptive study.
PURPOSE: The objective of this study was to assess the performance of ChatGPT (GPT-4) on all items, including those with diagrams, in the Japanese National License Examination for Pharmacists (JNLEP) and compare it with the previous GPT-3.5 model's performance.
METHODS: This study targeted the 107th JNLEP, conducted in 2022; all 344 items were input into the GPT-4 model. Separately, the 284 items without diagrams were entered into the GPT-3.5 model. The answers were categorized and analyzed to determine accuracy rates by category, subject, and presence or absence of diagrams. The accuracy rates were compared against the main passing criterion (overall accuracy rate ≥62.9%).
RESULTS: The overall accuracy rate of GPT-4 across all items in the 107th JNLEP was 72.5%, meeting all the passing criteria. For the items without diagrams, GPT-4's accuracy rate was 80.0%, significantly higher than that of the GPT-3.5 model (43.5%). For items that included diagrams, the GPT-4 model achieved an accuracy rate of 36.1%.
CONCLUSION: Advancements that allow GPT-4 to process images have made it possible for LLMs to answer all items in medical-related license examinations. This study's findings confirm that ChatGPT (GPT-4) possesses sufficient knowledge to meet the passing criteria.
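As a small illustration of the comparison described in the methods, the sketch below computes an overall accuracy rate and checks it against the ≥62.9% passing criterion. The item counts used here are hypothetical placeholders; the study's per-item tallies are not reported in this abstract.

```python
# Illustrative sketch (hypothetical counts): computing an overall accuracy
# rate and checking it against the JNLEP main passing criterion of >= 62.9%.

PASSING_CRITERION = 62.9  # overall accuracy threshold, in percent

def accuracy_rate(correct: int, total: int) -> float:
    """Return the accuracy rate as a percentage, rounded to one decimal."""
    return round(100 * correct / total, 1)

# Hypothetical tallies for illustration only.
correct_items = 250
total_items = 344  # number of items in the 107th JNLEP

rate = accuracy_rate(correct_items, total_items)
print(f"accuracy: {rate}%  passes: {rate >= PASSING_CRITERION}")
```

The same helper could be applied per subgroup (e.g., items with vs. without diagrams) to reproduce the kind of subgroup comparison the study reports.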