TY - JOUR
T1 - Performance of ChatGPT on Factual Knowledge Questions Regarding Clinical Pharmacy
AU - van Nuland, Merel
AU - Erdogan, Abdullah
AU - Açar, Cenkay
AU - Contrucci, Ramon
AU - Hilbrants, Sven
AU - Maanach, Lamyae
AU - Egberts, Toine
AU - van der Linden, Paul D
N1 - Publisher Copyright:
© 2024, The American College of Clinical Pharmacology.
PY - 2024/9
Y1 - 2024/9
AB - ChatGPT is a language model that was trained on a large dataset including medical literature. Several studies have described the performance of ChatGPT on medical exams. In this study, we examine its performance in answering factual knowledge questions regarding clinical pharmacy. Questions were obtained from a Dutch application that features multiple-choice questions to maintain a basic knowledge level for clinical pharmacists. In total, 264 clinical pharmacy-related questions were presented to ChatGPT, and responses were evaluated for accuracy, concordance, quality of the substantiation, and reproducibility. Accuracy was defined as the correctness of the answer, and results were compared with the overall score achieved by pharmacists in 2022. Responses were marked concordant if no contradictions were present. The quality of the substantiation was graded by two independent pharmacists using a 4-point scale. Reproducibility was established by presenting questions multiple times and on various days. ChatGPT yielded accurate responses for 79% of the questions, surpassing the pharmacists' accuracy of 66%. Concordance was 95%, and the quality of the substantiation was deemed good or excellent for 73% of the questions. Reproducibility was consistently high, both within and between days (>92%), as well as across different users. ChatGPT demonstrated higher accuracy and reproducibility than pharmacists on factual knowledge questions related to clinical pharmacy practice. Consequently, we posit that ChatGPT could serve as a valuable resource for pharmacists. We hope the technology will further improve, which may lead to enhanced future performance.
KW - ChatGPT
KW - artificial intelligence
KW - clinical pharmacology
KW - exam questions
KW - language model
UR - http://www.scopus.com/inward/record.url?scp=85190971279&partnerID=8YFLogxK
U2 - 10.1002/jcph.2443
DO - 10.1002/jcph.2443
M3 - Article
C2 - 38623909
SN - 0091-2700
VL - 64
SP - 1095
EP - 1100
JO - Journal of Clinical Pharmacology
JF - Journal of Clinical Pharmacology
IS - 9
ER -