TY - JOUR
T1 - Assessing accuracy of ChatGPT in response to questions from day to day pharmaceutical care in hospitals
AU - van Nuland, Merel
AU - Lobbezoo, Anne-Fleur H
AU - van de Garde, Ewoudt M W
AU - Herbrink, Maikel
AU - van Heijl, Inger
AU - Bognàr, Tim
AU - Houwen, Jeroen P A
AU - Dekens, Marloes
AU - Wannet, Demi
AU - Egberts, Toine
AU - van der Linden, Paul D
N1 - Publisher Copyright:
© 2024 The Authors
PY - 2024/9
Y1 - 2024/9
N2 - BACKGROUND: The advent of Large Language Models (LLMs) such as ChatGPT introduces opportunities within the medical field. Nonetheless, the use of LLMs poses a risk when healthcare practitioners and patients present clinical questions to these programs without a comprehensive understanding of their suitability for clinical contexts. OBJECTIVE: The objective of this study was to assess ChatGPT's ability to generate appropriate responses to clinical questions that hospital pharmacists could encounter during routine patient care. METHODS: Thirty questions from 10 different domains within clinical pharmacy were collected during routine care. Questions were presented to ChatGPT in a standardized format, including the patient's age, sex, drug name, dose, and indication. Subsequently, relevant information regarding the specific case was provided, and the prompt was concluded with the query "what would a hospital pharmacist do?". The impact on accuracy was assessed for each domain by modifying the personification to "what would you do?", presenting the question in Dutch, and regenerating the primary question. All responses were independently evaluated by two senior hospital pharmacists, focusing on the availability of advice, accuracy, and concordance. RESULTS: For 77% of the questions, ChatGPT provided advice in response. For these responses, accuracy and concordance were determined. Responses were correct and complete in 26% of cases, correct but incomplete in 22%, partially correct and partially incorrect in 30%, and completely incorrect in 22%. Reproducibility was poor, with only 10% of responses remaining consistent upon regeneration of the primary question. CONCLUSIONS: While the concordance of responses was excellent, accuracy and reproducibility were poor. With the described method, ChatGPT should not be used to address questions encountered by hospital pharmacists during their shifts. However, it is important to acknowledge the limitations of our methodology, including potential biases, which may have influenced the findings.
AB - BACKGROUND: The advent of Large Language Models (LLMs) such as ChatGPT introduces opportunities within the medical field. Nonetheless, the use of LLMs poses a risk when healthcare practitioners and patients present clinical questions to these programs without a comprehensive understanding of their suitability for clinical contexts. OBJECTIVE: The objective of this study was to assess ChatGPT's ability to generate appropriate responses to clinical questions that hospital pharmacists could encounter during routine patient care. METHODS: Thirty questions from 10 different domains within clinical pharmacy were collected during routine care. Questions were presented to ChatGPT in a standardized format, including the patient's age, sex, drug name, dose, and indication. Subsequently, relevant information regarding the specific case was provided, and the prompt was concluded with the query "what would a hospital pharmacist do?". The impact on accuracy was assessed for each domain by modifying the personification to "what would you do?", presenting the question in Dutch, and regenerating the primary question. All responses were independently evaluated by two senior hospital pharmacists, focusing on the availability of advice, accuracy, and concordance. RESULTS: For 77% of the questions, ChatGPT provided advice in response. For these responses, accuracy and concordance were determined. Responses were correct and complete in 26% of cases, correct but incomplete in 22%, partially correct and partially incorrect in 30%, and completely incorrect in 22%. Reproducibility was poor, with only 10% of responses remaining consistent upon regeneration of the primary question. CONCLUSIONS: While the concordance of responses was excellent, accuracy and reproducibility were poor. With the described method, ChatGPT should not be used to address questions encountered by hospital pharmacists during their shifts. However, it is important to acknowledge the limitations of our methodology, including potential biases, which may have influenced the findings.
KW - Accuracy
KW - ChatGPT
KW - Clinical pharmacy
KW - Drug information
KW - Language model
UR - http://www.scopus.com/inward/record.url?scp=85197042083&partnerID=8YFLogxK
U2 - 10.1016/j.rcsop.2024.100464
DO - 10.1016/j.rcsop.2024.100464
M3 - Article
C2 - 39050145
SN - 2667-2766
VL - 15
JO - Exploratory Research in Clinical and Social Pharmacy
JF - Exploratory Research in Clinical and Social Pharmacy
M1 - 100464
ER -