
3 Questions: Leo Anthony Celi on ChatGPT and medicine


Q: What do you think the success of ChatGPT on the USMLE reveals about the nature of medical education and the evaluation of students?

A: The framing of medical knowledge as something that can be encapsulated into multiple-choice questions creates a cognitive framing of false certainty. Medical knowledge is often taught as fixed model representations of health and disease. Treatment effects are presented as stable over time despite constantly changing practice patterns. Mechanistic models are passed on from teachers to students with little emphasis on how robustly those models were derived, the uncertainties that persist around them, and how they must be recalibrated to reflect advances worthy of incorporation into practice.

ChatGPT passed an examination that rewards memorizing the components of a system rather than analyzing how it works, how it fails, how it was created, and how it is maintained. Its success demonstrates some of the shortcomings in how we train and evaluate medical students. Critical thinking requires appreciation that ground truths in medicine continually shift, and, more importantly, an understanding of how and why they shift.

Q: What steps do you think the medical community should take to change how students are taught and evaluated?

A: Learning is about leveraging the current body of knowledge, understanding its gaps, and seeking to fill those gaps. It requires being comfortable with, and being able to probe, the uncertainties. We fail as teachers by not teaching students how to understand the gaps in the current body of knowledge. We fail them when we preach certainty over curiosity, and hubris over humility.

Medical education also requires awareness of the biases in the way medical knowledge is created and validated. These biases are best addressed by optimizing the cognitive diversity within the community. More than ever, there is a need to encourage cross-disciplinary collaborative learning and problem-solving. Medical students need data science skills that will allow every clinician to contribute to, continually assess, and recalibrate medical knowledge.

Q: Do you see any upside to ChatGPT’s success on this exam? Are there useful ways in which ChatGPT and other forms of AI can contribute to the practice of medicine?

A: There is no question that large language models (LLMs) such as ChatGPT are very powerful tools for sifting through content beyond the capabilities of experts, or even groups of experts, and extracting knowledge. However, we will need to address the problem of data bias before we can leverage LLMs and other artificial intelligence technologies. The body of knowledge that LLMs train on, both medical and beyond, is dominated by content and research from well-funded institutions in high-income countries. It is not representative of most of the world.

We have also learned that even mechanistic models of health and disease may be biased. These inputs are fed to encoders and transformers that are oblivious to those biases. Ground truths in medicine are continuously shifting, and currently there is no way to determine when ground truths have drifted. LLMs do not evaluate the quality and the bias of the content they are trained on, nor do they provide the level of uncertainty around their output. But the perfect should not be the enemy of the good. There is tremendous opportunity to improve the way health care providers currently make clinical decisions, which we know are tainted with unconscious bias. I have no doubt AI will deliver on its promise once we have optimized the data input.
