ChatGPT may provide helpful information on diets for gastroparesis, IBS, and celiac disease, according to findings from a new study.
ChatGPT, the popular consumer artificial intelligence (AI) platform, could provide reliable gastrointestinal dietary information and guidance to patients, according to findings from a team of New Jersey-based gastroenterologists.
A poster presented at the American College of Gastroenterology (ACG) 2024 Annual Scientific Meeting in Philadelphia, PA, this weekend showed an independent board of gastroenterology experts generally graded ChatGPT’s responses to questions on managing diets for gastroparesis, irritable bowel syndrome (IBS), and celiac disease as accurate, complete, and comprehensible. The new data show how accessible tools like adaptive AI chatbots could help clinicians provide comprehensive care guidance to their patients with chronic diseases.
The research team led by Rabih Ghazi, MD, of the department of gastroenterology at Cooper University Hospital in Camden, sought to assess the reliability, accuracy, completeness, and comprehensibility of ChatGPT-4 when responding to prompts about dietary care for gastroparesis, IBS, and celiac disease. As they noted, the assessment is timely given the transformative effect of AI tools on modern medicine.
“ChatGPT is a readily accessible AI tool for patients seeking health advice,” they wrote. “However, concerns about the accuracy and reliability of its information persist, potentially compromising patient safety. Effective dietary counseling is crucial for managing gastrointestinal conditions.”
For their study, Ghazi and colleagues created a new ChatGPT-4 account to collect the AI platform’s responses to dietary questions about the common gastrointestinal conditions. For each condition, the platform was fed the same 2 questions sequentially in a single chat session conducted in May 2024.
Investigators disabled ChatGPT-4’s memory feature to ensure data integrity and created a separate chat session for each gastrointestinal condition. To test the reliability of the responses, the same questions were asked again in a new chat session 5 days after the initial session’s data were cleared.
Ghazi and colleagues recruited 5 gastroenterology physicians to independently review the responses and rate each based on accuracy, completeness, and comprehensibility. The team used a 5-point Likert scale for grading, with higher scores indicating greater endorsement.
The final assessment included 180 responses recorded to assess accuracy, completeness, and comprehensibility across the 12 diet-related answers from ChatGPT-4; each disease category included 60 responses. Investigators compared mean scores in each domain between the initial and repeat ChatGPT-4 responses.
In comparison with the initial prompts, the repeat prompts 5 days later showed no significant difference in mean accuracy (4.4 vs 4.4; P = .648) or completeness (3.9 vs 4.1; P = .304). However, mean comprehensibility scores were significantly lower (4.3 vs 4.6; P = .03). The team additionally noted that mean completeness scores were significantly lower than both mean accuracy (P = .001) and comprehensibility scores (P < .001).
Though research has previously shown a level of uncertainty regarding the clinical utility of currently available AI platforms including ChatGPT, these new data are supported by an international trial from 2023 that showed ChatGPT may be a dependable resource even for healthcare professionals treating inflammatory bowel disease (IBD).
Ghazi and colleagues concluded their data show ChatGPT’s benefit could extend to patients with gastrointestinal disease as well.
“ChatGPT could be a useful resource for gastrointestinal diet information as it provided accurate and comprehensible responses that were mostly complete,” investigators wrote. “While generally reliable, a noted variation in the comprehensibility across repeated questions suggests the need for further studies.”