A cross-sectional study assessing AI-generated patient information guides on common cardiovascular conditions

Authors

  • Mustafa Sibaa Department of Medicine, Tbilisi State University, Tbilisi, Georgia
  • Hugo Douma Department of Medical Sciences, Faculty of Medicine, International European University, Ukraine; Department of Computer Science, College of Natural Sciences, University of Texas at Austin, USA
  • Ireene Elsa Mathew Department of Medicine, Tbilisi State University, Tbilisi, Georgia
  • Taha Kassim Dohadwala Department of Medicine, Faculty of Medicine, David Tvildiani Medical University, Tbilisi, Georgia
  • Kundaranahalli Pradeep Harshath Odeker Department of Medicine, Vijayanagar Institute of Medical Sciences, Ballari, Karnataka, India
  • Deepa Polinati Fortis Hospital, Bangalore, Karnataka, India
  • Nidhi Laxminarayan Rao Department of Medicine, KAP Vishwanathan Government Medical College, Tiruchirapalli, Tamil Nadu, India

DOI:

https://doi.org/10.18203/2320-6012.ijrms20244094

Keywords:

Angina, Artificial intelligence, Cardiac arrest, ChatGPT, Educational tool, Google Gemini, Hypertension, Patient education brochure

Abstract

Background: Patient education is essential for the management of cardiovascular disease (CVD), as it enables earlier diagnosis, prompt treatment, and prevention of complications. Artificial intelligence is an increasingly popular resource with applications in virtual patient counselling. This study therefore aimed to compare the AI-generated responses of ChatGPT and Google Gemini for patient education guides on common cardiovascular diseases.

Methods: The study assessed the responses generated by ChatGPT 3.5 and Google Gemini for patient education brochures on angina, hypertension, and cardiac arrest. The number of words, number of sentences, average word count per sentence, average syllables per word, grade level, and ease level were assessed using the Flesch-Kincaid Calculator, and the similarity score was checked using Quillbot. Reliability was assessed using the modified DISCERN score. Statistical analysis was done using R version 4.3.2.
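The grade level and ease level mentioned above follow the standard published Flesch formulas, which can be computed directly from the three raw counts. A minimal Python sketch (the counts in the usage example are hypothetical, not the study's data):

```python
def flesch_metrics(words: int, sentences: int, syllables: int) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)
    from raw word, sentence, and syllable counts, using the
    standard published coefficients."""
    wps = words / sentences      # average words per sentence
    spw = syllables / words      # average syllables per word
    ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade = 0.39 * wps + 11.8 * spw - 15.59
    return ease, grade

# Hypothetical brochure: 300 words, 20 sentences, 450 syllables
ease, grade = flesch_metrics(300, 20, 450)
print(round(ease, 2), round(grade, 2))  # 64.71 7.96
```

Higher ease scores indicate text that is easier to read, while the grade level approximates the US school grade needed to understand it.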

Results: The statistical analysis showed no statistically significant differences between the responses generated by the two AI tools on any of the variables except the ease score (p=0.2043), which was superior for ChatGPT. The correlation coefficient between the two tools was negative for both the ease score (r=-0.9986, p=0.0332) and the reliability score (r=-0.8660, p=0.3333), but was statistically significant only for the ease score.
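The correlation coefficients reported above are Pearson's r computed over the paired per-condition scores of the two tools. A minimal sketch of the computation, with illustrative values rather than the study's data:

```python
from math import sqrt

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient for paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired scores for three conditions: one tool's score
# rises as the other's falls, giving a perfectly negative correlation.
print(pearson_r([1, 2, 3], [6, 4, 2]))  # -1.0
```

With only three paired observations (one per condition), even a strong negative r can fail to reach significance, which is consistent with the reliability-score result above.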

Conclusions: The study demonstrated no significant differences between the responses generated by the two AI tools for patient education brochures. Further research is needed to assess the abilities of these AI tools and to ensure that accurate, up-to-date information is generated, for the benefit of overall public well-being.

Published

2024-12-31

How to Cite

Sibaa, M., Douma, H., Mathew, I. E., Dohadwala, T. K., Pradeep Harshath Odeker, K., Polinati, D., & Rao, N. L. (2024). A cross-sectional study assessing AI-generated patient information guides on common cardiovascular conditions. International Journal of Research in Medical Sciences, 13(1), 50–54. https://doi.org/10.18203/2320-6012.ijrms20244094

Issue

Section

Original Research Articles