OpenAI’s ChatGPT is a failure in assessing heart risk, finds study


New Delhi, May 1

Although OpenAI’s ChatGPT can pass several medical exams, it falls short when assessing heart risk, a study found on Wednesday.

Research, published in the journal PLOS ONE, showed that “it would be unwise to rely on it for some health assessments, such as whether a patient with chest pain needs to be hospitalised”.


ChatGPT’s predictions in cases of patients with chest pain were “inconsistent”.

The chatbot also returned different heart risk levels for the same patient data, ranging from low to intermediate, and occasionally high.


The variation “can be dangerous”, said lead author Dr Thomas Heston, a researcher at Washington State University’s Elson S. Floyd College of Medicine.

Further, the generative AI system also failed to match the traditional methods physicians use to judge a patient’s cardiac risk.

“ChatGPT was not acting in a consistent manner,” said Heston.

Even so, Heston sees great potential for generative AI in healthcare, provided it is developed further.

“It can be a useful tool, but I think the technology is going a lot faster than our understanding of it, so it’s critically important that we do a lot of research, especially in these high-stakes clinical situations.”
