As hospitals and health systems turn to emerging AI technologies such as generative AI and large language models to automate tedious tasks, many leaders are urging caution, warning that these tools could give patients inaccurate information or cause unintended harm.
Northwell Health CEO Michael Dowling said that although artificial intelligence has the potential to enhance healthcare delivery, he fears a day could come when the technology outsmarts the human brain.
"The big issue for me long term is what happens if the machines become smarter than the human brain? That, I think, is something that we have to be careful of," Mr. Dowling told Politico in a July 20 article.
He also advised the healthcare industry to be careful not to become too dependent on the technology.
"We have to be careful to make sure that we don't become so dependent upon very, very smart technology when we don't fully understand what it actually can do," he said.
Sunil Dadlani, executive vice president and chief information and digital transformation officer of Morristown, N.J.-based Atlantic Health System, also told Becker's that although ChatGPT holds great promise for transforming healthcare, it still has its limitations.
"It is important to be aware of the limitations when utilized in healthcare settings, like lack of real-time data," he said.
A lack of real-time data could lead to "unintended consequences" when healthcare professionals are using it in the clinical setting, according to Laura Smith, CIO of West Des Moines, Iowa-based UnityPoint Health.
"If the data used to train [ChatGPT] has bias, is incomplete or is inaccurate, the ramifications of using that data to augment or even perform decision-making could lead to unintended consequences," Ms. Smith told Becker's.
Google Chief Health Officer Karen DeSalvo, MD, also urged caution in rolling out the tools to healthcare organizations, saying they have a lot of potential but still a long way to go.
"We have a lot of things to work out to make sure the models are constrained appropriately, that they're factual, consistent, and that they follow these ethical and equity approaches that we want to take — but I'm super excited about the potential, even as a doc," she told The Guardian in a July 23 article.