The World Health Organization released a statement May 16 warning healthcare organizations about artificial intelligence, stating that the technology may be biased and generate misleading or inaccurate information.
WHO said AI-based models can be misused to generate disinformation.
The organization also expressed concern about how the technology will be used in healthcare, particularly as a decision-support tool.
"Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world," the organization wrote.
Although WHO said it is excited about emerging AI technologies such as ChatGPT, it reiterated that these tools need clinical oversight to ensure they are safe, effective and ethical.