Users on X, formerly Twitter, are submitting medical images such as X-rays and MRIs to Grok, the AI chatbot introduced by Elon Musk, in hopes of receiving diagnoses, a trend that has drawn both interest and concern, The New York Times reported Nov. 18.
Mr. Musk has promoted Grok as a tool that, with user input, could improve its diagnostic capabilities over time, offering faster results or even acting as a second opinion. Early feedback is mixed: while some users report accurate insights, others highlight notable errors, such as misidentifying a broken clavicle as a dislocated shoulder. Even some physicians are experimenting with the tool out of curiosity, according to the report.
But unlike information shared through HIPAA-protected platforms used by healthcare providers, data shared with AI chatbots like Grok is not subject to strict federal privacy laws. Experts told the publication that sharing sensitive health information with such platforms could lead to misuse, as it becomes part of a broader online footprint that can be accessed for unintended purposes, such as targeted marketing or discrimination by insurers and employers.
"This is very personal information, and you don't exactly know what Grok is going to do with it," Bradley Malin, PhD, a professor of biomedical informatics at Nashville, Tenn.-based Vanderbilt University told the publication.
While X's privacy policy states that user data will not be sold to third parties, it does allow data sharing with affiliated companies. X did not respond to The New York Times' request for comment.
Experts also cautioned in the report against relying on AI tools that lack the rigorous training and data quality needed for healthcare applications. Misinterpretations could lead to unnecessary tests or treatments, increasing patient burden. Despite these risks, some users upload data under the belief that contributing information could advance AI development in healthcare — a practice dubbed "information altruism."