The HHS has taken a first step toward regulating emerging AI tools and algorithms in the healthcare sector.
On Dec. 13, the agency released the "Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing" rule. The rule is the agency's attempt to increase transparency around the use of artificial intelligence in clinical settings, according to an HHS press release.
Five things to know about the rule:
- The rule introduces transparency requirements for developers of certified health IT software, with a particular focus on AI and predictive algorithms such as models that analyze medical imaging, generate clinical notes and alert clinicians to potential risks to patients.
- The goal of the rule is to require developers to provide healthcare organizations with information they can use to evaluate whether the algorithms promote fairness, appropriateness, validity, effectiveness and safety.
- Under the rule, developers must give organizations details on how the software was developed and how it functions. This includes disclosing funding sources, specifying the software's intended role in decision-making and providing guidance on when clinicians should use it with caution.
- Developers will also need to inform customers about the data used to train the AI. They will additionally be required to disclose performance metrics, describe ongoing performance monitoring procedures and outline how often the algorithm is updated.
- By the end of 2024, the regulations will apply to clinicians who use decision support software certified by the HHS.
The rule comes as many hospital and health system executives have called on agencies like the HHS to create a structured framework to guide the ethical development and application of AI in healthcare.
"As technology advances, the medical community will need to develop standards for these innovative technologies, as well as revisit current regulatory systems on which physicians and patients rely to ensure that healthcare AI is responsible, evidence-based, bias-free, and designed and deployed to promote equity," Mike Thompson, vice president of enterprise data intelligence at Los Angeles-based Cedars-Sinai, said in an Oct. 25 news release.