The evolution of artificial intelligence and its potential for your organization – for good and ill – should be high on the agenda at your next board meeting.
Perhaps no industry will be as fundamentally impacted by AI as health care. New and extraordinary opportunities seem to arise daily – from relatively straightforward radiology diagnostics to the data-driven revelation of new best practices, to the transformation of the patient’s relationship with their caregivers (and their health care system). Things are moving fast – almost too fast.
Running alongside the breathless unveiling of AI’s potential? Increasing awareness of its associated risks, from privacy breaches and clinical errors to wholesale misinformation and manipulation.
OpenAI CEO Sam Altman’s extraordinary testimony before a subcommittee of the Senate Judiciary Committee just last week, held to explore the role of government in the oversight of AI, should be a prompt to health care boards considering their own approach to the same question.
On the one hand, Altman spoke proudly of the promise of AI and its potential to make new discoveries and address some of humanity’s biggest challenges. On the other, he frankly acknowledged the risks of the technology and agreed with subcommittee members proposing that it be closely regulated.
“Regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he testified. Quite the admission, coming from the CEO of the company that created AI tools such as ChatGPT.
The hearing provided an outline of a future regulatory scheme, in a manner instructive to interested boards. It projects elements of industry self-regulation and industry/government partnering on safety guidelines, together with federal-level licensing and testing requirements for development and release of AI models above a threshold of capabilities. All this, along with an unmistakable theme of corporate accountability for harm and misuse.
Of course, the wheels of government move slowly, even as AI continues its rapid evolution and as strategically significant products and services, grounded in AI, are offered to the market and to your organization. And to your competitors.
AI could be an unparalleled opportunity for a smart system, an unprecedented risk, or both at once. But the lack of a definitive way forward with AI cannot be an excuse for inaction.
There is work to do. How should a savvy health care board proceed today?
- Monitoring structures. Boards can act now to develop their own monitoring structures. This might include creating a board-level “Science, Technology and Innovation” committee with a broad organizational AI oversight portfolio that can support and supervise the efforts of senior executive and scientific leaders.
- AI ethics panels. Boards can establish an AI-focused ethics panel at the management level to assure internal application of the proper AI “guardrails.” The federal government has a clear expectation that organizations will exercise heightened vigilance in their consideration of AI.
- Expanded management roles. Boards can direct expanded, AI-related responsibilities for the chief legal officer and the chief compliance officer, both of whom can be expected to play critical roles in positioning the organization to evaluate AI-related investments and anticipate oncoming regulation.
- Avoid complacency. Despite the hyperbolic language engulfing AI today, it may yet be a kind of replay of yesterday’s cryptocurrency, a passing storm with lots of thunder but little rain. This seems unlikely, as every day is replete with announcements of multimillion-dollar – or multibillion-dollar – investments in AI by corporations that dwarf the largest hospital systems. Still, this round of AI may ultimately not blossom into the earthshaking development some predict. Even if your board has a skeptical eye, the significance of the opportunity and the scale of the risk call for a board’s direct attention.
- Do not let the perfect be the enemy of the good. Health care delivery is notoriously slow to adopt the new. There is wisdom in this, of course. There are few other businesses in which getting it wrong is a matter of life and death. However, the industry’s response during the pandemic demonstrated that it can move when it needs to move. AI may be its own kind of industry-shaping pandemic.
As a board, you are writing – right now, today – the story that you will tell later about how you responded to the coming of this generation of AI. What do you want that story to be? Inaction or waiting to be acted upon seems to be the poorest of choices and the weakest of future narratives. From the law’s perspective, informed board risk taking is respected and incentivized.
Trust. Reliability. Reputation. Quality. Accuracy. These are all critical aspects of health care delivery and are, ultimately, the responsibility of the governing board. And they are particularly critical when care is delivered with or through AI. Health care consumers, after all, are anxious about how AI may affect the care they receive.
As they should be, given Mr. Altman’s observation that, “my worst fears are that we cause significant, we, the field, the technology, the industry cause significant harm to the world. I think that could happen in a lot of different ways…I think if this technology goes wrong, it can go quite wrong.” There is a lot at stake: both promises and pitfalls. And it’s precisely the type of “big picture” issue that boards are expected to address.
How health care boards respond to this current inflection point in AI development can have a direct impact on consumer acceptance of its use, its adoption by your caregivers, and your organization’s competitive edge. Few items may be more pertinent for your next board agenda.
Michael W. Peregrine is a partner in the law firm of McDermott Will & Emery. David Jarrard is Founder and Chair, Jarrard Inc. Executive Committee.