Phishing for new tactics: Could cybercriminals take advantage of AI and GPT?

With all the buzz around artificial intelligence tools, such as ChatGPT, Becker's reached out to CIOs and chief information security officers to understand how these tools will impact healthcare cybersecurity.

Aaron Weismann. CISO of Main Line Health (Radnor Township, Pa.): I think we're already seeing a rapid generation of novel malware coming out of ChatGPT, and depending on the reviewer, it's a mixed bag. Some say ChatGPT is a profoundly adept malware writer, while others give it middling to negative reviews. I think OpenAI has been smart about restricting how easily ChatGPT creates malware, but that content is still easily accessible via ChatGPT's API or one of the many alternatives available (even large language models trained by GPT-3.5!). ChatGPT doesn't need to be great if it's prolific: eventually, something will work, and it will work well.

Overall, I'm excited to see what ChatGPT can do. I think there are a lot of great business use cases, like assisting with marketing material development, presentations, white papers, etc. It does a fantastic job with those (of course, with a bit of editing afterward). I think there are a lot of general administrative workflows that can be streamlined and enhanced using ChatGPT and its ilk.

Esmond Kane. CISO of Steward Health Care (Dallas): All of our security awareness around social engineering must change. We've spent decades training the workforce to detect poorly written phishing emails; ChatGPT will write perfectly formatted and grammatically correct emails in any language you wish. You will get an irate call from your boss with an urgent request to reset his password or transfer funds, grandparents will hear from their distraught grandkids who need help, maybe even the dead will demand payment with Apple Cards. The next generation of AI will impersonate voices and faces flawlessly.

Jack Kufahl. CISO of Michigan Medicine (Ann Arbor): As with any disruptive technology, there are going to be threats and opportunities with the new OpenAI-type tools that are leveraged by our information security team and malicious actors alike. There are some substantive questions for an institution or company to address as they relate to privacy and legal concerns; the speed at which companies are developing and deploying these capabilities could tip the scales away from ethics and appropriate use for a while, until we determine effective ways to counter their potentially negative impact. There are some startling implications for would-be actors using these tools to mimic more advanced tools, tactics and procedures to negatively impact healthcare institutions: the artificial intelligence resources effectively lower the bar for a range of attacks that may have previously been out of reach for threatening groups and individuals.

There are strong market forces pushing the vendors that provide our security capabilities to counter these threats effectively, which will work to our collective advantage. Having equal access to these resources for the benefit of our internal threat intelligence teams helps level the playing field so we may learn and incorporate those new tactics, albeit at a time when cybersecurity talent is scarce and overburdened. This is a compelling reason for digital risk professionals to prioritize these efforts above business as usual. We would look to proactively leverage these tools not only in our threat intelligence areas, but also in how we perform secure code reviews and overall vulnerability assessments. These may help quantitatively provide insight into otherwise opaque weaknesses in our risk ecosystem and give us some flexibility to refocus on areas that would otherwise go unnoticed by conventional security tools and practices. It is a bit of an arms race to learn and incorporate these new capabilities so that we are prepared for the inevitable use of these technologies against us to disrupt our critical clinical functions.

Mauricio Angée. CISO of University of Miami Health System: The possibilities with this new technology from a security perspective are both tempting and risky. Some analysts have said that ChatGPT is just "plug and play": the model can perform tasks ranging from writing entire papers to writing queries and developing executable code, but it can also be manipulated to violate policies, which poses new ethical and legal implications, not to mention a direct potential impact on patient care and patient safety in healthcare specifically.

According to security industry reports, ChatGPT will likely allow bad actors to develop and launch campaigns that are larger, more persuasive and harder to identify. In fact, cybercriminals have already been using the ChatGPT platform to fully automate cyberattacks, such as targeted phishing email attacks and malware. In one published article, cybersecurity researchers successfully used the platform's web GUI and programmatic API to bypass security content filters and deploy malware that could easily evade security products, making detection and mitigation difficult or almost impossible. This reminds me of watching sci-fi movies as a little boy, where Earth was being invaded by aliens and the armed forces' conventional weapons could not stop the invasion.

It is a fact that there is a need to embrace new technologies, technologies that help automate and improve people's lives. Researchers have seen the potential for using ChatGPT for continuous improvement and validation of the quality and size of data sets, and for developing sophisticated methods. But there is a general warning about the potential for this powerful AI platform to provide false or biased information. As the research community continues to learn and test the applicability of ChatGPT, we must be aware of the potential implications, risks, impact and vulnerabilities it may bring.

Sanjeev Sah. CISO of Centura Health (Centennial, Colo.): I think there are issues and concerns with AI, with ChatGPT being one example of it. I've seen some studies where malefactors leverage the technology for undesired intents, such as creating new malware or causing other cyber harms with these models.

At the same time, we've also got an opportunity to enhance cybersecurity tools to detect those malwares, because we can detect the signatures they include. In terms of future capabilities, we can enhance the cyber front to deal with challenges brought on by the abilities or exploits of AI. To focus back on the benefits of AI tools, models and analytic capabilities: they naturally enable cybersecurity field and operations teams to understand security complexities and solve problems much faster. With the learning aspects of AI and machine learning in the cybersecurity realm, we won't have to repeat tasks that we do on a regular basis.

Simon Linwood, MD. CIO of UCR Health (Riverside, Calif.): As a CIO, I have significant concerns about the potential for AI-driven cyberattacks. While hospitals have already been victims of phishing attacks, the use of AI tools like ChatGPT by bad actors for social engineering could be even more lethal. AI-generated fake content and chatting-like back-and-forth communications can be highly contextual, look real, and are difficult to distinguish from authentic ones. It poses a serious risk to our healthcare professionals and patients. This type of attack could not only compromise sensitive patient data but also disrupt critical healthcare operations, putting lives at risk.

Steven Ramirez. CISO of Renown Health (Reno, Nev.): With the emergence of OpenAI tools, the cybersecurity community will see both positive and negative impacts. Negatively, ChatGPT will enable advanced phishing messages that will deceive even the trained security eye. There is also the ability to use the tool for education on coding and script writing, which can "level up" the skill set and capabilities of novice hackers.

On the other hand, I see its benefits. I can see where ChatGPT can help with cybersecurity awareness material, process/procedure and policy development.

Copyright © 2024 Becker's Healthcare. All Rights Reserved.

 
