Here's why Facebook's suicide-screening tool is used in the US — but not the EU

Stringent privacy protections in the European Union have impeded Facebook's ability to use its controversial suicide-screening tool in the region, according to Business Insider. The algorithm is already in use in the U.S., where it has faced criticism.

Here are five things to know:

1. In March 2017, Facebook launched an effort to prevent suicide using artificial intelligence. The project scans nearly every post on the social media platform to assess a user's suicide risk and detect signs of potential self-harm. When the algorithm flags a user as being at high risk of suicide, a human reviewer may connect the user with mental-health resources or contact law enforcement (a simplified sketch of this kind of workflow appears below).

"In the last year, we've helped first responders quickly reach around 3,500 people globally who needed help," Facebook CEO Mark Zuckerberg wrote in a November blog post on the initiative.

2. Facebook's suicide algorithm scans posts in English, Spanish, Portuguese and Arabic, but it does not assess posts from the EU, where Facebook has declined to deploy the program because of the General Data Protection Regulation, which took effect in May 2018. GDPR requires websites to obtain consent before collecting sensitive personal information from people in the EU, a category that includes data about a user's mental health (the sketch below shows what such a consent gate might look like).
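Facebook's actual response was simply not to run the scanner on EU users at all, but in engineering terms GDPR's consent requirement acts like a gate in front of any such processing. A minimal sketch of that kind of gate, with a hypothetical user record and consent field (nothing here reflects Facebook's implementation):

```python
# Hypothetical consent gate; the field names are illustrative assumptions.

EU_COUNTRIES = {"DE", "FR", "IE", "NL"}  # abbreviated for the example

def may_scan(user: dict) -> bool:
    """Scan only users outside the EU, or EU users who have given
    explicit consent to processing of health-related data."""
    if user["country"] not in EU_COUNTRIES:
        return True
    return user.get("explicit_health_data_consent", False)

print(may_scan({"country": "US"}))  # True
print(may_scan({"country": "DE"}))  # False
print(may_scan({"country": "DE", "explicit_health_data_consent": True}))  # True
```

Because health data is "special category" data under GDPR, the safe default in a sketch like this is to refuse unless explicit consent has been recorded.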

3. GDPR is stricter and broader than HIPAA, the main law protecting individuals' health information in the U.S. HIPAA's privacy protections apply only to specific covered entities, such as hospitals, health plans and other payers, so they do not extend to the risk data generated by Facebook's algorithm.

4. Facebook has received some criticism over the algorithm in the U.S., with public health experts questioning whether the company's approach is accurate, effective and safe. Natasha Duarte, a policy analyst at the Center for Democracy and Technology, told Business Insider that Facebook's practice of passing user information to law enforcement poses a privacy risk; a worked example below shows why false positives loom large when screening for rare events.

"The biggest risk in my mind is a false positive that leads to unnecessary law enforcement contact," Ms. Duarte said.

5. Dan Reidenberg, PsyD, a suicide-prevention expert who helped Facebook launch its suicide-prevention program, voiced his support for the effort to Business Insider. He added that the trade-off between privacy and safety, particularly in cases involving law enforcement, is not unique to Facebook.

"Health professionals make a critical professional decision if they're at risk and then they will initiate active rescue," Dr. Reidenberg said. "The technology companies, Facebook included, are no different than that. They have to determine whether or not to activate law enforcement to save someone."
