Critics argue that Facebook's suicide prevention efforts, though promising, have allowed the social network to "[assume] the authority of a public health agency while protecting its process as if it were a corporate secret," The New York Times reports.
Facebook's suicide threat screening and alert program uses algorithms and user reports to identify users potentially at risk of suicide, chiefly by flagging posts that contain potential suicide threats. When a post is flagged, human reviewers are tapped to call local law enforcement at their discretion, at which point law enforcement may mandate a psychiatric intervention. In a November post, Facebook CEO Mark Zuckerberg said the program had helped 3,500 people globally.
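The workflow described above can be pictured as a simple flag-then-review pipeline. The sketch below is a hypothetical illustration only; the scoring function, threshold and routing names are assumptions, not details of Facebook's actual system.

```python
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str
    user_reports: int  # number of users who reported the post


def algorithmic_risk_score(post: Post) -> float:
    """Hypothetical stand-in for a trained classifier that scores a post
    for suicide-threat language (0.0 = no apparent risk, 1.0 = high risk)."""
    keywords = ("want to die", "end it all", "kill myself")
    return 1.0 if any(k in post.text.lower() for k in keywords) else 0.0


def screen_post(post: Post, flag_threshold: float = 0.8) -> str:
    """Route a post through a flag-then-review pipeline: algorithmic or
    user flags send the post to a human reviewer, who decides at their
    discretion whether to contact local law enforcement."""
    score = algorithmic_risk_score(post)
    if score >= flag_threshold or post.user_reports > 0:
        return "send_to_human_review"
    return "no_action"
```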
However, public health experts are questioning whether Facebook's approach is accurate, effective and safe, according to the NYT.
"It's hard to know what Facebook is actually picking up on, what they are actually acting on, and are they giving the appropriate response to the appropriate risk," John Torous, MD, director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston, told the publication. "It's black box medicine."
Health law scholar Mason Marks argues that Facebook's suicide risk scoring software constitutes the practice of medicine and should therefore fall under the purview of federal regulation.
"In this climate in which trust in Facebook is really eroding, it concerns me that Facebook is just saying, 'Trust us here,'" Mr. Marks told the NYT.