
Tech platforms, stop enabling the anti-vaxxers

Megan E. Garcia
The ongoing and increasingly contentious debate about whether technology companies have a responsibility to moderate harmful content takes on a new dimension in the face of an urgent domestic and international public health emergency: the resurgence of vaccine-preventable diseases such as measles.
The question we should ask in this case is: Do technology companies have a responsibility to moderate content on their platforms when public health is at risk?
This month, several large technology companies have implicitly, and correctly, answered yes. All the companies in question should embrace that answer and explore two possible methods for addressing such misinformation on their platforms.
The Guardian set off a furor over vaccines with an investigation into how anti-vaxxer content is ranked and spread online. The Guardian found that neutral searches for the word "vaccine" by a new user with no friends or likes yielded overwhelmingly anti-vaccine content, unsupported by science, on both Facebook and YouTube. The two companies' algorithms steer users toward anti-vax pages and videos even when users initially consume authoritative medical resources, such as a video uploaded by the Mayo Clinic about the MMR vaccine.
After the initial media frenzy over the Guardian's reporting, Facebook, YouTube -- which is owned by Google -- and other platforms issued responses ranging from an all-out content ban (Pinterest) to modest steps against unscientific information on vaccines (YouTube) to treading water while they examine the issue (Facebook). Rep. Adam Schiff, a Democrat from California, sent a letter earlier this month to Facebook and Google expressing his concern that the companies are "surfacing and recommending" anti-vaccination content. Amazon, when confronted by CNN with the prevalence of anti-vaccination material on its platform, referred reporters to its content guidelines page, which says the company provides customers with a "variety of viewpoints" but reserves the right "not to sell certain content, such as pornography and other inappropriate content." A recent search on Amazon for "vaccine" returned results dominated by anti-vaccination content.
Part of what has stirred such a reaction from news media, tech companies and policymakers is the recent rapid rise in outbreaks of preventable diseases, like the measles outbreak in Washington state. After a 30% increase in measles cases globally, the World Health Organization took the extraordinary step of including "vaccine hesitancy" in its list of Top 10 Threats to Global Health in 2019. Washington, one of several states experiencing outbreaks, allows parents with a personal or philosophical objection to keep their children from being vaccinated. Similar outbreaks have occurred in other states as content tying vaccines to autism has spread online, despite being soundly refuted by the Centers for Disease Control and Prevention and a host of medical authorities.
Large technology companies have a history of disclaiming responsibility for the impact of online information, whether by pointing to free speech protections, because they rightly worry that the slippery slope of moderating some dangerous content could make them responsible for moderating much more, or because they have drawn harsh criticism for banning content deemed political. But in recent years, they have also begun working to combat the spread of content deemed harmful, including terrorist propaganda and child pornography. There are two methods large technology companies could use to respond to public health crises, both drawing on approaches already in place for other issues.
The first is modeled on Tech Against Terrorism, a collaboration between the United Nations and technology companies that opt in to the project. Launched in 2017, the effort gives both large and smaller technology companies a way to share best practices and operate with some degree of shared information about internal efforts to remove content that supports terrorist ideals. In the public health case, the World Health Organization could lead a similar international effort to share the practices companies use to excise debunked health information from online platforms.
The second option is for each company to moderate its own content and create clear mechanisms to warn users when they encounter unscientific content that could be harmful. Facebook and other companies already use a more extreme version of this method to find and remove terrorist propaganda from their sites. Those impressive efforts marry AI with human screeners to find posts that support terrorism and remove them, on average, 18 hours after they first go up.
A similar, though smaller, program for health misinformation would not necessarily require content takedowns -- the practice the companies use for terrorist content and child pornography -- or even the dreaded down-ranking. A middle option would be to use similar AI and human screening practices but to label suspect unscientific content much the way malware is labeled: when users search for information on vaccines and suspect sites surface, they would see a prominent warning label.
Intelligence agencies worked hard to convince large technology companies -- especially Facebook and YouTube -- that content on their sites was a key recruitment element for ISIS and other terrorist groups, and the companies have responded appropriately. In this case, unscientific information is causing a public health crisis that primarily harms children. Tech companies have a responsibility to respond. The only question is how.
