Facebook says it's policing its platform, but it didn't catch a livestream of a massacre. Why?

That failure came despite Facebook repeatedly touting, over the past two years, its hiring of thousands of content moderators and its investment in artificial intelligence for content moderation.
If the artificial intelligence systems built by one of the richest companies in the world can't identify and take action on a video containing weaponry, repeated gunfire and murder, what can they identify?
Well, broccoli, for one thing.
Mike Schroepfer, Facebook's chief technology officer, boasted in a recent interview with Fortune magazine about how good Facebook's artificial intelligence systems were at identifying the difference between pictures of broccoli and pictures of marijuana.
In one example, Schroepfer showed how the systems could determine with about 90% accuracy which image shown to them contained broccoli and which contained marijuana.
It was an illustration, perhaps, of how Facebook could crack down on attempted drug sales on the platform.
But at Facebook's scale, with billions of posts, being wrong 10% of the time is not good enough, Hany Farid, a professor at Dartmouth and expert in digital forensics and image analysis, told CNN Business on Friday.
"20% of the work is to get you to 90% accuracy," he said, adding that 80% of the work comes in getting to 99.9% accuracy.
"We are not even close," he said of the artificial intelligence, "we are years away from being able to do the sophisticated, nuanced things that humans do very well."
Machines, Farid said, "can't even tell the difference between broccoli and marijuana, let alone if a video is of a movie, or a videogame, or documenting war crimes, or is a crazy guy killing people in a mosque."
In November, Facebook released a report on how it was policing its platform. Trying to show it was being proactive in finding posts that violated its terms of service, it said that between July and September last year it had found 99.5% of "terrorist propaganda" before users had reported the posts to the company, and 96.8% of "violence and graphic content."
Policing a platform on which billions of people freely and openly share their thoughts, their videos, and their pictures is not an easy task. But that is the platform Facebook built.
Facebook has made it clear it is committed to investing in both human moderators and artificial intelligence, and its most recent self-reporting on its performance would suggest it is making progress. But this devastating video of mass murder still managed to slip through the cracks.
CNN Business asked Facebook on Friday if it could provide some insight into whether its artificial intelligence systems should have, or could have, detected a video like this. The company did not immediately respond.
"New Zealand Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter's Facebook and Instagram accounts and the video," Mia Garlick of Facebook New Zealand said in a statement.
CNN Business also asked Facebook whether the video had gone through any part of the content moderation process before police alerted the company to it. Facebook did not immediately respond.
After Mark Zuckerberg launched live video streaming on his platform, he told BuzzFeed News in 2016, "We built this big technology platform so we can go and support whatever the most personal and emotional and raw and visceral ways people want to communicate are as time goes on."


from CNN.com - RSS Channel https://ift.tt/2XZu5mI