Prime Minister Jacinda Ardern speaks to the House at Parliament on March 19, 2019 in Wellington, New Zealand.
London (CNN) Facebook has acknowledged that its systems failed to catch the livestreamed video of the New Zealand mosque attack, and has shed new light on how the company became aware of the video.
In a blog post late Wednesday evening, the social media company's vice president of integrity, Guy Rosen, wrote that the shooter's video did not trigger Facebook's automatic detection systems because its artificial intelligence had not been trained on enough examples of that type of video.
The shooter livestreamed 17 minutes of the horrific attack -- which left 50 people dead -- on Facebook.
Facebook said when the video was live, fewer than 200 people watched it. The video was later viewed 4,000 times before Facebook took it down. The company hasn't said exactly when it removed the shooter's video.
Since the attack, the video has been downloaded and re-uploaded millions of times to various platforms.
New Zealand leaders have criticized Facebook for not taking enough action to remove all versions of the video.
Artificial intelligence systems rely on "training data": Facebook and other companies feed their software examples of the kind of content that should be taken down. Facebook uses such systems to help catch and remove content such as nudity and terrorist propaganda.
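Facebook has not published details of these classifiers. The sketch below illustrates only the general supervised-learning idea Rosen is describing, in Python, with hypothetical feature vectors standing in for whatever signals the real systems extract from a video.

```python
# A minimal sketch of supervised content detection: a classifier is
# only as good as its labeled training examples. The feature vectors
# and labels here are hypothetical, standing in for embeddings a real
# system would extract from video.
from sklearn.linear_model import LogisticRegression

training_features = [
    [0.9, 0.1, 0.8],  # labeled "violating" by human reviewers
    [0.8, 0.2, 0.7],  # labeled "violating"
    [0.1, 0.9, 0.2],  # labeled "benign"
    [0.2, 0.8, 0.1],  # labeled "benign"
]
training_labels = [1, 1, 0, 0]  # 1 = take down, 0 = leave up

model = LogisticRegression().fit(training_features, training_labels)

# A new video whose features resemble nothing in the training set
# scores near 0.5: too ambiguous to trigger automatic removal under
# a high-precision threshold. This is the failure mode Rosen
# describes: the system had not seen enough first-person attack
# footage to recognize it.
new_video = [[0.5, 0.5, 0.5]]
print(model.predict_proba(new_video))
```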
"We've been asked why our image and video matching technology, which has been so effective at preventing the spread of propaganda from terrorist organizations, did not catch those additional copies," Rosen wrote. "What challenged our approach was the proliferation of many different variants of the video, driven by the broad and diverse ways in which people shared it."
Rosen also revealed that the way users flagged the video led to a delay in Facebook's reaction. The social network has in the past focused on reacting immediately to videos that show suicide.
The first user reports of the New Zealand attack video, which came in after the livestream ended, labeled it as something "other than suicide" and as such "it was handled according to different procedures." Rosen said Facebook is revising that logic to escalate other types of content more quickly.
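Facebook has not described its triage pipeline. The sketch below is a hypothetical illustration of category-based routing, with made-up priorities, showing how a report labeled "other" could land in a slower review queue than one labeled as suicide.

```python
# A minimal sketch, not Facebook's actual pipeline: user reports are
# routed by category, and only some categories trigger immediate
# escalation. Category names and priorities are illustrative.
REVIEW_PRIORITY = {
    "suicide_or_self_injury": 0,  # reviewed immediately
    "violence": 1,
    "terrorism": 1,
    "hate_speech": 2,
    "nudity": 2,
    "other": 3,                   # queued behind everything else
}

def triage(report_category: str) -> int:
    """Return a review-queue priority; lower means reviewed sooner."""
    return REVIEW_PRIORITY.get(report_category, REVIEW_PRIORITY["other"])

# The first reports of the attack video arrived under a category
# other than suicide, so under logic like this they would not have
# been escalated for accelerated review.
print(triage("other"))                   # 3: slow queue
print(triage("suicide_or_self_injury"))  # 0: immediate
```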
Facebook users can flag as inappropriate content that includes nudity, violence, harassment, false news, spam, terrorism, hate speech, gross content, and suicide or self-injury.
Further complicating the problem for Facebook, a core community of "bad actors" worked together to continually upload edited versions of the video. The variants -- which included footage filmed off television and computer monitors -- deceived the AI system and allowed those copies to spread, Rosen wrote.
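Rosen did not say which matching technology Facebook uses; perceptual hashing is one common approach to the "image and video matching" he mentions, sketched below with a toy average hash. Light edits such as re-encoding barely move the hash, while re-filming a screen typically moves it past any matching threshold, which is the evasion Rosen describes.

```python
# A minimal sketch of perceptual hashing, not Facebook's actual
# matcher: near-identical frames produce nearly identical hashes,
# but heavier transforms (re-filming a screen, cropping, overlays)
# push the hash too far from the original for a match.
import numpy as np

def average_hash(gray_frame: np.ndarray) -> np.ndarray:
    """64-bit average hash: 1 where an 8x8-downsampled pixel exceeds the mean."""
    h, w = gray_frame.shape
    small = gray_frame[: h - h % 8, : w - w % 8] \
        .reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing hash bits; small means a likely match."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
original = rng.random((64, 64))  # stand-in for a video frame

# Mild re-encoding noise: the hash barely changes, so the copy is caught.
reencoded = original + rng.normal(0, 0.01, original.shape)
# Filming a monitor: brightness shift, crop, and heavy noise move the
# hash far from the original, so matching fails.
refilmed = 0.6 * original[4:, 4:] + rng.normal(0, 0.15, (60, 60))

print(hamming(average_hash(original), average_hash(reencoded)))  # small
print(hamming(average_hash(original), average_hash(refilmed)))   # large
```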
Facebook will now start using an audio-based technology to detect videos that might have been edited to trick the visual AI system, Rosen wrote.
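Facebook did not detail the audio technique. The sketch below illustrates the generic idea of audio fingerprinting: each short window of the soundtrack is summarized by its dominant frequency, a signature that survives visual edits as long as the audio track is carried over.

```python
# A minimal sketch of audio fingerprinting, not Facebook's actual
# method: summarize each window of audio by its dominant frequency
# bin. Visual edits that defeat image matching usually leave the
# soundtrack intact, so the fingerprint still matches.
import numpy as np

def audio_fingerprint(samples: np.ndarray, window: int = 1024) -> list[int]:
    """One dominant-frequency bin per window of samples."""
    n = len(samples) // window
    chunks = samples[: n * window].reshape(n, window)
    spectra = np.abs(np.fft.rfft(chunks, axis=1))
    return spectra.argmax(axis=1).tolist()

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 8192)
audio = np.sin(2 * np.pi * 440 * t)  # hypothetical original soundtrack

# A "re-filmed" copy: same audio, but quieter and with added noise.
copy = 0.5 * audio + rng.normal(0, 0.05, audio.shape)

# The dominant-frequency sequences match, flagging the copy even
# though its video frames would no longer match visually.
print(audio_fingerprint(audio) == audio_fingerprint(copy))  # True
```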
Some critics have called on Facebook to place a time delay on Facebook Live streams, but Rosen argued that such a delay would not have addressed the problems the company faced with the New Zealand video, and that a delay "would only further slow down videos getting reported, reviewed and first responders being alerted to provide help on the ground."
Source: https://edition.cnn.com/2019/03/21/tech/facebok-new-zealand-artificial-intelligence/index.html