Facebook’s AI system couldn’t identify Christchurch first-person livestream
While the world reels from the horrifying mass shooting at multiple Christchurch mosques in New Zealand, the social media giant is still offering explanations to wash its hands of the mess.
The shooter livestreamed his entire 17-minute rampage on Facebook. Asked why the livestream was allowed to go up in the first place, Facebook, as per a report by Bloomberg, said the gunman's use of a head-mounted camera "made it harder for its systems to automatically detect the nature of the video."
Facebook's public policy director, who gave the statement, added that "this was a type of video we had not seen before." Facebook had earlier touted that its AI and machine-learning algorithms could stop any kind of extremist content before it appeared on its platform.
Even after the shooting had taken place, Silicon Valley companies were unable to remove the content completely from their platforms. Facebook already uses a hash-matching tool called PhotoDNA to detect and remove child sexual abuse imagery, and Google has developed an open-source API version of a similar tool for its own use. However, the rules change when an extremist event is live-streamed to an audience or when parts of the clip are reused on news websites.
Motherboard has found that once a live video has been flagged on Facebook, moderators have the ability to "ignore it, delete it, check back in on it again in five minutes". These moderators are told to look for specific actions such as "crying, pleading, begging" and also the "display or sound of guns or other weapons (knives, swords) in any context." Keeping this in mind, it is still unclear how the Christchurch shooter was able to stream for a full 17 minutes before the footage was cut.