Facebook recently announced that it removed 1.5 million videos in the first 24 hours after the deadly attack in Christchurch, New Zealand, in which 50 people were killed and a similar number injured.
Of those 1.5 million videos, 1.2 million were blocked at upload. Facebook's Mia Garlick said the company is working around the clock to remove violating content, using a combination of technology and people.
To remove content that violates its policies, the company relies on a dedicated team of human moderators alongside AI-enabled systems that identify and flag inappropriate content.
After the attack, Facebook, YouTube, and Reddit took steps to remove accounts sharing violent footage of the shooting, which the gunman had live-streamed.
Beyond removing the original graphic video circulating on its platform, Facebook is also taking down edited versions that omit the graphic content, in an effort to curb its spread.