Terrorists and hate groups continue to get their propaganda onto social media platforms despite efforts by Facebook, Twitter, and YouTube to shut them down, a US Senate panel heard on Wednesday.
Islamic State, Al Qaeda, and others have stepped up their use of bots and other methods to fight the artificial intelligence and algorithms the social media giants deploy to screen them out.
In addition, they are turning to smaller platforms and encrypted messaging apps with less capacity to police users, such as Telegram, Reddit, and WhatsApp, though none of these yet offers the broad reach that Facebook and YouTube have had.
Nevertheless, the largest social media firms were pressed in a Senate Commerce Committee hearing on Wednesday over their reliance on artificial intelligence and algorithms to keep their powerful platforms clear of violent extremist posts.
YouTube's algorithms automatically remove 98 percent of videos promoting violent extremism, said the company's Public Policy Director, Juniper Downs.
Facebook's head of Product Policy and Counterterrorism, Monika Bickert, said that 99 percent of Islamic State and Al Qaeda-related terror content "is detected and removed before anyone in our community reports it, and in some cases, before it goes live on the site."