Monika Bickert, Facebook's Director of Global Policy Management, and Brian Fishman, the company's Counterterrorism Policy Manager, wrote in a post on the company's website that they believe social media should not be a place where terrorists have a voice.
Facebook is also experimenting with language understanding, analyzing pro-terrorist text that's been removed from the site.
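One way to picture this kind of language analysis is scoring new posts by their word overlap with a corpus of previously removed posts. The corpus, scoring rule, and threshold below are all made up for illustration; Facebook's actual models are far more sophisticated and not public.

```python
from collections import Counter

# Hypothetical corpus of posts already removed for supporting terrorism.
# (Invented toy data, not real examples.)
removed_posts = ["join the fight brothers", "support the caliphate fight"]

def build_profile(posts):
    """Count word frequencies across the removed-post corpus."""
    words = Counter()
    for post in posts:
        words.update(post.lower().split())
    return words

profile = build_profile(removed_posts)

def similarity_score(text: str) -> float:
    """Fraction of a post's words that also appear in the removed corpus."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for token in tokens if token in profile)
    return hits / len(tokens)

def flag_for_review(text: str, threshold: float = 0.5) -> bool:
    """Flag a post for human review if it looks too similar to removed content."""
    return similarity_score(text) >= threshold
```

The point of a sketch like this is the workflow, not the model: removed content becomes training signal, and borderline matches go to human reviewers rather than being deleted automatically.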
Bickert and Fishman acknowledge this pressure in their post, writing that "in the wake of recent terrorist attacks, people have questioned the role of tech companies in fighting terrorism online". In most cases, Facebook only removes objectionable material after users report it, and some experts have criticized the effort as a drop in the bucket considering how much content Facebook's users share.
Facebook's announcement comes on the heels of a blog post from Facebook policy chief Elliot Schrage which pledged the company would begin to "talk more openly about some complex subjects", including how platforms should fight the spread of terrorist "propaganda" online.
The post notes that "at Facebook, more than 150 people are exclusively or primarily focused on countering terrorism as their core responsibility".
Right now, the social giant is focusing on terrorist groups based in the Middle East, such as ISIS and Al Qaeda, but it hopes these tools will eventually serve as effective counterterrorism measures against any similar organization.
The ability of so-called Islamic State to use technology to radicalise and recruit people has raised major questions for the large technology companies.
Companies have sharply boosted the volume of content they have removed in the last two years, but these efforts haven't proven effective enough to tamp down a groundswell of criticism from governments and advertisers.
The company says it is also using algorithms to detect "clusters" of accounts or images relating to support for terrorism.
Facebook has partnered with Microsoft, YouTube and Twitter to develop a shared industry database of "hashes", or digital fingerprints, of terrorist content. "What we see is terrorist actors and their supporters start to understand the kind of things that we're doing and they try to change what they do and we have to be reactive to that".
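The shared database works by letting each partner check uploads against fingerprints contributed by the others. A minimal sketch of that lookup is below; the real database uses perceptual hashes that survive re-encoding and cropping, whereas the SHA-256 hash here is only an exact-match stand-in, and the sample fingerprint is invented.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Compute a digital fingerprint of a piece of content.

    SHA-256 is a stand-in: it only matches byte-identical copies, while the
    industry database relies on perceptual hashing to catch altered re-uploads.
    """
    return hashlib.sha256(content).hexdigest()

# Hypothetical shared set of fingerprints contributed by partner platforms.
shared_database = {fingerprint(b"previously identified propaganda bytes")}

def is_known_terrorist_content(content: bytes) -> bool:
    """Check a new upload against the shared fingerprint set."""
    return fingerprint(content) in shared_database
```

Sharing hashes rather than the content itself lets companies cooperate without redistributing the material they are trying to remove.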
The company has committed to fully supporting law enforcement agencies by providing any data they require, promoting encryption to help secure users, and implementing these security measures in sister platforms like WhatsApp and Instagram as soon as possible.
Terrorism remains a problem that governments and companies everywhere are attempting to solve, but Facebook plans to launch several initiatives to contribute to these efforts. "We want to be very clear how seriously we take this - keeping our community safe on Facebook is critical to our mission".
In the post, Bickert and Fishman admit "AI can't catch everything".