In the wake of terrorists using online social networks to propagate their agenda, one of the largest social media networks, Facebook, has revealed how it is using its AI unit to curb terrorist-related content on its platform.
As the internet has expanded to different corners of the world, it has quickly drawn the interest of notorious groups with malicious intent.
Social networks have presented a new avenue for terrorist groups not only to recruit prospective candidates with similar extremist views but also to plan and coordinate attacks in the real world.
“There’s no place on Facebook for terrorism. We remove terrorists and posts that support terrorism whenever we become aware of them. When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny,” Facebook stated.
Facebook recently deployed an artificial intelligence system to take up the fight against terrorist content on its platform.
Facebook’s AI, Lumos, makes use of machine learning along with the following techniques:
- Image matching: The AI identifies images or videos similar to those banned in the past and flags them before they reach audiences on Facebook.
- Language understanding: A more recent development that the company is still testing out, this technique uses machine learning to identify text that may be advocating terrorism.
- Targeting terrorist accounts with similar patterns: Disabling accounts, groups, or pages related to, or resembling, terrorist accounts banned in the past.
- Recidivism: Detecting fake accounts created by repeat offenders and removing them faster than they can be created again.
- Cross-platform monitoring: This AI monitoring will be applied not only to the Facebook desktop site and mobile app but also across other Facebook-owned platforms such as Instagram, WhatsApp, and Messenger.
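Facebook has not published the details of its image-matching system, but the general idea behind this class of technique is perceptual hashing: condense each image into a short fingerprint so that near-duplicates of a banned image produce near-identical fingerprints. The sketch below is a minimal, hypothetical illustration using a simple "average hash" on an 8x8 grayscale grid; the function names, threshold, and toy data are all assumptions for demonstration, not Facebook's actual implementation.

```python
# Hypothetical sketch of perceptual-hash image matching.
# Not Facebook's algorithm; a minimal "average hash" for illustration.

def average_hash(pixels):
    """Hash an 8x8 grayscale grid (values 0-255) into a 64-bit fingerprint.

    Each bit records whether a pixel is brighter than the grid's mean,
    so slightly altered re-uploads yield nearly identical hashes.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count how many bits differ between two fingerprints."""
    return bin(h1 ^ h2).count("1")

def is_match(candidate_hash, banned_hashes, threshold=5):
    """Flag content within `threshold` bits of any known banned hash."""
    return any(hamming_distance(candidate_hash, banned) <= threshold
               for banned in banned_hashes)

# Toy example: a banned image versus a slightly altered re-upload.
banned = [[10 * (r + c) for c in range(8)] for r in range(8)]
reupload = [row[:] for row in banned]
reupload[0][0] += 3  # a tiny pixel change should not evade the match

banned_db = {average_hash(banned)}
print(is_match(average_hash(reupload), banned_db))  # True: flagged
```

In practice, systems like this store fingerprints of previously banned media and compare every new upload against that database, which is far cheaper than comparing raw pixels and is robust to small edits such as re-encoding or minor cropping.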
“Although academic research finds that the radicalization of members of groups like ISIS and Al Qaeda primarily occurs offline, we know that the internet does play a role — and we don’t want Facebook to be used for any terrorist activity whatsoever,” the company added.
What does or does not support terrorism can be subjective: an image of a terrorist or text discussing terrorism may also be part of a news report. To sift through such content, Facebook will also be adding human reviewers to the fold to ‘understand more nuanced cases’.
The AI’s human counterparts will be responsible for reviewing content reported by members of the Facebook community, assessing the credibility and severity of terrorist threats found by the AI, and assisting the system in other ways.
“Terrorists are continuously evolving their methods and we’re constantly identifying new ways that terrorist actors try to circumvent our systems — and we update our tactics accordingly,” the company added.
Terrorism has been a growing threat for the past few decades, and as the internet reaches the far corners of the world, it gives people a way to connect beyond boundaries, but it also gives perpetrators a way to propagate their philosophy, meet like-minded people, and recruit impressionable minds.
Facebook has been making consistent efforts to make its community a better place for people to interact in, whether fighting revenge porn, curbing fake news, training its AI to monitor activity on its platform, or collaborating with other social media bigwigs to tackle the threat of terrorism online. Needless to say, all of this is greatly needed.
Facebook seems to agree with the opinion that ‘social media should not be a place where terrorists have a voice’. The threat is real and the sooner we respond to it, the better it is for a safer internet in the future.