Facebook, Twitter, Microsoft, and YouTube have announced a new partnership to curb the spread of terrorist content on online platforms.
Technology giants Facebook, Microsoft, Twitter, and YouTube announced this morning that they have formed a new partnership, vowing to join forces to curb the spread of terrorist content found on their online platforms.
The partnership will see the four companies develop a shared industry database of ‘hashes’ – unique digital fingerprints of terrorist imagery and recruitment propaganda, in video or image form, that has been removed from their platforms.
Read: The Pentagon’s Task Force Ares will unleash cyber warfare against ISIS
Facebook announced the formation of the partnership in an official blog post, outlining that:
“Starting today, we commit to the creation of a shared industry database of ‘hashes’ – unique digital ‘fingerprints’ – for violent terrorist imagery or terrorist recruitment videos or images that we have removed from our services. By sharing this information with each other, we may use the shared hashes to help identify potential terrorist content on our respective hosted consumer platforms. We hope this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online.
Our companies will begin sharing hashes of the most extreme and egregious terrorist images and videos we have removed from our services – content most likely to violate all of our respective companies’ content policies. Participating companies can add hashes of terrorist images or videos that are identified on one of our platforms to the database. Other participating companies can then use those hashes to identify such content on their services, review against their respective policies and definitions, and remove matching content as appropriate.”
The database will enable each of the four companies to collate data and insight on the spread of terrorist content across the other platforms. Facebook noted that shared hashes will be used to identify other potential terrorist content, while clarifying that no personally identifiable information will be shared and that matching content will not automatically be deleted from any of the participating platforms or services.
“As we continue to collaborate and share best practices, each company will independently determine what image and video hashes to contribute to the shared database. No personally identifiable information will be shared, and matching content will not be automatically removed. Each company will continue to apply its own policies and definitions of terrorist content when deciding whether to remove content when a match to a shared hash is found. And each company will continue to apply its practice of transparency and review for any government requests, as well as retain its own appeal process for removal decisions and grievances. As part of this collaboration, we will all focus on how to involve additional companies in the future.”
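For readers curious about the mechanics, the sketch below is a minimal, purely illustrative take on how such a shared hash database could work. The companies have not disclosed their hashing scheme, so the SharedHashDatabase and review_upload names – and the use of SHA-256 as the fingerprint (production systems generally favour perceptual hashes that survive re-encoding) – are assumptions made for illustration. The points carried over from the announcement are that only fingerprints are shared, no personal information is exchanged, and a match triggers review under each company’s own policies rather than automatic removal.

```python
import hashlib
from typing import Callable, Set

# Illustrative sketch only: the companies have not disclosed their hashing
# scheme. SHA-256 is used here purely as a stand-in for whatever fingerprint
# they actually compute, and all names below are hypothetical.

class SharedHashDatabase:
    """Industry-shared set of content fingerprints; no user data is stored."""

    def __init__(self) -> None:
        self._hashes: Set[str] = set()

    def contribute(self, media_bytes: bytes) -> str:
        """A participating company adds the fingerprint of content it removed."""
        digest = hashlib.sha256(media_bytes).hexdigest()
        self._hashes.add(digest)
        return digest

    def matches(self, media_bytes: bytes) -> bool:
        """Other companies check uploads against the shared fingerprints."""
        return hashlib.sha256(media_bytes).hexdigest() in self._hashes


def review_upload(db: SharedHashDatabase, media_bytes: bytes,
                  violates_local_policy: Callable[[bytes], bool]) -> str:
    """A match flags content for review; it is not removed automatically.

    Each platform applies its own policies and definitions of terrorist
    content before deciding whether to take the item down.
    """
    if not db.matches(media_bytes):
        return "no match"
    return "removed" if violates_local_policy(media_bytes) else "kept after review"


# Example: one platform contributes a fingerprint, another checks an upload.
shared_db = SharedHashDatabase()
shared_db.contribute(b"<bytes of a removed propaganda video>")
print(review_upload(shared_db, b"<bytes of a removed propaganda video>",
                    violates_local_policy=lambda _: True))   # -> removed
print(review_upload(shared_db, b"<bytes of an unrelated video>",
                    violates_local_policy=lambda _: True))   # -> no match
```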
The announcement came with the assurance that the partnership will protect the human rights of users on the services of any of the companies involved.
“Throughout this collaboration, we are committed to protecting our users’ privacy and their ability to express themselves freely and safely on our platforms. We also seek to engage with the wider community of interested stakeholders in a transparent, thoughtful and responsible way as we further our shared objective to prevent the spread of terrorist content online while respecting human rights.”
It remains to be seen how effective the partnership between the internet giants will be. While Twitter has made strong efforts to secure the privacy of its users and has introduced new tools to prevent online harassment, YouTube continues to rely on community input to flag content deemed undesirable on the service.
Twitter, in particular, recently announced that it has suspended more than 360,000 accounts since 2015 for violating its policy on “violent threats and the promotion of terrorism”, while Microsoft banned terrorist content from its consumer services in May this year.
Facebook has had a tough time finding its feet in an evolving news economy this year, and came under fire for reportedly suppressing conservative news stories in an incident known as TrendingGate. The platform later fired its team of human editors and replaced the curation of its trending news stories with an AI, leaving many users to complain about the decline in the quality of news in their News Feed.
In particular, the platform came under fire for propagating ‘filter bubbles’ – the fragmentation of news and other information delivered to a particular audience. That ‘bubble’ has given rise to an increasing prevalence of fake news designed to provoke engagement or virality among a particular target audience, and has highlighted a concern that Facebook may be inadvertently furthering bigotry or hate speech.
Facebook has subsequently announced that it will maintain the database of hashes of content deemed to be associated with terrorist networks or goals, and that it will not accept hashes flagged by governments or law enforcement agencies. Any such organization wishing to access the database would have to submit a formal request, as it would for any other content inquiry.
Read: ISIS begins using hobby drones as improvised explosive devices
What are your thoughts? How could internet giants proactively combat the spread of terrorist content online? Be sure to let us know your thoughts in the comments below!
Follow Bryan Smith on Twitter: @bryansmithSA


