The British Government is providing funding and support for a new project that will help to stop violent videos being shared online after terrorist attacks.
The new funding will support efforts to develop industry-wide technology that can automatically identify online videos which have been altered to avoid existing detection methods, and help prevent them from being shared online.
The announcement follows the Christchurch attack in New Zealand in March, in which 51 people were killed. In the aftermath, hundreds of different versions of the attacker’s live-streamed video spread across online platforms; Facebook alone removed over 1.5 million uploads of the video. Many versions had been intentionally edited to evade current content filters and, in some cases, took days to be removed.
British data-science experts, supported by the government, will use the new funding to create an algorithm which any technology company in the world can use, free of charge, to improve the way they detect violent and harmful videos and prevent them from being shared by their users.
Not only will this make it much harder for terrorist content to be shared online, but the outcomes of the research could eventually also be used to help spot other types of harmful content, such as child sexual abuse.
Home Secretary Priti Patel said the funding honours the commitments made in the Christchurch Call to Action to tackle terrorist use of the internet, which world leaders signed up to at a summit in Paris in May.