UK Government Unveils Online Extremism Blocker
Home Secretary Amber Rudd has unveiled the UK government’s new tool for detecting and blocking online extremist and jihadist content.
Publicly Funded
The new tool was developed by London-based artificial intelligence company ‘ASI Data Science’, and was funded with £600,000 of public money.
Tackling A Growing Problem
The tool was developed to tackle the growing problem of extremist / jihadist (e.g. IS) content being posted online, and the fact that current moderating techniques simply cannot keep up with the job of detecting and removing it fast enough. For example, beyond the popular video platforms where such content is usually posted, the Home Office estimates that between July and the end of 2017, extremist material appeared on almost 150 web services that had not been used for this kind of propaganda before.
An ASI Data Science spokesperson is reported as saying that there are currently over 100 different (extremist / IS) videos posted on over 400 different platforms online.
The danger, of course, is that the material can contribute to the promotion of extremist causes, the radicalisation of people, the recruitment of new terror group members, and the inspiration of individuals / groups to commit their own acts of terror. Some of the content can also be very disturbing, e.g. if viewed by children online.
How The New Tool Works
The new tool is reported to have an AI element which has enabled it to be ‘trained’ to correctly pick out extremist content. For obvious reasons, the exact workings of the tool are being kept secret, but it is understood that it uses an algorithm to detect signals that contribute to an overall probability (low to high) that a video is terrorist propaganda rather than, for example, a legitimate news video. The tool can be applied at the point of upload on a video platform, thereby stopping the propaganda video from being uploaded in the first place.
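Because the exact workings are secret, any illustration can only be schematic. As a rough, hypothetical sketch of what a point-of-upload check of this kind might look like, with the classifier, function names and threshold all assumed rather than taken from the source:

# Purely illustrative sketch of point-of-upload screening. The real tool's
# workings are undisclosed, so the names and threshold here are assumptions.
def screen_upload(video_bytes, score_video, review_threshold=0.9):
    """Return True if the upload should be held back for human review.

    `score_video` stands in for the undisclosed classifier: it maps a video
    to a probability (0.0 to 1.0) that the content is terrorist propaganda.
    """
    probability = score_video(video_bytes)
    return probability >= review_threshold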
The tool is reported to accurately detect 94% of IS video uploads, while typically flagging only 0.005% of non-IS video uploads by mistake. On a site with five million daily uploads, for example, that would mean around 250 non-IS videos being flagged for review / for a human decision to be taken.
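As a quick check of that worked example, the expected number of wrongly flagged videos is simply the reported false-positive rate applied to the daily upload volume:

# Checking the article's figures: a 0.005% false-positive rate applied to
# a platform with five million daily uploads.
daily_uploads = 5_000_000
false_positive_rate = 0.005 / 100          # 0.005% expressed as a fraction
flagged_non_is_videos = daily_uploads * false_positive_rate
print(flagged_non_is_videos)               # 250.0 videos per day for human review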
Others Have Tried
Facebook and Google are known to have been trying to develop their own tools for filtering terror material, and this UK version is thought to be most suitable, at least initially, for use by smaller platforms.
Home Secretary Says…
Home Secretary Rudd is reported as saying that even though the tool has been developed, the UK government won’t rule out taking legislative action too where necessary, and that an industry-led forum such as The Global Internet Forum to Counter Terrorism, launched last year, will also help to tackle the issue.
What Does This Mean For Your Business?
For businesses using the smaller social media and video platforms, this tool could be a practical solution to current moderation problems. For the UK government, it provides some good publicity, a chance to gain back some ground in the online battle with terror groups such as IS, and a way to be seen to be tackling concerns about the radicalisation of UK citizens. It also provides a way for the Home Secretary to apply more pressure to the popular social media platforms, some of which the UK government has criticised for not acting quickly enough to detect and remove extremist content.
For UK businesses generally, association with and use of advertising platforms that are free of extremist and unsavoury material is obviously better from a brand protection point of view. It is, however, a fact that Facebook and Google are hugely important for business advertising, and that PPC advertising, for example, is unlikely to be affected by whether the chosen video / social media platform adopts such a screening tool in the near future.