The Prevent Duty Guidance: What do you need to look for in a Web Filter?

In part one of this blog post I covered the key points of the Prevent Duty Guidance and web filtering. The main takeaway was that schools should have appropriate levels of filtering, and that other specified bodies, such as colleges and universities, should consider filtering in their overall strategy.

So what does effective filtering look like and what should you look for in a filtering solution with regard to Prevent?

The most important point to acknowledge is that effective filtering requires multiple techniques and tools to minimise the risk of users accessing extremist content, either deliberately or accidentally.

One of the most important areas is how the filter analyses and categorises content to determine whether it should be displayed. Many filters rely primarily on a URL list to determine the type of content on a web page, and this approach has a couple of major limitations. The first is that nobody can keep a URL list up to date: with around 100 new websites being created every minute, it can take days or weeks before a new site is added to the list. The second big problem is that the URL list typically only covers the domain, i.e. www.newsite.com rather than www.newsite.com/reallybadstuff. So if www.newsite.com has been categorised as news and the user's policy allows access to news, they will also get access to www.newsite.com/reallybadstuff.
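
To make that limitation concrete, here's a minimal sketch in Python, purely illustrative and not based on any particular product, of a domain-only URL list lookup. Because only the host is consulted, every path under an allowed domain inherits the same verdict.

```python
# Purely illustrative: a domain-only URL list lookup (not how any
# particular product works). Only the host is consulted, so every path
# under the domain inherits its category.
from urllib.parse import urlparse

URL_LIST = {"www.newsite.com": "news"}   # hypothetical list entry

def categorise_by_list(url: str) -> str:
    host = urlparse(url).netloc
    return URL_LIST.get(host, "uncategorised")

print(categorise_by_list("http://www.newsite.com/today"))           # news
print(categorise_by_list("http://www.newsite.com/reallybadstuff"))  # news (!)
```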

The best approach is to use a filter that scans the content of every requested web page in real time to determine its category and then block or allow access. This real-time approach comes in two flavours. The first is keyword matching: looking for occurrences of specific words on the page and blocking on that basis. The limitation of keyword matching is that the appearance of a word or phrase on a page rarely gives sufficient certainty that the page belongs to a specific category, which can lead to legitimate content being blocked and harmful content being allowed through.
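
A toy illustration of the keyword approach and its failure modes, using a hypothetical keyword list: a history page mentioning a blocked word gets blocked, while a harmful page that avoids the exact words slips through.

```python
# Illustrative keyword filter with a hypothetical keyword list. A bare
# word match cannot tell a history lesson from harmful material, so it
# both over-blocks and under-blocks.
BLOCKED_KEYWORDS = {"bomb", "jihad"}   # hypothetical list

def keyword_verdict(page_text: str) -> str:
    words = set(page_text.lower().split())
    return "block" if words & BLOCKED_KEYWORDS else "allow"

# A history page about the Blitz is blocked (false positive)...
print(keyword_verdict("Bomb damage to London during the Blitz of 1940"))
# ...while a harmful page that avoids the exact words is allowed through.
print(keyword_verdict("Join our struggle and make the unbelievers pay"))
```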

A better approach is a filter with a content classifier that has contextual understanding, such as the Bloxx Web Filter with Tru-View Technology (TVT). This is a much more accurate way to determine, in real time, the type of content being requested, even if it's a brand-new web page that has never been seen before.
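
For illustration only, here's a miniature text classifier built with scikit-learn on a handful of made-up examples. It isn't TVT or any vendor's engine, but it shows the general idea of categorising a page from its full text rather than from a single keyword hit.

```python
# A toy content classifier, for illustration only: not TVT or any
# vendor's engine, just a sketch of categorising a page from its text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical miniature training set of (page text, category) pairs; a
# real classifier would be trained on a very large labelled corpus.
samples = [
    ("football scores and transfer news from the weekend", "news"),
    ("parliament debates the new education budget", "news"),
    ("take up arms against the unbelievers and join our cause", "extremism"),
    ("they deserve to be purged and driven out by force", "extremism"),
]
texts, labels = zip(*samples)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

def categorise(page_text: str) -> str:
    """Return the most likely category for a fetched page's text."""
    return model.predict([page_text])[0]

print(categorise("a call to take up arms and join the cause"))  # expected: extremism
```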

But what if the content is encrypted with SSL/TLS? Encrypted traffic is on the rise, driven by the growing desire for privacy and by companies such as Google enabling encryption by default. That means your Web filter could be blind to the content being accessed. So it's important that your filter can intercept SSL/TLS traffic and pass the decrypted content to the categorisation engine. Also look for the capability to leave sensitive traffic, such as online banking, undecrypted.
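
As a sketch of that decrypt-or-bypass decision, assuming a hypothetical list of sensitive categories and hosts, the logic looks something like this:

```python
# A sketch of the decrypt-or-bypass decision, with hypothetical category
# and host lists: inspect everything except traffic in sensitive
# categories such as online banking.
NO_DECRYPT_CATEGORIES = {"banking", "healthcare"}          # assumed policy
SENSITIVE_HOSTS = {"online.mybank.example": "banking"}     # hypothetical

def should_decrypt(host: str) -> bool:
    category = SENSITIVE_HOSTS.get(host)
    return category not in NO_DECRYPT_CATEGORIES

print(should_decrypt("online.mybank.example"))  # False: pass through encrypted
print(should_decrypt("www.newsite.com"))        # True: decrypt and categorise
```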

Another important point to consider is the content categories in the filter. The Bloxx Web Filter and Secure Web Gateway products include a number of filtering categories, such as Hate & Discrimination, Violence and Illegal, that can be used to block radicalisation content. These and other categories are also retrained based on feedback from our customers to ensure that new forms of extremist content are classified correctly, and the resulting updates to TVT are pushed out to client machines.

I mentioned earlier that content can be accessed accidentally or deliberately. If the access is deliberate, users may attempt to circumvent your filter using a proxy anonymiser, so it's important that your Web filter can identify and block access to these sites. Many filters rely on a daily-updated list, but that approach is very risky; the real-time approach provides a much better level of protection. The Bloxx Web Filter uses TVT to identify proxy avoidance sites and also uses sophisticated de-obfuscation techniques to thwart site owners' attempts to evade detection by conventional filters. One point to note: in the unlikely event that we don't block access to a proxy site, we still continue to filter the pages being requested through it.
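
As a toy illustration of the de-obfuscation idea (not any vendor's actual technique), the sketch below undoes URL-encoding and base64 before scanning for tell-tale proxy strings; the marker list is hypothetical.

```python
# Illustrative de-obfuscation pass: proxy-avoidance pages often hide
# tell-tale strings behind %xx escapes or base64 so simple scanners miss
# them. A toy version of that idea, not any vendor's method.
import base64, binascii, re
from urllib.parse import unquote

PROXY_MARKERS = ("cgi-bin/nph-proxy", "glype", "phproxy")  # hypothetical markers

def deobfuscate(html: str) -> str:
    text = unquote(html)                      # undo %xx escapes
    # Decode any base64-looking runs and append the result for scanning.
    for blob in re.findall(r"[A-Za-z0-9+/=]{16,}", html):
        try:
            text += " " + base64.b64decode(blob).decode("utf-8", "ignore")
        except (binascii.Error, ValueError):
            pass
    return text.lower()

def looks_like_proxy(html: str) -> bool:
    return any(marker in deobfuscate(html) for marker in PROXY_MARKERS)

encoded = base64.b64encode(b"powered by Glype proxy").decode()
print(looks_like_proxy(f"<div data-x='{encoded}'></div>"))  # True
```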

Another key area is search engines. You should ensure that your Web filter enforces safe search across all the major search engines and that you can report on what users have been searching for.
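
One common enforcement technique is rewriting search URLs to add the engines' safe-search parameters. The sketch below assumes the commonly used safe=active (Google) and adlt=strict (Bing) query parameters; a real deployment would cover more engines and may use the providers' own enforcement mechanisms instead.

```python
# A sketch of safe-search enforcement by query rewriting; the parameter
# map below is an assumption for illustration.
from urllib.parse import urlparse, urlencode, parse_qs, urlunparse

SAFE_PARAMS = {
    "www.google.com": {"safe": "active"},
    "www.bing.com": {"adlt": "strict"},
}

def enforce_safe_search(url: str) -> str:
    parts = urlparse(url)
    extra = SAFE_PARAMS.get(parts.netloc)
    if not extra or not parts.path.startswith("/search"):
        return url
    query = parse_qs(parts.query)
    query.update({k: [v] for k, v in extra.items()})
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

print(enforce_safe_search("https://www.google.com/search?q=example"))
```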

The use of social media to distribute and access radicalisation content is particularly difficult to manage. Many organisations will simply ban access altogether, but that sledgehammer approach is often at odds with the positive use of social media in the organisation. Your filter should be able to control access to social media content at a granular level; for example, you should be able to allow access to Facebook while blocking users' ability to post comments.
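
A minimal sketch of what granular control can look like, with hypothetical rules: read-only browsing of a social network is allowed, while write actions (HTTP POSTs, i.e. posting or commenting) are blocked.

```python
# Hypothetical per-action policy: browsing social media is allowed but
# write actions are blocked. Shown only to illustrate granular control.
READ_ONLY_SITES = {"www.facebook.com", "twitter.com"}

def verdict(method: str, host: str) -> str:
    if host in READ_ONLY_SITES and method.upper() != "GET":
        return "block"   # browsing allowed, posting/commenting blocked
    return "allow"

print(verdict("GET", "www.facebook.com"))    # allow: users can read
print(verdict("POST", "www.facebook.com"))   # block: users cannot post
```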

So what if your organisation decides it wants less restrictive filtering but still needs visibility of what users are searching for or accessing?

One way to look for intent is to use a filter that provides real-time alerts on what users are searching for. For example, the Bloxx Web Filter allows you to create lists of risky search terms so that when a user searches for one of those words or phrases, an email is sent to whoever in the organisation needs to be informed. If that approach creates too much overhead, automatically running and distributing daily reports on search terms is an alternative.
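
As an illustration of that alerting workflow (the addresses, mail host and term list are hypothetical, and this is not the Bloxx implementation), a search query can be checked against a risky-term list and an email sent when it matches:

```python
# Sketch of real-time alerting on risky search terms; all names below
# (term list, addresses, mail host) are assumptions for illustration.
import smtplib
from email.message import EmailMessage

RISKY_TERMS = {"how to join isis", "bomb making"}   # hypothetical list

def check_search(username: str, query: str) -> None:
    if any(term in query.lower() for term in RISKY_TERMS):
        msg = EmailMessage()
        msg["Subject"] = f"Risky search alert: {username}"
        msg["From"] = "webfilter@school.example"
        msg["To"] = "safeguarding@school.example"
        msg.set_content(f"{username} searched for: {query!r}")
        with smtplib.SMTP("mailhost.example") as smtp:
            smtp.send_message(msg)

check_search("jsmith", "bomb making instructions")
```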

In addition to search reports, your Web filter should have strong forensic reporting capabilities that can be used if you need to investigate an incident.

For higher education, there may be a desire to have no filtering at all. That's not an approach we would recommend, but in that scenario it's still important to have an audit trail of what content has been accessed, so your filter should be able to log all accesses without blocking any content. In addition, if users might be accessing extremist or radicalisation content for research purposes, Universities UK has issued guidance which can be found here.
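
In code terms, an audit-only mode reduces to recording every request and always allowing it; a minimal sketch, assuming a hypothetical categorisation step has already run and an assumed log format:

```python
# A minimal sketch of audit-only mode: every request is recorded with a
# timestamp, user and category, and nothing is ever blocked. The log
# file name and fields are assumptions for illustration.
from datetime import datetime

def handle_request(username: str, url: str, category: str) -> str:
    with open("web_access_audit.log", "a") as log:
        log.write(f"{datetime.now().isoformat()}\t{username}\t{category}\t{url}\n")
    return "allow"   # log, never block

print(handle_request("jsmith", "http://www.newsite.com/reallybadstuff", "extremism"))
```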

A final area to consider is the use of BYOD (bring-your-own-device) equipment on your network. Since you don't own the device, you have less control over it. One approach is to use a Web filter that can identify users by their domain credentials and apply the appropriate filtering policy.
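
A minimal sketch of that idea, with hypothetical group and policy names; a real deployment would look the user up in the directory rather than in a hard-coded table.

```python
# Hypothetical mapping from a user's directory groups to a filtering
# policy; shown only to illustrate identity-based policy selection.
GROUP_POLICIES = {
    "Students": "strict",
    "Staff": "standard",
}

def policy_for(groups: list[str]) -> str:
    # Unknown or guest BYOD users fall back to the most restrictive policy.
    for group in groups:
        if group in GROUP_POLICIES:
            return GROUP_POLICIES[group]
    return "strict"

print(policy_for(["Students"]))     # strict
print(policy_for(["BYOD-Guests"]))  # strict (fallback)
```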

There's no doubt that the Web will continue to evolve and that new threats will emerge for organisations to address. To restate my own personal view, no content filter provides 100% protection against radicalisation or extremist content being viewed. Most importantly, and as is stated clearly in the Prevent Duty Guidance, reducing the threat requires the specified authority to have a clear and cohesive strategy for minimising the risk of people being radicalised.

Join the Conversation

Give us your thoughts on the Prevent duty and other key risk areas in education IT security.