A company repeatedly endorsed by ministers backing the UK’s Online Safety Bill was warned by its lawyers that its technology could breach the Investigatory Powers Act’s ban on unlawful interception of communications, The Register can reveal.
SafeToNet, a content-scanning startup whose product is aimed at parents and uses AI to monitor messages sent to and from children’s online accounts, had to change its product after being warned that a feature developed for the government-approved app would break the law.
SafeToNet was hailed this week by senior politicians as an example of “new tech in the fight against online child abuse,” having previously featured in announcements from the Department for Digital, Culture, Media and Sport over the past 12 months.
Chief exec Richard Pursey recounted, during an online seminar at the CogX conference in March this year, how his company’s lawyers warned SafeToNet its technology was unlawful.
“I don’t think I’ve ever broken out in such a sweat in all my life,” Pursey told the seminar.
Intercepting data without the permission of the sender is a criminal offence under section 3 of the Investigatory Powers Act 2016 (aka the Snoopers’ Charter), punishable by up to two years in prison. Unintentional breaches can instead attract civil monetary penalties served by the Investigatory Powers Commissioner’s Office.
Pursey continued: “We were doing it for good, you know, it’s a social impact, we were doing it to safeguard children, until the lawyers said… ‘you realise you could go to prison for doing that’. And so what seemed a pretty obvious thing to do – why wouldn’t you be allowed to do that – you know, it just put the fear of God in me.”
Such interception may also breach section 1 of the Computer Misuse Act 1990, which criminalises accessing data without authorisation.
Pursey told The Register that “development of that particular feature” had stopped when the company was told of the legal compliance problem. He pointed to section 3(2)(b) of the Snoopers’ Charter, which allows message interception with the “express or implied consent” of a system owner or manager.
SafeToNet’s website says today: “For legal reasons, SafeToNet does not analyze incoming messages before a child has read them.”
Pursey added that Britain’s unique tech law environment regularly throws up this problem, telling us: “We were a very young startup then but it worries me that those that don’t have the finance to get professional advice will cut corners and innocently/naively breach laws like [the Computer Misuse Act] etc. We see that all the time, especially with international safety tech providers entering the UK market. They often have no idea these laws exist.”
The revelation that a government-approved company’s product fell foul of Britain’s laws adds weight to the ongoing campaign to reform the Computer Misuse Act, and may well prompt further changes to Britain’s convoluted surveillance legislation.
Civil servants and government ministers are engaged in a bitter war against social media platforms’ moves towards end-to-end encryption (E2EE) for user messages. Law enforcement bodies such as the National Crime Agency claim that wider adoption of E2EE will stop them from detecting paedophiles preying on children through messaging apps. Tech platforms and privacy advocates say E2EE is a vital tool to prevent and deter unlawful surveillance.
Avoiding the E2EE problem by scanning messages on children’s devices after delivery seems like it might help preserve adults’ internet privacy while allowing police agencies to focus on actual harms instead of indiscriminate platform surveillance.
Part of the government’s war on social media platforms is the Online Safety Bill, renamed from the Online Harms Bill at the last moment, perhaps in a crude attempt to disassociate it from mountains of well-informed criticism.
While Britain’s current surveillance laws were designed to place GCHQ and other spy agencies above and beyond the criminal law (following former NSA sysadmin Edward Snowden’s 2013 revelations about the extent of nation-states’ spying, and public revulsion at unchecked domestic mass surveillance), their authors may not have intended to cause difficulties for people making child safety apps. ®