Over 68 IT security and privacy academics have joined the chorus of concern over the U.K.’s Online Safety Bill, emphasizing the need for amendments to prevent any compromise of strong encryption, which plays a crucial role in safeguarding digital communications. In an open letter, the researchers underline the risks the draft legislation poses to essential security technologies.

Their concerns echo those of popular encrypted messaging services such as WhatsApp, Signal, and Element, which have said they would rather withdraw from the U.K. market, or be blocked by U.K. authorities, than compromise user security. Apple has also voiced its concerns, stating that the bill poses a significant threat to end-to-end encryption, which it deems a critical protection capability. Without adequate amendments to protect strong encryption, Apple warns, the bill could put U.K. citizens at greater risk, contrary to the safety claims made in the legislation’s title. An independent legal analysis conducted last year also raised concerns about the bill’s surveillance powers and their potential impact on the integrity of end-to-end encryption.
Security academics are urging lawmakers in the House of Lords to defend encryption in the proposed legislation. The bill is currently at the report stage in the Lords, where amendments can still be put forward. The academics hope their expertise can mobilize lawmakers in the second chamber to address the encryption issue where MPs in the Commons have failed.
The academics, from prestigious universities including King’s College, Imperial College, Oxford, Cambridge, Edinburgh, Sheffield, and Manchester, have written a letter expressing concern about the Online Safety Bill. They argue that by mandating surveillance technologies in the name of online safety, the bill undermines both privacy guarantees and safety itself.
Their main concern is the bill’s advocacy for “routine monitoring” of communications to combat the spread of harmful content. The academics argue that this approach disregards critical security protocols and will cause significant harm to the public and society as a whole.
The academics assert that routine monitoring of private communications cannot coexist with the privacy guarantees offered by online communication protocols. They warn against attempts to address this contradiction through additional technologies like client-side scanning or crypto backdoors, stating that such attempts are destined to fail both technologically and societally.
In summary, the academics emphasize that technology is not a solution to all problems. They provide concise explanations of why the two proposed routes to accessing protected private messages are incompatible with maintaining privacy and security rights.
The experts caution that no technological solution can both preserve the confidentiality of communications and share their contents with third parties, and the history of cryptographic backdoors shows that every previous attempt to do so has failed. All of the proposed technological solutions involve granting third parties access to private speech, messages, and images based on those parties’ own criteria.
The experts argue that implementing client-side scanning on mobile devices disproportionately invades privacy and amounts to constant surveillance. It is like having a mandatory wiretap constantly scanning for prohibited content. Additionally, the technology itself is not robust enough to meet the requirements of the proposed bill.
The notion of having a “police officer in your pocket” faces significant technological hurdles. It must accurately detect and reveal targeted content while not detecting or revealing non-targeted content. Even existing client-side scanning technology designed to detect known illegal content has issues with accuracy.

The experts also raise concerns about the potential misuse of such algorithms for purposes like facial recognition and covert surveillance.
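The scanning the experts describe for known illegal content is, at its core, matching files on the user’s device against a list of fingerprints of previously identified material, before the message is ever encrypted. The sketch below is a deliberately simplified illustration of that idea (the blocklist contents and function names are invented for the example); deployed systems use perceptual rather than exact hashes, which is precisely where the accuracy problems arise.

```python
import hashlib

# Hypothetical blocklist: fingerprints of known prohibited files.
# Real systems use perceptual ("fuzzy") hashes so near-duplicates
# still match; this toy example uses exact SHA-256 for clarity.
BLOCKLIST = {
    hashlib.sha256(b"known-bad-image-bytes").hexdigest(),
}

def scan_outgoing(attachment: bytes) -> bool:
    """Return True if the attachment matches the blocklist.

    The check runs on the sender's device *before* encryption --
    which is why critics liken client-side scanning to a mandatory
    wiretap: it inspects content the user believes is private.
    """
    return hashlib.sha256(attachment).hexdigest() in BLOCKLIST

print(scan_outgoing(b"known-bad-image-bytes"))  # True: exact match
print(scan_outgoing(b"known-bad-image-byteX"))  # False: one byte changed
```

The second call shows the trade-off: an exact hash misses any altered copy, so real deployments rely on perceptual hashes that tolerate modifications — and those, in turn, can collide on entirely innocent content, producing the false positives the academics warn about.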
Furthermore, the bill may push platforms toward even more intrusive AI models that scan messages for previously unseen but prohibited content. Such technology is not reliably developed, raising the risk of false positives: innocent users could have their private messages viewed without justification, and could even be falsely accused of viewing illicit material.

False positives in client-side scanning AI models can have serious consequences. Sharing private or sensitive messages with third parties — whether private-company vetters, law enforcement, or anyone else with access to the monitoring infrastructure — opens the door to exploitation and abuse. These models also risk being repurposed for broader surveillance, with their scope expanded to detect other types of content, raising concerns about the increasing levels of state-mandated surveillance to which U.K. citizens might be subjected.

We have contacted the Department for Science, Innovation, and Technology (DSIT) for the government’s response to this issue.