Priti Patel has hit out at Facebook’s plans to encrypt direct messages, even as the company is facing criticism in the US for failing to protect the privacy of women seeking abortions.
The UK home secretary has urged Meta, which owns Facebook, Instagram and WhatsApp, to reconsider its intentions to apply “end-to-end encryption” to direct messages sent from Messenger and Instagram.
The technology, already enabled on WhatsApp, prevents anyone other than the sender and recipient of a message from accessing its contents – including Facebook itself, as well as law enforcement and state security agencies.
Writing in the Telegraph, Patel warned that the feature could limit the ability of police to investigate and prevent child abuse. “Parents need to know that their kids will be safe online. The consequences of inadequate protections – especially for end-to-end encrypted social media platforms – would be catastrophic,” she wrote.
“A great many child predators use social media platforms such as Facebook to discover, target and sexually abuse children. These protections need to be in place before end-to-end encryption is rolled out around the world. Child safety must never be an afterthought.”
Patel’s comments were prompted by an announcement earlier this month that Facebook would be moving ahead with plans to test the change. Although the company said the plans for the test had been in place for a long time, it announced the decision shortly after it came under intense criticism for handing over to police the direct messages of a 17-year-old accused of having an illegal abortion in Nebraska.
Facebook says it did not know that the police were investigating an abortion when it handed over the data, but US pro-choice groups lambasted the company’s cooperation with investigators.
At Netroots Nation, an American gathering of leftwing groups, activists plastered the company’s stand with signs saying “Facebook isn’t free” and “Fix Facebook now”, prompting the company to leave the booth unstaffed for the second day of the conference.
With end-to-end encryption enabled, the company would have been unable to hand the data to the police, even if they were in possession of a legally binding search warrant. But it would also be unable to monitor communications between users for other reasons, such as to find and report child abuse imagery being shared in direct messages.
In 2021, Facebook alone found and reported 22m pieces of child abuse imagery to the National Center for Missing and Exploited Children, a US nonprofit that coordinates responses to child abuse online. Instagram reported an additional 3.3m, and activists fear that end-to-end encryption could cause those numbers to plummet, with many millions of instances of child abuse imagery being shared disappearing from view.
“Meta’s announcement that they are testing default end-to-end encryption before ensuring effective child safety mitigations are in place will pose an immediate risk to children,” said Andy Burrows, the NSPCC’s head of child safety online policy.
“Private messaging is the frontline of child sexual abuse online and this will start to blindfold the company, meaning less child abuse will be detected.”
Patel’s comments come as the fate of the online safety bill, which aims to create a new internet regulator with powers over sites such as Facebook, hangs in the balance.
The bill, which failed to make it on to the legislative timetable before the summer recess, will probably be revived in some form by the next prime minister. But the tone of the Conservative leadership contest has been sharply dismissive of aspects of the bill seen as “legislating for offence”, including efforts to clean up older laws such as the Malicious Communications Act by replacing them with a new offence of “harmful communications”.
In a statement, a Meta spokesperson said: “Experts are clear that technologies like those proposed in this paper would undermine end-to-end encryption and threaten people’s privacy, security and human rights. We have no tolerance for child exploitation on our platforms and are focused on solutions that do not require the intrusive scanning of people’s private conversations. We want to prevent harm from happening in the first place, not just detect it after the fact.
“We already do this by banning suspicious profiles, restricting adults from messaging children they’re not connected with and defaulting under-18s to private or ‘friends only’ accounts. We’re also encouraging people to report harmful messages to us so we can see the reported contents, respond swiftly and make referrals to the authorities. We continue to work with outside experts and law enforcement to help keep people safe online.”