It’s almost a year to the day since the revelation that the data of millions of Facebook users made its way into the hands of Cambridge Analytica, which weaponized the information on behalf of Donald Trump’s campaign in 2016.
The fallout from the scandal continues to plague Facebook, with evidence published in court documents on Thursday suggesting that some employees knew about Cambridge Analytica’s dirty deeds months before The Guardian first reported on the issue in December 2015.
The consequences of Cambridge Analytica were so profound that they forced Mark Zuckerberg to rethink the philosophy of his company. The fruits of that period of introspection were published in his new manifesto for a “privacy-focused” Facebook earlier this month.
Gone is the man who once said, “They ‘trust me.’ Dumb f—-,” when referring to other people’s data. Zuckerberg wants to recast himself and his company as the guardians of privacy. Specifically, this will involve a big lurch toward end-to-end encryption, which will be the backbone of newly interoperable messaging services WhatsApp, Messenger, and Instagram Direct Messages.
The plan to effectively create two Facebooks, a public “town square” and a private “living room,” is divisive internally. It contributed to the departure of senior executives, including the 13-year veteran and product boss Chris Cox. But Zuckerberg, having been accused of a spectacular failure of leadership over Cambridge Analytica, is convinced that this is the right way forward for his company.
“The future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever,” he said in his blueprint for Facebook. “This is the future I hope we will help bring about.”
The problem with Facebook’s privacy push
But while Facebook’s embrace of encryption may help solve its privacy problem, it could come at a cost. Namely, it will make it much harder to detect the spread of hideous videos like that of the New Zealand mosque shootings last week, which has drawn international condemnation.
Much of the attention has focused on the video’s spread on the public-facing Facebook — the “town square,” to use Zuckerberg’s parlance. Here, Facebook says it has removed 1.5 million copies of the footage, with both its algorithms and moderators creaking under the pressure of its virality.
But what about its spread on WhatsApp, Facebook’s already-encrypted messaging service? Here, only the sender and receiver of a message can view its content, making it impossible for Facebook or law enforcement to see what is being shared. This is the Facebook “living room” that Zuckerberg imagines.
A quick Twitter search shows people complaining about receiving copies of the Christchurch massacre footage via WhatsApp. “Oh my god.. just received the Christchurch mosque attack video in a family WhatsApp group,” the British journalist Umer Ali tweeted last week. It was also spotted by a former Facebook product manager, Antonio García Martínez, who said he had identified “a litany of complaints” from WhatsApp users.
It’s not the first time WhatsApp has been abused by bad actors. Terrorists have used it to send guarded messages, as Khalid Masood did before he killed six people in an attack in Westminster, London, in 2017. And in India last year, WhatsApp was used to spread misinformation about child abduction, fueling mob lynchings.
Facebook has already taken steps to limit the number of people a WhatsApp message can be forwarded to, seeking to stem the spread of toxic content. Still, Zuckerberg is well aware that his privacy pivot could have disastrous drawbacks. In an interview with Wired earlier this month, he said (emphasis ours):
“There is just a clear trade-off here when you’re building a messaging system between end-to-end encryption, which provides world-class privacy and the strongest security measures on the one hand, but removes some of the signal that you have to detect really terrible things some people try to do, whether it’s child exploitation or terrorism or extorting people.”
Does Zuckerberg want to wash his hands of toxic content?
It has led some to question whether Zuckerberg has an ulterior motive for his privacy vision: to absolve himself of responsibility for moderating harmful content on his platform at a time when regulators are talking about levying huge fines on tech firms over the issue.
“Is this just a way for Facebook to avoid any responsibility for what people share on the platform?” asked Damian Collins, the British lawmaker who has been investigating the Cambridge Analytica scandal for months.
“This becomes a charter for spreading disinformation and other harmful content if the platform is basically going to absolve itself of any responsibility to know people are sharing,” he told Business Insider.
García Martínez has similar suspicions. “The dedication to encryption is Zuck’s move to get out from under the content moderation onus, and simply write off dealing with the issue,” he tweeted.
Concerns were also raised by Ben Horowitz, the cofounder of the influential Silicon Valley venture-capital firm Andreessen Horowitz. Horowitz’s partner, Marc Andreessen, is a Facebook board member.
“If a social network is truly private via end-to-end encryption as Mark Zuckerberg specified, nobody including Facebook or the U.S. Government would be able to monitor it for hate speech and other violations,” he said on Twitter last week. “Essentially, Facebook would be flying right into the face of some of the current backlash against them.”
Zuckerberg has promised to publicly consult with experts around the world, including governments, law enforcement, regulators, and safety advocates, to address these issues. In putting out the fire over privacy, Zuckerberg will need to be careful that he doesn’t pour fuel on another one.