This is how Facebook can save itself

Ahead of his appearance in Congress, whistleblower Chris Wylie shares the advice that he’ll be giving to the social media platform

“We appreciate your client’s willingness to prepare a written summary as soon as possible. When do you think he will be in a position to do so? We are eager to receive it.”

This is what Facebook recently told my lawyers. Despite banning me from Facebook – and Instagram – the company requested my advice on how to fix this mess. I’m also speaking to the US Congress about Facebook this week. From all sides, people seem to want answers. So this is what I’m going to tell them.

RATHER THAN #DELETEFACEBOOK, WE NEED TO #FIXFACEBOOK

The problem with #DeleteFacebook is that it creates a false dichotomy between our privacy rights and living in a modern digitised society. If we want privacy, we shouldn’t feel forced to opt out of digital communities. The onus to protect privacy shouldn’t be on you, the user, but on the platform making money out of your life online.

Data is the new electricity of our digital economy. And just like electricity, you can’t escape data. Can we really expect a person to stop using search engines, social media and email, and still be functional in society? Let’s be real: who’s about to hire someone who refuses to use the Internet? This is what is so problematic about #DeleteFacebook. If you don’t use Facebook, you’ll still be using Google, Uber, or Instagram – and you can bet they collect data on you. Unless you become a digital hermit, you cannot avoid becoming encapsulated in data.

For all its faults, Facebook still has a lot to offer. From getting invites to your friend’s band’s shows to organising protests in oppressive regimes, Facebook helps people. And that’s pretty amazing. But I worry that Facebook’s reaction to this mess has been a knee-jerk one. If it creates an overly restrictive platform, it may actually encourage bad behaviour from advertisers, who will always try to target ads. And when Facebook reveals features like the ability to retroactively delete private messages, I worry that access to such tools will embolden online bullying and sexual harassment by enabling perpetrators to delete evidence. So I’d say to Facebook: slow down, think carefully about the consequences of these major platform changes, and consult your users. Plus:

TREAT PRIVACY AS AN URGENT MATTER OF SAFETY

We don’t allow buildings to lack fire exits as long as there are terms and conditions posted on the wall. We also don’t allow automotive companies to build unsafe cars as long as they include warning labels. This is because in every other sector, we empower public regulators to create safety standards. Rather than relying on privacy policies no one reads, developers should have to follow engineering codes to integrate privacy design into system architectures. Legislators have an opportunity to create expert-led technology safety authorities to enforce these technical standards for user safety, just as we already do for everything else we value: cars, electricity, appliances, medicines, buildings and food.

JUST BECAUSE A USER CLICKED “I AGREE” ONCE, DOESN’T MEAN THEY ALWAYS WILL

A major problem with consent on technology platforms comes from the frequent addition of new features that weren’t in the first version of the product. When we sign up, we can’t actually consent to technologies that haven’t yet been invented. Honestly, did any of us really expect facial recognition when we joined Facebook back in 2007? Prior consent shouldn’t be assumed for new features that a user wouldn’t reasonably have expected when they first signed up. And Facebook’s approach of forcing consent for new features is wrong – platforms shouldn’t be allowed to create all-or-nothing false choices that force users to either agree to new technologies or quit the platform altogether.
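
To make that concrete, here’s a minimal sketch of what per-feature consent could look like – the names and features are hypothetical, not Facebook’s actual data model. The point is simply that a new capability like facial recognition defaults to off until a user explicitly opts in, rather than inheriting the original sign-up agreement:

```python
from dataclasses import dataclass, field

@dataclass
class UserConsent:
    """Consent tracked per feature, not as one blanket 'I agree'."""
    user_id: str
    granted: set = field(default_factory=set)

    def grant(self, feature: str) -> None:
        self.granted.add(feature)

    def allows(self, feature: str) -> bool:
        # New features default to "no consent" until explicitly granted,
        # instead of inheriting the original sign-up agreement.
        return feature in self.granted

# A user who agreed to the 2007-era feature set never consented to
# facial recognition, so it stays off until they opt in.
consent = UserConsent(user_id="u123")
consent.grant("news_feed")
assert consent.allows("news_feed")
assert not consent.allows("facial_recognition")  # never assumed
consent.grant("facial_recognition")              # explicit, separate opt-in
assert consent.allows("facial_recognition")
```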

BAN APPS FROM TAKING DATA THEY DON’T NEED

Currently, a single tick box can mean your most intimate details may be used for almost any purpose. Does a crossword app really need to know your religion or sexual orientation to offer a “more personalised user experience”? No, it doesn’t. We need rules requiring the data collected by each app to be proportional to the actual purpose of the app. This would eliminate apps that offer little to no value but clone your entire digital life by harvesting your data.
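
As a rough illustration of how a proportionality rule could work in practice – the categories and field names below are invented, not a real review process – a platform could map each app’s stated purpose to the data it plausibly needs, and reject any request that goes beyond it:

```python
# Hypothetical proportionality check: each app category maps to the data
# fields plausibly needed for its stated purpose; anything else is flagged.
ALLOWED_FIELDS = {
    "crossword": {"username", "puzzle_history"},
    "fitness":   {"username", "age", "activity_log"},
}

def excessive_fields(app_category, requested_fields):
    """Return the fields an app may NOT collect, given its stated purpose."""
    allowed = ALLOWED_FIELDS.get(app_category, set())
    return set(requested_fields) - allowed

# A crossword app asking for religion and sexual orientation gets flagged,
# and its data request would be rejected outright.
print(excessive_fields("crossword", {"username", "religion", "sexual_orientation"}))
# -> {'religion', 'sexual_orientation'} (set order may vary)
```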

LAWMAKERS NEED TO CATCH UP WITH TECH

Technology should not be exempt from public debate simply because it relates to software. If we can regulate nuclear power plants, we can regulate software. And the law does not have to be slow. Just look at the criminalisation of illicit drugs. You never see debates in Parliament or Congress about the molecular structure of a new street drug. Why? Because the whole thing is delegated to regulators, who use expedited procedures to add a new drug to the list of banned substances. So if the government can use expedited legal rules to stop you from getting high, it can figure out how to quickly respond to emerging technologies.

NO MORE SHADOWY TARGETING OF FACEBOOK USERS

Microtargeting robs our democracy of scrutiny in the public forum. But a blanket ban won’t help: there are legitimate uses for targeting, such as encouraging voter registration in underrepresented minority groups. Instead, we need to address the problem of targeting happening in secret, away from the healthy scrutiny of media or critics. Facebook should build transparency pages where every ad and its targeting criteria are open to search by users, journalists and civil society. This shouldn’t just apply to political adverts. Commercial advertising deserves public scrutiny too.

TELL THE USERS EXACTLY WHAT’S GOING ON

We should get rid of “black box” targeting, where it’s unclear why a person is being targeted. The “Why am I seeing this ad?” feature on Facebook is useless and provides stupidly circular answers like “because Company X is interested in reaching people like you”. Users should be able to see the exact criteria that lead an advert to appear in their feed – and they should be able to opt out of any or all targeting criteria in three clicks or fewer.
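
Here’s a minimal sketch of what non-circular ad explanations and per-criterion opt-outs might look like – the criteria names are invented for illustration, and this is not Facebook’s actual delivery logic:

```python
# Hypothetical sketch: surface the exact criteria that matched a user to an
# ad, and honour per-criterion opt-outs instead of "people like you".
def explain_and_filter(ad_criteria, user_profile, opted_out):
    matched = {k: v for k, v in ad_criteria.items()
               if user_profile.get(k) == v}
    blocked = set(matched) & set(opted_out)
    eligible = bool(matched) and not blocked
    return eligible, matched, blocked

ad = {"age_range": "25-34", "interest": "politics", "region": "Ohio"}
user = {"age_range": "25-34", "interest": "politics", "region": "Ohio"}

eligible, matched, blocked = explain_and_filter(ad, user, opted_out={"interest"})
print(matched)   # the exact matching criteria, shown to the user verbatim
print(eligible)  # False – the user opted out of interest-based targeting
```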

PUT LIMITS ON HOW MANY TARGETED ADS A USER IS FORCED TO SEE

One of the problems revealed by Cambridge Analytica is how targeting algorithmically segregates people, so that certain groups see things no one else sees. There are two ways to push back on this. First, we could “rate limit” the amount of targeted messaging a user can see from any given organisation. In other words, campaigns would only be allowed to engage a user for a certain number of interactions before a “cooling off” period temporarily limits any more advertising. Second, we could limit the ratio of users who can be targeted by any given campaign. For example, we could require an 80-20 split for ad targeting, where 80% of the ads go to the target group and 20% go to a randomly selected group. Such ratios ensure that some outside population sees what the target group sees. Why would this be useful? Simple: communal scrutiny from outside the target group would help weed out the bullshit content.
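
Both mechanisms are simple to express in code. Here’s a rough sketch – the cap of five impressions and the 80-20 ratio are illustrative parameters, not figures Facebook has adopted:

```python
import random
from collections import defaultdict

MAX_IMPRESSIONS = 5   # hypothetical cap before a cooling-off period
TARGET_RATIO = 0.8    # 80% targeted users, 20% random outside sample

impressions = defaultdict(int)  # (org, user) -> ads shown this period

def may_show_ad(org, user):
    """Rate limit: an organisation gets a fixed number of impressions per
    user, after which a cooling-off period blocks further targeting."""
    if impressions[(org, user)] >= MAX_IMPRESSIONS:
        return False
    impressions[(org, user)] += 1
    return True

def build_audience(target_group, general_population):
    """80-20 split: pad the campaign audience with a random outside sample
    so that people beyond the target group see the same messaging."""
    outside_n = round(len(target_group) * (1 - TARGET_RATIO) / TARGET_RATIO)
    sample = random.sample(general_population,
                           min(outside_n, len(general_population)))
    return target_group + sample

# Four targeted users plus one random outsider gives an 80-20 split.
audience = build_audience(["u1", "u2", "u3", "u4"], ["x1", "x2", "x3", "x4"])
print(len(audience))  # 5
```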

JUST BE LESS CREEPY

Too often, Facebook tries to shut down outside scrutiny. Before the Cambridge Analytica story broke, Facebook actually threatened to sue the Guardian. After the story broke, it banned me, the whistleblower, from its platform. And despite Facebook having far more users outside the US than inside it, Mark Zuckerberg has refused to appear before the British, EU, and Canadian parliaments to answer tough questions.

This is the wrong approach. And honestly, it’s not a good look. Facebook should embrace scrutiny as a key part of its product development. Just as it rewards ethical hackers for revealing flaws, Facebook should reward journalists who perform the exact same public service. More broadly, Facebook needs to take a long-term view before risking an even greater existential crisis. People’s confidence in social media is eroding because of the sector’s arrogant behaviour. In this light, regulation may be Facebook’s saving grace. Yes, there will be short-term costs in adapting to new rules, but Facebook relies on its users – and without their trust, the platform won’t exist. Car safety standards have not inhibited demand, nor have they unreasonably inhibited profit. No one would buy a car from a manufacturer that dodged safety regulations, or that claimed such rules weren’t necessary for its vehicles.

We are in the middle of a new industrial revolution, but there are strong lessons from history. Just like the tech sector today, early industrialisation moved fast and broke things too. Regulation became inevitable and this resulted in the introduction of safety standards, social welfare, and labour rights. And now, we may need to add a digital bill of rights to that list. Innovation is great, but moving fast and breaking things shouldn’t involve breaking our democracy in the process.
