“If you have nothing to hide, you have nothing to fear.”
It was an argument we heard a lot in the years following Facebook CEO Mark Zuckerberg’s famous claim that privacy was no longer a social norm. A lot has changed in the eight years since. The web has evolved, new tools make it easier to protect our privacy online, and scandals with social networks and other online entities have made privacy itself a hot topic once again.
And yet, as the second decade of the 21st century edges towards its conclusion, we continue to sacrifice our privacy in the name of progress, often without our knowledge. But do we care enough about it to slow down the pace of technological innovation? To halt the development of more powerful interconnected services? Well, it all depends on who you ask.
In the name of progress
In his 2014 TED Talk titled “Privacy is dead and that’s great,” Richard Aldrich highlighted some of the exciting benefits of a privacy-free future. He suggested that through smartphones and cameras, the general public could help solve high-profile crimes, corporations wouldn’t be able to dodge tax obligations through shady accounting, and tracking people’s biometrics could lead to great advances in healthcare.
His idea of the future relies on such transparency extending to everyone, including the wealthy and politically connected. But the promise of living longer by having health data on tap for analytical services and artificial intelligences could be an easy sell compared to the seemingly ever more nebulous concept of privacy.
In a talk at dConstruct 2014, Tom Scott went a step further. He suggested that by 2030, privacy could become something that only grandparents remember. Such an age of pervasive surveillance would create a socially manned, digital panopticon, he said, helping to bring crime levels to historic lows by making everyone accountable for their actions — not just what they do today, but everything they have ever done.
In many ways, we’re seeing the first hints of such a future right now.
If the 2000s were a decade of advancements in compact computing and processing power, the 2010s have been driven by data. With ever-expanding free services offered by companies like Google and Facebook, big data and the analytics that followed have led to huge profits for those companies, but also to exciting new products. Translation tools and image and speech recognition have all improved enormously in the past few years thanks to the collection of data on a hitherto unheard-of scale.
Smart assistants like Siri and Cortana take those tools and improve them further through personalization, learning behaviors based on information gathered about their users. Smart speakers like Amazon’s Alexa-driven Echo devices are increasingly offering more data-driven functions with voice support.
These are all ideas that on paper sound like they would open up the world to a beautiful, data-driven tomorrow. As Google’s Sundar Pichai explained, this vision of the future is “AI-first” and allows us to live alongside this augmented reality in a manner that is more personalized, if less anonymous.
It sounds like the trade is worth it then, right? Well, not to everyone. Rising to counter these utopian ambitions is a growing movement that doesn’t want to see such a future come about, especially if it’s not willingly instigated. That’s proved a very real concern too, since companies like Google have been found to effectively disregard user preference in their ever-hungrier quest for data. There’s a disturbing perspective on where this is leading, and the stakes rise by the day.
Looking forward through dystopian lenses
One expert waving a red flag is Lotte Houwing. She’s a privacy enthusiast who works on strategic litigation in the field of human rights in the Netherlands. For her, it’s all about data and who controls it.
“I share different data with my employer than with my mother, and it is important for me to have that control,” she told Digital Trends.
Houwing suggested that too much surveillance, combined with a willingness to accept it as the norm, could lead to a society built around compliance to an arbitrary digital authority. Such a world, she argued, would cater to a select few and reward falsehoods and conformity above all else.
To help imagine how this philosophy of privacy could play out in the real world, Houwing drew upon the wealth of dystopian fiction we have. A particularly illuminating episode of Black Mirror (“Nosedive”) shows how every aspect of a person’s life could be affected by their numerical rating in a digital application. How they interact with people in their personal life, how bright their smile is, and, perhaps most disturbingly, their adherence to societal norms all have an effect on their rating. That rating in turn affects their ability to take out loans, to live in certain neighborhoods, or to work for certain companies.
You don’t need a system like that to prove the point. There has always been more privacy afforded to those with privilege than those without, if that’s what they desire. Historically, the powerful could afford houses with multiple rooms and larger plots of land. The same is true today, as Mark Zuckerberg showed when he purchased four houses around his own to improve his personal privacy.
There are always limitations to that kind of privacy, though, because it’s grounded in the real, physical world. In digital spaces there is arguably no limit to the amount of space the privileged few can put between their data and that of less wealthy or connected internet users.
That’s the greatest concern of Gennie Gebhart, a researcher for the Electronic Frontier Foundation. In her chat with Digital Trends, she suggested that certain technologies, like facial recognition, have the potential to widen the gap between the haves and the have-nots like never before.
“The social justice implications of this – people of color are so disproportionately impacted by the collection and use of this information – that’s a real dystopia,” she said.
It’s that interconnected, privacy-less world Google imagines — flipped on its head.
“It’s a technology that’s advancing rapidly and in particular when it comes to law enforcement,” she said. “Different kinds of regulations have not been able to keep up […] It’s something that affects more people than they’re aware of.”
That’s something we’re already seeing play out in some parts of the country, with facial recognition and analytics even being used to predict crimes before they happen, raising questions about the role law enforcement plays in society.
Were such a system to become commonplace, some believe it could bring a fundamental change in what it means to be human. That might sound overstated, but data collection always comes at a price — and in this case, the price is the privacy of the users. That’s not a far-off dystopia. It’s happening today.
Trading in privacy for a profit
The difficulty with privacy, and the laws that protect it for individuals, is that privacy means something different to different people, and some are more comfortable with less of it than others. Indeed, the very concept of privacy is a modern one, with many historical examples suggesting it is less of a social norm than its proponents may claim.
“The notion of privacy that we are most familiar with comes straight from Aristotle in a lot of ways,” Gennie Gebhart told Digital Trends. “Privacy can be part of our law and in the U.S. in that tradition, it’s the right to be left alone. The right to a private space for self expression, exploration and growth. The right to control information about oneself – who else can have access to it and when.”
But it was only in the middle of the 20th century that the concept of privacy was fully embedded in modern society and protected by law. Roman societies bathed and went to the bathroom in public, and the concept of having a bed and “bed chamber” exclusively for individuals, even among the wealthy, was alien until the 17th century. Everyone else simply slept on one large mattress with their whole family — often with animals in the same room.
But many people today willingly give up their right to privacy for the sake of keeping friends and family updated on their lives. Others turn it into a business. Everyone from mommy vloggers and Twitch streamers to Instagram celebrities makes a living from their existence in virtual space by sharing their data with others. To some this is a crude example of a cultural shift toward the death of privacy, whereas others see it as a way to profit from something companies have been doing for decades.
British satirist Oli Frost is best known for creating the fake social media enhancement company LifeFaker. He famously attempted to sell his Facebook data on eBay. The attempt was initially unsuccessful, but he still considers his personal and private life too unimportant to warrant protective privacy measures.
“I’m not doing much that’s interesting most days anyway,” he said. “Mostly I come home from work too exhausted to [deal] with the existential issues with my life, and so decide to watch Netflix instead.”
For the EFF’s Gebhart, though, this apathetic response to the concept of privacy isn’t born of a lack of care about it, but a feeling of helplessness in a world that seems designed to cater to those who discard it.
“I absolutely don’t blame consumers if they fall into the attitude of ‘I might as well share it,’ this security nihilism,” she said. “It’s easy to get dispirited or frustrated like that. Particularly when the biggest companies in the world spend huge amounts of money and employ the most brilliant minds to make you click on the buttons, make you continue sharing. The odds you’re up against as a consumer are really hard. I think that that attitude is really common.”
Giving the power of privacy back to the people
Almost a decade on from Mark Zuckerberg’s inflammatory comments on privacy, Facebook’s public-facing stance is quite different. When asked for comment, the social network sent Digital Trends a quote from its deputy chief privacy officer, Rob Sherman.
“When it comes to privacy, there are a few things we know to be true. First, everyone has a basic right to privacy,” he said during a recent talk. “Second, because privacy means different things to different people at different times, the only way to guarantee it for everyone, all the time, is by putting people in control.”
He went on to reject the notion that people of the future will need to choose between privacy and functional services.
For privacy proponents like Gebhart and Houwing, this is all very encouraging, because as they see it now, the future is not as rosy as it could be.
Legislative changes like the GDPR and major privacy scandals like the Cambridge Analytica data theft have shown that there is still a real appetite for privacy in the modern day. Flipping their concerns for the future on their head, we asked our sources to describe their idea of a privacy utopia, and they all suggested the same thing: It should be driven by choice.
“The right to informed decision-making and consent, not only in a meaningful way, but on an ongoing basis, would be a must,” Gebhart explained. She went on to suggest that companies would need to be frank and open with people about the information they collected and stored on them, giving users complete control over how it was used, how long it was stored, and when it was ultimately deleted.
For that to be possible, though, she noted that more competition for top-tier services was needed. Right now, she said, Facebook has no viable competition — no other service has the number of users it has. That’s something Lotte Houwing was keen to see happen too, highlighting that in the future, we’ll need to see a lot more alternatives to the existing status quo.
“It might be a mixture between some cool privacy nerds taking privacy by design and privacy by default to the next level and developing a lot of alternative apps for the things people like to use on an open-source basis,” she said. “Reclaim technology, thereby enabling themselves to set the standards and the requirements for what technology will be used.”
Wherever you stand on the spectrum of the privacy debate, it seems hard to argue that we aren’t going through a transitory phase as a burgeoning digital society. The early days of the internet and its services provided anonymity in a fashion that hadn’t been possible before, but the veil is gradually being lifted. It’s becoming a more personal space, but not one that the people in it have much control over.
If we can instead build digital services and products that let the people who use them decide what happens to their data and what the limits of its use are, then everyone wins. If we don’t, we risk stifling progress in all sorts of exciting fields, or giving ourselves over to a world where the technology that was designed to set us free imprisons us in a digital panopticon of our own making.