An unprotected infrastructure is often compared to an unlocked home with its windows wide open and the owners away on holiday. For hackers, breaking into such a system is fairly easy, if not entirely effortless. Last year, companies around the world were taught a bitter lesson about the need for continuous data protection – especially those that saw their revenues and reputations shrink dramatically.
Data protection has traditionally been the domain of cyber security, and businesses are now being advised on how to implement protective measures, such as Disaster Recovery as a Service, to guard against malicious attacks. However, research by the New York Times and the Observer reveals that a cyber security breach may not be the only threat to the integrity of personal data – reckless use of social media can be a threat too.
The largest data leak in social network history
How and for whom people choose to vote has changed throughout history. Knowing how to reach the right voters with a message that will resonate seems to be a tactic that campaign managers and policymakers must now adopt if they want their voice to be heard.
Cambridge Analytica, a British data analytics company, claims to be doing just this. The company gives a voice to the people who pay for its services. But that voice is more emotional than factual, as the firm’s Managing Director Mark Turnbull explained to an undercover Channel 4 reporter posing as a fixer: “It is not good to fight an election campaign on the facts because actually, it is all about emotion.”
Things got out of hand when it was revealed that, to serve clients such as the Republican Party of the United States, Cambridge Analytica had been using tactics that major media outlets have described as unethical, callous and even potentially illegal.
Christopher Wylie, the Cambridge Analytica whistleblower, explains that the organisation harvested personal data from around 50 million Facebook profiles. The data was later fed into the algorithms behind the firm’s proprietary psychometric modelling techniques, which enable it to provide clients with detailed analyses of “units of culture” – in other words, in-depth profiling of internet users.
The company is believed to have obtained the data through a psychometric app called thisisyourdigitallife, in which users were asked to take personality tests and have their data collected for academic purposes, in return for a small payment. The app was designed by Aleksandr Kogan, an academic researcher and founder of Global Science Research who also works at the University of Cambridge.
Kogan’s extensive academic work on inferring behavioral patterns from Facebook profiles was apparently supported with a grant from the Russian government.
Allegedly, the work Kogan did for Cambridge Analytica played a significant role in electing Donald Trump as president of the United States – it helped target the hopes and fears of Americans living in the so-called swing states.
Although Facebook denies that the incident qualifies as a data breach, on the basis that Kogan accessed the information in “a legitimate way, and through the proper channels”, it is hard not to join the crowds of disappointed Facebook users in asking: “How secure IS our data on Facebook?”
Data leakiness on social media
Runaway data is not a new concept, especially when it comes to privacy on social media. Seeing our own behaviours mirrored back in the form of targeted advertising may feel like a violation of personal space, even if the methods of obtaining that data are legitimate. Algorithms, whether created by Facebook or by third parties, seem to know not only what we like but also what we are like.
Whether we are simply victims of our own misconduct and naivety is yet to be decided. Nevertheless, it is worth mentioning that the people who took Kogan’s test were not the first to be lured by the incentive of discovering their personality type.
David Stillwell, Deputy Director of the Psychometrics Centre at the University of Cambridge, was the first Cambridge academic to create such a psychometric tool – the myPersonality app, which he posted on Facebook shortly after finishing his undergraduate degree in 2007.
The app predicted people’s personalities based on five key traits: openness, neuroticism, extraversion, agreeableness, and conscientiousness. In total, it attracted 6 million users, creating one of the richest databases of social information Stillwell has ever seen.
Personality tests make up just a small fraction of the apps available on social media. We rely on some of them more than others, but in general, it is hard to imagine a return to the pre-app world, no matter how much they violate our privacy.
Most of us are aware that keeping Facebook away from our personal data means deleting other apps owned by the company, such as Instagram and WhatsApp. For most, this would be unthinkable, and so we remain stuck in what scholars call the “privacy paradox”: we continue using technologies and applications that gather our personal information – even if that is not what we want.
Yet it is worth remembering that although mass data collection might seem designed into our everyday lives, much of it is still the result of businesses attempting to deliver customised services and advertising more effectively. Not everything is done with malice. And not all private data has to fall into the hands of people who will use it for duplicitous purposes.
And to get the full picture of how Facebook is using our data, we will have to wait for a response from Mark Zuckerberg, who has already been sent a formal request to comment on the Cambridge Analytica scandal. Hopefully, it will arrive before the next decisive political moment.