The new data protection law and trust in social media: Facebook's ugly face (4/4)
To conclude this four-part analysis of the Facebook and Cambridge Analytica crisis, I would like to focus on how organizations in the digital age manage the trust that their users and clients place in them, and on the role of ethics in big data. This article is particularly timely, as the new European data protection regulation comes into force this week.
This case shows a clear breakdown of trust between users and Facebook, and it makes clear how central the question of consent is to understanding the crisis. Not that any of this is new: Facebook's founder is starting to be known as a "serial apologizer". Nor is it new for Facebook to stand out negatively in studies on trust, such as the one that ranks it last among the giants of the hi-tech market, or this one, which shows how its reputation suffered after the crisis.
Facebook's mantra is "move fast and break things", and a lot of things have probably been broken along the way, but a break in trust is much harder to mend, which makes this a very interesting example of how trust is built in digital environments. Trust is a mental state, an attitude toward an agent (in this case, a social network platform), associated with the behavior or action expected from that agent in the future. The evaluation of the trust attributed to this agent is called reputation.
Can we trust Facebook?
Trust is a situational construct and depends on the perception of the nature of the other person's or organization's intentions and motives. The case of Facebook is symptomatic: apparently the network did not care much about the use of its platform for research purposes, despite knowing that this research would not remain confined to the theoretical world. The experiment's ultimate goal was to serve as a testing ground for a new type of advertising that uses the social network as a predictor of social behavior (for commercial or electoral purposes). This could have been very beneficial to Facebook. Even so, Mark Zuckerberg said that "this was a breach of trust between Kogan, Cambridge Analytica and Facebook, but it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that."
There is nothing new in the use of personal information for commercial purposes, as we discussed in a previous post; it is the bread and butter of data-driven marketing. The problem in this case was how Cambridge Analytica appropriated the data through an app offered on Facebook, presented explicitly as a personality test. In reality, the app had the covert intention of harvesting people's data, as well as their contacts' data. Perhaps worse still, Cambridge Analytica's final objective was to use this data set to create disinformation campaigns, according to a former employee of the company.
In summary, and to conclude this series of articles, at the core of this crisis is the ethics of organizations in managing people's data. If, as the saying goes, technology is agnostic, people and companies, in turn, generally have their preferences. And interests. So the ethical use of data must follow some kind of criteria. In this interesting paper, an IBM engineer offers some guidelines on how this should happen, and in this one from MIT we can anticipate concerns about the dark side of big data.