Can we trust Facebook

The new data protection law and trust in social media: Facebook's ugly face (4/4)

Posted in Big Data, Crisis, Ethics, Privacy, Trust

To conclude this four-part analysis of the Facebook and Cambridge Analytica crisis, I would like to focus on how organisations in the digital age manage the trust their users and clients place in them, and on the role of ethics in big data. This article is particularly timely, as the new European data protection regulation comes into force this week.

This particular case shows a clear breakdown of trust between users and Facebook, and it also shows how the question of consent is central to understanding the crisis. None of this is entirely new: Facebook's founder is becoming known as a "serial apologizer", and Facebook has repeatedly stood out negatively in studies on trust, such as one in which it ranks last among the giants of the hi-tech market, and another showing how its reputation suffered after the crisis.

Facebook's mantra is "move fast and break things", and quite a few things have probably been broken along the way. But broken trust is far harder to mend, and this case is an instructive example of how trust is built, and lost, in digital environments. Trust is a mental state and an attitude towards an agent (in this case, a social network platform), associated with the behaviour we expect from that agent in the future. The collective evaluation of the trust attributed to an agent is what we call reputation.

Can we trust Facebook?

Trust is a situational construct and depends on how we perceive the intentions and motives of the other person or organisation. The case of Facebook is symptomatic: apparently the network did not care much about the use of its platform for research purposes, despite knowing that this research would not remain purely theoretical. The experiment's ultimate goal was to serve as a testing ground for a new type of advertising, one that uses the social network as a predictor of social behaviour (for commercial or electoral purposes). This could have been very beneficial to Facebook, and Mark Zuckerberg said that "this was a breach of trust between Kogan, Cambridge Analytica and Facebook, but it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that."

There is nothing new in the use of personal information for commercial purposes, as we discussed in a previous post; that is the bread and butter of data-driven marketing. The problem in this case was how Cambridge Analytica appropriated the data through an app offered on Facebook under the explicit guise of a personality test. In reality, the app had the covert intention of harvesting people's data as well as their contacts' data. Worse still, perhaps, was the fact that Cambridge Analytica's final objective was to use this data set to create disinformation campaigns, according to comments from a former employee of the company.

In summary, and to conclude this series of articles, at the core of this crisis is the ethics of organisations in managing people's data. If, as the saying goes, technology is agnostic, people and companies generally have their preferences. And interests. So the ethical use of data must follow some sort of criteria. In this interesting paper, an IBM engineer offers some guidelines on how this should happen, and in another from MIT it is possible to anticipate concerns about the dark side of big data.


The Myth of Transparency: Facebook's ugly face (2/4)

Posted in Big Data, Crisis, Crisis management, Data mining, Privacy, Social media, Trust

A crisis about transparency, or the lack of it: that is how we could summarise Facebook's reputation nightmare. Or, as Time magazine puts it brilliantly: "All this has prompted sharp criticism of the company, which meticulously tracks its users but failed to keep track of where information about the lives and thinking of those people went." In this apparent paradox lies the first point I would like to highlight in this 4-part analysis: the Myth of Transparency.

If you read books such as Jeff Jarvis' Public Parts (2011), you know how successfully social media created a hype around the virtues of living life in the public sphere, in a continuous self-imposed Big Brother. Although back then Jarvis agreed with some protection of people's privacy, such as the measures proposed by then European commissioner Viviane Reding, he was defending a libertarian, perhaps utopian, view of transparency that disregarded a basic impulse behind the "publification" of our lives by tech companies: data has economic value, and social media thrives on marketing data.

What this crisis brought to the surface, and to the attention of regulators, was the culmination of a series of privacy issues and breaches involving the Internet and, most famously, Facebook. It is perhaps the beginning of the "end of innocence": the realisation by users that transparency is only good when it works in both directions, on the part of the producers of the data (i.e. us) and on the part of the marketers of the data (social media companies). The market has matured, and people are starting to realise that there has never been a truly "free service" from Google or Facebook. As Viviane Reding puts it: we pay for the service with our data.

To be fair, these companies have never claimed that they didn't use people's data for different purposes, including making tons of money. What people are noticing now, however, is how obscure and careless firms have been in managing this data, and how vulnerable they are when their minds can be read by data mining companies such as Cambridge Analytica, as in the controversial and, at the same time, brilliant experiment conducted by Kosinski et al. (2015).

People are also realising how social media creates a subtle form of surveillance by letting unknown organisations access their view of the world, their relationships and their tastes. As serious decisions are affected, such as votes in general elections or referendums, public opinion is starting to grasp the risks of manipulation in this cycle of data transparency, data mining and campaign management.



Facebook's ugly face (1/4)

Posted in Online Reputation, Privacy, Strategy, Trust

In this high-profile reputation crisis involving Facebook and Cambridge Analytica, I pose a question: is it possible for a multinational organisation to be apolitical? This is one of the main ethical challenges for any multinational, but it matters even more when a company is not simply in the business of selling products and services such as gasoline, shoes or perfumes. In the case of Facebook, which ambitiously defined itself as a company that wants to "make the world more open and connected", it is clearly complicated. The company's mission is on a collision course with the right to privacy and with the power of those who use our data in Facebook advertising as a weapon of influence.

In this sense, the #DeleteFacebook initiative, although it will hardly affect the company, is an interesting indicator of a possible shift in the social mood towards the nice blue company. The initiative expresses both rejection and a growing awareness that social networks, and Facebook in particular as their main actor, are not as innocent as their smiley faces or thumbs-up icons. Or the posts of dogs and cats.

In this crisis of Facebook and Cambridge Analytica, what we clearly see is a questioning of the company's ethics in the way it:

  1. Manages the data it has obtained from people: the myth of transparency
  2. Understands the private realm as something commercially profitable: the “monetization of our online footprint”
  3. Ultimately, manages stakeholders’ trust

It is an extremely complex case with implications for any company operating in the digital economy. Such companies need some parameters for making strategic decisions wherever these three aspects touch their businesses. In the next three posts I will try to address these points. To be continued…