The new data protection law and trust in social media: Facebook’s ugly face (4/4)

To conclude this four-part analysis of the Facebook and Cambridge Analytica crisis, I would now like to focus on how organisations in the digital age manage the trust that their users and clients place in them, and on the role of ethics in big data. This article is particularly timely, as the new European data protection regulation comes into force this week.

This particular case shows a clear breakdown of trust between users and Facebook. In addition, we can observe how the question of consent is central to understanding this crisis. Not that this is anything new, since the founder of Facebook is starting to be known as a «serial apologizer». Nor is it news that Facebook stands out negatively in different studies on trust, such as the one that ranks it last among the giants of the hi-tech market, and another showing how its reputation was affected after the crisis.

Facebook’s mantra is «move fast and break things», and a lot of things have probably been broken along the way, but broken trust is far harder to mend, and this seems to be a very interesting example of how trust is built (and lost) in digital environments. Trust is a mental state and an attitude towards an agent (in this case, a social network platform), associated with the behavior or action expected from this agent in the future. The evaluation made of the trust attributed to this agent is called reputation.

Can we trust in Facebook?

Trust is a situational construct and depends on the perception of the nature of the intentions and motives of the other person or organization. The case of Facebook is symptomatic, because the network apparently did not mind its platform being used for research purposes, despite knowing that this research would not remain purely theoretical. The experiment had the ultimate goal of serving as a test field for a new type of advertising based on the use of the social network as a predictor of social behavior (for commercial or electoral purposes). This could be very beneficial to Facebook, and Mark Zuckerberg said that «this was a breach of trust between Kogan, Cambridge Analytica and Facebook, but it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that.»

There is nothing new in the use of personal information for commercial purposes, as we discussed in a previous post; that is the bread and butter of data-driven marketing. The problem in this case was how Cambridge Analytica appropriated the data through an app offered on Facebook, presented explicitly as a personality test. In reality, the app had the covert intention of harvesting users’ data, as well as their contacts’ data. Worse than that, perhaps, was the fact that the final objective of CA was to use this data set to create disinformation campaigns, according to comments from a former employee of the company.

In summary, and to conclude this series of articles, at the core of this crisis is the ethics of organizations in managing people’s data. If, as the saying goes, technology is agnostic, people and companies, in turn, generally have their preferences. And interests. So the ethical use of data must follow some clear criteria. In this interesting paper, an IBM engineer gives us some guidelines on how this should happen, and in this one, from MIT, we can anticipate the concerns with the dark side of big data.

Data Fest in London: Sysomos Summit

Last Tuesday, I went to the Sysomos Summit in London, a celebration of technology, data science, social media and business. The event was also where Meltwater, a leading company in media monitoring and business intelligence software, announced the acquisition of Sysomos. Jorn Lyseggen, Meltwater’s founder and CEO, presented the ideas behind this acquisition, which will help crystallise the company’s new vision: Outside Insight.

I was pleasantly surprised to see the impact of the new digital reality on decision making mentioned in one of Lyseggen’s slides, as this is exactly the key theme of this blog. In fact, Lyseggen wrote a book on the subject, which I’m halfway through reading, and I can already say that it is 100% worth reading.

To start with, it’s not a book only about data, something you might expect from a company that helps make sense of the abundance of data we have nowadays. It’s about change, decision-making and strategy. It’s very action-oriented as well, and its key thesis is that organisations not only need an internal view of their data (typically managed by ERP software) but, fundamentally, should keep permanent sight (and all other senses alert) on external factors, such as the “online breadcrumbs” (I also like to call this the “digital footprint”) available from varied sources.

The payoff for this attitude is clear: companies that act (instead of react) in real time make smarter decisions and are able to predict, rather than merely explain, developments in an ever-changing market.

Also very interesting was Lyseggen’s mention of Michael Porter’s Five Forces model. Perhaps as a rebuttal to detractors who consider the model outdated, he showed that it remains valid as an essential part of strategy analysis and formulation. With the new paradigm proposed by the book, the model is tremendously enriched, and its apparently static nature changes completely into a vibrant, real-time competitive arena.

The whole-day event included several other speakers from different areas, and perhaps the most important point shared across the presentations was how external data can be a source of competitive advantage (Porter, again). Companies that integrate data in the way Jorn Lyseggen describes in his book, “fighting preconceived beliefs and breaking through their internal bias”, will certainly avoid the kind of tunnel vision or marketing myopia (another classic author, Levitt) that affected big brands such as Kodak or Blackberry, and will survive these hypercompetitive times with fewer difficulties.

The Myth of Transparency: Facebook’s ugly face (2/4)

A crisis of transparency (or the lack of it): that is how we could summarise Facebook’s reputation nightmare. Or, as the Times magazine puts it brilliantly: “All this has prompted sharp criticism of the company, which meticulously tracks its users but failed to keep track of where information about the lives and thinking of those people went.” In this apparent paradox lies the first point I would like to highlight in this 4-part analysis: the Myth of Transparency.

If you read books such as Jeff Jarvis’ Public Parts (2011), you know how social media has successfully created a hype around the virtues of living life in the public sphere, in a continuous Self Big Brother. Although back then Jarvis agreed with some protection of people’s privacy, such as that proposed by then-European Commissioner Viviane Reding, he was defending a libertarian, perhaps utopian, view of transparency that disregarded a basic impulse behind the “publification” of our lives by tech companies: data has economic value, and social media thrives on marketing data.

What this crisis has brought to the surface, and to the attention of regulators, is the culmination of a series of privacy issues and breaches involving the Internet and, most famously, Facebook. It is perhaps the beginning of the “end of the innocence” and the realisation by users that transparency is only good when it works both ways: on the part of the producer of the data (i.e. us) and of the marketer of the data (social media companies). The market has become more mature, and people are starting to realise that there has never been a truly “free service” from Google or Facebook. As Viviane Reding puts it: we pay for the service with our data.

To be fair, these companies have never said that they didn’t use people’s data for different purposes, including making tons of money. However, what people are noticing now is how obscure and careless firms have been in the management of this data – and how vulnerable they are when their minds can be read by data mining companies such as Cambridge Analytica, following the controversial and, at the same time, brilliant experiment conducted by Kosinski et al. (2015).

People are also realising how social media creates a subtle form of surveillance, by letting unknown organisations access their view of the world, their relationships and their tastes. With serious decisions at stake, such as votes in a general election or referendums, public opinion is starting to realise the risks of manipulation in this cycle of data transparency – data mining – campaign management.

Why are mistakes made in decisions?

A recent study by researchers Ashton Anderson of Microsoft Research in New York, Jon Kleinberg of Cornell University in Ithaca, and Sendhil Mullainathan of Harvard University has shed light on the issue of decision-making and its errors. Using chess as a laboratory, with a dataset of 200 million chess games played by amateurs and another of nearly one million games between grandmasters, the authors started from the premise that the error in a decision is related to three factors: the skill of the player, the time available for the play/decision, and the inherent difficulty of the decision.

After an exhaustive analysis of the data, the researchers came to the conclusion that the most reliable predictor is the inherent degree of difficulty of the decision. The study invites us to discuss how errors occur in other fields. For example, is it the doctor’s experience or the difficulty of the case that leads to an error in diagnosis? Or are distracted drivers more prone to accidents than experienced drivers facing roads in dangerous conditions?
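
As a toy illustration of the study’s framing (this is not the authors’ actual model; the logistic form and all the weights below are my own assumptions, chosen so that difficulty carries the largest weight, in line with the study’s conclusion), one could score the probability of a blunder from the three factors:

```python
import math

def blunder_probability(skill, time_pressure, difficulty):
    """Toy logistic score for P(blunder), all inputs normalised to [0, 1].

    The weights are illustrative assumptions, not fitted values:
    difficulty gets the largest magnitude, echoing the study's finding
    that the inherent difficulty of the decision predicts error best.
    """
    z = (-0.5                    # baseline log-odds
         - 1.0 * skill           # more skill -> fewer blunders
         + 0.5 * time_pressure   # more time pressure -> more blunders
         + 2.5 * difficulty)     # difficulty dominates the score
    return 1.0 / (1.0 + math.exp(-z))

# A strong, unhurried player facing a very hard position...
hard = blunder_probability(skill=0.9, time_pressure=0.1, difficulty=0.9)
# ...versus a weaker, rushed player facing an easy one.
easy = blunder_probability(skill=0.3, time_pressure=0.8, difficulty=0.1)
print(f"hard position: {hard:.2f}, easy position: {easy:.2f}")
```

In this sketch, moving difficulty from its minimum to its maximum shifts the estimated blunder probability more than an equivalent change in skill, mirroring the finding that the hardness of the position, not the player, is the strongest predictor of error.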

We could also ask: if the current environment is more difficult for those who make marketing and communication decisions (and for all leaders, in general), would it be necessary to create new strategic models to make the battlefield of digital competition less complex and, therefore, avoid mistakes? And what is the role of executive training and capability programmes in this sense?

However experienced you are, and however much time you can take to make your decisions (and nowadays there is not much time…), new technologies create a far more difficult and uncertain scenario than usual. Consequently, I believe that the number of errors in times of uncertainty must be high – to the point of taking a business to bankruptcy. This may explain the fatalistic approaches of Silicon Valley, such as the «fail fast, fail often» mantra, which for some is nothing more than irresponsible hype. Or, simply, learning by trial and error. A lot of errors.

There is nothing wrong with accepting risks, and every entrepreneur knows that. But that does not mean seeking failure as a learning strategy. Maybe there are more effective and less painful methods. The use of data mining and predictive models in marketing could surely reduce the complexity of decision making, and is perhaps a way to avoid easily identifiable failures. Much better, perhaps, would be to «be successful quickly, and stay in front», as this article suggests.