The waves of the Digital Age and the beginning of Cyberlife

The Digital Age (also known as the Information Age or Digital Revolution) began after the Industrial Revolution. Its first stage, or wave, was characterised by the introduction of computers into our lives; cybernetics was the word of the day back then. This wave was divided into two phases: first, the introduction of mainframes in large organisations, and later, thanks to Steve Jobs and Bill Gates, the widespread adoption of personal computers in companies and homes.

The second wave, which began around 1995, was characterised by the connection of these computers through the Internet and the emergence of cyberspace. Here, too, we can identify two phases: Web 1.0, with the widespread popularity of electronic commerce giving rise to companies like Amazon and creating giants like Dell (a company with which I had the pleasure of working during exactly this phase), and Web 2.0, with social networks (and the emergence of new hi-tech giants). This second phase started about ten years ago.

Now we perceive the beginning of a third wave, an extremely powerful phase which I call cyberlife.

It is the moment when the power of computers combines with the power of social networks, bringing the emergence of cloud computing, the widespread adoption of Artificial Intelligence, social analytics, machine learning, and blockchain, among other innovations. This makes possible significant developments in different areas of the economy, from genetic engineering to smart cities, from the sharing economy to autonomous vehicles. And, unfortunately, we also start to witness the growing risks of cyberwar.

As with all rapidly evolving technology, the technical challenges of cyberlife will be resolved before the social and ethical ones. We are living in times of accelerated change, and the decisions that will shape the future are being played out now. Will we know how to play this game well?

The new data protection law and trust in social media: Facebook’s ugly face (4/4)

Concluding this four-part analysis of the Facebook and Cambridge Analytica crisis, I would now like to focus on how organisations in the digital age are managing the trust that their users and clients place in them, and on the role of ethics in big data. This article is particularly relevant because this week the new European data protection regulation (the GDPR) comes into force.

This particular case shows a clear breakdown of trust between users and Facebook. In addition, we observe how the question of consent is central to understanding this crisis. This is nothing new: the founder of Facebook is becoming known as a «serial apologizer». Nor is it new that Facebook stands out negatively in various studies on trust, such as the one that ranks it last among the giants of the hi-tech market, and another showing how its reputation was affected after the crisis.

Facebook’s mantra is «move fast and break things», and a lot of things have possibly been broken along the way, but a breach of trust is more difficult to mend, and this seems to be a very interesting example of the challenge of building trust in digital environments. Trust is a mental state, an attitude towards an agent (in this case, a social network platform), associated with the behaviour or action we expect from that agent in the future. The evaluation of the trust attributed to this agent is called reputation.

Can we trust Facebook?

Trust is a situational construct and depends on the perception of the intentions and motives of the other person or organisation. The case of Facebook is symptomatic because the network apparently did not much care about the use of its platform for research purposes, despite knowing that this research would not remain purely theoretical. The experiment’s ultimate goal was to serve as a testing ground for a new type of advertising based on using the social network as a predictor of social behaviour (for commercial or electoral purposes). This could have been very beneficial to Facebook. Mark Zuckerberg said that «this was a breach of trust between Kogan, Cambridge Analytica and Facebook, but it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that.»

There is nothing new in the use of personal information for commercial purposes, as we discussed in a previous post; that is the bread and butter of data-driven marketing. The problem in this case was how Cambridge Analytica appropriated the data through an app offered on Facebook that presented itself explicitly as a personality test. In reality, the app had the hidden intention of harvesting people’s data, as well as their contacts’ data. Worse than that, perhaps, was the fact that CA’s final objective was to use this data set to create disinformation campaigns, according to comments from a former employee of the company.

In summary, and to conclude this series of articles, at the core of this crisis is the ethics of organisations in managing people’s data. If, as the saying goes, technology is agnostic, people and companies, in turn, generally have their preferences. And their interests. So the ethical use of data must follow some sort of criteria. In an interesting paper, an IBM engineer gives us some guidelines on how this should happen, and in another from MIT it is possible to anticipate the concerns with the dark side of big data.