The Common Good in the Digital Age

Concluding remarks

During the three days of the seminar, we had the pleasure of witnessing presentations and reflections of a very high level from the different panelists, with open dialogue and interaction among participants on the impact of new technologies (AI, robotics, big data, etc.) on the Common Good. Using Dr. Norman Webb’s Depth of Knowledge (DoK) scale to gauge the development of knowledge during the seminar, we could say that much of the content shared fell between DoK levels 1 and 3, but that we also reached DoK level 4 by means of extended thinking about the future.

 

Among the main points that emerged as polarities was the question of how the classical definition of the ‘Common Good’ – as a value-based, heuristic and virtue-oriented moral system – could fit into a new world dictated by technology. Another was the “fragmented reality of the digital age”, whereby we face general challenges on the one hand yet seek precise solutions on the other; or whereby we deal with platforms that aspire to be universal yet prove fixed and immutable once they are set. In this regard, the main outcomes can be summarized as follows:

 

  1. The need to unpack the concept of ‘Common Good’ – derived from an anthropology that sees in the social nature of human beings their distinctiveness – and to integrate the new challenges brought forth by the emerging digital world: a) the necessity of exploring the new relationships that the concept of ‘Common Good’ entails (such as the relationship between time and space); b) the issue of agency and responsibility; and c) the establishment of new paradigms on which to build useful moral models.

  2. The need to foster moral growth that can keep pace with technological development: a) the ability to break down the processes of moral decision-making; b) the question of biases, in particular the handling of false positives and false negatives (see the sketch after this list); and c) the definition of the respective roles of regulatory norms and individual behaviors in shaping a moral world.

  3. The need to address the pervasiveness of artificial intelligence in the areas of defense, security and weapons: a) the need for a precise analysis of the role of artificial intelligence with regard to new weapons (robots, super-soldiers, and autonomous/semi-autonomous weapons); b) the definition of new parameters of military equilibria no longer based on simple deterrence or on the so-called security dilemma; and c) the circumscription of the limits of artificial intelligence in theaters of conflict (the issue of escalation and, especially, of de-escalation).

  4. The need to address the pervasiveness of artificial intelligence in the sphere of labor and the future of work: a) the establishment of patterns of negotiation (such as collective bargaining) to prevent the undesirable effects of automation; b) a critical exploration of the “technocratic paradigm” from the perspective of moral values, education and the purpose of work; and c) the fostering of decisional flows through subsidiarity between social partners and different levels of government.

  5. The need to address future ethical dilemmas: a) the building of bridges between academic, corporate and NGO structures; b) the identification of the root causes of new social impasses; and c) the tackling of the consequences of the moral decentering produced by a “dispersed humanity”, drawing on “conventional wisdom”.
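
To make point 2b concrete, here is a minimal, hypothetical sketch in Python (not drawn from the seminar; the data and names are invented for illustration) of how the choice of a decision threshold is itself a moral choice: lowering it multiplies false positives (unwarranted interventions), while raising it multiplies false negatives (genuine needs that go unmet).

    # Toy cases: (risk score from some model, ground truth: does the case
    # really need intervention?). The numbers are invented for illustration.
    cases = [(0.95, True), (0.70, False), (0.60, True), (0.40, False), (0.15, False)]

    def needs_intervention(score: float, threshold: float) -> bool:
        """Flag a case for intervention when its risk score exceeds the threshold."""
        return score > threshold

    for threshold in (0.25, 0.50, 0.75):
        fp = sum(1 for s, truth in cases if needs_intervention(s, threshold) and not truth)
        fn = sum(1 for s, truth in cases if not needs_intervention(s, threshold) and truth)
        print(f"threshold={threshold:.2f}  false positives={fp}  false negatives={fn}")

No threshold eliminates both kinds of error at once; deciding which error weighs more heavily is a value judgment that precedes any line of code.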

 

An example of what extended thinking might look like can be found in a recent app called ChatterBaby (an effort to help hearing-impaired parents who were worried they could not hear and interpret their baby’s crying), where we move from the concept of artificial intelligence (in the singular) to artificial intelligences (in the plural), with the following distinctions (a sketch follows the list):

 

  • simple artificial intelligence (AI) – identifying the cry of a baby and associating it with the actual need (hunger, sleepiness, sickness, the need to be changed, etc.);

  • machine learning (ML) – by allowing the app to access big data, determining that a certain type of milk or diapers is not good for your child; or

  • deep learning (DL) – automatically ordering the right type of milk or diapers by adding it to your shopping list when you are running out.
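
As an illustration of the “simple AI” step, here is a minimal Python sketch; the acoustic features, numbers and categories are invented for this example and do not describe how ChatterBaby actually works. It maps a cry’s features to the closest stored profile (a nearest-centroid classifier).

    import math

    # Hypothetical acoustic profile for each need: (pitch in Hz, cry bursts
    # per minute). The values are invented and not taken from ChatterBaby.
    PROFILES = {
        "hungry": (450.0, 30.0),
        "sleepy": (350.0, 15.0),
        "sick": (550.0, 45.0),
        "needs changing": (400.0, 20.0),
    }

    def classify_cry(pitch: float, bursts: float) -> str:
        """Return the need whose profile is closest to the observed cry."""
        return min(PROFILES, key=lambda need: math.dist((pitch, bursts), PROFILES[need]))

    print(classify_cry(pitch=460.0, bursts=28.0))  # prints "hungry"

The ML and DL steps described above would, on this picture, replace the hand-written profiles with patterns learned from large datasets of recorded cries and purchasing behavior.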


However, with artificial general intelligence (AGI), also called Strong AI or Human-Level AI, the app might start to make “moral” decisions: a) what if the baby cries from what may be burn wounds, and the app adds cigarettes to your shopping list? or b) what if the AGI were to learn from the parents’ criminal records that, twenty years earlier, they had been arrested for illegal possession of drugs? The AGI might then, on its own and without any human input, decide to call social services or the police, who in turn might show up at their doorstep.

 

We are not far from this latter scenario becoming reality, i.e. from having a machine decide what is good for human beings. Can we program the common good? Can a machine teach itself to promote the common good? Indeed, technology will turn everything into precise commands, and the common good – for better or for worse – might come to be expressed in exact actions carried out by machines rather than by humans. The trolley problem might become an everyday exercise in which machines decide in our stead and without asking us. Is this the world we want? This simple example can help us better discern how the digital age and the common good might interface in a positive way.