Challenge 1: Ethics
The issue of Artificial Intelligence, as with the appearance and affirmation of every new technology, revives the contrast between "doom-mongers" and "enthusiasts" [1].
The doom-mongers fear that Artificial Intelligence will prevail over people: it will decide for them, steal their jobs, discriminate against them, violate their privacy, and secretly control them by conditioning their lives.
The enthusiasts, on the other hand, dream of a world where machines are capable of autonomously performing bureaucratic processes, of serving as powerful computational tools to process and interpret large amounts of data in the best way, of replacing humans in the most burdensome and repetitive tasks, and of creating solutions able to reduce crime and eradicate diseases.
In essence, these are two diametrically opposed perceptions of technology.
The doom-mongers assess the introduction of AI in Public Administration (PA) negatively, citing a series of critical issues that could have adverse effects not only on the efficiency and effectiveness of the measures but also on citizens' rights.
The enthusiasts, on the other hand, regard the use of AI as extremely positive: they believe that implementing these technologies can significantly improve not only the activity of the PA but also citizens' quality of life, and that a total and unconditional process of research and development is therefore necessary in this area [2].
These are two extreme points of view, each with its own peculiarities, which must be critically analysed in order to address the weaknesses indicated by the "doom-mongers" and temper the strengths claimed by the "enthusiasts". The examples mentioned above are not chosen by chance; they are the result of the debate that in recent years has been going on in the scientific community and in civil society regarding the impact of AI systems on our lives.
The ethical challenge of introducing Artificial Intelligence solutions lies in the need to respond in a balanced manner to the polarisation of these two visions: integrating innovation while taking into account the effects it has already had, and will continue to have, on the development of society, and respecting and safeguarding universally recognised core values.
The use of AI based on data-analysis algorithms in decision-making processes related to social, health and judicial issues (such as risk assessment) therefore requires thorough reflection in terms of ethics and, more broadly, of governance.
Data-analysis algorithms involve high costs spanning the entire life cycle of their operation: from implementation to evolutionary maintenance, to the verification of results, to the training of the users who must employ them responsibly. Speaking of greater efficiency or tax cuts thanks to the use of AI technologies in public services can be a misleading narrative, since the correct development of such tools implies high costs and great attention to the ethical aspects of their use.
Focusing on the functional development of this technology requires economic and professional resources suited to its ethical development and, above all, commensurate with the data it processes and the decisions it guides. Otherwise, the resulting analyses will merely finance the private sector under the illusion of helping people. Worse still, they may introduce a distortion, or a flight from responsibility, in which the cause of decision-making errors is attributed, case by case, to the algorithms rather than to the decision makers.
Capitalising on the benefits of technology requires significant investment by the PA and a strong commitment to improving the quality and efficiency of services, and to building systems that are secure and genuinely able to reduce inequalities.
To understand the scope of this challenge, it is worth analysing the elements at the centre of the public debate and of scientific analysis:
- data quality and neutrality: machine learning systems need data that is "annotated" [3] by human beings (supervised learning) or at least selected and prepared (unsupervised learning). This data may include errors or biases introduced, even inadvertently, by its designers, which are then replicated in all future applications. Biased datasets, for example, propagate the same errors in the evaluation of an image or a concept, as happened with certain algorithms used to prevent crimes, whose data was compromised by a historical series that over-emphasised ethnic differences [4]. Unbalanced datasets, likewise, overestimate or underestimate the weight of certain variables in reconstructing the cause-effect relationships needed to explain certain events and, above all, to predict them;
- responsibility (accountability and liability) [5]: the examples just mentioned highlight the strong impact that Artificial Intelligence has on the decision-making activity of public entities. Both when it acts as an assistant to human beings and when it acts as an autonomous entity, AI generates effects on people's lives for which it must be possible to establish legal liability. Nevertheless, such liability cannot be clearly assigned, since it could be attributed to the producer [6] or to the owner [7] of the Artificial Intelligence, or even to its end user [8]. Those who design AI systems can be held responsible for design or implementation defects, but not for behaviour caused by inadequate training datasets. Can a public decision-maker be considered politically responsible for decisions made on the basis of algorithms that process data affected by the biases mentioned above? What type of responsibility can there be for Public Administration? If a robot hurts someone, who should be held responsible, and who, if anyone, is obliged to compensate the victim (and with which assets)? Can the public decision-maker transfer his political responsibility to an AI system that does not respond to a clear principle of representation? Is it ethically sustainable that, in order to improve the efficiency and effectiveness of measures, certain important choices are made under the influence of an AI, or even delegated to it entirely? And having placed trust in an AI system, how can its consistency be monitored over time? These are just some of the issues that emerge in this area, and they highlight the need to establish principles for the use of AI technologies in a public context.
- transparency and openness [9]: the issue of the responsibility of public administration is also bound up with its duties towards citizens when it decides to provide them with services, or to make decisions that concern them, using Artificial Intelligence solutions. The functioning of such solutions must meet criteria of transparency and openness. Transparency is a fundamental prerequisite for avoiding discrimination and resolving the problem of information asymmetry, guaranteeing citizens the right to understand public decisions. It is also necessary to consider the policies chosen to determine the reference indices (benchmark policies), so as to avoid broader effects: just as an administrator can act in a non-transparent manner, pursuing private interests rather than the common good, a non-transparent algorithm could commit the same offences on an even wider scale, producing not only injustices but also social discrimination.
- protection of the private sphere [10]: a further need, closely linked to the previous one, is the protection of individuals' data. The PA must design AI-based services able to guarantee efficiency and prompt response, but also the protection of citizens' sensitive data. This requirement, strictly connected to the legal context, has some ethical peculiarities concerning the use the PA can make of data that has come to its knowledge in contexts different from those in which it was collected. Is it ethically sustainable for the PA, through the use of data collected for other purposes, to take action based on the newly derived information? Is it ethical to use this data to feed predictive systems?
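The point above about biased and unbalanced datasets can be made concrete with a deliberately naive sketch (the data and function names here are hypothetical, chosen only for illustration; no real system is this crude): a "model" that merely learns each group's historical flag rate will reproduce, and automate, whatever skew the historical series contains.

```python
# Toy illustration of dataset bias (hypothetical data): group "A" was
# flagged far more often in the historical records, regardless of actual
# behaviour. A model that optimises for fidelity to this history simply
# replicates the skew.
from collections import Counter

# Each record: (group, flagged_in_the_past).
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 20 + [("B", False)] * 80)

def train_per_group_rate(records):
    """Learn, per group, the historical flag rate -- nothing more."""
    flags, totals = Counter(), Counter()
    for group, flagged in records:
        totals[group] += 1
        flags[group] += flagged  # True counts as 1, False as 0
    return {g: flags[g] / totals[g] for g in totals}

def predict(model, group, threshold=0.5):
    """Flag a person whenever their group's historical rate exceeds the threshold."""
    return model[group] > threshold

model = train_per_group_rate(history)
print(predict(model, "A"))  # True  -- flagged by group membership alone
print(predict(model, "B"))  # False -- never flagged, for the same reason
```

The individual's own behaviour never enters the prediction: the historical composition of the dataset alone determines the outcome, which is precisely the risk the "data quality and neutrality" point describes.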
To address these challenges, it may be helpful to follow some general principles. Among these we can mention the need for an anthropocentric [11] approach, according to which Artificial Intelligence must always be put at the service of people and not vice versa [12]. Moreover, there are principles of procedural (non-arbitrary procedures), formal (equal treatment for equal individuals or groups) and substantial (effective removal of economic and social obstacles) equity, as well as the satisfaction of certain basic universal needs, including respect for the freedom and rights of individuals and the community [13]. These and many other aspects related to the need to place AI at the service of people in every context are analysed in subsequent challenges.
Footnotes
[1] Ref. Umberto Eco, Apocalittici e integrati, Bompiani, 1964.
[2] The utopias of the "Californian ideology" (Richard Barbrook, Imaginary Futures: From Thinking Machines to the Global Village, 2007) are currently countered by the radical critique of technological "solutionism" (Evgeny Morozov, To Save Everything, Click Here: The Folly of Technological Solutionism, 2013).
[3] Data enriched with comments and metadata. For example, a caption can act as a description of an image.
[4] Bruno Lepri, Nuria Oliver, Emmanuel Letouzé, Alex Pentland, Patrick Vinck, "Fair, transparent and accountable algorithmic decision-making processes. The premise, the proposed solutions, and the open challenges", Springer Science+Business Media, 2017.
[5] Ref. "Legal challenge".
[6] There are neural networks whose calculation algorithms cannot be completely reconstructed, not even by their programmers, generating what is called the "black-box effect".
[7] As currently happens in the field of robotics.
[8] By way of parallel, consider construction works: the builder bears full responsibility for the first years after the inauguration of the work, but responsibility then passes to the person in charge of its maintenance.
[9] Ref. "Legal challenge".
[10] Ref. "Legal challenge".
[11] Ref. http://www.g7italy.it/sites/default/files/documents/ANNEX2-Artificial_Intelligence_0.pdf.
[12] Paraphrasing Kantian thought, it is necessary that AI "treats man always as an end and never as one of the means". Immanuel Kant, Groundwork of the Metaphysics of Morals, 1785.
[13] Ref. https://medium.com/code-for-canada/responsible-ai-in-the-government-of-canada-a-sneak-peek-973727477bdf.