Solving the responsibility problem in artificial intelligence ethical governance

The rapid development of artificial intelligence and its wide application across society have brought a series of risks and challenges, and how to govern the ethical risks of artificial intelligence has become a question that can no longer be avoided. According to traditional theories of responsibility, the moral responsibility arising from a technology must be allocated, in proportion to fault, to human subjects such as the technology's designers, manufacturers, users, and regulators. However, because artificial intelligence exhibits significant autonomy, excluding it from moral responsibility may create a “responsibility gap.” Establishing a sound system of ethical governance for artificial intelligence therefore requires solving the difficult problems the technology poses for the attribution of moral responsibility.

Limited responsibility

The first problem is whether artificial intelligence is qualified to act as a responsible agent for a given event (or consequence), rather than merely as a technological tool controlled by humans. Answering it requires establishing whether artificial intelligence possesses the minimum moral cognition and moral autonomy needed to assume responsibility. Artificial intelligence is “human-like intelligence” simulated and realized by computers: it converts information from the external environment into input data and processes that data through features extracted from it to produce output. This input-output process can be understood as the process by which the system perceives environmental information and takes specific actions based on what it perceives. The behavioral choices of artificial intelligence rest on an active response to its judgment of the external environment; traditional computer programs, by contrast, complete data input and output according to fixed, designed functions, and their response to the environment is passive. In addition, moral algorithms can be embedded in artificial intelligence, translating human understanding of moral requirements into logical rules for the system's information processing and decision-making, so that it complies with social morality in a forward-looking sense and makes responsible behavioral decisions.
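
A minimal sketch of this idea in Python: candidate actions pass through explicit logical rules, derived from a moral requirement, before any action is taken. The class names and the obstacle scenario are hypothetical illustrations, not drawn from any deployed system.

```python
# Minimal sketch of a perception-action loop with an embedded "moral algorithm".
# All names (MoralRule, Agent, the obstacle scenario) are illustrative.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MoralRule:
    """A moral requirement translated into a logical rule over candidate actions."""
    description: str
    permits: Callable[[str, dict], bool]  # (action, perceived_state) -> allowed?

class Agent:
    def __init__(self, rules: List[MoralRule]):
        self.rules = rules

    def perceive(self, environment: dict) -> dict:
        # Convert external environment information into input data (features).
        return {"obstacle_near": environment.get("obstacle_distance", 99.0) < 1.0}

    def decide(self, state: dict, candidates: List[str]) -> str:
        # Filter candidate actions through the moral rules before acting,
        # so compliance is enforced in a forward-looking (ex ante) sense.
        permitted = [a for a in candidates
                     if all(rule.permits(a, state) for rule in self.rules)]
        return permitted[0] if permitted else "stop"

rules = [MoralRule("do not advance toward a nearby obstacle",
                   lambda a, s: not (a == "advance" and s["obstacle_near"]))]
agent = Agent(rules)
state = agent.perceive({"obstacle_distance": 0.5})
print(agent.decide(state, ["advance", "turn_left"]))  # -> "turn_left"
```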

The particularity of artificial intelligence lies in its learning algorithms. Through multi-layer processing, low-level feature representations are transformed into high-level feature representations, giving artificial intelligence the ability to complete complex tasks and imitate specific human behaviors. For example, through the deep learning of neural networks and reinforcement learning over repeated self-play, AlphaGo not only improved its predictive ability in the game of Go but also discovered new patterns and strategies of play. Here the learning algorithm is predefined by human designers, but the behavioral decisions AlphaGo makes through machine learning are not determined by them. Artificial intelligence thus has definite but limited autonomy, interactivity, and capacity for action, and can independently make behavioral decisions in specific environments, so moral responsibility can be imputed to it. Recognizing that artificial intelligence can bear responsibility in this minimal sense affirms its limited but active position in the system of moral governance.
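
As a toy illustration of multi-layer processing, the sketch below passes random input features through two layers, each transforming a lower-level representation into a higher-level one. The weights are random, so this shows only the mechanism, not a trained model such as AlphaGo.

```python
# Toy illustration of multi-layer processing: each layer transforms a
# lower-level feature representation into a higher-level one.
# Weights are random; this is a sketch of the mechanism, not a trained model.

import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    w = rng.normal(size=(x.shape[0], n_out))
    return np.maximum(w.T @ x, 0.0)  # linear map + ReLU nonlinearity

x = rng.normal(size=8)   # low-level input features (e.g. raw signals)
h1 = layer(x, 16)        # intermediate representation
h2 = layer(h1, 4)        # higher-level representation used for decisions
print(h2.shape)          # (4,)
```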

Responsibility without intention

The second problem is that artificial intelligence follows natural laws such as those of mathematics and physics and lacks “the sense of being able to take responsibility.” The subjective feeling of responsibility manifests first as the actor's subjective intention, that is, the actor's understanding of the meaning of the behavior and the attitude of hoping or wanting to act in a certain way. Although subjective intention does affect the strength of moral condemnation or punishment, it is not a necessary condition for responsibility. From the standpoint of moral practice, subjective intention, as an internal state of the actor, is difficult to evaluate accurately. Moreover, the purpose of attributing responsibility is to determine, through retrospective evaluation of a damaging consequence, which actor caused it, regardless of whether that actor had any subjective intention. The German scholar Otfried Höffe took Oedipus as an example to show that this kind of no-fault responsibility prevailed in ancient society: responsibility is objective, and actors must bear moral responsibility even if they have no subjective fault. Assigning responsibility to artificial intelligence according to this principle of objectivity mainly examines the harmful consequences and the causal relation between behavior and consequences, without attending to the actor's subjective intention.

The subjective feeling of responsibility also manifests in the actor's reactive attitudes of self-blame and regret, which admit that he could have acted otherwise. Artificial intelligence lacks such receptivity, and it is difficult to imagine it experiencing any form of pain or guilt. Does it then make sense to assign responsibility to artificial intelligence and impose moral punishment on it? As far as reactive attitudes are concerned, it is the reactive attitudes of others, not the self-reactive attitudes of the actor, that drive responsibility. The blame and resentment of others are not only their evaluation of the value of an event (or consequence) but also their reaction to the actor based on that evaluation. Blame and resentment press the actor to provide reasons for avoiding blame; and where an actor's causal responsibility for an event (or consequence) is significant enough to trigger strong blame and resentment, others will refuse to accept appeals to the technology as an excuse for exemption. As far as moral punishment is concerned, punishment as “a natural revenge for past mistakes” has gradually lost its moral legitimacy in modern society. Moral responsibility does not necessarily require punishing the actor; it requires the actor to face up to, correct, or compensate for his moral faults while striving to avoid similar faults in the future. In this respect, it is feasible to require artificial intelligence to bear moral responsibility.

Shared responsibility

The third problem is whether assigning responsibility to artificial intelligence will weaken human moral responsibility or threaten the human position as the subject of responsibility. The answer is no. Correct attribution to artificial intelligence helps clarify the boundary of responsibility between humans and artificial intelligence, thereby strengthening human moral responsibility. Human designers, manufacturers, and users participate in the theoretical innovation, research and development, manufacturing, and practical application of artificial intelligence, and accordingly bear the responsibilities of internal knowledge, of scientific experimentation, and of putting science into practice. These multiple subjects cooperate through a division of labor along the design-production-application chain, and a responsibility chain develops accordingly: no single link bears the whole responsibility alone; each link bears a part, and that part must be connected with responsibility for the behavior as a whole. As Höffe put it, whoever creates the preconditions for others to be able to act must bear joint responsibility for the action. Conversely, where actors do not or cannot see this relation, the demand for attribution of responsibility arises, and an institution capable of supervising the fulfillment of responsibilities should be established for this purpose, supervising and managing how human designers, manufacturers, and users discharge their responsibilities. Human actors not only cannot be exempted from moral responsibility but must also bear the corresponding liability for compensation or redress.

In moral practice, human actors delegate part of their moral agency to artificial intelligence, for example by allowing intelligent search engines to screen and block sensitive words. Such authorization means that part of this agency is transferred from human actors to artificial intelligence, so that the latter can perform certain social duties on their behalf. Does the moral authorization of artificial intelligence imply an authorization of moral responsibility? That depends on whether what is authorized is the moral behavior itself or only the coordination of moral behavior (that is, merely providing feedback for human moral decision-making). Authorizing the moral behavior itself means that moral good and evil, and the attendant responsibilities, are distributed across a multi-agent system composed of artificial intelligence and humans. Artificial intelligence then replaces human actors in the actual social division of labor and becomes an important part of the social system, and must therefore bear the moral responsibility society assigns to that role, that is, distributed moral responsibility. Authorizing only the coordination of moral behavior still places humans at the center of moral responsibility, with artificial intelligence realizing human moral intentions; artificial intelligence then assumes only an extended moral responsibility. Neither distributed nor extended moral responsibility exempts human beings from their dominant position in moral governance.
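
The sensitive-word example can make the two modes of authorization concrete. In the hypothetical sketch below, one function blocks content autonomously (authorization of the moral behavior itself), while the other only flags content and defers the decision to a human (authorization of coordination); the word list and function names are invented.

```python
# Sketch contrasting the two modes of delegation discussed above, using the
# sensitive-word filtering example. Names and word list are hypothetical.

SENSITIVE = {"slur1", "slur2"}  # placeholder terms

def flag(text: str) -> bool:
    return any(word in text.split() for word in SENSITIVE)

# Mode 1: authorization of the moral behavior itself (distributed
# responsibility): the system blocks content on its own.
def autonomous_filter(text: str) -> str:
    return "[blocked]" if flag(text) else text

# Mode 2: authorization of coordination (extended responsibility):
# the system only provides feedback; a human makes the moral decision.
def advisory_filter(text: str, human_decides) -> str:
    if flag(text):
        return "[blocked]" if human_decides(text) else text
    return text

print(autonomous_filter("contains slur1"))                 # -> [blocked]
print(advisory_filter("contains slur1", lambda t: False))  # kept by the human
```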

In short, the key to the ethical governance of artificial intelligence is to assign responsibility to artificial intelligence on the basis of a new, open theory of responsibility. First, the moral responsibility of artificial intelligence is no-fault responsibility: the purpose of attribution is to correctly trace moral behavior and its consequences according to the principle of objectivity. Especially when multiple actors participate in a behavioral decision, correct attribution helps correct mistakes in time. Second, artificial intelligence can assume only moral responsibilities commensurate with its intelligence and autonomy. The moral responsibility of the weak artificial intelligence now in wide use is closely tied to its ability to extract features from big data; attribution cannot be uniform but must be differentiated according to levels of automation, and moral responsibilities beyond a system's capabilities must still be borne by human actors. Third, holding artificial intelligence responsible requires retrospective experiments on its algorithms until algorithmic flaws or program defects are found. The ways artificial intelligence assumes responsibility include optimizing algorithms, upgrading programs, and temporary or permanent suspension of use; the corresponding liability for compensation must still be borne by human actors. Finally, human designers should supervise and encourage artificial intelligence to continuously accumulate moral experience, transform that experience into moral algorithms or new features, and minimize unanticipated moral situations and dilemmas. In the computer field it is common to fix defects through program upgrades, and this likewise requires a reasonable understanding and summation of the moral errors of artificial intelligence.
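
A hypothetical sketch of such a retrospective experiment: logged incidents are replayed against the current decision rules until a flaw is reproduced, and the moral experience is then transformed into a new rule, a “program upgrade.” The incident log and rules are invented for illustration.

```python
# Sketch of a retrospective experiment: replay logged incidents against the
# current decision function until a flaw is found, then upgrade the rule set.
# The incident log and rules are invented for illustration.

def decide(state: dict, rules: list) -> str:
    for rule in rules:
        action = rule(state)
        if action is not None:
            return action
    return "proceed"

incident_log = [
    # (perceived state, action the system took, harm observed?)
    ({"pedestrian_ahead": True}, "proceed", True),
    ({"pedestrian_ahead": False}, "proceed", False),
]

rules = []  # the faulty version shipped with no safety rule

# Replay: find the states where the current rules reproduce the harm.
flaws = [s for s, action, harm in incident_log
         if harm and decide(s, rules) == action]

if flaws:
    # Transform the moral experience into a new rule (a "program upgrade").
    rules.append(lambda s: "brake" if s.get("pedestrian_ahead") else None)

print(decide({"pedestrian_ahead": True}, rules))  # -> "brake"
```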
