Artificial Intelligence and Innovation Ethics (session 271)

When:  Aug 10, 2020 from 16:00 to 17:30 (FR)
This symposium examines key questions raised by teaching ethics to artificial intelligence in business settings. A general question is how to balance the benefits and risks of AI, a concern that accompanies any technological change. That concern is made more severe by the large-scale implications of AI for human life, including our understanding of what it is to be a human being and which entities can properly be treated as rights holders. More specifically, several topics at the intersection of AI and ethics arise that this panel will address.

Fairness in the use of AI for business: When AI is used at large scale for business, there is always a concern that it may lead to drastic, large-scale discrimination against some groups of people. For example, deep-learning systems may deny mortgage loans to members of certain groups while others with comparable financial resources receive loans, and this may occur even if none of the training data indicate group membership. It is thus crucial to design tools that monitor an AI system’s performance and continuously test for bias. A second crucial goal is to design methods that mitigate any such biases to the maximum extent possible. This research direction involves both fundamental contributions to AI and statistics in developing these tools and impactful application across many business settings. A large ethics literature has carefully analyzed concepts of fairness, and this body of thought can be applied to AI. Many statistical measures of bias have been proposed, some of which are mutually inconsistent (the first sketch below illustrates one such conflict). An ethical analysis can help evaluate whether a given measure has normative justification.

Ethically grounded value alignment: Deep-learning systems are frequently designed to reflect human values so as to avoid recommending decisions inconsistent with those values. Values are typically ascertained, however, in much the same empirical way as facts and predictions: by analyzing large datasets that reflect human beliefs and preferences. Yet the AI community is coming to realize that a purely empirical approach can absorb biases and prejudices as readily as acceptable moral values. There is no substitute for grounding value alignment in ethical principles that are independently derived, a manoeuvre that avoids the philosophically famous “naturalistic fallacy” of deriving ethical conclusions from purely factual premises. The deontological tradition in ethics provides the intellectual resources to develop rigorously defined and grounded principles that can be used to screen training sets or otherwise direct learning procedures (the second sketch below gives a toy example of such screening).

Human-Centered Explainable AI (XAI): Many industry experts have pointed out the critical need for human-oriented explanations from AI systems. According to an IBM survey, about 60% of 5,000 executives were concerned “about being able to explain how AI is using data and making decisions.” However, the most successful algorithms in use today are not transparent: they are fundamentally “black boxes” built from many layers of complex, typically nonlinear, transformations of inputs. It can be quite difficult for anyone to understand an algorithm’s output or why the model makes key predictions (the third sketch below shows one simple way to probe such a model). Given these challenges, efforts to develop more interpretable, explainable, or intelligible algorithms form a key area of current research. The explainability of an algorithm plays a key role in enabling auditability and in detecting and improving fairness, trust, and reliability.
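As a concrete illustration of how bias measures can conflict, the first sketch below computes two widely used statistics, demographic parity and a true-positive-rate gap from the equalized-odds family, on the same hypothetical loan decisions. This is a toy example under assumed data, not part of the session materials; the group labels, predictions, and function names are all illustrative.

```python
# Toy comparison of two statistical bias measures on the same predictions.
# All data below are hypothetical; "A" and "B" are illustrative group labels.

def demographic_parity_gap(preds, groups):
    """Difference in positive-prediction (approval) rates between groups."""
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return rate("A") - rate("B")

def tpr_gap(preds, labels, groups):
    """Difference in true-positive rates, one component of equalized odds."""
    def tpr(g):
        pos = [p for p, y, grp in zip(preds, labels, groups) if grp == g and y == 1]
        return sum(pos) / len(pos)
    return tpr("A") - tpr("B")

# Group A has a higher base rate of creditworthy applicants (label 1), so
# equal approval rates across groups force unequal true-positive rates.
groups = ["A"] * 4 + ["B"] * 4
labels = [1, 1, 1, 0,  1, 0, 0, 0]   # ground-truth creditworthiness
preds  = [1, 1, 0, 0,  1, 1, 0, 0]   # hypothetical classifier approvals

print(demographic_parity_gap(preds, groups))  # 0.0   -> demographic parity holds
print(tpr_gap(preds, labels, groups))         # -0.33 -> equalized odds violated
```

The same set of approvals satisfies one fairness criterion and violates the other, which is why an ethical analysis of which measure carries normative weight matters.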
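The second sketch gestures at what “screening a training set” against an independently stated principle might look like. Everything here, the record format, the rationale field, and the rule itself, is a hypothetical assumption for illustration, not a method proposed by the panelists.

```python
# Hypothetical sketch: filter training records against an independently
# stated (deontological) constraint instead of inferring values from data.
# The rule and record schema are illustrative assumptions.

PROTECTED_FIELDS = {"gender", "ethnicity", "religion"}  # assumed constraint

def violates_principle(record: dict) -> bool:
    """Flag records whose stated decision rationale cites a protected attribute."""
    rationale = record.get("rationale", "").lower()
    return any(field in rationale for field in PROTECTED_FIELDS)

training_set = [
    {"features": {"income": 52000}, "decision": "approve",
     "rationale": "income above threshold"},
    {"features": {"income": 48000}, "decision": "deny",
     "rationale": "ethnicity of applicant"},
]

screened = [r for r in training_set if not violates_principle(r)]
print(len(screened))  # 1: the record justified by a protected attribute is dropped
```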
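Finally, the third sketch shows one common post-hoc explanation idea: perturb each input feature in turn and record how much the black-box score moves. The stand-in model, feature names, and step size are assumptions; real XAI methods such as LIME or SHAP are considerably more sophisticated, but the perturbation intuition is the same.

```python
# Minimal perturbation-based probe of an opaque scoring model.
# The model and feature values are hypothetical stand-ins.
import math

def black_box_score(features):
    """Stand-in for an opaque model: an arbitrary nonlinear scoring function."""
    income, debt, tenure = features
    z = 0.5 * income - 0.8 * debt + 0.2 * tenure
    return 1 / (1 + math.exp(-z))

def feature_attributions(features, delta=0.1):
    """Change in the score when each feature is nudged by `delta`,
    holding the others fixed (a crude sensitivity probe)."""
    base = black_box_score(features)
    changes = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] += delta
        changes.append(black_box_score(perturbed) - base)
    return changes

applicant = [1.2, 0.7, 3.0]  # hypothetical normalized income, debt, tenure
print(feature_attributions(applicant))  # per-feature sensitivity of the score
```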
However, the definition of interpretability and the desiderata for a good explanation remain elusive, and different researchers use different, often problem- or domain-specific, definitions. More alarmingly, XAI research rarely involves systematic investigation of human responses to the question “What is a good explanation for machine-learning output?”

AI generates a variety of ethical questions at three interconnected levels. The first is the legal dimension: what laws should be enacted to govern AI? Should some particular aspect of AI be subject to legal regulation at all? Do we need to fashion specific legislation to address AI issues, or can we rely on more general legal standards? The second is the social dimension, which asks what social morality should be cultivated concerning AI and what sort of culture will develop in response to it. A third level concerns the issues that arise for individuals and associations in their engagement with AI; corporations and associations still need to exercise their own moral judgment.
#AOM2020
#TechnologyandInnovationManagement
#SocialIssuesinManagement
