
Contribution Details

Type Dissertation
Scope Discipline-based scholarship
Title The Right Thing To Do? Artificial Intelligence for Ethical Decision Making
Authors
  • Suzanne Tolmeijer
  • Abraham Bernstein
Language English
Institution University of Zurich
Faculty Faculty of Business, Economics and Informatics
Number of Pages 199
Date 2022
Abstract Text With the advancement of AI technology, an increasing number of AI applications are being developed and applied in various domains. While some tasks in such applications lend themselves well to the strengths of AI, other tasks are more challenging to automate. One example is ethical decision making. AI for ethical decision making has not been explored much, among other reasons because of its potentially impactful and ethically loaded results, as well as the lack of a ‘ground truth’ on what is considered the right thing to do. However, AI for ethical decision making could both be valuable in explicit ethical decision making domains and increase the ethical use of other AI applications. This thesis fills this research gap by focusing on whether and how AI for ethical decision making can be designed in a way that is acceptable to users. The investigated research topics are presented according to the incremental and iterative design (IID) cycle, which is often applied in the development of new technology. After an initial planning phase, a design cycle consists of the following phases: planning and requirements, analysis and implementation, testing, and evaluation. During the first phase, initial planning, we investigate the state of the art of implementing ethical theory in AI by performing an extensive literature review. Among other results, we find that the field is scattered in terms of the ethical theories and AI types used to create AI for ethical decision making. Additionally, the developed applications consist mostly of prototypes. These results imply that a Wizard of Oz approach is appropriate for the implementation and testing in the design cycle presented in this thesis. The success of any AI application depends on whether the users trust the AI enough to rely on it.
Given the varying opinions regarding a ground truth for ethical AI, where AI decisions can easily be considered wrong, we focus on how AI mistakes influence user trust. In the second phase of the design cycle, planning and requirements, we perform an experiment to investigate the effect of AI mistakes and their timing on user trust and reliance. We find that system inaccuracy negatively influences trust and reliance. Furthermore, the negative effect of AI mistakes is stronger when mistakes are made during the first interaction with the user. The third phase, analysis and implementation, focuses on how these negative effects of AI mistakes can be mitigated by presenting different interaction designs, introducing a taxonomy of AI mistakes and appropriate mitigation strategies. In the fourth phase, testing, we use a Wizard of Oz application to test user perception of AI for ethical decision making. We find that while participants had higher moral trust in a human expert and found humans more responsible, they had more capacity trust and overall trust in an AI system for ethical decision making. In the final phase, evaluation, we describe the consequences of our findings. Since people perceive AI and humans to have different strengths that are both valuable for ethical decision making, we propose an interaction paradigm that utilizes the strengths of both: human-autonomy teaming. For AI and humans to form an effective team, further development of different AI capabilities is needed: agency, communication, shared mental models, intent, and interdependence. In conclusion, this work contributes to the understanding of user perception of AI for ethical decision making, and suggests design strategies to move research on AI for ethical decision making forward.
Other Identification Number merlin-id:23049