Contribution Details
Type | Conference Presentation |
Scope | Contributions to practice |
Title | Trust in human-AI interaction: an empirical exploration |
Organization Unit | |
Authors | |
Presentation Type | other |
Item Subtype | Original Work |
Refereed | No |
Status | Published in final form |
Language | |
Event Title | Ethical and Legal Aspects of Autonomous Security Systems Conference 2019 |
Event Type | conference |
Event Location | University of Zurich |
Event Start Date | May 2, 2019 |
Event End Date | May 3, 2019 |
Abstract Text | Technological advances allow progressively more autonomous systems to become part of our society. Such systems can be especially useful when time pressure and uncertainty are part of a decision-making process, e.g. in a security context. However, by using such a system, there is a risk that its output does not match ethical expectations, e.g. because a suboptimal solution is selected or collateral damage occurs. This has two implications. Firstly, the actual advice or action the system performs should be as we prefer it to be. Secondly, the user needs to perceive the system as an ethical and trustworthy partner in the decision-making process, to ensure the system is actually used. This project focuses on the latter and contributes to the further elaboration of empirical issues raised by the White Paper “Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications”. While there has been research on autonomous systems and ethics, the field is still very much developing. To our knowledge, the following specific factors have not been combined in research before: different levels of autonomy in search and rescue scenarios, uncertainty and time pressure in ethical decision-making, and trust. To investigate the interplay of these factors, we use a multidisciplinary, experimental approach. In contrast to standard experimental ethics, which is usually vignette-based, we will present morally challenging scenarios to participants in a simulation. This setting allows greater immersion in the ethical scenario and adds the human-interaction component, which is important for researching the user's perceptions and expectations. Currently, an experimental setup is being designed together with a simulation prototype; the experiment will take place with search and rescue recruits of the Swiss army.
They will participate in simulations involving drones controlled by the participants in two settings: a rescue mission in which only a limited number of people can be saved, and a prevention mission (bringing down a terror drone) in which there will be some casualties. The system will either provide decision support for a given scenario or autonomously decide what to do, with the user retaining only a veto option. After each scenario, questions will be asked on ethical acceptability, ethical responsibility, and trust. At the conference, we will present pretesting results for the different scenarios and further outline our research program. The results of this research should ultimately shape guidelines on how to build ethically trustworthy autonomous systems. |