
Contribution Details

Type Master's Thesis
Scope Discipline-based scholarship
Title Computing the Trustworthiness Level of Black Box Machine and Deep Learning Models
Organization Unit
Authors
  • Dario Gagulic
Supervisors
  • Alberto Huertas Celdran
  • Chao Feng
  • Burkhard Stiller
Language
  • English
Institution University of Zurich
Faculty Faculty of Business, Economics and Informatics
Date 2023
Abstract Text The field of Artificial Intelligence (AI) is rapidly evolving and increasingly integrated into everyday life. Black Box Machine and Deep Learning systems support humans in making important decisions in safety-critical industries, decisions that consequently influence the lives of real people. This has raised the need to assess a model's trustworthiness. Trust is a subjective concept that depends on many factors. As Black Box models grow larger and more complex, it has become impossible, even for domain experts, to understand their reasoning and analyze how such models derive conclusions. Fortunately, early work has developed automatic tools that allow computing and evaluating trust in a particular system, based on the pillars of fairness, explainability, robustness, and methodology. The algorithm computes various metrics and relies on the user to upload the model, the dataset used, and a FactSheet describing the applied training methodology. This poses a problem when computing the trustworthiness level of Black Box Machine and Deep Learning models with limited data access. Notably, the presented work identified two common definitions of the term Black Box established in the research community: the first refers to complex systems with limited interpretability, while the second, underexplored with respect to trustworthiness assessment, refers to systems with limited available information. This master's thesis therefore introduces a Black Box Taxonomy that categorizes Machine Learning models into subgroups based on interpretability and adds a second dimension distinguishing their available information levels. Further, a novel approach is proposed that introduces a synthetic dataset generator to compute the trust score of Black Box models. The generator offers two modes (MUST and MAY) to balance privacy and accuracy concerns. This solution addresses otherwise incomputable metrics, leading to a more accurate trustworthiness assessment. To validate the approach, the implementation was evaluated on two real-world scenarios.
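The abstract describes a trust score built from the pillars of fairness, explainability, robustness, and methodology. As a minimal illustrative sketch only (not the thesis implementation; the aggregation function, weights, and scores below are assumptions), such a pillar-based score can be thought of as a weighted mean:

```python
# Hypothetical sketch: aggregating per-pillar scores (each in [0, 1])
# into an overall trust score via a weighted mean. Pillar names come
# from the abstract; the weights and example scores are invented.

def trust_score(pillar_scores, weights):
    """Return the weighted mean of the pillar scores."""
    total_weight = sum(weights[p] for p in pillar_scores)
    weighted_sum = sum(pillar_scores[p] * weights[p] for p in pillar_scores)
    return weighted_sum / total_weight

scores = {"fairness": 0.8, "explainability": 0.6,
          "robustness": 0.7, "methodology": 0.9}
weights = {"fairness": 1.0, "explainability": 1.0,
           "robustness": 1.0, "methodology": 1.0}

# With equal weights this reduces to a plain average.
print(round(trust_score(scores, weights), 2))  # -> 0.75
```

With limited data access, some pillar metrics become incomputable; the thesis addresses this by generating a synthetic dataset so that those metrics can still be evaluated.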