Towards Trustworthy AI: Validating & Explaining AI Models and Decisions


Type and Duration

FFF-funded project, November 2021 until December 2022 (completed)

Coordinator

Hilti Chair of Business Process Management

Main Research

Business Process Management

Description

Digitalization and innovation promise to ease our daily lives. Both of these trends are heavily driven by technologies, models, and algorithms from the field of Artificial Intelligence (AI). Despite its success, AI still suffers from serious shortcomings, among them its black-box and statistical nature: AI models are difficult for humans to understand, and any individual prediction can be wrong, even when it appears simple to get right. These undesirable properties have fostered the field of explainable AI, and they call for approaches to validate AI models and decisions. This project seeks to contribute to these areas by investigating three specific problems, two of which are closely related to existing projects with regional companies.