Explainable Artificial Intelligence in Economics

Type and Duration

FFF-funded project, January 2022 until December 2023


Center for Economics

Main Research

Wealth Management


Deep learning (DL) models offer an interesting alternative to classical statistical models, which they often outperform by using a series of hierarchical layers, or a hierarchy of concepts, to perform the process of machine learning. The disadvantage of DL models, however, is that their parameters can be interpreted and their results explained only to a limited extent. As DL models are increasingly employed to make important predictions in critical contexts (such as healthcare, transport, insurance, and finance), the demand for transparency is growing. Explanations supporting the output of a model are therefore crucial.
This project will explore how Explainable Artificial Intelligence (XAI) methods can be used to interpret and explain the solutions of DL models and to perform variable selection.
Since these methods can disentangle complex interactive effects, they will be used to study health trajectories of noncommunicable diseases (NCDs), which result from a combination of genetic, physiological, environmental, and behavioral factors and tend to be of long duration.
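To illustrate the idea, the sketch below shows one simple model-agnostic XAI technique, permutation importance, used for variable selection with a small neural network. The dataset, model, and threshold are illustrative assumptions, not the project's actual data or method: only two of five synthetic features are informative, one of them purely through an interaction effect, and the importance scores recover them.

```python
# Illustrative sketch (not the project's actual data or pipeline):
# permutation importance as an XAI-style variable-selection step.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
# Only features 0 and 1 matter; feature 1 acts purely through an interaction.
y = 2 * X[:, 0] + X[:, 0] * X[:, 1] + rng.normal(scale=0.1, size=500)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X, y)

# Permutation importance: the drop in score when one feature is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
selected = [i for i, imp in enumerate(result.importances_mean) if imp > 0.05]
print("selected features:", selected)
```

Because shuffling a feature also destroys any interaction it participates in, this kind of score captures interactive effects that a purely linear variable-selection step would miss, which is the property the project exploits for NCD trajectories.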

Project Manager