Safety of Deep Learning and Robustness to Natural Adversarial Examples

Type and Duration

Preproposal PhD thesis, since September 2020

Coordinator

Hilti Chair for Data and Application Security

Main Research

Business Process Management

Field of Research

Digital Innovation

Description

Unlike security, the safety of learning algorithms has received little attention in the scientific community, yet trustworthy AI systems require both. The key difference between these related concepts is that safety deals with natural examples that may break learning systems. Natural examples are not subject to the typical constraints of adversarial learning, e.g., being imperceptible or semantically intact, and hence cannot be handled by existing defenses. On the other hand, natural examples obey physical constraints that can be leveraged in the design of safety mechanisms.
The main goal of this research is to develop a methodology for the safety analysis of deep learning informed by knowledge of natural adversarial examples. This methodology should comprise techniques for the systematic search for natural adversarial examples, the attainment of provable safety guarantees, the exploratory verification of safety features, and evaluation on real applications.
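To illustrate what a systematic search for natural adversarial examples might look like, the following is a minimal sketch, not part of the project itself: it applies physically plausible transformations (small rotations and lighting changes) to a clean image and keeps those variants that flip a pretrained classifier's prediction. The choice of model (ResNet-18) and the transformation ranges are illustrative assumptions.

```python
# Hedged sketch: search for natural adversarial examples via physically
# plausible transformations. Model and parameter ranges are assumptions
# made for illustration only.
import torch
import torchvision.transforms.functional as TF
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()

def natural_candidates(image):
    """Yield naturally transformed variants of a PIL image."""
    for angle in (-15, -5, 5, 15):          # small rotations
        for brightness in (0.7, 1.0, 1.3):  # lighting changes
            yield TF.adjust_brightness(TF.rotate(image, angle), brightness)

@torch.no_grad()
def search_natural_adversarials(image):
    """Return transformed variants whose predicted label differs from the original's."""
    original_label = model(preprocess(image).unsqueeze(0)).argmax(1).item()
    found = []
    for candidate in natural_candidates(image):
        label = model(preprocess(candidate).unsqueeze(0)).argmax(1).item()
        if label != original_label:
            found.append((candidate, label))
    return found
```

In contrast to gradient-based adversarial attacks, such a search explores only transformations that could plausibly occur in the physical world, which is the constraint the project proposes to exploit when designing safety mechanisms.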