Machine Learning Safety

Deep Learning systems have shown remarkable performance across a wide range of tasks in domains such as computer vision and natural language processing. Despite these successes, theoretical studies and high-profile incidents, such as accidents involving self-driving vehicles, have highlighted that Deep Learning models can fail unexpectedly and without apparent cause.

To ensure that artificial neural networks are suitable for use in safety-critical systems, such as autonomous vehicles or industrial manufacturing process controls, their reliability must be enhanced and, ideally, verified.

Our research primarily focuses on Anomaly Detection and Out-of-Distribution Detection: developing methods that allow a model to recognize inputs for which its predictions are likely to be unreliable, for example because they differ substantially from the training data. We explore a variety of techniques to assess and improve the confidence estimates of artificial neural networks, with the goal of equipping these systems with the capacity to fail gracefully.
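As a concrete illustration of the kind of mechanism involved (a minimal sketch, not our specific method), the maximum softmax probability baseline of Hendrycks & Gimpel (2017) treats a classifier's highest softmax probability as a confidence score and flags low-confidence inputs as potentially out-of-distribution. The model and threshold below are hypothetical placeholders:

```python
# Minimal sketch of the maximum softmax probability (MSP) baseline
# (Hendrycks & Gimpel, 2017). Model and threshold are illustrative
# placeholders, not a specific method used by this group.
import torch
import torch.nn.functional as F

def msp_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Confidence score: the highest softmax probability per input.
    Low scores suggest the input may be out-of-distribution."""
    with torch.no_grad():
        logits = model(x)                # shape: (batch, num_classes)
        probs = F.softmax(logits, dim=-1)
        return probs.max(dim=-1).values  # shape: (batch,)

# Usage: flag inputs whose confidence falls below a threshold chosen
# on validation data.
# model = ...  # any trained classifier
# scores = msp_score(model, batch)
# is_ood = scores < 0.5  # hypothetical example threshold
```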

For further inquiries, please feel free to contact Konstantin Kirchheim.
