Workshop Details

ML Summit
The big training event for Machine Learning
October 11 - 13, 2021 | Online April 2022 | Munich

Serg Masis


Register by February 18 and save up to € 200 per ticket! Register now
June 29 – July 1

Making Machine Learning Models Attack-Proof with Adversarial Robustness

We can easily trick a classifier into making embarrassingly false predictions. When this is done systematically and intentionally, it is called an adversarial attack; when the attack works by manipulating inputs at inference time, it is specifically called an evasion attack. In this session, we will examine an evasion use case and briefly survey other forms of attack. We will then explain two defense methods, spatial smoothing preprocessing and adversarial training, and finally demonstrate one robustness evaluation method and one certification method to verify that the model can withstand such attacks.
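To make the abstract's two headline ideas concrete, here is a minimal sketch of an evasion attack (the Fast Gradient Sign Method) against a toy logistic classifier, followed by the spatial smoothing (median filter) defense. Everything here is illustrative: the 4x4 "image", the weight matrix that over-relies on a single pixel, and the epsilon budget are all assumptions, not material from the session itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy classifier that over-relies on pixel (1, 1) -- an assumed weakness
# chosen so a tiny, sparse perturbation can flip the prediction.
W = np.zeros((4, 4))
W[1, 1] = 10.0
B = -5.0

def predict_proba(img):
    return sigmoid(np.sum(W * img) + B)

def fgsm(img, y_true, eps):
    """Fast Gradient Sign Method: step each pixel by eps in the
    direction that increases the binary cross-entropy loss."""
    grad = (predict_proba(img) - y_true) * W   # dLoss/dImg for logistic loss
    return np.clip(img + eps * np.sign(grad), 0.0, 1.0)

def median_smooth(img, k=3):
    """Spatial smoothing defense: replace each pixel with the median of
    its k x k neighborhood (window clipped at the borders)."""
    r, out = k // 2, np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(img[max(0, i - r):i + r + 1,
                                      max(0, j - r):j + r + 1])
    return out

clean = np.full((4, 4), 0.2)            # true class: 0
adv = fgsm(clean, y_true=0.0, eps=0.8)  # adversarial copy of the input

print(predict_proba(clean) > 0.5)               # False (correct)
print(predict_proba(adv) > 0.5)                 # True  (classifier fooled)
print(predict_proba(median_smooth(adv)) > 0.5)  # False (defense restores it)
```

Because the toy model's gradient is concentrated in one pixel, the FGSM perturbation is sparse, which is exactly the kind of high-frequency noise a median filter removes; against dense perturbations, spatial smoothing alone is a weaker defense, which is why it is typically paired with adversarial training.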

Session Tracks

#Machine Learning