Workshop Details

ML Summit
The big training event for Machine Learning
October 11 – 13, 2021 | Online
April 2022 | Munich

Serg Masis


Register by February 18 and save up to €200 per ticket! Register now

Making Machine Learning Models Attack-Proof with Adversarial Robustness

We can easily trick a classifier into making embarrassingly false predictions. When this is done systematically and intentionally, it is called an adversarial attack; this particular kind is called an evasion attack. In this session, we will examine an evasion use case and briefly cover other forms of attacks. We will then explain two defense methods: spatial smoothing preprocessing and adversarial training. Lastly, we will demonstrate one robustness evaluation method and one certification method to verify that a model can withstand such attacks.
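The attack-and-defense loop described above can be sketched in a few lines. The following is an illustrative toy, not the session's actual code: a linear "classifier" over 5×5 inputs, an FGSM-style evasion step (for a linear score, the gradient with respect to the input is just the weight map), and a 3×3 mean filter standing in for spatial smoothing preprocessing. All names and the model itself are assumptions made for the sketch.

```python
import numpy as np

# Toy "image" classifier: class is the sign of a linear score with a
# high-frequency (checkerboard) weight map.
parity = np.indices((5, 5)).sum(axis=0) % 2
w = np.where(parity == 0, 1.0, -1.0)

def predict(x):
    # Class 1 if the linear score is positive, else class 0.
    return int(np.sum(w * x) > 0)

x = np.full((5, 5), 0.5)  # clean input, classified as class 1

# Evasion attack (FGSM-style): the gradient of the score w.r.t. x is w,
# so each pixel is nudged against sign(w) to push the score negative.
eps = 0.1
x_adv = x - eps * np.sign(w)

# Defense: spatial smoothing via a 3x3 mean filter. The high-frequency
# adversarial pattern averages out while the smooth clean signal survives.
def mean_smooth(img):
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = img[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].mean()
    return out

print(predict(x))                  # clean input: class 1
print(predict(x_adv))              # after attack: class 0
print(predict(mean_smooth(x_adv))) # after smoothing defense: class 1
```

Real attacks target nonlinear networks and estimate gradients through backpropagation, and production-grade implementations of these attacks and defenses exist in dedicated libraries, but the mechanics are the same: a small, deliberately aimed perturbation flips the prediction, and preprocessing that removes high-frequency noise can undo it.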

Session Tracks

#Machine Learning