AI Red Teaming: a facial recognition case study
What is AI Red Teaming? Why does it matter, and how can we apply penetration-testing and SDLC methodologies to machine learning? In this presentation we will answer those questions and, more importantly, demonstrate how they can be applied in practice to testing critical applications such as AI-driven facial recognition systems.

Facial recognition technology has grown increasingly prevalent, and today you can find it across many areas of human activity, including social media, smart homes, ATMs, and retail stores. Researchers have recently shown that AI algorithms are vulnerable to adversarial attacks: carefully crafted changes to an image that fool the model while remaining imperceptible to the human eye. To conduct a rigorous test, we created our own adversarial attack taxonomy for machine learning models and evaluated the efficacy of recent approaches to attacking AI-driven facial recognition systems. We will present research conducted in a real-world environment to show how AI Red Teaming processes can be performed in practice.
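As a flavor of the attack class mentioned above, the sketch below implements the well-known Fast Gradient Sign Method (FGSM) against a toy linear "match / no-match" classifier. The model, weights, and data here are illustrative stand-ins, not the systems or attacks from our research: each pixel is nudged by a small epsilon in the direction that increases the model's loss, so the perturbation stays visually negligible.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon=0.03):
    """Return an adversarial copy of flattened image x for a toy
    logistic-regression classifier with weights w and bias b.

    For binary cross-entropy loss, the gradient of the loss with
    respect to each input pixel is (p - y) * w_i, where p is the
    predicted probability of the positive class.
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = sigmoid(z)
    sign = lambda v: (v > 0) - (v < 0)
    # Step each pixel by epsilon in the loss-increasing direction,
    # then clamp back into the valid [0, 1] pixel range.
    return [min(1.0, max(0.0, xi + epsilon * sign((p - y_true) * wi)))
            for xi, wi in zip(x, w)]

random.seed(0)
w = [random.gauss(0, 1) for _ in range(64)]   # toy weights (8x8 image, flattened)
b = 0.0
x = [random.random() for _ in range(64)]      # toy input image, pixels in [0, 1]
y = 1.0                                       # true label: "match"

x_adv = fgsm_perturb(x, w, b, y)
p_before = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
p_after = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
# The attack lowers the model's confidence in the true label even though
# no single pixel moved by more than epsilon.
```

Attacks on real face recognition pipelines follow the same principle but compute the gradient through a deep network; the per-pixel bound is what keeps the change undetectable to a human observer.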