Why Security Is Important in ML and How to Secure Your ML-Based Solutions
When enterprises adopt new technologies, security is often on the back burner. It can seem more important to get new products or services to customers and internal users as quickly as possible and at the lowest cost.
AI and ML offer all the same opportunities for vulnerabilities and misconfigurations as earlier technological advances, but they also carry unique risks. As enterprises embark on major AI-powered digital transformations, those risks only grow. AI and ML require more data, and more complex data, than other technologies. The algorithms themselves often come out of research projects, written by mathematicians and data scientists rather than security-minded engineers. And the sheer data volume and processing requirements mean that workloads often run on cloud platforms, adding another layer of complexity and attack surface.
It’s no surprise that cybersecurity is the most worrisome risk for AI adopters; machine learning is software, after all. That’s why in this presentation I will focus on secure coding best practices and the security pitfalls of the Python programming language, covering both adversarial machine learning and core secure-coding topics through hands-on labs and stories from real life. The examples will demonstrate techniques that foster a strong commitment to security and substantially improve code hygiene.
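To preview the kind of Python pitfall the talk covers, here is a minimal illustrative sketch (my own example, not taken from the labs) of unsafe deserialization: many ML pipelines exchange models as pickle files, and pickle will execute arbitrary attacker-controlled code at load time.

```python
import os
import pickle

# A malicious "model file" only needs to define __reduce__:
# pickle calls it during deserialization and runs whatever it returns.
class MaliciousModel:
    def __reduce__(self):
        # A harmless echo stands in here for a real payload
        # (reverse shell, credential theft, data exfiltration, ...).
        return (os.system, ("echo pwned: arbitrary code executed",))

poisoned_blob = pickle.dumps(MaliciousModel())

# The victim thinks they are loading model weights; instead,
# the attacker's command runs before any object is returned.
pickle.loads(poisoned_blob)
```

The practical takeaway: treat serialized models as executable code. Load pickles only from trusted, integrity-checked sources, or prefer data-only formats such as ONNX or safetensors where the workflow allows.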