
Webinar 9: Security and Privacy in Machine Learning (Apr. 14, 2022)

Security and privacy play an indispensable role in building trust in any information system, and AI is no exception. If a machine learning model is insecure or leaks private or confidential information, companies will be reluctant to deploy it, which ultimately hinders both AI and human development. Indeed, it has already been demonstrated that sensitive training data can be extracted from trained machine learning models, and that training data can be poisoned to force the misclassification of specific samples or to prolong training. Moreover, imperceptible modifications to the input data, called adversarial examples, can fool AI systems and cause misclassifications that may lead to life-threatening situations.
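To make the idea of adversarial examples concrete, the sketch below shows one common way they can be crafted, the fast gradient sign method (FGSM). It is a minimal illustration under assumed inputs (the classifier, the image batch x, the labels y, and the perturbation budget eps are placeholders), not a method presented in the webinar.

    # Minimal FGSM sketch (assumed model/data, illustrative only).
    import torch
    import torch.nn as nn

    def fgsm_example(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                     eps: float = 0.03) -> torch.Tensor:
        """Return a slightly perturbed copy of x that tends to increase the model's loss on y."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the direction of the sign of the loss gradient, keep pixel values valid.
        return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    # Hypothetical usage with any image classifier and a normalized batch (x, y):
    # x_adv = fgsm_example(model, x, y, eps=0.03)

Even with a small eps, such perturbations are typically invisible to humans while changing the model's prediction.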

These are not far-fetched scenarios: stop signs with specially crafted adversarial stickers can be recognized as yield signs by self-driving cars, an individual wearing a specific pair of glasses can be recognized as a different person by a face recognition system, and leaking that a patient's record was part of the training data of a cancer-prognosis model can reveal that the patient has cancer. Trustworthy machine learning is also mandated by regulations (such as the GDPR), whose violation can result in hefty fines for a company. There is therefore great demand for experts who can audit the privacy and security risks of machine learning models and thereby demonstrate compliance with different AI and privacy regulations.
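As a rough illustration of the patient-leakage scenario, the sketch below implements a simple confidence-threshold membership inference test: samples on which a model is unusually confident about the true label are guessed to have been part of its training set. The function names and the threshold calibration are assumptions for illustration, not the attack discussed in the talk.

    # Minimal membership inference sketch (illustrative assumptions only).
    import numpy as np

    def membership_scores(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
        """Per-sample log-confidence on the true label; higher suggests 'training member'."""
        eps = 1e-12
        return np.log(probs[np.arange(len(labels)), labels] + eps)

    def guess_members(probs: np.ndarray, labels: np.ndarray, threshold: float) -> np.ndarray:
        """Flag samples whose score exceeds a threshold calibrated on known non-members."""
        return membership_scores(probs, labels) > threshold

If the flagged samples come from a model trained only on cancer patients, a correct membership guess alone reveals the patient's diagnosis.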

In this talk, the main security and privacy risks of machine learning models are reviewed, following the CIA (Confidentiality, Integrity, Availability) triad. The issues are demonstrated in real applications including malware detection, drug discovery, and synthetic data generation for the purpose of anonymization.

Watch on LinkedIn: https://www.linkedin.com/video/event/urn:li:ugcPost:6919571643547672576/