ISL Colloquium


Attacking the privacy of machine learning models

Nicholas Carlini – Research scientist, Google Brain

Thu, 29-Sep-2022 / 4:00pm / Packard 101

Abstract

Current machine learning models are not private: they reveal specific details about individual examples contained in the datasets used for training. This talk studies various aspects of this privacy problem. For example, we have shown how to query GPT-2 (a pretrained language model) to extract personally identifiable information from its training set. This talk discusses how and why these attacks work, and what can be done to prevent them, both in theory and in practice.

Bio

Nicholas Carlini is a research scientist at Google Brain. He studies the security and privacy of machine learning, for which he has received best paper awards at ICML, USENIX Security, and IEEE S&P. He obtained his PhD from the University of California, Berkeley in 2018.