Information-theoretic Privacy: A holistic view via leakage measures, robust privacy guarantees, and adversarial models for mechanism design

Privacy is the problem of ensuring limited leakage of information about sensitive features while sharing useful information (utility) about non-private features with legitimate data users. Even as differential privacy has emerged as a strong desideratum, there is a need for varied yet rigorous approaches for applications with different requirements. This talk presents an information-theoretic approach, taking a holistic view that spans leakage measures, the design of privacy mechanisms, and verifiable implementations using generative adversarial models.

Specifically, we introduce maximal alpha leakage, a new class of adversarially motivated, tunable leakage measures that quantifies the maximal gain of an adversary in refining a tilted belief about any (potentially random) function of a dataset, conditioned on a disclosed dataset. The choice of alpha determines the specific adversarial action, ranging from refining a belief (alpha = 1) to guessing the best posterior (alpha = ∞); at these extremal values the measure simplifies to mutual information (MI) and maximal leakage (MaxL), respectively. The problem of guaranteeing privacy can then be viewed as one of designing a randomizing mechanism that minimizes alpha leakage subject to utility constraints. We then present bounds on the robustness of the privacy guarantees that can be made when mechanisms are designed from a finite number of samples.

Finally, we focus on a data-driven approach, generative adversarial privacy (GAP), for designing privacy mechanisms using neural networks. GAP is modeled as a constrained minimax game between a privatizer (intent on publishing a utility-guaranteeing learned representation that limits leakage of the sensitive features) and an adversary (intent on learning the sensitive features). We demonstrate the performance of GAP on multi-dimensional Gaussian mixture models and the GENKI dataset.
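To make the two extremes cited above concrete, the following is a sketch in standard notation from the information-theoretic privacy literature (the symbols X for the original data, Y for the released data, and the exact form of the limits are assumptions, not necessarily the talk's precise definitions):

```latex
% Limiting behavior of maximal alpha-leakage L_alpha^max(X -> Y):
% alpha -> 1 recovers Shannon mutual information, and
% alpha -> infinity recovers maximal leakage (MaxL).
\lim_{\alpha \to 1} \mathcal{L}_{\alpha}^{\max}(X \to Y) \;=\; I(X;Y),
\qquad
\lim_{\alpha \to \infty} \mathcal{L}_{\alpha}^{\max}(X \to Y)
\;=\; \log \sum_{y} \max_{x:\,P_X(x)>0} P_{Y|X}(y \mid x).
```

Intermediate values of alpha interpolate between these two adversarial models, which is what makes the measure tunable.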
Time permitting, we will briefly discuss the learning-theoretic underpinnings of GAP as well as connections to the problem of algorithmic fairness.
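As a hedged sketch of the constrained minimax game described above (the symbols g for the privatizer, h for the adversary, S for the sensitive features, ℓ for the adversary's loss, and the distortion budget D are illustrative assumptions):

```latex
% GAP as a constrained minimax game: the privatizer g releases Y = g(X)
% and tries to maximize the adversary's inference loss on S, while the
% adversary h tries to minimize it; a distortion constraint preserves utility.
\min_{g}\; \max_{h}\; -\,\mathbb{E}\!\left[\ell\big(h(g(X)),\, S\big)\right]
\quad \text{subject to} \quad
\mathbb{E}\!\left[d\big(g(X),\, X\big)\right] \le D.
```

In the data-driven setting, both g and h are parameterized by neural networks and trained adversarially against each other.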

This work is a result of multiple collaborations: (a) maximal alpha leakage with J. Liao (ASU), O. Kosut (ASU), and F. P. Calmon (Harvard); (b) robust mechanism design with M. Diaz (ASU), H. Wang (Harvard), and F. P. Calmon (Harvard); and (c) GAP with C. Huang (ASU), P. Kairouz (Google), X. Chen (Stanford), and R. Rajagopal (Stanford).


Lalitha Sankar received the B.Tech. degree from the Indian Institute of Technology, Bombay, the M.S. degree from the University of Maryland, and the Ph.D. degree from Rutgers University. She is presently an Associate Professor in the School of Electrical, Computer, and Energy Engineering at Arizona State University. Prior to this, she was an Associate Research Scholar at Princeton University. Following her doctorate, Sankar received a three-year Science and Technology Teaching Postdoctoral Fellowship from the Council on Science and Technology at Princeton University. Prior to her doctoral studies, she was a Senior Member of Technical Staff at AT&T Shannon Laboratories. Her research interests include information privacy and cybersecurity in distributed and cyber-physical systems, as well as network information theory and its applications to modeling and studying large data systems. She received the NSF CAREER Award in 2014 and the IEEE Globecom 2011 Best Paper Award for her work on the privacy of side information in multi-user data systems. For her doctoral work, she received the 2007-2008 Electrical Engineering Academic Achievement Award from Rutgers University.