Uncharted Lecture Series: Using Artificial Intelligence to Protect Privacy

Michael Tschantz


Thursday, April 16, 2015
4:00 p.m., ICSI Lecture Hall

I will present two approaches to protecting privacy that use methods and models from artificial intelligence.

The first is an automated experimental design and analysis for determining when a black-box system uses information.  We reduce the problem to one of causal inference.  Leveraging this connection, we push beyond traditional information flow analysis to develop AdFisher, a statistically rigorous tool for detecting information usage in online ad selection.  With it, we find that Google's Ad Settings is opaque about some features of a user's profile, that it does provide some choice over ads, and that these choices can lead to seemingly discriminatory ads.  In particular, we found that visiting webpages associated with substance abuse changed the ads shown but not the settings page.  We also found that setting the gender to female resulted in fewer instances of an ad related to high-paying jobs than setting it to male.
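The abstract does not spell out AdFisher's exact test statistic, but the core statistical idea is randomized assignment of browser instances to treatments followed by a nonparametric significance test.  A minimal sketch of that idea, using a two-sample permutation test on fabricated illustrative counts (the data and variable names below are hypothetical, not AdFisher's actual measurements):

```python
import random

def permutation_test(group_a, group_b, trials=10000, seed=0):
    """Two-sample permutation test on the absolute difference of means.
    Returns an estimated p-value for the null hypothesis that the
    treatment (e.g., a simulated browsing profile) has no effect on
    the measured ad statistic."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(trials):
        rng.shuffle(pooled)  # relabel under the null hypothesis
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    # Add-one smoothing keeps the estimate strictly positive.
    return (extreme + 1) / (trials + 1)

# Hypothetical data: counts of one ad shown to browser instances
# randomly assigned a "male" vs. "female" gender setting.
male = [8, 7, 9, 6, 8, 7]
female = [2, 3, 1, 2, 4, 2]
p = permutation_test(male, female)
```

A small p-value here would indicate that the setting causally affects which ads are served; because assignment is randomized, no other explanation fits.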

The second is a method of auditing for compliance with purpose restrictions placed on information.  For example, the Health Insurance Portability and Accountability Act (HIPAA) requires that hospital employees use medical information only for certain purposes, such as treatment, and not for others, such as gossip.  Thus, enforcing privacy policies with formal or automated methods requires a semantics of purpose restrictions that determines whether an action is for a purpose.  Using a survey, we showed that an action is for a purpose if and only if the action is part of a plan for optimizing the satisfaction of that purpose.  We model planning using a modified version of Markov Decision Processes (MDPs) to define when a sequence of actions is only for, or not for, a purpose.  This semantics enables us to create and implement an algorithm for automated auditing.
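The planning-based semantics can be illustrated with a toy sketch (this is an illustration of the idea, not the paper's exact formalism or its modified MDPs): treat the purpose as the MDP's reward, compute optimal values by value iteration, and flag an audited action as "for the purpose" only when it is optimal, i.e., part of a reward-maximizing plan, in the state where it occurred.  The hospital scenario and all names below are hypothetical.

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, iters=200):
    """Compute optimal state values for a small discounted MDP.
    transition(s, a) returns [(next_state, probability), ...]."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {s: max(reward(s, a)
                    + gamma * sum(p * V[s2] for s2, p in transition(s, a))
                    for a in actions(s))
             for s in states}
    return V

def is_for_purpose(state, action, states, actions, transition, reward, gamma=0.9):
    """An action is 'for the purpose' iff it is optimal for the
    purpose's reward in the state where it is taken."""
    V = value_iteration(states, actions, transition, reward, gamma)
    def q(s, a):
        return reward(s, a) + gamma * sum(p * V[s2] for s2, p in transition(s, a))
    best = max(q(state, a) for a in actions(state))
    return abs(q(state, action) - best) < 1e-9

# Toy hospital MDP: reading a record advances treatment (reward 1);
# gossiping does not serve the treatment purpose (reward 0).
states = ["start", "done"]
acts = lambda s: ["read_record", "gossip"] if s == "start" else ["stop"]
transition = lambda s, a: [("done", 1.0)]
reward = lambda s, a: 1.0 if a == "read_record" else 0.0

treatment_ok = is_for_purpose("start", "read_record", states, acts, transition, reward)
gossip_ok = is_for_purpose("start", "gossip", states, acts, transition, reward)
```

Here the audit would permit `read_record` and flag `gossip`, since only the former lies on a plan that optimizes the treatment purpose.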

The first is joint work with Amit Datta, Anupam Datta, and Jeannette M. Wing.  The second is joint work with the latter two.


Michael Carl Tschantz is a researcher at the International Computer Science Institute.  He uses models from artificial intelligence and statistics to solve problems in privacy and security.  His current research includes automating information flow experiments, circumventing censorship, and securing machine learning.  He has a PhD in computer science from Carnegie Mellon University.