Contextualizing Privacy Decisions for Better Prediction (and Protection)

Title: Contextualizing Privacy Decisions for Better Prediction (and Protection)
Publication Type: Conference Paper
Year of Publication: 2018
Authors: Wijesekera, P., Reardon, J., Reyes, I., Tsai, L., Chen, J.-W., Good, N., Wagner, D., Beznosov, K., & Egelman, S.
Published in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’18)

Modern mobile operating systems implement an ask-on-first-use policy to regulate applications’ access to private user data: the user is prompted to allow or deny access to a sensitive resource the first time an app attempts to use it. Prior research shows that this model may not adequately capture user privacy preferences, because subsequent requests may occur under varying contexts. To address this shortcoming, we implemented a novel privacy management system in Android, in which we use contextual signals to build a classifier that predicts user privacy preferences under various scenarios. We performed a 37-person field study to evaluate this new permission model under normal device usage. From our exit interviews and the collection of over 5 million data points from participants, we show that this new permission model reduces the error rate by 75% (i.e., fewer privacy violations) while preserving usability. We offer guidelines for how platforms can better support user privacy decision making.
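The core idea of the abstract can be illustrated with a minimal sketch: record a user's past allow/deny decisions keyed by context, predict the majority decision when the same context recurs, and fall back to prompting the user in unseen contexts. This is not the paper's actual classifier; the context features (app, permission, foreground visibility) and the class name are illustrative assumptions.

```python
from collections import defaultdict, Counter

class ContextualPermissionModel:
    """Illustrative sketch, NOT the paper's classifier: remembers a user's
    allow/deny decisions per context tuple and predicts accordingly.
    Context features (app, permission, visible) are assumptions."""

    def __init__(self):
        # Maps a context tuple to a count of past decisions in that context.
        self.history = defaultdict(Counter)

    def record(self, app, permission, visible, decision):
        """Store one observed user decision ("allow" or "deny") for a context."""
        self.history[(app, permission, visible)][decision] += 1

    def predict(self, app, permission, visible):
        """Predict the user's decision; prompt when the context is unseen."""
        seen = self.history[(app, permission, visible)]
        if not seen:
            return "prompt"  # unseen context: ask the user (ask-on-first-use fallback)
        return seen.most_common(1)[0][0]
```

For example, a user who allows location access while a maps app is in the foreground but denies it in the background would get context-sensitive predictions, rather than the single blanket grant that ask-on-first-use produces.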


This work was supported by the U.S. National Science Foundation (NSF) under grant CNS-1318680, the U.S. Department of Homeland Security (DHS) under contract FA8750-16-C0140, and the Center for Long-Term Cybersecurity (CLTC) at U.C. Berkeley. We would additionally like to thank our study participants and Refjohürs Lykkewe.

ICSI Research Group: Usable Security and Privacy