
Contribution Details

Type Book Chapter
Scope Discipline-based scholarship
Title Understanding ε for Differential Privacy in Differencing Attack Scenarios
Authors
  • Narges Ashena
  • Daniele Dell'Aglio
  • Abraham Bernstein
Editors
  • Joaquin Garcia-Alfaro
  • et al.
Item Subtype Original Work
Refereed Yes
Status Published in final form
Language
  • English
Booktitle Security and Privacy in Communication Networks: 17th EAI International Conference, SecureComm 2021, Virtual Event, September 6–9, 2021, Proceedings, Part I
ISBN 978-3-030-90018-2
Number 398
Place of Publication Cham
Publisher Springer
Page Range 187–206
Date 2021
Abstract Text One of the recent notions of privacy protection is Differential Privacy (DP), with potential applications in several personal data protection settings. DP acts as an intermediate layer between a private dataset and data analysts, introducing privacy by injecting noise into the results of queries. Key to DP is the role of ε, a parameter that controls the magnitude of the injected noise and, therefore, the trade-off between utility and privacy. Choosing a proper ε value is a key challenge and a non-trivial task, as there is no straightforward way to assess the level of privacy loss associated with a given ε value. In this study, we measure the privacy loss imposed by a given ε through an adversarial model that exploits auxiliary information. We define the adversarial model and the privacy loss based on a differencing attack and the success probability of such an attack, respectively. Then, we restrict the probability of a successful differencing attack by tuning ε. The result is an approach for setting ε based on the probability of a successful differencing attack and, hence, of a privacy leak. Our evaluation finds that setting ε according to some of the approaches presented in related work does not seem to offer adequate protection against the adversarial model introduced in this paper. Furthermore, our analysis shows that the ε selected by our proposed approach provides privacy protection against both the adversarial model in this paper and the adversarial models in related work.
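
To make the abstract's setup concrete, below is a minimal Python sketch (not the authors' implementation; the names laplace_sum, differencing_attack, p_max, and the parameter values are hypothetical) of a differencing attack against a Laplace-noised sum query, showing how the attack's empirical success probability grows with ε:

  import math
  import numpy as np

  rng = np.random.default_rng(0)

  def laplace_sum(data, epsilon, sensitivity=1.0):
      # epsilon-DP sum query: Laplace noise with scale sensitivity / epsilon
      return data.sum() + rng.laplace(scale=sensitivity / epsilon)

  def differencing_attack(data, target_idx, epsilon, trials=10000):
      # Query the noisy sum with and without the target; the difference
      # estimates the target's private bit plus the combined Laplace noise.
      without = np.delete(data, target_idx)
      hits = 0
      for _ in range(trials):
          diff = laplace_sum(data, epsilon) - laplace_sum(without, epsilon)
          guess = 1 if diff >= 0.5 else 0  # threshold halfway between 0 and 1
          hits += int(guess == data[target_idx])
      return hits / trials

  data = rng.integers(0, 2, size=100)  # binary private attribute, sensitivity 1
  for eps in (0.1, 0.5, 1.0, 2.0):
      print(f"eps={eps:4.1f}  attack success ~ {differencing_attack(data, 0, eps):.3f}")

  # Inverting a tolerated success probability into an epsilon bound, using a
  # commonly cited hypothesis-testing bound for eps-DP: the attacker's success
  # is at most e**eps / (1 + e**eps), so capping success at p_max suggests
  # eps <= ln(p_max / (1 - p_max)).
  p_max = 0.75
  eps_bound = math.log(p_max / (1 - p_max))
  print(f"to keep success <= {p_max}: eps <= {eps_bound:.3f}")

The closing lines invert a success-probability cap into an ε bound; this reflects only the general shape of the idea, while the paper's own ε-selection procedure is defined against its specific adversarial model.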
Official URL https://link.springer.com/chapter/10.1007/978-3-030-90019-9_10
Digital Object Identifier 10.1007/978-3-030-90019-9_10
Other Identification Number merlin-id:21645
Additional Information eISBN: 978-3-030-90019-9