Quantifying protection level of a noise candidate for noise multiplication masking scheme
RIS ID
130619
Abstract
When multiplicative noise is used to perturb a set of original data, the data provider needs to ensure that data intruders are unlikely to learn the original values from the noise-multiplied data. Different attacking strategies for unveiling the original values have been recognised in the literature, and the data provider needs to ensure that the noise-multiplied data is protected against these strategies by selecting an appropriate noise-generating variable. However, the number of potential attacking strategies makes it difficult to quantify the protection level of a noise candidate. In this paper, we argue that, to quantify the protection level a noise candidate offers to the original data against an attacking strategy, the data provider might look at the average value disclosure risk the strategy produces. Correspondingly, we propose an optimal estimator which maximises the average value disclosure risk. As a result, the data provider could use the maximised average value disclosure risk as a single measure of the protection level a noise candidate offers to the original data. This measure could help the data provider select a noise-generating variable in practice.
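To make the masking scheme concrete, the sketch below (a minimal Python illustration, not the estimator proposed in the paper) generates noise-multiplied data from an assumed log-normal noise-generating variable and computes the average value disclosure risk under one naive attacking strategy; the distribution, tolerance, and variable names are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Original confidential values (illustrative data, not from the paper).
x = rng.gamma(shape=2.0, scale=30.0, size=1000)

# One noise-generating variable candidate: multiplicative noise with mean 1.
# A log-normal centred so that E[r] = 1 is chosen here purely for illustration.
sigma = 0.3
r = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=x.size)

# Noise-multiplied (masked) data that would be released.
y = x * r

# A naive attacking strategy: since E[r] = 1, an intruder may use y itself as
# a point estimate of x.  Count an original value as "disclosed" if the
# estimate falls within 10% of it, and report the average disclosure risk.
tolerance = 0.10
disclosed = np.abs(y - x) <= tolerance * x
print("average value disclosure risk (naive estimator):", disclosed.mean())
```

Under the paper's framework, the data provider would instead evaluate the risk an optimal estimator achieves, so that the maximised average value disclosure risk summarises the protection level of the noise candidate in a single number.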
Publication Details
Ma, Y., Lin, Y., Krivitsky, P. N. & Wakefield, B. (2018). Quantifying protection level of a noise candidate for noise multiplication masking scheme. Lecture Notes in Computer Science, 11126, 279-293.