

Computing aggregate statistics about user data is of vital importance for a variety of services and systems, but this practice seriously undermines the privacy of users. Recent research efforts have focused on systems for aggregating and computing statistics about user data in a distributed and privacy-preserving manner. Differential privacy has played a pivotal role in this line of research: the fundamental idea is to perturb the aggregate result before release, which suffices to hide the contribution of any individual user.
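To make the perturbation idea concrete, the standard instantiation (given here in its textbook form, not as the construction of any particular system) is the Laplace mechanism: for a statistic $f$ with sensitivity $\Delta f = \max_{D \sim D'} |f(D) - f(D')|$ over neighboring databases, releasing
\[
\mathcal{M}(D) = f(D) + \mathrm{Lap}\!\left(\frac{\Delta f}{\epsilon}\right)
\]
satisfies $\epsilon$-differential privacy, where $\mathrm{Lap}(b)$ denotes a sample of Laplace noise with scale $b$.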
In this paper, we survey existing approaches to privacy-preserving data aggregation and compare them with respect to system assumptions, privacy guarantees, utility, employed cryptographic methods, and supported sanitization mechanisms. Furthermore, we explore the use of secure multiparty computation (SMC) as the basis of a general framework for privacy-preserving data aggregation. Although such an approach has long been believed feasible, we provide the first efficient cryptographic realization. In particular, we present PrivaDA, a new and general design framework for distributed differential privacy that leverages recent advances in SMC on fixed- and floating-point numbers. PrivaDA supports a variety of perturbation mechanisms, e.g., the Laplace, discrete Laplace, and exponential mechanisms. We demonstrate the efficiency of PrivaDA through a performance evaluation, and its usefulness by exploring two potential application scenarios, namely web analytics and lecture evaluations.
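For reference, the other two mechanisms named above also admit standard definitions (stated here in their generic form, independent of PrivaDA's distributed realization): the discrete Laplace (geometric) mechanism adds integer-valued noise $k$ drawn with probability proportional to $\alpha^{|k|}$, where $\alpha = e^{-\epsilon/\Delta f}$, and the exponential mechanism selects an output $r$ with probability
\[
\Pr[\mathcal{M}(D) = r] \;\propto\; \exp\!\left(\frac{\epsilon\, q(D, r)}{2\,\Delta q}\right),
\]
where $q$ is a quality score over candidate outputs and $\Delta q$ its sensitivity.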