Privacy Preserving Reputation Systems

The objective of this project is to study and design privacy preserving reputation systems. These systems compute the reputation of a target entity as a function of the feedback provided by peer entities. Additionally, they hide either the individual feedback values or the identities of the feedback providers in order to preserve the providers' privacy.

Problems

The purpose of reputation systems is to hold the users of a distributed application accountable for their behavior. The reputation of a user is computed as an aggregate of the feedback provided by other users in the system. Truthful feedback is clearly a prerequisite for computing a reputation score that accurately represents the behavior of a user. However, it has been observed that users often hesitate to provide truthful feedback, mainly out of fear of retaliation. This is particularly true when the feedback is negative. Privacy preserving reputation systems enable users to provide feedback in a private and thus uninhibited manner.

Designing privacy preserving reputation systems involves overcoming several challenges, particularly in decentralized environments and in the absence of trusted third parties. These challenges include: (1) ensuring that a user does not submit feedback outside the valid range, even though the feedback is confidential; (2) guaranteeing that a user is unable to lie about or shed their reputation, even though the user is anonymous and may use multiple pseudonyms; (3) guaranteeing that only an authorized user, i.e., a user who has transacted with the target user, can provide feedback about the target user, even though the feedback provider is anonymous.
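
As a point of reference, here is a minimal, non-private sketch of the aggregation step, assuming that reputation is simply the mean of the feedback values (the function name and the choice of the mean as the aggregate are illustrative, not a definition used by the project):

    def reputation(feedback):
        """Aggregate feedback values (e.g., in [0, 1]) into a reputation score.

        A privacy preserving reputation system computes such an aggregate
        without revealing any individual feedback value.
        """
        if not feedback:
            raise ValueError("no feedback available")
        return sum(feedback) / len(feedback)

    # Example: three peers rate a target user.
    print(reputation([0.9, 1.0, 0.4]))  # ~0.77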

Contributions

k-shares

We have developed the k-shares reputation protocol, which preserves the privacy of users against semi-honest adversaries.

The foundation of the protocol is a formal framework that unifies the concepts of trust, reputation, and privacy. A defining characteristic of the protocol is that an agent a itself selects the agents that are critical for preserving its privacy. The selection is based on a's knowledge of the trustworthiness of those agents in the context of preserving privacy. Consequently, agent a is able to quantify and maximize the probability that its privacy will be preserved. This also enables an extension that allows agents to abstain when their privacy is at risk.
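
A minimal sketch of how this quantification could look, under the assumption that a's privacy is broken only if all k selected agents misbehave, and that trust[i] denotes a's trust in agent i in the context of preserving privacy (the attack model and the greedy selection below are illustrative assumptions, not the paper's exact formulation):

    from math import prod

    def privacy_probability(trust):
        """Probability that a's privacy is preserved.

        trust[i] is a's estimate of the probability that selected agent i
        keeps its share private. Privacy is assumed broken only if ALL
        selected agents misbehave, so P(preserved) = 1 - prod(1 - t_i).
        """
        return 1.0 - prod(1.0 - t for t in trust)

    def select_agents(candidates, k):
        """Pick the k most trusted candidates (greedy maximization).

        candidates: dict mapping agent id -> trust value in [0, 1].
        """
        return sorted(candidates, key=candidates.get, reverse=True)[:k]

    # With two agents trusted at 0.9 and 0.8:
    # P(preserved) = 1 - (0.1 * 0.2) = 0.98.
    agents = {"b": 0.9, "c": 0.8, "d": 0.5}
    chosen = select_agents(agents, k=2)
    print(chosen, privacy_probability([agents[i] for i in chosen]))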

Another key characteristic is that the number of agents each agent sends shares to is limited to k << n, where n is the size of the set of agents who provide feedback. This makes the protocol efficient, requiring an exchange of only O(n) messages. The design choice is validated by experiments on real trust graphs, which show that the privacy of a large majority of agents can be assured with k as small as 2.
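
A minimal sketch of the underlying share-and-aggregate idea, assuming additive secret sharing modulo a public constant M; the splitting into k random shares plus a kept remainder is a generic construction and may differ in detail from the k-shares protocol:

    import random

    M = 2**32  # public modulus; all arithmetic is mod M

    def make_shares(feedback, k):
        """Split a private feedback value into k random shares plus a remainder.

        The k shares are sent to the k selected agents; the agent keeps the
        remainder. Any k of the k+1 pieces reveal nothing about the feedback.
        """
        shares = [random.randrange(M) for _ in range(k)]
        remainder = (feedback - sum(shares)) % M
        return shares, remainder

    # Three source agents with private feedback values.
    feedback = {"a": 7, "b": 9, "c": 2}
    kept, outbox = {}, {agent: [] for agent in feedback}

    for agent, value in feedback.items():
        others = [x for x in feedback if x != agent]
        shares, kept[agent] = make_shares(value, k=2)
        for other, share in zip(others, shares):  # send one share to each
            outbox[other].append(share)

    # Each agent publishes only the sum of what it holds; individual
    # feedback values are never revealed.
    partial_sums = {a: (kept[a] + sum(outbox[a])) % M for a in feedback}
    total = sum(partial_sums.values()) % M
    print(total)  # 18 == 7 + 9 + 2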

Malicious-k-shares

We have also presented a privacy preserving reputation protocol for the malicious adversarial model. The protocol counters attacks by malicious agents, such as the submission of illegal feedback values or erroneous computations. The characteristics that differentiate the protocol from other protocols in the literature include: (1) full decentralization, (2) no need for trusted third parties or specialized platforms, (3) low communication complexity, (4) defense against fully malicious adversaries.

We use additive homomorphic cryptosystems, zero-knowledge proofs of set membership, zero-knowledge proofs of plaintext equality, and distributed hash tables as the building blocks of our protocol. The protocol requires an exchange of O(n + log N) messages, where n is the number of users who participate in the protocol and N is the number of users in the environment. This makes the protocol significantly more efficient than comparable protocols, which require up to O(n^3) messages to be exchanged.
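
A toy sketch of the additive homomorphic building block, here instantiated with a textbook Paillier cryptosystem and tiny hard-coded primes (insecure, for illustration only; the cryptosystem and parameters used by the actual protocol may differ). Multiplying ciphertexts adds the underlying plaintexts, which lets encrypted feedback be aggregated without decrypting any individual value:

    import math, random

    # Textbook Paillier with tiny primes -- insecure, demonstration only.
    # Requires Python 3.9+ (math.lcm, modular inverse via pow(x, -1, n)).
    p, q = 293, 433
    n = p * q
    n2 = n * n
    g = n + 1
    lam = math.lcm(p - 1, q - 1)
    # mu = (L(g^lam mod n^2))^(-1) mod n, where L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

    def encrypt(m):
        """Encrypt feedback value m under the public key (n, g)."""
        while True:
            r = random.randrange(1, n)
            if math.gcd(r, n) == 1:
                break
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        """Decrypt with the private key (lam, mu)."""
        return ((pow(c, lam, n2) - 1) // n * mu) % n

    # Homomorphic aggregation: the product of ciphertexts decrypts
    # to the sum of the plaintext feedback values.
    feedback = [7, 9, 2]
    aggregate = 1
    for f in feedback:
        aggregate = (aggregate * encrypt(f)) % n2
    print(decrypt(aggregate))  # 18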

Our experiments on three large real trust graphs (Advogato, Squeak, Robots) demonstrate the validity of the two hypotheses on which the Malicious-k-shares protocol is based: (1) a source agent can preserve its privacy by trusting only k fellow source agents, where k is much smaller than n; (2) accurate reputation values can be computed even if the source agents whose privacy cannot be preserved abstain and thus do not provide their feedback values.

Contributors

Internal

External

Selected Publications