The aim of the IACR Ph.D. database is twofold. First, we want to offer an overview of Ph.D. theses already completed
in the domain of cryptology. Where possible, this should also include a subject classification, an abstract, and
access to the full text.
Second, it covers Ph.D. subjects
currently under investigation. This way, we provide a timely
map of contemporary research in cryptology.
All entries or changes need to be approved by an editor. You can contact them via phds (at) iacr.org.
S. Dov Gordon (#649)
Topic of his/her doctorate.
On Fairness in Secure Computation
Year of completion
Secure computation is a fundamental problem in modern cryptography in which multiple parties join to compute a function of their private inputs without revealing anything beyond the output of the function. A series of very strong results in the 1980s demonstrated that any polynomial-time function can be computed while guaranteeing essentially every desired security property. The only exception is the fairness property, which states that no player should receive their output from the computation unless all players receive their output. While it was shown that fairness can be achieved whenever a majority of players are honest, it was also shown that fairness is impossible to achieve in general when half or more of the players are dishonest. Indeed, it was proven that even boolean XOR cannot be computed fairly by two parties.
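The intuition behind the unfairness of naive two-party computation can be illustrated with a toy sketch (hypothetical, not taken from the thesis): whichever party must reveal information first is exposed to an adversary who simply aborts after learning the output. The function and parameter names below are illustrative only; real protocols rely on commitments and simulation-based security definitions.

```python
# Toy sketch (hypothetical): a naive two-party XOR protocol in which
# party A reveals its input bit first. A malicious B can learn the
# output and then abort, so A receives nothing -- fairness fails.

def naive_xor_exchange(a_bit: int, b_bit: int, b_aborts: bool):
    """Return (A's output, B's output) for a naive XOR exchange."""
    # Step 1: A sends its input bit to B in the clear.
    msg_to_b = a_bit
    # Step 2: B can now compute the XOR output locally.
    b_output = msg_to_b ^ b_bit
    # Step 3: B is supposed to send the result back, but may abort.
    if b_aborts:
        a_output = None  # A never learns the output: fairness violated
    else:
        a_output = b_output
    return a_output, b_output

# Honest run: both parties obtain the output.
print(naive_xor_exchange(1, 0, b_aborts=False))  # (1, 1)
# Malicious run: B learns XOR = 1 while A learns nothing.
print(naive_xor_exchange(1, 0, b_aborts=True))   # (None, 1)
```

This asymmetry cannot be patched by reordering messages: some party always speaks last, which is the core of the two-party impossibility result mentioned above.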
The fairness property is both natural and important, and as such it was one of the first questions addressed in modern cryptography (in the context of signature exchange). One contribution of this thesis is to survey the many approaches that have been used to guarantee different notions of partial fairness. We then revisit the topic of fairness within a modern security framework for secure computation. We demonstrate that, despite the strong impossibility result mentioned above, certain interesting functions can be computed fairly, even when half (or more) of the parties are malicious. We also provide a new notion of partial fairness, demonstrate feasibility of achieving this notion for a large class of functions, and show impossibility for certain functions outside this class. We consider fairness in the presence of rational adversaries, and, finally, we further study the difficulty of achieving fairness by exploring how much external help is necessary for enabling fair secure computation.