International Association for Cryptologic Research

CryptoDB

Andrew Morgan

Publications

2020, EUROCRYPT
Succinct Non-Interactive Secure Computation
We present the first maliciously secure protocol for succinct non-interactive secure two-party computation (SNISC): Each player sends just a single message whose length is (essentially) independent of the running time of the function to be computed. The protocol does not require any trusted setup, satisfies superpolynomial-time simulation-based security (SPS), and is based on (subexponential) security of the Learning With Errors (LWE) assumption. We do not rely on SNARKs or "knowledge of exponent"-type assumptions. Since the protocol is non-interactive, the relaxation to SPS security is needed, as standard polynomial-time simulation is impossible; however, a slight variant of our main protocol yields a SNISC with polynomial-time simulation in the CRS model.
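To make the communication pattern concrete, here is a minimal toy sketch of a one-message-each protocol in the semi-honest setting. ToyFHE, snisc_round, and all method names are hypothetical stand-ins (not a real library), and the sketch deliberately omits the malicious-security and SPS machinery that the paper's actual LWE-based construction adds; it only illustrates why compact homomorphic evaluation keeps the messages short.

```python
# Toy sketch (illustrative only): one message in each direction, with
# message length governed by ciphertext size rather than f's runtime.

class ToyFHE:
    """Placeholder for a compact fully homomorphic encryption scheme."""
    def keygen(self): return "pk", "sk"
    def enc(self, pk, x): return ("ct", pk, x)     # stand-in ciphertext
    def eval(self, pk, f, ct, y):                  # homomorphic evaluation of f
        _, _, x = ct
        return ("ct_out", pk, f(x, y))
    def dec(self, sk, ct_out): return ct_out[2]

def snisc_round(f, x_receiver, y_sender, fhe=ToyFHE()):
    # Message 1 (receiver -> sender): an encryption of the receiver's input.
    pk, sk = fhe.keygen()
    msg1 = (pk, fhe.enc(pk, x_receiver))
    # Message 2 (sender -> receiver): a homomorphic evaluation of f.
    # Ciphertext compactness is what makes |msg2| (essentially)
    # independent of f's running time.
    pk_recv, ct = msg1
    msg2 = fhe.eval(pk_recv, f, ct, y_sender)
    # Local output: the receiver decrypts.
    return fhe.dec(sk, msg2)

print(snisc_round(lambda x, y: x + y, 3, 4))  # -> 7
```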
2018, TCC
On the Security Loss of Unique Signatures
Andrew Morgan, Rafael Pass
We consider the question of whether the security of unique digital signature schemes can be based on game-based cryptographic assumptions using linear-preserving black-box security reductions—that is, black-box reductions for which the security loss (i.e., the ratio between the “work” of the adversary and the “work” of the reduction) is some a priori bounded polynomial. A seminal result by Coron (Eurocrypt’02) shows limitations of such reductions; however, his impossibility result and its subsequent extensions all suffer from two notable restrictions: (1) they only rule out so-called “simple” reductions, where the reduction is restricted to only sequentially invoke “straight-line” instances of the adversary; and (2) they only rule out reductions to non-interactive (two-round) assumptions. In this work, we present the first full impossibility result: our main result shows that the existence of any linear-preserving black-box reduction for basing the security of unique signatures on some bounded-round assumption implies that the assumption can be broken in polynomial time.
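To make the central quantity concrete, here is one common way to formalize the security loss; the notation (λ, Time, Succ) is illustrative and not taken from the paper.

```latex
% Illustrative formalization (notation ours): a black-box reduction R, given
% oracle access to an adversary A against the signature scheme, breaks the
% underlying assumption. Writing Time(.) for expected running time and
% Succ(.) for success probability at security parameter n, the security loss
% of R with respect to A is the ratio of their work-per-success:
\[
  \lambda_{R,A}(n) \;=\;
  \frac{\mathrm{Time}_{R^{A}}(n) \,/\, \mathrm{Succ}_{R^{A}}(n)}
       {\mathrm{Time}_{A}(n) \,/\, \mathrm{Succ}_{A}(n)} .
\]
% R is linear-preserving if a single, a priori fixed polynomial p bounds the
% loss for every adversary A:
\[
  \exists p \;\; \forall A \;\; \forall n: \quad \lambda_{R,A}(n) \;\le\; p(n).
\]
```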
2018, TCC
Achieving Fair Treatment in Algorithmic Classification
Andrew Morgan, Rafael Pass
Fairness in classification has become an increasingly relevant and controversial issue as computers replace humans in many of today’s classification tasks. In particular, a subject of much recent debate is that of finding, and subsequently achieving, suitable definitions of fairness in an algorithmic context. In this work, following the work of Hardt et al. (NIPS’16), we consider and formalize the task of sanitizing an unfair classifier $\mathcal{C}$ into a classifier $\mathcal{C}'$ satisfying an approximate notion of “equalized odds”, or fair treatment. Our main result shows how to take any (possibly unfair) classifier $\mathcal{C}$ over a finite outcome space and transform it—by just perturbing its output—according to a distribution learned using only black-box access to samples of labeled, previously classified data, producing a classifier $\mathcal{C}'$ that satisfies fair treatment; we additionally show that the derived classifier is near-optimal in terms of accuracy. We also experimentally evaluate the performance of our method.
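As a rough illustration of the post-processing idea, the sketch below (all names hypothetical) applies a given group-conditional perturbation P[g][c][c'] to $\mathcal{C}$'s output and estimates the resulting fair-treatment gap from samples. How P is actually chosen (the paper's optimization step, which also keeps the derived classifier near-optimal in accuracy) is deliberately left abstract here.

```python
# Illustrative sketch of a "derived classifier" C' that perturbs C's output
# c into c' with probability P[group][c][c'], plus an empirical gap measure:
# fair treatment / equalized odds asks that Pr[C' = c' | y, g] be
# (approximately) equal across groups g for every label y and outcome c'.
import random
from collections import defaultdict

def derived_classifier(P, group, c, rng=random):
    """Sample C'(x) given the group and C's output c, per P[group][c]."""
    dist = P[group][c]                        # dict: c' -> probability
    outcomes, weights = zip(*dist.items())
    return rng.choices(outcomes, weights=weights)[0]

def fair_treatment_gap(samples, P, outcomes):
    """samples: (group, true_label, c) triples from black-box access to C.
    Assumes every (group, label) pair occurs in samples. Returns the max
    over (y, c') of the spread of Pr[C' = c' | y, g] across groups."""
    num = defaultdict(float)
    den = defaultdict(float)
    for g, y, c in samples:                   # empirical Pr[C = c | y, g]
        num[(g, y, c)] += 1.0
        den[(g, y)] += 1.0
    groups = {g for g, _, _ in samples}
    labels = {y for _, y, _ in samples}
    def rate(g, y, cp):                       # Pr[C' = c' | y, g]
        return sum(num[(g, y, c)] / den[(g, y)] * P[g][c].get(cp, 0.0)
                   for c in outcomes)
    return max(abs(rate(g1, y, cp) - rate(g2, y, cp))
               for y in labels for cp in outcomes
               for g1 in groups for g2 in groups)

# Identity perturbation (C' = C) applied to a tiny unfair example:
samples = [("A", 0, 0), ("A", 1, 1), ("B", 0, 0), ("B", 1, 0)]
P = {g: {c: {c: 1.0} for c in (0, 1)} for g in ("A", "B")}
print(fair_treatment_gap(samples, P, outcomes=(0, 1)))  # 1.0: far from equalized odds
```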