Mike Rosulek
When Alice and Bob want to securely evaluate a function of their shared inputs, they typically first express the function as a (boolean or arithmetic) circuit, and then they securely evaluate that circuit gate-by-gate. In other words, a secure protocol for evaluating $f$ is typically obtained in a non-black-box way from $f$ itself. As a consequence, secure computation protocols have high overhead (in communication and computation) that is directly linked to the circuit-description complexity of $f$.
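To make the non-black-box dependence concrete, here is an illustrative (plain, non-cryptographic) Python sketch of the circuit paradigm described above: $f$ is handed to the protocol as an explicit list of gates, and evaluation proceeds gate-by-gate. Standard secure protocols emulate exactly this loop with one secure subprotocol per gate, which is why their cost scales with the circuit size of $f$. The gate encoding and function names here are my own, chosen for illustration.

```python
def eval_circuit(gates, inputs):
    """Evaluate a boolean circuit gate-by-gate.

    gates:  list of (output_wire, op, (input_wire_a, input_wire_b))
    inputs: dict mapping input wire names to bits
    """
    wires = dict(inputs)
    for out, op, (a, b) in gates:
        if op == "AND":
            wires[out] = wires[a] & wires[b]
        elif op == "XOR":
            wires[out] = wires[a] ^ wires[b]
        else:
            raise ValueError(f"unknown gate type: {op}")
    return wires

# Example: f(x0, x1, y0) = (x0 AND y0) XOR x1, as a 2-gate circuit.
circuit = [
    ("w3", "AND", ("x0", "y0")),
    ("w4", "XOR", ("w3", "x1")),
]
result = eval_circuit(circuit, {"x0": 1, "x1": 0, "y0": 1})
# result["w4"] == 1
```

Note that the evaluator needs the full gate list, i.e., the code of $f$; a black-box protocol would have to make do with oracle access to $f$ instead.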
In other settings throughout cryptography, black-box constructions invariably lead to better practical efficiency than comparable non-black-box constructions. Could secure computation protocols similarly be made more practical by eliminating their dependence on a circuit representation of the target function? Put another way, must one know the code of $f$ to securely evaluate $f$?
In this work we initiate the theoretical study of this question. We show the following:
1. A complete characterization of the 2-party tasks which admit such security against semi-honest adversaries. The characterization is inspired by notions of autoreducibility from computational complexity theory. From this characterization, we show a class of pseudorandom functions that cannot be securely evaluated (when one party holds the seed and the other holds the input) without “knowing” the code of the function in question. On the positive side, we show a class of functions (related to blind signatures) that can indeed be securely computed without “knowing” the code of the function.
2. Sufficient conditions for such security against malicious adversaries, also based on autoreducibility. We show that it is not possible to prove membership in the image of a one-way function in zero-knowledge, without “knowing” the code of the one-way function. We also describe a variant of the GMW compiler for transforming semi-honest to malicious security while preserving the specific black-box property considered here.
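The characterizations above build on autoreducibility: roughly, $f$ is autoreducible if $f(x)$ can be computed using oracle access to $f$ at points other than $x$ itself. The following toy Python sketch (my own example, not a construction from this work) shows the classic case of a GF(2)-linear function, where $f(x) = f(x \oplus r) \oplus f(r)$ lets one blind the instance, query the oracle off-instance, and unblind.

```python
import secrets

def linear_f(x):
    # A sample GF(2)-linear function on 8-bit strings:
    # the parity of the bits of x selected by a fixed mask.
    MASK = 0b10110101
    return bin(x & MASK).count("1") % 2

def autoreduce(oracle, x, nbits=8):
    """Compute oracle(x) while querying the oracle only at points != x,
    valid whenever the oracle is GF(2)-linear."""
    while True:
        r = secrets.randbelow(1 << nbits)
        if r != 0 and r != x:  # ensures both queries x^r and r differ from x
            break
    return oracle(x ^ r) ^ oracle(r)

# The reduction recovers f(x) without ever evaluating f at x directly.
assert autoreduce(linear_f, 0b11001010) == linear_f(0b11001010)
```

Intuitively, such self-reducibility is what lets a party learn $f(x)$ without the evaluation being tied to the instance $x$, which is the flavor of structure the characterization exploits.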