[Pub][ePrint] Must you know the code of f to securely compute f?, by Mike Rosulek

When Alice and Bob want to securely evaluate a function of their shared inputs, they typically first express the function as a (boolean or arithmetic) circuit and then securely evaluate that circuit, gate-by-gate. In other words, a secure protocol for evaluating $f$ is typically obtained in a *non-black-box* way from $f$ itself. Consequently, secure computation protocols have high overhead (in communication and computation) that is directly linked to the circuit-description complexity of $f$.
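The gate-by-gate paradigm can be illustrated with a minimal sketch (my own toy example, not from the paper). A real protocol such as garbled circuits would process the same gate list cryptographically; the point is that the protocol's work scales with the circuit description of $f$:

```python
# Plain gate-by-gate evaluation of a circuit description of f.
# Each gate is (op, input_wire_a, input_wire_b, output_wire).
# Hypothetical circuit computing f(a, b) = (a AND b) XOR (a XOR b), i.e. a OR b.
circuit = [
    ("AND", 0, 1, 2),
    ("XOR", 0, 1, 3),
    ("XOR", 2, 3, 4),
]

def evaluate(circuit, inputs):
    """Evaluate the circuit one gate at a time, as a 2PC protocol would."""
    wires = dict(enumerate(inputs))
    for op, a, b, out in circuit:
        if op == "AND":
            wires[out] = wires[a] & wires[b]
        elif op == "XOR":
            wires[out] = wires[a] ^ wires[b]
    return wires[max(wires)]  # value on the final output wire

# The truth table of a OR b, recovered gate by gate:
results = [evaluate(circuit, [a, b]) for a in (0, 1) for b in (0, 1)]
```

A secure protocol built this way must touch every gate, so its cost is tied to the size of this description rather than to any intrinsic property of $f$.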

In other settings throughout cryptography, black-box constructions invariably lead to better practical efficiency than comparable non-black-box constructions. Could secure computation protocols similarly be made more practical by eliminating their dependence on a circuit representation of the target function? Or, in other words, *must one know the code of $f$ to securely evaluate $f$?*

In this work we initiate the theoretical study of this question. We show the following:

1. A complete characterization of the 2-party tasks that admit such security against semi-honest adversaries. The characterization is inspired by notions of *autoreducibility* from computational complexity theory. From this characterization, we show a class of pseudorandom functions that *cannot* be securely evaluated (when one party holds the seed and the other holds the input) without "knowing" the code of the function in question. On the positive side, we show a class of functions (related to blind signatures) that can indeed be securely computed without "knowing" the code of the function.

2. Sufficient conditions for such security against malicious adversaries, also based on autoreducibility. We show that it is not possible to prove membership in the image of a one-way function in zero-knowledge, without "knowing" the code of the one-way function.
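The flavor of autoreducibility can be shown with a toy example (mine, not from the paper): a linear function $f(x) = ax + b \bmod p$ can be evaluated at $x$ using only oracle queries at *other* points, so the evaluator never needs the code (the coefficients) of $f$:

```python
# Toy autoreducibility sketch: the secret coefficients a, b stand in for
# "the code of f", which the evaluator never sees.
P = 101  # small prime modulus, arbitrary choice

def make_oracle(a, b, forbidden):
    """Oracle for f(x) = a*x + b mod P that refuses the query at `forbidden`."""
    def f(x):
        assert x % P != forbidden % P, "autoreduction must not query x itself"
        return (a * x + b) % P
    return f

def autoreduce(f, x):
    """Recover f(x) from f at two other points: for a line, f(x) = 2*f(x+1) - f(x+2)."""
    return (2 * f(x + 1) - f(x + 2)) % P

a, b, x = 17, 42, 7
f = make_oracle(a, b, forbidden=x)
value = autoreduce(f, x)  # equals (a*x + b) % P, computed black-box
```

The characterization in the paper is in this spirit: when $f$'s value at a point is determined by black-box access to $f$ elsewhere, there is hope of securely computing it without its code.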