The main results of this work are new public-key
encryption schemes that, under the quadratic residuosity (QR)
assumption (or Paillier's decisional composite residuosity (DCR)
assumption), achieve key-dependent message security as well as
high resilience to secret key leakage and high resilience to the
presence of auxiliary input information.

In particular, under what we call the {\it subgroup indistinguishability assumption}, of which the QR and DCR are special cases, we can construct a scheme that has:

1. Key-dependent message (circular) security. Achieves security even when encrypting affine functions of its own secret key (in fact, w.r.t. affine "key-cycles" of predefined length). Our scheme also meets the requirements for extending key-dependent message security to broader classes of functions beyond affine ones, using the techniques of [BGK, ePrint09] or [BHHI, ePrint09].

2. Leakage resiliency. Remains secure even if any adversarial low-entropy (efficiently computable) function of the secret-key is given to the adversary. A proper selection of parameters allows for a "leakage rate" of (1-o(1)) of the length of the secret-key.

3. Auxiliary-input security. Remains secure even if any sufficiently hard to invert (efficiently computable) function of the secret-key is given to the adversary.

Our scheme is the first to achieve key-dependent message security and auxiliary-input security based on the DCR and QR assumptions. All previous schemes achieving these properties relied on either the DDH or LWE assumptions. Our scheme is also the first to achieve leakage resiliency for a leakage rate of (1-o(1)) of the secret-key length under the QR assumption. Leakage-resilient schemes under the DCR and QR assumptions (for the restricted case of a composite modulus that is a product of safe primes) were implied by the work of [NS, Crypto09], using hash proof systems. However, known constructions of hash proof systems under the QR assumption only allowed for a leakage rate of o(1) of the secret-key length.

A cryptographic primitive is leakage-resilient if it
remains secure even if an adversary can learn a bounded amount
of arbitrary information about the computation with every
invocation. As a consequence, the physical implementation of a
leakage-resilient primitive is secure against every side-channel
as long as the amount of information leaked per invocation is
bounded. In this paper we prove positive and negative results about the
feasibility of constructing leakage-resilient pseudorandom functions
and permutations (i.e., block ciphers). Our results are threefold:

1. We construct (from any standard PRF) a PRF which satisfies a relaxed notion of leakage-resilience where (1) the leakage function is fixed (not adaptively chosen with each query), and (2) the computation is split into several steps which leak individually (a "step" is an invocation of the underlying PRF).

2. We prove that a Feistel network with a super-logarithmic number of rounds, each instantiated with a leakage-resilient PRF, is a leakage-resilient PRP. This reduction also holds for the non-adaptive notion just discussed; we thus get a block cipher which is leakage-resilient against non-adaptive leakage.

3. We propose generic side-channel attacks against Feistel networks. The attacks are generic in the sense that they work for any round functions (e.g., uniformly random functions) and only require some simple leakage from the inputs to the round functions. For example, we show how to invert an $r$-round Feistel network over $2n$ bits making $4\cdot (n+1)^{r-2}$ forward queries, if with each query we are also given, as leakage, the Hamming weight of the inputs to the $r$ round functions. This complements the result from the previous item showing that a super-constant number of rounds is necessary.
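The leakage model of item 3 can be made concrete with a toy Feistel network whose evaluation also reports the Hamming weight of each round-function input. This is only a sketch of the setting, with round functions modeled as keyed hashes; it is not the paper's attack, and all names and parameters are illustrative:

```python
# Toy r-round Feistel network over 2n bits, with a Hamming-weight leakage
# oracle on the round-function inputs (a sketch of the setting, not the attack).
import hashlib

N_BITS = 16                      # half-block size n; the permutation is over 2n bits
MASK = (1 << N_BITS) - 1

def round_fn(key, rnd, x):
    # Model each round function as an independent random-looking function
    # (stand-in for the "uniformly random functions" the attack allows).
    h = hashlib.sha256(f"{key}:{rnd}:{x}".encode()).digest()
    return int.from_bytes(h[:2], "big") & MASK

def feistel_encrypt(key, rounds, left, right, leak=None):
    for rnd in range(rounds):
        if leak is not None:
            leak.append(bin(right).count("1"))   # Hamming weight of round input
        left, right = right, left ^ round_fn(key, rnd, right)
    return left, right

def feistel_decrypt(key, rounds, left, right):
    for rnd in reversed(range(rounds)):
        left, right = right ^ round_fn(key, rnd, left), left
    return left, right

leakage = []
ct = feistel_encrypt("k", 6, 0x1234, 0xABCD, leak=leakage)
assert feistel_decrypt("k", 6, *ct) == (0x1234, 0xABCD)
assert len(leakage) == 6         # one Hamming weight per round-function input
```

Each forward query thus yields, besides the ciphertext, the $r$ Hamming weights that the attack in item 3 exploits.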

This talk is a combination of the following two papers:

Side-channel attacks have often proven to have a devastating effect on
the security of cryptographic schemes. In this paper we address the
problem of storing cryptographic keys and computing on them in a manner
that preserves security even when the adversary is able to obtain
information leakage during the computation on the key.

Using the recently achieved fully homomorphic encryption, we show how to encapsulate a key and repeatedly evaluate arbitrary functions on it so that no adversary can gain any useful information from a large class of side-channel attacks. We work in the model of Micali and Reyzin -- assuming that only the active part of memory during computation leaks information. Similarly to previous works, our construction makes use of a single "leak-proof" hardware token that samples from a globally fixed distribution that does not depend on the key. If the amount of computation that will be performed on the key is known in advance, then our construction requires no leak-proof tokens at all -- the values produced by the token can be pre-computed and then accessed sequentially. In addition, we describe a simple variant of our scheme that splits the key between two devices, and preserves the secrecy of the key even if the memory contents of one device are leaked completely. This provides meaningful protection against the powerful cold boot attacks of Halderman \etal (USENIX Security 08), where the complete memory contents of a device can be recovered.

In contrast to previous general compilers that achieve resilience to side-channel attacks, we allow leakage functions to be arbitrary polynomial size circuits with a sufficiently short output, and our construction does not require the amount of computation to grow with the amount of leakage that the adversary is able to obtain.

We present a general method to compile any cryptographic algorithm into
one which resists side-channel attacks of the {\it only computation
leaks information} variety for an unbounded number of executions. Our
method uses as a building block a semantically secure bit-encryption
scheme with the following additional operations: key refreshing,
oblivious generation of ciphertexts, ciphertext re-generation, and
blinded homomorphic evaluation of one single complete gate (e.g., NAND).
Furthermore, the security properties of the encryption scheme should
withstand bounded leakage incurred while performing each of the above
operations.

We show how to implement such an encryption scheme under the DDH intractability assumption and the existence of a simple secure hardware component. The hardware component is independent of the encryption scheme's secret key. The encryption scheme resists leakage attacks that are polynomial-time computable functions whose output size is bounded by a constant fraction of the secret-key size.
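One DDH-based operation from the list above, ciphertext re-generation, can be sketched as ElGamal re-randomization: multiplying a ciphertext by a fresh encryption of the identity yields an independently distributed ciphertext for the same plaintext. The sketch below uses a toy Schnorr group and is illustrative only; the paper's actual scheme and its leakage analysis are more involved:

```python
import random

# Toy Schnorr group: p = 2q+1 with q prime; g = 4 generates the order-q subgroup.
# Parameters are far too small for real use.
p, q = 467, 233
g = 4

def keygen():
    x = random.randrange(1, q)               # secret key
    return x, pow(g, x, p)                   # (sk, pk)

def encrypt(pk, m):                          # m encoded as a subgroup element
    r = random.randrange(1, q)
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def rerandomize(pk, ct):
    # Multiply componentwise by a fresh encryption of 1: same plaintext,
    # fresh randomness -- "re-generation" of the ciphertext.
    c1, c2 = ct
    s = random.randrange(1, q)
    return (c1 * pow(g, s, p)) % p, (c2 * pow(pk, s, p)) % p

def decrypt(sk, ct):
    c1, c2 = ct
    return (c2 * pow(pow(c1, sk, p), -1, p)) % p

sk, pk = keygen()
m = pow(g, 7, p)
ct = encrypt(pk, m)
ct2 = rerandomize(pk, ct)
assert ct2 != ct and decrypt(sk, ct2) == decrypt(sk, ct) == m
```

Under DDH, the re-randomized ciphertext is distributed identically to a fresh encryption of the same message, which is what makes this operation useful inside a leakage-resilient compiler.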

At the heart of many recent lattice-based cryptographic schemes is a
polynomial-time algorithm that, given a `high-quality' basis,
generates a lattice point according to a Gaussian-like distribution.
Unlike most other operations in lattice-based cryptography, however,
the known algorithm for this task (due to Gentry, Peikert, and
Vaikuntanathan; STOC 2008) is rather inefficient, and is inherently
sequential.
We present a new Gaussian sampling algorithm for lattices that is
\emph{efficient} and \emph{highly parallelizable}. We also show that
in most cryptographic applications, the algorithm's efficiency comes
at almost no cost in asymptotic security. At a high level, our
algorithm resembles the ``perturbation'' heuristic proposed as part of
NTRUSign (Hoffstein \etal, CT-RSA 2003), though the details are quite
different. To our knowledge, this is the first algorithm and rigorous
analysis demonstrating the security of a perturbation-like technique.
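For intuition, the one-dimensional building block -- sampling from the discrete Gaussian $D_{\mathbb{Z},s}$ by rejection -- can be sketched as below. The algorithms discussed here work over general lattices given a basis, which this toy integer sampler does not capture:

```python
import math, random

def sample_discrete_gaussian(s, tail=10):
    """Rejection-sample from the discrete Gaussian over the integers with
    density proportional to exp(-pi * x^2 / s^2), truncated to |x| <= tail*s
    (the mass beyond that cutoff is negligible)."""
    bound = int(math.ceil(tail * s))
    while True:
        x = random.randint(-bound, bound)    # uniform proposal
        if random.random() < math.exp(-math.pi * x * x / (s * s)):
            return x                         # accept with Gaussian probability

samples = [sample_discrete_gaussian(4.0) for _ in range(1000)]
assert all(abs(x) <= 40 for x in samples)
assert any(x != 0 for x in samples)          # distribution is not degenerate
```

Extending this to a lattice coset, efficiently and in parallel, is exactly the problem the new algorithm addresses.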

Gentry proposed a fully homomorphic public key
encryption scheme that uses ideal lattices. He based the
security of his scheme on the hardness of two problems: an
average-case decision problem over ideal lattices, and the
sparse (or "low-weight") subset sum problem (SSSP).

We provide a key generation algorithm for Gentry's scheme that generates ideal lattices according to a "nice" average-case distribution. Then, we prove a worst-case / average-case connection that bases Gentry's scheme (in part) on the quantum hardness of the shortest independent vector problem (SIVP) over ideal lattices in the worst-case. (We cannot remove the need to assume that the SSSP is hard.) Our worst-case / average-case connection is the first where the average-case lattice is an ideal lattice, which seems to be necessary to support the security of Gentry's scheme.

The search for encryption schemes that allow the evaluation of
functions (or circuits) over encrypted data has attracted a lot
of attention since the seminal work on this subject by Rivest,
Adleman and Dertouzos in 1978.

In this work we define a theoretical object, chained encryption schemes, which allow a compact evaluation of polynomials of degree $d$ over encrypted data (without function privacy). Chained encryption schemes are generically constructed by concatenating cryptosystems with the appropriate homomorphic properties, which are common in lattice-based encryption. As a particular instantiation we propose a chained encryption scheme whose IND-CPA security is based on a worst-case/average-case reduction to uSVP.
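Chaining composes cryptosystems that are each homomorphic for part of the computation. As a self-contained illustration of the kind of homomorphic property being composed (not the paper's lattice-based instantiation), here is a toy Paillier instance supporting addition and scalar multiplication under encryption; parameters are tiny and insecure:

```python
import math, random

# Toy Paillier: additively homomorphic encryption modulo n = p*q.
p, q = 211, 223
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1                                    # standard choice g = 1 + n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    u = pow(c, lam, n2)                      # u = 1 + (m * lam mod n) * n
    return ((u - 1) // n) * pow(lam, -1, n) % n

c1, c2 = encrypt(17), encrypt(25)
assert decrypt((c1 * c2) % n2) == 42         # homomorphic addition
assert decrypt(pow(c1, 3, n2)) == 51         # homomorphic scalar multiplication
```

A degree-1 polynomial can thus be evaluated entirely under encryption; chained schemes extend this to degree $d$ by concatenating cryptosystems with complementary homomorphic properties.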

Homomorphic encryption (HE) schemes enable computing functions on
encrypted data, by means of a public $\Eval$ procedure that can be
applied to ciphertexts. But the evaluated ciphertexts so generated
may differ from freshly encrypted ones. This brings up the question
of whether one can keep computing on evaluated ciphertexts. An
\emph{$i$-hop} homomorphic encryption is a scheme where $\Eval$ can be
called on its own output up to $i$~times, while still being able to
decrypt the result. A \emph{multi-hop} homomorphic encryption is a
scheme which is $i$-hop for all~$i$. In this work we study $i$-hop
and multi-hop schemes in conjunction with the properties of
function-privacy (i.e., $\Eval$'s output hides the function) and
compactness (i.e., the output of $\Eval$ is short). We provide formal
definitions and describe several constructions.

First, we observe that ``bootstrapping'' techniques can be used to convert any (1-hop) homomorphic encryption scheme into an $i$-hop scheme for any~$i$, and the result inherits the function-privacy and/or compactness of the underlying scheme. However, if the underlying scheme is not compact (such as schemes derived from Yao circuits) then the complexity of the resulting $i$-hop scheme can be as high as $n^{O(i)}$.

We then describe a specific DDH-based multi-hop homomorphic encryption scheme that does not suffer from this exponential blowup. Although not compact, this scheme has complexity linear in the size of the composed function, independently of the number of hops. The main technical ingredient in this solution is a \emph{re-randomizable} variant of the Yao circuits. Namely, given a garbled circuit, anyone can re-garble it in such a way that even the party that generated the original garbled circuit cannot recognize it. This construction may be of independent interest.

Motivated by the question of basing cryptographic protocols on stateless
tamper-proof hardware tokens, we revisit the question of unconditional
two-prover zero-knowledge proofs for NP. We show that such protocols
exist in the interactive PCP model of Kalai and Raz (ICALP '08), where
one of the provers is replaced by a PCP oracle. This strengthens the
feasibility result of Ben-Or, Goldwasser, Kilian, and Wigderson (STOC
'88) which requires two stateful provers. In contrast to previous
zero-knowledge PCPs of Kilian, Petrank, and Tardos (STOC '97), in our
protocol both the prover and the PCP oracle are efficient given an NP
witness.
Our main technical tool is a new primitive that we call interactive
locking, an efficient realization of an unconditionally secure
commitment scheme in the interactive PCP model. We implement interactive
locking by adapting previous constructions of interactive hashing
protocols to our setting, and also provide a direct construction which
uses a minimal amount of interaction and improves over our interactive
hashing based constructions.
Finally, we apply the above results towards showing the feasibility of
basing unconditional cryptography on stateless tamper-proof hardware
tokens, and obtain the following results: *) We show that if tokens can
be used to encapsulate other tokens, then there exist unconditional and
statistically secure (in fact, UC secure) protocols for general secure
computation. *) Even if token encapsulation is not possible, there are
unconditional and statistically secure commitment protocols and
zero-knowledge proofs for NP. *) Finally, if token encapsulation is not
possible, then no protocol can realize statistically secure oblivious
transfer.

This paper presents a fully secure functional encryption (FE) scheme for
a wide class of relations, which are specified by non-monotone access
structures combined with inner-product relations. The security is proven
under a simple and well-examined assumption, the decisional linear
(DLIN) assumption, in the standard model. The proposed FE scheme covers,
as special cases, (1) the key-policy (KP) and ciphertext-policy (CP)
attribute-based encryption (ABE) schemes with non-monotone access
structures, and (2) the FE schemes with zero and non-zero inner-product
relations.

We provide the first construction of a hash function into ordinary elliptic
curves that is indifferentiable from a random oracle, based on Icart's
deterministic encoding from Crypto 2009. While almost as efficient as
Icart's encoding, this hash function can be plugged into any cryptosystem
that requires hashing into elliptic curves, while not compromising proofs
of security in the random oracle model.

We also describe a more general (but less efficient) construction that works for a large class of encodings into elliptic curves, for example the Shallue-Woestijne-Ulas (SWU) algorithm. Finally we describe the first deterministic encoding algorithm into elliptic curves in characteristic $3$.
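Icart's encoding itself is short enough to sketch: over $\mathbb{F}_p$ with $p \equiv 2 \pmod 3$, cubing is a bijection, and the map below always lands on the curve $y^2 = x^3 + ax + b$. The curve parameters and the hash-based derivation of $u$ are illustrative assumptions, far too small for real use:

```python
import hashlib

# Toy curve y^2 = x^3 + a*x + b over F_p with p ≡ 2 (mod 3), the condition
# under which cubing is a bijection and Icart's deterministic map applies.
p, a, b = 10007, 1, 3

def inv(x):  return pow(x, p - 2, p)
def cbrt(x): return pow(x, (2 * p - 1) // 3, p)   # cube root, since p ≡ 2 mod 3

def icart(u):
    u %= p
    if u == 0:
        return None                               # u = 0 is handled separately
    v = (3 * a - pow(u, 4, p)) * inv(6 * u) % p
    x = (cbrt((v * v - b - pow(u, 6, p) * inv(27)) % p)
         + pow(u, 2, p) * inv(3)) % p
    y = (u * x + v) % p
    return x, y

def hash_to_curve(msg):
    # Hypothetical derivation of the field element u from the message.
    u = int.from_bytes(hashlib.sha256(msg).digest(), "big") % p
    return icart(u if u != 0 else 1)

x, y = hash_to_curve(b"hello")
assert (y * y - (x ** 3 + a * x + b)) % p == 0    # the output is on the curve
```

Note that this composition hash-then-encode is exactly what the paper analyzes: the encoding alone is not indifferentiable from a random oracle, and the contribution is a construction that is.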

Secure two-party authentication and key exchange are fundamental
problems. Traditionally, the parties authenticate each other by means
of their identities, using a public-key infrastructure (PKI). However,
this is not always feasible or desirable: an appropriate PKI may not be
available, or the parties may want to remain anonymous, and not reveal
their identities.

To address these needs, we introduce the notions of credential-authenticated identification (CAID) and key exchange (CAKE), where the compatibility of the parties' \emph{credentials} is the criterion for authentication, rather than the parties' \emph{identities} relative to some PKI. We formalize CAID and CAKE in the universal composability (UC) framework, with natural ideal functionalities, and we give practical, modularly designed protocol realizations. We prove all our protocols UC-secure in the adaptive corruption model with erasures, assuming a common reference string (CRS). The proofs are based on standard cryptographic assumptions and do not rely on random oracles.

CAKE includes password-authenticated key exchange (PAKE) as a special case, and we present two new PAKE protocols. The first is interesting in that it uses completely different techniques than known practical PAKE protocols, and also achieves UC-security in the adaptive corruption model with erasures; the second is the first practical PAKE protocol that provides a meaningful form of resilience against server compromise without relying on random oracles.

The problem of password-authenticated key exchange (PAKE) has been
extensively studied. However to date, no construction is known for a
PAKE protocol that is secure in the plain model in the setting of
concurrent self-composition, where polynomially many protocol sessions
with the same password may be executed on the network in an arbitrarily
interleaved manner, and where the adversary may corrupt any number of
parties. In this paper, we resolve this open problem. In particular, we
give the first construction of a PAKE protocol that is secure (with
respect to the definition of Goldreich and Lindell) in the concurrent
setting in the plain model. We stress that we allow any unbounded
polynomially-many concurrent sessions.

We give the first non-trivial positive standard model instantiation
result about the influential RSA-OAEP encryption scheme of Bellare and
Rogaway (Eurocrypt~'94), and indeed about \emph{any} encryption or
signature scheme appearing in the PKCS \#1 v2.1 standard.

Specifically, we show that $f$-OAEP is semantically secure if the trapdoor function $f$ is \emph{lossy} in the sense of Peikert and Waters (STOC~'08) and the initial hash function is \emph{$t$-wise independent} for appropriate $t$ (in particular, neither hash function is modeled as a random oracle). Furthermore, under the $\Phi$-Hiding Assumption of Cachin et al. (Eurocrypt~'99), we show that RSA is lossy (by a factor $1/e$, where $e$ is the public RSA exponent). Taken together, these results refute "uninstantiability" of RSA-OAEP in the sense of Canetti et al.~(STOC~'98); \emph{i.e.}, there exists (a family of) efficiently computable functions that can securely replace its random oracles under reasonable assumptions. In particular, they increase our confidence that chosen-plaintext attacks are unlikely to be found against RSA-OAEP. In contrast, OAEP's predecessor in PKCS \#1 v1.5 was shown to be vulnerable to chosen-plaintext attacks by Coron et al.~(Eurocrypt~'00). Our first result is actually much more general: for \emph{any} "padding-based" encryption scheme whose trapdoor permutation is lossy, we show that in order to prove semantic security it suffices to argue that the padding function "fools" small-output distinguishers on a class of high-entropy input distributions. We then show that the first round of the OAEP padding satisfies this "fooling" condition.
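For readers unfamiliar with the padding's structure, the two Feistel-like rounds of OAEP can be sketched as below. The hash functions $G$ and $H$ are instantiated with SHA-256-based stand-ins, parameters are illustrative, and no trapdoor permutation $f$ is applied:

```python
import hashlib, os

K0 = 32   # bytes of randomness r
K1 = 16   # bytes of the zero redundancy string

def G(r, n):
    # Expand r to n bytes -- a toy counter-mode mask-generation function.
    out, c = b"", 0
    while len(out) < n:
        out += hashlib.sha256(r + c.to_bytes(4, "big")).digest()
        c += 1
    return out[:n]

def H(s):
    return hashlib.sha256(s).digest()[:K0]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def oaep_pad(m):
    r = os.urandom(K0)
    data = m + b"\x00" * K1                  # m || 0^{k1}
    s = xor(data, G(r, len(data)))           # first Feistel-like round
    t = xor(r, H(s))                         # second Feistel-like round
    return s + t                             # f would be applied to s || t

def oaep_unpad(padded, mlen):
    s, t = padded[:-K0], padded[-K0:]
    r = xor(t, H(s))
    data = xor(s, G(r, len(s)))
    assert data[mlen:] == b"\x00" * K1       # redundancy check
    return data[:mlen]

m = b"attack at dawn"
assert oaep_unpad(oaep_pad(m), len(m)) == m
```

The first round, $s = (m \| 0^{k_1}) \oplus G(r)$, is the step shown to "fool" small-output distinguishers on high-entropy inputs when $G$ is $t$-wise independent.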

We introduce the notion of an extractable hash proof system.
Essentially, this is a special kind of non-interactive zero-knowledge
proof of knowledge system where the secret keys may be generated in one
of two modes to allow for either simulation or extraction.

-- We show how to derive efficient CCA-secure encryption schemes via extractable hash proofs in a simple and modular fashion. Our construction clarifies and generalizes the recent factoring-based cryptosystem of Hofheinz and Kiltz (Eurocrypt 09), and is reminiscent of an approach proposed by Rackoff and Simon (Crypto 91). We show how to instantiate extractable hash proof systems for hard search problems, notably factoring and computational Diffie-Hellman (CDH). Using our framework, we obtain the first CCA-secure encryption scheme based on CDH where the public key consists of a constant number of group elements, as well as a more modular and conceptually simpler (though less efficient) variant of the Hofheinz-Kiltz cryptosystem.

-- We introduce adaptive weakly trapdoor functions, a relaxation of the adaptive trapdoor functions considered by Kiltz, Mohassel and O'Neill (Eurocrypt '10) that nonetheless implies CCA-secure encryption schemes. We show how to construct such functions using extractable hash proofs, which in turn yields realizations from the hardness of factoring and CDH. This is the first general assumption that implies CCA-secure encryption while simultaneously admitting instantiations from the hardness of factoring, CDH, and lattice-based assumptions.

This paper reports on the factorization of the 768-bit number RSA-768 by
the number field sieve factoring method and discusses some implications
for RSA.

Let $pk=(N,e)$ be an RSA public key with corresponding secret key
$sk=(p,q,d,d_p,d_q, q_p^{-1})$. Assume that we obtain partial error-free
information of $sk$, e.g., assume that we obtain half of the most
significant bits of $p$. Then there are well-known algorithms to recover
the full secret key. As opposed to these algorithms that allow for {\em
correcting erasures} of the key $sk$, we present for the first time a
heuristic probabilistic algorithm that is capable of {\em correcting
errors} in $sk$ provided that $e$ is small. That is, on input of a full
but error-prone secret key $\tilde{sk}$ we reconstruct the original $sk$
by correcting the faults.

More precisely, consider an error rate of $\delta \in [0,\frac 1 2)$, where we flip each bit in $sk$ with probability $\delta$, resulting in an erroneous key $\tilde{sk}$. Our Las Vegas-type algorithm recovers $sk$ from $\tilde{sk}$ in expected time polynomial in $\log N$ with success probability close to 1, provided that $\delta < 0.237$.
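The erasure-correction setting that this work generalizes can be illustrated by the classic branch-and-prune reconstruction of $p$ and $q$ from $N$ and randomly known key bits. The sketch below handles erasures only (known bits are correct; unknown bits are marked), not the bit-flip errors addressed in the paper, and uses tiny illustrative numbers:

```python
# Branch-and-prune reconstruction of p and q from N and partially *erased*
# key bits. Known bits are 0/1; erased bits are None. Bits are LSB first.
def reconstruct(N, p_bits, q_bits):
    nbits = len(p_bits)
    cands = [(1, 1)]                          # p and q are odd
    for i in range(1, nbits):
        nxt = []
        for p_lo, q_lo in cands:
            for pb in ([p_bits[i]] if p_bits[i] is not None else [0, 1]):
                for qb in ([q_bits[i]] if q_bits[i] is not None else [0, 1]):
                    p_new = p_lo | (pb << i)
                    q_new = q_lo | (qb << i)
                    # Prune: the product must match N on the low i+1 bits.
                    if (p_new * q_new) % (1 << (i + 1)) == N % (1 << (i + 1)):
                        nxt.append((p_new, q_new))
        cands = nxt
    return [(p, q) for p, q in cands if p * q == N]

# Example: N = 899 = 29 * 31 (11101b and 11111b), with some bits erased.
p_bits = [1, 0, None, 1, 1]                   # bit 2 of p unknown
q_bits = [1, None, 1, 1, None]                # bits 1 and 4 of q unknown
assert (29, 31) in reconstruct(29 * 31, p_bits, q_bits)
```

With errors instead of erasures, every bit is suspect, so the pruning rule above no longer applies directly; the paper's algorithm instead keeps candidates whose disagreement with $\tilde{sk}$ stays below a statistical threshold.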

We present improved cryptanalysis of two second-round SHA-3 candidates:
the AES-based hash functions ECHO and Grostl. We explain methods for
building better differential trails for ECHO by increasing the
granularity of the truncated differential paths previously considered.
In the case of Grostl, we describe a new technique, the internal
differential attack, which shows that when using parallel computations
designers should also consider the differential security between the
parallel branches. Then, we exploit the recently introduced
start-from-the-middle and Super-Sbox attacks, which proved to be very
efficient when attacking AES-like permutations, to make very efficient
use of the available degrees of freedom. Finally, we obtain the best
attacks known so far for both ECHO and Grostl. In
particular, we are able to mount a distinguishing attack for the full
Grostl-256 compression function.

The privacy of most GSM phone conversations is currently protected by
the more than 20-year-old A5/1 and A5/2 stream ciphers, which have
repeatedly been shown to be cryptographically weak. They will soon be replaced by the
new A5/3 (and the soon to be announced A5/4) algorithm based on the
block cipher KASUMI, which is a modified version of MISTY. In this paper
we describe a new type of attack called a sandwich attack, and use it to
construct a simple distinguisher for 7 of the 8 rounds of KASUMI with an
amazingly high probability of 2^{-14}. By using this distinguisher and
analyzing the single remaining round, we can derive the complete 128 bit
key of the full KASUMI by using only 4~related keys, 2^{26} data, 2^{30}
bytes of memory, and 2^{32} time. These complexities are so small that
we have actually simulated the attack in less than two hours on a single
PC, and experimentally verified its correctness and complexity.
Interestingly, neither our technique nor any other published attack can
break MISTY in less than the 2^{128} complexity of exhaustive search,
which indicates that the changes made by ETSI's SAGE group in moving
from MISTY to KASUMI resulted in a much weaker cipher.

We present the UC/c framework, a general definition for secure and
incoercible multi-party protocols. Our framework makes it possible to model
arbitrary reactive protocol tasks (by specifying an ideal functionality)
and comes with a universal composition theorem. We show that given
natural setup assumptions, we can construct incoercible two-party
protocols realising arbitrary functionalities (with respect to static
adversaries).

Concurrent non-malleable zero-knowledge (NMZK) considers the concurrent
execution of zero-knowledge protocols in a setting where the attacker
can simultaneously corrupt multiple provers and verifiers. Barak,
Prabhakaran and Sahai (FOCS'06) recently provided the first construction
of a concurrent NMZK protocol without any set-up assumptions. Their
protocol, however, is only computationally sound (a.k.a., a concurrent
NMZK \emph{argument}). In this work we present the first construction of
a concurrent NMZK \emph{proof} without any set-up assumptions. Our
protocol requires $O(n)$ rounds assuming one-way functions, or
$\tilde{O}(\log n)$ rounds assuming collision-resistant hash functions.

It is well known that proving the security of a key agreement protocol
(even in a special case where the protocol transcript looks random to an
outside observer) is at least as difficult as proving $P \not = NP$.
Another (seemingly unrelated) statement in cryptography is the existence
of two or more non-adaptively secure pseudo-random functions that do not
become adaptively secure under sequential or parallel composition. In
2006, Pietrzak showed that {\em at least one} of these two seemingly
unrelated statements is true. In other words, either key agreement
exists or there exists an adaptively insecure composition of
non-adaptively secure functions. Pietrzak's result was
significant since it showed a surprising connection between the worlds
of public-key (i.e., "cryptomania") and private-key cryptography (i.e.,
"minicrypt"). In this paper we show that this duality is far stronger:
we show that {\em at least one} of these two statements must also be
false. In other words, we show their {\em equivalence}.

More specifically, Pietrzak's paper shows that if sequential composition of two non-adaptively secure pseudo-random functions is not adaptively secure, then there exists a key agreement protocol. However, Pietrzak's construction implies a slightly stronger fact: If sequential composition does not imply adaptive security (in the above sense), then a {\em uniform-transcript} key agreement protocol exists, where by uniform-transcript we mean a key agreement protocol where the transcript of the protocol execution is indistinguishable from uniform to eavesdroppers. In this paper, we complete the picture and show the reverse direction as well as a strong equivalence between these two notions. More specifically, as our main result, we show that if there exists {\em any} uniform-transcript key agreement protocol, then composition does not imply adaptive security. Our result holds for both parallel and sequential composition in the contexts of general-composition and self-composition. Our implication holds based on virtually all known key agreement protocols, and can also be based on general complexity assumptions such as the existence of dense trapdoor permutations.

We introduce and formalize the notion of Verifiable Computation, which
enables a computationally weak client to "outsource" the computation of
a function F on various dynamically-chosen inputs x_1,...,x_k to one or
more workers. The workers return the result of the function evaluation,
e.g., y_i=F(x_i), as well as a proof that the computation of F was
carried out correctly on the given value x_i. The primary constraint is
that the verification of the proof should require substantially less
computational effort than computing F(x_i) from scratch.

We present a protocol that allows the worker to return a computationally-sound, non-interactive proof that can be verified in O(m) time, where m is the bit-length of the output of F. The protocol requires a one-time pre-processing stage by the client which takes O(|C|) time, where C is the smallest known Boolean circuit computing F. Unlike previous work in this area, our scheme also provides (at no additional cost) input and output privacy for the client, meaning that the workers do not learn any information about the x_i or y_i values.
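Such a scheme can be phrased as four algorithms, roughly (KeyGen, ProbGen, Compute, Verify). The toy sketch below fixes only this interface; it "verifies" by recomputation, so it has none of the efficiency or privacy properties of the actual protocol (all names are illustrative):

```python
# Interface sketch only: a real scheme replaces the bodies below with
# cryptographic encodings so that verify() is much cheaper than running F.
def keygen(F):
    pk = F                      # worker's evaluation key
    sk = F                      # client's verification key
    return pk, sk

def probgen(sk, x):
    sigma_x, tau_x = x, x       # encoded input for worker, decoding info for client
    return sigma_x, tau_x

def compute(pk, sigma_x):
    y = pk(sigma_x)
    proof = y                   # real schemes attach a succinct proof here
    return y, proof

def verify(sk, tau_x, y, proof):
    return y == sk(tau_x)       # toy check by recomputation
```

In the real construction, Verify runs in time proportional to the output length, not to the circuit for F.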

Following Gennaro, Gentry, and Parno (Cryptology ePrint Archive
2009/547), we use fully homomorphic encryption to design improved
schemes for delegating computation. In such schemes, a {\em delegator}
outsources the computation of a function $F$ on many, dynamically chosen
inputs $x_i$ to a {\em worker} in such a way that it is infeasible for
the worker to make the delegator accept a result other than $F(x_i)$.
The "online phase" of the Gennaro et al. scheme is very efficient: the
parties exchange two messages, the delegator runs in time $\poly(\log
T)$, and the worker runs in time $\poly(T)$, where $T$ is the time
complexity of $F$. However, the "offline phase" (which depends on the
function $F$ but not the inputs to be delegated) is inefficient: the
delegator runs in time $\poly(T)$ and generates a public key of length
$\poly(T)$ that needs to be accessed by the worker during the online
phase.

Our first construction eliminates the large public key from the Gennaro et al. scheme. The delegator still invests $\poly(T)$ time in the offline phase, but does not need to communicate or publish anything. Our second construction reduces the work of the delegator in the offline phase to $\poly(\log T)$ at the price of a 4-message (offline) interaction with a $\poly(T)$-time worker (which need not be the same as the workers used in the online phase). Finally, we describe a "pipelined" implementation of the second construction that avoids the need to re-run the offline construction after errors are detected (assuming errors are not too frequent).

We reinvestigate the oblivious RAM concept introduced by Goldreich and
Ostrovsky, which enables a client that can store only a constant
amount of data locally to store $n$ data items remotely, and to access
them while hiding the identities of the items that are accessed. Oblivious
RAM is often cited as a powerful tool, which can be used, for example,
for search on encrypted data or for preventing cache attacks. However,
it is also commonly considered to be impractical due to its overhead,
which is asymptotically efficient but is quite high: each data request
is replaced by $O(\log^4 n)$ requests, or by $O(\log^3 n)$ requests
where the constant in the "$O$" notation is a few thousand. In
addition, $O(n \log n)$ external memory is required in order to store
the $n$ data items. We redesign the oblivious RAM protocol using modern
tools, namely Cuckoo hashing and a new oblivious sorting algorithm. The
resulting protocol uses only $O(n)$ external memory, and replaces each
data request by only $O(\log^2 n)$ requests (with a small constant).
This analysis is validated by experiments that we ran.
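Cuckoo hashing, one of the two modern tools mentioned, gives each item two candidate slots and resolves collisions by eviction, so a lookup probes at most two positions. A minimal sketch (of the hashing scheme only, not the oblivious variant used in the protocol):

```python
# Minimal cuckoo hash table: every key has two candidate slots, and
# insertions evict existing items as needed.
import random

class CuckooTable:
    def __init__(self, size, seed=0):
        self.size = size
        self.slots = [None] * size                # each slot: (key, value) or None
        rng = random.Random(seed)
        self.salts = (rng.randrange(1 << 30), rng.randrange(1 << 30))

    def _pos(self, key, i):
        return hash((self.salts[i], key)) % self.size

    def insert(self, key, value, max_evictions=100):
        item = (key, value)
        for _ in range(max_evictions):
            for i in (0, 1):
                p = self._pos(item[0], i)
                if self.slots[p] is None:
                    self.slots[p] = item
                    return True
            # both candidate slots occupied: evict from the first and retry
            p = self._pos(item[0], 0)
            item, self.slots[p] = self.slots[p], item
        return False  # a full implementation would rehash with fresh salts

    def lookup(self, key):
        for i in (0, 1):
            item = self.slots[self._pos(key, i)]
            if item is not None and item[0] == key:
                return item[1]
        return None
```

Lookups touch exactly two table positions regardless of insertion history, which is what makes the structure attractive for oblivious simulation.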

The Virtual Black Box (VBB) property for program obfuscators provides a
strong guarantee: Anything computable by an efficient adversary given
the obfuscated program can also be computed by an efficient simulator
that has only oracle access to the program. However, we know how to
achieve this notion only for very restricted classes of programs.

This work proposes a simple relaxation of VBB: Allow the simulator unbounded computation time, while still allowing only polynomially many queries to the oracle. We then demonstrate the viability of this relaxed notion, which we call Virtual Grey Box (VGB), in the context of fully composable obfuscators of point programs: It is known that, w.r.t. VBB, if such obfuscators exist then there exist multi-bit point obfuscators (aka "digital lockers") and subsequently also very strong variants of encryption that remain secure under key leakage and key-dependent messages. However, no composable VBB-obfuscators for point programs have been shown. We show fully composable {\em VGB}-obfuscators for point programs under a strong variant of the Decisional Diffie-Hellman assumption, and show that they still suffice for the above applications.

Generating random bits is a fundamental problem in cryptography.
Coin-tossing protocols, which generate a random bit with uniform
distribution, are used as a building block in many cryptographic
protocols. Cleve [STOC 1986] has shown that if at least half of the
parties can be malicious, then, in any $r$-round coin-tossing protocol,
the malicious parties can cause a bias of $\Omega(1/r)$ in the bit that
the honest parties output. However, for more than two decades the best
known protocols had bias $t/\sqrt{r}$, where $t$ is the number of
corrupted parties. Recently, in a surprising result, Moran, Naor, and
Segev [TCC 2009] have shown that there is an $r$-round two-party
coin-tossing protocol with the optimal bias of $O(1/r)$. We extend the
results of Moran et al.\ to the multiparty model when less than 2/3 of the parties
are malicious. The bias of our protocol is proportional to $1/r$ and
depends on the gap between the number of malicious parties and the
number of honest parties in the protocol. Specifically, for a constant
number of parties or when the number of malicious parties is somewhat
larger than half, we present an $r$-round $m$-party coin-tossing
protocol with optimal bias of $O(1/r)$.
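To see where the bias comes from, consider the naive commit-and-reveal toss sketched below (a toy illustration, not a protocol from the paper): the committing party learns the outcome before opening and can abort, forcing the honest party onto a default bit. Bounding the advantage gained by such aborts across $r$ rounds is exactly the difficulty captured by Cleve's $\Omega(1/r)$ bound.

```python
import hashlib, secrets

def commit(bit, nonce):
    """Hash-based commitment to one bit (hiding/binding only heuristically)."""
    return hashlib.sha256(nonce + bytes([bit])).hexdigest()

def commit_and_reveal():
    """One honest execution: A commits, B replies in the clear, A opens.
    A malicious A could stop before opening, after learning the result."""
    a_bit, a_nonce = secrets.randbelow(2), secrets.token_bytes(16)
    c = commit(a_bit, a_nonce)           # A -> B: commitment
    b_bit = secrets.randbelow(2)         # B -> A: random bit in the clear
    assert commit(a_bit, a_nonce) == c   # B checks A's opening
    return a_bit ^ b_bit
```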

Multiparty computation protocols have been known for more than twenty
years now, but due to their lack of efficiency their use is still
limited in real-world applications: the goal of this paper is the design
of efficient two- and multi-party computation protocols aimed at filling
the gap between theory and practice. We propose a new protocol to securely
evaluate reactive arithmetic circuits, that offers security against an
active adversary in the universally composable security framework.
Instead of the "do-and-compile" approach (where the parties use
zero-knowledge proofs to show that they are following the protocol) our
key ingredient is an efficient version of the "cut-and-choose"
technique, which allows us to achieve active security for just a (small)
constant amount of work more than for passive security.

We revisit the question of secure multiparty computation (MPC)
with two rounds of interaction. It was previously shown by Gennaro
et al.\ (Crypto 2002) that three or more communication rounds are
necessary for general MPC protocols with guaranteed output
delivery, assuming that there may be $t\ge 2$ corrupted parties.
This negative result holds regardless of the total number of
parties, even if {\em broadcast} messages are allowed in each
round, and even if only {\em fairness} is required. We complement
this negative result by presenting matching positive results.

Our first main result is that if only {\em one} party may be corrupted, then $n\ge 5$ parties can securely compute any function of their inputs using only {\em two} rounds of interaction over secure point-to-point channels (without broadcast or any additional setup). The protocol makes a black-box use of a pseudorandom generator, or alternatively can offer unconditional security for functionalities in $\NCone$.

We also prove a similar result in a client-server setting, where there are $m\ge 2$ clients who hold inputs and should receive outputs, and $n$ additional servers with no inputs and outputs. For this setting we obtain a general MPC protocol which requires a single message from each client to each server, followed by a single message from each server to each client. The protocol is secure against a single corrupted client and against coalitions of $t<n/3$ corrupted servers.

The above protocols guarantee output delivery and fairness. Our second main result shows that under a relaxed notion of security, allowing the adversary to selectively decide (after learning its own outputs) which honest parties will receive their (correct) output, there is a general 2-round MPC protocol which tolerates $t<n/3$ corrupted parties. This protocol relies on the existence of a pseudorandom generator in $\NCone$ (which is implied by most standard cryptographic assumptions), or alternatively can offer unconditional security for functionalities in $\NCone$.

We use security in the Universal Composition framework as a means to
study the "cryptographic complexity" of 2-party secure computation
tasks (functionalities). We say that a functionality F {\em reduces to}
another functionality G if there is a UC-secure protocol for F using
ideal access to G. This reduction is a natural and fine-grained way to
compare the relative complexities of cryptographic tasks. There are two
natural "extremes" of complexity under the reduction: the {\em
trivial} functionalities, which can be reduced to any other
functionality; and the {\em complete} functionalities, to which any
other functionality can be reduced.

In this work we show that under a natural computational assumption (the existence of a protocol for oblivious transfer secure against semi-honest adversaries), there is a {\bf zero-one law} for the cryptographic complexity of 2-party deterministic functionalities. Namely, {\em every such functionality is either trivial or complete.} No other qualitative distinctions exist among functionalities, under this computational assumption.

While nearly all previous work classifying multi-party computation functionalities has been restricted to the case of secure function evaluation, our results are the first to consider completeness of arbitrary {\em reactive} functionalities, which receive input and give output repeatedly throughout several rounds of interaction. One important technical contribution in this work is to initiate the comprehensive study of the cryptographic properties of reactive functionalities. We model these functionalities as finite automata and develop an automata-theoretic methodology for classifying and studying their cryptographic properties. Consequently, we completely characterize the reactive behaviors that lead to cryptographic non-triviality. Another contribution of independent interest is to optimize the hardness assumption used by Canetti et al. (STOC 2002) in showing that the common random string functionality is complete (a result independently obtained by Damg{\aa}rd et al. (TCC 2010)).

We prove beyond-birthday-bound security for the well-known types of
generalized Feistel networks, including: (1) unbalanced Feistel
networks, where the $n$-bit to $m$-bit round functions may have $n\ne
m$; (2) alternating Feistel networks, where the round functions
alternate between contracting and expanding; (3) type-1, type-2, and
type-3 Feistel networks, where $n$-bit to $n$-bit round functions are
used to encipher $kn$-bit strings for some $k\ge2$; and (4) numeric
variants of any of the above, where one enciphers numbers in some given
range rather than strings of some given size. Using a unified analytic
framework we show that, in any of these settings, for any
$\varepsilon>0$, with enough rounds, the subject scheme can tolerate CCA
attacks of up to $q\sim N^{1-\varepsilon}$ adversarial queries, where
$N$ is the size of the round functions' domain (the size of the larger
domain for alternating Feistel). This is asymptotically optimal. Prior
analyses for generalized Feistel networks established security to only
$q\sim N^{0.5}$ adversarial queries.
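For concreteness, here is a toy alternating Feistel network on $(n+m)$-bit values, with a hash standing in for the alternating contracting and expanding round functions (all names are ours; security of course depends on the number of rounds, per the analysis above):

```python
import hashlib

def F(key, r, x, out_bits):
    """Toy pseudorandom round function: a hash stands in for the
    idealized round functions assumed by the security analysis."""
    h = hashlib.sha256(b"%d|%d|%d" % (key, r, x)).digest()
    return int.from_bytes(h, "big") & ((1 << out_bits) - 1)

def alternating_feistel(x, key, n, m, rounds, decrypt=False):
    """Encipher an (n+m)-bit value; round functions alternate between
    m-bit -> n-bit (expanding) and n-bit -> m-bit (contracting)."""
    L, R = x >> m, x & ((1 << m) - 1)
    order = range(rounds - 1, -1, -1) if decrypt else range(rounds)
    for r in order:
        if r % 2 == 0:
            L ^= F(key, r, R, n)   # expanding step updates the n-bit half
        else:
            R ^= F(key, r, L, m)   # contracting step updates the m-bit half
    return (L << m) | R

pt = 0x3ABCD
ct = alternating_feistel(pt, key=7, n=8, m=12, rounds=8)
assert alternating_feistel(ct, key=7, n=8, m=12, rounds=8, decrypt=True) == pt
```

Deciphering simply applies the same rounds in reverse order, since each round XORs a function of the untouched half into the other half.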

In spite of the central role of key derivation functions (KDF) in
applied cryptography, there has been little formal work addressing the
design and analysis of general multi-purpose KDFs. In practice, most
KDFs (including those widely standardized) follow ad-hoc approaches that
treat cryptographic hash functions as perfectly random functions. In
this paper we close some gaps between theory and practice by
contributing to the study and engineering of KDFs in several ways. We
provide detailed rationale for the design of KDFs based on the
extract-then-expand approach; we present the first general and rigorous
definition of KDFs and their security that we base on the notion of
computational extractors; we specify a concrete fully practical KDF
based on the HMAC construction; and we provide an analysis of this
construction based on the extraction and pseudorandom properties of
HMAC. The resultant KDF design can support a large variety of KDF
applications under suitable assumptions on the underlying hash function;
particular attention and effort is devoted to minimizing these
assumptions as much as possible for each usage scenario.
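The extract-then-expand approach can be sketched concretely; the following follows the HMAC-based design (as later specified in RFC 5869), instantiated here with SHA-256:

```python
import hmac, hashlib

def hkdf_extract(salt, ikm, hash_fn=hashlib.sha256):
    """Extract: concentrate the entropy of the input keying material
    (ikm) into a fixed-length pseudorandom key (PRK)."""
    if not salt:
        salt = b"\x00" * hash_fn().digest_size
    return hmac.new(salt, ikm, hash_fn).digest()

def hkdf_expand(prk, info, length, hash_fn=hashlib.sha256):
    """Expand: stretch the PRK into `length` bytes of output keying
    material, cryptographically bound to the context string `info`."""
    out, t, counter = b"", b"", 1
    while len(out) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hash_fn).digest()
        out += t
        counter += 1
    return out[:length]

# Derive two independent keys from one master secret:
prk = hkdf_extract(b"app-salt", b"input keying material")
enc_key = hkdf_expand(prk, b"encryption", 32)
mac_key = hkdf_expand(prk, b"mac", 32)
```

Extract concentrates possibly non-uniform key material into a fixed-size PRK; expand then derives as many context-bound output bytes as the application needs.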

Beyond the theoretical interest in modeling KDFs, this work is intended to address two important and timely needs of cryptographic applications: (i) providing a single hash-based KDF design that can be standardized for use in multiple and diverse applications, and (ii) providing a conservative, yet efficient, design that exercises much care in the way it utilizes a cryptographic hash function.

(The HMAC-based scheme presented here, named HKDF, is being standardized by the IETF.)

We study time space tradeoffs in the complexity of attacks against
one-way functions and pseudorandom generators.

Fiat and Naor (SICOMP 99) show that for every function $f: [N]\to [N]$ there is an algorithm that inverts $f$ everywhere using (ignoring lower order factors) time, space and advice at most $N^{3/4}$.

We show that an algorithm using time, space and advice at most \[ \max \{ \epsilon^{\frac 54} N^{\frac 34} \ , \ \sqrt{\epsilon N} \} \] exists that inverts $f$ on at least an $\epsilon$ fraction of inputs. A lower bound of $\tilde \Omega(\sqrt { \epsilon N })$ also holds, making our result tight in the "low end" of $\epsilon \leq \sqrt[3]{\frac{1}{N}}$.

(Both the results of Fiat and Naor and ours are formulated as more general trade-offs between the time and the space and advice length of the algorithm. The results quoted above correspond to the interesting special case in which time equals space and advice length.)

We also show that for every length-increasing generator $G:[N] \to [2N]$ there is an algorithm that achieves distinguishing probability $\epsilon$ between the output of $G$ and the uniform distribution and that can be implemented in polynomial (in $\log N$) time and with advice and space $O(\epsilon^2 \cdot N\log N)$. We prove a lower bound of $S\cdot T\geq \Omega(\epsilon^2 N)$ where $T$ is the time used by the algorithm and $S$ is the amount of advice. This lower bound applies even when the distinguisher has oracle access to $G$.

We prove stronger lower bounds in the {\em common random string} model, for families of one-way permutations and of pseudorandom generators.

This paper fills an important foundational gap with the first proofs,
under standard assumptions and in the standard model, of the existence
of PRFs and PRPs resisting rich and relevant forms of related-key attack
(RKA). An RKA allows the adversary to query the function not only under
the target key but under other keys derived from it in
adversary-specified ways. Based on the Naor-Reingold PRF we obtain an
RKA-PRF whose keyspace is a group and that is proven, under DDH, to
resist attacks in which the key may be operated on by arbitrary
adversary-specified group elements. Previous work was able only to
provide schemes in idealized models (ideal cipher, random oracle), under
new, non-standard assumptions, or for limited classes of attacks. The
reason was technical difficulties that we resolve via a new approach and
framework that, in addition to the above, yields other RKA-PRFs
including a DLIN-based one derived from the Lewko-Waters PRF. Over the
last 15 years cryptanalysts and blockcipher designers have routinely and
consistently targeted RKA-security; it is visibly important for
abuse-resistant cryptography; and it helps protect against
fault-injection sidechannel attacks. Yet ours are the first significant
proofs of existence of secure constructs. We warn that our constructs
are proofs-of-concept in the foundational style and not practical.

We show that any two-party quantum computation, specified by a unitary
which simultaneously acts on the registers of both parties, can be
securely implemented against a quantum version of classical semi-honest
adversaries that we call specious.

We first show that no statistically private protocol exists for swapping qubits against specious adversaries. The swap functionality is modeled by a unitary transform that is not sufficient for universal quantum computation. This means that universality is not required in order to obtain impossibility proofs in our model. However, the swap transform can easily be implemented privately given a classical bit commitment scheme.

We provide a simple protocol for the evaluation of any unitary transform represented by a circuit made out of gates in some standard universal set of quantum gates. All gates except one can be implemented securely given one call to swap, made available as an ideal functionality. For each appearance of the remaining gate in the circuit, one call to a classical NL-box is required for privacy. The NL-box can easily be constructed from oblivious transfer. It follows that oblivious transfer is universal for private evaluations of unitaries as well as for classical circuits.

Unlike the ideal swap, NL-boxes are classical primitives and cannot be represented by unitary transforms. It follows that, to some extent, this remaining gate is the hard one, like the AND gate for classical two-party computation.

Due to its universality, oblivious transfer (OT) is a primitive of great
importance in secure multi-party computation. OT is impossible to
implement from scratch in an unconditionally secure way, but there are
many reductions of OT to other variants of OT, as well as other
primitives such as noisy channels. It is important to know how efficient
such unconditionally secure reductions can be in principle, i.e., how
many instances of a given primitive are needed at minimum to implement OT.
For perfect (error-free) implementations, good lower bounds are known,
e.g., the bounds by Beaver (STOC '96) or by Dodis and Micali (EUROCRYPT
'99). But since in practice one is usually willing to tolerate a small
probability of error and since these statistical reductions can be much
more efficient, the known bounds have only limited application. In the
first part of this work we provide lower bounds on the efficiency of
1-out-of-n OT and Rabin-OT reductions to distributed randomness in the
statistical case. From these results we derive bounds on reductions to
different variants of OT that generalize known bounds to the statistical
case. Our bounds hold in particular for transformations between a finite
number of primitives and for any error. In the second part we look at
the efficiency of quantum reductions. Recently, Salvail, Schaffner and
Sotakova (ASIACRYPT '09) showed that most classical lower bounds for
perfectly secure reductions of OT to distributed randomness still hold
in a quantum setting. We present a statistically secure protocol that
violates these bounds by an arbitrarily large factor. We then present a
weaker lower bound for the statistical setting. We use this bound to
show that even quantum protocols cannot extend OT. Finally, we present
two lower bounds for reductions of OT to commitments and a protocol
based on string commitments that is optimal with respect to both of
these bounds.
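
As a concrete illustration of the "OT from distributed randomness" setting to which such bounds apply, here is a minimal sketch of Beaver-style precomputed OT on bits: a trusted dealer hands out one unit of correlated randomness per OT instance, and the online phase uses only classical messages. All names and interfaces are illustrative:

```python
import secrets

def dealer():
    """Trusted dealer: one unit of distributed randomness for
    precomputed OT (in the spirit of Beaver's OT precomputation)."""
    r0, r1 = secrets.randbelow(2), secrets.randbelow(2)
    c = secrets.randbelow(2)
    rc = (r0, r1)[c]
    return (r0, r1), (c, rc)       # sender's share, receiver's share

def precomputed_ot(m0, m1, b):
    """1-out-of-2 OT on bits (m0, m1) with choice bit b, using only
    the dealer's randomness plus two classical messages."""
    (r0, r1), (c, rc) = dealer()
    d = b ^ c                      # receiver -> sender: masked choice
    e0 = m0 ^ (r0, r1)[d]          # sender -> receiver: masked messages
    e1 = m1 ^ (r0, r1)[1 ^ d]
    return (e0, e1)[b] ^ rc        # receiver recovers m_b

# Correctness check over all inputs.
for m0 in (0, 1):
    for m1 in (0, 1):
        for b in (0, 1):
            assert precomputed_ot(m0, m1, b) == (m0, m1)[b]
```

The receiver's message d = b XOR c is uniform (hiding b), and the unchosen message stays masked by the randomness the receiver never saw; lower bounds of the kind discussed above constrain how much such dealer randomness any statistically secure reduction must consume.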

We propose a framework for analyzing classical sampling strategies for
estimating the Hamming weight of a large string from a few sample
positions, when applied to a multi-qubit quantum system instead. The
framework shows how to interpret the result of such a strategy and how
to define its accuracy when applied to a quantum system. Furthermore, we
show how the accuracy of any such strategy relates to its accuracy in
the classical setting, which is well understood for important examples. We
show the usefulness of our framework by using it to obtain new and
simple security proofs for the following quantum-cryptographic schemes:
BB84 quantum key distribution, and quantum oblivious transfer from
bit commitment.
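
As a minimal illustration of the classical sampling strategies the framework starts from, the following sketch estimates the relative Hamming weight of a bit string from a few uniformly random sample positions (function names are illustrative, not the paper's notation); Hoeffding's inequality bounds the probability of a large estimation error:

```python
import random

def relative_hamming_weight(bits):
    """Fraction of 1s in the string."""
    return sum(bits) / len(bits)

def sample_estimate(bits, k, rng=random):
    """Classical sampling strategy: estimate the relative Hamming
    weight of `bits` from k uniformly chosen positions, sampled
    without replacement."""
    positions = rng.sample(range(len(bits)), k)
    return sum(bits[i] for i in positions) / k

rng = random.Random(0)
n = 10_000
bits = [rng.randrange(2) for _ in range(n)]
estimate = sample_estimate(bits, 500, rng)
true_weight = relative_hamming_weight(bits)
# Hoeffding: Pr[|estimate - true_weight| > d] <= 2 * exp(-2 * 500 * d**2),
# so a few hundred samples already pin the weight down quite accurately.
```

The framework referenced above addresses how such a strategy, and its classical accuracy guarantee, should be interpreted when the sampled positions belong to a multi-qubit quantum state rather than a fixed classical string.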