We show that an RSA private key with small public exponent can be efficiently recovered given a random 0.27 fraction of its bits. An important application of this work is to the "cold boot" attacks of Halderman et al. We make new observations about the structure of RSA keys that allow our algorithm to make use of the redundant information in the typical storage format of an RSA private key. Our algorithm itself is elementary and does not make use of the lattice techniques used in other RSA key reconstruction problems. We give an analysis of the running time behavior of our algorithm that matches the threshold phenomenon observed in our experiments.
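
The redundant information mentioned above comes from the standard storage format of an RSA private key, which records the factors and CRT components alongside $N$, $e$ and $d$. A minimal sketch (toy parameters of our choosing, not the paper's reconstruction algorithm) of the algebraic relations tying these components together:

```python
# Toy illustration of the redundancy in a stored RSA private key.
# Parameters are tiny and for illustration only.
p, q = 1009, 1013                    # the prime factors
N = p * q
e = 5                                # small public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                  # private exponent (Python 3.8+ modular inverse)
dp, dq = d % (p - 1), d % (q - 1)    # CRT exponents stored in the key
qinv = pow(q, -1, p)                 # CRT coefficient stored in the key

# Relations that any consistent key must satisfy -- the redundancy a
# reconstruction algorithm can exploit when only some bits are known:
assert (e * d) % phi == 1
assert (e * dp) % (p - 1) == 1 and (e * dq) % (q - 1) == 1
assert (qinv * q) % p == 1
assert pow(pow(42, e, N), d, N) == 42   # encrypt/decrypt round-trip
```

Each stored component constrains the others, so partial knowledge of one leaks information about all of them.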

Most of the work in the analysis of cryptographic schemes is concentrated in abstract adversarial models that do not capture {\em side-channel attacks}. Such attacks exploit various forms of unintended information leakage, which is inherent to almost all physical implementations. Inspired by recent side-channel attacks, especially the "cold boot attacks", Akavia, Goldwasser and Vaikuntanathan (TCC '09) formalized a realistic framework for modeling the security of encryption schemes against a wide class of side-channel attacks in which adversarially chosen functions of the secret key are leaked. In the setting of public-key encryption, Akavia et al. showed that Regev's lattice-based scheme (STOC '05) is resilient to any leakage of $L / \polylog(L)$ bits, where $L$ is the length of the secret key.
In this paper we revisit the above-mentioned framework and establish the following results:

* We present a generic construction of a public-key encryption scheme that is resilient to key leakage from any {\em universal hash proof system}. The construction does not rely on additional computational assumptions, and the resulting scheme is as efficient as the underlying proof system. Existing constructions of such proof systems imply that our construction can be based on a variety of number-theoretic assumptions, including the decisional Diffie-Hellman assumption (and its progressively weaker $d$-Linear variants), the quadratic residuosity assumption, and Paillier's composite residuosity assumption.

* We construct a new hash proof system based on the decisional Diffie-Hellman assumption (and its $d$-Linear variants), and show that the resulting scheme is resilient to any leakage of $L(1 - o(1))$ bits. In addition, we prove that the recent scheme of Boneh et al. (CRYPTO '08), constructed to be a "circular-secure" encryption scheme, is resilient to any leakage of $L(1 - o(1))$ bits. These two proposed schemes complement each other in terms of efficiency.

* We extend the framework of key leakage to the setting of chosen-ciphertext attacks. On the theoretical side, we prove that the Naor-Yung paradigm is applicable in this setting as well, and obtain as a corollary encryption schemes that are CCA2-secure with any leakage of $L(1 - o(1))$ bits. On the practical side, we prove that variants of the Cramer-Shoup cryptosystem (along the lines of our generic construction) are CCA1-secure with any leakage of $L/4$ bits, and CCA2-secure with any leakage of $L/6$ bits.

We study the design of cryptographic primitives resilient to key-leakage attacks, where an attacker can repeatedly and adaptively learn information about the secret key, subject only to the constraint that the overall amount of such information is bounded by some parameter L. We construct a variety of leakage-resilient public-key systems including the first known identification schemes (ID), signature schemes and authenticated key agreement protocols (AKA). Our main result is an efficient three-round AKA in the Random-Oracle Model, which is resilient to key-leakage attacks that can occur prior to and after a protocol execution. Our AKA protocol can be used as an interactive encryption scheme with qualitatively stronger privacy guarantees than non-interactive encryption schemes (constructed in prior and concurrent works), which are inherently insecure if the adversary can perform leakage attacks after seeing a ciphertext.

Moreover, our schemes can be flexibly extended to the Bounded-Retrieval Model, allowing us to tolerate a very large absolute amount of adversarial leakage L (potentially many gigabytes of information) by increasing only the size of the secret key, without any other loss of efficiency in communication or computation. Concretely, given any leakage parameter L, security parameter S, and any desired fraction E, our schemes have the following properties:

[1] Secret key size is L(1+E)+O(S).

[2] Public key size is O(S), and independent of L.

[3] Communication complexity is O(S/E), and independent of L.

[4] Computation reads O(S/E^2) locations of the secret key, independent of L.

Lastly, we show that our schemes allow for repeated "invisible updates" of the secret key, allowing us to tolerate up to L bits of leakage in between any two updates, and an unlimited amount of leakage overall. These updates require that the parties can securely store a short "master update key" (e.g. on a separate secure device protected against leakage), which is only used for updates and not during protocol execution. The updates are invisible in the sense that a party can update its secret key at any point in time, without modifying the public key or notifying the other users.

We present a refined chosen-prefix collision construction for MD5 that
allowed the creation of a rogue Certification Authority (CA) certificate,
based on a collision with a regular end-user website certificate provided
by a commercial CA.
Compared to the previous construction from Eurocrypt 2007, this paper
describes a more flexible family of differential paths and a new
variable birthdaying search space. Combined with a time-memory trade-off,
these improvements lead to just three pairs of near-collision
blocks to generate the collision, enabling construction of RSA moduli that
are sufficiently short to be accepted by current CAs. The
entire construction is fast enough to allow for adequate prediction of
certificate serial number and validity period: it can be made
to require about $2^{49}$ MD5 compression function calls.
Finally, we improve the
complexity of identical-prefix collisions for MD5 to about $2^{16}$
MD5 compression function calls and use it to derive a practical
single-block chosen-prefix collision construction of which an example is given.

The preimage resistance of several hash functions has already been broken by meet-in-the-middle attacks, which exploit the property that the message schedules of these functions consist only of permutations of the message words. It is unclear whether this type of attack applies to a hash function whose message schedule is not a permutation of the message words. This paper proposes new attacks against reduced SHA-0 and SHA-1 hash functions by analyzing a message schedule that consists not of permutations but of linear combinations of message words. The newly developed cryptanalytic techniques enable the meet-in-the-middle attack to be applied to reduced SHA-0 and SHA-1. The attacks find preimages of 52-step SHA-0 and 48-step SHA-1 in $2^{156.6}$ and $2^{159.3}$ compression function computations, respectively, compared to the brute-force attack, which requires $2^{160}$ compression function computations. The previous best attacks find preimages up to 49 and 44 steps, respectively.
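
For concreteness, the linear (non-permutation) schedule discussed above can be seen in the message expansions of SHA-0 and SHA-1: each expanded word is an XOR of four earlier words, and SHA-1 additionally rotates the result by one bit. A minimal sketch (function names are ours):

```python
def rotl32(x, r):
    """Rotate a 32-bit word left by r bits."""
    return ((x << r) | (x >> (32 - r))) & 0xFFFFFFFF

def expand(w, steps, sha1):
    """SHA-0/SHA-1 message expansion: each new word is a linear
    combination (XOR) of earlier words, not a permutation of the
    16 input words; SHA-1 adds a 1-bit rotation absent in SHA-0."""
    w = list(w)
    for i in range(16, steps):
        t = w[i - 3] ^ w[i - 8] ^ w[i - 14] ^ w[i - 16]
        w.append(rotl32(t, 1) if sha1 else t)
    return w

# With a single nonzero input word, the first expanded words differ
# between the two functions only by the rotation:
w16 = [1] + [0] * 15
assert expand(w16, 17, sha1=False)[16] == 1   # SHA-0: 0^0^0^1
assert expand(w16, 17, sha1=True)[16] == 2    # SHA-1: rotl32(1, 1)
```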

Today more than ever, national leaders are making choices about technology: choices that influence how technology will develop, and choices whose consequences depend on technical questions. Yet many technologists find the accompanying policy debates and political processes mystifying or even borderline bogus, and therefore choose not to get involved. In fact, the policy process has its own internal logic which is sensible, or at least predictable, given the constraints involved in public decision-making. Some of the apparent bugs in the policy process, such as the tendency to decide without learning all of the relevant facts, are actually features. This talk will discuss how the policy process really works, using frequent analogies to cryptography. Understanding the policy process can help us to be better citizens, to engage in the policy process with more confidence and more effective results, and to do research that better serves the public interest.

A bi-directional Private Authentication protocol, a.k.a. {\em
Unlinkable Secret Handshake}, allows two parties to authenticate
each other as certified by an appropriate certification authority,
and hence affiliated with an appropriate group, in a {\em mutually
private way} in the sense that the protocol reveals no information
about inputs of either participant to a party which does not satisfy
that participant's authentication policy. In particular, the
protocol hides all information about that participant's certificate,
including the group it belongs to, and authentication policy,
including the group it is interested in, and protocol instances
conducted by the same participant are unlinkable. We give the first
practical realization of such private authentication, in a three
round protocol (in ROM) with $O(1)$ exponentiations and bilinear
maps, secure under Strong Diffie-Hellman and Decisional Linear
assumptions.

Our protocol relies on a novel technical tool of independent interest, a family of efficient Private Conditional Oblivious Transfer (COT) protocols for a new class of languages. A COT protocol for language $L$ allows sender $\Sd$ to encrypt message $m$ "under" statement $x$ so that receiver $\Rcv$ gets $m$ only if $\Rcv$ holds a witness for membership of $x$ in $L$, while $\Sd$ learns nothing from the protocol. A {\em private} COT for $L$ hides not only message $m$ but also statement $x$ from any $\Rcv$ that does not know a witness for it. We construct private COTs, secure under DDH, for languages defined by modular arithmetic constraints (e.g. equality, inequality, sums, products, etc.) on discrete-log representations of group elements contained in statement $x$. (Recall that $(w_1,w_2,...,w_n)$ is a representation of $C$ in bases $(g_1,g_2,...,g_n)$ if $C=g_1^{w_1}g_2^{w_2}...g_n^{w_n}$.)
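
As a small concrete instance of the representation notation recalled above, a sketch (toy group parameters of our choosing) that checks whether $(w_1,w_2)$ is a representation of $C$ in bases $(g_1,g_2)$:

```python
# Toy subgroup of prime order q = 509 inside Z_p^* with p = 1019 = 2*509 + 1.
p, q = 1019, 509
g1, g2 = pow(2, 2, p), pow(3, 2, p)   # squares generate the order-q subgroup

def is_representation(C, bases, witnesses):
    """Check C == g1^w1 * g2^w2 * ... mod p."""
    acc = 1
    for g, w in zip(bases, witnesses):
        acc = acc * pow(g, w, p) % p
    return acc == C

w1, w2 = 123, 456
C = pow(g1, w1, p) * pow(g2, w2, p) % p
assert is_representation(C, (g1, g2), (w1, w2))
assert not is_representation(C, (g1, g2), (w1 + 1, w2))
```

In the abstract's COT protocols, the receiver proves constraints on such witnesses $(w_1,\ldots,w_n)$ without revealing them; this snippet only illustrates the underlying equation.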

We construct an efficient delegatable anonymous credential system. Users can anonymously and unlinkably obtain credentials from any authority, delegate their credentials to other users, and prove possession of a credential L levels away from the given authority. The size of the proof (and time to compute it) is $O(Lk)$, where $k$ is the security parameter. The only other construction of delegatable anonymous credentials (Chase and Lysyanskaya, Crypto 2006) relies on general non-interactive proofs for NP-complete languages of size $k^{\Omega(2^L)}$.
We revise the entire approach to constructing anonymous credentials and identify randomizable zero-knowledge proof of knowledge systems as the key building block. We formally define the notion of randomizable non-interactive zero-knowledge proofs, and give the first construction by showing how to appropriately rerandomize Groth and Sahai (Eurocrypt 2008) proofs. We show that such proof systems, in combination with an appropriate authentication scheme and a few other protocols, allow us to construct delegatable anonymous credentials. Finally, we instantiate these building blocks under appropriate assumptions about groups with bilinear maps.

The definition of differential privacy has recently emerged as a gold standard of privacy guarantees for algorithms on statistical databases. We offer several relaxations of the definition which require privacy guarantees to hold only against efficient---i.e.,~computationally-bounded---adversaries. We establish various relationships among these notions, and in doing so, we observe their close connection with the theory of pseudodense sets by Reingold et al. We extend the dense model theorem of Reingold et al. to demonstrate equivalence between two definitions (indistinguishability- and simulatability-based) of computational differential privacy.

Our computational analogues of differential privacy seem to allow for more accurate constructions (i.e., higher utility) than the standard information-theoretic analogues. In particular, in the context of private approximation of the distance between two vectors, we present a differentially-private protocol for computing the approximation, and contrast it with a substantially more accurate protocol that is only computationally differentially private. Proving that this gap is inherent remains an interesting open problem.
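
As a point of comparison for the distance task above, the standard information-theoretic baseline is the Laplace mechanism in the trusted-curator model; the sketch below is ours and is not the two-party protocol from the paper:

```python
import math
import random

def hamming(u, v):
    """Exact Hamming distance between two equal-length bit-vectors."""
    return sum(a != b for a, b in zip(u, v))

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of Laplace(0, scale); ignores the
    # measure-zero edge case u == -0.5.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_hamming(u, v, eps, rng):
    """Changing one coordinate of u shifts the distance by at most 1
    (sensitivity 1), so Laplace noise of scale 1/eps yields
    (information-theoretic) eps-differential privacy."""
    return hamming(u, v) + laplace_noise(1.0 / eps, rng)

noisy = dp_hamming([1, 0, 1, 1], [1, 1, 1, 0], eps=1.0, rng=random.Random(1))
```

The computational relaxations in the abstract ask only that no efficient adversary can distinguish neighboring inputs, which is what permits the more accurate protocols mentioned above.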

We give a general reduction that converts any
public-coin interactive proof
into a one-round (two-message) argument.
The reduction relies on a method proposed by Aiello {\em et
al.}~\cite{ABOR00},
of using a Private-Information-Retrieval (PIR) scheme to
collapse rounds in interactive protocols.
For example, the reduction implies that for any security parameter $t$,
the membership in any
language in PSPACE
can be proved by a one-round (two-message) argument of size $\poly(n,t)$,
which is sound for malicious provers of size $2^t$.
(Note that the honest prover in this construction
runs in exponential time, since she has to prove membership in PSPACE,
but we can choose $t$ such that $2^t$ is significantly larger than
the running time of the honest prover.)

A {\em probabilistically checkable argument} (PCA) is a relaxation of the notion of probabilistically checkable proof (PCP). It is defined analogously to PCP, except that the soundness property is required to hold only {\em computationally}. We consider the model where the argument is of one round (two-message), where the verifier's message depends only on his (private) randomness. We show that for membership in many NP languages, there are PCAs (with efficient honest provers) that are of size polynomial in the size of the {\em witness}. This compares to the best PCPs that are of size polynomial in the size of the {\em instance} (that may be significantly larger). The number of queries to these PCAs is poly-logarithmic. The soundness property, in all our results, relies on exponential hardness assumptions for PIR schemes.

We show that only languages in BPP have public-coin, black-box zero-knowledge protocols that are secure under an unbounded (polynomial) number of parallel repetitions. This result holds both in the plain model (without any set-up) and in the Bare Public-Key Model (where the prover and the verifier have registered public keys). We complement this result by showing the existence of a public-coin black-box zero-knowledge proof that remains secure under any a-priori bounded number of concurrent executions.

We propose a general technique that allows improving the complexity of zero-knowledge protocols for a large class of problems where previously the best known solution was a simple cut-and-choose style protocol, i.e., where the size of a proof for problem instance $x$ and error probability $2^{-n}$ was $O(|x| n)$ bits. By using our technique to prove $n$ instances simultaneously, we can bring down the proof size per instance to $O(|x| + n)$ bits for the same error probability while using no computational assumptions. Examples where our technique applies include proofs for quadratic residuosity, proofs of subgroup membership and knowledge of discrete logarithms in groups of unknown order, and proofs of plaintext knowledge for various types of homomorphic encryption schemes. The generality of our method stems from a somewhat surprising application of black-box secret sharing schemes.

We suggest practical sub-linear size zero-knowledge arguments for statements involving linear algebra. Given commitments to matrices over a finite field, we give a sub-linear size zero-knowledge argument that one committed matrix is the product of two other committed matrices. We also offer a sub-linear size zero-knowledge argument for a committed matrix being equal to the Hadamard product of two other committed matrices. Armed with these tools we can give many other sub-linear size zero-knowledge arguments, for instance for a committed matrix being upper or lower triangular, a committed matrix being the inverse of another committed matrix, or a committed matrix being a permutation of another committed matrix.

A special case of what can be proved using our techniques is the satisfiability of an arithmetic circuit with $N$ gates. Our arithmetic circuit zero-knowledge argument has a communication complexity of $O(\sqrt{N})$ group elements. We give both a constant round variant and an $O(\log N)$ round variant of our zero-knowledge argument; the latter has a computation complexity of $O(N/\log N)$ exponentiations for the prover and $O(N)$ multiplications for the verifier making it efficient for the prover and very efficient for the verifier. In the case of a binary circuit consisting of NAND-gates we give a zero-knowledge argument of circuit satisfiability with a communication complexity of $O(\sqrt{N})$ group elements and a computation complexity of $O(N)$ multiplications for both the prover and the verifier.

This paper develops several new techniques of cryptanalyzing MACs based on block ciphers, and is divided into two parts.

The first part presents new distinguishers of the MAC construction \textsc{Alred} and its specific instance \textsc{Alpha}-MAC based on AES. For the \textsc{Alred} construction, we first describe a general distinguishing attack which leads to a forgery attack directly with the complexity of the birthday attack. A 2-round collision differential path of \textsc{Alpha}-MAC is adopted to construct a new distinguisher with about $2^{65.5}$ chosen messages and $2^{65.5}$ queries. One of the most important results is to use this new distinguisher to recover the internal state, which is an equivalent subkey of \textsc{Alpha}-MAC. Moreover, our distinguisher on \textsc{Alred} construction can be applied to the MACs based on CBC and CFB encryption modes.

The second part describes the first impossible differential attack on the MACs \textsc{Pelican}, MT-MAC-AES and PC-MAC-AES. Using the birthday attack, enough message pairs that produce an inner near-collision with some specific differences are detected, and then an impossible differential attack on 4-round AES is applied to these MACs. For \textsc{Pelican}, our attack recovers its internal state, which is an equivalent subkey. For MT-MAC-AES, the attack turns out to be a subkey recovery attack directly. The complexity of the two attacks is $2^{85.5}$ chosen messages and $2^{85.5}$ queries. For PC-MAC-AES, we recover its 256-bit key with $2^{85.5}$ chosen messages and $2^{128}$ queries.

In this paper we construct a chosen-key distinguisher and a
related-key attack on the full 256-bit key AES. We define a
notion of {\em differential $q$-multicollision} and show that for
AES-256 $q$-multicollisions can be constructed in time $q\cdot
2^{67}$ and with negligible memory, while we prove that the same
task for an ideal cipher of the same block size would require at
least $O(q\cdot 2^{\frac{q-1}{q+1}128})$ time. Using a similar
approach, and with the same complexity, we can also construct
$q$-pseudo collisions for AES-256 in Davies-Meyer mode, a
scheme which is provably secure in the ideal-cipher model. We have
also computed partial $q$-multicollisions in time $q\cdot 2^{37}$
on a PC to verify our results. These results show that AES-256
cannot serve as a model of an ideal cipher in theoretical constructions.
Finally we extend our results
to find the first publicly known attack on the full 14-round
AES-256: a related-key distinguisher which works for one out of
every $2^{35}$ keys with $2^{120}$ data and time complexity and
negligible memory. This distinguisher is translated into a
key-recovery
attack with total complexity of $2^{131}$ time and $2^{65}$ memory.

We present several attacks on the block cipher C2, which is used for
encrypting DVD Audio discs and Secure Digital cards. C2 has a 56-bit key and a
secret 8-bit to 8-bit S-box. We show that if the attacker is allowed
to choose the key, the S-box can be recovered in $2^{24}$ C2
encryptions. Attacking the 56-bit key for a known S-box can be done
in complexity $2^{48}$. Finally, a C2 implementation with an 8-bit to 8-bit
secret S-box (equivalent to $2048$ secret bits) and a 56-bit secret key can be attacked in $2^{53.5}$
C2 encryptions on average.

We design an efficient mode of operation on block ciphers. Our mode
has the following properties, when instantiated with a block cipher f
to yield a variable-length, keyed hash function H:

(1) MAC Preservation. H is a secure message authentication code (MAC) with birthday security, as long as f is unpredictable.

(2) PRF Preservation. H is a secure pseudorandom function (PRF) with birthday security, as long as f is pseudorandom.

(3) Security against Side-Channels. As long as the block cipher f does not leak side-channel information about its internals to the attacker, properties (1) and (2) hold even if the remaining implementation of H is completely leaky; in particular, even if the attacker can learn the transcript of all block cipher calls and any other auxiliary information needed to implement our mode of operation.

Our mode is the first to satisfy the MAC preservation property (1) with birthday security, solving the main open problem of Dodis et al. from Eurocrypt 2008. Combined with the PRF preservation (2), our mode provides a hedge against the case when the block cipher $f$ is more secure as a MAC than as a PRF: if this is not the case, as we hope, we get a secure variable-length PRF; however, even if it is, we still "salvage" a secure MAC, which might be enough for a given application.

We also remark that no prior mode of operation offered birthday security against side channel attacks, even if the block cipher was assumed pseudorandom.

Although still practical, our mode is about three times slower than many of the prior modes, such as CBC, which do not enjoy properties (1) and (3). Thus, our work motivates further research to understand the gap between unpredictability and pseudorandomness of existing block ciphers, such as AES.

We analyze the security of the Thorp shuffle, or, equivalently, a maximally unbalanced Feistel network. Roughly said, the Thorp shuffle on~$N$ cards mixes any $N^{1-1/r}$ of them in $O(r\lg N)$ steps. Correspondingly, making~$O(r)$ passes of maximally unbalanced Feistel over an~$n$-bit string ensures CCA-security to $2^{n(1-1/r)}$ queries. Our results, which employ Markov-chain techniques, enable the construction of a practical and provably-secure blockcipher-based scheme for deterministically enciphering credit card numbers and the like using a conventional blockcipher.
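
A single Thorp-shuffle round drops the top bit of the card's binary label, shifts the rest left, and appends the dropped bit XORed with a pseudorandom bit of the remaining bits. The sketch below uses a hash-based stand-in round function (our choice, not the paper's) to make the maximally unbalanced Feistel structure concrete:

```python
import hashlib

def round_bit(key, r, rest):
    """Hedged stand-in for the round function F: one pseudorandom bit
    derived from the key, round index, and the (n-1)-bit right half."""
    return hashlib.sha256(f"{key}|{r}|{rest}".encode()).digest()[0] & 1

def thorp_encrypt(x, n, rounds, key):
    """Maximally unbalanced Feistel (Thorp shuffle) on n-bit values:
    each round maps (msb, rest) -> (rest, msb XOR F(rest))."""
    for r in range(rounds):
        msb, rest = x >> (n - 1), x & ((1 << (n - 1)) - 1)
        x = (rest << 1) | (msb ^ round_bit(key, r, rest))
    return x

def thorp_decrypt(y, n, rounds, key):
    """Invert the rounds in reverse order."""
    for r in reversed(range(rounds)):
        rest, lsb = y >> 1, y & 1
        y = ((lsb ^ round_bit(key, r, rest)) << (n - 1)) | rest
    return y
```

Because each round is invertible, any number of rounds yields a permutation of the $N = 2^n$ "cards"; the abstract's result concerns how many rounds are needed for that permutation to be indistinguishable from random under CCA queries.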

We describe a new explicit function that, given an elliptic curve $E$ defined over $\FF_{p^n}$, maps elements of $\FF_{p^n}$ into $E$ in \emph{deterministic} polynomial time and in a constant number of operations over $\FF_{p^n}$. The function requires computing one cube root. As an application we show how to hash \emph{deterministically} into an elliptic curve.
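
A hedged sketch of such an encoding in the style of Icart's map (toy parameters of our choosing): over a prime field with $p \equiv 2 \pmod 3$, every element has the unique cube root $t^{(2p-1)/3}$, which makes the map deterministic.

```python
def icart_encode(u, a, b, p):
    """Deterministically map a nonzero u in F_p onto the curve
    y^2 = x^3 + a*x + b, assuming p % 3 == 2 so cube roots are unique."""
    inv = lambda z: pow(z, -1, p)
    cuberoot = lambda t: pow(t, (2 * p - 1) // 3, p)
    v = (3 * a - pow(u, 4, p)) * inv(6 * u) % p
    x = (cuberoot((v * v - b - pow(u, 6, p) * inv(27)) % p) + u * u * inv(3)) % p
    y = (u * x + v) % p
    return x, y

p, a, b = 11, 3, 5                  # toy curve with p % 3 == 2
x, y = icart_encode(2, a, b, p)
assert (y * y - (x ** 3 + a * x + b)) % p == 0   # image lies on the curve
```

The constant operation count over the field (a few multiplications, one inversion, one cube-root exponentiation) is what the abstract's claim refers to.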

This paper sets new software speed records for high-security Diffie--Hellman computations, specifically 251-bit elliptic-curve variable-base-point scalar multiplication. In one second of computation on a $200 Core 2 Quad Q6600 CPU, this paper's software performs 30000 251-bit scalar multiplications on the binary Edwards curve d(x+x^2+y+y^2)=(x+x^2)(y+y^2) over the field F_2[t]/(t^{251}+t^7+t^4+t^2+1) where d=t^{57}+t^{54}+t^{44}+1.
The paper's field-arithmetic techniques can be applied in much more generality but have a particularly efficient interaction with the completeness of addition formulas for binary Edwards curves. See http://binary.cr.yp.to/edwards.html for more information.
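The field F_2[t]/(t^{251}+t^7+t^4+t^2+1) can be modeled with Python integers as bit-polynomials. The sketch below is a schoolbook carry-less multiply with pentanomial reduction, purely to illustrate the arithmetic; the paper's speed records of course come from vectorized low-level techniques, not from anything like this.

```python
M = 251
POLY = (1 << 251) | (1 << 7) | (1 << 4) | (1 << 2) | 1  # t^251 + t^7 + t^4 + t^2 + 1

def gf_mul(a: int, b: int) -> int:
    """Multiply two elements of F_2[t]/(POLY), encoded as bit-polynomials."""
    # carry-less schoolbook multiplication
    acc = 0
    while b:
        if b & 1:
            acc ^= a
        a <<= 1
        b >>= 1
    # reduction: fold each bit at position i >= 251 using t^251 = t^7+t^4+t^2+1
    for i in range(acc.bit_length() - 1, M - 1, -1):
        if (acc >> i) & 1:
            acc ^= POLY << (i - M)
    return acc
```

For example, the curve constant d = t^{57}+t^{54}+t^{44}+1 from the abstract is just the integer with those bits set.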

In the {\em Hidden Number Problem (HNP)}, the goal is to find a hidden number $s$, when given $p$, $g$ and access to an oracle that on query $a$ returns the $k$ most significant bits of $s\cdot g^a \bmod p$.

We present an algorithm solving HNP when given advice depending only on $p$ and $g$; the running time and advice length are polynomial in $\log p$. This algorithm improves over prior HNP algorithms in achieving:

(1) optimal number of bits $k\ge 1$ (compared with $k\ge\Omega(\log\log p)$);

(2) robustness to random noise; and

(3) handling a wide family of predicates on top of the most significant bit.

As a central tool we present an algorithm that, given oracle access to a function $f$ over $\ZZ_N$, outputs all the significant Fourier coefficients of $f$ (\ie, those occupying, say, at least $1\%$ of the energy). This algorithm improves over prior works in being:

1. {\em Local.} Its running time is polynomial in $\log N$ and $L_1(\w f)$ (for $L_1(\w f)$ the sum of $f$'s Fourier coefficients, in absolute value).

2. {\em Universal.} For any $N,t$, the {\em same} oracle queries are asked for {\em all} functions $f$ over $\ZZ_N$ s.t. $L_1(\w f)\le t$.

3. {\em Robust.} The algorithm succeeds with high probability even if the oracle to $f$ is corrupted by random noise.
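The objects this algorithm outputs can be pinned down with a naive baseline: compute every Fourier coefficient of $f$ over $\ZZ_N$ and keep those carrying at least a $\tau$ fraction of the energy. The point of the paper's algorithm is to do this *locally*, in time polynomial in $\log N$; the O(N^2) sketch below only serves to make the definitions concrete.

```python
import cmath

def fourier_coeffs(f_vals):
    """All coefficients hat-f(alpha) = (1/N) * sum_x f(x) * exp(-2*pi*i*alpha*x/N)."""
    N = len(f_vals)
    return [sum(f_vals[x] * cmath.exp(-2j * cmath.pi * alpha * x / N)
                for x in range(N)) / N
            for alpha in range(N)]

def significant_coeffs(f_vals, tau=0.01):
    """Frequencies whose coefficient holds at least a tau fraction of the energy."""
    coeffs = fourier_coeffs(f_vals)
    energy = sum(abs(c) ** 2 for c in coeffs)
    return [a for a, c in enumerate(coeffs) if abs(c) ** 2 >= tau * energy]
```

On a pure tone f(x) = exp(2*pi*i*3x/N) the only significant frequency is 3; mixing in a second, weaker tone adds its frequency to the output.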

Computational indistinguishability amplification is the problem of
strengthening cryptographic primitives whose security is defined by
bounding the distinguishing advantage of an efficient distinguisher.
Examples include pseudorandom generators (PRGs), pseudorandom functions
(PRFs), and pseudorandom permutations (PRPs).

The literature on computational indistinguishability amplification consists of only a few isolated results. Yao's XOR-lemma implies, by a hybrid argument, that no efficient distinguisher has advantage better than (roughly) $n2^{m-1} \delta^m$ in distinguishing the XOR of $m$ independent $n$-bit PRG outputs $S_1,\ldots,S_m$ from uniform randomness if no efficient distinguisher has advantage more than $\delta$ in distinguishing $S_i$ from a uniform $n$-bit string. The factor $2^{m-1}$ allows for security amplification only if $\delta<\frac{1}{2}$. For the case of PRFs, a random-offset XOR-construction of Myers was the first result to achieve {\em strong} security amplification, i.e., amplification also for $\frac{1}{2} \le \delta < 1$.
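The XOR construction and the stated bound are easy to make concrete. The sketch below shows the neutralizing XOR combiner over byte strings (with placeholder stand-ins for the PRGs, which are assumptions of this illustration) and the tight bound $2^{m-1}\delta^m$ as a function; note how the bound is only smaller than $\delta$ when $\delta < \frac{1}{2}$.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def xor_combiner(prgs, seed_parts):
    """Neutralizing combination: XOR the outputs of m independent generators."""
    outs = [g(s) for g, s in zip(prgs, seed_parts)]
    return reduce(xor_bytes, outs)

def amplified_advantage_bound(delta: float, m: int) -> float:
    """The tight information-theoretic bound 2^{m-1} * delta^m from the paper
    (meaningful amplification only for delta < 1/2)."""
    return 2 ** (m - 1) * delta ** m
```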

This paper proposes a systematic treatment of computational indistinguishability amplification. We generalize and improve the above product theorem for the XOR of PRGs along five axes. First, we prove the {\em tight} information-theoretic bound $2^{m-1}\delta^m$ (without factor $n$) also for the computational setting. Second, we prove results for {\em interactive} systems (e.g. PRFs or PRPs). Third, we consider the general class of {\em neutralizing combination constructions}, not just XOR. As an application this yields the first indistinguishability amplification results for the cascade of PRPs (i.e., block ciphers) converting a weak PRP into an arbitrarily strong PRP, both for single-sided and two-sided queries. Fourth, {\em strong} security amplification is achieved for a subclass of neutralizing constructions which includes as a special case the construction of Myers. As an application we obtain highly practical optimal security amplification for block ciphers, simply by adding random offsets at the input and output of the cascade. Fifth, we show strong security amplification also for {\em weakened assumptions} like security against random-input (as opposed to chosen-input) attacks.

A key technique is a generalization of Yao's XOR-lemma to (interactive) systems which is of independent interest.

We prove that every key exchange protocol in the random oracle model in which the honest users make at most n queries to the oracle can be broken by an adversary making O(n^2) queries to the oracle. This improves on the previous Omega(n^6) query attack given by Impagliazzo and Rudich (STOC '89), and answers an open question posed by them. Our bound is optimal up to a constant factor since Merkle (CACM '78) gave a key exchange protocol that can easily
be implemented in this model with n queries and cannot be broken by an adversary making o(n^2) queries.
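Merkle's protocol is simple enough to sketch. In the toy Python version below (SHA-256 stands in for the random oracle), Alice publishes hashes of n values drawn from a domain of size n^2; Bob hashes random domain elements until he hits a published one, which takes about n queries in expectation, while an eavesdropper must search on the order of n^2 domain elements.

```python
import hashlib, random

def H(x: int) -> str:
    """Random-oracle stand-in."""
    return hashlib.sha256(str(x).encode()).hexdigest()

def merkle_exchange(n: int, rng=random.Random(0)):
    """Returns (Alice's key, Bob's key, number of Bob's oracle queries)."""
    domain = n * n
    alice_vals = rng.sample(range(domain), n)        # n oracle queries by Alice
    published = {H(v): i for i, v in enumerate(alice_vals)}
    bob_queries = 0
    while True:                                      # ~n expected queries by Bob
        guess = rng.randrange(domain)
        bob_queries += 1
        if H(guess) in published:
            # Bob announces which published hash matched; the preimage is the key
            return alice_vals[published[H(guess)]], guess, bob_queries
```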

We consider what constitutes {\em identities\/} in cryptography.
Typical examples include your name and your social-security number,
or your fingerprint/iris-scan, or your address, or your
(non-revoked) public-key coming from some trusted public-key
infrastructure. In many situations, however, {\bf where you are}
defines your identity. For example, we know the role of a
bank-teller behind a bullet-proof bank window not because she shows
us her credentials but by merely knowing her location. In this
paper, we initiate the study of cryptographic protocols where the
identity (or other credentials and inputs) of a party are derived
from its \emph{geographic location}.

We start by considering the central task in this setting, i.e., securely verifying the position of a device. Despite much work in this area, we show that in the Vanilla (or standard) model, the above task (i.e., of secure positioning) is impossible to achieve. In light of the above impossibility result, we then turn to the Bounded Storage Model and formalize and construct information theoretically secure protocols for two fundamental tasks:

* Secure Positioning; and

* Position Based Key Exchange.

We then show that these tasks are in fact {\em universal\/} in this setting -- we show how we can use them to realize Secure Multi-Party Computation.

Our main contribution in this paper is threefold: to place the problem of secure positioning on a sound theoretical footing; to prove a strong impossibility result that simultaneously shows the insecurity of previous attempts at the problem; and to present positive results by showing that the bounded-storage framework is, in fact, one of the "right" frameworks (there may be others) to study the foundations of position-based cryptography.

We consider two-party quantum protocols starting with a transmission of some random BB84 qubits followed by classical messages. We show a general "compiler" improving the security of such protocols: if the original protocol is secure against an "almost honest" adversary, then the compiled protocol is secure against an arbitrary computationally bounded (quantum) adversary.
The compilation preserves the number of qubits sent and the number of rounds up to a constant factor. The compiler also preserves security in the bounded-quantum-storage model (BQSM), so if the original protocol was BQSM-secure, the compiled protocol can only be broken by an adversary who has large quantum memory *and* large computing power. This is in contrast to known BQSM-secure protocols, where security breaks down completely if the adversary has larger quantum memory than expected. We show how our technique can be applied to quantum identification and oblivious transfer protocols.

In 1999, Coron, Naccache and Stern discovered an existential signature forgery for two popular RSA signature standards, ISO/IEC 9796-1 and 2. Following this attack ISO/IEC 9796-1 was withdrawn. ISO/IEC 9796-2 was amended by increasing the message digest to at least 160 bits. Attacking this amended version required at least $2^{61}$ operations.

In this paper, we exhibit algorithmic refinements that make it possible to attack the {\sl amended} (currently valid) version of ISO/IEC 9796-2 for all modulus sizes. A practical forgery was computed in only two days using 19 servers on the Amazon EC2 grid, for a total cost of about US$800. The forgery was implemented for $e=2$, but attacking odd exponents would not take longer. The forgery was computed for the RSA-2048 challenge modulus, whose factorization is still unknown.

The new attack blends several theoretical tools. These do not change the asymptotic complexity of Coron et al.'s technique but significantly accelerate it for parameter values previously considered beyond reach.

While less efficient (US$45,000), the acceleration also extends to EMV signatures. EMV is an ISO/IEC 9796-2-compliant format with extra redundancy. Luckily, this attack does not threaten any of the 730 million EMV payment cards in circulation for {\sl operational} reasons.

Costs are per modulus: after a first forgery for a given modulus, obtaining more forgeries is virtually immediate.

RSA-FDH and many other schemes secure in the Random-Oracle Model (ROM) require
a hash function with output size larger than standard sizes.
We show that the random-oracle instantiations proposed in the literature for such cases
are weaker than a random oracle,
including the proposals by Bellare and Rogaway from 1993 and 1996,
and the ones implicit in IEEE P1363 and PKCS standards:
for instance, we obtain a $2^{67}$ preimage attack on BR93 for 1024-bit digests.
Next, we study the security impact of hash function defects for ROM signatures.
As an extreme case, we note that any hash collision would suffice to disclose the master key
in the ID-based cryptosystem by Boneh et al. from FOCS '07,
and the secret key in the Rabin-Williams signature for which Bernstein proved tight security at EUROCRYPT '08.
Interestingly, for both of these schemes, a slight modification can prevent these attacks, while preserving the ROM security result.
We give evidence that in the case of RSA and Rabin/Rabin-Williams,
an appropriate PSS padding is more robust than all other paddings known.

Abstraction means to eliminate irrelevant details from
consideration, thereby focusing only on the relevant aspects of a
problem or context. Abstraction is of paramount importance in most
scientific fields, especially in computer science and mathematics.
The purpose of abstraction is to provide, at the same time, simpler
definitions, higher generality of results, simpler proofs, improved
elegance, and often better didactic suitability.

Abstraction can be a gradual process and need not be unique, but in many contexts, the highest achievable level of abstraction, once identified, appears natural and stable. For example, the abstract and natural concepts of a group or a field in algebra capture exactly what is required to prove many important results.

In the spirit of algebraic abstraction, we advocate the definition and use of higher levels of abstraction in cryptography, with the goal of identifying the highest possible level at which a definition or theorem should be stated and proved. Some questions one can ask are: What are abstractions of a system, a game, indistinguishability, a hybrid argument, a reduction, indifferentiability, or of (universal) composability? What are abstractions of efficient and negligible, and at which level of abstraction can computational and information-theoretic models be unified? And, of course: Can the abstract viewpoint lead to new concepts and results that are perhaps otherwise missed?

This work deals with "MPC-friendly" linear secret sharing schemes (LSSS), a mathematical primitive upon which secure multi-party computation (MPC) can be based and which was introduced by Cramer, Damgaard and Maurer (EUROCRYPT 2000). Chen and Cramer proposed a special class of such schemes that is constructed from algebraic geometry and that enables efficient secure multi-party computation over fixed finite fields (CRYPTO 2006). We extend this in four ways. First, we propose an abstract coding-theoretic framework in which this class of schemes and its (asymptotic) properties can be cast and analyzed. Second, we show that for {\em every finite field} $\FF_q$, there exists an infinite family of LSSS over $\FF_q$ that is asymptotically good in the following sense: the schemes are "{\em ideal}," i.e., each share consists of a single $\FF_q$-element, and the schemes have {\em $t$-strong multiplication} on $n$ players, where the corruption tolerance $\frac{3t}{n-1}$ tends to a constant $\nu(q)$ with $0<\nu(q)<1$ when $n$ tends to infinity.
Moreover, when $|\FF_q|$ tends to infinity, $\nu(q)$ tends to $1$, which is optimal. This leads to explicit lower bounds on $\widehat{\tau}(q)$, our measure of {\em asymptotic optimal corruption tolerance}.

We achieve this by combining the results of Chen and Cramer with a dedicated field-descent method. In particular, in the $\FF_2$-case there exists a family of binary $t$-strongly multiplicative ideal LSSS with $\frac{3t}{n-1}\approx 2.86\%$ when $n$ tends to infinity, a one-bit secret and just a one-bit share for every player. Previously, such results were shown for $\FF_q$ with $q\geq 49$ a square. Third, we present an infinite family of ideal schemes with $t$-strong multiplication that does not rely on algebraic geometry and that works over every finite field $\FF_q$. Its corruption tolerance vanishes, yet still $\frac{3t}{n-1}= \Omega(1/((\log\log n)\log n))$. Fourth and finally, we give an improved non-asymptotic upper bound on corruption tolerance.
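The "ideal" and "multiplication" properties can be illustrated with the classical Shamir scheme over a prime field, the textbook ideal LSSS with multiplication (the paper's contribution is achieving such properties asymptotically over a *fixed* small field, which plain Shamir cannot do since it needs $n < q$). Each share is a single field element, and pointwise products of shares form a degree-$2t$ sharing of the product of the secrets, recoverable whenever $n \geq 2t+1$ honest shares are available. A sketch over the illustrative field F_97:

```python
import random

P = 97  # small prime field for illustration; Shamir needs n < P

def share(secret, t, n, rng):
    """Ideal Shamir sharing: random degree-t polynomial with f(0) = secret;
    player i (1-based) receives the single field element f(i)."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def reconstruct(shares, ids):
    """Lagrange interpolation at 0 from the shares of the given player ids."""
    s = 0
    for j, yj in zip(ids, shares):
        lj = 1
        for m in ids:
            if m != j:
                lj = lj * m * pow(m - j, P - 2, P) % P
        s = (s + yj * lj) % P
    return s
```

Multiplying shares pointwise doubles the polynomial degree, which is exactly why the corruption tolerance of multiplicative schemes is measured against $\frac{3t}{n-1}$-type thresholds rather than $\frac{t}{n}$.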

The round complexity of interactive protocols is one of their most
important complexity measures. The investigation of the round
complexity of protocols is usually conducted under the assumption that
the protocols are error-free. In this paper we examine the question of
whether introducing a probability of error into the executions can
improve on known results and lower bounds. We study this question for
the specific task of Verifiable Secret Sharing.

Verifiable Secret Sharing (VSS) is a fundamental primitive used as a building block in many distributed cryptographic tasks, such as Multiparty Computation and Byzantine Agreement. VSS is a two phase (Share, Reconstruct) protocol carried out among $n$ parties in the presence of a computationally unbounded adversary who can corrupt up to $t$ parties. We assume that every two parties in the network are connected by a pairwise secure channel and a public broadcast channel is available to all.

In this work we prove that existing lower bounds for the round complexity of VSS can be circumvented by introducing a negligible probability of error in the reconstruction phase. Previous results show that matching lower and upper bounds of {\em three} rounds for VSS exist, with $n = 3t+1$, where the lower bound holds for protocols where reconstruction always succeeds, i.e. with probability 1. In contrast we show that with a negligible probability of error in the reconstruction phase:

1. There exists an efficient 2-round VSS protocol for $n = 3t + 1$. This protocol has a 2-round reconstruction phase. If we assume that the adversary is non-rushing then we can achieve a 1-round reconstruction phase.

2. There exists an efficient 1-round VSS for $t = 1$ and $n > 3$.

3. We prove that our results are optimal both in resilience and number of rounds by proving: a. There does not exist a 2-round WSS (and hence VSS) for $n \leq 3t$. b. There is no 1-round VSS protocol for $t \geq 2$ and $n \geq 4$.

Designing efficient cryptographic protocols tolerating adaptive
adversaries, who are able to corrupt parties on the fly as the
computation proceeds, has been an elusive task.
In this paper we make progress in this area. First, we introduce a new
notion called \emph{semi-adaptive} security which is slightly stronger
than static security but \emph{significantly weaker than fully
adaptive security}.
The main difference between adaptive and semi-adaptive security is that semi-adaptive security allows for the case where one party starts out corrupted and the other party becomes corrupted later on, but {\em not} the case where both parties start out honest and become corrupted later on.
As such, semi-adaptive security is much easier to
achieve than fully adaptive security. We then give a simple, generic
protocol compiler which transforms any semi-adaptively secure protocol
into a fully adaptively secure one. The compilation effectively
decomposes the problem of adaptive security into two (simpler)
problems which can be tackled separately: the problem of semi-adaptive
security and the problem of realizing a weaker variant of secure
channels.

We solve the latter problem by means of a new primitive that we call {\em somewhat non-committing encryption}, resulting in significant efficiency improvements over the standard method for realizing secure channels using (fully) non-committing encryption. Somewhat non-committing encryption has two parameters: an equivocality parameter $\ell$ (measuring the number of ways that a ciphertext can be "opened") and the message size $k$. Our implementation is very efficient for small values of $\ell$, \emph{even} when $k$ is large. This translates into a very efficient compilation of semi-adaptively secure protocols for tasks with small input/output domains (such as bit-OT) into fully adaptively secure protocols.

Indeed, we showcase our methodology by applying it to the recent Oblivious Transfer protocol by Peikert \etal [Crypto 2008], which is only secure against static corruptions, to obtain the first efficient, adaptively secure and composable OT protocol. In particular, to transfer an $n$-bit message, we use a constant number of rounds and $O(n)$ public key operations.

Collusion-free protocols prevent subliminal communication (i.e., covert channels) between parties running the protocol. In the standard communication model, if one-way functions exist, then protocols satisfying any reasonable degree of privacy cannot be collusion-free. To circumvent this impossibility, Alwen, shelat and Visconti (CRYPTO 2008) recently suggested the _mediated model_ where all communication passes through a mediator. The goal is to design protocols where collusion-freeness is guaranteed as long as the mediator is honest, while standard security guarantees hold if the mediator is dishonest. In this model, they gave constructions of collusion-free protocols for commitments and zero-knowledge proofs in the two-party setting.

We strengthen the definition of Alwen et al., and resolve the main open questions in this area by showing a collusion-free protocol (in the mediated model) for computing any multi-party functionality.

We consider enhancing with privacy concerns a large class of auctions,
which include sealed-bid single-item auctions but also general
multi-item multi-winner auctions, our assumption being that bidders
primarily care about monetary payoff and secondarily worry about
exposing information about their type to other players and learning
information about other players' types, that is, bidders are
\emph{greedy then paranoid}. To treat privacy explicitly within the
game theoretic context, we put forward a novel \emph{hybrid utility}
model that considers both monetary and privacy components in players'
payoffs.

We show how to use rational cryptography to approximately implement any given \emph{ex interim} individually strictly rational equilibrium of such an auction without a trusted mediator through a cryptographic protocol that uses only point-to-point authenticated channels between the players. By "ex interim individually strictly rational" we mean that, given its type and before making its move, each player has a strictly positive expected utility. By "approximately implement" we mean that, under cryptographic assumptions, running the protocol is a computational Nash equilibrium with a payoff profile negligibly close to the original equilibrium.

The problem of carrying out cryptographic computations when the participating parties are \emph{rational} in a game-theoretic sense has recently gained much attention. One problem that has been studied considerably is that of rational secret sharing. In this setting, the aim is to construct a mechanism (protocol) so that parties behaving rationally have incentive to cooperate and provide their shares in the reconstruction phase, even if each party prefers to be the only one to learn the secret.

Although this question was only recently asked by Halpern and Teague (STOC 2004), a number of works with beautiful ideas have been presented to solve this problem. However, they all have the property that the protocols constructed need to know the actual utility values of the parties (or at least a bound on them). This assumption is very problematic because the utilities of parties are not public knowledge. We ask whether this \emph{dependence on the actual utility values} is really necessary and prove that in the basic setting, rational secret sharing cannot be achieved without it. On the positive side, we show that by somewhat relaxing the standard assumptions on the utility functions, it is possible to achieve utility independence. In addition to the above, observe that the known protocols for rational secret sharing that do not assume simultaneous channels all suffer from the problem that one of the parties can cause the others to output an incorrect value. (This problem arises when a party gains higher utility by having another output an incorrect value than by learning the secret itself; we argue that such a scenario is not at all unlikely.) We show that this problem is inherent in the non-simultaneous channels model, unless the actual values of the parties' utilities for this attack are known, in which case it is possible to prevent it from happening.

We prove the equivalence, up to a small polynomial approximation
factor $\sqrt{n/\log n}$,
of the lattice problems $\uSVP$ (unique Shortest Vector Problem),
$\BDD$ (Bounded Distance Decoding) and
$\GapSVP$ (the decision version of the Shortest Vector Problem).
This
resolves a long-standing open problem about the relationship
between $\uSVP$ and the more standard $\GapSVP$, as well as the
$\BDD$ problem commonly used in coding theory.
The main cryptographic application of our work is the proof that the
Ajtai-Dwork (\cite{Ajtai+Dwork-PublCrypwithEqui:97})
and the Regev (\cite{DBLP:journals/jacm/Regev04}) cryptosystems,
which were previously only known to be based on the hardness of
$\uSVP$, can be equivalently based on the hardness of
worst-case GapSVP$_{O(n^{2.5})}$ and GapSVP$_{O(n^{2})}$,
respectively.
Also, in the case of $\uSVP$ and $\BDD$, our connection is very tight,
establishing the equivalence (within a small constant approximation factor)
between the two most central problems used in lattice-based
public-key cryptography and coding theory.
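Schematically, the equivalences stated above can be summarized as follows. The arrows denote reductions in both directions, and the labels give the approximation-factor loss as stated in the abstract; the exact constants and reduction directions are left to the body of the paper:

```latex
\[
  \BDD
  \;\overset{O(1)}{\longleftrightarrow}\;
  \uSVP
  \;\overset{\sqrt{n/\log n}}{\longleftrightarrow}\;
  \GapSVP
\]
```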

The well-studied task of \emph{learning a linear function with errors}
is a seemingly hard problem and the basis for several cryptographic
schemes. Here we demonstrate additional applications that enjoy
strong security properties and a high level of efficiency. Namely, we
construct:

* Public-key and symmetric-key cryptosystems that provide security for \emph{key-dependent messages} and enjoy \emph{circular security}. Our schemes are highly efficient: in both cases the ciphertext is only a constant factor larger than the plaintext, and the cost of encryption and decryption is only $n\cdot \polylog(n)$ bit operations per message symbol in the public-key case, and $\polylog(n)$ bit operations in the symmetric-key case.

* Two efficient pseudorandom objects: a "weak randomized pseudorandom function" --- a relaxation of a standard PRF --- that can be computed obliviously via a simple protocol, and a length-doubling pseudorandom generator that can be computed by a circuit of $n \cdot \polylog(n)$ size. The complexity of our pseudorandom generator almost matches the complexity of the fastest known construction (Applebaum \etal, RANDOM 2006), which runs in linear time at the expense of relying on a nonstandard intractability assumption.

Our constructions and security proofs are simple and natural, and involve new techniques that may be of independent interest. In addition, by combining our constructions with prior ones, we get fast implementations of several other primitives and protocols.
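As a toy illustration of the kind of learning-with-errors-based symmetric encryption that such constructions build on, the following sketch encrypts one bit per ciphertext. Parameters and the noise distribution are chosen arbitrarily for readability; this is not the amortized, $\polylog(n)$-per-symbol scheme described above:

```python
import random

N, Q = 64, 4096  # toy dimension and modulus

def keygen():
    # secret vector s in Z_Q^N
    return [random.randrange(Q) for _ in range(N)]

def encrypt(s, bit):
    # ciphertext (a, <a,s> + e + bit*Q/2): a noisy inner product hides the bit
    a = [random.randrange(Q) for _ in range(N)]
    e = random.randrange(-4, 5)  # small noise, |e| <= 4 << Q/4
    b = (sum(ai * si for ai, si in zip(a, s)) + e + bit * (Q // 2)) % Q
    return (a, b)

def decrypt(s, ct):
    # subtract <a,s> and round: near 0 -> bit 0, near Q/2 -> bit 1
    a, b = ct
    d = (b - sum(ai * si for ai, si in zip(a, s))) % Q
    return 1 if Q // 4 < d < 3 * Q // 4 else 0
```

Decryption is always correct here because the noise magnitude is far below $Q/4$; the security of such schemes rests on the hardness of distinguishing the pairs $(a, \langle a,s\rangle + e)$ from uniform.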

We present a new methodology for proving security of encryption
systems using what we call Dual System Encryption. Our techniques
result in fully secure Identity-Based Encryption (IBE) and
Hierarchical Identity-Based Encryption (HIBE) systems under the simple
and established decisional Bilinear Diffie-Hellman and decisional
Linear assumptions. Our IBE system has ciphertexts, private keys, and
public parameters each consisting of a constant number of group
elements. These are the first HIBE system and the first IBE system
with short parameters proven secure under simple assumptions.

In a Dual System Encryption system, both ciphertexts and private keys can take on one of two indistinguishable forms. A private key or ciphertext is normal if it is generated by the system's key generation or encryption algorithm, respectively. Normal keys and ciphertexts behave as one expects in an IBE system. In addition, we define semi-functional keys and ciphertexts. A semi-functional private key can decrypt all normally generated ciphertexts; however, decryption fails if one attempts to decrypt a semi-functional ciphertext with a semi-functional private key. Analogously, semi-functional ciphertexts can be decrypted only by normal private keys.

Dual System Encryption opens up a new way to prove security of IBE and related encryption systems. We define a sequence of games where we change first the challenge ciphertext and then the private keys one by one to be semi-functional. We finally end up in a game where the challenge ciphertext and all private keys are semi-functional at which point proving security is straightforward.
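The hybrid argument sketched above can be written as a sequence of games, where $q$ denotes the number of private-key queries made by the adversary (the labels are ours, chosen for illustration):

```latex
\[
  \mathrm{Game}_{\mathrm{Real}}
  \;\to\; \mathrm{Game}_0
  \;\to\; \mathrm{Game}_1
  \;\to\; \cdots
  \;\to\; \mathrm{Game}_q
  \;\to\; \mathrm{Game}_{\mathrm{Final}}
\]
```

Here $\mathrm{Game}_0$ makes the challenge ciphertext semi-functional, $\mathrm{Game}_k$ additionally makes the first $k$ queried keys semi-functional, and in $\mathrm{Game}_{\mathrm{Final}}$, with everything semi-functional, proving security is straightforward.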

We consider the cryptographic group of \emph{Signed Quadratic Residues}. This group is particularly useful for cryptography since it is a "gap-group," in which the computational problem (i.e., computing square roots) is as hard as factoring, while the corresponding decisional problem (i.e., recognizing \emph{signed} quadratic residues) is easy. We are able to show that under the factoring assumption, the Strong Diffie-Hellman assumption over the signed quadratic residues holds. That is, in this group the Diffie-Hellman problem is hard, even in the presence of a Decisional Diffie-Hellman oracle.
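A small sanity check of the gap-group structure described above, over a toy Blum integer $N = 7 \cdot 11$. It relies on the characterization (assuming $N$ is a Blum integer) of the signed quadratic residues as exactly the Jacobi-symbol-$+1$ elements of $[1,(N-1)/2]$, which is what makes the decisional problem easy:

```python
from math import gcd

def jacobi(a, n):
    """Jacobi symbol (a|n) for odd n > 0, via quadratic reciprocity."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

N = 7 * 11  # toy Blum integer: both prime factors are 3 mod 4

# QR_N^+ = absolute values |x| = min(x, N - x) of the quadratic residues mod N
qr_plus = set()
for x in range(1, N):
    if gcd(x, N) == 1:
        q = x * x % N
        qr_plus.add(min(q, N - q))

# Easy membership test: y is a signed QR iff 1 <= y <= (N-1)/2 and (y|N) = +1
easy = {y for y in range(1, (N - 1) // 2 + 1)
        if gcd(y, N) == 1 and jacobi(y, N) == 1}
assert qr_plus == easy

# Closure under the group operation a o b = |a * b mod N|
for a in qr_plus:
    for b in qr_plus:
        c = a * b % N
        assert min(c, N - c) in qr_plus
```

The gap is visible here in miniature: deciding membership takes one Jacobi-symbol computation, while extracting square roots in this group is as hard as factoring $N$.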

We demonstrate the usefulness of our results by applying them to the Hybrid ElGamal encryption scheme (aka Diffie-Hellman integrated encryption scheme --- DHIES). Concretely, we consider the security of the scheme when instantiated over the group of signed quadratic residues. It is known that, in the random oracle model, the scheme is chosen-ciphertext (CCA) secure under the Strong Diffie-Hellman assumption and hence, by our results, under the standard factoring assumption. We furthermore show that, in the standard model, Hybrid ElGamal is CCA secure under the higher residuosity assumption, provided that the hash function used is four-wise independent. The latter result is obtained using the recent "randomness extraction framework" for hash proof systems.
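A minimal hashed-ElGamal/DHIES-style sketch of the KEM/DEM shape discussed above. This is an illustration only, not the analyzed instantiation over signed quadratic residues: it uses a toy group $\mathbb{Z}_P^*$ with no subgroup checks, SHA-512 as the key-derivation function, and HMAC-SHA256 as the MAC:

```python
import hashlib, hmac, secrets

P = 2**255 - 19  # a prime; Z_P* serves as a toy group
G = 2

def keygen():
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def kdf(shared):
    # derive (encryption key, MAC key) from the Diffie-Hellman value
    d = hashlib.sha512(shared.to_bytes(32, "big")).digest()
    return d[:32], d[32:]

def encrypt(pk, msg):  # msg: bytes, at most 32 bytes
    r = secrets.randbelow(P - 2) + 1
    u = pow(G, r, P)                      # ephemeral value g^r
    ke, km = kdf(pow(pk, r, P))           # keys from pk^r = g^(xr)
    ct = bytes(m ^ k for m, k in zip(msg, ke))  # one-time-pad the message
    tag = hmac.new(km, ct, hashlib.sha256).digest()
    return u, ct, tag

def decrypt(x, u, ct, tag):
    ke, km = kdf(pow(u, x, P))            # same keys from u^x = g^(rx)
    if not hmac.compare_digest(tag, hmac.new(km, ct, hashlib.sha256).digest()):
        raise ValueError("MAC check failed")
    return bytes(c ^ k for c, k in zip(ct, ke))
```

The MAC check is what gives the hybrid scheme its chosen-ciphertext flavor: a mauled ciphertext is rejected before any plaintext is released.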

We present the first signature scheme which is "short", stateless and secure under the RSA assumption in the standard model. Prior short, standard-model signatures in the RSA setting required either a strong complexity assumption such as Strong RSA or (recently) that the signer maintain state. A signature in our scheme consists of one element in Z_N* and one integer. The public key is also short, requiring only the modulus N, one element of Z_N*, one integer and one PRF seed.

To design our signature, we employ the known generic construction of fully-secure signatures from weakly-secure signatures and a chameleon hash. We then introduce a new proof technique for reasoning about weakly-secure signatures. This technique enables the simulator to predict a prefix of the message on which the adversary will forge; knowledge of this prefix is then used to embed the challenge. This technique has wider applications beyond RSA.

We also use it to provide an entirely new analysis of the security of the Waters signatures: the only short, stateless signatures known to be secure under the Computational Diffie-Hellman assumption in the standard model.

The notion of smooth projective hash functions was proposed by
Cramer and Shoup and can be seen as a special type of zero-knowledge
proof system for a language. Though originally used as a means
to build efficient chosen-ciphertext secure public-key
encryption schemes,
some variations of the Cramer-Shoup smooth projective hash functions
also found applications in several other contexts, such as
password-based authenticated key exchange and oblivious transfer.
In this paper, we first address the problem of building smooth
projective hash functions for more complex languages. More
precisely, we show how to build such functions for languages that
can be described in terms of disjunctions and conjunctions of
simpler languages for which smooth projective hash functions are
known to exist. Next, we illustrate how smooth projective
hash functions for such complex languages can be efficiently
combined with extractable commitment schemes, avoiding the need for
zero-knowledge proofs. Finally, we explain how to apply these
results to obtain more efficient solutions to two well-known
cryptographic problems: public-key certification that guarantees
knowledge of the private key by the user, without random oracles
or zero-knowledge proofs, and adaptively secure password-based
authenticated key exchange in the universal composability
framework with erasures.
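The basic Diffie-Hellman-style smooth projective hash function underlying such constructions can be sketched as follows (toy parameters, ours for illustration). For a word $(u_1, u_2) = (g_1^r, g_2^r)$ in the language, the evaluation using the projection key and the witness $r$ agrees with the evaluation using the full hashing key:

```python
import secrets

P = 2**127 - 1   # a Mersenne prime; toy group Z_P*, exponents taken mod P-1
g1, g2 = 3, 5    # two fixed group elements

def hash_keygen():
    # hashing key hk = (alpha, beta); projection key hp = g1^alpha * g2^beta
    alpha = secrets.randbelow(P - 1)
    beta = secrets.randbelow(P - 1)
    hp = pow(g1, alpha, P) * pow(g2, beta, P) % P
    return (alpha, beta), hp

def word(r):
    """A word (u1, u2) in the DH language, with witness r."""
    return pow(g1, r, P), pow(g2, r, P)

def hash_with_key(hk, u1, u2):
    # anyone holding hk can hash any word
    alpha, beta = hk
    return pow(u1, alpha, P) * pow(u2, beta, P) % P

def hash_with_witness(hp, r):
    # holding only hp, the hash is computable iff one knows a witness r
    return pow(hp, r, P)
```

Correctness is the identity $(g_1^{\alpha} g_2^{\beta})^r = u_1^{\alpha} u_2^{\beta}$; smoothness says that on words outside the language the hash value is statistically hidden given only `hp`. Hash functions for conjunctions of two such languages can then be obtained, roughly, by multiplying the two hash values, which is the flavor of composition the paper develops.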