
List of Papers (with Abstracts)

Concurrent Zero Knowledge without Complexity Assumptions
 Daniele Micciancio (UCSD) and Shien Jin Ong (Harvard) and Amit Sahai (UCLA) and Salil Vadhan (Harvard)
 Abstract:
We provide unconditional constructions of concurrent
statistical zero-knowledge proofs for
a variety of nontrivial problems (not known to have probabilistic
polynomial-time algorithms). The problems include Graph Isomorphism,
Graph Nonisomorphism, Quadratic Residuosity, Quadratic
Nonresiduosity, a restricted version of Statistical Difference, and
approximate versions of (the coNP forms of) the Shortest Vector
Problem and Closest Vector Problem in lattices.
For some of the problems, such as Graph Isomorphism and Quadratic
Residuosity, the proof systems have provers that can be implemented
in polynomial time (given an NP witness) and have $Õ(log n)$ rounds,
which is known to be essentially optimal for black-box
simulation.
To the best of our knowledge, these are the first constructions of
concurrent zero-knowledge proofs in the plain, asynchronous model
(i.e., without setup or timing assumptions) that do not require
complexity assumptions (such as the existence of one-way
functions).
 Interactive Zero-Knowledge with Restricted Random Oracles
 Moti Yung (RSA Labs and Columbia Univ.) and Yunlei Zhao (Fudan Univ.)
 Abstract:
We investigate the design and proofs of zero-knowledge (ZK) interactive
systems under what we call the ``restricted random oracle model'',
which restrains the usage of the oracle in the protocol design to that
of collapsing protocol rounds à la Fiat-Shamir heuristics, and
limits the oracle programmability in the security proofs. We
analyze subtleties resulting from the involvement of random oracles in
the interactive setting and derive our methodology. Then we
investigate the Feige-Shamir 4-round ZK argument for NP in
this model: first we show that a 2-round protocol is possible for a
very interesting set of languages; we then show that while the
original protocol is not concurrently secure in the public-key model,
a modified protocol in our model is, in fact, concurrently secure in
the bare public-key model. We point at applications and implications
of this fact. Of possible independent interest is a concurrent attack
against the Feige-Shamir ZK in the public-key model (for which it was
not originally designed).
 Non-Interactive Zero-Knowledge from Homomorphic Encryption
 Ivan Damgaard (Univ. of Aarhus) and Nelly Fazio and Antonio Nicolosi (NYU)
 Abstract:
We propose a method for compiling a class of Σ-protocols
(3-move public-coin protocols) into
non-interactive zero-knowledge arguments. The method is based on
homomorphic encryption and does not use random oracles. It only requires
that a private/public key pair is set up for the verifier. The method applies
to all known discrete-log based Σ-protocols. As applications,
we obtain non-interactive threshold RSA without random oracles, and
non-interactive zero-knowledge for NP more efficiently than by
previous methods.
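The Σ-protocols mentioned above are 3-move public-coin proofs; Schnorr's protocol for knowledge of a discrete logarithm is the textbook discrete-log example of the class. A minimal sketch with toy parameters (illustrative only, and not the paper's compiler):

```python
import random

# Toy group: the order-q subgroup of Z_p^* (tiny illustrative parameters).
p, q, g = 23, 11, 4          # 4 generates the order-11 subgroup of Z_23^*

def schnorr_round(w):
    """One run of Schnorr's 3-move public-coin (Sigma) protocol for
    proving knowledge of w such that h = g^w mod p."""
    h = pow(g, w, p)
    # Move 1 (prover): commit to fresh randomness r.
    r = random.randrange(q)
    a = pow(g, r, p)
    # Move 2 (verifier): a random public-coin challenge.
    e = random.randrange(q)
    # Move 3 (prover): respond.
    z = (r + e * w) % q
    # Verifier accepts iff g^z == a * h^e (mod p).
    return pow(g, z, p) == (a * pow(h, e, p)) % p

assert all(schnorr_round(w=7) for _ in range(20))
```

The compiler in the paper replaces the verifier's random challenge with a value derived via the verifier's key pair; the sketch above only shows the interactive protocol being compiled.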
 Ring Signatures: Stronger Definitions, and Constructions without Random Oracles
 Adam Bender and Jonathan Katz and Ruggero Morselli (Univ. of Maryland, College Park)
 Abstract:
Ring signatures, first introduced by Rivest, Shamir, and Tauman,
enable a user to sign a message so that a ring of possible signers
(of which the user is a member) is identified, without revealing exactly
which member of that ring actually generated the signature.
In contrast to group signatures, ring signatures are completely "ad hoc" and
do not require any central authority or coordination among the various users
(indeed, users do not even need to be aware of each other); furthermore, ring
signature schemes grant users fine-grained control over the
level of anonymity associated with any particular signature.
This paper has two main areas of focus. First, we examine previous
definitions of security for ring signature schemes and suggest that
most of these prior definitions are too weak, in the sense that they
do not take into account certain realistic attacks. We propose new
definitions of anonymity and unforgeability which address these
threats, and then give separation results proving that our new
notions are strictly stronger than previous ones. Next, we show two
constructions of ring signature schemes in the standard model: one
based on generic assumptions which satisfies our strongest
definitions of security, and a second, more efficient scheme
achieving weaker security guarantees and more limited functionality.
These are the first constructions of ring signature schemes that do
not rely on random oracles or ideal ciphers.
 Efficient Blind and Partially Blind Signatures Without Random Oracles
 Tatsuaki Okamoto (NTT Labs)
 Abstract:
This paper proposes a new efficient signature scheme from bilinear
maps that is secure in the standard model (i.e., without the random
oracle model). Our signature scheme is more effective in many
applications (e.g., blind signatures, group signatures, anonymous
credentials etc.) than the existing secure signature schemes in the
standard model such as the Boneh-Boyen, Camenisch-Lysyanskaya,
Cramer-Shoup and Waters schemes (and their variants). The security
proof of our scheme requires a slightly stronger assumption, the 2SDH
assumption, than the SDH assumption used by Boneh-Boyen. As typical
applications of our signature scheme, this paper presents efficient
blind signatures and partially blind signatures that are secure in
the standard model. Here, partially blind signatures are a
generalization of blind signatures (i.e., blind signatures are a
special case of partially blind signatures) and have many applications
including electronic cash and voting. Our blind signature scheme is
much more efficient than the existing secure blind signature schemes
in the standard model such as the Camenisch-Koprowski-Warinschi and
Juels-Luby-Ostrovsky schemes, and is also almost as efficient as the
most efficient blind signature schemes whose security has been
analyzed heuristically or in the random oracle model. Our partially
blind signature scheme is the first one that is secure in the standard
model and it is very efficient (almost as efficient as our blind
signatures). We also present a blind signature scheme based on the
Waters signature scheme.
 Key Exchange Using Passwords and Long Keys
 Vladimir Kolesnikov and Charles Rackoff (Univ. of Toronto)
 Abstract:
We propose a new model for key exchange (KE) based on a combination
of different types of keys. In our setting, servers
exchange keys with clients, who memorize short passwords and
carry (stealable) storage cards containing long (cryptographic) keys.
Our setting is a generalization of that of Halevi and Krawczyk (HK),
where clients have a password and the public key of the server.
We point out a subtle flaw in the protocols of HK and demonstrate a
practical attack on them, resulting in a full password compromise. We
give a definition of security of KE in our (and thus also in the HK)
setting and discuss many related subtleties. We define and discuss
protection against denial of access attacks, which is not possible
in any of the previous KE models that use passwords. Finally, we give a
very simple and efficient protocol satisfying all our requirements.
 Mercurial Commitments: Minimal Assumptions
and Efficient Constructions
 Dario Catalano (ENS) and Yevgeniy Dodis (NYU) and Ivan Visconti (Univ. of Salerno)
 Abstract:
(Non-interactive) Trapdoor Mercurial Commitments (TMCs) were
introduced by Chase et al. [8] and form a key building block for
constructing zero-knowledge sets (introduced by Micali, Rabin and
Kilian [28]). TMCs are quite similar to, and certainly imply, ordinary
(non-interactive) trapdoor commitments (TCs). Unlike TCs, however,
they allow for some additional freedom in the way the message is
opened: informally, by allowing one to claim that "if this commitment
can be opened at all, then it would open to this message". Prior to
this work, it was not clear if this addition is critical or not, since
all the constructions of TMCs presented in [8] and [28] used strictly
stronger assumptions than TCs. We give an affirmative answer to this
question, by providing simple constructions of TMCs from any trapdoor
bit commitment scheme. Moreover, by plugging in various trapdoor bit
commitment schemes, we get, in the trusted parameters (TP) model, all
the efficient constructions from [28] and [8], as well as several
immediate new (either generic or efficient) constructions. In
particular, we get a construction of TMCs from any one-way function in
the TP model, and, by using a special flavor of TCs, called hybrid TCs
[6], even in the (weaker) shared random string (SRS) model.
Our results imply that (a) mercurial commitments can be viewed as
surprisingly simple variations of trapdoor commitments; and (b) the
existence of non-interactive zero-knowledge sets is equivalent to the
existence of collision-resistant hash functions. Of independent
interest, we also give a stronger and yet much simpler definition of
mercurial commitments than that of [8], which is also met by our
constructions in the TP model.
 Efficient Collision-Resistant Hashing from Worst-Case Assumptions on Cyclic Lattices
 Chris Peikert (MIT) and Alon Rosen (Harvard)
 Abstract:
The generalized knapsack function is defined as
$f_{a}(x) = ∑_{i} a_i · x_i$, where
$a = (a_1, ..., a_m)$ consists of $m$ elements from some ring $R$,
and $x = (x_1, ..., x_m)$ consists of $m$ coefficients from a
specified subset $S ⊆ R$. Micciancio (FOCS 2002) proposed a
specific choice of the ring $R$ and subset $S$ for which inverting
this function (for random $a,x$) is at least as hard
as solving certain worstcase problems on cyclic lattices.
We show that for a different choice of $S ⊂ R$, the
generalized knapsack function is in fact collision-resistant,
assuming it is infeasible to approximate the shortest vector in
$n$-dimensional cyclic lattices up to factors $Õ(n)$. For
slightly larger factors, we even get collision-resistance for
any $m ≥ 2$. This yields very efficient
collision-resistant hash functions having key size and time
complexity almost linear in the security parameter $n$. We also
show that altering $S$ is necessary, in the sense that Micciancio's
original function is not collision-resistant (nor even
universal one-way).
Our results exploit an intimate connection between the linear
algebra of $n$-dimensional cyclic lattices and the ring
$Z[α]/(α^n-1)$, and crucially depend on the
factorization of $α^n-1$ into irreducible cyclotomic
polynomials. We also establish a new bound on the discrete Gaussian
distribution over general lattices, employing techniques introduced
by Micciancio and Regev (FOCS 2004) and also used by Micciancio in
his study of compact knapsacks.
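For concreteness, with the ring $R = Z_q[α]/(α^n-1)$ that corresponds to cyclic lattices, evaluating the generalized knapsack function amounts to summing cyclic convolutions. A toy sketch (parameters far too small for security; the restriction of inputs to a small subset $S$, here coefficients in ${0,1}$, is what the paper's analysis is about):

```python
import random

# Toy instance of the generalized knapsack f_a(x) = sum_i a_i * x_i over
# the ring R = Z_q[alpha]/(alpha^n - 1); multiplication in R is cyclic
# (circular) convolution of coefficient vectors. Parameters are illustrative.
n, m, q = 8, 4, 257

def ring_mul(u, v):
    """Multiply two elements of R, given as length-n coefficient lists."""
    out = [0] * n
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            out[(i + j) % n] = (out[(i + j) % n] + ui * vj) % q
    return out

def knapsack_hash(a, x):
    """Compress the m*n small coefficients of x into one ring element."""
    acc = [0] * n
    for ai, xi in zip(a, x):
        prod = ring_mul(ai, xi)
        acc = [(s + t) % q for s, t in zip(acc, prod)]
    return acc

# Random public key a over R, and an input with coefficients from S = {0,1}.
a = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
x = [[random.randrange(2) for _ in range(n)] for _ in range(m)]
digest = knapsack_hash(a, x)
assert len(digest) == n
```

Note that $f_a$ is linear over $R$, which is exactly why collision-resistance hinges on the input set $S$ rather than on the map itself.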
 On Error Correction in the Exponent
 Chris Peikert (MIT)
 Abstract: Given a corrupted word $w = (w_1, ..., w_n)$
from a Reed-Solomon code of distance $d$, there are many ways to
efficiently find and correct its errors. But what if we are instead
given $(g^{w_1}, ..., g^{w_n})$, where $g$ generates some large cyclic
group: can the errors still be corrected efficiently? This problem
is called error correction in the exponent, and though it
arises naturally in many areas of cryptography, it has received little
attention.
We first show that unique decoding and list decoding in
the exponent are no harder than the computational Diffie-Hellman (CDH)
problem in the same group. The remainder of our results are negative:
- Under mild assumptions on the parameters, we show that
bounded-distance decoding in the exponent, under
$e = d - k^{1-ε}$ errors for any $ε > 0$, is as hard as
the discrete logarithm problem in the same group.
- For generic algorithms (as defined by Shoup, Eurocrypt 1997) that
treat the group as a "black-box," we show lower bounds for decoding
that exactly match known algorithms.
Our generic lower
bounds also extend to decisional variants of the decoding problem, and
to groups in which the decisional Diffie-Hellman (DDH) problem is
easy. This suggests that hardness of decoding in the exponent is a
qualitatively new assumption that lies "between" the DDH and CDH
assumptions.
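To make the problem concrete, an instance can be built by evaluating a Reed-Solomon codeword, corrupting it, and hiding each coordinate in the exponent; the decoder then sees only the group elements. A toy sketch (tiny illustrative parameters):

```python
# Toy instance of "error correction in the exponent": a Reed-Solomon
# codeword w = (P(1), ..., P(n)) over F_q is published only as
# (g^{w_1}, ..., g^{w_n}) in a group of order q. Parameters are illustrative.
p, q, g = 23, 11, 4              # g generates the order-q subgroup of Z_23^*

def rs_codeword(coeffs, n):
    """Evaluate the message polynomial P at the points 1..n over F_q."""
    return [sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q
            for i in range(1, n + 1)]

coeffs = [3, 7]                  # P(X) = 3 + 7X, so message length k = 2
w = rs_codeword(coeffs, 5)       # codeword of length n = 5
w[2] = (w[2] + 1) % q            # corrupt one coordinate
hidden = [pow(g, wi, p) for wi in w]
# A classical decoder needs the w_i themselves; here only g^{w_i} is
# available, and recovering P from `hidden` is decoding in the exponent.
```

The sketch only constructs the problem instance; per the results above, solving it generically is as hard as discrete log for large error counts.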
 On the Relation Between the Ideal Cipher and the Random Oracle Models
 Yevgeniy Dodis and Prashant Puniya (NYU)
 Abstract:
The Random Oracle Model and the Ideal Cipher Model are two of the most
popular idealized models in cryptography. It is a fundamentally
important practical and theoretical problem to compare the relative
strengths of these models and to see how they relate to each other.
Recently, Coron et al. [8] proved that one can securely
instantiate a random oracle in the ideal cipher model. In this paper,
we investigate if it is possible to instantiate an ideal block cipher
in the random oracle model, which is a considerably more challenging
question. We conjecture that the Luby-Rackoff construction
[19] with a sufficient number of rounds should suffice to show
this implication. This does not follow from the famous Luby-Rackoff
result [19] showing that 4 rounds are enough to turn a
pseudorandom function into a pseudorandom permutation, since the
results of the intermediate rounds are known to everybody. As a
partial step toward resolving this conjecture, we show that random
oracles imply ideal ciphers in the honest-but-curious model,
where all the participants are assumed to follow the protocol, but
keep all their intermediate results. Namely, we show that the
Luby-Rackoff construction with a super-logarithmic number of
rounds can be used to instantiate the ideal block cipher in any
honest-but-curious cryptosystem, and result in a similar
honest-but-curious cryptosystem in the random oracle model. We also
show that securely instantiating the ideal cipher using the
Luby-Rackoff construction with up to a logarithmic number of rounds is
equivalent in the honest-but-curious and malicious models.

Intrusion-Resilience via the Bounded-Storage Model
 Stefan Dziembowski (Warsaw University and CNR Pisa)
 Abstract: We introduce a new method of
achieving intrusion-resilience in cryptographic protocols. More
precisely, we show how to preserve the security of such protocols even if
a malicious program (e.g., a virus) was installed on a computer of an
honest user (and was later removed). The security of our protocols
relies on the assumption that the amount of data that the adversary
can transfer from the infected machine is limited (however, we allow
the adversary to perform any efficient computation on the user's private
data before deciding what to transfer). We focus on two
cryptographic tasks, namely: session-key generation and entity
authentication. Our method is based on results from the
Bounded-Storage Model.

Perfectly Secure Password Protocols in the Bounded Retrieval Model
 Giovanni Di Crescenzo (Telcordia) and Richard Lipton (Georgia Tech.)
and Shabsi Walfish (NYU)
 Abstract:
We introduce a formal model, which we call the Bounded Retrieval
Model, for the design and analysis of cryptographic protocols
remaining secure against intruders that can retrieve a limited amount
of parties' private memory. The underlying model assumption on the
intruders' behavior is supported by real-life physical and logical
considerations, such as the inherent superiority of a party's local
data bus over a remote intruder's bandwidth-limited channel,
or the detectability of voluminous resource access by any local
intruder. More specifically, we assume a fixed upper bound on the
amount of a party's storage retrieved by the adversary. Our model
could be considered a nontrivial variation of the well-studied
Bounded Storage Model, which postulates a bound on the amount of
storage available to an adversary attacking a given system.
In this model we study perhaps the simplest among cryptographic tasks:
user authentication via a password protocol. Specifically, we study
the problem of constructing efficient password protocols that remain
secure against offline dictionary attacks even when a large (but
bounded) part of the storage of the server responsible for password
verification is retrieved by an intruder through a remote or local
connection. We show password protocols having satisfactory
performance on both efficiency (in terms of the server's
running time) and provable security (making the offline
dictionary attack not significantly stronger than the online
attack). We also study the tradeoffs between efficiency, quantitative
and qualitative security in these protocols. All our schemes achieve
perfect security (security against computationally unbounded
adversaries). Our main schemes achieve the interesting efficiency
property of the server's lookup complexity being much smaller than
the adversary's retrieval bound.

Polylogarithmic Private Approximations and Efficient Matching
 Piotr Indyk (MIT) and David Woodruff (MIT and Tsinghua Univ.)
 Abstract:
In [12] a private approximation of a function $f$ is defined to
be another function $F$ that approximates $f$ in the usual sense, but
does not reveal any information about $x$ other than what can be
deduced from $f(x)$. We give the first two-party private approximation
of the $l_2$ distance with polylogarithmic communication. This, in
particular, resolves the main open question of [12].
We then look at the private near neighbor problem in which
Alice has a query point in ${0,1}^d$ and Bob a set of $n$ points in
${0,1}^d$, and Alice should privately learn the point closest to her
query. We improve upon existing protocols, resolving open questions of
[13, 10]. Then, we relax the problem by defining the private
approximate near neighbor problem, which requires introducing a
notion of secure computation of approximations for functions that
return sets of points rather than values. For this problem we give
several protocols with sublinear communication.
 Calibrating Noise to Sensitivity in Private Data Analysis
 Cynthia Dwork and Frank McSherry (Microsoft) and Kobbi Nissim (Ben-Gurion Univ.) and Adam Smith (Weizmann)
 Abstract:
We continue a line of research initiated in [10, 11]
on privacypreserving statistical databases. Consider a trusted
server that holds a database of sensitive information. Given a query
function $f$ mapping databases to reals, the so-called true answer
is the result of applying $f$ to the database. To protect
privacy, the true answer is perturbed by the addition of random
noise generated according to a carefully chosen distribution, and
this response, the true answer plus noise, is returned to the user.
Previous work focused on the case of noisy sums, in which $f =
∑_{i} g(x_i)$, where $x_i$ denotes the $i$th row of the
database and $g$ maps database rows to $[0,1]$. We extend the study to
general functions $f$, proving that privacy can be preserved by
calibrating the standard deviation of the noise according to the
sensitivity of the function $f$. Roughly speaking, this is the
amount that any single argument to $f$ can change its output. The
new analysis shows that for several particular applications
substantially less noise is needed than was previously understood to
be the case.
The first step is a very clean characterization of privacy in terms
of indistinguishability of transcripts. Additionally, we obtain
separation results showing the increased value of interactive
sanitization mechanisms over non-interactive ones.
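As an illustration of the calibration idea: in the noisy-sums case, one row changes $f$ by at most 1, so Laplace noise with scale $1/ε$ suffices. A minimal sketch of this mechanism (the parameter names and toy database are ours):

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) by inverting the CDF of a uniform draw."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_sum(rows, g, epsilon):
    """Release f(db) = sum_i g(x_i) with g mapping rows to [0,1].
    One row changes f by at most 1 (the sensitivity of f), so Laplace
    noise with scale 1/epsilon calibrates the perturbation to it."""
    true_answer = sum(g(x) for x in rows)
    return true_answer + laplace_noise(1.0 / epsilon)

db = [0.2, 0.9, 0.5, 1.0]                        # toy database, one value per row
release = noisy_sum(db, g=lambda x: x, epsilon=0.5)
```

Smaller $ε$ (stronger privacy) means larger noise scale; the paper's general result replaces the constant 1 with the sensitivity of an arbitrary $f$.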
 Unconditionally Secure Constant-Rounds Multi-Party Computation for Equality, Comparison, Bits and Exponentiation
 Ivan Damgaard and Matthias Fitzi (Univ. of Aarhus) and Eike Kiltz (CWI Amsterdam) and Jesper Buus Nielsen and Tomas Toft (Univ. of Aarhus)
 Abstract:
We show that if a set of players hold shares of a value $a∈F_{p}$
for some prime $p$ (where the set of shares is written $[a]_{p}$),
it is possible to compute, in constant rounds and with unconditional
security, sharings of the bits of $a$, i.e., compute sharings
$[a_0]_{p}, ..., [a_{\ell-1}]_{p}$
such that
$\ell = ⌈log p⌉$, $a_0, ..., a_{\ell-1} ∈ {0,1}$ and
$a = ∑_{i=0}^{\ell-1} a_i 2^{i}$.
Our protocol
is secure against active adversaries and works
for any linear secret sharing scheme with a multiplication protocol.
The complexity of our protocol is $O(\ell log \ell)$ invocations of the
multiplication protocol for the underlying secret sharing scheme, carried
out in $O(1)$ rounds.
This result immediately implies solutions to other longstanding open
problems such as constant-rounds and unconditionally secure protocols
for deciding whether a shared number is zero, comparing shared
numbers, raising a shared number to a shared exponent and reducing a
shared number modulo a shared modulus.
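The relation the protocol computes on sharings is just the binary expansion of $a$; computed in the clear (with no secret sharing), it reads as follows, which is a useful sanity check on the notation:

```python
import math

def bit_decompose(a, p):
    """Return the bits a_0..a_{l-1} (low to high) with l = ceil(log2 p),
    satisfying a = sum_i a_i * 2^i. This is the relation the protocol
    establishes on *sharings* [a]_p rather than on a in the clear."""
    l = math.ceil(math.log2(p))
    return [(a >> i) & 1 for i in range(l)]

p, a = 101, 77
bits = bit_decompose(a, p)
assert a == sum(b << i for i, b in enumerate(bits))
```

The whole difficulty of the paper is doing this in $O(1)$ rounds on shared values with unconditional security; the clear-text version above only pins down the input/output relation.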
 Efficient MultiParty Computation with Dispute Control
 Zuzana Beerliovà and Martin Hirt (ETH Zurich)
 Abstract:
Secure multiparty computation (MPC) allows a set of $n$ players to
securely compute an agreed function of their inputs, even when up to $t$
players are under the control of an (active or passive) adversary. In the
information-theoretic model MPC is possible if and only if $t<n/2$ (where
active security with $t≥n/3$ requires a trusted key setup).
Known passive MPC protocols require a communication of $O(n^2)$ field
elements per multiplication. Recently, the same communication complexity
was achieved for active security with $t<n/3$. It remained an open
question whether $O(n^2)$ complexity is achievable for $n/3≤t<n/2$.
We answer this question in the affirmative by presenting an active MPC
protocol that provides optimal ($t<n/2$) security and communicates only
$O(n^2)$ field elements per multiplication. Additionally the protocol
broadcasts $O(n^3)$ field elements overall, for the whole
computation.
The communication complexity of the new protocol is to be compared with
the most efficient previously known protocol for the same model, which
requires broadcasting $Ω(n^5)$ field elements per
multiplication. This substantial reduction in communication is mainly
achieved by applying a new technique called dispute control. During
the course of the protocol, the players keep track of disputes
that arise among them, and the ongoing computation is adjusted such that
known disputes cannot arise again. Dispute control is inspired by the
player-elimination framework. However, player elimination is not suited
for models with $t≥ n/3$.
 Round-Optimal
and Efficient Verifiable Secret Sharing
 Matthias Fitzi (Aarhus Univ.) and Juan Garay (Bell Labs) and Shyamnath
Gollakota (IIT Madras) and C. Pandu Rangan (IIT Madras) and Kannan Srinathan (International Institute of Information Technology, Hyderabad, India)
 Abstract:
We consider perfect verifiable secret sharing (VSS) in a synchronous
network of $n$ processors (players) where a designated player called
the dealer wishes to distribute a secret $s$ among the players
in a way that no $t$ of them obtain any information, but any $t+1$
players obtain full information about the secret. The round complexity
of a VSS protocol is defined as the number of rounds performed in the
sharing phase. Gennaro, Ishai, Kushilevitz and Rabin showed that three
rounds are necessary and sufficient when $n>3t$. Sufficiency, however,
was only demonstrated by means of an inefficient (i.e., exponentialtime)
protocol, and the construction of an efficient threeround protocol was
left as an open problem.
In this paper, we present an efficient threeround protocol for VSS.
The solution is based on a threeround solution of socalled weak
verifiable secret sharing (WSS), for which we also prove that three
rounds is a lower bound. Furthermore, we also demonstrate that one
round is sufficient for WSS when $n>4t$, and that VSS can be achieved
in $1+ε$ amortized rounds (for any $ε>0$) when
$n>3t$.
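For context, the base object here is plain Shamir sharing over $F_p$: the dealer embeds the secret as the constant term of a random degree-$t$ polynomial, any $t+1$ shares interpolate it, and any $t$ reveal nothing. VSS adds verification against a cheating dealer; this sketch shows only the non-verifiable base scheme:

```python
import random

def shamir_share(secret, t, n, p):
    """Split `secret` in F_p into n shares with threshold t: any t+1
    shares reconstruct, any t give no information. (Plain Shamir;
    VSS layers dealer verification on top of this.)"""
    coeffs = [secret] + [random.randrange(p) for _ in range(t)]
    return [(i, sum(c * pow(i, j, p) for j, c in enumerate(coeffs)) % p)
            for i in range(1, n + 1)]

def reconstruct(shares, p):
    """Lagrange interpolation at 0 over F_p."""
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        s = (s + yi * num * pow(den, p - 2, p)) % p
    return s

p = 2 ** 31 - 1                       # a prime field (toy choice)
shares = shamir_share(12345, t=2, n=7, p=p)
assert reconstruct(shares[:3], p) == 12345
```

The round-complexity question studied in the paper concerns how the dealer convinces all players that such shares are consistent, which the sketch does not address.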
 Generalized Environmental Security from Number Theoretic Assumptions
 Tal Malkin (Columbia Univ.) and Ryan Moriarty (UCLA) and Nikolai Yakovenko (Google)
 Abstract:
We address the problem of realizing concurrently composable secure
computation without setup assumptions. While provably impossible in
the UC framework of [Can01], Prabhakaran and Sahai recently
suggested a relaxed framework called generalized Environmental
Security (gES) [PS04], as well as a restriction of it to a
"clientserver" setting based on monitored functionalities
[PS05]. In these settings, the impossibility results do not
apply, and they provide secure protocols relying on new nonstandard
assumptions regarding the existence of hash functions with certain
properties.
In this paper, we first provide gES protocols for general
secure computation, based on a new, concrete number theoretic
assumption called the relativized discrete log assumption (rDLA).
Second, we provide secure protocols for functionalities in the
(limited) client-server framework of [PS05], replacing their hash
function assumption with the standard discrete log assumption.
Both our results (like previous work) also use (standard)
superpolynomially strong trapdoor permutations.
We believe this is an important step towards obtaining positive
results for efficient secure computation in a concurrent environment
based on well-studied assumptions. Furthermore, the new assumption we
put forward is of independent interest, and may prove useful for other
cryptographic applications.
 Games and the Impossibility of Realizable Ideal Functionality
 Anupam Datta and Ante Derek and John C. Mitchell and Ajith Ramanathan (Stanford Univ.) and Andre Scedrov (Univ. of Pennsylvania)
 Abstract:
A cryptographic primitive or a security mechanism can be specified in
a variety of ways, such as a condition involving a game against an
attacker, construction of an ideal functionality, or a list of
properties that must hold in the face of attack. While game
conditions are widely used, an ideal functionality is appealing
because a mechanism that is indistinguishable from an ideal
functionality is therefore guaranteed secure in any larger system that
uses it. We relate ideal functionalities to games by defining the
set of ideal functionalities associated with a game condition
and show that under this definition, which reflects accepted use and
known examples, bit commitment, a form of group signatures, and some
other cryptographic concepts do not have any realizable ideal
functionality.
 Universally Composable Symbolic Analysis of Mutual Authentication and Key Exchange Protocols
 Ran Canetti (IBM) and Jonathan Herzog (MITRE)
 Abstract:
Symbolic analysis of cryptographic protocols is dramatically simpler
than full-fledged cryptographic analysis. In particular, it is simple
enough to be automated. However, symbolic analysis does not, by
itself, provide any cryptographic soundness guarantees. Following
recent work on cryptographically sound symbolic analysis, we
demonstrate how Dolev-Yao style symbolic analysis can be used to
assert the security of cryptographic protocols within the universally
composable (UC) security framework. Consequently, our methods enable
security analysis that is completely symbolic, and at the same time
cryptographically sound with strong composability properties.
More specifically, we concentrate on mutual authentication and
key-exchange protocols. We restrict attention to protocols that use
public-key encryption as their only cryptographic primitive and have a
specific restricted format. We define a mapping from such protocols
to Dolev-Yao style symbolic protocols, and show that the symbolic
protocol satisfies a certain symbolic criterion if and only if the
corresponding cryptographic protocol is UC-secure. For mutual
authentication, our symbolic criterion is similar to the traditional
Dolev-Yao criterion. For key exchange, we demonstrate that the
traditional Dolev-Yao style symbolic criterion is insufficient, and we
formulate an adequate symbolic criterion.
Finally, to demonstrate the viability of our treatment, we use an
existing tool to automatically verify whether some prominent
key-exchange protocols are UC-secure.
 Resource Fairness and Composability of Cryptographic Protocols
 Juan Garay (Bell Labs) and Phillip MacKenzie (Google) and Manoj Prabhakaran (Univ. of Illinois, Urbana-Champaign) and Ke Yang (Google)
 Abstract:
We introduce the notion of resource-fair protocols.
Informally, this property states that if one party learns the output
of the protocol, then so can all other parties, as long as they expend
roughly the same amount of resources. As opposed to similar
previously proposed definitions, our definition follows the standard
simulation paradigm and enjoys strong composability properties. In
particular, our definition is similar to the security definition in
the universal composability (UC) framework, but works in a model that
allows any party to request additional resources from the environment
to deal with dishonest parties that may prematurely abort.
In this model we specify the ideally fair functionality as allowing
parties to "invest resources" in return for outputs, but in such an
event offering all other parties a fair deal. (The formulation of fair
dealings is kept independent of any particular functionality, by
defining it using a "wrapper.") Thus, by relaxing the notion of
fairness, we avoid a well-known impossibility result for fair
multiparty computation with corrupted majority; in particular, our
definition admits constructions that tolerate an arbitrary number of
corruptions. We also show that, as in the UC framework, protocols in
our framework may be arbitrarily and concurrently composed.
Turning to constructions, we define a "commit-prove-fair-open"
functionality and design an efficient resource-fair protocol that
securely realizes it, using a new variant of a cryptographic primitive
known as "timelines." With (the fairly wrapped version of) this
functionality we show that some of the existing secure multiparty
computation protocols can be easily transformed into resource-fair
protocols while preserving their security.
 Finding Pessiland
 Hoeteck Wee (UC Berkeley)
 Abstract:
We explore the minimal assumptions that are necessary for nontrivial
argument systems, such as Kilian's argument system for NP with
polylogarithmic communication complexity [K92]. We exhibit an
oracle relative to which there is a 2-round argument system with
polylogarithmic communication complexity for some language in NP,
but no one-way functions. The language lies outside
$BPTime(2^{o(n)})$, so the relaxation to computational soundness is
essential for achieving sublinear communication complexity. We obtain
as a corollary that under black-box reductions, nontrivial argument
systems do not imply one-way functions.
 Pseudorandom Generators from One-Way Functions: A Simple Construction for Any Hardness
 Thomas Holenstein (ETH Zurich)
 Abstract:
In a seminal paper, Håstad, Impagliazzo, Levin, and Luby showed
that pseudorandom generators exist if and only if one-way functions
exist. The construction they propose to obtain a pseudorandom
generator from an $n$-bit one-way function uses $O(n^8)$ random
bits in the input (which is the most important complexity measure of
such a construction). In this work we study how much this can be
reduced if the one-way function satisfies a stronger security
requirement. For example, we show how to obtain a pseudorandom
generator which satisfies a standard notion of security using only
$O(n^4 log^2(n))$ bits of randomness if a one-way function with
exponential security is given, i.e., a one-way function for which no
polynomial time algorithm has probability higher than $2^{-cn}$ of
inverting it, for some constant $c$.
Using the uniform variant of Impagliazzo's hard-core lemma given in
[7], our constructions and proofs are self-contained
within this paper, and as a special case of our main theorem, we
give the first explicit description of the most efficient
construction from [6].
 On the Complexity
of Parallel Hardness Amplification for One-Way Functions
 Chi-Jen Lu (Academia Sinica)
 Abstract:
We prove complexity lower bounds for the tasks of hardness
amplification of one-way functions and construction of
pseudorandom generators from one-way functions, which are
realized non-adaptively in black-box ways.
First, we consider the task of converting a one-way function
$f : {0,1}^n → {0,1}^m$ into a harder one-way function
$f' : {0,1}^{n'} → {0,1}^{m'}$, with $n',m'≤poly(n)$, in a
black-box way. The hardness is measured as the fraction of inputs
any polynomial-size circuit must fail to invert. We show that to
use a constant-depth circuit to amplify hardness beyond a
polynomial factor, its size must exceed $2^{poly(n)}$, and to
amplify hardness beyond a $2^{o(n)}$ factor, its size must exceed
$2^{2^{o(n)}}$. Moreover, for a constant-depth circuit to amplify
hardness beyond an $n^{1+o(1)}$ factor in a security preserving
way (with $n'=O(n)$), its size must exceed $2^{n^{o(1)}}$.
Next, we show that if a constant-depth polynomial-size circuit can
amplify hardness beyond a polynomial factor in a weakly black-box
way, then it must basically
embed a hard function in itself. In fact, one can derive from such
an amplification procedure a highly parallel one-way function,
which is computable by an NC0 circuit (constant-depth
polynomial-size circuit with bounded fan-in gates).
Finally, we consider the task of constructing a pseudorandom
generator $G : {0,1}^{n'} → {0,1}^{m'}$ from a strongly one-way
function $f : {0,1}^n → {0,1}^m$ in a black-box way. We show that
any such construction realized by a constant-depth
$2^{n^{o(1)}}$-size circuit can only have sublinear stretch
(with $m'-n'=o(n')$).
 On Matroids and Non-Ideal Secret Sharing
 Amos Beimel and Noam Livne (Ben-Gurion Univ.)
 Abstract:
Secret-sharing schemes are a tool used in many cryptographic
protocols. In these schemes, a dealer holding a secret string
distributes shares to the parties such that only authorized subsets of
participants can reconstruct the secret from their shares. The
collection of authorized sets is called an access structure. An
access structure is ideal if there is a secret-sharing scheme
realizing it such that the shares are taken from the same domain as the
secrets. Brickell and Davenport (J. of Cryptology, 1991) have shown that
ideal access structures are closely related to matroids. They give a
necessary condition for an access structure to be ideal: the access
structure must be induced by a matroid. Seymour (J. of Combinatorial
Theory B, 1992) showed that the necessary condition is not sufficient:
there exists an access structure induced by a matroid that does not
have an ideal scheme.
In this work we continue the research on access structures induced
by matroids. Our main result strengthens the
result of Seymour. We show that in any secret-sharing scheme
realizing the access structure induced by the Vámos matroid with
a domain of secrets of size $k$, the size of the domain of the
shares is at least $k+Ω(\sqrt{k})$. Our second result
considers non-ideal secret-sharing schemes realizing access
structures induced by matroids. We prove that the fact that an
access structure is induced by a matroid implies lower and upper
bounds on the size of the domain of shares of subsets of
participants even in non-ideal schemes (this generalizes results of
Brickell and Davenport for ideal schemes).
 Secure Computation with Partial Message Loss
 ChiuYuen Koo (Univ. of Maryland, College Park)
 Abstract:
Existing communication models for multi-party computation (MPC) either
assume that all messages are delivered eventually or that
any message can be lost. Under the former assumption, MPC
protocols guaranteeing output delivery are known. However, this
assumption may not hold in some network settings like the Internet,
where messages can be lost due to denial-of-service attacks or heavy
network congestion. On the other hand, the latter assumption may be
too conservative. Known MPC protocols developed under this
assumption have an undesirable feature: output delivery is not
guaranteed even if only one party suffers message loss.
In this work, we propose a communication model which makes an
intermediate assumption on message delivery. In our model, there is a
common global clock and three types of parties: (i) corrupted parties;
(ii) honest parties with connection problems (where message delivery
is never guaranteed); (iii) honest parties that can normally
communicate but may lose a small fraction of messages in each round
due to transient network problems. We define secure MPC under this
model. Output delivery is guaranteed to type (ii) parties that do
not abort and to type (iii) parties.
Let $n$ be the total number of parties, and let $e_f$ and $e_c$ be upper
bounds on the number of corrupted parties and type (ii) parties,
respectively. We construct a secure MPC protocol for $n > 4e_f +
3e_c$. Protocols for broadcast and verifiable secret sharing are
constructed along the way.
 Communication Efficient Secure Linear Algebra
 Kobbi Nissim and Enav Weinreb (Ben-Gurion Univ.)
 Abstract:
We present communication-efficient secure protocols for a variety of linear
algebra problems. Our main building block is a protocol for
computing Gaussian elimination on encrypted data. As input for
this protocol, Bob holds a $k \times k$ matrix $M$, encrypted
with Alice's key. At the end of the protocol run, Bob holds an
encryption of an upper-triangular matrix $M'$ such that the number
of nonzero elements on the diagonal equals the rank of $M$. The
communication complexity of our protocol is roughly $O(k^2)$.
Building on oblivious Gaussian elimination, we present secure protocols
for several problems: deciding the
intersection of linear and affine subspaces, picking a random vector
from the intersection, and obliviously solving a set of linear
equations. Our protocols match known (insecure) communication
complexity lower bounds, and improve on the communication
complexity of both Yao's garbled circuits and specific
previously published protocols.
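For intuition on the building block, here is the plaintext analogue of the elimination step: reduce a matrix over a prime field and count the nonzero pivots, which equals the rank. This sketch operates on cleartext entries; the protocol itself performs the corresponding arithmetic obliviously on encrypted data.

```python
def rank_via_elimination(M, p=2_147_483_647):
    """Gaussian elimination over GF(p): reduce to row-echelon form and
    return the number of nonzero pivots, i.e. the rank of M."""
    M = [[x % p for x in row] for row in M]
    k = len(M)
    r = 0  # index of the next pivot row
    for col in range(k):
        # Find a row at or below r with a nonzero entry in this column.
        piv = next((i for i in range(r, k) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][col], p - 2, p)  # field inverse via Fermat's little theorem
        for i in range(r + 1, k):
            factor = M[i][col] * inv % p
            for j in range(col, k):
                M[i][j] = (M[i][j] - factor * M[r][j]) % p
        r += 1
    return r
```

The modulus default is merely an illustrative prime; any prime field works. The secure version must of course avoid the data-dependent pivot search, which is where the protocol's obliviousness techniques come in.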
 Threshold and Proactive PseudoRandom Permutations
 Yevgeniy Dodis (NYU) and Aleksandr Yampolskiy (Yale) and Moti Yung (RSA Labs and Columbia Univ.)
 Abstract:
We construct a reasonably efficient threshold and proactive
pseudorandom permutation (PRP). Our protocol needs only $O(1)$
communication rounds. It tolerates up to $(n-1)/2$ of $n$ dishonest
servers in the semi-honest environment. Many protocols that use PRPs
(e.g., a block cipher in CBC mode) can now be translated into the
distributed setting. Our main technique for constructing invertible
threshold PRPs is a distributed Luby-Rackoff construction where both
the secret keys and the input are shared among the servers. We
also present protocols for obliviously computing the pseudorandom
functions of Naor-Reingold [41] and Dodis-Yampolskiy [25] with shared
input and keys.
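In its plain, single-party form, the Luby-Rackoff construction mentioned above is a 4-round Feistel network that turns a PRF into an invertible PRP. A minimal non-distributed sketch, using a truncated-hash stand-in for the round PRF (the paper's protocol instead shares the keys and input among servers):

```python
import hashlib

def round_f(key: bytes, x: bytes) -> bytes:
    # Stand-in round function (truncated SHA-256); the paper's distributed
    # setting would use a shareable PRF such as Naor-Reingold.
    return hashlib.sha256(key + x).digest()[:8]

def feistel_encrypt(keys: list, block: bytes) -> bytes:
    # 4-round Feistel on a 16-byte block: each round maps (L, R) to
    # (R, L xor F(k, R)).
    L, R = block[:8], block[8:]
    for k in keys:
        L, R = R, bytes(a ^ b for a, b in zip(L, round_f(k, R)))
    return L + R

def feistel_decrypt(keys: list, block: bytes) -> bytes:
    # Invertibility: undo the rounds in reverse key order.
    L, R = block[:8], block[8:]
    for k in reversed(keys):
        L, R = bytes(a ^ b for a, b in zip(R, round_f(k, L))), L
    return L + R
```

Four rounds with independent keys are what Luby and Rackoff showed suffice for a strong pseudorandom permutation; invertibility holds regardless of the round function, which is why sharing the round keys yields a *threshold* PRP.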
 PRF Domain Extension Using DAGs
 Charanjit S. Jutla (IBM)
 Abstract:
We prove a general domain extension theorem for pseudorandom
functions (PRFs). Given a PRF $F$ from $n$ bits to $n$ bits, it is
well known that employing $F$ in a chaining mode (CBC-MAC) yields a
PRF on a bigger domain of $mn$ bits. One can view each application of
$F$ in this chaining mode as a node in a graph, and the chaining as
edges between the nodes. The resulting graph is just a line graph. In
this paper, we show that the underlying graph can be an arbitrary
directed acyclic graph (DAG), and the resulting function on the larger
domain is still a PRF. The only requirement on the graph is that it
have unique source and sink nodes, and that no two nodes have the same set
of incident nodes. A new highly parallelizable MAC construction
follows which has a critical path of only $3+\log^{*} m$
applications of $F$.
If we allow Galois field arithmetic, we can consider edge-colored
DAGs, where the colors represent multiplication in the field by the
color. We prove an even more general theorem, where the only
restriction on the colored DAGs is that if two nodes ($u$ and $v$)
have the same set of incident nodes $W$, then at least one $w$ in $W$
is incident on $u$ and $v$ with differently colored edges. PMAC
(Parallelizable Message Authentication [6]) is a simple example of
such graphs. Finally, to handle variable-length domain extension, we
extend our theorem to a collection of DAGs. The general theorem
allows one to have further optimizations over PMAC, and many modes
which deal with variable lengths.
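The line-graph special case that the theorem generalizes is ordinary CBC-MAC chaining: each node XORs the previous node's output into its message block and applies $F$. A sketch with a hypothetical truncated-hash stand-in for the $n$-bit PRF:

```python
import hashlib

def f(key: bytes, x: bytes) -> bytes:
    # Hypothetical stand-in for the n-bit PRF F (n = 128 here); a real
    # instantiation would use a block cipher.
    return hashlib.sha256(key + x).digest()[:16]

def cbc_mac(key: bytes, blocks: list) -> bytes:
    """Line-graph chaining: node i applies F to (output of node i-1 XOR
    block i); the unique sink node's output is the tag."""
    state = bytes(16)  # source node input: all-zero chaining value
    for b in blocks:
        state = f(key, bytes(s ^ m for s, m in zip(state, b)))
    return state
```

Note that plain CBC-MAC is only a PRF for fixed-length inputs; handling variable lengths is exactly what the paper's collection-of-DAGs extension addresses.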
 Chosen-Ciphertext Security from Tag-Based Encryption
 Eike Kiltz (CWI Amsterdam)
 Abstract:
One of the celebrated applications of Identity-Based Encryption (IBE)
is the Canetti, Halevi, and Katz (CHK) transformation from any
(selective-identity secure) IBE scheme into a fully chosen-ciphertext
secure encryption scheme. Since such IBE schemes in the standard
model are known from previous work, this immediately provides new
chosen-ciphertext secure encryption schemes in the standard model.
This paper revisits the notion of Tag-Based Encryption (TBE) and
provides security definitions for the selective-tag case. Even though
TBE schemes belong to a more general class of cryptographic schemes
than IBE, we observe that (selective-tag secure) TBE is a sufficient
primitive for the CHK transformation and therefore implies
chosen-ciphertext secure encryption.
We construct efficient and practical TBE schemes and give tight
security reductions in the standard model from the Decisional Linear
Assumption in gap groups. In contrast to all known IBE schemes, our
TBE construction does not directly deploy pairings. Instantiating the
CHK transformation with our TBE scheme results in an encryption scheme
whose decryption can be carried out in a single
multi-exponentiation.
Furthermore, we show how to apply the techniques gained from the TBE
construction to directly design a new Key Encapsulation Mechanism.
Since in this case we can avoid the CHK transformation, the resulting
scheme is more efficient.

 Separating Sources for Encryption and Secret Sharing
 Yevgeniy Dodis (NYU) and Krzysztof Pietrzak and Bartosz Przydatek (ETH Zurich)
 Abstract:
Most cryptographic primitives, such as encryption, authentication, or
secret sharing, require randomness. Usually one assumes that perfect
randomness is available, but those primitives might also be realized
under weaker assumptions. In this work we continue the study of
building secure cryptographic primitives from imperfect random sources,
initiated by Dodis and Spencer (FOCS'02). Their main result shows that
there exists a (high-entropy) source of randomness allowing for
perfect encryption of a bit, and yet from which one cannot extract
even a single weakly random bit, separating encryption from
extraction. Our main result separates encryption from 2-out-of-2 secret
sharing (both in the information-theoretic and in the computational
settings): any source which can be used to achieve one-bit encryption
can also be used for 2-out-of-2 secret sharing of one bit, but the
converse is false, even for high-entropy sources. Therefore, the
possibility of extraction strictly implies encryption, which in turn
strictly implies 2-out-of-2 secret sharing.
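The 2-out-of-2 primitive at the center of this separation is, in its simplest perfect-randomness form, XOR sharing of a bit; the abstract's question is which *imperfect* sources still suffice for it. A minimal sketch assuming a uniform random bit is available:

```python
import secrets

def share_bit(s: int) -> tuple:
    # 2-out-of-2 sharing: a uniform mask r and the masked secret r XOR s.
    # Either share alone is a uniform bit, so it reveals nothing about s.
    r = secrets.randbits(1)
    return r, r ^ s

def reconstruct(s1: int, s2: int) -> int:
    # Both shares together recover the secret exactly.
    return s1 ^ s2
```

With perfect randomness this scheme (like one-time-pad encryption of a bit) is information-theoretically secure; the paper shows that for imperfect sources the two tasks genuinely come apart.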
