International Association for Cryptologic Research

IACR News Central


15:17 [Pub][ePrint] On the immunity of Boolean functions against fast algebraic attacks using bivariate polynomial representation, by Meicheng Liu and Yin Zhang and Dongdai Lin

  In the last decade, algebraic and fast algebraic attacks have been regarded as the most successful attacks on LFSR-based stream ciphers. Since the notion of algebraic immunity was introduced, the properties and constructions of Boolean functions with maximum algebraic immunity have been studied in a large number of papers. However, it is unclear whether these functions behave well against fast algebraic attacks. In this paper, we study the immunity of Boolean functions against fast algebraic attacks using the bivariate polynomial representation. Based on this representation, we present a necessary and sufficient condition for a Boolean function to achieve good immunity against fast algebraic attacks, propose an efficient method for estimating the immunity of a large class of Boolean functions, including the functions of Q. Jin et al., and prove that the functions of D. Tang et al. achieve (almost) optimal immunity against fast algebraic attacks.

15:17 [Pub][ePrint] Authenticity, Integrity and Proof-of-Existence for Long-Term Archiving: a Survey, by Martín A. G. Vigil and Daniel Cabarcas and Alexander Wiesmaier and Johannes Buchmann

  Digital archives that store electronic data for long periods are increasingly necessary for applications such as libraries, land registers, and medical records. Archived data is useful only if protected from tampering, so as to ensure authenticity, integrity, and a date on which the existence of the archived data was witnessed. We survey solutions that protect archived data using cryptography. We describe them and verify whether they successfully provide indefinite protection despite the limitations of current cryptographic schemes. Solutions are compared with regard to their goals, trust assumptions, and efficiency. Based on our analysis, we elucidate open problems and promising research directions.

15:17 [Pub][ePrint] Constant Ciphertext Length in CP-ABE, by Nishant Doshi and Devesh Jinwala

  Ciphertext-policy attribute-based encryption (CP-ABE) is a technique in which a user, holding a secret key containing attributes, is able to decrypt a message only if the attributes in the policy match the attributes in the secret key. The existing methods that use reasonably computable decryption policies produce ciphertexts whose size grows at least linearly with the number of attributes, with additional pairing operations during encryption and decryption. In this paper, we propose a scheme in which the ciphertext remains constant in length, irrespective of the number of attributes. Our scheme works for a threshold case: the attributes in a policy must be a subset of the attributes in a secret key. The security of the proposed scheme is based on the Decisional Bilinear Diffie-Hellman (DBDH) problem.
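The access condition described above (decryption succeeds exactly when the policy attributes are a subset of the key attributes) can be sketched as a plain set check. This is an illustration of the decryption condition only, not of the pairing-based cryptography; the attribute names are hypothetical.

```python
# Toy illustration of the CP-ABE decryption condition (threshold/subset
# case): decryption succeeds only when every attribute in the ciphertext
# policy is also present in the user's secret key. This models the
# access check only, not the underlying DBDH-based cryptography.

def can_decrypt(policy_attrs: set, key_attrs: set) -> bool:
    """Return True iff the policy attributes are a subset of the key's."""
    return policy_attrs <= key_attrs

key = {"doctor", "cardiology", "hospital-A"}
assert can_decrypt({"doctor", "hospital-A"}, key)      # policy satisfied
assert not can_decrypt({"doctor", "hospital-B"}, key)  # missing attribute
```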

15:17 [Pub][ePrint] Privacy Amplification with Asymptotically Optimal Entropy Loss, by Nishanth Chandran and Bhavana Kanukurthi and Rafail Ostrovsky and Leonid Reyzin

  We study the problem of ``privacy amplification'': key agreement between two parties who both know a weak secret w, such as a password. (Such a setting is ubiquitous on the internet, where passwords are the most commonly used security device.) We assume that the key agreement protocol is taking place in the presence of an active computationally unbounded adversary Eve. The adversary may have partial knowledge about w, so we assume only that w has some entropy from Eve's point of view. Thus, the goal of the protocol is to convert this non-uniform secret w into a uniformly distributed string R that is fully secret from Eve. R may then be used as a key for running symmetric cryptographic protocols (such as encryption, authentication, etc.).

Because we make no computational assumptions, the entropy in R can come only from w. Thus such a protocol must minimize the entropy loss during its execution, so that R is as long as possible. The best previous results have entropy loss of $\Theta(\kappa^2)$, where $\kappa$ is the security parameter, thus requiring the password to be very long even for small values of $\kappa$. In this work, we present the first protocol for information-theoretic key agreement that has entropy loss LINEAR in the security parameter. The result is optimal up to constant factors. We achieve our improvement through a somewhat surprising application of error-correcting codes for the edit distance.

The protocol can also be extended to provide ``information reconciliation,'' that is, to work even when the two parties have slightly different versions of w (for example, when biometrics are involved).
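The core extraction step behind any privacy amplification protocol can be sketched with a 2-universal hash, in the style of the leftover hash lemma: a public random seed maps the weak secret to a shorter, nearly uniform string. This is the generic passive-adversary building block, not the paper's interactive protocol against an active Eve; all parameters here are toy-sized assumptions.

```python
import secrets

# A minimal sketch of randomness extraction via a 2-universal hash
# h_{a,b}(w) = (a*w + b) mod p, truncated to out_bits bits (leftover
# hash lemma style). The seed (a, b) is public; only w is secret.
# This illustrates the extraction idea, NOT the paper's protocol.

P = (1 << 61) - 1  # a Mersenne prime, used as the hash field modulus

def extract(w: int, a: int, b: int, out_bits: int) -> int:
    """Apply the public 2-universal hash and keep out_bits of output."""
    return ((a * w + b) % P) & ((1 << out_bits) - 1)

w = 0xDEADBEEFCAFE                 # the shared weak secret
a = secrets.randbelow(P - 1) + 1   # public random seed, chosen fresh
b = secrets.randbelow(P)
r = extract(w, a, b, 32)           # both parties derive the same 32-bit key
assert r == extract(w, a, b, 32)
assert 0 <= r < 2**32
```

In the actual active-adversary setting the seed itself must be agreed upon authentically, which is where the protocol's entropy loss arises.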

15:17 [Pub][ePrint] Are We Compromised? Modelling Security Assessment Games, by Viet Pham and Carlos Cid

  Security assessments are an integral part of organisations' strategies for protecting their digital assets and critical IT infrastructure.

In this paper we propose a game-theoretic modelling of a particular form of security assessment -- one which addresses the question ``are we compromised?''.

We do so by extending the recently proposed game ``FlipIt'', which itself can be used to model the interaction between defenders and attackers under the Advanced Persistent Threat (APT) scenario.

Our extension gives players the option to ``test'' the state of the game before making a move. This allows one to study the scenario in which organisations have the option to perform periodic security assessments of this nature, and the benefits such assessments may bring.
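The ``test before move'' option can be sketched as a toy round-based simulation: the defender pays a small cost to learn whether the attacker currently controls the resource, and flips (regains control) only when the test says so. All costs, rates, and names here are hypothetical illustration, not the paper's formal model or payoffs.

```python
import random

# Toy sketch of a FlipIt-style game extended with a ``test'' action:
# each round the defender pays test_cost to learn the true state, and
# pays flip_cost to regain control only when the test reveals a
# compromise. Parameters are illustrative, not taken from the paper.

def simulate(rounds, attack_rate, test_cost, flip_cost, seed=0):
    rng = random.Random(seed)
    attacker_controls = False
    defender_cost = compromised_time = 0
    for _ in range(rounds):
        if rng.random() < attack_rate:   # attacker moves covertly
            attacker_controls = True
        defender_cost += test_cost       # periodic security assessment
        if attacker_controls:            # test reveals the compromise
            defender_cost += flip_cost   # defender flips to regain control
            attacker_controls = False
        compromised_time += attacker_controls
    return defender_cost, compromised_time

cost, exposure = simulate(1000, attack_rate=0.1, test_cost=1, flip_cost=10)
assert exposure == 0  # testing every round closes each compromise window
```

Varying how often the defender tests (versus flipping blindly, as in plain FlipIt) is exactly the trade-off the paper's model is designed to analyse.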

15:17 [Pub][ePrint] Hierarchical Identity-Based (Lossy) Trapdoor Functions, by Alex Escala and Javier Herranz and Benoit Libert and Carla Rafols

  Lossy trapdoor functions, introduced by Peikert and Waters (STOC'08), have received a lot of attention in recent years because of their wide range of applications in theoretical cryptography. The notion has recently been extended to the identity-based setting by Bellare et al. (Eurocrypt'12). We take one more step in this direction by considering the notion of hierarchical identity-based (lossy) trapdoor functions (HIB-TDFs). Hierarchical identity-based cryptography has proved very useful both for practical applications and for establishing theoretical relations with other cryptographic primitives.

The notion of security for IB-TDFs put forward by Bellare et al. easily extends to the hierarchical scenario, but an (H)IB-TDF secure in this sense is not known to generically imply other related primitives with security against adaptive-id adversaries, not even IND-ID-CPA secure encryption. Our first contribution is to define a new security property for (H)IB-TDFs. We show that functions satisfying this property imply secure cryptographic primitives in the adaptive identity-based setting: these include encryption schemes with semantic security under chosen-plaintext attacks, deterministic encryption schemes, and (non-adaptive) hedged encryption schemes that maintain some security when messages are encrypted using randomness of poor quality. We emphasize that some of these primitives were unrealized in the (H)IB setting prior to this work.

As a second contribution, we describe the first pairing-based HIB-TDF realization. This is also the first example of a hierarchical trapdoor function based on traditional number-theoretic assumptions: so far, constructions were known only under lattice assumptions. Our HIB-TDF construction is based on techniques that differ from those of Bellare et al. in that it uses a hierarchical predicate encryption scheme as a key ingredient. The resulting HIB-TDF is proved to satisfy the new security definition against either selective or, for hierarchies of constant depth, adaptive adversaries. In either case, we only need the underlying predicate encryption system to be selectively secure.

15:17 [Pub][ePrint] Scalable Deniable Group Key Establishment, by Kashi Neupane and Rainer Steinwandt and Adriana Suarez Corona

  The popular Katz-Yung compiler from CRYPTO 2003 can be used to transform unauthenticated group key establishment protocols into authenticated ones. In this paper we present a modification of Katz and Yung's construction which maintains the round complexity of their compiler, but for ``typical'' unauthenticated group key establishments adds authentication in such a way that deniability is achieved as well. As an application, a deniable authenticated group key establishment with three rounds of communication can be constructed.

15:17 [Pub][ePrint] On pseudorandomization of information-theoretically secure schemes without hardness assumptions, by Koji Nuida

  A recent work by Nuida and Hanaoka (at ICITS 2009) provided a proof technique for the security of information-theoretically secure cryptographic schemes in which the random input tape is implemented by a pseudorandom generator (PRG). In this paper, we revisit their proof technique and generalize it by introducing a trade-off factor; the generalization contains the original proof technique as a special case and provides room for improvement over the preceding result. Secondly, we consider two issues with the preceding result: one is the requirement of a hardness assumption in their proof; the other is the gap between the non-uniform and uniform computational models that appears when moving from the exact security formulation adopted in the preceding result to the usual asymptotic security. We point out that these two issues can be resolved by using a PRG proposed by Impagliazzo, Nisan and Wigderson (at STOC 1994) against memory-bounded distinguishers, instead of the usual PRGs against time-bounded distinguishers. We also give a precise formulation of a computational model described by Impagliazzo et al., and use it to perform a numerical comparison showing that, besides the significant advantage of removing hardness assumptions, our result is still better than, or at least competitive with, the preceding result from a quantitative viewpoint. The results of this paper suggest a new motivation for using PRGs against distinguishers with computational constraints other than time complexity in practical situations, rather than just in theoretical works.

15:17 [Pub][ePrint] Succinct Malleable NIZKs and an Application to Compact Shuffles, by Melissa Chase and Markulf Kohlweiss and Anna Lysyanskaya and Sarah Meiklejohn

  Depending on the application, malleability in cryptography can be viewed as either a flaw or -- especially if sufficiently understood and restricted -- a feature. In this vein, Chase, Kohlweiss, Lysyanskaya, and Meiklejohn recently defined malleable zero-knowledge proofs, and showed how to control the set of allowable transformations on proofs. As an application, they construct the first compact verifiable shuffle, in which one such controlled-malleable proof suffices to prove the correctness of an entire multi-step shuffle.

Despite these initial steps, a number of natural open problems remain: (1) their construction of controlled-malleable proofs relies on the inherent malleability of Groth-Sahai proofs and is thus not based on generic primitives; (2) the classes of allowable transformations they can support are somewhat restrictive; and (3) their construction of a compactly verifiable shuffle has proof size O(N^2 + L) (where N is the number of votes and L is the number of mix authorities), whereas in theory such a proof could be of size O(N + L).

In this paper, we address these open problems by providing a generic construction of controlled-malleable proofs using succinct non-interactive arguments of knowledge, or SNARGs for short. Our construction has the advantage that we can support a very general class of transformations (as we no longer rely on the transformations that Groth-Sahai proofs can support), and that we can use it to obtain a proof of size O(N + L) for the compactly verifiable shuffle.
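The gap between the two shuffle proof sizes can be made concrete with a quick calculation, taking the hidden constants in the O-notation to be 1 (a hypothetical simplification; the asymptotics are what the paper fixes):

```python
# Illustrative arithmetic for the shuffle proof sizes quoted above,
# with all O-notation constants assumed to be 1. N is the number of
# votes, L the number of mix authorities.

N, L = 1_000, 10

prior = N**2 + L  # previous compactly verifiable shuffle: O(N^2 + L)
ours = N + L      # SNARG-based construction: O(N + L)

assert prior == 1_000_010
assert ours == 1_010
assert prior > ours  # roughly a factor-N saving for large N
```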

15:17 [Pub][ePrint] Compact Implementation and Performance Evaluation of Hash Functions in ATtiny Devices, by Josep Balasch and Baris Ege and Thomas Eisenbarth and Benoit Gérard and Zheng Gong and Tim Güneysu and Stefa

  The pervasive diffusion of electronic devices in security- and privacy-sensitive applications has boosted research in cryptography. In this context, the study of lightweight algorithms has been a very active direction over the last years. In general, symmetric cryptographic primitives are good candidates for low-cost implementations. For example, several previous works have investigated the performance of block ciphers on various platforms. Motivated by the recent SHA-3 competition, this paper extends these studies to another family of cryptographic primitives, namely hash functions. We implemented different algorithms on an ATMEL AVR ATtiny45 8-bit microcontroller, and provide their performance evaluation using different figures of merit. All the implementations were carried out with the goal of minimizing code size and memory utilization, and were evaluated using a common interface. As part of our contribution, we additionally decided to make all the corresponding source code available on a web page, under an open-source license. We hope that this paper provides a good basis for researchers and embedded system designers who need to include more and more functionality in next-generation smart devices.

15:17 [Pub][ePrint] On the (Im)Plausibility of Constant-Round Public-Coin Straight-Line-Simulatable Zero-Knowledge Proofs, by Yi Deng and Juan Garay and San Ling and Huaxiong Wang and Moti Yung

  In 2001, a breakthrough result by Barak [FOCS 2001] showed how to achieve public-coin zero-knowledge (ZK) arguments in constant rounds, a feature known to be impossible using black-box simulation. In this approach, the simulator makes use of the code of the malicious verifier in computing the prover messages (albeit without understanding it), and does not rewind the malicious verifier---it is hence called a \emph{straight-line} simulator.

Since then, however, we have witnessed little progress on the basic question of whether Barak's technique can be extended to ZK {\em proof} systems. In this paper we make progress on this front, by providing strong evidence that such an extension is far from likely. Specifically, we show that for a natural class of constant-round public-coin ZK proofs (which we call ``canonical,'' as all known non-black-box ZK protocols fall in this category), a straight-line simulator based on the known non-black-box technique for such a proof system can actually be used to solve a seemingly unrelated problem, namely, to figure out some non-trivial property of a verifier's program {\em without executing the target code}, a problem commonly viewed as notoriously hard.

A key tool in our reduction is an improved structure-preserving version of the well-known Babai-Moran Speedup (derandomization) Theorem, which essentially says that, for a constant-round public-coin interactive proof system in which the verifier sends $m$ messages and each of the prover messages is of length $p$, if the cheating probability for an unbounded prover is $\epsilon$, then there exist $(p/O(\log \frac{1}{\epsilon}))^m$ verifier random tapes such that the cheating probability for the unbounded prover over these random tapes is bounded away from 1---and this holds even when the prover knows this small set of random tapes in advance. (In our setting, the original Babai-Moran theorem yields a much larger size ($(O(p))^m$) of such a set of verifier random tapes.) In addition, we show that this is tight with respect to round complexity, in the sense that there are public-coin proof systems with a super-constant number of rounds for which the prover's cheating probability is $1$ over any polynomial number of verifier random tapes.
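The improvement in the tape-set size can be made concrete with a quick calculation. Here the hidden constants in both bounds are taken to be 1, a hypothetical choice purely for illustration; the theorem only fixes the asymptotics.

```python
import math

# Illustrative arithmetic for the two set-size bounds quoted above,
# with all O-notation constants assumed to be 1. p is the prover-message
# length, m the number of verifier messages, eps the cheating probability.

p, m, eps = 10_000, 3, 2.0**-40

original = p ** m                         # Babai-Moran: (O(p))^m
improved = (p / math.log2(1 / eps)) ** m  # this work: (p/O(log(1/eps)))^m

assert improved < original
# The saving is a factor of (log(1/eps))^m, here 40^3 = 64,000:
assert original / improved == math.log2(1 / eps) ** m
```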

Finally, although the notion of straight-line simulation is intuitively clear and has been used several times in the literature, we are not aware of a formal definition of the process, perhaps due to the fact that thoroughly defining (as well as enforcing) ``executing the verifier only once'' does not appear to be straightforward. The notion of {\em generalized straight-line simulation} that we introduce not only overcomes those obstacles, but enables us to expose the limitations of the currently known non-black-box techniques.