International Association for Cryptologic Research

IACR News Central


05:22 [Pub][ePrint] Authenticated Key Exchange with Synchronized State, by Zheng Yang

  We study the problem of how to either prevent identity impersonation (IDI) attacks or limit their consequences by detecting previously unidentified IDI attacks on-line, where IDI attacks are typically caused by the leakage of an identity-related long-term key. This problem has, until now, lacked a provably good solution. We approach it through the scenario of authenticated key exchange with synchronized state (AKESS). This work provides a security model for AKESS protocols, in which we formalize in particular the security of the synchronized state. We propose a two-party execution-state synchronization framework for the symmetric case, on which we base a generic compiler for AKESS protocols. Our goal is to compile any existing passively secure key exchange (KE) protocol into an AKESS protocol using synchronized state, without any modification to the KE protocol. The proposal is provably secure in the standard model under standard assumptions.
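The general idea of detection via synchronized state can be illustrated with a toy sketch (this is an illustration of the concept only, not the paper's construction; `ratchet` and `confirm_tag` are hypothetical names): both parties evolve a shared state after every session, and an impersonator who stole only the long-term key, but not the evolving state, produces a key-confirmation tag that fails to verify.

```python
import hashlib, hmac, os

def ratchet(state: bytes, transcript: bytes) -> bytes:
    """Advance the synchronized state with the session transcript."""
    return hashlib.sha256(state + transcript).digest()

def confirm_tag(state: bytes, session_key: bytes) -> bytes:
    """Key-confirmation tag bound to the current synchronized state."""
    return hmac.new(state, session_key, hashlib.sha256).digest()

# Both parties start with the same state (established out of band).
state_a = state_b = os.urandom(32)

# A passively secure KE run yields a shared session key and a transcript.
session_key = os.urandom(32)
transcript = b"msg1|msg2"

# Honest run: tags match, and both parties ratchet their state forward.
assert hmac.compare_digest(confirm_tag(state_a, session_key),
                           confirm_tag(state_b, session_key))
state_a = ratchet(state_a, transcript)
state_b = ratchet(state_b, transcript)

# Impersonator: knows the long-term key but holds a stale state, so its
# tag mismatches and the IDI attack is detected on-line.
stale_state = os.urandom(32)
assert not hmac.compare_digest(confirm_tag(stale_state, session_key),
                               confirm_tag(state_b, session_key))
```

The point of the sketch is that state desynchronization is observable: a tag mismatch either detects an impersonation or flags the parties for re-synchronization.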

05:22 [Pub][ePrint] Reset Indifferentiability from Weakened Random Oracle Salvages One-pass Hash Functions, by Yusuke Naito and Kazuki Yoneyama and Kazuo Ohta

  Ristenpart et al. showed a limitation of the indifferentiability theorem of Maurer et al.: it covers only single-stage security notions, not all multi-stage security notions. They defined a new concept, reset indifferentiability, and proved a reset indifferentiability theorem, an analogue of the indifferentiability theorem that covers all security notions: if H^U is reset indifferentiable from a random oracle (RO), then for any security notion, a cryptosystem C is at least as secure in the U model as in the RO model. Unfortunately, they also proved that H^U cannot be reset indifferentiable from a RO when H is a one-pass hash function such as the ChopMD and Sponge constructions.

In this paper, we propose a new modular proof approach to replace the RO methodology, called the WRO methodology (Reset Indifferentiability from Weakened Random Oracle), in order to ensure the security of C with H^U, salvaging ChopMD and Sponge. The concrete proof procedure of the WRO methodology is as follows:

1. Define a new concept, WRO, in place of RO;

2. Prove that H^U is reset indifferentiable from a WRO (examples of H are ChopMD and Sponge); and

3. Prove that C is secure in the WRO model.

As a result, we can prove that C with H^U is secure by combining Steps 2 and 3 with the theorem of Ristenpart et al. Moreover, for public-key encryption (as the cryptosystem C) under chosen-distribution attack, we prove that C(WRO) is secure, which demonstrates the appropriateness of the new WRO model.

05:22 [Pub][ePrint] Attacks and Security Proofs of EAX-Prime, by Kazuhiko Minematsu and Stefan Lucks and Hiraku Morita and Tetsu Iwata

  EAX′ (EAX-prime) is an authenticated encryption (AE) scheme specified by ANSI C12.22 as a standard security function for Smart Grid. EAX′ is based on EAX, proposed by Bellare, Rogaway, and Wagner. While EAX has a proof of security based on the pseudorandomness of the internal blockcipher, no security result has been published for EAX′.

This paper studies the security of EAX′ and shows that there is a sharp distinction in the security of EAX′ depending on the input length. EAX′ encryption takes two inputs, called the cleartext and the plaintext, and we present various efficient attacks against EAX′ using single-block cleartexts and plaintexts. At the same time, we prove that if cleartexts are always longer than one block, EAX′ is provably secure based on the pseudorandomness of the blockcipher.

05:22 [Pub][ePrint] Universally Composable Secure Computation with (Malicious) Physically Uncloneable Functions, by Rafail Ostrovsky, Alessandra Scafuro, Ivan Visconti, Akshay Wadia

  Physically Uncloneable Functions (PUFs) [Pap01] are noisy physical sources of randomness. As such, they are naturally appealing for cryptographic applications, and have caught the interest of both theoreticians and practitioners. A major step towards understanding and securely using PUFs was recently taken in [Crypto 2011], where Brzuska, Fischlin, Schröder and Katzenbeisser model PUFs in the Universal Composition (UC) framework of Canetti [FOCS 2001]. Their model considers trusted PUFs only; thus real-world adversaries cannot create malicious PUFs and can access the physical object only via the prescribed procedure. However, this does not accurately reflect real-life scenarios, where an adversary could be able to create and use malicious PUFs, or access the PUF through other procedures.

The goal of this work is to extend the model proposed in [Crypto 2011] in order to capture such real-world attacks. The main contribution of this work is the study of the Malicious PUFs model. Namely, we extend the PUF functionality of Brzuska et al. so that it allows the adversary to create arbitrarily malicious PUFs. Then, we provide positive results in this, more realistic, model. We show that, under computational assumptions, it is possible to UC-securely realize any functionality. Furthermore, we achieve unconditional (not UC) security with malicious PUFs, by showing a statistically hiding statistically binding commitment scheme that uses one PUF only, and such PUF can be malicious.

As an additional contribution, we investigate another attack model, where adversaries access a trusted PUF in a different way (i.e., not following the prescribed procedure). Technically, this attack translates into the fact that the simulator cannot observe the queries made to an honest PUF. In this model, queries are oblivious to the simulator, and we call it the Oblivious Query model. We are able to achieve unconditionally UC-secure computation even in this more severe model. This protocol is secure against stronger adversaries than those of Brzuska et al.

Finally, we show the impossibility of UC-secure computation under the combination of the two new models above, where the real-world adversary can create malicious PUFs and maliciously access honest PUFs.

Our work sheds light on the significant power and applicability of PUFs in the design of cryptographic protocols modeling adversaries that misbehave with PUFs.

05:22 [Pub][ePrint] A Strongly Secure Authenticated Key Exchange Protocol from Bilinear Groups without Random Oracles, by Zheng Yang

  Since the introduction of the extended Canetti-Krawczyk (eCK) security model for two-party key exchange, many protocols have been proposed to provide eCK security. However, most of those protocols are provably secure only in the random oracle model or rely on a special design technique known as the NAXOS trick. In contrast to previous schemes, we present an eCK-secure protocol in the standard model, without the NAXOS trick and without the knowledge-of-secret-key (KOSK) assumption for public key registration. The security proof of our scheme is based on standard assumptions: collision-resistant hash functions, the bilinear decision Diffie-Hellman (BDDH) and decision linear Diffie-Hellman (DLIN) assumptions, and pseudo-random functions with pairwise independent random sources. Although our proposed protocol is based on bilinear groups, it does not need any pairing operations during protocol execution.

05:22 [Pub][ePrint] Biclique Cryptanalysis Of PRESENT, LED, And KLEIN, by Farzaneh Abed and Christian Forler and Eik List and Stefan Lucks and Jakob Wenzel

  In this paper, we analyze the resistance of the lightweight ciphers PRESENT, LED, and KLEIN to biclique attacks. Primarily, we describe attacks on the full-round versions PRESENT-80, PRESENT-128, LED-64, LED-128, KLEIN-80, and KLEIN-96. Our attacks have time complexities of $2^{79.49}$, $2^{127.32}$, $2^{63.58}$, $2^{127.42}$, $2^{79.00}$, and $2^{95.18}$ encryptions, respectively. In addition, we consider attacks on round-reduced versions of PRESENT and LED, to show the security margin up to which an adversary can obtain an advantage of at least a factor of two over exhaustive search.
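Since exhaustive search of a k-bit key costs about $2^k$ encryptions, the speedup factor each attack achieves over brute force follows directly from the quoted complexities:

```python
# Speedup over exhaustive search implied by the quoted time complexities:
# exhaustive search of a k-bit key costs about 2^k encryptions.
attacks = {
    "PRESENT-80":  (80, 79.49),
    "PRESENT-128": (128, 127.32),
    "LED-64":      (64, 63.58),
    "LED-128":     (128, 127.42),
    "KLEIN-80":    (80, 79.00),
    "KLEIN-96":    (96, 95.18),
}

for cipher, (key_bits, attack_exp) in attacks.items():
    speedup = 2 ** (key_bits - attack_exp)  # factor saved vs. brute force
    print(f"{cipher}: 2^{key_bits} / 2^{attack_exp} = {speedup:.2f}x")
```

For example, the PRESENT-80 attack is $2^{80-79.49} \approx 1.42$ times faster than brute force, and the KLEIN-80 attack exactly a factor of two, illustrating why a factor-two advantage is the natural threshold for the round-reduced analysis.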

05:22 [Pub][ePrint] Reusable Garbled Circuits and Succinct Functional Encryption, by Shafi Goldwasser and Yael Kalai and Raluca Ada Popa and Vinod Vaikuntanathan and Nickolai Zeldovich

  Garbled circuits, introduced by Yao in the mid-1980s, allow computing a function f on an input x without leaking anything about f or x besides f(x). Garbled circuits have found numerous applications, but every known construction suffers from one limitation: it offers no security if used on multiple inputs x. In this paper, we construct reusable garbled circuits for the first time. The key building block is a new succinct single-key functional encryption scheme.

Functional encryption is an ambitious primitive: given an encryption Enc(x) of a value x and a secret key sk_f for a function f, anyone can compute f(x) without learning any other information about x. We construct, for the first time, a succinct functional encryption scheme for any polynomial-time function f, where succinctness means that the ciphertext size does not grow with the size of the circuit for f, but only with its depth. The security of our construction is based on the intractability of the Learning with Errors (LWE) problem and holds as long as an adversary has access to a single key sk_f (or even an a priori bounded number of keys for different functions).

Building on our succinct single-key functional encryption scheme, we show several new applications in addition to reusable garbled circuits: a paradigm for general function obfuscation which we call token-based obfuscation, homomorphic encryption for a class of Turing machines where the evaluation runs in input-specific time rather than worst-case time, and a scheme for delegating computation which is publicly verifiable and maintains the privacy of the computation.

05:22 [Pub][ePrint] An Analysis of the EMV Channel Establishment Protocol, by Christina Brzuska and Nigel P. Smart and Bogdan Warinschi and Gaven J. Watson

  With over 1.5 billion debit and credit cards in use worldwide, the EMV system (a.k.a. "Chip-and-PIN") has become one of the most important deployed cryptographic protocol suites. Recently, the EMV consortium has decided to upgrade the existing RSA-based system with a new system relying on Elliptic Curve Cryptography (ECC). One of the central components of the new system is a protocol that enables a card to establish a secure channel with a card reader. In this paper we provide a security analysis of the proposed protocol, propose minor changes/clarifications to the "Request for Comments" issued in Nov 2012, and demonstrate that the resulting protocol meets the intended security goals.

The structure of the protocol is one commonly encountered in practice: first run a key exchange to establish a shared key (which performs authentication and key confirmation), and only then use the channel to exchange application messages. Although common in practice, this structure takes the protocol out of the reach of most standard security models for key exchange. Unfortunately, the only models that can cope with the above structure suffer from some drawbacks that make them unsuitable for our analysis. Our second contribution is to provide new security models for channel establishment protocols. Our models have a more inclusive syntax, are quite general, deal with a realistic notion of authentication (one-sided authentication as required by EMV), and do not suffer from the drawbacks that we identify in prior models.

05:22 [Pub][ePrint] Design Space Exploration and Optimization of Path Oblivious RAM in Secure Processors, by Ling Ren and Xiangyao Yu and Christopher W. Fletcher and Marten van Dijk and Srinivas Devadas

  Keeping user data private is a huge problem both in cloud computing and computation outsourcing. One paradigm to achieve data privacy is to use tamper-resistant processors, inside which users' private data is decrypted and computed upon. These processors need to interact with untrusted external memory. Even if we encrypt all data that leaves the trusted processor, however, the address sequence that goes off-chip may still leak information. To prevent this address leakage, the security community has proposed ORAM (Oblivious RAM). ORAM has mainly been explored in server/file settings, which assume a vastly different computation model than secure processors. Not surprisingly, naively applying ORAM to a secure processor setting incurs large performance overheads.

In this paper, we study a recent proposal called Path ORAM and demonstrate techniques to make it practical in a secure processor setting. We introduce background eviction schemes to prevent Path ORAM failure and allow for a performance-driven design space exploration. We propose a concept called super blocks to further improve Path ORAM's performance, and also show an efficient integrity verification scheme for Path ORAM. With our optimizations, Path ORAM overhead drops by 41.8%, and SPEC benchmark execution time improves by 52.4% relative to a baseline configuration. Our work can be used to improve the security level of previous secure processors.
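The textbook Path ORAM algorithm that this work builds on can be sketched at toy scale (a minimal, unencrypted illustration without the paper's background eviction, super blocks, or integrity verification): blocks live in a binary tree of small buckets, every access reads one whole root-to-leaf path, remaps the block to a fresh random leaf, and greedily writes blocks back along the path.

```python
import random

class PathORAM:
    """Minimal non-recursive Path ORAM sketch (position map kept in the
    clear; no encryption or background eviction)."""

    def __init__(self, depth=4, bucket_size=4):
        self.L = depth                        # tree has 2^L leaves
        self.Z = bucket_size                  # blocks per bucket
        self.tree = {i: [] for i in range(2 ** (depth + 1) - 1)}
        self.pos = {}                         # block id -> assigned leaf
        self.stash = {}                       # block id -> data

    def _path(self, leaf):
        """Heap-indexed bucket ids from the leaf bucket up to the root."""
        node = 2 ** self.L - 1 + leaf
        path = []
        while True:
            path.append(node)
            if node == 0:
                return path
            node = (node - 1) // 2

    def access(self, blk, data=None):
        leaf = self.pos.get(blk, random.randrange(2 ** self.L))
        self.pos[blk] = random.randrange(2 ** self.L)   # remap to new leaf
        path = self._path(leaf)
        for node in path:                     # read the whole path into stash
            for b, d in self.tree[node]:
                self.stash[b] = d
            self.tree[node] = []
        if data is not None:                  # write: update block in stash
            self.stash[blk] = data
        out = self.stash.get(blk)
        for node in path:                     # write back, deepest bucket first
            fits = [b for b in self.stash
                    if node in self._path(self.pos[b])][: self.Z]
            self.tree[node] = [(b, self.stash.pop(b)) for b in fits]
        return out
```

Every access touches exactly one path regardless of which block is requested, which is what hides the address sequence; the failure the paper's background eviction prevents corresponds here to the stash growing without bound under adverse access patterns.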

05:22 [Pub][ePrint] A Security Framework for Analysis and Design of Software Attestation, by Frederik Armknecht and Ahmad-Reza Sadeghi and Steffen Schulz and Christian Wachsmann

  Software attestation has become a popular and challenging research topic at many established security conferences with an expected strong impact in practice. It aims at verifying the software integrity of (typically) resource-constrained embedded devices. However, for practical reasons, software attestation cannot rely on stored cryptographic secrets or dedicated trusted hardware. Instead, it exploits side-channel information, such as the time that the underlying device needs for a specific computation. As traditional cryptographic solutions and arguments are not applicable, novel approaches for the design and analysis are necessary. This is certainly one of the main reasons why the security goals, properties and underlying assumptions of existing software attestation schemes have been only vaguely discussed so far, limiting the confidence in their security claims. Thus, putting software attestation on a solid ground and having a founded approach for designing secure software attestation schemes is still an important open problem.

We provide the first steps towards closing this gap. Our first contribution is a security framework that formally captures security goals, attacker models, and various system and design parameters. Moreover, we present a generic software attestation scheme that covers most existing schemes in the literature. Finally, we analyze its security within our framework, yielding sufficient conditions for provably secure software attestation schemes. We expect this consolidating work to enable a meaningful security analysis of existing schemes, to support the design of provably secure software attestation schemes, and to inspire new research in this area.
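The generic pattern described above, a nonce-seeded checksum over device memory combined with a timing check, can be illustrated with a toy attestation round (an illustrative sketch under simplified assumptions, not the paper's formal scheme; all names are hypothetical):

```python
import hashlib, os, time

# The verifier's known-good copy of the device's firmware image.
MEMORY = bytes(range(256)) * 16

def device_checksum(nonce: bytes, memory: bytes) -> bytes:
    # The fresh nonce forces a full traversal of memory per round,
    # so the checksum cannot be precomputed.
    return hashlib.sha256(nonce + memory).digest()

def attest(memory_on_device: bytes, time_budget: float = 0.5) -> bool:
    """One attestation round: send nonce, time the response, verify it."""
    nonce = os.urandom(16)
    start = time.monotonic()
    response = device_checksum(nonce, memory_on_device)
    elapsed = time.monotonic() - start
    expected = hashlib.sha256(nonce + MEMORY).digest()
    # Accept only if the checksum matches the known-good image AND the
    # device answered fast enough to rule out on-the-fly emulation.
    return response == expected and elapsed <= time_budget

assert attest(MEMORY)                     # unmodified firmware passes
assert not attest(b"\xff" + MEMORY[1:])   # tampered firmware fails
```

The timing side channel is the load-bearing assumption here, exactly as the abstract notes: without stored secrets or trusted hardware, the time budget is what prevents a compromised device from simulating the honest computation.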

05:22 [Pub][ePrint] Throughput Optimized Implementations of QUAD, by Jason R. Hamlet and Robert W. Brocato

  We present several software and hardware implementations of QUAD, a recently introduced stream cipher designed to be provably secure and practical to implement. The software implementations target both a personal computer and an ARM microprocessor. The hardware implementations target field programmable gate arrays. The purpose of our work was to first find the baseline performance of QUAD implementations, then to optimize our implementations for throughput. Our software implementations perform comparably to prior work. Our hardware implementations are the first known implementations to use random coefficients, in agreement with QUAD's security argument, and achieve much higher throughput than prior implementations.
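QUAD's keystream generation, iterating a system of random multivariate quadratic equations where part of each evaluation updates the state and the rest is emitted as keystream, can be sketched at toy scale (parameters far below anything secure, and coefficient generation simplified for illustration):

```python
import random

random.seed(1)   # deterministic demo coefficients
N = 8            # state size in bits (real QUAD uses hundreds)
OUT = 8          # keystream bits emitted per step

def random_quadratic(n):
    """Random multivariate quadratic over GF(2): coefficients for the
    x_i*x_j terms, the linear terms, and a constant."""
    quad = [[random.randrange(2) for _ in range(n)] for _ in range(n)]
    lin = [random.randrange(2) for _ in range(n)]
    const = random.randrange(2)
    return quad, lin, const

def eval_quadratic(poly, x):
    quad, lin, const = poly
    v = const
    for i in range(len(x)):
        if x[i]:
            v ^= lin[i]
            for j in range(i, len(x)):   # upper-triangular terms only
                v ^= quad[i][j] & x[j]
    return v

# One half of the quadratic system updates the state, the other half
# is the keystream output -- the core QUAD iteration.
S = [random_quadratic(N) for _ in range(N)]     # state-update system
P = [random_quadratic(N) for _ in range(OUT)]   # output system

def keystream(state, steps):
    bits = []
    for _ in range(steps):
        bits += [eval_quadratic(p, state) for p in P]
        state = [eval_quadratic(s, state) for s in S]
    return bits

print(keystream([1, 0, 1, 1, 0, 0, 1, 0], 4))   # 32 keystream bits
```

The "random coefficients" point in the abstract corresponds to `S` and `P` being drawn uniformly at random: QUAD's security argument reduces to the hardness of solving such a random quadratic system, so fixing structured coefficients (as prior hardware did) falls outside that argument.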