International Association for Cryptologic Research

IACR News Central


18:17 [Pub][ePrint] Enhanced Lattice-Based Signatures on Reconfigurable Hardware, by Thomas Pöppelmann and Léo Ducas and Tim Güneysu

  The recent BLISS signature scheme showed that lattice-based constructions have evolved into practical alternatives to RSA or ECC. Besides reasonably small signatures of 5600 bits at a 128-bit security level, BLISS enables extremely fast signing and signature verification in software. However, due to the complex sampling of Gaussian noise with high precision, it is not clear whether this scheme can be mapped efficiently to embedded devices. In particular, the software approach of using large precomputed tables for Gaussian sampling cannot be transferred to constrained computing environments, such as FPGAs with limited memory. In this work we present techniques for an efficient CDT-based Gaussian sampler on reconfigurable hardware involving Peikert's convolution lemma and the Kullback-Leibler divergence. Based on our enhanced sampler design, we provide a first BLISS architecture for Xilinx Spartan-6 FPGAs that integrates fast FFT/NTT-based polynomial multiplication, sparse multiplication, and the Keccak hash function. On our core, a signing operation requires 123 µs on average, using 2,584 slices, 8 BRAMs, and 6 DSPs. Verification takes slightly less, at 70 µs.
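The CDT (cumulative distribution table) sampling approach named above can be illustrated with a minimal software sketch: precompute the CDF of a discrete Gaussian, then locate a uniform random value in the table by binary search. The parameters below (sigma, tail cut) are illustrative only and are not the paper's BLISS parameters; a real implementation would use fixed-point arithmetic and the convolution trick to keep the table small.

```python
import bisect
import math
import random

def build_cdt(sigma, tail_cut=10):
    """Cumulative distribution table for a discrete Gaussian over Z with
    parameter sigma, folded to non-negative support (sign drawn separately)."""
    tail = int(tail_cut * sigma)
    weights = [math.exp(-x * x / (2 * sigma * sigma)) for x in range(tail + 1)]
    weights[0] /= 2  # x = 0 has no sign bit, so halve its folded weight
    total = sum(weights)
    cdt, acc = [], 0.0
    for w in weights:
        acc += w
        cdt.append(acc / total)
    return cdt

def sample(cdt):
    """Sample |x| by binary search into the table, then attach a random sign."""
    u = random.random()
    x = bisect.bisect_left(cdt, u)
    return x if random.getrandbits(1) else -x
```

Peikert's convolution lemma, as used in the paper, lets a hardware design combine several samples drawn with a much smaller sigma (and hence a much smaller table) into one sample with the large target sigma.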

18:17 [Pub][ePrint] Certification and Efficient Proofs of Committed Topology Graphs, by Thomas Gross

  Digital signature schemes are a foundational cryptographic building block in certification and the projection of trust. Based on a signature scheme on committed graphs, we propose a toolkit of certification and proof methods to sign committed topology graphs and to prove properties of their certificates in zero-knowledge.

This toolkit allows an issuer, such as an auditor, to sign the topology representation of an infrastructure. The prover, such as an infrastructure provider, can then convince a verifier of topology properties, such as partitions, connectivity, or isolation, without disclosing the structure of the topology itself. Thereby, we can achieve certification of the structure of critical systems, such as infrastructure clouds or outsourced systems, while still maintaining confidentiality. We offer zero-knowledge proofs of knowledge for a general specification language of security goals for virtualized infrastructures, such that high-level security goals can be proven over the topology certificate. Our method builds upon the Camenisch-Lysyanskaya signature scheme and is based on honest-verifier proofs and the strong RSA assumption.

18:17 [Pub][ePrint] Private and Dynamic Time-Series Data Aggregation with Trust Relaxation, by Iraklis Leontiadis and Kaoutar Elkhiyaoui and Refik Molva

  With the advent of networking applications collecting user data on a massive scale, the privacy of individual users appears to be a major concern. The main challenge is the design of a solution that allows the data analyzer to compute global statistics over the set of individual inputs that are protected by some confidentiality mechanism. Joye et al. [7] recently suggested a solution that allows a centralized party to compute the sum of encrypted inputs collected through a smart metering network. The main shortcomings of this solution are its reliance on a trusted dealer for key distribution and the need for frequent key updates. In this paper we introduce a secure protocol for aggregation of time-series data that is based on the Joye et al. [7] scheme and in which the main shortcomings of the latter, namely the requirement for key updates and for the trusted dealer, are eliminated. As such, during the protocol execution none of the parties apart from the users themselves are aware of the secret keys. Moreover, our scheme supports dynamic group management, whereby, as opposed to Joye et al. [7], leave and join operations do not trigger a key update at the users.
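The style of aggregation scheme discussed here can be sketched abstractly: each user i holds a secret key s_i such that the keys sum to zero in the exponent group (in Joye et al., the trusted dealer distributes these shares; this paper's contribution is removing that dealer). User i publishes c_i = H(t)^{s_i} · g^{x_i}; multiplying all ciphertexts cancels the keys and leaves g^{Σx_i}, from which the aggregator recovers the small sum. The toy group below (a 61-bit prime, generator 3, brute-force discrete log) is purely illustrative and far from a secure instantiation.

```python
import hashlib
import random

P = 2**61 - 1  # toy prime modulus; a real scheme needs a cryptographic group
G = 3          # toy generator

def h(t):
    """Hash the time period t into the group (toy instantiation)."""
    return int.from_bytes(hashlib.sha256(str(t).encode()).digest(), "big") % P

def keygen(n):
    """n user keys that sum to 0 mod P-1, the order of the exponent group."""
    keys = [random.randrange(P - 1) for _ in range(n - 1)]
    keys.append((-sum(keys)) % (P - 1))
    return keys

def encrypt(t, s_i, x_i):
    """User ciphertext c_i = H(t)^{s_i} * G^{x_i} mod P."""
    return (pow(h(t), s_i, P) * pow(G, x_i, P)) % P

def aggregate(ciphertexts, max_sum):
    """The product of all c_i is G^{sum x_i}, since the keys cancel; recover
    the sum by brute force (feasible only because plaintext sums are small)."""
    prod = 1
    for c in ciphertexts:
        prod = (prod * c) % P
    for s in range(max_sum + 1):
        if pow(G, s, P) == prod:
            return s
    raise ValueError("sum out of range")
```

The dealer-free key setup in the paper replaces the centralized `keygen` step; the aggregation equation itself is unchanged.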

18:17 [Pub][ePrint] Handycipher: a Low-tech, Randomized, Symmetric-key Cryptosystem, by Bruce Kallick

  Handycipher is a low-tech, randomized, symmetric-key stream cipher, simple enough to permit pen-and-paper encryption and decryption of messages, while providing a relatively high level of security through a nondeterministic encryption procedure, multiple encryption, and randomly generated session keys.

18:17 [Pub][ePrint] A realtime key recovery attack on the authenticated cipher FASER128, by Xiutao FENG and Fan ZHANG

  FASER is a family of authenticated ciphers submitted to the CAESAR competition, which contains two parent ciphers: FASER128 and FASER256. In this work we focus on FASER128 and present a key recovery attack on it, which needs at most 64 key words and runs in real time on a PC. The result shows that FASER128 is very insecure. What's more, our attack can easily be applied to FASER256 and breaks it entirely.

18:17 [Pub][ePrint] Practical Complexity Cube Attacks on Round-Reduced Keccak Sponge Function, by Itai Dinur and Pawel Morawiecki and Josef Pieprzyk and Marian Srebrny and Michal Straus

  In this paper we mount the cube attack on the Keccak sponge function. The cube attack, formally introduced in 2008, is an algebraic technique applicable to cryptographic primitives whose output can be described as a low-degree polynomial in the input. Our results show that the 5- and 6-round Keccak sponge function is vulnerable to this technique. All the presented attacks have practical complexities and were verified on a desktop PC.
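The core idea of the cube attack can be demonstrated on a toy Boolean polynomial (invented here for illustration; this is not Keccak): XOR-summing the output over all assignments to a chosen "cube" of public variables cancels every term that does not contain the full cube, leaving the "superpoly", here a simple linear function of the secret bits that the attacker can solve for.

```python
from itertools import product

def f(v, k):
    """Toy cipher output over GF(2): public bits v[0..2], secret bits k[0..1].
    f = v0*v1*k0 + v0*v1*v2 + v2*k1 + k0*k1."""
    return (v[0] & v[1] & k[0]) ^ (v[0] & v[1] & v[2]) ^ (v[2] & k[1]) ^ (k[0] & k[1])

def cube_sum(cube_idx, fixed, k):
    """XOR f over all 2^|cube| assignments to the cube variables; the
    remaining public variables keep their fixed values."""
    acc = 0
    for bits in product([0, 1], repeat=len(cube_idx)):
        v = list(fixed)
        for i, b in zip(cube_idx, bits):
            v[i] = b
        acc ^= f(v, k)
    return acc
```

Summing over the cube {v0, v1} kills the terms v2*k1 and k0*k1 (each appears an even number of times) and leaves the superpoly k0 + v2, so with v2 fixed to 0 the cube sum directly reveals the secret bit k0.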

18:17 [Pub][ePrint] Locally Decodable Codes for edit distance, by Rafail Ostrovsky and Anat Paskin-Cherniavsky

  Locally decodable codes (LDCs)~\cite{BFLS91,KT00} are error-correcting codes that allow decoding (any) individual symbol of the message by reading only a few symbols of the codeword. Consider an application such as storage solutions for large data, where errors may occur in the disks (or some disks may simply crash). In such an application, it is often desirable to recover only small portions of the data (to have random access). Thus, in such applications, using LDCs provides enormous efficiency gains over standard error-correcting codes (ECCs), which need to read the entire encoded message to learn even a single bit of information.

Typically, LDCs, as well as standard ECCs, decode the encoded message if up to some bounded fraction of the symbols has been modified. This corresponds to decoding strings of bounded Hamming distance from a valid codeword. An often more realistic metric is the edit distance, measuring the shortest sequence of insertions and deletions (indels) of symbols leading from one word to another. For example, (few) indel modifications are a more realistic model for mutations occurring in a genome. Even more commonly, communication over the web may sustain deletions (lost packets) and insertions (noise).\footnote{Edit distance is indeed "more expressive" than Hamming distance in the sense that $dist_E(x,y)\leq 2dist_H(x,y)$ always holds, while edit distance 2 may translate to Hamming distance $n$. For instance, consider $x=1010\ldots 10$, $y=0101\ldots 1$.}

Standard ECCs for edit distance have been previously considered~\cite{SZ97}. Furthermore, \cite{SZ97} devised codes with rate and distance (error tolerance) optimal up to constants. LDCs, originally considered in the setting of PCPs~\cite{BFLS91}, have found many additional applications and generated a lot of fascinating work (see~\cite{Yek11} and references therein). However, combining these two useful settings, LDCs and robustness against indel errors, has never been considered.

In this work, we study the question of constructing LDCs for edit distance. We demonstrate a strong positive result: LDCs for edit distance can be achieved, with parameters similar to those of LDCs for Hamming distance. More precisely, we devise a generic transformation from LDCs for Hamming distance to LDCs for edit distance with related parameters.
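The gap between the two metrics noted in the footnote is easy to check directly: the indel (insertion/deletion only) edit distance via the standard LCS-style dynamic program against the Hamming distance, evaluated on the alternating strings of the footnote's example.

```python
def hamming(x, y):
    """Hamming distance: number of differing positions (equal lengths)."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def indel(x, y):
    """Indel edit distance (insertions and deletions only), computed with a
    rolling one-row dynamic program over the standard LCS-type recurrence."""
    m, n = len(x), len(y)
    d = list(range(n + 1))  # distances against the empty prefix of x
    for i in range(1, m + 1):
        prev, d[0] = d[0], i  # prev holds the diagonal cell d[i-1][j-1]
        for j in range(1, n + 1):
            prev, d[j] = d[j], prev if x[i - 1] == y[j - 1] else 1 + min(d[j], d[j - 1])
    return d[n]
```

On x = 101010 and y = 010101 the Hamming distance is the full length 6, while a single deletion of the leading 1 plus one insertion of a trailing 1 gives indel distance 2, exactly the footnote's edit-distance-2 versus Hamming-distance-$n$ separation.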

18:17 [Pub][ePrint] Fault Analysis of Grain Family of Stream Ciphers, by Sandip Karmakar and Dipanwita Roy Chowdhury

  In this paper, we present a fault attack on the Grain family of stream ciphers, an eStream finalist. The earlier fault attacks on Grain target the LFSR, whereas our target for fault induction is the NFSR. Our attack requires a small number of faults to be injected: only 150 for Grain v1, and only 312 and 384 for Grain-128 and Grain-128a, respectively. These numbers are much smaller than in earlier reported fault attacks: 1587 for Grain-128 and 1831 for Grain-128a.

18:17 [Pub][ePrint] Differential Fault Analysis of MICKEY Family of Stream Ciphers, by Sandip Karmakar and Dipanwita Roy Chowdhury

  This paper presents differential fault analysis of the MICKEY family of stream ciphers, one of the winners of the eStream project. The current attacks have the best performance among all attacks against the MICKEY ciphers reported to date. The number of faults required is about 1.5 times the state size. We obtain linear equations to determine the state bits. The fault model required is reasonable, and we further relax it by not requiring faults to be reproduced and by allowing multiple-bit faults. In this scenario, more faults are required when reproduction is not allowed, whereas we show that the number of faults remains the same for multiple-bit faults.

18:17 [Pub][ePrint] A Generic Scan Attack on Hardware based eStream Winners, by Sandip Karmakar and Dipanwita Roy Chowdhury

  Scan chains, a design-for-testability (DFT) feature, are included in most modern-day ICs. However, they open a side channel for attacking cryptographic chips. We propose a methodology by which we can recover the internal state of any stream cipher using scan chains, without knowledge of its design. We consider a conventional scan-chain design which is not scrambled or protected in any other way. In this scenario the challenge for the adversary is to obtain the correspondence between the output of the scan chain and the internal state registers of the stream cipher. We present a mathematical model of the attack, and the correspondence between the scan-chain outputs and the internal state bits is proved under this model. We propose an algorithm that, through off-line and on-line simulation, forms a bijection between the above-mentioned sets and thus finds the required correspondence. We also give an estimate of the number of off-line simulations necessary for finding the correspondence.

The proposed strategy is successfully applied to the eStream hardware-based finalists MICKEY-128 2.0, Trivium, and Grain-128. To the best of our knowledge, this is the first scan-based attack against full-round Grain-128 and only the fourth reported cryptanalysis of it. Our attack on Trivium is better than the previously published scan attack on Trivium. This is also the first reported scan-based cryptanalysis of MICKEY-128 2.0.
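The correspondence-finding step of such a scan attack can be modelled in a few lines (a toy model, not the paper's algorithm in detail): in off-line simulation the attacker flips one internal state bit at a time, scans out, and records which scan-chain position toggles, yielding the bijection between state registers and scan-out positions. The random permutation below stands in for the chip's unknown but unscrambled chain ordering.

```python
import random

def scan_out(state, perm):
    """Model of an unprotected scan chain: the chip shifts out its state
    bits in a fixed order (perm) that the attacker does not know."""
    return [state[i] for i in perm]

def find_correspondence(n, scan):
    """Off-line phase: flip each state bit of an all-zero reference state in
    turn and observe which scan-output position changes."""
    base = [0] * n
    ref = scan(base)
    mapping = {}
    for i in range(n):
        flipped = base[:]
        flipped[i] = 1
        out = scan(flipped)
        j = next(p for p in range(n) if out[p] != ref[p])
        mapping[i] = j  # state register i appears at scan position j
    return mapping
```

Once the mapping is known, a single scan-out of the running cipher reveals every internal state bit, after which the cipher can be run forward (or, for these designs, backward) by the attacker.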

18:17 [Pub][ePrint] Continuous After-the-fact Leakage-Resilient Key Exchange (full version), by Janaka Alawatugoda and Colin Boyd and Douglas Stebila

  Security models for two-party authenticated key exchange (AKE) protocols have developed over time to provide security even when the adversary learns certain secret keys. In this work, we advance the modelling of AKE protocols by considering more granular, continuous leakage of long-term secrets of protocol participants: the adversary can adaptively request arbitrary leakage of long-term secrets even after the test session is activated, with limits on the amount of leakage per query but no bounds on the total leakage. We present a security model supporting continuous leakage even when the adversary learns certain ephemeral secrets or session keys, and give a generic construction of a two-pass leakage-resilient key exchange protocol that is secure in the model; our protocol achieves continuous, after-the-fact leakage resilience with not much more cost than a previous protocol with only bounded, non-after-the-fact leakage.