International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2015-02-28
04:17 [Pub][ePrint]

Lightweight cryptography aims at achieving security comparable to conventional cryptography at a much lower cost. Simon is a lightweight alternative to AES, as it shares the same cryptographic parameters, but it has been shown to be extremely area-efficient on FPGAs. However, in the embedded setting, protection against side-channel analysis is often required. In this work we present a threshold implementation of Simon. The proposed core splits the information between three shares and achieves provable security against first-order side-channel attacks. The core can be implemented in less than 100 slices of a low-cost FPGA, making it the world's smallest threshold implementation of a block cipher. Hence, the proposed core perfectly suits highly constrained embedded systems, including sensor nodes and RFIDs. The security of the proposed core is validated by provable arguments as well as practical DPA attacks and tests for leakage quantification.
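Simon's round function is built only from AND, rotation, and XOR, so sharing the nonlinear AND gate is the heart of any threshold implementation of it. The following is a minimal software sketch of the standard three-share, first-order TI of a single AND gate; it illustrates the non-completeness property such cores rely on, and is not the paper's actual FPGA core.

```python
import secrets

def share3(x):
    """Split bit x into three random shares with x = s1 ^ s2 ^ s3."""
    s1, s2 = secrets.randbits(1), secrets.randbits(1)
    return s1, s2, x ^ s1 ^ s2

def ti_and(xs, ys):
    """Three-share threshold implementation of z = x AND y.

    Non-completeness: each output share omits one input-share index,
    so no single share function sees all shares of x or of y.
    """
    x1, x2, x3 = xs
    y1, y2, y3 = ys
    z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)   # never touches share 1
    z2 = (x3 & y3) ^ (x1 & y3) ^ (x3 & y1)   # never touches share 2
    z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)   # never touches share 3
    return z1, z2, z3

# Correctness: the recombined shares equal the unshared AND,
# since z1 ^ z2 ^ z3 covers all nine cross terms x_i * y_j.
for x in (0, 1):
    for y in (0, 1):
        z1, z2, z3 = ti_and(share3(x), share3(y))
        assert z1 ^ z2 ^ z3 == (x & y)
```

Because each output share is computed without one of the input shares, a first-order probe on any single share function learns nothing about the unshared inputs, which is the provable-security argument behind such cores.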

04:17 [Pub][ePrint]

The arrival of indistinguishability obfuscation (iO) has transformed the cryptographic landscape by enabling several security goals that were previously beyond our reach. Consequently, one of the pressing goals currently is to construct iO from well-studied standard cryptographic assumptions.

In this work, we make progress in this direction by presenting a reduction from iO to a natural form of public-key functional encryption (FE). Specifically, we construct iO for general functions from any sub-exponentially secure, single-key compact FE scheme for NC1 that achieves selective, indistinguishability security against sub-exponential time adversaries. Here, compactness means that the running time of the encryption algorithm must be polynomial in the security parameter and the input message length, and must not depend on the function description size or its output length.

We achieve this result by developing a novel arity amplification technique to transform FE for single-ary functions into FE for multi-ary functions (aka multi-input FE). Instantiating our approach with known, non-compact FE schemes, we obtain the first constructions of multi-input FE for constant-ary functions based on standard assumptions.

Finally, as a result of independent interest, we construct a compact FE scheme from randomized encodings for Turing machines and the learning with errors assumption.

04:17 [Pub][ePrint]

The Feistel-SP structure is a commonly adopted structure in symmetric cryptography with many practical instances. Differential power analysis (DPA) has proven to be effective against compact implementations of these ciphers in recent years. However, applications of DPA to Feistel-SP ciphers with loop hardware implementations are more complicated and less evaluated in the literature, mainly due to the relatively large size (32 bits or more) of the whole round key, which often results in complex relations with the targeted intermediate variable.

In this paper, we propose a practical chosen-message power analysis method for Feistel-SP ciphers with loop hardware implementations. The essence of the new method lies in the careful selection of the plaintext set in a chosen-message manner. As a result, the input space of the plaintext in our method is decreased from $2^{32}$ or more to $2^8$ or less, which is suitable for practical power analysis. Moreover, we show that the whitening key at the first or last round can be easily revealed with our method, leading to full exposure of the master key thanks to the relations between whitening keys and the master key in many practical ciphers. To further demonstrate the validity of the new method, we carry out extensive experiments on two ISO-standardized and widely deployed ciphers, CLEFIA and Camellia, with loop implementations on FPGA, and the master keys are recovered as expected.
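The step from a $2^8$ input space to a recovered key byte can be illustrated with a toy correlation power analysis on simulated Hamming-weight leakage. Everything below (the S-box, the key value, the noise model) is purely illustrative and unrelated to the paper's actual attack on CLEFIA and Camellia; it only shows why cycling through 256 chosen byte values suffices to test all 256 guesses for one key byte.

```python
import random

def hw(v):
    """Hamming weight of an integer."""
    return bin(v).count("1")

# Deterministic toy S-box, a stand-in for a real cipher's S-box.
rng = random.Random(0)
SBOX = list(range(256))
rng.shuffle(SBOX)

SECRET_KEY = 0x5A  # the key byte the attack should recover

def simulate_trace(pt):
    """One leakage sample: Hamming weight of the S-box output plus noise."""
    return hw(SBOX[pt ^ SECRET_KEY]) + rng.gauss(0, 1.0)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Chosen-message style: only the 2^8 = 256 byte values are needed,
# each measured a few times, not the full multi-byte input space.
plaintexts = [pt for pt in range(256) for _ in range(20)]
traces = [simulate_trace(pt) for pt in plaintexts]

def recover_key(plaintexts, traces):
    """Return the guess whose predicted leakage best matches the traces."""
    return max(range(256),
               key=lambda g: abs(pearson(
                   [hw(SBOX[pt ^ g]) for pt in plaintexts], traces)))

assert recover_key(plaintexts, traces) == SECRET_KEY
```

The correct guess predicts the leakage of every trace, so its correlation dominates; wrong guesses decorrelate because the S-box is nonlinear.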

01:17 [Pub][ePrint]

ICEPOLE is a CAESAR candidate whose original submission document claims an intermediate level of robustness under nonce-misuse circumstances. In particular, it was claimed that a key recovery attack against ICEPOLE is impossible in the case of nonce misuse. ICEPOLE is strong against differential cryptanalysis and linear cryptanalysis. In this paper, we develop differential-linear attacks against ICEPOLE when the nonce is misused. Our attacks show that the state of ICEPOLE-128 and ICEPOLE-128a can be recovered with data complexity $2^{46}$ and time complexity $2^{46}$; the state of ICEPOLE-256a can be recovered with data complexity $2^{60}$ and time complexity $2^{60}$. For ICEPOLE-128a and ICEPOLE-256a, the secret key is recovered once the state is recovered. We experimentally verified the attacks against ICEPOLE-128 and ICEPOLE-128a.

01:17 [Pub][ePrint]

This paper compares the outcomes of profiled single-trace attacks on four different lightweight ciphers in order to investigate which of their properties, if any, contribute to attack success. We show that mainly the diffusion properties of both the round function and the key schedule play a role. In particular, the more (reasonably statistically independent) intermediate values are produced in a target implementation, the better attacks succeed. A crucial aspect for lightweight ciphers is hence the key schedule, which is often designed to be particularly light. This design choice implies that information from all round keys can be easily combined, which results in attacks that succeed with ease.

01:17 [Pub][ePrint]

In the last few years, cryptographic multilinear maps have proved their tremendous potential as building blocks for new constructions, in particular the first viable approach to general program obfuscation. After the first candidate construction by Garg, Gentry and Halevi (GGH) based on ideal lattices, a second construction over the integers was described by Coron, Lepoint and Tibouchi (CLT). However, the CLT scheme was recently broken by Cheon et al.; the attack works by computing the eigenvalues of a diagonalizable matrix over the rationals derived from the multilinear map.

In this paper we describe a new candidate multilinear map over the integers. Our construction is based on CLT but uses a new arithmetic technique that makes the zero-testing element non-linear in the encoding, which prevents the Cheon et al. attack. Our new construction is relatively practical, as its efficiency is comparable to that of the original CLT scheme. Moreover, the subgroup membership and decisional linear assumptions appear to hold in the new setting.

01:17 [Pub][ePrint]

Indistinguishability obfuscation is a tremendous notion, powerful enough to give rise to almost any known cryptographic object. We construct indistinguishability obfuscation from any public-key functional encryption scheme with succinct ciphertexts and sub-exponential security.

01:17 [Pub][ePrint]

A ring signature enables a user to anonymously sign a message on behalf of a group of users, termed a 'ring', formed in an ad-hoc manner. A naive scheme produces a signature linear in the size of the ring, which is extremely inefficient when the ring is large. Dodis et al. proposed a constant-size scheme at EUROCRYPT'13, but it is provably secure only in the random oracle model. The best known results without random oracles are a sub-linear size construction by Chandran et al. at ICALP'07 and a follow-up work by Essam Ghadafi at IMACC'13. Therefore, the construction of a constant-size ring signature scheme without random oracles that meets stringent security requirements remains an interesting open problem.

Our first contribution is a generic technique to convert a compatible signature scheme into a constant-size ring signature scheme. The technique employs a constant-size set membership check that may be of independent interest. Our construction is instantiated over an asymmetric pairing of composite order and is optimally efficient. The scheme meets the strongest security requirements, viz. anonymity under full key exposure and unforgeability against insider corruption, without using random oracles, under simple hardness assumptions. We also provide a concrete instantiation of the scheme based on full Boneh-Boyen signatures.

01:17 [Pub][ePrint]

Constant parallel-time cryptography makes it possible to perform complex cryptographic tasks at an ultimate level of parallelism, namely, by local functions in which each output bit depends on a constant number of input bits. A natural way to obtain local cryptographic constructions is to use *random local functions*, in which each output bit is computed by applying some fixed $d$-ary predicate $P$ to a randomly chosen $d$-size subset of the input bits.
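This sampling procedure is easy to make concrete. The sketch below draws a random $d$-local function; the particular 5-ary predicate (an XOR of three bits plus an AND of two more, a shape studied in this literature) is only an illustrative choice, not one mandated by the survey.

```python
import random

def sample_local_function(n, m, d, P, seed=None):
    """Sample a random d-local function f: {0,1}^n -> {0,1}^m.

    Each output bit applies the fixed d-ary predicate P to a uniformly
    chosen d-size subset of the n input bits.
    """
    rng = random.Random(seed)
    # One random d-subset (here, an ordered sample) per output bit.
    subsets = [rng.sample(range(n), d) for _ in range(m)]

    def f(x):
        return [P(*(x[i] for i in S)) for S in subsets]
    return f

# Illustrative 5-ary predicate: XOR of three bits, masked by an AND.
def P5(a, b, c, d, e):
    return a ^ b ^ c ^ (d & e)

# A random 5-local function stretching 128 input bits to 256 output bits.
f = sample_local_function(n=128, m=256, d=5, P=P5, seed=1)
x = [random.Random(2).randrange(2) for _ in range(128)]
y = f(x)
assert len(y) == 256 and set(y) <= {0, 1}
```

The hardness questions the survey studies are about exactly such objects: for which predicates $P$ and output lengths $m$ is $f$ one-way, or is $y$ pseudorandom, when the subsets are public and $x$ is secret.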

In this work, we study the cryptographic hardness of random local functions. In particular, we survey known attacks and hardness results, discuss different flavors of hardness (one-wayness, pseudorandomness, collision resistance, public-key encryption), and mention applications to other problems in cryptography and computational complexity. We also present some open questions in the hope of developing a systematic study of the cryptographic hardness of local functions.

01:17 [Pub][ePrint]

We introduce quantitative usability and security models to guide the design of *password management schemes*: systematic strategies to help users create and remember multiple passwords. In the same way that security proofs in cryptography are based on complexity-theoretic assumptions (e.g., hardness of factoring and discrete logarithm), we quantify usability by introducing *usability assumptions*. In particular, password management relies on assumptions about human memory, e.g., that a user who follows a particular rehearsal schedule will successfully maintain the corresponding memory. These assumptions are informed by research in cognitive science and can be tested empirically.

Given rehearsal requirements and a user's visitation schedule for each account, we use the total number of extra rehearsals that the user would have to do to remember all of his passwords as a measure of the usability of the password scheme. Our usability model leads us to a key observation: password reuse benefits users not only by reducing the number of passwords that the user has to memorize, but, more importantly, by increasing the natural rehearsal rate for each password. We also present a security model which accounts for the complexity of password management with multiple accounts and associated threats, including online, offline, and plaintext password leak attacks.

Observing that current password management schemes are either insecure or unusable, we present Shared Cues, a new scheme in which the underlying secret is strategically shared across accounts to ensure that most rehearsal requirements are satisfied naturally while simultaneously providing strong security. The construction uses the Chinese Remainder Theorem to achieve these competing goals.
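The abstract does not spell out how the Chinese Remainder Theorem is used, but the basic CRT share-and-recombine mechanism such constructions build on can be sketched as follows. The moduli and secret below are illustrative, and this is generic CRT-based sharing, not the Shared Cues construction itself.

```python
from math import gcd, prod

def crt_share(secret, moduli):
    """Share a secret as its residues modulo pairwise-coprime moduli."""
    assert all(gcd(a, b) == 1 for i, a in enumerate(moduli)
               for b in moduli[i + 1:]), "moduli must be pairwise coprime"
    return [secret % m for m in moduli]

def crt_reconstruct(shares, moduli):
    """Recombine residues into the unique value mod prod(moduli) via CRT."""
    M = prod(moduli)
    x = 0
    for r, m in zip(shares, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi modulo m.
        x += r * Mi * pow(Mi, -1, m)
    return x % M

moduli = [101, 103, 107, 109]   # pairwise coprime, illustrative only
secret = 42424242
assert secret < prod(moduli)    # the secret must fit in the CRT range
shares = crt_share(secret, moduli)
assert crt_reconstruct(shares, moduli) == secret
```

The appeal of CRT here is that each residue is small and independently meaningful, while the full secret is only determined once enough residues are combined, which is the kind of trade-off between per-account exposure and overall security the scheme targets.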

01:17 [Pub][ePrint]

Recent devastating attacks by Cheon et al. [Eurocrypt'15] and others have highlighted significant gaps in our intuition about security in candidate multilinear map schemes, and in candidate obfuscators that use them. The new attacks, and some that were previously known, are typically called "zeroizing" attacks because they all crucially rely on the ability of the adversary to create encodings of 0.

In this work, we initiate the study of post-zeroizing obfuscation, and we present a construction for the special case of evasive functions. We show that our obfuscator survives all known attacks on the underlying multilinear maps by proving that no encodings of 0 can be created by a generic-model adversary. Previous obfuscators (for both evasive and general functions) were either analyzed in a less conservative "pre-zeroizing" model that does not capture recent attacks, or were proved secure relative to assumptions that are now known to be false.

To prove security, we introduce a new technique for analyzing polynomials over multilinear map encodings. This technique shows that the types of encodings an adversary can create are much more restricted than was previously known, and it is a crucial step toward achieving post-zeroizing security. We also believe the technique is of independent interest, as it yields efficiency improvements for existing schemes.