International Association for Cryptologic Research

IACR News Central


Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).

15:17 [Pub][ePrint] Dual System Encryption Framework in Prime-Order Groups, by Nuttapong Attrapadung

  We propose a new generic framework for achieving fully secure attribute-based encryption (ABE) in prime-order bilinear groups. It is generic in the sense that it can be applied to ABE for arbitrary predicates. All previously available frameworks that are generic in this sense were given only in composite-order bilinear groups, namely the frameworks proposed by Wee (TCC '14) and Attrapadung (Eurocrypt '14). Both frameworks provide abstractions of the dual-system encryption techniques introduced by Waters (Crypto '09). Our framework can be considered a prime-order version of Attrapadung's framework and works in a similar manner: it relies on a main component called pair encodings, and it generically compiles any secure pair encoding scheme for the predicate under consideration into a fully secure ABE scheme for that predicate. One feature of our new compiler is that although the resulting ABE schemes are newly defined in prime-order groups, we require essentially the same security notions of pair encodings as before. Besides the security of pair encodings, our framework assumes only the Matrix Diffie-Hellman assumption, introduced by Escala et al. (Crypto '13), a weak assumption that includes the Decisional Linear assumption as a special case.

As for its applications, we can plug in available pair encoding schemes and automatically obtain the first fully secure ABE realizations in prime-order groups for predicates for which only fully secure schemes in composite-order groups were known. These include ABE for regular languages, ABE for monotone span programs (and hence Boolean formulae) with short ciphertexts or keys, and completely unbounded ABE for monotone span programs.

As a side result, we establish the first generic implication from ABE for monotone span programs to ABE for branching programs. Consequently, we obtain fully-secure ABE for branching programs in some new variants, namely, unbounded, short-ciphertext, and short-key variants. Previous ABE schemes for branching programs are bounded and require linear-size ciphertexts and keys.

15:17 [Pub][ePrint] On the Communication Complexity of Secure Computation, by Deepesh Data and Manoj M. Prabhakaran and Vinod M. Prabhakaran

  Information theoretically secure multi-party computation (MPC) is a central primitive of modern cryptography. However, relatively little is known about the communication complexity of this primitive.

In this work, we develop powerful information theoretic tools to prove lower bounds on the communication complexity of MPC. We restrict ourselves to a concrete setting involving 3 parties, in order to bring out the power of these tools without introducing too many complications. Our techniques include the use of a data processing inequality for "residual information" (i.e., the gap between mutual information and Gács-Körner common information), a new information inequality for 3-party protocols, and the idea of "distribution switching," by which lower bounds computed under certain worst-case scenarios can be shown to apply for the general case.
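The residual information quantity used above can be made concrete with a small sketch (the toy distributions below are hypothetical and not from the paper): for finite distributions, the Gács-Körner common information equals the entropy of the connected-component label of the bipartite support graph of (X, Y), so residual information is directly computable.

```python
# Illustrative sketch: residual information RI(X;Y) = I(X;Y) - CI_GK(X;Y),
# where CI_GK is the Gacs-Korner common information. For finite distributions
# CI_GK is the entropy of the connected-component label of the bipartite
# support graph (x ~ y whenever p(x, y) > 0).

from math import log2

def mutual_information(p):  # p: dict {(x, y): probability}
    px, py = {}, {}
    for (x, y), v in p.items():
        px[x] = px.get(x, 0) + v
        py[y] = py.get(y, 0) + v
    return sum(v * log2(v / (px[x] * py[y])) for (x, y), v in p.items() if v > 0)

def gk_common_information(p):
    # Union-find over the bipartite support graph; the common part of (X, Y)
    # is the label of the connected component.
    parent = {}
    def find(u):
        while parent.setdefault(u, u) != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u
    for (x, y), v in p.items():
        if v > 0:
            parent[find(('x', x))] = find(('y', y))
    comp = {}
    for (x, y), v in p.items():
        root = find(('x', x))
        comp[root] = comp.get(root, 0) + v
    return -sum(v * log2(v) for v in comp.values() if v > 0)

def residual_information(p):
    return mutual_information(p) - gk_common_information(p)

# X = Y = uniform bit: everything is common, so residual information is 0.
equal = {(0, 0): 0.5, (1, 1): 0.5}
# Correlated bits with full support: CI_GK is 0, so the residual information
# equals the (positive) mutual information.
noisy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
print(round(residual_information(equal), 6))   # 0.0
print(round(residual_information(noisy), 6))
```

The second example illustrates the gap the authors exploit: two variables can carry substantial mutual information while their Gács-Körner common part is empty.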

Using these techniques we obtain tight bounds on communication complexity by MPC protocols for various interesting functions. In particular, we show concrete functions that have "communication-ideal" protocols, which achieve the minimum communication simultaneously on all links in the network. Also, we obtain the first explicit example of a function that incurs a higher communication cost than the input length in the secure computation model of Feige, Kilian and Naor [FeigeKiNa94], who had shown that such functions exist. We also show that our communication bounds imply tight lower bounds on the amount of randomness required by MPC protocols for many interesting functions.

15:17 [Pub][ePrint] Forgery Attacks on round-reduced ICEPOLE-128, by Christoph Dobraunig and Maria Eichlseder and Florian Mendel

  ICEPOLE is a family of authenticated encryption schemes submitted to the ongoing CAESAR competition and also presented at CHES 2014. To justify the use of ICEPOLE, or to point out potential weaknesses, third-party cryptanalysis is needed. In this work, we evaluate the resistance of ICEPOLE-128 against forgery attacks. Using differential cryptanalysis, we are able to create forgeries from a known ciphertext-tag pair with probability $2^{-60.3}$ for a round-reduced version of ICEPOLE-128 in which the last permutation is reduced to 4 (out of 6) rounds. This is a noticeable advantage over simply guessing the right tag, which succeeds with probability $2^{-128}$. As far as we know, this is the first published attack on round-reduced versions of ICEPOLE-128 in a nonce-respecting setting.

15:17 [Pub][ePrint] Biclique cryptanalysis of MIBS-80 and PRESENT-80, by Mohammad Hossein Faghihi Sereshgi, Mohammad Dakhilalian, and Mohsen Shakiba

  In this paper we present the first biclique cryptanalysis of the MIBS block cipher and a new biclique cryptanalysis of the PRESENT block cipher. These attacks are performed on full-round MIBS-80 and full-round PRESENT-80. The attack on MIBS-80 uses the matching-without-matrix method and has a data complexity upper bounded by $2^{52}$ chosen plaintexts; it reduces the security of this cipher by about 1 bit. The attack on PRESENT-80 has a data complexity of at most $2^{22}$ chosen plaintexts and a computational complexity of $2^{79.37}$ encryptions; both complexities are lower than those of all previous cryptanalyses of PRESENT-80.

00:17 [Pub][ePrint] PAC Learning of Arbiter PUFs, by Fatemeh Ganji and Shahin Tajik and Jean-Pierre Seifert

  The general concept of Physically Unclonable Functions (PUFs) is nowadays widely accepted and adopted to meet the requirements of secure identification and key generation/storage for cryptographic ciphers. However, shaken by different kinds of attacks, the promised security features of arbiter PUFs, including unclonability and unpredictability, have been shown not to hold unconditionally. It has been stated in the literature that arbiter PUFs can be broken by launching modeling attacks, i.e., by applying machine learning methods. In this case, a large set of Challenge-Response Pairs (CRPs) is collected by an attacker, aiming at mathematically modeling the challenge-response behavior of a given arbiter PUF. However, the success of all existing modeling attacks has so far rested on pure trial-and-error estimates. This means that neither the probability of obtaining a useful model (confidence), nor the maximum number of required CRPs, nor the correct prediction of an unknown challenge (accuracy) is guaranteed at all. To address these issues, this work presents a Probably Approximately Correct (PAC) learning algorithm, proving that any arbiter PUF can be learned to prescribed levels of accuracy and confidence in polynomial time. Based on a crucial discretization process, we are able to define a Deterministic Finite Automaton (of polynomial size) that exactly accepts the regular language corresponding to the challenges mapped by the given PUF to one of the two responses. We also prove that the maximum required number of CRPs is polynomial in the number of arbiter stages. A further analysis reveals that this maximum number of CRPs is also polynomial in the maximum deviation of the arbiter delays as well as in the pre-defined levels of accuracy and confidence. To the best of our knowledge, this is the first algorithm that can provably learn an arbitrary arbiter PUF.
Moreover, our proof of the PAC learnability of arbiter PUFs gives many new insights into the correct construction of secure arbiter PUFs in general.
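As background for why modeling and learning attacks apply at all, the standard additive delay model of an arbiter PUF (a common model in the literature, not code from this paper; all names and parameters below are illustrative) makes the response a linear threshold function of a parity feature vector of the challenge:

```python
# Hypothetical sketch of the additive delay model of an n-stage arbiter PUF.
# The accumulated delay difference is linear in the feature vector Phi(c),
# so collected CRPs are labeled examples of a halfspace.

import random

def feature_vector(challenge):
    # Phi_i(c) = prod_{j >= i} (-1)^{c_j}, plus a constant bias feature.
    n = len(challenge)
    phi = []
    for i in range(n):
        prod = 1
        for j in range(i, n):
            prod *= -1 if challenge[j] else 1
        phi.append(prod)
    return phi + [1]

def make_arbiter_puf(n, seed=0):
    # Per-stage delay differences are drawn from a Gaussian, as is usual
    # in this model; a real PUF's delays come from process variation.
    rng = random.Random(seed)
    w = [rng.gauss(0.0, 1.0) for _ in range(n + 1)]
    def respond(challenge):
        s = sum(wi * xi for wi, xi in zip(w, feature_vector(challenge)))
        return 1 if s > 0 else 0
    return respond

puf = make_arbiter_puf(n=16)
rng = random.Random(42)
for _ in range(4):
    c = [rng.randrange(2) for _ in range(16)]
    print(''.join(map(str, c)), '->', puf(c))
```

Because the response is a halfspace over Phi(c), challenge-response pairs are exactly the labeled examples that both empirical modeling attacks and the provable learning guarantees discussed above operate on.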

00:17 [Pub][ePrint] MMBcloud-tree: Authenticated Index for Verifiable Cloud Service Selection, by Jingwei Li, Anna Squicciarini, Dan Lin, Smitha Sundareswaran, Chunfu Jia

  Cloud brokers have been recently introduced as an additional computational layer to facilitate cloud selection and service management tasks for cloud consumers. However, existing brokerage schemes on cloud service selection typically assume that brokers are completely trusted, and do not provide any guarantee over the correctness of the service recommendations. It is then possible for a compromised or dishonest broker to easily take advantage of the limited capabilities of the clients and provide incorrect or incomplete responses. To address this problem, we propose an innovative Cloud Service Selection Verification (CSSV) scheme and index structures (MMBcloud-tree) to enable cloud clients to detect misbehavior of the cloud brokers during the service selection process. We demonstrate correctness and efficiency of our approaches both theoretically and empirically.

00:17 [Pub][ePrint] Protecting against Multidimensional Linear and Truncated Differential Cryptanalysis by Decorrelation, by Céline Blondeau and Aslí Bay and Serge Vaudenay

  The decorrelation theory provides a different point of view on the security of block cipher primitives. Results on some statistical attacks obtained in this context can support or provide new insight into the security of symmetric cryptographic primitives.

In this paper, we study, for the first time, multidimensional linear attacks as well as truncated differential attacks in this context. We show that the cipher should be decorrelated of order two to be resistant against some multidimensional linear and truncated differential attacks. Previous results obtained with this theory for linear, differential, differential-linear and boomerang attacks are also revisited and improved in this paper.

00:17 [Pub][ePrint] Financial Cryptography: Algorithmic Mechanisms for a Hedonic Game, by Sumit Chakraborty

  A selling agent (or a group of selling agents) wants to allocate and sell a parcel (or a set of parcels) of land optimally and fairly to a buying agent, within the capacity constraint of the selling agent and the budget constraint of the buying agent. This problem is solved by combining concepts from algorithmic cooperative game theory and financial cryptography. It is an approach for a group of decision-making agents to reach a mutually beneficial agreement through compromise and stable matching of preferences. The work presents a cooperative game and a set of algorithmic coordination mechanisms: SBSS, SBMS (for collective and non-collective bargaining in the holdout problem) and MBSS. The game is characterized by a set of agents, inputs, strategic moves, a revelation principle, a payment function and outputs. The coordination mechanisms are designed based on domain planning, rational fair data exchange and compensation negotiation. These mechanisms preserve the privacy of strategic data through secure multi-party computation (SMC), more specifically by solving Yao's millionaire problem. The mechanisms are analyzed from the perspectives of the revelation principle, computational intelligence and communication complexity. The communication complexity depends on the time constraint of the negotiating agents, their information state and the number of negotiation issues. The computational complexity depends on the valuation of the pricing plan, compensation estimation and private comparison. It is a mixed-strategy game; both sequential and simultaneous moves can be applied intelligently to search a neighborhood space of core solutions.

00:17 [Pub][ePrint] Speed Records for Ideal Lattice-Based Cryptography on AVR, by Thomas Pöppelmann and Tobias Oder and Tim Güneysu

  Over the last years, lattice-based cryptography has received much attention due to versatile average-case problems like Ring-LWE or Ring-SIS that appear to be intractable by quantum computers. But despite promising constructions, only few results have been published on implementation issues for very constrained platforms. In this work we therefore study and compare implementations of Ring-LWE encryption and the Bimodal Lattice Signature Scheme (BLISS) on an 8-bit Atmel ATxmega128 microcontroller. Since the number theoretic transform (NTT) is one of the core components in implementations of lattice-based cryptosystems, we review the application of the NTT in previous implementations and present an improved approach that significantly lowers the runtime for polynomial multiplication. Our implementation of Ring-LWE encryption takes 41 ms for encryption and 12 ms for decryption. Our software takes 316 ms to compute a BLISS signature and 111 ms to verify one. These results outperform implementations on similar platforms and underline the feasibility of lattice-based cryptography on constrained devices.
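The NTT-based polynomial multiplication at the core of such implementations can be illustrated with a minimal sketch. The toy parameters n = 8, q = 17, w = 2 are chosen only so that w is a primitive n-th root of unity mod q; the paper's AVR code and actual Ring-LWE parameters are far larger and heavily optimized.

```python
# Minimal sketch (not the paper's implementation): cyclic polynomial
# multiplication mod (x^n - 1, q) via a textbook number-theoretic transform.

def ntt(a, w, q):
    """Naive O(n^2) forward NTT: evaluate a at the powers of w mod q."""
    n = len(a)
    return [sum(a[j] * pow(w, i * j, q) for j in range(n)) % q
            for i in range(n)]

def intt(A, w, q):
    """Inverse NTT: transform with w^{-1}, then scale by n^{-1} mod q."""
    n = len(A)
    n_inv = pow(n, q - 2, q)          # Fermat inverse (q prime)
    w_inv = pow(w, q - 2, q)
    return [(x * n_inv) % q for x in ntt(A, w_inv, q)]

def poly_mul_ntt(a, b, w, q):
    """Multiply two polynomials mod (x^n - 1, q): pointwise in NTT domain."""
    A, B = ntt(a, w, q), ntt(b, w, q)
    return intt([(x * y) % q for x, y in zip(A, B)], w, q)

def poly_mul_naive(a, b, q):
    """Schoolbook cyclic convolution, used here as a correctness check."""
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % q
    return c

a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 0, 0, 0, 0, 0]
assert poly_mul_ntt(a, b, w=2, q=17) == poly_mul_naive(a, b, q=17)
```

Production code replaces the O(n^2) evaluation loops with O(n log n) butterfly stages; that butterfly schedule and its memory layout are exactly where AVR implementations such as the one described above win their speedups.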

00:17 [Pub][ePrint] Impossibility of VBB Obfuscation with Ideal Constant-Degree Graded Encodings, by Rafael Pass and abhi shelat

  A celebrated result by Barak et al. (JACM '12) shows the impossibility of general-purpose virtual black-box (VBB) obfuscation in the plain model. A recent work by Canetti, Kalai, and Paneth (TCC '15) extends this result to the random oracle model (assuming trapdoor permutations).

In contrast, Brakerski-Rothblum (TCC '15) and Barak et al. (EuroCrypt '14) show that in idealized graded encoding models, general-purpose VBB obfuscation indeed is possible; these constructions require graded encoding schemes that enable evaluating high-degree (polynomial in the size of the circuit to be obfuscated) polynomials on encodings.

We show a complementary impossibility of general-purpose VBB obfuscation in idealized graded encoding models that enable only evaluation of constant-degree polynomials (assuming trapdoor permutations).

00:17 [Pub][ePrint] Condensed Unpredictability, by Maciej Skorski and Alexander Golovnev and Krzysztof Pietrzak

  We consider the task of deriving a key with high HILL entropy (i.e., being computationally indistinguishable from a key with high min-entropy) from an unpredictable source. Prior to this work, the only known way to transform unpredictability into a key that is $\epsilon$-indistinguishable from having min-entropy was via pseudorandomness, for example by Goldreich-Levin (GL) hardcore bits. This approach has the inherent limitation that from a source with $k$ bits of unpredictability entropy one can derive a key of length (and thus HILL entropy) at most $k-2\log(1/\epsilon)$ bits. In many settings, e.g. when dealing with biometric data, such a $2\log(1/\epsilon)$-bit entropy loss is not an option.
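To make the $2\log(1/\epsilon)$ loss concrete, a short worked example (the numbers are chosen purely for illustration):

```latex
% Extraction-based derivation from k = 256 bits of unpredictability entropy,
% with indistinguishability parameter \epsilon = 2^{-80}:
k - 2\log(1/\epsilon) \;=\; 256 - 2 \cdot 80 \;=\; 96 \quad\text{bits of key,}
```

so 160 of the 256 source bits are lost to the security parameter alone; it is exactly this loss that the condenser approach avoids.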

  Our main technical contribution is a theorem that states that in the high entropy regime, unpredictability implies HILL entropy. Concretely, any variable $K$ with $|K|-d$ bits of unpredictability entropy has the same amount of so-called metric entropy (against real-valued, deterministic distinguishers), which is known to imply the same amount of HILL entropy. The loss in circuit size in this argument is exponential in the entropy gap $d$, and thus this result only applies for small $d$ (i.e., where the size of the distinguishers considered is exponential in $d$).

  To overcome the above restriction, we investigate whether it is possible to first "condense" unpredictability entropy and make the entropy gap small. We show that any source with $k$ bits of unpredictability can be condensed into a source of length $k$ with $k-3$ bits of unpredictability entropy. Our condenser simply "abuses" the GL construction and derives a $k$-bit key from a source with $k$ bits of unpredictability. The original GL theorem implies nothing when extracting that many bits, but we show that in this regime, GL still behaves like a "condenser" for unpredictability. This result comes with two caveats: (1) the loss in circuit size is exponential in $k$, and (2) we require that the source we start with has no HILL entropy (equivalently, one can efficiently check if a guess is correct). We leave it as an intriguing open problem to overcome these restrictions or to prove they are inherent.
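For readers unfamiliar with the GL construction being "abused" here, a minimal sketch of Goldreich-Levin-style inner-product key derivation follows (toy parameters; the function name and the use of a seeded PRG for the public coins are illustrative, not from the paper):

```python
# Hypothetical sketch: each derived key bit is an inner product <r_i, x> mod 2
# of the source x with a public random vector r_i.

import random

def gl_derive_key(x_bits, num_key_bits, seed=0):
    rng = random.Random(seed)        # seed stands in for the public coins
    key = []
    for _ in range(num_key_bits):
        r = [rng.randrange(2) for _ in x_bits]
        key.append(sum(xi & ri for xi, ri in zip(x_bits, r)) % 2)
    return key

x = [1, 0, 1, 1, 0, 0, 1, 0]                  # toy 8-bit source sample
key = gl_derive_key(x, num_key_bits=len(x))   # extract k bits from k bits
print(key)
```

The classical GL theorem gives pseudorandomness guarantees only when far fewer than $|x|$ bits are extracted; the point of the abstract above is that, run at the full length $k$, the same map still condenses unpredictability.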