International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web pages here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources for finding out about changes are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2012-04-23
00:17 [Pub][ePrint] Perfect Algebraic Immune Functions, by Meicheng Liu and Yin Zhang and Dongdai Lin

  A perfect algebraic immune function is a Boolean function with perfect immunity against algebraic and fast algebraic attacks. The main results are that for a balanced perfect algebraic immune function the number of input variables is one more than a power of two, while for an unbalanced perfect algebraic immune function the number of input variables is a power of two. Also, the Carlet-Feng function on $2^s+1$ variables and the modified Carlet-Feng function on $2^s$ variables are shown to be perfect algebraic immune functions. Furthermore, it is shown that a perfect algebraic immune function also behaves well against probabilistic algebraic attacks.
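
For context, the standard notions the statement is phrased in can be summarized as follows (a paraphrase of the usual definitions, not text from the paper):

  % Algebraic immunity of an n-variable Boolean function f (standard definition):
  \[
    \mathrm{AI}(f) = \min\{\, \deg g \;:\; g \neq 0,\ gf = 0 \ \text{or}\ g(f \oplus 1) = 0 \,\},
    \qquad \mathrm{AI}(f) \le \lceil n/2 \rceil .
  \]
  % f is called perfect algebraic immune if it attains this bound and, in addition,
  % resists fast algebraic attacks optimally, i.e. for every nonzero g of degree
  % e < n/2 with gf \neq 0,
  \[
    \deg g + \deg(gf) \ \ge\ n .
  \]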



00:17 [Pub][ePrint] Relation between Verifiable Random Functions and Convertible Undeniable Signatures, and New Constructions, by Kaoru Kurosawa and Ryo Nojima and Le Trieu Phong

  Verifiable random functions (VRFs) and selectively convertible undeniable signature (SCUS) schemes were proposed independently in the literature. In this paper, we observe that they are tightly related. This directly yields several deterministic SCUS schemes based on existing VRF constructions. In addition, we create a new probabilistic SCUS scheme, which is very compact. The confirmation and disavowal protocols of these SCUS schemes are efficient, and can be run either sequentially, concurrently, or arbitrarily. These protocols are based on what we call zero-knowledge protocols for generalized DDH and non-DDH, which are of independent interest.



00:17 [Pub][ePrint] Automatic Search of Truncated Impossible Differentials and Applications, by Shengbao Wu, Mingsheng Wang

  Finding the longest impossible differentials is an essential task in impossible differential cryptanalysis. In this paper, we introduce a novel tool to search for the longest truncated impossible differentials of word-oriented block ciphers with bijective S-boxes. In polynomial time, it returns a flag indicating whether a truncated differential is impossible under several filter conditions. To demonstrate the strength of our tool, we show that it automatically finds the longest truncated impossible differentials for many word-oriented block ciphers. It independently rediscovers all known truncated impossible differentials on nine-round CLEFIA. What's more, it finds new, longest truncated impossible differentials for the AES, ARIA, Camellia without the $FL$ and $FL^{-1}$ layers, E2, MIBS, LBlock, and Piccolo. Finally, we give an impossible differential of 14-round LBlock to illustrate that our tool is more powerful than the $\mathcal{U}$-method and the UID-method. We expect that the tool proposed in this paper will be useful for evaluating the security of block ciphers against impossible differentials, especially when one tries to design a word-oriented block cipher with bijective S-boxes.
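
For readers unfamiliar with the technique, here is a toy Python sketch of the miss-in-the-middle principle that such search tools automate: truncated word-difference patterns (zero / nonzero / unknown) are propagated forwards and backwards through a cipher, and a contradiction in the middle proves impossibility. The 4-word SPN below is made up for illustration; this is not the authors' tool or their propagation rules.

    # Toy illustration (not the authors' tool): miss-in-the-middle search for
    # truncated impossible differentials on a made-up 4-word SPN whose S-boxes
    # are bijective and whose linear layer is the invertible binary matrix M.
    ZERO, NONZERO, UNKNOWN = "0", "1", "?"   # truncated word-difference states

    M     = [[1,1,0,0], [0,1,1,0], [0,0,1,1], [0,0,0,1]]   # y = M x over GF(2)
    M_INV = [[1,1,1,1], [0,1,1,1], [0,0,1,1], [0,0,0,1]]   # x = M^{-1} y

    def xor_pattern(words):
        """Truncated difference of an XOR of several words."""
        if UNKNOWN in words:
            return UNKNOWN
        nz = words.count(NONZERO)
        return ZERO if nz == 0 else NONZERO if nz == 1 else UNKNOWN

    def linear_layer(pattern, matrix):
        return [xor_pattern([pattern[i] for i in range(4) if row[i]]) for row in matrix]

    # Bijective word-wise S-boxes preserve the zero/nonzero/unknown status of each
    # word, so one forward round acts on patterns as the linear layer M alone,
    # and one backward round as M^{-1} alone.
    def forward(pattern, rounds):
        for _ in range(rounds):
            pattern = linear_layer(pattern, M)
        return pattern

    def backward(pattern, rounds):
        for _ in range(rounds):
            pattern = linear_layer(pattern, M_INV)
        return pattern

    def impossible(in_pat, out_pat, r_fwd, r_bwd):
        """True if the truncated differential contradicts itself in the middle."""
        a, b = forward(in_pat, r_fwd), backward(out_pat, r_bwd)
        return any({x, y} == {ZERO, NONZERO} for x, y in zip(a, b))

    if __name__ == "__main__":
        in_pat  = [NONZERO, ZERO, ZERO, ZERO]   # difference only in word 0
        out_pat = [ZERO, ZERO, ZERO, NONZERO]   # difference only in word 3
        print(impossible(in_pat, out_pat, 1, 1))   # True: impossible over 2 rounds

Real search tools track much finer information than the three-valued patterns above (for example, relations between word differences), which is what lets them reach many more rounds.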



00:17 [Pub][ePrint] Quadratic Span Programs and Succinct NIZKs without PCPs, by Rosario Gennaro and Craig Gentry and Bryan Parno and Mariana Raykova

  We introduce a new characterization of the NP complexity class, called Quadratic Span Programs (QSPs), which is a natural extension of span programs defined by Karchmer and Wigderson. Our main motivation is the construction of succinct arguments of NP-statements that are quick to construct and verify. QSPs seem well-suited for this task, perhaps even better than Probabilistically Checkable Proofs (PCPs).

In 2010, Groth constructed a NIZK argument in the common reference string (CRS) model for Circuit-SAT consisting of only 42 elements in a bilinear group. Interestingly, his argument does not (explicitly) use PCPs. But his scheme has some disadvantages -- namely, the CRS size and prover computation are both quadratic in the circuit size. In 2011, Lipmaa reduced the CRS size to quasi-linear, but with prover computation still quadratic.

Using QSPs we construct a NIZK argument in the CRS model for Circuit-SAT consisting of just 7 group elements. The CRS size is linear in the circuit size, and prover computation is quasi-linear, making our scheme seemingly quite practical. (The prover only needs to do a linear number of group operations; the quasi-linear computation is a multipoint evaluation and interpolation.)

Our results are complementary to those of Valiant (TCC 2008) and Bitansky et al. (2012), who use "bootstrapping" (recursive composition) of arguments to reduce CRS size and prover and verifier computation. QSPs also provide a crisp mathematical abstraction of some of the techniques underlying Groth's and Lipmaa's constructions.
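
Roughly (paraphrasing the definition from memory, so details such as how the input restricts the admissible coefficients are omitted): a QSP over a field $F$ is given by polynomial sets $\{v_k(x)\}$, $\{w_k(x)\}$ and a target polynomial $t(x)$, and it accepts an input iff there are admissible coefficients $(a_k)$, $(b_k)$ such that

  \[
    t(x)\ \Big|\ \Big(v_0(x) + \sum_k a_k\, v_k(x)\Big)\cdot\Big(w_0(x) + \sum_k b_k\, w_k(x)\Big).
  \]

At a high level, the succinct argument lets the prover convince the verifier that this divisibility holds using only a constant number of group elements.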



00:17 [Pub][ePrint] Adaptive CCA Broadcast Encryption with Constant-Size Secret Keys and Ciphertexts, by Duong-Hieu Phan and David Pointcheval and Siamak F. Shahandashti and Mario Strefler

  We consider designing broadcast encryption schemes with constant-size secret keys and ciphertexts, achieving chosen-ciphertext security. We first argue that known CPA-to-CCA transforms currently do not yield such schemes. We then propose a scheme, modifying a previous selective CPA secure proposal by Boneh, Gentry, and Waters. Our proposed scheme has constant-size secret keys and ciphertexts and we prove that it is selective chosen-ciphertext secure based on standard assumptions. Our scheme has ciphertexts that are shorter than those of the previous CCA secure proposals. Then we propose a second scheme that provides the functionality of both broadcast encryption and revocation schemes simultaneously using the same set of parameters. Finally we show that it is possible to prove our first scheme adaptive chosen-ciphertext secure under reasonable extensions of the bilinear Diffie-Hellman exponent and the knowledge of exponent assumptions. We prove both of these extended assumptions in the generic group model. Hence, our scheme becomes the first to achieve constant-size secret keys and ciphertexts (both asymptotically optimal) and adaptive chosen-ciphertext security at the same time.



00:17 [Pub][ePrint] Efficient Dissection of Composite Problems, with Applications to Cryptanalysis, Knapsacks, and Combinatorial Search Problems, by Itai Dinur and Orr Dunkelman and Nathan Keller and Adi Shamir

  In this paper we show that a large class of diverse problems has a bicomposite structure which makes it possible to solve them with a new type of algorithm called {\it dissection}, which has much better time/memory tradeoffs than previously known algorithms. A typical example is the problem of finding the key of multiple encryption schemes with $r$ independent $n$-bit keys. All the previous error-free attacks required time $T$ and memory $M$ satisfying $TM = 2^{rn}$, and even if ``false negatives'' are allowed, no attack could achieve $TM < 2^{rn}$.
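
For background on the $TM = 2^{rn}$ baseline (standard material, not specific to this paper): for double encryption $C = E_{k_2}(E_{k_1}(P))$ with independent $n$-bit keys, the textbook meet-in-the-middle attack tabulates one half and matches it against the other,

  \[
    \{\, E_{k_1}(P) : k_1 \,\} \ \cap\ \{\, D_{k_2}(C) : k_2 \,\}, \qquad T \approx M \approx 2^{n},\ \ TM = 2^{2n},
  \]

and splitting an $r$-fold cascade into two halves of $r/2$ ciphers each gives $T \approx M \approx 2^{rn/2}$, again on the curve $TM = 2^{rn}$ that dissection improves upon.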

00:17 [Pub][ePrint] Differentially Private Continual Monitoring of Heavy Hitters from Distributed Streams, by T-H. Hubert Chan and Mingfei Li and Elaine Shi and Wenchang Xu

  We consider application scenarios where an untrusted aggregator wishes to continually monitor the heavy hitters across a set of distributed streams. Since each stream can contain sensitive data, such as the purchase history of customers, we wish to guarantee the privacy of each stream, while allowing the untrusted aggregator to accurately detect the heavy hitters and their approximate frequencies.

Our protocols are scalable in settings where the volume of streaming data is large, since we guarantee low memory usage and processing overhead by each data source, and low communication overhead between the data sources and the aggregator.
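
As a concrete, deliberately generic illustration of the setting (not of the paper's protocol): in the sketch below, each data source keeps a small count-min sketch and perturbs it with Laplace noise before reporting to the untrusted aggregator. All parameter choices and names are illustrative.

    # Not the protocol from the paper: a minimal, generic sketch of how a data
    # source might report noisy, bounded-memory frequency summaries to an
    # untrusted aggregator -- a count-min sketch perturbed with Laplace noise.
    import hashlib
    import random

    W, D = 256, 4          # sketch width and depth (illustrative parameters)
    EPSILON = 1.0          # per-report privacy budget (illustrative)

    def _h(row, item):
        digest = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
        return int(digest, 16) % W

    def laplace(scale):
        # The difference of two exponentials is Laplace-distributed.
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    class NoisySketch:
        def __init__(self):
            self.table = [[0.0] * W for _ in range(D)]

        def add(self, item):
            for row in range(D):
                self.table[row][_h(row, item)] += 1

        def noisy_report(self):
            """One noisy snapshot to send to the aggregator."""
            scale = D / EPSILON   # each item touches D cells (L1 sensitivity D)
            return [[c + laplace(scale) for c in row] for row in self.table]

    def estimate(reports, item):
        """Aggregator: sum the sources' sketches, then take the count-min estimate."""
        return min(
            sum(rep[row][_h(row, item)] for rep in reports)
            for row in range(D)
        )

    if __name__ == "__main__":
        sources = [NoisySketch() for _ in range(3)]
        for s in sources:
            for _ in range(500):
                s.add("popular-item")
            s.add("rare-item")
        reports = [s.noisy_report() for s in sources]
        print(estimate(reports, "popular-item"), estimate(reports, "rare-item"))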



00:17 [Pub][ePrint] Private Fingerprint Matching, by Siamak F. Shahandashti and Reihaneh Safavi-Naini and Philip Ogunbona

  We propose a fully private fingerprint matching protocol that compares two fingerprints based on the most widely-used minutia-based fingerprint matching algorithm. The protocol enables two parties, each holding a private fingerprint, to find out if their fingerprints belong to the same individual. Unlike previous works, we do not make any simplifying assumption on the matching algorithm or use generic multiparty computation protocols in our constructions. We employ a commonly-used algorithm that works by first comparing minutia pairs from the two fingerprints based on their types, locations, and orientations, and then checking if the number of matching minutia pairs is more than a threshold. For this algorithm we propose a concrete, scalable, and modular protocol. We prove security against honest-but-curious adversaries and discuss how security against malicious adversaries can be achieved using standard cryptographic techniques. Our protocol is realized using common cryptographic primitives and does not require pairing- or lattice-based cryptography.
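
For concreteness, the plaintext test that such a protocol evaluates privately looks roughly like the following; the distance measures and thresholds are illustrative placeholders, not the paper's parameters.

    # Plaintext minutia-based matching test (rough illustration only; thresholds
    # and distance measures below are illustrative, not the paper's parameters).
    from dataclasses import dataclass
    import math

    @dataclass
    class Minutia:
        kind: str      # e.g. "ridge_ending" or "bifurcation"
        x: float
        y: float
        theta: float   # orientation in degrees

    D_MAX, THETA_MAX, T_MATCH = 15.0, 20.0, 12   # illustrative thresholds

    def minutiae_match(a: Minutia, b: Minutia) -> bool:
        """Two minutiae match if they agree in type, location, and orientation."""
        close = math.hypot(a.x - b.x, a.y - b.y) <= D_MAX
        angle = min(abs(a.theta - b.theta), 360 - abs(a.theta - b.theta)) <= THETA_MAX
        return a.kind == b.kind and close and angle

    def fingerprints_match(fp1, fp2) -> bool:
        """Count matching minutia pairs and compare against a threshold."""
        matched = sum(1 for m1 in fp1 if any(minutiae_match(m1, m2) for m2 in fp2))
        return matched >= T_MATCH

In the protocol itself, comparisons like these are carried out on encrypted or otherwise hidden values, so neither party learns the other's minutiae beyond the final match/no-match result.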



00:17 [Pub][ePrint] Hedged Public-key Encryption: How to Protect against Bad Randomness, by Mihir Bellare and Zvika Brakerski and Moni Naor and Thomas Ristenpart and Gil Segev and Hovav Shacham and Scott Yilek

  Public-key encryption schemes rely for their IND-CPA security on per-message fresh randomness. In practice, randomness may be of poor quality for a variety of reasons, leading to failure of the schemes. Expecting the systems to improve is unrealistic. What we show in this paper is that we can, instead, improve the cryptography to offset the possible lack of good randomness. We provide public-key encryption schemes that achieve IND-CPA security when the randomness they use is of high quality, but, when the latter is not the case, rather than breaking completely, they achieve a weaker but still useful notion of security that we call IND-CDA. This hedged public-key encryption provides the best possible security guarantees in the face of bad randomness. We provide simple RO-based ways to make in-practice IND-CPA schemes hedge secure with minimal software changes. We also provide non-RO-model schemes relying on lossy trapdoor functions (LTDFs) and techniques from deterministic encryption. They achieve adaptive security by establishing and exploiting the anonymity of LTDFs, which we believe is of independent interest.
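
The RO-based approach can be sketched as follows (a minimal sketch in the spirit of the paper's transforms; the exact input encoding and ordering fed to the hash are illustrative assumptions, not quoted from the paper).

    # Sketch of the "hash the randomness together with the message" idea behind
    # RO-based hedging (illustrative encoding and ordering of hash inputs; not
    # quoted from the paper).
    import hashlib
    from typing import Callable

    def hedged_encrypt(encrypt: Callable[[bytes, bytes, bytes], bytes],
                       pk: bytes, msg: bytes, coins: bytes) -> bytes:
        """Run the original scheme, but with coins derived as H(pk || coins || msg).

        If `coins` is good, the derived coins are still close to uniform, so
        IND-CPA is preserved; if `coins` is bad, security degrades gracefully to
        a deterministic-encryption-style guarantee as long as the pair
        (msg, coins) has enough joint entropy.
        """
        derived = hashlib.sha256(b"hedge|" + pk + b"|" + coins + b"|" + msg).digest()
        return encrypt(pk, msg, derived)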



00:17 [Pub][ePrint] Almost-Everywhere Secure Computation with Edge Corruptions, by Nishanth Chandran and Juan Garay and Rafail Ostrovsky

  We consider secure multi-party computation (MPC) in a setting where the adversary can separately corrupt not only the parties (nodes) but also the communication channels (edges), and can furthermore choose selectively and adaptively which edges or nodes to corrupt. Note that if an adversary corrupts an edge, even if the two nodes that share that edge are honest, the adversary can control the link and thus deliver wrong messages to both players. We consider this question in the information-theoretic setting, and require security against a computationally unbounded adversary.

In a fully connected network the above question is simple (and we also provide an answer that is optimal up to a constant factor). What makes the problem more challenging is to consider the case of sparse networks. Partially connected networks are far more realistic than fully connected networks, which led Garay and Ostrovsky [Eurocrypt '08] to formulate the notion of (unconditional) \emph{almost everywhere (a.e.) secure computation} in the node-corruption model, i.e., a model in which not all pairs of nodes are connected by secure channels and the adversary can corrupt some of the nodes (but not the edges). In such a setting, MPC amongst all honest nodes cannot be guaranteed due to the possible poor connectivity of some honest nodes with other honest nodes, and hence some of them must be ``given up'' and left out of the computation. The number of such nodes is a function of the underlying communication graph and the adversarial set of nodes.

In this work we introduce the notion of \emph{almost-everywhere secure computation with edge corruptions}, which is exactly the same problem as described above, except that we additionally allow the adversary to completely control some of the communication channels between two correct nodes---i.e., to ``corrupt'' edges in the network. While it is easy to see that an a.e. secure computation protocol for the original node-corruption model is also an a.e. secure computation protocol tolerating edge corruptions (albeit for a reduced fraction of edge corruptions with respect to the bound for node corruptions), no polynomial-time protocol is known in the case where a {\bf constant fraction} of the edges can be corrupted (i.e., the maximum that can be tolerated) and the degree of the network is sub-linear.

We make progress on this front, by constructing graphs of degree $O(n^\epsilon)$ (for arbitrary constant $0 < \epsilon < 1$) that support a.e. secure computation tolerating a constant fraction of edge corruptions.



2012-04-22
09:17 [Pub][ePrint] (Pseudo) Preimage Attack on Round-Reduced Gr{\o}stl Hash Function and Others (Extended Version), by Shuang Wu and Dengguo Feng and Wenling Wu and Jian Guo and Le Dong and Jian Zou

  The Gr{\\o}stl hash function is one of the 5 final round candidates of the {\\shathree} competition hosted by NIST. In this paper, we study the preimage resistance of the Gr{\\o}stl hash function. We propose pseudo preimage attacks on Gr{\\o}stl hash function for both 256-bit and 512-bit versions, i.e. we need to choose the initial value in order to invert the hash function. Pseudo preimage attack on 5(out of 10)-round Gr{\\o}stl-256 has a complexity of $(2^{244.85},2^{230.13})$ (in time and memory) and pseudo preimage attack on 8(out of 14)-round Gr{\\o}stl-512 has a complexity of $(2^{507.32},2^{507.00})$. To the best of our knowledge, our attacks are the first (pseudo) preimage attacks on round-reduced Gr{\\o}stl hash function, including its compression function and output transformation. These results are obtained by a variant of meet-in-the-middle preimage attack framework by Aoki and Sasaki. We also improve the time complexities of the preimage attacks against 5-round Whirlpool and 7-round AES hashes by Sasaki in FSE~2011.