International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.


You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).

2013-06-10
15:17 [Pub][ePrint] Counter-cryptanalysis, by Marc Stevens

  We introduce counter-cryptanalysis as a new paradigm for strengthening weak cryptographic primitives against cryptanalytic attacks. Redesigning a weak primitive to more strongly resist cryptanalytic techniques will unavoidably break backwards compatibility. Instead, counter-cryptanalysis exploits unavoidable anomalies introduced by cryptanalytic attacks to detect and block cryptanalytic attacks while maintaining full backwards compatibility. Counter-cryptanalysis in principle enables the continued secure use of weak cryptographic primitives.

Furthermore, we present the first example of counter-cryptanalysis, namely the efficient detection of whether any given single message has been constructed -- together with an unknown sibling message -- using a cryptanalytic collision attack on MD5 or SHA-1. An immediate application is in digital signature verification software, to ensure that an (older) MD5 or SHA-1 based digital signature is not a forgery produced with a collision attack. This is certainly desirable for two reasons. Firstly, it might still be possible to generate malicious forgeries using collision attacks, as too many parties still sign using MD5 (or SHA-1) based signature schemes. Secondly, any such forgeries are currently accepted nearly everywhere due to the ubiquitous support of MD5 and SHA-1 based signature schemes. Despite the academic push to use more secure hash functions over the last decade, these two real-world arguments will (arguably) remain valid for many more years.

Only due to counter-cryptanalysis were we able to discover that Flame, a highly advanced malware for cyberwarfare uncovered in May 2012, employed an as yet unknown variant of our chosen-prefix collision attack on MD5 [DBLP:conf/eurocrypt/StevensLW07, DBLP:conf/crypto/StevensSALMOW09]. In this paper we dissect the revealed cryptanalytic details and work towards the reconstruction of the algorithms underlying Flame's new variant attack. Finally, we make a preliminary comparison between Flame's attack and our chosen-prefix collision attack.
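
The core detection idea can be illustrated with a heavily simplified sketch (this is not Stevens' actual algorithm, which reconstructs internal compression-function states and also handles non-zero chaining-value differences): recompute each MD5 compression under a candidate message-block difference used by published collision attacks and flag the block if the resulting chaining-value difference matches a documented near-collision pattern. The difference tables below are illustrative placeholders, not the genuine tables from the paper.

import math
import struct

# MD5 per-step rotation amounts and additive constants (RFC 1321).
S = [7, 12, 17, 22] * 4 + [5, 9, 14, 20] * 4 + [4, 11, 16, 23] * 4 + [6, 10, 15, 21] * 4
K = [int(abs(math.sin(i + 1)) * 2**32) & 0xFFFFFFFF for i in range(64)]

def md5_compress(ihv, block):
    # One MD5 compression: 4-word chaining value ihv, 64-byte block.
    a, b, c, d = ihv
    m = struct.unpack('<16I', block)
    for i in range(64):
        if i < 16:
            f, g = (b & c) | (~b & d), i
        elif i < 32:
            f, g = (d & b) | (~d & c), (5 * i + 1) % 16
        elif i < 48:
            f, g = b ^ c ^ d, (3 * i + 5) % 16
        else:
            f, g = c ^ (b | ~d), (7 * i) % 16
        t = (a + f + K[i] + m[g]) & 0xFFFFFFFF
        rot = ((t << S[i]) | (t >> (32 - S[i]))) & 0xFFFFFFFF
        a, d, c, b = d, c, b, (b + rot) & 0xFFFFFFFF
    return tuple((x + y) & 0xFFFFFFFF for x, y in zip(ihv, (a, b, c, d)))

def md5_pad(msg):
    # Standard MD5 padding: 0x80, zero bytes, 64-bit little-endian bit length.
    n = len(msg)
    return msg + b'\x80' + b'\x00' * ((55 - n) % 64) + struct.pack('<Q', 8 * n)

# Placeholder tables (illustrative only): word-indexed additive message
# differences used by known MD5 collision attacks, and chaining-value
# differences produced by their near-collision blocks.
KNOWN_MESSAGE_DELTAS = [{11: 1 << 15}]
KNOWN_IHV_DELTAS = {(1 << 31, (1 << 31) + (1 << 25), (1 << 31) + (1 << 25), (1 << 31) + (1 << 25))}

def suspicious_blocks(message):
    # Flag blocks whose hypothetical sibling (block plus a known message
    # difference) yields a chaining-value difference typical of a collision
    # attack.  Simplification: assumes a zero input chaining difference, so it
    # only models detection of a first near-collision block.
    ihv = (0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476)
    flagged, data = [], md5_pad(message)
    for off in range(0, len(data), 64):
        block = data[off:off + 64]
        out = md5_compress(ihv, block)
        words = list(struct.unpack('<16I', block))
        for delta in KNOWN_MESSAGE_DELTAS:
            sibling = words[:]
            for idx, dv in delta.items():
                sibling[idx] = (sibling[idx] + dv) & 0xFFFFFFFF
            out_sib = md5_compress(ihv, struct.pack('<16I', *sibling))
            diff = tuple((y - x) & 0xFFFFFFFF for x, y in zip(out, out_sib))
            if diff in KNOWN_IHV_DELTAS:
                flagged.append(off // 64)
        ihv = out
    return flagged

For a random block the recomputed difference matches a listed pattern only with negligible probability, so false positives are negligible; a full implementation would also handle later near-collision blocks, where the chaining values of the two sibling messages already differ.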



15:17 [Pub][ePrint] A heuristic for finding compatible differential paths with application to HAS-160, by Aleksandar Kircanski and Riham AlTawy and Amr M. Youssef

  The question of compatibility of differential paths plays a central role in second-order collision attacks on hash functions. In this context, attacks typically proceed by starting from the middle and constructing the middle-steps quartet in which the two paths are enforced on the respective faces of the quartet structure. Finding paths that can fit in such a quartet structure has been a major challenge, and the currently known compatible paths extend over a suboptimal number of steps for hash functions such as SHA-2 and HAS-160. In this paper, we investigate a heuristic that searches for compatible differential paths. The application of the heuristic in the case of HAS-160 yields a practical second-order collision over all of the function steps, which is the first practical result that covers all of the HAS-160 steps. An example of a colliding quartet is provided.
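
For reference, one common formulation of the target object (notation not taken from the paper): a second-order collision for a compression function $f$ is a quartet determined by an input $x$ and two distinct non-zero differences $\Delta$ and $\nabla$ such that

$$f(x) \oplus f(x \oplus \Delta) \oplus f(x \oplus \nabla) \oplus f(x \oplus \Delta \oplus \nabla) = 0,$$

with one differential path enforced on the two $\Delta$-faces of the quartet and the other on the two $\nabla$-faces.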



12:17 [Pub][ePrint] Multi-file proofs of retrievability for cloud storage auditing, by Bin Wang and Xiaojing Hong

  Cloud storage allows clients to store a large amount of data with the help of storage service providers (SSPs). Proof-of-retrievability (POR) protocols allow a server to prove to a verifier the availability of data stored by some client. Shacham et al. presented POR protocols based on homomorphic authenticators and proved the security of their schemes under a stronger security model, which requires the existence of an extractor that retrieves the original file given the program of a successful prover. When their POR protocol with public verifiability is used to verify the availability of multiple files separately, the number of pairing operations computed by the verifier is linear in the number of files. To reduce this burden on the verifier, we introduce a notion called multi-proof-of-retrievability (MPOR), allowing one verifier to verify the availability of multiple files stored by a server in one pass. We also design an MPOR protocol with public verifiability by extending the work of Shacham et al. The advantage of our MPOR scheme is that the computational overhead of the verifier is constant, independent of the number of files. Nevertheless, the soundness of our MPOR protocol is proved under a relatively weak security notion. In particular, analysis of our MPOR protocol shows that each file can be extracted in expected polynomial time under certain restrictions on the size of the processed files.
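
For context, the publicly verifiable scheme of Shacham and Waters (sketched here as background, not from this paper) uses authenticators $\sigma_i = (H(\mathrm{name} \| i) \cdot u^{m_i})^x$ and public key $v = g^x$, and checks a response $(\sigma, \mu)$ to a challenge set $\{(i, \nu_i)\}_{i \in I}$ via

$$e(\sigma, g) \stackrel{?}{=} e\Big(\prod_{i \in I} H(\mathrm{name} \| i)^{\nu_i} \cdot u^{\mu},\ v\Big),$$

so verifying $k$ files separately costs on the order of $2k$ pairings; the MPOR construction aggregates these checks so that the verifier's pairing count stays constant.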



12:17 [Pub][ePrint] A Dynamic Tradeoff Between Active and Passive Corruptions in Secure Multi-Party Computation, by Martin Hirt and Ueli Maurer and Christoph Lucas

  At STOC '87, Goldreich et al. presented two protocols for secure multi-party computation (MPC) among $n$ parties: The first protocol provides passive security against $t

05:27 [Event][New] GreHack 13: Symp on Research in Grey-Hat Hacking (Applied Cryptography & Cryptanalysis)

  Submission: 30 June 2013
Notification: 4 September 2013
From November 15 to November 15
Location: Grenoble, France
More Information: http://grehack.org




2013-06-09
21:17 [Pub][ePrint] Trapdoor Smooth Projective Hash Functions, by Fabrice Benhamouda and David Pointcheval

  Katz and Vaikuntanathan recently improved smooth projective hash functions in order to build one-round password-authenticated key exchange protocols (PAKE). To achieve security in the UC framework they allowed the simulator to extract the hashing key, which required simulation-sound non-interactive zero-knowledge proofs that are unfortunately inefficient.

We improve the way the latter extractability is obtained by introducing the notion of trapdoor smooth projective hash function (TSPHF). A TSPHF is an SPHF with a trapdoor which may not allow recovery of the complete hashing key, but still allows computation of the hash value, which is enough for an application to PAKE with UC-security against static corruptions. We additionally show that TSPHFs yield zero-knowledge proofs in two flows, with straight-line extractability.

Besides those quite interesting applications of TSPHF, we also show how to generically build them on languages of ciphertexts, using any ElGamal-like encryption. Our concrete instantiations lead to efficient one-round UC-secure PAKE, extractable zero-knowledge arguments, and verifiable encryption of Waters signatures. In the case of the PAKE, our construction is the most efficient one-round UC-secure PAKE to date.
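
As background (standard SPHF definitions, not specific to this paper): a smooth projective hash function for a language $L \subseteq X$ consists of a hashing key $hk$, a projection key $hp$ derived from $hk$, and two hashing algorithms satisfying, for any word $W \in L$ with witness $w$,

$$\mathrm{Hash}(hk, L, W) = \mathrm{ProjHash}(hp, L, W, w),$$

while for $W \notin L$ the value $\mathrm{Hash}(hk, L, W)$ is statistically close to uniform even given $hp$ (smoothness). Roughly, a TSPHF adds a trapdoor that allows computing the hash value from $hp$ alone, without recovering $hk$.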



21:17 [Pub][ePrint] Attribute-Based Encryption for a Subclass of Circuits with Bounded Depth from Lattices, by Xiang Xie and Rui Xue

  In this work, we present two Key-Policy Attribute-Based Encryption (ABE) schemes for certain subclasses of circuits based on the Learning with Errors (LWE) assumption. Our constructions are selectively secure in the standard model. More specifically, our first construction supports a subclass of circuits with polynomially bounded depth. We call these OR-restricted circuits, meaning that for any input $x$, if $f(x)=0$ then for all the OR gates in $f$, at least one of its incoming wires evaluates to $0$. The second construction is a Key-Policy ABE scheme for shallow circuits whose depth is bounded by $O(\log\log\lambda)$, where $\lambda$ is the security parameter.
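
To make the OR-restriction concrete, here is a small brute-force checker over a toy gate-list circuit representation; the representation and helper names are illustrative assumptions, not the paper's.

from itertools import product

# Toy circuit representation: wires 0..n-1 are the inputs; each gate is
# (op, left, right) with op in {"AND", "OR", "NOT"} ("NOT" ignores right),
# referring to earlier wire indices; the last gate is the output.
def evaluate(circuit, x):
    wires = list(x)
    for op, left, right in circuit:
        if op == "AND":
            wires.append(wires[left] & wires[right])
        elif op == "OR":
            wires.append(wires[left] | wires[right])
        else:  # NOT
            wires.append(1 - wires[left])
    return wires

def is_or_restricted(circuit, n_inputs):
    # OR-restriction: on every input x with f(x) = 0, each OR gate has at
    # least one incoming wire equal to 0, i.e. no OR gate ever sees (1, 1).
    for x in product((0, 1), repeat=n_inputs):
        wires = evaluate(circuit, x)
        if wires[-1] != 0:
            continue
        for op, left, right in circuit:
            if op == "OR" and wires[left] == 1 and wires[right] == 1:
                return False
    return True

# f(x0, x1, x2) = (x0 AND x1) OR x2 is OR-restricted: f(x) = 0 forces both
# inputs of the OR gate to 0.  f(x0, x1) = NOT(x0 OR x1) is not, since
# x0 = x1 = 1 gives f(x) = 0 while the OR gate sees (1, 1).
print(is_or_restricted([("AND", 0, 1), ("OR", 3, 2)], 3))     # True
print(is_or_restricted([("OR", 0, 1), ("NOT", 2, None)], 2))  # False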



21:17 [Pub][ePrint] Quantum one-time programs, by Anne Broadbent and Gus Gutoski and Douglas Stebila

  A one-time program is a hypothetical device by which a user may evaluate a circuit on exactly one input of his choice, before the device self-destructs. One-time programs cannot be achieved by software alone, as any software can be copied and re-run. However, it is known that every circuit can be compiled into a one-time program using a very basic hypothetical hardware device called a one-time memory. At first glance it may seem that quantum information, which cannot be copied, might also allow for one-time programs. But it is not hard to see that this intuition is false: one-time programs for classical or quantum circuits based solely on quantum information do not exist, even with computational assumptions.

This observation raises the question, "what assumptions are required to achieve one-time programs for quantum circuits?" Our main result is that any quantum circuit can be compiled into a one-time program assuming only the same basic one-time memory devices used for classical circuits. Moreover, these quantum one-time programs achieve statistical universal composability (UC-security) against any malicious user. Our construction employs methods for computation on authenticated quantum data, and we present a new quantum authentication scheme called the trap scheme for this purpose. As a corollary, we establish UC-security of a recent protocol for delegated quantum computation.



21:17 [Pub][ePrint] Limits of provable security for homomorphic encryption, by Andrej Bogdanov and Chin Ho Lee

  We show that public-key bit encryption schemes which support weak (i.e., compact) homomorphic evaluation of any sufficiently "sensitive" collection of functions cannot be proved message indistinguishable beyond AM intersect coAM via general (adaptive) reductions, and beyond statistical zero-knowledge via reductions of constant query complexity. Examples of sensitive collections include parities, majorities, and the class consisting of all AND and OR functions.

Our techniques also give a method for converting a strong (i.e., distribution-preserving) homomorphic evaluator for essentially any boolean function (except the trivial ones, the NOT function, and the AND and OR functions) into a rerandomization algorithm: this is a procedure that converts a ciphertext into another ciphertext that is statistically close to being independent of, and identically distributed with, the original one. Our transformation preserves negligible statistical error.
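
One way to read the rerandomization requirement formally (notation mine, treating the original ciphertext as a fresh encryption): for every message $m$ and $c \leftarrow \mathrm{Enc}(pk, m)$,

$$\big(c, \mathrm{Rerand}(pk, c)\big) \approx_s \big(c, c'\big), \qquad c' \leftarrow \mathrm{Enc}(pk, m) \text{ sampled independently of } c,$$

where $\approx_s$ denotes negligible statistical distance.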



21:17 [Pub][ePrint] Analysis and Improvement of the Generic Higher-Order Masking Scheme of FSE 2012, by Arnab Roy and Srinivas Vivek

  Masking is a well-known technique used to protect block cipher implementations against side-channel attacks. Higher-order side-channel attacks (e.g. higher-order DPA attacks) on widely used block ciphers like AES have motivated the design of efficient higher-order masking schemes. Indeed, it is known that as the masking order increases, the difficulty of a side-channel attack increases exponentially. However, the main problem in higher-order masking is to design an efficient and secure technique for S-box computations in block cipher implementations. At FSE 2012, Carlet et al. proposed a generic masking scheme that can be applied to any S-box at any order. This is the first generic scheme for efficient software implementations. Analysis of the running time, or masking complexity, of this scheme is related to a variant of the well-known problem of efficient exponentiation (addition chains), and to the evaluation of polynomials.

In this paper we investigate optimal methods for exponentiation in $\mathbb{F}_{2^{n}}$ by studying a variant of addition chains, which we call cyclotomic-class addition chains, or CC-addition chains. Among several interesting properties, we prove lower bounds on minimum-length CC-addition chains. We define the notion of an $\mathbb{F}_{2^{n}}$-polynomial chain, and use it to count the number of non-linear multiplications required while evaluating polynomials over $\mathbb{F}_{2^{n}}$. We also give a lower bound on the length of such a chain for any polynomial. As a consequence, we show that a lower bound for the masking complexity of the DES S-boxes is three, and that of the PRESENT S-box is two. We disprove a claim previously made by Carlet et al. regarding minimum-length CC-addition chains. Finally, we give a polynomial evaluation method which results in an improved masking scheme (compared to the technique of Carlet et al.) for the DES S-boxes. As an illustration, we apply this method to several other S-boxes and show significant improvements for them.
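
The notion of a cyclotomic class can be illustrated concretely (standard background, not the paper's algorithm): exponents in the same class modulo $2^n - 1$ differ by a power-of-two factor, and since squaring is $\mathbb{F}_2$-linear in $\mathbb{F}_{2^{n}}$, moving within a class costs no non-linear multiplications. A minimal Python sketch:

def cyclotomic_classes(n):
    # Partition {1, ..., 2^n - 2} into orbits of k -> 2k mod (2^n - 1).
    mod = (1 << n) - 1
    seen, classes = set(), []
    for k in range(1, mod):
        if k in seen:
            continue
        orbit, e = [], k
        while e not in seen:
            seen.add(e)
            orbit.append(e)
            e = (2 * e) % mod
        classes.append(orbit)
    return classes

# For F_{2^4}, the exponents 1..14 split into four classes:
# [1, 2, 4, 8], [3, 6, 12, 9], [5, 10], [7, 14, 13, 11]
print(cyclotomic_classes(4))

Roughly speaking, a CC-addition chain only needs to account for one representative of each class it reaches, so its length tracks the number of non-linear multiplications needed to reach a target exponent.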



15:17 [Pub][ePrint] Using Bleichenbacher's Solution to the Hidden Number Problem to Attack Nonce Leaks in 384-Bit ECDSA, by Elke De Mulder and Michael Hutter and Mark E. Marson and Peter Pearson

  In this paper we describe an attack against nonce leaks in 384-bit ECDSA using an FFT-based attack due to Bleichenbacher. The signatures were computed by a modern smart card. We extracted the low-order bits of each nonce using a template-based power analysis attack against the modular inversion of the nonce. We also developed a BKZ-based method for the range reduction phase of the attack, as it was impractical to collect enough signatures for the collision searches originally used by Bleichenbacher. We confirmed our attack by extracting the entire signing key using a 5-bit nonce leak from 4000 signatures.
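
For reference, the standard reduction from nonce leaks to a hidden number problem instance (notation mine, not necessarily the paper's): an ECDSA signature $(r, s)$ on hash $h$ with nonce $k$ and signing key $d$ over a group of order $q$ satisfies $s \equiv k^{-1}(h + rd) \pmod{q}$. If the $\ell$ low-order bits $a$ of $k$ are known, writing $k = 2^{\ell} b + a$ gives

$$b \equiv 2^{-\ell}\big(s^{-1} h - a\big) + \big(2^{-\ell} s^{-1} r\big)\, d \pmod{q}, \qquad 0 \le b < q / 2^{\ell},$$

so each signature yields a pair $(t_i, u_i)$ for which $u_i + t_i d \bmod q$ is unusually small (below $q/2^{\ell}$), which is exactly the setting attacked with Bleichenbacher's FFT-based bias technique or with lattice (BKZ) methods.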