International Association for Cryptologic Research

IACR News Central

Get an update on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2013-06-17
21:19 [Pub][JoC] [IACR Publication Reform] The speed of science: two case studies by djb

Nigel Smart was quite clear at Eurocrypt in advertising the Proceedings of the IACR as fixing our "High review load". Well, gee, sounds great, but how come the IACR Board seems unable to explain to the rest of us _how_ this reduction in review load is supposed to happen? Nigel doesn't answer the question but says he's putting together "a more detailed proposal". Christian Cachin says that there "could" be a one-year "ban on resubmission" but he fails to define "resubmission". Ivan Damgård (not on the current IACR Board) says "Claiming you added something substantial in two weeks is probably bogus anyway." Let's think about this "two weeks" figure for a moment.

Case study 1: DBLP for "Ivan Damgård" finds 7 conference papers in 2012 (Crypto, CT-RSA, ICITS, PKC, SCN, SCN, TCC), not to mention 7 eprint papers the same year. That's a throughput of one conference paper every 7.4 weeks. How can Ivan claim that 2 weeks isn't enough time for a "substantial" improvement to a paper, if he spends a _total_ of only 7.4 weeks per successful conference paper? Furthermore, surely Ivan would agree that some papers are easier to write than others, and also that he's not spending all of his time on paper-writing---if he really focuses on a paper then he can probably get it done much more quickly. Is it really so hard to believe that an author has done "something substantial in two weeks"? Of course, it's actually Ivan plus coauthors, and increased use of the Internet is in general making it easier and easier to have many coauthors, which makes it even easier to believe that a research team is doing something very quickly. How can anyone imagine that a knee-jerk time-based response could substitute for a proper scientific evaluation?

Case study 2: Let's look at what happened to one of those eprint papers, 2012/699, in which Ivan proposed a specific "practical" LPN-based cryptosystem. A few days later I pointed out publicly that this specific proposal failed to account for the attack in 2012/355, a paper at RFIDsec 2012. Of course, RFIDsec isn't a top-tier IACR conference, but surely Ivan will agree that 2012/355---forcing changes in the parameters and "practicality" of his paper 2012/699---was worthy of publication. Here's how 2012/355 evolved. An LPN-related system "Lapin" was presented at FSE 2012 the morning of 21 March 2012. Tanja Lange and I were in the audience, were both immediately skeptical of the security of the system, and started investigating attacks. We had our attack paper ready for the RFIDsec submission deadline on 31 March 2012, and had it in essentially final form by 5 April 2012---two weeks and one day after the FSE talk. We prioritized other tasks at that point, and didn't end up doing the last few days of work to post the paper until June 2012, but with some slight rescheduling we would have had the complete paper online two weeks after we started.

I'm sure that Ivan, and many hundreds of other people here, can think of similarly efficient paper-writing examples from their own experience. So why do we have Ivan saying "two weeks is probably bogus anyway" for a mere revision? And how can Christian possibly think that a one-year ban is even marginally reasonable?

---Dan

From: 2013-06-15 20:31:51 (UTC)

15:17 [Pub][ePrint] Breaking the Even-Mansour Hash Function: Collision and Preimage Attacks on JH and Grøstl, by Bingke Ma and Bao Li and Ronglin Hao

The Even-Mansour structure and the chopMD mode are two widely-used strategies in hash function designs. They are adopted by many hash functions, including two SHA-3 finalists, the JH hash function and the Grøstl hash function. The Even-Mansour structure combined with the chopMD mode is supposed to enhance the security of hash functions against collision and preimage attacks, but our results show that this goal cannot be achieved with an unbalanced compression function. In this paper, we show generic attacks on Even-Mansour hash functions, including both collision and preimage attacks. Our attacks expose structural flaws of the Even-Mansour hash functions, and they can be applied to specific hash functions based on the Even-Mansour structure. We achieve the first collision and (2nd-)preimage attacks on full JH and Grøstl, respectively. For the JH hash function, we achieve collision and (2nd-)preimage attacks on the full JH compression function with a time gain of $2^{10.22}$. After a simple modification of the padding rules, we obtain full-round collision and (2nd-)preimage attacks on the modified JH hash function with a time gain of $2^{10.22}$. For the Grøstl hash function, we obtain both collision and (2nd-)preimage attacks on the full Grøstl hash function with a limited time gain of $2^{0.58}$.
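
To make the structure under attack concrete, here is a minimal Python sketch of an Even-Mansour-style compression function iterated in a chopMD-style (truncated) mode. The permutation toy_permutation, the state and block sizes, the padding, and the feed-forward layout are illustrative assumptions only; they are not the JH or Grøstl specifications.

import hashlib

STATE_BYTES = 64   # toy state size (assumption, not a JH/Grostl parameter)
BLOCK_BYTES = 32   # toy message-block size
DIGEST_BYTES = 32  # chopMD: only part of the final state is output

def toy_permutation(state: bytes) -> bytes:
    """Stand-in keyless permutation (not actually a permutation; just a
    placeholder so the overall structure is runnable)."""
    return hashlib.shake_256(state).digest(STATE_BYTES)

def em_compress(state: bytes, block: bytes) -> bytes:
    """Even-Mansour-style compression: XOR the (padded) block into the state,
    apply the permutation, then XOR the block back in as a feed-forward."""
    padded = block + bytes(STATE_BYTES - BLOCK_BYTES)
    x = bytes(a ^ b for a, b in zip(state, padded))
    y = toy_permutation(x)
    return bytes(a ^ b for a, b in zip(y, padded))

def em_chopmd_hash(msg: bytes) -> bytes:
    """Iterate the compression function over fixed-size blocks, then truncate
    the final state (chopMD). The padding is simplistic length padding."""
    msg += b"\x80" + bytes((-len(msg) - 9) % BLOCK_BYTES) + len(msg).to_bytes(8, "big")
    state = bytes(STATE_BYTES)
    for i in range(0, len(msg), BLOCK_BYTES):
        state = em_compress(state, msg[i:i + BLOCK_BYTES])
    return state[:DIGEST_BYTES]

print(em_chopmd_hash(b"hello").hex())

The "unbalanced" setting the abstract refers to is, roughly, one in which the message block interacts with only part of a much wider permutation state, as in this sketch; the generic attacks exploit that imbalance.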



15:17 [Pub][ePrint] To Hash or Not to Hash Again? (In)differentiability Results for H^2 and HMAC, by Yevgeniy Dodis and Thomas Ristenpart and John Steinberger and Stefano Tessaro

We show that the second iterate H^2(M) = H(H(M)) of a random oracle H cannot achieve strong security in the sense of indifferentiability from a random oracle. We do so by proving that indifferentiability for H^2 holds only with poor concrete security, providing a lower bound (via an attack) and a matching upper bound (via a proof requiring new techniques) on the complexity of any successful simulator. We then investigate HMAC when it is used as a general-purpose hash function with arbitrary keys (and not as a MAC or PRF with uniform, secret keys). We uncover that HMAC's handling of keys gives rise to two types of weak key pairs. The first allows trivial attacks against its indifferentiability; the second gives rise to structural issues similar to those that ruled out strong indifferentiability bounds in the case of H^2. However, such weak key pairs do not arise, as far as we know, in any deployed applications of HMAC. For example, using keys of any fixed length shorter than d − 1, where d is the block length in bits of the underlying hash function, completely avoids weak key pairs. We therefore conclude with a positive result: a proof that HMAC is indifferentiable from a RO (with standard, good bounds) when applications use keys of a fixed length less than d − 1.
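
As a concrete illustration of the key-handling behaviour behind such trivially related keys, the snippet below (Python, SHA-256) shows the well-known zero-padding property of HMAC: a key shorter than the block length and the same key with zero bytes appended yield identical outputs. This generic property is shown only to illustrate the kind of structure analysed here; it is not necessarily the paper's exact definition of a weak key pair.

import hmac
import hashlib

def H2(msg: bytes) -> bytes:
    """The second iterate H^2(M) = H(H(M)), instantiated with SHA-256 for illustration."""
    return hashlib.sha256(hashlib.sha256(msg).digest()).digest()

# HMAC pads any key shorter than the block length (d = 512 bits, i.e. 64 bytes,
# for SHA-256) with zero bytes, so K and K || 0x00 are processed identically.
k1 = b"short key"
k2 = k1 + b"\x00"
msg = b"some message"

tag1 = hmac.new(k1, msg, hashlib.sha256).digest()
tag2 = hmac.new(k2, msg, hashlib.sha256).digest()
assert tag1 == tag2  # two distinct keys, identical outputs

print(H2(msg).hex())
print(tag1.hex())

Restricting applications to keys of a single fixed length, as in the positive result above, rules out pairs like k1 and k2, since they differ in length.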



15:17 [Pub][ePrint] Lattice Signatures and Bimodal Gaussians, by Léo Ducas and Alain Durmus and Tancrède Lepoint and Vadim Lyubashevsky

Our main result is a construction of a lattice-based digital signature scheme that represents an improvement, both in theory and in practice, over today's most efficient lattice schemes. The novel scheme is obtained as a result of a modification of the rejection sampling algorithm that is at the heart of Lyubashevsky's signature scheme (Eurocrypt, 2012) and several other lattice primitives. Our new rejection sampling algorithm, which samples from a bimodal Gaussian distribution, combined with a modified scheme instantiation, ends up reducing the standard deviation of the resulting signatures by a factor that is asymptotically square root in the security parameter. The implementations of our signature scheme for security levels of 128, 160, and 192 bits compare very favorably to existing schemes such as RSA and ECDSA in terms of efficiency. In addition, the new scheme has shorter signature and public key sizes than all previously proposed lattice signature schemes.

As part of our implementation, we also designed several novel algorithms which could be of independent interest. Of particular note is a new algorithm for efficiently generating discrete Gaussian samples over Z^n. Current algorithms either require many high-precision floating-point exponentiations or the storage of very large pre-computed tables, which makes them completely inappropriate for use in constrained devices. Our sampling algorithm reduces the hard-coded table sizes from linear to logarithmic as compared to the time-optimal implementations, at the cost of being only a small factor slower.
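
For orientation, here is a minimal baseline sampler for the discrete Gaussian over Z and Z^n, written as a textbook tail-cut rejection sampler in Python. The tail cut tau and the example parameters are illustrative assumptions; this is the straightforward floating-point approach the abstract contrasts with, not the paper's table-based algorithm.

import math
import random

def sample_dgauss_z(sigma: float, tau: float = 12.0) -> int:
    """Baseline rejection sampler for D_{Z,sigma}: draw a uniform candidate in
    [-tau*sigma, tau*sigma] and accept it with probability exp(-x^2 / (2*sigma^2))."""
    bound = int(math.ceil(tau * sigma))
    while True:
        x = random.randint(-bound, bound)
        if random.random() < math.exp(-x * x / (2.0 * sigma * sigma)):
            return x

def sample_dgauss_zn(n: int, sigma: float) -> list:
    """Sample from D_{Z^n,sigma}: the coordinates of a spherical discrete
    Gaussian over Z^n are independent, so sample each one separately."""
    return [sample_dgauss_z(sigma) for _ in range(n)]

v = sample_dgauss_zn(512, sigma=215.0)   # illustrative parameters
print(sum(x * x for x in v) ** 0.5)      # norm is roughly sigma * sqrt(n)

Each trial here costs a floating-point exponentiation and only roughly one in ten trials is accepted with this tail cut, which is exactly the kind of per-sample cost (or, alternatively, large cumulative-distribution tables) that the paper's logarithmic-table sampler is designed to avoid.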



15:17 [Pub][ePrint] Sequential Aggregate Signatures Made Shorter, by Kwangsu Lee and Dong Hoon Lee and Moti Yung

Sequential aggregate signature (SAS) is a special type of public-key signature that allows a signer to add his signature to a previous aggregate signature in sequential order. Since many public keys are used and many signatures are compressed together, it is important to reduce the sizes of both the signatures and the public keys. Recently, Lee, Lee, and Yung (PKC 2013) proposed an efficient SAS scheme with short public keys and proved its security without random oracles under static assumptions.

In this paper, we propose an improved SAS scheme that has a shorter signature size than that of Lee et al.'s SAS scheme. Our SAS scheme is also secure without random oracles under static assumptions. To achieve the improvement, we devise a new public-key signature scheme that supports multiple users and public re-randomization. Compared with the SAS scheme of Lee et al., our SAS scheme employs new techniques which allow us to reduce the size of signatures by increasing the size of the public keys (since signature compression is at the heart of aggregate signatures, this is a further step in understanding the aggregation capability of such schemes).
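
To fix the sequential data flow being described, here is a hypothetical interface sketch in Python. All names and fields are assumptions made for illustration; the actual Lee-Lee-Yung construction is pairing-based and its algorithms are not reproduced here, so the methods are left abstract.

from dataclasses import dataclass, field
from typing import Any, List, Optional

@dataclass
class Aggregate:
    """Everything a verifier needs: the ordered public keys and messages so far,
    plus a single compressed signature value whose size does not grow with the
    number of signers."""
    public_keys: List[Any] = field(default_factory=list)
    messages: List[bytes] = field(default_factory=list)
    sigma: Optional[bytes] = None

class SequentialAggregateSignature:
    """Hypothetical SAS interface: each signer extends the aggregate received
    from the previous signer, in order."""

    def keygen(self):
        raise NotImplementedError("scheme-specific")

    def agg_sign(self, sk, message: bytes, agg: Aggregate) -> Aggregate:
        """Verify the incoming aggregate, fold a signature on `message` under
        `sk` into agg.sigma, append this signer's public key and message, and
        pass the result on."""
        raise NotImplementedError("scheme-specific")

    def agg_verify(self, agg: Aggregate) -> bool:
        """Check the single value agg.sigma against all public keys and
        messages at once."""
        raise NotImplementedError("scheme-specific")

# Intended use: signer 1 starts from an empty Aggregate(); each later signer
# calls agg_sign on what it received; anyone can run agg_verify at the end.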



15:17 [Pub][ePrint] Cryptographically Protected Prefixes for Location Privacy in IPv6, by Jonathan Trostle and Hosei Matsuoka and James Kempf and Toshiro Kawahara and Ravi Jain

There is a growing concern with preventing unauthorized agents from discovering the geographical location of Internet users, a security property known as location privacy. Typical deployments of IPv6 make it possible to deduce the approximate geographical location of a device from its IPv6 address. We present a scheme called Cryptographically Protected Prefixes (CPP) to address this problem at the level of IPv6 addressing and forwarding. CPP randomizes the address space of a defined topological region (privacy domain), thereby making it infeasible to infer location information from an IP address.

CPP can be deployed incrementally. We present an adversary model and show that CPP is secure within the model, assuming the existence of pseudorandom functions. We have implemented CPP as a pre-processing step within the forwarding algorithm in the FreeBSD 4.8 kernel. Our performance testing indicates that CPP pre-processing results in a 40-50 percent overhead for packet forwarding in privacy-domain routers. The additional end-to-end per-packet delay is roughly 20 to 60 microseconds. We also give an attack against the address encryption scheme in [Raghavan et al. 2009]. Finally, we show that the CPP forwarding algorithm is resilient in the event of network failures.
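
Below is a minimal Python sketch of the general idea of hiding topology-bearing address bits behind a pseudorandom function, assuming HMAC-SHA256 as the PRF. The address layout (a fixed domain prefix, 16 location-bearing subnet bits, a 64-bit interface identifier), the keying, and the XOR masking are illustrative assumptions and not the actual CPP encoding or forwarding algorithm.

import hmac
import hashlib
import ipaddress

# Illustrative layout (assumption): /48 privacy-domain prefix, 16 subnet bits
# that encode location inside the domain, 64-bit interface identifier.
SUBNET_BITS = 16

def _mask(key: bytes, iface_id: int) -> int:
    """PRF output used as a mask over the subnet bits, keyed per privacy
    domain and bound to the interface identifier."""
    digest = hmac.new(key, iface_id.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(digest[:2], "big")  # 16 pseudorandom bits

def hide_subnet(addr: ipaddress.IPv6Address, key: bytes) -> ipaddress.IPv6Address:
    """Replace the real subnet bits with subnet XOR PRF(key, iface_id), so the
    public address no longer reveals where in the domain the host sits."""
    n = int(addr)
    iface_id = n & ((1 << 64) - 1)
    subnet = (n >> 64) & ((1 << SUBNET_BITS) - 1)
    scrambled = subnet ^ _mask(key, iface_id)
    hi = (n >> (64 + SUBNET_BITS)) << SUBNET_BITS | scrambled
    return ipaddress.IPv6Address((hi << 64) | iface_id)

# The XOR masking is an involution, so a domain-interior router that knows
# `key` applies the same operation to recover the real subnet bits.
recover_subnet = hide_subnet

key = b"per-privacy-domain key (illustrative)"
real = ipaddress.IPv6Address("2001:db8:1:2a::1234")
public = hide_subnet(real, key)
assert recover_subnet(public, key) == real
print(public)

Interior routers that hold the per-domain key can undo the masking during forwarding, while outsiders see addresses whose subnet bits carry no location information; the real CPP design applies this kind of idea to the routing prefix structure itself.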



15:17 [Pub][ePrint] Parallel Gauss Sieve Algorithm: Solving the SVP in the Ideal Lattice of 128 dimensions, by Tsukasa Ishiguro and Shinsaku Kiyomoto and Yutaka Miyake and Tsuyoshi Takagi

In this paper, we report that we have solved the shortest vector problem (SVP) over a 128-dimensional lattice, which is currently the highest dimension of the SVP that has ever been solved. The security of lattice-based cryptography is based on the hardness of solving the SVP in lattices. In 2010, Micciancio et al. proposed the Gauss Sieve algorithm for heuristically solving the SVP using a list $L$ of Gauss-reduced vectors. Milde et al. proposed a parallel implementation of the Gauss Sieve algorithm; however, its efficiency degrades beyond about 10 threads because a large number of non-Gauss-reduced vectors appear in the distributed list of each thread. In this paper, we propose a more practical parallelized Gauss Sieve algorithm. Our algorithm gives each thread an additional Gauss-reduced list $V$ of sample vectors, and all vectors in the list $L$ remain Gauss-reduced because they are mutually reduced against all sample vectors in $V$. Therefore, our algorithm enables the Gauss Sieve algorithm to run without excessive overhead even in a large-scale parallel computation of more than 1,000 threads. Moreover, for additional speed-up, we use the bi-directional rotation structure of an ideal lattice, which allows additional list vectors to be generated with almost no extra overhead. Finally, we have succeeded in solving the SVP over a 128-dimensional ideal lattice generated by the cyclotomic polynomial $x^{128}+1$ using about 30,000 CPU hours.
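
The ideal-lattice rotation trick mentioned above can be made concrete with a short Python sketch, assuming the ring Z[x]/(x^128 + 1): multiplying a lattice vector, viewed as a polynomial, by x is a negacyclic shift, which yields another vector of the same ideal lattice with the same Euclidean norm essentially for free. The pairwise reduction step shown alongside is the standard Gauss Sieve reduction condition, not the paper's parallel bookkeeping with the per-thread lists $V$.

# Two ingredients of the sieve, in isolation (not the parallel algorithm).

N = 128

def rotate(v):
    """Multiply the polynomial v[0] + v[1]*x + ... + v[N-1]*x^(N-1) by x
    modulo x^N + 1: a negacyclic shift (the wrapped coefficient flips sign)."""
    return [-v[-1]] + v[:-1]

def norm_sq(v):
    return sum(x * x for x in v)

def reduce_vector(p, q):
    """One pairwise Gauss-reduction step: subtract (or add) q from p whenever
    that strictly shortens p, i.e. whenever 2*|<p,q>| > |q|^2."""
    dot = sum(a * b for a, b in zip(p, q))
    if 2 * abs(dot) > norm_sq(q):
        s = 1 if dot > 0 else -1
        p = [a - s * b for a, b in zip(p, q)]
    return p

# Every rotation of an ideal-lattice vector lies in the same lattice and has
# the same norm, so each sampled vector yields many list candidates cheaply.
v = [1, -2, 0, 3] + [0] * (N - 4)
r = list(v)
for _ in range(N):
    r = rotate(r)
    assert norm_sq(r) == norm_sq(v)
assert r == [-a for a in v]  # x^N = -1 in Z[x]/(x^N + 1)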