International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.


You can also access the full news archive.

Further sources for staying informed about changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2015-06-21
09:17 [Pub][ePrint] How Secure and Quick is QUIC? Provable Security and Performance Analyses, by Robert Lychev and Samuel Jero and Alexandra Boldyreva and Cristina Nita-Rotaru

  QUIC is a secure transport protocol developed by Google and implemented in Chrome in 2013, currently representing one of the most promising solutions to decreasing latency while intending to provide security properties similar to TLS.

In this work we shed some light on QUIC's strengths and weaknesses in terms of its provable security and performance guarantees in the presence of attackers. We first introduce a security model for analyzing performance-driven protocols like QUIC and prove that QUIC satisfies our definition under reasonable assumptions on the protocol's building blocks. However, we find that QUIC does not satisfy the traditional notion of forward secrecy that is provided by some modes of TLS, e.g., TLS-DHE.

Our analyses also reveal that with simple bit-flipping and replay attacks on some public parameters exchanged during the handshake, an adversary could easily prevent QUIC from achieving minimal latency advantages, either by having it fall back to TCP or by causing the client and server to have an inconsistent view of their handshake, leading to a failure to complete the connection. We have implemented these attacks and demonstrated that they are practical.

Our results suggest that QUIC's security weaknesses are introduced by the very mechanisms used to reduce latency, which highlights the seemingly inherent trade-off between minimizing latency and providing 'good' security guarantees.
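
To make the flavor of these attacks concrete, here is a toy Python sketch of a bit-flipping man-in-the-middle against an unauthenticated handshake field. The packet layout, field name and fallback logic are hypothetical illustrations of this class of attack, not QUIC's actual wire format or the paper's implementation.

    import os

    # Toy model: a hello message carrying an unauthenticated version field.
    SUPPORTED_VERSION = 0x00000001          # hypothetical version tag

    def server_hello() -> bytes:
        # The version field is sent in the clear and covered by no MAC.
        return SUPPORTED_VERSION.to_bytes(4, "big") + os.urandom(16)

    def mitm_flip(packet: bytes) -> bytes:
        # The attacker flips a single bit of the unauthenticated field.
        return bytes([packet[0] ^ 0x80]) + packet[1:]

    def client_process(packet: bytes) -> str:
        version = int.from_bytes(packet[:4], "big")
        if version != SUPPORTED_VERSION:
            return "fall back to TCP (latency advantage lost)"
        return "continue low-latency handshake"

    print(client_process(server_hello()))
    print(client_process(mitm_flip(server_hello())))

Because nothing authenticates the flipped field before keys are established, the client cannot distinguish tampering from a genuine mismatch and must take the slow path, which is exactly the latency penalty the paper quantifies.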



09:17 [Pub][ePrint] Secure Key Generation from Biased PUFs, by Roel Maes and Vincent van der Leest and Erik van der Sluis and Frans Willems

  PUF-based key generators have been widely considered as a root of trust in digital systems. They typically require an error-correcting mechanism (e.g., based on the code-offset method) for dealing with bit errors between the enrollment and reconstruction of keys. When the PUF in use does not have full entropy, the helper data can leak entropy about the device-unique key material. If the entropy level of the PUF becomes too low, the PUF-derived key can be attacked through the publicly available helper data. In this work we provide several solutions for preventing this entropy leakage for PUFs suffering from bias. The methods proposed in this work pose no limit on the amount of bias that can be tolerated, which solves an important open problem for PUF-based key generation. Additionally, the solutions are all evaluated with respect to reliability, efficiency, leakage and reusability, showing that, depending on the requirements for the key generator, different solutions are preferable.
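
For readers unfamiliar with the code-offset method named above, the following minimal Python sketch shows its two phases with a toy repetition code (real generators use stronger codes such as BCH; the key length, repetition factor and helper-data layout here are illustrative assumptions). It also shows where bias hurts: since the helper data is response XOR codeword, any bias in the response bits leaks information about the codeword, and hence about the key, through the public helper data.

    import secrets

    R = 5  # repetition factor of the toy code (real designs use e.g. BCH)

    def encode(bits):
        # Repetition-code encoder: each key bit becomes R codeword bits.
        return [b for b in bits for _ in range(R)]

    def decode(bits):
        # Majority-vote decoder, correcting up to (R-1)//2 errors per block.
        return [int(sum(bits[i*R:(i+1)*R]) > R // 2)
                for i in range(len(bits) // R)]

    def xor(a, b):
        return [x ^ y for x, y in zip(a, b)]

    # Enrollment: helper data = PUF response XOR codeword of the key.
    key = [secrets.randbelow(2) for _ in range(4)]
    response = [secrets.randbelow(2) for _ in range(4 * R)]
    helper = xor(response, encode(key))     # made public

    # Reconstruction: noisy re-read of the PUF, decode(response' XOR helper).
    reread = list(response)
    reread[3] ^= 1                          # one bit error between readings
    assert decode(xor(reread, helper)) == key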





2015-06-20
01:50 [News] FSE 2013 videos

  Videos from FSE 2013 are now online.



2015-06-19
20:10 [Event][New] CTISRM2016: The International Conference on Computing Technology, Information Security and Risk Management

  Submission: 3 February 2016
Notification: 3 February 2016
From March 3 to March 5
Location: Academic City, UAE
More Information: http://sdiwc.net/conferences/ctisrm2016/




2015-06-18
21:17 [Pub][ePrint] A Simple Proof of a Distinguishing Bound of Iterated Uniform Random Permutation, by Mridul Nandi

  Let P be chosen uniformly from Perm(S), the set of all permutations over a set S of size N. At Crypto 2015, Minaud and Seurin proved that for any unbounded-time adversary A making at most q queries, the distinguishing advantage between P^r (after sampling P, composing it with itself r times) and P, denoted Delta(P^r ; P), is at most (2r + 1)q/N. In this paper we provide an alternative, simple proof of this result, for an upper bound of 2q(r+1)^2/N, using the well-known H-coefficient technique.
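
A quick way to see why the advantage must grow with r: P^r(x) = x exactly when the cycle containing x has length dividing r, and for a uniform P the length of that cycle is uniform on {1, ..., N}, so Pr[P^r(x) = x] = d(r)/N (with d(r) the number of divisors of r, assuming r <= N), versus exactly 1/N for a single uniform permutation. The short Monte Carlo check below (Python; an illustration of the bound's shape, not the paper's proof) estimates this fixed-point probability.

    import random

    N, r, trials = 64, 6, 200_000
    hits = 0
    for _ in range(trials):
        perm = list(range(N))
        random.shuffle(perm)        # P uniform over Perm(S), |S| = N
        x = 0
        for _ in range(r):          # evaluate P^r at the point 0
            x = perm[x]
        hits += (x == 0)

    # Divisors of r = 6 are {1, 2, 3, 6}, so the estimate should be near
    # d(r)/N = 4/64 = 0.0625, versus 1/N = 0.015625 for P itself.
    print("Pr[P^r(0) = 0] ~=", hits / trials)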





2015-06-17
18:17 [Pub][ePrint] Improved (Pseudo) Preimage Attacks on Reduced-Round GOST and Grøstl-256 and Studies on Several Truncation Patterns for AES-like Compression Functions (Full Version), by Bingke Ma and Bao Li and Rongl

  In this paper, we present improved preimage attacks on the reduced-round GOST hash function family, which serves as the new Russian hash standard, with the aid of techniques such as the rebound attack, the Meet-in-the-Middle preimage attack and multicollisions. Firstly, we propose a preimage attack on 5-round GOST-256, which is the first preimage attack on GOST-256 at the hash function level. Then we extend the (previous) attacks on 5-round GOST-256 and 6-round GOST-512 to 6.5 and 7.5 rounds respectively by exploiting the involution property of the GOST transposition operation.

Secondly, inspired by the preimage attack on GOST-256, we also study the impact of four representative truncation patterns on the resistance of AES-like compression functions against the Meet-in-the-Middle preimage attack, and propose two stronger truncation patterns which make it more difficult to launch this type of attack. Based on our investigations, we are able to slightly improve the previous pseudo-preimage attacks on reduced-round Grøstl-256.



18:17 [Pub][ePrint] Constant Communication Oblivious RAM, by Tarik Moataz and Travis Mayberry and Erik-Oliver Blass

  There have been several attempts recently at using homomorphic encryption to increase the efficiency of Oblivious RAM protocols. One of the most successful has been Onion ORAM, which achieves O(1) communication overhead with polylogarithmic server computation. However, it has a number of drawbacks. It requires a very large block size of B = Ω(log^5 N), with large constants. Although it needs only polylogarithmic computation complexity, that computation consists mostly of expensive homomorphic multiplications. Finally, it achieves O(1) communication complexity but only amortized over a number of accesses. In this work we aim to address these problems, reducing the required block size to Ω(log^3 N), removing almost all of the homomorphic multiplications and achieving O(1) worst-case communication complexity. We achieve this by replacing their homomorphic eviction routine with a much less expensive permute-and-merge one which eliminates homomorphic multiplications while maintaining the same level of security. In turn, this removes the need for the layered encryption that Onion ORAM relies on and reduces both the minimum block size and worst-case bandwidth.



18:17 [Pub][ePrint] Robust and One-Pass Parallel Computation of Correlation-Based Attacks at Arbitrary Order, by Tobias Schneider and Amir Moradi and Tim Güneysu

  The protection of cryptographic implementations against higher-order attacks has become an important topic in the side-channel community after the advent of enhanced measurement equipment that enables the capture of millions of power traces in reasonably short time. However, the preprocessing of multi-million traces for such an attack is still challenging, in particular since, in the case of (multivariate) higher-order attacks, all traces need to be parsed at least twice. Even worse, partitioning the captured traces into smaller groups to parallelize computations is hardly possible with current techniques.

In this work we introduce procedures that allow iterative computation of correlation in a side-channel analysis attack at any arbitrary order, in both univariate and multivariate settings. The advantages of our proposed solutions are manifold: i) they provide stable results, i.e., high accuracy of the estimations is maintained as the number of used traces increases; ii) each trace needs to be processed only once, and the result of the attack can be obtained at any time (without having to reparse the whole trace pool when adding more traces); and iii) the computations can be efficiently parallelized, e.g., by splitting the trace pool into smaller subsets and processing each by a single thread on a multi-threading or cloud-computing platform. In short, our constructions allow efficiently performing higher-order side-channel analysis attacks (e.g., on hundreds of millions of traces), which is of crucial importance when masking schemes need to be evaluated in practice.
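
To give a flavor of what one-pass computation means here, the sketch below (Python/NumPy) maintains a streaming first-order Pearson correlation from six running sums: each trace is touched once, partial states from different threads merge by simple addition, and the correlation can be read out at any time. This is only the naive raw-sum variant for order one; the paper's contribution is doing this with numerically stable, centered-moment recurrences at arbitrary order.

    import numpy as np

    class OnePassCorrelation:
        """Streaming Pearson correlation between a scalar hypothesis and
        every sample point of a trace; the state is six running sums."""

        def __init__(self, trace_len):
            self.n = 0
            self.sx = 0.0                   # sum of hypothesis values
            self.sxx = 0.0                  # sum of squared hypotheses
            self.sy = np.zeros(trace_len)   # per-point sums of samples
            self.syy = np.zeros(trace_len)  # per-point sums of squares
            self.sxy = np.zeros(trace_len)  # per-point cross sums

        def update(self, h, trace):
            # Each trace is touched exactly once.
            self.n += 1
            self.sx += h
            self.sxx += h * h
            self.sy += trace
            self.syy += trace * trace
            self.sxy += h * trace

        def merge(self, other):
            # Partial states from parallel workers combine by addition.
            self.n += other.n; self.sx += other.sx; self.sxx += other.sxx
            self.sy += other.sy; self.syy += other.syy; self.sxy += other.sxy

        def correlation(self):
            # The attack result can be read out at any time.
            num = self.n * self.sxy - self.sx * self.sy
            den = np.sqrt((self.n * self.sxx - self.sx ** 2)
                          * (self.n * self.syy - self.sy ** 2))
            return num / den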



18:17 [Pub][ePrint] On Public Key Encryption from Noisy Codewords, by Eli Ben-Sasson and Iddo Ben-Tov and Ivan Damgard and Yuval Ishai and Noga Ron-Zewi

  Several well-known public key encryption schemes, including those of Alekhnovich (FOCS 2003), Regev (STOC 2005), and Gentry, Peikert and Vaikuntanathan (STOC 2008), rely on the conjectured intractability of inverting noisy linear encodings. These schemes are limited in that they either require the underlying field to grow with the security parameter, or alternatively they can work over the binary field but have a low noise entropy that gives rise to sub-exponential attacks.

Motivated by the goal of efficient public key cryptography, we study the possibility of obtaining improved security over the binary field by using different noise distributions.

Inspired by an abstract encryption scheme of Micciancio (PKC 2010), we consider an abstract encryption scheme that unifies all three schemes mentioned above and allows for arbitrary choices of the underlying field and noise distributions.

Our main result establishes an unexpected connection between the power of such encryption schemes and additive combinatorics. Concretely, we show that under the "approximate duality conjecture" from additive combinatorics (Ben-Sasson and Zewi, STOC 2011), every instance of the abstract encryption scheme over the binary field can be attacked in time $2^{O(\sqrt{n})}$, where $n$ is the maximum of the ciphertext size and the public key size (and where the latter excludes public randomness used for specifying the code).

On the flip side, counterexamples to the above conjecture (if it is false) may lead to candidate public key encryption schemes with improved security guarantees.

We also show, using a simple argument that relies on agnostic learning of parities (Kalai, Mansour and Verbin, STOC 2008), that any such encryption scheme can be unconditionally attacked in time $2^{O(n/\log n)}$, where $n$ is the ciphertext size.

Combining this attack with the security proof of Regev's cryptosystem, we immediately obtain an algorithm that solves the learning parity with noise (LPN) problem in time $2^{O(n/\log\log n)}$ using only $n^{1+\epsilon}$ samples, reproducing the result of Lyubashevsky (RANDOM 2005) in a conceptually different way.
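
For concreteness, the noisy linear encodings in question all have the shape b = <a, s> + e, with the schemes differing in the underlying field and the distribution of the noise e. Below is a minimal sample generator over the binary field (Python; standard Bernoulli-noise LPN, one instantiation the abstract framework covers; the parameter values are illustrative only).

    import random

    def lpn_samples(s, m, tau):
        """Return m noisy inner products with the secret s over GF(2):
        b = <a, s> + e (mod 2), where Pr[e = 1] = tau."""
        n = len(s)
        samples = []
        for _ in range(m):
            a = [random.getrandbits(1) for _ in range(n)]
            e = 1 if random.random() < tau else 0
            b = (sum(ai & si for ai, si in zip(a, s)) + e) % 2
            samples.append((a, b))
        return samples

    # Illustrative parameters only: 128-bit secret, Bernoulli(1/8) noise.
    secret = [random.getrandbits(1) for _ in range(128)]
    print(lpn_samples(secret, m=2, tau=0.125))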

Finally, we study the possibility of instantiating the abstract encryption scheme over constant-size rings to yield encryption schemes with no decryption error. We show that over the binary field decryption errors are inherent. On the positive side, building on the construction of matching vector families (Grolmusz, Combinatorica 2000; Efremenko, STOC 2009; Dvir, Gopalan and Yekhanin, FOCS 2010), we suggest plausible candidates for secure instances of the framework over constant-size rings that can offer perfectly correct decryption.



18:17 [Pub][ePrint] Last fall degree, HFE, and Weil descent attacks on ECDLP, by Ming-Deh A. Huang and Michiel Kosters and Sze Ling Yeo

  Weil descent methods have recently been applied to attack the Hidden Field Equation (HFE) public key systems and to solve the elliptic curve discrete logarithm problem (ECDLP) in small characteristic. However, the claims of quasi-polynomial time attacks on the HFE systems and of a subexponential time algorithm for the ECDLP depend on various heuristic assumptions.

In this paper we introduce the notion of the last fall degree of a polynomial system, which is independent of the choice of monomial order. We then develop complexity bounds on solving polynomial systems based on this last fall degree.

We prove that HFE systems have a small last fall degree, by showing that one can do division with remainder after Weil descent. This allows us to solve HFE systems unconditionally in polynomial time if the degree of the defining polynomial and the cardinality of the base field are fixed.

For the ECDLP over a finite field of characteristic 2, we provide computational evidence that casts doubt on the validity of the first fall degree assumption, which was widely adopted in earlier works and which promises sub-exponential algorithms for the ECDLP. In addition, we construct a Weil descent system from a set of summation polynomials in which the first fall degree assumption is unlikely to hold. These examples suggest that greater care needs to be exercised when applying this heuristic assumption to arrive at complexity estimates.

These results taken together underscore the importance of rigorously bounding last fall degrees of Weil descent systems, which remains an interesting but challenging open problem.



18:17 [Pub][ePrint] Fair and Robust Multi-Party Computation using a Global Transaction Ledger, by Aggelos Kiayias and Hong-Sheng Zhou and Vassilis Zikas

  Classical results on secure multi-party computation (MPC) imply that fully secure computation, including fairness (either all parties get output or none) and robustness (output delivery is guaranteed), is impossible unless a majority of the parties is honest. Recently, cryptocurrencies like Bitcoin were utilized to mitigate the fairness loss in MPC against a dishonest majority. The idea is that when the protocol aborts in an unfair manner (i.e., after the adversary receives output), honest parties get compensated by the adversarially controlled parties.

Our contribution is three-fold. First, we put forth a new formal model of secure MPC with compensation and we show how the introduction of suitable ledger and synchronization functionalities makes it possible to express such protocols completely using standard interactive Turing machines (ITM), circumventing the need for extra features that are outside the standard model, as in previous works. Second, our model is expressed in the universal composition setting with global setup and is equipped with a composition theorem that enables the design of protocols that compose safely with each other and within larger environments where other protocols with compensation take place; a composition theorem for MPC protocols with compensation was not known before. Third, we introduce the first robust MPC protocol with compensation, i.e., an MPC protocol where not only is fairness guaranteed (via compensation) but additionally the protocol is guaranteed to deliver output to the parties that engage, and therefore the adversary, after an initial round of deposits, is not even able to mount a denial-of-service attack without suffering a monetary penalty. Importantly, our robust MPC protocol requires only a constant number of (coin-transfer and communication) rounds.

Our contribution is three-fold. First, we put forth a new formal model of secure MPC with compensation and we show how the introduction of suitable ledger and synchronization functionalities makes it possible to express completely such protocols using standard interactive Turing machines (ITM) circumventing the need for the use of extra features that are outside the standard model as in previous works. Second, our model, is expressed in the universal composition setting with global setup and is equipped with a composition theorem that enables the design of protocols that compose safely with each other and within larger environments where other protocols with compensation take place; a composition theorem for MPC protocols with compensation was not known before. Third, we introduce the first robust MPC protocol with compensation, i.e., an MPC protocol where not only fairness is guaranteed (via compensation) but additionally the protocol is guaranteed to deliver output to the parties that get engaged and therefore the adversary, after an initial round of deposits, is not even able to mount a denial of service attack without having to suffer a monetary penalty. Importantly, our robust MPC protocol requires only a constant number of (coin-transfer and communication) rounds.