International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).

2014-12-31
19:17 [Pub][ePrint]

We analyse the complexity of algebraic algorithms for solving systems of linear equations with *noise*. Such systems arise naturally in the theory of error-correcting codes as well as in computational learning theory. More recently, linear systems with noise have found application in cryptography. The *Learning with Errors* (LWE) problem has proven to be a rich and versatile source of innovative cryptosystems, such as fully homomorphic encryption schemes. Despite the popularity of the LWE problem, the complexity of algorithms for solving it is not very well understood, particularly when variants of the original problem are considered. Here, we focus on and generalise a particular method for solving these systems, due to Arora & Ge, which reduces the problem to non-linear but noise-free system solving. Firstly, we provide a refined complexity analysis for the original Arora-Ge algorithm for LWE. Secondly, we study the complexity of applying algorithms for computing Gröbner bases, a fundamental tool in computational commutative algebra, to solving Arora-Ge-style systems of non-linear equations. We show positive and negative results. On the one hand, we show that the use of Gröbner bases yields an exponential speed-up over the basic Arora-Ge approach. On the other hand, we give a negative answer to the natural question of whether the use of such techniques can yield a subexponential algorithm for the LWE problem: under a mild algebraic assumption, we show that it is highly unlikely that such an improvement exists.
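The Arora-Ge reduction mentioned above can be made concrete: if each LWE error is known to lie in a small set $E$, then every sample $(a, b)$ with $b = \langle a,s\rangle + e \bmod q$ satisfies the noise-free polynomial equation $\prod_{e\in E}(b - \langle a,s\rangle - e) \equiv 0 \pmod q$ in the unknowns $s$. A minimal sketch with synthetic toy parameters (not the paper's implementation):

```python
import random

random.seed(1)
q = 97            # small modulus for illustration only
n = 8             # LWE dimension
E = (-1, 0, 1)    # assumed error support (ternary noise)

# Secret and a batch of LWE samples b = <a, s> + e (mod q)
s = [random.randrange(q) for _ in range(n)]
samples = []
for _ in range(20):
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice(E)
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    samples.append((a, b))

def arora_ge_poly(a, b, x):
    """Evaluate the Arora-Ge polynomial prod_{e in E} (b - <a,x> - e) mod q.

    By construction it vanishes at the true secret, turning each noisy
    linear sample into one noise-free equation of degree |E|."""
    inner = sum(ai * xi for ai, xi in zip(a, x)) % q
    prod = 1
    for e in E:
        prod = (prod * (b - inner - e)) % q
    return prod

# Every sample yields a noise-free equation satisfied by the secret.
assert all(arora_ge_poly(a, b, s) == 0 for a, b in samples)
```

Solving the resulting degree-$|E|$ system (e.g. by linearization or Gröbner bases) recovers $s$ once enough samples are available; the paper's analysis quantifies how many.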

We also consider a variant of LWE known as BinaryError-LWE, recently introduced by Micciancio and Peikert. By combining Gröbner basis algorithms with the Arora-Ge modelling, we show, under a natural algebraic assumption, that BinaryError-LWE can be solved in subexponential time as soon as the number of samples is quasi-linear, e.g. $m = O(n \log \log n)$. We also derive precise complexity bounds for BinaryError-LWE with $m = O(n)$, showing that this new approach yields better results than the best currently known generic (exact) CVP solver as soon as $m/n \geq 6.6$. More generally, our results provide a good picture of the hardness degradation of BinaryError-LWE for a number of samples ranging from $m = n\left(1+\Omega\big(1/\log(n)\big)\right)$ (a case for which BinaryError-LWE is as hard as solving some lattice problem in the worst case) to $m = O(n^2)$ (a case for which it can be solved in polynomial time). This addresses an open question from Micciancio and Peikert. Whilst our results do not contradict the hardness results obtained by Micciancio and Peikert, they should rule out BinaryError-LWE for many cryptographic applications. The results in this work depend crucially on the assumption that the algebraic systems considered are neither easier nor harder to solve than a random system of equations. We have verified this hypothesis experimentally, and we have been able to prove the assumption formally in several restricted situations. We emphasize that these issues are highly non-trivial, since proving our assumptions in full generality would allow one to prove a famous conjecture in commutative algebra known as Fröberg's Conjecture.

19:17 [Pub][ePrint]

ITUbee is a software-oriented lightweight block cipher first proposed at LightSec 2013. The cipher is especially suitable for resource-limited applications, such as sensor nodes in wireless sensor networks. To evaluate the security level of the cipher, we perform related-key differential attacks on ITUbee reduced to 10 rounds and 11 rounds, with time complexities $2^{65.97}$ and $2^{79.03}$, respectively. To the best of our knowledge, ours is the first related-key differential cryptanalysis of 10-round ITUbee.

19:17 [Pub][ePrint]

Security- and safety-critical devices must undergo penetration testing, including Side-Channel Attacks (SCA), before certification. SCA are powerful and easy to mount, but often need huge computation power, especially in the presence of countermeasures. Few efforts have been made to reduce the computational complexity of SCA by selecting a small subset of points where leakage prevails. In this paper, we propose a method to detect relevant leakage points in side-channel traces. The method is based on the Normalized Inter-Class Variance (NICV). A key advantage of NICV over the state of the art is that it needs neither a clone device nor knowledge of the secret parameters of the crypto-system. NICV has a low computation requirement and detects leakage using only public information, such as input plaintexts or output ciphertexts. We show that NICV can be related to the Pearson correlation and the signal-to-noise ratio (SNR), which are standard metrics, and that it can be used to compute the theoretical minimum number of traces required to attack an implementation. A theoretical rationale for NICV, along with practical applications to real crypto-systems, is provided to support our claims.
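As a rough illustration of the metric (a synthetic sketch, not the authors' code): NICV partitions traces by a public value $X$ (e.g. a plaintext byte) and computes $\mathrm{Var}(E[Y|X])/\mathrm{Var}(Y)$ for each time sample $Y$. The ratio lies in $[0,1]$ and peaks at samples whose leakage depends on $X$; the leakage model below is an arbitrary stand-in:

```python
import random
from statistics import pvariance, mean

random.seed(0)

# Stand-in leakage model: Hamming weight of an affine byte permutation.
LEAK = [bin((x * 167 + 13) % 256).count("1") for x in range(256)]

def nicv(xs, ys):
    """Normalized Inter-Class Variance: Var_X(E[Y|X]) / Var(Y)."""
    classes = {}
    for x, y in zip(xs, ys):
        classes.setdefault(x, []).append(y)
    # Repeat each class mean once per member so the partition variance
    # is weighted by class size, then normalize by the total variance.
    weighted = []
    for v in classes.values():
        weighted += [mean(v)] * len(v)
    return pvariance(weighted) / pvariance(ys)

# Synthetic traces: one time sample leaks the public byte, one is pure noise.
plaintexts = [random.randrange(256) for _ in range(5000)]
leaky = [LEAK[p] + random.gauss(0, 1) for p in plaintexts]
noise = [random.gauss(0, 1) for _ in plaintexts]

assert nicv(plaintexts, leaky) > 0.3   # leakage point stands out
assert nicv(plaintexts, noise) < 0.2   # noise point stays near zero
```

Note that only the public plaintext bytes are used to partition the traces, matching the paper's point that no secret parameters or clone device are required.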

19:17 [Pub][ePrint]

We give a new framework for obtaining signatures with a tight security reduction from standard hardness assumptions. Concretely, we show that any Chameleon Hash function can be transformed into a (binary) tree-based signature scheme with tight security. The transformation is in the standard model, i.e., it does not make use of any random oracle. For specific assumptions (such as RSA, Diffie-Hellman and Short Integer Solution (SIS)) we further manage to obtain a more efficient flat-tree construction. Our framework explains and generalizes most of the existing schemes, and provides a generic means for constructing tightly secure signature schemes from arbitrary assumptions that improves on the standard Merkle tree transformation. Moreover, we obtain the first tightly secure signature scheme from the SIS assumption and several schemes based on Diffie-Hellman in the standard model.

Some of our signature schemes are also structure-preserving and can (using known techniques) be combined with Groth-Sahai proof methodology to yield tightly secure and efficient simulation-sound NIZK proofs of knowledge and CCA-secure encryption in the multi-user/-challenge setting under classical assumptions.
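The key ingredient is the chameleon-hash trapdoor: without the trapdoor the hash is collision-resistant, but the trapdoor holder can open any digest to any message. A minimal sketch of the classic discrete-log instantiation (Krawczyk-Rabin style, toy parameters; illustrative only, not the paper's construction):

```python
# Toy discrete-log chameleon hash: H(m, r) = g^m * h^r mod p, with h = g^x.
# The trapdoor x lets its holder find r' such that H(m, r) == H(m', r').
p, q = 467, 233          # p = 2q + 1; toy-sized primes (NOT secure)
g = 4                    # generator of the order-q subgroup
x = 57                   # trapdoor
h = pow(g, x, p)

def chash(m, r):
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def collide(m, r, m_new):
    """Use the trapdoor: m + x*r == m_new + x*r' (mod q) solves for r'."""
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 123, 88
m_new = 200
r_new = collide(m, r, m_new)
assert chash(m, r) == chash(m_new, r_new)   # same digest, different message
```

Finding such a collision without $x$ would solve the discrete logarithm problem in this toy instantiation, which is what the tight reduction exploits.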

2014-12-29
16:17 [Pub][ePrint]

We study the problem of private outsourced sorting of encrypted data. We start by proposing a novel sorting protocol that allows a user to outsource his data to a cloud server in encrypted form and then request the server to perform computations on this data and sort the result. To perform the sorting, the server is assisted by a secure coprocessor with minimal computational and memory resources. The server and the coprocessor are assumed to be honest but curious, i.e., they honestly follow the protocol but are interested in learning more about the user data. We refer to the new protocol as "private outsourced sorting" since it guarantees that neither the server nor the coprocessor learns anything about the user data as long as they are non-colluding. We formally define private outsourced sorting and provide an efficient construction based on semi-homomorphic encryption.

As an application of our private sort, we present MSRE: the first scheme for outsourced search over encrypted data that efficiently answers multi-term queries with the result ranked using frequency of query terms in the data, while maintaining data privacy. To construct MSRE we use searchable encryption techniques combined with our new private sort framework. Finally, although not discussed in this work, we believe that our private sort framework can turn out to be an important tool for more applications that require outsourced sorting while maintaining data privacy, e.g., database queries.
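The construction relies on semi-homomorphic (additively homomorphic) encryption, which lets a server combine ciphertexts without decrypting them. A minimal textbook Paillier sketch with toy parameters illustrates the property (this is only the building block, not the paper's protocol):

```python
import math
import random

# Textbook Paillier with toy primes (NOT secure at this size).
p, q = 61, 53
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow(lam, -1, n)

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # With g = n + 1, g^m = 1 + m*n (mod n^2)
    return ((1 + m * n) * pow(r, n, n2)) % n2

def dec(c):
    u = pow(c, lam, n2)
    return ((u - 1) // n * mu) % n

a, b = 1234, 999
assert dec(enc(a)) == a
# Additive homomorphism: multiplying ciphertexts adds plaintexts mod n.
assert dec(enc(a) * enc(b) % n2) == (a + b) % n
```

Because ciphertext multiplication adds plaintexts, a server can aggregate or re-randomize encrypted values while a small trusted component handles only the comparisons.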

2014-12-27
01:17 [Pub][ePrint]

In this paper, we study the security margins of the hash functions BLAKE and BLAKE2 against the boomerang attack. We launch boomerang attacks on all four members of BLAKE and BLAKE2, and compare their complexities. We propose 8.5-round boomerang attacks on both BLAKE-512 and BLAKE2b with complexities $2^{464}$ and $2^{474}$ respectively. We also propose 8-round attacks on BLAKE-256 with complexity $2^{198}$ and 7.5-round attacks on BLAKE2s with complexity $2^{184}$. We verify the correctness of our analysis by giving practical 6.5-round Type I boomerang quartets for each member of BLAKE and BLAKE2.

According to our analysis, some tweaks introduced by BLAKE2 have increased its resistance against boomerang attacks to a certain extent.

But on the whole, BLAKE still has a higher security margin than BLAKE2.

01:17 [Pub][ePrint]

We introduce different notions of independence, especially computational independence (or, more precisely, independence by polynomial-size circuits (PSC)), which is the analogue of computational indistinguishability. We give some first implications and show that an encryption scheme having PSC-independent plaintexts and ciphertexts is equivalent to one having indistinguishable encryptions.

01:17 [Pub][ePrint]

One of the most efficient ways to implement a scalar multiplication on elliptic curves with precomputed points is to use mixed coordinates (affine and Jacobian). We show how to relax the precondition that precomputed points be given in affine form: we introduce relative Jacobian coordinates and give an algorithm to compute a scalar multiplication where the precomputed points can be given in Jacobian coordinates. We also show that this new approach is compatible with Meloni's trick, which has already been used in other papers to reduce the number of multiplications needed for a double-and-add step to 18 field multiplications.
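For context, here is a minimal sketch of the standard mixed affine-Jacobian arithmetic the paper starts from (toy curve and parameters of my choosing; the relative-Jacobian technique itself is not reproduced). In Jacobian coordinates $(X, Y, Z)$ represents the affine point $(X/Z^2, Y/Z^3)$, so doubling and mixed addition avoid per-step field inversions:

```python
# Toy short-Weierstrass curve y^2 = x^3 + a*x + b over GF(p).
p, a, b = 97, 2, 3
P = (3, 6)  # affine base point on the curve

def jac_double(X, Y, Z):
    # Standard Jacobian doubling formulas.
    S = (4 * X * Y * Y) % p
    M = (3 * X * X + a * pow(Z, 4, p)) % p
    X2 = (M * M - 2 * S) % p
    Y2 = (M * (S - X2) - 8 * pow(Y, 4, p)) % p
    Z2 = (2 * Y * Z) % p
    return X2, Y2, Z2

def jac_add_affine(X1, Y1, Z1, x2, y2):
    # Mixed addition: Jacobian point + affine precomputed point (Z2 = 1).
    # Handles the generic case; equal/opposite inputs would need the
    # doubling path (omitted for brevity).
    U2 = (x2 * Z1 * Z1) % p
    S2 = (y2 * pow(Z1, 3, p)) % p
    H = (U2 - X1) % p
    R = (S2 - Y1) % p
    X3 = (R * R - pow(H, 3, p) - 2 * X1 * H * H) % p
    Y3 = (R * (X1 * H * H - X3) - Y1 * pow(H, 3, p)) % p
    Z3 = (Z1 * H) % p
    return X3, Y3, Z3

def scalar_mult(k, pt):
    """Left-to-right double-and-add; only ONE field inversion, at the end."""
    x, y = pt
    acc = None  # point at infinity
    for bit in bin(k)[2:]:
        if acc is not None:
            acc = jac_double(*acc)
        if bit == "1":
            acc = (x, y, 1) if acc is None else jac_add_affine(*acc, x, y)
    if acc is None:
        return None
    X, Y, Z = acc
    zinv = pow(Z, -1, p)
    return (X * zinv * zinv) % p, (Y * pow(zinv, 3, p)) % p

assert scalar_mult(2, P) == (80, 10)
assert scalar_mult(3, P) == (80, 87)
```

The mixed-addition step assumes the precomputed point is affine ($Z_2 = 1$); relaxing that assumption is exactly what the paper's relative Jacobian coordinates address.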

01:17 [Pub][ePrint]

An accumulator is a *succinct* aggregate of a set of values for which it is possible to issue *short* membership proofs for each accumulated value. A party in possession of such a membership proof can then demonstrate that the value is included in the set. In this paper, we present the first lattice-based accumulator scheme that issues compact membership proofs. The security of our scheme is based on the hardness of the Short Integer Solution ($\mathsf{SIS}$) problem.
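To make the interface concrete, here is a hash-tree (Merkle) accumulator sketch. This is a different, non-lattice construction, but it illustrates the same notion: a short digest for a set, with logarithmic-size membership proofs.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build(leaves):
    """Accumulate a set of values: return the root digest and all tree levels."""
    level = [H(b"leaf:" + v) for v in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]  # duplicate last node on odd levels
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return level[0], levels

def prove(levels, index):
    """Membership proof: sibling hashes from leaf to root."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((index % 2, level[index ^ 1]))
        index //= 2
    return proof

def verify(root, value, proof):
    node = H(b"leaf:" + value)
    for is_right, sibling in proof:
        node = H(sibling + node) if is_right else H(node + sibling)
    return node == root

values = [b"alice", b"bob", b"carol", b"dave", b"erin"]
root, levels = build(values)
assert verify(root, b"carol", prove(levels, 2))
assert not verify(root, b"mallory", prove(levels, 2))
```

The root is constant-size and each proof has one hash per tree level; lattice-based accumulators offer the same interface with security reduced to $\mathsf{SIS}$ rather than hash collision resistance.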

01:17 [Pub][ePrint]

Lightweight cryptography is an emerging field that will play a critical role in areas like pervasive computing and the Internet of Things (IoT). In recent years, many lightweight ciphers have been designed that are better suited for small-scale embedded security; ciphers like PRESENT, KLEIN, Hummingbird-2, XTEA and CLEFIA are known for compact hardware implementations. Recently, the Feistel-based designs SIMON and SPECK were introduced; they are flexible, have very low memory requirements, and perform well in both hardware and software. There is always a trade-off between security and performance, and strengthening the design of these ciphers would increase their acceptability for embedded applications. In this paper, we propose a novel approach that increases the strength and performance of SIMON and SPECK by adding a confusion layer taken from the recently designed cipher RECTANGLE. RECTANGLE has a robust S-box compared to other lightweight ciphers, which makes the design fast and efficient. We add this substitution property to SIMON and SPECK after analyzing the cryptanalytic properties of both ciphers. The S-box of RECTANGLE is well suited to SIMON and SPECK because their designs use an asymmetric permutation, which is the basic requirement for RECTANGLE; the combination of S-box and asymmetric permutation achieves a robust design. The hybrid design proposed in this paper needs less memory than existing ciphers, and the added non-linear substitution layer makes SIMON and SPECK more resistant to attacks. This robust design will have a positive impact in the field of lightweight cryptosystems.
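For reference, the SPECK round function that the proposal builds on is a pure ARX design (modular add, rotate, XOR) with no S-box; a minimal Speck32/64 sketch is below (the S-box-augmented hybrid is the paper's own construction and is not reproduced here):

```python
# Speck32/64: 16-bit words, 22 rounds, rotation amounts alpha=7, beta=2.
MASK, ALPHA, BETA, ROUNDS = 0xFFFF, 7, 2, 22

def ror(x, r): return ((x >> r) | (x << (16 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (16 - r))) & MASK

def round_f(x, y, k):
    # One SPECK round: modular add, rotations, XOR with the round key.
    x = ((ror(x, ALPHA) + y) & MASK) ^ k
    y = rol(y, BETA) ^ x
    return x, y

def expand_key(key_words):
    # key_words = (l2, l1, l0, k0), the order used in the designers' test vectors.
    # The key schedule reuses the round function with the round index as "key".
    l = list(reversed(key_words[:-1]))   # [l0, l1, l2]
    k = [key_words[-1]]
    for i in range(ROUNDS - 1):
        li, ki = round_f(l[i], k[i], i)
        l.append(li)
        k.append(ki)
    return k

def encrypt(x, y, key_words):
    for k in expand_key(key_words):
        x, y = round_f(x, y, k)
    return x, y

# Test vector from the SIMON and SPECK designers' paper.
assert encrypt(0x6574, 0x694C, (0x1918, 0x1110, 0x0908, 0x0100)) == (0xA868, 0x42F2)
```

All non-linearity here comes from the modular addition; the paper's proposal is to add a dedicated substitution layer on top of this structure.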

2014-12-26
19:17 [Pub][ePrint]

Side-channel attacks are a severe type of attack against implementations of cryptographic primitives. Leakage-resilient cryptography is a new theoretical approach to formally address the problem of side-channel attacks. Recently, the Continuous After-the-Fact Leakage (CAFL) security model was introduced for two-party authenticated key exchange (AKE) protocols. In the CAFL model, an adversary can adaptively request arbitrary leakage of long-term secrets even after the test session is activated. The model supports continuous leakage even when the adversary learns certain ephemeral secrets or session keys. The amount of leakage is limited per query, but there is no bound on the total leakage. A generic leakage-resilient key exchange protocol $\pi$ has also been introduced and formally proved secure in the CAFL model. In this paper, we comment on the CAFL model and show that it does not capture its claimed security. Furthermore, we present an attack on protocol $\pi$, together with counter-arguments against its security, which invalidates the formal security proofs of protocol $\pi$ in the CAFL model.