International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR website here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources for updates are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2015-05-09
10:13 [PhD][New]

Name: Liam Keliher
Topic: Linear Cryptanalysis of Substitution-Permutation Networks
Category: secret-key cryptography

Description: The subject of this thesis is linear cryptanalysis of substitution-permutation networks (SPNs). We focus on the rigorous form of linear cryptanalysis, which requires the concept of linear hulls.

First, we consider SPNs in which the s-boxes are selected independently and uniformly from the set of all bijective n x n s-boxes. We derive an expression for the expected linear probability values of such an SPN, and give evidence that this expression converges to the corresponding value for the true random cipher. This adds quantitative support to the claim that the SPN structure is a good approximation to the true random cipher. We conjecture that this convergence holds for a large class of SPNs.

In addition, we derive a lower bound on the probability that an SPN with randomly selected s-boxes is practically secure against linear cryptanalysis after a given number of rounds. For common block sizes, experimental evidence indicates that this probability rapidly approaches 1 with an increasing number of rounds.

We then consider SPNs with fixed s-boxes. We present two new algorithms for upper bounding the maximum average linear hull probability for SPNs. These algorithms, named KMT1 and KMT2, are the first completely general algorithms for this purpose -- they can be applied to any SPN, and they compute an upper bound that is a function of the number of encryption rounds being evaluated. In contrast, other approaches to this problem either require that the SPN linear transformation have a specific structure, or compute a single value independent of the number of rounds. By applying KMT1 and KMT2 to the AES, we establish the provable security of the AES against linear cryptanalysis.

As a straightforward application of our work with linear hulls, we analyze the Q cipher, an SPN submitted to the European Commission's NESSIE cryptographic competition.
By using linear characteristics, not linear hulls, the designer of Q evaluates the cipher to [...]
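The linear probability values discussed above can be made concrete with a small sketch. The 4-bit PRESENT s-box is used purely as a worked example here; it does not appear in the thesis, and the function names are invented for illustration.

```python
# Toy sketch: linear probability LP(a, b) of a single 4-bit s-box.
# The PRESENT s-box serves only as an example table.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def dot(a, x):
    """GF(2) inner product: parity of the bitwise AND."""
    return bin(a & x).count("1") & 1

def linear_probability(a, b, sbox=SBOX):
    """LP(a, b) = (2 * Pr[a.x = b.S(x)] - 1)^2, taken over all inputs x."""
    n = len(sbox)
    matches = sum(dot(a, x) == dot(b, sbox[x]) for x in range(n))
    correlation = 2 * matches / n - 1
    return correlation ** 2

# The largest LP over nonzero masks governs the best one-round linear
# approximation; linear hulls then aggregate such values over many trails.
max_lp = max(linear_probability(a, b)
             for a in range(1, 16) for b in range(1, 16))
```

The rigorous analysis in the thesis works with the expected value of such quantities over a random choice of s-boxes, rather than a single fixed table.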

2015-05-08
12:17 [Pub][ePrint]

Several tools and approaches for proving noninterference properties for Java and other languages exist. Some of them have a high degree of automation or are even fully automatic, but overapproximate the actual information flow, and hence, may produce false positives. Other tools, such as those based on theorem proving, are precise, but may need interaction, and hence, analysis is time-consuming.

In this paper, we propose a hybrid approach that aims to obtain the best of both worlds: we use fully automatic analysis as much as possible, and resort to more precise but interactive analysis only at those places in a program where, due to overapproximation, the automatic approaches fail. The interactive analysis then involves verifying only specific functional properties in certain parts of the program, rather than checking the more intricate noninterference properties for the whole program.

To illustrate the hybrid approach, in a case study we use this approach, along with the fully automatic tool Joana for checking noninterference properties for Java programs and the theorem prover KeY for the verification of Java programs, as well as the CVJ framework proposed by Küsters, Truderung, and Graf, to establish cryptographic privacy properties for a non-trivial Java program, namely an e-voting system. The CVJ framework allows one to establish cryptographic indistinguishability properties for Java programs by checking (standard) noninterference properties for such programs.
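As a minimal illustration of why sound automatic flow analyses overapproximate (a Python sketch, not taken from the paper; the names are invented): the assignment below is unreachable, so the output never actually depends on the secret, yet a syntactic analysis sees a potential flow from `high_secret` to the return value and must conservatively report a leak.

```python
def release(high_secret: int, low_input: int) -> int:
    """Returns a public value; the secret-dependent branch is dead code."""
    low = low_input
    if high_secret * 0 > 0:     # always false ...
        low = high_secret       # ... but a flow analysis still sees this assignment
    return low                  # the actual output never depends on the secret
```

A precise (interactive) verifier can prove the functional property that the guard is always false, discharging exactly this kind of false positive.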

12:17 [Pub][ePrint]

The multiple ideal query (MIQ) model was introduced by Goyal, Jain and Ostrovsky [Crypto'10] as a relaxed notion of security which allows one to construct concurrently secure protocols in the plain model. The main question relevant to the MIQ model is how many queries to the ideal-world adversary must be allowed per session. The importance of this question stems from the fact that if a small (ideally constant) number of queries suffices, it would enable meaningful security guarantees in many application scenarios, as well as lead to the resolution of long-standing open questions such as fully concurrent password-based key exchange in the plain model.

In this work, we continue the study of the MIQ model and prove severe lower bounds on the number of ideal queries per session. Following are our main results:

1) There exists a two-party functionality that cannot be securely realized in the MIQ model with only a constant number of ideal queries per session.

2) There exists a two-party functionality that cannot be securely realized in the MIQ model by any constant round protocol, with any polynomial number of ideal queries per session.

Both of these results are unconditional and even rule out protocols proven secure using a non-black-box simulator. In fact, we prove a more general theorem which allows for a trade-off between round complexity and the number of ideal queries per session. We obtain our negative results in the following two steps:

1) We first prove our results with respect to black-box simulation, i.e., we only rule out simulators that make black-box use of the adversary.

2) Next, we give a technique to compile our negative results w.r.t. black-box simulation into full impossibility results (ruling out non-black-box simulation as well) in the MIQ model. Interestingly, our compiler uses ideas from the work on obfuscation using tamper-proof hardware, even though our setting does not involve any hardware tokens.

12:17 [Pub][ePrint]

Motivated by the problem of avoiding duplication in storage systems, Bellare, Keelveedhi, and Ristenpart have recently put forward the notion of Message-Locked Encryption (MLE) schemes which subsumes convergent encryption and its variants. Such schemes do not rely on permanent secret keys, but rather encrypt messages using keys derived from the messages themselves.

We strengthen the notions of security proposed by Bellare et al. by considering plaintext distributions that may depend on the public parameters of the schemes. We refer to such inputs as lock-dependent messages. We construct two schemes that satisfy our new notions of security for message-locked encryption with lock-dependent messages.

Our main construction deviates from the approach of Bellare et al. by avoiding the use of ciphertext components derived deterministically from the messages. We design a fully randomized scheme that supports an equality-testing algorithm defined on the ciphertexts.

Our second construction has a deterministic ciphertext component that enables more efficient equality testing. Security for lock-dependent messages still holds under computational assumptions on the message distributions produced by the attacker.

In both of our schemes the overhead in the length of the ciphertext is only additive and independent of the message length.
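For intuition, here is a toy Python sketch of the convergent-encryption baseline that MLE generalizes: the key is derived by hashing the message, and a deterministic tag enables equality testing on ciphertexts. The hash-counter "cipher" and all names are purely illustrative and insecure; this is not the paper's randomized construction.

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    """Toy SHA-256-in-counter-mode keystream -- for illustration only."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def mle_encrypt(msg: bytes, params: bytes = b"public-params"):
    """Convergent-style MLE: the key comes from the message itself."""
    key = hashlib.sha256(params + msg).digest()       # message-derived key
    ct = bytes(m ^ k for m, k in zip(msg, _keystream(key, len(msg))))
    tag = hashlib.sha256(params + key).digest()       # equality-test tag
    return ct, tag, key

def mle_decrypt(ct: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, len(ct))))

# Two independent uploads of the same file yield identical tags, so a
# storage server can deduplicate without learning the plaintext.
ct1, tag1, k1 = mle_encrypt(b"same file contents")
ct2, tag2, k2 = mle_encrypt(b"same file contents")
```

Note the constant-size tag: as in the abstract above, the ciphertext overhead is additive and independent of the message length.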

12:17 [Pub][ePrint]

Extensive use of third-party IP cores (e.g., HDL, netlists) and open-source tools in the FPGA application design and development process, in conjunction with inadequate bitstream protection measures, has raised crucial security concerns for reconfigurable hardware systems. The design of high-fidelity and secure methodologies for FPGAs is still in its infancy, and in particular there are almost no concrete methods/techniques that can ensure trust in FPGA applications not entirely designed and/or developed in a trusted environment. This work strongly suggests the need for an anomaly detection capability within FPGAs that can continuously monitor the behavior of the underlying FPGA IP cores, and the communication activities of IP cores with other IP cores or peripherals, for any abnormalities. To capture this need, we propose a methodology called FIDelity Enhancing Security (FIDES) for FPGAs that uses a combination of access control policies and behavior learning techniques for anomaly detection.

FIDES essentially comprises two components: (i) Trusted Wrappers, a layer of monitors with sensing capabilities distributed across the FPGA fabric; these wrappers embed the output of each IP core i with a tag τ_i according to the pre-defined security policy Π, and also verify the embeddings of each input to the IP core to detect any violation of the policies. Tagging and tracking enable us to capture the generalized interactions of each IP core with its environment (e.g., other IP cores, memory, OS or I/O ports). Trusted Wrappers also monitor the statistical properties exhibited by each IP core's functions on execution, such as power consumption, number of clock cycles and timing variations, to detect any anomalous operations; (ii) a Trusted Anchor that monitors the communication between the IP cores and the peripherals with regard to the centralized security policies Ψ and the statistical properties produced by the peripherals. We target the FIDES architecture at a Xilinx Zynq 7020 device for a red-black system comprising sensitive and non-sensitive IP cores. Our FIDES implementation incurs only 1-2% overhead in logic resources per wrapper, but a 4-5x increase in latency (worst case), measured in clock cycles, compared to the baseline implementation.
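The tag-and-check idea behind the Trusted Wrappers can be sketched in a few lines of Python. All names, the policy table, and the tuple encoding are invented for illustration; this is not the FIDES implementation, which operates on hardware signals.

```python
# Assumed policy Pi: which source tags each core may accept as input.
POLICY = {
    "crypto_core": {"keyloader"},
    "io_core":     {"crypto_core"},
}

def wrap_output(core, payload):
    """A wrapper stamps its core's output with the core's tag."""
    return (core, payload)           # the tag travels with the data

def check_input(core, tagged):
    """A wrapper rejects inputs whose tag the policy does not allow."""
    tag, payload = tagged
    if tag not in POLICY[core]:
        raise PermissionError(f"{core} may not consume data tagged {tag}")
    return payload
```

For example, `check_input("io_core", wrap_output("crypto_core", data))` passes, while routing io_core output into crypto_core raises a policy violation.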

2015-05-07
21:17 [Forum]

The authors of 2015/313 claim that their result implies a quantum polynomial-time key-recovery attack against cryptographic constructions relying on the hardness of finding a short generator of a principal ideal in a cyclotomic field (e.g., multilinear maps or the Smart-Vercauteren scheme). This statement is not true, and it should be amended. It is based on two references used to justify the existence of a quantum polynomial-time algorithm for the Principal Ideal Problem (PIP).

First, an online draft by Campbell, Groves and Shepherd [CGS14]. This work in progress was publicly released as it was when the corresponding research program at CESG was interrupted. According to its authors, the polynomial run time of the attack is an overstatement that cannot be supported (this was stated at the Heilbronn Quantum Algorithms Meeting 2015 and at the ICERM workshop on lattices and cybersecurity 2015). Moreover, it is unclear that the approach chosen in [CGS14] could ever lead to a polynomial run time, for at least two fundamental reasons explained in this forum post: https://groups.google.com/forum/#!topic/cryptanalytic-algorithms/GdVfp5Kbdb8

Then, the authors of 2015/313 refer to ongoing research by Fang Song and myself. This work in progress (which adopts a totally different approach than that of [CGS14]) has never been shared with the authors and has never been publicly released. We have no timeline for its submission, and although it is promising, it is clearly too soon to refer to it with the level of confidence chosen by the authors of this preprint ("known algorithm for the PIP"). We have already shared our concerns about this citation with the authors of 2015/313, but no change has been made so far. It is clearly work in progress, and Fang Song and myself do not refer to it as an actual result.

From: 2015-07-05 20:21:15 (UTC)

21:17 [Pub][ePrint]

Gennaro, Gentry, Parno, and Raykova (GGPR) introduced Quadratic Arithmetic Programs (QAPs) as a way of representing arithmetic circuits in a form amenable to highly efficient cryptographic protocols (EUROCRYPT 2013), particularly for verifiable computation and succinct non-interactive arguments. Subsequently, Parno, Gentry, Howell, and Raykova introduced an improved cryptographic protocol (and implementation), which they dubbed Pinocchio (IEEE S&P 2013).

Ben-Sasson et al. then introduced a lightly modified version of the Pinocchio protocol and implemented it as part of their libsnark distribution. Later work by the same authors employed this protocol, as did a few works by others. Many of these works cite the version of the paper which was published at USENIX Security. However, the protocol does not appear in that peer-reviewed paper; instead, it appears only in a technical report, where it is justified via a lemma that lacks a proof. Unfortunately, the lemma is incorrect, and the modified protocol is unsound. With probability one, an adversary can submit false statements and proofs that the verifier will accept. We demonstrate this theoretically, as well as with concrete examples in which the protocol's implementation in libsnark accepts invalid statements.

Fixing this problem requires different performance tradeoffs, indicating that the performance results reported by papers building on this protocol (USENIX Security 2013, CRYPTO 2014, NDSS 2014, EUROCRYPT 2015, IEEE S&P 2014, IEEE S&P 2015) are, to a greater or lesser extent, inaccurate.

12:17 [Pub][ePrint]

The (fast) algebraic immunity, including (standard) algebraic immunity and the resistance against fast algebraic attacks, has been considered an important cryptographic property for Boolean functions used in stream ciphers. This paper concerns the determination of the (fast) algebraic immunity of a special class of Boolean functions, called Boolean power functions. An n-variable Boolean power function f can be represented as a monomial trace function over the finite field GF(2^n). Determining the (fast) algebraic immunity of Boolean power functions may require arithmetic in GF(2^n), which can be computationally inefficient compared with operations over GF(2). We provide two sufficient conditions for Boolean power functions under which their immunities can be determined using only computations in GF(2). We show that Niho functions, and a number of Kasami functions in an odd number of variables, satisfy these conditions.
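The trace representation can be made concrete with a small Python sketch. The field GF(2^5) with the irreducible polynomial x^5 + x^2 + 1, and the exponent d = 3, are arbitrary example choices, not taken from the paper.

```python
# Sketch: evaluating a monomial trace function f(x) = Tr(x^d) over GF(2^5).
N = 5
MOD = 0b100101  # x^5 + x^2 + 1, irreducible over GF(2)

def gf_mul(a, b):
    """Carry-less multiplication modulo the field polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> N:
            a ^= MOD
    return r

def gf_pow(a, e):
    """Square-and-multiply exponentiation in GF(2^N)."""
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def trace(a):
    """Tr(a) = a + a^2 + ... + a^(2^(N-1)); the result lies in GF(2)."""
    t, s = 0, a
    for _ in range(N):
        t ^= s
        s = gf_mul(s, s)
    return t

# The truth table of the Boolean power function f(x) = Tr(x^3).
f = [trace(gf_pow(x, 3)) for x in range(2 ** N)]
```

Because gcd(3, 2^5 - 1) = 1, the map x -> x^3 permutes the field, so this f inherits the balancedness of the trace map.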

12:17 [Pub][ePrint]

Boolean functions used in stream ciphers should have many cryptographic properties in order to help resist different kinds of cryptanalytic attacks. The resistance of Boolean functions against fast algebraic attacks is an important cryptographic property. Deciding the resistance of an n-variable Boolean function against fast algebraic attacks requires determining the rank of a square matrix over the finite field GF(2). In this paper, for rotation symmetric Boolean functions in a prime number n of variables, exploiting the properties of partitioned matrices and circulant matrices, we show that this rank can be obtained by determining the rank of a smaller reduced square matrix over GF(2), so that the computational complexity decreases by a factor of n^omega for large n, where omega ≈ 2.38 is the exponent of the problem of computing the rank of matrices.
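For intuition, here is what a binary circulant matrix looks like and how its rank over GF(2) is computed generically. This sketch is plain Gaussian elimination on bit-packed rows, not the paper's reduced-matrix method; the function names are invented.

```python
def circulant_rows(first_row, n):
    """Build an n x n circulant over GF(2); rows are n-bit integer masks,
    each row a cyclic left shift of the previous one."""
    rows, r, mask = [], first_row, (1 << n) - 1
    for _ in range(n):
        rows.append(r)
        r = ((r << 1) | (r >> (n - 1))) & mask
    return rows

def gf2_rank(rows):
    """Rank over GF(2) by elimination; each row is an integer bitmask."""
    rank, rows = 0, list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot                  # lowest set bit of the pivot
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank
```

For example, the all-ones 5x5 circulant has rank 1 (all rows are equal), while the identity circulant has full rank 5. The paper's contribution is to avoid running elimination on the full matrix at all when n is prime.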

2015-05-06
15:17 [Pub][ePrint]

In this paper we present known-plaintext single-key and chosen-key attacks on round-reduced LED-64 and LED-128. We show that an application of the recently proposed slidex attacks immediately improves the complexity of the previous single-key 4-step attack on LED-128. Further, we explore the possibility of multicollisions and show single-key attacks on 6 steps of LED-128. A generalization of our multicollision attack leads to the statement that no 6-round cipher with two alternating subkeys, or 2-round cipher with linearly dependent subkeys, is secure in the single-key model. Next, we exploit the possibility of finding pairs of inputs that follow a certain differential rather than a differential characteristic, and obtain chosen-key differential distinguishers for 5-step LED-64, as well as 8-step and 9-step LED-128. We provide examples of inputs that follow the 8-step differential, i.e. we are able to practically confirm our results on 2/3 of the steps of LED-128. We introduce a new type of chosen-key differential distinguisher, called the random-difference distinguisher, and successfully penetrate 10 of the total 12 steps of LED-128. We show that this type of attack is generic in the chosen-key model, and can be applied to any 10-round cipher with two alternating subkeys.

15:17 [Pub][ePrint]

Memory-hard functions are becoming an important tool in the design of password hashing schemes, cryptocurrencies, and more generic proof-of-work primitives that are x86-oriented and cannot be computed more efficiently on dedicated hardware.

We develop a simple and cryptographically secure approach to the design of such functions and show how to exploit the architecture of modern CPUs and memory chips to make faster and more secure schemes compared to existing alternatives such as scrypt. We also propose cryptographic criteria for the components that prevent cost reductions via time-memory tradeoffs and side-channel leaks. The concrete proof-of-work instantiation, which we call Argon2, can fill gigabytes of RAM within a second, is resilient to various tradeoffs, and is suitable for a wide range of applications that aim to bind a computation to a certain architecture.

Concerning potential DoS attacks, our scheme is lightweight enough to shift the bottleneck from the CPU to the memory bus, thus leaving sufficient computing power for other tasks. We also propose parameters for which our scheme is botnet resistant. As an application, we suggest a cryptocurrency design with a fast and memory-hard proof-of-work that allows memoryless verification.
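The memory-hard idea can be illustrated with a toy Python sketch (this is not the actual Argon2 algorithm; the function name and parameters are invented): each block depends on the previous block and on a data-dependently indexed earlier block, so an evaluator that keeps less than the full array in memory is forced into expensive recomputation.

```python
import hashlib

def memory_hard_fill(password: bytes, salt: bytes, n_blocks: int = 1024) -> bytes:
    """Toy memory-hard fill: an array of hash blocks with data-dependent
    back-references, finally compressed to a single tag."""
    blocks = [hashlib.blake2b(password + salt).digest()]
    for i in range(1, n_blocks):
        # which earlier block to mix in depends on the data computed so far
        ref = int.from_bytes(blocks[i - 1][:8], "big") % i
        blocks.append(hashlib.blake2b(blocks[i - 1] + blocks[ref]).digest())
    return hashlib.blake2b(blocks[-1]).digest()
```

Data-dependent indexing is what makes time-memory tradeoffs costly, at the price of a potential cache-timing side channel; the abstract's criteria for the components address exactly this kind of tradeoff.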