International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.


You can also access the full news archive.

Further sources for finding out about changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2015-05-09
18:17 [Pub][ePrint] Individualizing Electrical Circuits of Cryptographic Devices as a Means to Hinder Tampering Attacks, by Zoya Dyka, Thomas Basmer, Christian Wittke and Peter Langendoerfer

  Side-channel and fault attacks exploit the fact that the behavior of crypto implementations can be observed and provides hints that simplify revealing keys. In the real world, many devices identical to the target device can be attacked before the real target is attacked in order to increase the success of the attack: their packages can be opened and their electromagnetic radiation and structure can be analyzed. Another way to significantly improve the success rate of attacks is to measure the difference of the side-channel leakage of two identical devices, one of them being the target, using a Wheatstone bridge measurement setup. Here we propose to individualize the electrical circuit of cryptographic devices in order to prevent attacks that use identical devices: attacks that analyze the structure of devices identical to the target device in a preparation phase; usual side-channel attacks that always use the same target device for collecting many traces; and attacks that use two identical devices at the same time for measuring the difference of side-channel leakages. The proposed individualization can prevent such attacks because the power consumption and the electromagnetic radiation of devices with individualized electrical circuits are individualized while the functionality stays the same. We implemented three individualized ECC designs that provide exactly the same cryptographic function on a Spartan-6 FPGA. These designs differ from each other in a single block only, i.e. in the field multiplier. The visualization of the routed designs and the measurement results show clear differences in the topology and the resources consumed, as well as in the power and electromagnetic traces. We show that the influence of the individualized designs on the power traces is comparable to the influence of the inputs. These facts show that individualizing the electrical circuits of cryptographic devices can be exploited as a protection mechanism. We envision that this type of protection mechanism is relevant when an attacker has physical access to the cryptographic devices, e.g. in wireless sensor networks, from which devices can easily be stolen for further analysis in the lab.
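
As a rough illustration of how the similarity between power traces of different designs or inputs might be quantified (the paper's actual measurement setup and metric are not reproduced here; Pearson correlation and the synthetic traces below are assumptions made purely for this sketch):

    import numpy as np

    def trace_correlation(trace_a, trace_b):
        """Pearson correlation between two aligned power traces."""
        a = np.asarray(trace_a, dtype=float)
        b = np.asarray(trace_b, dtype=float)
        a = a - a.mean()
        b = b - b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Synthetic example: two noisy measurements of the same underlying activity.
    rng = np.random.default_rng(0)
    base = rng.normal(size=1000)
    print(trace_correlation(base, base + 0.3 * rng.normal(size=1000)))

A correlation drop between traces of two individualized designs that is comparable to the drop caused by changing the inputs is the kind of effect the abstract describes.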



18:17 [Pub][ePrint] Security Evaluation and Enhancement of Bistable Ring PUFs, by Xiaolin Xu, Ulrich Rührmair, Daniel E. Holcomb, and Wayne Burleson

  The Bistable Ring (BR) Physical Unclonable Function (PUF) is a newly proposed hardware security primitive in the PUF family. In this work, we comprehensively evaluate its resilience against Machine Learning (ML) modeling attacks. Based on the success of ML attacks, we propose XOR strategies to enhance the security of BR PUFs. Our results show that an XOR BR PUF with more than four parallel BR PUFs resists the ML modeling methods considered in this work. We also evaluate the other PUF metrics of reliability, uniqueness and uniformity, and find that the XOR function is also effective in improving the uniformity of BR PUFs.
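
As a rough illustration of the XOR strategy (a toy software simulation, not the hardware BR PUF; the SHA-256-based response model and the seeds are assumptions made only for this sketch), k parallel PUF instances each contribute one response bit per challenge and the bits are XORed into the final response, which is what makes the combined challenge-response behavior harder to model:

    import hashlib

    def br_puf_response(instance_seed: bytes, challenge: bytes) -> int:
        """Toy stand-in for one BR PUF instance: a deterministic,
        device-specific 1-bit response derived from seed and challenge."""
        return hashlib.sha256(instance_seed + challenge).digest()[0] & 1

    def xor_br_puf(instance_seeds, challenge: bytes) -> int:
        """XOR the 1-bit responses of several parallel (toy) PUF instances."""
        response = 0
        for seed in instance_seeds:
            response ^= br_puf_response(seed, challenge)
        return response

    # Five parallel instances: more than four is the regime the abstract
    # reports as resistant to the evaluated ML modeling attacks.
    seeds = [bytes([i]) * 16 for i in range(5)]
    print(xor_br_puf(seeds, b"challenge-0001"))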



18:17 [Pub][ePrint] Revisiting Security Claims of XLS and COPA, by Mridul Nandi

  Ristenpart and Rogaway proposed XLS in 2007, a generic method to encrypt messages with incomplete last blocks. Later, in 2013, Andreeva et al. proposed an authenticated encryption scheme, COPA, which uses XLS while processing incomplete message blocks. Following the design of COPA, several other CAESAR candidates used a similar approach. Surprisingly, in 2014, Nandi showed a three-query distinguisher against XLS which violates the security claim of XLS and puts a question mark on all schemes using XLS. However, due to the interleaved nature of the encryption and decryption queries of the distinguisher, it was not clear whether the security claims of COPA remain true or not. This paper revisits XLS and COPA in the directions of both cryptanalysis and provable security. Our contributions can be summarized in the following two parts:

1. Cryptanalysis: We describe two attacks: (i) a new distinguisher against XLS and, extending this attack, (ii) a forging algorithm with query complexity about 2^{n/3} against COPA, where n is the block size of the underlying blockcipher.

2. Security Proof: Due to the above attacks, the main claims of XLS (already known before) and COPA are wrong. So we revise the security analysis of both and show that (i) both XLS and COPA are pseudorandom functions (PRFs) up to 2^{n/2} queries and (ii) COPA is integrity-secure up to 2^{n/3} queries (matching the query complexity of our forging algorithm).
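
To put these bounds in concrete terms (a back-of-the-envelope reading, assuming a blockcipher with n = 128 such as AES; these numbers are not stated in the abstract): the PRF bound of $2^{n/2}$ corresponds to about $2^{64}$ queries, while the integrity bound and the matching forgery complexity of $2^{n/3}$ correspond to roughly $2^{42.7}$ queries, i.e. the integrity guarantee degrades well before the usual birthday bound.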



18:17 [Pub][ePrint] XLS is not a Strong Pseudorandom Permutation, by Mridul Nandi

  In FSE 2007, Ristenpart and Rogaway described a generic method, XLS, to construct a length-preserving strong pseudorandom permutation (SPRP) over bit-strings of size at least n. It requires a length-preserving permutation E over all bit-strings whose length is a multiple of n and a blockcipher E' with block size n. The SPRP security of XLS was proved from the SPRP assumptions on both E and E'. In this paper we disprove that claim by demonstrating an SPRP distinguisher against XLS which makes only three queries and has distinguishing advantage about 1/2. XLS uses a multi-permutation linear function, called mix2. In this paper, we also show that if we replace mix2 by any invertible linear function, the construction XLS still remains insecure. Thus the mode has an inherent weakness.
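
For context (this is the standard SPRP advantage definition, not text from the paper): the distinguisher D gets oracle access either to XLS and its inverse or to a uniformly random length-preserving permutation $\pi$ and its inverse, and its advantage is $|\Pr[D^{XLS,XLS^{-1}}=1] - \Pr[D^{\pi,\pi^{-1}}=1]|$. An advantage of about 1/2 after only three queries therefore means the construction is broken outright, not merely that its proven bound is loose.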



18:17 [Pub][ePrint] On the Amortized Complexity of Zero-knowledge Protocols, by Ronald Cramer and Ivan Damgård and Marcel Keller

  We propose a general technique that allows improving the complexity of zero-knowledge protocols for a large class of problems where previously the best known solution was a simple cut-and-choose style protocol, i.e., where the size of a proof for problem instance $x$ and error probability $2^{-n}$ was $O(|x| n)$ bits. By using our technique to prove $n$ instances simultaneously, we can bring down the proof size per instance to $O(|x| + n)$ bits for the same error probability while using no computational assumptions. Examples where our technique applies include proofs for quadratic residuosity, proofs of subgroup membership and knowledge of discrete logarithms in groups of unknown order, interval proofs of the latter, and proofs of plaintext knowledge for various types of homomorphic encryption schemes. We first propose our protocols as $\Sigma$-protocols and extend them later to zero-knowledge proofs of knowledge.
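
As a rough sense of the savings (illustrative numbers, not taken from the paper): for a proof with error probability $2^{-128}$ about an instance of size $|x| = 2048$ bits, the cut-and-choose cost of $O(|x| n)$ is on the order of $2048 \cdot 128 \approx 2.6 \cdot 10^{5}$ bits per instance, whereas the amortized cost of $O(|x| + n)$ is on the order of $2048 + 128 = 2176$ bits per instance when many instances are proven together.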



15:17 [Forum] [2015 Reports] 2015/313: Quantum attacks against the short PIP problem are not polynomial time by biasse

  The authors of 2015/313 claim that their result implies a quantum polynomial time key-recovery attack against cryptographic constructions relying on the hardness of finding a short generator of a principal ideal in a cyclotomic field (e.g., multilinear maps or the Smart-Vercauteren scheme). This statement is not true, and it should be amended. The claim is based on two references used to justify the existence of a quantum polynomial time algorithm to solve the Principal Ideal Problem (PIP).

First, an online draft by Campbell, Groves and Shepherd [CGS14]. This work in progress has been publicly released as it was when the corresponding research program at CESG was interrupted. According to its authors, the polynomial run time of the attack is an overstatement that cannot be supported (this has been stated at the Heilbronn Quantum Algorithms Meeting 2015 and at the ICERM workshop on lattices and cybersecurity 2015). Moreover, it is unclear that the approach chosen in [CGS14] could ever lead to a polynomial run time, for at least two fundamental reasons explained in this forum post: https://groups.google.com/forum/#!topic/cryptanalytic-algorithms/GdVfp5Kbdb8

Then, the authors of 2015/313 refer to ongoing research by Fang Song and myself. This work in progress (which adopts a totally different approach than that of [CGS14]) has never been shared with the authors and has never been publicly released. We have no timeline for its submission, and although it is promising, it is clearly too soon to refer to it with the level of confidence chosen by the authors of this preprint ("known algorithm for the PIP"). We have already shared our concerns about this citation with the authors of 2015/313, but no change has been made so far. It is clearly "work in progress", and Fang Song and myself do not refer to it as an actual result.

From: 2015-07-05 20:21:15 (UTC)

10:13 [PhD][New] Liam Keliher: Linear Cryptanalysis of Substitution-Permutation Networks

  Name: Liam Keliher
Topic: Linear Cryptanalysis of Substitution-Permutation Networks
Category: secret-key cryptography

Description: The subject of this thesis is linear cryptanalysis of substitution-permutation networks (SPNs). We focus on the rigorous form of linear cryptanalysis, which requires the concept of linear hulls.

First, we consider SPNs in which the s-boxes are selected independently and uniformly from the set of all bijective n x n s-boxes. We derive an expression for the expected linear probability values of such an SPN, and give evidence that this expression converges to the corresponding value for the true random cipher. This adds quantitative support to the claim that the SPN structure is a good approximation to the true random cipher. We conjecture that this convergence holds for a large class of SPNs.

In addition, we derive a lower bound on the probability that an SPN with randomly selected s-boxes is practically secure against linear cryptanalysis after a given number of rounds. For common block sizes, experimental evidence indicates that this probability rapidly approaches 1 with an increasing number of rounds.

We then consider SPNs with fixed s-boxes. We present two new algorithms for upper bounding the maximum average linear hull probability for SPNs. These algorithms, named KMT1 and KMT2, are the first completely general algorithms for this purpose -- they can be applied to any SPN, and they compute an upper bound that is a function of the number of encryption rounds being evaluated. In contrast, other approaches to this problem either require that the SPN linear transformation have a specific structure, or compute a single value independent of the number of rounds. By applying KMT1 and KMT2 to the AES, we establish the provable security of the AES against linear cryptanalysis.

As a straightforward application of our work with linear hulls, we analyze the Q cipher, an SPN submitted to the European Commission's NESSIE cryptographic competition. By using linear characteristics, not linear hulls, the designer of Q evaluates the cipher to [...]
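
For orientation (standard definitions with notation chosen here, not necessarily the thesis' own): for an n x n s-box $S$ and masks $a, b \in \{0,1\}^n$, the linear probability is $LP^S(a,b) = (2\Pr_x[a \cdot x = b \cdot S(x)] - 1)^2$; the expected linear probability of a keyed cipher is the average of the corresponding quantity over the keys; and a linear hull is the set of all linear characteristics sharing the same overall input and output masks, so the (maximum average) linear hull probability accounts for the combined contribution of all of them rather than a single dominant characteristic.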




2015-05-08
12:17 [Pub][ePrint] A Hybrid Approach for Proving Noninterference of Java Programs, by Ralf Kuesters and Tomasz Truderung and Bernhard Beckert and Daniel Bruns and Michael Kirsten and Martin Mohr

  Several tools and approaches for proving noninterference properties for Java and other languages exist. Some of them have a high degree of automation or are even fully automatic, but overapproximate the actual information flow, and hence, may produce false positives. Other tools, such as those based on theorem proving, are precise, but may need interaction, and hence, analysis is time-consuming.

In this paper, we propose a \emph{hybrid approach} that aims at obtaining the best of both approaches: we use fully automatic analysis as much as possible, and only at the places in a program where, due to overapproximation, the automatic approach fails do we resort to the more precise but interactive analysis. The latter involves verifying only specific functional properties in certain parts of the program, rather than checking the more intricate noninterference properties for the whole program.

To illustrate the hybrid approach, in a case study we use this approach, along with the fully automatic tool Joana for checking noninterference properties of Java programs, the theorem prover KeY for the verification of Java programs, and the CVJ framework proposed by Küsters, Truderung, and Graf, to establish cryptographic privacy properties for a non-trivial Java program, namely an e-voting system. The CVJ framework allows one to establish cryptographic indistinguishability properties for Java programs by checking (standard) noninterference properties for such programs.
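
As a rough illustration of the noninterference property itself (a toy example in Python rather than Java, with invented names; it is not taken from the paper or from the e-voting case study): low (public) outputs must not depend on high (secret) inputs, and a flow analysis in the spirit of Joana would flag the leaky variant below while accepting the other one.

    def leaky(secret_vote: int, public_counter: int) -> int:
        # Violates noninterference: the public result depends on the secret.
        return public_counter + (1 if secret_vote == 1 else 0)

    def noninterferent(secret_vote: int, public_counter: int) -> int:
        # Satisfies noninterference: the public result is the same for
        # every value of the secret input.
        _ = secret_vote  # the secret is read but never influences the output
        return public_counter + 1

    # Noninterference as a two-run property: varying only the secret input
    # must not change the public output.
    assert noninterferent(0, 41) == noninterferent(1, 41)
    assert leaky(0, 41) != leaky(1, 41)  # the leak is observable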



12:17 [Pub][ePrint] On Concurrently Secure Computation in the Multiple Ideal Query Model, by Vipul Goyal and Abhishek Jain

  The multiple ideal query (MIQ) model was introduced by Goyal, Jain and Ostrovsky [Crypto'10] as a relaxed notion of security which allows one to construct concurrently secure protocols in the plain model. The main question relevant to the MIQ model is how many queries we must allow to the ideal-world adversary per session; in particular, does a small (even constant) number of ideal queries per session suffice? The importance of this question stems from the fact that if the answer is positive, it would enable meaningful security guarantees in many application scenarios, as well as lead to the resolution of long-standing open questions such as fully concurrent password-based key exchange in the plain model.

In this work, we continue the study of the MIQ model and prove severe lower bounds on the number of ideal queries per session. Following are our main results:

1) There exists a two-party functionality that cannot be securely realized in the MIQ model with only a constant number of ideal queries per session.

2) There exists a two-party functionality that cannot be securely realized in the MIQ model by any constant round protocol, with any polynomial number of ideal queries per session.

Both of these results are unconditional and even rule out protocols proven secure using a non-black-box simulator. In fact, we prove a more general theorem which allows for a trade-off between round complexity and the number of ideal queries per session. We obtain our negative results in the following two steps:

1) We first prove our results with respect to black-box simulation, i.e., we only rule out simulators that make black-box use of the adversary.

2) Next, we give a technique to compile our negative results w.r.t. black-box simulation into full impossibility results (ruling out non-black-box simulation as well) in the MIQ model. Interestingly, our compiler uses ideas from the work on obfuscation using tamper-proof hardware, even though our setting does not involve any hardware tokens.



12:17 [Pub][ePrint] Message-Locked Encryption for Lock-Dependent Messages, by Martín Abadi and Dan Boneh and Ilya Mironov and Ananth Raghunathan and Gil Segev

  Motivated by the problem of avoiding duplication in storage systems, Bellare, Keelveedhi, and Ristenpart have recently put forward the notion of Message-Locked Encryption (MLE) schemes which subsumes convergent encryption and its variants. Such schemes do not rely on permanent secret keys, but rather encrypt messages using keys derived from the messages themselves.

We strengthen the notions of security proposed by Bellare et al. by considering plaintext distributions that may depend on the public parameters of the schemes. We refer to such inputs as lock-dependent messages. We construct two schemes that satisfy our new notions of security for message-locked encryption with lock-dependent messages.

Our main construction deviates from the approach of Bellare et al. by avoiding the use of ciphertext components derived deterministically from the messages. We design a fully randomized scheme that supports an equality-testing algorithm defined on the ciphertexts.

Our second construction has a deterministic ciphertext component that enables more efficient equality testing. Security for lock-dependent messages still holds under computational assumptions on the message distributions produced by the attacker.

In both of our schemes the overhead in the length of the ciphertext is only additive and independent of the message length.
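
For orientation, here is a sketch of convergent encryption, the deterministic baseline that MLE subsumes; it is not the paper's randomized lock-dependent-message construction, and the SHA-256 keystream below is a toy stand-in for a real deterministic encryption scheme, used only to make the example self-contained:

    import hashlib

    def _keystream(key: bytes, length: int) -> bytes:
        # Toy keystream from SHA-256 in counter mode; illustration only.
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def convergent_encrypt(message: bytes):
        key = hashlib.sha256(message).digest()  # key derived from the message
        ciphertext = bytes(m ^ k for m, k in
                           zip(message, _keystream(key, len(message))))
        tag = hashlib.sha256(ciphertext).digest()  # handle for equality testing
        return ciphertext, tag

    # Equal messages yield equal ciphertexts and tags, which enables
    # deduplication; this determinism is exactly what the paper's fully
    # randomized scheme avoids while still supporting equality tests.
    c1, t1 = convergent_encrypt(b"same file contents")
    c2, t2 = convergent_encrypt(b"same file contents")
    assert c1 == c2 and t1 == t2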



12:17 [Pub][ePrint] Enhancing Trust in Reconfigurable Based Hardware Systems with Tags and Monitors, by Devu Manikantan Shila and Vivek Venugopalan and Cameron D Patterson

  Extensive use of third-party IP cores (e.g., HDL, netlist) and open source tools in the FPGA application design and development process, in conjunction with inadequate bitstream protection measures, has raised crucial security concerns in the past for reconfigurable hardware systems. Designing high-fidelity and secure methodologies for FPGAs is still in its infancy, and in particular there are almost no concrete methods or techniques that can ensure trust in FPGA applications not entirely designed and/or developed in a trusted environment. This work strongly suggests the need for an anomaly detection capability within FPGAs that can continuously monitor the behavior of the underlying FPGA IP cores and the communication activities of IP cores with other IP cores or peripherals for any abnormalities. To address this need, we propose a technique called the FIDelity Enhancing Security (FIDES) methodology for FPGAs, which uses a combination of access control policies and behavior learning techniques for anomaly detection.

FIDES essentially comprises two components: (i) Trusted Wrappers, a layer of monitors with sensing capabilities distributed across the FPGA fabric; these wrappers embed the output of each IP core $i$ with a tag $\tau_i$ according to the pre-defined security policy $\Pi$ and also verify the embeddings of each input to the IP core to detect any violation of the policies. The use of tagging and tracking enables us to capture the generalized interactions of each IP core with its environment (e.g., other IP cores, memory, the OS or I/O ports). Trusted Wrappers also monitor the statistical properties exhibited by each IP core function on execution, such as power consumption, number of clock cycles and timing variations, to detect any anomalous operations; (ii) a Trusted Anchor that monitors the communication between the IP cores and the peripherals with regard to the centralized security policies $\Psi$ and the statistical properties produced by the peripherals. We target the FIDES architecture on a Xilinx Zynq 7020 device for a red-black system comprising sensitive and non-sensitive IP cores. Our FIDES implementation leads to only 1-2% overhead in terms of logic resources per wrapper, but a 4-5X increase in latency (worst case scenario), measured in terms of clock cycles, compared to the baseline implementation.
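
A rough software model of the tagging idea (an abstraction invented for illustration; the real mechanism is implemented in the FPGA fabric, and the concrete encoding of the policy $\Pi$ is not given in the abstract): each wrapper attaches its core's tag to outgoing data and checks the tags of incoming data against the sources the policy allows for that core.

    # Toy model of the Trusted Wrapper tagging/checking idea; the data
    # structures and policy encoding are assumptions for illustration only.
    class TaggedValue:
        def __init__(self, payload: bytes, tag: str):
            self.payload = payload
            self.tag = tag  # identifies the IP core that produced the value

    class TrustedWrapper:
        def __init__(self, core_id: str, allowed_sources):
            self.core_id = core_id
            self.allowed_sources = allowed_sources  # per-core slice of policy Pi

        def check_input(self, value: TaggedValue) -> None:
            if value.tag not in self.allowed_sources:
                raise RuntimeError("policy violation: core %s received data "
                                   "tagged %s" % (self.core_id, value.tag))

        def tag_output(self, payload: bytes) -> TaggedValue:
            return TaggedValue(payload, self.core_id)

    # A non-sensitive ("black") core must not consume data produced by the
    # sensitive ("red") crypto core directly.
    red = TrustedWrapper("red_crypto", allowed_sources={"key_store"})
    black = TrustedWrapper("black_io", allowed_sources={"plaintext_buffer"})
    try:
        black.check_input(red.tag_output(b"\x13\x37"))
    except RuntimeError as err:
        print(err)  # policy violation: core black_io received data tagged red_crypto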