International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.


You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2015-06-17
18:17 [Pub][ePrint] Robust and One-Pass Parallel Computation of Correlation-Based Attacks at Arbitrary Order, by Tobias Schneider and Amir Moradi and Tim Güneysu

  The protection of cryptographic implementations against higher-order attacks has become an important topic in the side-channel community since the advent of enhanced measurement equipment that enables the capture of millions of power traces in a reasonably short time. However, the preprocessing of multi-million-trace sets for such an attack is still challenging, in particular because (multivariate) higher-order attacks require all traces to be parsed at least twice. Even worse, partitioning the captured traces into smaller groups to parallelize computations is hardly possible with current techniques.

In this work we introduce procedures that allow iterative computation of correlation in a side-channel analysis attack at arbitrary order, in both univariate and multivariate settings. The advantages of our proposed solutions are manifold: i) they provide stable results, i.e., high accuracy of the estimations is maintained even as the number of used traces grows; ii) each trace needs to be processed only once, and the result of the attack can be obtained at any time (without reparsing the whole trace pool when adding more traces); and iii) the computations can be efficiently parallelized, e.g., by splitting the trace pool into smaller subsets and processing each by a single thread on a multi-threading or cloud-computing platform. In short, our constructions allow higher-order side-channel analysis attacks (e.g., on hundreds of millions of traces) to be performed efficiently, which is of crucial importance when practical evaluations of masking schemes need to be carried out.
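
To illustrate the flavor of such one-pass, mergeable statistics (a minimal sketch of the classical second-order case, not the paper's own arbitrary-order formulas), the Python snippet below shows a single-pass update of count, mean, and sum of squared deviations, plus the pairwise combination rule of Chan et al. that lets independent threads merge their partial results:

    # One-pass accumulation of (count, mean, M2), where M2 is the sum of
    # squared deviations from the running mean; variance = M2 / n.
    def update(state, x):
        n, mean, m2 = state
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
        return n, mean, m2

    # Pairwise combination (Chan et al.): merge two partial states, e.g.
    # computed by separate threads over disjoint subsets of the trace pool.
    def merge(a, b):
        n_a, mean_a, m2_a = a
        n_b, mean_b, m2_b = b
        n = n_a + n_b
        delta = mean_b - mean_a
        mean = mean_a + delta * n_b / n
        m2 = m2_a + m2_b + delta * delta * n_a * n_b / n
        return n, mean, m2

The paper's contribution can be viewed as extending such stable update and merge rules to centered sums of arbitrary degree, from which the higher-order correlation coefficients are then assembled.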



18:17 [Pub][ePrint] On Public Key Encryption from Noisy Codewords, by Eli Ben-Sasson and Iddo Ben-Tov and Ivan Damgard and Yuval Ishai and Noga Ron-Zewi

  Several well-known public key encryption schemes, including those of Alekhnovich (FOCS 2003), Regev (STOC 2005), and Gentry, Peikert and Vaikuntanathan (STOC 2008), rely on the conjectured intractability of inverting noisy linear encodings. These schemes are limited in that they either require the underlying field to grow with the security parameter, or alternatively they can work over the binary field but have a low noise entropy that gives rise to sub-exponential attacks.
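
As a concrete toy example of such a noisy linear encoding (an illustrative sketch, not any of the schemes above; all parameters are arbitrary), here is a binary-field LPN instance in Python: samples $(a_i, \langle a_i, s\rangle + e_i)$ over GF(2) with Bernoulli noise.

    import numpy as np

    rng = np.random.default_rng(0)

    def lpn_samples(n, m, tau, secret):
        # Noisy linear encoding over GF(2): random matrix A, labels
        # b = A s + e (mod 2) with a Bernoulli(tau) error vector e.
        A = rng.integers(0, 2, size=(m, n), dtype=np.uint8)
        e = (rng.random(m) < tau).astype(np.uint8)
        b = (A @ secret + e) % 2
        return A, b

    s = rng.integers(0, 2, size=16, dtype=np.uint8)
    A, b = lpn_samples(n=16, m=64, tau=0.125, secret=s)
    # Recovering s from (A, b) is the LPN problem underlying such schemes.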

Motivated by the goal of efficient public key cryptography, we study the possibility of obtaining improved security over the binary field by using different noise distributions.

Inspired by an abstract encryption scheme of Micciancio (PKC 2010), we consider an abstract encryption scheme that unifies all three schemes mentioned above and allows for arbitrary choices of the underlying field and noise distributions.

Our main result establishes an unexpected connection between the power of such encryption schemes and additive combinatorics. Concretely, we show that under the "approximate duality conjecture" from additive combinatorics (Ben-Sasson and Zewi, STOC 2011), every instance of the abstract encryption scheme over the binary field can be attacked in time $2^{O(\sqrt{n})}$, where $n$ is the maximum of the ciphertext size and the public key size (and where the latter excludes public randomness used for specifying the code).

On the flip side, counterexamples to the above conjecture (if it is false) may lead to candidate public key encryption schemes with improved security guarantees.

We also show, using a simple argument that relies on agnostic learning of parities (Kalai, Mansour and Verbin, STOC 2008), that any such encryption scheme can be unconditionally attacked in time $2^{O(n/\log n)}$, where $n$ is the ciphertext size.

Combining this attack with the security proof of Regev's cryptosystem, we immediately obtain an algorithm that solves the learning parity with noise (LPN) problem in time $2^{O(n/\log \log n)}$ using only $n^{1+\epsilon}$ samples, reproducing the result of Lyubashevsky (RANDOM 2005) in a conceptually different way.

Finally, we study the possibility of instantiating the abstract encryption scheme over constant-size rings to yield encryption schemes with no decryption error. We show that over the binary field decryption errors are inherent. On the positive side, building on the construction of matching vector families (Grolmusz, Combinatorica 2000; Efremenko, STOC 2009; Dvir, Gopalan and Yekhanin, FOCS 2010), we suggest plausible candidates for secure instances of the framework over constant-size rings that can offer perfectly correct decryption.



18:17 [Pub][ePrint] Last fall degree, HFE, and Weil descent attacks on ECDLP, by Ming-Deh A. Huang and Michiel Kosters and Sze Ling Yeo

  Weil descent methods have recently been applied to attack the Hidden Field Equation (HFE) public key systems and to solve the elliptic curve discrete logarithm problem (ECDLP) in small characteristic. However, the claims of quasi-polynomial time attacks on the HFE systems and of a subexponential time algorithm for the ECDLP depend on various heuristic assumptions.

In this paper we introduce the notion of the last fall degree of a polynomial system, which is independent of the choice of monomial order. We then develop complexity bounds on solving polynomial systems based on this last fall degree.

We prove that HFE systems have a small last fall degree, by showing that one can do division with remainder after Weil descent. This allows us to solve HFE systems unconditionally in polynomial time if the degree of the defining polynomial and the cardinality of the base field are fixed.

For the ECDLP over a finite field of characteristic 2, we provide computational evidence that casts doubt on the validity of the first fall degree assumption, which was widely adopted in earlier works and which promises subexponential algorithms for ECDLP. In addition, we construct a Weil descent system from a set of summation polynomials in which the first fall degree assumption is unlikely to hold. These examples suggest that greater care needs to be exercised when applying this heuristic assumption to arrive at complexity estimates.

These results taken together underscore the importance of rigorously bounding last fall degrees of Weil descent systems, which remains an interesting but challenging open problem.
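
As background for the summation polynomials mentioned above (stated here for odd characteristic; the characteristic-2 version relevant to this paper has a different form), Semaev's third summation polynomial for a curve $y^2 = x^3 + Ax + B$ vanishes exactly when $x_1, x_2, x_3$ are the $x$-coordinates of affine points summing to the point at infinity:

$$S_3(x_1, x_2, x_3) = (x_1 - x_2)^2 x_3^2 - 2\big((x_1 + x_2)(x_1 x_2 + A) + 2B\big) x_3 + (x_1 x_2 - A)^2 - 4B(x_1 + x_2).$$

Weil descent then rewrites such equations over an extension field as polynomial systems over the base field, whose solving complexity is what the fall-degree assumptions try to capture.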



18:17 [Pub][ePrint] Fair and Robust Multi-Party Computation using a Global Transaction Ledger, by Aggelos Kiayias and Hong-Sheng Zhou and Vassilis Zikas

  Classical results on secure multi-party computation (MPC) imply that fully secure computation, including fairness (either all parties get output or none) and robustness (output delivery is guaranteed), is impossible unless a majority of the parties is honest. Recently, cryptocurrencies like Bitcoin were utilized to mitigate the loss of fairness in MPC against a dishonest majority. The idea is that when the protocol aborts in an unfair manner (i.e., after the adversary receives output), the honest parties get compensated by the adversarially controlled parties.

Our contribution is threefold. First, we put forth a new formal model of secure MPC with compensation, and we show how the introduction of suitable ledger and synchronization functionalities makes it possible to express such protocols entirely with standard interactive Turing machines (ITMs), circumventing the need for extra features outside the standard model that were required in previous works. Second, our model is expressed in the universal composition setting with global setup and is equipped with a composition theorem that enables the design of protocols that compose safely with each other and within larger environments where other protocols with compensation take place; a composition theorem for MPC protocols with compensation was not known before. Third, we introduce the first robust MPC protocol with compensation, i.e., an MPC protocol where not only is fairness guaranteed (via compensation), but the protocol is also guaranteed to deliver output to the parties that engage; therefore the adversary, after an initial round of deposits, cannot even mount a denial-of-service attack without suffering a monetary penalty. Importantly, our robust MPC protocol requires only a constant number of (coin-transfer and communication) rounds.
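
As a toy illustration of the compensation idea described above (a sketch only: the class and its methods are hypothetical, and the paper's ledger functionality, amounts, and rules are formal and more involved), the following Python bookkeeping refunds escrowed deposits on fair completion and redistributes a deserter's escrow to the honest parties on an unfair abort:

    class ToyLedger:
        """Illustrative deposit/compensation bookkeeping (hypothetical API)."""
        def __init__(self):
            self.balances = {}
            self.deposits = {}

        def deposit(self, party, amount):
            # Parties escrow coins before the protocol starts.
            self.balances[party] = self.balances.get(party, 0) - amount
            self.deposits[party] = self.deposits.get(party, 0) + amount

        def refund(self, party):
            # Fair completion: a party recovers its own deposit.
            self.balances[party] = self.balances.get(party, 0) + self.deposits.pop(party, 0)

        def compensate(self, deserter, honest_parties):
            # Unfair abort: the deserter's escrow is split among honest parties.
            pot = self.deposits.pop(deserter, 0)
            share = pot // len(honest_parties)
            for p in honest_parties:
                self.balances[p] = self.balances.get(p, 0) + share

    ledger = ToyLedger()
    for party in ("A", "B", "C"):
        ledger.deposit(party, 10)
    # Suppose C aborts after learning the output: A and B reclaim their own
    # deposits and additionally split C's escrowed coins as compensation.
    ledger.refund("A"); ledger.refund("B")
    ledger.compensate("C", ["A", "B"])
    assert ledger.balances["A"] == 5 and ledger.balances["C"] == -10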



18:17 [Pub][ePrint] Known-key Distinguisher on Full PRESENT, by Céline Blondeau and Thomas Peyrin and Lei Wang

  In this article, we analyse the known-key security of the standardized PRESENT lightweight block cipher; namely, we propose a known-key distinguisher on the full PRESENT, for both the 80- and 128-bit key versions. We first leverage the very latest advances in differential cryptanalysis of PRESENT, which are as strong as the best linear cryptanalysis in terms of the number of attacked rounds. Differential properties are much easier to handle for a known-key distinguisher than linear properties, and we use a bias in the number of collisions on some predetermined input/output bits as the distinguishing property. In order to reach the full PRESENT, we eventually introduce a new meet-in-the-middle layer to propagate the differential properties as far as possible. Our techniques have been implemented and verified on the small-scale variant of PRESENT. While the known-key security model is very generous to the attacker, it makes sense in practice since PRESENT has been proposed as a basic building block for designing lightweight hash functions, where no secret is manipulated. Our distinguisher can, for example, be applied to the compression function obtained by placing PRESENT in a Davies-Meyer mode. We emphasize that this is the very first attack that can reach the full number of rounds of the PRESENT block cipher.
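
For reference, the Davies-Meyer mode mentioned above turns a block cipher $E$ into a compression function via $H_i = E_{m_i}(H_{i-1}) \oplus H_{i-1}$: the message block acts as the cipher key, and the chaining value is fed forward. A minimal Python sketch (the toy permutation below is a stand-in for demonstration only, NOT PRESENT; any keyed permutation with matching sizes would do):

    def davies_meyer(block_cipher, h, m):
        # Compress chaining value h with message block m: the message block
        # is used as the cipher key, and h is fed forward with XOR.
        return block_cipher(key=m, plaintext=h) ^ h

    # Toy keyed permutation for demonstration only (NOT PRESENT).
    def toy_cipher(key, plaintext, width=64):
        mask = (1 << width) - 1
        return ((plaintext ^ key) * 0x9E3779B97F4A7C15 + key) & mask

    h = davies_meyer(toy_cipher, h=0x0123456789ABCDEF, m=0xFEDCBA9876543210)

Because no secret key is involved in such hashing modes, an attacker who knows (or chooses) the key is a realistic adversary, which is why the known-key model is relevant here.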



18:17 [Pub][ePrint] The Carnac protocol -- or how to read the contents of a sealed envelope, by Michael Scott and Brian Spector

  Johnny Carson, as long-time host of the Tonight Show, often appeared in the spoof role of Carnac the Magnificent, a mentalist who could magically read the contents of a sealed envelope. This is in fact a well-known stock-in-trade trick of the mentalist's craft, known as "billet reading". Here we propose a cryptographic solution to the problem of billet reading, apparently allowing a cipher-text to be decrypted without direct knowledge of the cipher-text, and present both a compelling use case and a practical implementation.



18:17 [Pub][ePrint] Twist Insecurity, by Manfred Lochter, Andreas Wiemers

  Several authors suggest that the use of twist-secure elliptic curves automatically leads to secure implementations. We argue that even for twist-secure curves a point validation has to be performed. We illustrate this with examples where the security of EC algorithms is strongly degraded, even for twist-secure curves. We show that the usual blinding countermeasures against SCA are insufficient (actually, they introduce weaknesses) if no point validation is performed, or if an attacker has access to certain intermediate points. In this case the overall security of the system is reduced to the length of the blinding parameter. We emphasize that our methods work even in the case of a very high identification error rate during the SCA phase.
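
The point validation the authors call for amounts to checking that an incoming point actually satisfies the curve equation before using it in a scalar multiplication. A minimal sketch for a short-Weierstrass curve (the toy parameters below are illustrative only, not a secure curve):

    def on_curve(x, y, a, b, p):
        # Accept (x, y) only if y^2 = x^3 + a*x + b (mod p); rejecting
        # off-curve inputs blocks invalid-curve and twist-style attacks.
        return (y * y - (x * x * x + a * x + b)) % p == 0

    # Toy curve y^2 = x^3 + 2x + 3 over GF(97), for illustration only.
    p, a, b = 97, 2, 3
    assert on_curve(3, 6, a, b, p)       # 36 == 27 + 6 + 3 (mod 97)
    assert not on_curve(3, 7, a, b, p)   # off-curve point is rejected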



18:17 [Pub][ePrint] Tampering with the Delivery of Blocks and Transactions in Bitcoin, by Arthur Gervais and Hubert Ritzdorf and Ghassan O. Karame and Srdjan Capkun

  Given the increasing adoption of Bitcoin, the number of transactions and the block sizes within the system are only expected to increase. To sustain its correct operation in spite of its ever-increasing use, Bitcoin implements a number of necessary optimizations and scalability measures. These measures limit the amount of information broadcast in the system to the minimum necessary.

In this paper, we show that the current scalability measures adopted by Bitcoin are at odds with the security of the system. More specifically, we show that an adversary can exploit these measures in order to effectively delay the propagation of transactions and blocks to specific nodes, without causing a network partition in the system. We show that this allows the adversary to easily mount denial-of-service attacks, considerably increase its mining advantage in the network, and double-spend transactions in spite of the current countermeasures adopted by Bitcoin. Based on our results, we propose a number of countermeasures to enhance the security of Bitcoin without deteriorating its scalability.
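
As a toy illustration of the stalling effect (a hypothetical model, not the paper's measurements: the 120-second re-request timeout and the instant honest delivery are assumptions of this sketch), consider a node that requests an advertised block or transaction from one advertiser at a time and only moves on to the next advertiser after a timeout:

    TIMEOUT = 120  # seconds between re-requests; illustrative value

    def delivery_delay(advertisers):
        # The node walks the advertiser queue in arrival order; each
        # adversarial peer that advertises but never delivers costs one
        # full timeout before the next peer is asked.
        delay = 0
        for peer in advertisers:
            if peer == "honest":
                return delay  # an honest peer delivers (assumed instantly)
            delay += TIMEOUT
        return delay

    # Two adversarial advertisements arriving ahead of the first honest one:
    assert delivery_delay(["adversary", "adversary", "honest"]) == 240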



03:17 [Pub][ePrint] PUA - Privacy and Unforgeability for Aggregation, by Iraklis Leontiadis and Kaoutar Elkhiyaoui and Refik Molva and Melek Önen

  Existing work on data collection and analysis for aggregation is mainly focused on confidentiality issues; that is, the untrusted Aggregator learns only the aggregation result without divulging individual data inputs. In this paper we extend the existing models with stronger security requirements: apart from the privacy requirements with respect to the individual inputs, we ask for unforgeability of the aggregate result. We first define the new security requirements of the model. We also instantiate a protocol for private and unforgeable aggregation in a non-interactive multi-party environment. That is, multiple unsynchronized users owning personal sensitive information contribute their values in a secure way without interacting with each other: the Aggregator learns the result of a function without learning individual values, and moreover it constructs a proof that is forwarded to a verifier, convincing the latter of the correctness of the computation. The verifier is restricted from communicating with the users. Our protocol is provably secure in the random oracle model.
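
To give intuition for the confidentiality half of such a scheme (a toy sketch only: it uses a trusted dealer for one-time masks, is not the paper's protocol, and omits the unforgeability component entirely), here is additive masking in Python, where per-user masks sum to zero so they cancel in the aggregate while hiding each individual input:

    import secrets

    M = 2**61 - 1   # arbitrary public modulus for this toy example

    n = 5
    # A trusted dealer (an assumption of this sketch) hands out one-time
    # masks summing to zero mod M; real schemes derive them from user keys.
    masks = [secrets.randbelow(M) for _ in range(n - 1)]
    masks.append((-sum(masks)) % M)

    inputs = [7, 3, 12, 5, 1]
    ciphertexts = [(x + r) % M for x, r in zip(inputs, masks)]

    # The Aggregator sums the ciphertexts; the masks cancel, so it learns
    # only the sum of the inputs and nothing about any individual value.
    assert sum(ciphertexts) % M == sum(inputs) % M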



03:17 [Pub][ePrint] Privacy in the Genomic Era, by Muhammad Naveed and Erman Ayday and Ellen W. Clayton and Jacques Fellay and Carl A. Gunter and Jean-Pierre Hubaux and Bradley A. Malin and XiaoFeng Wang

  Genome sequencing technology has advanced at a rapid pace, and it is now possible to generate highly detailed genotypes inexpensively. The collection and analysis of such data has the potential to support various applications, including personalized medical services. While the benefits of the genomics revolution are trumpeted by the biomedical community, the increased availability of such data has major implications for personal privacy, notably because the genome has certain essential features, which include (but are not limited to) (i) an association with traits and certain diseases, (ii) identification capability (e.g., forensics), and (iii) revelation of family relationships. Moreover, direct-to-consumer DNA testing increases the likelihood that genome data will be made available in less regulated environments, such as the Internet and for-profit companies. The problem of genome data privacy thus resides at the crossroads of computer science, medicine, and public policy. While computer scientists have addressed data privacy for various data types, less attention has been dedicated to genomic data. Thus, the goal of this paper is to provide a systematization of knowledge for the computer science community. In doing so, we address some of the (sometimes erroneous) beliefs of this field and report on a survey we conducted about genome data privacy with biomedical specialists. Then, after characterizing the genome privacy problem, we review the state of the art regarding privacy attacks on genomic data and strategies for mitigating such attacks, as well as contextualizing these attacks from the perspective of medicine and public policy. The paper concludes with an enumeration of the challenges for genome data privacy and presents a framework to systematize the analysis of threats and the design of countermeasures as the field moves forward.



03:17 [Pub][ePrint] Sanctum: Minimal RISC Extensions for Isolated Execution, by Victor Costan and Ilia Lebedev and Srinivas Devadas

  Sanctum is a set of minimal extensions to a standard RISC architecture that offers strong provable isolation of software modules running concurrently and sharing resources. Sanctum is similar to SGX in its API, but protects against an important class of additional software attacks, including cache timing and memory access pattern attacks. It does so via a principled approach of eliminating entire attack surfaces through isolation rather than plugging attack-specific privacy leaks.

Sanctum's hardware changes over a standard RISC architecture do not impact the cycle time, as they do not extend critical execution paths. Sanctum does not change any major CPU building block (e.g., ALU, MMU, cache); it only requires additional hardware at the interfaces between these building blocks, corresponding to less than two percent chip area overhead. Over a set of benchmarks, Sanctum's worst observed overhead for isolated execution is 14.6% over an idealized insecure baseline.