International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).

2014-06-12
06:17 [Pub][ePrint]

Fault attacks are active attacks in which an adversary with physical access to a device tampers with the execution of an algorithm to retrieve secret material. Since the seminal Bellcore attack on RSA signatures, there has been extensive work to discover new fault attacks against cryptographic schemes, and to develop countermeasures against such attacks. Originally focused on high-level algorithmic descriptions, these works increasingly focus on concrete implementations. While lowering the abstraction level leads to new fault attacks, it also makes their discovery significantly more challenging. To face this trend, it is therefore desirable to develop principled, tool-supported approaches that allow a systematic analysis of the security of cryptographic implementations against fault attacks.

We propose, implement, and evaluate a new approach for finding fault attacks against cryptographic implementations. Our approach is based on identifying implementation-independent mathematical properties we call fault conditions. We choose them so that it is possible to recover secret data purely by computing on sufficiently many data points that satisfy a fault condition. Fault conditions capture the essence of a large number of attacks from the literature, including lattice-based attacks on RSA. Moreover, they provide a basis for automatically discovering new attacks: using fault conditions, we specify the problem of finding faulted implementations as a program synthesis problem. Using a specialized form of program synthesis, we discover multiple faulted implementations on RSA and ECDSA that realize the fault conditions, and hence lead to fault attacks. Several of the attacks found by our tool are new, and of independent interest.
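The Bellcore attack mentioned above is a textbook instance of a fault condition: a single faulty CRT-RSA signature s' satisfies s'^e ≡ m (mod p) but not (mod q), so gcd(s'^e - m, N) reveals a factor of N. The following is a minimal sketch of that attack with toy parameters of our choosing and a hypothetical single-bit fault model; it is an illustration, not the paper's synthesis tool.

```python
from math import gcd

# Toy RSA parameters (illustration only; real keys are far larger).
p, q = 10007, 10009
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse (Python 3.8+)

m = 4242  # message representative

def sign_crt(m, fault=False):
    # CRT signing: compute the signature mod p and mod q, then recombine.
    sp = pow(m, d % (p - 1), p)
    sq = pow(m, d % (q - 1), q)
    if fault:
        sq ^= 1  # hypothetical fault: one bit flipped in the mod-q half
    # Garner recombination
    h = ((sp - sq) * pow(q, -1, p)) % p
    return sq + q * h

s_ok = sign_crt(m)
assert pow(s_ok, e, N) == m            # correct signature verifies

s_bad = sign_crt(m, fault=True)
# Fault condition: s_bad^e = m (mod p) but not (mod q),
# so the gcd below factors N.
assert gcd(pow(s_bad, e, N) - m, N) == p
```

One faulty signature suffices: the attacker never needs to know where the fault struck, only that it hit exactly one CRT branch.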

06:17 [Pub][ePrint]

In a seminal work at EUROCRYPT '96, Coppersmith showed how to find all small roots of a univariate polynomial congruence in polynomial time: this has found many applications in public-key cryptanalysis and in a few security proofs. However, the running time of the algorithm is a high-degree polynomial, which limits experiments: the bottleneck is an LLL reduction of a high-dimensional matrix with extra-large coefficients. We present in this paper the first significant speedups over Coppersmith's algorithm. The first speedup is based on a special property of the matrices used by Coppersmith's algorithm, which allows us to provably speed up the LLL reduction by rounding, and which can also be used to improve the complexity analysis of Coppersmith's original algorithm. The exact speedup depends on the LLL algorithm used: for instance, the speedup is asymptotically quadratic in the bit-size of the small-root bound if one uses the Nguyen-Stehlé L² algorithm. The second speedup is heuristic and applies whenever one wants to enlarge the root size of Coppersmith's algorithm by exhaustive search. Instead of performing several LLL reductions independently, we exploit hidden relationships between these matrices so that the LLL reductions can be somewhat chained to decrease the global running time. When both speedups are combined, the new algorithm is in practice hundreds of times faster for typical parameters.
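To make the problem concrete: in the classic "stereotyped message" setting, one wants a small root x0 of f(x) = (a + x)^e - c mod N. The toy below (parameters ours, chosen so the cube map is a bijection and the root is unique) replaces the LLL machinery with brute force over the small-root bound; Coppersmith's algorithm finds such roots in polynomial time even when |x0| is as large as roughly N^(1/e).

```python
# Toy illustration of the small-root problem Coppersmith's algorithm solves.
p, q = 10007, 10037          # toy modulus; both primes are 2 mod 3,
N = p * q                    # so cubing mod N is a bijection (unique root)
e = 3

a = 55550000                 # known part of the message (the "stereotype")
x0 = 721                     # small unknown suffix
c = pow(a + x0, e, N)        # observed ciphertext

# Exhaustive search over the small-root bound stands in for LLL here.
root = next(x for x in range(10**4) if (pow(a + x, e, N) - c) % N == 0)
assert root == x0
```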

06:17 [Pub][ePrint]

Motivated by revelations concerning population-wide surveillance of encrypted communications, we formalize and investigate the resistance of symmetric encryption schemes to mass surveillance. The focus is on algorithm-substitution attacks (ASAs), where a subverted encryption algorithm replaces the real one. We assume that the goal of "big brother" is undetectable subversion, meaning that ciphertexts produced by the subverted encryption algorithm should reveal plaintexts to big brother yet be indistinguishable to users from those produced by the real encryption scheme. We formalize security notions to capture this goal and then offer both attacks and defenses. In the first category we show that successful (from the point of view of big brother) ASAs may be mounted on a large class of common symmetric encryption schemes. In the second category we show how to design symmetric encryption schemes that avoid such attacks and meet our notion of security. The lesson that emerges is the danger of choice: randomized, stateless schemes are subject to attack while deterministic, stateful ones are not.
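The danger of randomness can be shown with a toy subversion (ours, not the paper's concrete attack): the honest scheme picks a random IV, while the subverted scheme derives a pseudorandom-looking IV from the key so that its low bit leaks one key bit per ciphertext. Users see IVs that still look random; big brother, who knows the subversion, reads the key off bit by bit.

```python
import os, hmac, hashlib

KEY = os.urandom(16)

def honest_iv():
    # The real scheme: a uniformly random IV.
    return os.urandom(16)

def subverted_iv(key, i):
    # Subverted scheme: IV derived from the key via HMAC, with the
    # low bit of the last byte forced to equal key bit i.
    iv = bytearray(hmac.new(key, i.to_bytes(4, "big"), hashlib.sha256).digest()[:16])
    key_bit = (key[i // 8] >> (i % 8)) & 1
    iv[-1] = (iv[-1] & 0xFE) | key_bit
    return bytes(iv)

# Big brother observes 128 ciphertext IVs and reconstructs the key.
recovered = bytearray(16)
for i in range(128):
    bit = subverted_iv(KEY, i)[-1] & 1
    recovered[i // 8] |= bit << (i % 8)
assert bytes(recovered) == KEY
```

A deterministic, stateful scheme leaves the subverter no such channel, which is exactly the paper's lesson.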

06:17 [Pub][ePrint]

Non-interactive verifiable outsourced computation enables a computationally weak client to outsource the computation of a function $f$ on input $x$ to a more powerful but untrusted server, who will return the result of the function evaluation as well as a proof that the computation was performed correctly. A basic requirement of a verifiable outsourced computation scheme is that the client should invest less time in preparing the inputs and verifying the proof than in computing the function himself.

Some of the best such non-interactive schemes are based on Yao's garbled circuits and fully homomorphic encryption; these require the client to invest $poly(T)$ running time in the offline stage and $poly(log T)$ time in the online stage, where $T$ is the time complexity of computing $f$.

In this paper, we present a scheme that does not use garbled circuits; instead, it uses a very simple technique to obscure the function to be computed, and requires the client to invest only $poly(log T)$ running time in the offline stage.
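The guiding principle, verification cheaper than computation, is classically illustrated by Freivalds' check (not this paper's construction): the server claims $C = AB$; the client verifies $A(Br) = Cr$ for a random vector $r$ in $O(n^2)$ time instead of recomputing the $O(n^3)$ product.

```python
import random

def matvec(M, v):
    # Matrix-vector product, O(n^2).
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def freivalds(A, B, C, rounds=10):
    # Probabilistic check that C == A*B without computing A*B.
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False        # claimed product is certainly wrong
    return True                 # correct with probability >= 1 - 2**-rounds

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]        # true product A*B
assert freivalds(A, B, C)

C_bad = [[19, 22], [43, 51]]    # one wrong entry
assert not freivalds(A, B, C_bad, rounds=32)
```

Each round catches a wrong product with probability at least 1/2, so a handful of rounds makes acceptance of a bad result overwhelmingly unlikely.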

06:17 [Pub][ePrint]

Recently, the Residue Number System and the Cox-Rower architecture have been used to compute Elliptic Curve Cryptography efficiently on FPGAs. In this paper, we rewrite the conditions of Kawamura's theorem for error-free base extension, in order to define the maximal range of the set from which the moduli can be chosen to build a base. At the same time, we give a procedure to compute the truncation function of the Cox module correctly. We also present a modified ALU of the Rower architecture that uses a second level of Montgomery representation. Such an architecture allows us to select the moduli within the new upper bound defined by the condition. This modification makes the Cox-Rower architecture suitable for computing 521-bit ECC with a radix down to 16 bits, compared to 18 bits with the classical Cox-Rower architecture. We validate our results through an FPGA implementation of scalar multiplication at classical cryptography security levels (NIST curves). Our implementation uses 35% fewer LUTs than the state-of-the-art generic implementation of ECC using RNS, for the same performance [5]. We also slightly improve the computation time (latency), and our implementation shows the best throughput/area ratio for RNS computation supporting any curve, independently of the chosen base.
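The componentwise arithmetic that makes RNS attractive in hardware can be sketched as follows (toy moduli of our choosing; the paper's contribution concerns the base-extension step and Cox truncation, which are not shown here). A number is stored as its residues modulo pairwise-coprime moduli, operations act independently per channel, and the integer is recovered by the Chinese Remainder Theorem.

```python
from math import prod

base = [13, 17, 19, 23]          # toy moduli (real bases use word-size moduli)
M = prod(base)                   # dynamic range: values must stay below M

def to_rns(x):
    # Represent x by its residues in each channel.
    return [x % m for m in base]

def from_rns(res):
    # CRT reconstruction from the residues.
    x = 0
    for r, m in zip(res, base):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

a, b = 1234, 4321
# Multiplication is carry-free and parallel: one small product per channel.
c = [(ra * rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), base)]
assert from_rns(c) == (a * b) % M
```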

03:17 [Pub][ePrint]

We put the Gentry-Szydlo algorithm into a mathematical framework, and show that it is part of a general theory of "lattices with symmetry". For large ranks, there is no good algorithm that decides whether a given lattice has an orthonormal basis. But when the lattice is given with enough symmetry, we can construct a provably deterministic polynomial time algorithm to accomplish this, based on the work of Gentry and Szydlo. The techniques involve algorithmic algebraic number theory, analytic number theory, commutative algebra, and lattice basis reduction. This sheds new light on the Gentry-Szydlo algorithm, and the ideas should be applicable to a range of questions in cryptography.
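Note that the hard decision problem concerns the lattice, not a particular basis: checking that a given basis is orthonormal is a trivial Gram-matrix test, but a lattice can admit an orthonormal basis even when the basis at hand looks nothing like one. A small illustration (example basis chosen by us):

```python
def gram(B):
    # Gram matrix of a basis: pairwise inner products of the rows.
    return [[sum(a * b for a, b in zip(r, s)) for s in B] for r in B]

B = [[2, 1], [1, 1]]                   # integer basis, determinant 1
assert gram(B) != [[1, 0], [0, 1]]     # this basis is not orthonormal

# Yet det(B) = 1, so B generates all of Z^2, whose standard basis
# IS orthonormal: the lattice passes, the basis fails.
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
assert det in (1, -1)
```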

03:17 [Pub][ePrint]

We propose *RAW Path ORAM*, an ORAM construction that improves the state-of-the-art Path ORAM in several ways. First, RAW Path ORAM reduces the number of encryption operations by 4× compared with Path ORAM. Second, RAW Path ORAM enables a much more efficient and simpler integrity verification scheme. Third, RAW Path ORAM dramatically simplifies the theoretical analysis of the client storage requirement (stash size).

We build RAW Path ORAM in hardware and name it *Tiny ORAM*. Tiny ORAM is the first hardware ORAM design that efficiently supports small client storage, arbitrary block sizes (e.g., 64 bytes to 4096 bytes) and integrity verification. Block size flexibility allows Tiny ORAM to greatly reduce the worst-case access latency for ORAM running programs with erratic data locality. To reduce the performance overhead that comes with small client storage, we add a *Unified ORAM* scheme that further decreases ORAM access latency by up to 39% on real workloads. We demonstrate a complete working prototype on a stock FPGA board. Tiny ORAM requires 5%/15% of the FPGA logic/memory (including encryption and integrity verification) and can complete an ORAM access for a 64-byte block in 1.25-4.75 µs.
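For readers unfamiliar with the baseline, here is a toy software sketch of the Path ORAM access procedure that RAW Path ORAM builds on (not Tiny ORAM's hardware design; parameters and helper names are ours). Blocks live in a binary tree of buckets, each block is mapped to a random leaf, and an access reads the whole root-to-leaf path into a client-side stash, remaps the block, then writes blocks back as deep as their leaf mapping allows.

```python
import random

L = 3                             # tree height: 2**L leaves
Z = 4                             # bucket capacity
tree = {i: [] for i in range(2 ** (L + 1) - 1)}   # heap-indexed buckets
position = {}                     # block id -> leaf
stash = {}                        # client-side stash: block id -> value

def path(leaf):
    # Bucket indices from the root down to the given leaf.
    node = 2 ** L - 1 + leaf
    nodes = []
    while True:
        nodes.append(node)
        if node == 0:
            break
        node = (node - 1) // 2
    return nodes[::-1]

def access(blk, value=None):
    leaf = position.setdefault(blk, random.randrange(2 ** L))
    for node in path(leaf):                     # read the path into the stash
        for b, v in tree[node]:
            stash[b] = v
        tree[node] = []
    if value is not None:
        stash[blk] = value
    result = stash.get(blk)
    position[blk] = random.randrange(2 ** L)    # remap to a fresh random leaf
    for node in reversed(path(leaf)):           # write back, deepest bucket first
        fits = [b for b in stash if node in path(position[b])][:Z]
        tree[node] = [(b, stash.pop(b)) for b in fits]
    return result

access("x", 42)
assert access("x") == 42
```

Every access touches one full path regardless of which block is requested, which is what hides the access pattern from the server.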

03:17 [Pub][ePrint]

Message authentication is one of the most basic tasks of cryptography, and authentication based on public-key infrastructure (PKI) is one of the most prevalent methods for message and entity authentication. Still, the state of the art in composable security analysis of PKI-based authentication is somewhat unsatisfactory. Specifically, existing treatments either (a) make the unrealistic assumption that the PKI is accessible only within the confines of the authentication protocol itself, thus failing to capture real-world PKI-based authentication, or (b) impose often-unnecessary requirements, such as strong on-line non-transferability, on candidate protocols, thus ruling out natural candidates.

We give a modular and composable analytical framework for PKI-based message authentication protocols. This framework guarantees security even when the PKI is pre-existing and globally available, without being unnecessarily restrictive. Specifically, we model PKI as a global set-up functionality within the *Global UC* security model [Canetti et al., TCC 2007] and relax the ideal authentication functionality accordingly. We then demonstrate the security of a simple signature-based authentication protocol. Our modeling makes minimal security assumptions on the PKI in use; in particular, "knowledge of the secret key" is not guaranteed or verified. To enable our treatment, we formulate two new composition theorems.

03:17 [Pub][ePrint]

A popular and effective countermeasure to protect block cipher implementations against differential power analysis (DPA) attacks is to mask the internal operations of the cryptographic algorithm with random numbers. While the masking technique resists first-order (univariate) DPA attacks, higher-order (multivariate) attacks are able to break masked devices. In this paper, we formulate a statistical model for higher-order DPA attacks. We derive an analytic success-rate formula that distinctively shows the effects of the algorithmic confusion property, the signal-to-noise ratio (SNR), and masking on the leakage of masked devices. It further provides a formal proof that the centered product combination function is optimal for higher-order attacks in very noisy scenarios. We believe that the statistical model fully reveals how higher-order attacks work around masking, and offers good insights for embedded system designers implementing masking techniques.
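The centered product combining function can be seen in a toy simulation (parameters ours): a sensitive bit s is split into two Boolean shares (s XOR m, m). Each share's leakage alone is independent of s, but the product of the two centered leakages has a mean of about +0.25 when s = 0 and -0.25 when s = 1, which is the second-order signal the attack exploits.

```python
import random

random.seed(1)
N, sigma = 20000, 0.5              # traces and Gaussian noise level

def trace(s):
    # Simulated leakage of the two shares of sensitive bit s.
    m = random.getrandbits(1)
    l1 = (s ^ m) + random.gauss(0, sigma)   # leakage of share s ^ m
    l2 = m + random.gauss(0, sigma)         # leakage of mask share m
    return l1, l2

def centered_product(traces):
    # The combining function: subtract means, multiply, average.
    m1 = sum(t[0] for t in traces) / len(traces)
    m2 = sum(t[1] for t in traces) / len(traces)
    return sum((a - m1) * (b - m2) for a, b in traces) / len(traces)

c0 = centered_product([trace(0) for _ in range(N)])
c1 = centered_product([trace(1) for _ in range(N)])
# First-order means are identical, but the combined statistic
# separates the two hypotheses despite the masking.
assert c0 > 0.1 > -0.1 > c1
```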

2014-06-11
21:17 [Pub][ePrint]

Passwords are inherently vulnerable to dictionary attacks, but are quite secure if guessing attempts can be slowed down, for example by an online server. If this server gets compromised, however, the attacker can again perform an offline attack. The obvious remedy is to distribute the password verification process over multiple servers, so that the password remains secure as long as no more than a threshold of the servers are compromised. By letting these servers additionally host shares of a strong secret that the user can recover upon entering the correct password, the user can perform further cryptographic tasks using this strong secret as a key, e.g., encrypting data in the cloud. Threshold password-authenticated secret sharing (TPASS) protocols provide exactly this functionality, but the only two known schemes, by Bagherzandi et al. (CCS 2011) and Camenisch et al. (CCS 2012), leak the password if a user mistakenly executes the protocol with malicious servers. Authenticating to the wrong servers is a common scenario when users are tricked by phishing attacks.

We propose the first t-out-of-n TPASS protocol for any n > t that does not suffer from this shortcoming. We prove our protocol secure in the UC framework, which for the particular case of password-based protocols offers important advantages over property-based definitions, e.g., by correctly modeling typos in password attempts.
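The threshold-sharing layer underlying TPASS is Shamir secret sharing: the strong secret is encoded in a degree-t polynomial so that any t+1 servers can reconstruct it and no t servers learn anything. The sketch below shows only that layer (the actual protocol additionally gates reconstruction on the password and defends against malicious servers).

```python
import random

P = 2**127 - 1                    # prime field (a Mersenne prime)

def share(secret, t, n):
    # Evaluate a random degree-t polynomial with constant term `secret`
    # at x = 1..n; each (x, y) pair is one server's share.
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(points):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for x_i, y_i in points:
        num = den = 1
        for x_j, _ in points:
            if x_j != x_i:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        secret = (secret + y_i * num * pow(den, -1, P)) % P
    return secret

s = random.randrange(P)
shares = share(s, t=2, n=5)       # degree-2 polynomial: any 3 shares suffice
assert reconstruct(shares[:3]) == s
assert reconstruct([shares[0], shares[2], shares[4]]) == s
```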

13:06 [Job][New]

What we offer:

We offer a post-doctoral research contract at Universitat Rovira i Virgili, for up to three years. The selected candidate will work in a new generously funded research project at the UNESCO Chair in Data Privacy/CRISES research group within the Department of Computer Engineering and Mathematics.

What we require:

Candidates should have provable research expertise in game theory (specifically mechanism design and/or implementation theory) and also be familiar with cryptographic protocols. Suitable backgrounds include but are not limited to computer science, mathematics, economics and engineering.

Where we are:

Universitat Rovira i Virgili (URV) is based in Tarragona (Catalonia), a coastal city 90 km south of Barcelona. URV has been ranked by Times Higher Education 2014 as the world's 66th best university under 50 years of age. Also, according to the CWTS Leiden 2014 ranking, URV has the second highest research impact on "Math., Comp. Sci. and Engineering" among European universities.

What candidates should send:

Send your CV and publication record, plus two recommendation letters to Prof. Josep Domingo-Ferrer ( josep.domingo (at) urv.cat ).