International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive.

Further sources for updates are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2014-04-20
18:17 [Pub][ePrint]

The fully homomorphic symmetric encryption scheme \\emph{MORE} encrypts keys by conjugation with a random invertible matrix over an RSA modulus. We provide a two-known-ciphertexts cryptanalysis that recovers a linear dependence between the two encrypted keys.
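As background for the setting, the following toy sketch (our own illustration, not the paper's cryptanalysis) shows MORE-style encryption by matrix conjugation and checks the resulting additive homomorphism. The 2x2 matrix shape, the tiny composite modulus, and all function names are assumptions for illustration only.

```python
import random

# Toy parameters (assumption: real MORE uses a large RSA modulus N = p*q)
N = 3 * 11 * 19

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % N for j in range(2)]
            for i in range(2)]

def inv2x2(M):
    # inverse of a 2x2 matrix mod N via the adjugate; det must be invertible
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det_inv = pow((a * d - b * c) % N, -1, N)
    return [[d * det_inv % N, -b * det_inv % N],
            [-c * det_inv % N, a * det_inv % N]]

random.seed(1)
while True:  # secret key: a random invertible matrix K mod N
    K = [[random.randrange(N) for _ in range(2)] for _ in range(2)]
    try:
        K_inv = inv2x2(K)
        break
    except ValueError:  # determinant not invertible mod N; retry
        continue

def encrypt(m):
    r = random.randrange(N)              # fresh randomness per ciphertext
    S = [[m % N, 0], [0, r]]             # plaintext sits on the diagonal
    return matmul(matmul(K, S), K_inv)   # C = K * S * K^{-1} mod N

def decrypt(C):
    return matmul(matmul(K_inv, C), K)[0][0]

# Conjugation preserves matrix sums (and products), hence the homomorphism:
c1, c2 = encrypt(5), encrypt(7)
csum = [[(c1[i][j] + c2[i][j]) % N for j in range(2)] for i in range(2)]
assert decrypt(csum) == 12
```

The linear structure visible here (ciphertexts of related plaintexts are linearly related matrices) is exactly what makes known-ciphertext attacks on such schemes plausible.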

18:17 [Pub][ePrint]

We present attacks against CMCC that invalidate the claimed security of its integrity protection and misuse resistance. We exploit the fact that zero-padding is used on both the message and the authenticated data, and demonstrate how one may generate a forgery with a single call to the encryption oracle. From this we calculate the ciphertext of the chosen message, yielding a forgery and so breaking INT-CTXT. In the nonce-reuse setting, the existence of a forgery leads directly to a 2-query distinguisher.

18:17 [Pub][ePrint]

RSA-CRT is the most widely used implementation for RSA signatures. However, deterministic and many probabilistic RSA signatures based on CRT are vulnerable to fault attacks. Nevertheless, Coron and Mandal (Asiacrypt 2009) show that the randomized PSS padding protects RSA signatures against random faults. In contrast, Fouque et al. (CHES 2012) show that PSS padding does not protect against certain non-random faults that can be injected in widely used implementations based on the Montgomery modular multiplication. In this article, we prove the security of an infective countermeasure against a large class of non-random faults; the proof extends Coron and Mandal's result to a strong model where the adversary can force the faulty signatures to be a multiple of one of the prime factors of the RSA modulus. Such non-random faults induce more complex probability distributions than in the original proof, which we analyze using careful estimates of exponential sums attached to suitable rational functions. The security proof is formally verified using appropriate extensions of EasyCrypt, and provides the first application of formal verification to provable (i.e. reductionist) security in the context of fault attacks.
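For context on why CRT faults are so dangerous, the classical Bellcore observation (well-known background that such countermeasures defend against, not this paper's contribution) is that a signature faulted in one CRT half factors the modulus via a gcd. The toy primes and the simulated fault below are assumptions for illustration.

```python
from math import gcd

# Toy RSA-CRT parameters (assumption: real moduli are 2048+ bits)
p, q, e = 1009, 1013, 65537
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))
dp, dq, q_inv = d % (p - 1), d % (q - 1), pow(q, -1, p)

def sign_crt(m, fault=False):
    sp = pow(m, dp, p)
    if fault:
        sp = (sp + 1) % p            # simulated fault in the mod-p half
    sq = pow(m, dq, q)
    h = (q_inv * (sp - sq)) % p      # Garner recombination
    return (sq + h * q) % N

m = 12345
s_ok = sign_crt(m)
assert pow(s_ok, e, N) == m
# A signature faulted only mod p is still correct mod q, so
# s_bad^e - m is divisible by q but not by p: a gcd factors N.
s_bad = sign_crt(m, fault=True)
assert gcd((pow(s_bad, e, N) - m) % N, N) == q
```

The stronger faults considered in the abstract, where the adversary forces the faulty signature itself to be a multiple of a prime factor, are precisely what the infective countermeasure and its proof address.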

18:17 [Pub][ePrint]

With the increasing popularity of cloud-based data services, data owners are highly motivated to store their huge amounts of (potentially sensitive) personal data files on remote servers in encrypted form. Clients can later query the encrypted database to retrieve files of interest while preventing database servers from learning private information about the contents of files and queries.

In this paper, we investigate novel SSE designs that meet all of the practical requirements: one-round multi-keyword queries, comprehensive and practical privacy protection, sublinear search time, and efficient support for dynamic data operations. Moreover, our solutions support parallel search well and run over very large-scale cloud databases. Compared to existing SSE solutions, our solution is highly compact, efficient, and flexible. Its performance and security are carefully characterized by rigorous analysis. Experimental evaluations conducted over large, representative real-world data sets demonstrate that, compared with the state of the art, our solution indeed achieves the properties desired of large-scale encrypted database systems.
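The basic SSE idea underlying such designs can be sketched with a textbook single-keyword token index (our own minimal illustration, not the paper's construction; all names are assumptions): the client derives deterministic search tokens with a PRF, so the server can look up matches without ever seeing keywords.

```python
import hashlib, hmac, secrets

k = secrets.token_bytes(32)      # client's secret key

def token(word):
    # deterministic search token: PRF_k(word); the server sees only tokens
    return hmac.new(k, word.encode(), hashlib.sha256).hexdigest()

def build_index(docs):
    # docs: {doc_id: [keywords]}; the outsourced index maps tokens to ids
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(token(w), []).append(doc_id)
    return index

def search(index, word):
    # server-side lookup, given only the token for `word`
    return index.get(token(word), [])

idx = build_index({"f1": ["cloud", "sse"], "f2": ["cloud"]})
assert sorted(search(idx, "cloud")) == ["f1", "f2"]
```

This naive sketch leaks search and access patterns and supports neither multi-keyword queries nor dynamic updates; closing those gaps efficiently is exactly what full SSE designs like the one described here target.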

18:17 [Pub][ePrint]

The recent BLISS signature scheme showed that lattice-based constructions have evolved into practical alternatives to RSA or ECC. Besides reasonably small signatures of 5600 bits for a 128-bit level of security, BLISS enables extremely fast signing and signature verification in software. However, due to the complex sampling of Gaussian noise with high precision, it is not clear whether this scheme can be mapped efficiently to embedded devices. In particular, the software approach of using large precomputed tables for Gaussian sampling cannot be transferred to constrained computing environments, such as FPGAs with limited memory. In this work we present techniques for an efficient CDT-based Gaussian sampler on reconfigurable hardware involving Peikert's convolution lemma and the Kullback-Leibler divergence. Based on our enhanced sampler design, we provide a first BLISS architecture for Xilinx Spartan-6 FPGAs that integrates fast FFT/NTT-based polynomial multiplication, sparse multiplication, and a Keccak hash function. On our core, a signing operation requires 123 \\textmu s on average, using 2,584 slices, 8 BRAMs, and 6 DSPs. Verification takes slightly less time, at 70 \\textmu s.
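To make the CDT approach concrete, here is a minimal software sketch of a CDT-based discrete Gaussian sampler (our own floating-point, non-constant-time illustration; the paper's hardware design, with Peikert's convolution lemma and KL-divergence-tuned precision, is far more involved, and sigma and the tail cut below are assumed toy parameters).

```python
import math
import random

sigma = 2.0                      # assumed toy parameter
tail = int(10 * sigma)           # truncate the distribution at 10*sigma

# Cumulative distribution table (CDT) of |X| for the discrete Gaussian
rho = [math.exp(-x * x / (2 * sigma * sigma)) for x in range(tail + 1)]
total = rho[0] + 2 * sum(rho[1:])
cdt, acc = [], 0.0
for x in range(tail + 1):
    acc += (rho[x] if x == 0 else 2 * rho[x]) / total
    cdt.append(acc)

def sample():
    u = random.random()
    # linear scan for clarity; an implementation would binary-search the CDT
    x = next((i for i, c in enumerate(cdt) if u <= c), tail)
    return x if x == 0 or random.random() < 0.5 else -x

random.seed(0)
xs = [sample() for _ in range(20000)]
assert abs(sum(xs) / len(xs)) < 0.1   # empirical mean close to 0
```

The memory cost is the table itself, which grows with sigma and the required precision; this is the cost that convolution tricks reduce on memory-constrained FPGAs.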

18:17 [Pub][ePrint]

Digital signature schemes are a foundational cryptographic building block in certification and the projection of trust. Based on a signature scheme on committed graphs, we propose a toolkit of certification and proof methods to sign committed topology graphs and to prove properties of their certificates in zero-knowledge.

This toolkit allows an issuer, such as an auditor, to sign the topology representation of an infrastructure. The prover, such as an infrastructure provider, can then convince a verifier of topology properties, such as partitions, connectivity, or isolation, without disclosing the structure of the topology itself. This enables certification of the structure of critical systems, such as infrastructure clouds or outsourced systems, while still maintaining confidentiality. We offer zero-knowledge proofs of knowledge for a general specification language of security goals for virtualized infrastructures, such that high-level security goals can be proven over the topology certificate. Our method builds upon the Camenisch-Lysyanskaya signature scheme, uses honest-verifier proofs, and relies on the strong RSA assumption.

18:17 [Pub][ePrint]

With the advent of networking applications collecting user data on a massive scale, the privacy of individual users is a major concern. The main challenge is the design of a solution that allows the data analyzer to compute global statistics over the set of individual inputs that are protected by some confidentiality mechanism. Joye et al. [7] recently suggested a solution that allows a centralized party to compute the sum of encrypted inputs collected through a smart-metering network. The main shortcomings of this solution are its reliance on a trusted dealer for key distribution and the need for frequent key updates. In this paper we introduce a secure protocol for aggregation of time-series data that is based on the Joye et al. [7] scheme and that eliminates its main shortcomings, namely the requirements for key updates and for a trusted dealer. During the protocol execution, no party apart from the users themselves is aware of the secret keys. Moreover, our scheme supports dynamic group management: in contrast to Joye et al. [7], leave and join operations do not trigger a key update at the users.
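The key-cancellation idea behind this line of aggregation schemes can be illustrated with a toy additive-masking variant (our own simplification; the actual Joye et al. scheme works in a different algebraic setting, and this paper's contribution is precisely to remove the dealer-style setup sketched below).

```python
import hashlib
import random

p = 2**61 - 1   # toy prime modulus (assumption, for illustration only)

def h(t):
    # public hash-to-scalar of the time period t
    return int.from_bytes(hashlib.sha256(str(t).encode()).digest(), "big") % p

random.seed(42)
n = 5
keys = [random.randrange(p) for _ in range(n - 1)]
keys.append((-sum(keys)) % p)    # dealer-style setup: keys sum to zero mod p

def enc(x, k, t):
    # each user masks its input; the masks cancel only in the aggregate
    return (x + k * h(t)) % p

t = 1
xs = [3, 1, 4, 1, 5]
cts = [enc(x, k, t) for x, k in zip(xs, keys)]
agg = sum(cts) % p               # = sum(xs) + h(t) * sum(keys) = sum(xs)
assert agg == sum(xs)
```

No single ciphertext reveals its input, yet the aggregator learns the sum; removing the trusted party that chooses keys summing to zero, and surviving joins and leaves without rekeying everyone, are the problems the paper addresses.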

18:17 [Pub][ePrint]

Handycipher is a low-tech, randomized, symmetric-key stream cipher, simple enough to permit pen-and-paper encryption and decryption of messages, while providing a significantly high level of security by using a nondeterministic encryption procedure, multiple encryption, and randomly generated session keys.

18:17 [Pub][ePrint]

FASER is a family of authenticated ciphers submitted to the CAESAR competition, comprising two parent ciphers: FASER128 and FASER256. In this work we focus on FASER128 and present a key-recovery attack against it that needs at most 64 key words and runs in real time on a PC. The result shows that FASER128 is very insecure. Moreover, our attack can easily be applied to FASER256 and breaks it entirely.

18:17 [Pub][ePrint]

In this paper we mount the cube attack on the Keccak sponge function. The cube attack, formally introduced in 2008, is an algebraic technique applicable to cryptographic primitives whose output can be described as a low-degree polynomial in the input. Our results show that the 5- and 6-round Keccak sponge function is vulnerable to this technique. All the presented attacks have practical complexities and were verified on a desktop PC.
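The core mechanism of the cube attack can be seen on a toy keyed Boolean function (our own example, not a Keccak attack): summing the output over all assignments of a chosen "cube" of public variables cancels every term not divisible by the cube monomial, leaving its superpoly, here a linear function of the key bits.

```python
from itertools import product

# Toy keyed Boolean function: public bits v0, v1 and secret key bits k0, k1
def f(v0, v1, k0, k1):
    return (v0 & v1 & k0) ^ (v0 & k1) ^ (v1 & k0 & k1) ^ k1 ^ (v0 & v1)

def superpoly(k0, k1):
    # Sum f over the cube {v0, v1}: every term not divisible by v0*v1
    # appears an even number of times and cancels over GF(2).
    s = 0
    for v0, v1 in product((0, 1), repeat=2):
        s ^= f(v0, v1, k0, k1)
    return s

# The surviving superpoly is k0 ^ 1: a linear equation in the key bits,
# obtained from chosen public inputs alone.
for k0, k1 in product((0, 1), repeat=2):
    assert superpoly(k0, k1) == k0 ^ 1
```

Collecting enough such linear equations from different cubes lets the attacker solve for the key, which is feasible in practice when, as for round-reduced Keccak, the output degree stays low.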

18:17 [Pub][ePrint]

Locally decodable codes (LDCs)~\\cite{BFLS91,KT00} are error-correcting codes that allow decoding any individual symbol of the message by reading only a few symbols of the codeword. Consider an application such as storage solutions for large data, where errors may occur in the disks (or some disks may simply crash). In such an application, it is often desirable to recover only small portions of the data (to have random access). Thus, in such applications, using LDCs provides enormous efficiency gains over standard error-correcting codes (ECCs), which need to read the entire encoded message to learn even a single bit of information.

Typically, LDCs, as well as standard ECCs, decode the encoded message if up to some bounded fraction of the symbols has been modified. This corresponds to decoding strings of bounded Hamming distance from a valid codeword. An often more realistic metric is the edit distance, measuring the shortest sequence of insertions and deletions (indels) of symbols leading from one word to another. For example, a few indel modifications is a more realistic model for mutations occurring in a genome. Even more commonly, communication over the web may sustain deletions (lost packets) and insertions (noise).\\footnote{Edit distance is indeed "more expressive" than Hamming distance in the sense that $dist_E(x,y)\\leq 2dist_H(x,y)$ always holds, while edit distance 2 may translate to Hamming distance $n$. For instance, consider $x=1010\\ldots 10$, $y=0101\\ldots 01$.}

Standard ECCs for edit distance have been previously considered~\\cite{SZ97}; furthermore, \\cite{SZ97} devised codes with rate and distance (error tolerance) optimal up to constants. LDCs, originally considered in the setting of PCPs~\\cite{BFLS91}, have found many additional applications and generated a lot of fascinating work (see~\\cite{Yek11} and the references therein).

However, combining these two useful settings, LDCs and robustness against indel errors, has never been considered.

In this work, we study the question of constructing LDCs for edit distance. We demonstrate a strong positive result: LDCs for edit distance can be achieved with parameters similar to those of LDCs for Hamming distance. More precisely, we devise a generic transformation from LDCs for Hamming distance to LDCs for edit distance with related parameters.
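The footnote's gap between the two metrics can be checked directly. The snippet below (a standard dynamic-programming Levenshtein implementation, included purely for illustration) confirms that the alternating strings differ in every position under Hamming distance yet are only two indels apart.

```python
def hamming(x, y):
    # number of differing positions (strings of equal length)
    return sum(a != b for a, b in zip(x, y))

def edit(x, y):
    # Levenshtein distance via the standard one-row dynamic program
    m, n = len(x), len(y)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            prev, d[j] = d[j], min(d[j] + 1,          # deletion
                                   d[j - 1] + 1,      # insertion
                                   prev + (x[i - 1] != y[j - 1]))
    return d[n]

x = "10" * 8          # 1010...10
y = "01" * 8          # 0101...01
assert hamming(x, y) == 16   # every position differs
assert edit(x, y) == 2       # delete the leading '1', append a '1'
```

This asymmetry is what makes decoding under edit distance genuinely harder: a handful of indels can misalign the entire codeword, so Hamming-distance decoders cannot be applied directly.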