International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive.

Further sources for updates are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2015-05-01
12:17 [Pub][ePrint]

Sanitizable signature schemes are a type of malleable signature in which the signer grants a designated third party, called the sanitizer, signing rights in the sense that the sanitizer can modify designated parts of the message and adapt the signature accordingly. Ateniese et al. (ESORICS 2005) introduced this primitive and proposed five security properties, which were formalized by Brzuska et al. (PKC 2009). Subsequently, Brzuska et al. (PKC 2010) suggested an additional security notion, called unlinkability, which says that one cannot link sanitized message-signature pairs of the same document, and gave a generic construction based on group signatures that have a certain structure.

Here, we present the first efficient instantiation of unlinkable sanitizable signatures. Our construction is based on a novel type of signature scheme with rerandomizable keys. Intuitively, this property allows one to rerandomize both the signing and the verification key independently but consistently. This lets us sign the message with a rerandomized key and prove in zero-knowledge that the derived key originates from either the signer or the sanitizer. We instantiate this generic idea with Schnorr signatures and efficient $\Sigma$-protocols, which we convert into non-interactive zero-knowledge proofs via the Fiat-Shamir transformation. Our construction is at least one order of magnitude faster than the fastest previously known construction.
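
The rerandomization property is easy to see for Schnorr signatures: shifting the secret key by $\rho$ corresponds to multiplying the verification key by $g^\rho$, and the shifted pair is again a valid key pair under which ordinary signing and verification work unchanged. A minimal Python sketch over a toy group (parameters far too small for real use; the zero-knowledge proof tying the rerandomized key to the signer or sanitizer is omitted):

```python
import hashlib
import secrets

# Toy Schnorr group: g = 4 generates the subgroup of prime order q = 1019
# in Z_p^* with p = 2039 = 2q + 1. Illustrative only; real parameters
# would be ~256-bit.
p, q, g = 2039, 1019, 4

def H(R, m):
    h = hashlib.sha256(f"{R}|{m}".encode()).digest()
    return int.from_bytes(h, "big") % q

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def sign(sk, m):
    k = secrets.randbelow(q - 1) + 1
    R = pow(g, k, p)
    c = H(R, m)
    return c, (k + c * sk) % q

def verify(vk, m, sig):
    c, s = sig
    # Recompute R = g^s * vk^(-c) mod p and check the challenge.
    R = pow(g, s, p) * pow(vk, -c, p) % p
    return H(R, m) == c

def rerandomize(sk, vk, rho):
    # Keys are rerandomized independently but consistently:
    # g^(sk + rho) = vk * g^rho, so (sk', vk') is again a valid key pair.
    return (sk + rho) % q, vk * pow(g, rho, p) % p

sk, vk = keygen()
sk2, vk2 = rerandomize(sk, vk, secrets.randbelow(q))
assert pow(g, sk2, p) == vk2                 # consistency of the new pair
assert verify(vk2, "msg", sign(sk2, "msg"))  # signatures verify as usual
```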

12:17 [Pub][ePrint]

Homomorphic MACs, introduced by Gennaro and Wichs in 2013, allow anyone to validate computations on authenticated data without knowledge of the secret key. Moreover, the secret-key owner can verify the validity of the computation without needing to know the original (authenticated) inputs. Beyond security, homomorphic MACs are required to produce short tags (succinctness) and to support composability (i.e., outputs of authenticated computations should be re-usable as inputs for new computations).

At Eurocrypt 2013, Catalano and Fiore proposed two realizations of homomorphic MACs that support a restricted class of computations (arithmetic circuits of polynomial degree), are practically efficient, but fail to achieve both succinctness and composability at the same time.

In this paper, we generalize the work of Catalano and Fiore in several ways. First, we abstract away their results using the notion of encodings with limited malleability, thus yielding new schemes based on different algebraic settings. Next, we generalize their constructions to work with graded encodings, and more abstractly with $k$-linear groups. The main advantage of this latter approach is that it allows for homomorphic MACs which are (somewhat) composable while retaining succinctness. Interestingly, our construction uses graded encodings in a generic way. Thus, all its limitations (limited composability and non-constant size of the tags) solely depend on the fact that currently known multilinear maps share similar constraints. This means, for instance, that our scheme would support arbitrary circuits (polynomial depth) if we had compact multilinear maps with an exponential number of levels.
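
The flavor of the Catalano-Fiore construction can be conveyed with a toy sketch: a tag for message $m$ under label $\tau$ is a degree-1 polynomial $y$ with $y(0)=m$ and $y(x)=F_K(\tau)$ for a secret point $x$, and homomorphic evaluation is polynomial arithmetic on tags. Note how the tag length grows with the degree of the evaluated circuit, which is exactly the succinctness tension discussed above. In this Python sketch, HMAC stands in for the PRF and the field size is an illustrative choice:

```python
import hashlib
import hmac
import secrets

P = 2**127 - 1  # prime field (illustrative choice)

def keygen():
    # Secret key: a PRF key K and a random secret evaluation point x.
    return secrets.token_bytes(32), secrets.randbelow(P - 1) + 1

def prf(K, label):
    d = hmac.new(K, label.encode(), hashlib.sha256).digest()
    return int.from_bytes(d, "big") % P

def auth(key, label, m):
    # Tag = degree-1 polynomial y with y(0) = m and y(x) = F_K(label).
    K, x = key
    y1 = (prf(K, label) - m) * pow(x, -1, P) % P
    return [m % P, y1]

# Homomorphic operations: tags are coefficient vectors, combined by
# polynomial addition / multiplication over Z_P (so degree, hence tag
# size, grows multiplicatively).
def tag_add(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [(u + v) % P for u, v in zip(a, b)]

def tag_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, u in enumerate(a):
        for j, v in enumerate(b):
            out[i + j] = (out[i + j] + u * v) % P
    return out

def poly_eval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def verify(key, labels, f, m, tag):
    # Check y(0) = m and y(x) = f(F_K(label_1), ..., F_K(label_n)),
    # without needing the original authenticated inputs.
    K, x = key
    rs = [prf(K, l) for l in labels]
    return tag[0] == m % P and poly_eval(tag, x) == f(*rs)

key = keygen()
t1, t2 = auth(key, "in1", 3), auth(key, "in2", 5)
t = tag_add(tag_mul(t1, t2), t1)  # authenticates f(m1, m2) = m1*m2 + m1
assert verify(key, ["in1", "in2"], lambda r1, r2: (r1 * r2 + r1) % P, 18, t)
```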

12:17 [Pub][ePrint]

We revisit the security (as a pseudorandom permutation) of cascading-based constructions for block-cipher key-length extension. Previous works typically considered the extreme case where the adversary is given the entire codebook of the construction, the only complexity measure being the number $q_e$ of queries to the underlying ideal block cipher, representing the adversary's secret-key-independent computation. Here, we initiate a systematic study of the more natural case of an adversary restricted to adaptively learning a number $q_c$ of plaintext/ciphertext pairs that is less than the entire codebook. For any such $q_c$, we aim to determine the highest number of block-cipher queries $q_e$ the adversary can issue without being able to successfully distinguish the construction (under a secret key) from a random permutation.

More concretely, we show the following results for key-length extension schemes using a block cipher with $n$-bit blocks and $\kappa$-bit keys:

- Plain cascades of length $\ell = 2r+1$ are secure whenever $q_c q_e^r \ll 2^{r(\kappa+n)}$, $q_c \ll 2^{\kappa}$, and $q_e \ll 2^{2\kappa}$. The bound for $r = 1$ also applies to two-key triple encryption (as used within Triple DES).

- The $r$-round XOR-cascade is secure as long as $q_c q_e^r \ll 2^{r(\kappa+n)}$, matching an attack by Gazi (CRYPTO 2013).

- We fully characterize the security of Gazi and Tessaro's two-call 2XOR construction (EUROCRYPT 2012) for all values of $q_c$, and note that the addition of a third whitening step strictly increases security for $2^{n/4} \le q_c \le 2^{3n/4}$. We also propose a variant of this construction without re-keying that achieves comparable security levels.
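
The constructions compared above can be sketched with a toy block cipher standing in for the ideal cipher $E$ (a 4-round Feistel here, carrying no security claim; the whitening-key layout of the XOR-cascade follows the generic pattern, and details of specific schemes may differ):

```python
import hashlib

MASK16 = (1 << 16) - 1

def F(k, r, x):
    # Toy round function derived from SHA-256 (illustration only).
    d = hashlib.sha256(f"{k}|{r}|{x}".encode()).digest()
    return int.from_bytes(d[:2], "big")

def E(k, m):
    # Toy 4-round Feistel on 32-bit blocks, standing in for the block cipher.
    L, R = m >> 16, m & MASK16
    for r in range(4):
        L, R = R, L ^ F(k, r, R)
    return (L << 16) | R

def D(k, c):
    # Inverse Feistel: undo the rounds in reverse order.
    L, R = c >> 16, c & MASK16
    for r in reversed(range(4)):
        L, R = R ^ F(k, r, L), L
    return (L << 16) | R

def cascade(keys, m):
    # Plain cascade of length len(keys): E_kl(...E_k2(E_k1(m))...).
    for k in keys:
        m = E(k, m)
    return m

def two_key_triple(k1, k2, m):
    # Two-key triple encryption as used within Triple DES: E_k1(E_k2(E_k1(m))).
    return E(k1, E(k2, E(k1, m)))

def xor_cascade(keys, wh, m):
    # r-round XOR-cascade: whitening keys wh[0..r] XORed between cipher calls.
    m ^= wh[0]
    for k, w in zip(keys, wh[1:]):
        m = E(k, m) ^ w
    return m

def xor_cascade_inv(keys, wh, c):
    for k, w in zip(reversed(keys), reversed(wh[1:])):
        c = D(k, c ^ w)
    return c ^ wh[0]

m = 0x12345678
assert D(7, E(7, m)) == m
assert D(5, D(9, D(5, two_key_triple(5, 9, m)))) == m
assert xor_cascade_inv([1, 2, 3], [10, 20, 30, 40],
                       xor_cascade([1, 2, 3], [10, 20, 30, 40], m)) == m
```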

12:17 [Pub][ePrint]

In this paper, we study the problem of factoring an RSA modulus $N=pq$ in polynomial time when $p$ is a weak prime, that is, when $p$ can be expressed as $ap=u_0+M_1u_1+\ldots+M_ku_k$ for some $k$ integers $M_1,\ldots,M_k$ and $k+2$ suitably small parameters $a,u_0,\ldots,u_k$. We further compute a lower bound on the number of weak moduli, that is, moduli with at least one weak prime factor, in the interval $[2^{2n},2^{2(n+1)}]$, and show that this number is much larger than the number of RSA moduli whose prime factors satisfy Coppersmith's conditions, effectively extending the likelihood of factoring RSA moduli. We also extend our findings to moduli composed of two weak primes.
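
As a concrete illustration of the weak-prime form, take the special case $a = 1$ and $M_i = M^i$ for a fixed base $M$: then the $u_i$ are simply the base-$M$ digits of $p$, and $p$ is weak in this sense when those digits are all small. The Fermat prime 65537 is an extreme example in base 256. A short sketch of the decomposition check only (the paper's actual factoring algorithm is not reproduced here):

```python
def base_M_digits(n, M):
    # Base-M expansion: n = u0 + M*u1 + M^2*u2 + ... with 0 <= ui < M.
    digits = []
    while n:
        digits.append(n % M)
        n //= M
    return digits or [0]

# 65537 = 0x10001 has base-256 digits [1, 0, 1]: with a = 1 and M_i = 256^i
# it fits the form a*p = u0 + M_1*u1 + M_2*u2 with very small coefficients.
digits = base_M_digits(65537, 256)
assert digits == [1, 0, 1]
assert sum(u * 256**i for i, u in enumerate(digits)) == 65537
```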

12:17 [Pub][ePrint]

We present three attacks on the Prime Power RSA with modulus $N=p^rq$. In the first attack, we consider a public exponent $e$ satisfying an equation $ex-\phi(N)y=z$ where $\phi(N)=p^{r-1}(p-1)(q-1)$. We show that one can factor $N$ if the parameters $|x|$ and $|z|$ satisfy $|xz|

12:17 [Pub][ePrint]

Attribute-based signatures, introduced by Maji \emph{et al.}, are signatures that prove that an authority has issued the signer ``attributes'' that satisfy some specified predicate. In existing attribute-based signature schemes, keys are valid indefinitely once issued. In this paper, we initiate the study of incorporating time into attribute-based signatures, where a time instance is embedded in every signature, and attributes are restricted to producing signatures with times that fall in designated validity intervals. We provide three implementations that vary in the granularity of assigning validity intervals to attributes: a scheme in which each attribute has its own independent validity interval, a scheme in which all attributes share a common validity interval, and a scheme in which sets of attributes share validity intervals. All of our schemes provide anonymity to the signer, hide the attributes used to create the signature, and provide collusion resistance between users.

12:17 [Pub][ePrint]

Recently, Döttling et al. (ASIACRYPT 2012) proposed the first chosen-ciphertext (IND-CCA) secure public-key encryption scheme from the learning parity with noise (LPN) assumption. In this work we give an alternative scheme which is conceptually simpler and more efficient.

At the core of our construction is a trapdoor technique originally proposed for lattices by Micciancio and Peikert (EUROCRYPT 2012), which we adapt to the LPN setting. The main technical tool is a new double-trapdoor mechanism, together with a trapdoor switching lemma based on a computational variant of the leftover hash lemma.

12:17 [Pub][ePrint]

Side-channel attacks usually apply a divide-and-conquer strategy, separately recovering different parts of the secret. Their efficiency in practice relies on the adversary's ability to precisely assess the success or failure of each of these recoveries. This makes the study of the attack success rate a central problem in side-channel analysis. In this paper we tackle this issue in two different settings for the most popular attack, namely Correlation Power Analysis (CPA). In the first setting, we assume that the targeted subkey is known and we compare the state-of-the-art formulae expressing the success rate as a function of the leakage noise and the algebraic properties of the cryptographic primitive. We also make the link between these formulae and the recent work of Fei et al. at CHES 2012. In the second setting, the subkey is no longer assumed to be known, and we introduce the notion of confidence level in an attack result, allowing for the study of different heuristics. Through experiments, we show that the rank evolution of a subkey hypothesis can be exploited to compute a better confidence level than considering only the final result.
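
A CPA subkey recovery in the known-subkey setting can be simulated end to end: predict a leakage value (here, the Hamming weight of an S-box output), correlate the predictions under each subkey hypothesis against noisy traces, and rank the hypotheses. A self-contained Python sketch with simulated traces (a random permutation stands in for the real S-box, and all parameters are illustrative):

```python
import random

SBOX = list(range(256))
random.Random(1).shuffle(SBOX)  # stand-in for a real S-box (e.g. AES)

def hw(x):
    # Hamming weight: the classic CPA leakage model.
    return bin(x).count("1")

def correlation(xs, ys):
    # Pearson correlation coefficient, computed directly.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def simulate_traces(subkey, n, sigma, rng):
    # Leakage = HW of the S-box output plus Gaussian noise of std sigma.
    pts = [rng.randrange(256) for _ in range(n)]
    traces = [hw(SBOX[p ^ subkey]) + rng.gauss(0, sigma) for p in pts]
    return pts, traces

def cpa_ranks(pts, traces):
    # Rank all 256 subkey hypotheses by |correlation| with the traces.
    scores = []
    for k in range(256):
        pred = [hw(SBOX[p ^ k]) for p in pts]
        scores.append((abs(correlation(pred, traces)), k))
    return [k for _, k in sorted(scores, reverse=True)]

rng = random.Random(42)
pts, traces = simulate_traces(0x3C, n=500, sigma=1.0, rng=rng)
ranks = cpa_ranks(pts, traces)
assert ranks[0] == 0x3C  # with this SNR and trace count, the true subkey wins
```

Tracking how `ranks.index(true_subkey)` evolves as more traces are added is the kind of rank-evolution heuristic the paper turns into a confidence level.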

2015-04-29
15:17 [Pub][ePrint]

Efforts to make password cracking more difficult have a long history. However, the research community has not addressed the change of context from traditional Unix mainframe systems to web applications, which face new threats (DoS) and have fewer constraints (client-side computation is allowed). In the absence of updated guidance, a variety of solutions are scattered all over the web, from amateur to somewhat professional. However, even the best references have issues such as incomplete details, misuse of terminology, assertion of requirements that are not adequately justified, and too many options presented to the developer, opening the door to potential mistakes. The purpose of this research note is to present a solution with complete details and a concise summary of the requirements, so that developers can readily implement it with confidence, assuming that the solution is endorsed by the research community. The proposed solution involves client-side processing of a heavy computation in combination with a server-side hash computation. It follows a similar approach to a few other proposals on the web, but is more complete and justified than any that we found.
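
The proposed division of labor can be sketched as follows, under assumed parameter choices (PBKDF2 as the client-side heavy computation and SHA-256 as the cheap server-side hash; the note's actual algorithm and parameters may differ):

```python
import hashlib
import hmac
import os

# Illustrative work factor; real deployments should follow current guidance.
ITERATIONS = 200_000

def client_stretch(password, salt):
    # Heavy computation runs on the client, so the server cannot be
    # DoS-ed through expensive login attempts.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)

def server_record(stretched):
    # Cheap server-side hash: a stolen database reveals neither the
    # password nor the client-stretched value, and the server does O(1) work.
    return hashlib.sha256(stretched).hexdigest()

# Registration: server stores only the salt and the final record.
salt = os.urandom(16)
stored = server_record(client_stretch("correct horse", salt))

# Login: client re-derives the stretched value, server re-hashes and compares.
candidate = server_record(client_stretch("correct horse", salt))
assert hmac.compare_digest(stored, candidate)
assert server_record(client_stretch("wrong pass", salt)) != stored
```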

15:17 [Pub][ePrint]

We construct the first fully succinct garbling scheme for RAM programs, assuming the existence of indistinguishability obfuscation for circuits and one-way functions. That is, the size, space requirements, and runtime of the garbled program are the same as those of the input program, up to poly-logarithmic factors and a polynomial in the security parameter. The scheme can be used to construct indistinguishability obfuscators for RAM programs with comparable efficiency, at the price of requiring sub-exponential security of the underlying primitives.

The scheme builds on the recent schemes of Koppula-Lewko-Waters and Canetti-Holmgren-Jain-Vaikuntanathan [STOC 15]. A key technical challenge here is how to combine the fixed-prefix technique of KLW, which was developed for deterministic programs, with randomized Oblivious RAM techniques. To overcome that, we develop a method for arguing about the indistinguishability of two obfuscated randomized programs that use correlated randomness. Along the way, we also define and construct garbling schemes that offer only partial protection. These may be of independent interest.

15:17 [Pub][ePrint]

In October 2012, the American National Institute of Standards and Technology (NIST) announced the selection of Keccak as the winner of the SHA-3 Cryptographic Hash Algorithm Competition [10,11]. This concluded an open competition that was remarkable both for its magnitude and the involvement of the cryptographic community. Public review is of paramount importance to increase the confidence in the new standard and to favor its quick adoption. The SHA-3 competition explicitly took this into account by giving open access to the candidate algorithms and everyone in the cryptographic community could try to break them, compare their performance, or simply give comments.