International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).

2015-03-04
19:17 [Pub][ePrint]

A pseudorandom function (PRF) is a keyed function $F \colon {\cal K}\times{\cal X}\rightarrow {\cal Y}$ where, for a random key $k\in{\cal K}$, the function $F(k,\cdot)$ is indistinguishable from a uniformly random function, given black-box access. A \emph{key-homomorphic} PRF has the additional feature that for any keys $k,k'$ and any input $x$, we have $F(k + k', x) = F(k,x) \oplus F(k',x)$ for some group operations $+, \oplus$ on ${\cal K}$ and ${\cal Y}$, respectively. A \emph{constrained} PRF for a family of sets ${\cal S} \subseteq {\cal P}({\cal X})$ has the property that, given any key $k$ and set $S \in {\cal S}$, one can efficiently compute a ``constrained'' key $k_S$ that enables evaluation of $F(k,x)$ on all inputs $x\in S$, while the values $F(k,x)$ for $x \notin S$ remain pseudorandom even given $k_S$.

In this paper we construct PRFs that are simultaneously constrained \emph{and} key homomorphic, where the homomorphic property holds even for constrained keys. We first show that the multilinear map-based bit-fixing and circuit-constrained PRFs of Boneh and Waters (Asiacrypt 2013) can be modified to also be \emph{key-homomorphic}. We then show that the LWE-based key-homomorphic PRFs of Banerjee and Peikert (Crypto 2014) are essentially already \emph{prefix-constrained} PRFs, using a (non-obvious) definition of constrained keys and associated group operation. Moreover, the constrained keys themselves are pseudorandom, and the constraining and evaluation functions can all be computed in low depth.

As an application of key-homomorphic constrained PRFs, we construct a proxy re-encryption scheme with fine-grained access control. This scheme allows storing encrypted data on an untrusted server, where each file can be encrypted relative to some attributes, so that only parties whose constrained keys match the attributes can decrypt. Moreover, the server can re-key (arbitrary subsets of) the ciphertexts without learning anything about the plaintexts, thus permitting efficient and fine-grained revocation.
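The key-homomorphic property is easiest to see in the classic random-oracle construction $F(k,x) = H(x)^k$ over a group, where keys add and outputs multiply. A minimal sketch, assuming toy parameters (the modulus and hash below are illustrative stand-ins, not a secure instantiation):

```python
import hashlib
import random

# Toy instantiation of the classic key-homomorphic PRF F(k, x) = H(x)^k.
# Key group: integers mod p-1 under addition; output group: Z_p^* under
# multiplication. Parameters are for illustration only.
p = 2 ** 127 - 1                  # a Mersenne prime; outputs live in Z_p^*
ORD = p - 1                       # order of Z_p^*; keys are added mod ORD

def H(x: bytes) -> int:
    """Hash to a nonzero element of Z_p^* (random-oracle stand-in)."""
    h = int.from_bytes(hashlib.sha256(x).digest(), "big")
    return (h % ORD) + 1          # value in [1, p-1]

def F(k: int, x: bytes) -> int:
    return pow(H(x), k, p)

k1, k2 = random.randrange(ORD), random.randrange(ORD)
x = b"some input"
# Key-homomorphism: F(k1 + k2, x) = F(k1, x) * F(k2, x)
assert F((k1 + k2) % ORD, x) == (F(k1, x) * F(k2, x)) % p
```

Reducing the key sum mod $p-1$ is what makes the exponent arithmetic line up with the group law, since $g^{p-1} = 1$ for every $g \in \mathbb{Z}_p^*$.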

19:17 [Pub][ePrint]

As two important cryptanalytic methods, impossible differential cryptanalysis and integral cryptanalysis have attracted much attention in recent years. Although relations among other important cryptanalytic approaches have been investigated, the link between these two methods has been missing. The motivation in this paper is to fix this gap and establish links between impossible differential cryptanalysis and integral cryptanalysis.

Firstly, by introducing the concepts of structure and dual structure, we prove that $a\rightarrow b$ is an impossible differential of a structure $\mathcal E$ if and only if it is a zero correlation linear hull of the dual structure $\mathcal E^\bot$. More specifically, constructing a zero correlation linear hull of a Feistel structure with $SP$-type round function where $P$ is invertible is equivalent to constructing an impossible differential of the same structure with $P^T$ instead of $P$, and constructing a zero correlation linear hull of an SPN structure is equivalent to constructing an impossible differential of the same structure with $(P^{-1})^T$ instead of $P$. Moreover, our proof shows that the automatic search tool presented by Wu and Wang can find all impossible differentials of both Feistel structures with $SP$-type round functions and SPN structures, which is useful for the provable security of block ciphers against impossible differential cryptanalysis.

Secondly, by establishing some Boolean equations, we show that a zero correlation linear hull always implies the existence of an integral distinguisher, while a special integral implies the existence of a zero correlation linear hull. With this observation we improve the integral distinguishers of Feistel structures by one round, build a $24$-round integral distinguisher of CAST-$256$ (based on which we propose the best known key recovery attack on reduced-round CAST-$256$ in the non-weak-key model), and present a $12$-round integral distinguisher of SMS4 and an $8$-round integral distinguisher of Camellia without $FL/FL^{-1}$. Moreover, this result provides a novel way to establish integral distinguishers and to convert known-plaintext attacks into chosen-plaintext attacks.

Finally, we conclude that an $r$-round impossible differential of $\mathcal E$ always leads to an $r$-round integral distinguisher of the dual structure $\mathcal E^\bot$. In the case that $\mathcal E$ and $\mathcal E^\bot$ are linearly equivalent, we derive a direct link between impossible differentials and integral distinguishers of $\mathcal E$. Specifically, we obtain that an $r$-round impossible differential of an SPN structure that adopts a bit permutation as its linear layer always indicates the existence of an $r$-round integral distinguisher. Based on this newly established link, we deduce that impossible differentials of SNAKE(2), PRESENT, PRINCE and ARIA, which are independent of the choices of the $S$-boxes, always imply the existence of integral distinguishers.

Our results help to classify different cryptanalytic tools. Furthermore, when designing a block cipher, the designers need to demonstrate that the cipher has sufficient security margins against the important cryptanalytic approaches, a demanding task given how many cryptanalytic tools are now available. Our results facilitate this security evaluation process.
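The zero-sums underlying integral distinguishers ultimately rest on a simple balancedness fact: XORing the outputs of any bijective S-box over all of its inputs gives zero, because every output value occurs exactly once. A quick sketch of this fact with a random bijection (a toy check, not tied to any cipher in the paper):

```python
import random
from functools import reduce

# Any bijective S-box is "balanced": the XOR of its outputs over all
# inputs equals the XOR of all values 0..255, which is 0 (each bit
# position is set in exactly 128 of the values). Integral distinguishers
# propagate exactly this kind of zero-sum through bijective components.
random.seed(0)
sbox = list(range(256))
random.shuffle(sbox)              # a random 8-bit bijection

zero_sum = reduce(lambda a, b: a ^ b, (sbox[x] for x in range(256)))
assert zero_sum == 0
```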

19:17 [Pub][ePrint]

We consider tweakable blockciphers with beyond-birthday-bound security. Landecker, Shrimpton, and Terashima (CRYPTO 2012) gave the first construction with security up to $\mathcal{O}(2^{2n/3})$ adversarial queries ($n$ denotes the block size in bits of the underlying blockcipher), and for which changing the tweak does not require changing the keys for blockcipher calls. In this paper, we extend this construction, which consists of two rounds of a previous proposal by Liskov, Rivest, and Wagner (CRYPTO 2002), by considering larger numbers of rounds $r > 2$. We show that asymptotically, as $r$ increases, the resulting tweakable blockcipher approaches security up to the information bound, namely $\mathcal{O}(2^n)$ queries. Our analysis makes use of a coupling argument, and carries some similarities with the analysis of the iterated Even-Mansour cipher by Lampe, Patarin, and Seurin (ASIACRYPT 2012).
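The round being chained here is the LRW construction $y = E_K(x \oplus h(t)) \oplus h(t)$, where $h$ is an XOR-universal hash of the tweak $t$; changing the tweak only changes the mask, never the blockcipher key. A minimal sketch of an $r$-round chain, assuming toy parameters (8-bit blocks, seeded random permutations standing in for the keyed blockcipher, multiplication in GF($2^8$) as the hash):

```python
import random

def gf_mul(a: int, b: int) -> int:
    """Multiplication in GF(2^8) mod x^8 + x^4 + x^3 + x + 1."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def make_perm(seed: int) -> list:
    """Stand-in for a keyed blockcipher on 8-bit blocks."""
    rng = random.Random(seed)
    p = list(range(256))
    rng.shuffle(p)
    return p

def clrw(perms, hkeys, tweak, x):
    """r chained LRW rounds; round i masks with h_i(t) = hkeys[i] * t."""
    for perm, hk in zip(perms, hkeys):
        mask = gf_mul(hk, tweak)
        x = perm[x ^ mask] ^ mask
    return x

r = 3
perms = [make_perm(i) for i in range(r)]
hkeys = [17, 42, 99]              # independent hash keys, one per round
# For every tweak the construction is a permutation of the block space,
# and a new tweak needs only new masks, not new blockcipher keys.
for t in (1, 2, 200):
    assert sorted(clrw(perms, hkeys, t, x) for x in range(256)) == list(range(256))
```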

19:17 [Pub][ePrint]

Recently, a number of relations have been established among previously known statistical attacks on block ciphers. Leander showed in 2011 that statistical saturation distinguishers are on average equivalent to multidimensional linear distinguishers. Further relations between these two types of distinguishers and the integral and zero-correlation distinguishers were established by Bogdanov et al. Knowledge about such relations is useful for the classification of statistical attacks in order to determine those that give essentially complementary information about the security of block ciphers. The purpose of the work presented in this paper is to explore relations between differential and linear attacks. The mathematical link between linear and differential attacks was discovered by Chabaud and Vaudenay already in 1994, but it has never been used in practice. We will show how to use it for computing accurate estimates of truncated differential probabilities from accurate estimates of correlations of linear approximations. We demonstrate this method in practice and give the first instantiation of multiple differential cryptanalysis using the LLR statistical test on PRESENT. On a more theoretical side, we establish an equivalence between a multidimensional linear distinguisher and a truncated differential distinguisher, and show that certain zero-correlation linear distinguishers exist if and only if certain impossible differentials exist.
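For an $n$-bit map, the Chabaud-Vaudenay link expresses the differential counts as a signed sum of squared linear correlations: $\delta(a,b) = \sum_{u,v} (-1)^{u\cdot a \oplus v\cdot b}\, c(u,v)^2$, where $c(u,v)$ is the correlation of the linear approximation with input mask $u$ and output mask $v$. A sketch verifying this identity exhaustively on the 4-bit PRESENT S-box:

```python
# Verify the Chabaud-Vaudenay link on a 4-bit S-box: every entry of the
# differential distribution table (DDT) equals the signed sum of squared
# correlations,  ddt[a][b] = sum_{u,v} (-1)^{u.a ^ v.b} * cor(u,v)^2.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # the PRESENT S-box
N = 16

def parity(x: int) -> int:
    return bin(x).count("1") & 1

# Differential counts, computed directly.
ddt = [[0] * N for _ in range(N)]
for a in range(N):
    for x in range(N):
        ddt[a][SBOX[x] ^ SBOX[x ^ a]] += 1

# Correlations c(u, v) of the linear approximations u.x = v.S(x).
cor = [[sum((-1) ** parity((u & x) ^ (v & SBOX[x])) for x in range(N)) / N
        for v in range(N)] for u in range(N)]

# Reconstruct every DDT entry from the squared correlations.
for a in range(N):
    for b in range(N):
        s = sum((-1) ** parity((u & a) ^ (v & b)) * cor[u][v] ** 2
                for u in range(N) for v in range(N))
        assert abs(s - ddt[a][b]) < 1e-9
```

The $a = b = 0$ entry is a useful sanity check: Parseval's relation gives $\sum_u c(u,v)^2 = 1$ for each $v$, so the sum over all $16$ masks $v$ is $16 = \delta(0,0)$.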

19:17 [Pub][ePrint]

The sheer number of apparently different statistical attacks on block ciphers has raised the question of their relationships, which would allow one to classify them and determine those that give essentially complementary information about the security of block ciphers. While mathematical links between some statistical attacks have been derived over the last couple of years, the important link between general truncated differential and multidimensional linear attacks has been missing. In this work we close this gap. The new link is then exploited to relate the complexities of chosen-plaintext and known-plaintext distinguishing attacks of differential and linear types, and further, to explore the relations between the key-recovery attacks. Our analysis shows that a statistical saturation attack is the same as a truncated differential attack, which allows us, for the first time, to provide a justifiable analysis of the complexity of the statistical saturation attack and discuss its validity on 24 rounds of the PRESENT block cipher. By studying the data, time and memory complexities of a multidimensional linear key-recovery attack and its relation to a truncated differential one, we also show that in most cases a known-plaintext attack can be transformed into a less costly chosen-plaintext attack. In particular, we show that there is a differential attack in the chosen-plaintext model on 26 rounds of PRESENT with lower memory complexity than the best previous attack, which assumes known plaintext. The links between the statistical attacks discussed in this paper give further examples of attacks where the method used to sample the data required by the statistical test is more differentiating than the method used for finding the distinguishing property.

19:17 [Pub][ePrint]

A rapid growth of Machine-to-Machine (M2M) communications is expected in the coming years. M2M applications create new challenges for in-field testing, since they typically operate in environments where human supervision is difficult or impossible; in addition, M2M networks may be large. We propose to automate Logic Built-In Self-Test (LBIST) by using a centralized test management system which can test all end-point M2M devices in the same network. Such a method makes it possible to transfer some of the LBIST functionality from the devices under test to the test management system. This is important for M2M devices which have very limited computing resources and are commonly battery-powered. In addition, the presented method provides protection against both random and malicious faults, including some types of hardware Trojans.

19:17 [Pub][ePrint]

In this paper, we analyse the higher order differential properties of NORX, an AEAD scheme submitted to the CAESAR competition. NORX is a sponge-based construction. Previous efforts, by the designers themselves, focused on first order differentials and rotational properties for a small number of steps of the NORX core permutation, which turn out to have quite low biases when extended to the full permutation. In our work, we identify higher order differential properties that yield practical distinguishers of the full 4-round permutation for NORX64 and of half a round less than the full permutation (i.e., 3.5 rounds) for NORX32. These distinguishers are similar to zero-sum distinguishers but are probabilistic rather than deterministic in nature, and are of order as low as four. The distinguishers have very low complexities, and are significantly more efficient than the generic generalized birthday attack for the same configurations of zero-sums. While these distinguishers identify sharper non-randomness than what the designers identified, our results do not lend themselves to cryptanalysis of full-round NORX encryption or authentication.
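The deterministic variant of such distinguishers rests on a basic fact about higher order derivatives: summing a function of algebraic degree $d$ over any coset of a subspace of dimension greater than $d$ gives zero. A toy illustration with a hypothetical degree-3 Boolean function (unrelated to the actual NORX permutation):

```python
from functools import reduce

# A degree-3 Boolean function on 8 bits (chosen for illustration).
def f(x: int) -> int:
    b = [(x >> i) & 1 for i in range(8)]
    return (b[0] & b[1] & b[2]) ^ (b[3] & b[4]) ^ (b[1] & b[6] & b[7]) ^ b[5]

# XOR-summing f over any coset of the 4-dimensional subspace spanned by
# bits 0..3 takes the order-4 derivative of f, which vanishes because
# deg(f) = 3 < 4. This is the zero-sum behind higher order distinguishers.
for base in (0x00, 0x50, 0xA0, 0xF0):
    assert reduce(lambda acc, u: acc ^ f(base ^ u), range(16), 0) == 0
```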

19:17 [Pub][ePrint]

In his seminal result, Cleve [STOC'86] established that secure distributed computation---guaranteeing fairness---is impossible in the presence of dishonest majorities. A generous number of proposals for relaxed notions of fairness followed this seminal result, weakening the desired security guarantees in various ways. While these works also give completeness results (i.e., the ability to design protocols which achieve their fairness notion), their assessment is typically of an all-or-nothing nature. That is, when presented with a protocol which is not designed to be fair according to their respective notion, they would most likely render it unfair and make no further statement about it.

In this work we put forth a comparative approach to fairness. We present new intuitive notions that, when presented with two arbitrary protocols, provide the means to answer the question "Which of the protocols is fairer?" The basic idea is that we can use an appropriate utility function to express the preferences of an adversary who wants to break fairness. Thus, we can compare protocols with respect to how fair they are, placing them in a partial order according to this relative-fairness relation.

After formulating such utility-based fairness notions, we turn to the question of finding optimal protocols---i.e., maximal elements in the above partial order. We investigate---and answer---this question for secure function evaluation, both in the two-party and multi-party settings.

To our knowledge, the only other fairness notion providing some sort of comparative statement is that of 1/p-security (aka "partial fairness") by Gordon and Katz [Eurocrypt'10]. We also show in this paper that for a special class of utilities our notion strictly implies 1/p-security. In addition, we fix a shortcoming of the definition which is exposed by our comparison, thus strengthening that result.

19:17 [Pub][ePrint]

Password-authenticated key exchange (PAKE) protocols allow two players to agree on a shared high-entropy secret key that depends on their own passwords only. Following Gennaro and Lindell's approach, with a new kind of smooth projective hash functions (SPHFs), Katz and Vaikuntanathan recently came up with the first concrete one-round PAKE protocols, where the two players just have to send simultaneous flows to each other. The first one is secure in the Bellare-Pointcheval-Rogaway (BPR) model and the second one in Canetti's UC framework, but at the cost of simulation-sound non-interactive zero-knowledge (SSNIZK) proofs (one for the BPR-secure protocol and two for the UC-secure one), which make the overall constructions not really efficient.

This paper follows their path with, first, a new efficient instantiation of an SPHF on Cramer-Shoup ciphertexts, which allows us to get rid of the SSNIZK proof and leads to the design of the most efficient one-round PAKE known so far in the BPR model, and in addition without pairings.

In the UC framework, the security proof required the simulator to be able to extract the hashing key of the SPHF, hence the additional SSNIZK proof. We improve the way the latter extractability is obtained by introducing the notion of trapdoor smooth projective hash functions (TSPHFs). Our concrete instantiation leads to the most efficient one-round PAKE UC-secure against static corruptions to date.

We additionally show how these SPHFs and TSPHFs can be used for blind signatures and zero-knowledge proofs with straight-line extractability.
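The mechanics of an SPHF are easiest to see on the classic example for the Diffie-Hellman language $\{(g_1^r, g_2^r)\}$: the hashing key is $(\alpha, \beta)$, the projection key is $hp = g_1^\alpha g_2^\beta$, and for a word in the language the hash can be computed either from the hashing key or from the projection key plus the witness $r$. A minimal sketch, assuming toy group parameters (for illustration only, not a secure group choice):

```python
import random

# Classic SPHF for the Diffie-Hellman language {(g1^r, g2^r)}.
# Toy modulus and generators -- illustrative stand-ins only.
p = 2 ** 127 - 1
g1, g2 = 3, 5
random.seed(7)

# Hashing key (kept secret) and its public projection key hp = g1^a * g2^b.
alpha = random.randrange(1, p - 1)
beta = random.randrange(1, p - 1)
hp = (pow(g1, alpha, p) * pow(g2, beta, p)) % p

# A word in the language, together with its witness r.
r = random.randrange(1, p - 1)
word = (pow(g1, r, p), pow(g2, r, p))

# Hash computed with the hashing key: u1^alpha * u2^beta ...
h_hash = (pow(word[0], alpha, p) * pow(word[1], beta, p)) % p
# ... equals the hash computed with the projection key and witness: hp^r.
h_proj = pow(hp, r, p)
assert h_hash == h_proj
```

Smoothness (the other half of the definition) says that on a word outside the language, the hash is statistically independent of $hp$; the correctness check above is only the half that a code sketch can exercise.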

19:17 [Pub][ePrint]

A definition of \textit{online authenticated-encryption} (OAE), call it OAE1, was given by Fleischmann, Forler, and Lucks (2012). It has become a popular definitional target because, despite allowing encryption to be online, security is supposed to be maintained even if nonces get reused. We argue that this expectation is effectively wrong. OAE1 security has also been claimed to capture best-possible security for any online-AE scheme. We claim that this understanding is wrong, too. So motivated, we redefine OAE security, providing a radically different formulation, OAE2. The new notion effectively \textit{does} capture best-possible security for a user's choice of plaintext segmentation and ciphertext expansion. It is achievable by simple techniques from standard tools. Yet even for OAE2, nonce reuse can still be devastating. The picture that emerges is that no OAE definition can meaningfully tolerate nonce reuse, but, at the same time, OAE security ought never to have been understood to turn on this question.

19:17 [Pub][ePrint]

Gennaro et al.\ (Crypto 2010) introduced the notion of \emph{non-interactive verifiable computation}, which allows a computationally weak client to outsource the computation of a function $f$ on a series of inputs $x^{(1)}, \ldots$ to a more powerful but untrusted server. Following a pre-processing phase (that is carried out only once), the client sends some representation of its current input $x^{(i)}$ to the server; the server returns an answer that allows the client to recover the correct result $f(x^{(i)})$, accompanied by a proof of correctness that ensures the client does not accept an incorrect result. The crucial property is that the work done by the client in preparing its input and verifying the server's proof is less than the time required for the client to compute~$f$ on its own.

We extend this notion to the \emph{multi-client} setting, where $n$ computationally weak clients wish to outsource to an untrusted server the computation of a function $f$ over a series of {\em joint} inputs $(x_1^{(1)}, \ldots, x_n^{(1)}), \ldots$ without interacting with each other. We present a construction for this setting by combining the scheme of Gennaro et al.\ with a primitive called proxy oblivious transfer.