International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources for keeping up with changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2015-03-04
19:17 [Pub][ePrint] Links Between Truncated Differential and Multidimensional Linear Properties of Block Ciphers and Underlying Attack Complexities, by Céline Blondeau and Kaisa Nyberg

  The sheer number of apparently different statistical attacks on block ciphers has raised the question of their relationships, which would allow one to classify them and determine those that give essentially complementary information about the security of block ciphers. While mathematical links between some statistical attacks have been derived in the last couple of years, the important link between general truncated differential and multidimensional linear attacks has been missing. In this work we close this gap. The new link is then exploited to relate the complexities of chosen-plaintext and known-plaintext distinguishing attacks of differential and linear types, and further, to explore the relations between the key-recovery attacks. Our analysis shows that a statistical saturation attack is the same as a truncated differential attack, which allows us, for the first time, to provide a justifiable analysis of the complexity of the statistical saturation attack and discuss its validity on 24 rounds of the PRESENT block cipher. By studying the data, time and memory complexities of a multidimensional linear key-recovery attack and its relation with a truncated differential one, we also show that in most cases a known-plaintext attack can be transformed into a less costly chosen-plaintext attack. In particular, we show that there is a differential attack in the chosen-plaintext model on 26 rounds of PRESENT with lower memory complexity than the best previous attack, which assumes known plaintext. The links between the statistical attacks discussed in this paper give further examples of attacks where the method used to sample the data required by the statistical test is more differentiating than the method used for finding the distinguishing property.
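
To make the chosen-plaintext sampling concrete, here is a minimal, self-contained Python sketch of a truncated-differential statistic on a toy 8-bit SPN. The cipher (built from PRESENT's 4-bit S-box purely for convenience), the subspaces, and the sample sizes are our own illustrative choices, not the paper's analysis of PRESENT:

```python
import random

# PRESENT's 4-bit S-box, used here only to assemble a toy 8-bit cipher.
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

def toy_encrypt(x, keys):
    """Toy 8-bit SPN round: key addition, nibble S-boxes, nibble swap."""
    for k in keys:
        x ^= k
        x = (SBOX[x & 0x0F] << 4) | SBOX[x >> 4]  # S-box layer + swap
    return x

def truncated_diff_rate(rounds, samples=20000):
    """Chosen-plaintext statistic: fix input differences to the low-nibble
    subspace and count how often the output difference lies in the
    high-nibble subspace (an ideal cipher would give about 1/16)."""
    keys = [random.randrange(256) for _ in range(rounds)]
    hits = 0
    for _ in range(samples):
        x = random.randrange(256)
        dx = random.randrange(1, 16)        # nonzero low-nibble difference
        dy = toy_encrypt(x, keys) ^ toy_encrypt(x ^ dx, keys)
        hits += (dy & 0x0F) == 0            # output difference truncated?
    return hits / samples

print("1 round :", truncated_diff_rate(1))  # 1.0 by construction
print("3 rounds:", truncated_diff_rate(3))  # drifts toward the ideal 1/16
```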



19:17 [Pub][ePrint] Remotely Managed Logic Built-In Self-Test for Secure M2M Communications, by Elena Dubrova and Mats Näslund and Gunnar Carlsson and John Fornehed and Ben Smeets

  A rapid growth of Machine-to-Machine (M2M) communications is expected in the coming years. M2M applications create new challenges for in-field testing, since they typically operate in environments where human supervision is difficult or impossible. In addition, M2M networks may be significant in size. We propose to automate Logic Built-In Self-Test (LBIST) by using a centralized test management system which can test all end-point M2M devices in the same network. Such a method makes it possible to transfer some of the LBIST functionality from the devices under test to the test management system. This is important for M2M devices, which have very limited computing resources and are commonly battery-powered. In addition, the presented method provides protection against both random and malicious faults, including some types of hardware Trojans.
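
As a rough illustration of the division of labour, the following Python sketch models the device side of LBIST (an LFSR pattern generator feeding a signature register) and a management side that keeps only the golden signature. All parameters are illustrative assumptions, not the paper's protocol, and the response compactor is simplified to a single-input register rather than a true MISR:

```python
def lfsr16(state):
    """One step of a 16-bit maximal-length Fibonacci LFSR
    (feedback polynomial x^16 + x^14 + x^13 + x^11 + 1)."""
    bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return (state >> 1) | (bit << 15)

def run_lbist(seed, circuit, n_patterns=1000):
    """Device side: expand a seed into pseudorandom test patterns and
    compact the circuit responses into a short signature (a real MISR is
    a multiple-input LFSR; we reuse lfsr16 as a simplification)."""
    state, sig = seed, 0
    for _ in range(n_patterns):
        state = lfsr16(state)
        sig = lfsr16(sig ^ circuit(state))   # response compaction
    return sig

# Test-management side: push (seed, n_patterns) to each M2M device and
# compare the reported signature against a locally stored golden value.
good = lambda x: (x * 0x9E37) & 0xFFFF       # stand-in circuit under test
faulty = lambda x: good(x) ^ 1               # stuck output bit
golden = run_lbist(0xACE1, good)
assert run_lbist(0xACE1, good) == golden     # healthy device passes
print(hex(golden), hex(run_lbist(0xACE1, faulty)))  # fault perturbs the
# signature except with roughly 2^-16 aliasing probability
```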



19:17 [Pub][ePrint] Higher Order Differential Analysis of NORX, by Sourav Das and Subhamoy Maitra and Willi Meier

  In this paper, we analyse the higher order differential properties of NORX, an AEAD scheme submitted to the CAESAR competition. NORX is a sponge-based construction. Previous efforts, by the designers themselves, focused on first order differentials and rotational properties for a small number of steps of the NORX core permutation, which turn out to have quite low biases when extended to the full permutation. In our work, we identify higher order differential properties that allow us to construct practical distinguishers of the 4-round full permutation for NORX64 and of half a round less than the full permutation (i.e., 3.5 rounds) for NORX32. These distinguishers are similar to zero-sum distinguishers but are probabilistic rather than deterministic, and are of order as low as four. The distinguishers have very low complexities, and are significantly more efficient than the generic generalized birthday attack for the same configurations of zero-sums. While these distinguishers identify sharper non-randomness than what the designers identified, our results do not lend themselves to cryptanalysis of full-round NORX encryption or authentication.
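
For readers unfamiliar with the tool: the order-$d$ derivative of $f$ at $x$ is the XOR of $f$ over the affine subspace $x \oplus \mathrm{span}(b_1, \ldots, b_d)$, and zero-sum style distinguishers check whether this sum vanishes more often than for a random function. A minimal Python sketch on a toy function of algebraic degree 2 (our own example, not the NORX permutation):

```python
def higher_order_diff(f, x, basis):
    """Order-d derivative: XOR of f over the affine subspace x + span(basis)."""
    total = 0
    for mask in range(1 << len(basis)):
        y = x
        for i, b in enumerate(basis):
            if (mask >> i) & 1:
                y ^= b
        total ^= f(y)
    return total

def quad(x):
    """8-bit toy function of algebraic degree 2 (each output bit is an
    AND of two input bits), standing in for a low-degree permutation."""
    return x & (((x << 1) | (x >> 7)) & 0xFF)

# An order-3 derivative of a degree-2 function is identically zero, so the
# sum over every 3-dimensional subspace vanishes: a deterministic zero-sum.
# The NORX distinguishers are the probabilistic analogue, at order as low
# as 4, on the real (much higher-degree) permutation.
basis = [0x01, 0x02, 0x04]
assert all(higher_order_diff(quad, x, basis) == 0 for x in range(256))
```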



19:17 [Pub][ePrint] How Fair is Your Protocol? A Utility-based Approach to Protocol Optimality, by Juan Garay and Jonathan Katz and Bjoern Tackmann and Vassilis Zikas

  In his seminal result, Cleve [STOC'86] established that secure distributed computation with guaranteed fairness is impossible in the presence of dishonest majorities. A sizable number of proposals for relaxed notions of fairness followed this seminal result, weakening the desired security guarantees in various ways. While these works also provide completeness results (i.e., the ability to design protocols which achieve their fairness notion), their assessment is typically of an all-or-nothing nature: when presented with a protocol which is not designed to be fair according to their respective notion, they would most likely render it unfair and make no further statement about it.

In this work we put forth a comparative approach to fairness. We present new intuitive notions that, when presented with two arbitrary protocols, provide the means to answer the question "Which of the protocols is fairer?" The basic idea is that we can use an appropriate utility function to express the preferences of an adversary who wants to break fairness. Thus, we can compare protocols with respect to how fair they are, placing them in a partial order according to this relative-fairness relation.

After formulating such utility-based fairness notions, we turn to the question of finding optimal protocols, i.e., maximal elements in the above partial order. We investigate, and answer, this question for secure function evaluation, both in the two-party and multi-party settings.

To our knowledge, the only other fairness notion providing some sort of comparative statement is 1/p-security (aka "partial fairness") by Gordon and Katz [Eurocrypt'10]. We also show in this paper that for a special class of utilities our notion strictly implies 1/p-security. In addition, we fix a shortcoming of that definition which is exposed by our comparison, thus strengthening that result.
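
As a toy illustration of the comparative idea (with a utility function and release schedules that are our own simplifications, not the paper's definitions), one can score a protocol by the best utility an aborting adversary can extract from it and then compare the scores:

```python
def best_abort_utility(p_adv, p_hon, gain=1.0, penalty=1.0):
    """Maximal utility of an adversary who may abort after any round i,
    where p_adv[i] / p_hon[i] are the probabilities that the corrupted /
    honest party can reconstruct the output from its view at that point.
    Illustrative utility: 'gain' for obtaining the output while the honest
    party does not, 'penalty' when the honest party learns it anyway."""
    return max(gain * a * (1 - h) - penalty * h
               for a, h in zip(p_adv, p_hon))

rounds = 10
# Protocol A: output released all at once in the last round (the corrupted
# party, moving first, sees it one round before the honest party).
all_at_once = best_abort_utility([0.0] * (rounds - 1) + [1.0],
                                 [0.0] * rounds)
# Protocol B: gradual release; both parties' confidence grows each round.
gradual = best_abort_utility([i / rounds for i in range(1, rounds + 1)],
                             [(i - 1) / rounds for i in range(1, rounds + 1)])
print(all_at_once, gradual)   # 1.0 vs 0.1: B gives the adversary strictly
                              # less utility, so B is 'fairer' in this order
```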



19:17 [Pub][ePrint] New Techniques for SPHFs and Efficient One-Round PAKE Protocols, by Fabrice Benhamouda and Olivier Blazy and Céline Chevalier and David Pointcheval and Damien Vergnaud

  Password-authenticated key exchange (PAKE) protocols allow two players to agree on a shared high-entropy secret key that depends only on their own passwords. Following Gennaro and Lindell's approach, with a new kind of smooth projective hash functions (SPHFs), Katz and Vaikuntanathan recently came up with the first concrete one-round PAKE protocols, where the two players just have to send simultaneous flows to each other. The first one is secure in the Bellare-Pointcheval-Rogaway (BPR) model and the second one in Canetti's UC framework, but at the cost of simulation-sound non-interactive zero-knowledge (SSNIZK) proofs (one for the BPR-secure protocol and two for the UC-secure one), which make the overall constructions not really efficient.

This paper follows their path with, first, a new efficient instantiation of an SPHF on Cramer-Shoup ciphertexts, which allows us to get rid of the SSNIZK proof and leads to the design of the most efficient one-round PAKE known so far in the BPR model, additionally without pairings.
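
For intuition, here is a sketch of the classical SPHF for the Diffie-Hellman language over a toy group, a simpler relative of the paper's SPHF on Cramer-Shoup ciphertexts; the group size and generators are illustrative only:

```python
import random

# Tiny prime-order group for illustration: the order-11 subgroup of Z_23^*.
p, q = 23, 11
g1, g2 = 4, 9          # two independent generators of the subgroup

# Language: DDH tuples (u1, u2) = (g1^r, g2^r); the witness is r.
a1, a2 = random.randrange(q), random.randrange(q)   # hashing key hk
hp = (pow(g1, a1, p) * pow(g2, a2, p)) % p          # projection key

r = random.randrange(1, q)                          # witness
u1, u2 = pow(g1, r, p), pow(g2, r, p)               # word in the language

hash_hk = (pow(u1, a1, p) * pow(u2, a2, p)) % p     # hash via hk
hash_hp = pow(hp, r, p)                             # hash via hp + witness
assert hash_hk == hash_hp   # correctness on words of the language

# Smoothness: on (u1, u2) NOT of the form (g1^r, g2^r), the value
# u1^a1 * u2^a2 is statistically independent of hp, so it looks random
# to anyone without hk -- the property PAKE constructions rely on.
```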

In the UC framework, the security proof required the simulator to be able to extract the hashing key of the SPHF, hence the additional SSNIZK proof. We improve the way this extractability is obtained by introducing the notion of trapdoor smooth projective hash functions (TSPHFs). Our concrete instantiation leads to the most efficient one-round PAKE UC-secure against static corruptions to date.

We additionally show how these SPHFs and TSPHFs can be used for blind signatures and zero-knowledge proofs with straight-line extractability.



19:17 [Pub][ePrint] Online Authenticated-Encryption and its Nonce-Reuse Misuse-Resistance, by Viet Tung Hoang and Reza Reyhanitabar and Phillip Rogaway and Damian Vizár

  A definition of online authenticated-encryption (OAE), call it OAE1, was given by Fleischmann, Forler, and Lucks (2012). It has become a popular definitional target because, despite allowing encryption to be online, security is supposed to be maintained even if nonces get reused. We argue that this expectation is effectively wrong. OAE1 security has also been claimed to capture best-possible security for any online-AE scheme. We claim that this understanding is wrong, too. So motivated, we redefine OAE security, providing a radically different formulation, OAE2. The new notion effectively does capture best-possible security for a user's choice of plaintext segmentation and ciphertext expansion. It is achievable by simple techniques from standard tools. Yet even for OAE2, nonce reuse can still be devastating. The picture that emerges is that no OAE definition can meaningfully tolerate nonce reuse, but, at the same time, OAE security ought never to have been understood to turn on this question.
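
To fix ideas about the segmented interface that OAE2 formalizes, here is a deliberately insecure didactic Python toy in which each ciphertext segment is released as soon as its plaintext segment arrives, and a chained state ties every segment's tag to the whole prefix. The HMAC-based keystream and tags are our own stand-ins, not a proposed scheme:

```python
import hmac, hashlib

def _prf(key, state, label):
    return hmac.new(key, state + label, hashlib.sha256).digest()

class ToyOnlineAE:
    """Segmented online AE sketch: encrypt-and-release per segment, with a
    running state chaining all previous ciphertext segments.
    Didactic only -- NOT a secure construction."""
    def __init__(self, key, nonce):
        self.key = key
        self.state = _prf(key, nonce, b"init")
    def next(self, segment, last=False):
        assert len(segment) <= 32            # one keystream block per segment
        ks = _prf(self.key, self.state, b"keystream")
        ct = bytes(p ^ k for p, k in zip(segment, ks))
        self.state = _prf(self.key, self.state + ct, b"chain")
        tag = _prf(self.key, self.state, b"last" if last else b"tag")[:8]
        return ct, tag

enc = ToyOnlineAE(b"k" * 32, b"nonce-0")
c1, t1 = enc.next(b"first segment")            # released immediately
c2, t2 = enc.next(b"final segment", last=True)
# Reusing the nonce with different first segments leaks their XOR --
# an instance of the unavoidable nonce-reuse damage the paper discusses.
```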



19:17 [Pub][ePrint] Multi-Client Non-Interactive Verifiable Computation, by Seung Geol Choi and Jonathan Katz and Ranjit Kumaresan and Carlos Cid

  Gennaro et al. (Crypto 2010) introduced the notion of non-interactive verifiable computation, which allows a computationally weak client to outsource the computation of a function $f$ on a series of inputs $x^{(1)}, \ldots$ to a more powerful but untrusted server. Following a pre-processing phase (that is carried out only once), the client sends some representation of its current input $x^{(i)}$ to the server; the server returns an answer that allows the client to recover the correct result $f(x^{(i)})$, accompanied by a proof of correctness that ensures the client does not accept an incorrect result. The crucial property is that the work done by the client in preparing its input and verifying the server's proof is less than the time required for the client to compute $f$ on its own.

We extend this notion to the multi-client setting, where $n$ computationally weak clients wish to outsource to an untrusted server the computation of a function $f$ over a series of joint inputs $(x_1^{(1)}, \ldots, x_n^{(1)}), \ldots$ without interacting with each other. We present a construction for this setting by combining the scheme of Gennaro et al. with a primitive called proxy oblivious transfer.
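
Proxy oblivious transfer, the extra ingredient, lets a sender with messages $m_0, m_1$ and a chooser with bit $\sigma$ deliver $m_\sigma$ to a third party (the proxy/server) without sender-chooser interaction. Below is a toy one-time-pad instantiation from shared setup randomness; this is our own illustrative construction, not necessarily the instantiation used in the paper:

```python
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# One-time correlated setup shared by sender and chooser; afterwards the
# two never interact, each only sends one message to the proxy.
r = [os.urandom(16), os.urandom(16)]   # masks known to sender and chooser
c = os.urandom(1)[0] & 1               # shared random permutation bit

# Sender(m0, m1): masks both messages and permutes the pair by c.
m = [b"message-zero....", b"message-one....."]
masked = [None, None]
for b in (0, 1):
    masked[b ^ c] = xor(m[b], r[b])

# Chooser(sigma): reveals only the permuted index and the matching mask.
sigma = 1
to_proxy = (sigma ^ c, r[sigma])

# Proxy: recovers m_sigma; sigma is hidden by c, and m_{1-sigma} stays
# masked by a pad the proxy never sees.
idx, mask = to_proxy
assert xor(masked[idx], mask) == m[sigma]
```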



19:17 [Pub][ePrint] iDASH Secure Genome Analysis Competition Using ObliVM, by Xiao Shaun Wang, Chang Liu, Kartik Nayak, Yan Huang and Elaine Shi

  This is a short note supplementing our ObliVM paper.



19:17 [Pub][ePrint] Memory-saving computation of the pairing final exponentiation on BN curves, by Sylvain DUQUESNE and Loubna GHAMMAM

  In this paper, we describe and improve efficient methods for computing the hard part of the final exponentiation of pairings on Barreto-Naehrig (BN) curves. Thanks to the variants of pairings which decrease the length of the Miller loop, the final exponentiation has become a significant component of the overall calculation. Here we exploit the structure of BN curves to improve this computation.

We first present the best-known methods in the literature for computing the hard part of the final exponentiation. We are particularly interested in the memory resources required to implement these methods, since memory is an important constraint in restricted environments. More precisely, we study the method of Devegili et al., the addition-chain method of Scott et al., and the method of Fuentes et al. After recalling these methods and their complexities, we determine the number of registers required to compute the final result, because this is not always given in the literature. We then present new versions of these methods which require fewer memory resources (up to 37%). Moreover, some of these variants yield algorithms that are also more efficient than the original ones.
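
To make the object concrete: for BN curves the final exponentiation raises the Miller-loop output to $(p^{12}-1)/r$, which factors into an easy part $(p^6-1)(p^2+1)$ and the hard part $\Phi_{12}(p)/r = (p^4-p^2+1)/r$ that the compared methods compute. A short sympy check of this structure on the standard BN polynomial parameterization:

```python
from sympy import symbols, div

x = symbols('x')
# Standard Barreto-Naehrig parameterization of the field characteristic p
# and the (prime) group order r.
p = 36*x**4 + 36*x**3 + 24*x**2 + 6*x + 1
r = 36*x**4 + 36*x**3 + 18*x**2 + 6*x + 1

# Hard part of the final exponentiation: Phi_12(p)/r = (p^4 - p^2 + 1)/r.
phi12 = p**4 - p**2 + 1
quotient, remainder = div(phi12, r, x)
assert remainder == 0      # r(x) divides Phi_12(p(x)) by construction
print(quotient)            # the polynomial exponent of the hard part
```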



19:17 [Pub][ePrint] Improving Modular Inversion in RNS using the Plus-Minus Method, by Karim Bigou and Arnaud Tisserand

  The paper describes a new RNS modular inversion algorithm based on the extended Euclidean algorithm and the plus-minus trick. In our algorithm, comparisons over large RNS values are replaced by cheap computations modulo 4. Comparisons to an RNS version based on Fermat's little theorem were carried out. The number of elementary modular operations is significantly reduced: by a factor of 12 to 26 for multiplications and 6 to 21 for additions. Virtex-5 FPGA implementations show that, for a similar area, our plus-minus RNS modular inversion is 6 to 10 times faster.
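
For context, below is a plain-integer Python sketch of the classical binary extended Euclidean inversion that such algorithms start from. The comments mark the magnitude comparison that is expensive in RNS; the paper's plus-minus trick replaces it with tests modulo 4 (for odd $u, v$, exactly one of $u+v$ and $u-v$ is divisible by 4). This is background, not the paper's RNS algorithm:

```python
import math

def modinv_binary(a, m):
    """Binary extended Euclidean inversion of a modulo an odd m."""
    assert m % 2 == 1 and math.gcd(a, m) == 1
    u, v = a % m, m
    x1, x2 = 1, 0                    # invariants: x1*a = u, x2*a = v (mod m)
    while u != 1 and v != 1:
        while u % 2 == 0:            # halve u; adjust coefficient mod odd m
            u //= 2
            x1 = x1 // 2 if x1 % 2 == 0 else (x1 + m) // 2
        while v % 2 == 0:
            v //= 2
            x2 = x2 // 2 if x2 % 2 == 0 else (x2 + m) // 2
        if u >= v:                   # the large-value comparison that is
            u, x1 = u - v, x1 - x2   # costly in RNS; the plus-minus trick
        else:                        # decides via (u + v) mod 4 instead
            v, x2 = v - u, x2 - x1
    return x1 % m if u == 1 else x2 % m

p = 2**255 - 19
assert (3 * modinv_binary(3, p)) % p == 1
```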



19:17 [Pub][ePrint] Practical Homomorphic MACs for Arithmetic Circuits, by Dario Catalano and Dario Fiore

  Homomorphic message authenticators allow the holder of a (public) evaluation key to perform computations over previously authenticated data, in such a way that the produced tag $\sigma$ can be used to certify the authenticity of the computation. More precisely, a user knowing the secret key $sk$ used to authenticate the original data can verify that $\sigma$ authenticates the correct output of the computation. This primitive was recently formalized by Gennaro and Wichs, who also showed how to realize it from fully homomorphic encryption. In this paper, we show new constructions of this primitive that, while supporting a smaller set of functionalities (i.e., polynomially-bounded arithmetic circuits as opposed to Boolean ones), are much more efficient and easy to implement. Moreover, our schemes can tolerate any number of (malicious) verification queries. Our first construction relies on the sole assumption that one-way functions exist, and allows for arbitrary composition (i.e., outputs of previously authenticated computations can be used as inputs for new ones), but has the drawback that the size of the produced tags grows with the degree of the circuit. Our second solution, relying on the $D$-Diffie-Hellman Inversion assumption, offers somewhat orthogonal features, as it allows for very short tags (a single group element!) but poses some restrictions on composition.
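
The following Python sketch conveys the flavour of the first, PRF-based construction as we understand it from the abstract: a tag is a polynomial $y$ with $y(0)$ equal to the message and $y(x)$ equal to a PRF value at a secret point $x$, so gates act on tags by polynomial addition and multiplication, and tag size grows with the circuit degree. The field size, PRF, and details are illustrative assumptions, not the paper's exact scheme:

```python
import hmac, hashlib

P = 2**61 - 1                       # toy prime field (Mersenne prime)

def prf(key, label):
    return int.from_bytes(hmac.new(key, label, hashlib.sha256).digest(),
                          'big') % P

def auth(key, x, m, label):
    """Tag = degree-1 polynomial y with y(0) = m and y(x) = PRF(label)."""
    rho = prf(key, label)
    return [m % P, ((rho - m) * pow(x, P - 2, P)) % P]

def poly_add(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [(u + v) % P for u, v in zip(a, b)]

def poly_mul(a, b):                 # tag length grows with circuit degree
    out = [0] * (len(a) + len(b) - 1)
    for i, u in enumerate(a):
        for j, v in enumerate(b):
            out[i + j] = (out[i + j] + u * v) % P
    return out

def poly_eval(a, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(a)) % P

# Authenticate m1 = 5, m2 = 7, then certify the computation m1*m2 + m1.
key, x = b'secret-prf-key', 1234567          # x is part of the secret key
t1, t2 = auth(key, x, 5, b'l1'), auth(key, x, 7, b'l2')
t = poly_add(poly_mul(t1, t2), t1)           # evaluate the circuit on tags
r1, r2 = prf(key, b'l1'), prf(key, b'l2')
# Verifier: recompute the circuit on the PRF values and check both anchors.
assert t[0] == (5 * 7 + 5) % P and poly_eval(t, x) == (r1 * r2 + r1) % P
```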