International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2015-03-08
09:17 [Pub][ePrint] Salsa20 Cryptanalysis: New Moves and Revisiting Old Styles, by Subhamoy Maitra and Goutam Paul and Willi Meier

  In this paper, we revisit some existing techniques in Salsa20 cryptanalysis and provide some new ideas as well. As a new result, we explain how a valid initial state can be obtained from a Salsa20 state after one round. This helps in studying the non-randomness of Salsa20 after 5 rounds. In particular, it can be seen that the 5-round bias reported by Fischer et al. (Indocrypt 2006) is a special case of our analysis. Towards improving the existing results, we revisit the idea of Probabilistic Neutral Bits (PNB) and show how a proper choice of certain parameters reduces the complexity of the existing attacks. For cryptanalysis of 8-round Salsa20, we achieve a key search complexity of $2^{247.2}$, compared to $2^{251}$ (FSE 2008) and $2^{250}$ (ICISC 2012).
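The one-round inversion at the core of this result relies on the fact that each Salsa20 quarter-round is an invertible map on four 32-bit words. A minimal sketch (the quarter-round follows Bernstein's specification; the inverse simply undoes the four steps in reverse order):

```python
MASK32 = 0xFFFFFFFF

def rotl32(x, n):
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK32

def quarterround(y0, y1, y2, y3):
    """Salsa20 quarter-round (Bernstein's specification)."""
    y1 ^= rotl32((y0 + y3) & MASK32, 7)
    y2 ^= rotl32((y1 + y0) & MASK32, 9)
    y3 ^= rotl32((y2 + y1) & MASK32, 13)
    y0 ^= rotl32((y3 + y2) & MASK32, 18)
    return y0, y1, y2, y3

def quarterround_inv(z0, z1, z2, z3):
    """Undo the quarter-round: each xor-of-rotated-sum step is reversible."""
    z0 ^= rotl32((z3 + z2) & MASK32, 18)
    z3 ^= rotl32((z2 + z1) & MASK32, 13)
    z2 ^= rotl32((z1 + z0) & MASK32, 9)
    z1 ^= rotl32((z0 + z3) & MASK32, 7)
    return z0, z1, z2, z3
```

Since a full Salsa20 round consists of four quarter-rounds acting on disjoint words of the 4x4 state, inverting each of them recovers the pre-round state.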



09:17 [Pub][ePrint] Efficient k-out-of-n oblivious transfer protocol, by Wang Qinglong

  A new k-out-of-n oblivious transfer protocol is presented in this paper. The communication cost of our scheme is n+1 messages from the sender to the receiver and k messages from the receiver to the sender. To the best of the authors' knowledge, the communication complexity of our scheme is the lowest. Also, our scheme has a lower computation cost, with (k+1)n modular exponentiations for the sender and 3k modular exponentiations for the receiver. The security of our scheme is based only on the Decisional Diffie-Hellman assumption. Further, we prove the sender's computational security and the receiver's unconditional security in the standard model.
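Taken at face value, the claimed costs can be tabulated as simple closed forms (this is only the abstract's cost model, not the protocol itself):

```python
def ot_costs(n: int, k: int) -> dict:
    """Communication and computation costs of the k-out-of-n OT, as claimed above."""
    return {
        "messages_sender_to_receiver": n + 1,
        "messages_receiver_to_sender": k,
        "modexps_sender": (k + 1) * n,
        "modexps_receiver": 3 * k,
    }
```

For example, a 5-out-of-100 transfer would cost 101 + 5 messages and 600 + 15 modular exponentiations in total under this model.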



09:17 [Pub][ePrint] Efficient Format Preserving Encrypted Databases, by Prakruti C, Sashank Dara and V.N. Muralidhara

  We propose storage-efficient, SQL-aware encrypted databases that preserve the format of the fields. We give experimental results of storage improvements in CryptDB using the FNR encryption scheme.
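FNR itself is not specified here, but the general mechanics of format preservation can be illustrated with cycle-walking over a toy keyed Feistel permutation: encrypt on a power-of-two domain and re-encrypt until the result falls back inside the target format's domain. This is a generic sketch, not the FNR scheme:

```python
import hashlib

HALF = 8                      # toy 16-bit permutation, split into two 8-bit halves
MASK = (1 << HALF) - 1

def _round_f(key: bytes, i: int, r: int) -> int:
    # Toy round function: one hash byte of (key, round index, right half).
    return hashlib.sha256(key + bytes([i]) + r.to_bytes(1, "big")).digest()[0] & MASK

def feistel(x: int, key: bytes, rounds: int = 8) -> int:
    """Toy 16-bit Feistel permutation (illustrative stand-in for a real FPE core)."""
    l, r = x >> HALF, x & MASK
    for i in range(rounds):
        l, r = r, l ^ _round_f(key, i, r)
    return (l << HALF) | r

def feistel_inv(y: int, key: bytes, rounds: int = 8) -> int:
    """Run the Feistel rounds backwards."""
    l, r = y >> HALF, y & MASK
    for i in reversed(range(rounds)):
        l, r = r ^ _round_f(key, i, l), l
    return (l << HALF) | r

def fpe_encrypt(x: int, domain: int, key: bytes) -> int:
    """Cycle-walk until the ciphertext lands back inside [0, domain)."""
    y = feistel(x, key)
    while y >= domain:
        y = feistel(y, key)
    return y

def fpe_decrypt(y: int, domain: int, key: bytes) -> int:
    """Cycle-walk the inverse permutation back to the original plaintext."""
    x = feistel_inv(y, key)
    while x >= domain:
        x = feistel_inv(x, key)
    return x
```

Because the walk stays on one cycle of the permutation, encryption and decryption are mutually inverse on the restricted domain, which is what lets a ciphertext fit in the same column type as its plaintext.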





2015-03-07
19:37 [Event][New] CyberSec2015: 4th International Conference on Cyber Security, Cyber Warfare, and Digital Forensic

  Submission: 29 September 2015
Notification: 10 October 2015
Date: October 29 to October 31
Location: Jakarta, Indonesia
More Information: http://sdiwc.net/conferences/cybersec2015/




2015-03-06
16:17 [Pub][ePrint] Statistical Properties of Multiplication mod $2^n$, by A. Mahmoodi Rishakani and S. M. Dehnavi and M. R. Mirzaee Shamsabad and Hamidreza Maimani and Einollah Pasha

  In this paper, we investigate some statistical properties of multiplication mod $2^n$ for cryptographic use. For this purpose, we introduce a family of T-functions similar to modular multiplication, which we call M-functions, as vectorial Boolean functions. First, we determine the joint probability distribution of an arbitrary number of component bits of the output of an M-function. Then, we obtain the probability distribution of the component Boolean functions of the composition of a linear transformation with an M-function. After that, using a new measure for computing the imbalance of maps, we show that the restriction of the output of an M-function to its upper bits is asymptotically balanced.
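The asymptotic balance of the upper output bits, versus the strong imbalance of the lower ones, can be observed directly by exhausting small word sizes. This experiment is only an illustration of the phenomenon, not the paper's M-function formalism:

```python
def output_bit_biases(n_bits: int):
    """P(bit_i(x*y mod 2^n) = 1), computed exhaustively over all input pairs."""
    size = 1 << n_bits
    counts = [0] * n_bits
    for x in range(size):
        for y in range(size):
            z = (x * y) & (size - 1)
            for i in range(n_bits):
                counts[i] += (z >> i) & 1
    return [c / (size * size) for c in counts]
```

For n = 8, the least significant output bit equals the AND of the inputs' low bits and is set with probability exactly 1/4, while the top bit is much closer to balanced.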



16:17 [Pub][ePrint] Adaptively Secure Coin-Flipping, Revisited, by Shafi Goldwasser and Yael Tauman Kalai and Sunoo Park

  The full-information model was introduced by Ben-Or and Linial in 1985 to study collective coin-flipping: the problem of generating a common bounded-bias bit in a network of $n$ players with $t=t(n)$ faults. They showed that the majority protocol, in which each player sends a random bit and the output is the majority of the players' bits, can tolerate $t(n)=O(\sqrt{n})$ faults even in the presence of adaptive corruptions, and they conjectured that this is optimal for such adversaries. Lichtenstein, Linial, and Saks proved that the conjecture holds for protocols in which each player sends only a single bit. Their result has been the main progress on the conjecture during the last 30 years.

In this work we revisit this question and ask: what about protocols where players can send longer messages? Can increased communication allow for a larger fraction of corrupt players?

We introduce a model of strong adaptive corruptions, in which an adversary sees all messages sent by honest parties in any given round and, based on the message content, decides whether to corrupt a party (and alter its message or sabotage its delivery) or not. This is in contrast to the (classical) adaptive adversary, who can corrupt parties only based on past messages and cannot alter messages already sent.

We prove that any one-round coin-flipping protocol, regardless of message length, can be secure against at most $\widetilde{O}(\sqrt{n})$ strong adaptive corruptions. Thus, increased message length does not help in this setting.

We then shed light on the connection between adaptive and strongly adaptive adversaries, by proving that for any symmetric one-round coin-flipping protocol secure against $t$ adaptive corruptions, there is a symmetric one-round coin-flipping protocol secure against $t$ strongly adaptive corruptions. Going back to the standard adaptive model, we can now prove that any symmetric one-round protocol with arbitrarily long messages can tolerate at most $\widetilde{O}(\sqrt{n})$ adaptive corruptions.

At the heart of our results there is a new technique for converting any one-round secure protocol with arbitrarily long messages into a secure one where each player sends only $\mathrm{polylog}(n)$ bits. This technique may be of independent interest.
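The $\sqrt{n}$ threshold for the majority protocol is easy to see experimentally: $t$ corrupt players who always vote 1 shift the majority by roughly $t/\sqrt{n}$ standard deviations. A Monte Carlo sketch (the parameters below are arbitrary):

```python
import random

def majority_bias(n: int, t: int, trials: int = 20000, seed: int = 1) -> float:
    """Estimate P(output = 1) for the one-round majority protocol when t corrupt
    players always send 1 and the n - t honest players send uniform bits."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        honest_ones = bin(rng.getrandbits(n - t)).count("1")
        wins += (t + honest_ones) > n // 2   # n odd, so there are no ties
    return wins / trials
```

With n = 1001, t = 5 (well below sqrt(n) ≈ 32) barely moves the coin, while t = 150 fixes the outcome almost surely.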



16:17 [Pub][ePrint] Achieving Side-Channel Protection with Dynamic Logic Reconfiguration on Modern FPGAs, by Pascal Sasdrich and Amir Moradi and Oliver Mischke and Tim Güneysu

  Reconfigurability is a unique feature of modern FPGA devices, allowing hardware circuits to be loaded on demand. This also implies that a completely different set of circuits might operate at the exact same location of the FPGA in different time slots, making it difficult for an external observer or attacker to predict what will happen at what time.

In this work we present and evaluate a novel hardware implementation of the lightweight cipher PRESENT with built-in side-channel countermeasures based on dynamic logic reconfiguration. In our design we make use of Configurable Look-Up Tables (CFGLUT) integrated in modern Xilinx FPGAs to nearly instantaneously change hardware internals of our cipher implementation for improved resistance against side-channel attacks. We provide evidence from practical experiments based on a Spartan-6 platform that even with 10 million recorded power traces we were unable to detect a first-order leakage using the state-of-the-art leakage assessment.



16:17 [Pub][ePrint] Leakage-Resilient Symmetric Encryption via Re-keying, by Michel Abdalla and Sonia Belaïd and Pierre-Alain Fouque

  In this paper, we study whether it is possible to construct an efficient leakage-resilient symmetric scheme using the AES block cipher. We aim at bridging the gap between the theoretical leakage-resilient symmetric primitives used to build encryption schemes and the practical schemes that do not have any security proof against side-channel adversaries. Our goal is to construct a leakage-resilient encryption scheme that is as efficient as possible, without changing the cryptographic schemes already implemented.

The basic idea consists in adding a leakage-resilient re-keying scheme on top of the encryption scheme; it was already suggested by Kocher to thwart differential power analysis (DPA). Indeed, in such an analysis, the adversary queries the encryption box and, from the knowledge of the plaintext/ciphertext pairs, she can perform a divide-and-conquer key recovery attack. The method of changing the key for each encryption, or after a small number of encryptions with the same key, is known as re-keying. It prevents DPA adversaries but not SPA attacks, which use a single leakage trace.

Here, we prove that using a leakage-resilient re-keying scheme on top of an encryption scheme secure in the standard model leads to a leakage-resilient encryption scheme. The main advantage of the AES block cipher is that its implementations are generally heuristically secure against SPA adversaries. This assumption is used in many concrete instantiations of leakage-resilient symmetric primitives. Consequently, if we use it and change the key for each new message block, the adversary will not be able to recover any key, provided the re-keying scheme is leakage-resilient.

There are mainly two techniques for re-keying, parallel and sequential, but if we want to avoid giving the adversary access to many inputs/outputs, only the sequential method is possible. However, the main drawback of the latter technique is that, in case of de-synchronization, many useless computations are required. In our re-keying scheme, we use ideas from the skip-list data structure to efficiently recover a specific key.
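The de-synchronization trade-off can be sketched with a two-level key chain in the spirit of a skip list: a top chain advances in large strides and a bottom chain fills in the remaining steps, so recovering the i-th key costs O(i/s + s) updates instead of O(i). This is an illustrative hash-based sketch; the paper's construction uses a leakage-resilient re-keying function, not a bare hash:

```python
import hashlib

def update(key: bytes, label: bytes) -> bytes:
    # One re-keying step (stand-in for a leakage-resilient key-update function).
    return hashlib.sha256(label + key).digest()

def key_at(master: bytes, i: int, stride: int = 16) -> bytes:
    """Derive the i-th session key via the two-level, skip-list-style chain."""
    k = master
    for _ in range(i // stride):   # coarse jumps along the top chain
        k = update(k, b"stride")
    k = update(k, b"down")         # drop down to the fine-grained chain
    for _ in range(i % stride):    # walk the remaining steps
        k = update(k, b"step")
    return k
```

A receiver that de-synchronizes at block 1000 re-derives the key with about 1000/16 + 16 ≈ 78 updates rather than 1000 sequential ones.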



16:17 [Pub][ePrint] Towards Key-Length Extension with Optimal Security: Cascade Encryption and Xor-cascade Encryption, by Jooyoung Lee and Martijn Stam

  This paper discusses the provable security of two types of cascade encryption. The first construction $\mathsf{CE}^l$, called $l$-cascade encryption, is obtained by sequentially composing $l$ blockcipher calls with independent keys. The security of $\mathsf{CE}^l$ had been a longstanding open problem until Gaži and Maurer [GM09] proved its security up to $2^{\kappa+\min\{\frac{n}{2},\kappa\}}$ query complexity for large cascading length, where $\kappa$ and $n$ denote the key size and the block size of the underlying blockcipher, respectively. We improve this limit by proving the security of $\mathsf{CE}^l$ up to $2^{\kappa+\min\{\kappa,n\}-\frac{16}{l}\left(\frac{n}{2}+2\right)}$ query complexity: this bound approaches $2^{\kappa+\min\{\kappa,n\}}$ with increasing cascade length $l$.

The second construction $\mathsf{XCE}^l$ is a natural cascade version of the DESX scheme with intermediate keys xored between blockcipher calls. This can also be viewed as an extension of the double XOR-cascade proposed by Gaži and Tessaro [GT12]. We prove that $\mathsf{XCE}^l$ is secure up to $2^{\kappa+n-\frac{8}{l}\left(\frac{n}{2}+2\right)}$ query complexity. As the cascade length $l$ increases, this bound approaches $2^{\kappa+n}$.

In the ideal cipher model, one can obtain all the evaluations of the underlying blockcipher by making $2^{\kappa+n}$ queries, so $(\kappa+n)$-bit security is the maximum that key-length extension based on a single $\kappa$-bit-key $n$-bit blockcipher can achieve. The cascade encryptions $\mathsf{CE}^l$ (with $n\leq\kappa$) and $\mathsf{XCE}^l$ provide almost optimal security for large cascade length.
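For concreteness, the two constructions compose as follows over any blockcipher; here a toy 16-bit Feistel stands in for the $\kappa$-bit-key, $n$-bit primitive (this sketch illustrates only the wiring, not the security bounds):

```python
import hashlib

def toy_encrypt(key: bytes, x: int, rounds: int = 8) -> int:
    """Toy 16-bit Feistel blockcipher standing in for the underlying primitive."""
    l, r = x >> 8, x & 0xFF
    for i in range(rounds):
        l, r = r, l ^ hashlib.sha256(key + bytes([i, r])).digest()[0]
    return (l << 8) | r

def cascade(keys, x):
    """CE^l: sequentially compose l blockcipher calls under independent keys."""
    for k in keys:
        x = toy_encrypt(k, x)
    return x

def xor_cascade(whitening, keys, x):
    """XCE^l: xor an intermediate key before each call and one after the last.
    Requires len(whitening) == len(keys) + 1."""
    for z, k in zip(whitening[:-1], keys):
        x = toy_encrypt(k, x ^ z)
    return x ^ whitening[-1]
```

Each call is a permutation, so the whole cascade remains a permutation; the constructions differ only in the extra xored whitening keys of $\mathsf{XCE}^l$.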



16:17 [Pub][ePrint] Efficient and Secure Delegation of Group Exponentiation to a Single Server, by Bren Cavallo and Giovanni Di Crescenzo and Delaram Kahrobaei and Vladimir Shpilrain

  We consider the problem of delegating the computation of group operations from a computationally weaker client, holding an input and a description of a function, to a single computationally stronger server holding a description of the same function. Solutions need to satisfy natural correctness, security, privacy, and efficiency requirements. We obtain delegated computation protocols for the following functions, defined over an arbitrary commutative group:

1. Group inverses, with security and privacy holding against any computationally unrestricted malicious server.

2. Group exponentiation, with security and privacy holding against any computationally unrestricted "partially honest" server.

3. Group exponentiation, with security and privacy holding against any polynomial-time malicious server, under a pseudorandom generation assumption, and security holding with constant probability.
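For item 1, a standard blinding approach conveys the flavor (a generic illustration, not necessarily the paper's protocol): to delegate the inversion of x, the client multiplies by a random group element r, so the server only ever sees a uniformly random element, and a single cheap multiplication verifies the server's answer.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; the commutative group here is Z_P^* (illustrative choice)

def delegate_inverse(x: int, server_invert) -> int:
    """Client side: blind, delegate, verify, unblind. x must be nonzero mod P."""
    r = secrets.randbelow(P - 1) + 1   # uniform nonzero blinding factor
    y = (x * r) % P                    # uniformly random, reveals nothing about x
    y_inv = server_invert(y)           # the expensive operation, done by the server
    if (y * y_inv) % P != 1:           # one multiplication catches a wrong answer
        raise ValueError("server returned an incorrect inverse")
    return (r * y_inv) % P             # r * (x*r)^-1 = x^-1

def honest_server(y: int) -> int:
    return pow(y, -1, P)
```

The client performs only multiplications and one random sample, never an inversion, which is the efficiency gain delegation is after.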



16:17 [Pub][ePrint] Leakage Assessment Methodology - a clear roadmap for side-channel evaluations, by Tobias Schneider and Amir Moradi

  Evoked by the increasing need to integrate side-channel countermeasures into security-enabled commercial devices, evaluation labs are seeking a standard approach that enables a fast, reliable, and robust evaluation of the side-channel vulnerability of given products. To this end, standardization bodies such as NIST intend to establish a leakage assessment methodology fulfilling these demands. One such proposal is Welch's t-test, put forward by Cryptography Research Inc., which is able to relax the dependency between the evaluations and the device's underlying architecture. In this work, we deeply study the theoretical background of the test's different flavors and present a roadmap that evaluation labs can follow to conduct the tests efficiently and correctly. More precisely, we describe a stable, robust, and efficient way to perform the tests at higher orders. Further, we extend the test to multivariate settings and provide details on how to efficiently and rapidly carry out such a multivariate higher-order test. Along with a suggested methodology to collect the traces for these tests, we present practical case studies where different types of t-tests can exhibit the leakage of supposedly secure designs.
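The first-order, univariate core of the methodology is Welch's two-sample t-statistic computed between a fixed-input and a random-input trace set, with leakage flagged when |t| exceeds the customary 4.5 threshold. A minimal sketch on simulated single-point "traces" (the Gaussian model and the effect size are arbitrary assumptions):

```python
import math
import random

def welch_t(a, b):
    """Welch's t-statistic between two sets of leakage samples (first order, univariate)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((v - ma) ** 2 for v in a) / (na - 1)   # unbiased sample variances
    vb = sum((v - mb) ** 2 for v in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Simulated fixed-vs-random experiment: a leaky device shifts the mean for the fixed input.
rng = random.Random(7)
fixed_leaky = [rng.gauss(0.3, 1.0) for _ in range(5000)]   # assumed mean shift = leakage
random_set  = [rng.gauss(0.0, 1.0) for _ in range(5000)]
t_leaky = welch_t(fixed_leaky, random_set)
```

|t| > 4.5 over the collected traces is the usual pass/fail criterion; higher-order variants preprocess the traces (e.g., centering and squaring) before applying the same statistic.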