International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.


You can also access the full news archive.

Further sources for finding out about changes are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2015-04-07
15:10 [Event][New] STM 2015: 11th International Workshop on Security and Trust Management

  Submission: 16 June 2015
Notification: 23 July 2015
From September 21 to September 22
Location: Vienna, Austria
More Information: http://stm2015.di.unimi.it/


15:10 [Event][New] SECRYPT'15: 12th International Conference on Security and Cryptography

  Submission: 15 April 2015
Notification: 19 May 2015
From July 20 to July 22
Location: Colmar, Alsace, France
More Information: http://www.secrypt.icete.org/


13:08 [PhD][Update] Pablo Rauzy: Formal Software Methods for Cryptosystems Implementation Security

  Name: Pablo Rauzy
Topic: Formal Software Methods for Cryptosystems Implementation Security
Category: implementation

Description:

Implementations of cryptosystems are vulnerable to physical attacks, and thus need to be protected against them. Of course, malfunctioning protections are useless. Formal methods help to develop systems while assessing their conformity to a rigorous specification. The first goal of my thesis, and its innovative aspect, is to show that formal methods can be used to prove not only the principle of a countermeasure with respect to a model, but also its implementation, as that is where physical vulnerabilities are actually exploited. My second goal is the proof and the automation of the protection techniques themselves, because handwritten security code is error-prone.

Physical attacks can be classified into two distinct categories: passive attacks, where the attacker only reads information that leaks through side channels, and active attacks, where the attacker tampers with the system to make it reveal secrets through its "normal" output channel. I have therefore pursued my goals in both settings: on a countermeasure that diminishes side-channel leakage (such as power consumption or electromagnetic emanations), and on countermeasures against fault injection attacks.

As a rigorous security property for protections against side-channel leakage already exists, my contributions concentrate on formal methods for the design and verification of protected implementations of algorithms. I have developed a methodology to protect an implementation by generating an improved version of it with a null side-channel signal-to-noise ratio: its leakage is made constant and, in particular, does not depend on the secret values. For the sake of demonstration, I have also written a tool which automates the application of the methodology to an insecure input code written in assembly language. Independently, the tool is able to prove that this constant-leakage property holds for a given implementation, which can be use[...]
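The constant-leakage idea can be illustrated with the classic branch-free selection pattern: the same sequence of arithmetic operations executes whatever the secret bit is, so control flow reveals nothing. This is a hypothetical minimal sketch of the pattern only; the thesis tool itself works on assembly code, where timing and leakage behaviour can actually be controlled.

```python
def ct_select(bit: int, a: int, b: int) -> int:
    """Branch-free select: returns a if bit == 1, else b.

    mask is all-ones for bit = 1 and all-zeros for bit = 0
    (two's complement), so the identical instruction sequence runs
    for either value of the secret bit -- there is no secret-dependent
    branch for a side channel to observe.
    """
    mask = -bit
    return (a & mask) | (b & ~mask)

# Both calls perform exactly the same operations.
assert ct_select(1, 0xAA, 0x55) == 0xAA
assert ct_select(0, 0xAA, 0x55) == 0x55
```

In assembly, the corresponding countermeasure additionally balances register activity so that power consumption, not just timing, is independent of the secret.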


12:58 [PhD][New] Hadi Soleimany: Studies in Lightweight Cryptography

  Name: Hadi Soleimany
Topic: Studies in Lightweight Cryptography
Category: secret-key cryptography

Description: The decreasing size of devices is one of the most significant changes in telecommunication and information technologies. This change has been accompanied by a dramatic reduction in the cost of computing devices. The dawning era of ubiquitous computing has opened the door to extensive new applications. Ubiquitous computing has found its way into products thanks to improvements in the underlying enabling technologies. Considerable developments in constrained devices such as RFID tags facilitate novel services and bring embedded computing devices into our everyday environments. The changes that lie ahead will eventually make pervasive computing devices an integral part of our world.

The growing prevalence of pervasive computing devices has created a significant need to consider security issues. However, security cannot be considered independently; instead, it should be evaluated alongside related issues such as performance and cost. In particular, several limitations face the design of appropriate ciphers for extremely constrained environments. In response to this challenge, several lightweight ciphers have been designed in recent years. The purpose of this dissertation is to evaluate the security of the emerging lightweight block ciphers.

This dissertation develops cryptanalytic methods for determining the exact security level of some inventive and unconventional lightweight block ciphers. The work studies zero-correlation linear cryptanalysis by introducing the Matrix method to facilitate the finding of zero-correlation linear approximations. As applications, we perform zero-correlation cryptanalysis on 22-round LBlock and TWINE. We also perform simulations on a small variant of LBlock and present the first experimental results to support the theoretical model of the multidimensional zero-correlation linear cryptanalysis method.
In addition, we provide a new perspective on slide cryptanalysis and reflection cryptanalysis [...]


12:57 [PhD][Update] Filipe Beato: Private Information Sharing in Online communities

  Name: Filipe Beato
Topic: Private Information Sharing in Online communities
Category: cryptographic protocols



12:56 [PhD][New] Juraj Šarinay: Cryptographic Hash Functions in Groups and Provable Properties

  Name: Juraj Šarinay
Topic: Cryptographic Hash Functions in Groups and Provable Properties
Category: (no category)

Description:

We consider several "provably secure" hash functions that compute simple sums in a well-chosen group (G, ⋆). Security properties of such functions provably translate in a natural way to computational problems in G that are simple to define and possibly also hard to solve. Given k disjoint lists L_i of group elements, the k-sum problem asks for g_i ∈ L_i such that g_1 ⋆ g_2 ⋆ … ⋆ g_k = 1_G. Hardness of the problem in the respective groups follows from some "standard" assumptions used in public-key cryptology, such as hardness of integer factoring, discrete logarithms, lattice reduction and syndrome decoding. We point out evidence that the k-sum problem may even be harder than the above problems.
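To make the definition concrete, here is a hypothetical brute-force sketch of the k-sum problem in the additive group Z_n. It illustrates only the problem statement, not its hardness: the groups used by these hash functions are chosen so that exhaustive search like this is infeasible.

```python
from itertools import product

def k_sum(lists, n):
    """Brute-force k-sum in the additive group Z_n: pick one element
    g_i from each list L_i so that g_1 + ... + g_k is the identity
    0 (mod n). Returns the first solution found, or None."""
    for choice in product(*lists):
        if sum(choice) % n == 0:
            return choice
    return None

# Three disjoint lists over Z_12: 3 + 4 + 5 = 12 ≡ 0 (mod 12).
solution = k_sum([[1, 3], [4, 7], [5, 6]], 12)  # -> (3, 4, 5)
```

Wagner's tree algorithm, mentioned at the end of the abstract, solves such instances faster than this naive search, which is why its complexity serves as the hardness yardstick.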


Two hash functions based on the group k-sum problem, SWIFFTX and FSB, were submitted to NIST as candidates for the future SHA-3 standard. Both submissions were supported by some sort of security proof. We show that the assessment of security levels provided in the proposals is not related to the proofs included. The main claims on security are supported exclusively by considerations about available attacks. By introducing "second-order" bounds on security bounds, we expose the limits of such an approach to provable security.


A problem with the way security is quantified does not necessarily mean a problem with security itself. Although FSB does have a history of failures, recent versions of the two functions above have resisted cryptanalytic efforts well. This evidence, as well as the several connections to more standard problems, suggests that the k-sum problem in some groups may be considered hard on its own and possibly lead to provable bounds on security. The complexity of the non-trivial tree algorithm is becoming a standard tool for measuring the associated hardness.


We propose modifications to the multiplicative Very Smooth Hash and derive security from multiplicative k-sums in contra[...]


00:17 [Pub][ePrint] Cryptanalysis of GGH Map, by Yupu Hu and Huiwen Jia

  Multilinear maps are a novel primitive with many cryptographic applications, and the GGH map is a major candidate multilinear map. The GGH map has two classes of applications: those using public tools of encoding and those using hidden tools of encoding. In this paper we show that applications of the GGH map with public tools of encoding are not secure. We present an efficient attack on the GGH map, aimed at multi-party key exchange (MPKE) and at the instance of witness encryption (WE) based on the hardness of the 3-exact-cover problem. First, for the secret of each user, we obtain an equivalent secret, which is the sum of the original secret and a noise. The noise is an element of the specific principal ideal, but its size is not small. To do so, we use the weak-DL attack presented by the authors of the GGH map. Second, we use special modular operations, which we call modified encoding/decoding, to filter the decoded noise down to a much smaller size. Such filtering is enough to break MPKE. Moreover, such filtering negates the K-GMDDH assumption, which is the security basis of an ABE scheme. The procedure almost breaks away from known lattice attacks and looks like ordinary algebra; the key point is our special tools for modular operations. Finally, we break the instance of WE based on the hardness of the 3-exact-cover problem. To do so, we not only use modified encoding/decoding, but also (1) introduce and solve the "combined 3-exact-cover problem", a problem that is never hard to solve, and (2) compute the Hermite normal form of the specific principal ideal. The attack on the instance of WE relies on an assumption which seems to hold with at least non-negligible probability.



00:17 [Pub][ePrint] Boosting OMD for Almost Free Authentication of Associated Data, by Reza Reyhanitabar and Serge Vaudenay and Damian Vizár

  We propose pure OMD (p-OMD) as a new variant of the Offset Merkle-Damgård (OMD) authenticated encryption scheme. Our new scheme inherits all desirable security features of OMD while having a more compact structure and providing higher efficiency. The original OMD scheme, as submitted to the CAESAR competition, couples a single pass of a variant of the Merkle-Damgård (MD) iteration with the counter-based XOR MAC algorithm to provide privacy and authenticity. Our improved p-OMD scheme dispenses with the XOR MAC algorithm and is purely based on the MD iteration; hence, the name "pure" OMD. To process a message of ℓ blocks and associated data of a blocks, OMD needs ℓ+a+2 calls to the compression function while p-OMD only requires max{ℓ, a}+2 calls. Therefore, for a typical case where ℓ ≥ a, p-OMD makes just ℓ+2 calls to the compression function; that is, associated data is processed almost for free compared to OMD. We prove the security of p-OMD under the same standard assumption (pseudo-randomness of the compression function) as made in OMD; moreover, the security bound for p-OMD is the same as that of OMD, showing that the modifications made to boost the performance come at no loss of security.
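The call-count comparison above is simple enough to check with a few lines of arithmetic. This sketch only encodes the two formulas quoted in the abstract, not the schemes themselves:

```python
def omd_calls(l: int, a: int) -> int:
    """Compression-function calls in OMD for a message of l blocks
    and associated data of a blocks: one per block, plus two."""
    return l + a + 2

def p_omd_calls(l: int, a: int) -> int:
    """Calls in p-OMD: associated data is absorbed alongside the
    message, so only the longer of the two streams counts."""
    return max(l, a) + 2

# Typical case l >= a: p-OMD saves exactly a calls, i.e. the
# associated data is processed almost for free.
l, a = 64, 8
saved = omd_calls(l, a) - p_omd_calls(l, a)  # 74 - 66 = 8 calls
```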



00:17 [Pub][ePrint] The Design Space of Lightweight Cryptography, by Nicky Mouha

  For constrained devices, standard cryptographic algorithms can be too big, too slow or too energy-consuming. The area of lightweight cryptography studies new algorithms to overcome these problems. In this paper, we will focus on symmetric-key encryption, authentication and hashing. Instead of providing a full overview of this area of research, we will highlight three interesting topics. Firstly, we will explore the generic security of lightweight constructions. In particular, we will discuss considerations for key, block and tag sizes, and explore the topic of instantiating a pseudorandom permutation (PRP) with a non-ideal block cipher construction. This is inspired by the increasing prevalence of lightweight designs that are not secure against related-key attacks, such as PRINCE, PRIDE or Chaskey. Secondly, we explore the efficiency of cryptographic primitives. In particular, we investigate the impact on efficiency when the input size of a primitive doubles. Lastly, we provide some considerations for cryptographic design. We observe that applications do not always use cryptographic algorithms as they were intended, which negatively impacts the security and/or efficiency of the resulting implementations.



00:17 [Pub][ePrint] Communication-Optimal Proactive Secret Sharing for Dynamic Groups, by Joshua Baron and Karim El Defrawy and Joshua Lampkins and Rafail Ostrovsky

  Proactive secret sharing (PSS) schemes are designed for settings where long-term confidentiality of secrets has to be guaranteed, specifically when all participating parties may eventually be corrupted. PSS schemes periodically refresh secrets and reset corrupted parties to an uncorrupted state; in PSS the corruption threshold $t$ is replaced with a corruption rate which cannot be violated. In dynamic proactive secret sharing (DPSS) the number of parties can vary during the course of execution. DPSS is ideal when the set of participating parties changes over the lifetime of the secret, or where removal of parties is necessary if they become severely corrupted. This paper presents the first DPSS schemes with optimal amortized, $O(1)$, per-secret communication, compared to the $O(n^4)$ or $\exp(n)$ communication (in the number of parties, $n$) required by existing schemes. We present perfectly and statistically secure schemes with near-optimal threshold in each case. We also describe how to construct a communication-efficient dynamic proactively-secure multiparty computation (DPMPC) protocol which achieves the same thresholds.
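The classical refresh idea underlying PSS, re-randomizing shares by adding a fresh sharing of zero so that old and new shares cannot be combined, can be sketched with Shamir secret sharing. This is an illustrative toy under assumed parameters (an arbitrary prime field, no dynamic group changes), not the paper's DPSS protocol:

```python
import random

P = 2**31 - 1  # a prime, so arithmetic below is over the field GF(P)

def share(secret, t, n):
    """Shamir-share `secret` among n parties: evaluate a random
    degree-t polynomial with constant term `secret` at x = 1..n.
    Any t+1 shares reconstruct; t or fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def refresh(shares, t):
    """Proactive refresh: add a fresh sharing of 0 pointwise.
    Every share changes, but the shared secret does not."""
    zeros = share(0, t, len(shares))
    return [(x, (y + z) % P) for (x, y), (_, z) in zip(shares, zeros)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P
    return s

old = share(42, t=2, n=5)
new = refresh(old, t=2)
assert reconstruct(new[:3]) == 42   # same secret after refresh
assert old != new                   # but all shares re-randomized
```

An attacker who steals $t$ shares before a refresh and $t$ shares after it learns nothing, since the two sets lie on independent polynomials; this is what lets PSS replace the threshold with a corruption rate per period.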



00:17 [Pub][ePrint] Foundations of Reconfigurable PUFs (Full Version), by Jonas Schneider and Dominique Schröder

  A Physically Unclonable Function (PUF) can be seen as a source of randomness that can be challenged with a stimulus and responds in a way that is to some extent unpredictable. PUFs can be used to provide efficient solutions for common cryptographic primitives such as identification/authentication schemes, key storage, and hardware-entangled cryptography.

Moreover, Brzuska et al. have recently shown that PUFs can be used to construct UC-secure protocols (CRYPTO 2011). Most PUF instantiations, however, only provide a static challenge/response space, which limits their usefulness for practical instantiations. To overcome this limitation, Katzenbeisser et al. (CHES 2011) introduced Logically Reconfigurable PUFs (LR-PUFs), with the idea of introducing an "update" mechanism that changes the challenge/response behaviour without physically replacing or modifying the hardware.

In this work, we revisit LR-PUFs. We propose several new ways to characterize the unpredictability of LR-PUFs covering a broader class of realistic attacks and examine their relationship to each other.

In addition, we reconcile existing constructions with these new characterizations and show that they can withstand stronger adversaries than originally shown.

Since previous constructions are insecure with respect to our strongest unpredictability notion, we propose a secure construction which relies on the same assumptions and is almost as efficient as previous solutions.