Here you can see all recent updates to the IACR webpage.

28 April 2016

ePrint Report
Lattice-Based Signature Schemes and their Sensitivity to Fault Attacks
Nina Bindel, Johannes Buchmann, Juliane Krämer

Due to their high efficiency and their strong security properties, lattice-based cryptographic schemes seem to be a very promising post-quantum replacement for currently used public-key cryptography. The security of lattice-based schemes has been deeply analyzed mathematically, whereas little effort has been spent on analyzing their resistance to implementation attacks.
In this paper, we start with the fault analysis of one of the most important cryptographic primitives: signature schemes. We investigate the vulnerability and resistance of the currently most efficient lattice-based signature schemes BLISS (CRYPTO 2013), ring-TESLA (AfricaCrypt 2016), and the GLP scheme (CHES 2012) and their implementations. We consider different kinds of (first-order) randomizing, zeroing, and skipping faults. For each of the signature schemes, we found at least six effective attacks. To increase the security of lattice-based signature schemes, we propose countermeasures for each of the respective attacks.

ePrint Report
Automatic Search for Key-Bridging Technique: Applications to LBlock and TWINE (Full Version)
Li Lin, Wenling Wu, Yafei Zheng

Key schedules in block ciphers are often highly simplified, which causes weaknesses that can be exploited in many attacks. At ASIACRYPT 2011, Dunkelman et al. proposed a technique that uses weaknesses in the key schedule of AES, called the key-bridging technique, to improve the overall complexity of attacks. The advantage of the key-bridging technique is that it allows the adversary to deduce some sub-key bits from other sub-key bits, even when they are separated by many key-mixing steps. Although the relations between successive rounds may be easy to see, the relations between two rounds separated by several mixing steps are very hard to find. In this paper, we describe a versatile and powerful algorithm for searching for key bridges in word-oriented and bit-oriented block ciphers. To demonstrate the usefulness of our approach, we apply our tool to the impossible differential and multidimensional zero-correlation linear attacks on 23-round LBlock, 23-round TWINE-80 and 25-round TWINE-128. To the best of our knowledge, these are currently the best results on LBlock and TWINE in the single-key setting.

ePrint Report
Efficient algorithms for supersingular isogeny Diffie-Hellman
Craig Costello, Patrick Longa, Michael Naehrig

We propose a new suite of algorithms that significantly improve the performance of supersingular isogeny Diffie-Hellman (SIDH) key exchange. Subsequently, we present a full-fledged implementation of SIDH that is geared towards the 128-bit quantum and 192-bit classical security levels. Our library is the first constant-time SIDH implementation and is more than 2.5 times faster than the previous best (non-constant-time) SIDH software. The high speeds in this paper are driven by compact, inversion-free point and isogeny arithmetic and fast SIDH-tailored field arithmetic: on an Intel Haswell processor, generating ephemeral public keys takes 51 million cycles for Alice and 59 million cycles for Bob while computing the shared secret takes 47 million and 57 million cycles, respectively. The size of public keys is only 751 bytes, which is significantly smaller than most of the popular post-quantum key exchange alternatives.
Ultimately, the size and speed of our software illustrates the strong potential of SIDH as a post-quantum key exchange candidate and we hope that these results encourage a wider cryptanalytic effort.

ePrint Report
Solving Quadratic Equations with XL on Parallel Architectures - extended version
Chen-Mou Cheng, Tung Chou, Ruben Niederhagen, Bo-Yin Yang

Solving a system of multivariate quadratic equations (MQ) is an NP-complete problem whose complexity estimates are relevant to many cryptographic scenarios. In some cases it is required in the best known attack; sometimes it is a generic attack (such as for the multivariate PKCs), and sometimes it determines a provable level of security (such as for the QUAD stream ciphers).

Under reasonable assumptions, the best way to solve generic MQ systems is the XL algorithm implemented with a sparse matrix solver such as Wiedemann's algorithm. Knowing how much time an implementation of this attack requires gives us a good idea of how future cryptosystems related to MQ can be broken, similar to how implementations of the General Number Field Sieve that factors smaller RSA numbers give us more insight into the security of actual RSA-based cryptosystems.

This paper describes such an implementation of XL using the block Wiedemann algorithm. In 5 days we are able to solve a system with 32 variables and 64 equations over $\mathbb{F}_{16}$ (a computation of about $2^{60.3}$ bit operations) on a small cluster of 8 nodes, with 8 CPU cores and 36 GB of RAM in each node. We do not expect system solvers of the F$_4$/F$_5$ family to accomplish this due to their much higher memory demand. Our software also offers implementations for $\mathbb{F}_{2}$ and $\mathbb{F}_{31}$ and can be easily adapted to other small fields. More importantly, it scales nicely for small clusters, NUMA machines, and a combination of both.
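
The linearization idea at the core of XL can be seen in a toy example: treat each quadratic monomial as a fresh unknown, then solve the resulting linear system over $\mathbb{F}_2$ by Gaussian elimination. The sketch below is a minimal, hypothetical illustration (plain linearization at degree 2, with no degree extension and no sparse/Wiedemann solver as used in the paper):

```python
# Toy linearization (the core idea behind XL) over GF(2).
# Unknowns after linearization: the monomials x0x1, x0x2, x1x2, x0, x1, x2.
MONOMIALS = ["x0x1", "x0x2", "x1x2", "x0", "x1", "x2"]

# Quadratic system with planted solution (x0, x1, x2) = (1, 0, 1).
rows = [
    ([1, 0, 0, 1, 1, 0], 1),  # x0x1 + x0 + x1    = 1
    ([0, 1, 0, 0, 1, 1], 0),  # x0x2 + x1 + x2    = 0
    ([0, 0, 1, 1, 0, 0], 1),  # x1x2 + x0         = 1
    ([1, 1, 0, 0, 0, 1], 0),  # x0x1 + x0x2 + x2  = 0
    ([0, 0, 0, 1, 0, 1], 0),  # x0 + x2           = 0
    ([0, 0, 1, 0, 1, 1], 1),  # x1x2 + x1 + x2    = 1
]

def solve_gf2(rows):
    """Gaussian elimination over GF(2); assumes the system has full rank."""
    m = [coeffs[:] + [rhs] for coeffs, rhs in rows]
    n = len(MONOMIALS)
    pivots, row = {}, 0
    for col in range(n):
        for r in range(row, len(m)):       # find a row with a 1 in this column
            if m[r][col]:
                m[row], m[r] = m[r], m[row]
                break
        else:
            continue
        for r in range(len(m)):            # eliminate above and below the pivot
            if r != row and m[r][col]:
                m[r] = [a ^ b for a, b in zip(m[r], m[row])]
        pivots[col] = row
        row += 1
    assert len(pivots) == n, "system not full rank"
    return {MONOMIALS[c]: m[pivots[c]][-1] for c in range(n)}

sol = solve_gf2(rows)
x0, x1, x2 = sol["x0"], sol["x1"], sol["x2"]
# consistency check: linearized monomials must match products of the variables
assert sol["x0x1"] == x0 & x1 and sol["x0x2"] == x0 & x2 and sol["x1x2"] == x1 & x2
print((x0, x1, x2))  # -> (1, 0, 1)
```

At realistic scales the linearized matrix is huge and sparse, which is why the paper replaces Gaussian elimination with the block Wiedemann algorithm.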

ePrint Report
Polymorphic Encryption and Pseudonymisation for Personalised Healthcare
Eric Verheul, Bart Jacobs, Carlo Meijer, Mireille Hildebrandt, Joeri de Ruiter

Polymorphic encryption and Pseudonymisation, abbreviated as PEP, form a novel approach for the management of sensitive personal data, especially in health care. Traditional encryption is rather rigid: once encrypted, only one key can be used to decrypt the data. This rigidity is becoming an ever greater problem in the context of big data analytics, where different parties who wish to investigate part of an encrypted data set all need the one key for decryption.

Polymorphic encryption is a new cryptographic technique that solves these problems. Together with the associated technique of polymorphic pseudonymisation, new security and privacy guarantees can be given that are essential in areas such as (personalised) healthcare, medical data collection via self-measurement apps, and more generally in privacy-friendly identity management and data analytics.

The key ideas of polymorphic encryption are: 1. Directly after generation, data can be encrypted in a `polymorphic' manner and stored at a (cloud) storage facility in such a way that the storage provider cannot get access. Crucially, there is no need to a priori fix who gets to see the data, so that the data can immediately be protected.

For instance a PEP-enabled self-measurement device will store all its measurement data in polymorphically encrypted form in a back-end data base.

2. Later on it can be decided who can decrypt the data. This decision will be made on the basis of a policy, in which the data subject should play a key role.

The user of the PEP-enabled device can, for instance, decide that doctors $X,Y,Z$ may at some stage decrypt to use the data in their diagnosis, or medical researcher groups $A, B, C$ may use it for their investigations, or third parties $U,V,W$ may use it for additional services, etc.

3. This `tweaking' of the encrypted data to make it decryptable by a specific party can be done in a blind manner. It will have to be done by a trusted party who knows which tweak to apply for which recipient.

This PEP technology can provide the necessary security and privacy infrastructure for big data analytics. People can entrust their data in polymorphically encrypted form, and each time decide later to make (parts of) it available (decryptable) for specific parties, for specific analysis purposes. In this way users remain in control, and can monitor which of their data is used where by whom for which purposes.

The polymorphic encryption infrastructure can be supplemented with a pseudonymisation infrastructure which is also polymorphic, and guarantees that each individual will automatically have different pseudonyms at different parties and can only be de-pseudonymised by participants (like medical doctors) who know the original identity.

This white paper provides an introduction to Polymorphic Encryption and Pseudonymisation (PEP), at different levels of abstraction, focusing on health care as the application area. It contains a general description of PEP, explaining the basic functionality for laymen, supplemented by a clarification of the legal framework provided by the upcoming General Data Protection Regulation (GDPR) of the European Union. The paper also contains a more advanced, mathematically oriented description of PEP, including the underlying cryptographic primitives, key and pseudonym management, interaction protocols, etc. This second part is aimed at readers with a background in computer security and cryptography. The cryptographic basis for PEP is ElGamal public key encryption, which is well-known since the mid 1980s. It is the way in which this encryption is used --- with re-randomisation, re-keying and re-shuffling --- that is new.

The PEP framework is currently elaborated into an open design and open source (prototype) implementation at Radboud University in Nijmegen, The Netherlands. The technology will be used and tested in a real-life medical research project at the Radboud University Medical Center.
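
To give a feel for the ElGamal operations mentioned above, here is a much-simplified sketch of re-randomisation and re-keying in Python. The two-component ciphertext format, the tiny group parameters and the direct computation of the re-keying token are illustrative assumptions; the actual PEP construction (including re-shuffling and blind transcryption) is described in the paper itself.

```python
# Toy ElGamal with re-randomisation and re-keying, for illustration only:
# the group is tiny and the ciphertext format is simplified compared to PEP.
p = 1019            # safe prime: p = 2q + 1
q = 509
g = 4               # generates the order-q subgroup of squares mod p

def keygen(x):
    return pow(g, x, p)                       # public key h = g^x

def encrypt(m, h, r):
    return (pow(g, r, p), m * pow(h, r, p) % p)

def decrypt(c, x):
    c1, c2 = c
    return c2 * pow(c1, -x, p) % p

def rerandomize(c, h, s):
    """Fresh-looking ciphertext of the same message under the same key."""
    c1, c2 = c
    return (c1 * pow(g, s, p) % p, c2 * pow(h, s, p) % p)

def rekey(c, x_old, x_new):
    """Transform a ciphertext under g^x_old into one under g^x_new.
    In PEP this is done blindly by a semi-trusted party holding a token;
    here we simply compute the token t = x_old / x_new mod q directly."""
    t = x_old * pow(x_new, -1, q) % q
    c1, c2 = c
    return (pow(c1, t, p), c2)

x, x2 = 123, 77
h, h2 = keygen(x), keygen(x2)
c = encrypt(42, h, r=200)
c = rerandomize(c, h, s=50)
c = rekey(c, x, x2)
print(decrypt(c, x2))  # -> 42
```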

26 April 2016

You will be working on software for mobile platforms and devices (iOS, Android, Windows) and connected devices. Your projects involve designing and implementing cryptography in our software solutions so that they are resilient against real-world attacks. This will include analyzing cryptographic implementations embedded in AET's solutions under evaluation. You will also provide technical expertise in applied cryptography.

**What you bring**

We’re looking for an Applied Cryptographer / PKI developer who is passionate (like us) about software development and not afraid to pioneer new approaches, techniques and technologies. The ideal candidate will have a passion for implementing solutions to complex security problems with a keen understanding of information security challenges and an up-to-date awareness of information security threats.

You will need to have the following qualifications and experience:

- A Bachelor's or Master's degree in IT, Mathematics, or another related field,
- Deep knowledge of applied cryptography and PKI,
- Ability to combine market and technology topics to find technical solutions in complex environments.

Other areas of knowledge and experience that would be considered key assets:

- Programming languages: C/C++, Java,
- An understanding of modern computer architectures and information technology architectures including cloud computing,
- Platforms like iOS, Android, Windows or Linux,
- Smart cards, SIMs and/or HSMs.

If you are interested in this opportunity and feel that you have the relevant skills, qualifications and experience as an Applied Cryptographer / PKI developer, please send your resume to *hrm (at) aeteurope.com*

**Closing date for applications:** 31 December 2017

**Contact:** We can understand you want to know more about the job, your colleagues or the company of AET. Don’t hesitate to give us a call at **+31 26 365 3350**. Ask for **Bert Smit, Development Manager**.

We are looking forward to getting to know you!

**More information:** https://www.aeteurope.com

25 April 2016

We are looking for a research scientist or post-doc in cryptography to work on novel cryptographic concepts for emerging ICT domains (e.g. cloud computing or cyber-physical systems). Ideally, you have experience in fields such as modern public-key cryptography, distributed cryptography, privacy enhancing technologies, or multi-party computation. You will be involved in EU research projects and advance cryptography for secure and privacy-preserving cloud-based systems. Ideally, you also have experience in software development and prototyping.

Further information:

- Direct job posting: http://www.ait.ac.at/fileadmin/inserate/Jobs/Science/Scientist_for_Applied_Crypthography.pdf
- AIT Digital Safety & Security Department: http://www.ait.ac.at/departments/digital-safety-security

**Closing date for applications:** 30 June 2016

**Contact:**

*Thomas Loruenser*, Department Digital Safety & Security, AIT Austrian Institute of Technology, or *Maria Leonhard-Maurer*, Head of Human Resources, E-Mail: *maria.leonhard-maurer (at) ait.ac.at*

**More information:** http://www.ait.ac.at/fileadmin/inserate/Jobs/Science/Scientist_for_Applied_Crypthography.pdf

ePrint Report
Efficient quantum-resistant trust Infrastructure based on HIMMO
Oscar Garcia-Morchon, Ronald Rietman, Ludo Tolhuizen, Jose-Luis Torre-Arce, Sauvik Bhattacharya, Maarten Bodlaender

Secure Internet communications face conflicting demands: advances in (quantum) computers require stronger, quantum-resistant algorithms, while at the same time the Internet of Things demands better-performing protocols; finally, communication links usually depend on a single root of trust, e.g., a certification authority, a single point of failure that is too great a risk for future systems.
This paper proposes a hybrid infrastructure that combines the quantum-resistant HIMMO key pre-distribution scheme based on multiple Trusted Third Parties (TTPs) with public-key cryptography to address these problems. During operation, any pair of devices can use HIMMO key material and public keys to establish a secure link, with public keys certified by multiple TTPs. The solution is resilient to the capture of individual roots of trust without affecting performance, while public keys can provide features such as forward secrecy. Combining HIMMO identities with public keys enables secure certification of public keys and distribution of HIMMO key material from multiple TTPs, without requiring an out-of-band channel. The infrastructure can be tuned to fit Internet of Things scenarios benefiting from an efficient, non-interactive and authenticated key exchange, or to fit use cases where the use of multiple TTPs provides privacy safeguards when lawful interception is required.
As a proof of concept, we show how TLS can benefit from these ideas with minimal extensions, while exhibiting good security features and performance compared with solutions based on public-key cryptography only.

ePrint Report
Automatic Search for the Best Trails in ARX: Application to Block Cipher \textsc{Speck}
Alex Biryukov, Vesselin Velichkov, Yann Le Corre

We propose the first adaptation of Matsui's algorithm for finding the best differential and linear trails to the class of ARX ciphers. It is based on a branch-and-bound search strategy, does not use any heuristics, and returns optimal results. The practical application of the new algorithm is demonstrated on reduced-round variants of block ciphers from the \textsc{Speck} family. More specifically, we report the probabilities of the best differential trails for up to 10, 9, 8, 7, and 7 rounds of \textsc{Speck32}, \textsc{Speck48}, \textsc{Speck64}, \textsc{Speck96} and \textsc{Speck128} respectively, together with the exact number of differential trails that have the best probability. The new results are used to compute bounds, under the Markov assumption, on the security of \textsc{Speck} against single-trail differential cryptanalysis. Finally, we propose two new ARX primitives with provable bounds against single-trail differential and linear cryptanalysis -- a long-standing open problem in the area of ARX design.
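
For concreteness, the ARX round being searched is extremely simple; below is a sketch of Speck32/64 encryption (16-bit words, rotations 7 and 2, 22 rounds), checked against the designers' published test vector. This is an illustrative reimplementation, not code from the paper:

```python
# Speck32/64 encryption sketch: the ARX round analysed by the trail search.
MASK = 0xFFFF  # 16-bit words

def ror(x, r): return ((x >> r) | (x << (16 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (16 - r))) & MASK

def round_fn(x, y, k):
    x = ((ror(x, 7) + y) & MASK) ^ k   # modular add, then key XOR
    y = rol(y, 2) ^ x                  # rotate and mix
    return x, y

def expand_key(k0, l):
    """Key schedule: reuses the round function with the round index as 'key'."""
    ks, l = [k0], list(l)
    for i in range(21):
        li, ki = round_fn(l[i], ks[i], i)
        l.append(li)
        ks.append(ki)
    return ks

def encrypt(x, y, ks):
    for k in ks:
        x, y = round_fn(x, y, k)
    return x, y

# Test vector from the SIMON/Speck design paper:
# key 1918 1110 0908 0100, plaintext 6574 694c -> ciphertext a868 42f2
ks = expand_key(0x0100, [0x0908, 0x1110, 0x1918])
print([hex(w) for w in encrypt(0x6574, 0x694C, ks)])
```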

ePrint Report
Towards Bitcoin Payment Networks
Patrick McCorry, Malte M\"oser, Siamak F. Shahandashti, Feng Hao

Bitcoin as deployed today does not scale. Scalability research has focused on two directions: 1) redesigning the Blockchain protocol, and 2) facilitating `off-chain transactions' and only consulting the Blockchain if an adjudicator is required. In this paper we focus on the latter and provide an overview of Bitcoin payment networks. These consist of two components: payment channels to facilitate off-chain transactions between two parties, and the capability to fairly exchange bitcoins across multiple channels. We compare Duplex Micropayment Channels and Lightning Channels, before discussing Hashed Time Locked Contracts which enable Bitcoin-based payment networks. Finally, we highlight challenges for route discovery in these networks.
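
The hash-locking idea behind HTLCs can be sketched as plain contract logic: the payment is claimable with the preimage of a hash before a timeout, and refundable to the sender afterwards. The class below is a hypothetical illustration of that logic only; real HTLCs are enforced by Bitcoin script:

```python
import hashlib

class HTLC:
    """Toy hashed time-locked contract: claim with the hash preimage
    before the timeout, refund to the sender afterwards."""
    def __init__(self, hash_lock, timeout, sender, receiver):
        self.hash_lock = hash_lock   # sha256 digest the claimant must invert
        self.timeout = timeout       # absolute unix time for the refund path
        self.sender, self.receiver = sender, receiver

    def claim(self, preimage, now):
        return (now < self.timeout and
                hashlib.sha256(preimage).digest() == self.hash_lock)

    def refund(self, who, now):
        return now >= self.timeout and who == self.sender

secret = b"payment secret"
htlc = HTLC(hashlib.sha256(secret).digest(), timeout=1_700_000_000,
            sender="alice", receiver="bob")
print(htlc.claim(secret, now=1_699_999_000))    # -> True
print(htlc.claim(b"wrong", now=1_699_999_000))  # -> False
print(htlc.refund("alice", now=1_700_000_001))  # -> True
```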

ePrint Report
MILP-Based Automatic Search Algorithms for Differential and Linear Trails for Speck
Kai Fu, Meiqin Wang, Yinghua Guo, Siwei Sun, Lei Hu

In recent years, Mixed Integer Linear Programming (MILP) has been successfully applied to searching for differential characteristics and linear approximations in block ciphers, and has produced significant results for ciphers such as SIMON (a family of lightweight and hardware-optimized block ciphers designed by the NSA). However, MILP-based automatic search for differential characteristics and linear approximations has remained infeasible for block ciphers such as ARX constructions. In this paper, we propose an MILP-based method for the automatic search for differential characteristics and linear approximations in ARX ciphers. By studying the properties of differential characteristics and linear approximations of modular addition in ARX ciphers, we present a method to describe them with linear inequalities, under the assumptions of independent inputs to the modular addition and independent rounds. We use this representation as input to the publicly available MILP optimizer Gurobi to search for differential characteristics and linear approximations for ARX ciphers. As an illustration, we apply our method to Speck, a family of lightweight and software-optimized block ciphers designed by the NSA, obtaining improved differential characteristics and linear approximations compared with existing ones. Moreover, we provide improved differential attacks on Speck48, Speck64, Speck96 and Speck128, which are the best attacks on them in terms of the number of rounds.
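
The central object being modelled is the XOR differential probability of addition modulo $2^n$. A sketch of the classical Lipmaa-Moriai formula, cross-checked by exhaustive counting on 4-bit words (the paper encodes conditions of this kind as linear inequalities for the MILP solver; this standalone check is only illustrative):

```python
# XOR differential probability of addition mod 2^n (Lipmaa-Moriai):
# (a, b -> c) is valid iff the LSBs satisfy a0 ^ b0 ^ c0 = 0 and, wherever
# a_i = b_i = c_i, the next XOR bit equals that common carry-in bit; the
# probability is then 2^-s with s the number of non-"all-equal" positions
# among bits 0 .. n-2.

def xdp_add(a, b, c, n):
    mask = (1 << n) - 1
    eq = ~(a ^ b) & ~(a ^ c) & mask               # bits where a, b, c agree
    if (a ^ b ^ c) & 1:                           # carry-free LSB condition
        return 0.0
    if ((eq << 1) & (a ^ b ^ c ^ (b << 1))) & mask:
        return 0.0
    s = bin(~eq & (mask >> 1)).count("1")         # disagreeing bits 0 .. n-2
    return 2.0 ** -s

def xdp_add_exhaustive(a, b, c, n):
    """Brute-force the probability over all input pairs (x, y)."""
    mask = (1 << n) - 1
    hits = sum(((x + y) ^ ((x ^ a) + (y ^ b))) & mask == c
               for x in range(1 << n) for y in range(1 << n))
    return hits / 4 ** n

# spot check on 4-bit words
print(xdp_add(1, 0, 1, 4), xdp_add_exhaustive(1, 0, 1, 4))  # -> 0.5 0.5
```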

ePrint Report
On the Construction of Lightweight Circulant Involutory MDS Matrices
Yongqiang Li, Mingsheng Wang

In the present paper, we investigate the problem of constructing MDS matrices with as few bit XOR operations as possible. The key contribution is to construct MDS matrices whose entries are $m\times m$ non-singular matrices over $\mathbb{F}_2$ directly, without assuming that the linear transformations used are pairwise commutative. With this method, it is shown that circulant involutory MDS matrices, which have been proved not to exist over the finite field $\mathbb{F}_{2^m}$, can be constructed by using non-commutative entries. Some constructions of $4\times4$ and $5\times5$ circulant involutory MDS matrices are given for $m=4,8$. To the best of our knowledge, this is the first time that circulant involutory MDS matrices have been constructed. Furthermore, some lower bounds on the number of XORs required to evaluate one row of circulant and Hadamard MDS matrices of order 4 are given for $m=4,8$. Some constructions achieving the bounds are also given, which require fewer XORs than previous constructions.
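
The XOR-count metric in question is straightforward: multiplying by an $m\times m$ binary matrix costs (row weight $-$ 1) XOR gates per output bit. As a toy illustration (not one of the paper's constructions), multiplication by $\alpha$ in $\mathbb{F}_{2^4}=\mathbb{F}_2[x]/(x^4+x+1)$ costs a single XOR:

```python
# XOR count of an m x m binary matrix M: each output bit is the XOR of the
# input bits selected by one row, costing (row weight - 1) XOR gates.

def xor_count(matrix):
    return sum(sum(row) - 1 for row in matrix)

# Multiplication by alpha in GF(2^4) = GF(2)[x]/(x^4 + x + 1), on the basis
# (1, x, x^2, x^3): alpha * (a0 + a1 x + a2 x^2 + a3 x^3)
#   = a3 + (a0 + a3) x + a1 x^2 + a2 x^3   (since x^4 = x + 1)
MUL_ALPHA = [
    [0, 0, 0, 1],  # out0 = a3
    [1, 0, 0, 1],  # out1 = a0 + a3  (the single XOR)
    [0, 1, 0, 0],  # out2 = a1
    [0, 0, 1, 0],  # out3 = a2
]
print(xor_count(MUL_ALPHA))  # -> 1
```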

ePrint Report
Multiple Differential Cryptanalysis: A Rigorous Analysis
Subhabrata Samajder, Palash Sarkar

Statistical analysis of multiple differential attacks is considered in this paper. Following the work of Blondeau and G\'{e}rard, we study the most general situation of multiple differential attacks, where there are no restrictions on the set of differentials. We obtain closed-form expressions for the data complexity in terms of the success probability and the advantage of an attack. This is done under two scenarios: one where an independence assumption used by Blondeau and G\'{e}rard is assumed to hold, and a second where no such assumption is made. The first case employs the Chernoff bounds, while the second uses the Azuma-Hoeffding bounds from the theory of martingales. In neither case do we make use of any approximations in our analysis. As a consequence, the results are more generally applicable than previous works. The analysis without the independence assumption is the first of its kind in the literature. We believe that the current work places the statistical analysis of multiple differential attacks on a more rigorous foundation than what was previously known.

ePrint Report
A New Test Statistic for Key Recovery Attacks Using Multiple Linear Approximations
Subhabrata Samajder, Palash Sarkar

The log-likelihood ratio (LLR) test statistic has been proposed in the literature for performing statistical analysis of attacks on block ciphers. A limitation of the LLR test statistic is that its application requires full knowledge of the corresponding distribution. Another test statistic proposed in the literature does not have this limitation; however, the statistical analysis using it requires {\em approximating} its distribution by a chi-squared distribution, and problematic issues with such approximations have been reported. This work proposes a new test statistic offering two features: its application does not require knowledge of the underlying distribution, and it is possible to carry out an analysis using it without any approximation. This is made possible by applying the theory of martingales to build a Doob martingale satisfying an appropriate Lipschitz condition, so that the Azuma-Hoeffding inequality can be applied. We experimentally compare the data complexity obtained using the new method with that obtained using the chi-squared-based method, for both simulated distributions and the previously reported distribution for the block cipher SERPENT. In all cases, the data complexity obtained by the new method turns out to be greater. While this may appear to be a drawback of the new method, the new bound is rigorous, whereas the data complexity obtained using the chi-squared method is an approximation with little knowledge about the error in the approximation. So, if a rigorous bound is desired, one has to be satisfied with a more conservative estimate of the data complexity.
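
For readers unfamiliar with the tool, the Azuma-Hoeffding inequality bounds the tail of a martingale with bounded differences: if $|X_i - X_{i-1}| \le c_i$, then $\Pr[X_n - X_0 \ge t] \le \exp(-t^2 / (2\sum_i c_i^2))$. The toy check below compares the bound with the exact tail of a $\pm 1$ random walk, illustrating (on an example unrelated to the paper's data) how the rigorous bound is valid but conservative:

```python
import math

def azuma_bound(t, cs):
    """Azuma-Hoeffding: P(X_n - X_0 >= t) <= exp(-t^2 / (2 * sum c_i^2))."""
    return math.exp(-t * t / (2 * sum(c * c for c in cs)))

def exact_walk_tail(n, t):
    """Exact P(S_n >= t) for a sum of n independent +-1 steps."""
    # S_n >= t  <=>  the number k of +1 steps satisfies 2k - n >= t
    k_min = math.ceil((n + t) / 2)
    return sum(math.comb(n, k) for k in range(k_min, n + 1)) / 2 ** n

n, t = 20, 10
print(exact_walk_tail(n, t), azuma_bound(t, [1] * n))
```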

ePrint Report
On Implementing Pairing-Based Protocols with Elliptic Curves of Embedding Degree One
Sanjit Chatterjee, Alfred Menezes, Francisco Rodriguez-Henriquez

We observe that the conventional classification of pairings into Types 1, 2, 3 and 4 is not applicable to pairings from elliptic curves with embedding degree one. We define three kinds of pairings from these elliptic curves, and discuss some subtleties with using them to implement pairing-based protocols.

In this paper, based on the FV scheme, we construct a first fully homomorphic encryption scheme FHE4FX that can homomorphically compute addition and/or multiplication of encrypted fixed point numbers without knowing the secret key. Then, we show that in the FHE4FX scheme one can efficiently and homomorphically compare the magnitude of two encrypted numbers. That is, one can compute an encryption of the greater-than bit that represents whether or not $x > x'$ given two ciphertexts $c$ and $c'$ (of $x$ and $x'$, respectively) without knowing the secret key. Finally we show that these properties of the FHE4FX scheme enable us to construct a fully homomorphic encryption scheme FHE4FL that can homomorphically compute addition and/or multiplication of encrypted floating point numbers.

ePrint Report
Another Look at RSA Signatures With Affine Padding
Jean-Sébastien Coron, David Naccache, Mehdi Tibouchi

Affine-padding {\sc rsa} signatures consist in signing $\omega\cdot m+\alpha$ instead of the message $m$ for some fixed constants $\omega,\alpha$. A thread of publications progressively reduced the size of $m$ for which affine signatures can be forged in polynomial time. The current bound is $\log m \sim \frac{N}{3}$ where $N$ is the {\sc rsa} modulus' bit-size. Improving this bound to $\frac{N}{4}$ has been an elusive open problem for the past decade.\smallskip

In this invited talk we consider a slightly different problem: instead of minimizing $m$'s size we try to minimize its {\sl entropy}. We show that affine-padding signatures on $\frac{N}{4}$ entropy-bit messages can be forged in polynomial time. This problem has no direct cryptographic impact but allows one to better understand how malleable the {\sc rsa} function is. In addition, the techniques presented in this talk might constitute some progress towards a solution to the longstanding $\frac{N}{4}$ forgery open problem.\smallskip\smallskip

We also exhibit a sub-exponential time technique (faster than factoring) for creating affine modular relations between strings containing three messages of size $\frac{N}{4}$ and a fourth message of size $\frac{3N}{8}$.\smallskip

Finally, we show that $\frac{N}{4}$-relations can be obtained in specific scenarios, {\sl e.g.} when one can pad messages with two independent patterns or when the modulus' most significant bits can be chosen by the opponent.\smallskip
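
Affine padding itself is a one-line transformation applied before textbook {\sc rsa} signing. A toy sketch with classic textbook parameters (the modulus and the constants $\omega, \alpha$ below are arbitrary illustrative values, far too small to be secure):

```python
# Toy affine-padding RSA signature: sign w*m + a instead of m.
# Classic textbook parameters; utterly insecure, illustration only.
p, q, e = 61, 53, 17
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

w, a = 2, 1                          # fixed affine padding constants

def sign(m):
    return pow((w * m + a) % N, d, N)

def verify(m, s):
    return pow(s, e, N) == (w * m + a) % N

s = sign(100)
print(verify(100, s), verify(101, s))  # -> True False
```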

22 April 2016

ePrint Report
Tower Number Field Sieve Variant of a Recent Polynomial Selection Method
Palash Sarkar, Shashank Singh

At Asiacrypt 2015, Barbulescu et al. performed a thorough analysis of the tower number field sieve (TNFS) variant of the number field sieve algorithm. More recently, Kim and Barbulescu combined the TNFS variant with several polynomial selection methods, including the Generalised Joux-Lercier (GJL) method and the Conjugation method proposed by Barbulescu et al. at Eurocrypt 2015. Sarkar and Singh (Eurocrypt 2016) proposed a polynomial selection method which subsumes both the GJL and the Conjugation methods. This study was done in the context of the NFS and the multiple NFS (MNFS). The purpose of the present note is to show that the polynomial selection method of Sarkar and Singh subsumes the GJL and the Conjugation methods also in the context of the TNFS and multiple TNFS variants. This was not clear from the recent work by Kim and Barbulescu. Applying the new polynomial selection method to the TNFS variants results in new asymptotic complexities for certain ranges of primes.

We provide an overview of some of the security issues involved in securely implementing Lalley and Weyl's ``Quadratic Voting'', and
suggest some possible implementation architectures. Our proposals blend ``end-to-end verifiable'' voting methods with anonymous payments. We also consider new refund rules for quadratic voting, such as a ``lottery'' method.
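For readers unfamiliar with Lalley and Weyl's rule, the defining arithmetic is simply that casting $v$ votes on an issue costs $v^2$ voice credits, so marginal votes get progressively more expensive. A two-line illustration:

```python
def qv_cost(votes: int) -> int:
    """Casting v votes on an issue costs v**2 voice credits."""
    return votes * votes

# 1 vote -> 1 credit, 2 votes -> 4 credits, 3 votes -> 9 credits.
costs = [qv_cost(v) for v in range(1, 4)]
```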

21
April
2016

ePrint Report
Slow Motion Zero Knowledge Identifying With Colliding Commitments
Houda Ferradi, Rémi Géraud, David Naccache

Discrete-logarithm authentication protocols are known to present two interesting features: the first is that the prover's commitment, $x=g^r$, accounts for most of the prover's computational effort; the second is that $x$ does not depend on the challenge and can hence be computed in advance. Provers exploit this feature by pre-loading (or pre-computing) ready-to-use commitment pairs $r_i, x_i$. The $r_i$ can be derived from a common seed, but storing each $x_i$ still requires 160 to 256 bits when implementing DSA or Schnorr.
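The pre-computation described above can be sketched as follows; the group parameters and the seed-to-$r_i$ derivation below are illustrative placeholders (far too small and simple for real use), not the schemes' actual choices:

```python
import hashlib

# Toy group parameters (illustrative only; not secure sizes or a safe prime).
p = 2**127 - 1
q = (p - 1) // 2
g = 3

def derive_r(seed: bytes, i: int) -> int:
    """Derive the i-th commitment exponent r_i from a common seed."""
    h = hashlib.sha256(seed + i.to_bytes(4, "big")).digest()
    return int.from_bytes(h, "big") % q

def precompute_commitments(seed: bytes, n: int):
    """Pre-load n ready-to-use pairs (r_i, x_i) with x_i = g**r_i mod p.
    Only the x_i need storage; the r_i are re-derivable from the seed."""
    return [(derive_r(seed, i), pow(g, derive_r(seed, i), p)) for i in range(n)]

pairs = precompute_commitments(b"common-seed", 4)
# At protocol time the prover spends almost no computation on the commitment,
# since each x_i is already available.
```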

This paper proposes a new concept called {\it slow motion zero-knowledge} (SM-ZK). SM-ZK allows the prover to slash commitment size (by a factor of 4 to 6) by combining classical zero-knowledge and a timing side-channel. We pay the conceptual price of requiring the ability to measure time but, in exchange, obtain communication-efficient protocols.

ePrint Report
Algebraic Insights into the Secret Feistel Network (Full version)
Léo Perrin, Aleksei Udovenko

We introduce the high-degree indicator matrix (HDIM), an object closely related with both the linear approximation table and the algebraic normal form (ANF) of a permutation. We show that the HDIM of a Feistel Network contains very specific patterns depending on the degree of the Feistel functions, the number of rounds and whether the Feistel functions are 1-to-1 or not. We exploit these patterns to distinguish Feistel Networks, even if the Feistel Network is whitened using unknown affine layers.

We also present a new type of structural attack exploiting monomials that cannot be present at round $r-1$ to recover the ANF of the last Feistel function of an $r$-round Feistel Network. Finally, we discuss the relations between our findings, integral attacks, cube attacks, Todo's division property and the congruence modulo 4 of the Linear Approximation Table.

ePrint Report
Linear-Time Non-Malleable Codes in the Bit-Wise Independent Tampering Model
Ronald Cramer, Ivan Damgård, Nico Döttling, Irene Giacomelli, Chaoping Xing

Non-malleable codes were introduced by Dziembowski et al. (ICS 2010) as coding schemes that protect a message against tampering attacks. Roughly speaking, a code is non-malleable if decoding an adversarially tampered encoding of a message m produces the original message m or a value m' (possibly $\perp$) completely unrelated to m. Moreover, which of these two events happens is also independent of m. It is known that non-malleability is possible only for restricted classes of tampering functions. Since their introduction, a long line of works has established feasibility results for non-malleable codes against different families of tampering functions. However, for many interesting families the challenge of finding “good” non-malleable codes remains open. In particular, we would like to have explicit constructions of non-malleable codes with high rate and efficient encoding/decoding algorithms (i.e., low computational complexity).

In this work we present two explicit constructions: the first one is a natural generalization of the seminal work of Dziembowski et al. and gives rise to the first constant-rate non-malleable code with linear-time complexity (in a model including bit-wise independent tampering). The second construction is inspired by the recent works on non-malleable codes of Agrawal et al. (TCC 2015) and of Cheraghchi and Guruswami (TCC 2014) and improves the previous result in the bit-wise tampering model: it builds the first non-malleable codes with linear-time complexity and optimal rate (i.e., rate $1 - o(1)$).

In this note, we describe attacks on the recently proposed Haraka hash functions. First, for the two hash functions Haraka-256/256 and Haraka-512/256 in the family, we show how two colliding messages can be constructed in about $2^{16}$ function evaluations. Second, we invalidate the preimage security claim for Haraka-512/256 with an attack finding one preimage in about $2^{192}$ function evaluations. These attacks are possible thanks to symmetries in the internal state that are preserved over several rounds.

ePrint Report
Efficient Beyond-Birthday-Bound-Secure Deterministic Authenticated Encryption with Minimal Stretch
Christian Forler, Eik List, Stefan Lucks, Jakob Wenzel

Block-cipher-based authenticated encryption has obtained considerable attention from the ongoing CAESAR competition. While the focus of CAESAR resides primarily on nonce-based authenticated encryption, Deterministic Authenticated Encryption (DAE) is used in domains such as key wrap, where the available message entropy motivates omitting the overhead of nonces. Since the highest possible security is desirable when protecting keys, beyond-birthday-bound (BBB) security is a valuable goal for DAE. In the past, significant effort had to be invested in designing BBB-secure AE schemes from conventional block ciphers, at the cost of reduced efficiency and more complicated security proofs.

This work proposes Deterministic Counter in Tweak (DCT), a BBB-secure DAE scheme inspired by the Counter-in-Tweak encryption scheme by Peyrin and Seurin. Our design combines a fast $\epsilon$-almost-XOR-universal family of hash functions, for $\epsilon$ close to $2^{-2n}$, with a single call to a $2n$-bit SPRP, and a BBB-secure encryption scheme. First, we describe our construction generically with three independent keys, one for each component. Next, we present an efficient instantiation which (1) requires only a single key, (2) provides software efficiency by encrypting at less than two cycles per byte on current x64 processors, and (3) produces only the minimal $\tau$-bit stretch for $\tau$ bit authenticity. We leave open two minor aspects for future work: our current generic construction is defined for messages of at least $2n-\tau$ bits, and the verification algorithm requires the inverse of the used $2n$-bit SPRP and the encryption scheme.
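The code below is not DCT itself (which combines an $\epsilon$-AXU hash, a $2n$-bit SPRP and a BBB-secure encryption scheme), but a toy SIV-style sketch of the generic DAE interface it implements: the same key and message always yield the same ciphertext, and a deterministic tag doubles as the synthetic IV. HMAC plays the PRF role and a SHA-256 counter keystream stands in for a real cipher; none of this is BBB-secure.

```python
import hashlib
import hmac

def _keystream(key: bytes, iv: bytes, n: int) -> bytes:
    """Toy hash-counter keystream; a stand-in for a real encryption scheme."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def dae_encrypt(k_mac: bytes, k_enc: bytes, msg: bytes) -> bytes:
    # Synthetic IV: a deterministic PRF of the message plays the tag role.
    tag = hmac.new(k_mac, msg, hashlib.sha256).digest()[:16]
    ct = bytes(m ^ s for m, s in zip(msg, _keystream(k_enc, tag, len(msg))))
    return tag + ct  # stretch = the 16-byte tag only

def dae_decrypt(k_mac: bytes, k_enc: bytes, blob: bytes) -> bytes:
    tag, ct = blob[:16], blob[16:]
    msg = bytes(c ^ s for c, s in zip(ct, _keystream(k_enc, tag, len(ct))))
    if not hmac.compare_digest(tag, hmac.new(k_mac, msg, hashlib.sha256).digest()[:16]):
        raise ValueError("authentication failed")
    return msg
```

Note how, as in any DAE scheme, repeated encryption of the same message is detectable by design; the message entropy assumption mentioned above is what makes this acceptable in key-wrap settings.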

ePrint Report
Strengthening the Known-Key Security Notion for Block Ciphers
Benoît Cogliati, Yannick Seurin

We reconsider the formalization of known-key attacks against ideal primitive-based block ciphers. This was previously tackled by Andreeva, Bogdanov, and Mennink (FSE 2013), who introduced the notion of known-key indifferentiability. Our starting point is the observation, previously made by Cogliati and Seurin (EUROCRYPT 2015), that this notion, which considers only a single known key available to the attacker, is too weak in some settings to fully capture what one might expect from a block cipher informally deemed resistant to known-key attacks. Hence, we introduce a stronger variant of known-key indifferentiability, where the adversary is given multiple known keys to ``play'' with, the informal goal being that the block cipher construction must behave as an independent random permutation for each of these known keys. Our main result is that the 9-round iterated Even-Mansour construction (with the trivial key-schedule, i.e., the same round key xored between permutations) achieves our new ``multiple'' known-keys indifferentiability notion, which contrasts with the previous result of Andreeva et al. that one single round is sufficient when only a single known key is considered. We also show that the 3-round iterated Even-Mansour construction achieves the weaker notion of multiple known-keys sequential indifferentiability, which implies in particular that it is correlation intractable with respect to relations involving any (polynomial) number of known keys.
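The iterated Even-Mansour construction with the trivial key schedule, as analyzed in the abstract, XORs one and the same key around a sequence of public permutations. A toy sketch over an 8-bit block, where fixed seeded permutations stand in for the ideal public permutations (all parameters are illustrative only):

```python
import random

BLOCK = 256  # toy 8-bit block space

# Public, fixed permutations P_1..P_r (seeded only for reproducibility).
rng = random.Random(0)
def make_perm():
    p = list(range(BLOCK))
    rng.shuffle(p)
    return p

def iterated_em(perms, key, x):
    """r-round Even-Mansour with trivial key schedule:
    E_k(x) = P_r(... P_1(x XOR k) XOR k ...) XOR k, same key every round."""
    y = x ^ key
    for p in perms:
        y = p[y] ^ key
    return y

perms = [make_perm() for _ in range(9)]  # 9 rounds, as in the paper's main result
inv = [{v: i for i, v in enumerate(p)} for p in perms]

def iterated_em_dec(key, y):
    """Invert the rounds in reverse order using the inverse permutations."""
    for p_inv in reversed(inv):
        y = p_inv[y ^ key]
    return y ^ key
```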

We consider the adjacency graphs of linear feedback shift registers (LFSRs) with reducible characteristic polynomials. Let $l(x)$ be a characteristic polynomial, and let $l(x)=l_1(x)l_2(x)\cdots l_r(x)$ be a decomposition of $l(x)$ into co-prime factors. Firstly, we show a connection between the adjacency graph of FSR$(l(x))$ and the association graphs of FSR$(l_i(x))$, $1\leq i\leq r$. By this connection, the problem of determining the adjacency graph of FSR$(l(x))$ is decomposed into the problem of determining the association graphs of FSR$(l_i(x))$, $1\leq i\leq r$, which is much easier to handle. Then, we study the association graphs of LFSRs with irreducible characteristic polynomials and give a relationship between these association graphs and the cyclotomic numbers over finite fields. Finally, some applications are suggested.
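For small register lengths the cycle structure underlying such adjacency graphs can be enumerated by brute force. A toy sketch, assuming a Fibonacci-style LFSR and using the reducible polynomial $l(x)=x^3+1=(x+1)(x^2+x+1)$ over GF(2) as an example (the tap mask and bit ordering are illustrative conventions, not the paper's notation):

```python
def lfsr_next(state: int, taps: int, n: int) -> int:
    """One step of an n-bit Fibonacci LFSR: the feedback bit is the parity
    of the state bits selected by the taps mask, shifted in at the top."""
    fb = bin(state & taps).count("1") & 1
    return ((state >> 1) | (fb << (n - 1))) & ((1 << n) - 1)

def cycle_structure(taps: int, n: int):
    """Partition all 2**n states into the cycles of the LFSR state graph."""
    seen, cycles = set(), []
    for s in range(1 << n):
        if s in seen:
            continue
        cyc, t = [], s
        while t not in seen:
            seen.add(t)
            cyc.append(t)
            t = lfsr_next(t, taps, n)
        cycles.append(cyc)
    return cycles

# l(x) = x^3 + 1 gives the recurrence s_{t+3} = s_t, i.e. a single tap on bit 0;
# the state map is a rotation, so the 8 states split into 4 cycles.
cycles = cycle_structure(0b001, 3)
```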

The PKC 2017 call for papers is now online. PKC will be held March 28-31 in Amsterdam. The submission deadline is October 6.

Event Calendar
International Cyber Security Workshop and Certificate Program
Istanbul, Turkey, 23 May - 27 May 2016

Event date: 23 May to 27 May 2016

Submission deadline: 15 May 2016

Early registration for Eurocrypt 2016 ends Sunday April 24th.

20
April
2016

Event Calendar
Crypto-CO: First Summer School on Cryptology in Colombia
Bogotá, Colombia, 29 June - 9 July 2016

Event date: 29 June to 9 July 2016