Here you can see all recent updates to the IACR webpage.

23
October
2016

There is an open position for a Post-Doc in the area of post-quantum cryptography and implementations. Applicants must hold a PhD and demonstrate prior work and related publications in the area. Please send your CV for consideration. The position is for one year, with the possibility of extension for another year.

**Closing date for applications:** 1 January 2017

**Contact:** Dr. Reza Azarderakhsh at *razarderakhsh (at) fau.edu.*

Job Posting
Tenure track position in Number Theory
McGill University, Department of Mathematics and Statistics

The Department of Mathematics and Statistics at McGill University invites applications for a tenure-track position in arithmetic geometry or automorphic forms, broadly interpreted. The Department expects to appoint at the Assistant Professor level, but more senior applicants will also be considered.

Candidates must have a doctoral degree at the date of appointment, and must have demonstrated excellence in mathematical research. They must also have the desire and potential to contribute to the educational programs of the Department at the graduate and undergraduate levels.

Applications should be made through MathJobs.Org (Position ID: NUMTH) and should include a curriculum vitae, a list of publications, a research outline, a teaching statement which includes an account of teaching experience, and at least four references (with one addressing the teaching record). Candidates are also encouraged to provide web links for up to three selected reprints or preprints, or to upload them to MathJobs.Org.

Candidates must ensure that letters of reference are submitted, preferably through http://mathjobs.org; in exceptional circumstances they may be emailed to *numtheory.mathstat (at) mcgill.ca*

Review of applications will begin on November 15 and will continue until the position has been filled. For further details or clarifications, please email *numtheory.mathstat (at) mcgill.ca*

McGill University is committed to diversity and equity in employment. It welcomes applications from: women, Aboriginal persons, persons with disabilities, ethnic minorities, persons of minority sexual orientation or gender identity, visible minorities, and others who may contribute to diversification. All qualified applicants are encouraged to apply; however, in accordance with Canadian immigration requirements, Canadians and permanent residents will be given priority.

**Closing date for applications:** 15 November 2016

21
October
2016

Event Calendar
CTCrypt 2017: 6th Workshop on Current Trends in Cryptology
Saint Petersburg, Russia, 5 June - 7 June 2017

Event date: 5 June to 7 June 2017

Submission deadline: 6 February 2017

Notification: 3 April 2017

Event Calendar
IFIP SEC 2017: 32nd IFIP TC-11 International Information Security and Privacy Conference
Rome, Italy, 29 May - 31 May 2017

Event date: 29 May to 31 May 2017

Submission deadline: 23 December 2016

Notification: 24 February 2017

As a postdoctoral researcher, you will join the European research project DECODE (DEcentralised Citizens Owned Data Ecosystem) that aims to develop distributed architectures for decentralised data governance. You will conduct research on decentralised identity and reputation management using attribute-based credentials and distributed ledgers. You will communicate your findings through papers in peer-reviewed research journals and at international conferences. You will also be involved in training and teaching PhD and BSc/MSc students.

**Work environment**

You will join the Digital Security section, headed by Prof. Bart Jacobs. The group works on a broad range of topics in computer security and privacy, including applied cryptography, security protocols, smartcards and RFID, and software security and correctness. We are also interested in societal aspects of digital security, such as privacy and e-voting, and interaction with disciplines outside computer science such as cryptography and law.

The Digital Security section is part of the vibrant and growing Institute for Computing and Information Sciences (iCIS). iCIS is consistently ranked as one of the top Computer Science departments in the Netherlands (National Research Review Computer Science 2002-2008 and 2009-2014).

You will work with Jaap-Henk Hoepman (scientific director of the Privacy & Identity Lab) on the DECODE project. Other partners in this project are Nesta (UK), Institut Municipal d'Informatica de Barcelona (Spain), Arduino Verkstad (Sweden), University College London (UK), Waag Society (Netherlands), Eurecat (Spain) and CNRS (France).

**Closing date for applications:** 20 November 2016

**Contact:** Name: Jaap-Henk Hoepman

Email: *jhh (at) cs.ru.nl*

**More information:** http://www.ru.nl/werken/details/details_vacature_0/?taal=uk&recid=590659

20
October
2016

ePrint Report
Revisiting RC4 Key Collision: Faster Search Algorithm and New 22-byte Colliding Key Pairs
Amit Jana, Goutam Paul

If two different secret keys of a stream cipher yield the same internal state after the key scheduling algorithm (KSA) and hence generate the same sequence of keystream bits, they are called a colliding key pair. The number of possible internal states of the RC4 stream cipher is very large (approximately $2^{1700}$), which makes finding key collisions hard for practical key lengths (i.e., less than 30 bytes). Matsui [FSE 2009] for the first time reported a 24-byte colliding key pair and one 20-byte near-colliding key pair (i.e., one for which the state arrays after the KSA differ in at most two positions) for RC4. Subsequently, Chen and Miyaji [ISC 2011] claimed to design a more efficient search algorithm using Matsui's collision pattern and reported a 22-byte colliding key pair, which remains the shortest known colliding key pair so far. In this paper, we show some limitations of both the above approaches and propose a faster collision search algorithm that overcomes these limitations. Using our algorithm, we are able to find three additional 22-byte colliding key pairs that are different from the one reported by Chen and Miyaji [ISC 2011]. We additionally give 12 new 20-byte near-colliding key pairs different from Matsui's [FSE 2009]. These results are significant, considering the argument by the experts [Biham and Dunkelman, 2007] that for shorter keys there might be no instances of collision at all.
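The collision condition above can be checked directly with the standard RC4 KSA: two keys collide exactly when the KSA leaves the 256-byte state array identical. A minimal sketch (the key values used below are illustrative, not an actual colliding pair):

```python
def rc4_ksa(key: bytes) -> list:
    """Run the standard RC4 key scheduling algorithm; return the state S."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    return S

def colliding(k1: bytes, k2: bytes) -> bool:
    """Keys collide iff they yield the same internal state after the KSA."""
    return rc4_ksa(k1) == rc4_ksa(k2)

def near_collision_distance(k1: bytes, k2: bytes) -> int:
    """Number of positions where the two post-KSA state arrays differ."""
    s1, s2 = rc4_ksa(k1), rc4_ksa(k2)
    return sum(1 for a, b in zip(s1, s2) if a != b)
```

A near-colliding pair in the paper's sense is one where `near_collision_distance` is at most 2.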

In this paper we present a new attack on cryptosystems based on ideal lattices. We show that, if there is one polynomially large entry in the transformation matrix from the trapdoor basis to the public basis, then we can obtain the trapdoor basis. The key point is that a certain class of matrices satisfies the commutative law of multiplication.

ePrint Report
Indiscreet Logs: Persistent Diffie-Hellman Backdoors in TLS
Kristen Dorey, Nicholas Chang-Fong, Aleksander Essex

Software implementations of discrete logarithm based cryptosystems over finite fields typically make the assumption that any domain parameters they are presented with are trustworthy, i.e., the parameters implement cyclic groups where the discrete logarithm problem is assumed to be hard. An informal and widespread justification seemingly exists: validating parameters at run time is considered too computationally expensive relative to the perceived risk of a server sabotaging the privacy of its own connection. In this paper we explore this trust assumption and examine situations where it may not always be justified.

We conducted an investigation of discrete logarithm domain parameters in use across the Internet and discovered evidence of a multitude of potentially backdoored moduli of unknown order in TLS and STARTTLS spanning numerous countries, organizations, and protocols. Although our disclosures resulted in a number of organizations taking down suspicious parameters, we argue the potential for TLS backdoors is systematic and will persist either until better parameter hygiene is taken up by the community, or until finite field based cryptography is eliminated altogether.
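The run-time validation the authors argue is being skipped can be sketched as follows. This is an illustrative check for safe-prime parameters (where the group order is known by construction), not code from the paper:

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin primality test (probabilistic)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def validate_dh_params(p: int, g: int) -> bool:
    """Accept p only if it is a safe prime (p = 2q + 1, q prime), so the
    modulus has known order, and g generates the order-q subgroup."""
    if not is_probable_prime(p):
        return False
    q = (p - 1) // 2
    if not is_probable_prime(q):
        return False                    # modulus of unknown order: reject
    if not (1 < g < p - 1):
        return False
    return pow(g, q, p) == 1            # g lies in the order-q subgroup
```

With a toy safe prime such as p = 23 (q = 11), g = 2 passes; a modulus whose order cannot be verified is rejected, which is precisely the "parameter hygiene" the paper calls for.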

ePrint Report
Cryptanalyses of Candidate Branching Program Obfuscators
Yilei Chen, Craig Gentry, Shai Halevi

We describe new cryptanalytic attacks on the candidate branching-program obfuscator proposed by Garg, Gentry, Halevi, Raykova, Sahai and Waters (GGHRSW) using the GGH13 graded encoding, and its variant using the GGH15 graded encoding as specified by Gentry, Gorbunov and Halevi. All our attacks require very specific structure of the branching programs being obfuscated (which in particular must have some input-partitioning property). Common to all our attacks are techniques to extract information about the "multiplicative bundling" scalars that are used in the GGHRSW construction.

For the GGHRSW obfuscator over GGH13, we show how to recover the ideal generating the plaintext space when the branching program has input partitioning. Combined with the information that we extract about the "multiplicative bundling" scalars, this lets us extend the annihilation attack of Miles, Sahai and Zhandry to handle the GGHRSW block-randomization of the branching-program matrices. (We stress that our attack does not break the candidate obfuscators of Miles et al. and Garg et al. (ePrint 2016/588, 2016/817), since their dual-input branching programs are inherently not partitionable.) Alternatively, once we have the ideal we can solve the principal-ideal problem (PIP) in classical subexponential time or quantum polynomial time, hence obtaining a total break.

For the GGH15 variant, we show how to use the left-kernel technique of Coron, Lee, Lepoint and Tibouchi to recover ratios of the bundling scalars. Once we have the ratios of the scalar products, we can use factoring and PIP solvers (in classical subexponential time or quantum polynomial time) to find the scalars themselves, then run mixed-input attacks to break the obfuscation.

ePrint Report
Efficient Commitments and Zero-Knowledge Protocols from Ring-SIS with Applications to Lattice-based Threshold Cryptosystems
Carsten Baum, Ivan Damgård, Sabine Oechsner, Chris Peikert

We present an additively homomorphic commitment scheme with hardness based on the Ring-SIS problem. Our construction is statistically hiding as well as computationally binding and allows to commit to a vector of ring elements at once.
We show how to instantiate efficient zero-knowledge protocols that can be used to prove a number of relations among these commitments, and apply these in the context of lattice-based threshold cryptosystems: we give a generic transformation that can be used with certain (Ring-)LWE-based encryption schemes to make their algorithms actively secure. We show how this transformation can be used to implement distributed decryption with malicious security as well as maliciously secure threshold key generation in an efficient way.
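The additive homomorphism that makes such commitments useful in threshold protocols can be illustrated with a toy, unstructured SIS-style commitment over integer vectors. The matrices, modulus, and dimensions below are arbitrary toy values; this is not the paper's Ring-SIS scheme and provides no hiding, binding, or security whatsoever:

```python
import random

# Toy commitment com(m; r) = A*r + B*m (mod q) over integer vectors,
# demonstrating only the additive homomorphism:
# com(m1; r1) + com(m2; r2) = com(m1 + m2; r1 + r2).
q, n, k = 97, 4, 6                  # toy modulus and dimensions (hypothetical)
rng = random.Random(0)
A = [[rng.randrange(q) for _ in range(k)] for _ in range(n)]
B = [[rng.randrange(q) for _ in range(k)] for _ in range(n)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) % q for i in range(len(M))]

def commit(m, r):
    """Commit to message vector m with randomness vector r."""
    return [(a + b) % q for a, b in zip(matvec(A, r), matvec(B, m))]

def vadd(u, v):
    return [(a + b) % q for a, b in zip(u, v)]
```

The homomorphism follows from linearity of the matrix-vector products mod q; in the paper, the same property over a polynomial ring is what allows zero-knowledge proofs of relations among committed values.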

ePrint Report
Leakage-Resilient and Misuse-Resistant Authenticated Encryption
Francesco Berti, François Koeune, Olivier Pereira, Thomas Peters, François-Xavier Standaert

Leakage-resilience and misuse-resistance are two important properties for the deployment of authenticated encryption schemes. They aim at mitigating the impact of implementation flaws due to side-channel leakages and misused randomness. In this paper, we discuss their interactions and incompatibilities. For this purpose, we first show that a generic composition of existing leakage-resilient MAC and encryption schemes leads to a misuse-resistant authenticated encryption mode (without leakage). Next we show that this misuse-resistance does not hold in case the adversary can additionally observe leakages, and that misuse-resistance with leakage may be impossible to achieve with simple primitives such as hash functions and block ciphers. As a result, we formalize a new security notion of ciphertext integrity with misuse and leakage, which seems the best that can be achieved in a symmetric cryptographic setting, and describe first efficient constructions satisfying it.

18
October
2016

Job Posting
Research Fellow (Post-Doc) in Security and Privacy
University of Surrey, Surrey Centre for Cyber Security

We have an open position for a post-doctoral researcher (contract duration 2.5 years, salary GBP 31,076.- per annum, starting March/April 2017) at Surrey Centre for Cyber Security (SCCS) within the Department of Computer Science, University of Surrey to work on a new EPSRC-funded project called TAPESTRY that will investigate, develop and demonstrate new cryptographic protocols to enable people, businesses and services to connect safely online.

The successful candidate will be working on the design, security analysis and implementation of privacy-oriented cryptographic protocols that use zero-knowledge proofs, distributed ledger technologies and authentication mechanisms.

More information about the position and the application process is available at **https://jobs.surrey.ac.uk/vacancy.aspx?ref=079116**

The candidate will be working with **Dr Mark Manulis**, Deputy Director of SCCS. Applications must be made through the online portal. Questions can be sent to **m.manulis** AT

**Closing date for applications:** 18 November 2016

Fully funded positions for a PhD student (3 years) and/or a Post-Doc (1 or 2 years) on ID-based cryptography and advanced primitives (Broadcast Encryption, Functional Encryption) at XLIM, University of Limoges.

Applicants should hold a Master's degree (for the PhD position) or a PhD (for the post-doc position) in Computer Science, Mathematics, or a related discipline. Additional skills in complexity, coding theory, and/or software development will also be appreciated.

The successful applicant will participate in the project ALAMBIC (AppLicAtions of MalleaBIlity in Cryptography) and IDFIX (IDentity-based cryptography For Identification and eXchange) financed by the French governmental research funding agency ANR (Agence Nationale de la Recherche).

International applicants are welcome but must be eligible for an appropriate visa before starting the position.

Applications will be considered until the positions are filled; ideally the positions would start in September or October 2017.

**Closing date for applications:**

**Contact:** Applications and questions should be directed as soon as possible to:

- Olivier Blazy, *olivier.blazy (at) unilim.fr*
- Duong Hieu Phan, *duong-hieu.phan (at) unilim.fr*

The candidate will have the opportunity of carrying out her/his Ph.D. program around the study and development of new advanced cryptographic mechanisms with a view to applications to secure cloud systems and data privacy.

We welcome candidates with an MSc degree in computer science, mathematics, engineering and related areas.

Good analytical skills and familiarity with mathematical reasoning are mandatory. Strong background in theoretical computer science and principles of modern cryptography are an advantage.

The candidate will be supervised by Dr. Vincenzo Iovino and Prof. Peter Ryan, head of APSIA, a large and very active research group conducting research on many aspects of information assurance, from mathematical foundations to usability concerns.

The University offers highly competitive salaries, excellent working conditions and may assist in finding accommodation.

**Closing date for applications:** 31 December 2016

**Contact:**

Dr. Vincenzo Iovino, *vincenzo.iovino (at) uni.lu*

Prof. Peter Ryan, *peter.ryan (at) uni.lu*

One or more postdoc positions are vacant at the Cryptography and Security Group of Aarhus University (http://aarhuskrypto.dk/).

Applicants should be able to show solid expertise (in the form of scientific publications at major crypto/security venues such as CRYPTO, EUROCRYPT, ASIACRYPT, TCC, PKC, ACM CCS, IEEE S&P, USENIX Security, etc.) in the design and/or implementation of cryptographic protocols. Topics of particular interest include: secure multiparty computation, zero-knowledge, homomorphic encryption, oblivious RAM, differential-privacy, cryptocurrencies, etc.

The closing date is nominal only. Write to Claudio Orlandi (orlandi AT cs au dk) for more information.

(PhD positions are available too, see earlier posting).

**Closing date for applications:** 31 December 2016

**Contact:** Claudio Orlandi, Associate Professor,

orlandi AT cs au dk

The Department of Computer Science at the University of Illinois at Urbana-Champaign invites applications for several faculty positions at all levels and in all areas of Computer Science. Applicants from both traditional as well as non-traditional and interdisciplinary areas of computer science are encouraged to apply.

Qualified senior candidates may also be considered for tenured full Professor positions as part of the Grainger Engineering Breakthroughs Initiative, which is backed by a $100-million gift from the Grainger Foundation. Over the next few years, more than 35 new endowed professorships and chairs will be established, which will provide incredible opportunities for world-renowned researchers.

**Closing date for applications:** 6 January 2017

**Contact:** Carl A. Gunter

**More information:** https://cs.illinois.edu/about-us/faculty-positions

17
October
2016

ePrint Report
Measuring small subgroup attacks against Diffie-Hellman
Luke Valenta, David Adrian, Antonio Sanso, Shaanan Cohney, Joshua Fried, Marcella Hastings, J. Alex Halderman, Nadia Heninger

Several recent standards, including NIST SP 800-56A and RFC 5114, advocate the use of “DSA” parameters for Diffie-Hellman key exchange. While it is possible to use such parameters securely, additional validation checks are necessary to prevent well-known and potentially devastating attacks. In this paper, we observe that many Diffie-Hellman implementations do not properly validate key exchange inputs. Combined with other protocol properties and implementation choices, this can radically decrease security. We measure the prevalence of these parameter choices in the wild for HTTPS, POP3S, SMTP with STARTTLS, SSH, IKEv1, and IKEv2, finding millions of hosts using DSA and other non-“safe” primes for Diffie-Hellman key exchange, many of them in combination with potentially vulnerable behaviors. We examine over 20 open-source cryptographic libraries and applications and observe that until January 2016, not a single one validated subgroup orders by default. We found feasible full or partial key recovery vulnerabilities in OpenSSL, the Exim mail server, the Unbound DNS client, and Amazon’s load balancer, as well as susceptibility to weaker attacks in many other applications.
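The subgroup-order validation found missing from the surveyed libraries is a short check: with DSA-style parameters (p, q, g), a received public share y must lie in the order-q subgroup. A sketch using toy parameters (p = 23, q = 11, chosen purely for illustration):

```python
def validate_dh_share(y: int, p: int, q: int) -> bool:
    """Reject a peer's Diffie-Hellman share y unless it lies in the
    expected subgroup of order q, preventing small-subgroup attacks."""
    if not (1 < y < p - 1):      # rules out 0, 1, and p-1 (order <= 2)
        return False
    return pow(y, q, p) == 1     # y is in the order-q subgroup
```

For p = 23, q = 11, a legitimate share such as 9 (= 2^5 mod 23) passes; 5, which lies outside the order-11 subgroup, and the order-2 element 22 are both rejected. Without this check, an attacker-supplied y of small order leaks the private exponent modulo that order.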

ePrint Report
Improving Authenticated Dynamic Dictionaries, with Applications to Cryptocurrencies
Leonid Reyzin, Dmitry Meshkov, Alexander Chepurnoy, Sasha Ivanov

We improve the design and implementation of two-party and three-party authenticated dynamic dictionaries and apply these dictionaries to cryptocurrency ledgers.

A public ledger (blockchain) in a cryptocurrency needs to be easily verifiable. However, maintaining a data structure of all account balances, in order to verify whether a transaction is valid, can be quite burdensome: a verifier who does not have the large amount of RAM required for the data structure will perform slowly because of the need to continually access secondary storage. We use experiments to demonstrate that authenticated dynamic dictionaries can considerably reduce verifier load. On the other hand, per-transaction proofs generated by authenticated dictionaries increase the size of the blockchain, which motivates us to find a solution with the most compact proofs.

Our improvements to the design of authenticated dictionaries reduce proof size and speed up verification by 1.4-2.5 times, making them better suited for the cryptocurrency application. We further show that proofs for multiple transactions in a single block can be compressed together, reducing their total length by approximately an additional factor of 2.
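The verifier-side idea can be illustrated with a generic binary Merkle membership proof. The paper's actual structure is an authenticated AVL+ tree supporting dynamic updates; this sketch only shows why a verifier needs nothing beyond a short root digest:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root digest of a binary Merkle tree over the given leaves."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])        # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Authentication path for the leaf at `index`: (sibling, is_right_child)."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(root, leaf, path):
    """Check a membership proof against a stored root digest only."""
    digest = h(leaf)
    for sibling, is_right in path:
        digest = h(sibling + digest) if is_right else h(digest + sibling)
    return digest == root
```

The proof length is logarithmic in the number of leaves, which is the per-transaction overhead the paper's compression techniques then reduce further.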

ePrint Report
Comparing Sboxes of Ciphers from the Perspective of Side-Channel Attacks
Liran Lerman, Olivier Markowitch, Nikita Veshchikov

Side-channel attacks exploit physical characteristics of implementations of cryptographic algorithms in order to extract sensitive information such as the secret key. These physical attacks are among the most powerful attacks against real-world cryptosystems. This paper analyses the non-linear part (called Sboxes) of ciphers, which is often targeted by implementation attacks. We analyse Sboxes of several candidates that were submitted to the competition on authenticated encryption (CAESAR) as well as several other ciphers. We compare theoretical metrics with results from simulations and with real experiments. In this paper, we demonstrate that, in some contexts, the theoretical metrics provide no information on the resiliency of the Sboxes against side-channel attacks.

ePrint Report
Estimating the cost of generic quantum pre-image attacks on SHA-2 and SHA-3
Matthew Amy, Olivia Di Matteo, Vlad Gheorghiu, Michele Mosca, Alex Parent, John Schanck

We investigate the cost of Grover's quantum search algorithm when used in the context of pre-image attacks on the SHA-2 and SHA-3 families of hash functions. Our cost model assumes that the attack is run on a surface code based fault-tolerant quantum computer. Our estimates rely on a time-area metric that costs the number of logical qubits times the depth of the circuit in units of surface code cycles. As a surface code cycle involves a significant classical processing stage, our cost estimates allow for crude, but direct, comparisons of classical and quantum algorithms.

We exhibit a circuit for a pre-image attack on SHA-256 that is approximately $2^{153.8}$ surface code cycles deep and requires approximately $2^{12.6}$ logical qubits. This yields an overall cost of $2^{166.4}$ logical-qubit-cycles. Likewise we exhibit a SHA3-256 circuit that is approximately $2^{146.5}$ surface code cycles deep and requires approximately $2^{20}$ logical qubits for a total cost of, again, $2^{166.5}$ logical-qubit-cycles. Both attacks require on the order of $2^{128}$ queries in a quantum black-box model, hence our results suggest that executing these attacks may be as much as $275$ billion times more expensive than one would expect from the simple query analysis.
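The headline figures combine multiplicatively: total cost in logical-qubit-cycles is circuit depth times qubit count, so the base-2 exponents add. A quick arithmetic check using the abstract's numbers (the quoted factor of $275$ billion corresponds numerically to roughly $2^{38}$):

```python
# Depth and qubit counts from the abstract, as base-2 exponents.
sha256_total = 153.8 + 12.6      # log2(depth) + log2(qubits) = 166.4
sha3_total   = 146.5 + 20.0      # = 166.5

# The reported overhead factor of "275 billion" is approximately 2^38,
# the gap between ~2^166 qubit-cycles and the ~2^128 query count.
overhead = 2.0 ** 38             # ~2.75e11
```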

ePrint Report
Bootstrapping the Blockchain --- Directly
Juan A. Garay, Aggelos Kiayias, Nikos Leonardos, Giorgos Panagiotakos

The Bitcoin backbone protocol [Eurocrypt 2015] extracts basic properties of Bitcoin's underlying *blockchain* data structure, such as "common prefix" and "chain quality," and shows how fundamental applications including consensus and a robust public transaction ledger can be built on top of them. The underlying assumptions are "proofs of work" (POWs), adversarial hashing power strictly less than $1/2$ *and* no adversarial pre-computation, or, alternatively, the existence of an unpredictable "genesis" block.

In this paper we show how to remove the latter assumption, presenting a "bootstrapped" Bitcoin-like blockchain protocol relying on POWs that builds genesis blocks "from scratch" in the presence of adversarial pre-computation. The only known previous result in the same setting (unauthenticated parties, no trusted setup) [Crypto 2015] is indirect in the sense of creating a PKI first and then employing conventional PKI-based authenticated communication.

With our construction we establish that consensus can be solved directly by a blockchain protocol *without* trusted setup assuming an honest majority (in terms of computational power). We also formalize *miner unlinkability*, a privacy property for blockchain protocols, and demonstrate that our protocol retains the same level of miner unlinkability as Bitcoin itself.
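A minimal proof-of-work search/verify loop of the kind the backbone abstraction builds on (toy difficulty and SHA-256, purely illustrative; the paper's protocol uses such POWs to construct genesis blocks despite adversarial pre-computation):

```python
import hashlib

def pow_search(prefix: bytes, target_bits: int) -> int:
    """Find a nonce whose hash has at least target_bits leading zero bits."""
    target = 1 << (256 - target_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(prefix + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def pow_verify(prefix: bytes, nonce: int, target_bits: int) -> bool:
    """Verification is a single hash evaluation, unlike the search."""
    digest = hashlib.sha256(prefix + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - target_bits))
```

The asymmetry (expensive search, one-hash verification) is what lets hashing power stand in for identities in the unauthenticated setting.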

ePrint Report
Revisiting the Wrong-Key-Randomization Hypothesis
Tomer Ashur, Tim Beyne, Vincent Rijmen

Linear cryptanalysis can be considered to be one of the strongest techniques in the cryptanalyst's arsenal. In most cases, Matsui's Algorithm 2 is used for the key recovery part of the attack. The success rate analysis of this algorithm is based on an assumption regarding the bias of a linear approximation for a wrong key, known as the wrong-key-randomization hypothesis.
This hypothesis was refined by Bogdanov and Tischhauser to take into account the stochastic nature of the bias for a wrong key.
We provide further refinements to the analysis of Matsui's Algorithm 2 by considering the more natural setting of sampling without replacement.
This paper derives the distribution for the observed bias for wrong keys when sampling is done without replacement and shows that less data is required when duplicate pairs are discarded.
It also develops formulas for the success probability and the required data complexity when this approach is taken. The formulas predict that the success probability may reach a peak, then decrease as more pairs are considered. We provide a new explanation for this behavior and derive the conditions for encountering it. We empirically verify our results and compare them to previous work.
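The bias at the heart of Matsui's Algorithm 2 can be computed exhaustively for a small component. The sketch below uses the 4-bit PRESENT Sbox as an illustrative target (my choice of example, not one from the paper): for an input mask a and output mask b, $\mathrm{bias}(a,b) = |\#\{x : \langle a,x\rangle = \langle b,S(x)\rangle\}/16 - 1/2|$.

```python
# 4-bit PRESENT Sbox, used here only as a concrete example.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def parity(x: int) -> int:
    """Parity of the bits of x, i.e. <mask, value> over GF(2)."""
    return bin(x).count("1") & 1

def bias(a: int, b: int) -> float:
    """Signed bias of the linear approximation <a,x> = <b,S(x)>."""
    agree = sum(1 for x in range(16) if parity(a & x) == parity(b & SBOX[x]))
    return agree / 16 - 0.5
```

Exhausting all nonzero mask pairs recovers the standard result that this Sbox's best linear approximation has bias 1/4; Algorithm 2 estimates such biases from sampled plaintext/ciphertext pairs, which is where the with/without-replacement distinction studied in the paper enters.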

ePrint Report
Scrypt is Maximally Memory-Hard
Joël Alwen, Binyi Chen, Krzysztof Pietrzak, Leonid Reyzin, Stefano Tessaro

Memory-hard functions (MHFs) are hash algorithms whose evaluation cost is dominated by memory cost. As memory, unlike computation, costs about the same across different platforms, MHFs cannot be evaluated at significantly lower cost on dedicated hardware like ASICs. MHFs have found widespread applications including password hashing, key derivation, and proofs-of-work.

This paper focuses on scrypt, a simple candidate MHF designed by Percival, and described in RFC 7914. It has been used within a number of cryptocurrencies (e.g., Litecoin and Dogecoin) and has been an inspiration for Argon2d, one of the winners of the recent password-hashing competition. Despite its popularity, no rigorous lower bounds on its memory complexity are known.

We prove that scrypt is optimally memory hard, i.e., its cumulative memory complexity (cmc) in the parallel random oracle model is $\Omega(n^2 w)$, where $w$ and $n$ are the output length and number of invocations of the underlying hash function, respectively. High cmc is a strong security target for MHFs introduced by Alwen and Serbinenko (STOC '15) which implies high memory cost even for adversaries who can amortise the cost over many evaluations and evaluate the underlying hash functions many times in parallel. Our proof is the first showing optimal memory hardness for any MHF.

Our result improves both quantitatively and qualitatively upon the recent work by Alwen et al. (EUROCRYPT '16) who proved a weaker lower bound of $\Omega(n^2 w /\log^2 n)$ for a restricted class of adversaries.
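The sequential memory-hard structure in question is scrypt's ROMix loop (RFC 7914). The sketch below substitutes SHA-256 for the real BlockMix/Salsa20/8 internals, so it reproduces only the memory-access pattern, not scrypt itself:

```python
import hashlib

def H(x: bytes) -> bytes:
    """Stand-in for scrypt's BlockMix; illustrative only."""
    return hashlib.sha256(x).digest()

def romix(block: bytes, n: int) -> bytes:
    V, X = [], block
    for _ in range(n):                              # phase 1: fill n blocks of memory
        V.append(X)
        X = H(X)
    for _ in range(n):                              # phase 2: n data-dependent reads
        j = int.from_bytes(X[-8:], "little") % n    # integerify(X)
        X = H(bytes(a ^ b for a, b in zip(X, V[j])))
    return X
```

Because the index j in phase 2 depends on the running value X, the n stored blocks cannot be cheaply recomputed on demand; the paper proves this structure forces cumulative memory complexity $\Omega(n^2 w)$ even against parallel, amortising adversaries.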

ePrint Report
On Probabilistic Checking in Perfect Zero Knowledge
Eli Ben-Sasson, Alessandro Chiesa, Michael A. Forbes, Ariel Gabizon, Michael Riabzev, Nicholas Spooner

We present the first constructions of *single*-prover proof systems that achieve *perfect* zero knowledge (PZK) for languages beyond NP, under no intractability assumptions:

1. The complexity class #P has PZK proofs in the model of Interactive PCPs (IPCPs) [KR08], where the verifier first receives from the prover a PCP and then engages with the prover in an Interactive Proof (IP).

2. The complexity class NEXP has PZK proofs in the model of Interactive Oracle Proofs (IOPs) [BCS16,RRR16], where the verifier, in every round of interaction, receives a PCP from the prover.

Unlike PZK multi-prover proof systems [BGKW88], PZK single-prover proof systems are elusive: PZK IPs are limited to AM ∩ coAM [F87,AH91], while known PCPs and IPCPs achieve only *statistical* simulation [KPT97,GIMS10]. Recent work [BCGV16] has achieved PZK for IOPs, but only for languages in NP, while our results go beyond NP.

Our constructions rely on *succinct* simulators that enable us to "simulate beyond NP", achieving exponential savings in efficiency over [BCGV16]. These simulators crucially rely on solving a problem that lies at the intersection of coding theory, linear algebra, and computational complexity, which we call the *succinct constraint detection* problem, and consists of detecting dual constraints with polynomial support size for codes of exponential block length. Our two results rely on solutions to this problem for fundamental classes of linear codes:

* An algorithm to detect constraints for Reed--Muller codes of exponential length.

* An algorithm to detect constraints for PCPs of Proximity of Reed--Solomon codes [BS08] of exponential degree.

The first algorithm exploits the Raz--Shpilka [RS05] deterministic polynomial identity testing algorithm, and shows, to our knowledge, a first connection of algebraic complexity theory with zero knowledge. Along the way, we give a perfect zero knowledge analogue of the celebrated sumcheck protocol [LFKN92], by leveraging both succinct constraint detection and low-degree testing. The second algorithm exploits the recursive structure of the PCPs of Proximity to show that small-support constraints are "locally" spanned by a small number of small-support constraints.
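The classical sumcheck protocol [LFKN92], of which the paper gives a perfect zero knowledge analogue, can be sketched for the honest-prover case. The toy Python below runs sumcheck for a multilinear polynomial over a prime field, where each round polynomial is linear and is therefore determined by its values at 0 and 1 (the field size and example polynomials are illustrative choices, not from the paper):

```python
import random
from itertools import product

P = 2**31 - 1  # Mersenne prime field size (illustrative choice)

def sumcheck(g, n, rng=random):
    """Honest-prover sumcheck for a multilinear g: F_P^n -> F_P.
    Returns True iff the verifier accepts the claim sum_{x in {0,1}^n} g(x)."""
    claim = sum(g(list(b)) for b in product([0, 1], repeat=n)) % P
    fixed = []  # random challenges chosen so far
    for i in range(n):
        rest = n - i - 1
        # Prover: for multilinear g the round polynomial s_i(X) is linear,
        # so sending its values at 0 and 1 determines it completely.
        s0 = sum(g(fixed + [0] + list(b)) for b in product([0, 1], repeat=rest)) % P
        s1 = sum(g(fixed + [1] + list(b)) for b in product([0, 1], repeat=rest)) % P
        # Verifier: the two values must sum to the running claim.
        if (s0 + s1) % P != claim:
            return False
        r = rng.randrange(P)
        claim = (s0 + r * (s1 - s0)) % P  # s_i(r) by linear interpolation
        fixed.append(r)
    # Final check: a single oracle query to g at the random point.
    return claim == g(fixed) % P
```

Note that the prover's messages here leak partial sums of $g$; the paper's contribution is precisely a variant in which the verifier learns nothing beyond the claim.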

ePrint Report
A Key to Success -- Success Exponents for Side-Channel Distinguishers
Sylvain Guilley, Annelie Heuser, Olivier Rioul

The success rate is the classical metric for evaluating the performance of side-channel attacks. It is generally computed empirically from measurements for a particular device or using simulations. Closed-form expressions of the success rate are desirable because they provide an explicit functional dependence on relevant parameters, such as the number of measurements and the signal-to-noise ratio, which helps to understand the effectiveness of a given attack and how one can mitigate its threat with countermeasures. However, such closed-form expressions involve high-dimensional complex statistical functions that are hard to estimate.

In this paper, we define the success exponent (SE) of an arbitrary side-channel distinguisher as the first-order exponent of the success rate as the number of measurements increases. Under fairly general assumptions such as soundness, we give a simple general formula for an arbitrary distinguisher and derive closed-form expressions for DoM, CPA, MIA, and the optimal distinguisher when the model is known (the template attack). For DoM and CPA, our results are in line with the literature.

Experiments confirm that the theoretical closed-form expression of the SE coincides with the empirically computed one, even for reasonably small numbers of measurements. Finally, we highlight that our study raises many new perspectives for comparing and evaluating side-channel attacks, countermeasures and implementations.
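As a rough illustration of a first-order exponent of the success rate, one can model $\mathrm{SR}(q) \approx 1 - e^{-\mathrm{SE}\cdot q}$ in the number of measurements $q$ and fit SE from an empirical success-rate curve. The sketch below uses this model form and synthetic data as illustrative assumptions, not the paper's exact formulas:

```python
import math, random

def estimate_success_exponent(success_rates):
    """Least-squares fit of -ln(1 - SR(q)) = SE * q through the origin,
    following the first-order model SR(q) ~ 1 - exp(-SE * q)."""
    num = sum(q * -math.log(1.0 - sr) for q, sr in success_rates.items() if sr < 1.0)
    den = sum(q * q for q, sr in success_rates.items() if sr < 1.0)
    return num / den

# Synthetic success-rate curve with a known exponent, plus a little noise.
true_se = 0.05
rng = random.Random(1)
data = {q: 1 - math.exp(-true_se * q) * rng.uniform(0.95, 1.05)
        for q in range(10, 200, 10)}
```

Fitting the exponent rather than the full success-rate curve is what makes the comparison of distinguishers a one-parameter affair.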

We give a systematic overview of techniques to compute efficient arithmetic modulo $2^xp^y\pm 1$. This is useful for computations in the supersingular isogeny Diffie-Hellman (SIDH) key-exchange protocol, which is one of the more recent contenders in the post-quantum public-key arena. One of the main computational bottlenecks in this key-exchange protocol is computing modular arithmetic in a finite field defined by a prime of this special shape.

Recent implementations already use this special prime shape to speed up the cryptographic implementations but it remains unclear if the choices made are optimal or if one can do better. Our overview shows that in the SIDH setting, where arithmetic over a quadratic extension field is required, the approaches based on Montgomery multiplication are to be preferred. Furthermore, the outcome of our search reveals that there exist moduli which result in even faster implementations.
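For instance, SIDH primes $p = 2^x 3^y - 1$ satisfy $p \equiv 3 \pmod 4$, so the quadratic extension can be taken as $\mathbb{F}_{p^2} = \mathbb{F}_p(i)$ with $i^2 = -1$. Below is a toy sketch of $\mathbb{F}_{p^2}$ multiplication with the Karatsuba-style three-multiplication trick, using plain Python integers rather than the optimized Montgomery arithmetic the overview surveys:

```python
# Toy SIDH-like prime: p = 2^4 * 3^3 - 1 = 431 (prime, and p ≡ 3 mod 4).
p = 2**4 * 3**3 - 1

def fp2_mul(a, b):
    """Multiply a = a0 + a1*i and b = b0 + b1*i in F_{p^2} = F_p[i]/(i^2 + 1)
    with three base-field multiplications instead of four."""
    a0, a1 = a
    b0, b1 = b
    t0 = a0 * b0 % p
    t1 = a1 * b1 % p
    # (a0 + a1)(b0 + b1) - t0 - t1 = a0*b1 + a1*b0
    cross = ((a0 + a1) * (b0 + b1) - t0 - t1) % p
    return ((t0 - t1) % p, cross)
```

In real SIDH implementations each `% p` becomes a special-shape modular reduction, which is exactly the operation whose variants the paper compares.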

15
October
2016

ePrint Report
Hash First, Argue Later: Adaptive Verifiable Computations on Outsourced Data
Dario Fiore, Cédric Fournet, Esha Ghosh, Markulf Kohlweiss, Olga Ohrimenko, Bryan Parno

Proof systems for verifiable computation (VC) have the potential to make cloud outsourcing more trustworthy. Recent schemes enable a verifier with limited resources to delegate large computations and verify their outcome based on succinct arguments: verification complexity is linear in the size of the inputs and outputs (not the size of the computation). However, cloud computing also often involves large amounts of data, which may exceed the local storage and I/O capabilities of the verifier, and thus limit the use of VC.

In this paper, we investigate multi-relation hash & prove schemes for verifiable computations that operate on succinct data hashes. Hence, the verifier delegates both storage and computation to an untrusted worker. She uploads data and keeps hashes; exchanges hashes with other parties; verifies arguments that consume and produce hashes; and selectively downloads the actual data she needs to access.

Existing instantiations that fit our definition either target restricted classes of computations or employ relatively inefficient techniques. Instead, we propose efficient constructions that lift classes of existing argument schemes for fixed relations to multi-relation hash & prove schemes. Our schemes (1) rely on hash algorithms that run linearly in the size of the input; (2) enable constant-time verification of arguments on hashed inputs; and (3) incur minimal overhead for the prover. Their main benefit is to amortize the linear cost for the verifier across all relations with shared I/O. Concretely, compared to solutions that can be obtained from prior work, our new hash & prove constructions yield a 1,400x speed-up for provers. We also explain how to further reduce the linear verification costs by partially outsourcing the hash computation itself, obtaining a 480x speed-up when applied to existing VC schemes, even on single-relation executions.
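The workflow can be illustrated with a toy hash-first sketch in Python, where a plain recomputation check stands in for the succinct argument (all names here are hypothetical; a real hash & prove scheme verifies a short argument in constant time without downloading the data):

```python
import hashlib, json

def digest(data) -> str:
    """Succinct commitment the verifier keeps instead of the data."""
    return hashlib.sha256(json.dumps(data).encode()).hexdigest()

class Verifier:
    def __init__(self, data):
        self.h = digest(data)   # upload the data, keep only its hash

    def check(self, claimed_output, data_from_worker):
        # In a real hash & prove scheme this step checks a succinct argument
        # against self.h; here we naively recompute on the downloaded data.
        if digest(data_from_worker) != self.h:
            return False        # worker returned data we never committed to
        return claimed_output == sum(data_from_worker)

data = list(range(100))
v = Verifier(data)
assert v.check(4950, data)      # sum(0..99) = 4950
assert not v.check(4951, data)
```

The point of the constructions in the paper is to replace the naive recomputation with an argument whose verification cost is independent of the data size.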

ePrint Report
Design Strategies for ARX with Provable Bounds: SPARX and LAX (Full Version)
Daniel Dinu, Léo Perrin, Aleksei Udovenko, Vesselin Velichkov, Johann Großschädl, Alex Biryukov

We present, for the first time, a general strategy for designing ARX symmetric-key primitives with provable resistance against single-trail differential and linear cryptanalysis. The latter has been a long-standing open problem in the area of ARX design. The wide trail design strategy (WTS), which is at the basis of many S-box-based ciphers, including the AES, is not suitable for ARX designs due to the lack of S-boxes in the latter. In this paper we address this limitation by proposing the long trail design strategy (LTS) -- a dual of the WTS that is applicable (but not limited) to ARX constructions. In contrast to the WTS, which prescribes the use of small and efficient S-boxes at the expense of heavy linear layers with strong mixing properties, the LTS advocates the use of large (ARX-based) S-boxes together with sparse linear layers. With the help of the so-called Long Trail argument, a designer can bound the maximum differential and linear probabilities for any number of rounds of a cipher built according to the LTS.
To illustrate the effectiveness of the new strategy, we propose SPARX -- a family of ARX-based block ciphers designed according to the LTS. SPARX has 32-bit ARX-based S-boxes and has provable bounds against differential and linear cryptanalysis. In addition, SPARX is very efficient on a number of embedded platforms. Its optimized software implementation ranks in the top 6 of the most software-efficient ciphers along with SIMON, SPECK, Chaskey, LEA and RECTANGLE.
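A generic ARX step (rotation, modular addition, XOR) in the style of the SPECK-like boxes used by SPARX can be sketched as follows; the 16-bit word size and rotation amounts are illustrative, not the exact SPARX specification:

```python
MASK = 0xFFFF  # 16-bit words, so two halves form a 32-bit ARX state

def rotl(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def rotr(x, r):
    return ((x >> r) | (x << (16 - r))) & MASK

def arx_round(l, r, k):
    """One keyed SPECK-like ARX step on a 32-bit state (two 16-bit halves).
    Only rotations, a modular addition and XORs -- no S-box lookups."""
    l = (rotr(l, 7) + r) & MASK
    l ^= k
    r = rotl(r, 2) ^ l
    return l, r
```

Each operation is invertible, so the whole step is a keyed permutation of the 32-bit state, which is what lets such a construction play the role of a large S-box in the long trail strategy.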

As a second contribution we propose another strategy for designing ARX ciphers with provable properties, that is completely independent of the LTS. It is motivated by a challenge proposed earlier by Wallén and uses the differential properties of modular addition to minimize the maximum differential probability across multiple rounds of a cipher. A new primitive, called LAX, is designed following those principles. LAX partly solves the Wallén challenge.

ePrint Report
Exact Security Analysis of Hash-then-Mask Type Probabilistic MAC Constructions
Avijit Dutta, Ashwin Jha, Mridul Nandi

A probabilistic MAC (message authentication code) is an alternative to a stateful MAC where maintaining internal state may be difficult or unsafe. Usually the tag of a probabilistic MAC consists of an $m$-bit random coin (also called a {\em salt}) and an $n$-bit core-tag depending on the salt. In terms of security, a probabilistic MAC is subject to birthday collisions among salts, which stateful MACs avoid. XMACR is an example of a probabilistic MAC which remains secure up to $o(2^{m/2})$ tag-generation queries. To achieve security beyond the birthday bound in $n$, one can naturally use a large salt. For example, $\mathrm{MACRX}_3$ sets $m = 3n$ and provides security up to $o(2^{n})$ tag-generation queries. A large salt may restrict applicability, as it increases the cost of random string generation as well as the size of the overall tag. RWMAC (a randomized version of WMAC) provides similar security with $m = n$, but it uses a PRF (pseudorandom function) over $2n$-bit inputs, which is naturally costlier than one over $n$-bit inputs. Achieving beyond-birthday security using an $n$-bit PRF and an $n$-bit salt is a practical and challenging problem.

Minematsu in FSE 2010 proposed Enhanced Hash-then-Mask (EHtM) using an $n$-bit salt and showed its security up to $o(2^{2n/3})$ tag-generation queries. In this paper we revisit this construction and provide an exact security analysis of EHtM. In particular, we show that it has higher security, namely up to $o(2^{3n/4})$ queries, than claimed by the designer. Moreover, we demonstrate a single-attempt forgery attack which makes about $2^{3n/4}$ tag-generation queries.

XMACR and EHtM follow the hash-then-mask paradigm due to Carter and Wegman. We revisit six possible constructions following the hash-then-mask paradigm and provide exact security analysis for all of them, some of which were known before.
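The generic hash-then-mask paradigm, tag = (R, F_K(R) XOR H_Kh(M)) for a random salt R, can be sketched as a toy in Python. Truncated HMAC-SHA256 stands in for both the $n$-bit PRF and the universal hash here; a real instantiation would use an almost-XOR-universal hash, and none of this reflects EHtM's exact structure:

```python
import hashlib, hmac, os

N = 16  # core-tag size in bytes (the "n-bit" part)

def prf(key: bytes, x: bytes) -> bytes:
    """PRF stand-in: truncated HMAC-SHA256."""
    return hmac.new(key, x, hashlib.sha256).digest()[:N]

def uhash(hkey: bytes, msg: bytes) -> bytes:
    """Universal-hash stand-in (a real scheme would use e.g. a polynomial hash)."""
    return hmac.new(hkey, msg, hashlib.sha256).digest()[:N]

def htm_tag(key, hkey, msg, salt=None):
    """Hash-then-mask: tag = (R, F_K(R) XOR H_Kh(M))."""
    r = salt if salt is not None else os.urandom(N)
    core = bytes(a ^ b for a, b in zip(prf(key, r), uhash(hkey, msg)))
    return r, core

def htm_verify(key, hkey, msg, tag):
    r, core = tag
    return core == htm_tag(key, hkey, msg, salt=r)[1]
```

The salt collision issue in the abstract is visible here: if two tags reuse the same R, XORing their cores cancels the PRF mask and exposes the XOR of the two hash values.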

ePrint Report
Securing Systems with Scarce Entropy: LWE-Based Lossless Computational Fuzzy Extractor for the IoT
Christopher Huth, Daniela Becker, Jorge Guajardo, Paul Duplys, Tim Güneysu

With the advent of the Internet of Things, lightweight devices necessitate secure and cost-efficient key storage. Since traditional secure storage is expensive, the valuable entropy could originate from noisy sources, for which fuzzy extractors allow strong key derivation. While providing information-theoretic security, fuzzy extractors require a large amount of input entropy to account for entropy loss in the key extraction process. It has been shown by Fuller et al. [20] that the entropy loss can be reduced if the requirement is relaxed to computational security based on the hardness of the Learning with Errors problem.

Using this computational fuzzy extractor, we show how to construct a device-server authentication system providing outsider chosen perturbation security and pre-application robustness. We present the first implementation of a lossless computational fuzzy extractor, where the entropy of the source equals the entropy of the key, on a constrained device. The implementation needs only 1.45 KB of SRAM and 9.8 KB of Flash memory on an 8-bit microcontroller. We compare our implementation to existing work in terms of security, while achieving no entropy loss.
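A toy sketch of the idea behind an LWE-based computational fuzzy extractor: the noisy source reading plays the role of the LWE error term, and a fresh, close reading lets the decoder recover the hidden key. The parameters below are tiny and insecure, brute-force search stands in for real decoding, and details differ from the Fuller et al. construction:

```python
import itertools, random

q, n, m = 7, 2, 12       # toy parameters -- far too small to be secure
rng = random.Random(0)

def gen(w):
    """Gen: hide a random key s inside an LWE-like sample that uses the
    noisy source reading w (a short vector over Z_q) as the error term."""
    A = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]
    s = [rng.randrange(q) for _ in range(n)]
    b = [(sum(A[i][j] * s[j] for j in range(n)) + w[i]) % q for i in range(m)]
    return s, (A, b)      # (key, public helper data)

def rep(w2, helper):
    """Rep: with a re-reading w2 close to w, pick the key candidate whose
    residual error is smallest (brute force stands in for real decoding)."""
    A, b = helper
    def err(s):
        e = [(b[i] - w2[i] - sum(A[i][j] * s[j] for j in range(n))) % q
             for i in range(m)]
        return sum(min(x, q - x) for x in e)   # total "distance from zero"
    return min(itertools.product(range(q), repeat=n), key=err)

w  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1]   # original noisy reading
w2 = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1]   # re-reading, one position flipped
key, helper = gen(w)
assert tuple(key) == rep(w2, helper)
```

The "lossless" property in the abstract corresponds to the key `s` being drawn independently of `w`, so no source entropy is consumed by the extraction itself.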