International Association for Cryptologic Research

IACR News Central

Here you can see all recent updates to the IACR webpage.


23 June 2017
ePrint Report A Modular Analysis of the Fujisaki-Okamoto Transformation Dennis Hofheinz, Kathrin Hövelmanns, Eike Kiltz
The Fujisaki-Okamoto (FO) transformation (CRYPTO 1999 and Journal of Cryptology 2013) turns any weakly secure public-key encryption scheme into a strongly (i.e., IND-CCA) secure one in the random oracle model. Unfortunately, the FO analysis suffers from several drawbacks, such as a non-tight security reduction, and the need for a perfectly correct scheme. While several alternatives to the FO transformation have been proposed, they have stronger requirements, or do not obtain all desired properties. In this work, we provide a fine-grained and modular toolkit of transformations for turning weakly secure into strongly secure public-key encryption schemes. All of our transformations are robust against schemes with correctness errors, and their combination leads to several tradeoffs among tightness of the reduction, efficiency, and the required security level of the used encryption scheme. For instance, one variant of the FO transformation constructs an IND-CCA secure scheme from an IND-CPA secure one with a tight reduction and very small efficiency overhead. Another variant assumes only an OW-CPA secure scheme, but leads to an IND-CCA secure scheme with larger ciphertexts. We note that we also analyze our transformations in the quantum random oracle model, which yields security guarantees in a post-quantum setting.
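The modular view can be illustrated with a toy sketch: a T-style derandomization (coins derived as G(m)) composed with a U-style key derivation K = H(m, c), plus the re-encryption check at decapsulation. The underlying "PKE" below is a keyed-pad stand-in invented purely for illustration (so pk = sk); it is not any scheme from the paper, and the function names are ours.

```python
import hashlib

def _h(tag, *parts):
    # Domain-separated SHA-256, standing in for the random oracles G and H.
    x = hashlib.sha256(tag)
    for p in parts:
        x.update(p)
    return x.digest()

# Toy stand-in for the underlying weakly secure PKE (a keyed pad, so pk == sk;
# a real instantiation would use e.g. an LWE-based scheme).
def toy_enc(pk, m, coins):
    pad = _h(b"pad", pk, coins)
    masked_coins = bytes(a ^ b for a, b in zip(coins, _h(b"coins", pk)))
    return bytes(a ^ b for a, b in zip(m, pad)) + masked_coins

def toy_dec(sk, c):
    body, masked_coins = c[:-32], c[-32:]
    coins = bytes(a ^ b for a, b in zip(masked_coins, _h(b"coins", sk)))
    pad = _h(b"pad", sk, coins)
    return bytes(a ^ b for a, b in zip(body, pad))

# T transform: derandomize by deriving the encryption coins as G(m).
def encaps(pk, m):
    c = toy_enc(pk, m, _h(b"G", m))     # deterministic encryption
    return c, _h(b"H", m, c)            # U transform: K = H(m, c)

def decaps(sk, c):
    m = toy_dec(sk, c)
    if toy_enc(sk, m, _h(b"G", m)) != c:  # re-encryption check
        return None                       # reject invalid ciphertexts
    return _h(b"H", m, c)
```

The re-encryption check is what lifts passive security to CCA security in the random oracle model: any tampered ciphertext fails to re-encrypt to itself.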
ePrint Report Cryptanalytic Time-Memory Tradeoff for Password Hashing Schemes Donghoon Chang, Arpan Jati, Sweta Mishra, Somitra Kumar Sanadhya
A cryptanalytic technique known as time-memory tradeoff (TMTO) was proposed by Hellman for finding the secret key of a block cipher. This technique allows sharing the effort of key search between the two extremes of exhaustively enumerating all keys versus listing all possible ciphertext mappings produced by a given plaintext (i.e., table lookups). The TMTO technique has also been used as an effective cryptanalytic approach against password hashing schemes (PHS). The increasing threat of password leakage from compromised password hashes demands a resource-consuming algorithm to prevent the precomputation of password hashes. A class of password hashing designs provides such a defense against TMTO attacks by ensuring that any reduction in memory leads to an exponential increase in runtime. These are called "memory hard" designs. However, it is generally difficult to evaluate the "memory hardness" of a given PHS design.

In this work, we present a simple technique to analyze TMTO for any password hashing scheme that can be represented as a directed acyclic graph (DAG). The nodes of the DAG correspond to the storage required by the algorithm and the edges correspond to the flow of execution. Our proposed technique provides expected run-times at varied levels of available storage for the DAG. Although our technique is generic, we show its efficacy by applying it to three designs from the Password Hashing Competition (PHC): Argon2i (the PHC winner), Catena and Rig. Our analysis shows that Argon2i fails to maintain its claimed memory hardness. In a recent work, Corrigan-Gibbs et al. indeed showed an attack highlighting the weak memory hardening of Argon2i. We also analyze these PHS for performance under various settings of time and memory complexities.
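As a rough illustration of the DAG view (not the paper's actual procedure), one can charge a unit recomputation cost for every node whose value is not stored; even on a toy graph this exposes the storage/run-time tradeoff. All names and the toy line graph here are illustrative.

```python
def recompute_cost(node, preds, stored, memo=None):
    """Unit-cost recomputation of `node` when only `stored` nodes are kept.

    preds: dict mapping each node to the list of its DAG predecessors.
    A stored node costs nothing; otherwise we pay 1 plus the cost of
    recomputing every missing predecessor (each counted once per query,
    so this is an optimistic, lower-bound-style illustration).
    """
    if memo is None:
        memo = {}
    if node in stored:
        return 0
    if node in memo:
        return memo[node]
    cost = 1 + sum(recompute_cost(p, preds, stored, memo) for p in preds[node])
    memo[node] = cost
    return cost

# A toy line graph 0 -> 1 -> ... -> 7, as in a purely sequential PHS pass.
preds = {i: ([i - 1] if i > 0 else []) for i in range(8)}

full = recompute_cost(7, preds, stored={0})      # keep only the source: cost 7
half = recompute_cost(7, preds, stored={0, 4})   # also store node 4: cost 3
```

Halving the stored nodes on a line graph only doubles recomputation; memory-hard designs arrange the edges so that the penalty grows much faster.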
Recent efficient constructions of zero-knowledge Succinct Non-interactive Arguments of Knowledge (zk-SNARKs) require a setup phase in which a common reference string (CRS) with a certain structure is generated. This CRS is sometimes referred to as the public parameters of the system, and is used for constructing and verifying proofs. A drawback of these constructions is that whoever runs the setup phase subsequently possesses trapdoor information enabling them to produce fraudulent pseudoproofs.

Building on a work of Ben-Sasson, Chiesa, Green, Tromer and Virza [BCGTV15], we construct a multi-party protocol for generating the CRS of the Pinocchio zk-SNARK [PHGR15], such that as long as at least one participating party is not malicious, no party can later construct fraudulent proofs except with negligible probability. The protocol also provides a strong zero-knowledge guarantee even in the case that all participants are malicious.

This method has been used in practice to generate the required CRS for the Zcash cryptocurrency blockchain.
ePrint Report Implementation and Evaluation of a Lattice-Based Key-Policy ABE Scheme Wei Dai, Yarkın Doröz, Yuriy Polyakov, Kurt Rohloff, Hadi Sajjadpour, Erkay Savaş, Berk Sunar
In this paper, we report on our implementation of a lattice-based Key-Policy Attribute-Based Encryption (KP-ABE) scheme, which uses short secret keys. The particular KP-ABE scheme can be used directly for Attribute-Based Access Control (ABAC) applications, as well as a building block in more involved applications and cryptographic schemes such as audit log encryption, targeted broadcast encryption, functional encryption, and program obfuscation. We adapt a recently proposed KP-ABE scheme based on the Learning With Errors (LWE) problem to a more efficient scheme based on the Ring Learning With Errors (RLWE) problem, and demonstrate an implementation that can be used in practical applications. Our state-of-the-art GPU implementation shows that the homomorphic public key and ciphertext evaluation operations, which dominate the execution time of the KP-ABE scheme, can be performed in a reasonably short amount of time. Our practicality results also hold when scaled to a relatively large number of attributes. To the best of our knowledge, this is the first KP-ABE implementation that supports both ciphertext and public key homomorphism and the only experimental practicality results reported in the literature.
Area minimization is one of the main efficiency criteria for lightweight encryption primitives. While reducing the implementation data path is a natural strategy for achieving this goal, Substitution-Permutation Network (SPN) ciphers are usually hard to implement in a bit-serial way (1-bit data path). More generally, this is hard for any data path smaller than the Sbox size, since many scan flip-flops would be required for storage; these are more area-expensive than regular flip-flops, in addition to requiring extra multiplexers.

In this article, we propose the first strategy to obtain extremely small bit-serial ASIC implementations of SPN primitives. Our technique, which we call bit-sliding, is generic and offers many new interesting implementation trade-offs. It manages to minimize the area by reducing the data path to a single bit, while avoiding the use of many scan flip-flops.

Following this general architecture, we could obtain the first bit-serial and the smallest implementation of AES-128 to date (1563 GE for encryption only, and 1744 GE for encryption and decryption with the IBM 130nm standard-cell library), greatly improving over the smallest known implementations (about a 30% decrease) and making AES-128 competitive with many ciphers specifically designed for lightweight cryptography. To exhibit the generality of our strategy, we also applied it to the PRESENT and SKINNY block ciphers, again offering the smallest implementations of these ciphers thus far, reaching an area as low as 1054 GE for a 64-bit block, 128-bit key cipher. Notably, bit-sliding also seems to obtain very good power consumption figures, which makes this implementation strategy a good candidate for passive RFID tags.
21 June 2017
ePrint Report A Subversion-Resistant SNARK Behzad Abdolmaleki, Karim Baghery, Helger Lipmaa, Michal Zajac
While succinct non-interactive zero-knowledge arguments of knowledge (zk-SNARKs) are widely studied, the question of what happens when the CRS has been subverted has received little attention. In ASIACRYPT 2016, Bellare, Fuchsbauer and Scafuro showed the first negative and positive results in this direction, proving also that it is impossible to achieve subversion soundness and (even non-subversion) zero knowledge at the same time. On the positive side, they constructed a sound and subversion zero-knowledge, albeit involved, argument system for NP. We show that Groth's zk-SNARK for Circuit-SAT from EUROCRYPT 2016 can be made computationally knowledge-sound and perfectly composable Sub-ZK with minimal changes. We just require the CRS trapdoor to be extractable and the CRS to be publicly verifiable. To achieve the latter, we add some new elements to the CRS and construct an efficient CRS verification algorithm. We also provide a definitional framework for sound and Sub-ZK SNARKs and describe implementation results of the new Sub-ZK SNARK.
ePrint Report Quantum Resource Estimates for Computing Elliptic Curve Discrete Logarithms Martin Roetteler, Michael Naehrig, Krysta M. Svore, Kristin Lauter
We give precise quantum resource estimates for Shor's algorithm to compute discrete logarithms on elliptic curves over prime fields. The estimates are derived from a simulation of a Toffoli gate network for controlled elliptic curve point addition, implemented within the framework of the quantum computing software tool suite LIQ$Ui|\rangle$. We determine circuit implementations for reversible modular arithmetic, including modular addition, multiplication and inversion, as well as reversible elliptic curve point addition. We conclude that elliptic curve discrete logarithms on an elliptic curve defined over an $n$-bit prime field can be computed on a quantum computer with at most $9n + 2\lceil\log_2(n)\rceil+10$ qubits using a quantum circuit of at most $448 n^3 \log_2(n) + 4090 n^3$ Toffoli gates. We are able to classically simulate the Toffoli networks corresponding to the controlled elliptic curve point addition as the core piece of Shor's algorithm for the NIST standard curves P-192, P-224, P-256, P-384 and P-521. Our approach allows gate-level comparisons to recent resource estimates for Shor's factoring algorithm. The results also confirm estimates given earlier by Proos and Zalka and indicate that, for current parameters at comparable classical security levels, the number of qubits required to tackle elliptic curves is less than for attacking RSA, suggesting that indeed ECC is an easier target than RSA.
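The closed-form bounds quoted above are straightforward to evaluate; a small sketch instantiating them for the NIST prime-field curves (the function name is ours):

```python
from math import ceil, log2

def ecdlp_resources(n):
    """Upper bounds from the abstract for an n-bit prime field:
    at most 9n + 2*ceil(log2 n) + 10 qubits and
    448 n^3 log2(n) + 4090 n^3 Toffoli gates."""
    qubits = 9 * n + 2 * ceil(log2(n)) + 10
    toffoli = 448 * n**3 * log2(n) + 4090 * n**3
    return qubits, toffoli

for n in (192, 224, 256, 384, 521):   # NIST curves P-192 ... P-521
    q, t = ecdlp_resources(n)
    print(f"P-{n}: {q} qubits, ~{t:.2e} Toffoli gates")
```

For P-256 this gives 2330 qubits and about 1.3 x 10^11 Toffoli gates, which is the basis of the ECC-vs-RSA comparison in the abstract.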
ePrint Report Round Optimal Concurrent MPC via Strong Simulation Saikrishna Badrinarayanan, Dakshita Khurana, Amit Sahai
In this paper, we study the round complexity of concurrently secure multi-party computation (MPC) with super-polynomial simulation (SPS) in the plain model. In the plain model, there are known explicit attacks that show that concurrently secure MPC with polynomial simulation is impossible to achieve; SPS security is the most widely studied model for concurrently secure MPC in the plain model. We obtain the following results: • Three-round concurrent MPC with SPS security against Byzantine adversaries, assuming sub-exponentially secure DDH and LWE. • A barrier to constructing stand-alone two-round MPC for general functionalities, achieving any notion of meaningful security against Byzantine adversaries, from standard falsifiable assumptions. • This barrier breaks down in the setting of input-less functionalities, where we construct two-round concurrent MPC with SPS security against Byzantine adversaries for input-less functionalities, assuming sub-exponentially secure indistinguishability obfuscation and DDH. In particular, this class includes sampling functionalities that allow parties to jointly sample a secure common reference string for cryptographic applications. Prior to our work, to the best of our knowledge, concurrent MPC with SPS security required roughly 20 rounds, although we are not aware of any work that even gave an approximation of the constant round complexity sufficient for the multi-party setting. We also improve over the previous best round complexity for the two-party setting, where 5 rounds were needed (Garg, Kiyoshima, and Pandey, Eurocrypt 2017). To obtain our results, we compile protocols that already achieve security against "semi-malicious" adversaries, to protocols secure against fully malicious adversaries, additionally assuming sub-exponential DDH.
Our protocols develop new techniques to use two-round zero-knowledge with super-polynomial strong simulation, defined by Pass (EUROCRYPT 2003) and very recently realized by Khurana and Sahai (Eprint 2017). These remain zero-knowledge against adversaries running in time larger than the running time of the simulator.
ePrint Report A Side-Channel Assisted Cryptanalytic Attack Against QcBits Melissa Rossi, Mike Hamburg, Michael Hutter, Mark E. Marson
QcBits is a code-based public-key algorithm based on a problem thought to be resistant to quantum computer attacks. It is a constant-time implementation of a quasi-cyclic moderate-density parity-check (QC-MDPC) Niederreiter encryption scheme, and has excellent performance and small key sizes. In this paper, we present a key recovery attack against QcBits. We first used differential power analysis (DPA) against the syndrome computation of the decoding algorithm to recover partial information about one half of the private key. We then used the recovered information to set up a system of noisy binary linear equations. Solving this system of equations gave us the entire key. Finally, we propose a simple but effective countermeasure against the power analysis used during the syndrome calculation.
This paper presents a post-quantum secure, efficient, and tunable FPGA implementation of the key generation algorithm for the Niederreiter cryptosystem using binary Goppa codes. Our key generator implementation requires as few as 896,052 cycles to produce both public and private portions of a key, and can achieve an estimated frequency Fmax of over 240 MHz when synthesized for Stratix V FPGAs. To the best of our knowledge, this work is the first hardware-based implementation that works with parameters equivalent to, or exceeding, the recommended 128-bit "post-quantum security" level. The key generator can produce a key pair for parameters $n=6960$, $m=13$, $t=119$ in only $3.7$ms when no systemization failure occurs, and in $3.5 \cdot 3.7$ms on average. To achieve such performance, we implement an optimized and parameterized Gaussian systemizer for matrix systemization, which works for any large-sized matrix over any finite field $\text{GF}(2^m)$. Our work also presents an FPGA-based implementation of the Gao-Mateer additive FFT, which only takes about 1000 clock cycles to finish the evaluation of a degree-119 polynomial at $2^{13}$ data points, for example. The key generator Verilog HDL code is partly code-generated using Python and Sage, and can be easily re-generated for different parameters, not just the ones shown in this paper. Design validation was performed using Sage, iVerilog simulation, and data from real FPGA hardware.
Although lattice-based cryptography has proven to be a particularly efficient approach to post-quantum cryptography, its security against side-channel attacks is still a very open topic. There already exist some first works that use masking to achieve DPA security. However, for public-key primitives SPA attacks that use just a single trace are also highly relevant. For lattice-based cryptography this implementation-security aspect is still unexplored. In this work, we present the first single-trace attack on lattice-based encryption. As only a single side-channel observation is needed for full key recovery, it can also be used to attack masked implementations. We use leakage coming from the Number Theoretic Transform, which is at the heart of almost all efficient lattice-based implementations. This means that our attack can be adapted to a large range of other lattice-based constructions and their respective implementations. Our attack consists of 3 main steps. First, we perform a template matching on all modular operations in the decryption process. Second, we efficiently combine all this side-channel information using belief propagation. And third, we perform a lattice-decoding to recover the private key. We show that the attack allows full key recovery not only in a generic noisy Hamming-weight setting, but also based on real traces measured on an ARM Cortex-M4F microcontroller.
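For context, the Number Theoretic Transform at the heart of such implementations is, in textbook form, a DFT over Z_q. A minimal schoolbook sketch with toy parameters (q = 17, n = 8), not any scheme's optimized butterfly implementation; each modular multiplication below is the kind of operation a template attack would profile.

```python
Q = 17        # toy modulus; real schemes use e.g. q = 12289
N = 8
OMEGA = 2     # primitive 8th root of unity mod 17 (2^4 = 16 = -1, 2^8 = 1)

def ntt(a):
    # Schoolbook O(N^2) forward transform: A[k] = sum_j a[j] * w^(jk) mod q.
    return [sum(a[j] * pow(OMEGA, j * k, Q) for j in range(N)) % Q
            for k in range(N)]

def intt(A):
    # Inverse transform: a[j] = N^(-1) * sum_k A[k] * w^(-jk) mod q.
    n_inv = pow(N, -1, Q)
    w_inv = pow(OMEGA, -1, Q)
    return [(n_inv * sum(A[k] * pow(w_inv, j * k, Q) for k in range(N))) % Q
            for j in range(N)]
```

A production implementation replaces the O(N^2) loops with an O(N log N) butterfly network, but the per-coefficient modular arithmetic, and hence the leakage surface, is the same.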
The security of several post-quantum cryptosystems is based on the assumption that solving a system of multivariate (quadratic) polynomial equations $p_1=\dots=p_m=0$ over a finite field is hard. Such a system can be solved by computing a lexicographic Groebner basis of the ideal $(p_1,\dots,p_m)$. The most efficient algorithms for computing Groebner bases, such as $F_4$ and $F_5$, transform the problem into several instances of Gaussian elimination. The computational complexity of these algorithms is not completely understood, especially when the polynomials $p_1,\dots,p_m$ are non-homogeneous. In this paper, we prove that this complexity is bounded by a function of the Castelnuovo-Mumford regularity of the ideal $(p_1^h,\dots,p_m^h)$ obtained by homogenizing the input polynomials. This allows us to bound the complexity of solving a system of polynomial equations when the associated ideal is zero-dimensional, a common situation in cryptography. More precisely, we show that the degree of the polynomials involved in the computation of a Groebner basis of a zero-dimensional ideal grows at most linearly in the number of variables. In combination with some theorems in commutative algebra, our results also allow us to bound the complexity of some instances of the MinRank Problem.
A major substep in lattice sieve algorithms, which solve the Euclidean shortest vector problem (SVP), is the computation of sums and Euclidean norms of many vector pairs. Finding a solution to the SVP is the foundation of attacks against many lattice-based cryptosystems. We optimize the main subfunction of a sieve for the regular main processor and for the co-processor to speed up the algorithm in total. Furthermore, we show that the co-processor can provide a significant performance improvement for highly parallel tasks in the lattice sieve. The four-fold speedup achieved compared to the CPU indicates that co-processors are a viable choice for implementing lattice sieve algorithms.
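The pairwise step described above can be sketched as follows; this is a toy unit of a sieve's inner loop over plain integer vectors (our own illustration, not the paper's optimized co-processor code):

```python
def norm2(v):
    # Squared Euclidean norm; the sqrt is unnecessary since order is preserved.
    return sum(x * x for x in v)

def try_reduce(u, v):
    """Core pairwise sieve step: test whether u + v or u - v is shorter than u.

    Returns a shorter nonzero replacement for u if one of the two
    combinations helps, else None. This test is the bulk of sieve work
    and parallelizes trivially across vector pairs, which is what makes
    co-processors attractive here.
    """
    best, best_n = None, norm2(u)
    for w in ([a + b for a, b in zip(u, v)],
              [a - b for a, b in zip(u, v)]):
        n = norm2(w)
        if 0 < n < best_n:
            best, best_n = w, n
    return best
```

Because each pair is independent, a batch of such tests maps directly onto wide SIMD or many-core hardware.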
ePrint Report On the Security of Carrier Phase-based Ranging Hildur Olafsdottir, Aanjhan Ranganathan, Srdjan Capkun
Multicarrier phase-based ranging is fast emerging as a cost-optimized solution for a wide variety of proximity-based applications due to its low power requirement, low hardware complexity and compatibility with existing standards such as ZigBee and 6LoWPAN. Given the potentially critical nature of the applications in which phase-based ranging can be deployed (e.g., access control, asset tracking), it is important to evaluate its security guarantees. Therefore, in this work, we investigate the security of multicarrier phase-based ranging systems and specifically focus on distance decreasing relay attacks that have proven detrimental to the security of proximity-based access control systems (e.g., vehicular passive keyless entry and start systems). We show that phase-based ranging, as well as its implementations, are vulnerable to a variety of distance reduction attacks. We describe different attack realizations and verify their feasibility by simulations and experiments on a commercial ranging system. Specifically, we successfully reduced the estimated range to less than 3 m even though the devices were more than 50 m apart. We discuss possible countermeasures against such attacks and illustrate their limitations, therefore demonstrating that phase-based ranging cannot be fully secured against distance decreasing attacks.
20 June 2017
ePrint Report Constant bandwidth ORAM with small block size using PIR operations Linru Zhang, Gongxian Zeng, Yuechen Chen, Siu-Ming Yiu, Nairen Cao, Zheli Liu
Recently, the server-with-computation model has been applied to Oblivious RAM schemes to achieve constant communication (a constant number of blocks). However, existing works either result in large block size O(log^6 N), or have some security flaws. Furthermore, a lower bound of sub-logarithmic bandwidth was given if we do not use expensive fully homomorphic operations. The question of "whether constant bandwidth with smaller block size without fully homomorphic operations is achievable" remains open. In this paper, we provide an affirmative answer. We propose a constant bandwidth ORAM scheme with block size O(log^3 N) using only additive homomorphic operations. Our scheme is secure under the standard model. Technically, we design a non-trivial oblivious clear algorithm with very small bandwidth to improve the eviction algorithm in ORAM, for which the lower bound proof does not apply. As an additional benefit, we are able to reduce the server storage due to the reduction in bucket size.
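Since the scheme relies only on additive homomorphism, the kind of PIR read it builds on can be sketched with textbook Paillier: the client encrypts a unit vector, and the server homomorphically computes the inner product with its block array. The parameters below are tiny and insecure, purely for illustration.

```python
import math, random

# --- toy Paillier (insecure 10-bit primes, illustration only) ---
P, Q_ = 1009, 1013
N = P * Q_
N2 = N * N
LAM = math.lcm(P - 1, Q_ - 1)

def enc(m):
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(1, N)
    return (pow(1 + N, m, N2) * pow(r, N, N2)) % N2   # g = n + 1

def dec(c):
    l = (pow(c, LAM, N2) - 1) // N
    mu = pow((pow(1 + N, LAM, N2) - 1) // N, -1, N)
    return (l * mu) % N

# --- PIR fetch of blocks[i]: client sends Enc(e_i), server combines ---
def pir_query(i, length):
    return [enc(1 if j == i else 0) for j in range(length)]

def pir_answer(query, blocks):
    # Homomorphically compute sum_j blocks[j] * e_i[j] = blocks[i]:
    # scalar multiplication of a plaintext is ciphertext exponentiation.
    ans = 1
    for c, b in zip(query, blocks):
        ans = (ans * pow(c, b, N2)) % N2
    return ans
```

The server never learns i, since every query element is a fresh Paillier ciphertext of 0 or 1; real instantiations would use cryptographically sized moduli.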
This paper presents an empirical cryptanalytic method to analyse a partially known cipher algorithm. During cipher evaluation, it is always a challenge to make any decision on the strength of a partially known cipher algorithm, and on whether the algorithm is suitable for deployment. The core concept is presented first, followed by an example to illustrate the idea. The idea is to focus on one input bit at a time using a known-keystream attack, assuming this bit is independent from the rest. By computing the statistics of the related keystream bits and using the correlation method, one can derive this input bit with certain confidence.
ePrint Report Renyi Entropy Estimation Revisited Maciej Obremski, Maciej Skorski
We revisit the problem of estimating entropy of discrete distributions from independent samples, studied recently by Acharya, Orlitsky, Suresh and Tyagi (SODA 2015), improving their upper and lower bounds on the necessary sample size $n$.

For estimating Renyi entropy of order $\alpha$, up to constant accuracy and error probability, we show the following:
(a) Upper bounds $n = O(1) \cdot 2^{\left(1-\frac{1}{\alpha}\right)H_{\alpha}}$ for integer $\alpha>1$, as the worst case over distributions with Renyi entropy equal to $H_{\alpha}$.
(b) Lower bounds $n=\Omega(1)\cdot K^{1-\frac{1}{\alpha}}$ for any real $\alpha>1$, as the worst case over all distributions on $K$ elements.

Our upper bounds essentially replace the alphabet size by a factor exponential in the entropy, which offers improvements especially in low or medium entropy regimes (interesting for example in anomaly detection). As for the lower bounds, they were previously given conditionally for sufficiently small accuracy, whereas our result applies even to constant accuracy. Moreover, we prove that small accuracy $\delta$ contributes to the complexity independently of the alphabet, by an extra factor $\delta^{-\frac{1}{\alpha}}$.

The argument for upper bounds derives a clean identity for the variance of falling-power sum of a multinomial distribution. Our approach for lower bounds utilizes convex optimization to find a distribution with worse estimation performance, and may be of independent interest.
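For $\alpha = 2$, the falling-power-sum estimator amounts to counting pairwise collisions among the samples. A minimal sketch (our own rendering of the textbook estimator, not the paper's exact algorithm):

```python
from collections import Counter
from math import log2

def renyi2_estimate(samples):
    """Estimate H_2 from i.i.d. samples via the falling-power sum.

    sum_x n_x * (n_x - 1) / (n * (n - 1)) is an unbiased estimator of
    the collision probability p_2 = sum_x p_x^2, and H_2 = -log2(p_2).
    """
    n = len(samples)
    counts = Counter(samples)
    coll = sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))
    return -log2(coll)
```

For example, eight samples with four symbols appearing twice each give an estimated collision probability of 8/56 = 1/7, hence an estimate of log2(7) bits; the sample-size bounds above govern how quickly such estimates concentrate.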
At Asiacrypt 2016 Bellare, Fuchsbauer and Scafuro introduced the notion of subversion zero knowledge for non-interactive proof systems, demanding that zero knowledge (ZK) is maintained even when the common reference string is chosen maliciously. Succinct non-interactive arguments of knowledge (SNARKs) are proof systems with short and efficiently verifiable proofs, which were introduced for verifiable computation. They are deployed in cryptocurrencies such as Zcash, which guarantees user anonymity assuming zero-knowledge SNARKs. We show that under a plausible hardness assumption, the most efficient SNARK schemes proposed in the literature, including the one underlying Zcash, satisfy subversion ZK or can be made to satisfy it at very little cost. We argue that Zcash is thus anonymous even if its parameters were set up maliciously.
ePrint Report Deterministic, Stash-Free Write-Only ORAM Daniel S. Roche, Adam J. Aviv, Seung Geol Choi, Travis Mayberry
Write-Only Oblivious RAM (WoORAM) protocols provide privacy by encrypting the contents of data and also hiding the pattern of write operations over that data. WoORAMs provide better privacy than plain encryption and better performance than more general ORAM schemes (which hide both writing and reading access patterns), and the write-oblivious setting has been applied to important applications of cloud storage synchronization and encrypted hidden volumes. In this paper, we introduce an entirely new technique for Write-Only ORAM, called DetWoORAM. Unlike previous solutions, DetWoORAM uses a deterministic, sequential writing pattern without the need for any "stashing" of blocks in local state when writes fail. Our protocol, while conceptually simple, provides substantial improvement over prior solutions, both asymptotically and experimentally. In particular, under typical settings the DetWoORAM writes only 2 blocks (sequentially) to backend memory for each block written to the device, which is optimal. We have implemented our solution using the BUSE (block device in user-space) module and tested DetWoORAM against both an encryption only baseline of dm-crypt and prior, randomized WoORAM solutions, measuring only a 3x-14x slowdown compared to an encryption-only baseline and around 6x-19x speedup compared to prior work.
ePrint Report Internet Voting Using Zcash Pavel Tarasov, Hitesh Tewari
Voting systems have been around for hundreds of years and, despite differing views on their integrity, have always been expected to uphold fundamental security and anonymity principles. Numerous electronic systems have been proposed and implemented, but suspicion has been raised regarding the integrity of elections due to security vulnerabilities detected within these systems. Electronic voting, to be successful, requires a more transparent and secure approach than is offered by current protocols. The approach presented in this paper involves a protocol developed on blockchain technology. The underlying technology used in the voting system is a payment scheme that offers anonymity of transactions, a trait not seen in blockchain protocols to date. The proposed protocol offers anonymity of voter transactions while keeping the transactions private, and the election transparent and secure. The underlying payment protocol has not been modified in any way; the voting protocol merely offers an alternative use case.
ePrint Report Hacking in the Blind: (Almost) Invisible Runtime User Interface Attacks Luka Malisa, Kari Kostiainen, Thomas Knell, David Sommer, Srdjan Capkun
We describe novel, adaptive user interface attacks, where the adversary attaches a small device to the interface that connects user input peripherals to the target system. The device executes the attack when the authorized user is performing safety- or security-critical operations, by modifying or blocking user input, or injecting new events. Although the adversary fully controls the user input channel, to succeed he needs to overcome a number of challenges, including the inability to directly observe the state of the user interface and the need to avoid detection by the legitimate user. We present new techniques that allow the adversary to perform user interface state estimation and fingerprinting, and thus attack a new range of scenarios that previous UI attacks do not apply to. We evaluate our attacks on two different types of platforms: e-banking on general-purpose PCs, and dedicated medical terminals. Our evaluation shows that such attacks can be implemented efficiently, are hard for users to detect, and would lead to serious violations of input integrity.
In this paper, we investigate the security of the BLISS lattice-based signature scheme, one of the most promising candidates for post-quantum-secure signatures, against side-channel attacks. Several works have been devoted to its efficient implementation on various platforms, from desktop CPUs to micro-controllers and FPGAs, and more recent papers have also considered its security against certain types of physical attacks, notably fault injection and cache attacks. We turn to more traditional side-channel analysis, and describe several attacks that can yield a full key recovery.

We first identify a serious source of leakage in the rejection sampling algorithm used during signature generation. Existing implementations of that rejection sampling step, which is essential for security, actually leak the ``relative norm'' of the secret key. We show how an extension of an algorithm due to Howgrave-Graham and Szydlo can be used to recover the key from that relative norm, at least when the absolute norm is easy to factor (which happens for a significant fraction of secret keys). We describe how this leakage can be exploited in practice both on an embedded device (an 8-bit AVR microcontroller) using electromagnetic analysis (EMA), and on a desktop computer (a recent Intel CPU running Linux) using branch tracing. The latter attack has been mounted against the open source VPN software strongSwan.

We also show that other parts of the BLISS signing algorithm can leak secrets not just for a subset of secret keys, but for 100% of them. The BLISS Gaussian sampling algorithm in strongSwan is intrinsically variable time. This would be hard to exploit using a noisy source of leakage like EMA, but branch tracing allows recovery of the entire randomness and hence the key: we show that a single execution of the strongSwan signature algorithm is actually sufficient for full key recovery. We also describe a more traditional side-channel attack on the sparse polynomial multiplications carried out in BLISS: classically, such multiplications can be attacked using DPA; however, our 8-bit AVR target implementation uses repeated shifted additions instead. Surprisingly, we manage to obtain a full key recovery in that setting using integer linear programming from a single EMA trace.
Fully Homomorphic Encryption (FHE) schemes are a powerful tool that allows arbitrary computations on encrypted data. This makes them a promising tool for a variety of use cases that require outsourcing the computation on privacy-sensitive data to untrusted parties. FHE schemes today operate over finite fields, while many use cases call for natural or even real numbers, requiring appropriate encoding of the data into the plaintext space supported by the encryption scheme. However, the choice of encoding can tremendously impact the overall effort of the computation on the encrypted data. Surprisingly, although the question of selecting the encoding is quite natural and arises immediately in practice, a comprehensive analysis is still missing.

In this work, we formally and experimentally investigate this question for applications that operate over integers and rational numbers based on a selection of natural metrics: the number of finite field additions, the number of finite field multiplications, and the multiplicative depth. Our results are partly constructive and partly negative: We show that for the first two metrics, an optimal choice does exist and we state it explicitly. However, we show likewise that regarding multiplicative depth, the parameters need to be chosen specific to the use-case, as there is no global optimum. Still, we show exactly how one can choose the best encoding depending on the use-case.
ePrint Report Parallel Collision Search with Radix Trees Gilles Dequen, Sorina Ionica, Monika Trimoska
Parallel versions of collision search algorithms require a significant amount of memory to store a proportion of the points computed by the pseudo-random walks. Implementations available in the literature use a hash table to store these points to allow fast memory access. We propose to replace the traditional hash table by a radix tree structure, allowing both look-up and insertion in a single operation. We provide theoretical and experimental evidence that memory access is an important factor in determining the runtime of our algorithm. Our benchmarks indicate that our algorithm achieves linear parallel performance.
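The merged look-up-and-insert that the radix tree enables can be sketched as a plain bit trie (without the path compression a production radix tree would use); the names below are illustrative.

```python
class RadixNode:
    __slots__ = ("children", "point")

    def __init__(self):
        self.children = [None, None]   # branch on one key bit at a time
        self.point = None

def lookup_or_insert(root, key_bits, point):
    """Single top-down traversal: return the stored point if `key_bits`
    is already present (a candidate collision between two walks), else
    insert `point` and return None. This merges the hash table's separate
    look-up and insertion into one operation.
    """
    node = root
    for b in key_bits:
        if node.children[b] is None:
            node.children[b] = RadixNode()
        node = node.children[b]
    if node.point is not None:
        return node.point              # distinguished point seen before
    node.point = point
    return None
```

Each stored distinguished point is reached in one pass over its key bits, so insertion and collision detection share the same memory accesses, which is the property the benchmarks above exercise.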
In the present day, AES is one of the most widely used and most trusted encryption systems, so a great deal of research effort goes into mounting a significant attack on it. Many forms of linear and differential cryptanalysis have been applied to AES. Of late, an active area of research has been algebraic cryptanalysis of AES; although fast progress is being made, there remains considerable scope for research and improvement. One of the major reasons is that algebraic cryptanalysis depends mainly on input/output relations of the AES S-box (a major component of AES). As is already known, the key recovery problem of AES can be cast as an instance of the MQ problem, which is itself considered hard. Solving these equations depends on our ability to reduce them to linear forms, which are easily solvable with our current computational prowess. The lower the degree of these equations, the easier it is to linearize them, and hence the lower the attack complexity. The aim of this paper is to analyze the various relations involving a small number of monomials of the AES S-box and to answer the question whether such monomial equations for the S-box can actually exist if we restrict the degree of the monomials. In other words, this paper aims to study such equations and see whether they are applicable to AES.
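The I/O relations in question stem from the S-box's algebraic structure, which can be made concrete with the standard textbook construction (this is the public FIPS-197 definition, not code from the paper): inversion in GF(2^8) followed by an affine map. The bilinear relation $x \cdot \mathrm{inv}(x) = 1$ is the classic source of low-degree equations.

```python
# Standard AES S-box construction, shown to make the "I/O relations" concrete.

def gf_mul(a, b):
    """Multiplication in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
        b >>= 1
    return p

def gf_inv(a):
    """Inverse via a^254 (a^255 = 1 for a != 0); AES defines inv(0) = 0."""
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

def sbox(x):
    y = gf_inv(x)
    res = 0
    for i in range(8):  # the affine transformation over GF(2), constant 0x63
        bit = ((y >> i) ^ (y >> ((i + 4) % 8)) ^ (y >> ((i + 5) % 8))
               ^ (y >> ((i + 6) % 8)) ^ (y >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        res |= bit << i
    return res

assert sbox(0x00) == 0x63 and sbox(0x53) == 0xED
# the low-degree relation underlying algebraic attacks: x * inv(x) = 1
assert all(gf_mul(x, gf_inv(x)) == 1 for x in range(1, 256))
```

Because the affine map is GF(2)-linear, every quadratic relation of the inversion step translates into a quadratic relation between S-box inputs and outputs, which is what makes the MQ formulation possible.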
In CRYPTO 2017, Mennink and Neves showed almost $n$-bit security for a dual version of EWCDM. In this paper, we describe a birthday attack on this construction that violates their claim.
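A birthday attack generically needs only about $2^{n/2}$ queries, which is exactly what a "beyond-birthday" (almost $n$-bit) security claim is supposed to rule out. A generic illustration of that query count (not the specific attack from the paper; the toy function and parameters are ours):

```python
# Generic birthday collision search on a toy n-bit function, to show the
# ~2^(n/2) query cost. The function f is an arbitrary stand-in, not EWCDM.

import hashlib

def f(x, n_bits=24):
    """Toy n-bit function: truncated SHA-256 of an input counter."""
    h = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(h[:4], "big") >> (32 - n_bits)

def birthday_collision(n_bits=24):
    """Query f on successive inputs until two outputs collide."""
    seen = {}
    x = 0
    while True:
        y = f(x, n_bits)
        if y in seen:
            return seen[y], x, x + 1   # colliding pair and query count
        seen[y] = x
        x += 1

a, b, queries = birthday_collision()
assert a != b and f(a) == f(b)
print(queries)   # typically on the order of 2^(24/2) = 4096
```

An attack of this shape that succeeds against a concrete construction caps its security at the birthday bound, contradicting any almost-$n$-bit claim.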
ePrint Report TLS-N: Non-repudiation over TLS Enabling Ubiquitous Content Signing for Disintermediation Hubert Ritzdorf, Karl Wüst, Arthur Gervais, Guillaume Felley, Srdjan Capkun
An internet user wanting to share observed content is typically restricted to primitive techniques such as screenshots, web caches or share-button-like solutions. These purported proofs, however, are either trivial to falsify or require trust in centralized entities (e.g., search engine caches).

This motivates the need for a seamless and standardized internet-wide non-repudiation mechanism, allowing users to share data from news sources, social websites or financial data feeds in a provably secure manner.

Additionally, blockchain oracles that enable data-rich smart contracts typically rely on a trusted third party (e.g., TLSNotary or Intel SGX). A decentralized method to transfer web-based content into a permissionless blockchain without additional trusted third party would allow for smart contract applications to flourish.

In this work, we present TLS-N, the first TLS extension that provides secure non-repudiation and solves both of the mentioned challenges. TLS-N generates non-interactive proofs about the content of a TLS session that can be efficiently verified by third parties and blockchain based smart contracts. As such, TLS-N increases the accountability for content provided on the web and enables a practical and decentralized blockchain oracle for web content. TLS-N is compatible with TLS 1.3 and adds a minor overhead to a typical TLS session. When a proof is generated, parts of the TLS session (e.g., passwords, cookies) can be hidden for privacy reasons, while the remaining content can be verified.

Practical demonstrations can be found at https://tls-n.org/.
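The selective-hiding property (passwords or cookies hidden while the rest remains verifiable) can be approximated with a Merkle commitment over session chunks. This is a loose sketch of the general idea under our own simplifications; the actual TLS-N proof format and record chunking differ.

```python
# Sketch: commit to session chunks in a Merkle tree; reveal some chunks and
# only the leaf hash of a hidden one, so a verifier checks the root without
# learning the hidden chunk. Chunk contents and structure are hypothetical.

import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def root_from_leaf_hashes(hashes):
    """Merkle root over a power-of-two number of leaf hashes."""
    layer = list(hashes)
    while len(layer) > 1:
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

# the proof commits to all record chunks of the session
chunks = [b"GET /news", b"Cookie: secret", b"HTTP/1.1 200", b"body..."]
root = root_from_leaf_hashes([h(c) for c in chunks])

# prover reveals chunks 0, 2 and 3, but only the hash of chunk 1
revealed = {0: chunks[0], 2: chunks[2], 3: chunks[3]}
hidden = {1: h(chunks[1])}

# verifier recomputes the root without ever seeing chunk 1
leaf_hashes = [h(revealed[i]) if i in revealed else hidden[i] for i in range(4)]
assert root_from_leaf_hashes(leaf_hashes) == root
```

In the full protocol the root would additionally be signed by the TLS server (or bound to the session transcript), which is what turns this commitment into a third-party-verifiable proof.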
ePrint Report Boot Attestation: Secure Remote Reporting with Off-The-Shelf IoT Sensors Steffen Schulz, André Schaller, Florian Kohnhäuser, Stefan Katzenbeisser
A major challenge in computer security is establishing the trustworthiness of remote platforms. Remote attestation is the most common approach to this challenge. It allows a remote platform to measure and report its system state in a secure way to a third party. Unfortunately, existing attestation solutions either provide low security, as they rely on unrealistic assumptions, or are not applicable to commodity low-cost and resource-constrained devices, as they require custom secure hardware extensions that are difficult to adopt across IoT vendors. In this work, we propose a novel remote attestation scheme, named Boot Attestation, that is particularly optimized for low-cost and resource-constrained embedded devices. In Boot Attestation, software integrity measurements are immediately committed to during boot, thus relaxing the traditional requirement for secure storage and reporting. Our scheme is very light on cryptographic requirements and storage, allowing efficient implementations, even on the most low-end IoT platforms available today. We also describe extensions for more flexible management of ownership and third party (public-key) attestation that may be desired in fully Internet-enabled devices. Our scheme is supported by many existing off-the-shelf devices. To this end, we review the hardware protection capabilities for a number of popular device types and present implementation results for two such commercially available platforms.
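Committing to measurements during boot can be sketched with a PCR-style extend chain plus a keyed report. This is our simplified illustration of the general pattern, not the paper's exact protocol; the key provisioning and stage names are assumptions.

```python
# Hedged sketch: each boot stage extends a running measurement hash, and the
# final state is authenticated with a device key shared with the verifier.
# A stage compromised later cannot rewrite measurements of earlier stages.

import hashlib, hmac

def extend(state, measurement):
    """PCR-style extend: state' = H(state || H(measurement))."""
    return hashlib.sha256(state + hashlib.sha256(measurement).digest()).digest()

def attest(device_key, nonce, boot_stages):
    """Fold all stage measurements, then MAC the result against a nonce."""
    state = b"\x00" * 32
    for stage in boot_stages:
        state = extend(state, stage)
    report = hmac.new(device_key, nonce + state, hashlib.sha256).digest()
    return state, report

key = b"device-secret"   # provisioned shared secret, hypothetical
good, rep = attest(key, b"nonce", [b"bootloader", b"kernel"])
bad, _ = attest(key, b"nonce", [b"bootloader", b"evil-kernel"])
assert good != bad       # any modified stage changes the final state
```

Because `extend` is one-way and order-sensitive, the only way to produce the expected final state is to have measured exactly the expected stages in order, which is what removes the need for separate secure storage of the measurement log.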
In the present paper, we analyze the security of SIMON-like ciphers against linear cryptanalysis. First, an upper bound is derived on the squared correlation of the SIMON-like round function. We show that this upper bound decreases as the Hamming weight of the output mask increases. Based on this, we derive an upper bound on the squared correlation of linear trails for SIMON and SIMECK, namely $2^{-2R+2}$ for any $R$-round linear trail, and we extend this bound to SIMON-like ciphers. Meanwhile, an automatic search algorithm is proposed that can find the optimal linear trails in SIMON-like ciphers under the Markov assumption. With the proposed algorithm, we find the provably optimal linear trails for $12$, $16$, $19$, $28$ and $37$ rounds of SIMON$32/48/64/96/128$. To the best of our knowledge, this is the first time that provably optimal linear trails for SIMON$64$, SIMON$96$ and SIMON$128$ have been reported. The provably optimal linear trails for $13$, $19$ and $25$ rounds of SIMECK$32/48/64$, respectively, are also found. Besides the optimal linear trails, we also find $23$-, $31$- and $41$-round linear hulls for SIMON$64/96/128$, and $13$-, $21$- and $27$-round linear hulls for SIMECK$32/48/64$. As far as we know, these are the best linear hull distinguishers for SIMON and SIMECK so far. Compared with the approach based on SAT/SMT solvers in \cite{KolblLT15}, our search algorithm is more efficient and practical for evaluating security against linear cryptanalysis in the design of SIMON-like ciphers.
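For reference, the round function whose squared correlation is being bounded follows the public SIMON specification (shown here for the SIMON32 word size; this is the standard cipher description, independent of the paper's analysis): the bitwise AND of two rotations is the only non-linear component.

```python
# Standard SIMON-like round function, SIMON32 parameters (16-bit words):
# f(x) = ((x <<< 1) & (x <<< 8)) ^ (x <<< 2), used in a Feistel structure.

W = 16                    # word size for SIMON32
MASK = (1 << W) - 1

def rotl(x, r):
    """Left rotation of a W-bit word."""
    return ((x << r) | (x >> (W - r))) & MASK

def f(x):
    # the AND of two rotations is the sole non-linear operation, which is
    # what the squared-correlation bound in the paper is about
    return (rotl(x, 1) & rotl(x, 8)) ^ rotl(x, 2)

def round_fn(left, right, k):
    """One Feistel round with round key k."""
    return right ^ f(left) ^ k, left
```

Linear cryptanalysis approximates the AND gate by linear masks; since each active AND contributes at most squared correlation $2^{-2}$, chaining such approximations over $R$ rounds is what produces trail bounds of the form $2^{-2R+2}$.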
We study the quantum query complexity of finding a collision for a function $f$ whose outputs are chosen according to a non-uniform distribution $D$. We derive some upper bounds and lower bounds depending on the min-entropy and the collision-entropy of $D$. In particular, we improve the previous lower bound by Ebrahimi, Tabia, and Unruh from $\Omega(2^{k/9})$ to $\Omega(2^{k/5})$ where $k$ is the min-entropy of $D$.
