International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

05 June 2020

Wenbo Mao, Wenxiang Wang
ePrint Report
We present LotMint, a permissionless blockchain with a deliberately low bar for Proof-of-Work (PoW) difficulty. Our objective is for personal computers, cloud virtual machines or containers, even mobile devices, and hopefully future IoT devices, to become the main, widely distributed, collectively more secure, fairer, more reliable and economically sustainable mining workforce for blockchains. An immediate question arises: how do we prevent the permissionless network from being flooded with block-dissemination traffic by a massive number of profit-hungry miners? We propose a novel notion of {\em Decentralized Clock/Time} (DC/DT) as a global, logical clock/time that can be agreed upon by consensus. Our construction of DC/DT uses the distributed private clocks of the participating nodes. With DC/DT, a node that creates or hears a block can tell how luckily short or unluckily long a time the block took to be mined and/or to travel through the network. Nodes can ``time throttle'' the potentially large number of unluckily mined/travelled blocks. The luckier blocks passing through the time throttle are treated as time-tie forks, whose volume, the ``throttle diameter'', is adjustably controlled so as not to congest the network. With the number of time-tie forks kept manageable, it is then easy to break the tie and elect a winner, or even to queue an ordered plurality of winners for a ``multi-core'' use of resources. We provide succinct analyses of the necessary properties of the LotMint blockchain, including decentralization, energy saving, safety, liveness, robustness, fairness, anti-denial-of-service, anti-Sybil, anti-censorship, scaled-up transaction-processing throughput and sped-up payment-confirmation time.
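To make the ``time throttle'' mechanism concrete, here is a minimal Python sketch of the idea as we read it from the abstract; the field names, parameters and tie-break rule are illustrative inventions, not the paper's construction.

import hashlib

# Hypothetical sketch of LotMint-style time throttling (names invented).
# Each block records the decentralized-clock (DC/DT) time at which mining
# on it started; a node compares that with its own DC/DT view on arrival.

def elapsed(block, now):
    """How luckily short or unluckily long the block's mining plus
    network-travel time appears under the decentralized clock."""
    return now - block["dcdt_start"]

def time_throttle(blocks, now, throttle_diameter):
    """Keep only blocks lucky enough to pass the throttle; the diameter
    is tuned so the surviving time-tie forks do not congest the network."""
    return [b for b in blocks if elapsed(b, now) <= throttle_diameter]

def break_tie(survivors):
    """Elect one winner among time-tie forks, e.g. by smallest block
    hash (one possible deterministic tie-break rule)."""
    return min(survivors, key=lambda b: hashlib.sha256(b["header"]).digest())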
Leonie Reichert, Samuel Brack, Björn Scheuermann
ePrint Report
To combat the ongoing COVID-19 pandemic, many approaches have been proposed to automate the process of finding people who have been in contact with infected ones, also called contact tracing, with a special focus on preserving user privacy. In this survey we define multiple classes of automated contact tracing techniques into which most of the proposed approaches fall. We identify two major groups: systems that rely on a server to find new infections, and systems that distribute this process. We classify them systematically with regard to security and privacy criteria.
Sebastien Carre, Sylvain Guilley, Olivier Rioul
ePrint Report
Persistent fault analysis (PFA) consists in guessing block cipher secret keys by biasing their substitution box. This paper improves the original attack of Zhang et al. on AES-128 presented at CHES 2018. By a thorough analysis, the exact probability distribution of the ciphertext (under a uniformly distributed plaintext) is derived, and the maximum-likelihood key-recovery estimator is computed exactly. Its expression is turned into an attack algorithm, which is shown to be twice as efficient, in terms of the number of required encryptions, as the original attack of Zhang et al. The algorithm is also optimized from a computational-complexity standpoint. In addition, our optimal attack is naturally amenable to key enumeration, which expedites full 16-byte key extraction. Various tradeoffs between data and computational complexities are investigated.
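As a rough illustration of the maximum-likelihood idea, here is a toy sketch for one ciphertext byte, assuming (as in the standard PFA setting) that the faulted S-box output value and its replacement are known; this is not the paper's optimized algorithm.

from collections import Counter
import math

# A persistent fault replaces one S-box entry, so one output value v_miss
# never occurs and the substituted value v_dup occurs twice as often.
# For the right last-round key byte k, ciphertext bytes are c = S(x) ^ k,
# so v_miss ^ k is never observed and v_dup ^ k is over-observed.

def ml_key_ranking(ciphertext_bytes, v_miss, v_dup):
    counts = Counter(ciphertext_bytes)
    N = len(ciphertext_bytes)
    scores = {}
    for k in range(256):
        # Under guess k: P[c = v_miss ^ k] = 0, P[c = v_dup ^ k] = 2/256,
        # all other values 1/256.  Log-likelihood of the observed counts:
        if counts[v_miss ^ k] > 0:
            scores[k] = -math.inf              # impossible under this guess
        else:
            d = counts[v_dup ^ k]
            scores[k] = d * math.log(2 / 256) + (N - d) * math.log(1 / 256)
    return sorted(scores, key=scores.get, reverse=True)   # best guess first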
Crypto Group at IST Austria
ePrint Report
Automated contact tracing aims at helping with the current COVID-19 pandemic by alerting users of encounters with infected people. There are currently many proposals for protocols (like the ``decentralized'' DP-3T and PACT or the ``centralized'' ROBERT and DESIRE) to be run on mobile phones, where the basic idea is to regularly broadcast (using low energy Bluetooth) some values, and at the same time store (a function of) incoming messages broadcast by users in their proximity. Should a user get diagnosed, they can upload some data to a backend server, which is then used to alert users who were in proximity with the diagnosed user.

There are many important aspects one wants those protocols to achieve, in particular simplicity/efficiency, privacy and robustness, the latter including some security against false positives, that is, users getting alerts despite not having been in proximity with a diagnosed user.

In the existing proposals one can trigger false positives on a massive scale by an ``inverse-Sybil'' attack, where a large number of devices (malicious users or hacked phones) pretend to be the same user, so that later just a single person needs to be diagnosed (and allowed to upload) to trigger an alert for all users who were in proximity to any device in this large group.

We propose the first protocols that do not succumb to such attacks, assuming the devices involved in the attack do not communicate constantly, which we observe is a necessary assumption. Our first protocol requires devices to non-interactively exchange values (as, e.g., in DESIRE), while the second requires the devices to have access to some location-dependent coordinate (such as coarse GPS coordinates or cell-tower IDs).

The high-level idea of the protocols is to derive the values to be broadcast via a hash chain, so that two (or more) devices that want to launch an inverse-Sybil attack will not be able to connect their respective chains, and thus only one of them will be able to upload. Apart from achieving strong privacy and good efficiency, a main challenge is to force the chains on different devices to diverge, which we do by infusing unpredictable data (randomness from encounters in the first protocol, location data in the second). Our protocols also achieve security against replay attacks, and the second even against relay attacks.
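A minimal sketch of the hash-chain idea for the first protocol, with invented names; unpredictable data heard from nearby devices is absorbed into the chain, so two separated colluding devices cannot keep their chains identical.

import hashlib, os

class ChainState:
    def __init__(self):
        self.state = os.urandom(32)          # chain seed, kept on the device

    def next_broadcast(self, heard_values):
        # Absorb randomness from recently heard broadcasts (first protocol)
        # or a coarse location coordinate (second protocol); either way the
        # chains of devices in different places diverge.
        h = hashlib.sha256(self.state)
        for v in sorted(heard_values):       # heard_values: list of bytes
            h.update(v)
        self.state = h.digest()
        return hashlib.sha256(b"bcast" + self.state).digest()[:16]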
Avijit Dutta, Mridul Nandi, Abishanka Saha
ePrint Report
In \textsf{ICISC-05}, \textsf{ePrint-10} and \textsf{patarin-book}, Patarin proved that the number of solutions $(P_1, \ldots, P_{2q})$ with distinct $P_1, P_2, \ldots$, $P_{2q}$ from $\{0,1\}^n$ satisfying $P_{2i - 1} \oplus P_{2i} = \lambda_i$ ($\neq 0$), $1 \leq i \leq q$, is at least \begin{center} $\frac{(2^n)_{2q}}{2^{nq}}$ for all $q \leq \frac{2^n}{134}$, \end{center} where $(2^n)_{2q} := 2^n(2^n-1) \cdots (2^n - 2q + 1)$. This result is known as the \textit{Mirror Theory}. Mirror theory stands out as a powerful tool for establishing high security guarantees for many designs based on block ciphers (or even ideal permutations). Unfortunately, the proof of the mirror theory contains some unverifiable gaps and several mistakes. In this paper, we revisit the proof strategy of \textsf{ePrint-10} and \textit{provide a detailed proof of the mirror theory by correcting the mistakes and filling in the gaps}. In particular, we prove the mirror theory for all $q \leq 2^n/33.1$ (a wider range than originally claimed by Patarin). As an application, we show that the maximum PRF-advantage of the sum of domain-separated random permutations is \textbf{exactly} $1- (1 - 2^{-n})^q$, $\forall q \leq 2^n/33.1$. Using a similar proof strategy, we also prove the following mirror theory for the sum of two independent permutations: the number of solutions $(P_1, Q_1, \ldots, P_{q}, Q_q)$ with distinct $P_1, P_2, \ldots$, $P_{q}$ and distinct $Q_1, \ldots, Q_q$ satisfying $P_{i} \oplus Q_i = \lambda_i$ for any fixed $\lambda_i \in \{0,1\}^n$, $1 \leq i \leq q$, is at least \begin{center} $\frac{(2^n)_{q} \times (2^n)_q}{2^{nq}} \times (1 - \frac{1.2q^2}{2^{2n}}-\frac{108n^3}{2^{2n}})$,\ \ for all $q \leq \frac{2^n}{13}$. \end{center}
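For intuition, here is a toy brute-force check of the first lower bound at $n=4$, $q=2$ (purely illustrative; the theorem's content is the general inequality).

from itertools import product

# Count tuples (P1, P2, P3, P4), all distinct in {0,1}^4, with
# P1 ^ P2 = l1 and P3 ^ P4 = l2, and compare with (2^n)_{2q} / 2^{nq}.
n, q = 4, 2
N = 1 << n
l = [1, 3]                                  # any nonzero lambda values

count = 0
for P1, P3 in product(range(N), repeat=2):
    P2, P4 = P1 ^ l[0], P3 ^ l[1]
    if len({P1, P2, P3, P4}) == 4:
        count += 1

falling = 1
for i in range(2 * q):                      # (2^n)(2^n - 1)...(2^n - 2q + 1)
    falling *= N - i
bound = falling / N**q                      # the mirror-theory lower bound
print(count, ">=", bound)                   # 192 >= 170.625, as claimed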
Behzad Abdolmaleki, Helger Lipmaa, Janno Siim, Michał Zając
ePrint Report
While NIZK arguments in the CRS model are widely studied, the question of what happens when the CRS is subverted has received little attention. At ASIACRYPT 2016, Bellare, Fuchsbauer, and Scafuro showed the first negative and positive results for NIZK, proving in particular that it is impossible to achieve subversion soundness and (even non-subversion) zero-knowledge at the same time. On the positive side, they constructed an involved sound and subversion-zero-knowledge (Sub-ZK) non-succinct NIZK argument for NP. We consider the practically very relevant case of zk-SNARKs. We make Groth's zk-SNARK for \textsc{Circuit-SAT} from EUROCRYPT 2016 computationally knowledge-sound and perfectly composable Sub-ZK with minimal changes. We require only that the CRS trapdoor be extractable and the CRS publicly verifiable. To achieve the latter, we add some new elements to the CRS and construct an efficient CRS-verification algorithm. We also provide a definitional framework for knowledge-sound and Sub-ZK SNARKs.
Sahiba Suryawanshi, Dhiman Saha, Satyam Sachan
ePrint Report
In ToSC 2017, Saha et al. demonstrated an interesting property of SHA3 based on higher-order vectorial derivatives, which led to self-symmetry based distinguishers referred to as SymSum, improving on the complexity of the well-studied ZeroSum distinguisher by a factor of 4. This work takes a fresh look at that distinguisher in the light of the linearization technique developed by Guo et al. at Asiacrypt 2016. We observe, and support with theoretical proofs, that the advantage of SymSum over ZeroSum drops from a factor of 4 to 2 for any number of rounds linearized. SymSum augmented with linearization can penetrate up to two more rounds than the classical version, and one further round is gained by an inversion technique on the final hash values. The combined approach leads to distinguishers on up to 9 rounds of SHA3 variants with a complexity of only $2^{64}$, better than the equivalent ZeroSum distinguisher by a factor of 2. To the best of our knowledge this is the best distinguisher available on this many rounds of SHA3.
Chao Sun, Mehdi Tibouchi, Masayuki Abe
ePrint Report
Binary error LWE is the particular case of the learning with errors (LWE) problem in which errors are chosen in $\{0,1\}$. It has various cryptographic applications and, in particular, has been used to construct efficient encryption schemes for use in constrained devices. Arora and Ge showed that the problem can be solved in polynomial time given a number of samples quadratic in the dimension $n$. On the other hand, the problem is known to be as hard as standard LWE given only slightly more than $n$ samples.

In this paper, we first examine more generally how the hardness of the problem varies with the number of available samples. Under standard heuristics on the Arora--Ge polynomial system, we show that, for any $\epsilon >0$, binary error LWE can be solved in polynomial time $n^{O(1/\epsilon)}$ given $\epsilon\cdot n^{2}$ samples. Similarly, it can be solved in subexponential time $2^{\tilde O(n^{1-\alpha})}$ given $n^{1+\alpha}$ samples, for $0<\alpha<1$.

As a second contribution, we also generalize the binary error LWE problem to the case of a non-uniform error probability, and analyze the hardness of non-uniform binary error LWE with respect to the error rate and the number of available samples. We show that, for any error rate $0 < p < 1$, non-uniform binary error LWE is also as hard as worst-case lattice problems, provided that the number of samples is suitably restricted. This generalizes Micciancio and Peikert's hardness proof for uniform binary error LWE. Furthermore, we discuss attacks on the problem when the number of available samples is linear but significantly larger than $n$, and show that for sufficiently low error rates, subexponential or even polynomial-time attacks are possible.
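To make the Arora-Ge mechanism referenced above concrete, here is a small self-contained sketch (toy parameters of our choosing) of the quadratic constraint that every binary-error sample satisfies; linearizing the roughly $n^2/2$ monomials $s_i s_j$ turns about $n^2$ such constraints into a solvable linear system, which is the polynomial-time attack quoted above.

import random

# Since e is in {0,1}, every sample (a, b = <a,s> + e mod p) satisfies
# (b - <a,s>) * (b - <a,s> - 1) == 0 (mod p): a quadratic equation in s.
n, p = 8, 97
s = [random.randrange(p) for _ in range(n)]   # the secret

def sample():
    a = [random.randrange(p) for _ in range(n)]
    e = random.randrange(2)                   # binary error
    b = (sum(x * y for x, y in zip(a, s)) + e) % p
    return a, b

a, b = sample()
ip = sum(x * y for x, y in zip(a, s)) % p
assert (b - ip) * (b - ip - 1) % p == 0       # the Arora-Ge constraint holds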
Jean Claude Bajard, Sylvain Duquesne
ePrint Report
This paper deals with Montgomery-friendly primes, designed for Montgomery's modular reduction algorithm. Such numbers are scattered in the literature and their properties only partially exploited. We exhibit a large family of Montgomery-friendly primes that give rise to efficient modular reduction algorithms. We develop two main uses. The first is dedicated directly to cryptography, in particular isogeny-based approaches and, more generally, elliptic-curve cryptography (ECC). We suggest more appropriate finite fields and curves, in terms of complexity, for the recommended security levels of both isogeny-based cryptography and ECC. The second use is purely arithmetic: we propose families of alternative RNS bases. We show that, for dedicated architectures with word operators, we can reach, for the same or better complexity, larger RNS bases with Montgomery-friendly pairwise coprimes than the RNS bases generally used in the literature with pseudo-Mersenne numbers. This is particularly interesting for the modular arithmetic used in cryptography.
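For readers unfamiliar with why such primes help, here is a sketch of word-serial Montgomery reduction (REDC); when $p \equiv -1 \pmod{2^w}$, the per-word constant $p' = -p^{-1} \bmod 2^w$ equals 1, removing one multiplication per word. The parameters below are illustrative, not the paper's recommended primes.

def redc(T, p, w, num_words):
    """Compute T * 2^(-w*num_words) mod p, one w-bit word at a time."""
    mask = (1 << w) - 1
    p_inv = pow(-p, -1, 1 << w)      # p'; equals 1 for Montgomery-friendly p
    for _ in range(num_words):
        m = (T & mask) * p_inv & mask   # for friendly p this is just T & mask
        T = (T + m * p) >> w            # low word cancels; shift it out
    return T if T < p else T - p

# Example: p = 2**127 - 1 satisfies p = -1 (mod 2**64), so p_inv == 1.
# redc(a * b, p, 64, 2) returns a*b * 2**(-128) mod p.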

03 June 2020

Amos Beimel, Oriol Farràs
ePrint Report
The share size of general secret-sharing schemes is poorly understood. The gap between the best known upper bound on the share size per party of $2^{0.64n+o(n)}$ (Applebaum et al., STOC 2020) and the best known lower bound of $\Omega(n/\log n)$ (Csirmaz, J. of Cryptology 1997) is huge (where $n$ is the number of parties in the scheme). To gain some understanding of this problem, we study the share size of secret-sharing schemes for almost all access structures, i.e., almost all collections of authorized sets. This is motivated by the fact that in complexity theory, almost all objects are often the hardest ones (e.g., most Boolean functions require exponential-size circuits). All previous constructions of secret-sharing schemes were for the worst access structures (i.e., all access structures) or for specific families of access structures.

We prove upper bounds on the share size for almost all access structures. We combine results on almost all monotone Boolean functions (Korshunov, Probl. Kibern. 1981) with a construction of Liu and Vaikuntanathan (STOC 2018) and conclude that almost all access structures have a secret-sharing scheme with share size $2^{\tilde{O}(\sqrt{n})}$.

We also study graph secret-sharing schemes. In these schemes, the parties are vertices of a graph and a set can reconstruct the secret if and only if it contains an edge. Again, for this family there is a huge gap between the upper bound of $O(n/\log n)$ (Erdös and Pyber, Discrete Mathematics 1997) and the lower bound of $\Omega(\log n)$ (van Dijk, Des. Codes Crypto. 1995). We show that for almost all graphs, the share size of each party is $n^{o(1)}$.

This result is achieved by using robust 2-server conditional disclosure of secrets protocols, a new primitive introduced and constructed in (Applebaum et al., STOC 2020), and the fact that the size of the maximal independent set in a random graph is small. Finally, using robust conditional disclosure of secrets protocols, we improve the total share size for all very dense graphs.
Wei Dai, Stefano Tessaro, Xihu Zhang
ePrint Report
We build symmetric encryption schemes from a pseudorandom function/permutation with domain size $N$ which have very high security -- in terms of the amount of messages $q$ they can securely encrypt -- assuming the adversary has $S < N$ bits of memory. We aim to minimize the number of calls $k$ we make to the underlying primitive to achieve a certain $q$, or equivalently, to maximize the achievable $q$ for a given $k$. We target in particular $q \gg N$, in contrast to recent works (Jaeger and Tessaro, EUROCRYPT '19; Dinur, EUROCRYPT '20) which aim to beat the birthday barrier with one call when $S < \sqrt{N}$.

Our first result gives new and explicit bounds for the Sample-then-Extract paradigm of Tessaro and Thiruvengadam (TCC '18). We show instantiations for which $q =\Omega((N/S)^{k})$. If $S < N^{1- \alpha}$, Tessaro and Thiruvengadam's weaker bounds only guarantee $q > N$ when $k = \Omega(\log N)$. In contrast, here, we show this is true already for $k = O(1/\alpha)$.

We also consider a scheme by Bellare, Goldreich and Krawczyk (CRYPTO '99) which evaluates the primitive on $k$ independent random strings, and masks the message with the XOR of the outputs. Here, we show $q= \Omega((N/S)^{k/2})$, using new combinatorial bounds on the list-decodability of XOR codes which are of independent interest. We also study best-possible attacks against this construction.
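A minimal sketch of the Bellare-Goldreich-Krawczyk construction just described, with HMAC-SHA256 (truncated) standing in for the PRF; function names and block size are ours.

import hashlib, hmac, os

def xor(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

def encrypt(key, msg16, k):
    """BGK-style: C = (r_1, ..., r_k, F_K(r_1) ^ ... ^ F_K(r_k) ^ m).
    The message is masked by the XOR of the PRF on k random points."""
    rs = [os.urandom(16) for _ in range(k)]
    mask = bytes(16)
    for r in rs:
        mask = xor(mask, hmac.new(key, r, hashlib.sha256).digest()[:16])
    return rs, xor(mask, msg16)

# Decryption recomputes the same mask from (rs, key) and XORs it off.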
John Cartlidge, Nigel P. Smart, Younes Talibi Alaoui
ePrint Report
Dark markets for share trading provide an important function in the financial services industry; however, the operators of such markets are not themselves operating in the dark, leaving open the possibility that the market operator gains a market advantage. Multi-Party Computation (MPC) is a method for removing the need for such trusted third parties. In this paper we show how MPC can be used to implement such a dark market. Importantly, we base our solution on the throughputs and algorithms needed in real markets; in particular, we base our implementation on the Turquoise Plato market used by the London Stock Exchange.
Peter Gazi, Aggelos Kiayias, Alexander Russell
ePrint Report
We establish the optimal security threshold for the Bitcoin protocol in terms of adversarial hashing power, honest hashing power, and network delays. Specifically, we prove that the protocol is secure if $$r_a < \frac{1}{\Delta + 1/r_h}\; ,$$ where $r_h$ is the expected number of honest proof-of-work successes in unit time, $r_a$ is the expected number of adversarial successes, and no message is delayed by more than $\Delta$ time units. In this regime, the protocol guarantees consistency and liveness with exponentially decaying failure probabilities. Outside this region, the simple private chain attack prevents consensus.
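A worked numeric instance of the threshold (illustrative numbers, not from the paper): with Bitcoin-like parameters, the tolerable adversarial fraction of hashing power sits just below one half, since the delay $\Delta$ is small relative to the block interval.

delta = 10.0            # seconds: bound on message delay
r_h = 1.0 / 600.0       # honest PoW successes per second (one per 10 minutes)

r_a_max = 1.0 / (delta + 1.0 / r_h)      # the security threshold above
print(r_a_max / (r_a_max + r_h))         # tolerable adversarial fraction: ~0.496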

Our analysis immediately applies to any Nakamoto-style proof-of-work protocol; we also present the adaptations needed to apply it in the proof-of-stake setting, establishing a similar threshold there.
Jing Tian, Piaoyang Wang, Zhe Liu, Jun Lin, Zhongfeng Wang, Johann Großschädl
ePrint Report
Due to its smaller public and secret keys compared with other candidates for post-quantum cryptography (PQC), the supersingular isogeny key encapsulation (SIKE) protocol survived the fierce second-round competition hosted by the National Institute of Standards and Technology (NIST) in January 2019. Many efforts have been made to reduce its computation latency, which is, however, still far higher than desired. In SIKE implementations, the Montgomery representation has mostly been adopted for the finite-field arithmetic, as the corresponding reduction algorithm is considered the fastest method for implementing modular reduction. In this paper, we propose a new data representation for supersingular isogeny-based elliptic-curve cryptography (ECC), of which SIKE is a subclass. The new representation facilitates a faster modular reduction than Montgomery reduction, and the other finite-field arithmetic operations in ECC also benefit from it. We have implemented all the arithmetic operations in C with constant execution time based on the proposed data representation and applied them to the newest SIKE software library. Targeting SIKEp751, we ran our design and the optimized reference implementation on a 2.6 GHz Intel Xeon E5-2690 processor. The experimental results show that, for the parameters of SIKEp751, the proposed modular reduction algorithm is about 2.61x faster than the best Montgomery reduction, and our scheme also performs significantly better for the other finite-field operations. With these improvements, the overall software implementation of SIKEp751 achieves about a 1.65x speedup compared to the state-of-the-art implementation.
Alexander Maximov, Martin Hell
ePrint Report
Grain-128AEAD is a stream cipher supporting authenticated encryption with associated data, currently in round 2 of the NIST lightweight crypto standardization process. In this paper we present and benchmark software implementations of the cipher targeting constrained processors: the 8-bit (AVR) and 16-bit (MSP) processors used in the FELICS-AEAD framework. Both high-speed and small-code-size implementations are targeted, giving four implementations in total. Using the FELICS framework for benchmarking, we conclude that Grain-128AEAD is competitive with the other algorithms currently included in the FELICS framework. Our detailed discussion of particular implementation tricks and choices can hopefully be of use to the community when considering optimizations for other ciphers.
Masahito Ishizaka, Shinsaku Kiyomoto
ePrint Report
In Time-Specific Signatures (TSS) parameterized by an integer $T\in\mathbb{N}$, a signer with a secret key associated with a numerical value $t\in[0,T-1]$ can anonymously, i.e., without revealing $t$, sign a message under a numerical range $[L,R]$ such that $0\leq L \leq t\leq R\leq T-1$. An application of TSS is anonymous questionnaires, where each user, associated with a numerical value such as age, date, salary, or geographical position (represented by longitude and latitude), can anonymously fill in a questionnaire in an efficient manner.

In this paper, we propose two \textit{polylogarithmically} efficient TSS constructions based on asymmetric pairings with groups of prime order, which achieve different efficiency characteristics. In the first, based on a forward-secure signature scheme concretely obtained from the hierarchical identity-based signature scheme proposed by Chatterjee and Sarkar (IJACT'13), the sizes of the master public key, a secret key and a signature are asymptotically $\mathcal{O}(\log T)$, and the size of the master secret key is $\mathcal{O}(1)$. In the second, based on a wildcarded identity-based ring signature scheme obtained as an instantiation of the attribute-based signature scheme proposed by Sakai, Attrapadung and Hanaoka (PKC'16), the sizes are $\mathcal{O}(\log T)$, $\mathcal{O}(1)$, $\mathcal{O}(\log^2 T)$ and $\mathcal{O}(\log T)$, respectively.
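One standard way such $\mathcal{O}(\log T)$ factors arise in range-based primitives (a generic illustration, not necessarily the paper's exact technique) is that any range $[L,R] \subseteq [0,T-1]$ can be covered by $\mathcal{O}(\log T)$ complete dyadic segments, so signing under a range reduces to a logarithmic number of segment keys.

def range_cover(L, R, lo, hi):
    """Minimal dyadic segments covering [L, R] inside [lo, hi)."""
    if R < lo or L >= hi:
        return []                       # disjoint: nothing to cover
    if L <= lo and hi - 1 <= R:
        return [(lo, hi - 1)]           # fully covered: one segment suffices
    mid = (lo + hi) // 2
    return range_cover(L, R, lo, mid) + range_cover(L, R, mid, hi)

# T = 16: the range [3, 12] decomposes into at most 2*log2(T) segments.
print(range_cover(3, 12, 0, 16))        # [(3, 3), (4, 7), (8, 11), (12, 12)]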
Chloé Hébant, David Pointcheval
ePrint Report
Many attribute-based anonymous credential (ABC) schemes have been proposed that allow a user to prove possession of attributes anonymously. They have become increasingly practical, with the most recent papers offering a constant-size credential to show a subset of attributes issued by a single credential issuer. However, proving possession of attributes coming from $K$ different credential issuers usually requires $K$ independent credentials to be shown. Only attribute-based credential schemes built from aggregatable signatures can overcome this issue.

In this paper, we propose new ABC schemes from aggregatable signatures with randomizable tags. We consider malicious credential issuers, with adaptive corruptions and collusions with malicious users. Whereas our constructions only support selective disclosures of attributes, our approach significantly improves the complexity in both time and memory of the showing of multiple attributes: for the first time, the cost for the prover is (almost) independent of the number of attributes \emph{and} the number of credential issuers. Moreover, we propose the first schemes allowing traceability in case of abuse of credentials.

We formally define an aggregatable signature scheme with (traceable) randomizable tags, which is of independent interest. We build concrete schemes from linearly homomorphic signatures. As with all recent ABC schemes, our constructions are proven secure in the bilinear generic group model.
Bishwajit Chakraborty, Soumya Chattopadhyay, Ashwin Jha, Mridul Nandi
ePrint Report
At FSE 2017, Ga\v{z}i et al. demonstrated a pseudorandom function (PRF) distinguisher (Ga\v{z}i et al., ToSC 2016(2)) on PMAC with $ \Omega(\ell q^2/2^n) $ advantage, where $ q $, $ \ell $, and $ n $ denote the number of queries, the maximum permissible query length (in $ n $-bit blocks), and the block size of the underlying block cipher, respectively. This, in combination with the upper bounds of $ O(\ell q^2/2^n) $ (Minematsu and Matsushima, FSE 2007) and $ O(q\sigma/2^n) $ (Nandi and Mandal, J. Mathematical Cryptology 2008(2)), resolved the long-standing problem of the exact security of PMAC. Ga\v{z}i et al. also showed that the dependency on $ \ell $ can be dropped (i.e., an $ O(q^2/2^n) $ bound up to $ \ell \leq 2^{n/2} $) for a simplified version of PMAC, called sPMAC, by replacing the Gray-code-based masking in PMAC with any $ 4 $-wise independent universal hash-based masking. Recently, Naito proposed another variant of PMAC with two powering-up maskings (Naito, ToSC 2019(2)) that achieves an $ \ell $-free bound of $ O(q^2/2^n) $, provided $ \ell \leq 2^{n/2} $. In this work, we first identify a flaw in the analysis of Naito's PMAC variant that invalidates the security proof. Apparently, the flaw is not easy to fix under the existing proof setup. We then formulate an equivalent problem which must be solved in order to achieve $ \ell $-free security bounds for this variant. Second, we show that sPMAC achieves an $ O(q^2/2^n) $ bound for a weaker notion of universality than the earlier condition of $ 4 $-wise independence.
Yoo-Seung Won, Dirmanto Jap, Shivam Bhasin
ePrint Report
Side-channel analysis has seen rapid adoption of deep learning techniques over the past years. While many papers focus on designing efficient architectures, some works have proposed techniques to boost the efficiency of existing architectures, including data augmentation, oversampling, and regularization. In this paper, we compare data augmentation and oversampling (particularly SMOTE and its variants) on public traces of two side-channel protected AES implementations. The techniques are compared in both balanced and imbalanced class settings, and we show that adopting SMOTE variants can boost attack efficiency in general. Further, we report a successful key recovery on ASCAD (desync=100) with 180 traces, a 50% improvement over the current state of the art.
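A minimal sketch of SMOTE-style oversampling with the imbalanced-learn package (trace shapes and labels are illustrative toys, not the ASCAD setup): SMOTE synthesizes new minority-class traces by interpolating between a trace and one of its nearest same-class neighbours, balancing the classes before training.

import numpy as np
from imblearn.over_sampling import SMOTE    # pip install imbalanced-learn

X = np.random.randn(1000, 700)              # 1000 traces, 700 time samples each
y = np.random.choice(2, size=1000, p=[0.9, 0.1])   # imbalanced toy labels

X_bal, y_bal = SMOTE(k_neighbors=5).fit_resample(X, y)
print(X.shape, "->", X_bal.shape)           # classes are now balanced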
Eli Ben-Sasson, Dan Carmon, Yuval Ishai, Swastik Kopparty, Shubhangi Saraf
ePrint Report
A collection of sets displays a proximity gap with respect to some property if for every set in the collection, either (i) all members are $\delta$-close to the property in relative Hamming distance or (ii) only a tiny fraction of members are $\delta$-close to the property. In particular, no set in the collection has roughly half of its members $\delta$-close to the property and the others $\delta$-far from it.

We show that the collection of affine spaces displays a proximity gap with respect to Reed-Solomon (RS) codes, even over small fields, of size polynomial in the dimension of the code, and the gap applies to any $\delta$ smaller than the Johnson/Guruswami-Sudan list-decoding bound of the RS code. We also show near-optimal gap results, over fields of (at least) linear size in the RS code dimension, for $\delta$ smaller than the unique decoding radius. Finally, we discuss several applications of our proximity gap results to distributed storage, multi-party cryptographic protocols, and concretely efficient proof systems.

We prove the proximity gap results by analyzing the execution of classical algebraic decoding algorithms for Reed-Solomon codes (due to Berlekamp-Welch and Guruswami-Sudan) on a formal element of an affine space. This involves working with Reed-Solomon codes whose base field is an (infinite) rational function field. Our proofs are obtained by developing an extension (to function fields) of a strategy of Arora and Sudan for analyzing low-degree tests.
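For a feel of the statement, here is a brute-force toy with tiny parameters of our choosing: on a random affine line $\{u + x\,v\}$ over a small field, the number of points $\delta$-close to a low-degree Reed-Solomon code is either essentially all of them or essentially none of them, never about half.

from itertools import product
import random

p, k = 11, 2                     # GF(11); RS code = evaluations of degree<2 polys
xs = list(range(7))              # 7 evaluation points, code length 7

def dist(word):
    """Hamming distance to the nearest codeword, by brute force."""
    best = len(xs)
    for c0, c1 in product(range(p), repeat=k):
        cw = [(c0 + c1 * x) % p for x in xs]
        best = min(best, sum(a != b for a, b in zip(word, cw)))
    return best

u = [random.randrange(p) for _ in xs]
v = [random.randrange(p) for _ in xs]
delta = 2 / len(xs)              # within the unique-decoding radius here
close = sum(dist([(a + t * b) % p for a, b in zip(u, v)]) <= delta * len(xs)
            for t in range(p))
print(close, "of", p, "points on the line are delta-close")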