11 March 2025
Martin R. Albrecht, Lenka Mareková, Kenneth G. Paterson, Eyal Ronen, Igors Stepanovs
We describe, formally model, and prove the security of Telegram's key exchange protocols for client-server communications. To achieve this, we develop a suitable multi-stage key exchange security model along with pseudocode descriptions of the Telegram protocols that are based on analysis of Telegram's specifications and client source code. We carefully document how our descriptions differ from reality and justify our modelling choices. Our security proofs reduce the security of the protocols to that of their cryptographic building blocks, but the subsequent analysis of those building blocks requires the introduction of a number of novel security assumptions, reflecting many design decisions made by Telegram that are suboptimal from the perspective of formal analysis. Along the way, we provide a proof of IND-CCA security for the variant of RSA-OAEP+ used in Telegram and identify a hypothetical attack exploiting current Telegram server behaviour (which is not captured in our protocol descriptions). Finally, we reflect on the broader lessons about protocol design that can be taken from our work.
Lewis Glabush, Felix Günther, Kathrin Hövelmanns, Douglas Stebila
Cryptographic schemes often contain verification steps that are essential for security. Yet, faulty implementations missing these steps can easily go unnoticed, as the schemes might still function correctly. A prominent instance of such a verification step is the re-encryption check in the Fujisaki-Okamoto (FO) transform, which plays a central role in the post-quantum key encapsulation mechanisms (KEMs) considered in NIST's PQC standardization process. In KEMs built from FO, decapsulation performs a re-encryption check that is essential for security, but not for functionality. In other words, it will go unnoticed if this essential step is omitted or wrongly implemented, opening the door for key recovery attacks. Notably, such an implementation flaw was present in HQC's reference implementation and was only noticed after 19 months.
In this work, we develop a modified FO transform that binds re-encryption to functionality, ensuring that a faulty implementation which skips re-encryption will be exposed through basic correctness tests. We do so by adapting the "verifiable verification" methodology of Fischlin and Günther (CCS 2023) to the context of FO-based KEMs. More concretely, by exporting an unpredictable confirmation code from the public key encryption and embedding it into the key derivation function, we can confirm that (most of) the re-encryption step was indeed performed during decapsulation. We formalize this concept, establish modified FO transforms, and prove how unpredictable PKE confirmation codes turn into noticeable correctness errors for faulty implementations. We show how to apply this technique to ML-KEM and HQC, both with negligible overhead, by leveraging the entropy lost through ciphertext compression or truncation. We confirm that our approach works through mathematical proofs, as well as experimental data. Our experiments show that the implementation flaw in HQC's reference implementation indeed makes basic test cases fail when following our approach.
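The confirmation-code mechanism described above can be illustrated with a minimal Python sketch. This is not the paper's construction: the "PKE" below is an insecure symmetric stand-in (one key plays both pk and sk), and the names `enc`, `dec`, `conf`, and `KDF` are ours. The point it demonstrates is the binding: encryption emits a few tag bytes that are truncated away from the sent ciphertext (mimicking entropy lost to compression or truncation), and honest decapsulation can only recover those bytes by actually re-encrypting, so a faulty decapsulation that skips re-encryption derives the wrong session key and fails a basic correctness test.

```python
import hashlib, os

def H(tag: bytes, *parts: bytes) -> bytes:
    h = hashlib.sha256(tag)
    for p in parts:
        h.update(p)
    return h.digest()

def xor(a: bytes, b: bytes) -> bytes:
    # XOR truncated to the shorter input
    return bytes(x ^ y for x, y in zip(a, b))

def enc(k: bytes, m: bytes, r: bytes):
    """Toy deterministic encryption. Returns (ciphertext, confirmation code):
    the last 8 tag bytes are *not* sent and serve as the confirmation code."""
    u = xor(m, H(b"pad", k, r))
    w = xor(r, H(b"mask", k, u))
    t = H(b"tag", k, u, w)
    return (u, w, t[:8]), t[8:16]

def dec(k: bytes, ct):
    u, w, _ = ct
    r = xor(w, H(b"mask", k, u))
    return xor(u, H(b"pad", k, r))

def G(m: bytes) -> bytes:          # FO-style derandomizer
    return H(b"G", m)[:16]

def KDF(m, ct, conf):
    return H(b"KDF", m, ct[0], ct[1], ct[2], conf)

def encaps(k: bytes):
    m = os.urandom(16)
    ct, conf = enc(k, m, G(m))
    return ct, KDF(m, ct, conf)    # confirmation code feeds the key derivation

def decaps(k: bytes, ct, skip_reencryption=False):
    m = dec(k, ct)
    if skip_reencryption:
        conf = bytes(8)            # faulty code path: conf was never computed
    else:
        ct2, conf = enc(k, m, G(m))   # re-encryption check yields conf
        if ct2 != ct:
            return H(b"reject", k, ct[0], ct[1], ct[2])  # implicit rejection
    return KDF(m, ct, conf)
```

With this binding, `decaps(k, ct)` matches the encapsulated key, while `decaps(k, ct, skip_reencryption=True)` does not, so the flaw surfaces in an ordinary round-trip test.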
Feng Han, Xiao Lan, Weiran Liu, Lei Zhang, Hao Ren, Lin Qu, Yuan Hong
Oblivious permutation (OP) enables two parties, a sender with a private data vector $x$ and a receiver with a private permutation $\pi$, to securely obtain the shares of $\pi(x)$. OP has been used to construct many important MPC primitives and applications such as secret shuffle, oblivious sorting, private set operations, secure database analysis, and privacy-preserving machine learning. Due to its high complexity, OP has become a performance bottleneck in several practical applications, and many efforts have been devoted to enhancing its concrete efficiency. Chase et al. (Asiacrypt'20) proposed an offline-online OP paradigm leveraging a pre-computable resource termed Share Translation. While this paradigm significantly reduces online costs, the substantial offline cost of generating Share Translation remains an area for further investigation.
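The OP functionality itself can be stated as a short trusted-dealer reference in Python. This is only the ideal input/output behavior, not a secure two-party protocol (a real protocol, such as the offline-online paradigm above, achieves it without either party learning the other's input); the convention $\pi(x)[i] = x[\pi(i)]$ is an assumption of this sketch.

```python
import secrets

def oblivious_permutation_ideal(x, perm):
    """Trusted-dealer reference for OP (NOT a secure protocol):
    returns additive (XOR) shares s_send, s_recv over bytes such that
    s_send[i] ^ s_recv[i] == x[perm[i]] for all i."""
    n = len(x)
    s_send = [secrets.randbelow(256) for _ in range(n)]   # sender's random shares
    s_recv = [x[perm[i]] ^ s_send[i] for i in range(n)]   # receiver's shares
    return s_send, s_recv
```

Reconstructing the two share vectors position-wise yields the permuted vector, which is the correctness condition any OP protocol (and any COP-based online phase) must satisfy.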
In this work, we redefine the pre-computable resource as a cryptographic primitive known as Correlated Oblivious Permutation (COP) and conduct in-depth analyses and optimizations of the two COP generation solutions: network-based solution and matrix-based solution. The optimizations for the network-based solution halve the communication/computation cost of constructing a switch (the basic unit of the permutation network) and reduce the number of switches in the permutation network. The optimizations for the matrix-based solution halve the communication cost of small-size COP generation and reduce the cost of large-size COP generation with in-outside permutation decomposition.
We implement our two COP generation protocols and conduct comprehensive evaluations. Taking commonly used 128-bit input data as an example, our network-based and matrix-based solutions are up to 1.7x and 1.6x faster than baseline protocols, respectively. We further equip state-of-the-art (SOTA) private set union (PSU) protocols with our optimized COP, achieving over 25% reduction in communication cost and 35% decrease in execution time. This shows that our COP optimizations bring significant improvements for real-world MPC primitives.
Jai Hyun Park
Matrix multiplication of two encrypted matrices (CC-MM) is a key challenge for privacy-preserving machine learning applications. As modern machine learning models focus on scalability, fast CC-MM on large datasets is increasingly in demand.
In this work, we present a CC-MM algorithm for large matrices. The algorithm consists of plaintext matrix multiplications (PP-MM) and ciphertext matrix transpose algorithms (C-MT). We propose a fast C-MT algorithm, which is computationally inexpensive compared to PP-MM. By leveraging high-performance BLAS libraries to optimize PP-MM, we implement large-scale CC-MM with substantial performance improvements. Furthermore, we propose lightweight algorithms, significantly reducing the key size from $1\ 960$ MB to $1.57$ MB for CC-MM with comparable efficiency.
In a single-thread implementation, the C-MT algorithm takes $0.76$ seconds to transpose a $2\ 048\times 2\ 048$ encrypted matrix. The CC-MM algorithm requires $85.2$ seconds to multiply two $4\ 096\times 4\ 096$ encrypted matrices. For large matrices, our algorithm outperforms the state-of-the-art CC-MM method from Jiang-Kim-Lauter-Song [CCS'18] by a factor of over $800$.
Registration for Eurocrypt 2025 is now open. The early registration deadline is April 4, 2025.
More information can be found here:
https://eurocrypt.iacr.org/2025/registration.php
On the conference website you can also find, among other information, the preliminary schedule, the list of accepted papers, and the list of affiliated events: https://eurocrypt.iacr.org/2025/
Finally, considering that Madrid is a popular tourist city which gets busy in May, we recommend booking your hotel early. Information on accommodation is available at: https://eurocrypt.iacr.org/2025/accommodations.php
10 March 2025
Yuval Ishai, Yifan Song
We consider the question of protecting a general computation device, modeled by a stateful Boolean circuit, against leakage of partial information about its internal wires. Goyal et al. (FOCS 2016) obtained a solution for the case of bounded-communication leakage, where the wires are partitioned into two parts and the leakage can be any function computed using $t$ bits of communication between the parts. However, this solution suffers from two major limitations: (1) it only applies to a one-shot (stateless) computation, mapping an encoded input to an encoded output, and (2) the leakage-resilient circuit consumes fresh random bits, whose number scales linearly with the circuit complexity of the computed function.
In this work, we eliminate the first limitation and make progress on the second. Concretely:
- We present the first construction of stateful circuits that offer information-theoretic protection against continuous bounded-communication leakage. As an application, we extend a two-party ``malware-resilient'' protocol of Goyal et al. to the continuous-leakage case.
- For simple types of bounded-communication leakage, which leak $t$ parities or $t$ disjunctions of circuit wires or their negations, we obtain a deterministic variant that does not require any fresh randomness beyond the randomness in the initial state. Here we get computational security based on a subexponentially secure one-way function. This is the first deterministic leakage-resilient circuit construction for any nontrivial class of global leakage.
Tiantian Gong, Aniket Kate, Hemanta K. Maji, Hai H. Nguyen
In verifiable secret sharing (VSS), a dealer shares a secret input among several parties, ensuring each share is verifiable. Motivated by its applications in the blockchain space, we focus on a VSS where parties holding shares are not allowed to reconstruct the dealer's secret (even partially) on their own terms; we refer to such an attempt as privacy-targeted collusion.
In this context, our work investigates mechanisms deterring such collusion in VSS among rational and malicious parties. For this problem, we make both algorithmic and combinatorial contributions:
1. We provide two collusion-deterrent mechanisms to discourage parties from colluding and recovering the dealer's secret. Notably, when it is desired to achieve fairness---where non-colluding parties are not at a loss---while allowing for the best achievable malicious fault tolerance, we define ``trackable access structures'' (TAS) and design a deterrence mechanism tailored for VSS on these structures.
2. We estimate the size of the optimal TAS, construct them from Steiner systems, provide highly robust TAS using partial Steiner systems, and present efficient secret sharing schemes for the latter close-to-optimal TAS for various parameter regimes.
3. We demonstrate that trackability in access structures is connected to combinatorial objects like (partial) Steiner systems, uniform subsets with restricted intersections, and appropriate binary codes. The robustness of access structures is equivalent to the minimum vertex cover of hypergraphs.
We believe these connections between cryptography, game theory, and discrete mathematics will be of broader interest.
Gao Ming
The P vs NP problem is the most important unresolved problem in the field of computational complexity. Its impact has penetrated all aspects of algorithm design, especially in the field of cryptography. The security of cryptographic algorithms based on short keys depends on whether P equals NP. Shannon rigorously proved that the one-time-pad system achieves unconditional security, but because it requires the key to be at least as long as the plaintext, transferring the key is a troublesome problem that restricts the use of the one-time-pad system in practice. Cryptographic algorithms used in practice are all based on short keys, and the security of the short-key mechanism ultimately rests on a one-wayness assumption. In fact, the existence of one-way functions directly leads to the important conclusion P≠NP.
In this paper, we construct a novel short-key block cipher algorithm. Its core feature is that for any block, when a plaintext-ciphertext pair is known, any key in the key space is valid; that is, for each block the plaintext-ciphertext pair and the key are independent, and independence between blocks is also easy to construct. This feature is completely different from all existing short-key cipher algorithms.
Based on the above feature, we construct a problem and theoretically prove that it satisfies the properties of a one-way function, thereby settling the question of the existence of one-way functions, that is, directly proving that P≠NP.
David Heath, Vladimir Kolesnikov, Varun Narayanan, Rafail Ostrovsky, Akash Shah
State-of-the-art protocols that achieve constant-round secure multiparty computation currently present a trade-off: either consume an amount of communication that scales quadratically in the number of parties, or achieve better asymptotics at the cost of high constant factors (e.g. schemes based on LPN or DDH).
We construct a constant-round MPC protocol where communication scales linearly in the number of parties n. Our construction relies only on oblivious transfer (OT) and a random oracle (RO), and it leverages packed secret sharing. Because it builds on simple primitives, our protocol offers concrete improvements over asymptotically efficient LPN-based schemes. We consider security in the presence of a dishonest majority where the malicious (with abort) adversary corrupts an arbitrary constant fraction of parties.
By leveraging tri-state circuits (Heath et al. Crypto 2023), we extend our protocol to the RAM model of computation. For a RAM program that halts within $T$ steps, our maliciously-secure protocol communicates $O(n \cdot T \log^3 T \log \log T \cdot \kappa)$ total bits, where $\kappa$ is a security parameter.
Alireza Kavousi, István András Seres
Practical signature-based witness encryption (SWE) schemes recently emerged as a viable alternative to instantiate timed-release cryptography in the honest majority setting. In particular, assuming threshold trust in a set of parties that release signatures at a specified time, one can ``encrypt to the future'' using an SWE scheme. Applications of SWE schemes include voting, auctions, distributed randomness beacons, and more. However, the lack of homomorphism in existing SWE schemes reduces efficiency and hinders deployment. In this work, we introduce the notion of homomorphic SWE (HSWE) to improve the practicality of timed-release encryption schemes. We show one can build HSWE using a pair of encryption and signature schemes where the uniqueness of the signature is required when the encryption scheme relies on injective one-way functions. We then build three HSWE schemes in various settings using BLS, RSA, and Rabin signatures and show how to achieve a privacy-preserving variant that only allows extracting the homomorphically aggregated result while keeping the individual plaintexts confidential.
Yuval Ishai, Hanjun Li, Huijia Lin
A major challenge in cryptography is the construction of succinct garbling schemes that have asymptotically smaller size than Yao’s garbled circuit construction. We present a new framework for succinct garbling that replaces the heavy machinery of most previous constructions by lighter-weight homomorphic secret sharing techniques.
Concretely, we achieve 1-bit-per-gate (amortized) garbling size for Boolean circuits under circular variants of standard assumptions in composite-order or prime-order groups, as well as a lattice-based instantiation. We further extend these ideas to layered circuits, improving the per-gate cost below 1 bit, and to arithmetic circuits, eliminating the typical Ω(λ)-factor overhead for garbling mod-p computations. Our constructions also feature “leveled” variants that remove circular-security requirements at the cost of adding a depth-dependent term to the garbling size.
Our framework significantly extends a recent technique of Liu, Wang, Yang, and Yu (Eurocrypt 2025) for lattice-based succinct garbling, and opens new avenues toward practical succinct garbling. For moderately large circuits with a few million gates, our garbled circuits can be two orders of magnitude smaller than Yao-style garbling. While our garbling and evaluation algorithms are much slower, they are still practically feasible, unlike previous fully succinct garbling schemes that rely on expensive tools such as iO or a non-black-box combination of FHE and ABE. This trade-off can make our framework appealing when a garbled circuit is used as a functional ciphertext that is broadcast or stored in multiple locations (e.g., on a blockchain), in which case communication and storage may dominate computational cost.
Matthias Trannoy
Every cryptographic implementation on an embedded device is vulnerable to side-channel attacks. To prevent these attacks, the main countermeasure consists in splitting each sensitive variable into shares and processing them independently.
With the advent of new algorithms designed to resist quantum computers, and the complexity of their operations, this protection represents a real challenge.
In this article, we present an attack on an earlier attempt to protect the decoder of the BIKE cryptosystem against first-order attacks.
Additionally, we introduce a new procedure for the high-order masking of the decoder, up to date with its latest improvements.
We also present the first fully masked implementation of the whole cryptosystem, including the key generation and the encapsulation.
Finally, to assess the correctness of our countermeasures and enable further comparison, we implemented them in C and provide benchmarks of their performance.
Mohamed Malhou, Ludovic Perret, Kristin Lauter
We introduce the use of machine learning in the cryptanalysis of code-based cryptography. Our focus is on distinguishing problems related to the security of NIST round-4 McEliece-like cryptosystems, particularly for Goppa codes used in Classic McEliece and Quasi-Cyclic Moderate Density Parity-Check (QC-MDPC) codes used in BIKE. We present DeepDistinguisher, a new algorithm for distinguishing structured codes from random linear codes that uses a transformer. The results show that the new distinguisher achieves a high level of accuracy in distinguishing Goppa codes, suggesting that their structure may be more recognizable by AI models. Our approach outperforms traditional attacks in distinguishing Goppa codes in certain settings and generalizes to larger code lengths without further training, using a puncturing technique. We also present the first distinguishing results dedicated to MDPC and QC-MDPC codes.
Zhongyi Zhang, Chengan Hou, Meicheng Liu
In this paper, we study preimage resistance of the SHA-3 standard. We propose a squeeze meet-in-the-middle attack as a new preimage attack method for the sponge functions. This attack combines the squeeze attack and meet-in-the-middle attack, and is implemented by internal differentials. We analyze the inverse operation of the SHA-3 round function, and develop a new target internal differential algorithm as well as a linearization technique for the Sbox in the backward phase. In addition, we propose the concept of a value-difference distribution table (VDDT) to optimize the attack complexity. These techniques lead to faster preimage attacks on five (out of six) SHA-3 functions reduced to 4 rounds, and also bring preimage attacks on 5 rounds of four SHA-3 instances. The attack techniques are verified by performing a practical preimage attack on a small variant of 4-round Keccak.
Gideon Samid
We present a novel use of encryption, not for hiding a secret, but for marking letters. Given a 2n-letter plaintext, the transmitter encrypts the first n letters with key K1 to generate n corresponding cipherletters, and encrypts the second n letters with key K2 to generate another n corresponding cipherletters. The transmitter sends the 2n cipherletters along with the keys K1 and K2. The recipient (and any interceptor) will readily decrypt the 2n cipherletters to the original plaintext, which makes the procedure equivalent to sending the plaintext outright. So why bother? When decrypting the 2n cipherletters, one notes how the letters encrypted with K1 are mixed with the letters encrypted with K2, while the original order of the letters encrypted with each key is preserved. There are 2^n possible mixings, which means the choice of mixing order can deliver a secret message S comprising n bits. So while on the surface a given plaintext is sent from transmitter to recipient, this plaintext hides a secret. Imagine a text-messaging platform that uses this protocol: an adversary will not know which plain, innocent message harbors a secret one. This allows residents of cyberspace to communicate secrets without exposing the fact that they communicated a secret, with a potentially large impact on the level of cyberspace privacy.
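The mixing-order channel can be sketched in a few lines of Python. This is an illustration, not the paper's full scheme: each secret bit fixes the order of the next pair of K1/K2 cipherletters (a simple rule giving exactly 2^n mixings, matching the n-bit capacity claimed above), and recovering the bits assumes the recipient can tell which key each cipherletter was encrypted under, as the protocol presumes.

```python
def mix(stream1, stream2, secret_bits):
    """Interleave two n-letter cipherletter streams pairwise:
    bit 0 -> K1-letter first, bit 1 -> K2-letter first."""
    assert len(stream1) == len(stream2) == len(secret_bits)
    out = []
    for c1, c2, bit in zip(stream1, stream2, secret_bits):
        out += [c2, c1] if bit else [c1, c2]
    return out

def unmix(provenance):
    """Recover the secret bits from provenance labels
    (provenance[i] == 1 iff letter i decrypts under K2)."""
    return [provenance[2 * i] for i in range(len(provenance) // 2)]
```

For example, mixing streams `["a","b","c"]` and `["X","Y","Z"]` under bits `[1,0,1]` yields the order `X a b Y Z c`, from which the recipient, after attributing each letter to its key, reads the bits back off the provenance pattern.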
08 March 2025
Antonio Flórez-Gutiérrez, Yosuke Todo
ChaCha is a widely deployed stream cipher and one of the most important symmetric primitives. Due to this practical importance, many cryptanalytic results have been proposed. Until now, Probabilistic Neutral Bits (PNBs) have been the most successful technique: given differential-linear distinguishers, PNBs enable key recovery relying on an experimental backward correlation obtained through blackbox analysis. A careful theoretical analysis exploiting the round function design might yield a better attack and improve our understanding, but the complicated nature of the ARX structure makes such analysis difficult.
We propose a theoretical methodology inspired by bit puncturing, which was recently proposed at Eurocrypt 2024. Our method has a theoretical foundation and is thus fundamentally different from PNBs, to which it is the first effective alternative. As a result, we significantly improved the attack complexity for 6, 7, and 7.5-round ChaCha. The 7-round attack is about $2^{40}$ times faster than the previous best. Furthermore, we propose the first 7.5-round attack with a non-negligible advantage over an exhaustive search.
Chenzhi Zhu, Stefano Tessaro
This paper introduces a new one-more computational problem for lattice-based cryptography, which we refer to as the Algebraic One-More MISIS problem, or AOM-MISIS for short. It is a modification of the AOM-MLWE problem recently introduced by Espitau et al. (CRYPTO ’24) to prove security of new two-round threshold signatures.
Our first main result establishes that the hardness of AOM-MISIS is implied by the hardness of MSIS and MLWE (with suitable parameters), both of which are standard assumptions for efficient lattice-based cryptography. We prove this result via a new generalization of a technique by Tessaro and Zhu (EUROCRYPT ’23) used to prove hardness of a one-more problem for linear hash functions assuming their collision resistance, for which no clear lattice analogue was known. Since the hardness of AOM-MISIS implies the hardness of AOM-MLWE, our result resolves the main open question from the work of Espitau et al., who only provided a similar result for AOM-MLWE restricted to selective adversaries, a class which does not cover the use for threshold signatures.
Furthermore, we show that our novel formulation of AOM-MISIS offers a better interface to develop tighter security bounds for state-of-the-art two-round threshold signatures. We exemplify this by providing new proofs of security, assuming the hardness of MLWE and MSIS, for two threshold signatures, the one proposed in the same work by Espitau et al., as well as a recent construction by Chairattana-Apirom et al. (ASIACRYPT 2024). For the former scheme, we also show that it satisfies the strongest security notion (TS-UF-4) in the security hierarchy of Bellare et al. (CRYPTO ’22), as a result of independent interest.
Thomas Pornin
This note discusses the problem of writing cryptographic implementations in software, free of timing-based side-channels, and many ways in which that endeavour can fail in practice. It is a pessimist view: it highlights why such failures are expected to become more common, and how constant-time coding is, or will soon become, infeasible in all generality.
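As a minimal illustration of the coding discipline the note is about (this example is added for illustration and is not taken from the note; production code should prefer a vetted primitive such as `hmac.compare_digest`), here is a byte-string comparison whose running time does not depend on where the inputs first differ:

```python
def ct_equal(a: bytes, b: bytes) -> bool:
    """Constant-time equality check for equal-length byte strings.

    Unlike `a == b`, which may exit at the first mismatching byte and
    thereby leak the mismatch position through timing, this scans the
    full length and accumulates differences with XOR/OR.
    """
    # Lengths are usually public (e.g. fixed-size MAC tags), so an
    # early return on a length mismatch is generally acceptable.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

Even this simple pattern illustrates the note's pessimism: whether the loop actually compiles and executes in constant time depends on the interpreter or compiler and the hardware, which is exactly the fragility the note highlights.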
Shuai Han, Shengli Liu, Xiangyu Liu, Dawu Gu
In this paper, we propose a new type of non-interactive zero-knowledge (NIZK), called Fine-grained Verifier NIZK (FV-NIZK), which provides more flexible and more fine-grained verifiability of proofs than both standard NIZK (which supports public verifiability) and designated-verifier NIZK (DV-NIZK, which supports private verifiability). FV-NIZK has two statistically (or computationally) equivalent verification approaches:
--- a master verification using the master secret key $msk$;
--- a fine-grained verification using a derived secret key $sk_d$, which is derived from $msk$ w.r.t. $d$ (which may stand for user identity, email address, vector, etc.).
We require unbounded simulation soundness (USS) of FV-NIZK to hold even if an adversary obtains derived secret keys $sk_d$ for $d$ of its choice, and define proof pseudorandomness, which stipulates the pseudorandomness of proofs for adversaries that are not given any secret key.
We present two instantiations of FV-NIZK for linear subspace languages, based on the matrix decisional Diffie-Hellman (MDDH) assumption. One of the FV-NIZK instantiations is pairing-free and achieves almost tight USS and proof pseudorandomness. We also adapt the two instantiations to support unbounded fine-grained secret key delegations.
We illustrate the usefulness of FV-NIZK by showing two applications and obtain the following pairing-free schemes:
--- the first almost tightly multi-challenge CCA (mCCA)-secure inner-product functional encryption (IPFE) scheme without pairings;
--- the first public-key encryption (PKE) scheme that reconciles the inherent contradiction between public verifiability and anonymity. We formalize such PKE as Fine-grained Verifiable PKE (FV-PKE), which derives a special key from the decryption secret key, such that holders of the derived key can check the validity of ciphertexts but lose anonymity from their view (CCA-security still holds for them), while those without the derived key cannot perform the validity check but retain anonymity.
Our FV-PKE scheme achieves almost tight mCCA-security for adversaries who obtain the derived keys, and achieves almost tight ciphertext pseudorandomness (thus anonymity) for others who do not get any derived key.
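To make the two-path verification structure concrete, the following toy Python sketch (illustrative only, not from the paper; all names are assumptions) replaces proofs by MAC tags, each bound to a single $d$. It captures only the interplay of master verification, key derivation, and fine-grained verification, with none of the zero-knowledge, soundness, or unboundedness properties of an actual FV-NIZK:

```python
import hashlib
import hmac

# Toy stand-in for FV-NIZK's two verification paths. Here a "proof" for a
# statement is an HMAC tag under a key sk_d derived from msk w.r.t. d, so
# (unlike a real FV-NIZK proof) it is bound to one fixed d.

def derive_key(msk: bytes, d: bytes) -> bytes:
    """sk_d derived from msk w.r.t. d (e.g. a user identity)."""
    return hmac.new(msk, b"derive|" + d, hashlib.sha256).digest()

def prove(msk: bytes, d: bytes, statement: bytes):
    sk_d = derive_key(msk, d)
    tag = hmac.new(sk_d, statement, hashlib.sha256).digest()
    return (d, tag)

def fine_verify(sk_d: bytes, statement: bytes, tag: bytes) -> bool:
    """Fine-grained verification using only the derived key sk_d."""
    expected = hmac.new(sk_d, statement, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def master_verify(msk: bytes, statement: bytes, proof) -> bool:
    """Master verification: re-derive sk_d from msk, then run the
    fine-grained check, so the two paths accept the same proofs."""
    d, tag = proof
    return fine_verify(derive_key(msk, d), statement, tag)
```

The point of the sketch is the equivalence of the two paths: anyone holding $msk$ can verify any proof, while a holder of $sk_d$ can verify exactly the proofs tied to its own $d$.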
Akashdeep Saha, Siddhartha Chowdhury, Rajat Subhra Chakraborty, Debdeep Mukhopadhyay
Logic locking has surfaced as a notable safeguard against diverse hazards that pose a risk to the integrated circuit (IC) supply chain. Existing literature on logic locking largely encompasses proposing new constructions on the one hand and unearthing weaknesses in such algorithms on the other. In this race of make and break, the emphasis on automating the adoption of such techniques in real-life circuits has been rather limited. For the first time, we present a generic end-to-end combinational logic locking CAD framework, MIDAS. The framework analyses circuit netlists and generates locked netlists. Owing to its generic circuit analysis, it bridges this gap by integrating diverse logic locking techniques and offering scope for integrating potential future ones. MIDAS's efficacy has been verified through its application to ISCAS'85 and ISCAS'99 benchmark circuits, locked using six different schemes: EPIC, Anti-SAT, SFLL-HD, SFLL-fault, CAS-Lock, and LoPher. MIDAS minimizes the hardware overhead of the otherwise resource-intensive locking technique LoPher by extracting an influential portion of the circuit to lock, selected using a simple fitness function. We also assess the overhead increase for the aforementioned locking methods, thereby complicating the identification of influential nodes within the locked netlists. Finally, we evaluate MIDAS by selectively locking parts of a commercially-designed open-source RISC-V core.
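As a hypothetical illustration of what a fitness-driven choice of an "influential portion" of a netlist might look like (this sketch is not from MIDAS; the function, the edge-list netlist representation, and the fan-out-based fitness are assumptions made for illustration), one could rank gates by a simple structural metric and lock the top-ranked region:

```python
from collections import defaultdict

def rank_nodes_by_fanout(netlist_edges):
    """Rank gates of a combinational netlist by a toy fitness function.

    netlist_edges: list of (driver_gate, driven_gate) pairs.
    Fitness here is simply fan-out count: gates driving many others are
    treated as more "influential" and thus better candidates to lock.
    """
    fanout = defaultdict(int)
    for driver, _driven in netlist_edges:
        fanout[driver] += 1
    # Highest fan-out first; ties keep insertion order (sorted is stable).
    return sorted(fanout, key=fanout.get, reverse=True)

# Tiny example netlist: g1 drives three gates, g2 and g3 drive one each.
edges = [("g1", "g2"), ("g1", "g3"), ("g1", "g4"),
         ("g2", "g4"), ("g3", "g4")]
```

A real framework would of course combine such structural metrics with security- and overhead-aware criteria; the sketch only conveys the shape of a fitness-based selection step.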