IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
11 September 2025
Ruida Wang, Jikang Bai, Xuan Shen, Xianhui Lu, Zhihao Li, Binwu Xiang, Zhiwei Wang, Hongyu Wang, Lutan Zhao, Kunpeng Wang, Rui Hou
Fully Homomorphic Encryption (FHE) enables computation over encrypted data, but deployment is hindered by the gap between plaintext and ciphertext programming models. FHE compilers aim to automate this translation, with a promising approach being FHE Instruction Set Architecture (FHE-ISA) based on homomorphic look-up tables (LUTs). However, existing FHE LUT techniques are limited to 16-bit precision and face critical performance bottlenecks.
We introduce Tetris, a versatile TFHE LUT framework for high-precision FHE instructions. Tetris incorporates three advances: a) A GLWE-based design with refined noise control, enabling up to 32-bit LUTs; b) A batched TFHE circuit bootstrapping algorithm that enhances the LUT performance; and c) Adaptive parameterization and parallel execution strategy that is optimized for high-precision evaluation.
These techniques deliver: a) The first general FHE instruction set with 16-bit bivariate and 32-bit univariate operations; b) Performance improvements of 2×-863× over modern TFHE LUT approaches, and 2901× lower latency than the leading CKKS-based solution [CRYPTO'25]; c) Up to 65× speedups over existing FHE-ISA implementations, and speedups of 3×-40× over related FHE compilers.
Fredrik Meisingseth, Christian Rechberger, Fabian Schmid
At Crypto'24, Bell et al. proposed a definition of Certified Differential Privacy (DP) and a quite general blueprint for satisfying it. The core of the blueprint is a new primitive called random variable commitment schemes (RVCS). Bell et al. construct RVCSs for fair coins and binomial distributions.
In this work, we prove three lemmata that enable simple modular design of new RVCSs: First, we show that the properties of RVCSs are closed under polynomial sequential composition. Second, we show that homomorphically evaluating a function $f$ on the output of an RVCS for distribution $Z$ leads to an RVCS for distribution $f(Z)$. Third, we show (roughly) that applying a "Commit-and-Prove"-style proof of knowledge for a function $f$ to the output of an RVCS for distribution $Z$ results in an RVCS for distribution $f(Z)$. Together, these lemmata imply that there exist RVCSs for any distribution that can be sampled exactly in strict polynomial time, under, for instance, the discrete logarithm assumption. This is, to the best of our knowledge, the first result establishing the possibility of RVCSs for arbitrary distributions. We demonstrate the usefulness of these lemmata by constructing RVCSs for arbitrarily biased coins and a discrete Laplace distribution, leading to a certified DP protocol for a discrete Laplace mechanism. We observe that the definitions in BGKW do not directly allow sampling algorithms with non-zero honest abort probability, which rules out many practical sampling algorithms. We propose a slight relaxation of the definitions that enables the use of sampling methods with negligible abort probability. It is with respect to these weakened definitions that we prove our certified discrete Laplace mechanism.
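As a toy illustration of the homomorphic-evaluation lemma (our own insecure sketch, not the Bell et al. construction), the following Python snippet uses additively homomorphic Pedersen-style commitments: multiplying commitments to $n$ fair coins yields a commitment to their sum, that is, to a Binomial(n, 1/2) sample. The group parameters are hypothetical toy values.
```python
# Toy, insecure sketch: homomorphic evaluation of f = summation on
# commitments to fair coins yields a commitment to a Binomial sample.
import secrets

p = 2**127 - 1   # a Mersenne prime; real schemes use vetted groups
g, h = 3, 5      # toy generators; in practice log_g(h) must be unknown

def commit(m, r):
    return (pow(g, m, p) * pow(h, r, p)) % p

n = 16
coins = [secrets.randbelow(2) for _ in range(n)]        # fair coins
rands = [secrets.randbelow(p - 1) for _ in range(n)]
coms = [commit(b, r) for b, r in zip(coins, rands)]

agg = 1
for c in coms:                  # homomorphic sum of the committed coins
    agg = (agg * c) % p
# agg commits to sum(coins), i.e. a Binomial(n, 1/2) sample
assert agg == commit(sum(coins), sum(rands) % (p - 1))
```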
Francesca Falzon, Zichen Gui, Michael Reichle
Encrypted multi-maps (EMMs) allow a client to outsource a multi-map to an untrusted server and then later retrieve the values corresponding to a queried label. They are a core building block for various applications such as encrypted cloud storage and searchable encryption. One important metric of EMMs is memory-efficiency: most schemes incur many random memory accesses per search query, leading to larger overhead compared to plaintext queries. Memory-efficient EMMs reduce random accesses but, in most known solutions, this comes at the cost of higher query bandwidth.
This work focuses on EMMs run on SSDs and we construct two page-efficient schemes---one static and one dynamic---both with optimal search bandwidth. Our static scheme achieves $\tilde{O}(\log N/p)$ page-efficiency and $O(1)$ client storage, where $N$ denotes the size of the EMM and $p$ the SSD's page size. Our dynamic scheme achieves forward and backward privacy with $\tilde{O}(\log N/p)$ page-efficiency and $O(M)$ client storage, where $M$ denotes the number of labels. Among schemes with optimal server storage, these are the first to combine optimal bandwidth with good page-efficiency, saving up to $O(p)$ and $\tilde{O}(p\log\log (N/p))$ bandwidth over the state-of-the-art static and dynamic schemes, respectively. Our implementation on real-world data shows strong practical performance.
Danilo Francati, Yevin Nikhel Goonatilake, Shubham Pawar, Daniele Venturi, Giuseppe Ateniese
We prove a sharp threshold for the robustness of cryptographic watermarking for generative models. This is achieved by introducing a coding abstraction, which we call messageless secret-key codes, that formalizes sufficient and necessary requirements of robust watermarking: soundness, tamper detection, and pseudorandomness. Thus, we establish that robustness has a precise limit: for binary outputs no scheme can survive if more than half of the encoded bits are modified, and for an alphabet of size $q$ the corresponding threshold is $(1-1/q)$ of the symbols.
Complementing this impossibility, we give explicit constructions that meet the bound up to a constant slack. For every $\delta > 0$, assuming pseudorandom functions and access to a public counter, we build linear-time codes that tolerate up to $(1/2)(1-\delta)$ errors in the binary case and $(1-1/q)(1-\delta)$ errors in the $q$-ary case. Together with the lower bound, these yield the maximum robustness achievable under standard cryptographic assumptions.
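To make the binary threshold concrete, here is a toy Python sketch (ours, not the paper's codes): a repetition code with majority decoding recovers the bit whenever strictly fewer than half of the encoded bits are flipped, and fails once more than half are modified, mirroring the 1/2 barrier above.
```python
# Toy illustration of the binary 1/2 threshold via a repetition code.
import random

def encode(bit, n=101):
    return [bit] * n

def decode(word):                       # majority vote
    return int(sum(word) > len(word) // 2)

bit = 1
word = encode(bit)
for flips in (40, 60):                  # below vs. above half of n = 101
    corrupted = word[:]
    for i in random.sample(range(len(word)), flips):
        corrupted[i] ^= 1
    print(flips, decode(corrupted) == bit)  # True for 40, False for 60
```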
We then test experimentally whether this limit appears in practice by looking at the recent watermarking for images of Gunn, Zhao, and Song (ICLR 2025). We show that a simple crop and resize operation reliably flipped about half of the latent signs and consistently prevented belief-propagation decoding from recovering the codeword, erasing the watermark while leaving the image visually intact.
These results provide a complete characterization of robust watermarking, identifying the threshold at which robustness fails, constructions that achieve it, and an experimental confirmation that the threshold is already reached in practice.
Lea Thiemt, Paul Rösler, Alexander Bienstock, Rolfe Schmidt, Yevgeniy Dodis
Modern messengers use advanced end-to-end encryption protocols to protect message content even if user secrets are ever temporarily exposed. Yet, encryption alone does not prevent user tracking, as protocols often attach metadata, such as sequence numbers, public keys, or even plain user identifiers. This metadata reveals the social network as well as communication patterns between users. Existing protocols that hide metadata in Signal (i.e., Sealed Sender), for MLS-like constructions (Hashimoto et al., CCS 2022), or in mesh networks (Bienstock et al., CCS 2023) are relatively inefficient or specially tailored for only particular settings. Moreover, all existing practical solutions reveal crucial metadata upon exposures of user secrets.
In this work, we introduce a formal definition of Anonymity Wrappers (AW) that generically hide metadata of underlying two-party and group messaging protocols. Our definition captures forward and post-compromise anonymity as well as authenticity in the presence of temporary state exposures. Inspired by prior wrapper designs, the idea of our provably secure AW construction is to use shared keys of the underlying wrapped (group) messaging protocols to derive and continuously update symmetric keys for hiding metadata. Beyond hiding metadata on the wire, we also avoid and hide structural metadata in users' local states for stronger anonymity upon their exposure.
We implement our construction, evaluate its performance, and provide a detailed comparison with Signal's current approach based on Sealed Sender: Our construction reduces the wire size of small 1:1 messages from 441 bytes to 114 bytes. For a group of 100 members, it reduces the wire size of outgoing group messages from 7240 bytes to 155 bytes. We see similar improvements in computation time for encryption and decryption, but these improvements come with substantial storage costs for receivers. For this reason, we develop extensions with a Bloom filter for compressing the receiver storage. Based on this, Signal considers deploying our solution.
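As a rough sketch of the Bloom-filter idea behind the storage extension (what exactly is inserted and the parameter choices are assumptions here, not taken from the paper), a receiver can record seen identifiers in a compact bit array at the cost of a tunable false-positive rate:
```python
# Minimal Bloom filter: compact set membership with no false negatives.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=8192, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            d = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(d[:8], "big") % self.m

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
bf.add(b"message-key-42")      # hypothetical identifier
assert b"message-key-42" in bf  # no false negatives
# false positives are possible but tunable via m_bits and k
```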
Tabitha Ogilvie
Approximate Homomorphic Encryption (AHE), introduced by Cheon et al. [CKKS17], offers a powerful solution for encrypting real-valued data by relaxing the correctness requirement and allowing small decryption errors. Existing constructions from (Ring) Learning with Errors achieve standard IND-CPA security, but this does not fully capture scenarios where an adversary observes decrypted outputs. Li and Micciancio [LiMic21] showed that when decryptions are passively leaked, these schemes become vulnerable to practical key recovery attacks even against honest-but-curious attackers. They formalise security in the presence of shared decryptions via the new notions of IND-CPA-D and KR-D security.
We propose new techniques to achieve provable IND-CPA-D and KR-D security for AHE, while adding substantially less additional decryption noise than the prior provable results. Our approach hinges on refined "game-hopping" tools in the bit-security framework, which allow bounding security loss with a lower noise overhead. We also give a noise-adding strategy independent of the number of oracle queries, removing a costly dependence inherent in the previous solution.
Beyond generic noise-flooding, we show that leveraging the recently introduced HintLWE problem [KLSS23] can yield particularly large security gains for AHE ciphertexts that are the result of "rescaling", a common operation in CKKS. Our analysis uses the fact that rescale-induced noise amounts to a linear "hint" on the secret to enable a tighter reduction to LWE (via HintLWE). In many practical parameter regimes where the rescaling noise dominates, our results imply that an additional precision loss of as little as two bits is sufficient to restore a high level of security against passive key-recovery attacks for standard parameters.
Overall, our results enable a provably secure and efficient real-world deployment of Approximate Homomorphic Encryption in scenarios with realistic security requirements.
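For background, the following toy numpy sketch illustrates generic noise flooding, the baseline countermeasure the paper improves upon (all values are hypothetical): before a decryption is released, noise much larger than the key-dependent decryption error is added, statistically masking that error term.
```python
# Toy noise-flooding sketch: mask the key-dependent decryption error
# with much larger independent noise before sharing the result.
import numpy as np

rng = np.random.default_rng(0)
true_value = 3.14159
decryption_error = rng.normal(0, 2**-20)          # small key-dependent error
flood = rng.normal(0, 2**-10)                     # flooding noise >> error
shared = true_value + decryption_error + flood    # what the client releases
print(shared)  # still close to true_value, but the error term is drowned
```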
Forest Zhang, Ke Wu
Fair multi-party coin toss protocols are fundamental to many cryptographic applications. While Cleve's celebrated result (STOC'86) showed that strong fairness is impossible against half-sized coalitions, recent works have explored a relaxed notion of fairness called Cooperative-Strategy-Proofness (CSP-fairness).
CSP-fairness ensures that no coalition has an incentive to deviate from the protocol. Previous research has established the feasibility of CSP-fairness against majority-sized coalitions, but these results focused on specific preference structures where each player prefers exactly one outcome out of all possible choices. In contrast, real-world scenarios often involve players with more complicated preference structures, rather than simple binary choices.
In this work, we initiate the study of CSP-fair multi-party coin toss protocols that accommodate arbitrary preference profiles, where each player may have an unstructured utility distribution over all possible outcomes. We demonstrate that CSP-fairness is attainable against majority-sized coalitions even under arbitrary preferences. In particular, we give the following results:
-- We give a reduction from achieving CSP-fairness for arbitrary preference profiles to achieving CSP-fairness for structured split-bet profiles, where each player distributes the same amount of total utility across all outcomes.
-- We present two CSP-fair protocols: (1) a size-based protocol, which defends against majority-sized coalitions, and (2) a bets-based protocol, which defends against coalitions controlling a majority of the total utilities.
-- Additionally, we establish an impossibility result for CSP-fair binary coin toss with split-bet preferences, showing that our protocols are nearly optimal.
Nouhou Abdou Idris, Yunusa Abdulsalam, Mustapha Hedabou
The anticipated arrival of large-scale quantum computers threatens traditional public-key cryptographic systems. As a result, NIST has called for the development of post-quantum cryptography (PQC) algorithms. Isogeny-based schemes have compact key sizes; however, fundamental vulnerabilities have been found in smooth-degree isogeny systems. The literature has therefore concentrated its efforts on higher-dimensional isogenies of non-smooth degrees.
In this work, we present POKE-KEM, a key encapsulation mechanism (KEM) derived from the POKE public-key encryption (PKE) scheme via a Fujisaki-Okamoto (FO) transformation, with security levels optimized during the transformation. Despite POKE's efficiency advantages, its current formulation provides only IND-CPA security, limiting practical deployment where adaptive chosen-ciphertext security is essential. The resulting POKE-KEM construction achieves IND-CCA security under the computational hardness of the C-POKE problem in both the random oracle model (ROM) and the quantum random oracle model (QROM). Our implementation demonstrates significant practical advantages across all NIST security levels (128, 192, and 256 bits), supporting the practical viability of POKE-KEM for post-quantum cryptographic deployments. We provide a comprehensive security analysis including formal proofs of correctness, rigidity, and IND-CCA security. Our underlying construction leverages the difficulty of recovering isogenies of unknown, non-smooth degree, which is resistant to known quantum attacks.
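For orientation, here is a minimal Python sketch of the standard FO transform applied here, with toy hashes and with `Enc`/`Dec` as stand-ins for the POKE PKE, whose internals are not reproduced; the paper's exact variant and its optimizations may differ.
```python
# Minimal FO-transform sketch: derandomized encryption plus a
# re-encryption check turns an IND-CPA PKE into an IND-CCA KEM.
import hashlib, secrets

def G(m): return hashlib.sha3_256(b"G" + m).digest()          # coin derivation
def H(m, c): return hashlib.sha3_256(b"H" + m + c).digest()   # key derivation

def encaps(pk, Enc):
    m = secrets.token_bytes(32)
    c = Enc(pk, m, coins=G(m))     # encrypt m using coins G(m)
    return c, H(m, c)              # ciphertext and session key

def decaps(sk, pk, c, Enc, Dec):
    m = Dec(sk, c)
    if m is None or Enc(pk, m, coins=G(m)) != c:   # re-encryption check
        return None                # a real design may reject implicitly
    return H(m, c)
```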
MINKA MI NGUIDJOI Thierry Emmanuel
We introduce the Chaotic Entropic Expansion (CEE), a new one-way function based on iterated polynomial maps over finite fields. For polynomials $f$ in a carefully defined class $\mathcal{F}_d$, we prove that $N$ iterations preserve min-entropy of at least $\log_2 q - N \log_2 d$ bits and achieve statistical distance at most $(q-1)(d^N-1)/(2\sqrt{q})$ from uniform. We formalize security through the Affine Iterated Inversion Problem (AIIP) and provide reductions to the hardness of solving multivariate quadratic equations (MQ) and computing discrete logarithms (DLP).
Against quantum adversaries, CEE achieves $O(2^{\lambda/2})$ security for $\lambda$-bit classical security. We provide comprehensive cryptanalysis and parameter recommendations for practical deployment. While slower than AES, CEE's algebraic structure enables unique applications in verifiable computation and post-quantum cryptography within the CASH framework.
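As a toy illustration of the iteration itself (the field size, depth, and sample polynomial below are hypothetical and far below any secure size, and the class $\mathcal{F}_d$ imposes conditions not captured here):
```python
# Toy CEE-style iteration: apply a degree-d polynomial map over GF(q)
# N times to an input point.
q, d, N = 2**31 - 1, 3, 8          # small Mersenne prime field, toy depth

def f(x):                          # a sample degree-3 map over GF(q)
    return (x**3 + 7*x + 11) % q

def cee(x0, rounds=N):
    x = x0
    for _ in range(rounds):
        x = f(x)
    return x

print(cee(123456789))
```
Since a degree-$d$ polynomial map is at most $d$-to-one, each iteration can merge at most $d$ preimages and hence lose at most $\log_2 d$ bits of min-entropy; summing over $N$ iterations recovers the $\log_2 q - N \log_2 d$ bound quoted above.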
Michele Ciampi, Divya Ravi, Luisa Siniscalchi, Yu Xia
When investigating the round complexity of multi-party computation (MPC) protocols, it is common to assume that in each round parties can communicate over broadcast channels. However, broadcast is an expensive resource, and as such its use should be minimized. For this reason, Cohen, Garay, and Zikas (Eurocrypt 2020) investigated the tradeoffs between the use of broadcast in two-round protocols assuming setup and the achievable security guarantees.
Despite the prolific line of research that followed the results of Cohen, Garay, and Zikas, none of the existing results considered the problem of minimizing the use of broadcast while relying in a black-box way on the underlying cryptographic primitives.
In this work, we fully characterize the necessary and sufficient requirements in terms of broadcast usage in the dishonest majority setting for round-optimal MPC with black-box use of minimal cryptographic assumptions. Our main result shows that to securely realize any functionality with unanimous abort in the common reference string model with black-box use of two-round oblivious transfer it is necessary and sufficient for the parties to adhere to the following pattern: in the first two rounds the parties exchange messages over peer-to-peer channels, and in the last round the messages are sent over a broadcast channel. We also extend our results to the correlated randomness setting, where we prove that one round of peer-to-peer interaction and one round of broadcast is optimal to evaluate any functionality with unanimous abort.
Shuai Han, Hongxu Yi, Shengli Liu, Dawu Gu
Currently, the only tightly secure inner-product functional encryption (IPFE) schemes in the multi-user and multi-challenge setting are the IPFE scheme due to Tomida (Asiacrypt 2019) and its derivatives. However, these tightly secure schemes have large ciphertext expansion and are all based on the matrix decisional Diffie-Hellman (DDH) assumption.
To improve the efficiency of tightly secure IPFE and enrich the diversity of its underlying assumptions, we construct a set of tightly secure IPFE schemes, with very compact ciphertexts and efficient algorithms, from the matrix DDH, the decisional composite residuosity (DCR), and the learning-with-errors (LWE) assumptions, respectively. In particular,
-- our DDH-based scheme has about 5×~100× shorter public/secret keys, 2×~2.9× shorter ciphertexts and about 5×~100× faster algorithms than all existing tightly secure IPFE schemes, at the price of 3~7 bits of decreased security level, resolving an open problem raised by Tomida (Asiacrypt 2019);
-- our DCR-based scheme constitutes the first tightly secure IPFE scheme from the DCR assumption;
-- our LWE-based scheme is the first almost tightly secure IPFE scheme from lattices, hence achieving post-quantum security.
We obtain our schemes by proposing a generic and flexible construction of IPFE with a core technical tool called two-leveled inner-product hash proof system (TL-IP-HPS). Specifically, our IPFE construction has a compact design, and to achieve tight security, we propose an economical proof strategy that reuses the spaces in such compact IPFE. Interestingly, our new proof strategy also applies to the loosely secure IPFE schemes proposed by Agrawal, Libert and Stehlé (Crypto 2016), and yields a tighter bound on their security loss.
Onur Gunlu, Maciej Skorski, H. Vincent Poor
Semantic communication systems focus on the transmission of meaning rather than the exact reconstruction of data, reshaping communication network design to achieve transformative efficiency in latency-sensitive and bandwidth-limited scenarios. Within this context, we investigate the rate-distortion-perception (RDP) problem for image compression, which plays a central role in producing perceptually realistic outputs under rate constraints. By employing the randomized distributed function computation (RDFC) framework, we derive an achievable non-asymptotic RDP region that captures the finite blocklength trade-offs among rate, distortion, and perceptual quality, aligning with the objectives of semantic communications. This region is further generalized to include either side information or a secrecy requirement, the latter ensuring strong secrecy against eavesdroppers through physical-layer security mechanisms and maintaining robustness in the presence of quantum-capable adversaries. The main contributions of this work are: (i) achievable bounds for non-asymptotic RDP regions subject to realism and distortion constraints; (ii) extensions to cases where side information is available at both the encoder and decoder; (iii) achievable, non-asymptotic RDP bounds with strong secrecy guarantees; (iv) characterization of the asymptotic secure RDP region under a perfect realism constraint; and (v) demonstrations of significant rate reductions and the effects of finite blocklengths, side information, and secrecy constraints. These findings offer concrete guidelines for the design of low-latency, secure, and high-fidelity image compression and generative modeling systems that generate realistic outputs, with relevance for, e.g., privacy-critical applications.
Marc Fischlin, Moritz Huppert, Sam A. Markelon
Probabilistic data structures like hash tables, skip lists, and treaps support efficient operations through randomized hierarchies that enable "skipping" elements, achieving sub-linear query complexity on average for perfectly correct responses. They serve as critical components in performance-sensitive systems where correctness is essential and efficiency is highly desirable. While simpler than deterministic alternatives like balanced search trees, these structures traditionally assume that input data are independent of the structure's internal randomness and state, an assumption that is questionable in malicious environments, potentially leading to a significantly increased query complexity. We present adaptive attacks on all three aforementioned structures that, in the case of hash tables and skip lists, cause exponential degradation compared to the input-independent setting. While efficiency-targeting attacks on hash tables are well-studied, our attacks on skip lists and treaps provide new insights into vulnerabilities of skipping-based probabilistic data structures. Next, we propose simple and efficient modifications to the original designs of these data structures to provide provable security against adaptive adversaries. Our approach is formalized through Adaptive Adversary Property Conservation (AAPC), a general security notion that captures deviation from the expected efficiency guarantees in adversarial scenarios. We use this notion to present rigorous robustness proofs for our versions of the data structures. Lastly, we perform experiments whose empirical results closely agree with our analytical results.
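One plausible hardening in the spirit of the proposed modifications, sketched here as an assumption rather than the authors' exact design: derive skip-list tower heights from a keyed PRF over the element, so that an adaptive adversary who observes the structure's behavior cannot predict or steer the heights without the key.
```python
# Sketch: skip-list level assignment via a keyed PRF instead of public
# randomness. Leading zero bits of the PRF output give the usual
# geometric(1/2) height distribution, but unpredictably to outsiders.
import hmac, hashlib, secrets

KEY = secrets.token_bytes(32)

def level(element: bytes, max_level: int = 32) -> int:
    digest = hmac.new(KEY, element, hashlib.sha256).digest()
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(256)
    leading_zeros = len(bits) - len(bits.lstrip("0"))
    return min(leading_zeros + 1, max_level)

print(level(b"alice"), level(b"bob"))
```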
Rujia Li, Mingfei Zhang, Xueqian Lu, Wenbo Xu, Ying Yan, Sisi Duan
Ethereum, a leading blockchain platform, relies on incentive mechanisms to improve its stability. Recently, several attacks targeting the incentive mechanisms have been proposed. Examples include the so-called reorganization attacks that cause blocks proposed by honest validators to be discarded to gain more rewards. Finding these attacks, however, heavily relies on expert knowledge and may involve substantial manual effort.
We present BunnyFinder, a semi-automated framework for finding incentive flaws in Ethereum. BunnyFinder is inspired by failure injection, a technique commonly used in software testing for finding implementation vulnerabilities. Instead of finding implementation vulnerabilities, we aim to find design flaws. Our main technical contributions involve a carefully designed strategy generator that generates a large pool of attack instances, an automatic workflow that launches attacks and analyzes the results, and a workflow that integrates reinforcement learning to fine-tune the attack parameters and identify the most profitable attacks. We simulate a total of 9,354 attack instances using our framework and find the following results. First, our framework reproduces five known incentive attacks that were previously found manually. Second, we find three new attacks that can be identified as incentive flaws. Finally and surprisingly, one of our experiments also identified two implementation flaws.
Nam Tran, Khoa Nguyen, Dongxi Liu, Josef Pieprzyk, Willy Susilo
Linkable ring signatures (Liu et al., ACISP'04) are ring signatures with a linking mechanism for detecting signatures from the same signer. This functionality has found many practical applications in electronic voting, cryptocurrencies, and whistleblowing systems. However, existing linkable ring signature schemes impose a fundamental limitation: users can issue only one signature, after which their anonymity is not guaranteed. This usage limit is inadequate for many real-world scenarios.
This work introduces the notion of Many-time Linkable Ring Signatures, extending the anonymity guarantees of standard linkable ring signatures. Specifically, many-time linkable ring signatures ensure that signers remain anonymous as long as the number of their signatures is smaller than a system-global threshold. Only when a signer exceeds this threshold is anonymity lost. We formalize this via a security notion called T-anonymity, which guarantees that adversaries cannot distinguish signatures from users who have each produced at most T signatures. This new notion of anonymity generalizes one-time anonymity in previous linkable schemes, while providing stronger guarantees than existing constructions. We also present a lattice-based construction with proven security in the quantum random oracle model (QROM).
Haiyue Dong, Qian Guo
The Hamming Quasi-Cyclic (HQC) key encapsulation mechanism (KEM), recently selected by NIST for standardization in the Post-Quantum Cryptography (PQC) process, distinguishes itself through its efficiency, robust design based on hard decoding problems in coding theory, and well-characterized decryption failure rates. Despite its selection, practical security concerns arise from implementation threats, particularly those exploiting plaintext-checking (PC) oracles. While multi-value PC (MV-PC) and full decryption (FD) oracle attacks have been extensively studied in the context of lattice-based cryptography, their applicability to code-based schemes like HQC has remained relatively unexplored.
In this work, we present the first MV-PC and FD oracle attacks targeting code-based KEMs, specifically on HQC. Our MV-PC attack significantly reduces the required oracle queries compared to previous PC oracle-based methods and holds implications for side-channel, key-mismatch, and fault-injection attacks. Our FD attack exhibits remarkable efficiency in trace complexity, achieving secret key recovery for hqc-128 with just two queries to a perfect oracle, and four queries for hqc-192 and hqc-256. Simulations further demonstrate the robustness of our MV-PC and FD oracle attacks against imperfect oracle responses. We experimentally validate the new attacks on an ARM Cortex-M4 microcontroller, highlighting the critical need for robust countermeasures. In particular, on such a platform, substantial leakage during operations like syndrome computation poses significant challenges to the efficiency of masking techniques in mitigating FD oracle attacks.
José Bacelar Almeida, Gustavo Xavier Delerue Marinho Alves, Manuel Barbosa, Gilles Barthe, Luı́s Esquı́vel, Vincent Hwang, Tiago Oliveira, Hugo Pacheco, Peter Schwabe, Pierre-Yves Strub
We propose a hybrid formal verification approach that combines high-level deductive reasoning and circuit-based reasoning, and apply it to highly optimized cryptographic assembly code. Our approach permits scaling up formal verification in two complementary directions: 1) it reduces the proof effort required for low-level functions where the computation logic is obfuscated by the intricate use of architecture-specific instructions, and 2) it permits amortizing the effort of proving one implementation by using equivalence checking to propagate the guarantees to other implementations of the same computation using different optimizations or targeting different architectures. We demonstrate our approach via an extension to the EasyCrypt proof assistant and by revisiting formally verified implementations of ML-KEM in Jasmin. As a result, we obtain the first formally verified implementation of ML-KEM that offers performance comparable to the fastest non-verified implementation on x86-64 architectures.
Shaurya Pratap Singh, Bhupendra Singh, Alok Mishra
In this paper, we introduce a new cryptographic hash algorithm based on the Collatz problem, focusing on its conditional branching structure, an element often overlooked despite the fame of the 3x + 1 conjecture. The hash algorithm takes an arbitrary-length input and produces a fixed-length 256-, 384-, or 512-bit output. It falls in the category of collision-resistant hash functions (CRHF) and is designed with a focus on use in password storage, HMAC, and digital signatures.
To date, quantum computers have shown no speedup on the Collatz problem, and proving or disproving the Collatz conjecture remains a major open question, so we believe there is no structural attack on this hash algorithm.
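For reference, the conditional branch the construction builds on is the Collatz map itself; how the branch is folded into a compression function is specified in the paper, not here.
```python
# The Collatz map and its data-dependent control flow: halve even
# inputs, apply 3x + 1 to odd ones, and count steps until reaching 1.
def collatz_step(x: int) -> int:
    return x // 2 if x % 2 == 0 else 3 * x + 1

def orbit_length(x: int) -> int:
    steps = 0
    while x != 1:
        x = collatz_step(x)
        steps += 1
    return steps

print(orbit_length(27))   # 111 steps, a famously long small orbit
```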
Eda Kırımlı, Gaurish Korpal
In this paper, we discuss refined Humbert invariants of principally polarized superspecial abelian surfaces. Kani introduced the refined Humbert invariant of a principally polarized abelian surface in 1994. The main contribution of this paper is to calculate the refined Humbert invariant of a principally polarized superspecial abelian surface. We present three applications of computing this invariant in the context of isogeny-based cryptography. First, we discuss the maximum of the minimum degrees of isogenies between two uniformly random supersingular elliptic curves independent of their endomorphism ring structures. Second, we provide a different perspective on the fixed isogeny degree problem using refined Humbert invariants, and analyze this problem on average without endomorphism rings. Third, we give experimental evidence for the proven upper bounds that the minimum distance is $\approx \sqrt{p}$; our work verifies this claim up to $p = 727$.
Giacomo Borin, Maria Corte-Real Santos, Jonathan Komada Eriksen, Riccardo Invernizzi, Marzio Mula, Sina Schaeffler, Frederik Vercauteren
The main building block in isogeny-based cryptography is an algorithmic version of the Deuring correspondence, called $\mathsf{IdealToIsogeny}$. This algorithm takes as input left ideals of the endomorphism ring of a supersingular elliptic curve and computes the associated isogeny. Building on ideas from $\mathsf{QFESTA}$, the $\mathsf{Clapoti}$ framework by Page and Robert reduces this problem to solving a certain norm equation. The current state of the art is however unable to efficiently solve this equation, and resorts to a relaxed version of it instead. This impacts not only the efficiency of the $\mathsf{IdealToIsogeny}$ procedure, but also its success probability. The latter issue has to be mitigated with complex and memory-heavy rerandomization procedures, but still leaves a gap between the security analysis and the actual implementation of cryptographic schemes employing $\mathsf{IdealToIsogeny}$ as a subroutine. For instance, in $\mathsf{SQIsign}$ the failure probability is still $2^{-60}$, which is not cryptographically negligible.
The main contribution of this paper is a very simple and efficient algorithm called $\mathsf{Qlapoti}$ which approaches the norm equation from $\mathsf{Clapoti}$ directly, solving all the aforementioned problems at once. First, it makes the $\mathsf{IdealToIsogeny}$ subroutine between $2.2$ and $2.6$ times faster. This significantly improves the speed of schemes using this subroutine, including notably $\mathsf{SQIsign}$ and $\mathsf{PRISM}$. On top of that, $\mathsf{Qlapoti}$ has a cryptographically negligible failure probability. This eliminates the need for rerandomization, drastically reducing memory consumption, and allows for cleaner security reductions.