IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
12 September 2025
Wenquan Zhou, An Wang, Yaoling Ding, Annv Liu, Jingqi Zhang, Jiakun Li, Liehuang Zhu
Non-invasive security constitutes an essential component of hardware security, primarily involving side-channel analysis (SCA), with various international standards explicitly mandating rigorous testing. However, current SCA assessments heavily depend on expert manual procedures, resulting in significant expenditures of time and resources. Automated evaluation frameworks are not yet available. In recent years, Large Language Models (LLMs) have been widely adopted in various fields such as language generation, owing to their emergent capabilities. Particularly, LLM agents equipped with tool-usage capabilities have significantly expanded the potential of these models to interact with the physical world.
Motivated by these recent advances in LLM agents, we propose SCA-GPT, an LLM agent framework tailored for SCA tasks. The framework incorporates a Retrieval-Augmented Generation (RAG)-based expert knowledge base along with multiple SCA tools. We present a domain-specific expert knowledge base construction approach and two complementary evaluation metrics. Retrieval experiments validate the effectiveness of our knowledge base construction, achieving an average weighted score of 88.7% and an nDCG@5 of 90%, which demonstrates the contribution of structured expert entries to retrieval accuracy. By effectively infusing expert knowledge, SCA-GPT achieves fully automated, end-to-end ISO/IEC 17825-compliant tests. We conduct comprehensive experiments across three leading LLMs—DeepSeek V3, Kimi K2 and GLM 4.5—using datasets spanning seven cryptographic algorithms (e.g., AES, RSA, ECC, Kyber) deployed on four hardware platforms, including smart cards, microcontrollers, and FPGAs. Results show that DeepSeek V3, Kimi K2, and GLM 4.5 achieve accuracies of 91.0%, 87.7%, and 82.0%, respectively, with the agent reducing testing time by an average of 76% compared with manual procedures. Notably, SCA-GPT is the first advanced LLM agent specifically designed for SCA tasks.
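The retrieval evaluation above reports nDCG@5. For readers unfamiliar with the metric, here is a minimal sketch of how nDCG@k is computed; this is the standard definition, and the relevance grades below are illustrative, not taken from the paper:

```python
import math

def dcg_at_k(relevances, k):
    # Discounted cumulative gain over the top-k retrieved items.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    # Normalize by the DCG of the ideal (descending) ordering.
    ideal = sorted(relevances, reverse=True)
    denom = dcg_at_k(ideal, k)
    return dcg_at_k(relevances, k) / denom if denom > 0 else 0.0

# Graded relevance of the top retrieved knowledge-base entries (illustrative).
score = ndcg_at_k([3, 2, 3, 0, 1], 5)
```

A perfectly ordered result list scores exactly 1.0, which is why the paper's 90% figure indicates near-ideal ranking of expert entries.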
Furkan Kerim Çabaş, Oğuz Yayla
Secure multi-party computation (MPC) protocols that translate between arithmetic and binary data types have recently gained traction; such conversions were introduced by Rotaru and Wood in 2019 as daBits and improved by Escudero et al. as edaBits. An edaBit is simply a secret shared in the arithmetic domain together with binary sharings of its bit decomposition. These protocols serve as preprocessing for MPC protocols in order to improve efficiency. Furthermore, the fluid MPC setting, introduced by Choudhuri, Goel, Green, Jain and Kaptchuk in 2021, is a framework in which the parties, called dynamic parties, do not have to stay online throughout the whole computation. The fluid MPC framework is still under active development, and conventional protocols and techniques have to be adapted to it. Our work introduces an edaBits protocol specifically designed for the fluid MPC setting under a four-party honest-majority model. The protocol, which consists of eight subprotocols, achieves security against an honest majority and departs slightly from the traditional cut-and-choose paradigm. It attains linear memory and time complexity, as well as constant communication complexity. Finally, we demonstrate two applications that employ edaBits within the fluid MPC framework under the semi-honest adversary model.
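To illustrate the object at the heart of this work: an edaBit is a random value shared in the arithmetic domain together with XOR-sharings of its bits. A minimal two-party, semi-honest sketch, where a trusted dealer stands in for the actual generation protocol:

```python
import secrets

K = 16
MOD = 1 << K

def share_arith(x):
    # Additive two-party sharing over Z_{2^k}.
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def share_bin(bit):
    # XOR two-party sharing of a single bit.
    r = secrets.randbelow(2)
    return r, bit ^ r

def make_edabit():
    # An edaBit: arithmetic shares of a random r together with
    # XOR shares of each bit of r's binary decomposition.
    r = secrets.randbelow(MOD)
    arith = share_arith(r)
    bits = [share_bin((r >> i) & 1) for i in range(K)]
    return r, arith, bits

r, (a0, a1), bits = make_edabit()
assert (a0 + a1) % MOD == r  # arithmetic shares reconstruct to r
```

The correlation between the two sharings is what lets MPC protocols convert a shared value from the arithmetic domain to the binary domain (and back) cheaply at runtime.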
Jiangxia Ge, Kang Yang, Yang Yu, Yu Yu
The decryption error is an important parameter that requires estimation when applying the Fujisaki-Okamoto (FO) transformation. In general, compared with the worst-case decryption error $\delta_{wc}$, the average-case decryption error $\delta_{ac}$ is simpler to estimate, as the latter is decoupled from the malicious message selected by the adversary. In light of this, Duman et al. (PKC 2023) proposed FO variants $FOAC_0:=FO_m^\bot\circ ACWC_0$ and $FOAC:=FO_m^\bot\circ ACWC$, and subsequently proved their IND-CCA security based on $\delta_{ac}$ and $\gamma$-spread in the ROM and QROM.
In this paper, we revisit the security proof of these variants and obtain the following results:
1. We present a tighter IND-CCA security proof of $FOAC_0$ in the ROM and QROM, while removing the requirement of $\gamma$-spread imposed by Duman et al. Furthermore, as a direct corollary, we fill the gap in the IND-CCA security proof of BAT (CHES 2022) and give a correct one.
2. We present a tighter IND-CCA security proof of $FOAC$ in the QROM. In this proof, we also provide a tighter OW-CPA security proof of ACWC in the QROM, which reduces the loss factor of $q^2$ introduced by Duman et al. to $q$. This actually answers an open question proposed by them, where $q$ denotes the total number of random oracle queries.
3. Based on $FO_m^{\not\bot}$, we define $FOAC_m^{\not\bot}:=FO_m^{\not\bot}\circ ACWC$ and provide its IND-CCA security proof in the ROM and QROM. The advantage of $FOAC_m^{\not\bot}$ is that it neither introduces ciphertext expansion as $FOAC_0$ does nor requires $\gamma$-spread as $FOAC$ does.
In addition, we propose a new Check Query Replacement technique to complete our QROM proofs, which may be of independent interest.
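For context, the FO transformation turns a passively secure PKE into an IND-CCA KEM by derandomizing encryption with $G(m)$ and re-encrypting during decapsulation. A toy sketch of the explicit-rejection flavor (the "PKE" here is a keyed stand-in with pk = sk, purely for illustration; real instantiations use an actual public-key scheme):

```python
import hashlib, secrets

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

# Toy deterministic "PKE" for illustration only: pk doubles as sk.
def toy_enc(pk, m, coins):
    mask = H(b"enc", pk, coins)
    return bytes(a ^ b for a, b in zip(m, mask[:len(m)])) + coins

def toy_dec(sk, c):
    m_masked, coins = c[:-32], c[-32:]
    mask = H(b"enc", sk, coins)
    return bytes(a ^ b for a, b in zip(m_masked, mask)), coins

def encaps(pk):
    m = secrets.token_bytes(16)
    coins = H(b"G", m)            # G(m): derandomize the encryption
    c = toy_enc(pk, m, coins)
    return c, H(b"H", m, c)       # session key K = H(m, c)

def decaps(sk, pk, c):
    m, _ = toy_dec(sk, c)
    if toy_enc(pk, m, H(b"G", m)) != c:
        return None               # explicit rejection (the bot in FO^bot)
    return H(b"H", m, c)

pk = sk = secrets.token_bytes(32)
c, K = encaps(pk)
assert decaps(sk, pk, c) == K
```

The re-encryption check in `decaps` is exactly where decryption errors matter: a correct message that fails to re-encrypt to the same ciphertext is rejected, which is why bounds on $\delta_{wc}$ or $\delta_{ac}$ enter the security proofs.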
Artyom Kuninets, Anton Leevik, Ekaterina Malygina, Evgeniy Melnichuk, Denis Nabokov
In this work, we investigate the application of Barnes-Wall lattices in post-quantum cryptographic schemes. We survey and analyze several constructions of Barnes-Wall lattices, including subgroup chains, the generalized $k$-ing construction, and connections with Reed-Muller codes, highlighting their equivalence over both $\mathbb{Z}[i]$ and $\mathbb{Z}$. Building on these structural insights, we introduce a new algorithm for efficient sampling from the discrete Gaussian distribution on $BW_{2^n}$ lattices. Our approach exploits the $k$-ing and squaring constructions to achieve low-variance sampling, which is particularly relevant for cryptographic applications such as digital signature schemes. We further examine the cryptographic hardness of the Lattice Isomorphism Problem (LIP), showing that Barnes-Wall lattices provide inherent resistance to hull-based and other known attacks. Our results on sampling algorithms, combined with existing advances in the cryptanalysis of LIP, indicate that Barnes-Wall lattices hold strong potential for the design of post-quantum schemes based on LIP.
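As background for the sampling discussion, here is the textbook rejection sampler for the one-dimensional discrete Gaussian $D_{\mathbb{Z},\sigma}$; the paper's lattice sampler is far more structured, and this is only the elementary building block such constructions refine:

```python
import math, random

def sample_dgauss_z(sigma, tau=10):
    # Rejection sampling from the discrete Gaussian D_{Z,sigma},
    # restricted to the tail cut [-tau*sigma, tau*sigma].
    bound = int(math.ceil(tau * sigma))
    while True:
        x = random.randint(-bound, bound)
        # Accept x with probability proportional to exp(-x^2 / (2 sigma^2)).
        if random.random() < math.exp(-x * x / (2 * sigma * sigma)):
            return x

random.seed(1)
samples = [sample_dgauss_z(3.2) for _ in range(20000)]
```

Naive rejection is slow and hard to protect against side channels; structure-aware samplers like the one in this work exploit the lattice's recursive constructions to do better.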
Mario Yaksetig, Jiayu Xu
In this work, we introduce Rayls, a new central bank digital currency (CBDC) design that provides privacy, high performance, and regulatory compliance. In our construction, commercial banks each run their own (private) ledger and are connected to an underlying decentralized programmable blockchain via a relayer.
We also introduce a novel protocol that allows for efficient anonymous transactions between banks, which we call Enygma. Our design is 'quantum-private', as a quantum adversary is not able to infer any information (i.e., payer, payee, amounts) about the transactions that take place in the network. We achieve high performance in cross-bank settlement via the use of ZK-SNARKs and 'double' batching. Concretely, our transactions consist of a set of commitments and a zero-knowledge proof. As a result, each transaction can pay multiple banks at once, and each of these individual commitments can contain aggregated transfers from multiple users. For example, bank A transfers $1M to bank B, where that amount is actually the sum of multiple users' transfers to bank B. Commercial banks can then enforce regulatory rules locally within their ledgers. Our system is in production with one of the largest clearing houses in the world and is currently being explored in a CBDC pilot.
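The batching idea (one commitment per receiving bank, opening to a sum of user transfers) relies on additively homomorphic commitments. A toy Pedersen-style sketch over a small safe-prime group; the parameters and group are illustrative stand-ins, not the paper's actual instantiation:

```python
import secrets

Q = 1019                  # small prime (toy parameter)
P = 2 * Q + 1             # safe prime: squares mod P form an order-Q subgroup
G, H = 4, 9               # 2^2 and 3^2, generators of that subgroup

def commit(v, r):
    # Pedersen commitment: hiding via randomness r, additively homomorphic.
    return (pow(G, v, P) * pow(H, r, P)) % P

# Three users transfer to the same bank; the per-user commitments
# multiply into one bank-level commitment opening to the total.
amounts = [120, 300, 80]
rands = [secrets.randbelow(Q) for _ in amounts]
aggregate = 1
for v, r in zip(amounts, rands):
    aggregate = (aggregate * commit(v, r)) % P

total_v = sum(amounts) % Q
total_r = sum(rands) % Q
assert aggregate == commit(total_v, total_r)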
Mario Yaksetig, Pedro M. F. Pereira, Stephen Yang, Mahdi Nejadgholi, Jiayu Xu
In this work, we introduce an upgrade to Rayls, a novel central bank digital currency (CBDC) design that provides privacy, high performance, and regulatory compliance. In the Rayls construction, commercial banks each run their own (private) ledger and are connected to an underlying decentralized programmable blockchain via a relayer.
We introduce an improved and more efficient protocol for anonymous transactions between banks, which is (still) called Enygma. Our design is 'quantum-private', as a quantum adversary is not able to infer any information (i.e., payer, payee, amounts) about the transactions that take place in the network. One of the main improvements of this work is that the design is now completely quantum-secure except for the current use of ZK-SNARKs. We note, however, that this component is modular and can simply be updated as desired.
We achieve high performance in cross-bank settlement via the use of ZK-SNARKs, 'double' batching, and an optimized ZK implementation, with a prover time three times faster than the initial implementation and a cheaper on-chain verifier. Our transactions consist of a set of commitments and a zero-knowledge proof. As a result, each transaction can pay multiple banks at once, and each of these individual commitments can contain aggregated transfers from multiple users. For example, bank A transfers $1M to bank B, where that amount is actually the sum of multiple users' transfers to bank B. Commercial banks can then enforce regulatory rules locally within their ledgers. Our system is in production with one of the largest clearing houses in the world and is currently being explored in a CBDC pilot.
Sebastian Hasler, Pascal Reisert, Ralf Küsters
State-of-the-art actively secure multi-party computation protocols, like SPDZ (Damgård et al., CRYPTO 2012), use correlated randomness, like Beaver triples, to achieve a highly efficient online phase. For a long time, the generation of the correlated randomness in the offline phase relied on classical cryptographic primitives, like somewhat homomorphic encryption or oblivious transfer, that required significant communication.
More recently, Boyle et al. (FOCS 2020) defined a new primitive called pseudorandom correlation functions (PCFs) to generate correlated randomness non-interactively. PCFs set up keys for each party in an initial interactive phase, which can then be used by the parties to generate a large number of shares of the correlated randomness without further communication. In the random oracle model (ROM), two-party PCFs can be generically constructed based on evaluating a weak pseudorandom function (WPRF) using a powerful-enough HSS scheme. However, the concrete efficiency of instantiations of this approach has not been analyzed so far. There are also some works that construct PCFs based on other approaches, but they cannot be used for correlations of degree $\ge 2$ (e.g., Beaver triples) over large rings/fields (such as those used in SPDZ).
In this paper, we improve the complexity and concrete efficiency of PCFs over large rings/fields by presenting a new generic PCF based on the hardness of the Ring-Learning with Rounding problem (Ring-LWR) and FHE. We only share BFV keys in the initial interactive phase. The two parties then use the random oracle to locally sample BFV (pseudo-)ciphertexts encrypting pseudorandom plaintexts. We use a new bootstrapping algorithm for these (pseudo-)ciphertexts that reduces initially saturated noise to a level where the parties can use the homomorphic properties of the BFV scheme to correlate the encrypted randomness locally. Both parties can then produce, without further interaction, shares of the correlated randomness with their secret key share. Our new PCF works with any form of correlated randomness that can be expressed as an arithmetic circuit over a base ring $\mathbb Z_t$ or field $\mathbb F_{p^d}$, e.g., Beaver or matrix triples.
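For context on the correlated randomness being generated: a Beaver triple $(a, b, c)$ with $c = ab$ lets parties multiply secret-shared values with a single round of openings. A two-party additive-sharing sketch over $\mathbb{Z}_t$, where a dealer stands in for the PCF that this paper constructs:

```python
import secrets

T = 2**16  # toy base ring Z_t

def share(x):
    # Additive two-party sharing over Z_t.
    r = secrets.randbelow(T)
    return [r, (x - r) % T]

def reconstruct(sh):
    return sum(sh) % T

def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh):
    # Parties open d = x - a and e = y - b (both mask the secrets),
    # then combine locally: xy = c + d*b + e*a + d*e.
    d = sum((xi - ai) for xi, ai in zip(x_sh, a_sh)) % T
    e = sum((yi - bi) for yi, bi in zip(y_sh, b_sh)) % T
    z = [(d * bi + e * ai + ci) % T for ai, bi, ci in zip(a_sh, b_sh, c_sh)]
    z[0] = (z[0] + d * e) % T     # the public d*e term is added by one party
    return z

a, b = secrets.randbelow(T), secrets.randbelow(T)
a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % T)
x, y = 1234, 5678
z_sh = beaver_mul(share(x), share(y), a_sh, b_sh, c_sh)
assert reconstruct(z_sh) == (x * y) % T
```

The identity behind the combination step is $xy = (a+d)(b+e) = c + db + ea + de$; a PCF replaces the dealer by letting both parties derive the triple shares locally from their keys.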
Daniel Pöllman, Tianxin Tang
Encrypted search focuses on protecting sensitive data in outsourced environments while enabling private queries. Although standard encrypted search algorithms are efficient, they often leak some information about the queries and data. One such leakage is the access pattern on the outsourced storage. Recent leakage-abuse attacks have exploited this seemingly harmless leakage to successfully recover both queries and data, shifting research priorities towards finding the right balance between privacy and performance. While some proposals leverage oblivious RAM or other oblivious data structures to hide the access pattern, they typically incur significant bandwidth costs.
In response, researchers have developed new schemes that ensure access leakage satisfies differential privacy (DP). Yet the security implications of these new guarantees remain unclear, especially since, compared with conventional differential privacy, the application and threat model are significantly different. To understand these implications, we investigate two concrete instances of (encrypted) range-query schemes (which appeared in SODA ’19 and CCS ’22) that achieve differentially private access. We analyze their security guarantees using inference attacks to recover queries and data on real-world datasets. Our findings raise a critical concern that ensuring access leakage is differentially private either falls short of providing strong security for the queries and data, diverging from the initial goals, or offers only weak security but at a high efficiency/correctness cost. As part of our analysis, we also propose a generic security definition for DP access, and identify two general techniques for leakage mitigation, bucketization and partitioning, that may be of independent interest.
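As a reminder of the mechanism underlying such schemes, DP access-pattern designs perturb per-bucket access counts with calibrated noise. A minimal Laplace-mechanism sketch; the schemes analyzed in the paper are considerably more involved, and this shows only the basic noise calibration:

```python
import math, random

def laplace(scale):
    # Inverse-CDF sampling of the Laplace distribution with the given scale.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def dp_counts(counts, epsilon, sensitivity=1):
    # Release each bucket's access count with Laplace(sensitivity/epsilon)
    # noise; smaller epsilon means more noise and a stronger DP guarantee.
    scale = sensitivity / epsilon
    return [c + laplace(scale) for c in counts]

random.seed(0)
noisy = dp_counts([10, 25, 3], epsilon=1.0)
```

The paper's point is precisely that this kind of guarantee on *leakage* does not automatically translate into strong protection for queries and data under inference attacks.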
Alex Charlès, Aleksei Udovenko
In the area of white-box cryptography implementations, many existing protections are susceptible to attacks derived from physical cryptanalysis, which can be applied with minimal human effort and no prior design knowledge. The absence of a clear and comprehensive security model hinders the development of effective countermeasures against these attacks.
We introduce the Haystack ciphers, a formal model for the security of white-box countermeasures against such attacks. In this model, the countermeasures are represented simply as symmetric-key encryption schemes. We show that their chosen-plaintext (IND-CPA) security is closely related to the resistance of the countermeasures against computational trace-based attacks. Similarly, their chosen-ciphertext (IND-CCA) security is closely associated with the resistance against fault injection attacks in the white-box model. Secure Haystack ciphers constitute the next formal milestone for advancing white-box designs and countermeasures: a minimal requirement that is not yet clearly achieved but is plausibly feasible with available tools.
We review the white-box literature with respect to our model and bridge the gap between white-box and fault attacks, which are very powerful but were only partially considered in the white-box literature so far. We study known fault protections from the physical cryptography literature and present new fault attacks in the white-box setting, which raises the need and shapes the requirements for future secure countermeasures against fault attacks.
Chi Feng, Lei Fan
This paper presents BlockLens, a supervised, trace-level framework for detecting malicious Ethereum transactions using large language models. Unlike previous approaches that rely on static features or storage-level abstractions, our method processes complete execution traces, capturing opcode sequences, memory information, gas usage, and call structures to accurately represent the runtime behavior of each transaction. This framework harnesses the exceptional reasoning capabilities of LLMs for long input sequences and is fine-tuned on transaction data.
We present a tokenization strategy aligned with Ethereum Virtual Machine (EVM) semantics that converts transaction execution traces into tokens. Each transaction captures its complete execution trace through simulated execution and is sliced into overlapping chunks using a sliding window, allowing for long-range context modeling within memory constraints. During inference, the model outputs both a binary decision and a probability score indicating the likelihood of malicious behavior.
We implemented the framework based on LLaMA 3.2-1B and fine-tuned the model using LoRA. We evaluated it on a curated dataset that includes both real-world attacks and normal DeFi transactions. Our model outperforms representative baselines, achieving higher F1 scores and recall at top-k thresholds. Additionally, this work offers interpretable chunk-level outputs that enhance explainability and facilitate actionable decision-making in security-critical environments.
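The sliding-window slicing described above can be sketched as follows; the window and stride values are illustrative, not the paper's actual configuration:

```python
def chunk_trace(tokens, window, stride):
    # Slice a long token sequence into overlapping chunks; consecutive
    # chunks share (window - stride) tokens so context survives the cut.
    if len(tokens) <= window:
        return [tokens]
    chunks = []
    start = 0
    while start + window < len(tokens):
        chunks.append(tokens[start:start + window])
        start += stride
    chunks.append(tokens[-window:])   # final chunk is anchored to the tail
    return chunks

trace = list(range(10))
chunks = chunk_trace(trace, window=4, stride=2)
# every token of the trace appears in at least one chunk
```

Per-chunk model scores can then be combined (e.g., by taking the maximum probability across chunks) into the transaction-level decision.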
Sohyun Jeon, Calvin Abou Haidar, Mehdi Tibouchi
In this paper, we construct the first lattice-based threshold ring signature scheme with signature size scaling logarithmically in the size of the ring while supporting arbitrary thresholds. Our construction is also concretely efficient, achieving signature sizes of less than 150kB for ring sizes up to $N = 4096$ (with threshold size $T=N/2$, say). This is substantially more compact than previous work.
Our approach is inspired by the recent work of Aardal et al. (CRYPTO 2024) on the compact aggregation of $\mathsf{Falcon}$ signatures, which uses the $\mathsf{LaBRADOR}$ lattice-based SNARKs to combine a collection of $\mathsf{Falcon}$ signatures into a single succinct argument of knowledge of those signatures. We proceed in a similar way to obtain compact threshold ring signatures from $\mathsf{Falcon}$, but crucially require that the proof system be zero-knowledge in order to ensure the privacy of signers. Since $\mathsf{LaBRADOR}$ is not a zkSNARK, we associate it with a separate (non-succinct) lattice-based zero-knowledge proof system to achieve our desired properties.
Cheng Che, Tian Tian
Differential-linear cryptanalysis was introduced by Langford and Hellman at CRYPTO'94 and has been an important cryptanalysis method against symmetric-key primitives. The current primary framework for constructing differential-linear distinguishers involves dividing the cipher into three parts: the differential part $E_0$, the middle connection part $E_m$, and the linear part $E_1$. This framework was first proposed at EUROCRYPT 2019, where DLCT was introduced to evaluate the differential-linear bias of $E_m$ over a single round. Recently, the TDT method and the generalized DLCT method were proposed at CRYPTO 2024 to evaluate the differential-linear bias of $E_m$ covering multiple rounds. Unlike the DLCT framework, the DATF technique could also handle $E_m$ covering more rounds.
In this paper, we enhance the DATF technique in differential-linear cryptanalysis from three perspectives. First, we improve the precision of differential-linear bias estimation by introducing new transitional rules, a backtracking strategy, and a partitioning technique to DATF. Second, we present a general bias computation method for Boolean functions that significantly reduces computational complexity compared to the exhaustive search used by Liu et al. in the previous DATF technique. Third, we propose an effective method for searching for differential-linear distinguishers with good biases based on DATF. Additionally, the bias computation method is of independent interest, with wide applications in other cryptanalysis methods, such as differential cryptanalysis and cube attacks. Notably, all these enhancements to DATF are equally applicable to HATF. To show the validity and versatility of our new techniques, we apply the enhanced DATF to the NIST standard Ascon, the AES finalist Serpent, the NIST LWC finalist Xoodyak, and the eSTREAM finalist Grain v1. In all applications, we either present the first differential-linear distinguishers for more rounds or update the best-known ones.
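To ground the quantity being estimated: for a single S-box, the differential-linear bias for input difference $\Delta$ and output mask $\lambda$ is $\varepsilon = \Pr[\lambda \cdot (S(x) \oplus S(x \oplus \Delta)) = 0] - 1/2$, computable exhaustively for small S-boxes. This brute-force table (a DLCT row) is what techniques like DATF scale past; the PRESENT S-box below is used only as an example:

```python
# PRESENT 4-bit S-box (example; any small S-box works here)
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def dl_bias(delta, lam, sbox=SBOX):
    # epsilon = Pr[ lam . (S(x) ^ S(x ^ delta)) = 0 ] - 1/2
    n = len(sbox)
    hits = sum(1 for x in range(n)
               if bin(lam & (sbox[x] ^ sbox[x ^ delta])).count("1") % 2 == 0)
    return hits / n - 0.5

# One DLCT row: biases for a fixed input difference across all output masks.
row = [dl_bias(0x1, lam) for lam in range(16)]
```

For full ciphers, exhaustive enumeration over all inputs is infeasible, which is why transition-based estimation techniques such as DATF matter.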
Akhil Bandarupalli, Xiaoyu Ji, Soham Jog, Aniket Kate, Chen-Da Liu-Zhang, Yifan Song
Verifiable Secret Sharing (VSS) is a fundamental primitive in threshold cryptography and multi-party computation. It preserves secrecy, integrity, and availability of a shared secret for a fixed set of parties, with a subset of them being malicious. In practical applications, especially when the secret sharing is expected to be maintained over long durations, the VSS scheme should be able to cater to a dynamic setting where involved parties may change. The primitive known as Dynamic Proactive Secret Sharing (DPSS) is beneficial here, as it facilitates the secure transfer of secrets from the original committee to a new committee. Nevertheless, prior works on DPSS protocols either rely on unrealistic time bounds on message delivery or have a high computational cost, which limits their scalability beyond tens of parties.
In this work, we present a scalable asynchronous DPSS protocol that utilizes lightweight cryptographic tools, such as hash functions and symmetric-key encryption. Our protocol achieves full security and optimal fault tolerance with amortized linear communication costs. Unlike existing solutions, our proposed protocol is also post-quantum secure. By balancing computation and communication, our approach offers practical performance at scale. Our implementation results demonstrate improved scalability and efficiency, surpassing the current state of the art with a $22.1\times$ lower latency than the prior best work. Furthermore, our solution scales gracefully with increasing $n$, resharing a batch of $100,000$ secrets between committees of size $n=112$ in under a minute.
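The core operation of DPSS, handing a Shamir-shared secret from one committee to another without ever reconstructing it, can be sketched in its textbook semi-honest form; the paper's protocol adds verifiability, asynchrony, and batching on top:

```python
import random

P = 2**61 - 1  # prime field (2^61 - 1 is a Mersenne prime)

def eval_poly(coeffs, x):
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def share(secret, t, ids):
    # Degree-t Shamir sharing: f(0) = secret, party i holds f(i).
    poly = [secret] + [random.randrange(P) for _ in range(t)]
    return {i: eval_poly(poly, i) for i in ids}

def lagrange_at_zero(ids):
    # Lagrange coefficients for interpolating f(0) from the given ids.
    coeffs = {}
    for i in ids:
        num, den = 1, 1
        for j in ids:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        coeffs[i] = num * pow(den, P - 2, P) % P
    return coeffs

def reshare(old_shares, t, new_ids):
    # Each old party Shamir-shares its own share to the new committee;
    # new parties combine sub-shares with the old Lagrange coefficients.
    lam = lagrange_at_zero(list(old_shares))
    sub = {i: share(s, t, new_ids) for i, s in old_shares.items()}
    return {j: sum(lam[i] * sub[i][j] for i in old_shares) % P
            for j in new_ids}

random.seed(7)
secret = 123456789
old = share(secret, 1, [1, 2, 3])
new = reshare(old, 1, [4, 5, 6])
lam_new = lagrange_at_zero([4, 5])
assert (lam_new[4] * new[4] + lam_new[5] * new[5]) % P == secret
```

Because each new share is a Lagrange combination of fresh sub-shares, the old shares can be erased after the handoff, which is what makes the scheme proactive.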
Akhil Bandarupalli, Xiaoyu Ji, Aniket Kate, Chen-Da Liu-Zhang, Daniel Pöllmann, Yifan Song
Multi-party computation (MPC) enables a set of $n$ mutually distrusting parties to compute any function on their private inputs. Mainly, MPC facilitates agreement on the function’s output while preserving the secrecy of honest inputs, even against a subset of $t$ parties controlled by an adversary. With applications spanning from anonymous broadcast to private auctions, MPC is considered a cornerstone of distributed cryptography, and significant research efforts have been aimed at making MPC practical in the last decade. However, most libraries either make strong assumptions like the network being bounded synchronous, or incur high computation overhead from the extensive use of expensive public-key operations that prevent them from scaling beyond a few dozen parties.
This work presents Velox, an asynchronous MPC protocol that offers fairness against an optimal adversary corrupting up to $t<\frac{n}{3}$ parties. Velox significantly enhances practicality by leveraging lightweight cryptographic primitives, such as symmetric-key encryption and hash functions, which are 2-3 orders of magnitude faster than public-key operations, resulting in substantial computational efficiency. Moreover, Velox is highly communication-efficient, with linear amortized communication relative to circuit size and only $\mathcal{O}(n^3)$ field elements of additive overhead. Concretely, Velox requires just $9.33$ field elements per party per multiplication gate, a more than $10\times$ reduction compared to the state of the art. Moreover, Velox also offers post-quantum security, as lightweight cryptographic primitives retain their security against a quantum adversary.
We implement Velox comprehensively, covering both offline and online phases, and evaluate its performance on a geographically distributed testbed through a real-world application: anonymous broadcast. Our implementation securely shuffles a batch of $k=256$ messages in $4$ seconds with $n=16$ parties and $18$ seconds with $n=64$ parties, a $36\times$ and $28.6\times$ reduction in latency compared to the prior best work. At scale with $n=112$ parties, Velox is able to shuffle the same batch of messages in under $50$ seconds from end to end, illustrating its effectiveness and scalability. Overall, our work removes significant barriers faced by prior asynchronous MPC solutions, making asynchronous MPC practical and efficient for large-scale deployments involving $100$s of parties.
Simon Damm, Asja Fischer, Alexander May, Soundes Marzougui, Leander Schwarz, Henning Seidler, Jean-Pierre Seifert, Jonas Thietke, Vincent Quentin Ulitzsch
Lattice-based signatures like Dilithium (ML-DSA) prove knowledge of a secret key $\vec s \in \mathbb{Z}^n$ by using Integer LWE (ILWE) samples $z = \langle \vec c, \vec s \rangle + y$, for some known hash value $\vec c \in \mathbb{Z}^n$ of the message and unknown error $y$. Rejection sampling guarantees zero-knowledge, which makes the ILWE problem, which asks to recover $\vec s$ from many $z$'s, unsolvable.
Side-channel attacks partially recover $y$, thereby obtaining more informative samples and resulting in a—potentially tractable—ILWE problem. The standard method for solving the resulting problem is Ordinary Least Squares (OLS), which requires independence of $y$ from $\langle \vec c, \vec s \rangle$—an assumption that is violated by zero-knowledge samples.
We present efficient algorithms for a variant of the ILWE problem that was not addressed in prior work, which we coin Concealed ILWE (CILWE). In this variant, only a fraction of the ILWE samples is zero-knowledge. We call this fraction the concealment rate. This ILWE variant naturally occurs in side-channel attacks on lattice-based signatures. A case in point is profiling side-channel attacks on Dilithium implementations that classify whether $y = 0$. This gives rise to either zero-error ILWE samples $z = \langle \vec c, \vec s \rangle$ with $y = 0$ (in case of correct classification) or ordinary zero-knowledge ILWE samples (in case of misclassification).
As we show, OLS is not practical for CILWE instances, as it requires a prohibitively large number of samples for even small (under 10%) concealment rates. A known integer linear programming-based approach can solve some CILWE instances, but suffers from two shortcomings. First, it lacks provable efficiency guarantees, as ILP is NP-hard in the worst case. Second, it does not utilize samples with small, independent errors $y$, which can occur in addition to zero-knowledge samples. We introduce two statistical regression methods to cryptanalysis, Huber and Cauchy regression. They are both efficient and can handle instances with all three types of samples. At the same time, they are capable of handling high concealment rates, up to 90% in practical experiments. While Huber regression comes with theoretically appealing correctness guarantees, Cauchy regression performs best in practice. We exploit this efficacy to execute a novel profiling attack against a masked Dilithium implementation. The resulting ILWE instances suffer from both concealment and small, independent errors. As such, neither OLS nor ILP can recover the secret key. Cauchy regression, however, allows us to recover the secret key in under three minutes for all NIST security levels.
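The contrast between OLS and robust regression can be sketched on a miniature CILWE instance. The parameters below (secret dimension, concealment rate, noise range) are hypothetical toy choices, not taken from the paper; the Huber fit is a generic iteratively reweighted least squares (IRLS) solver, not the authors' implementation.

```python
# Toy CILWE instance: a fraction of samples is "concealed" (z drawn
# independently of the secret), the rest are zero-error samples
# z = <c, s>.  OLS is pulled toward the concealed samples; a Huber
# M-estimator (fit via IRLS) downweights them and recovers s after
# rounding.  All parameters are illustrative toy choices.
import numpy as np

rng = np.random.default_rng(0)
s = np.array([2, -1, 0, 1])                # small secret, Dilithium-style
m, rate = 800, 0.25                        # samples, concealment rate
C = rng.integers(-2, 3, size=(m, 4))       # known "challenge" vectors
z = C @ s                                  # zero-error samples
hide = rng.random(m) < rate
z[hide] = rng.integers(-40, 41, size=hide.sum())  # concealed samples

ols = np.linalg.lstsq(C, z, rcond=None)[0]        # biased estimate

def huber_irls(C, z, delta=1.0, iters=50):
    """Minimize the Huber loss via iteratively reweighted least squares:
    weight w_i = 1 for small residuals, delta/|r_i| for large ones."""
    beta = np.linalg.lstsq(C, z, rcond=None)[0]
    for _ in range(iters):
        r = z - C @ beta
        w = np.where(np.abs(r) <= delta, 1.0,
                     delta / np.maximum(np.abs(r), 1e-9))
        beta = np.linalg.solve(C.T @ (w[:, None] * C), C.T @ (w * z))
    return beta

hub = huber_irls(C, z)
print("OLS estimate:  ", np.round(ols, 2))
print("Huber estimate:", np.round(hub, 2))
assert np.array_equal(np.rint(hub).astype(int), s)
```

Because the concealed responses carry no information about $\vec s$, they act as heavy-tailed outliers; the bounded influence of the Huber loss is exactly what neutralizes them, whereas the squared loss of OLS lets each outlier contribute unboundedly.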
Pratish Datta, Junichi Tomida, Nikhil Vanjani
We revisit decentralized multi‑authority attribute‑based encryption (MA‑ABE) through the lens of fully adaptive security -- the most realistic setting in which an adversary can decide on‑the‑fly which users and which attribute authorities to corrupt. Previous constructions either tolerated only static authority corruption or relied on a highly complex “dual system with dual subsystems” proof technique that inflated ciphertexts and keys.
Our first contribution is a streamlined security analysis showing -- perhaps surprisingly -- that the classic Lewko–Waters MA-ABE scheme [EUROCRYPT 2011] already achieves full adaptive security, provided its design is carefully reinterpreted and, more crucially, its security proof is re-orchestrated to conclude with an information-theoretic hybrid in place of the original target-group-based computational step. By dispensing with dual subsystems and target-group-based assumptions, we achieve a significantly simpler and tighter security proof along with a more lightweight implementation. Our construction reduces ciphertext size by 33 percent, shrinks user secret keys by 66 percent, and requires 50 percent fewer pairing operations during decryption -- all while continuing to support arbitrary collusions of users and authorities. These improvements mark a notable advance over the state-of-the-art fully adaptive decentralized MA-ABE scheme of Datta et al. [EUROCRYPT 2023]. We instantiate the scheme in both composite‑order bilinear groups under standard subgroup‑decision assumptions and in asymmetric prime‑order bilinear groups under the Matrix‑Diffie–Hellman assumption. We further show how the Kowalczyk–Wee attribute‑reuse technique [EUROCRYPT 2019] seamlessly lifts our construction from ``one‑use’’ boolean span programs (BSP) to ``multi‑use’’ policies computable in $\mathsf{NC^{1}}$, resulting in a similarly optimized construction over the state-of-the-art by Chen et al. [ASIACRYPT 2023].
Going beyond the Boolean world, we present the first MA-ABE construction for arithmetic span program (ASP) access policies, capturing a richer class of Boolean, arithmetic, and combinatorial computations. This advancement also enables improved concrete efficiency by allowing attributes to be handled directly as field elements, thereby eliminating the overhead of converting arithmetic computations into Boolean representations. The construction -- again presented in composite and prime orders -- retains decentralization and adaptive user‑key security, and highlights inherent barriers to handling corrupted authorities in the arithmetic setting.
Zeyu Liu, Yunhao Wang, Ben Fisch
Fully homomorphic encryption (FHE) is a powerful and widely used primitive in many real-world applications, with IND-CPA as its standard security guarantee. Recently, Li and Micciancio [Eurocrypt'21] introduced IND-CPA-D security, which strengthens the standard IND-CPA security by allowing the attacker to access a decryption oracle for honestly generated ciphertexts (generated via either an encryption oracle or an honest homomorphic circuit evaluation process).
Recently, Jung et al. [CCS'24] and Checri et al. [Crypto'24] have shown that even exact FHE schemes like FHEW/TFHE/BGV/BFV may still not be IND-CPA-D secure, by exploiting bootstrapping failures. However, such existing attacks can be mitigated by setting a negligible bootstrapping failure probability.
On the other hand, Liu and Wang [Asiacrypt'24] proposed relaxed functional bootstrapping, which offers orders-of-magnitude performance improvements and furthermore allows a free function evaluation during bootstrapping. These efficiency advantages make it a competitive choice in many applications, but its ``relaxed'' nature also opens new avenues for IND-CPA-D attacks. In this work, we show that the underlying secret key can be recovered within 10 minutes against all existing relaxed functional bootstrapping constructions, and even within 1 minute for some of them. Moreover, our attack works even with a negligible bootstrapping failure probability, making it immune to existing mitigation methods.
Additionally, we propose a general fix that mitigates all the existing modulus-switching-error-based attacks, including ours, in the IND-CPA-D model. This is achieved by constructing a new modulus switching procedure with essentially no overhead. Lastly, we show that IND-CPA-D may not be sufficient for some applications, even in the passive adversary model. Thus, we extend this model to IND-CPA-D with randomness (IND-CPA-DR).
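For background, the textbook modulus switching procedure and the source of its secret-dependent rounding error can be sketched in a few lines. This is the standard LWE construction with illustrative toy parameters, not the paper's new procedure or its parameters.

```python
# Textbook LWE modulus switching (illustrative background only, not the
# paper's new procedure): scale a ciphertext from modulus q down to q',
# rounding each component.  The per-component rounding errors combine
# into an extra term roughly <round_err, s> in the decryption phase --
# error that depends on the secret, which is the kind of leakage that
# modulus-switching-based IND-CPA-D attacks exploit when decryption
# results are exposed.  All parameters are toy choices.
import random

random.seed(0)
n, q, q2, t = 16, 2**20, 2**12, 16            # dim, moduli, plaintext space
Delta, Delta2 = q // t, q2 // t               # plaintext scaling factors
s = [random.randrange(2) for _ in range(n)]   # binary secret

def encrypt(m):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-4, 5)               # small fresh noise
    b = (sum(ai * si for ai, si in zip(a, s)) + Delta * m + e) % q
    return a, b

def mod_switch(a, b):
    sw = lambda x: (x * q2 + q // 2) // q % q2   # round(x * q'/q)
    return [sw(ai) for ai in a], sw(b)

def decrypt(a, b, modulus, delta):
    phase = (b - sum(ai * si for ai, si in zip(a, s))) % modulus
    return (phase + delta // 2) // delta % t

m = 7
a, b = encrypt(m)
a2, b2 = mod_switch(a, b)
assert decrypt(a, b, q, Delta) == m
# Still correct: total error (scaled noise + rounding) stays below Delta2/2,
# but the rounding contribution is a function of the secret s.
assert decrypt(a2, b2, q2, Delta2) == m
```

With a binary secret of dimension 16, the rounding contribution is at most $0.5\,(1 + \sum_i s_i) \le 8.5$, far below $\Delta'/2 = 128$, so correctness survives the switch even though the error now carries information about $\vec s$.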
Kigen Fukuda, Shin’ichiro Matsuo
The emergence of Cryptographically Relevant Quantum Computers (CRQCs) poses an existential threat to the security of contemporary blockchain networks, which rely on public-key cryptography vulnerable to Shor’s algorithm. While the need for a transition to Post-Quantum Cryptography (PQC) is widely acknowledged, the evolution of blockchains from simple transactional ledgers to complex, multi-layered financial ecosystems has rendered early, simplistic migration plans obsolete. This paper provides a comprehensive analysis of the blockchain PQC migration landscape as it stands in 2025. We dissect the core technical challenges, including the performance overhead of PQC algorithms, the loss of signature aggregation efficiency vital for consensus and scalability, and the cascading complexities within Layer 2 protocols and smart contracts. Furthermore, the analysis extends to critical operational and socio-economic hurdles, such as the ethical dilemma of dormant assets and the conflicting incentives among diverse stakeholders including users, developers, and regulators. By synthesizing ongoing community discussions and roadmaps for Bitcoin, Ethereum, and others, this work establishes a coherent framework to evaluate migration requirements, aiming to provide clarity for stakeholders navigating the path toward a quantum-secure future.
Véronique Cortier, Alexandre Debant, Olivier Esseiva, Pierrick Gaudry, Audhild Høgåsen, Chiara Spadafora
Internet voting in Switzerland for political elections is strongly regulated by the Federal Chancellery (FCh). It puts great emphasis on individual verifiability: security against a corrupted voting device is ensured via return codes, sent by postal mail. For a long time, the FCh accepted trusting an offline component to set up data, in particular the voting material. Today, the FCh aims to remove this strong trust assumption.
We propose a protocol that abides by this new will. At the heart of our system lies a setup phase where several parties create the voting material in a distributed way, while allowing one of the parties to remain offline during the voting phase. A complication arises from the fact that the voting material has to be printed, sent by postal mail, and then used by the voter to perform several operations that are critical for security. Usability constraints are taken into account in our design, both in terms of computational complexity (linear setup and tally) and in terms of user experience (we ask the voter to type a high-entropy string only once).
The security of our scheme is proved in a symbolic setting, using the ProVerif prover, for various corruption scenarios, demonstrating that it fulfills the Chancellery’s requirements and sometimes goes slightly beyond them.
Sven Schäge, Marc Vorstermans
We make progress on the foundational problem of determining the strongest security notion achievable by homomorphic encryption. Our results are negative. We prove that a wide class of semi-homomorphic public key encryption schemes (SHPKE) cannot be proven IND-PCA secure (indistinguishability against plaintext checkability attacks), a relaxation of IND-CCA security. This class includes widely used and versatile schemes like ElGamal PKE, Paillier PKE, and the linear encryption system by Boneh, Boyen, and Shacham (Crypto'04). Besides their homomorphic properties, these schemes have in common that they all provide efficient algorithms for recognizing valid ciphertexts (and public keys) from public values. In contrast to CCA security, where the adversary is given access to a decryption oracle, in a PCA attack, the adversary solely has access to an oracle that decides for a given ciphertext/plaintext pair $(c,m)$ whether decrypting $c$ indeed gives $m$. Since the notion of IND-PCA security is weaker than IND-CCA security, it is not only easier to achieve, leading to potentially more efficient schemes in practice, but it also side-steps existing impossibility results that rule out IND-CCA security.
To rule out IND-PCA security we thus have to rely on novel techniques. We provide two results, depending on whether the attacker is allowed to query the PCA oracle after it has received the challenge (IND-PCA2 security) or not (IND-PCA1 security -- the more challenging scenario). First, we show that IND-PCA2 security can be ruled out unconditionally if the number of challenges is smaller than the number of queries made after the challenge. Next, we prove that no Turing reduction can reduce the IND-PCA1 security of SHPKE schemes with $O(\kappa^3)$ PCA queries overall to interactive complexity assumptions that support $t$-time access to their challenger with $t=O(\kappa)$.
To obtain our second impossibility result, we develop a new meta-reduction-based methodology that can be used to tackle security notions where the attacker is granted access to a \emph{decision} oracle. This makes it challenging to utilize the techniques of existing meta-reduction-based impossibility results, which focus on definitions where the attacker is allowed to access an inversion oracle (e.g., long-term key corruptions or a signature oracle). To obtain our result, we have to overcome several technical challenges that are entirely novel to the setting of public key encryption.