International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

01 November 2025

Nirajan Koirala, Seunghun Paik, Sam Martin, Helena Berens, Tasha Januszewicz, Jonathan Takeshita, Jae Hong Seo, Taeho Jung
ePrint Report
Private Set Intersection (PSI) protocols allow a querier to determine whether an item exists in a dataset without revealing the query or exposing non-matching records. PSI has many applications in fraud detection, compliance monitoring, healthcare analytics, and secure collaboration across distributed data sources. In such applications, the results obtained through PSI can be sensitive and may even require some kind of downstream computation on the associated data before the outcome is revealed to the querier; this computation may involve floating-point arithmetic, for example the inference of a machine learning model. Although many such protocols have been proposed, and some of them even enable secure queries over distributed encrypted sets, they fail to address these real-world complexities.

In this work, we present the first encrypted label selection and analytics protocol, which allows the querier to securely retrieve not just the results of intersections among identifiers but also the outcomes of downstream functions on the data/labels associated with the intersected identifiers. To achieve this, we construct a novel protocol based on the approximate CKKS fully homomorphic encryption scheme that supports efficient label retrieval and downstream computations over real-valued data. In addition, we introduce several techniques to handle identifiers in large domains, e.g., 64 or 128 bits, while ensuring high precision for accurate downstream computations.

Finally, we implement and benchmark our protocol, compare it against state-of-the-art methods, and evaluate it over real-world fraud datasets, demonstrating its scalability and efficiency in large-scale use cases. Our results show speedups of 1.4$\times$ to 6.8$\times$ over prior approaches, and our protocol selects and analyzes encrypted labels over real-world datasets in under 65 seconds, making it practical for real-world deployments.
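As a rough illustration only, the following Python sketch shows the plaintext functionality that the protocol computes under CKKS encryption: intersect query identifiers with a labelled dataset and apply a downstream real-valued function to the matched labels. The function names and the linear-model analytic are illustrative, not the authors' API.

# Plaintext reference of the functionality only; the paper's protocol evaluates
# the label selection and the downstream function under CKKS encryption.
# All names and the example analytic below are illustrative.
def label_select_and_analyze(query_ids, dataset, downstream):
    # dataset: dict mapping identifier -> real-valued label vector
    selected = [dataset[q] for q in query_ids if q in dataset]
    return downstream(selected)

def mean_model_score(labels):
    # Example downstream analytic: average score of a small linear model.
    weights = [0.4, -1.3, 0.7]                      # illustrative model weights
    scores = [sum(w * x for w, x in zip(weights, row)) for row in labels]
    return sum(scores) / len(scores) if scores else 0.0

dataset = {0xDEADBEEF: [1.0, 0.2, 3.5], 0xCAFEBABE: [0.1, 4.2, 0.3]}
print(label_select_and_analyze([0xDEADBEEF, 0x12345678], dataset, mean_model_score))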
Kristiana Ivanova, Daniel Gardham, Stephan Wesemeyer
ePrint Report
CAPTCHA is a ubiquitous challenge-response system for preventing spam (typically from bots) on the internet. Because it requires users to solve visual challenges, its design is inherently cumbersome and can unfairly punish those using low-reputation IP addresses, such as users of anonymity services, e.g. Tor.

To minimise the frequency with which a user must solve CAPTCHAs, Privacy Pass (PETS 2018) allows users to collect and spend anonymous tokens instead of solving challenges. Despite 400,000 reported monthly users and standardisation efforts by the IETF, it has not been the subject of formal verification, which has proven to be a valuable tool in security analysis.

In this paper, we perform the first analysis of Privacy Pass using formal verification tools and verify that the standard security properties hold in the symbolic model. Motivated by concerns of Davidson et al. and the IETF contributors, we also explore a stronger attack model, in which additional key leakage uncovers a potential token forgery. We present a new protocol, Privacy Pass Plus, show that the attack fails against it in the symbolic model, and give new cryptographic reductions showing that our scheme maintains the security properties. Moreover, our work highlights the complementary nature of analysing protocols in both the symbolic and computational models.
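For intuition, the following Python sketch shows the blinded-token issuance at the heart of Privacy Pass (a verifiable oblivious PRF), instantiated over a toy safe-prime group rather than the standardized elliptic-curve ciphersuites; the DLEQ proof and the MAC-based redemption step are omitted.

# Toy VOPRF-style issuance: the client blinds a hashed token, the server
# applies its secret key, and the client unblinds, so the server never sees
# the token. Parameters are illustrative; real deployments use elliptic
# curves, a DLEQ proof, and a redemption MAC, all omitted here.
import hashlib, secrets

P, Q = 1019, 509                       # toy safe prime P = 2Q + 1

def hash_to_group(token: bytes) -> int:
    # Map into the order-Q subgroup of quadratic residues (toy hash-to-group).
    x = int.from_bytes(hashlib.sha256(token).digest(), "big") % P
    g = pow(x, 2, P)
    return g if g > 1 else 4

server_key = secrets.randbelow(Q - 1) + 1

token = secrets.token_bytes(16)                # client picks a random token
T = hash_to_group(token)
r = secrets.randbelow(Q - 1) + 1
blinded = pow(T, r, P)                         # client blinds and sends this
evaluated = pow(blinded, server_key, P)        # server evaluates its key
unblinded = pow(evaluated, pow(r, -1, Q), P)   # client unblinds
assert unblinded == pow(T, server_key, P)      # equals H(token)^key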
Supriyo Banerjee, Sayon Duttagupta
ePrint Report
Secure communication in the Internet of Things (IoT) requires lightweight protocols that scale across unicast, multicast, and broadcast settings. Existing solutions typically depend on centralized gateways, which introduce single points of failure and scalability limitations. We present TreeCast, a decentralized group key establishment protocol that organizes devices in a binary tree and derives communication keys through hybrid key exchange. The protocol achieves efficient and scalable key management, supporting dynamic membership with localized rekeying. We provide a formal security analysis and proof, showing that TreeCast achieves authentication, session key confidentiality, post-compromise security, and partial forward secrecy. In addition, we evaluate computational and storage costs of the protocol, demonstrating logarithmic scalability in both communication overhead and device state. By enabling a single framework for unicast, multicast, and broadcast communication, our approach bridges the gap between cryptographic rigor and practical IoT deployment. TreeCast provides a deployable, communication-oriented solution to secure large-scale IoT networks.
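For intuition, the sketch below shows the generic binary key-tree arrangement (logical-key-hierarchy style) behind such protocols: each device stores the keys on its leaf-to-root path, the root key serves as the group key, and a membership change refreshes only O(log n) keys. TreeCast's hybrid key exchange and its rekeying messages are not modeled here.

# Generic LKH-style binary key tree; TreeCast's hybrid key exchange and
# message formats are not modeled.
import secrets

class KeyTree:
    def __init__(self, num_leaves: int):
        self.n = num_leaves
        # Heap layout: node 1 is the root, node i has children 2i and 2i + 1,
        # and the leaves are nodes n .. 2n - 1. Every node key is independent.
        self.keys = {i: secrets.token_bytes(32) for i in range(1, 2 * num_leaves)}

    def path(self, leaf_index: int):
        node, nodes = self.n + leaf_index, []
        while node >= 1:
            nodes.append(node)
            node //= 2
        return nodes                          # leaf, ..., root

    def device_keys(self, leaf_index: int):
        return [self.keys[v] for v in self.path(leaf_index)]

    def rekey_after_leave(self, leaf_index: int):
        # Only the departed leaf's path is refreshed: O(log n) keys.
        for v in self.path(leaf_index):
            self.keys[v] = secrets.token_bytes(32)
        return self.keys[1]                   # new group key

tree = KeyTree(8)
assert len(tree.device_keys(3)) == 4          # log2(8) + 1 keys per device
new_group_key = tree.rekey_after_leave(3)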
Wenjie Qu, Yanpei Guo, Yue Ying, Jiaheng Zhang
ePrint Report
With the widespread deployment of machine learning services, concerns about potential misconduct by service providers have emerged. Providers may deviate from their promised methodologies when delivering their services, undermining customer trust. Zero-knowledge proofs (ZKPs) offer a promising solution for customers to verify service integrity while preserving the intellectual property of the model weights. However, existing ZKP systems for convolutional neural networks (CNNs) impose significant computational overhead on the prover, hindering their practical deployment.

To address this challenge and facilitate real-world deployment of ZKPs for CNNs, we introduce VerfCNN, a novel and efficient ZKP system for CNN inference. The core innovation of VerfCNN lies in a specialized protocol for proving multi-channel convolutions, achieving optimal prover complexity that matches the I/O size of the convolution. Our design significantly reduces the prover overhead for verifiable CNN inference. Experiments on VGG-16 demonstrate that our system achieves a prover time of just 12.6 seconds, offering a 6.7× improvement over zkCNN (CCS'21). Remarkably, VerfCNN incurs only a 10× overhead compared to plaintext inference on CPU, whereas general-purpose zkSNARKs typically impose overheads exceeding 1000×. These results underscore VerfCNN's strong potential to enhance the integrity and transparency of real-world ML services.
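For reference, the snippet below spells out the plaintext multi-channel convolution that such a proof attests to; the input and output dimensions are exactly the I/O size that bounds the prover's work. The ZKP machinery itself is not shown, and the shapes are illustrative.

# Plaintext multi-channel convolution (stride 1, no padding): the statement a
# system like VerfCNN proves, not the proof system itself.
import numpy as np

def conv2d_multichannel(x, w):
    # x: (C_in, H, W) input; w: (C_out, C_in, K, K) kernels
    c_in, h, wd = x.shape
    c_out, _, k, _ = w.shape
    out = np.zeros((c_out, h - k + 1, wd - k + 1))
    for co in range(c_out):
        for i in range(h - k + 1):
            for j in range(wd - k + 1):
                out[co, i, j] = np.sum(x[:, i:i + k, j:j + k] * w[co])
    return out

x = np.random.rand(3, 8, 8)              # 3 input channels
w = np.random.rand(4, 3, 3, 3)           # 4 output channels, 3x3 kernels
print(conv2d_multichannel(x, w).shape)   # (4, 6, 6)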
Yewei Guan, Hua Guo, Man Ho Au, Jiarong Huo, Jin Tan, Zhenyu Guan
ePrint Report
Multi-party Private Set Intersection (mPSI) enables $n$ ($n \geq 3$) parties, each holding a set of size $m$, to jointly compute their intersection while preserving the confidentiality of each set, which is essential for privacy-preserving data analysis and secure database queries. Existing mPSI protocols have limitations in achieving both sufficient security and practical efficiency.

This paper presents a novel and efficient mPSI construction in the semi-honest model while resisting arbitrary collusion attacks. Our construction works in the offline/online paradigm. Given the corruption threshold $t$, the online phase achieves linear total computational and communication complexity, that is $O((n+t)m)$, and solely uses symmetric operations. This makes our construction theoretically outperform the existing works. The technical core of the construction is our newly extracted primitive called reducible zero-sharing, which allows $t(t
With extensive experiments, we demonstrate that our construction outperforms state-of-the-art works in terms of online running time and communication cost. Specifically, compared to works with sufficient security, the online running time of our mPSI construction is $9.57-114.46\times$ faster in the LAN setting and $2.69-28.41\times$ faster in the WAN setting, while the communication cost is $0.29-28.70\times$ lower. Notably, the total (offline+online) performance still achieves up to an $18.73\times$ improvement. Compared with works with practical efficiency, our mPSI construction achieves similar performance while providing stronger security.
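For intuition about the zero-sharing primitive mentioned above, the sketch below shows the standard PRF-based gadget in which all parties' shares of an agreed input sum to zero; the paper's reducible zero-sharing adds structure beyond this plain version.

# Standard PRF-based zero-sharing: every pair of parties holds a common key,
# and each party's share is a signed sum of pairwise PRF outputs, so the
# shares of any agreed input sum to zero. The "reducible" variant of the
# paper is not captured here.
import hashlib, secrets

def prf(key: bytes, x: bytes, modulus: int) -> int:
    return int.from_bytes(hashlib.sha256(key + x).digest(), "big") % modulus

def zero_shares(n: int, x: bytes, modulus: int = 2**64):
    pair_keys = {(i, j): secrets.token_bytes(16)
                 for i in range(n) for j in range(i + 1, n)}
    shares = []
    for i in range(n):
        s = 0
        for j in range(n):
            if j != i:
                k = pair_keys[(min(i, j), max(i, j))]
                s += prf(k, x, modulus) if i < j else -prf(k, x, modulus)
        shares.append(s % modulus)
    return shares

shares = zero_shares(5, b"agreed-input")
assert sum(shares) % 2**64 == 0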
Shahla Atapoor, Karim Baghery, Georgio Nicolas, Robi Pedersen, Jannik Spiessens
ePrint Report
Verifiable Secret Sharing (VSS) allows a dealer to distribute a secret among $n$ parties so that each can verify their share's validity, and any qualified subset can reconstruct the secret. Publicly Verifiable Secret Sharing (PVSS) extends VSS by enabling anyone to verify the correctness of distributed shares. Both VSS and PVSS schemes are core building blocks in many cryptographic applications.

We introduce a $k$-batched and $l$-packed extension of \pie, a unified framework from PKC 2025 for Shamir-based computational VSS in the synchronous setting with optimal resilience. Our framework enables the sharing and verification of $l\times k$ secrets in a single protocol execution, offering a tunable trade-off between efficiency and robustness: the $k$-batched, non-packed variant ($l=1$) improves performance while maintaining optimal resilience, whereas the $k$-batched, $l$-packed variant achieves even greater efficiency at the cost of slightly reduced fault tolerance. Using this framework, we construct several Batched and Packed (BP) VSS and PVSS schemes that significantly reduce both computational and communication costs for the dealer and parties.

When sharing many secrets, two of our VSS schemes and our PVSS scheme perform almost as efficiently as plain Shamir sharing. For example, when sharing more than 100 secrets, the overhead of our hash-based BP-VSS is below 3%, for our BP-VSS with information-theoretic privacy it remains around 8%, and for our BP-PVSS it is under 2%. These results show that verifiability in Shamir secret sharing can be achieved in post-quantum and large-scale settings with negligible overhead for the dealer. Our proposed BP-PVSS scheme is the first that can achieve these properties and outperforms existing state-of-the-art protocols. As an application, we show that our BP-PVSS yields substantial performance improvements for the ALBATROSS randomness generation protocol from ASIACRYPT 2020.
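As background for the packing idea, the sketch below shows plain $l$-packed Shamir sharing over a prime field: $l$ secrets are embedded at fixed evaluation points of a random degree-$(t+l-1)$ polynomial, and any $t+l$ shares reconstruct all of them. The batching, verification, and public-verifiability layers of the paper are not shown, and the field and evaluation points are illustrative.

# Plain l-packed Shamir sharing; verification (VSS/PVSS) is not modeled.
import secrets as rnd

P = 2**61 - 1                                      # prime field modulus

def interpolate(points, x_eval):
    # Lagrange-interpolate the unique polynomial through `points` at x_eval.
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x_eval - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def packed_share(secret_vec, t, party_points):
    l = len(secret_vec)
    secret_points = [(-k - 1) % P for k in range(l)]
    # Fix the degree-(t + l - 1) polynomial by its values at the l secret
    # positions plus t random anchor positions, then evaluate for each party.
    anchors = list(zip(secret_points, secret_vec))
    anchors += [(10_000 + i, rnd.randbelow(P)) for i in range(t)]
    return {x: interpolate(anchors, x) for x in party_points}

t, l = 2, 3
shares = packed_share([11, 22, 33], t, party_points=list(range(1, 7)))
subset = list(shares.items())[:t + l]              # any t + l shares suffice
print([interpolate(subset, (-k - 1) % P) for k in range(l)])   # [11, 22, 33]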
Jean Paul Degabriele, Alessandro Melloni, Martijn Stam
ePrint Report
The recently introduced Counter Galois Onion (CGO) is a new symmetric onion encryption scheme designed to replace the one currently used by Tor, with integration into Tor's Rust implementation Arti ongoing. Intuitively, CGO uses an updatable tweakable split-domain cipher as its building block, which provides it with the necessary non-malleability properties while attaining better performance than the alternative approach of realising it from a wide blockcipher (with full SPRP security). However, onion encryption as used in Tor, with its various functionality features and security trade-offs, is not that well studied by the cryptographic community. As a result, the requirements of this important primitive, which protects the privacy of millions of users on a daily basis, are not well understood, and it is unclear whether CGO fulfills all its security goals.

In this work, we initiate the study of real-world symmetric onion encryption by presenting a new security model capturing Tor's leaky pipes functionality, associated data, and partial forward security, none of which were covered previously. We then use this new security model to solidify the security claims of CGO in the forward direction by proving that if the underlying primitive is a suitably secure tweakable split-domain cipher, then CGO is a secure onion encryption scheme.
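For readers unfamiliar with onion encryption, the toy sketch below shows only the generic layering pattern: the sender wraps one layer per relay and each relay peels its own. CGO replaces such naive stream-cipher layering with an updatable tweakable split-domain cipher precisely because plain XOR layering, as below, is malleable.

# Generic onion layering for intuition only; this is not CGO.
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(key: bytes, nonce: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

hop_keys = [b"k-guard", b"k-middle", b"k-exit"]
cell = b"payload for the exit relay"
onion = cell
for key in reversed(hop_keys):          # the sender wraps layers inside-out
    onion = xor_layer(key, b"nonce", onion)
for key in hop_keys:                    # each relay peels exactly one layer
    onion = xor_layer(key, b"nonce", onion)
assert onion == cell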
Zhaole Li, Deng Tang
ePrint Report
Side-channel attacks can uncover sensitive data by analyzing information leakage from cryptographic hardware devices, such as power consumption, timing, electromagnetic emanations, and glitches. One attack exploiting these leakages is differential power analysis (DPA). Threshold Implementation (TI), introduced by Nikova et al. [JoC 24(2):292-321, 2011], was proposed to resist DPA on hardware implementations of block ciphers and to eliminate information leakage due to glitches. TI is based on secret sharing and multi-party computation. Since the cost of implementing a TI is directly proportional to the number of shares, minimizing the number of shares is important. Nikova et al. proved that, for a target function of algebraic degree $t\geq 2$, the lower bound on the number of shares needed to implement a TI is $t+1$; we call a TI with $t+1$ shares an optimal TI. However, achieving this bound is challenging. To date, the only universal construction for any bijective function of algebraic degree $t\geq 2$, proposed by Piccione et al. [IEEE TIT 69(10):6700-6710, 2023], achieves a TI with $t+2$ shares. Only two studies have managed to construct optimal TIs; they either concentrated on the Feistel structure or were based on Shannon's expansion. Adding randomness can meet the $t+1$ bound, but generating randomness is expensive in practice.

Consequently, this paper endeavors to fill this gap by systematically investigating substitution-boxes (S-boxes for short) that achieve optimal TIs without additional randomness. Inspired by the Feistel structure in the design of S-boxes, we present two constructions of bijective S-boxes with optimal TIs. Of particular interest are the S-boxes constructed from two permutations with nonzero nonlinearity, making them potential candidates for S-boxes with desirable properties. As applications, our constructions explain the existence of $3$-share or $4$-share TIs for certain functions in $3$, $4$ and $5$ variables, as previously reported by Bilgin et al. [CHES 7428:76-91, 2012] and Božilov et al. [ToSC 2017(1):398-404, 2017], including $\mathcal{Q}_5^{25}$, which could not be explained by previous works. We also give the bijective S-boxes, namely Examples 4 to 11, that possess optimal TIs by our results.
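As a concrete reminder of what a threshold implementation looks like, the sketch below gives the textbook 3-share TI of the degree-2 function $f(x, y) = xy$ over GF(2) and checks correctness and non-completeness exhaustively; uniformity, the property this paper's constructions are really about, is not demonstrated.

# Textbook 3-share TI of f(x, y) = x AND y: component i never touches shares
# with index i (non-completeness), and the components XOR to the true output.
from itertools import product

def ti_and(x_shares, y_shares):
    x1, x2, x3 = x_shares
    y1, y2, y3 = y_shares
    f1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)   # independent of share index 1
    f2 = (x3 & y3) ^ (x3 & y1) ^ (x1 & y3)   # independent of share index 2
    f3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)   # independent of share index 3
    return f1, f2, f3

for xs in product((0, 1), repeat=3):
    for ys in product((0, 1), repeat=3):
        f1, f2, f3 = ti_and(xs, ys)
        x, y = xs[0] ^ xs[1] ^ xs[2], ys[0] ^ ys[1] ^ ys[2]
        assert f1 ^ f2 ^ f3 == x & y          # correctness on every sharing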
Jiaxin Pan, Runzhi Zeng
ePrint Report
We initiate the study of memory efficiency in proving the security of authenticated key exchange (AKE) protocols: we first revise the security model for AKE protocols in order to prove their security in a memory-efficient manner without compromising its ability to capture the usual attacks. We formally show that security in our model implies security in previous ones, and thus our model captures the same security as before.

After that, we propose a generic construction of AKE from key encapsulation mechanisms (KEMs) and digital signature schemes, motivated by the signed Diffie-Hellman protocol. Under the multi-user security of the signature scheme and the (relatively weak) one-way security against plaintext-checking attacks of the KEM, our generic construction is proven to be tightly secure (in terms of success probability) via memory-efficient reductions in the random oracle model. We refer to our reductions as memory-efficient rather than memory-tight, since their memory requirements grow proportionally with the number of users. This growth is not an artifact of our analysis, but rather stems from the necessity of solving the dictionary problem within our proof. By the result of Pagh (SIAM J. Computing, 2002), such user-dependent memory consumption is unavoidable. Nevertheless, our proof is more memory-efficient than the previous reductions for AKE, including even those that are non-tight. Given that most post-quantum assumptions (e.g., the Learning-With-Errors and Short-Integer-Solution assumptions) are memory-sensitive, our work holds significant value for post-quantum AKE protocols.
Sanjit Chatterjee, Tapas Pandit, Subhabrata Samajder
ePrint Report
This paper proposes modular security proofs for some identification scheme (IDS)-based signature schemes in the multivariate quadratic (MQ) setting. More precisely, our contributions include concrete security reductions for both the 3-pass and 5-pass MQDSS signature schemes in the random oracle model. While no formal security argument for the former was available in the literature, the existing one for the latter provides only a qualitative treatment. Our concrete analysis shows that the 3-pass scheme enjoys a comparatively tighter reduction. This result, considered in conjunction with a reported attack on the 5-pass MQDSS from the NIST PQC competition, indicates that, contrary to the initial suggestion, the 3-pass MQDSS could be a better choice at a concrete security level. Our next focus is the only blind signature scheme available in the MQ-setting, proposed by Petzoldt et al. While the security of the original scheme was discussed in a non-standard and significantly weaker model, we propose a concrete security reduction for a slightly modified scheme in the standard one-more unforgeability (OMF) model.

Central to all our modular proofs are new forking algorithms. The forking algorithm/lemma has been widely used in the formal security reduction of numerous cryptographic schemes, mainly in the discrete logarithm and RSA setting. The abstractions proposed here allow multiple forkings at the same index while satisfying certain additional conditions for the underlying IDS in the MQ-setting. Thus, the forking algorithms capture the nuances of the MQ-setting while being agnostic of the underlying structure.
Benjamin Fuller, Arinjita Paul, Maryam Rezapour, Ronak Sahu, Amey Shukla
ePrint Report
In searchable encryption, a data owner outsources data to a server while allowing efficient search by clients. A multimap associates keywords with a variable number of documents. We consider the setting with multiple owners and multiple clients (Wang and Papadopoulos, Cloud Computing 2023). The goal is for each owner to store a multimap and grant access to clients. Prior work shares three weaknesses:

* Restricting patterns of adversarial behavior,
* Duplicating any data shared with a new client, and
* Leaking each client's access pattern and share pattern.

We present MARS, the first SSE scheme for multiple owners and clients that considers security for an arbitrary collection of adversarial parties. Our scheme leaks only the result volume and the number of requested keywords, both of which can be padded. No data is replicated.

Our scheme combines 1) private information retrieval (PIR) to protect search patterns, 2) efficient delegation of the ability to index keywords and decrypt records, and 3) tabulation hashing to allow a single query for the locations associated with a keyword. The first two items can be thought of as an unkeyed keyword PIR in which the data owner gives the client the identifiers for individual keywords and the keys to decrypt records.
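Of the three ingredients, tabulation hashing is the simplest to show in code: the key is split into fixed-size chunks and one random table entry per chunk is XORed together, as sketched below. The tables here are freshly random; how MARS derives and delegates them is not modeled.

# Simple tabulation hashing: XOR of one random table entry per key chunk.
import secrets

CHUNKS, CHUNK_BITS, OUT_BITS = 8, 8, 32
TABLES = [[secrets.randbits(OUT_BITS) for _ in range(1 << CHUNK_BITS)]
          for _ in range(CHUNKS)]

def tabulation_hash(key: int) -> int:
    h = 0
    for i in range(CHUNKS):
        chunk = (key >> (i * CHUNK_BITS)) & ((1 << CHUNK_BITS) - 1)
        h ^= TABLES[i][chunk]
    return h

print(hex(tabulation_hash(0xDEADBEEFCAFEF00D)))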

Our system is implemented on multimaps of size up to $24$ million (the Enron dataset), with a total time of $1.2$s for keywords that match $100$ documents. This compares to a time of $0.500$s for Wang and Papadopoulos, which replicates data and has access, sharing, and query-equality leakage.

Storage overhead is a factor of $6.6$. Our implementation uses FrodoPIR as the underlying PIR. Our system can incorporate batch or doubly-efficient unkeyed PIR as their performance improves.
Jan-Pieter D'Anvers, Xander Pottier, Thomas de Ruijter, Ingrid Verbauwhede
ePrint Report
TFHE bootstrapping is typically limited to a small plaintext space, with an exponential increase in cost for larger plaintext spaces. To bootstrap larger integers, one can use digit decomposition, a procedure that iteratively extracts and bootstraps a part of the larger plaintext space. Conventional state-of-the-art methods typically extract digits starting from the least significant bits (LSBs) and progress towards the most significant bits (MSBs). We introduce a DirtyMSB extraction procedure that, for the first time, enables digit decomposition from MSBs to LSBs. This procedure introduces a small error during extraction, and we demonstrate how to compensate for this error in subsequent iterations. Compared to traditional LSB-to-MSB digit decomposition, our method improves throughput, with, for example, a 20% increase for a 5-bit plaintext and a 50% increase for an 8-bit plaintext. In contrast to LSB-to-MSB methods, our extracted output ciphertexts have fresh noise, allowing us to use the extracted outputs directly for further computation without an additional bootstrap or less efficient parameters. We demonstrate the applicability of our method by improving large-scale addition and scalar multiplication. Our method is particularly effective for vector addition, accelerating the addition of 1000 16-bit numbers by a factor of $2.75$. Furthermore, we demonstrate a $2.27\times$ speedup over the state-of-the-art implementation of scalar multiplication.
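At the plaintext level, digit decomposition is just base-$2^d$ digit extraction in either direction, as the sketch below shows; the TFHE bootstrapping, the noise analysis, and the DirtyMSB error compensation that constitute the actual contribution are not modeled.

# Plaintext-only illustration of LSB-first versus MSB-first digit extraction.
def digits_lsb_first(value: int, digit_bits: int, num_digits: int):
    out = []
    for _ in range(num_digits):
        out.append(value & ((1 << digit_bits) - 1))   # take the lowest digit
        value >>= digit_bits                          # then drop it
    return out

def digits_msb_first(value: int, digit_bits: int, num_digits: int):
    return [(value >> (i * digit_bits)) & ((1 << digit_bits) - 1)
            for i in reversed(range(num_digits))]

x = 0b10110110                                        # an 8-bit plaintext
assert digits_lsb_first(x, 2, 4) == [0b10, 0b01, 0b11, 0b10]
assert digits_msb_first(x, 2, 4) == [0b10, 0b11, 0b01, 0b10]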
Christof Beierle, Gregor Leander, Yevhen Perehuda
ePrint Report
An integral distinguisher for a block cipher is defined by a nontrivial subset of plaintexts for which the bitwise sum of (parts of) a certain internal state is independent of the secret key. Such a distinguishing property can be turned into a key-recovery procedure by partially decrypting the ciphertexts under all possible keys and then filtering the key candidates using the integral distinguisher. The behavior of this filter has never been analyzed in depth, and we show that the ubiquitous hypothesis about its behavior is incorrect.

Fortunately, the deviation is either limited or can be lifted to improve the underlying attacks. By algorithmically determining the exact subspaces of key candidates to be guessed, whose dimensions are often lower than expected, we are able to improve upon the best known integral key-recovery attacks on various ciphers.
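The sketch below shows the key-recovery filter in miniature, assuming a toy last round made of a single 4-bit S-box layer followed by key addition and a set of inputs whose XOR-sum is zero: the correct final-round key always survives the zero-sum check, while wrong guesses survive only occasionally. The paper's observation is that the surviving wrong guesses form structured subspaces rather than the uniform fraction usually hypothesised.

# Toy integral key-recovery filter over one S-box layer plus key addition.
import secrets
from functools import reduce

SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]  # PRESENT S-box
INV = [SBOX.index(v) for v in range(16)]

# Balanced set of last-round inputs: 32 nibbles whose XOR-sum is zero.
states = [secrets.randbelow(16) for _ in range(31)]
states.append(reduce(lambda a, b: a ^ b, states))
key = secrets.randbelow(16)
ciphertexts = [SBOX[x] ^ key for x in states]

survivors = [g for g in range(16)
             if reduce(lambda a, c: a ^ INV[c ^ g], ciphertexts, 0) == 0]
assert key in survivors                 # the correct key always passes
print(survivors)                        # wrong keys pass only occasionally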
Benjamin E. Diamond, Angus Gruen
ePrint Report
For each positive integer $c^*$, we construct an infinite sequence of Reed–Solomon codes $C \subset \mathbb{F}_q^n$, together with ball radii $z$, for which the proportion of $\mathbb{F}_q^n$ collectively covered by the radius-$z$ Hamming balls decays asymptotically more slowly than $\frac{n^{c^*}}{q}$ does. To pinpoint this decay rate, we develop various new, sharp combinatorial estimates, pertaining to the volumes of balls and their intersections.

Our result proves that the capacity conjecture of Ben-Sasson, Carmon, Ishai, Kopparty and Saraf (J. ACM '23) is false. Our code families' relative rates converge to 0 and their relative radii converge to 1. We suggest avenues by which the capacity conjecture might be resuscitated; roughly, we suggest restricting the conjecture to families whose relative rates are bounded from below by a positive constant. Our work shows that many deployed SNARKs may be less secure than was formerly, and optimistically, assumed.
Hariprasad Kelassery Valsaraj, Prasanna Ravi, Shivam Bhasin
ePrint Report
Post-quantum cryptographic schemes like ML-KEM and ML-DSA have been standardized to secure digital communication against quantum threats. While their theoretical foundations are robust, we identify a critical implementation-level vulnerability in both: a single point of failure centered on the random seed pointer used in polynomial sampling. By corrupting this pointer, an attacker can deterministically compromise the entire scheme, bypassing standard countermeasures. We present the first practical fault-injection attacks exploiting this weakness and validate them on an STM32H7 microcontroller using laser fault injection. Our results demonstrate full key and message recovery for ML-KEM and signature forgery for ML-DSA, with success rates up to 100%. We further verify the presence of this vulnerable implementation style in widely used public libraries, including PQM4, LibOQS, PQClean, and WolfSSL, and propose effective countermeasures to mitigate this overlooked yet severe threat.
Alexandra Henzinger, Seyoon Ragavan
ePrint Report
We build two-server private information retrieval (PIR) that achieves information-theoretic security and strong double-efficiency guarantees. On a database of $n > 10^6$ bits, the servers store a preprocessed data structure of size $1.5 \sqrt{\log_2 n} \cdot n$ bits and then answer each PIR query by probing $12 \cdot n^{0.82}$ bits in this data structure. To our knowledge, this is the first information-theoretic PIR with any constant number of servers that has quasilinear server storage $n^{1+o(1)}$ and polynomially sublinear server time $n^{1-\Omega(1)}$.

Our work builds on the PIR-with-preprocessing protocol of Beimel, Ishai, and Malkin (CRYPTO 2000). The insight driving our improvement is a compact data structure for evaluating a multivariate polynomial and its derivatives. Our data structure and PIR protocol leverage the fact that Hasse derivatives can be efficiently computed on-the-fly by taking finite differences between the polynomial's evaluations. We further extend our techniques to improve the state-of-the-art in PIR with three or more servers, building on recent work by Ghoshal, Li, Ma, Dai, and Shi (TCC 2025).
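The identity behind that finite-difference trick is the Hasse-Taylor expansion $f(x + t) = \sum_k D_k f(x) \cdot t^k$ over a finite field, so the Hasse derivatives at $x$ are recoverable from evaluations of $f$ along a line through $x$. The univariate check below verifies the identity on a toy example; the multivariate data structure and the PIR protocol built on top of it are not reproduced.

# Verify f(x + t) = sum_k D_k f(x) * t^k over F_p for the Hasse derivatives D_k.
from math import comb
import random

P = 101                                    # toy prime field
coeffs = [7, 3, 0, 5, 9]                   # f(X) = 7 + 3X + 5X^3 + 9X^4 over F_P

def evaluate(c, x):
    return sum(ci * pow(x, i, P) for i, ci in enumerate(c)) % P

def hasse_derivative(c, k):
    # D_k f has coefficient binomial(i, k) * c_i attached to X^(i - k).
    return [comb(i, k) * ci % P for i, ci in enumerate(c)][k:]

x, t = random.randrange(P), random.randrange(P)
taylor = sum(evaluate(hasse_derivative(coeffs, k), x) * pow(t, k, P)
             for k in range(len(coeffs))) % P
assert taylor == evaluate(coeffs, x + t)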

On a 55 GB database with 64-byte records, our two-server PIR encodes the database into a 1 TB data structure, which is 1,600,000$\times$ smaller than that of prior two-server PIR-with-preprocessing schemes, while maintaining the same communication and time per query. To answer a PIR query, the servers probe 102 MB from this data structure, requiring 550$\times$ fewer memory accesses than linear-time PIR. The main limitation of our protocol is its large communication complexity, which we show how to shrink to $n^{0.31} \cdot \mathsf{poly}(\lambda)$ using compact linearly homomorphic encryption.