International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

23 September 2025

Jeremiah Blocki, Seunghoon Lee, Brayan Sebastian Yepes-Garcia
ePrint Report
We initiate the study of differentially private data-compression schemes, motivated by the insecurity of the popular "Compress-Then-Encrypt" framework. Data compression is a useful tool which exploits redundancy in data to reduce storage/bandwidth when files are stored or transmitted. However, if the contents of a file are confidential, then the length of a compressed file might leak confidential information about the content of the file itself. Encrypting a compressed file does not eliminate this leakage, as data encryption schemes are only designed to hide the content of a confidential message, not its length. In our proposed Differentially Private Compress-Then-Encrypt framework, we add a random positive amount of padding to the compressed file to ensure that any leakage satisfies the rigorous privacy guarantee of $(\epsilon,\delta)$-differential privacy. The amount of padding that needs to be added depends on the sensitivity of the compression scheme to small changes in the input, i.e., to what degree changing a single character of the input message can impact the length of the compressed file. While some popular compression schemes are highly sensitive to small changes in the input, we argue that effective data compression schemes do not necessarily have high sensitivity. Our primary technical contribution is analyzing the fine-grained sensitivity of the LZ77 compression scheme (IEEE Trans. Inf. Theory 1977), which is one of the most common compression schemes used in practice. We show that the global sensitivity of the LZ77 compression scheme has the upper bound $\mathcal{O}(W^{2/3}\log n)$, where $W\leq n$ denotes the size of the sliding window. When $W=n$, we show the lower bound $\Omega(n^{2/3}\log^{1/3}n)$ for the global sensitivity of the LZ77 compression scheme, which is tight up to a sublogarithmic factor.
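The pad-then-encrypt idea can be illustrated with a toy sketch (not the paper's exact mechanism): compress, then append a random number of padding bytes drawn from a truncated one-sided geometric distribution. Deriving the distribution's parameters from the compression scheme's sensitivity, and accounting for the truncation bound in the $\delta$ term, is precisely the kind of analysis the paper carries out; the parameter choices below are illustrative only.

```python
import math
import random
import zlib

def geometric_padding(sensitivity: int, epsilon: float, max_pad: int) -> int:
    """Sample a non-negative pad length from a truncated geometric
    distribution -- a standard discrete mechanism for length hiding.
    Truncation at max_pad is what introduces a delta term."""
    p = 1.0 - math.exp(-epsilon / sensitivity)
    pad = 0
    while pad < max_pad and random.random() > p:
        pad += 1
    return pad

def dp_compress(message: bytes, sensitivity: int, epsilon: float,
                max_pad: int = 4096) -> bytes:
    """Compress, then pad; the padded blob is what gets encrypted,
    so the ciphertext length only reveals the noised length."""
    compressed = zlib.compress(message)
    pad = geometric_padding(sensitivity, epsilon, max_pad)
    # Length-prefix the payload so padding can be stripped after decryption.
    header = len(compressed).to_bytes(4, "big")
    return header + compressed + bytes(pad)

def dp_decompress(blob: bytes) -> bytes:
    n = int.from_bytes(blob[:4], "big")
    return zlib.decompress(blob[4:4 + n])
```

Decompression recovers the message exactly; only the observable length is randomized.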
Arman Riasi, Haodi Wang, Rouzbeh Behnia, Viet Vo, Thang Hoang
ePrint Report
Artificial Intelligence as a Service (AIaaS) enables users to query a model hosted by a service provider and receive inference results from a pre-trained model. Although AIaaS makes artificial intelligence more accessible, particularly for resource-limited users, it also raises verifiability and privacy concerns for the client and server, respectively. While zero-knowledge proof techniques can address these concerns simultaneously, they incur high proving costs due to the non-linear operations involved in AI inference and suffer from precision loss because they rely on fixed-point representations to model real numbers.

In this work, we present ZIP, an efficient and precise commit-and-prove zero-knowledge SNARK for AIaaS inference (both linear and non-linear layers) that natively supports IEEE-754 double-precision floating-point semantics while addressing reliability and privacy challenges inherent in AIaaS. At its core, ZIP introduces a novel relative-error-driven technique that efficiently proves the correctness of complex non-linear layers in AI inference computations without any loss of precision, and hardens existing lookup-table and range proofs with novel arithmetic constraints to defend against malicious provers. We implement ZIP and evaluate it on standard datasets (e.g., MNIST, UTKFace, and SST-2). Our experimental results show that, for non-linear activation functions, ZIP reduces circuit size by up to three orders of magnitude while maintaining the full precision required by modern AI workloads.
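In the clear, a relative-error constraint on a non-linear layer is just an inequality between the claimed and true outputs; the hard part, which ZIP addresses, is proving it inside a circuit over floating-point values. A toy sketch of the plain check (all names and the tolerance are illustrative, not from the paper):

```python
import math

def relative_error_ok(x: float, claimed: float, f, rel_tol: float) -> bool:
    """Check that a claimed output of a non-linear layer is within a
    relative-error bound of the true value f(x). In a ZK setting the
    prover would commit to `claimed` and prove this inequality
    in-circuit; here we simply evaluate it in the clear."""
    exact = f(x)
    if exact == 0.0:
        return abs(claimed) <= rel_tol
    return abs(claimed - exact) / abs(exact) <= rel_tol

# Example non-linear activation: the logistic sigmoid.
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
```

An honest prover's output passes with a tight tolerance; an output off by 10% fails a 0.1% tolerance.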
Víctor Duarte Melo, William J Buchanan
ePrint Report
Whilst many key exchange and digital signature systems still rely on NIST P-256 (secp256r1) and secp256k1, offering around 128-bit security, there is an increasing demand for transparent and reproducible curves at the 256-bit security level. Standard higher-security options include NIST P-521, Curve448, and Brainpool-P512. This paper presents ECCFROG522PP ('Presunto Powered'), a 522-bit prime-field elliptic curve that delivers security in the same classical $\sim$260-bit ballpark as NIST P-521, but with a fundamentally different design philosophy. All of the curve parameters are deterministically derived from a fixed public seed via BLAKE3, with zero hidden choices. The curve has prime order (cofactor = 1), a verified twist with a proven approximate 505-bit prime factor, safe embedding degree, and passes anti-MOV checks up to k < 200 and CM discriminant sanity up to 100k. Unlike prior opaque or ad-hoc constructions, ECCFROG522PP is fully reproducible: anyone can regenerate and verify it byte-for-byte using the published scripts. The intent is not to outperform NIST P-521 in raw speed, but to maximise trust, verifiability, and long-term auditability in a practical curve of equivalent security level.
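The seed-to-parameters pattern the abstract describes (hash a public seed with a counter until a candidate with the required property appears) can be sketched as follows. This is a toy illustration, not the published derivation: it uses `hashlib.blake2b` as a stand-in because BLAKE3 is not in the Python standard library, and the domain-separation label is invented here.

```python
import hashlib
import random

def derive_candidate(seed: bytes, label: bytes, counter: int,
                     bits: int = 522) -> int:
    """Nothing-up-my-sleeve derivation of a bits-long odd integer from a
    public seed. ECCFROG522PP itself uses BLAKE3; blake2b is a stand-in."""
    out = b""
    i = 0
    while len(out) * 8 < bits:
        out += hashlib.blake2b(seed + label + counter.to_bytes(8, "big")
                               + i.to_bytes(4, "big")).digest()
        i += 1
    n = int.from_bytes(out, "big") >> (len(out) * 8 - bits)
    return n | (1 << (bits - 1)) | 1  # force full bit length and oddness

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin; good enough for a sketch."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def derive_field_prime(seed: bytes) -> int:
    """Increment the counter until the derived candidate is prime, so
    anyone can re-run the search and get byte-identical output."""
    ctr = 0
    while True:
        p = derive_candidate(seed, b"field-prime", ctr)
        if is_probable_prime(p):
            return p
        ctr += 1
```

Determinism is the point: the same seed always yields the same prime, so verification is a re-run.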
Damiano Abram, Serge Fehr, Maciej Obremski, Peter Scholl
ePrint Report
One-round secure computation is generally believed impossible due to the residual function attack: any honest-but-curious participant can replay the protocol in their head changing their input, and learn, in this way, a new output. Inputless functionalities are among the few that are immune to this problem.

This paper studies one-round, multi-party computation (MPC) protocols that implement the most natural inputless functionality: one that generates a random sample from a fixed distribution. These are called distributed samplers. At Eurocrypt 2022, Abram, Scholl and Yakoubov showed how to build this primitive in the semi-honest model with dishonest majority. In this work, we give a lower bound for constructing distributed samplers with a malicious adversary in the standard model. In more detail, we show that for any construction in the stand-alone model with black-box simulation, even with a CRS and honest majority, the output of the sampling protocol must have low entropy. This essentially implies that this type of construction is useless in applications. Our proof is based on an entropic argument, drawing a new connection between computationally secure MPC, information theory and learning theory.
Mohammad Hashemi, Domenic Forte, Fatemeh Ganji
ePrint Report
The rapid growth of deep learning (DL) has raised serious concerns about users’ data and neural network (NN) models’ security and privacy, particularly the risk of backdoor insertion when outsourcing the training or employing pre-trained models. To ensure resilience against such backdoor attacks, this work presents GuardianMPC, a novel framework leveraging secure multiparty computation (MPC). GuardianMPC is built upon garbled circuits (GC) within the LEGO protocol framework to accelerate oblivious inference on FPGAs in the presence of malicious adversaries that can manipulate the model weights and/or insert a backdoor in the architecture of a pre-trained model. In this regard, GuardianMPC is the first to offer private function evaluation in the LEGO family. GuardianMPC also supports private training to effectively counter backdoor attacks targeting NN model architectures and parameters. With optimized pre-processing, GuardianMPC significantly accelerates the online phase, achieving computation up to 13.44× faster than its software counterparts. Our experimental results for multilayer perceptrons (MLPs) and convolutional neural networks (CNNs) assess GuardianMPC’s time complexity and scalability across diverse NN model architectures. Interestingly, GuardianMPC does not adversely affect the training accuracy, as opposed to many existing private training frameworks. These results confirm GuardianMPC as a high-performance, model-agnostic solution for secure NN computation with robust security and privacy guarantees.
Arsalan Ali Malik, Furkan Aydin, Aydin Aysu
ePrint Report
Fault injection attacks (FIAs) present a significant threat to the integrity of deep neural networks (DNNs), particularly in hardware-accelerated deployments on field-programmable gate arrays (FPGAs). These attacks intentionally introduce faults into the system, leading the DNN to generate incorrect outputs. This work presents the first successful targeted misclassification attack against a convolutional neural network (CNN) implemented on FPGA hardware, achieved by injecting a single clock glitch at the final layer (argmax) to manipulate the predicted output class. Our attack targets the commonly adopted argmax layer, a lightweight replacement for softmax in resource-constrained implementations. By precisely injecting a single clock glitch during the comparison phase of the argmax operation, the attack reliably induces misclassifications, forcing the model to 'skip' a specifically chosen class and output an incorrect label for it without affecting the computed scores of other classes.

Unlike prior works that only cause random misclassifications, our attack achieves a high success rate of 80–87% for a targeted class, without inducing collateral misclassifications of other classes. Our evaluations show a significant reduction in classification accuracy, with the model’s performance dropping from an initial 94.7% to an average final accuracy ranging from 7.7–14.7%. Our attack is demonstrated on a CNN model implemented using a common systolic array architecture, which is well-suited for resource-constrained edge devices and artificial intelligence (AI) accelerators. Our study confirms the vulnerability of hardware-accelerated machine learning systems to low-cost physical attacks, emphasizing the critical need for hardware-level countermeasures in safety-critical machine learning applications.
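Why a single glitched comparison suffices can be seen in a toy software model of a sequential (streaming) argmax, the kind a systolic design computes one class at a time. Modeling the clock glitch as one skipped comparison (an assumption for illustration; the hardware fault mechanism in the paper is more involved) shows the targeted class being 'skipped' while all scores stay untouched:

```python
def argmax_with_glitch(scores, glitch_index=None):
    """Sequential argmax as a streaming comparator would compute it.
    A glitch at step i is modeled as that comparison being skipped,
    so the running maximum is not updated even if scores[i] is larger."""
    best_i, best_v = 0, scores[0]
    for i in range(1, len(scores)):
        if i == glitch_index:
            continue  # faulted step: the result of scores[i] > best_v is lost
        if scores[i] > best_v:
            best_i, best_v = i, scores[i]
    return best_i
```

With scores `[0.1, 0.7, 0.2, 0.9, 0.05]`, the fault-free argmax is class 3; glitching step 3 makes the unit report the runner-up, class 1, with every score unchanged.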
Armando Faz-Hernandez
ePrint Report
Prio, designed under privacy-by-design principles, is a protocol for aggregating client-provided measurements between non-colluding entities. The validity of measurements is determined by using a fully linear probabilistically-checkable proof (FLPCP). The Prover distributes secret shares of the measurement and the proof to multiple Verifiers. These Verifiers can only use linear queries on the input statement for validation, without accessing the actual measurement. Efficiency is key for the practical application of Prio. The FLPCP operates with polynomials represented in the Lagrange basis using roots of unity as the nodes. However, we observe opportunities to improve its performance by embracing the Lagrange basis more extensively. For instance, we show an inversion-free O(n) time-complexity algorithm for polynomial evaluation in the Lagrange basis (an alternative to the classic rational barycentric formula). By applying our methods to libprio-rs, a cutting-edge Rust implementation, the Sharding phase (proof generation) runs 36% faster and the Prep-Init phase (proof verification) is twice as fast, showing a substantial acceleration of the most time-consuming phases of Prio.
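For context, here is a toy field-arithmetic sketch of Lagrange-basis evaluation that keeps inversions out of the per-evaluation loop: barycentric weights are inverted once up front (and have closed forms when the nodes are roots of unity), and the products $\prod_{k\neq j}(x - x_k)$ come from prefix/suffix arrays. This is only a plausible illustration of the flavor of such algorithms; the paper's actual inversion-free algorithm may differ, and the modulus here is not Prio's field.

```python
Q = 2**61 - 1  # an illustrative prime modulus, not Prio's actual field

def eval_lagrange(nodes, values, x, q=Q):
    """Evaluate the degree-(n-1) interpolant through (nodes[j], values[j])
    at x. No inversions occur in the evaluation loop: the barycentric
    weights are inverted once (precomputable), and prefix/suffix products
    supply prod_{k != j} (x - nodes[k])."""
    n = len(nodes)
    # Barycentric weights w_j = 1 / prod_{k != j} (nodes[j] - nodes[k]).
    w = []
    for j in range(n):
        d = 1
        for k in range(n):
            if k != j:
                d = d * (nodes[j] - nodes[k]) % q
        w.append(pow(d, q - 2, q))  # Fermat inversion, done once per node
    # prefix[j] = prod_{k < j} (x - nodes[k]); suffix[j] = prod_{k > j} (...).
    prefix = [1] * n
    for j in range(1, n):
        prefix[j] = prefix[j - 1] * (x - nodes[j - 1]) % q
    suffix = [1] * n
    for j in range(n - 2, -1, -1):
        suffix[j] = suffix[j + 1] * (x - nodes[j + 1]) % q
    return sum(values[j] * w[j] % q * prefix[j] % q * suffix[j]
               for j in range(n)) % q
```

For the points (1, 2), (2, 5), (3, 10), which lie on t² + 1, evaluating at 5 gives 26, and evaluating at a node returns the stored value.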
Elif Ozbay Gurler, Patrick Struck
ePrint Report
In this work we show obstacles when constructing identity-based encryption (IBE) from isogenies. We first give a modular description for IBEs, which we call a canonical IBE, consisting of two components: an identity key derivation scheme and a public-key encryption scheme. This allows us to investigate the identity key derivation scheme (where the obstacles are rooted) in isolation. We present several approaches, showing that they can either not be realized—extracting the secret keys would require solving the underlying hardness assumption—or result in IBE schemes that are insecure—users can use their secret keys to compute secret keys of other users. Finally, we identify properties for isogeny-based trapdoor functions that one would need in order to overcome the identified obstacles.
Navid Abapour, Amir Goharshady, Catalin Dragan, Mahdi Mahdavi
ePrint Report
Electronic voting has demonstrated that it streamlines the democratic process, making it more convenient for citizens and enhancing the accuracy and speed of election results in real-world scenarios in the US, Estonia, Switzerland, and many other countries. One major challenge for e-voting, especially online voting, is ensuring that voting and tallying devices behave honestly, particularly in cases involving monetary transactions. These are addressed by economic voting, where everything is on-chain; in essence, voters utilize smart contracts to conduct all voting stages. There are very few results on economic voting, and none of them is post-quantum secure. The challenge comes from having the entire voting system run by smart contracts. In this work, we propose the first post-quantum economic voting scheme, which combines hybrid on- and off-chain operations, called the Post-Quantum Blind Vote (PQBV). The core idea is to utilize smart contracts that enable blind signatures during the voting process. We enhance our contribution by introducing a post-quantum blind signature with Posterior Security, as proposed by Yuen et al. (CCS 2025), which retroactively enhances the privacy of already generated signatures. This has a significant impact on PQBV, as it is able to satisfy formal cryptographic privacy definitions, including ballot privacy. Our efficiency analysis reveals competitive performance compared to existing state-of-the-art post-quantum e-voting systems, such as Epoque (EuroS&P 2021), which does not use a blockchain.
Rebekah Mercer, Kaoutar El Khiyaoui, Angelo De Caro, Elli Androulaki
ePrint Report
Anonymous credential schemes allow users to prove claims about themselves in a privacy-preserving manner. Put simply, a user can prove that she has an attribute value (e.g., she is a citizen) without revealing any other information about herself. Traditional schemes (including those with multiple credential issuers) operate under the assumption that each recognized issuer produces credentials for all user attributes. This assumption, however, is not practical: in the real world, no such issuer exists; an identity provider today issues a user credentials for a subset of her attributes, not all of them. A promising approach to support this setting is aggregate anonymous credentials, which allow a user to receive credentials from several issuers such that she can aggregate them and prove various claims about her attribute values in one shot. In this paper, we first introduce what we call aggregate tag-based signatures and describe an efficient instantiation. We then leverage the latter together with structure-preserving signatures and signatures of knowledge to construct an efficient aggregate anonymous credential scheme. Finally, we formally evaluate the security of the proposed schemes and run benchmarks to showcase the practicality of the resulting scheme and its relevance for decentralized identity applications.
Jesko Dujmovic, Christoph U. Günther, Krzysztof Pietrzak
ePrint Report
We introduce and construct a new proof system called Non-interactive Arguments of Knowledge or Space (NArKoS), where a space-bounded prover can convince a verifier that they know a secret, while access to sufficient space allows one to forge indistinguishable proofs without the secret.

An application of NArKoS is space-deniable proofs, which are proofs of knowledge (say, for authentication in access control) that are sound when executed by a lightweight device, like a smart card or an RFID chip, that cannot have much storage, but are deniable (in the strong sense of online deniability) since the verifier, like a card reader, can efficiently forge such proofs.

We construct NArKoS in the random oracle model using an OR-proof combining a sigma protocol (for the proof of knowledge of the secret) with a new proof system called simulatable Proof of Transient Space (simPoTS). We give two different constructions of simPoTS, one based on labelling graphs with high pebbling complexity, a technique used in the construction of memory-hard functions and proofs of space, and a more practical construction based on the verifiable space-hard functions from TCC'24 where a prover must compute a root of a sparse polynomial. In both cases, the main challenge is making the proofs efficiently simulatable.
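The simulation trick underlying such OR-compositions can be seen in a self-contained toy: a Schnorr-style OR-proof in which the unknown branch is simulated by choosing its challenge and response first. This is only loosely analogous to the abstract's sigma-protocol/simPoTS composition; the group below is a tiny schoolbook group and all parameters are illustrative.

```python
import hashlib
import secrets

# Toy group: p = 2q + 1 with q prime, g generating the order-q subgroup.
# Real deployments would use a cryptographically sized group.
P, Q_ORD, G = 2039, 1019, 4

def H(*vals) -> int:
    """Fiat-Shamir challenge hash into Z_q."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q_ORD

def or_prove(h1, h2, x, known):
    """Prove knowledge of log_g(h1) OR log_g(h2), knowing branch `known`.
    The other branch is simulated: its challenge and response are picked
    first and its commitment is solved for, so the transcript verifies
    without the corresponding secret."""
    hs = (h1, h2)
    sim = 1 - known
    c_sim, s_sim = secrets.randbelow(Q_ORD), secrets.randbelow(Q_ORD)
    t_sim = pow(G, s_sim, P) * pow(hs[sim], -c_sim, P) % P
    r = secrets.randbelow(Q_ORD)
    t_real = pow(G, r, P)
    ts = [0, 0]
    ts[known], ts[sim] = t_real, t_sim
    c = H(h1, h2, ts[0], ts[1])
    c_real = (c - c_sim) % Q_ORD          # challenges must sum to c
    s_real = (r + c_real * x) % Q_ORD
    cs, ss = [0, 0], [0, 0]
    cs[known], cs[sim] = c_real, c_sim
    ss[known], ss[sim] = s_real, s_sim
    return ts, cs, ss

def or_verify(h1, h2, proof) -> bool:
    ts, cs, ss = proof
    if (cs[0] + cs[1]) % Q_ORD != H(h1, h2, ts[0], ts[1]):
        return False
    return all(pow(G, ss[i], P) == ts[i] * pow((h1, h2)[i], cs[i], P) % P
               for i in range(2))
```

The verifier cannot tell which branch was real, which is exactly the property a simulatable second branch (like simPoTS) must provide.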
Jack Doerner, Iftach Haitner, Yuval Ishai, Nikolaos Makriyannis
ePrint Report
Oblivious Linear Evaluation (OLE) is an algebraic generalization of oblivious transfer (OT) that forms a critical part of a growing number of applications. An OLE protocol over a modulus $q$ enables the receiver party to securely evaluate a line $a\cdot X+b$ chosen by the sender party on a secret point $x\in\mathbb{Z}_q$. Motivated by the big efficiency gap between OLE and OT and by fast OT extension techniques, we revisit the question of reducing OLE to OT, aiming to improve the communication cost of known reductions.

We start by observing that the Chinese Remainder Theorem (CRT) can be combined with a prior protocol of Gilboa (Crypto ’99) to reduce its communication cost from $O(\ell^2)$ to $\tilde O(\ell)$ bits, for $\ell=\log q$. Unfortunately, whereas Gilboa's protocol is secure against a semi-honest sender and a malicious receiver, a direct application of the CRT technique is only semi-honest secure (it is insecure against malicious receivers). Thus, we employ number-theoretic techniques to protect our CRT-based protocol against malicious receivers, while still retaining a concrete advantage over Gilboa's protocol (e.g., $10.2\times$ less communication for $\ell=256$). Furthermore, we obtain a fully malicious OLE-to-OT reduction by applying either information-theoretic techniques with moderate overhead, or RSA-based cryptographic techniques with very low overhead.

We demonstrate the usefulness of our results in the context of OLE applications, including a post-quantum oblivious pseudorandom function (OPRF) and distributed signatures. In particular, assuming pre-existing random OT correlations, we can use our malicious-receiver OLE protocol to realize (a single instance of) the power-residue based OPRF candidate with security against a malicious client and a semi-honest server using only 1.14 KB of communication, a $16\times$ improvement over the best previous protocol in this setting. Using our RSA-based fully malicious OLE protocol, we achieve a $5\times$ communication improvement over previous OT and EC-based distributed ECDSA protocols. Compared to other ECDSA protocols (including ones that use Paillier and class groups), the communication gains are more modest, but come at only a fraction of the computational cost as we avoid all expensive group operations.
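The CRT observation from the abstract, stripped of all protocol machinery, is plain arithmetic: an OLE output $z = a\cdot x + b \bmod M$ over a large modulus can be assembled from the outputs of independent OLEs over small pairwise-coprime moduli. The sketch below shows only that insecure, in-the-clear recombination (the moduli are illustrative); the actual protocol runs an OT-based OLE per modulus and, as the abstract notes, must be hardened against malicious receivers.

```python
from math import prod

def crt_combine(residues, moduli):
    """Chinese Remainder reconstruction: given z mod m_i for pairwise
    coprime m_i, return the unique z mod prod(m_i)."""
    M = prod(moduli)
    z = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        z += r * Mi * pow(Mi, -1, m)  # Mi * Mi^{-1} = 1 (mod m)
    return z % M

# OLE target z = a*x + b over Z_M, assembled from per-modulus results.
moduli = [251, 256, 255]  # pairwise coprime small moduli (illustrative)
M = prod(moduli)
a, b, x = 123456, 98765, 4242
# Each residue below would be the output of one small-modulus OLE.
shares = [(a % m * (x % m) + b % m) % m for m in moduli]
z = crt_combine(shares, moduli)
assert z == (a * x + b) % M
```

Each small-modulus instance is cheap over OT, which is where the communication savings over a single large-modulus Gilboa instance come from.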
Adrian Neal
ePrint Report
Information-theoretic security (ITS) offers the strongest known form of cryptographic protection, guaranteeing confidentiality even against adversaries with unbounded computational power. However, Shannon’s perfect secrecy theorem requires keys as long as the message, which has made ITS widely regarded as impractical for real-world deployment.

This paper updates Q-Stream, introduced in prior work (“A Quantum-Safe Key-Distribution Mechanism having Non-Conjectured Hardness, while scalable for a Vernam Cipher, under Shannon Conditions,” Proceedings of the Future Technologies Conference (FTC 2025), Springer LNNS, Munich, 2025), the first practical system to achieve ITS under the framework of Operational Perfect Secrecy (OPS), which was also introduced in that work. Q-Stream uses ephemeral public quantum-random blocks (Q-Blocks) combined with short secret defragmentation keys (DFKs) to produce one-time pads with provable OPS security levels. The system implements both Combinatorial ITS (C-ITS) and Dimensional Ambiguity ITS (DA-ITS) modes, providing tunable secrecy levels while drastically reducing the burden of key distribution.

We describe Q-Stream’s architecture, protocol design, security analysis, and implementation, and evaluate its performance against conventional and post-quantum cryptography. Our results show that Q-Stream delivers high-throughput, low-latency encryption while providing provable information-theoretic confidentiality, demonstrating that OPS-based security is practical at scale.