International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

24 April 2025

Hemin Rahimi, Amir Moradi
ePrint Report
Safeguarding cryptographic implementations against the increasing threat of Side-Channel Analysis (SCA) attacks is essential. Masking, a countermeasure that randomizes intermediate values, is a cornerstone of such defenses. In particular, an SCA-secure implementation of AES, the most widely used encryption standard, can employ Boolean masking as well as multiplicative masking due to its underlying Galois field operations. However, multiplicative masking is susceptible to vulnerabilities, including the zero-value problem, which was identified right after the introduction of multiplicative masking. At CHES 2018, De Meyer et al. proposed a hardware-based approach to manage these challenges and implemented multiplicative masking for AES, incorporating a Kronecker delta function and randomness optimization. In this work, we evaluate their design using the PROLEAD evaluation tool under the glitch- and transition-extended probing model. Our findings reveal a critical vulnerability in their first- and second-order implementations of the Kronecker delta function, stemming from the employed randomness optimization. This leakage compromises the security of their masked AES Sbox. After pinpointing the source of this leakage, we propose an alternative randomness optimization for the first-order design to address the issue, and demonstrate its effectiveness through rigorous evaluation with PROLEAD.
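As a minimal illustration of the Boolean masking idea discussed above (a toy Python sketch, not the hardware design of De Meyer et al.; the share count and byte width are illustrative):

```python
import secrets

def mask_byte(x: int, order: int = 1):
    # Boolean masking: represent x as order+1 shares with x = s0 ^ s1 ^ ... ^ sd,
    # so any `order` shares are jointly uniform and reveal nothing about x
    rand = [secrets.randbits(8) for _ in range(order)]
    s0 = x
    for r in rand:
        s0 ^= r
    return [s0] + rand

def unmask(shares):
    x = 0
    for s in shares:
        x ^= s
    return x

# contrast with multiplicative masking x -> x * m: if x == 0, the masked
# value is 0 for *every* mask m -- the zero-value problem mentioned above
assert unmask(mask_byte(0xA7)) == 0xA7
```

The randomized shares make each individual intermediate value statistically independent of the secret, which is what a probing-model evaluation such as PROLEAD checks.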
Alex B. Grilo, Álvaro Yángüez
ePrint Report
While one-way functions (OWFs) serve as the minimal assumption for computational cryptography in the classical setting, quantum cryptography admits even weaker cryptographic assumptions, such as pseudorandom states and EFI pairs, among others. Moreover, the minimal assumption for computational quantum cryptography remains an open question. Recently, it has been shown that pseudoentanglement is necessary for the existence of quantum cryptography (Goulão and Elkouss 2024), but no cryptographic construction has been built from it.

In this work, we study the cryptographic usefulness of quantum pseudoresources: a pair of families of quantum states that exhibit a gap in their resource content yet remain computationally indistinguishable. We show that quantum pseudoresources imply a variant of EFI pairs, which we call EPFI pairs, and that these are equivalent to quantum commitments and thus EFI pairs. Our results suggest that, just as randomness is fundamental to classical cryptography, quantum resources may play a similarly crucial role in the quantum setting.

Finally, we focus on the specific case of entanglement, analyzing different definitions of pseudoentanglement and their implications for constructing EPFI pairs. Moreover, we propose a new cryptographic functionality that is intrinsically dependent on entanglement as a resource.
Gorjan Alagic, Joseph Carolan, Christian Majenz, Saliha Tokat
ePrint Report
The sponge is a cryptographic construction that turns a public permutation into a hash function. When instantiated with the Keccak permutation, the sponge forms the NIST SHA-3 standard. SHA-3 is a core component of most post-quantum public-key cryptography schemes slated for worldwide adoption.

While one can consider many security properties for the sponge, the ultimate one is \emph{indifferentiability from a random oracle}, or simply \emph{indifferentiability}. The sponge was proved indifferentiable against classical adversaries by Bertoni et al. in 2008. Despite significant efforts in the years since, little is known about sponge security against quantum adversaries, even for simple properties like preimage or collision resistance beyond a single round. This is primarily due to the lack of a satisfactory quantum analog of the lazy sampling technique for permutations.

In this work, we develop a specialized technique that overcomes this barrier in the case of the sponge. We prove that the sponge is in fact indifferentiable from a random oracle against quantum adversaries. Our result establishes that the domain extension technique behind SHA-3 is secure in the post-quantum setting. Our indifferentiability bound for the sponge is a loose $O(\mathsf{poly}(q) 2^{-\min(r, c)/4})$, but we also give bounds on preimage and collision resistance that are tighter.
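As background, the sponge mode itself is simple to state. The following toy Python sketch illustrates the absorb/squeeze structure, with rate and capacity playing the roles of $r$ and $c$ in the bound above; the permutation here is a made-up invertible mixing step with no cryptographic strength, standing in for Keccak-f, and the parameters and padding are simplified for illustration:

```python
RATE, CAP = 8, 24        # toy rate/capacity in bytes (SHA3-256 uses r=1088, c=512 bits)
STATE = RATE + CAP       # 32-byte permutation state

def perm(state: bytes) -> bytes:
    # toy public permutation: byte rotation plus invertible chained additions;
    # genuinely a bijection on 32-byte states, but cryptographically worthless
    s = bytearray(state)
    for rnd in range(8):
        s = s[1:] + s[:1]                          # rotate bytes
        for i in range(STATE):
            s[i] = (s[i] + s[i - 1] + rnd) & 0xFF  # invertible chained add
    return bytes(s)

def sponge_hash(msg: bytes, outlen: int = 32) -> bytes:
    # simplified 10*-style padding to a multiple of RATE
    msg = msg + b"\x01" + b"\x00" * ((-(len(msg) + 1)) % RATE)
    state = bytes(STATE)
    for i in range(0, len(msg), RATE):             # absorb: XOR block into rate part
        block = msg[i:i + RATE] + bytes(CAP)
        state = perm(bytes(a ^ b for a, b in zip(state, block)))
    out = b""
    while len(out) < outlen:                       # squeeze: read rate part, permute
        out += state[:RATE]
        state = perm(state)
    return out[:outlen]
```

The capacity bytes are never directly exposed or overwritten by the message, which is exactly the structure the indifferentiability proof has to reason about.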
Gennaro Avitabile, Vincenzo Botta, Dario Fiore
ePrint Report
Traceable ring signatures enhance ring signatures by adding an accountability layer. Specifically, if a party signs two different messages within the protocol, their identity is revealed. Another desirable feature is $\textit{extendability}$. In particular, $\textit{extendable threshold}$ ring signatures (ETRS) allow already finalized signatures to be $\textit{non-interactively}$ updated by enlarging the ring or the set of signers.

Combining traceability and extendability in a single scheme is unexplored and would offer a new tool for privacy-preserving voting schemes in scenarios where the voters are not known in advance. In this paper, we show how to reconcile both properties by introducing and constructing a new cryptographic primitive called Tetris. Notably, our Tetris construction simultaneously achieves strong anonymity and linear-size signatures, which is the main technical challenge in existing techniques. To solve this challenge, we develop a new approach to traceability that leads to several conceptual and technical contributions. Among those, we introduce and construct, based on Groth-Sahai proofs, $\textit{extendable}$ shuffle arguments that can be $\textit{non-interactively}$ updated by several provers.
Jaeseon Kim, Jeongeun Park, Hyewon Sung
ePrint Report
Private information retrieval (PIR) enables a client to retrieve data from a server while preserving the confidentiality of the client's query. When PIR is instantiated with fully homomorphic encryption (FHE), the protocol becomes non-interactive, requiring only a query-answer exchange, and it achieves asymptotically optimal communication and computation complexity. Although several FHE-based PIR protocols have been practically implemented with the desired properties, there has been little detailed comparison among them. As a result, it remains unclear which protocol is most efficient in practice with respect to various aspects such as performance and scalability.

In this paper, we revisit existing protocols by categorizing them into two different structures in order to analyze the advantages and disadvantages of each class in detail, with a focus on practical implementations. Furthermore, we introduce and compare various homomorphic algorithms that can be utilized for query optimization, discussing the strengths and limitations of each. Finally, we aim to identify the most efficient protocol in terms of computational cost and memory usage as a function of database size. Additionally, we address common misconceptions that may lead to inefficient choices in real-world deployment scenarios and offer the best solutions. Consequently, our analysis and experimental results demonstrate that the less-explored design achieves a 90% reduction in communication cost and an 8× decrease in computational overhead compared to the other one, challenging the common misconception.
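For intuition, many FHE-based PIR protocols reduce, at their core, to a homomorphic inner product between the database and an encrypted one-hot selection vector. The following Python sketch shows that skeleton in the clear (no encryption, purely illustrative; with FHE, the query entries would be ciphertexts and the server would learn nothing about the index):

```python
def pir_answer(database, query):
    # server side: inner product <query, database>; under FHE this is a
    # homomorphic sum of plaintext-ciphertext products, one per DB entry
    return sum(q * d for q, d in zip(query, database))

# client side (shown unencrypted): a one-hot vector selecting index 2
db = [42, 7, 99, 13]
query = [0, 0, 1, 0]
assert pir_answer(db, query) == 99
```

The two protocol classes the paper compares differ mainly in how this selection structure is encoded and folded, which is where the communication and computation trade-offs arise.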
Ole Hylland Spjeldnæs
ePrint Report
SNAIL (Succinct, Non-interactive, Alon-compressed, Instant argument for Layered circuits) turns any depth-\(d\) arithmetic circuit into a non-interactive argument whose prover runs within \(1 + c(d,k,n)\) of plain circuit execution, where \(c(d,k,n) = \frac{3\,(k+n+1)}{k\,d + n + 1}\). For the representative choice \(k = n = 4\) and \(24 \le d \le 32\) this means only 21–28% overhead.
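The claimed overhead factor can be checked by evaluating the formula above directly (a trivial Python sketch of the stated expression, nothing more):

```python
def c(d: int, k: int = 4, n: int = 4) -> float:
    # prover overhead factor c(d, k, n) = 3(k + n + 1) / (k*d + n + 1)
    return 3 * (k + n + 1) / (k * d + n + 1)

# for k = n = 4: c(24) = 27/101 ~ 0.267 and c(32) = 27/133 ~ 0.203,
# i.e. the prover runs within roughly 1.2x-1.27x of plain execution
```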

Core idea: A constant-round zerocheck based on a difference-driven Alon decomposition compresses all \(d\) layers into one multivariate identity. Fiat–Shamir in the random-oracle model yields a transparent argument whose hot loop uses only field additions and multiplications (no MSMs, FFTs, or Merkle trees).

Cost summary: with \(k = n = 4\) and \(24 \le d \le 32\) \[ T_{\text{prov}}(S,d)=\Bigl(1+\frac{6.75}{d}\Bigr)S,\qquad |\pi|=(d^{2}+2d)(4d+1),\qquad \varepsilon_{\text{sound}}=\frac{4nd}{|\mathbb F|}. \]

A 2.6-billion-gate trace at \(d = 32\) is proven in \(\approx 300\;\text{ms}\) on an RTX-4090 and verified on a smartphone in \(< 1\;\text{ms}\).

Width scalability: Proof size depends only on depth, so widening to billions of parallel gates leaves the verifier unchanged—ideal for ultra-wide workloads such as live-audited VM traces.

Security: The six-round interactive core is statistically sound; the Fiat–Shamir variant is computationally sound in the random-oracle model and needs no trusted setup.

Implications: SNAIL shows that a prover can run near real-time on commodity GPUs while emitting proofs small enough for mobile light clients, enabling applications such as live-streamed provable gaming, pay-per-cycle cloud nodes, and always-on verifiable CPUs.

23 April 2025

Award
We are proud to announce the winners of the 2025 IACR Test-of-Time Award for Eurocrypt. The IACR Test-of-Time Award honors papers published at the three IACR flagship conferences 15 years ago which have had a lasting impact on the field.

The Test-of-Time award for Eurocrypt 2010 is awarded to the following paper:

On Ideal Lattices and Learning with Errors over Rings, by Vadim Lyubashevsky, Chris Peikert and Oded Regev, for introducing the Ring-LWE problem, laying the theoretical and practical foundation for efficient lattice-based cryptography.

For more information, see https://www.iacr.org/testoftime.

Congratulations to the winners!
Atsuki Momose, Kailun Qin, Ao Sakurai, Mona Vij
ePrint Report
Confidential Computing-as-a-Service has gained significant attention in recent years, driven by rapid advances in Trusted Execution Environment (TEE) technology. Among various architectures, confidential serverless computing has emerged as a promising model. A common approach to designing confidential serverless computing involves decoupling the client workload from the initial enclave image and dynamically provisioning the workload at runtime. This enables both offloading the costly enclave bootstrapping and maintaining a fixed reference measurement for attestation. This decoupling necessitates nested attestation, where the client’s workload is attested via a custom attestation module embedded in a platform-attested enclave established at boot time. The challenge in designing nested attestation, however, is to distinguish fresh enclaves from the used ones to prevent enclave reuse. Specifically, a previously used enclave may be compromised and its attestation module tampered with. If the enclave is reused for another workload, it could bypass runtime attestation, allowing unverified code to execute.

In this paper, we present a novel approach to securing nested attestation for confidential serverless computing on Intel Software Guard Extensions (Intel SGX). Unlike prior works, our approach does not rely on intra-enclave isolation techniques to sandbox the client’s workload. Instead, we leverage the Key Separation and Sharing (KSS) feature of Intel SGX to track and prevent enclave reuse based on its immutable IDs. We develop a prototype system for confidential serverless computing for Python workloads, incorporating our nested attestation scheme, and present an empirical evaluation of its performance.

We believe our scheme unveils a unique yet previously overlooked role of KSS---post-compromise identification, with broader implications beyond serverless computing. As a concrete example, we demonstrate how KSS can be leveraged to implement an enclave-binding TPM on Intel SGX, which is of independent interest.
Alper Çakan, Vipul Goyal, Omri Shmueli
ePrint Report
Quantum fire was recently formalized by Bostanci, Nehoran and Zhandry (STOC’25). This notion considers a distribution of quantum states that can be efficiently cloned, but cannot be converted into a classical string. Previously, the work of Nehoran and Zhandry (ITCS’24) showed how to construct quantum fire relative to an inefficient unitary oracle. Later, Bostanci, Nehoran, and Zhandry gave a candidate construction based on group action assumptions and proved the correctness of their scheme; however, even in the classical oracle model they only conjectured the security, and no security proof was given.

In this work, we give the first construction of public-key quantum fire relative to a classical oracle, and prove its security unconditionally. This gives the first classical oracle separation between the two fundamental principles of quantum mechanics that are equivalent in the information-theoretic setting: no-cloning and no-telegraphing.

Going further, we introduce a stronger notion called quantum key-fire where the clonable fire states can be used to run a functionality (such as a signing or decryption key), and prove a secure construction relative to a classical oracle. As an application of this notion, we get the first public-key encryption scheme whose secret key is clonable but satisfies unbounded leakage-resilience (Cakan, Goyal, Liu-Zhang, Ribeiro [TCC’24]), relative to a classical oracle. Unbounded leakage-resilience is closely related to, and can be seen as a generalization of the notion of no-telegraphing.

For all of our constructions, the oracles can be made efficient (i.e. polynomial time), assuming the existence of post-quantum one-way functions.
Jiangshan Long, Changhai Ou, Yukun Cheng, Kexin Qiao, Wei Cheng, Fan Zhang
ePrint Report
Side-channel analysis recovers a secret by exploiting the key-dependent characteristics of the leakages. Practical techniques, such as Distance-of-Means analysis (DoM), Kolmogorov-Smirnov analysis (KSA) and Cramér-von Mises analysis (CvMA), provide valuable insights about the secret from the indirect perspectives of statistical moments and the cumulative distribution function (CDF) respectively, circumventing the direct and costly estimation of leakage probability densities and therefore enabling wider applicability in practice. Though both perspectives are informative, their relationship in the context of side-channel analysis remains unclear. In other words, the fundamental questions of "which one is better?" and "why and under what circumstances?" remain open problems. In this paper, we introduce the probability-probability (PP) plot from statistics as a common framework for explaining the mathematical foundations of CDF-based techniques, which facilitates an intuitive understanding of different variant strategies. Then, inspired by the growth pattern of the PP curve, we propose a novel distinguisher based on the well-known Mann-Kendall test, where measurements are managed with ordinality and nominality. This goodness-of-fit test checks whether a key-dependent binary sequence originates from a random binomial distribution, by efficiently searching potential label clusters. Finally, we explore the symmetry and dual counterpart of the CDF in mathematics, introducing the quantile-quantile (QQ) plot, and develop an interesting technique based on the inverse cumulative distribution function (ICDF). We present a general discussion of its bridging role, regarding detail capture as well as the signal-to-noise ratio (SNR).
On this basis, we establish the relationships among moment-based, ICDF-based, and CDF-based techniques, which additionally allows for bounding the security level of the CDF-based techniques using well-established metrics that are originally proposed for evaluating the traditional moment-based family. Experiments across various settings validate our theoretical findings and demonstrate the effectiveness of the two proposed distinguishers.
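For concreteness, the simplest of the moment-based distinguishers mentioned above, Distance-of-Means, can be sketched in a few lines of Python (a toy illustration only; the proposed Mann-Kendall and ICDF-based distinguishers are not reproduced here):

```python
from statistics import mean

def dom_score(traces, bits):
    # Distance-of-Means: partition traces by a key-dependent predicted bit
    # (e.g. one bit of an Sbox output under a key guess) and score the guess
    # by the gap between the two group means
    g0 = [t for t, b in zip(traces, bits) if b == 0]
    g1 = [t for t, b in zip(traces, bits) if b == 1]
    return abs(mean(g1) - mean(g0))

# a correct key guess predicts the bit that actually drove the leakage, so
# its score is large; a wrong guess mixes the groups and scores near zero
```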
Daniel Alabi, Sainyam Galhotra, Shagufta Mehnaz, Zeyu Song, Eugene Wu
ePrint Report
Data markets play a pivotal role in modern industries by facilitating the exchange of data for predictive modeling, targeted marketing, and research. However, as data becomes a valuable commodity, privacy and security concerns have grown, particularly regarding the personal information of individuals. This tutorial explores privacy and security issues when integrating different data sources in data market platforms. As motivation for the importance of enforcing privacy requirements, we discuss attacks on data markets focusing on membership inference and reconstruction attacks. We also discuss security vulnerabilities in decentralized data marketplaces, including adversarial manipulations by buyers or sellers. We provide an overview of privacy and security mechanisms designed to mitigate these risks. In order to enforce the least amount of trust for buyers and sellers, we focus on distributed protocols. Finally, we conclude with opportunities for future research on understanding and mitigating privacy and security concerns in distributed data markets.
Krzysztof Pietrzak, Pengxiang Wang
ePrint Report
Truncation of cryptographic outputs is a technique that was recently introduced by Baldimtsi et al. [BCCK22]. The general idea is to try out many inputs to some cryptographic algorithm until the output (e.g. a public key or some hash value) falls into some sparse set and thus can be compressed: by trying out an expected $2^k$ different inputs one will find an output that starts with $k$ zeros.

Using such truncation one can, for example, save substantial gas fees on blockchains where storing values is very expensive. While [BCCK22] show that truncation preserves the security of the underlying primitive, they only consider a setting without preprocessing. In this work we show that lower bounds on the time-space tradeoff for inverting random functions and permutations also hold with truncation, except for parameter ranges where the bound fails to hold for "trivial" reasons.

Concretely, it's known that any algorithm that inverts a random function or permutation with range $N$ making $T$ queries and using $S$ bits of auxiliary input must satisfy $S\cdot T\ge N\log N$. This lower bound no longer holds in the truncated setting where one must only invert a challenge from a range of size $N/2^k$, as now one can simply save the replies to all $N/2^k$ challenges, which requires $S=\log N\cdot N /2^k$ bits and allows to invert with $T=1$ query.

We show that with truncation, whenever $S$ is somewhat smaller than the $\log N\cdot N /2^k$ bits required to store the entire truncated function table, the known $S\cdot T\ge N\log N$ lower bound applies.
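The rejection-sampling search behind truncation can be sketched in Python (using SHA-256 as a stand-in for the cryptographic output; an expected $2^k$ attempts, after which the $k$ zero bits can be elided from storage):

```python
import hashlib, secrets

def find_truncatable(k: int):
    # sample random inputs until the hash starts with k zero bits;
    # each attempt succeeds with probability 2^-k, so ~2^k tries in expectation
    while True:
        x = secrets.token_bytes(16)
        h = hashlib.sha256(x).digest()
        if int.from_bytes(h, "big") >> (256 - k) == 0:
            return x, h

x, h = find_truncatable(8)   # expected ~2^8 = 256 hash calls
assert h[0] == 0             # the k = 8 truncated bits (first byte) are zero
```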
Foteinos Mergoupis-Anagnou
ePrint Report
Threshold zero-knowledge protocols have not been widely adopted, presumably due to the associated network overhead, complicated certification processes and thus limited interoperability chances. In this work, we propose $\mathsf{OSST}$, a Schnorr-based threshold identification scheme that is both non-interactive and non-reliant on the public shares. Given an $(n, t)$-shared secret $x$, the proposed protocol allows any $t^* \ge t$ (but no fewer) shareholders to collectively prove that their secret keys combine to $x$ in the sense of interpolation without revealing any information beyond that. On the one hand, the provers do not need to engage in multi-party computations, sending their packets to the verifier asynchronously. On the other hand, the verification operation involves the combined public key $y \equiv g ^ x$ alone, meaning that the verifier does not need to have validated and registered the individual member identities. The protocol can be cheaply adopted in permissionless and dynamic environments, where no certification processes or infrastructure support are available, and be easily integrated with existing verifiers by means of pure software extension. No publicly trusted setup is required beyond the assumption that $x$ has been distributed by means of Shamir's secret sharing (or an equivalent distribution scheme) and that the public counterpart has been advertised correctly; in particular, the protocol is intended to be secure against impersonation attacks in the plain public-key model. We provide evidence that this is likely to hold by giving a formal security proof in the random oracle model under the one-more discrete-logarithm hardness ($\mathsf{OMDL}$) assumption.
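The "combine to $x$ in the sense of interpolation" step underlying Shamir-based schemes amounts to Lagrange coefficients evaluated at zero. A minimal Python sketch over a toy prime field (illustrative only; the modulus is an arbitrary stand-in, and this is not the $\mathsf{OSST}$ protocol itself):

```python
P = 2**127 - 1   # toy prime modulus standing in for the group order

def lagrange_at_zero(xs, p=P):
    # coefficients lambda_i such that sum_i lambda_i * f(x_i) = f(0)
    # for any polynomial f of degree < len(xs)
    coeffs = []
    for i, xi in enumerate(xs):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        coeffs.append(num * pow(den, -1, p) % p)
    return coeffs

# Shamir with threshold t = 2: f(x) = x0 + a1*x, shares at x = 1 and x = 2
x0, a1 = 1234, 5678
shares = {x: (x0 + a1 * x) % P for x in (1, 2)}
lam = lagrange_at_zero(list(shares))
assert sum(l * s for l, s in zip(lam, shares.values())) % P == x0
```

In Schnorr-style schemes the same coefficients are applied in the exponent, so the verifier only ever needs the combined public key $y = g^x$.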
Hangyu Bai, Fan Huang, Xiaolin Duan, Honggang Hu
ePrint Report
Scloud+ is a next-generation post-quantum key encapsulation mechanism that combines unstructured-LWE and a ternary key encoding technique to achieve efficient lattice cryptographic operations while eliminating traditional ring structure constraints. Despite its rigorously formalized theoretical security, its practical deployment faces side-channel threats, notably Correlation Power Analysis (CPA) attacks. This paper systematically investigates the physical security of its core ciphertext-key matrix multiplication module by proposing a CPA framework that integrates instruction-level timing analysis. A SoST (Sum of Squared T-values) model, inspired by multi-group Welch's t-test, is used to analyze the Hamming weight leakage during ciphertext loading. At the same time, dynamic sampling windows, combined with processor pipeline modeling, are employed to pinpoint critical leakage intervals. Exploiting the characteristics of ternary keys, an iterative recovery strategy is devised: following a predefined scan order, the candidate set $\{-1, 0, 1\}$ and partial intermediate sums are used to construct a Hamming weight model for hypothesized leakage vectors. Pearson correlation analysis and trace-count stabilization are applied within each dynamic sampling window to determine the optimal estimate for each key element. Experiments targeting 4800 key elements, illustrated through a detailed analysis of the first 32 elements, demonstrate a 100% recovery accuracy with no more than 15 traces per element, indicating high efficiency and stability that can extend to the full key reconstruction. This study reveals a side-channel vulnerability in Scloud$^{+}$ rooted in the strong correlation between the instruction-level timing characteristics of the ciphertext-key multiplication and the Hamming weight of intermediate values, underscoring the urgent need to incorporate side-channel defense metrics into the standardization process. 
To thwart such CPA attacks, we have further designed and implemented a first‐order arithmetic masking countermeasure that splits the original ternary secret key into two subkeys, thereby expanding the attacker's key hypothesis space from $\{-1, 0, 1\}$ to $\{0, \ldots, 4095\}^2$ and significantly enhancing side‐channel resilience. Our results demonstrate that Scloud$^{+}$ remains vulnerable to side‐channel exploits at the implementation level, highlighting the urgent need to integrate appropriate protections into its standardization process.
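The core CPA step described above (correlating a Hamming-weight model of hypothesized intermediates with measured traces via Pearson correlation) can be sketched in Python. The traces below are synthetic and noiseless, the intermediate model is a simplified stand-in, and the instruction-level timing analysis, dynamic windows, and masking countermeasure are not modeled:

```python
def pearson(xs, ys):
    # plain Pearson correlation coefficient, guarded against zero variance
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def hw(v: int) -> int:
    return bin(v).count("1")   # Hamming weight of an intermediate value

def best_guess(traces, ciphertexts, candidates=(-1, 0, 1)):
    # score each ternary key-element guess by |corr| between the traces and
    # HW of a hypothesized intermediate (here: ciphertext * guess mod 4096)
    score = {g: abs(pearson(traces, [hw((c * g) % 4096) for c in ciphertexts]))
             for g in candidates}
    return max(score, key=score.get)

cts = list(range(1, 40))
traces = [float(hw(c)) for c in cts]   # idealized leakage for true key = 1
assert best_guess(traces, cts) == 1
```

The first-order masking countermeasure defeats exactly this attack by splitting the key so that no single intermediate correlates with any one guess.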
Jung Hee Cheon, Minsik Kang, Jai Hyun Park
ePrint Report
The large key size for fully homomorphic encryption (FHE) requires substantial costs to generate and transmit the keys. This has been problematic for FHE clients who want to delegate the computation, as they often have limited power. A recent work, Lee-Lee-Kim-No [Asiacrypt 2023], partly solved this problem by suggesting a hierarchical key management system. However, the overall key size was still several gigabytes for real-world applications, and it is barely satisfactory for mobile phones and IoT devices.

In this work, we propose new key management systems, KG+ and BTS+, which reduce the client's cost for FHE on top of Lee-Lee-Kim-No. The KG+ system significantly reduces the key size without any compromise in the efficiency of homomorphic computation compared to Lee-Lee-Kim-No. The BTS+ system further reduces the key size, while compromising only the granularity of the homomorphic computation.

In our new systems, the client generates and sends ``transmission keys'' with size-optimal parameters, and the server generates ``evaluation keys'' with computation-optimal parameters. For this purpose, we introduce a new ring-switching technique for keys to bridge keys with different parameters. Using the new ring-switching technique, a client can transmit the transmission keys in extension rings that can generate FHE keys in the computation-efficient subring. By decoupling the rings of FHE keys during transmission and computation, we significantly reduce the communication cost for transferring FHE keys.

We provide concrete CKKS FHE parameters for which the client's keys are $325$--$609$ MB and $285$ MB, using KG+ and BTS+, respectively. Note that all parameters generate keys for CKKS with ring degree $2^{16}$, which is a conventional choice for CKKS applications to privacy-preserving machine learning. These are $3.09$--$4.37$ and $3.51$--$9.30$ times smaller than Lee-Lee-Kim-No, respectively. For real-world applications, the server requires more evaluation keys for faster homomorphic computation. For secure ResNet-20 inference, the parameters for KG+ and BTS+ result in client key sizes of $325$--$609$ MB and $285$ MB, respectively. These are $3.95$--$5.73\times$ and $4.53$--$12.25\times$ smaller than Lee-Lee-Kim-No.
Yuanju Wei, Kaixuan Wang, Binwu Xiang, Xinxuan Zhang, Yi Deng, Hailong Wang, Xudong Zhu
ePrint Report
Verifiable computation over encrypted data is gaining increasing attention, and using SNARKs to provide proofs for FHE operations has emerged as a promising approach. However, the mismatch between FHE's typically small prime fields and SNARKs' larger field requirements creates challenges for verifiable FHE. In this work, we construct a packed sumcheck algorithm specifically designed for small fields. This approach leverages folding and repetition techniques to maintain security without field expansion, with all operations performed on the base domain. For a domain $\mathbb{F}_p$ requiring $k$-fold expansion, our sumcheck protocol operates with $(\log k + l)$ variables, where each sumcheck statement consists of $d$ multiplied multilinear polynomial statements. The prover can complete the computation in $O(kd^2 + k^{1.807}d) \cdot 2^l$ modular multiplications over $\mathbb{F}_p$.

By exploiting the highly repetitive computational structure in bit-wise FHE bootstrapping operations, we decompose the process into a series of vector operations. Building upon the packed sumcheck technique along with the Brakedown (CRYPTO 2023) and Binius (EUROCRYPT 2025) commitment schemes, we construct an efficient proof system for these vector operations, ultimately yielding a proof system for bit-wise FHE. Our system achieves linear prover time while performing all computations on the base field, resulting in significant improvements in prover efficiency.
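For reference, the classical unpacked sumcheck that such protocols build on can be sketched in Python for a multilinear polynomial given by its evaluations on the Boolean cube (honest prover and verifier folded into one function; the field modulus is a toy stand-in, and the paper's packed small-field variant is not reproduced):

```python
import random

P = 2**31 - 1   # toy prime, standing in for the small base field F_p

def ml_eval(evals, point):
    # evaluate the multilinear extension of `evals` (values on {0,1}^n) at `point`
    cur = list(evals)
    for r in point:
        cur = [(a * (1 - r) + b * r) % P for a, b in zip(cur[::2], cur[1::2])]
    return cur[0]

def sumcheck(evals):
    # prove/verify that the sum of the multilinear f over {0,1}^n equals `claim`
    n = len(evals).bit_length() - 1
    claim = sum(evals) % P
    cur, point = list(evals), []
    for _ in range(n):
        g0, g1 = sum(cur[::2]) % P, sum(cur[1::2]) % P   # round poly g(0), g(1)
        assert (g0 + g1) % P == claim                    # verifier's consistency check
        r = random.randrange(P)                          # verifier's challenge
        claim = (g0 * (1 - r) + g1 * r) % P              # next claim is g(r)
        cur = [(a * (1 - r) + b * r) % P for a, b in zip(cur[::2], cur[1::2])]
        point.append(r)
    return claim == ml_eval(evals, point)                # final oracle query

assert sumcheck([3, 1, 4, 1, 5, 9, 2, 6])
```

The packed variant of the paper runs this interaction over $(\log k + l)$ variables with $k$-fold folding and repetition so that every multiplication stays in the small base field.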
Bill Fefferman, Soumik Ghosh, Makrand Sinha, Henry Yuen
ePrint Report
We show that concrete hardness assumptions about learning or cloning the output state of a random quantum circuit can be used as the foundation for secure quantum cryptography. In particular, under these assumptions we construct secure one-way state generators (OWSGs), digital signature schemes, quantum bit commitments, and private key encryption schemes. We also discuss evidence for these hardness assumptions by analyzing the best-known quantum learning algorithms, as well as proving black-box lower bounds for cloning and learning given state preparation oracles.

Our random circuit-based constructions provide concrete instantiations of quantum cryptographic primitives whose security does not depend on the existence of one-way functions. The use of random circuits in our constructions also opens the door to NISQ-friendly quantum cryptography. We discuss noise tolerant versions of our OWSG and digital signature constructions which can potentially be implementable on noisy quantum computers connected by a quantum network. On the other hand, they are still secure against noiseless quantum adversaries, raising the intriguing possibility of a useful implementation of an end-to-end cryptographic protocol on near-term quantum computers. Finally, our explorations suggest that the rich interconnections between learning theory and cryptography in classical theoretical computer science also extend to the quantum setting.

21 April 2025

Yuncong Hu, Chongrong Li, Zhi Qiu, Tiancheng Xie, Yue Ying, Jiaheng Zhang, Zhenfei Zhang
ePrint Report
Succinct Non-Interactive Arguments of Knowledge (SNARKs) provide a powerful cryptographic framework enabling short, quickly verifiable proofs for computational statements.

Existing SNARKs primarily target computations represented as arithmetic circuits. However, they become inefficient when handling binary operations, as each gate operates on only a few bits while still incurring the cost of multiple field operations per gate. This inefficiency stems, in part, from their inability to capture the word-level operations that are typical in real-world programs. To better reflect this computational pattern, we shift our attention to data-parallel boolean circuits, which serve as a natural abstraction of word-level operations by allowing parallel manipulation of multiple bits. To precisely characterize the prover's overheads in our scheme, we adopt the word RAM model, which aligns with the nature of modern programming languages. Under this model, we propose a novel approach to achieve SNARKs with only sub-linear prover overhead for proving boolean circuits.

Specifically, we present an optimized GKR protocol for boolean circuits that captures word-level operations. To achieve this, we pack multiple bits into a single univariate polynomial and exploit the binary nature of circuit values to enable precomputation that accelerates the sumcheck process. This optimization leads to a highly efficient prover requiring only sub-linear RAM operations. Furthermore, we introduce a sub-linear polynomial commitment scheme designed specifically for binary polynomials, which ensures efficient commitments with minimal computational overhead.

Comprehensive evaluations reveal that our scheme achieves both theoretical efficiency and substantial practical performance gains. For instance, in proving randomly generated Boolean circuits with $2^{30}$ gates, proof generation with our optimized GKR protocol completes in just 5.38 seconds, yielding a $223\times$ speedup over LogUp (Haböck, ePrint 2022), the most efficient known scheme for lookup arguments. Furthermore, in an end-to-end benchmark over the Keccak-$f$ task, our scheme achieves a $106\times$ speedup compared to Binius (Diamond et al., EUROCRYPT 2025), the state-of-the-art work for boolean circuits.
Kanav Gupta, Nishanth Chandran, Divya Gupta, Jonathan Katz, Rahul Sharma
ePrint Report
We consider the problem of actively secure two-party machine-learning inference in the preprocessing model, where the parties obtain (input-independent) correlated randomness in an offline phase that they can then use to run an efficient protocol in the (input-dependent) online phase. In this setting, the state-of-the-art is the work of Escudero et al. (Crypto 2020); unfortunately, that protocol requires a large amount of correlated randomness, extensive communication, and many rounds of interaction, which leads to poor performance. In this work, we show protocols for this setting based on function secret sharing (FSS) that beat the state-of-the-art in all parameters: they use less correlated randomness and fewer rounds, and require lower communication and computation. We achieve this in part by allowing for a mix of boolean and arithmetic values in FSS-based protocols (something not done in prior work), as well as by relying on “interactive FSS,” a generalization of FSS we introduce. To demonstrate the effectiveness of our approach we build SHARK—the first FSS-based system for actively secure inference—which outperforms the state-of-the-art by up to 2300×.
Haotian Yin, Jie Zhang, Wanxin Li, Yuji Dong, Eng Gee Lim, Dominik Wojtczak
ePrint Report
The Updatable Signature (US) allows valid signatures to be updated by an update token without accessing the newly generated signing key. Cini et al. (PKC'21) formally defined this signature and gave several constructions. However, their security model requires the secrecy of the update token, which is only applicable in some specific scenarios, such as software verification in the trusted App Store. In Web3, information is usually shared via a public blockchain, and decentralized private computation is expensive. In addition, one can use the same token to update both the signing key and signatures, and all signatures can be updated with a single token. The adversarial signature generated by an adversary might also be updated. Therefore, this work explores the (im)possibility of constructing an Updatable Signature with public tokens (USpt), the tokens of which are signature-dependent. Specifically, we define the updatable signature with public tokens and present its security model. Then, we present a concrete USpt scheme based on the Boneh–Lynn–Shacham signature. This variant introduces a limitation for the signer, who must maintain a dataset of its signed messages (or their hashes); this is acceptable in our target applications.