International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

20 August 2025

Tianyao Gu, Afonso Tinoco, Sri Harish G Rajan, Elaine Shi
ePrint Report
Making 2-party computation scale up to big datasets is a long-cherished dream of our community. More than a decade ago, a line of work began implementing and optimizing interactive RAM-model 2-party computation (2PC), achieving somewhat reasonable concrete performance on large datasets but unfortunately suffering from $\widetilde{O}(T)$ roundtrips for a $T$-time computation. Garbled RAM promises to compress the number of roundtrips to $2$, and encouragingly, a line of recent work has designed concretely efficient Garbled RAM schemes whose asymptotic communication and computation costs almost match the best known interactive RAM-model 2PC, but still leave $\mathsf{poly}\log\log$ gaps.

We present ${\sf PicoGRAM}$, a practical garbled RAM (GRAM) scheme that not only asymptotically matches the prior best RAM-model 2PC, but also achieves an order-of-magnitude concrete improvement in online time relative to interactive RAM-model 2PC on a dataset of size $8$GB. Moreover, our work gives the first Garbled RAM whose total cost (including bandwidth and computation) achieves an optimal dependency on the database size (up to an arbitrarily small super-constant factor).

Our work shows that for high-value real-life applications such as Signal, blockchains, and Meta that require oblivious accesses to large datasets, Garbled RAM is a promising direction towards eventually removing the trusted hardware assumption that exists in production implementations today. Our open source code is available at https://github.com/picogramimpl/picogram.
Paul Gerhart, Davide Li Calsi, Luigi Russo, Dominique Schröder
ePrint Report
Threshold Schnorr signatures enable $t$-out-of-$n$ parties to collaboratively produce signatures that are indistinguishable from standard Schnorr signatures, ensuring compatibility with existing verification systems. While static-secure constructions are well understood and achieve optimal round complexity, obtaining full adaptive security (withstanding up to $t-1$ dynamic corruptions) under standard assumptions has proven elusive: recent impossibility results (CRYPTO'25) either rule out known proof techniques for widely deployed schemes or require speculative assumptions and idealized models, while positive examples achieving full adaptivity from falsifiable assumptions incur higher round complexity (EUROCRYPT'25, CRYPTO'25).

We overcome these barriers with the first round-optimal threshold Schnorr signature scheme that, under a slightly relaxed security model, achieves full adaptive security from DDH in the random oracle model.

Our model is relaxed in the sense that the adversary may adaptively corrupt parties at any time, but each signer must refresh part of their public key after a fixed number of signing queries. These updates are executed via lightweight, succinct, stateless tokens, preserving the aggregated signature format. Our construction is enabled by a new proof technique, equivocal deterministic nonce derivation, which may be of independent interest.
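
For readers unfamiliar with the base primitive, here is a minimal textbook sketch in Python of single-signer Schnorr signatures over a Schnorr group, i.e. the signature format a threshold scheme must reproduce. The toy group parameters and hash encoding are illustrative only; this is not the paper's threshold protocol.

    import hashlib, secrets

    # Toy Schnorr group: p = 2q + 1 with q prime, g generates the order-q
    # subgroup. Illustrative 11-bit parameters; real schemes use ~256-bit groups.
    p, q, g = 2039, 1019, 4

    def H(*parts):
        data = b"|".join(str(x).encode() for x in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def keygen():
        x = secrets.randbelow(q - 1) + 1      # secret key
        return x, pow(g, x, p)                # (sk, pk)

    def sign(x, msg):
        r = secrets.randbelow(q - 1) + 1      # per-signature nonce
        R = pow(g, r, p)
        c = H(R, msg)                         # Fiat-Shamir challenge
        return R, (r + c * x) % q

    def verify(X, msg, sig):
        R, z = sig
        return pow(g, z, p) == R * pow(X, H(R, msg), p) % p

    sk, pk = keygen()
    assert verify(pk, "hello", sign(sk, "hello"))
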
Sourav Das, Ling Ren, Ziling Yang
ePrint Report
Threshold decryption schemes allow a group of decryptors, each holding a private key share, to jointly decrypt ciphertexts. Over the years, numerous threshold decryption schemes have been proposed for applications such as secure data storage, internet auctions, and voting, and recently as a tool to protect against miner-extractable value attacks in blockchains. Despite the importance and popularity of threshold decryption, many natural and practical threshold decryption schemes have only been proven secure against static adversaries.

In this paper, we present two threshold decryption schemes that withstand malicious adaptive corruption. Our first scheme is based on the standard ElGamal encryption scheme and is secure against chosen plaintext attacks (CPA). Our second scheme, based on the chosen ciphertext attack (CCA) secure Shoup-Gennaro encryption scheme, is also CCA secure. Both of our schemes have non-interactive decryption protocols and efficiency comparable to their statically secure counterparts. Building on the technique introduced by Das and Ren (CRYPTO 2024), our threshold ElGamal decryption scheme relies on the hardness of Decisional Diffie-Hellman and the random oracle model.
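
To make the setting concrete, the following Python sketch shows textbook $t$-out-of-$n$ threshold ElGamal decryption with Shamir shares and Lagrange recombination, computed in the clear with illustrative toy parameters. This is the statically secure baseline the abstract refers to, not the paper's adaptively secure scheme.

    import secrets

    # Toy group: p = 2q + 1, g generates the order-q subgroup (illustrative).
    p, q, g = 2039, 1019, 4
    t, n = 3, 5

    def share(x):
        # Shamir-share the secret key x; any t shares reconstruct it.
        coeffs = [x] + [secrets.randbelow(q) for _ in range(t - 1)]
        return {i: sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q
                for i in range(1, n + 1)}

    def lagrange(i, S):
        # Lagrange coefficient at 0 for party i within the set S, mod q.
        num = den = 1
        for j in S:
            if j != i:
                num = num * -j % q
                den = den * (i - j) % q
        return num * pow(den, -1, q) % q

    x = secrets.randbelow(q - 1) + 1          # key generation
    X = pow(g, x, p)
    shares = share(x)
    m = pow(g, 42, p)                         # message encoded in the group
    r = secrets.randbelow(q - 1) + 1          # standard ElGamal encryption
    c1, c2 = pow(g, r, p), m * pow(X, r, p) % p

    # Each decryptor publishes a partial decryption c1^{x_i}; any t combine.
    S = [1, 3, 5]
    d = 1
    for i in S:
        d = d * pow(pow(c1, shares[i], p), lagrange(i, S), p) % p   # = c1^x
    assert c2 * pow(d, -1, p) % p == m
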
Hanlin Liu, Xiao Wang, Kang Yang, Longhui Yin, Yu Yu
ePrint Report
The Regular Syndrome Decoding (RSD) problem, introduced nearly two decades ago, is a regular version of the Syndrome Decoding (SD) problem, where the noise vector is divided into consecutive, equally sized blocks, each containing exactly one noisy coordinate. Recently, RSD has gained renewed attention for its applications in Pseudorandom Correlation Generators (PCGs) and more. Very recently, several works presented an improved algebraic approach (AGB for short) and improved ISD approaches (including regular-ISD and regular-RP) that utilize the regular structure of noise vectors and provide a more precise security evaluation of the RSD problem.

In this paper, we refine the AGB algorithm from a one-round process to a two-round process, and refer to the new algebraic algorithm as AGB 2.0. In each round, we guess a few noise-free positions, followed by a tailored partial XL algorithm. This interleaving strategy increases the probability of success by reducing the number of guessed noise-free positions, and effectively lowers the problem's dimension in each round. By fine-tuning the position guesses in each round and optimizing the aggregate running time, our AGB 2.0 algorithm reduces the concrete security of the RSD problem by up to $6$ bits for the parameter sets used in prior works, compared to the best-known attack. In particular, for a specific parameter set in Wolverine, the RSD security is $7$ bits below the $128$-bit target.

We also analyze the asymptotic complexity of algebraic attacks on the RSD problem over a finite field $\mathbb{F}$ with field size $|\mathbb{F}|>2$, when the noise rate $\rho$ and code rate $R$ satisfy $\rho + R < 1$. If $n \rho R^2 = O(1)$, where $n$ is the noise length, the RSD problem over $\mathbb{F}$ can be solved in polynomial time, which does not hold for the SD problem. We show that ISD and its variants, including regular-ISD and regular-RP, are asymptotically less efficient than AGB for solving RSD problems with $R = o(1/\log(n))$ and $|\mathbb{F}| > e^{n\rho R}$.
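
To illustrate the noise structure these attacks exploit, the following Python sketch generates a toy RSD instance over $\mathbb{F}_2$: the noise vector is split into equal blocks, each containing exactly one noisy coordinate. Parameters are illustrative, and this is only the problem instance, not the AGB 2.0 attack.

    import secrets
    import numpy as np

    def rsd_instance(n, k, blocks):
        # Regular noise: length-n vector e split into `blocks` equal blocks,
        # each holding exactly one 1 (so the weight is exactly `blocks`).
        w = n // blocks
        e = np.zeros(n, dtype=np.int64)
        for b in range(blocks):
            e[b * w + secrets.randbelow(w)] = 1
        H = np.random.randint(0, 2, size=(k, n))   # random parity-check matrix
        s = H @ e % 2                              # syndrome
        return H, s, e

    # The attacker is given (H, s) and must recover the regular e with He = s.
    H, s, e = rsd_instance(n=64, k=32, blocks=8)
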
Michael Adjedj, Geoffroy Couteau, Arik Galansky, Nikolaos Makriyannis, Oren Yomtov
ePrint Report
The industry is moving away from passwords for authentication and authorization, with hardware devices for storing long-term cryptographic keys emerging as the leading alternative. However, these devices often have limited displays and remain vulnerable to theft, malware, and attacks that trick users into signing malicious payloads. Current systems provide little fallback security in such cases. Any solution must also meet strict requirements: compatibility with industry standards, scalability to handle high request volumes, and high availability.

We present a novel design for authentication and authorization that meets these demands. Our approach virtualizes the authenticating/authorizing party via a two-party signing protocol with a helper entity, ensuring that keys remain secure even if a device is compromised and that every signed message conforms to a security policy.

We formalize the required properties for such protocols and show how they are met by existing schemes (e.g., FROST for Schnorr, Boneh–Haitner–Lindell–Segev '25 for ECDSA). Motivated by the widespread use of ECDSA (FIDO2/Passkeys, blockchains), we introduce a new, optimized two-party ECDSA protocol that is significantly more efficient than prior work. At its core is a new variant of exponent-VRF, improving on earlier constructions and of independent interest. We validate our design with a proof-of-concept virtual authenticator for the FIDO2 Passkeys framework.
Jonas Janneck, Jonas Meers, Massimo Ostuzzi, Doreen Riepel
ePrint Report
An Authenticated Key Encapsulation Mechanism (AKEM) combines public-key encryption and digital signatures to provide confidentiality and authenticity. AKEMs build the core of Hybrid Public Key Encryption (RFC 9180) and serve as a useful abstraction for messaging applications like the Messaging Layer Security (MLS) protocol (RFC 9420) and Signal's X3DH protocol. To date, most existing AKEM constructions either rely on classical (non-post-quantum) assumptions or on unoptimized black-box approaches, leading to suboptimal efficiency.

In this work, we choose a different abstraction level to combine KEMs and identification schemes more efficiently by leveraging randomness reuse. We construct a generic scheme and identify the necessary security requirements on the underlying KEM and identification scheme when reusing parts of their randomness. This allows for a concrete instantiation from isogenies based on the POKÉ KEM (EUROCRYPT'25) and the SQIsignHD identification scheme (EUROCRYPT'24). To be used in our black-box construction, the identification scheme requires the more advanced security property of response non-malleability. Hence, we further show that a slight modification of SQIsignHD satisfies this notion, which might be of independent interest.

Putting everything together, our final scheme yields the most compact AKEM from PQ assumptions with public keys of 366 bytes and ciphertexts of 216 bytes while fulfilling the strongest confidentiality and authenticity notions.

14 August 2025

Anubhav Baweja, Alessandro Chiesa, Elisabetta Fedele, Giacomo Fenzi, Pratyush Mishra, Tushar Mopuri, Andrew Zitek-Estrada
ePrint Report
The sumcheck protocol is a fundamental building block in the design of probabilistic proof systems, and has become a key component of recent work on efficient succinct arguments.

We study time-space tradeoffs for the prover of the sumcheck protocol in the streaming model, and provide upper and lower bounds that tightly characterize the efficiency achievable by the prover.

$\bullet{}$ For sumcheck claims about a single multilinear polynomial we demonstrate an algorithm that runs in time $O(kN)$ and uses space $O(N^{1/k})$ for any $k \geq 1$. For non-adaptive provers (a class which contains all known sumcheck prover algorithms) we show this tradeoff is optimal.

$\bullet{}$ For sumcheck claims about products of multilinear polynomials, we describe a prover algorithm that runs in time $O(N(\log \log N + k))$ and uses space $O(N^{1/k})$ for any $k \geq 1$. We show that, conditioned on the hardness of a natural problem about multiplication of multilinear polynomials, any ``natural'' prover algorithm that uses space $O(N^{1/2 - \varepsilon})$ for some $\varepsilon > 0$ must run in time $\Omega(N(\log \log N + \log \varepsilon))$.

We implement and evaluate the prover algorithm for products of multilinear polynomials. We show that our algorithm consumes up to $120\times$ less memory compared to the linear-time prover algorithm, while incurring a time overhead of less than $2\times$.

The foregoing algorithms and lower bounds apply in the interactive proof model. We show that in the polynomial interactive oracle proof model one can in fact design a new protocol that achieves a better time-space tradeoff of $O(N^{1/k})$ space and $O(N(\log^* N + k))$ time for any $k \geq 1$.
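
As background, here is a minimal Python sketch of the standard sumcheck protocol for a single multilinear polynomial $f$ over a prime field, given by its evaluation table of size $N = 2^n$. This is the classical linear-time, linear-space prover (the verifier's checks are inlined as assertions); the paper's contribution is provers using only $O(N^{1/k})$ space in the streaming model.

    import random

    P = 2**61 - 1   # prime field modulus

    def sumcheck(evals):
        # Prove T = sum over x in {0,1}^n of f(x); evals is f's value table.
        T = claim = sum(evals) % P
        while len(evals) > 1:
            half = len(evals) // 2
            lo, hi = evals[:half], evals[half:]       # first variable = 0 / 1
            g0, g1 = sum(lo) % P, sum(hi) % P         # round polynomial g_i
            assert (g0 + g1) % P == claim             # verifier's round check
            r = random.randrange(P)                   # verifier's challenge
            claim = ((1 - r) * g0 + r * g1) % P       # g_i is linear: g_i(r)
            evals = [((1 - r) * a + r * b) % P for a, b in zip(lo, hi)]
        assert evals[0] == claim   # final oracle check: f(r_1, ..., r_n)
        return T

    f = [random.randrange(P) for _ in range(2**10)]   # n = 10 variables
    assert sumcheck(f) == sum(f) % P
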
Katharina Boudgoust, Corentin Jeudy, Erkan Tairi, Weiqiang Wen
ePrint Report
The Module Learning With Errors (M-LWE) problem has become a fundamental hardness assumption for lattice-based cryptography. It offers an attractive trade-off between strong robustness guarantees, sometimes directly based on worst-case lattice problems, and efficiency of the resulting cryptographic primitives. Different flavors of M-LWE have since been introduced to improve performance. Such variants consider different secret-error distributions and may allow for additional hints on the secret-error vector. Existing hardness results, however, only cover restricted classes of said distributions, or are tailored to specific leakage models. This lack of generality hinders the design of efficient and versatile cryptographic schemes, as each new distribution or leakage model requires a separate and nontrivial hardness evaluation.

In this work, we address this limitation by establishing the hardness of M-LWE under general distributions. As a first step, we show that M-LWE remains hard when the error vector follows an arbitrary bounded distribution with sufficient entropy, with some restriction on the number of samples. Building on this, we then reduce to the Hermite Normal Form (HNF), where the secret-error vector follows said arbitrary distribution. Overall, our result shows that the actual shape of the distribution does not matter, as long as it retains sufficient entropy.

To demonstrate the versatility of our framework, we further analyze a range of leakage scenarios. By examining the residual entropy given the leakage, we show that our results on M-LWE with general distributions encompass various types of leakage. More precisely, we cover exact and approximate linear hints, which are widely used in recent cryptographic designs, as well as quadratic and even non-algebraic forms, some of which were not yet covered by any theoretical hardness guarantees. The generality of our results aims at facilitating future cryptographic designs and security analyses.
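
For concreteness, the following Python sketch generates toy M-LWE samples over $R_q = \mathbb{Z}_q[x]/(x^d+1)$ with module rank $k$. The parameters and the centered-binomial sampler are illustrative; the point of the paper is precisely that this distribution may be replaced by an arbitrary bounded distribution with sufficient entropy.

    import numpy as np

    q, d, k = 3329, 256, 2   # illustrative Kyber-like toy parameters

    def ring_mul(a, b):
        # Schoolbook multiplication in Z_q[x]/(x^d + 1) (negacyclic).
        c = np.zeros(d, dtype=np.int64)
        for i in range(d):
            for j in range(d):
                sign = 1 if i + j < d else -1
                c[(i + j) % d] = (c[(i + j) % d] + sign * a[i] * b[j]) % q
        return c

    def chi():
        # Centered binomial noise: one choice of secret-error distribution.
        return (np.random.binomial(2, 0.5, d) - np.random.binomial(2, 0.5, d)) % q

    def mlwe_sample(s):
        # One sample (a, b = <a, s> + e) with a uniform over R_q^k.
        a = np.random.randint(0, q, size=(k, d), dtype=np.int64)
        b = chi()                                  # the error term e
        for i in range(k):
            b = (b + ring_mul(a[i], s[i])) % q
        return a, b

    s = np.stack([chi() for _ in range(k)])
    a, b = mlwe_sample(s)
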
Jakub Mielczarek, Małgorzata Zajęcka
ePrint Report
In this article, we introduce a new post-quantum cryptosystem, NTWR Prime, which is based on the NTRU Prime and Learning With Rounding (LWR) problems. This scheme is inspired by the NTWE construction proposed by Joel Gartner in 2023. Unlike NTWE, our algorithm employs an irreducible, non-cyclotomic polynomial whose Galois group is isomorphic to the symmetric group. Additionally, the LWR problem is used in place of the LWE problem, offering potential advantages for structural security due to its deterministic nature. We conduct a security analysis demonstrating that solving the NTWR Prime problem requires solving both the underlying NTRU Prime and LWR problems. Consequently, given the absence of definitive post-quantum security proofs for these problems, our construction offers redundancy, which may fulfill the requirements of applications with exceptionally high security standards. Importantly, we show that there exists a set of parameters satisfying the hardness assumptions for both contributing problems.
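
The LWE-versus-LWR distinction the abstract relies on is easy to state in code: LWR replaces LWE's explicitly sampled additive error with deterministic rounding from modulus $q$ down to $p < q$. A minimal Python sketch with illustrative parameters (plain LWR, not the NTWR Prime construction):

    import numpy as np

    q, p, n = 4096, 256, 64   # toy moduli and dimension (p < q)

    def lwr_sample(s):
        # LWR: the "noise" is the deterministic rounding error of b,
        # in contrast to LWE's explicitly sampled additive error term.
        a = np.random.randint(0, q, size=n, dtype=np.int64)
        b = round(int(a.dot(s) % q) * p / q) % p
        return a, b

    s = np.random.randint(0, q, size=n, dtype=np.int64)
    a, b = lwr_sample(s)
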

13 August 2025

Dung Bui, Kelong Cong
ePrint Report
Fuzzy-labeled private set intersection (PSI) outputs the corresponding label if there is a ``fuzzy'' match between two items, for example, when the Hamming distance between the two items is low. Such protocols can be applied in privacy-preserving biometric authentication, proximity testing, and so on. The only fuzzy-labeled PSI protocol designed for practical purposes is by Uzun et al. (USENIX'21), which is based on homomorphic encryption. This design puts constraints on the item size, label size, and communication cost, since it is difficult for homomorphic encryption to support a large plaintext space and its ciphertext-expansion factor is well known to be large.

Our construction begins with a new primitive which we call vector ring-oblivious linear evaluation (vector ring-OLE). This primitive does not rely on existing instantiations of ring-OLE over the quotient ring, but leverages the more efficient vector-OLE. It is ideal for building unbalanced threshold-labeled PSI and is also of independent interest.

Our main contribution, fuzzy-labeled PSI, is bootstrapped from our threshold-labeled PSI protocol. Through a prototype implementation, we demonstrate that our communication cost is up to $4.6\times$ better than the prior state of the art with comparable end-to-end latency, while supporting a significantly larger label size.
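
In the clear, the functionality that fuzzy-labeled PSI computes obliviously is simply: release a label when the Hamming distance between items falls below a threshold. A minimal Python sketch of this ideal functionality (item encodings and the threshold are illustrative; the protocol itself reveals neither the items nor the distance):

    def hamming(x, y):
        # Hamming distance between two equal-length bit strings, as integers.
        return bin(x ^ y).count("1")

    def fuzzy_labeled_match(server_db, client_item, threshold):
        # Ideal functionality: return labels of all server items within
        # Hamming distance `threshold` of the client's item.
        return [label for item, label in server_db
                if hamming(item, client_item) <= threshold]

    db = [(0b10110100, "label-A"), (0b01011011, "label-B")]
    print(fuzzy_labeled_match(db, 0b10110110, threshold=1))   # ['label-A']
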
Andrej Bogdanov, Alon Rosen, Kel Zin Tan
ePrint Report
The $k$LIN problem concerns solving noisy systems of random sparse linear equations mod 2. It gives rise to natural candidate hard CSP distributions and is a cornerstone of local cryptography. Recently, it was used in advanced cryptographic constructions, under the name 'sparse LPN'.

For constant sparsity $k$ and inverse polynomial noise rate, both search and decision versions of $k$LIN are statistically possible and conjectured to be computationally hard for $n\ll m\ll n^{k/2}$, where $m$ is the number of $k$-sparse linear equations, and $n$ is the number of variables.

We show an algorithm that, given access to a distinguisher for $(k-1)$LIN with $m$ samples, solves search $k$LIN with roughly $O(nm)$ samples. Previously, it was only known how to reduce from search $k$LIN with $O(m^3)$ samples, yielding meaningful guarantees for decision $k$LIN only when $m \ll n^{k/6}$.

The reduction succeeds even if the distinguisher has sub-constant advantage at a small additive cost in sample complexity. Our technique applies with some restrictions to Goldreich's function and $k$LIN with random coefficients over other finite fields.
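
A toy Python sampler for the $k$LIN / sparse-LPN distribution described above: $m$ random $k$-sparse equations mod 2 in $n$ variables, each right-hand side flipped with inverse-polynomial probability. Parameters are illustrative and chosen inside the conjectured-hard regime $n \ll m \ll n^{k/2}$.

    import random

    def klin_samples(n, m, k, eta):
        # m noisy k-sparse linear equations mod 2 over a secret s in {0,1}^n.
        s = [random.randrange(2) for _ in range(n)]
        samples = []
        for _ in range(m):
            idxs = sorted(random.sample(range(n), k))   # k-sparse equation
            b = sum(s[i] for i in idxs) % 2
            if random.random() < eta:                   # noise rate eta
                b ^= 1
            samples.append((idxs, b))
        return s, samples

    n = 1000
    s, samples = klin_samples(n=n, m=20 * n, k=3, eta=1 / n ** 0.5)
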
Sam Buxbaum, Lucas M. Tassis, Lucas Boschelli, Giovanni Comarela, Mayank Varia, Mark Crovella, Dino P. Christenson
ePrint Report
We present a real-world deployment of secure multiparty computation to predict political preference from private web browsing data. To estimate aggregate preferences for the 2024 U.S. presidential election candidates, we collect and analyze secret-shared data from nearly 8000 users from August 2024 through February 2025, with over 2000 daily active users sustained throughout the bulk of the survey. The use of MPC allows us to compute over sensitive web browsing data that users would otherwise be more hesitant to provide. We collect data using a custom-built Chrome browser extension and perform our analysis using the CrypTen MPC library. To our knowledge, we provide the first implementation under MPC of a model for the learning from label proportions (LLP) problem in machine learning, which allows us to train on unlabeled web browsing data using publicly available polling and election results as the ground truth. The client code is open source, and the remaining code will be open source in the future.
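
Outside MPC, the learning-from-label-proportions objective is easy to sketch: each bag of feature vectors is unlabeled, but its aggregate label proportion (here, from public polling or election results) is known, and a model is fit so the mean prediction per bag matches that proportion. A plain numpy illustration with synthetic data; the paper's actual model runs under CrypTen on secret-shared browsing features.

    import numpy as np

    def train_llp(bags, proportions, dim, lr=0.5, epochs=200):
        # Fit logistic weights w so mean(sigmoid(X_b @ w)) matches each bag's
        # known positive-label proportion p_b (squared-error bag loss).
        sigmoid = lambda z: 1 / (1 + np.exp(-z))
        w = np.zeros(dim)
        for _ in range(epochs):
            for X, p in zip(bags, proportions):
                pred = sigmoid(X @ w)
                err = pred.mean() - p             # bag-level residual
                w -= lr * err * ((pred * (1 - pred)) @ X) / len(X)
        return w

    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
    bags = [rng.normal(size=(100, 5)) for _ in range(20)]
    props = [(1 / (1 + np.exp(-(X @ true_w)))).mean() for X in bags]
    w = train_llp(bags, props, dim=5)
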
Randy Kuang
ePrint Report
In this paper, we present an optimized construction of the Homomorphic Polynomial Public Key (HPPK) cryptosystem, a novel framework designed to provide enhanced security and efficiency in the post-quantum era. Our work introduces a layered cryptographic design that combines modular arithmetic permutations with an innovative additive random masking technique. This approach effectively obscures the underlying factorizable structure of the public key, thereby mitigating vulnerabilities to known lattice reduction attacks and other algebraic cryptanalyses. The security of our scheme is formally grounded in the computational hardness of three new problems: the Hidden Modulus Product Problem (HMPP), the HPPK Key Recovery Problem (HKRP), and the HPPK Secret Recovery Problem (HSRP). We demonstrate through rigorous analysis that the optimal attacks on our scheme are computationally infeasible for appropriately chosen parameters. Furthermore, we show that HPPK achieves remarkably compact key, ciphertext, and signature sizes, offering a significant advantage over leading NIST post-quantum finalists such as Kyber, Dilithium, and Falcon, particularly in bandwidth-constrained environments. The HPPK cryptosystem offers a compelling and mathematically-grounded solution for next-generation cryptography, delivering both provable security and practical efficiency.
Weidan Ji, Zhedong Wang, Lin Lyu, Dawu Gu
ePrint Report
Most adaptively secure identity-based encryption (IBE) constructions from lattices in the standard model follow the framework proposed by Agrawal et al. (EUROCRYPT 2010). However, this framework has an inherent restriction: the modulus is quadratic in the trapdoor norm. This leads to an unnecessarily large modulus, reducing the efficiency of the IBE scheme. In this paper, we propose a novel framework for adaptively secure lattice-based IBE in the standard model that removes this quadratic restriction on the modulus while keeping the dimensions of the master public key, secret keys, and ciphertexts unchanged. More specifically, our key observation is that the original framework has a \textit{natural} cross-multiplication structure in its trapdoor. Building on this observation, we design two novel algorithms with non-spherical Gaussian outputs that fully utilize this structure and thus remove the restriction. Furthermore, we apply our framework to various IBE schemes with different partitioning functions in both integer and ring settings, demonstrating its significant improvements and broad applicability. Finally, compared to a concurrent and independent work by Ji et al. (PKC 2025), our framework is significantly simpler in design and enjoys a smaller modulus, a more compact master public key, and shorter ciphertexts.
Felix Dörre, Marco Liebel, Jeremias Mechler, Jörn Müller-Quade
ePrint Report
If the system of an honest user is corrupted, all of its security may be lost: the system may perform computations using different inputs, report different outputs, or perform a different computation altogether, including leaking secrets to an adversary. In this paper, we present an approach that complements arbitrary computations to protect against the consequences of malicious systems. To this end, we adapt a well-known technique traditionally used to increase fault tolerance, namely redundant executions on different machines that are combined by a majority vote on the results. However, using this conceptually very simple technique for general computations is surprisingly difficult due to non-determinism on the hardware and software level that may cause the executions to deviate.

The CoRReCt approach, short for Compute, Record, Replay, Compare, considers two synchronized executions on different machines. Only if both executions lead to the same result is this result returned. Our realization uses virtual machines (VMs): on one VM, the software is executed and non-deterministic events are recorded. On a second VM, the software is executed in lockstep and non-deterministic events are replayed. The outputs of both VMs, which are hosted on different machines, are compared by a dedicated trusted entity, and the output is allowed only if they match. The following security guarantees can be proven:

– Integrity: If at most one host is corrupted, then the computation is performed using the correct inputs and returns either the correct result or no result at all.

– Privacy: If timing side-channels are not considered and at most one host is corrupted, the additional leakage introduced by our approach can be bounded by $\log_2(n)$ bits, where $n$ is the number of messages sent. If timing side-channels are considered and the recording system is honest, the same leakage bound can be obtained.

As VMs can be run on completely different host platforms, e.g. Windows on Intel x86-64 or OpenBSD on ARM, the assumption of at least one system being honest is very plausible. To prove our security guarantees, we provide a proof within a formal model. To demonstrate the viability of our approach, we provide a ready-to-use implementation that allows the execution of arbitrary (networked) x86-64 Linux programs, and we discuss different real-world applications.
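
The output gate at the heart of the approach is conceptually tiny; below is a Python sketch of the comparator's decision (everything else, i.e. deterministic record/replay of two VM executions, is the hard part the paper addresses).

    import hashlib

    def compare_and_release(result_a, result_b):
        # Trusted comparator: forward a result only if the recorded and the
        # replayed execution agree. A single corrupted host can thus suppress
        # the output but cannot alter it (the integrity guarantee above).
        digest = lambda r: hashlib.sha256(repr(r).encode()).digest()
        return result_a if digest(result_a) == digest(result_b) else None

    assert compare_and_release(42, 42) == 42
    assert compare_and_release(42, 41) is None
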
Bernardo David, Arup Mondal, Rahul Satish
ePrint Report
Constructing MPC with ephemeral committees has gained a lot of attention since the seminal works on Fluid MPC and YOSO MPC (CRYPTO'21). However, most protocols in this setting focus on the extreme case of ephemeral committees who can only act for one round (i.e. the maximally fluid case). The Layered MPC model (CRYPTO'23) recasts this notion as a protocol execution against an adaptive rushing adversary over a layered interaction graph, where each committee sits on a layer and can only communicate with the immediate next committee. Although protocols with abort allow for linear communication complexity (CRYPTO'23, CiC'24), Perfect Layered MPC with guaranteed output delivery (GOD) and its statistically secure counterpart (TCC'24) suffer from $O(n^9)$ and $O(\kappa n^{18})$ communication complexity for $n$ parties per committee, respectively. In this work, we investigate communication complexity improvements gained in a relaxed Multi-Layered MPC model that allows for limited interaction among the parties in each committee. However, committees have only one round to communicate with the immediate next committee. We construct Rumors MPC protocols, where the interaction among each committee's members is constant-round. Our protocols achieve GOD and optimal corruption threshold in the perfect (resp. statistical) security setting with committees acting for $\delta=5$ (resp. $\delta=13$) rounds and $O(n^6)$ (resp. $O(\kappa n^8)$) communication.
Yuyu Wang
ePrint Report
In this study, we revisit leakage-resilient circuits (LRCs) against NC1-leakage and propose new constructions that minimize the reliance on leak-free hardware. Specifically, we first present a stateless LRC scheme that is resilient to NC1-leakage, and then extend it to a leakage-tolerant circuit with auxiliary input (AI-LTC). By integrating this with a 2-adaptive leakage-resilient encoding scheme, we achieve a stateful LRC scheme that uses a secure hardware component. In comparison to the state-of-the-art constructions against NC1-leakage by Miles and Viola (STOC 2013), both the encoder during the leak-free phase in our stateless LRC and the secure hardware component in our stateful LRC are typically much smaller, as their sizes are independent of the original circuit size. Additionally, we provide a non-black-box instantiation of stateful LRC, resulting in a smaller compiled circuit. The security of all our constructions is based on the very mild worst-case assumption NC1⊊⊕L/poly, which is strictly weaker than the assumption NC1⊊L used by Miles and Viola. Furthermore, we propose a generic conversion from AI-LTCs to non-interactive zero-knowledge proofs with offline simulation (oNIZK) for all NP in the fine-grained setting. Our instantiation derived from it has small common reference strings, perfect soundness, zero-knowledge against adversaries in NC1 under NC1⊊⊕L/poly, and minimal verification complexity. Finally, we show that any fine-grained oNIZK cannot simultaneously achieve perfect soundness and verifiable common reference strings, thereby ruling out the possibility of constructing stateful LRCs without secure hardware by eliminating the trusted setup of our AI-LTC.
Erik Mulder, Bruno Sterner, Wessel van Woerden
ePrint Report
Finding the largest pair of consecutive $B$-smooth integers is computationally challenging. Current algorithms to find such pairs have an exponential runtime, and the largest pair has only been determined provably for $B \leq 100$ and heuristically for $100 < B \leq 113$. We improve on this by detailing a new algorithm to find such large pairs. The core idea is to solve the shortest vector problem (SVP) in a well-constructed lattice. With this we are able to significantly increase $B$, and notably we report the heuristically largest pair with $B = 751$, which has $196$ bits. By slightly modifying the lattice, we are able to find larger pairs for which one cannot conclusively say whether they are the largest for a given $B$. This notably includes a $213$-bit pair with $B = 997$, which is the largest pair found in this work.
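
For scale, the naive baseline is trial division plus a brute-force scan, which is hopeless at the 196-bit sizes reported above; the paper's lattice/SVP method replaces this search entirely. A small Python illustration:

    def is_smooth(n, B):
        # True iff every prime factor of n is at most B (trial division).
        d = 2
        while d * d <= n:
            while n % d == 0:
                n //= d
            d += 1
        return n <= B

    def smooth_pairs(limit, B):
        # Brute-force scan for consecutive B-smooth pairs (x, x + 1).
        return [x for x in range(2, limit)
                if is_smooth(x, B) and is_smooth(x + 1, B)]

    # The largest 7-smooth pair is (4374, 4375) = (2 * 3^7, 5^4 * 7).
    print(smooth_pairs(10_000, B=7)[-1])   # 4374
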
Christopher Battarbee, Arman Darbinyan, Delaram Kahrobaei
ePrint Report
Let $f$ be an arbitrary positive-integer-valued function. The goal of this note is to show that one can construct a finitely generated group in which the discrete log problem is polynomially equivalent to computing the function $f$. In particular, we provide infinite but finitely generated groups in which the discrete logarithm problem is arbitrarily hard. As another application, we construct a family of two-generated groups that have a polynomial-time word problem and an NP-complete discrete log problem. Additionally, using our framework, we propose a generic scheme of cryptographic protocols, which might be of independent interest.
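
For reference, in a generic group the discrete log is always solvable in $O(\sqrt{q})$ group operations, e.g. by baby-step giant-step; the point of the note is to build finitely generated groups where the problem can be made arbitrarily harder. A Python sketch of BSGS in $\mathbb{Z}_p^*$ with toy parameters:

    import math

    def bsgs(g, h, q, p):
        # Baby-step giant-step: find x in [0, q) with g^x = h (mod p),
        # where g has order q, using O(sqrt(q)) time and space.
        m = math.isqrt(q) + 1
        baby = {pow(g, j, p): j for j in range(m)}    # baby steps g^j
        step = pow(g, -m, p)                          # giant step g^{-m}
        gamma = h
        for i in range(m):
            if gamma in baby:
                return i * m + baby[gamma]
            gamma = gamma * step % p
        return None

    p, q, g = 2039, 1019, 4   # toy Schnorr group
    assert bsgs(g, pow(g, 777, p), q, p) == 777
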
Clemens Krüger, Bhavinkumar Moriya, Dominik Schoop
ePrint Report
Homomorphic encryption (HE) is a promising technique for privacy-preserving data analysis. Several HE schemes have been developed, with the CKKS and TFHE schemes being two of the most advanced. However, due to their differences, it is hard to compare their performance and suitability for a given application. We therefore conducted an empirical study of the performance of the two schemes in a comparable scenario. We benchmarked the commonly used operations of addition, multiplication, division, square root, evaluation of a polynomial, and a comparison function, each on a common pair of datasets with 65536 32-bit integers. Since CKKS is an approximate scheme, we set a requirement of at least 32 bits of precision to match that of the input data. Our results show that CKKS outperforms TFHE in most operations. TFHE's only advantage is its fast bootstrapping. Even though TFHE performs bootstrapping after every operation, while CKKS typically performs bootstrapping only after a certain number of multiplications, CKKS's bootstrapping still presents a bottleneck. This can be seen specifically with the comparison operation, where TFHE is much faster than CKKS in many settings, since the comparison's multiplicative depth requires several bootstrapping operations in CKKS. Generally speaking, CKKS should be preferred in applications that can be parallelized. CKKS's advantages decrease in applications with a large depth that require many bootstrapping operations.