IACR News
Here you can see all recent updates to the IACR webpage.
14 January 2025
Bhargav Nagaraja Bhatt, Fatemeh Shirazi, Alistair Stewart
ePrint Report
The increasing number of blockchain projects introduced annually has led to a pressing need for secure and efficient interoperability solutions. Currently, the lack of such solutions forces end-users to rely on centralized intermediaries, contradicting the core principle of decentralization and trust minimization in blockchain technology. In this paper, we propose a decentralized and efficient interoperability solution (aka Bridge Protocol) that operates without additional trust assumptions, relying solely on the Byzantine Fault Tolerance (BFT) of the two chains being connected. In particular, relayers (actors that exchange messages between networks) are permissionless and decentralized, hence eliminating any single point of failure. We introduce Random Sampling, a novel technique for on-chain light clients to efficiently follow the history of PoS blockchains by reducing the number of signature verifications required. Here, the randomness is drawn on-chain, for example, using Ethereum's RANDAO. We analyze the security of the bridge from a crypto-economic perspective and provide a framework to derive the security parameters. This includes handling subtle concurrency issues and randomness bias in strawman designs. While the protocol is applicable to various PoS chains, we demonstrate its feasibility by instantiating a bridge between Polkadot and Ethereum (currently deployed), and discuss some practical security challenges. We also evaluate the efficiency (gas costs) of an on-chain light-client verifier implemented as a smart contract on Ethereum against SNARK-based approaches. Even for large validator set sizes (up to $10^6$), the signature verification gas costs of our light-client verifier are an order of magnitude lower.
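To make the sampling idea concrete, here is a minimal Python sketch of how a light client might derive a validator subcommittee from an on-chain beacon output; the seed derivation, committee size, and function names are illustrative assumptions, not the deployed bridge's parameters.

```python
import hashlib
import random

def sample_validators(beacon_output: bytes, validator_count: int, sample_size: int) -> list[int]:
    """Derive a deterministic pseudorandom subcommittee of validator
    indices from an on-chain randomness beacon (e.g., RANDAO)."""
    rng = random.Random(hashlib.sha256(beacon_output).digest())
    return rng.sample(range(validator_count), sample_size)

# The light client checks only the sampled validators' signatures instead
# of all 10**6 of them, which is where the gas savings come from.
seed = hashlib.sha256(b"block-12345").digest()  # stand-in for a real RANDAO value
committee = sample_validators(seed, validator_count=1_000_000, sample_size=512)
print(len(committee), committee[:5])
```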
Xichao Hu, Lin Jiao
ePrint Report
The impossible boomerang attack is a very powerful attack, and existing results show that it is more effective than the impossible differential attack in the related-key scenario. However, in the current key recovery process, the details of a block cipher are ignored, only fixed keys are pre-guessed, and the time complexity of the early abort technique is only roughly estimated. These limitations are obstacles to the broader application of the impossible boomerang attack. In this paper, we propose the pre-sieving technique, the partial pre-guess key technique, and the precise complexity evaluation technique. For the pre-sieving technique, we capitalize on the specific features of both the linear layer and the nonlinear layer to expeditiously filter out the impossible quartets at the earliest possible stage. With the partial pre-guess key technique, we can selectively determine the keys that require guessing according to our requirements. Moreover, the precise complexity evaluation technique empowers us to explicitly compute the complexity associated with each step of the attack. We integrate these techniques and utilize them to attack ARADI, a low-latency block cipher proposed by the NSA (National Security Agency) in 2024 for the purpose of memory encryption. Ultimately, we achieve the first full-round attack with a data complexity of $2^{130}$, a time complexity of $2^{254.81}$, and a memory complexity of $2^{252.14}$. None of the previous key recovery methods have been able to attain such an outcome, demonstrating the high efficacy of our new techniques.
Justin Drake, Dmitry Khovratovich, Mikhail Kudinov, Benedikt Wagner
ePrint Report
With the threat posed by quantum computers on the horizon, systems like Ethereum must transition to cryptographic primitives resistant to quantum attacks. One of the most critical of these primitives is the non-interactive multi-signature scheme used in Ethereum's proof-of-stake consensus, currently implemented with BLS signatures. This primitive enables validators to independently sign blocks, with their signatures then publicly aggregated into a compact aggregate signature.
In this work, we introduce a family of hash-based signature schemes as post-quantum alternatives to BLS. We consider the folklore method of aggregating signatures via (hash-based) succinct arguments, and our work is focused on instantiating the underlying signature scheme. The proposed schemes are variants of the XMSS signature scheme, analyzed within a novel and unified framework. While being generic, this framework is designed to minimize security loss, facilitating efficient parameter selection. A key feature of our work is the avoidance of random oracles in the security proof. Instead, we define explicit standard model requirements for the underlying hash functions. This eliminates the paradox of simultaneously treating hash functions as random oracles and as explicit circuits for aggregation. Furthermore, this provides cryptanalysts with clearly defined targets for evaluating the security of hash functions. Finally, we provide recommendations for practical instantiations of hash functions and concrete parameter settings, supported by known and novel heuristic bounds on the standard model properties.
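For intuition about the primitive being instantiated, here is a toy Lamport one-time signature in Python, the simplest hash-based scheme; its security reduces to an explicit standard-model property of the hash function (one-wayness), which is the flavor of requirement the paper formalizes. This is a textbook illustration, not the paper's XMSS variants or parameters.

```python
import hashlib, os

H = lambda data: hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random preimages; the public key is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, msg: bytes):
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return [sk[i][b] for i, b in enumerate(bits)]  # reveal one preimage per bit

def verify(pk, msg: bytes, sig) -> bool:
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    # Forging requires inverting H on an unrevealed preimage: a concrete,
    # standard-model target for cryptanalysis rather than a random oracle.
    return all(H(s) == pk[i][b] for i, (s, b) in enumerate(zip(sig, bits)))

sk, pk = keygen()
print(verify(pk, b"block header", sign(sk, b"block header")))
```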
Hyunjung Son, Seunghun Paik, Yunki Kim, Sunpill Kim, Heewon Chung, Jae Hong Seo
ePrint Report
Fuzzy private set intersection (Fuzzy PSI) is a cryptographic protocol for privacy-preserving similarity matching, which is one of the essential operations in various real-world applications such as facial authentication, information retrieval, or recommendation systems. Despite recent advancements in fuzzy PSI protocols, a huge barrier still remains to deploying them for these applications. The main obstacle is the high dimensionality, e.g., from 128 to 512, of the data; many existing methods, e.g., Garimella et al. (CRYPTO'23, CRYPTO'24) or van Baarsen et al. (EUROCRYPT'24), suffer from exponential overhead in communication and/or computation. In addition, the dominant similarity metric in these applications is cosine similarity, which rules out several optimization tricks based on assumptions about the distribution of the data, e.g., the techniques of Gao et al. (ASIACRYPT'24). In this paper, we propose a novel fuzzy PSI protocol for cosine similarity, called FPHE, that overcomes these limitations simultaneously. FPHE features linear complexity in both computation and communication with respect to the dimension of set elements, while requiring a much weaker assumption than prior works. Our basic strategy is to homomorphically compute cosine similarity and run an approximated comparison function, with a clever packing method for efficiency. In addition, we introduce a novel proof technique to harmonize the approximation error from the sign function with noise flooding, proving the security of FPHE in the semi-honest model. Moreover, we show that our construction can be extended to support various functionalities, such as labeled or circuit fuzzy PSI. Through experiments, we show that FPHE can perform fuzzy PSI over 512-dimensional data in a few minutes, which was computationally infeasible for all previous proposals under the same assumption as ours.
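As a plaintext illustration of the pipeline (in FPHE these steps would run under homomorphic encryption), the sketch below computes cosine similarity as an inner product of normalized vectors and thresholds it with a low-degree polynomial sign approximation; the iteration $f(x) = (3x - x^3)/2$, the threshold, and the scaling are illustrative choices, not the paper's exact comparison function.

```python
import numpy as np

def approx_sign(x: float, iters: int = 10) -> float:
    # Iterating f(x) = (3x - x^3)/2 pushes any x in (-1, 1) toward sign(x);
    # such polynomial iterations are the kind of comparison an HE scheme
    # can evaluate, at the cost of a controllable approximation error.
    for _ in range(iters):
        x = 1.5 * x - 0.5 * x ** 3
    return x

def fuzzy_match(a: np.ndarray, b: np.ndarray, threshold: float = 0.85) -> bool:
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    cos = float(a @ b)  # under FPHE, this inner product is computed homomorphically
    return approx_sign((cos - threshold) / 2) > 0  # /2 keeps the input inside (-1, 1)

rng = np.random.default_rng(0)
v = rng.standard_normal(512)            # a 512-dimensional element
w = v + 0.1 * rng.standard_normal(512)  # a nearby vector: should match
print(fuzzy_match(v, w), fuzzy_match(v, rng.standard_normal(512)))
```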
Gefei Tan, Adrià Gascón, Sarah Meiklejohn, Mariana Raykova, Xiao Wang, Ning Luo
ePrint Report
Zero-knowledge proofs of training (zkPoT) allow a party to prove that a model is trained correctly on a committed dataset without revealing any additional information about the model or the dataset. Existing zkPoT protocols prove the entire training process in zero knowledge; i.e., they prove that the final model was obtained in an iterative fashion starting from the training data and a random seed (and potentially other parameters) and applying the correct algorithm at each iteration. This approach inherently requires the prover to perform work linear in the number of iterations.
In this paper, we take a different approach to proving the correctness of model training. Our approach is motivated by efficiency but also more urgently by the observation that the prover's ability to pick the random seed used for training introduces the potential for it to bias the model. In other words, if the input to the training algorithm is biased, the resulting model will be biased even if the prover correctly ran the training algorithm. Rather than prove the correctness of the training process, we thus directly prove the correctness of the trained model using a notion we call optimum vicinity, which bounds the distance between the trained model and the mathematically optimal model for models that can be viewed as the solution to a convex optimization problem. We show both theoretically and experimentally that this ensures the trained model behaves similarly to the optimal model, and show this is not true for existing approaches. We also demonstrate significant performance improvements as compared to the existing zkPoT paradigm: the statement proven in ZK in our protocol has a size independent of the number of training iterations, and our Boolean (respectively arithmetic) circuit size is up to $246\times$ (respectively $5\times$) smaller than that of a baseline zkPoT protocol that verifies the whole training process.
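A minimal worked example of the principle behind optimum vicinity, under the assumption of a $\mu$-strongly convex objective (here ridge regression on synthetic data): strong convexity gives $\|\hat{w} - w^*\| \le \|\nabla L(\hat{w})\|/\mu$, so closeness to the optimum can be certified from the trained model alone, with no reference to how many iterations produced it. This illustrates the convexity bound, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, mu = 200, 5, 0.1
X, y = rng.standard_normal((n, d)), rng.standard_normal(n)

def grad(w):  # gradient of the ridge objective (1/2n)||Xw - y||^2 + (mu/2)||w||^2
    return X.T @ (X @ w - y) / n + mu * w

w_star = np.linalg.solve(X.T @ X / n + mu * np.eye(d), X.T @ y / n)  # exact optimum
w_hat = w_star + 0.01 * rng.standard_normal(d)                       # a "trained" model

# mu-strong convexity implies ||w_hat - w_star|| <= ||grad(w_hat)|| / mu,
# a check on the model itself rather than on the training transcript.
lhs = np.linalg.norm(w_hat - w_star)
rhs = np.linalg.norm(grad(w_hat)) / mu
print(lhs, rhs, lhs <= rhs)
```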
Yashvanth Kondi, Divya Ravi
ePrint Report
Secure Multiparty Computation (MPC) protocols that achieve Identifiable Abort (IA) guarantee honest parties that, in the event that they are denied output, they will be notified of the identity of at least one corrupt party responsible for the abort. Cheater identification provides recourse in the event of a protocol failure, and in some cases can even be preferred over Guaranteed Output Delivery. However, protocols in the literature typically make use of broadcast as a necessary tool for identifying cheaters. In many deployments, the broadcast channel itself may be the most expensive component.
In this work, we investigate if it is inherent that MPC with IA should bear the full complexity of broadcast. As the implication of broadcast from IA has been established in previous work, we relax our target to circumvent this connection: we allow honest parties to differ in which cheaters they identify, nonetheless retaining the ability to prove claims of cheating to an auditor.
We show that in the honest majority setting, our notion of Provable Identifiable Selective Abort (PISA) can be achieved without a traditional broadcast channel. Indeed, broadcast in this setting---which we term Broadcast with Selective Identifiable Abort (BC-IA)---is achievable in only two point-to-point rounds with a simple echoing technique. On the negative side, we also prove that BC-IA is impossible to achieve in the dishonest majority setting.
As a general result, we show that any MPC protocol that achieves IA with $r$ broadcasts can be compiled to one that achieves PISA with $2(r+1)$ point-to-point rounds. As a practical application of this methodology, we design, implement, and benchmark a six-round honest-majority threshold ECDSA protocol that achieves PISA and can be deployed in any environment with point-to-point communication alone.
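A simplified sketch of the echoing idea behind BC-IA, in two point-to-point rounds (the sender sends, then every party echoes what it received); the blame rule below is an illustrative simplification, not the paper's protocol.

```python
def echo_broadcast(n: int, received: list, echoes: list) -> list:
    """Round 1: the sender transmits a value to each party; received[i]
    is what party i got. Round 2: every party echoes its value to all
    others; echoes[j][i] is party j's claim as heard by party i.
    Each party then either accepts a value or blames a culprit."""
    outputs = []
    for i in range(n):
        views = [echoes[j][i] for j in range(n)]
        if all(v == views[0] for v in views) and received[i] == views[0]:
            outputs.append(("accept", views[0]))
        else:
            # Honest parties may blame different culprits (the sender vs.
            # an inconsistent echoer): the "selective" part of BC-IA.
            culprit = next(j for j in range(n) if echoes[j][i] != received[i])
            outputs.append(("blame", culprit))
    return outputs

# Honest sender; party 2 lies to party 0 in the echo round.
received = ["blk", "blk", "blk"]
echoes = [["blk"] * 3 for _ in range(3)]
echoes[2][0] = "bad"
print(echo_broadcast(3, received, echoes))  # party 0 blames 2; others accept
```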
Ziqi Zhu, Kai Zhang, Zhili Chen, Junqing Gong, Haifeng Qian
ePrint Report
This paper presents the first black-box registered ABE for circuits from lattices. Its selective security is based on the evasive LWE assumption [EUROCRYPT'22, CRYPTO'22]. The only prior Reg-ABE scheme from lattices is a non-black-box construction based on function-binding hash functions and witness encryption [CRYPTO'23]. Technically, we first extend the black-box registration-based encryption from standard LWE [CRYPTO'23] so that we can register a public key with a function; this yields an LWE-based Reg-ABE with ciphertexts of size $L \cdot \mathsf{polylog}(L)$, where $L$ is the number of users. We then make use of the special structure of its ciphertext to reduce its size to $\mathsf{polylog}(L)$ via an algebraic obfuscator based on evasive LWE [CRYPTO'24].
Elodie Ngoie Mutombo, Mike Wa Nkongolo, Mahmut Tokmak
ePrint Report
Cryptojacking, the unauthorised use of computing resources to mine cryptocurrency, has emerged as a critical threat in today’s digital landscape. These attacks not only compromise system integrity but also result in increased costs, reduced hardware lifespan, and heightened network security risks. Early and accurate detection is essential to mitigate the adverse effects of cryptojacking. This study focuses on developing a semi-supervised machine learning (ML) approach that leverages an autoencoder for feature extraction and a random forest (RF) model for classification. The objective is to enhance cryptojacking detection while maintaining a balance between accuracy and interpretability. The proposed methodology is further enhanced with explainable artificial intelligence (XAI) techniques such as local interpretable model-agnostic explanations (LIME) to offer insights into model predictions. Results from datasets such as UGRansome and BitcoinHeist indicate that the semi-supervised approach achieves accuracy rates ranging from 70% to 99%. The study demonstrates that the proposed model provides an efficient, interpretable, and scalable solution for real-time cryptojacking detection across various scenarios.
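A sketch of the two-stage pipeline on synthetic data (standing in for UGRansome/BitcoinHeist feature vectors): a small PyTorch autoencoder learns a compressed representation, and a scikit-learn random forest classifies the encoded features. The architecture, hyperparameters, and labels are illustrative assumptions, not the study's configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 20)).astype(np.float32)  # synthetic features
y = (X[:, 0] + X[:, 1] > 0).astype(int)                 # toy cryptojacking labels

# Stage 1 (unsupervised): train an autoencoder to reconstruct the input.
ae = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 20))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
Xt = torch.from_numpy(X)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(Xt), Xt)
    loss.backward()
    opt.step()

# Stage 2 (supervised): a random forest on the 8-dimensional encodings.
with torch.no_grad():
    Z = ae[:2](Xt).numpy()  # encoder half: first Linear + ReLU
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Z, y)
print("train accuracy:", clf.score(Z, y))
# LIME could then be applied to clf for per-prediction explanations.
```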
Theophilus Agama
ePrint Report
In this paper, we study the distribution of the \textit{gap} between terms in an addition chain. In particular, we show that if $1,2,\ldots,s_{\delta(n)}=n$ is an addition chain of length $\delta(n)$ leading to $n$, then $$\underset{1\leq l\leq \delta(n)}{\mathrm{sup}}(s_{l+k}-s_l)\gg k\frac{n}{\delta(n)}$$ and $$\underset{1\leq l\leq \delta(n)}{\mathrm{inf}}(s_{l+k}-s_l)\ll k\frac{n}{\delta(n)}$$ for fixed $k\geq 1$.
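As a quick numeric illustration of the quantities in the theorem, the sketch below builds a binary (double-and-add) addition chain for a sample $n$ and compares the extreme gaps against $k \cdot n/\delta(n)$; the choice of chain and of $n$ is arbitrary.

```python
def binary_addition_chain(n: int) -> list[int]:
    """Addition chain for n via double-and-add: every term is the sum
    of two earlier terms, as the definition requires."""
    chain = [1]
    for bit in bin(n)[3:]:               # skip the leading '1'
        chain.append(chain[-1] * 2)      # doubling: s_l + s_l
        if bit == "1":
            chain.append(chain[-1] + 1)  # adding s_0 = 1
    return chain

n, k = 1_000_003, 1
chain = binary_addition_chain(n)
delta = len(chain) - 1                   # chain length delta(n)
gaps = [chain[l + k] - chain[l] for l in range(len(chain) - k)]
# The theorem: max(gaps) >> k*n/delta(n) and min(gaps) << k*n/delta(n).
print(max(gaps), min(gaps), k * n / delta)
```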
13 January 2025
Jianqiao Cambridge Mo, Brandon Reagen
ePrint Report
Privacy and security have become critical priorities in many scenarios. Privacy-preserving computation (PPC) is a powerful solution that allows functions to be computed directly on encrypted data. Garbled circuit (GC) is a key PPC technology that enables secure, confidential computing. GC comes in two forms: Boolean GC supports all operations by expressing functions as logic circuits; arithmetic GC is a newer technique to efficiently compute a set of arithmetic operations like addition and multiplication. Mixed GC combines both Boolean and arithmetic GC, in an attempt to optimize performance by computing each function in its most natural form. However, this introduces additional costly conversions between the two forms. It remains unclear if and when the efficiency gains of arithmetic GC outweigh these conversion costs.
In this paper, we present Arithmetic Boolean Logic Exchange for Garbled Circuit (ABLE), the first real implementation of mixed GC. ABLE profiles the performance of Boolean and arithmetic GC operations along with their conversions. We assess not only communication but also computation latency, a crucial factor in evaluating the end-to-end runtime of GC. Based on these insights, we propose a method to determine whether it is more efficient to use general Boolean GC or mixed GC for a given application. Rather than implementing both approaches to identify the better solution for each unique case, our method enables users to select the most suitable GC realization early in the design process. This method evaluates whether the benefits of transitioning operations from Boolean to arithmetic GC offset the associated conversion costs. We apply this approach to a neural network task as a case study.
Implementing ABLE reveals opportunities for further GC optimization. We propose a heuristic to reduce the number of primes in arithmetic GC, cutting communication by 14.1% and compute latency by 15.7% in a 16-bit system. Additionally, we optimize mixed GC conversions with a row-reduction technique, achieving a 48.6% reduction in garbled table size for bit-decomposition and a 50% reduction for the bit-composition operation. These improvements reduce communication overhead in stream GC and free up storage in the GC-with-preprocessing approach. We open-source our code for community use.
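The selection method can be pictured as a cost comparison of the following shape, where every per-gate and conversion cost is a hypothetical placeholder rather than ABLE's measured profile (which also accounts for computation latency), and the plan is a simple greedy sweep:

```python
# Hypothetical communication costs per operation, in bytes.
BOOL_COST = {"add": 512, "mul": 8192, "cmp": 600}
ARITH_COST = {"add": 0, "mul": 1024}       # arithmetic GC: nearly free addition
BIT_DECOMP, BIT_COMP = 4096, 4096          # arithmetic<->Boolean conversion costs

def plan_cost(ops: list, mixed: bool) -> int:
    """Greedily run arithmetic-friendly ops in arithmetic GC (if mixed),
    paying a conversion cost whenever the representation switches."""
    total, in_arith = 0, False
    for op in ops:
        if mixed and op in ARITH_COST:
            total += (0 if in_arith else BIT_COMP) + ARITH_COST[op]
            in_arith = True
        else:
            total += (BIT_DECOMP if (mixed and in_arith) else 0) + BOOL_COST[op]
            in_arith = False
    return total

ops = ["mul", "mul", "add", "cmp", "mul"]  # e.g., a layer of a neural network
boolean, mixed = plan_cost(ops, False), plan_cost(ops, True)
print("use mixed GC" if mixed < boolean else "use Boolean GC", boolean, mixed)
```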
Shweta Agrawal, Giulio Malavolta, Tianwei Zhang
ePrint Report
Time-lock puzzles (TLP) are a cryptographic tool that allows one to encrypt a message into the future, for a predetermined amount of time $T$. At present, we have only two constructions with provable security: one based on the repeated squaring assumption and the other based on obfuscation. Basing TLP on any other assumption is a long-standing question, further motivated by the fact that both known constructions are broken by quantum algorithms.
In this work, we propose a new approach to construct time-lock puzzles based on lattices, and therefore with plausible post-quantum security. We obtain the following main results:
* In the preprocessing model, where a one-time public-coin preprocessing is allowed, we obtain a time-lock puzzle with encryption time $\log(T)$.
* In the plain model, where the encrypter does all the computation, we obtain a time-lock puzzle with encryption time $\sqrt{T}$.
Both constructions assume the existence of a sequential function $f$, and the hardness of the circular small-secret learning with errors (LWE) problem. At the heart of our results is a new construction of succinct randomized encodings (SRE) for $T$-folded repeated circuits, where the complexity of the encoding is $\sqrt{T}$. This is the first construction of SRE where the overall complexity of the encoding algorithm is sublinear in the runtime $T$ and which is not based on obfuscation. As a direct corollary, we obtain a non-interactive RAM delegation scheme with sublinear complexity (in the number of steps $T$).
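For contrast, here is a minimal sketch of the classical repeated-squaring (RSW-style) time-lock puzzle behind the first known construction: the creator shortcuts the $T$ squarings using the factorization of $N$, while the solver must perform them sequentially. Parameters are toy-sized and the message encoding is a simple one-time pad; this is not the paper's lattice-based construction.

```python
import secrets
from sympy import randprime

def tlp_create(T: int, msg: int):
    p, q = randprime(2**127, 2**128), randprime(2**127, 2**128)
    N = p * q
    x = secrets.randbelow(N - 2) + 2      # gcd(x, N) = 1 with overwhelming probability
    e = pow(2, T, (p - 1) * (q - 1))      # shortcut: reduce 2^T mod phi(N)
    key = pow(x, e, N)                    # equals x^(2^T) mod N
    return (N, x, T, (msg + key) % N)     # pad the message with the key

def tlp_solve(N, x, T, ct):
    for _ in range(T):                    # inherently sequential squarings
        x = pow(x, 2, N)
    return (ct - x) % N

print(tlp_solve(*tlp_create(T=100_000, msg=42)))
```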
Benny Applebaum, Oded Nir
ePrint Report
A secret-sharing scheme allows the distribution of a secret $s$ among $n$ parties, such that only certain predefined “authorized” sets of parties can reconstruct the secret, while all other “unauthorized” sets learn nothing about $s$. The collection of authorized/unauthorized sets is defined by a monotone function $f: \{0,1\}^n \rightarrow \{0,1\}$. It is known that any monotone function can be realized by a secret-sharing scheme; thus, the smallest achievable \emph{total share size}, $S(f)$, serves as a natural complexity measure.
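As a concrete instance of the definition: the threshold function $f(S) = 1$ iff $|S| \ge t$ is realized by Shamir's scheme, where each party holds a single field element and the total share size is $n$ field elements. A minimal sketch:

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; each share is one field element

def shamir_share(secret: int, n: int, t: int):
    """Realize the monotone threshold function: any t parties reconstruct,
    fewer than t learn nothing about the secret."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    s = 0
    for i, yi in shares:  # Lagrange interpolation at x = 0
        num = den = 1
        for j, _ in shares:
            if j != i:
                num, den = num * (-j) % P, den * (i - j) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P
    return s

shares = shamir_share(123456789, n=5, t=3)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares suffice
```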
In this paper, we initiate the study of the following meta-complexity question: Given a monotone function $f$, is it possible to efficiently distinguish between cases where the secret-sharing complexity of $f$ is small versus large? We examine this question across several computational models, yielding the following main results.
(Hardness for formulas and circuits): Given a monotone formula $f$ of size $L$, it is coNP-hard to distinguish between ``cheap'' functions, where the maximum share size is 1 bit and the total share size is $O(L^{0.01})$, and ``expensive'' functions, where the maximum share size is $\Omega(\sqrt{L})$ and the total share size is $\Omega(L/\log L)$. This latter bound nearly matches known secret-sharing constructions yielding a total share size of $L$ bits. For monotone circuits, we strengthen the bound on the expensive case to a maximum share size of $\Omega(L/\log L)$ and a total share size of $\Omega(L^2/\log L)$. These results rule out the existence of instance-optimal compilers that map a formula $f$ to a secret-sharing scheme with complexity polynomially related to $S(f)$.
(Hardness for truth tables): Under cryptographic assumptions, either (1) every $n$-bit slice function can be realized by a $\text{poly}(n)$-size secret-sharing scheme, or (2) given a truth-table representation of $f$ of size $N = 2^n$, it is computationally infeasible to distinguish in time $\text{poly}(N)$ between cases where $S(f) = \text{poly}(n)$ and $S(f) = n^{\omega(1)}$. Option (1) would be considered a breakthrough result, as the best-known construction for slices has a sub-exponential complexity of $2^{\tilde{O}(\sqrt{n})}$ (Liu, Vaikuntanathan, and Wee; Eurocrypt 2018). Our proof introduces a new worst-case-to-average-case reduction for slices, which may be of independent interest.
(Hardness for graphs): We examine the simple case where $f$ is given as a 2-DNF, represented by a graph $G$ whose edges correspond to 2-terms, and ask whether it is possible to distinguish between cases where the share size is constant and those where the share size is large, say $\Omega(\log n)$. We establish several connections between this question and questions in communication complexity. For instance, we show that graphs admitting constant-cost secret sharing form a subclass of graphs with constant randomized communication complexity and constant-size adjacency sketches (Harms, Wild, and Zamaraev; STOC 2022). We leverage these connections to establish new lower bounds for specific graph families, derive a combinatorial characterization of graphs with constant-size linear secret-sharing schemes, and show that a natural class of myopic algorithms fails to distinguish cheap graphs from expensive ones.
Bhuvnesh Chaturvedi, Anirban Chakraborty, Nimish Mishra, Ayantika Chatterjee, Debdeep Mukhopadhyay
ePrint Report
Fully Homomorphic Encryption (FHE) allows a server to perform computations directly over encrypted data. In general FHE protocols, the client is tasked with decrypting the computation result using its secret key. However, certain FHE applications benefit from the server knowing this result, especially without the aid of the client. Providing the server with the secret key would allow it to decrypt all the data, including the client's private input. Protocols such as that of Goldwasser et al. (STOC'13) have shown that it is possible to provide the server with the capability of conditional decryption, allowing it to decrypt the result of some pre-defined computation and nothing else. While beneficial to an honest-but-curious server to aid in providing better services, a malicious server may utilize this added advantage to perform active attacks on the overall FHE application to leak secret information. Existing security notions fail to capture this scenario since they assume that only the client has the ability to decrypt FHE ciphertexts. Therefore, in this paper, we propose a new security notion named IND-CPA$^{\text{C}}$ that provides the adversary with access to a conditional decryption oracle. We then show that none of the practical exact FHE schemes are secure under this notion by demonstrating a generic attack that utilizes this restricted decryption capability to recover the underlying errors in the FHE ciphertexts. Given that the security guarantee of the underlying (R)LWE hardness problem collapses with the leakage of these error values, we show a full key recovery attack. Finally, we propose a technique to convert any IND-CPA secure FHE scheme into an IND-CPA$^\text{C}$ secure FHE scheme. Our technique utilizes Succinct Non-Interactive Arguments of Knowledge (SNARKs), where the server is forced to generate a proof of an honest computation of the requested function along with the computation result. During decryption, the proof is verified, and the decryption proceeds only if the verification holds. Both the verification and decryption steps run inside a Garbled Circuit and thus are not observable or controllable by the server.
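A schematic sketch of where the conditional decryption oracle sits in the IND-CPA$^{\text{C}}$ game; the scheme interface (keygen/enc/dec/eval) and the adversary callbacks are hypothetical placeholders, not a concrete FHE API.

```python
import secrets

def ind_cpa_c_game(scheme, adversary, f) -> bool:
    """IND-CPA plus a decryption oracle restricted to outputs of the
    fixed, pre-defined function f. The adversary wins if it guesses b."""
    pk, sk = scheme.keygen()

    def cond_dec_oracle(cts):
        # The server learns only f evaluated on ciphertexts of its choice;
        # the paper's generic attack shows that, for exact FHE schemes,
        # even this restricted oracle leaks the ciphertext errors.
        return scheme.dec(sk, scheme.eval(pk, f, cts))

    m0, m1 = adversary.choose(pk, cond_dec_oracle)
    b = secrets.randbelow(2)
    challenge = scheme.enc(pk, [m0, m1][b])
    return adversary.guess(challenge, cond_dec_oracle) == b
```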
Jeffrey Champion, Yao-Ching Hsieh, David J. Wu
ePrint Report
Registered attribute-based encryption (ABE) is a generalization of public-key encryption that enables fine-grained access control to encrypted data (like standard ABE), but without needing a central trusted authority. In a key-policy registered ABE scheme, users choose their own public and private keys and then register their public keys together with a decryption policy with an (untrusted) key curator. The key curator aggregates all of the individual public keys into a short master public key which serves as the public key for an ABE scheme.
Currently, we can build registered ABE for restricted policies (e.g., Boolean formulas) from pairing-based assumptions and for general policies using witness encryption or indistinguishability obfuscation. In this work, we construct a key-policy registered ABE for general policies (specifically, bounded-depth Boolean circuits) from the $\ell$-succinct learning with errors (LWE) assumption in the random oracle model. The ciphertext size in our registered ABE scheme is $\mathsf{poly}(\lambda, d)$, where $\lambda$ is a security parameter and $d$ is the depth of the circuit that computes the policy circuit $C$. Notably, this is independent of the length of the attribute $\mathbf{x}$ and is optimal up to the $\mathsf{poly}(d)$ factor.
Previously, the only lattice-based instantiation of registered ABE uses witness encryption, which relies on private-coin evasive LWE, a stronger assumption than $\ell$-succinct LWE. Moreover, the ciphertext size in previous registered ABE schemes that support general policies (i.e., from obfuscation or witness encryption) scales with $\mathsf{poly}(\lambda, |\mathbf{x}|, |C|)$. The ciphertext size in our scheme depends only on the depth of the circuit (and not the length of the attribute or the size of the policy). This enables new applications to identity-based distributed broadcast encryption.
Our techniques are also useful for constructing adaptively-secure (distributed) broadcast encryption, and we give the first scheme from the $\ell$-succinct LWE assumption in the random oracle model. Previously, the only lattice-based broadcast encryption scheme with adaptive security relied on witness encryption in the random oracle model. All other lattice-based broadcast encryption schemes only achieved selective security.
Colin Finkbeiner, Mohamed E. Najd, Julia Guskind, Ghada Almashaqbeh
ePrint Report
Selfish mining attacks present a serious threat to Bitcoin security, enabling a miner with less than 51% of the network hashrate to gain higher rewards than when mining honestly. A growing body of works has studied the impact of such attacks and presented numerous strategies under a variety of model settings. This led to a complex landscape with conclusions that are often exclusive to certain model assumptions. This growing complexity makes it hard to comprehend the state of the art and distill its impact and trade-offs.
In this paper, we demystify the landscape of selfish mining by presenting a systematization framework for existing studies and evaluating their strategies under realistic model adaptations. To the best of our knowledge, our work is the first of its kind. We develop a multi-dimensional systematization framework assessing prior works based on their strategy formulation and targeted models. We go on to distill a number of insights and gaps, clarifying open questions and understudied areas. Among them, we find that most studies target the block-reward setting, do not account for transaction fees, and generally study the single-attacker case. To bridge this gap, we evaluate many of the existing strategies in the no-block-reward setting—so miners' incentives come solely from transaction fees—for both the single- and multi-attacker scenarios. We also extend their models to include realistic honest-but-rational miners, showing how such adaptations could yield more performant strategy variations.
James Clements
ePrint Report
Fix odd primes $p, \ell$ with $p \equiv 3 \mod 4$ and $\ell \neq p$. Consider the rational quaternion algebra ramified at $p$ and $\infty$ with a fixed maximal order $\mathcal{O}_{1728}$. We give explicit formulae for bases of all cyclic norm $\ell^n$ ideals of $\mathcal{O}_{1728}$ and their right orders, in Hermite Normal Form (HNF). Further, in the case $\ell \mid p+1$, or more generally, $-p$ is a square modulo $\ell$, we derive a parametrization of these bases along paths of the $\ell$-ideal graph, generalising the results of [1]. With such orders appearing as the endomorphism rings of supersingular elliptic curves defined over $\overline{\mathbb{F}_{p}}$, we note several potential applications to isogeny-based cryptography including fast ideal sampling algorithms. We also demonstrate how our findings may lead to further structural observations, by using them to prove a result on the successive minima of endomorphism rings of supersingular curves defined over $\mathbb{F}_p$.
[1] = https://eprint.iacr.org/2025/033
Omid Mirzamohammadi, Jan Bobolz, Mahdi Sedaghat, Emad Heydari Beni, Aysajan Abidin, Dave Singelee, Bart Preneel
ePrint Report
An anonymous credential (AC) system with partial disclosure allows users to prove possession of a credential issued by an issuer while selectively disclosing a subset of their attributes to a verifier in a privacy-preserving manner. In keyed-verification AC (KVAC) systems, the issuer and verifier share a secret key. Existing KVAC schemes rely on computationally expensive zero-knowledge proofs during credential presentation, with the presentation size growing linearly with the number of attributes. In this work, we propose two highly efficient KVAC constructions that eliminate the need for zero-knowledge proofs during the credential presentation and achieve constant-size presentations.
Our first construction adapts the approach of Fuchsbauer et al. (JoC'19), which achieved constant-size credential presentation in a publicly verifiable setting using their proposed structure-preserving signatures on equivalence classes (SPS-EQ) and set commitment schemes, to the KVAC setting. We introduce structure-preserving message authentication codes on equivalence classes (SP-MAC-EQ) and designated-verifier set commitments (DVSC), resulting in a KVAC system with constant-size credentials (2 group elements) and presentations (4 group elements). To avoid the bilinear groups and pairing operations required by SP-MAC-EQ, our second construction uses a homomorphic MAC with a simplified DVSC. While this sacrifices constant-size credentials ($n+2$ group elements, where $n$ is the number of attributes), it retains constant-size presentations (2 group elements) in a pairingless setting.
We formally prove the security of both constructions and provide open-source implementation results demonstrating their practicality. We extensively benchmarked our KVAC protocols and, additionally, benchmarked the efficiency of our SP-MAC-EQ scheme against the original SPS-EQ scheme, showcasing significant performance improvements.
Keitaro Hashimoto, Shuichi Katsumata, Thom Wiggers
ePrint Report
The Signal protocol relies on a special handshake protocol, formerly X3DH and now PQXDH, to set up secure conversations. Prior analyses of these protocols (or of proposals for post-quantum alternatives) have all used models highly tailored to the individual protocols and generally made ad-hoc adaptations to "standard" AKE definitions, making the concrete security attained unclear and hard to compare between similar protocols. Indeed, we observe that some natural Signal handshake protocols cannot be handled by these tailored models. In this work, we introduce Bundled Authenticated Key Exchange (BAKE), a concrete treatment of the Signal handshake protocol. We formally model prekey bundles and states, enabling us to define various levels of security in a unified model. We analyze Signal's classically secure X3DH and harvest-now-decrypt-later-secure PQXDH, and show that they do not achieve what we call optimal security (as is documented). Next, we introduce RingXKEM, a fully post-quantum Signal handshake protocol achieving optimal security; as RingXKEM shares states among many prekey bundles, it could not have been captured by prior models. Lastly, we provide a security and efficiency comparison of X3DH, PQXDH, and RingXKEM.
VDORAM: Towards a Random Access Machine with Both Public Verifiability and Distributed Obliviousness
Huayi Qi, Minghui Xu, Xiaohua Jia, Xiuzhen Cheng
ePrint Report
Verifiable random access machines (vRAMs) serve as a foundational model for expressing complex computations with provable security guarantees, supporting applications in areas such as secure electronic voting, financial auditing, and privacy-preserving smart contracts. However, no existing vRAM provides distributed obliviousness, a critical need in scenarios where multiple provers seek to prevent disclosure against both other provers and the verifiers.
Implementing a publicly verifiable distributed oblivious RAM (VDORAM) presents several challenges. Firstly, the development of VDORAM is hindered by the limited availability of sophisticated publicly verifiable multi-party computation (MPC) protocols. Secondly, the lack of readily available front-end implementations for multi-prover zero-knowledge proofs (ZKPs) poses a significant obstacle to developing practical applications. Finally, directly adapting existing RAM designs to the VDORAM paradigm may prove either impractical or inefficient due to the inherent complexities of reconciling oblivious computation with the generation of publicly verifiable proofs.
To address these challenges, we introduce CompatCircuit, the first multi-prover ZKP front-end implementation to our knowledge. CompatCircuit integrates collaborative zkSNARKs to implement publicly verifiable MPC protocols with rich functionalities beyond those of an arithmetic circuit, enabling the development of multi-prover ZKP applications. Building upon CompatCircuit, we present VDORAM, the first publicly verifiable distributed oblivious RAM. By combining distributed oblivious architectures with verifiable RAM, VDORAM achieves an efficient RAM design that balances communication overhead and proof generation time. We have implemented CompatCircuit and VDORAM in approximately 15,000 lines of code, demonstrating usability with a practical and efficient implementation. Our performance evaluation reveals that the system provides moderate performance even with distributed obliviousness.
Zhongtang Luo, Yanxue Jia, Alejandra Victoria Ospina Gracia, Aniket Kate
ePrint Report
Stateless blockchain designs have emerged to address the challenge of growing blockchain size by utilizing succinct global states. Previous works have developed vector commitments that support proof updates and aggregation to be used as such states. However, maintaining proofs for multiple users still demands significant computational resources, particularly in updating proofs with every transaction. This paper introduces Cauchyproofs, a batch-updatable vector commitment that enables proof-serving nodes to efficiently update proofs in quasi-linear time relative to the number of users and transactions, utilizing an optimized KZG scheme to achieve complexity $O\left(\left(\left|\vec{\alpha}\right| + \left|\vec{\beta}\right|\right) \log^2 (\left|\vec{\alpha}\right| + \left|\vec{\beta}\right|)\right)$, compared to previous $O\left(\left|\vec{\alpha}\right|\cdot|\vec{\beta}|\right)$ approaches. This advancement reduces the computational burden on proof-serving nodes, allowing for efficient proof maintenance across large user groups. We demonstrate that our approach is approximately five times faster than the naive approach at Ethereum-level block sizes. Additionally, we present a novel matrix representation of KZG proofs utilizing Cauchy matrices, enabling faster all-proof computations with fewer elliptic curve operations. Finally, we propose an algorithm for historical proof queries, supporting retrospective proof generation with high efficiency. Our contributions substantially enhance the scalability and practicality of proof-serving nodes in stateless blockchain frameworks.
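To get a feel for the asymptotic gap, one can compare raw operation counts from the two complexity expressions; constants and lower-order terms are ignored, and the sizes below are arbitrary illustrations rather than the paper's benchmarks.

```python
from math import log2

def quasilinear(a: int, b: int) -> float:
    return (a + b) * log2(a + b) ** 2   # Cauchyproofs: (|a|+|b|) log^2(|a|+|b|)

def quadratic(a: int, b: int) -> float:
    return a * b                        # prior approaches: |a| * |b|

# Hypothetical: ~1,000 transactions per block against growing user sets.
for users in (10**4, 10**5, 10**6):
    speedup = quadratic(users, 1000) / quasilinear(users, 1000)
    print(f"{users} users: {speedup:.1f}x fewer operations (asymptotically)")
```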