IACR News
Here you can see all recent updates to the IACR webpage.
08 November 2024
Brian Koziel, S. Dov Gordon, Craig Gentry
ePrint Report
We present a new construction of two-party, threshold ECDSA, building on a 2017 scheme of Lindell and improving his scheme in several ways.
ECDSA signing is notoriously hard to distribute securely, due to non-linearities in the signing function. Lindell's scheme uses Paillier encryption to encrypt one party's key share and handle these non-linearities homomorphically, while elegantly avoiding any expensive zero knowledge proofs over the Paillier group during the signing process. However, the scheme pushes that complexity into key generation. Moreover, avoiding ZK proofs about Paillier ciphertexts during signing comes with a steep price -- namely, the scheme requires a ``global abort'' when a malformed ciphertext is detected, after which an entirely new key must be generated.
We overcome all of these issues with a proactive Refresh procedure. Since the Paillier decryption key is part of the secret that must be proactively refreshed, our first improvement is to radically accelerate key generation by replacing one of Lindell's ZK proofs -- which requires 80 Paillier ciphertexts for statistical security $2^{-40}$ -- with a much faster "weak" proof that requires only 2 Paillier ciphertexts, and which proves a weaker statement about a Paillier ciphertext that we show is sufficient in the context of our scheme. Secondly, our more efficient key generation procedure also makes frequent proactive Refreshes practical. Finally, we show that adding noise to one party's key share suffices to avoid the need to reset the public verification key when certain bad behavior is detected. Instead, we prove that our Refresh procedure, performed after each detection, suffices for addressing the attack, allowing the system to continue functioning without disruption to applications that rely on the verification key.
Our scheme is also very efficient, competitive with the best constructions that do not provide proactive security, and state-of-the-art among the few results that do. Our optimizations to ECDSA key generation speed up runtime and improve bandwidth over Lindell's key generation by factors of 7 and 13, respectively. Our Key Generation protocol requires 20% less bandwidth than existing constructions, completes in only 3 protocol messages, and executes much faster than all but OT-based key generation. For ECDSA signing, our extra Refresh protocol does add a 10× latency and 5× bandwidth overhead compared to Lindell. However, this still fits within 150 ms of runtime and about 5.4 KB of messages when run in our AWS cluster benchmark.
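For background, the Paillier property the protocol exploits is additive homomorphism: ciphertexts can be added and scaled under encryption, which is what lets one party fold its key share into the other's ciphertext without decrypting. A minimal sketch of just this property (tiny, deliberately insecure parameters of our choosing; this is not the paper's protocol):

```python
# Toy Paillier encryption illustrating the additive homomorphism that
# Lindell-style two-party ECDSA relies on. Parameters are tiny and
# insecure; real schemes use ~1024-bit primes.
import math, random

p, q = 1789, 1867                      # toy primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)           # Carmichael function lambda(n)
g = n + 1                              # standard choice of generator

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)    # precomputed decryption constant

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: Enc(a) * Enc(b) = Enc(a + b), and
# Enc(a)^k = Enc(k * a). This is what lets a party fold its secret
# ECDSA key share into the other party's ciphertext without decrypting.
a, b, k = 123, 456, 7
assert dec(enc(a) * enc(b) % n2) == (a + b) % n
assert dec(pow(enc(a), k, n2)) == (k * a) % n
print("homomorphic addition and scalar multiplication verified")
```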
Peter Gaži, Zahra Motaqy, Alexander Russell
ePrint Report
The GHOST protocol has been proposed as an improvement to the Nakamoto consensus mechanism that underlies Bitcoin. In contrast to the Nakamoto fork-choice rule, the GHOST rule justifies selection of a chain with weights computed over subtrees rather than individual paths. This mechanism has been adopted by a variety of consensus protocols, and is a part of the currently deployed protocol supporting Ethereum.
We establish an exact characterization of the security region of the GHOST protocol, identifying the relationship between the rate of honest block production, the rate of adversarial block production, and network delays that guarantee that the protocol reaches consensus. In contrast to the closely related Nakamoto consensus protocol, we find that the region depends on the convention used by the protocol for tiebreaking; we establish tight results for both adversarial tiebreaking, in which ties are broken adversarially in order to frustrate consensus, and deterministic tiebreaking, in which ties between pairs of blocks are broken consistently throughout an execution. We provide explicit attacks for both conventions which stall consensus outside of the security region.
We conclude that the security region of GHOST can be strictly improved by incorporating a tiebreaking mechanism; in either case, however, the final security region is inferior to that of Nakamoto consensus.
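For intuition, here is a minimal sketch of the GHOST fork-choice rule on a toy block tree (block ids and the smallest-id deterministic tiebreak are our illustrative choices); note how the heaviest-subtree chain can diverge from the longest chain:

```python
# Toy GHOST fork-choice: descend from genesis into the child whose
# subtree holds the most blocks; ties broken by smallest block id.
children = {                      # block id -> child block ids
    "G": ["A", "B"],
    "A": ["A1"], "A1": ["A2"], "A2": [],
    "B": ["B1", "B2", "B3"], "B1": [], "B2": [], "B3": [],
}

def subtree_weight(b):
    return 1 + sum(subtree_weight(c) for c in children[b])

def ghost_chain(root="G"):
    chain = [root]
    while children[chain[-1]]:
        nxt = min(children[chain[-1]], key=lambda c: (-subtree_weight(c), c))
        chain.append(nxt)
    return chain

# The longest-chain (Nakamoto) rule would pick G-A-A1-A2, but B's
# subtree is heavier (4 blocks vs 3), so GHOST follows B.
print(ghost_chain())              # ['G', 'B', 'B1']
```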
Kaniuar Bacho, Alexander Kulpe, Giulio Malavolta, Simon Schmidt, Michael Walter
ePrint Report
A recent work of Kalai et al. (STOC 2023) shows how to compile any multi-player nonlocal game into a protocol with a single computationally-bounded prover. Subsequent works have built on this to develop new cryptographic protocols, where a completely classical client can verify the validity of quantum computation done by a quantum server. Their compiler relies on the existence of quantum fully-homomorphic encryption.
In this work, we propose a new compiler for transforming nonlocal games into single-prover protocols. Our compiler is based on the framework of measurement-based quantum computation. It can be instantiated assuming the existence of \emph{any} trapdoor function that satisfies the claw-freeness property. Leveraging results by Natarajan and Zhang (FOCS 2023) on compiled nonlocal games, our work implies the existence of new protocols to classically verify quantum computation from potentially weaker computational assumptions than previously known.
Peizhou Gan, Prasanna Ravi, Kamal Raj, Anubhab Baksi, Anupam Chattopadhyay
ePrint Report
In this work, we propose the first hardware implementation of Classic McEliece protected with countermeasures against Side-Channel Attacks (SCA) and Fault Injection Attacks (FIA). Classic McEliece is one of the leading candidates for Key Encapsulation Mechanisms (KEMs) in the ongoing round 4 of the NIST standardization process for post-quantum cryptography. In particular, we implement a range of generic countermeasures against SCA and FIA, protecting vulnerable operations such as the additive Fast Fourier Transform (FFT) and Gaussian elimination that have been targeted by prior SCA and FIA attacks. We also perform a detailed SCA evaluation demonstrating no leakage even with 100,000 traces (an improvement of more than 100× in the number of traces compared to the unprotected implementation). This comes at a modest total area overhead of between 4× and 7×, depending on the type of implemented SCA countermeasure. Furthermore, we present a thorough ASIC benchmark of the SCA- and FIA-protected Classic McEliece design.
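As background on generic SCA countermeasures of the kind discussed here, a toy sketch of first-order Boolean masking (our illustration; the paper's hardware gadgets may differ): a secret is split into random shares so that no single intermediate value correlates with it, and linear operations are computed share-wise.

```python
# Minimal first-order Boolean masking sketch. Classic McEliece works
# over GF(2^13), so we mask 13-bit values; XOR is linear over GF(2),
# so it can be applied share-wise without ever recombining the secret.
import secrets

def mask(x, bits=13):
    r = secrets.randbits(bits)
    return r, x ^ r                   # (share0, share1); share0 ^ share1 == x

def masked_xor(a, b):
    return a[0] ^ b[0], a[1] ^ b[1]   # operate on shares independently

x, y = 0x1A35, 0x0FC2
mx, my = mask(x), mask(y)
mz = masked_xor(mx, my)
assert mz[0] ^ mz[1] == x ^ y
print("masked XOR matches unmasked result")
```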
Xander Pottier, Thomas de Ruijter, Jonas Bertels, Wouter Legiest, Michiel Van Beirendonck, Ingrid Verbauwhede
ePrint Report
The Multi-Scalar Multiplication (MSM) is the main barrier to accelerating Zero-Knowledge applications. In recent years, hardware acceleration of this algorithm on both FPGA and GPU has become a popular research topic and the subject of a multi-million dollar prize competition (ZPrize). This work presents OPTIMSM: Optimized Processing Through Iterative Multi-Scalar Multiplication. This novel accelerator focuses on the acceleration of the MSM algorithm for any Elliptic Curve (EC) by improving upon the Pippenger algorithm. A new iteration technique is introduced to decouple the required buckets from the window size, resulting in fewer EC computations for the same on-chip memory resources. Furthermore, we combine known optimizations from the literature for the first time to achieve additional latency improvements. Our enhanced MSM implementation significantly reduces computation time, achieving a speedup of up to $12.77\times$ compared to recent FPGA implementations. Specifically, for the BLS12-381 curve, we reduce the computation time for an MSM of size $2^{24}$ to 914 ms using a single compute unit on the U55C FPGA or to 231 ms using four U55C devices. These results indicate a substantial improvement in efficiency, paving the way for more scalable and efficient Zero-Knowledge proof systems.
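For readers unfamiliar with the baseline, here is a minimal sketch of the Pippenger bucket method that OPTIMSM improves upon, with integers standing in for elliptic-curve points (window size and values are illustrative):

```python
# Pippenger's bucket method: scalars are split into c-bit windows;
# within each window, points are accumulated into buckets indexed by
# the window digit, buckets are combined with a running sum, and
# windows are folded together Horner-style. Integers stand in for EC
# points (both form additive groups).
def pippenger_msm(scalars, points, c=4):
    windows = (max(scalars).bit_length() + c - 1) // c
    total = 0
    for w in reversed(range(windows)):
        buckets = [0] * (1 << c)                 # bucket 0 stays unused
        for s, P in zip(scalars, points):
            digit = (s >> (w * c)) & ((1 << c) - 1)
            buckets[digit] += P                  # EC point addition in reality
        running, window_sum = 0, 0
        for b in reversed(range(1, 1 << c)):     # computes sum_b b * buckets[b]
            running += buckets[b]
            window_sum += running
        total = (total << c) + window_sum        # c doublings, then add
    return total

scalars = [5, 11, 252, 31]
points  = [7, 13, 2, 19]
assert pippenger_msm(scalars, points) == sum(s * p for s, p in zip(scalars, points))
print("bucket MSM matches naive sum")
```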
Alexander Poremba, Seyoon Ragavan, Vinod Vaikuntanathan
ePrint Report
The no-cloning principle has played a foundational role in quantum information and cryptography. Following a long-standing tradition of studying quantum mechanical phenomena through the lens of interactive games, Broadbent and Lord (TQC 2020) formalized cloning games in order to quantitatively capture no-cloning in the context of unclonable encryption schemes.
The conceptual contribution of this paper is the new, natural, notion of Haar cloning games together with two applications. In the area of black-hole physics, our game reveals that, in an idealized model of a black hole which features Haar random (or pseudorandom) scrambling dynamics, the information from infalling entangled qubits can only be recovered from either the interior or the exterior of the black hole---but never from both places at the same time. In the area of quantum cryptography, our game helps us construct succinct unclonable encryption schemes from the existence of pseudorandom unitaries, thereby, for the first time, bridging the gap between ``MicroCrypt'' and unclonable cryptography. The technical contribution of this work is a tight analysis of Haar cloning games which requires us to overcome many long-standing barriers in our understanding of cloning games:
1. Are there cloning games which admit no non-trivial winning strategies? Resolving this particular question turns out to be crucial for our application to black-hole physics. Existing work analyzing the $n$-qubit BB84 game and the subspace coset game only achieves the bounds of $2^{-0.228n}$ and $2^{-0.114n+o(n)}$, respectively, while the trivial adversarial strategy wins with probability $2^{-n}$. We show that the Haar cloning game is the hardest cloning game, by demonstrating a worst-case to average-case reduction for a large class of games which we refer to as oracular cloning games. We then show that the Haar cloning game admits no non-trivial winning strategies.

2. All existing works analyze $1\mapsto 2$ cloning games; can we prove bounds on $t\mapsto t+1$ games for large $t$? Such bounds are crucial in our application to unclonable cryptography. Unfortunately, the BB84 game is not even $2\mapsto 3$ secure, and the subspace coset game is not $t\mapsto t+1$ secure for a polynomially large $t$. We show that the Haar cloning game is $t\mapsto t+1$ secure provided that $t = o(\log n / \log \log n)$, and we conjecture that this holds for $t$ that is polynomially large (in $n$).
Answering these questions provably requires us to go beyond existing methods (Tomamichel, Fehr, Kaniewski and Wehner, New Journal of Physics 2013). In particular, we show a new technique for analyzing cloning games with respect to binary phase states through the lens of binary subtypes, and combine it with novel bounds on the operator norms of block-wise tensor products of matrices.
Vineet Nair, Ashish Sharma, Bhargav Thankey
ePrint Report
We propose a Polynomial Commitment Scheme (PCS), called BrakingBase, which allows a prover to commit to multilinear (or univariate) polynomials with $n$ coefficients in $O(n)$ time. The evaluation protocol of BrakingBase operates with an $O(n)$ time-complexity for the prover, while the verifier time-complexity and proof-complexity are $O(\lambda \log^2 n)$, where $\lambda$ is the security parameter. Notably, BrakingBase is field-agnostic, meaning it can be instantiated over any field of sufficiently large size. Additionally, BrakingBase can be combined with the Polynomial Interactive Oracle Proof (PIOP) from Spartan (Crypto 2020) to yield a Succinct Non-interactive ARgument of Knowledge (SNARK) with a linear-time prover, as well as poly-logarithmic complexity for both the verifier runtime and the proof size. We obtain our PCS by combining the Brakedown and Basefold PCS. The commitment protocol of BrakingBase is similar to that of Brakedown. The evaluation protocol of BrakingBase improves upon Brakedown’s verifier work by reducing it through multiple instances of the sum-check protocol. Basefold PCS is employed to commit to and later evaluate the multilinear extension (MLE) of the witnesses involved in the sum-check protocol at random points. This includes the MLE corresponding to the parity-check matrix of the linear-time encodable code used in Brakedown. We show that this matrix is sparse and use the Spark compiler from Spartan to evaluate its multilinear extension at a random point. We implement BrakingBase and compare its performance to Brakedown and Basefold over a 128-bit prime field.
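Since BrakingBase leans on repeated sum-check invocations to cut verifier work, a toy cleartext sum-check for a multilinear polynomial may help fix ideas (the field size, variable ordering, and representation are our illustrative choices):

```python
# Toy interactive sum-check: the prover claims H = sum of f over
# {0,1}^n; in each round it sends a degree-1 polynomial in the current
# variable (two evaluations suffice), the verifier checks consistency
# and sends a random challenge. f is multilinear, stored as its
# evaluations on the Boolean hypercube.
import random

P = 2**61 - 1                                # an illustrative prime field

def sumcheck(evals, n):
    claim = sum(evals) % P
    cur = evals[:]
    for _ in range(n):
        g0 = sum(cur[0::2]) % P              # g(0): terms with current var = 0
        g1 = sum(cur[1::2]) % P              # g(1): terms with current var = 1
        assert (g0 + g1) % P == claim        # verifier's round check
        r = random.randrange(P)
        claim = (g0 * (1 - r) + g1 * r) % P  # g(r), since deg(g) <= 1
        cur = [(a * (1 - r) + b * r) % P for a, b in zip(cur[0::2], cur[1::2])]
    assert claim == cur[0]                   # final oracle check: f(r) == claim
    return True

f = [random.randrange(P) for _ in range(8)]  # f on {0,1}^3
print("sum-check accepted:", sumcheck(f, 3))
```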
Yuyin Yu, Yanbin Zheng, Yongqiang Li, Jingang Liu
ePrint Report
We establish a one-to-one correspondence between Dembowski-Ostrom (DO) polynomials and upper triangular matrices. Based on this correspondence, we give a bijection between DO permutation polynomials and a special class of upper triangular matrices, and construct a new batch of DO permutation polynomials. To the best of our knowledge, almost all other known DO permutation polynomials live in finite fields $\mathbb{F}_{2^n}$ where $n$ contains odd factors (see Table 1). However, there are no restrictions on $n$ in our results; in particular, the case $n=2^m$ has not been studied in the literature. For example, we provide a simple necessary and sufficient condition to determine when $\gamma\, Tr(\theta_{i}x)Tr(\theta_{j}x) + x$ is a DO permutation polynomial. In addition, when the upper triangular matrix degenerates into a diagonal matrix and the elements on the main diagonal form a basis of $\mathbb{F}_{q^{n}}$ over $\mathbb{F}_{q}$, this diagonal matrix corresponds to all linearized permutation polynomials. In summary, we construct several new DO permutation polynomials, and our results can be viewed as an extension of linearized permutation polynomials.
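To make the stated correspondence concrete: a DO polynomial over $\mathbb{F}_{q^n}$ has the shape $\sum_{0 \le i \le j < n} a_{ij}\, x^{q^i + q^j}$, so collecting the coefficient $a_{ij}$ in entry $(i,j)$ plausibly yields the upper triangular matrix in question. A toy round trip of that encoding (integer stand-ins for field elements; the indexing conventions are ours, not necessarily the paper's):

```python
# Encode a DO polynomial, given as {(i, j): a_ij} with i <= j, as an
# n x n upper triangular matrix and back. Coefficients are integers
# standing in for elements of F_{q^n}; purely illustrative.
n = 4

def do_to_matrix(coeffs):
    A = [[0] * n for _ in range(n)]
    for (i, j), a in coeffs.items():
        assert i <= j, "DO monomials x^(q^i + q^j) are indexed with i <= j"
        A[i][j] = a
    return A

def matrix_to_do(A):                 # inverse direction: read off monomials
    return {(i, j): A[i][j] for i in range(n) for j in range(i, n) if A[i][j]}

poly = {(0, 0): 3, (0, 2): 5, (1, 3): 7}   # 3x^2 + 5x^(q^2+1) + 7x^(q^3+q)
assert matrix_to_do(do_to_matrix(poly)) == poly
print("DO polynomial <-> upper triangular matrix round trip OK")
```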
Juan Garay, Yun Lu, Julien Prat, Brady Testa, Vassilis Zikas
ePrint Report
As the first proof-of-work (PoW) permissionless blockchain, Bitcoin aims at maintaining a decentralized yet consistent transaction ledger as protocol participants (“miners”) join and leave as they please. This is achieved by means of a subtle PoW difficulty adjustment mechanism that adapts to the perceived block generation rate, and important steps have been taken in previous work to provide a rigorous analysis of the conditions (such as bounds on dynamic participation) that are sufficient for Bitcoin’s security properties to be ascertained.
Such existing analysis, however, is property-based, and as such only guarantees security when the protocol is run $\textbf{in isolation}$. In this paper we present the first (to our knowledge) simulation-based analysis of the Bitcoin ledger in the dynamic setting where it operates, and show that the protocol abstraction known as the Bitcoin backbone protocol emulates, under certain participation restrictions, Bitcoin’s intended specification. Our formulation and analysis extend the existing Universally Composable treatment for the fixed-difficulty setting, and develop techniques that might be of broader applicability, in particular to other composable formulations of blockchain protocols that rely on difficulty adjustment.
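For context, the difficulty adjustment being modeled follows Bitcoin's retargeting rule: every 2016 blocks, the target is rescaled by the ratio of observed to expected epoch duration, clamped to a factor of 4. A sketch (the constants are Bitcoin's; the starting target value is illustrative):

```python
# Bitcoin-style difficulty retargeting: keep blocks arriving every
# ~10 minutes by rescaling the PoW target once per 2016-block epoch.
EPOCH_BLOCKS = 2016
TARGET_SPACING = 600                      # seconds per block
EXPECTED_TIME = EPOCH_BLOCKS * TARGET_SPACING

def retarget(old_target, actual_epoch_seconds):
    # Larger target = easier puzzle; slow epochs raise the target.
    # Bitcoin clamps the observed timespan to [expected/4, expected*4].
    actual = max(EXPECTED_TIME // 4, min(EXPECTED_TIME * 4, actual_epoch_seconds))
    return old_target * actual // EXPECTED_TIME

t = 1 << 224                              # illustrative starting target
# Blocks arrived twice as fast as intended -> the target halves.
assert retarget(t, EXPECTED_TIME // 2) == t // 2
print(hex(retarget(t, EXPECTED_TIME // 2)))
```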
Alper Çakan, Vipul Goyal, Takashi Yamakawa
ePrint Report
Quantum information allows us to build quantum money schemes, where a bank can issue banknotes in the form of authenticatable quantum states that cannot be cloned or counterfeited: a user in possession of k banknotes cannot produce k+1 banknotes. Similar to paper banknotes, in existing quantum money schemes, a banknote consists of an unclonable quantum state and a classical serial number, signed by the bank. Thus, they lack one of the most fundamental properties cryptographers look for in a currency scheme: privacy. In this work, we first further develop the formal definitions of privacy for quantum money schemes. Then, we construct the first public-key quantum money schemes that satisfy these security notions. Namely,

• Assuming the existence of indistinguishability obfuscation and the hardness of Learning with Errors, we construct a public-key quantum money scheme with anonymity against users and traceability by authorities.

Since it is a policy choice whether authorities should be able to track banknotes or not, we also construct an untraceable money scheme, where no one (not even the authorities) can track banknotes.

• Assuming the existence of indistinguishability obfuscation and the hardness of Learning with Errors, we construct a public-key quantum money scheme with untraceability.

Further, we show that the no-cloning principle, a result of quantum mechanics, allows us to construct schemes, with security guarantees that are classically impossible, for a seemingly unrelated application: voting!

• Assuming the existence of indistinguishability obfuscation and the hardness of Learning with Errors, we construct a universally verifiable quantum voting scheme with classical votes.

Finally, as a technical tool, we introduce the notion of publicly rerandomizable encryption with strong correctness, where no adversary is able to produce a malicious ciphertext and a malicious random tape such that the ciphertext before and after rerandomization (with the malicious tape) decrypts to different values! We believe this might be of independent interest.

• Assuming the (quantum) hardness of Learning with Errors, we construct a (post-quantum) classical publicly rerandomizable encryption scheme with strong correctness.
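For intuition about public rerandomization (though not about the strong-correctness property itself, and not post-quantum): classic ElGamal already admits it, since anyone holding only the public key can refresh a ciphertext's randomness. A toy sketch with insecure parameters of our choosing:

```python
# Publicly rerandomizable encryption, ElGamal style: multiply in a
# fresh encryption of the identity using only pk. Toy group; insecure.
import random

p = 2**127 - 1                    # Mersenne prime; illustrative only
g = 3

sk = random.randrange(2, p - 1)
pk = pow(g, sk, p)

def enc(m):                       # m is encoded as a group element
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def rerandomize(c):               # public operation: needs pk, not sk
    r = random.randrange(2, p - 1)
    c1, c2 = c
    return (c1 * pow(g, r, p)) % p, (c2 * pow(pk, r, p)) % p

def dec(c):
    c1, c2 = c
    return (c2 * pow(c1, p - 1 - sk, p)) % p   # c2 / c1^sk

m = pow(g, 12345, p)              # a message in the group
c = enc(m)
assert dec(rerandomize(c)) == dec(c) == m
print("rerandomized ciphertext decrypts to the same message")
```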
Jianan Su, Laasya Bangalore, Harel Berger, Jason Yi, Alivia Castor, Micah Sherr, Muthuramakrishnan Venkitasubramaniam
ePrint Report
Secure aggregation is the distributed task of securely computing a sum of values (or a vector of values) held by a set of parties, revealing only the output (i.e., the sum) of the computation. Existing protocols, such as Prio (NSDI’17), Prio+ (SCN’22), Elsa (S&P’23), and Whisper (S&P’24), support secure aggregation with input validation to ensure inputs belong to a specified domain. However, when malicious servers are present, these protocols primarily guarantee privacy but not input validity. Also, malicious server(s) can cause the protocol to abort. We introduce SCIF, a novel multi-server secure aggregation protocol with input validation that remains secure even in the presence of malicious actors, provided fewer than one-third of the servers are malicious. Our protocol overcomes previous limitations by providing two key properties: (1) guaranteed output delivery, ensuring malicious parties cannot prevent the protocol from completing, and (2) guaranteed input inclusion, ensuring no malicious party can prevent an honest party’s input from being included in the computation. Together, these guarantees provide strong resilience against denial-of-service attacks. Moreover, SCIF offers these guarantees without increasing client costs over Prio and keeps server costs moderate. We present a robust end-to-end implementation of SCIF and demonstrate the ease with which it can be instrumented by integrating it into a simulated Tor network for privacy-preserving measurement.
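A minimal cleartext sketch of the additive-sharing core on which Prio-style multi-server aggregation rests (the input-validation and robustness machinery that is the paper's actual contribution is omitted; all parameters are illustrative):

```python
# Each client splits its value into per-server shares that sum to the
# value mod P; any proper subset of shares is uniformly random, yet the
# per-server totals reveal exactly the aggregate.
import random

P = 2**61 - 1
N_SERVERS = 4

def share(x):
    shares = [random.randrange(P) for _ in range(N_SERVERS - 1)]
    shares.append((x - sum(shares)) % P)   # shares sum to x mod P
    return shares

clients = [3, 14, 15, 92, 65]
server_accum = [0] * N_SERVERS
for x in clients:
    for i, s in enumerate(share(x)):
        server_accum[i] = (server_accum[i] + s) % P   # each server adds locally

assert sum(server_accum) % P == sum(clients) % P
print("aggregate recovered:", sum(server_accum) % P)
```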
James Bartusek, Dakshita Khurana
ePrint Report
We put forth Oblivious State Preparation (OSP) as a cryptographic primitive that unifies techniques developed in the context of a quantum server interacting with a classical client. OSP allows a classical polynomial-time sender to input a choice of one out of two public observables, and a quantum polynomial-time receiver to recover an eigenstate of the corresponding observable -- while keeping the sender's choice hidden from any malicious receiver.
We obtain the following results:
- The existence of (plain) trapdoor claw-free functions implies OSP, and the existence of dual-mode trapdoor claw-free functions implies round-optimal (two-round) OSP.
- OSP implies the existence of proofs of quantumness, tests of a qubit, blind classical delegation of quantum computation, and classical verification of quantum computation.
- Two-round OSP implies quantum money with classical communication, classically-verifiable position verification, and (additionally assuming classical FHE with log-depth decryption) quantum FHE.
Thus, the OSP abstraction helps separate the cryptographic layer from the information-theoretic layer when building cryptosystems across classical and quantum participants. Indeed, several of the aforementioned applications were previously only known via tailored LWE-based constructions, whereas our OSP-based constructions yield new results from a wider variety of assumptions, including hard problems on cryptographic group actions.
Finally, towards understanding the minimal hardness assumptions required to realize OSP, we prove the following:
- OSP implies oblivious transfer between one classical and one quantum party.
- Two-round OSP implies public-key encryption with classical keys and ciphertexts.
In particular, these results help to ``explain'' the use of public-key cryptography in the known approaches to establishing a ``classical leash'' on a quantum server. For example, combined with a result of Austrin et al. (CRYPTO 2022), we conclude that perfectly-correct OSP cannot exist unconditionally in the (quantum) random oracle model.
Devon Tuma, Nicholas Hopper
ePrint Report
As cryptographic protocols continue to become more complex and specialized, their security proofs have grown more complex as well, making manual verification of their correctness more difficult. Formal verification via proof assistants has become a popular approach to solving this, by allowing researchers to write security proofs that can be verified correct by a computer.
In this paper we present a new framework of this kind for verifying security proofs, taking a foundational approach to representing and reasoning about protocols. We implement our framework in the Lean programming language, and give a number of security proofs to demonstrate that our system is both powerful and usable, with comparable automation to similar systems.
Our framework is especially focused on reasoning about and manipulating oracle access, and we demonstrate the usefulness of this approach by implementing both a general forking lemma and a version of the Fiat-Shamir transform for sigma protocols. As a simple case study we then instantiate these to an implementation of a Schnorr-like signature scheme.
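As a plain-Python companion to the case study (the paper's artifact is Lean, and this sketch only mirrors the textbook math with toy, insecure parameters): a Schnorr-style signature obtained by applying the Fiat-Shamir transform to the sigma protocol.

```python
# Textbook Schnorr via Fiat-Shamir: the verifier's random challenge is
# replaced by a hash of the transcript. Toy group; not secure.
import hashlib, random

p = 2**127 - 1          # toy prime modulus
q = p - 1               # toy exponent order (real Schnorr uses a prime q)
g = 3

sk = random.randrange(1, q)
pk = pow(g, sk, p)

def H(*parts):          # Fiat-Shamir challenge = hash of transcript
    h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(h, "big") % q

def sign(m):
    k = random.randrange(1, q)          # sigma-protocol commitment
    R = pow(g, k, p)
    e = H(pk, R, m)                     # replaces the verifier's challenge
    s = (k + e * sk) % q                # response
    return R, s

def verify(m, sig):
    R, s = sig
    return pow(g, s, p) == (R * pow(pk, H(pk, R, m), p)) % p

sig = sign("hello")
assert verify("hello", sig) and not verify("tampered", sig)
print("Schnorr-via-Fiat-Shamir verifies")
```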
Thomas Aulbach, Fabio Campos, Juliane Krämer
ePrint Report
Multivariate cryptography currently centres mostly around UOV-based signature schemes: all multivariate round 2 candidates in NIST's selection process for additional digital signatures, namely MAYO, QR-UOV, SNOVA, and UOV itself, are either UOV or close variations of it. Schemes that were once the focus of the multivariate research community but are broken by now, like Rainbow and LUOV, are also based on UOV. Both UOV and the schemes based on it have been frequently analyzed regarding their physical security in the course of the NIST process. However, a comprehensive analysis regarding the physical security of UOV-based signature schemes is still missing.
In this work, we bridge this gap and create a comprehensive overview of physical attacks on UOV and its variants from the second round of NIST’s selection process for additional post-quantum signature schemes, which has just started. First, we collect all existing side-channel and fault attacks on UOV-based schemes and transfer them to the current UOV specification. Since UOV was subject to significant changes over the past few years, e.g., adaptations to the expanded secret key, some attacks need to be reassessed. Next, we introduce new physical attacks in order to obtain an overview that is as complete as possible. We then show how all these attacks would translate to MAYO, QR-UOV, and SNOVA. To improve the resistance of UOV-based signature schemes to physical attacks, we discuss and introduce dedicated countermeasures. As a related result, we observe that certain implementation decisions, like key compression techniques and randomization choices, also have a large impact on physical security, in particular on the effectiveness of the considered fault attacks. Finally, we provide implementations of UOV and MAYO for the ARM Cortex-M4 architecture that feature first-order masking and protection against selected fault attacks. We benchmark the resulting overhead on a NUCLEO-L4R5ZI board and validate our approach by performing a TVLA on original and protected subroutines, yielding significantly smaller t-values for the latter.
Kamal Raj, Prasanna Ravi, Tee Kiah Chia, Anupam Chattopadhyay
ePrint Report
We present a protected hardware implementation of the Module-Lattice-Based Digital Signature Standard (ML-DSA). ML-DSA is an extension of Dilithium 3.1, the winner of the NIST Post-Quantum Cryptography (PQC) competition in the digital signature category. The proposed design is based on the existing high-performance Dilithium 3.1 design. We implement in hardware the existing Dilithium masking gadgets, which had previously been realized only in software. The masking gadgets are integrated with the unprotected ML-DSA design, and the complete design is functionally verified against the Known Answer Tests (KATs) generated from an updated ML-DSA software implementation. We also present practical power side-channel experimental results, implementing the masking gadgets on a standard side-channel evaluation FPGA board and collecting up to 1 million power traces. The protected design incurs overheads of 1.127× in LUTs, 1.2× in flip-flops, and 378× in execution time compared to the unprotected design. The experimental results show that it resists power side-channel attacks.
Ritul Satish, Alfred Daimari, Argha Chakrabarty, Kahaan Shah, Debayan Gupta
ePrint Report
Remote Keyless Entry (RKE) systems are ubiquitous in modern day automobiles, providing convenience for vehicle owners, occasionally at the cost of security. Most automobile companies have proprietary implementations of RKE; these are sometimes built on insecure algorithms and authentication mechanisms. This paper presents a comprehensive study conducted on the RKE systems of multiple cars from four automobile manufacturers not previously explored.

Specifically, we analyze the design, implementation, and security levels of 7 different cars manufactured by Honda, Maruti-Suzuki, Toyota, and Mahindra. We also do a deep dive into the RKE system of a particular Honda model. We evaluate the susceptibility of these systems to known vulnerabilities (such as RollJam and RollBack attacks). This is accomplished using a novel tool, Puck-Py, that helps analyze RKE protocols. Our tool automates several aspects of the protocol analysis process, reducing time and logistical constraints in RKE research; we provide standardized protocols to execute various attacks using our Puck-Py tool. We find that, despite having a long period of time to fix security issues, several popular automobiles remain susceptible to attacks, including the basic RollJam attack.
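As background on the attack surface, here is a toy model of the rolling-code validation window common in RKE receivers, which is precisely the behavior RollJam exploits by jamming a transmission and replaying the captured, still-fresh code. The key, window size, and code derivation are our illustrative choices, not any manufacturer's scheme.

```python
# Rolling codes: fob and car share a key and a counter; the car accepts
# any code within a small look-ahead window and then advances its
# counter, so each code works at most once.
import hmac, hashlib

KEY = b"shared-fob-key"
WINDOW = 16

def code(counter):
    return hmac.new(KEY, counter.to_bytes(4, "big"), hashlib.sha256).hexdigest()[:8]

class Car:
    def __init__(self):
        self.counter = 0
    def try_unlock(self, c):
        for i in range(1, WINDOW + 1):           # look-ahead resync window
            if hmac.compare_digest(c, code(self.counter + i)):
                self.counter += i                # older codes are now rejected
                return True
        return False

car, fob_counter = Car(), 0
fob_counter += 1
captured = code(fob_counter)                     # RollJam: jam + capture this
assert car.try_unlock(captured)                  # replayed code is still fresh
assert not car.try_unlock(captured)              # a second replay fails
```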
Nir Bitansky, Rachit Garg
ePrint Report
Succinct randomized encodings allow encoding the input $x$ of a time-$t$ uniform computation $M(x)$ in sub-linear time $o(t)$. The resulting encoding $\tilde{x}$ allows recovering the result of the computation $M(x)$, but hides any other information about $x$. Such encodings are known to have powerful applications such as reducing communication in MPC, bootstrapping advanced encryption schemes, and constructing time-lock puzzles.
Until not long ago, the only known constructions were based on indistinguishability obfuscation, and in particular they were not based on standard post-quantum assumptions. In terms of efficiency, these constructions' encoding time is $\mathrm{polylog}(t)$, essentially the best one can hope for. Recently, a new construction was presented based on Circular Learning with Errors, an assumption similar to the one used in fully-homomorphic encryption schemes, and which is widely considered to be post-quantum resistant. However, its encoding efficiency falls significantly behind that of the obfuscation-based schemes and is $\approx \sqrt{t} \cdot s$, where $s$ is the space of the computation.
We construct, under the same assumption, succinct randomized encodings with encoding time $\approx t^{\varepsilon} \cdot s$ for arbitrarily small constant $\varepsilon<1$. Our construction is relatively simple, generic and relies on any non-compact single-key functional encryption that satisfies a natural {\em efficiency preservation} property.
Keewoo Lee, Yongdong Yeo
ePrint Report
Privacy-preserving blockchains and private messaging services that ensure receiver-privacy face a significant UX challenge: each client must scan every payload posted on the public bulletin board individually to avoid missing messages intended for them. Oblivious Message Retrieval (OMR) addresses this issue by securely outsourcing this expensive scanning process to a service provider using Homomorphic Encryption (HE).
In this work, we propose a new OMR scheme that substantially improves upon the previous state-of-the-art, PerfOMR (USENIX Security'24). Our implementation demonstrates reductions of 3.3x in runtime, 2.2x in digest size, and 1.3x in key size, in a scenario with 65536 payloads (each 612 bytes), of which up to 50 are pertinent.
At the core of these improvements is a new homomorphic compression mechanism, where ciphertexts of length proportional to the number of total payloads are compressed into a digest whose length is proportional to the upper bound on the number of pertinent payloads. Unlike previous approaches, our scheme fully exploits the native homomorphic SIMD structure of the underlying HE scheme, significantly enhancing efficiency. In the setting described above, our compression scheme achieves 7.4x speedup compared to PerfOMR.
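To see the shape of the compression step in the clear (our simplified analogue, not the paper's homomorphic encoding): a length-$N$ vector with at most $k$ pertinent slots can be bucketed into $O(k)$ cells, each keeping a sum and an index-weighted sum, from which singleton buckets are decoded.

```python
# Cleartext sketch of sparse-vector compression: hash indices into ~4k
# buckets; a bucket holding a single (index, value) pair satisfies
# wsum == sum * index, which lets us decode it exactly.
import random

N, K = 1 << 16, 8
B = 4 * K                                  # digest size ~ O(K), not O(N)
pertinent = random.sample(range(N), K)
vec = {i: random.randrange(1, 100) for i in pertinent}

seed = random.randrange(2**32)
bucket = lambda i: hash((seed, i)) % B

sums, wsum = [0] * B, [0] * B
for i, v in vec.items():                   # the paper performs this under HE
    sums[bucket(i)] += v
    wsum[bucket(i)] += v * i

recovered = {}
for b in range(B):
    if sums[b] and wsum[b] % sums[b] == 0:
        i = wsum[b] // sums[b]
        if 0 <= i < N and bucket(i) == b:  # consistent singleton bucket
            recovered[i] = sums[b]

print(f"recovered {len(recovered)} of {K} pertinent payloads")
```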
Mustafa Khairallah
ePrint Report
Pseudo-Random Injections (PRIs) have had several applications in symmetric-key cryptography, such as in the idealization of Authenticated Encryption with Associated Data (AEAD) schemes, building robust AEAD, and, recently, in converting a committing AEAD scheme into a succinctly committing AEAD scheme. In Crypto 2024, Bellare and Hoang showed that if an AEAD scheme is already committing, it can be transformed into a succinctly committed scheme by encrypting part of the plaintext using a PRI. In this paper, we revisit the applications of PRIs in building Message Authentication Codes (MACs) and AEAD schemes.
First, we look at some of the properties and definitions of PRIs, such as collision resistance and unforgeability when used as a MAC with a small plaintext space, under different leakage models. Next, we show how they can be combined with collision-resistant hash functions to build a MAC for long plaintexts, offering flexible security depending on how the PRI and the equality check are implemented. If both the PRI and the equality check are leak-free, the MAC provides almost optimal security, and the security degrades only a little if the equality check is merely leakage-resilient (rather than leak-free). If the equality check has unbounded leakage, the security drops to a baseline level, rather than being completely lost. Next, we show how to use PRIs to build from scratch a succinctly committing online AEAD scheme, dubbed scoAE, that achieves succinct CMT4 security, privacy, and Ciphertext Integrity with Misuse and Leakage (CIML2) security. Last but not least, we show how to build a succinct nonce-misuse-resistant (MRAE) AEAD scheme, dubbed scMRAE. The construction combines the SIV paradigm with PRI-based encryption (e.g., the Encode-then-Encipher (EtE) framework).
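To ground the Encode-then-Encipher idea mentioned above: redundancy is added to the message before enciphering, and decryption authenticates by re-checking it, so the sparse range of the resulting injection is what provides integrity. A toy sketch, with a 4-round Feistel network standing in for a real wide-block cipher/PRI and the padding length chosen by us:

```python
# Toy Encode-then-Encipher: pad with TAU zero bytes, apply an
# invertible keyed permutation; reject on decryption if the redundancy
# does not check out. The Feistel toy is illustrative, not secure.
import hashlib

TAU = 8                                       # redundancy bytes

def F(key, r, half):                          # toy Feistel round function
    return hashlib.sha256(key + bytes([r]) + half).digest()[:len(half)]

def permute(key, block, rounds=4, inverse=False):
    assert len(block) % 2 == 0
    h = len(block) // 2
    L, R = block[:h], block[h:]
    for r in (reversed(range(rounds)) if inverse else range(rounds)):
        if inverse:
            L, R = bytes(x ^ y for x, y in zip(R, F(key, r, L))), L
        else:
            L, R = R, bytes(x ^ y for x, y in zip(L, F(key, r, R)))
    return L + R

def encipher(key, msg):                       # Encode-then-Encipher
    return permute(key, msg + bytes(TAU))

def decipher(key, ct):
    block = permute(key, ct, inverse=True)
    msg, tag = block[:-TAU], block[-TAU:]
    return msg if tag == bytes(TAU) else None # outside the injection's range

key = b"k" * 16
ct = encipher(key, b"unlock door 7!")         # 14 + 8 bytes: even length
assert decipher(key, ct) == b"unlock door 7!"
assert decipher(key, bytes(len(ct))) is None  # forgery rejected w.h.p.
```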
Lalita Devadas, Brent Waters, David J. Wu
ePrint Report
A succinct non-interactive argument (SNARG) for NP allows a prover to convince a verifier that an NP statement $x$ is true with a proof whose size is sublinear in the length of the traditional NP witness. Moreover, a SNARG is adaptively sound if the adversary can choose the statement it wants to prove after seeing the scheme parameters. Very recently, Waters and Wu (STOC 2024) showed how to construct adaptively-sound SNARGs for NP in the plain model from falsifiable assumptions (specifically, sub-exponentially-secure indistinguishability obfuscation, sub-exponentially-secure one-way functions, and polynomial hardness of discrete log).
We consider the batch setting where the prover wants to prove a collection of $T$ statements $x_1, \ldots, x_T$ and its goal is to construct a proof whose size is sublinear in both the size of a single witness and the number of instances $T$. In this setting, existing constructions either require the size of the public parameters to scale linearly with $T$ (and thus, can only support an a priori bounded number of instances), or only provide non-adaptive soundness, or have proof size that scales linearly with the size of a single NP witness. In this work, we give two approaches for batching adaptively-sound SNARGs for NP, and in particular, show that under the same set of assumptions as those underlying the Waters-Wu adaptively-sound SNARG, we can obtain an adaptively-sound SNARG for batch NP where the size of the proof is $\mathsf{poly}(\lambda)$ and the size of the CRS is $\mathsf{poly}(\lambda + |C|)$, where $\lambda$ is a security parameter and $|C|$ is the size of the circuit that computes the associated NP relation.
Our first approach builds directly on top of the Waters-Wu construction and relies on indistinguishability obfuscation and a homomorphic re-randomizable one-way function. Our second approach shows how to combine ideas from the Waters-Wu SNARG with the chaining-based approach by Garg, Sheridan, Waters, and Wu (TCC 2022) to obtain a SNARG for batch NP.