International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

09 June 2025

Elizabeth Crites, Jonathan Katz, Chelsea Komlo, Stefano Tessaro, Chenzhi Zhu
ePrint Report
FROST and its variants are state-of-the-art protocols for threshold Schnorr signatures that are used in real-world applications. While static security of these protocols has been shown by several works, the security of these protocols under adaptive corruptions—where an adversary can choose which parties to corrupt at any time based on information it learns during protocol executions—has remained a notorious open problem that has received renewed attention due to recent standardization efforts for threshold schemes.

We show adaptive security (without erasures) of FROST and several variants under different corruption thresholds and computational assumptions. Let n be the total number of parties, t+1 the signing threshold, and t_c an upper bound on the number of corrupted parties.

1. We prove adaptive security when t_c = t/2 in the random oracle model (ROM) based on the algebraic one-more discrete logarithm assumption (AOMDL)—the same conditions under which FROST is proven statically secure.

2. We introduce the low-dimensional vector representation (LDVR) problem, parameterized by t_c, t, and n, and prove adaptive security in the algebraic group model (AGM) and ROM based on the AOMDL assumption and the hardness of the LDVR problem for the corresponding parameters. In some regimes (including some t_c > t/2) we show the LDVR problem is unconditionally hard, while in other regimes (in particular, when t_c = t) we show that hardness of the LDVR problem is necessary for adaptive security to hold. In fact, we show that hardness of the LDVR problem is necessary for proving adaptive security of a broad class of threshold Schnorr signatures.
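The threshold notation above (n parties, signing threshold t+1) comes from Shamir secret sharing, which distributes the Schnorr signing key in FROST-style schemes. A minimal sketch of the sharing layer only, with an illustrative field modulus and no Schnorr signing logic:

```python
# Hedged sketch: Shamir secret sharing over a prime field, the layer
# underlying threshold Schnorr schemes such as FROST. The modulus q and
# the parameters (t, n) are illustrative, not FROST's actual group order.
import random

q = 2**127 - 1  # a Mersenne prime, stand-in for a group order

def share(secret, t, n):
    """Split `secret` into n shares; any t+1 of them recover it."""
    coeffs = [secret] + [random.randrange(q) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % q
                den = den * (xi - xj) % q
        secret = (secret + yi * num * pow(den, -1, q)) % q
    return secret

sk = random.randrange(q)
shares = share(sk, t=2, n=5)
assert reconstruct(shares[:3]) == sk  # any t+1 = 3 shares suffice
```

Adaptive corruption, in this picture, means the adversary picks which of the n share-holders to corrupt after seeing protocol transcripts, rather than fixing them up front.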
Marshall Ball, Dana Dachman-Soled
ePrint Report
We construct $t$-non-malleable extractors---which allow an attacker to tamper with a source $t$ times---for high min-entropy sources samplable by poly-time hierarchy circuits and for tampering classes corresponding to poly-time hierarchy functions from derandomization-type assumptions.

We then show an application of this new object to ruling out constructions of succinct non-interactive arguments (SNARGs) secure against \emph{uniform} adversaries from \emph{uniform} falsifiable assumptions via a class of black-box reductions that has not been previously considered in the literature. This class of black-box reductions allows the reduction to arbitrarily set the \emph{coins}, as well as the input, of the uniform adversary it interacts with. The class of reductions we consider is restricted in that it allows only non-adaptive queries to the adversary.
Akinori Hosoyamada
ePrint Report
Sponge-based constructions have received widespread adoption, as exemplified by NIST's standardization of SHA-3 and Ascon. Yet, their provable security against quantum adversaries has not been investigated much. This paper studies the post-quantum security of some keyed sponge-based constructions in the quantum ideal permutation model, focusing on the Ascon AEAD mode and KMAC as concrete instances. For the Ascon AEAD mode, we prove post-quantum security in the single-user setting up to about $\min(2^{c/3},2^{\kappa/3})$ queries, where $c$ is the capacity and $\kappa$ is the key length. Unlike the recent work by Lang et al.~(ePrint 2025/411), we do not need non-standard restrictions on nonce sets or the number of forgery attempts. In addition, our result guarantees even non-conventional security notions such as nonce-misuse resilience confidentiality and authenticity under release of unverified plaintext. For KMAC, we show security up to about $\min(2^{c/3}, 2^{r/2},2^{(\kappa-r)/2})$ queries, where $r$ is the rate, ignoring some small factors. In fact, we prove the security not only for KMAC but also for general modes such as the inner-, outer-, and full-keyed sponge functions.

We take a modular proof approach, adapting ideas from several works in the classical ideal permutation model to the quantum setting: for the Ascon AEAD mode, we observe that it can be regarded as an iterated application of a Tweakable Even-Mansour (TEM) cipher with a single low-entropy key, and we obtain the security bound as the sum of the post-quantum TPRP advantage of TEM and the classical security advantage of Ascon when TEM is replaced with a secret random object. The proof for keyed sponges is obtained analogously by regarding them as built on an Even-Mansour (EM) cipher with a single low-entropy key. The post-quantum security of (T)EM has been proven by Alagic et al. (Eurocrypt 2022 and Eurocrypt 2024). However, they show security only when the keys are uniformly random, and their proof techniques, the so-called resampling lemmas, are inapplicable to our case with a low-entropy key. We therefore introduce and prove a modified resampling lemma, thereby showing the security of (T)EM with a low-entropy key.
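The sponge structure these modes build on (absorb into the rate portion of the state, permute, squeeze) can be sketched with a toy permutation. The 8-bit rate, 24-bit capacity, permutation, and padding below are all illustrative stand-ins, not SHA-3 or Ascon:

```python
# Hedged sketch: the plain (unkeyed) sponge construction with a toy
# permutation. Real instances use carefully designed permutations.
RATE, CAP = 8, 24                  # bits; state size = RATE + CAP = 32 bits
MASK = (1 << (RATE + CAP)) - 1

def toy_perm(s: int) -> int:
    # Invertible xorshift-style mixing; NOT a cryptographic permutation.
    for _ in range(8):
        s ^= (s << 13) & MASK
        s ^= s >> 7
        s ^= (s << 17) & MASK
    return s

def sponge_hash(msg: bytes, out_len: int = 4) -> bytes:
    s = 0
    for b in msg + b"\x80":        # toy padding: append a single 0x80 byte
        s = toy_perm(s ^ b)        # absorb one RATE-sized (8-bit) block
    out = b""
    while len(out) < out_len:      # squeeze RATE bits per permutation call
        out += bytes([s & 0xFF])
        s = toy_perm(s)
    return out[:out_len]

digest = sponge_hash(b"ascon")
assert len(digest) == 4 and digest == sponge_hash(b"ascon")
```

The capacity $c$, i.e., the 24 hidden state bits never output here, is the quantity the $2^{c/3}$-type bounds above refer to; keyed variants additionally mix a key into the state.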
Fuyuki Kitagawa, Takahiro Matsuda
ePrint Report
We present a generic construction of an adaptive trapdoor function (TDF) from any public-key encryption (PKE) scheme that satisfies two properties: randomness recoverability and the pseudorandom ciphertext property. As a direct corollary, we obtain an adaptive TDF from any trapdoor permutation (TDP) whose domain is both recognizable and sufficiently dense. In our construction, we can prove that the function's output is indistinguishable from uniform even when an adversary has access to the inversion oracle.
Wenhao Zhang, Xiao Wang, Chenkai Weng
ePrint Report
Oblivious RAMs (ORAMs) allow data outsourcing to servers so that the access pattern to the outsourced data is kept private. They are also a crucial building block for enabling private RAM access within secure multi-party computation (MPC). In recent years, schemes matching the ORAM lower bound have been proposed in both the outsourcing setting and the RAM-model MPC setting, seemingly writing an epilogue to the theory of ORAM. In this paper, we initiate a study of mixed-mode ORAMs, where accesses to the ORAM are a mix of public and private accesses. Although existing ORAMs can support public accesses by treating them as private ones, achieving better efficiency is highly non-trivial.

- We present a mixed-mode ORAM algorithm, assuming the existence of private information retrieval (PIR). When the PIR scheme is communication-efficient, this ORAM achieves the best possible outcome: it has a bandwidth blowup of $O(\log N)$ for private accesses and $O(1)$ for public accesses. This construction can be easily extended for the MPC setting achieving $O(B\log N )$ circuit size for private accesses to $B$-sized blocks and $O(B)$ circuit size for public accesses to the same array.

- We instantiate the above protocol in the three-party computation (3PC) setting with more concrete optimizations, yielding a protocol that performs almost as efficiently as state-of-the-art RAM-3PC protocols for private accesses while being $3\times$ more efficient for public accesses in the LAN setting.
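The construction above is generic in the PIR scheme it assumes. As a reference point, the classic two-server XOR-based PIR (which is not the communication-efficient kind needed for the $O(1)$ public-access bound, but shows the access-hiding idea) fits in a few lines:

```python
# Hedged sketch: textbook 2-server XOR-based PIR over a small database.
# Each server sees a uniformly random query vector, so neither learns
# which index the client wants; this toy version sends n bits per server.
import secrets

def pir_query(n, index):
    """Client: produce one query per server; each alone reveals nothing."""
    q0 = [secrets.randbits(1) for _ in range(n)]
    q1 = q0.copy()
    q1[index] ^= 1            # the two queries differ only at `index`
    return q0, q1

def pir_answer(db, query):
    """Server: XOR of the records selected by the query vector."""
    ans = 0
    for rec, bit in zip(db, query):
        if bit:
            ans ^= rec
    return ans

db = [3, 14, 15, 92, 65, 35]  # public database of integer records
q0, q1 = pir_query(len(db), 3)
# XORing the two answers cancels everything except the wanted record.
assert pir_answer(db, q0) ^ pir_answer(db, q1) == db[3]
```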

06 June 2025

Haotian Chu, Xiao Wang, Yanxue Jia
ePrint Report
Private signaling allows servers to identify a recipient's messages on a public bulletin board without knowing the recipient's metadata. It is a central tool for systems like privacy-preserving blockchains and anonymous messaging. However, barring the use of a TEE, all current constructions assume that the servers are only passively corrupted, which significantly limits their practical relevance. In this work, we present a TEE-free simulation-secure private signaling protocol assuming two non-colluding servers, either of which can be actively corrupted. Crucially, we convert signal retrieval into a problem similar to private set intersection and use custom-built zero-knowledge proofs to ensure consistency with the public bulletin board. As a result, our protocol achieves lower server-to-server communication overhead and a much smaller digest compared to the state-of-the-art semi-honest protocol. For example, for a board size of $2^{19}$ messages, the resulting digest size is only 33.57KB. Our protocol is also computationally efficient: retrieving private signals takes only about 2 minutes, using 16 threads and a LAN network.
Pranav Shriram Arunachalaramanan, Ling Ren
ePrint Report
Recent stateful private information retrieval (PIR) schemes have significantly improved amortized computation and amortized communication while aiming to keep client storage minimal. However, all the schemes in the literature still suffer from a poor tradeoff between client storage and computation. We present BALANCED-PIR, a stateful PIR scheme that effectively balances computation and client storage. For a database of a million entries, each of 8 bytes, our scheme requires 0.2 MB of client storage, 0.2 ms of amortized computation, and 11.14 KB of amortized communication. Compared with the state-of-the-art scheme using a similar storage setting, our scheme is almost 9x better in amortized computation and 40x better in offline computation.

Verifiable private information retrieval has been gaining more attention recently. However, all existing schemes require linear amortized computation and huge client storage. We present Verifiable BALANCED-PIR, a verifiable stateful PIR scheme with sublinear amortized computation and small client storage. In fact, our Verifiable BALANCED-PIR adds modest computation, communication, and storage costs on top of BALANCED-PIR. Compared with the state-of-the-art verifiable scheme, the client storage of our scheme is 100x smaller, the amortized computation is 15x less, and the amortized communication is 2.5x better.
Gennaro Avitabile, Luisa Siniscalchi, Ivan Visconti
ePrint Report
Combining verifiable computation with optimistic approaches is a promising direction to scale blockchain applications. The basic idea consists of saving computations by avoiding the verification of proofs unless there are complaints.

A key tool to design systems in the above direction has recently been proposed by Seres, Glaeser and Bonneau [FC'24], who formalized the concept of a Naysayer proof: an efficiently verifiable proof disproving a more demanding-to-verify original proof.

In this work, we discuss the need to reward naysayer provers, the risks arising from front-running attacks, and the failure of generic approaches to defeat them. Next, we introduce the concept of verifiable delayed naysayer proofs and show a construction leveraging proofs of sequential work, without relying on any additional infrastructure.
Geoffroy Couteau, Carmit Hazay, Aditya Hegde, Naman Kumar
ePrint Report
Garbled circuits, introduced in the seminal work of Yao (FOCS, 1986), have received considerable attention in the boolean setting due to their efficiency and application to round-efficient secure computation. In contrast, arithmetic garbling schemes have received much less scrutiny. The main efficiency measure of garbling schemes is their rate, defined as the bit size of each gate's output divided by the size of the (amortized) garbled gate. Despite recent progress, state-of-the-art garbling schemes for arithmetic circuits suffer from important limitations: all existing schemes are either restricted to $B$-bounded integer arithmetic circuits (a computational model where the arithmetic is performed over $\mathbb{Z}$ and correctness is only guaranteed if no intermediate computation exceeds the bound $B$) and achieve constant rate only for very large bounds $B = 2^{\Omega(\lambda^3)}$, or have a rate at most $O(1/\lambda)$ otherwise, where $\lambda$ denotes a security parameter. In this work, we improve this state of affairs in both settings.

- As our main contribution, we introduce the first arithmetic garbling scheme over modular rings $\mathbb{Z}_B$ with rate $O(\log\lambda/\lambda)$, breaking for the first time the $1/\lambda$-rate barrier for modular arithmetic garbling. Our construction relies on the power-DDH assumption.

- As a secondary contribution, we introduce a new arithmetic garbling scheme for $B$-bounded integer arithmetic that achieves a constant rate for bounds $B$ as low as $2^{O(\lambda)}$. Our construction relies on a new non-standard KDM-security assumption on Paillier encryption with small exponents.
Pedro Branco, Matthew Green, Aditya Hegde, Abhishek Jain, Gabriel Kaptchuk
ePrint Report
We study the problem of combating *viral* misinformation campaigns in end-to-end encrypted (E2EE) messaging systems such as WhatsApp. We propose a new notion of Hop Tracking Signatures (HTS) that allows for tracing originators of messages that have been propagated on long forwarding paths (i.e., gone viral), while preserving anonymity of everyone else. We define security for HTS against malicious servers.

We present both negative and positive results for HTS: on the one hand, we show that HTS does not admit succinct constructions if tracing and anonymity thresholds differ by exactly one "hop". On the other hand, by allowing for a larger gap between tracing and anonymity thresholds, we can build succinct HTS schemes where the signature size does not grow with the forwarding path. Our positive result relies on streaming algorithms and strong cryptographic assumptions.

Prior works on tracing within E2EE messaging systems either do not achieve security against malicious servers or focus only on tracing originators of pre-defined banned content.
Anders Lindman
ePrint Report
Synergy is a lightweight block cipher designed for resource-constrained environments such as IoT devices, embedded systems, and mobile applications. Built around a 16-round Feistel network, it uses 8 independent pseudorandom number generators (PRNGs) to ensure strong diffusion and confusion through the generation of per-block unique round keys. With a 1024-bit key and a 64-bit block size, Synergy mitigates vulnerabilities to ML-based cryptanalysis by combining a large key size with key- and data-dependent bit rotations, which reduce statistical biases and increase unpredictability. By utilizing 32-bit arithmetic for efficient processing, Synergy achieves high throughput, low latency, and low power consumption, providing performance and security for applications where both are critical.
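A generic 16-round Feistel network on 64-bit blocks, the structure the abstract describes, can be sketched as follows; the hash-based round function and key schedule here are placeholders for illustration, not Synergy's PRNG-derived round keys:

```python
# Hedged sketch: a generic balanced Feistel network on 64-bit blocks.
# The point is the structure: decryption is the same walk with the
# round keys reversed, regardless of the round function F.
import hashlib

def round_keys(key: bytes, rounds: int = 16):
    # Placeholder key schedule: hash the key with a round counter.
    return [hashlib.sha256(key + bytes([i])).digest()[:4] for i in range(rounds)]

def F(half: int, rk: bytes) -> int:
    # Placeholder round function: hash the 32-bit half with the round key.
    data = half.to_bytes(4, "big") + rk
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

def feistel_encrypt(block: int, rks) -> int:
    L, R = block >> 32, block & 0xFFFFFFFF
    for rk in rks:
        L, R = R, L ^ F(R, rk)     # one Feistel round
    return (R << 32) | L           # final swap

def feistel_decrypt(block: int, rks) -> int:
    R, L = block >> 32, block & 0xFFFFFFFF
    for rk in reversed(rks):       # same rounds, keys in reverse order
        R, L = L, R ^ F(L, rk)
    return (L << 32) | R

rks = round_keys(b"example key")
pt = 0x0123456789ABCDEF
assert feistel_decrypt(feistel_encrypt(pt, rks), rks) == pt
```

Note that invertibility never requires F itself to be invertible, which is what lets a Feistel design plug in arbitrary key-dependent round functions.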
Christof Beierle, Phil Hebborn, Gregor Leander, Yevhen Perehuda
ePrint Report
Integral attacks exploit structural weaknesses in symmetric cryptographic primitives by analyzing how subsets of inputs propagate to produce outputs with specific algebraic properties. For the case of (XOR) key-alternating block ciphers using (independent) round keys, at ASIACRYPT'21, Hebborn et al. established the first non-trivial lower bounds on the number of rounds required to ensure integral resistance in a quite general sense. For the case of adding keys by modular addition, no security arguments have been known so far. Here, we present a unified framework for analyzing the integral resistance of primitives using (word-wise) modular addition for key whitening, allowing us not only to fill the gap in security arguments, but also to overcome the heavy computational cost inherent in the XOR-whitening case.
Rune Fiedler, Felix Günther, Jiaxin Pan, Runzhi Zeng
ePrint Report
The Signal protocol is the most widely deployed end-to-end-encrypted messaging protocol. Its initial handshake protocol X3DH allows parties to asynchronously derive a shared session key without the need to be online simultaneously, while providing implicit authentication, forward secrecy, and a form of offline deniability. The X3DH protocol has been extensively studied in the cryptographic literature and is acclaimed for its strong "maximum-exposure" security guarantees, hedging against compromises of users' long-term keys and medium-term keys but also the ephemeral randomness used in the handshake. This maximum-exposure security is achieved by deriving keys from the concatenation of 3–4 Diffie–Hellman (DH) secrets, each combining two long-term, medium-term, or ephemeral DH shares.

Remarkably, X3DH's approach of concatenating plain DH combinations is sub-optimal, both in terms of maximum-exposure security and performance. Indeed, Krawczyk's well-known HMQV protocol (Crypto '05) is a high-performance, DH-based key exchange that provides strong security against long-term and ephemeral key compromise. One might hence wonder: why not base Signal's initial handshake on HMQV?

In this work, we study this question and show that a carefully adapted variant of HMQV, which we call XHMQV, indeed enables stronger security and efficiency while matching the constraints of Signal's initial handshake. Most notably, HMQV does not work as a drop-in replacement for X3DH, as the latter's asynchronicity requires the protocol to handle cases where one party runs out of ephemeral keys (pre-uploaded to the Signal server). Our XHMQV design hence augments HMQV with medium-term keys analogous to those used in X3DH. We prove that XHMQV provides security in all 3–4 compromise scenarios where X3DH does and additionally in 1–2 further scenarios, strengthening the handshake's maximum-exposure guarantees while using more efficient group operations. We further confirm that our XHMQV design achieves deniability guarantees comparable to X3DH. Our security model is the first to capture Signal's long-term key reuse between DH key exchange and signatures, which may be of independent interest.
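X3DH's "concatenation of DH secrets" pattern, which XHMQV replaces with an HMQV-style combination, looks schematically like this. The toy 64-bit multiplicative group below stands in for X25519, and the optional fourth DH with a one-time prekey is omitted:

```python
# Hedged sketch of X3DH's key-derivation pattern: the session key is a
# hash of concatenated DH secrets mixing long-term (IK), medium-term
# signed-prekey (SPK), and ephemeral (EK) shares. Toy parameters only.
import hashlib, secrets

p = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, a prime; toy group, not X25519
g = 5

def keygen():
    sk = secrets.randbelow(p - 2) + 1
    return sk, pow(g, sk, p)

ik_a, IK_A = keygen()     # Alice long-term identity key
ek_a, EK_A = keygen()     # Alice ephemeral key
ik_b, IK_B = keygen()     # Bob long-term identity key
spk_b, SPK_B = keygen()   # Bob signed prekey (medium-term)

def dh(sk, PK):
    return pow(PK, sk, p).to_bytes(8, "big")

# Alice: DH1 = DH(IK_A, SPK_B), DH2 = DH(EK_A, IK_B), DH3 = DH(EK_A, SPK_B)
k_alice = hashlib.sha256(dh(ik_a, SPK_B) + dh(ek_a, IK_B) + dh(ek_a, SPK_B)).digest()
# Bob computes the same three secrets from his side of each pair.
k_bob = hashlib.sha256(dh(spk_b, IK_A) + dh(ik_b, EK_A) + dh(spk_b, EK_A)).digest()
assert k_alice == k_bob   # both parties derive the same session key
```

The "maximum-exposure" framing above asks which session keys survive when subsets of {ik, spk, ek} leak; each concatenated DH term hedges against a different compromise combination.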
Stanislav Semenov
ePrint Report
We introduce and analyze a novel class of binary operations on finite-dimensional vector spaces over a field \( K \), defined by second-order multilinear expressions with linear shifts. These operations generate polynomials whose degree increases linearly with each iterated application, while the number of distinct monomials grows combinatorially. We demonstrate that, despite being non-associative and non-commutative in general, these operations exhibit power associativity and internal commutativity when iterated on a single vector. This allows for well-defined exponentiation \( a^n \). Crucially, the absence of a simple closed-form expression for \( a^n \) suggests a one-way property: computing \( a^n \) from \( a \) and \( n \) is straightforward, but recovering \( n \) from \( a^n \) (the Discrete Iteration Problem) appears computationally hard. We propose a Diffie–Hellman-like key exchange protocol utilizing these properties over finite fields, defining an Algebraic Diffie–Hellman Problem (ADHP). The proposed structures are of interest for cryptographic primitives, algebraic dynamics, and computational algebra.
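For comparison, here is the classical Diffie–Hellman pattern that the proposed ADHP protocol mirrors, with ordinary modular exponentiation standing in for the paper's iterated vector operation; power associativity is exactly what makes \( (a^m)^n = a^{mn} \) well-defined in either setting:

```python
# Hedged sketch: textbook Diffie-Hellman over a toy 64-bit prime group.
# The paper replaces pow(g, n, p) with iterated application of its new
# vector operation; the exchange pattern itself is unchanged.
import secrets

p = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, a prime (illustrative only)
g = 5

n_alice = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
n_bob = secrets.randbelow(p - 2) + 1     # Bob's secret exponent

A = pow(g, n_alice, p)   # Alice publishes g^n_A; recovering n_A from A
B = pow(g, n_bob, p)     # is the hard "discrete iteration" direction

# Each side exponentiates the other's public value with its own secret.
shared_alice = pow(B, n_alice, p)
shared_bob = pow(A, n_bob, p)
assert shared_alice == shared_bob        # (g^n_B)^n_A == (g^n_A)^n_B
```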

05 June 2025

Wouter Castryck, Riccardo Invernizzi, Gioella Lorenzon, Jonas Meers, Frederik Vercauteren
ePrint Report
In this paper we study supersingular elliptic curves primitively oriented by an imaginary quadratic order, where the orientation is determined by an endomorphism that factors through the Frobenius isogeny. In this way, we partly recycle one of the main features of CSIDH, namely the fact that the Frobenius orientation can be represented for free. This leads to the most efficient family of ideal-class group actions in a range where the discriminant is significantly larger than the field characteristic $p$. Moreover, if we orient with a non-maximal order $\mathcal{O} \subset \mathbb{Q}(\sqrt{-p})$ and we assume that it is feasible to compute the ideal-class group of the maximal order, then also the ideal-class group of $\mathcal{O}$ is known and we recover the central feature of SCALLOP-like constructions.

We propose two variants of our scheme. In the first one, the orientation is by a suborder of the form $\mathbb{Z}[f\sqrt{-p}]$ for some $f$ coprime to $p$, so this is similar to SCALLOP. In the second one, inspired by the work of Chenu and Smith, the orientation is by an order of the form $\mathbb{Z}[\sqrt{-dp}]$ where $d$ is square-free and not a multiple of $p$. We give practical ways of generating parameters, together with a proof-of-concept SageMath implementation of both variants, which shows the effectiveness of our construction.
Shi Bai, Hansraj Jangir, Elena Kirshanova, Tran Ngo, William Youmans
ePrint Report
The Learning With Errors (LWE) problem, introduced by Regev (STOC'05), is one of the fundamental problems in lattice-based cryptography, believed to be hard even for quantum adversaries. Regev (FOCS'02) showed that LWE reduces to the quantum Dihedral Coset Problem (DCP). Later, Brakerski, Kirshanova, Stehlé and Wen (PKC'18) showed that LWE reduces to a generalization known as the Extrapolated Dihedral Coset Problem (EDCP). We present a quasi-polynomial time quantum algorithm for the EDCP problem over power-of-two moduli using a quasi-polynomial number of samples, which also applies to the SLWE problem defined by Chen, Liu, and Zhandry (Eurocrypt'22). Our EDCP algorithm can be viewed as a provable variant of the "Simon-meets-Kuperberg" algorithm introduced by Bonnetain and Naya-Plasencia (Asiacrypt'18), adapted to the EDCP setting. We stress that our algorithm does not affect the security of LWE with standard parameters, as the reduction from standard LWE to EDCP limits the number of samples to be polynomial.
Nicholas Brandt, Miguel Cueto Noval, Christoph U. Günther, Akin Ünal, Stella Wohnig
ePrint Report
Constrained verifiable random functions (CVRFs) are PRFs that unify the properties of verifiable and constrained PRFs. Since they were introduced concurrently by Fuchsbauer and by Chandran, Raghuraman, and Vinayagamurthy in 2014, it has been an open problem to construct CVRFs without heavy machinery such as multilinear maps, obfuscation, or functional encryption.

We solve this problem by constructing a prefix-constrained verifiable PRF that does not rely on the aforementioned machinery. Essentially, our construction is a verifiable version of the Goldreich-Goldwasser-Micali (GGM) PRF. To achieve verifiability, we leverage degree-2 algebraic PRGs and bilinear groups. In short, proofs consist of intermediate values of the GGM PRF placed in the exponents of group elements; these can be verified using pairings since the underlying PRG is of degree 2.

We prove the selective security of our construction under the Decisional Square Diffie-Hellman (DSDH) assumption and a new assumption, which we dub recursive Decisional Diffie-Hellman (recursive DDH).

We prove the soundness of recursive DDH in the generic group model assuming the hardness of the Multivariate Quadratic (MQ) problem and a new variant thereof, which we call MQ+.

Last, in terms of applications, we observe that our CVRF is also an exponent (C)VRF in the plain model. Exponent VRFs were recently introduced by Boneh et al. (Eurocrypt’25) with various applications to threshold cryptography in mind. In addition to that, we give further applications for prefix-CVRFs in the blockchain setting, namely, stake-pooling and compressible randomness beacons.
Hien Chu, Khue Do, Lucjan Hanzlik, Sri AravindaKrishnan Thyagarajan
ePrint Report
Anamorphic signatures allow covert communication through signatures in environments where encryption is restricted. They enable trusted recipients with a double key to extract hidden messages while the signature remains indistinguishable from a fresh and regular one. However, the traditional notion of anamorphic signatures suffers from vulnerabilities, particularly when a single recipient or sender is compromised, exposing all hidden messages and providing undeniable proof that citizens are part of the anamorphic exchange.

To address these limitations, we explore a threshold-based approach to distribute trust among multiple recipients, preventing adversaries from decrypting anamorphic messages even if some recipients are compromised. Our first contribution is the formalization of the notion of \emph{threshold-recipient anamorphic signatures}, where decryption is possible only through collaboration among a subset of recipients.

We then explore a \emph{stronger model} where the dictator controls the key generation process through which it learns all secret keys and how citizens store cryptographic keys. A particular example of this model in the real world is a dictator providing citizens with electronic identity documents (eIDs) and blocking all other usage of cryptography. We demonstrate that anamorphic communication is still possible even in such a scenario. Our construction is secure against post-quantum adversaries and does not rely on any computational assumptions except the random oracle model.

Finally, we show an \emph{impossibility result} for encoding anamorphic messages with a threshold-sender model when using many existing threshold signature schemes and the adversary is part of the signing group. Our work outlines both the possibilities and limitations of extending anamorphic signatures with threshold cryptography, offering new insights into improving the security and privacy of individuals under authoritarian regimes.
Alessandro Annechini, Alessandro Barenghi, Gerardo Pelosi, Simone Perriello
ePrint Report
Post-quantum public key encryption (PKE) schemes employing quasi-cyclic (QC) sparse parity-check matrix codes are enjoying significant success, thanks to their good performance profile and reductions to believed-hard problems from coding theory. However, using QC sparse parity-check matrix codes (i.e., QC-MDPC/LDPC codes) comes with a significant challenge: determining their decoding failure rate (DFR) in closed form, as decoding failures are known to leak information on the private key. Furthermore, there is no formal proof that changing the (constant) rate of the employed codes does not change the nature of the underlying hard problem, nor is the hardness of decoding random QC codes formally related to the hardness of decoding random codes. In this work, we address and solve these challenges, providing a novel closed-form estimate of the decoding failure rate for three-iteration bit-flipping decoders and proving computational equivalences among the aforementioned problems. This allows us to systematically design a Niederreiter-style QC-MDPC PKE, enjoying the flexibility granted by freely choosing the code rate and the significant improvement in tightness of our DFR bound. We report a $2\times$ improvement in public key and ciphertext size w.r.t. the previous best cryptosystem design with closed-form DFR bounds, LEDAcrypt-KEM. Furthermore, we show that our PKE parameters yield a $30$% smaller public key and $2.6\times$ smaller ciphertexts w.r.t. HQC, the key encapsulation method employing a code-based PKE recently selected by the US NIST for standardization.
Calvin Abou Haidar, Quentin Payet, Mehdi Tibouchi
ePrint Report
The Rowhammer attack is a fault-injection technique leveraging the density of RAM modules to trigger persistent hardware bit flips that can be used for probing or modifying protected data. In this paper, we show that Falcon, the hash-and-sign signature scheme over NTRU lattices selected by NIST for standardization, is vulnerable to an attack using Rowhammer.

Falcon's Gaussian sampler is the core component of its security, as it provably decorrelates the short basis used for signing from the generated signatures. Other schemes lacking this guarantee (such as NTRUSign, GGH, or more recently Peregrine) were proven insecure. However, performing efficient and secure lattice Gaussian sampling has proved to be a difficult task, fraught with potential vulnerabilities. To avoid timing attacks, a common technique is to use distribution tables that are traversed to output a sample. The official Falcon implementation uses this technique, employing a hardcoded reverse cumulative distribution table (RCDT). Using Rowhammer, we target Falcon's RCDT to trigger a very small number of targeted bit flips, and prove that the resulting distribution is sufficiently skewed to perform a key recovery attack.
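The attack surface can be illustrated with a toy RCDT sampler for a discrete half-Gaussian. The 16-bit table below is purely illustrative (Falcon's real RCDT has 72-bit-precision entries), but it shows how a single flipped high-order bit in one table entry skews the sampler's output:

```python
# Hedged sketch: sampling from a discrete half-Gaussian by scanning a
# reverse cumulative distribution table (RCDT), the table type the
# attack targets. Toy precision and sigma, not Falcon's parameters.
import math, random

PREC = 1 << 16                     # toy 16-bit precision

def build_rcdt(sigma: float, tail: int = 16):
    # rcdt[z] = P(X > z) scaled to PREC, for X a discrete half-Gaussian.
    w = [math.exp(-(z * z) / (2 * sigma * sigma)) for z in range(tail)]
    total, acc, table = sum(w), sum(w), []
    for wz in w:
        acc -= wz
        table.append(round(acc / total * PREC))
    return table

def sample(rcdt, rng) -> int:
    u = rng.randrange(PREC)        # uniform randomness
    z = 0
    while z < len(rcdt) and u < rcdt[z]:
        z += 1                     # scan until u >= rcdt[z]
    return z

rcdt = build_rcdt(sigma=2.0)
flipped = rcdt.copy()
flipped[0] ^= 1 << 14              # one Rowhammer-style bit flip

rng1, rng2 = random.Random(7), random.Random(7)   # identical coins
honest = sum(sample(rcdt, rng1) for _ in range(500))
skewed = sum(sample(flipped, rng2) for _ in range(500))
assert skewed > honest             # the flip biases samples upward
```

The paper's observation is that such a skew, accumulated over many signatures, leaks enough about the secret basis for key recovery.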

Namely, we show that a single targeted bit flip suffices to fully recover the signing key, given a few hundred million signatures, with more bit flips enabling key recovery with fewer signatures. Interestingly, the Nguyen–Regev parallelepiped learning attack that broke NTRUSign, GGH and Peregrine does not readily adapt to this setting unless the number of bit flips is very large. However, we show that combining it with principal component analysis (PCA) yields a practical attack.

This vulnerability can also be triggered with other types of persistent fault attacks on memory like optical faults. We suggest cheap countermeasures that largely mitigate it, including rejecting signatures that are unusually short.