International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

25 July 2025

Lili Tang, Yao Sun, Xiaorui Gong
ePrint Report
The Generalized Birthday Problem ($\textsf{GBP}$), which seeks $k$ hash values from $k$ lists whose XOR is zero, is a fundamental problem across multiple cryptographic domains. Wagner's $k$-list algorithm (Crypto'02) for $\textsf{GBP}$ has advanced the solving of the syndrome decoding problem and established new cryptanalytic benchmarks for incremental cryptography and blind signatures. $\textsf{Equihash}$ (NDSS'16) underscores the critical advantages of $\textsf{GBP}$ in proof-of-work design, particularly its ASIC-resistance in blockchain applications. While the $k$-list $\textsf{GBP}$ has been extensively studied, many schemes, including $\textsf{Equihash}$, use a single-list variant (selecting all hash values from a single list) without clear theoretical grounding. In this work, we revisit these two long-conflated problems and fill theoretical gaps in solving the single-list $\textsf{GBP}$.

In the realm of $\textsf{Equihash}$, the index-pointer technique has significantly weakened its ASIC-resistance. Our trade-off optimization of Wagner's algorithmic framework further diminishes this resistance by reducing peak memory by at least 50% across most $\textsf{Equihash}$ parameters. To address this, we propose $\textsf{Sequihash}$, a PoW with enhanced ASIC-resistance, rigorously aligned with the $k$-list $\textsf{GBP}$. Furthermore, we explore the implications of $\textsf{GBP}$ for incremental hashing and propose a new collision attack on ID-based incremental hash (Eurocrypt'97). Our attack achieves an asymptotic time complexity of $\mathcal{O}(\sqrt{n} \cdot 2^{\sqrt{2n}})$, significantly improving upon the previous bound of $\mathcal{O}(2^{\sqrt{4n}})$ from Wagner's algorithm. Applying our attack to $\textsf{iSHAKE256}$, we reduce its security lower bound from $2^{256}$ to $2^{189}$.
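The two-level list-merge at the heart of Wagner's $k$-list algorithm can be illustrated with a small self-contained sketch for $k = 4$ (a toy with illustrative parameters, not taken from the paper): pairs from the first two lists that agree on their low bits are merged, likewise for the last two lists, and any value common to both merged lists yields a 4-wise XOR collision.

```python
# Toy 4-list generalized birthday: find x1, x2, x3, x4 (one per list)
# with x1 ^ x2 ^ x3 ^ x4 == 0 via Wagner's two-level merge.
import random

def merge(L1, L2, mask):
    # Keep XORs a ^ b of pairs that agree on the bits selected by `mask`,
    # so every survivor has those bits cleared.
    table = {}
    for a in L1:
        table.setdefault(a & mask, []).append(a)
    return [a ^ b for b in L2 for a in table.get(b & mask, [])]

random.seed(1)
n = 16                                   # 16-bit toy "hash" values
size = 2 ** (n // 2 + 1)                 # 512 elements per list
lists = [[random.getrandbits(n) for _ in range(size)] for _ in range(4)]
low = (1 << (n // 2)) - 1                # level 1 clears the low n/2 bits

level12 = merge(lists[0], lists[1], low)  # x1 ^ x2 candidates
level34 = merge(lists[2], lists[3], low)  # x3 ^ x4 candidates
# A value present in both sets gives x1 ^ x2 == x3 ^ x4, i.e. a solution.
solutions = set(level12) & set(level34)
print(len(solutions) > 0)
```

On these toy parameters each merged list holds far more candidates than there are possible surviving values, so a full collision exists with overwhelming probability; real instantiations balance list sizes and the number of merge levels as in Wagner's analysis.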
Zhongxiang Zheng, Anyu Wang, Chunhuan Zhao, Guangwu Xu, Zhengtao Jiang, Sibo Feng, Zhichen Yan, Shuang Sun, Xiaoyun Wang
ePrint Report
In this paper, we propose a new post-quantum lattice-based digital signature scheme named Rhyme. The scheme is based on the Fiat-Shamir structure and does not rely on flooding, rejection sampling, or Gaussian convolution as previous methods do. Instead, its security is based on a variant of LWE combined with a new sampling method (3C sampling). We prove its security in the ROM/QROM and provide concrete parameters as well as a reference implementation, showing that our scheme enjoys high efficiency and a very compact signature size compared with prior results.
Yuanzhuo Yu, Mengling Liu, Yuncong Zhang, Shi-Feng Sun, Tianyi Ma, Man Ho Au, Dawu Gu
ePrint Report
Recent years have witnessed a surge of academic research and industrial implementations of succinct non-interactive arguments of knowledge (SNARKs). However, proving time remains a bottleneck for applying SNARKs to large-scale circuits. A promising way to accelerate proof generation is to distribute the workload across several machines running in parallel; SNARKs with this feature are called distributed SNARKs. Nevertheless, most existing works either require a trusted setup, rely on quantum-insecure assumptions, or suffer from linear communication costs.

In this paper, we introduce $\mathsf{HyperFond}$, the first distributed SNARK that enjoys a transparent setup, post-quantum security and polylogarithmic communication cost, as well as the field-agnostic property (no reliance on specific choices of fields). To this end, we first propose a distributed proof system based on HyperPlonk (by Chen et al. in EUROCRYPT 2023). To instantiate the system, we then put forward a novel approach to distribute the multilinear polynomial commitment scheme in BaseFold (by Zeilberger et al. in CRYPTO 2024), and also present a trade-off between communication cost and proof size. In $\mathsf{HyperFond}$, after committing to polynomial coefficients with quasilinear complexity, each sub-prover generates proofs with time linear in subcircuit size.

We implement $\mathsf{HyperFond}$ using up to 16 machines. Experimental results demonstrate that $\mathsf{HyperFond}$'s prover is 14.3$\times$ faster than HyperPlonk instantiated with BaseFold. We also compare to deVirgo (by Xie et al. in CCS 2022), previously the only post-quantum distributed SNARK, and achieve a 1.89$\times$ speedup.
Thierry Emmanuel MINKA MI NGUIDJOI, MANI ONANA Flavien Serge, DJOTIO NDIÉ Thomas
ePrint Report
This work establishes the CRO Trilemma: no post-quantum proof system can simultaneously satisfy the following three properties beyond negligible failure probability: (i) Confidentiality, quantified by $\mathsf{Priv} > 1 - \mathsf{negl}(\lambda)$; (ii) Reliability, with $\mathsf{Rel} > 1 - \mathsf{negl}(\lambda)$; and (iii) Legal Opposability, measured by contextual entropy $H_{\mathsf{Opp}} \approx \log |V_J|$, where $V_J$ denotes the validation space of a polynomial-time institutional verifier $J$.

The result formalizes the Invisible Authenticity Paradox through contextual entropy $H(C)$ and quantum interpretability loss $\eta_q(C)$. Under standard quantum assumptions, we prove the following impossibility bound against QPT adversaries:

$$H_{\mathsf{Opp}} \le \frac{\mathsf{Priv} \cdot \mathsf{Rel}}{\epsilon_{\mathrm{eff}} \cdot 2^{H(C)}} + \eta_q(C) + \mathsf{negl}(\lambda)$$

A composable framework is introduced, including a composable model with verifier $J$ (Algorithm 1), and an optimal entropy decomposition theorem (Theorem 2.3). Theoretical predictions are supported by empirical evidence indicating consistent violation of the bound ($\Gamma_{\mathrm{CRO}} > 0.8$) across NIST PQC finalists (e.g., Dilithium) and structured ZKPs (e.g., STARKs, Groth16).
Sébastien Canard, Nathan Papon, Duong Hieu Phan
ePrint Report
Tracing techniques have been used to identify users who have leaked their decryption keys in a secure multi-receiver encryption system. Very recently, in the field of distributed cryptography, where trust is distributed, Boneh et al. extended traitor tracing to the framework of threshold decryption, where a single user does not hold the whole decryption secret but needs to collaborate with others. However, the tracing capacity in their schemes based on collusion-secure codes is still centralized: only the authority holding the secret tracing key can perform tracing. Continuing in the direction of not relying on a single entity, we propose decentralizing tracing in this context, so that the tracing procedure does not rely on any secret key and can be performed by anyone. Technically, as binary collusion-secure codes only support secret tracing, we switch to robust $q$-ary IPP codes, which support public tracing. This requires us to generalize the bipartite threshold KEM for two users in Boneh et al.'s paper to a $q$-partite KEM for $q$ users. In terms of security, their static one-sided security notion for the binary case is no longer appropriate, so we define an adaptive one-sided security notion for $q$-partite KEMs that is compatible with $q$-ary IPP codes. Finally, we generalize the Boneh et al. construction to achieve this security notion, obtaining public traceability for threshold decryption without degrading efficiency.
Antoine Bak, Shibam Ghosh, Fukang Liu, Jianqiang Ni, Willi Meier, Léo Perrin
ePrint Report
FRAST is a TFHE-friendly stream cipher that was published at FSE 2025. The cipher is defined over $\mathbb{Z}_{16}$ and makes extensive use of negacyclic S-boxes over $\mathbb{Z}_{16}$, as they are less costly in TFHE. Like many FHE-friendly ciphers, FRAST randomizes some of its components to increase its security against statistical attacks. In the case of FRAST, some S-boxes are randomized using an XOF that takes a nonce as input.

In this work, we point out a strong structural property of the full FRAST permutation, which leads to a much simpler alternative representation of the primitive. We study the consequences of this representation and find a weak key space of non-negligible size (i.e., much larger than $2^{128}$) on which every ciphertext leaks one bit of plaintext. This corresponds to a distinguishing attack on the full FRAST in the weak-key setting. In particular, we emphasize that, apart from the structural property, the usage of negacyclic S-boxes further leads to a much larger weak-key space for our attack.

Finally, we provide a general framework for mounting a linear attack on FRAST in the average-key setting. We briefly describe our approach at the end of the paper, and observe that standard assumptions expected to hold in the context of linear cryptanalysis fail in the case of FRAST: our experiments indicate that a linear attack in the average-key setting does not work as expected.
Brandon Goodell, Rigo Salazar, Freeman Slaughter, Luke Szramowski
ePrint Report
In Zero Knowledge Proofs of Elliptic Curve Inner Products from Principal Divisors and Weil Reciprocity, Eagen proposes a method for checking whether a sum of points in an elliptic curve group has been computed correctly. Eagen's method verifiably checks elliptic curve group operations using only linear combinations in the base field, allowing very general proofs to be encoded in inner product argument systems and rank-1 constraint systems (R1CS). We call this method "straight-line verification," due to how it utilizes straight lines in the verification procedure. In FCMP++ by Parker, the author uses Eagen's method in an R1CS to verify scalar multiplication of group elements, proposing a protocol based on Eagen's arguments. Although the iterative witness construction algorithm proposed by Eagen is correct, the arguments are rather informal, lacking precise protocol descriptions, claims, proofs, and efficiency analyses. We present a method based on Eagen's technique for straight-line verification of cryptographic protocols, offloading the bulk of the computational costs faced by verifiers onto the prover, which is useful for lightweight devices or when verification must be performed repeatedly. The computational improvements are largely due to replacing expensive division operations with lower-cost arithmetic via logarithmic derivatives.

Our work, while not fully novel, is a healthy expansion of previous work. In Soundness Proof for an Interactive Protocol for the Discrete Logarithm Relation, Bassa contributed towards formalizing Eagen's method, explicitly describing Parker's proof of scalar multiplication and sketching arguments towards proofs of soundness. However, the soundness arguments in Bassa's work are not without obstacles. Per Eagen, rational solutions to certain systems of polynomial equations are assumed to exist. We point out that these equations do not, in fact, admit rational solutions in general. Bassa lifts to the surface of pairs of elliptic curve group points to avoid this problem, but the verification equations proposed by Eagen and studied by Bassa are not sufficiently justified in those documents.

The verification equations described by Bassa reduce to our verification equations, and therefore they are equivalent. In particular, given a line in the affine plane through three distinct, non-identity, collinear points on an elliptic curve $\mathcal{E}$, say $P, Q, R$ with interpolating slope $\lambda$ and $x$-coordinates $X_P$, $X_Q$, and $X_R$, these coordinates are necessarily dependent: $\lambda^2 = X_P + X_Q + X_R$. Thus, given any derivation $d$ on the function field $K(\mathcal{E})$ over $K$, we have $dX_R = -dX_P - dX_Q$, allowing computations to reduce further than presented in Bassa's work.
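The dependence $\lambda^2 = X_P + X_Q + X_R$ is easy to check numerically. The following sketch, on a toy short-Weierstrass curve over a small prime field (parameters chosen purely for illustration, not from the paper), enumerates chords through pairs of affine points and confirms the identity at the third intersection point.

```python
# Verify x_P + x_Q + x_R = lambda^2 for three collinear affine points on a
# toy short-Weierstrass curve y^2 = x^3 + a*x + b over F_p (illustrative
# parameters, not taken from the paper).
p, a, b = 97, 2, 3

def on_curve(x, y):
    return (y * y - (x ** 3 + a * x + b)) % p == 0

points = [(x, y) for x in range(p) for y in range(p) if on_curve(x, y)]

checked = 0
for xp, yp in points:
    for xq, yq in points:
        if xp == xq:
            continue  # vertical chord: the third point is the identity
        lam = (yq - yp) * pow(xq - xp, -1, p) % p   # interpolating slope
        xr = (lam * lam - xp - xq) % p              # x of third intersection
        yr = (yp + lam * (xr - xp)) % p             # y on the chord at x_r
        assert on_curve(xr, yr)                     # chord meets the curve again
        assert (xp + xq + xr - lam * lam) % p == 0  # the dependence above
        checked += 1
print(checked > 0)
```

The identity follows from substituting the chord $y = \lambda x + c$ into the cubic: the resulting polynomial in $x$ has roots $X_P, X_Q, X_R$ whose sum is the negated quadratic coefficient, $\lambda^2$.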

Nonetheless, Bassa's amended approach results in formal arguments, if not proofs, of security. We fully justify the verification equations presented by Bassa, show that they reduce further, and reconsider the security of Eagen's approach under the reduced verification equations. We encourage the reader to keep in mind that our verification equations are equivalent, reduced versions of those presented by Bassa. Along the way, we exploit the techniques utilized by Eagen to reduce the total number of field divisions required by the prover to just a single one. Our framework is easily applied to generically speed up verification in discrete-logarithm-based protocols.

We present our protocol and explicitly compute the completeness and soundness errors. We remark on how to complete the scheme, and our soundness error bound improves upon the previous estimates in the literature. We also show how the corrected scheme can be used to verify scalar multiplication as in Parker's proposed protocol. We then apply this approach to zero-knowledge proof systems such as the Schnorr identification scheme and Bulletproofs to illustrate the generic computational advantage offered by divisors. Because these protocols are simple, secure, and popular, this demonstrates that our work is readily applicable to improving verifier computation in cryptocurrencies such as Monero or Salvium.

23 July 2025

Guillaume Goy, Maxime Spyropoulos, Nicolas Aragon, Philippe Gaborit, Renaud Pacalet, Fabrice Perion, Laurent Sauvage, David Vigilant
ePrint Report
Hamming Quasi-Cyclic (HQC) has recently been officially selected for standardization by NIST as a post-quantum KEM alternative to ML-KEM. This milestone raises new requirements, in particular the need to design and deploy secure implementations of the scheme. This paper presents two major contributions toward securing HQC against Side-Channel Attacks (SCAs). First, we present a detailed sensitivity analysis of HQC, highlighting the critical variables and internal functions that need to be protected. Second, and as our main contribution, we propose the first fully masked HQC implementation at any order. It is also the first masked PQC implementation that is formally proved secure in the MIMO-SNI security model. This model, introduced by Cassiers and Standaert in 2020, ensures the security of gadget composition against propagating probes. We provide benchmarks of our implementation, showing that it is competitive with state-of-the-art masked PQC implementations.
Jelle Vos, Stanislaw Jarecki, Christopher A. Wood, Cathie Yun, Steve Myers, Yannick Sierra
ePrint Report
Symmetric encryption allows us to establish a secure channel based on a shared, strong key. However, users can neither remember such keys nor store them securely. Password-Authenticated Key Exchange (PAKE) protocols address this by using low-entropy, human-memorizable passwords to establish secure channels. PAKEs are widely used and foundational in practical cryptographic protocols, but while cryptographic tools like Key Encapsulation Mechanisms (KEMs) and digital signatures have been implemented to resist attacks from quantum computers, PAKEs have gained quantum security only recently.

To hedge against any potential vulnerabilities in recent quantum-secure PAKEs and in their implementations, we primarily focus on hybrid PAKE constructions that compose CPace, a classically-secure PAKE, with a variant of a recently proposed quantum-secure PAKE, which we call OQUAKE. Specifically, we introduce and analyze two new hybrid PAKEs designed to be efficient, easy to implement, and built from a minimal set of standard building blocks. The first, called CPaceOQUAKE, is a hybrid symmetric PAKE that remains secure as long as either a classical or a post-quantum assumption holds. The second, called CPaceOQUAKE+, is a hybrid asymmetric PAKE (aPAKE) in which the server holds a verifier that obscures the password, instead of holding the password itself. In our analysis we present the necessary security proofs in the Universal Composability framework. In particular, we prove that OQUAKE, the underlying KEM-based PAKE in our hybrid constructions, realizes a relaxed UC PAKE variant that exposes password equality to passive observers, an observation available anyway in typical applications of PAKEs, where the network interactions that follow the PAKE depend on authentication success. Moreover, we prove that our variant of the PAKE(+KEM)-to-aPAKE compiler is a similarly relaxed UC aPAKE.
Ke Ma, Jiabo Wang, Shanxiang Lyu, Junzuo Lai, Zsolt Lángi
ePrint Report
Discrete Gaussian Sampling (DGS) over the integers—also known as integer Gaussian sampling—is used to generate integer values that statistically follow the discrete Gaussian distribution and plays a central role in lattice-based cryptography. Among existing approaches to integer DGS, the cumulative distribution table (CDT) method is widely adopted. However, CDT sampling typically incurs substantial storage costs due to the need to store high-precision fixed-point probability tables, where a precision of $k$ bits is required to achieve a statistical distance of $2^{-k}$ from the ideal distribution. In this work, we propose a more compact representation of the CDT based on Simultaneous Diophantine Approximation (SDA). Instead of storing fixed-point values, our method expresses the probabilities in the CDT as a sequence of rational numbers with a common denominator. With parameter selection guided by SDA, this compact fractional representation reduces the data width while maintaining the same level of statistical accuracy. Our SDA-CDT construction offers clear advantages in both computation speed and storage compared to classical CDT implementations. For example, in Frodo-1344, our sampler achieves a 19.97% increase in speed (from 12.10 million to 14.51 million samples per second) and a 3.85% reduction in memory usage (from 104 bits to 100 bits). Similarly, in Frodo-976, we observe a 10.88% speedup and a 21.60% decrease in memory cost. In addition, our design eliminates floating-point arithmetic and supports a fully constant-time online sampling procedure, which ensures resistance to timing side-channel attacks without compromising performance.
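To make the table representation concrete, here is a minimal sketch of a CDT sampler whose cumulative probabilities are stored as integer numerators over one common denominator $Q$, in the spirit of the common-denominator idea described above. The parameters $\sigma$, the tail cut, and $Q$ are illustrative choices, not the paper's, and no SDA-guided denominator search is performed.

```python
# CDT sampling for a centered discrete Gaussian on [-tail, tail], with the
# cumulative table stored as integers over a common denominator Q.
import math
import random

sigma, tail, Q = 3.2, 10, 2 ** 16
support = list(range(-tail, tail + 1))
weights = [math.exp(-z * z / (2 * sigma * sigma)) for z in support]
total = sum(weights)

# cdt[i] ~ Q * Pr[Z <= support[i]], rounded to an integer numerator.
cdt, acc = [], 0.0
for w in weights:
    acc += w
    cdt.append(round(Q * acc / total))
cdt[-1] = Q  # make the table cover all of [0, Q)

def sample(rng):
    u = rng.randrange(Q)            # uniform numerator in [0, Q)
    lo, hi = 0, len(cdt) - 1        # binary search: first index with cdt[i] > u
    while lo < hi:
        mid = (lo + hi) // 2
        if cdt[mid] > u:
            hi = mid
        else:
            lo = mid + 1
    return support[lo]

rng = random.Random(0)
draws = [sample(rng) for _ in range(10000)]
print(min(draws), max(draws))       # all draws stay within the tail cut
```

In the paper's setting the denominator is instead chosen via SDA so that narrower numerators achieve the same statistical distance; this sketch simply fixes $Q = 2^{16}$ to show the sampling mechanics.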
Pierre Briaud, Maxime Bros, Ray Perlner, Daniel Smith-Tone
ePrint Report
HPPC is a multivariate signature scheme submitted to the NIST PQC standardization process in response to the recent call for additional signature schemes. We show that, despite some non-standard notational choices in the submission document, HPPC can be viewed as a special case of the well-studied, but broken for all practical parameters, HFE signature scheme. We further show that the HPPC construction introduces additional structure that weakens the scheme further. For instance, the central map has $Q$-rank $2$ independently of the degree $D$ of the central polynomial that is used.

Using these observations, we show that HPPC is weaker against the direct attack than claimed in the submission document and more crucially that all parameter sets can be practically broken using MinRank techniques. For instance, with a very naive implementation, we have been able to recover an equivalent key in approximately 8 minutes for security level 2, an hour and a half for security level 4, and slightly more than 7 hours for security level 5.
Ashrujit Ghoshal, Mingxun Zhou, Bo Peng, Elaine Shi
ePrint Report
Private Information Retrieval (PIR) schemes without preprocessing are known to incur linear server computation per client query. Several recent works have shown that, by relying on a one-time preprocessing phase, we can get around this barrier and achieve sublinear computation per query without relying on any cryptographic assumptions. Beimel et al. (CRYPTO'00) first showed a family of schemes whose bandwidth and computation per query scale as $n^{O(1/S)}$, where $S$ denotes the number of servers and $n$ denotes the database size. Unfortunately, their schemes are not practical, partly because the servers must each store an encoded version of the database, and the encoding length grows sharply as $S$ increases. The recent work of Singh et al. (TCC'24) showed how to achieve similar bandwidth scaling without the server space blowup. To get this, they rely on a different type of preprocessing called client-specific preprocessing, where the stateful client stores some hints and the servers store only the original database. Unfortunately, Singh et al.'s result is completely impractical due to its reliance on Dvir and Gopi's PIR as a building block.

We propose Zelda (short for ZEro-Leakage Data Access), the first concretely efficient, information-theoretic multi-server PIR scheme with sublinear computation. Our work makes both theoretical and practical contributions. On the theoretical front, we devise a unified framework for constructing multi-server PIR with client-specific preprocessing. This gives us a parametrizable family of schemes that asymptotically outperform all prior constructions in the same setting, including Singh et al. (TCC'24) and Ishai et al. (CRYPTO'24). On the practical front, Zelda is conceptually simple, self-contained, and does not rely on any underlying PIR as a building block. We implemented Zelda and open-sourced our code. We compared the concrete performance of Zelda with a state-of-the-art PIR scheme called QuarterPIR (Eurocrypt'24), which relies on pseudorandom functions for security. Experimental results show that Zelda outperforms QuarterPIR in terms of online response time and client space (assuming typical fiber-optic links), at the price of increased costs for offline maintenance operations.
Debasmita Chakraborty, Hosein Hadipour, Anup Kumar Kundu, Mostafizar Rahman, Prathamesh Ram, Yu Sasaki, Dilip Sau, Aman Sinha
ePrint Report
This paper studies the Twinkle family of low-latency symmetric key schemes designed by Wang et al. (CiC 2024). In particular, it presents cryptanalysis of both the mode and the underlying primitive. Twinkle is a PRF-based design, and an authenticated encryption scheme Twinkle-AE is specified based on a dedicated PRF called Twinkle-PRF. To achieve low latency, Twinkle-PRF uses a large key and state to produce sufficient randomness in a single step. Twinkle-AE uses a 1024- or 512-bit key for authentication and generates a $t$-bit tag, where $t \in \{64, 128\}$. It claims to provide $t$ bits of integrity. Several Twinkle-AE parameter sets claim higher confidentiality than integrity. In this setup, for any ciphertext, an adversary can obtain the message after $O(2^t)$ decryption attempts by guessing the tag, allowing attacks in the chosen-ciphertext setting. We show that a 1024- or 512-bit authentication key can be recovered using only $O(2^t)$ queries. The recovered authentication key enables the generation of valid ciphertexts for arbitrary plaintexts, thus achieving universal forgery.

In the second part of the paper, we perform cryptanalysis on reduced-round variants of the 1280-bit public permutation Twinkle-P, which serves as a core component of Twinkle-PRF. We investigate impossible differential, zero-correlation linear, integral, and differential-linear distinguishers by developing automated analytic tools. We provide practical distinguishers for up to 5 rounds, and our longest distinguisher reaches 6 rounds with a complexity of $2^{74.32}$. This surpasses the round bounds evaluated by the designers. We stress that our attacks on the mode exploit the gap between the claimed confidentiality and integrity levels, and thus have no impact on the parameter sets whose two security levels coincide.

Our attacks on the permutation do not have any significant impact on the full specification. Moreover, we note that Twinkle-AE-512b/Twinkle-AE-1024b and Twinkle-PA remain secure, and the versions we attacked would also be secure if the claimed confidentiality level matched the integrity level.
Roman Langrehr
ePrint Report
Constrained PRFs are PRFs that allow the generation of constrained keys, which can be used to evaluate the PRF on a subset of the inputs. The PRF remains pseudorandom, for an adversary who obtains several such constrained keys, on all inputs at which none of the constrained keys allows it to evaluate the PRF. Constrained PRFs are known for some simple constraint classes (such as puncturing or intervals) from one-way functions, but for more powerful constraint classes such as bit-fixing, the only known constructions need heavy machinery, like indistinguishability obfuscation or multilinear maps.
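For intuition on the one-way-function end of this spectrum: puncturable PRFs follow from any length-doubling PRG via the classic GGM tree, since revealing the sibling seeds along the path to an input determines the PRF everywhere except at that input. Below is a minimal sketch of this standard construction (the SHA-256-based PRG is an illustrative stand-in, not a construction from the paper).

```python
# GGM puncturable PRF from a length-doubling PRG (here: SHA-256, for
# illustration only). Inputs are bit tuples of a fixed length.
import hashlib

def prg(seed, bit):
    # One half of a length-doubling PRG: child `bit` of tree node `seed`.
    return hashlib.sha256(seed + bytes([bit])).digest()

def eval_prf(key, bits):
    s = key
    for b in bits:
        s = prg(s, b)
    return s

def puncture(key, bits):
    # Constrained key: the sibling seed at every level of the path to `bits`.
    copath, s = {}, key
    for i, b in enumerate(bits):
        copath[bits[:i] + (1 - b,)] = prg(s, 1 - b)
        s = prg(s, b)
    return copath

def eval_punctured(pkey, bits):
    for prefix, seed in pkey.items():       # exactly one prefix matches any
        if bits[:len(prefix)] == prefix:    # input other than the punctured one
            s = seed
            for b in bits[len(prefix):]:
                s = prg(s, b)
            return s
    raise ValueError("cannot evaluate at the punctured point")

key = b"\x00" * 32
x, y = (0, 1, 1, 0), (0, 1, 1, 1)
pk = puncture(key, x)
print(eval_punctured(pk, y) == eval_prf(key, y))  # punctured key agrees off x
```

Intuitively, a bit-fixing constraint does not decompose into a few such subtrees, which is one way to see why those constraint classes have resisted constructions from one-way functions alone.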

In this work we show, via an oracle separation, that constrained PRFs (for any constraint class) do not imply key agreement in a black-box way. The result also applies to delegatable constrained PRFs, where constrained keys can be used to generate other, more restrictive constrained keys.

We show that this result has interesting applications for identity-based cryptography, where users obtain secret keys from a trusted, central authority. Namely, it shows that primitives that allow key agreement in this setting, like identity-based non-interactive key exchange and a weaker variant of identity-based encryption that we introduce in this work, do not imply key agreement and thus can exist even in a world where traditional key agreement and public-key encryption are impossible.

As a side result, we obtain a very simple proof of the classic result by Impagliazzo and Rudich (STOC 89) that separates key agreement from one-way functions. The proof is similar to a proof by Brakerski, Katz, Segev, and Yerukhimovich (TCC 2011), but more general because it applies to key agreement with non-perfect correctness. The proof also shows that Merkle puzzles are optimal, which Barak and Mahmoody (Crypto 2009) have shown before, but our proof is much simpler (and has tighter constants).

22 July 2025

Nikita Snetkov, Jelizaveta Vakarjuk, Peeter Laud
ePrint Report
Migration to quantum-safe cryptography represents a significant technological shift, addressing the quantum vulnerabilities of traditional cryptographic primitives such as KEMs and digital signatures. Yet a number of challenges remain, especially in developing secure solutions for sophisticated cryptographic applications. One such application is Smart-ID, a European server-supported (threshold) signing service.

To address this issue, we present $\textsf{Electrum}$, a fail-stop server-supported signature scheme designed to enhance the security of the existing Smart-ID service. $\textsf{Electrum}$ combines multiprime RSA-based signatures with fail-stop features: it provides not only unforgeability against classical adversaries but also the ability to prove that a given signature is a forgery made by a classical and/or quantum adversary. The proposed protocol can be seen as a temporary remedy against the quantum threat until standardised threshold signature schemes become common practice. To prove the security of $\textsf{Electrum}$, we introduce a new ideal functionality $\mathcal{F}^{\textsf{SplFS}}$ for fail-stop server-supported signing in the Universal Composability model, and show that the $\textsf{Electrum}$ protocol securely realizes $\mathcal{F}^{\textsf{SplFS}}$.
Tung Chou
ePrint Report
This paper presents a family of representations of elementary vectors, which covers existing representations as special cases. We make use of this family of representations to reduce the signature size of existing signature schemes based on the VOLE-in-the-head framework. In particular, we use the new representations to build modelings for the regular syndrome decoding problem over $\mathbb{F}_2$, which is used in the post-quantum signature scheme SDitH, and for the permuted kernel problem in characteristic-2 fields, which is used in a post-quantum signature scheme recently proposed by Bettaieb, Bidoux, Gaborit, and Kulkarni. For the “short” parameter sets of SDitH, which are designed for minimizing signature size, we achieve a size reduction of 3.6% to 4.2%. For parameter sets of the Bettaieb-Bidoux-Gaborit-Kulkarni signature scheme, we achieve a size reduction of 9% to 12%.
Farzin Renan
ePrint Report
Digital signatures are essential cryptographic tools that provide authentication and integrity in digital communications. However, privacy-sensitive applications—such as e-voting and digital cash—require more restrictive verification models to ensure confidentiality and control. Strong Designated Verifier Signature (SDVS) schemes address this need by enabling the signer to designate a specific verifier, ensuring that only this party can validate the signature. Existing SDVS constructions are primarily based on number-theoretic assumptions and are therefore vulnerable to quantum attacks. Although post-quantum alternatives—particularly those based on lattices—have been proposed, they often entail large key and signature sizes. In this work, we introduce $\mathsf{CSI\text{-}SDVS}$, a novel isogeny-based SDVS scheme that offers a compact, quantum-resistant alternative. Our construction builds on the ideal class group action framework of CSIDH and the signature techniques of CSI-FiSh, and relies on the hardness of the Multi-Target Group Action Inverse Problem (MT-GAIP). $\mathsf{CSI\text{-}SDVS}$ achieves strong security guarantees—namely, Strong Unforgeability under Chosen-Message Attacks (SUF-CMA), Non-Transferability (NT), and Privacy of Signer’s Identity (PSI)—in the random oracle model. Remarkably, both the keys and signatures in $\mathsf{CSI\text{-}SDVS}$ are of size $\mathcal{O}(\lambda)$, a significant improvement over the typical $\mathcal{O}(\lambda^2)$ bounds of existing post-quantum SDVS schemes; this makes $\mathsf{CSI\text{-}SDVS}$ among the most compact PQC-based SDVS schemes and the only post-quantum secure one based on isogenies.
Lucas C. Cardoso, Marcos A. Simplicio Jr
ePrint Report
In 2009, Galindo and Garcia proposed using concatenated Schnorr signatures for the hierarchical delegation of public keys, creating a quite efficient identity-based signature (IBS) scheme. Essentially, the scheme builds upon the Schnorr signature scheme to generate a primary signature, part of which is then used as a secret key to produce signatures on subsequent messages. The resulting IBS is proven secure against existential forgery under adaptive chosen-message and adaptive identity attacks using variants of the Forking Lemma. In this paper, our goal is to answer the following question: would it be feasible to build upon the widely used elliptic curve digital signature algorithm (ECDSA) to obtain a similarly secure and efficient IBS? We answer this affirmatively, opening interesting possibilities not only for identity-based signatures with ECDSA but also for applications such as secure credential delegation. The latter application is of particular interest given the wide support for ECDSA in web- and cloud-oriented authentication systems (e.g., those based on JSON Web Tokens). The resulting scheme is proven secure by combining the Bijective Random Oracle model with the existential unforgeability game in an identity-based setup. Our results show that, even considering ECDSA's non-linearity and more convoluted verification process compared to Schnorr signatures, it is possible to obtain shorter signatures than Galindo-Garcia's scheme.
Zachary A Kissel
ePrint Report
A redactable set signature scheme is a signature scheme that allows a redactor, without possessing the signing key, to convert a signature on a set $S$ into a signature on a set $S'$ whenever $S' \subset S$. This paper introduces a new form of redactable set signature scheme called a policy-based redactable set signature scheme. Such schemes allow the signer to attach a redaction policy $P$ at signing time that limits the redactions a redactor can make. In particular, a signature on a set $S$ can only be redacted to a signature on $S'$ if $S' \subset S$ and $P(S') = 1$.
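The redaction condition in the last sentence amounts to a two-line predicate; the sketch below, with a purely illustrative size-floor policy (not one from the paper), shows the check a valid redaction must pass.

```python
# Redaction validity for a policy-based redactable set signature:
# S' must be a proper subset of S, and the signer's policy must accept S'.
def redaction_allowed(S, S_prime, policy):
    return S_prime < S and policy(S_prime)  # `<` is proper subset on sets

policy = lambda s: len(s) >= 2              # illustrative: keep >= 2 elements
S = {"name", "dob", "address", "id"}
print(redaction_allowed(S, {"name", "dob"}, policy))  # True
print(redaction_allowed(S, {"name"}, policy))         # False: policy rejects
```

The scheme itself must of course enforce this predicate cryptographically rather than by an honest check; the snippet only states the condition a redacted signature is allowed to satisfy.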
Justin Drake, Dmitry Khovratovich, Mikhail Kudinov, Benedikt Wagner
ePrint Report
In this note, we present a new instantiation of the hash-based multi-signature framework introduced by Drake, Khovratovich, Kudinov, and Wagner (CiC Vol 2 Issue 1, eprint 2025/055) for Ethereum’s consensus layer. Inspired by a recent work of Khovratovich, Kudinov, and Wagner (Crypto 2025, eprint 2025/889), we instantiate the framework with a novel incomparable encoding that improves the tradeoff between signature size and verification hashing. The purpose of this document is to make explicit how to use the ideas of the latter work within the framework of Drake, Khovratovich, Kudinov, and Wagner.