IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
21 September 2025
Russell Okamoto
We resolve the Correlated Agreement (CA) problem for Reed-Solomon codes up to the information-theoretic capacity limit by introducing a fundamental change of basis: from the traditional evaluation domain to the syndrome space. Viewed through this “Syndrome-Space Lens,” the problem of proximity testing transforms into a transparent question of linear-algebraic geometry: a single affine line of syndromes traversing a family of low-dimensional subspaces. This new perspective makes a sharp phase transition at the capacity boundary visible, allowing for a complete characterization of the problem's behavior across all parameter regimes, yielding short, self-contained proofs.
Classification. We establish a precise trichotomy organized by the rank margin $\Delta := t-d$. At the capacity boundary ($\Delta=0$), the CA premise is information-theoretically vacuous, and we prove that no rigidity can be concluded without imposing additional structure. One step beyond capacity ($\Delta=1$), the problem enters a “knife-edge” regime where unconditional rigidity does not hold; soundness is recovered either through a combinatorial witness (such as a repeated error support or a small union of supports) or by adding protocol-level structure (such as independent two-fold MCA checks, DEEP/STIR out-of-domain sampling, or a global error locator). For stricter gaps ($\Delta\ge 2$), unconditional rigidity holds under a simple algebraic condition ($(r{+}1)k
MCA and Practical Implications. Below capacity ($\delta<1-\rho$), the strengthened mutual correlated agreement (MCA) problem reduces to ordinary correlated agreement. MCA holds under the same hypotheses as CA. When folds are generated with independent challenges (e.g., via domain-separated Fiat-Shamir), the per-round security margins add. The model-scoped soundness law is $\Pr[\mathrm{FA}] \le q^{-(\sum \Delta_i) s}$, providing a clear and complete rulebook for selecting safe and efficient parameters in FRI/STARK systems. This work bypasses the complex machinery of list-decoding algorithms entirely and resolves the long-standing open problem concerning the gap between the Johnson bound and capacity.
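The additivity of per-round margins under independent challenges can be illustrated with a small numeric sketch (all parameter values below are hypothetical, chosen only to show how the bound composes, not as recommendations):

```python
# Sketch of the model-scoped soundness law Pr[FA] <= q^{-(sum Delta_i) s}.
# q (field size), the per-round rank margins Delta_i, and the repetition
# parameter s are illustrative values, not prescribed by the paper.

def false_acceptance_bound(q: int, deltas: list[int], s: int) -> float:
    """Upper bound q^(-(sum deltas) * s) on the false-acceptance
    probability: with independent challenges, the per-round rank
    margins Delta_i add in the exponent."""
    return float(q) ** (-(sum(deltas)) * s)

q = 2 ** 64            # field size, illustrative
deltas = [2, 2, 3]     # per-round rank margins Delta_i, illustrative
s = 1                  # repetition parameter, illustrative

bound = false_acceptance_bound(q, deltas, s)  # = 2 ** (-64 * 7)
```

Increasing any single round's margin, or the repetition parameter, tightens the overall bound multiplicatively, which is the "rulebook" flavor of the parameter selection described above.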
Han Wang, Ming Luo, Han Xia, Mingsheng Wang, Hanxu Hou
This work introduces a new configuration of the GSW fully homomorphic encryption (FHE) scheme (Gentry, Sahai, Waters, Crypto 2013), featuring a squared gadget, batching, and scale-based homomorphic operations.
This configuration offers improved efficiency compared to existing approaches. By utilizing our proposed method as the underlying building block, we can accelerate
FHEW-like bootstrapping implementations, including the FHEW and TFHE libraries. We conduct comprehensive experiments to evaluate the concrete performance of our method, demonstrating speedups of more than 2 times. For example, the current ring GSW under OpenFHE takes 84 ms and TFHE takes 11.4 ms, while our approach achieves 26.2 ms and 4.8 ms, respectively. These improvements have significant implications for the practical deployment of FHE, enhancing real-world usability.
Michele Ciampi, Ivan Damgård, Divya Ravi, Luisa Siniscalchi, Sophia Yakoubov
Broadcast, though often used as a black box in cryptographic protocols, is expensive to realize in terms of rounds and communication complexity. We investigate the minimal use of broadcast in round-optimal information-theoretic MPC, with statistical security. For information-theoretic MPC with guaranteed output delivery, four rounds of communication are necessary and sufficient (Applebaum, Kachlon and Patra, FOCS 2020; Applebaum, Kachlon and Patra, STOC 2023). We show that broadcast is unavoidable in the second and third rounds of statistical MPC protocols. To complement our lower bounds, we modify the protocol of Applebaum, Kachlon and Patra (STOC 2023) to make use of broadcast only in the second and third round. Along the way, we show that the sharing phase of any three-round information-theoretic VSS protocol must also make use of broadcast in the second and third rounds.
Yunus Gürlek, Kadircan Bozkurt
zkVot is a client-side trustless distributed computation protocol that utilizes zero-knowledge proof technology. It is designed to achieve anonymous and censorship-resistant voting while ensuring scalability. The protocol is created as an example of how modular and distributed computation can improve both the decentralization and the scalability of the internet.
A complete and working implementation of this paper is available at https://github.com/node101-io/zkvot. It is important to emphasize that zkVot is one of the first complete implementations of a fully censorship-resistant anonymous voting application that can scale up to a meaningful number of voters.
MINKA MI NGUIDJOI Thierry Emmanuel
This manuscript introduces Semantic Holder (SH), the opposability primitive within the Chaotic Affine Secure Hash (CASH) toolkit, completing the framework’s implementation of the Q2CSI philosophy. SH enables legally opposable interpretations through algebraic extraction from polynomial iteration traces, working in concert with CEE (confidentiality) and AOW (reliability). Building upon the Affine Iterated Inversion Problem (AIIP) foundation, SH provides mathematically verifiable legal interpretations with guaranteed minimum opposability bounds. We establish that SH maintains an opposability score Ω ≥ 0.60 through rigorous entropy preservation, institutional explainability, and legal contestability guarantees. The primitive features efficient STARK-proof verifiable computation, cross-jurisdictional compatibility, and quantum resistance through its reduction to AIIP hardness. We demonstrate practical applications in legal smart contracts, regulatory compliance auditing, and digital evidence authentication, providing concrete parameter recommendations for standard security levels. SH represents a significant advancement in cryptographic systems that must operate within legal constraints, enabling transparent and verifiable legal opposability without compromising security or performance.
20 September 2025
Ran Cohen, Pouyan Forghani, Juan Garay, Rutvik Patel, Vassilis Zikas
Despite several known idiosyncrasies separating the synchronous and the asynchronous models, asynchronous secure multi-party computation (MPC) protocols demonstrate high-level similarities to synchronous MPC, both in design philosophy and abstract structure. As such, a coveted, albeit elusive, desideratum is to devise automatic translators (e.g., protocol compilers) of feasibility and efficiency results from one model to the other.
In this work, we demonstrate new challenges associated with this goal. Specifically, we study the case of parallel composition in the asynchronous setting. We provide formal definitions of this composition operation in the UC framework, which, somewhat surprisingly, have been missing from the literature. Using these definitions, we then turn to charting the feasibility landscape of asynchronous parallel composition.
We first prove strong impossibility results for composition operators that do not assume knowledge of the functions and/or the protocols that are being composed. These results draw a grim feasibility picture, which is in sharp contrast with the synchronous model, and highlight the question:
Is asynchronous parallel composition even a realistic goal?
To answer the above (in the affirmative), we provide conditions on the composed protocols that enable a useful form of asynchronous parallel composition, as it turns out to be common in existing constructions.
Tomoki Moriya
In 1997, while counting the number of genus-$2$ curves admitting two distinct elliptic subcovers, Kani proved what is now known as Kani's lemma: a commutative diagram of four $g$‑dimensional abelian varieties induces an isogeny between product abelian varieties of dimension $2g$. In recent years, Kani's lemma has come to play a fundamental role in isogeny-based cryptography, finding numerous applications in both cryptanalysis and protocol construction. However, direct investigation into the lemma itself remains scarce.
In this work, we propose a generalization of Kani’s lemma. We present a novel formulation that, given a commutative diagram of $2^{n+1}$ abelian varieties of dimension $g$, yields an isogeny of dimension $2^{n}g$. We further establish a connection between this generalized lemma and the theory of Clifford algebras, using the latter as a foundational tool in our construction. To exemplify our framework, we explicitly construct the resulting $2^{n}g$‑dimensional isogenies for the cases $n=1,2,3$. The cases of $n=2,3$ provide nontrivial generalizations of the original Kani's lemma. This generalization is expected to have novel applications in the fields of both mathematics and cryptography.
Karen Azari, Cecilia Boschini, Kristina Hostáková, Michael Reichle
The current standardization calls for threshold signatures have highlighted the need for appropriate security notions providing security guarantees strong enough for broad application. To address this, Bellare et al. [Crypto'22] put forward a hierarchy of unforgeability notions for threshold signatures. Recently, Navot and Tessaro [Asiacrypt'24] introduced a new game-based definition of strong (one-more) unforgeability for threshold signatures, which however does not achieve Bellare's strongest level of security.
Navot and Tessaro analyzed several existing schemes w.r.t. their strong unforgeability security notion, but all positive results rely on idealized models. This is in contrast to the weaker security notion of (standard) unforgeability, for which standard-model constructions exist. This leaves open a fundamental question: is getting strong unforgeability fundamentally harder than standard unforgeability for threshold signatures?
In this paper we bridge this gap, by showing a generic construction lifting any unforgeable threshold signature scheme to strong unforgeability. The building blocks of our construction can be instantiated in the standard model under standard assumptions. The achieved notion of strong unforgeability extends the definition of Navot and Tessaro to achieve the strongest level of security according to the hierarchy of Bellare et al. (following a recent classification of security notions for (blind) threshold signatures by Lehmann, Nazarian, and Özbay [Eurocrypt'25]).
The starting point for our transformation is an existing construction for single-user signatures from chameleon hash functions by Steinfeld, Pieprzyk and Wang [RSA'07]. We first simplify their construction by relying on a stronger security notion for chameleon hash functions. The bulk of our technical contribution is then to translate this framework into the threshold setting. Towards this goal, we introduce a game-based definition for threshold chameleon hash functions (TCHF) and provide a construction of TCHF that is secure under DLOG in the standard model. We believe that our new notion of TCHF might also be of independent interest.
David Garvin, Mattia Fiorentini, Oleksiy Kondratyev, Marco Paini
We propose a new data anonymisation method based on the concept of a quantum feature map. The main advantage of the proposed solution is that a high degree of security is combined with the ability to perform classification tasks directly on the anonymised (encrypted) data, resulting in the same or even higher accuracy compared to that obtained when working with the original plain-text data. This enables important use cases in medicine and finance, where anonymised datasets from different organisations can be combined to facilitate improved machine learning outcomes on the combined dataset. Examples include combining medical diagnostic imaging results across hospitals, or combining fraud detection datasets across financial institutions. We use the Wisconsin Breast Cancer dataset to obtain results on Rigetti's quantum simulator and Ankaa-3 quantum processor. We compare the results with classical benchmarks and with those obtained from an alternative anonymisation approach using a Restricted Boltzmann Machine to generate synthetic datasets. Finally, we introduce concepts from the theory of quantum magic to optimise the circuit ansatz and hyperparameters used within the quantum feature map.
Haotian Yin, Jie Zhang, Wanxin Li, Yuji Dong, Eng Gee Lim, Dominik Wojtczak
Updatable Signature (US) schemes allow signatures to be updated so that they can be verified under a new key, a feature useful for key rotation in practice. Cini et al. (PKC'21) first formalised this primitive. However, their post-quantum-secure US scheme does not satisfy their own security definition: it lacks unlinkability and achieves only bounded unforgeability. This paper solves this problem by providing a new, fully secure construction. First, we simplify the definition of unlinkability via a hybrid argument, and reduce the update oracle of the unforgeability experiment by assuming unlinkability. Then, we construct our US scheme from verifiable encryption and the SIS assumption. This scheme is fully unlinkable and unforgeable, but it is also a unique signature scheme in each epoch: only one signature per message can be issued during an epoch, which requires a stateful signer/proxy. This is sufficient for many applications.
19 September 2025
Raitenhaslach, Germany, 7 September - 11 September 2026
Event date: 7 September to 11 September 2026
Saint-Malo, France, 14 April - 16 April 2026
Event date: 14 April to 16 April 2026
Submission deadline: 31 October 2025
Notification: 12 January 2026
Nam Tran, Khoa Nguyen, Dongxi Liu, Josef Pieprzyk, Willy Susilo
The study of lattice-based group signatures has been a prominent research direction since 2010. While recent advances in the field have yielded schemes in the random oracle model with strong security properties and nearly practical efficiency, the current state of affairs for lattice-based group signatures in the standard model is still much less satisfactory. Existing schemes, proposed by Katsumata and Yamada (EUROCRYPT'19) or implied by generic non-interactive zero-knowledge proofs for NP (by Peikert and Shiehian at CRYPTO'19 and by Waters at STOC'24), either only fulfil a weak notion of anonymity called selfless anonymity, or require a strong lattice assumption, or suffer from extremely large signatures and/or public keys.
This work aims to enhance the state of affairs for lattice-based group signatures in the standard model. We provide improved constructions that simultaneously achieve: (i) signature and public key sizes significantly smaller than those of known schemes; (ii) full anonymity in the CPA and CCA senses; (iii) security based on the standard SIS and LWE assumptions with polynomial approximation factors for worst-case lattice problems (in general lattices). Our design approach slightly departs from that of existing pairing-based and lattice-based constructions. In the design process, we adapt and develop several lattice-based cryptographic ingredients that may be of independent interest. At the heart of our constructions is a reasonably efficient non-interactive zero-knowledge proof system for relations typically appearing in advanced privacy-preserving lattice-based cryptographic protocols. These relations are addressed by a trapdoor $\Sigma$-protocol with an inverse polynomial soundness error, which is made non-interactive via the standard-model Fiat-Shamir transform of Canetti et al. (STOC'19) and a compiler by Libert et al. (ASIACRYPT'20).
Xisen Tian, Paul Westland
Key agreement is the cornerstone of any secure communications channel, whether over the traditional internet or in the delay-tolerant networks used in space communications. However, space systems that rely on pre-shared keys face insurmountable limitations in both security and scalability. A single key compromise can expose all past and future encrypted communications, and the static nature of pre-shared keys prevents dynamic group membership, making it difficult to add or remove devices without invalidating entire key sets. To address these challenges, the Messaging Layer Security (MLS) protocol -- a recently introduced internet standard for asynchronous group key establishment -- offers promising capabilities. Its potential to provide efficient and dynamic key agreement for federated interplanetary architectures (e.g. government-commercial collaborations) has been previously recognized, particularly with integration of MLS with the Bundle Protocol Security (BPSec) framework. In this work, we present the first empirical findings on the integration of MLS with BPSec in a realistic space communications scenario and provide insights into its deployment in future space architectures.
Stefan Dziembowski, Grzegorz Fabiański, Daniele Micciancio, Rafał Stefański
We present a formally verified (in Lean 4) framework for translating symbolic cryptographic proofs into computationally sound ones. Symbolic cryptography is a well-established field that allows reasoning about cryptographic protocols in an abstract way and is relatively easy to verify using proof assistants. Unfortunately, it often lacks a connection to the computational aspects of real-world cryptography. Computationally sound cryptography, on the other hand, captures this connection much better, but it is often more complex, less accessible, and much harder to verify formally. Several works in the past have provided a bridge between the two, but, to our knowledge, none of them has been implemented in a proof assistant.
We close this gap by formalizing the translation from symbolic to computationally-sound cryptography in Lean 4. Our framework is based on the work of Micciancio (Eurocrypt, 2010) and Li and Micciancio (CSF, 2018), which builds on the idea of using co-induction (instead of induction) for reasoning about an adversary's knowledge in a symbolic setting. Our work encompasses (1) the formalization of the symbolic cryptography framework, (2) the formalization of the computationally sound cryptography framework, and (3) the formalization of the translation between the two. We also provide (4) an extended example of circuit garbling, which is a well-known cryptographic protocol frequently used in secure multi-party computation.
We believe that our work will serve as a foundation for future research in the area of formal verification of cryptographic protocols, as it enables reasoning about cryptographic protocols more abstractly while still providing a formally verified connection to the computational aspects of real-world cryptography.
Ran Gelles, Carmit Hazay, Manuj Mukherjee, Jaspal Singh, Arun Yeragudipati, Vassilis Zikas
The study of efficient multi-party computation (MPC) has been a central focus in the cryptographic literature, producing a wide range of innovative techniques that have substantially improved the practicality of MPC in real-world applications. However, the vast majority of this work assumes reliable communication channels and neglects the impact of network-level noise—a fundamental characteristic of modern communication systems. Although classical error-correcting codes can be used to address such noise, they often introduce significant overhead, potentially offsetting the efficiency gains provided by advanced cryptographic methods. To the best of our knowledge, existing efforts to circumvent this limitation rely on alternative techniques restricted to the two-party setting, with approaches that do not naturally generalize to the multi-party case.
We initiate the study of information-theoretic MPC over noisy networks for more than two parties. We construct an MPC protocol that is secure against a semi-honest majority with an optimal communication overhead in circuit size: it computes any circuit $\mathcal{C}$ with communication complexity proportional to its size, i.e., $O(|\mathcal{C}|)$. Our protocol departs from the classical MPC methodology by introducing a novel multi-party interactive-coding scheme that remedies channel noise through a new rewinding technique while maintaining the properties required for secure computation. The ideas of our interactive coding can be applied to other MPC scenarios as well to eliminate channel noise. In fact, our treatment uncovers (and resolves) a subtle flaw in the constant-rate 2PC construction by Gelles et al. [SCN~'18] over noisy channels.
Rishabh Bhadauria, Alexander R. Block, Prantar Ghosh, Justin Thaler
We initiate the study of lower bounding the verification time of Succinct Non-interactive ARguments of Knowledge (SNARKs) built in the Polynomial Interactive Oracle Proof + Polynomial Commitment Scheme paradigm. The verification time of these SNARKs is generally dominated by the polynomial commitment scheme, and so we want to understand if polynomial commitment schemes admit lower bounds on the verification time. By recognizing that polynomial commitment schemes are also often built by applying cryptography to some information-theoretic core protocol, we seek to separate this core from the cryptography in a way that meaningfully captures the verification time required by the polynomial commitment scheme verifier.
We provide strong evidence that several polynomial commitment schemes have (nearly) optimal verifier times. Our evidence comes from connecting polynomial commitment schemes to certain information-theoretic protocols known as communication protocols from the field of communication complexity, a link which we believe to be of independent interest. Through this lens, we model the verifier work in the cryptographic protocols as information (i.e., number of bits) exchanged between parties in the communication protocols, allowing us to leverage lower bounds from communication complexity. These lower bounds give strong evidence that the verifier time in these polynomial commitment schemes must be at least the number of bits exchanged in the communication protocol.
We extract the communication protocol cores of three polynomial commitment schemes and lower bound the bits exchanged in these cores. The lower bounds we obtain match (up to poly-logarithmic factors) the best-known (asymptotic) verification times of the polynomial commitment schemes we examine in this work. Specifically, we show that for univariate/multilinear polynomials of size $N=2^n$:
- the communication core of Hyrax PCS (Wahby et al., S&P 2016) requires $\Omega(\sqrt{N})$ bits to be exchanged;
- the communication core of Bulletproofs PCS (Bootle et al., EUROCRYPT 2016; Bünz et al., S&P 2018) requires $\Omega(N)$ bits to be exchanged; and
- the communication core of Dory PCS (Lee, TCC 2021) requires $\Omega(\log(N))$ bits to be exchanged.
Our results strongly suggest a negative answer to a longstanding open question on whether the Bulletproofs verifier can be made sublinear time.
18 September 2025
Zoushaojie Jiang, An Wang, Yaoling Ding, Annv Liu, Zheng Liu, Jing Yu, Liehuang Zhu
Deep learning-based profiled side-channel analysis (SCA) targeting cryptographic implementations has attracted significant attention in recent years. Generalizable deep learning mechanisms, such as contrastive learning-based profiled SCA (CL-SCA), can enhance the effectiveness of SCA without reliance on specific network architectures and hyperparameters. This independence enables robust adaptation across diverse attack scenarios. Nonetheless, CL-SCA relies heavily on data augmentation and may mistakenly push apart physical leakage traces that should belong to the same class, which interferes with the extraction of discriminative features crucial for SCA performance. To address these limitations, we propose a profiled SCA method based on supervised contrastive learning, called SupCL-SCA. This method enhances the learning of discriminative features that facilitate key recovery by leveraging supervised information to guide the extraction of similarities in feature space. Compared with state-of-the-art methods, SupCL-SCA not only retains their general applicability and inherent advantages but also eliminates reliance on complex data augmentation and multi-stage training. Additionally, we propose a cosine distance-based Intra-Inter Distance Ratio (IIDR) metric to assess the discriminative capability of models in deep learning-based profiled SCA methods. We evaluate SupCL-SCA on three publicly available datasets covering different implementations and platforms. Experimental results show that SupCL-SCA consistently reduces the number of traces required to recover the key compared to the original methods, demonstrating enhanced attack capability.
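The abstract names the IIDR metric without spelling out its formula. The sketch below assumes one plausible reading, purely for illustration: the mean cosine distance between same-class feature pairs divided by the mean cosine distance between different-class pairs (the function name and the exact averaging are this sketch's assumptions, not the paper's definition):

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    # 1 minus the cosine similarity of two feature vectors
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def iidr(features: np.ndarray, labels: list) -> float:
    """Hypothetical Intra-Inter Distance Ratio: mean cosine distance
    over same-class pairs divided by mean cosine distance over
    different-class pairs. Smaller values suggest features that
    cluster tightly within a class and separate well across classes."""
    intra, inter = [], []
    n = len(features)
    for i in range(n):
        for j in range(i + 1, n):
            d = cosine_distance(features[i], features[j])
            (intra if labels[i] == labels[j] else inter).append(d)
    return float(np.mean(intra) / np.mean(inter))
```

Under this reading, a model whose learned features achieve a lower ratio would be judged more discriminative, which is the quality SupCL-SCA's supervised contrastive objective is designed to improve.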
Yingjie Lyu, Zengpeng Li, Hong-Sheng Zhou, Xudong Deng
We propose the first two-round multi-party signing protocol for the Elliptic Curve Digital Signature Algorithm (ECDSA) in the threshold-optimal setting, reducing the number of rounds by one compared to the state of the art (Doerner et al., S&P '24). We also resolve the security issue of presigning pointed out by Groth and Shoup (Eurocrypt '22): for the first time among threshold-optimal schemes, we evade a security loss that grows with the number of pre-released, unused presignatures.
Our construction builds on Non-Interactive Multiplication (NIM), a notion proposed by Boyle et al. (PKC '25), which allows parties to evaluate multiplications on secret-shared values in one round. In particular, we use the construction of Abram et al. (Eurocrypt '24) instantiated with class groups. The setup is minimal and transparent, consisting of only two class-group generators. The signing protocol is efficient in bandwidth, with a message size of 1.9 KiB at 128-bit security, and has competitive computational performance.
Shengnan Zhao, Junyu Lu, Yuchen Huang, Dongdong Miao, Chuan Zhao
Private information retrieval (PIR) enables a client to fetch a record from databases held by untrusted servers while hiding the access pattern (index or keyword) from the servers.
In practical settings, however, data objects (e.g., articles, videos) are commonly tagged with multiple identifiers, which can be structured as {index, value, keywords}. Current PIR schemes are constrained to retrieving records based on a single index or a single keyword, and cannot efficiently handle conjunctive queries requiring multiple keywords. To address this limitation, we propose Mk-PIR, a PIR scheme that enables a client to retrieve records that match all specified keywords simultaneously.
We propose two distinct constructions: $\textsf{MkPIR}^\mathbf{I}$, an inverted-index-based solution built upon our Oblivious Set Intersection (OSI) primitive, which enables private intersection computation on the server side without revealing client queries; and $\textsf{MkPIR}^\mathbf{F}$, a forward-index-based solution utilizing our Private Subset Determination (PSD), which privately outputs matching indices by verifying subset relationships.
Both constructions adapt to diverse database configurations in which keywords are not required to be the primary key. Experimental results show that the average time to determine whether an index satisfies multiple keywords ranges from 0.5 to 332 ms, demonstrating that Mk-PIR achieves flexible query capabilities with modest performance overhead.
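In plaintext, the two matching strategies the constructions privatize can be sketched as follows. The cryptographic layers (OSI for $\textsf{MkPIR}^\mathbf{I}$, PSD for $\textsf{MkPIR}^\mathbf{F}$) are omitted entirely; this only shows which indices a conjunctive query should return, with illustrative data.

```python
# Plaintext sketch of the two matching strategies Mk-PIR makes private.
# Forward index: each record's index maps to its set of keyword tags.
forward = {
    1: {"crypto", "pir", "survey"},
    2: {"crypto", "pir"},
    3: {"ml", "sca"},
}

# Inverted index: each keyword maps to the set of indices tagged with it.
inverted = {}
for idx, kws in forward.items():
    for kw in kws:
        inverted.setdefault(kw, set()).add(idx)

query = {"crypto", "pir"}  # conjunctive query: ALL keywords must match

# MkPIR^I view: intersect the posting sets of the queried keywords.
hits_inverted = set.intersection(*(inverted.get(kw, set()) for kw in query))

# MkPIR^F view: keep indices whose keyword set contains the whole query.
hits_forward = {idx for idx, kws in forward.items() if query <= kws}

print(hits_inverted == hits_forward == {1, 2})  # True
```

The inverted-index route does set intersection on the server side (which OSI performs obliviously), while the forward-index route checks a subset relation per record (which PSD performs privately); both must return the same index set.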