International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

17 September 2025

Andreas Wiemers
ePrint Report
Over the past years, the so-called Goppa Code Distinguishing (GD) problem has been studied. The GD problem asks to distinguish a generator matrix of a binary Goppa code from a random matrix. The main motivation for introducing the GD problem is its connection to the security of the McEliece public-key cryptosystem. A main contribution in addressing this problem is the so-called syzygy distinguisher. In this article, we introduce another distinguisher. From a geometric perspective, the distinguisher considers certain invariants of the space of all homogeneous polynomials that vanish to higher order on the columns of the generator matrix. Based on heuristic arguments, the distinguisher described in this article might be favorable (but not practically computable) for specific practical parameters, such as the combination (m = 12, s = 64, k = 768, n = 3488), compared to the values given by the syzygy distinguisher.
Xiaojie Guo, Hanlin Liu, Zhicong Huang, Hongrui Cui, Wenhao Zhang, Cheng Hong, Xiao Wang, Kang Yang, Yu Yu
ePrint Report
Pseudorandom correlation generators (PCGs) have become popular for generating huge amounts of correlated randomness, a critical resource in secure computation. However, existing PCGs are memory-consuming and unfriendly to resource-constrained devices. Even for moderately powerful devices, the need for large memory can be a disadvantage in applications like zero-knowledge proofs or large-scale secure computation. In this paper, we propose a maliciously secure streaming PCG (sPCG), which generates a bounded number of tuples of subfield vector oblivious linear evaluation (sVOLE) on the fly, each with sublinear memory and computation.

1. We propose an efficient protocol that replaces the relaxed distributed comparison function in the best pseudorandom correlation function (PCF) for sVOLE (CRYPTO'22), while retaining the same streaming features for any polynomial number of tuples. With this protocol, our sPCG is doubly efficient in memory and in computation per sVOLE. Moreover, we augment the black-box distributed setup to malicious security and obtain a 4x communication improvement. Our sPCG can be extended to a more efficient sVOLE PCF with the same improvements in memory and computation, and a 2x faster malicious non-black-box distributed setup.

2. We present a practical attack on the Learning Parity with Noise (LPN) assumption for expand-accumulate codes with regular noise, revealing that some previous parameters provide only around 14-22 bits of security with binary noise, far below the target 128 bits. To address this, we introduce a low-Hamming-weight noise distribution that withstands the attack. We then derive updated LPN parameters with the new noise distribution, restoring 128-bit security and reducing the noise-related computation and communication.

3. We provide an implementation of our sPCG for the special case of correlated oblivious transfer (COT). In addition to the improvements over the best PCF, our sPCG achieves end-to-end performance comparable to Ferret (CCS'20) and the PCG from expand-convolute codes (CRYPTO'23), two state-of-the-art PCGs, with the advantage of producing 10 million COTs on the fly while reducing memory from 337 MB and 624 MB, respectively, to 20 MB.
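The difference between regular noise and unstructured low-Hamming-weight noise can be illustrated with a toy LPN sampler. This is purely illustrative: the parameters below are far too small for security and are not the paper's, and the sampler has no connection to expand-accumulate codes.

```python
import secrets

def regular_noise(n, t):
    """Regular noise: n positions split into t equal blocks, exactly one error per block."""
    assert n % t == 0
    block = n // t
    e = [0] * n
    for i in range(t):
        e[i * block + secrets.randbelow(block)] = 1
    return e

def fixed_weight_noise(n, t):
    """Unstructured noise of Hamming weight exactly t (error positions unrestricted)."""
    e = [0] * n
    while sum(e) < t:
        e[secrets.randbelow(n)] = 1
    return e

def lpn_samples(k, n, t, noise=regular_noise):
    """Toy binary LPN instance b = A*s + e over F2 (illustrative sizes only)."""
    s = [secrets.randbelow(2) for _ in range(k)]
    A = [[secrets.randbelow(2) for _ in range(k)] for _ in range(n)]
    e = noise(n, t)
    b = [(sum(a_i * s_i for a_i, s_i in zip(row, s)) + e_j) % 2
         for row, e_j in zip(A, e)]
    return A, b, s, e

A, b, s, e = lpn_samples(k=16, n=64, t=8)
```

Regular noise leaks structure (one error per block), which is exactly what attacks like the one above can exploit; the fixed-weight sampler removes the per-block restriction.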
Zonglun Li, Wangze Ni, Shuhao Zheng, Junliang Luo, Weijie Sun, Lei Chen, Xue Liu, Tianhang Zheng, Zhan Qin, Kui Ren
ePrint Report
While transaction transparency is fundamental to blockchains, it introduces privacy vulnerabilities for users requiring confidentiality. Existing privacy mixers, intended to mitigate the issue by obfuscating transactional links, have been leveraged to evade emerging financial regulations in DeFi and to facilitate harmful practices. Regulatory concerns, driven by prosocial intentions, call for ensuring that mixers are used responsibly and in compliance with regulations. The research challenge is to reconcile privacy with enforceable compliance by providing designated-only transaction traceability, blocking sanctioned actors while preserving honest-user anonymity. We tackle this challenge by introducing Hurricane Mixer, a mixer framework that embeds compliance logic without forfeiting the privacy of regular transactions. Hurricane comes in two deployable variants: Cash, for fixed-denomination pools, and UTXO, for arbitrary-amount transfers. Both variants share two key components: a sanction-list mechanism that prevents transactions involving sanctioned entities, and a mechanism that allows designated regulatory access to encrypted transaction details for compliance purposes. We implement the full stack: Gnark Groth-16 circuits for deposit/withdraw proofs, contracts maintaining an on-chain sanction list, and dual public-key encryption for bidirectional tracing. A comprehensive evaluation illustrates the efficacy of Hurricane Mixer in ensuring privacy preservation, regulatory conformity, and cost efficiency. Experiments show that the Cash variant is more economical when the payment amount matches the denomination, the UTXO variant is better suited for large or fractional payments, and the framework overall sustains competitive gas efficiency without compromising regulator traceability.
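The commitment/nullifier mechanics of a fixed-denomination pool with a sanction list can be mimicked by a toy, fully centralized sketch. This is illustrative only: the real system replaces the reveal-based withdrawal below with a Groth-16 zero-knowledge proof and enforces everything in smart contracts; all names and checks here are assumptions for the sketch.

```python
import hashlib, secrets

def H(*parts):
    """SHA-256 over concatenated byte strings, standing in for the circuit's hash."""
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

class ToyMixerPool:
    """Fixed-denomination pool with a sanction list and a nullifier set.
    Withdrawal here reveals the note secret; a real mixer proves knowledge
    of it in zero knowledge instead."""
    def __init__(self, sanctioned):
        self.commitments = set()
        self.nullifiers = set()        # spent notes, prevents double-withdrawal
        self.sanctioned = set(sanctioned)

    def deposit(self, sender, commitment):
        if sender in self.sanctioned:
            raise PermissionError("sanctioned sender")
        self.commitments.add(commitment)

    def withdraw(self, recipient, secret, nullifier):
        if recipient in self.sanctioned:
            raise PermissionError("sanctioned recipient")
        if nullifier in self.nullifiers:
            raise ValueError("note already spent")
        if H(secret, nullifier) not in self.commitments:
            raise ValueError("unknown note")
        self.nullifiers.add(nullifier)
        return True

# Honest flow: deposit a note, later withdraw it to an unlinkable address.
secret, nullifier = secrets.token_bytes(32), secrets.token_bytes(32)
pool = ToyMixerPool(sanctioned={"mallory"})
pool.deposit("alice", H(secret, nullifier))
assert pool.withdraw("bob", secret, nullifier)
```

The nullifier set is what makes each fixed-denomination note single-use while keeping deposit and withdrawal addresses unlinkable.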
Bowen Zhang, Hao Cheng, Johann Großschädl, Peter Y. A. Ryan
ePrint Report
The Edwards-curve Digital Signature Algorithm (EdDSA) is a deterministic digital signature scheme that has recently been adopted in a range of popular security protocols. Verifying an EdDSA signature involves the computation of a double-scalar multiplication of the form $SB - hA$, which is a costly operation. The vector extensions of modern Intel processors, such as AVX2 and AVX-512, offer a variety of options to speed up double-scalar multiplication thanks to their massive SIMD-parallel processing capabilities. However, in certain application domains like fintech or e-voting, several or many EdDSA verifications have to be performed, and what counts in the end is not the execution time of one single signature verification, but how long it takes to verify a certain number of signatures. For such applications, it makes more sense to use SIMD instructions to maximize the throughput of a batch of verification operations instead of minimizing the latency of one verification. In this paper, we introduce high-throughput AVX2/AVX-512 implementations of EdDSA verification executing four (resp., eight) instances of double-scalar multiplication in a SIMD-parallel fashion, whereby each instance uses a 64-bit element of the 256-bit (resp., 512-bit) vectors. We analyze three techniques for double-scalar multiplication, one that separates the computation of $SB$ and $hA$, while the other two integrate or interleave them based on a joint-sparse form or non-adjacent form representation of the scalars $S$ and $h$. Our experiments with 256-bit AVX2 vectorization on an Intel Cascade Lake CPU show that the separate method achieves the best results and reaches a single-core throughput of 48,182 double-scalar multiplications per second, which exceeds the throughput of the currently fastest latency-optimized implementation by 33%.
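The trade-off between separate and interleaved double-scalar multiplication can be sketched in a toy additive group (integers mod p standing in for the curve group; the subtraction in $SB - hA$ is elided by folding the sign into the scalar). The interleaved variant below is the classic Shamir/Strauss trick, a simpler relative of the joint-sparse-form and NAF methods analyzed in the paper.

```python
def scalar_mult(k, P, p, ops):
    """Left-to-right double-and-add in the toy additive group (Z_p, +)."""
    R = 0
    for bit in bin(k)[2:]:
        R = (R + R) % p; ops["dbl"] += 1
        if bit == "1":
            R = (R + P) % p; ops["add"] += 1
    return R

def double_scalar_separate(S, B, h, A, p, ops):
    """Compute S*B + h*A with two independent ladders (two doubling runs)."""
    return (scalar_mult(S, B, p, ops) + scalar_mult(h, A, p, ops)) % p

def double_scalar_shamir(S, B, h, A, p, ops):
    """Shamir's trick: one shared doubling run over the interleaved bits."""
    BA = (B + A) % p                       # precomputed joint point B + A
    n = max(S.bit_length(), h.bit_length())
    R = 0
    for i in range(n - 1, -1, -1):
        R = (R + R) % p; ops["dbl"] += 1
        b1, b2 = (S >> i) & 1, (h >> i) & 1
        if b1 or b2:
            R = (R + (B if not b2 else A if not b1 else BA)) % p
            ops["add"] += 1
    return R
```

Counting operations shows the interleaved version roughly halves the doublings; batching across SIMD lanes, as in the paper, is the orthogonal throughput dimension.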
Eli Baum, Sam Buxbaum, Nitin Mathai, Muhammad Faisal, Vasiliki Kalavri, Mayank Varia, John Liagouris
ePrint Report
We present ORQ, a system that enables collaborative analysis of large private datasets using cryptographically secure multi-party computation (MPC). ORQ protects data against semi-honest or malicious parties and can efficiently evaluate relational queries with multi-way joins and aggregations that have been considered notoriously expensive under MPC. To do so, ORQ eliminates the quadratic cost of secure joins by leveraging the fact that, in practice, the structure of many real queries allows us to join records and apply the aggregations “on the fly” while keeping the result size bounded. On the system side, ORQ contributes generic oblivious operators, a data-parallel vectorized query engine, a communication layer that amortizes MPC network costs, and a dataflow API for expressing relational analytics—all built from the ground up.

We evaluate ORQ in LAN and WAN deployments on a diverse set of workloads, including complex queries with multiple joins and custom aggregations. When compared to state-of-the-art solutions, ORQ significantly reduces MPC execution times and can process one order of magnitude larger datasets. For our most challenging workload, the full TPC-H benchmark, we report results entirely under MPC with Scale Factor 10—a scale that had previously been achieved only with information leakage or the use of trusted third parties.
Suvradip Chakraborty, Sebastian Faller, Dennis Hofheinz, Kristina Hostáková
ePrint Report
We put forward the concept of "forgetful encryption". A forgetful public-key encryption scheme guarantees that (a limited amount of) information that is leaked through the encryption process does not reveal the whole message. This notion is related to, but different from leakage-resilient encryption (where leakage on the decryption key is considered) and big-key encryption (which is defined for secret-key encryption). Forgetful encryption is useful, e.g., in settings in which a cloud server encrypts data stored for a client, using that client’s public key. In that setting, a limited form of leakage on the encryption process may occur accidentally, through security breaches, or through targeted attacks. In this work, we define the notion of forgetful encryption using the simulation paradigm, and provide a number of instantiations from mild computational assumptions. In the random oracle model, we provide a generic instantiation from “lossy key encapsulation mechanisms”, an existing paradigm that allows for efficient instantiations in the group, RSA, and lattice settings. In the standard model, we provide a somewhat less efficient construction (that can also be instantiated under standard assumptions) from a mild variant of (sender-)deniable encryption.
Zeyu Liu, Katerina Sotiraki, Eran Tromer, Yunhao Wang
ePrint Report
The efficiency of Public Key Encryption (PKE) and Key Encapsulation Mechanisms (KEM), and in particular their large ciphertext size, is a bottleneck in real-world systems. This worsens in post-quantum secure schemes (e.g., lattice-based ones), whose ciphertexts are an order of magnitude larger than prior ones. The work of Kurosawa (PKC'02) introduced multi-message multi-recipient PKE (mmPKE) to reduce the amortized ciphertext size when sending messages to more than one recipient. This notion naturally extends to multi-message multi-recipient KEM (mmKEM).

In this work, we first show concrete attacks on existing lattice-based mmPKE schemes: Using maliciously-crafted recipient public keys, these attacks completely break semantic security and key privacy, and are inherently undetectable. We then introduce the first lattice-based mmKEM scheme that maintains full privacy even in the presence of maliciously-generated public keys. Concretely, the ciphertext size of our mmKEM for 100 recipients is $>10\times$ smaller than naively using Crystals-Kyber. We also show how to extend our mmKEM to mmPKE, achieving a scheme that outperforms all prior lattice-based mmPKE schemes in terms of both security and efficiency. We additionally show a similar efficiency gain when applied to batched random oblivious transfer, and to group oblivious message retrieval.

Our scheme is proven secure under a new Module-LWE variant assumption, Oracle Module-LWE, which may be of independent interest. We reduce standard MLWE to this new assumption for some parameter regimes, which also gives intuition on why the assumption holds for the parameters we are interested in (along with additional cryptanalysis).

Furthermore, we show an asymptotically efficient compiler that removes the assumption made in prior works that recipients know their position in the list of intended recipients for every ciphertext.
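As a back-of-envelope illustration of why multi-recipient amortization helps, consider sharing the large ciphertext component across recipients. This is not the paper's construction (which additionally achieves security against malicious keys and reports a larger saving); the byte counts are the standard ML-KEM-768 sizes and the packing is a hypothetical assumption.

```python
# Assumed ML-KEM-768 sizes in bytes: a full ciphertext is 1088 bytes,
# split as 960 bytes for the vector component and 128 bytes for the
# per-recipient component (figures from the ML-KEM parameter sets).
CT_FULL, CT_SHARED, CT_PER_RECIPIENT = 1088, 960, 128

def naive_bytes(n_recipients):
    """Naive approach: one independent KEM ciphertext per recipient."""
    return n_recipients * CT_FULL

def mm_bytes(n_recipients):
    """Hypothetical multi-recipient packing: send the vector component once,
    plus one small per-recipient component each."""
    return CT_SHARED + n_recipients * CT_PER_RECIPIENT

# Roughly an 8x saving at 100 recipients for this toy packing.
savings_at_100 = naive_bytes(100) / mm_bytes(100)
```

The toy packing only illustrates the amortization mechanism; it is exactly this kind of component-sharing that the attacks above target when recipient keys are maliciously crafted.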
Yanqi Gu, Stanislaw Jarecki, Phillip Nazarian, Apurva Rai
ePrint Report
Message authentication (MA) in the Short Authenticated String (SAS) model, defined by Vaudenay, allows for authenticating arbitrary messages sent over an insecure channel as long as the sender can also transmit a short authenticated message to the receiver, e.g. d = 20 bits. The flagship application of SAS-MA is Authenticated Key Exchange (AKE) in the SAS model (SAS-AKE), which allows parties communicating over an insecure network to establish a secure channel without a prior source of trust except the ability to exchange d-bit authenticated strings. SAS-AKE is applicable, e.g., to device pairing, i.e. creating secure channels between devices capable of displaying d-bit values, e.g. encoded as decimal strings, verified by a human operator, or to securing messaging applications like Signal or WhatsApp, where such short values can be read off by participants who trust each other's voices. A line of works showed lightweight SAS-MA schemes, using only symmetric-key crypto and 3 communication flows, which is optimal. This was extended to group SAS-(M)MA, for (mutual) message authentication among any number of parties, using two simultaneous flows. We show a new two-simultaneous-flows SAS-(M)MA protocol, based on Verifiable Random Functions (VRF), with the novel property that the first flow, which consists of exchanging VRF public keys, can be re-used in multiple SAS-MA instances. Moreover, instantiated with ECVRF, these keys have the same form $vk = g^{sk}$ as Diffie-Hellman keys exchanged in DH-based (A)KE protocols like X3DH. We show that X3DH keys can be re-used in our SAS-MA, implying a SAS-AKE which adds a minimal overhead of a single flow to X3DH. Crucially, while X3DH is secure only if participants' public keys are certified by a shared source of trust, e.g. a Public Key Infrastructure (PKI) or a trusted Key Distribution Center (KDC) run by Signal or WhatsApp, if X3DH is amended with our SAS-AKE then the established channel is secure even if the PKI or KDC is compromised, assuming trust in user-assisted authentication of short d-bit strings.
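For intuition, here is a minimal commitment-based SAS-MA in the classic 3-flow style. This is not the VRF-based protocol of the paper: the HMAC-based commitment, nonce lengths, and message are illustrative assumptions for the sketch.

```python
import hashlib, hmac, secrets

D_BITS = 20  # length d of the short authenticated string

def commit(m, r_a):
    """Hash-based commitment to (m, R_A); the key is revealed only when opening."""
    key = secrets.token_bytes(32)
    return hmac.new(key, m + r_a, hashlib.sha256).digest(), key

def open_ok(c, key, m, r_a):
    """Check that (key, m, R_A) opens the earlier commitment c."""
    return hmac.compare_digest(c, hmac.new(key, m + r_a, hashlib.sha256).digest())

def sas(r_a, r_b):
    """Derive the d-bit short authenticated string from both random values."""
    x = int.from_bytes(bytes(a ^ b for a, b in zip(r_a, r_b)), "big")
    return x & ((1 << D_BITS) - 1)

# Flow 1 (insecure channel): Alice sends m plus a commitment to (m, R_A).
m, r_a = b"session transcript", secrets.token_bytes(16)
c, key = commit(m, r_a)
# Flow 2 (insecure channel): Bob replies with a fresh R_B.
r_b = secrets.token_bytes(16)
# Flow 3 (insecure channel): Alice opens the commitment; Bob verifies it.
assert open_ok(c, key, m, r_a)
# Out of band: both sides display sas(r_a, r_b); a human checks they match.
```

Because Alice is committed to R_A before seeing R_B, a man-in-the-middle cannot steer the d-bit string and succeeds with probability about 2^-d per attempt.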
Zesheng Li, Dongliang Cai, Yimeng Tian, Yihang Du, Xinxuan Zhang, Yi Deng
ePrint Report
Succinct Non-interactive Arguments of Knowledge (SNARKs) have gained widespread application due to their succinct proof size and efficient verification. However, they face significant scalability limitations in proof generation for large-scale circuits. To address this challenge, distributing the prover's computation across multiple nodes has emerged as a promising solution. Existing distributed SNARK constructions rely on distributed polynomial commitments, requiring each prover to perform computationally intensive group operations during the polynomial commitment opening phase.

In this paper, we propose a novel distributed SNARK system constructed by compiling a distributed PIOP with an additively homomorphic polynomial commitment, rather than a distributed polynomial commitment. The core technical component is distributed SumFold, which folds multiple sum-check instances into one. After the folding process, only one prover is required to perform polynomial commitment openings. This facilitates compilation with SamaritanPCS, a recently proposed additively homomorphic multilinear polynomial commitment scheme. The resulting SNARK system is specifically optimized for data-parallel circuits. Compared to prior HyperPlonk-based distributed proof systems (e.g., HyperPianist and Cirrus), our construction achieves improvements in both proof size and prover time. We implement our protocol and conduct a comprehensive comparison with HyperPianist on 8 machines. Our system achieves shorter proofs and a 4.1-4.9x speedup in prover time, while maintaining comparable verification efficiency.
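SumFold operates on sum-check instances; a minimal single-instance sum-check for a multilinear polynomial, given by its evaluation table on the Boolean hypercube, can be sketched as follows (honest prover and verifier merged into one routine for brevity; the toy field and seeded randomness are assumptions of the sketch).

```python
import random

P = 2**61 - 1  # toy prime field

def sumcheck_prove_verify(table, seed=0):
    """Interactive sum-check for a multilinear polynomial given by its
    evaluation table on {0,1}^n (table length 2^n), with verifier
    randomness drawn from `seed`. Returns True iff the honest prover
    convinces the verifier that the claimed sum is correct."""
    rng = random.Random(seed)
    claim = sum(table) % P
    while len(table) > 1:
        half = len(table) // 2
        lo, hi = table[:half], table[half:]
        g0 = sum(lo) % P                  # g_i(0): current variable fixed to 0
        g1 = sum(hi) % P                  # g_i(1): current variable fixed to 1
        if (g0 + g1) % P != claim:        # verifier's per-round check
            return False
        r = rng.randrange(P)              # verifier's challenge
        claim = (g0 + r * (g1 - g0)) % P  # g_i(r); degree 1 per variable
        table = [(a + r * (b - a)) % P for a, b in zip(lo, hi)]
    return table[0] % P == claim          # final evaluation check
```

Each round replaces one Boolean variable by a random field element, shrinking the table by half; folding several such runs into one, as SumFold does, is what lets a single prover handle the final commitment opening.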

12 September 2025

ExeQuantum, Docklands, Melbourne (Remote-friendly for the right candidate)
Job Posting

ExeQuantum is a Melbourne-based company pioneering post-quantum cryptography (PQC) and sovereign-grade secure systems. We are working with critical industries and governments to deliver solutions that are sovereign, transparent, agile, and compliant. Our projects range from PQC-as-a-Service APIs to secure integrations in finance, healthcare, and national infrastructure.

We are looking for a Software Engineer to join our engineering team. This role reports directly to the CTO and will involve building prototypes into production-ready solutions across cryptography, email security, and payment infrastructure. This is not a generic coding role. You will be working on systems where discipline, confidentiality, and creativity matter as much as technical skill.

Responsibilities
  • Design, develop, and maintain secure software components (Python, Node.js, C/C++/Rust depending on project scope).
  • Integrate PQC algorithms (ML-KEM, ML-DSA, HQC, FN-DSA, etc.) into real-world applications.
  • Contribute to internal tools, SDKs, APIs, and add-ins (e.g., Outlook, payment gateways).
  • Collaborate with the CTO on system design and architecture.
  • Follow strict security and confidentiality practices.
  • Participate in code reviews, testing, and documentation to ensure auditability and compliance.
What We Are Looking For
  • Open-mindedness and willingness to study cutting-edge technologies. Demonstrated ability to think outside the box and avoid “impossible” as a default answer.
  • 3+ years of professional software development experience (startup or high-assurance sector preferred).
  • Strong skills in at least one of: Python, Node.js/TypeScript, C/C++/Rust.
  • Familiarity with cryptographic libraries, secure coding practices, or networking protocols is a plus.
  • Comfort working with prototypes, debugging, and delivering solutions in ambiguous/problem-solving contexts.
  • High standard of confidentiality and discipline in handling IP, code, and client data.

Closing date for applications:

Contact: Send your CV, links of your code repositories (GitHub, GitLab, etc.), and a short note about why you want to work on PQC and secure systems with ExeQuantum to raymond@exequantum.com.

More information: https://www.linkedin.com/hiring/jobs/4298309236/detail/

Monash University, Melbourne, Australia
Job Posting
Monash cybersecurity group has an opening for a post-doc position. The topics of interest are Lattice-Based Cryptography and/or Privacy Enhancing Technologies (such as advanced forms of cryptographic tools and zero-knowledge proofs). We provide
  1. a highly competitive salary on par with lecturer (assistant professor) salaries in Australia,
  2. opportunities to collaborate with leading academic and industry experts in the related areas,
  3. opportunities to participate in international grant-funded projects,
  4. collaborative and friendly research environment,
  5. an opportunity to live/study in one of the most liveable and safest cities in the world.
The position will be filled as soon as a suitable candidate is found.

Requirements. Significant research experience in Lattice-Based Cryptography and/or Privacy-Enhancing Technologies is required. A strong mathematical background is highly desired. Some knowledge/experience in coding (for example, Python, C/C++, SageMath) is a plus. Candidates must have completed (or be about to complete within the next 8 months) a PhD degree in a relevant field.

How to apply. Please first refer to mfesgin.github.io/supervision/ for more information about our team. Then, please fill out the following form (also clickable from the advertisement title): https://docs.google.com/forms/d/e/1FAIpQLSeU6O65yJQW3rrcAi4dzatBYPyWU7Y5otLPReHFPuQf8dtggw/viewform

Closing date for applications:

Contact: Muhammed Esgin

More information: https://docs.google.com/forms/d/e/1FAIpQLSeU6O65yJQW3rrcAi4dzatBYPyWU7Y5otLPReHFPuQf8dtggw/viewform

Yuhao Zheng, Jianming Lin, Chang-an Zhao
ePrint Report
Bilinear pairings have emerged as a fundamental tool in public-key cryptography, enabling advanced protocols such as Identity-Based Encryption (IBE), short signatures, and zero-knowledge proofs. This paper focuses on optimizing pairing computations on curves with embedding degree 2, addressing both theoretical foundations and practical implementations. We propose an optimized double-and-add ladder algorithm that leverages the technique of y-coordinate recovery, achieving superior performance for the Tate pairing on supersingular curves and the Omega pairing on non-supersingular curves. Our method is implemented based on the RELIC cryptographic library, demonstrating significant efficiency improvements over Miller’s algorithm. Specifically, it reduces the number of Fp-multiplications (resp. CPU clock cycles) by 17.53% (resp. 13.58%) for the reduced Tate pairing on SS-1536 and by 12.37% (resp. 8.39%) for the Omega pairing on NSS-1536. This work establishes the first comprehensive implementation framework for cubical-based pairing computations on curves with embedding degree 2, providing quantified optimizations for practical cryptographic deployment.
Maxence Jauberty, Pierrick Méaux
ePrint Report
We provide a complete characterization of the possible cardinalities of Walsh supports of Boolean functions. Our approach begins with a detailed study of Siegenthaler’s construction and its properties, which allow us to derive relations between admissible support sizes in successive numbers of variables. We then introduce new notions such as Walsh space, reduction, and equivalence on supports, which form the structural framework of our analysis. For $n=6$, we perform an experimental enumeration of affine-equivalence classes, and we analyze the geometric structure of supports of small cardinalities, proving uniqueness for sizes $10$ and $13$ and obtaining partial results for size $16$. By combining these findings with a sieving method, we rule out twelve impossible cardinalities and establish constructive methods that transform a support of size $s$ into one of size $4s+r$ for different values of $r$, sufficient to obtain every admissible cardinality for $n \geq 7$. As a consequence, we provide a complete characterization and resolve several open problems.
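The central object here, the Walsh support, is directly computable: it is the set of nonzero coefficients of the Walsh-Hadamard transform of $(-1)^f$. A small sketch of this standard computation (not the paper's enumeration code):

```python
def walsh_spectrum(truth_table):
    """Fast Walsh-Hadamard transform of (-1)^f for a Boolean function f
    given as a truth table of length 2^n."""
    w = [(-1) ** bit for bit in truth_table]
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                # butterfly: (x, y) -> (x + y, x - y)
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

def walsh_support_size(truth_table):
    """Cardinality of the Walsh support, the quantity characterized in the paper."""
    return sum(1 for c in walsh_spectrum(truth_table) if c != 0)

# x0 AND x1 is bent for n = 2: its Walsh support is the full space, size 4.
and_tt = [x0 & x1 for x1 in (0, 1) for x0 in (0, 1)]
```

Affine functions have support size 1 and bent functions have full support $2^n$; the paper's question is which sizes in between can occur.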
Ariel Futoransky, Ramses Fernandez, Emilio Garcia, Gabriel Larotonda, Sergio Demian Lerner
ePrint Report
We present WISCH, a commit-reveal protocol that combines compact aggregate signatures with hash-based commitments to enable selective disclosure of correlated data in multiparty computation. The protocol separates an on-chain verification core from off-chain preparation, so that verification cost depends only on the number of openings, not on the size of the underlying message space. This yields asymptotic efficiency: on-chain cost grows linearly in the number of revealed items and is independent of the ambient domain, while the per-byte overhead decreases with the message granularity. Security is established via a simulation-based proof in a UC framework with an ideal ledger functionality, in the algebraic group and global random-oracle models, under standard assumptions for discrete-log-based signatures and hash-based commitments. Thus, WISCH provides selectively verifiable revelation with succinct on-chain checks and provable security guarantees.
Won Kim, Jeonghwan Lee, Hyeonhak Kim, Changmin Lee
ePrint Report
SQIsign is an isogeny-based, post-quantum signature scheme over supersingular elliptic curves that represents isogenies via objects of a quaternion algebra, enabling very compact signatures and efficient computations. However, the SQIsign implementation relies on the GMP library, which allocates integers of dynamic size, hindering portability and complicating memory control. Furthermore, no consolidated worst-case bound on the integer coefficients representing quaternion algebra elements exists, leaving the required static precision unclear for a GMP-free implementation.

In this work, we audit every routine in the SQIsign Round-2 specification that manipulates quaternion elements and prove a uniform worst-case bound on coefficient growth. Complementing the theoretical bounds, we repeatedly run the key generation and signing procedures of the Round-2 SQIsign reference code implemented with the GMP library, record peak operand sizes, and derive experimental bounds. Based on these bounds, we choose a fixed-size precision representation and implement SQIsign in C without dynamic allocation, removing the GMP dependency.
Jian Guo, Shichang Wang, Tianyu Zhang
ePrint Report
ChiLow is a family of tweakable block ciphers proposed at EUROCRYPT 2025. In this paper, we present a cryptanalysis of ChiLow based on the Meet-in-the-Middle (MITM) attack framework. For ChiLow-32, we first present an MITM attack on full ChiLow-32 that exploits the cipher's diffusion properties, achieving a time complexity of $2^{122.6}$ using 97 known plaintext-ciphertext (P-C) pairs. Building on this, we further introduce a refinement based on the linearization of the $\chi$ function. By using more known pairs, we significantly improve the attack, reducing the time complexity to $2^{108.6}$ with 196 known P-C pairs. For ChiLow-40, we mount attacks on reduced-round versions: a 7-round attack with time complexity $2^{127.4}$ requiring 164 known P-C pairs, and a 6-round attack with time complexity $2^{88.9}$ requiring 162 known P-C pairs.
Behzad Abdolmaleki, Ruben Baecker, Paul Gerhart, Mike Graf, Mojtaba Khalili, Daniel Rausch, Dominique Schröder
ePrint Report
Password-Hardened Encryption (PHE) protects against offline brute-force attacks by involving an external ratelimiter that enforces rate-limited decryption without learning passwords or keys. Threshold Password-Hardened Encryption (TPHE), introduced by Brost et al. (CCS’20), distributes this trust among multiple ratelimiters. Despite its promise, the security foundations of TPHE remain unclear. We make three contributions: (1) We uncover a flaw in the proof of Brost et al.’s TPHE scheme, which invalidates its claimed security and leaves the guarantees of existing constructions uncertain; (2) We provide the first universal composability (UC) formalization of PHE and TPHE, unifying previous fragmented models and supporting key rotation, an essential feature for long-term security and related primitives such as updatable encryption; (3) We present the first provably secure TPHE scheme, which is both round-optimal and UC-secure, thus composable in real-world settings; and we implement and evaluate our protocol, demonstrating practical efficiency that outperforms prior work in realistic WAN scenarios.
Mingshu Cong, Sherman S. M. Chow, Siu Ming Yiu, Tsz Hon Yuen
ePrint Report
Sublinear proof sizes have recently become feasible in verifiable machine learning (VML), yet no approach achieves the trio of strictly linear prover time, logarithmic proof size and verification time, and architecture privacy. Hurdles persist because we lack a succinct commitment to the full neural network and a framework for heterogeneous models, leaving verification dependent on architecture knowledge. Existing limits motivate our new approach: a unified proof-composition framework that casts VML as the design of zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) for matrix computations. Representing neural networks with linear and non-linear layers as a directed acyclic graph of atomic matrix operations enables topology-aware composition without revealing the graph. Modeled this way, we split proving into a reduction layer and a compression layer that attests to the reduction with a proof of proof. At the reduction layer, inspired by reduction of knowledge (Crypto '23), root-node proofs are reduced to leaf-node proofs under an interface standardized for heterogeneous linear and non-linear operations. Next, a recursive zkSNARK compresses the transcript into a single proof while preserving architecture privacy.

Complexity-wise, for a matrix expression with $M$ atomic operations on $n \times n$ matrices, the prover runs in $O(M n^2)$ time while proof size and verification time are $O(\log(M n))$, outperforming known VML systems. Honed for this framework, we formalize relations directly in matrices or vectors---a more intuitive form for VML than traditional polynomials. Our LiteBullet proof, an inner-product proof based on folding and its connection to sumcheck (Crypto '21), yields a polynomial-free alternative. With these ingredients, we reconcile heterogeneity, zero-knowledge, succinctness, and architecture privacy in a single VML system.
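The folding idea behind an inner-product proof can be sketched without commitments or zero-knowledge: each round halves the vectors and updates the claimed inner product using two cross terms. This is a simplified, non-private rendition of the Bulletproofs-style reduction; the actual LiteBullet protocol additionally commits to the vectors and derives challenges via Fiat-Shamir.

```python
P = 2**61 - 1  # toy prime field

def fold_round(a, b, c, x):
    """One folding round: halve the vectors; the claimed inner product
    c = <a, b> becomes c' = c + x^{-1}*L + x*R with cross terms L, R."""
    h = len(a) // 2
    a_lo, a_hi = a[:h], a[h:]
    b_lo, b_hi = b[:h], b[h:]
    L = sum(u * v for u, v in zip(a_lo, b_hi)) % P   # cross term <a_lo, b_hi>
    R = sum(u * v for u, v in zip(a_hi, b_lo)) % P   # cross term <a_hi, b_lo>
    x_inv = pow(x, P - 2, P)                          # field inverse of challenge
    a2 = [(u + x * v) % P for u, v in zip(a_lo, a_hi)]
    b2 = [(u + x_inv * v) % P for u, v in zip(b_lo, b_hi)]
    c2 = (c + x_inv * L + x * R) % P
    return a2, b2, c2

def fold_verify(a, b, challenges):
    """Fold an inner-product claim down to length 1 and check it directly.
    Expects len(a) == len(b) == 2^len(challenges)."""
    c = sum(u * v for u, v in zip(a, b)) % P
    for x in challenges:
        a, b, c = fold_round(a, b, c, x)
    return (a[0] * b[0]) % P == c
```

Each round halves the statement, so the transcript is logarithmic in the vector length, which is the source of the $O(\log(Mn))$ proof size cited above.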
Gustavo Banegas, Andreas Hellenbrand, Matheus Saldanha
ePrint Report
Isogeny-based cryptography has emerged as a promising post-quantum alternative, with CSIDH and its constant-time variants CTIDH and dCTIDH offering efficient group-action protocols. However, CTIDH and dCTIDH rely on dummy operations in differential addition chains (DACs) and in the Matryoshka structure, which can be exploited by fault-injection attacks. In this work, we present the first dummy-free implementation of dCTIDH. Our approach combines two recent ideas: DACsHUND, which enforces equal-length DACs within each batch without padding, and a reformulated Matryoshka structure that removes dummy multiplications and validates all intermediate points. Our analysis shows that small primes such as $3, 5,$ and $7$ severely restrict feasible DACsHUND configurations, motivating new parameter sets that exclude them. We implement dummy-free dCTIDH-2048-194 and dCTIDH-2048-205, achieving group action costs of roughly $357{,}000$-$362{,}000$ $\mathbb{F}_p$-multiplications, with median evaluation times of $1.59$-$1.60$ Gcyc. These results do not surpass dCTIDH, but they outperform CTIDH by roughly $5\%$ while eliminating dummy operations entirely. Compared to dCSIDH, our construction is more than $4\times$ faster. To the best of our knowledge, this is the first efficient implementation of a CSIDH-like protocol that is simultaneously deterministic, constant-time, and fully dummy-free.
Lennart Braun, Geoffroy Couteau, Kelsey Melissaris, Mahshid Riahinia, Elahe Sadeghi
ePrint Report
We introduce a new and efficient pseudorandom correlation function whose security reduces to the sparse LPN assumption in the random oracle model. Our construction is the first to achieve high concrete efficiency while relying on well-established assumptions: previous candidates either required introducing new assumptions or had poor concrete performance. We complement our result with an in-depth analysis of the sparse LPN assumption, providing new insight into how to evaluate the strength of concrete parameter sets.