International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

18 September 2025

Zoushaojie Jiang, An Wang, Yaoling Ding, Annv Liu, Zheng Liu, Jing Yu, Liehuang Zhu
ePrint Report
Deep learning-based profiled side-channel analysis (SCA) targeting cryptographic implementations has attracted significant attention in recent years. Generalizable deep learning mechanisms, such as contrastive learning-based profiled SCA (CL-SCA), can enhance the effectiveness of SCA without reliance on specific network architectures and hyperparameters. This independence enables robust adaptation across diverse attack scenarios. Nonetheless, CL-SCA relies heavily on data augmentation and may mistakenly push apart physical leakage traces that should belong to the same class, which interferes with the extraction of discriminative features crucial for SCA performance. To address these limitations, we propose a profiled SCA method based on supervised contrastive learning, called SupCL-SCA. This method enhances the learning of discriminative features that facilitate key recovery by leveraging supervised information to guide the extraction of similarities in feature space. Compared with state-of-the-art methods, SupCL-SCA not only retains their general applicability and inherent advantages but also eliminates reliance on complex data augmentation and multi-stage training. Additionally, we propose a cosine distance-based Intra-Inter Distance Ratio (IIDR) metric to assess the discriminative capability of models in deep learning-based profiled SCA methods. We evaluate SupCL-SCA on three publicly available datasets covering different implementations and platforms. Experimental results show that SupCL-SCA consistently reduces the number of traces required to recover the key compared to the original methods, demonstrating enhanced attack capability.
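
The exact definition of the IIDR metric is given in the paper; as a minimal sketch, assuming the natural reading of the name (mean intra-class cosine distance divided by mean inter-class cosine distance, so that lower values indicate more discriminative features), it could be computed as follows:

# Minimal sketch of a cosine distance-based Intra-Inter Distance Ratio.
# Assumption: IIDR = mean intra-class cosine distance / mean inter-class
# cosine distance; the paper's exact definition may differ.
import numpy as np

def cosine_distance_matrix(feats):
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return 1.0 - normed @ normed.T

def iidr(feats, labels):
    d = cosine_distance_matrix(np.asarray(feats, dtype=float))
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    intra = d[same & off_diag].mean()   # distances within a class
    inter = d[~same].mean()             # distances across classes
    return intra / inter
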
Yingjie Lyu, Zengpeng Li, Hong-Sheng Zhou, Xudong Deng
ePrint Report
We propose the first two-round multi-party signing protocol for the Elliptic Curve Digital Signature Algorithm (ECDSA) in the threshold-optimal setting, reducing the number of rounds by one compared to the state of the art (Doerner et al., S&P '24). We also resolve the security issue of presigning pointed out by Groth and Shoup (Eurocrypt '22): for the first time among threshold-optimal schemes, we evade a security loss that grows with the number of pre-released, unused presignatures.

Our construction builds on Non-Interactive Multiplication (NIM), a notion proposed by Boyle et al. (PKC '25), which allows parties to evaluate multiplications on secret-shared values in one round. In particular, we use the construction of Abram et al. (Eurocrypt '24) instantiated with class groups. The setup is minimal and transparent, consisting of only two class-group generators. The signing protocol is efficient in bandwidth, with a message size of 1.9 KiB at 128-bit security, and has competitive computational performance.
Shengnan Zhao, Junyu Lu, Yuchen Huang, Dongdong Miao, Chuan Zhao
ePrint Report
Private information retrieval (PIR) enables a client to fetch a record from databases held by untrusted servers while hiding the access pattern (index or keyword) from the servers. In practical settings, however, data objects (e.g., articles, videos) are commonly tagged with multiple identifiers, which can be structured as {index, value, keywords}. Current PIR schemes are constrained to retrieving records based on a single index or a single keyword, and cannot efficiently handle conjunctive queries requiring multiple keywords. To address this limitation, we propose Mk-PIR, a PIR scheme that enables a client to retrieve records that match all specified keywords simultaneously. We propose two distinct constructions: $\textsf{MkPIR}^\mathbf{I}$, an inverted-index-based solution built upon our Oblivious Set Intersection (OSI) primitive, which enables private intersection computation on the server side without revealing client queries; and $\textsf{MkPIR}^\mathbf{F}$, a forward-index-based solution utilizing our Private Subset Determination (PSD) primitive, which privately outputs matching indices by verifying subset relationships. The two constructions adapt to diverse database configurations in which keywords are not required to be the primary key. Experimental results show that the average time to determine whether an index satisfies multiple keywords ranges from 0.5 to 332 ms, demonstrating that Mk-PIR achieves flexible query capabilities with modest performance overhead.
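
As a plaintext reference for the conjunctive matching semantics that $\textsf{MkPIR}^\mathbf{F}$'s PSD primitive realizes privately (this sketch shows only the functionality, not the private protocol; the record layout follows the {index, value, keywords} structure above, and all names are illustrative):

# Plaintext reference of the conjunctive matching semantics: an index
# matches iff the queried keyword set is a subset of the record's keywords.
# This is the functionality only, not the private PSD protocol.
db = {
    0: {"value": "article-a", "keywords": {"crypto", "pir", "lattice"}},
    1: {"value": "video-b",   "keywords": {"crypto", "mpc"}},
    2: {"value": "article-c", "keywords": {"pir", "crypto"}},
}

def matching_indices(query_keywords):
    return [i for i, rec in db.items() if set(query_keywords) <= rec["keywords"]]

print(matching_indices({"crypto", "pir"}))  # -> [0, 2]
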
Léo Ducas, Johanna Loyer
ePrint Report
Most concrete analyses of lattice reduction focus on the BKZ algorithm or its variants relying on Shortest Vector Problem (SVP) oracles. However, a variant by Li and Nguyen (Cambridge U. Press 2014) exploits more powerful oracles, namely for the Densest rank-$k$ Sublattice Problem ($DSP_k$) for $k \geq 2$.

We first observe that, for random lattices, $DSP_2$ --and possibly even $DSP_3$-- seems heuristically not much more expensive than solving SVP with the current best algorithm. We indeed argue that a densest sublattice can be found among pairs or triples of vectors produced by lattice sieving, at a negligible additional cost. This gives hope that this approach could be competitive.
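
A minimal sketch of this observation, assuming the sieve database is given as a list of short vectors: score every pair by its rank-2 volume $\sqrt{\det(\mathrm{Gram})}$ and keep the minimum (brute force over pairs, for illustration only):

# Search for a dense rank-2 sublattice among pairs of sieved short vectors,
# scoring each pair by the volume of the lattice it generates. Brute force
# for illustration; `sieve_output` stands in for the sieve database.
import itertools
import numpy as np

def rank2_volume(u, v):
    gram = np.array([[u @ u, u @ v], [u @ v, v @ v]], dtype=float)
    return np.sqrt(max(np.linalg.det(gram), 0.0))

def densest_pair(sieve_output):
    best = None
    for u, v in itertools.combinations(sieve_output, 2):
        vol = rank2_volume(np.asarray(u, float), np.asarray(v, float))
        if vol > 1e-9 and (best is None or vol < best[0]):  # skip parallel pairs
            best = (vol, u, v)
    return best
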

We therefore proceed to a heuristic and average-case analysis of the slope of $DSP_k$-BKZ output, inspired by a theorem of Kim (Journal of Number Theory 2022) which suggests a prediction for the volume of the densest rank-$k$ sublattice of a random lattice.

Under this heuristic, the slope for $k=2$ or $3$ appears only tenuously better than that of BKZ, making this approach less effective than regular BKZ using the "Dimensions for Free" of Ducas (EUROCRYPT 2018). Furthermore, our experiments show that this heuristic is overly optimistic.

Despite the hope raised by our first observation, we therefore conclude that this approach appears to be a No-Go for cryptanalysis of generic lattice problems.
Dimitri Koshelev
ePrint Report
This short note is devoted to a significant enhancement of [8], satisfactorily resolving the problem formulated at the end of that article. More precisely, a new laconic, secure, and efficient (de)compression method is provided for points of any elliptic curve over any highly $2$-adic finite field of large characteristic. Such fields are ubiquitous in modern elliptic curve cryptography, yet they severely slow down the conventional $x$-coordinate (de)compression technique. In comparison with the main method from the cited work, the new one requires neither complicated mathematical formulas nor conditions on the curve. The current work is thereby universal and much more implementation-friendly, which justifies its existence despite the absence of interesting mathematics behind it.

8. Koshelev, D.: Point (de)compression for elliptic curves over highly $2$-adic finite fields. Advances in Mathematics of Communications 19(5), 1539–1559 (2025), https://doi.org/10.3934/amc.2025008
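
For context, a sketch of the conventional $x$-coordinate technique that the note replaces: store $x$ plus one parity bit of $y$, and recompute $y = \sqrt{x^3 + ax + b} \bmod p$ on decompression. The modular square root (Tonelli-Shanks, hidden inside sympy's sqrt_mod below) is precisely the step that degrades when $2^{\nu} \mid p - 1$ for large $\nu$; curve parameters here are illustrative.

# Conventional x-coordinate (de)compression baseline, for context only.
# The sqrt_mod call runs Tonelli-Shanks, whose cost grows with the 2-adicity
# nu of p - 1; this is the slowdown the note's new method avoids.
from sympy.ntheory import sqrt_mod

def compress(x, y, p):
    return x, y & 1  # x-coordinate plus one parity bit of y

def decompress(x, ybit, a, b, p):
    y = sqrt_mod(x**3 + a * x + b, p)  # the expensive step on 2-adic fields
    if y & 1 != ybit:
        y = p - y
    return x, y
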
Dilara Toprakhisar, Svetla Nikova, Ventzislav Nikov
ePrint Report
Physical attacks pose serious challenges to the secure implementation of cryptographic algorithms. While side-channel analysis (SCA) has received significant attention, leading to well-established countermeasures, fault attacks and especially their combination with SCA (i.e., combined attacks) remain less researched. Addressing such combined attacks often requires a careful integration of masking and redundancy techniques to resist the reciprocal effects of faults and probes. Recent research on combined security has gained momentum, with most approaches relying on composable security notions involving error correction, typically applied after each nonlinear operation. While effective, this approach introduces an area and performance overhead, along with additional security challenges posed by the correction circuits themselves.

In this work, we take a different direction, following the concept of stability introduced in StaTI (CHES 2024), which ensures fault propagation to protect against ineffective faults. We extend this concept to combined security by proposing a new composable security notion, combined stability, which integrates an extended stability notion, diffused stability, with arbitrarily composable glitch-extended probing security notions. Notably, this framework requires only a single error detection at the end of the computation, avoiding costly intermediate error checks and corrections. To demonstrate practicality, we describe a combined secure AES S-box hardware implementation. Our results show that this approach, achieving combined security with competitive implementation costs, offers a promising alternative to error-correction-based schemes.
Valerio Cini, Russell W. F. Lai, Ivy K. Y. Woo
ePrint Report
Threshold public-key encryption (tPKE) enables any subset of $t$ out of $K$ parties to decrypt non-interactively, while any ciphertext remains secure as long as fewer than $t$ decryption shares are known. Despite recent progress, existing lattice-based tPKEs face at least one of the following drawbacks: (1) large decryption share sizes -- polynomial in $K$, and in some cases even exponential in $t$; (2) security proofs only in relaxed models where the adversary is not allowed to see decryption shares of challenge ciphertexts; and (3) a lack of concrete efficiency, in particular due to the super-polynomial modulus required for noise flooding.

We present $\mathsf{Pilvi}$, a new thresholdised variant of Regev's public-key encryption scheme, which achieves both small decryption shares and a strong form of simulation-based security under the Learning with Errors (LWE) assumption. Our construction has decryption share size $t \cdot \log K \cdot \mathsf{poly}(\lambda)$ and allows the use of a polynomial-size modulus, assuming an a priori bound $Q$ on the number of queries. It remains secure even when an adaptive adversary requests partial decryptions of both challenge and non-challenge ciphertexts, as long as, for each ciphertext, the number of corrupt parties plus the number of shares obtained is less than $t$. We provide concrete parameter suggestions for 128-bit security across a wide range of $(t,K,Q)$, including cases where $t \approx K/2$, for $K \leq 32$ users and $Q \leq 2^{60}$ partial decryption queries. The ciphertext size ranges from $14$ to $58$ KB and the partial decryption share size from $1$ to $4$ KB.

Along the way, we abstract out a general-purpose tool called the threshold-LWE assumption, which we prove follows from LWE. The threshold-LWE assumption captures the core steps in security proofs of schemes that Shamir-secret-share the LWE secret with carefully chosen evaluation points, whose algebraic structure is what enables the efficiency of our tPKE scheme. As an additional application, we also show how to construct distributed pseudorandom functions (dPRFs) from the threshold-LWE assumption.
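
A toy sketch of the threshold-decryption flow suggested by the above, assuming a coordinate-wise Shamir sharing of the Regev secret: each party publishes $\langle a, s_i \rangle$ for its share vector $s_i$, and any $t$ shares combine via Lagrange interpolation. Noise flooding and the scheme's careful evaluation-point choices are omitted, so this is functionality only:

# Toy threshold Regev decryption with a Shamir-shared LWE secret.
# Noise is omitted for clarity; the real scheme adds noise to each share
# and chooses evaluation points far more carefully. Tiny parameters.
import random

Q, N, T = 2**16 + 1, 8, 3  # prime modulus, LWE dimension, threshold

def share(secret_coord, t, parties):
    coeffs = [secret_coord] + [random.randrange(Q) for _ in range(t - 1)]
    return {x: sum(c * pow(x, j, Q) for j, c in enumerate(coeffs)) % Q
            for x in range(1, parties + 1)}

def lagrange_at_zero(xs, x):
    num = den = 1
    for xj in xs:
        if xj != x:
            num = num * (-xj) % Q
            den = den * (x - xj) % Q
    return num * pow(den, Q - 2, Q) % Q

s = [random.randrange(Q) for _ in range(N)]
shares = [share(si, T, 5) for si in s]   # coordinate-wise Shamir shares
a = [random.randrange(Q) for _ in range(N)]
b = (sum(ai * si for ai, si in zip(a, s)) + (Q // 2) * 1) % Q  # encrypt bit 1

quorum = [1, 2, 4]                        # any T parties suffice
partial = {x: sum(a[i] * shares[i][x] for i in range(N)) % Q for x in quorum}
inner = sum(lagrange_at_zero(quorum, x) * partial[x] for x in quorum) % Q
bit = round(((b - inner) % Q) / (Q / 2)) % 2  # -> 1
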
Xavier Bonnetain, Johanna Loyer, André Schrottenloher, Yixin Shen
ePrint Report
Searching for collisions in random functions is a fundamental computational problem, with many applications in symmetric and asymmetric cryptanalysis. When one searches for a single collision, the known quantum algorithms match the query lower bound. This is not the case for the problem of finding multiple collisions, despite its regular appearance as a sub-component in sieving-type algorithms.

At EUROCRYPT 2019, Liu and Zhandry gave a query lower bound $\Omega(2^{m/3 + 2k/3})$ for finding $2^k$ collisions in a random function with $m$-bit output. At EUROCRYPT 2023, Bonnetain et al. gave a quantum algorithm matching this bound for a large range of $m$ and $k$, but not all admissible values. Like many previous collision-finding algorithms, theirs is based on the MNRS quantum walk framework, but it chains the walks by reusing the state after outputting a collision.
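
As a sanity check on the endpoints of this bound: at $k = 0$ (a single collision) it collapses to $\Omega(2^{m/3})$, the well-known BHT complexity, while for $2^k$ collisions the implied cost per collision is $2^{m/3 + 2k/3 - k} = 2^{(m-k)/3}$, so each additional collision becomes cheaper as $k$ grows.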

In this paper, we give a new algorithm that tackles the remaining non-optimal range, closing the problem. Our algorithm is tight (up to a polynomial factor) in queries, and also in time under a quantum RAM assumption. The idea is to extend the chained walk to a regime in which several collisions are returned at each step, and the "walks" themselves only perform a single diffusion layer.
Frank Denis
ePrint Report
This paper introduces efficient, practical methods for encrypting IPv4/IPv6 addresses while preserving utility in logs, telemetry, and third-party data exchange. We focus on three practical goals: (i) format-compatible encryption that keeps outputs in the IPv6 address space and handles IPv4 inputs canonically; (ii) prefix-preserving encryption that retains network structure for analytics while hiding host identity; and (iii) non-deterministic encryption that resists correlation while remaining compact and invertible. We give deterministic, prefix-preserving, and two non-deterministic variants, with security models and arguments under standard assumptions, plus explicit usage bounds and operating limits. We also relate each variant to known efficiency lower bounds (ciphertext expansion and primitive calls) and state our claims within deployable parameter ranges.
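
The paper's constructions are not reproduced here; as background for goal (ii), the classic Crypto-PAn-style approach to prefix preservation makes each output bit depend, via a PRF, only on the preceding input prefix, so addresses sharing a $k$-bit prefix encrypt to outputs sharing a $k$-bit prefix:

# Background sketch of classic prefix-preserving address encryption in the
# Crypto-PAn style (not this paper's construction): output bit i is the
# input bit XOR a PRF bit of the preceding i-bit prefix, so ciphertexts of
# addresses sharing a k-bit prefix also share a k-bit prefix.
import hmac, hashlib, ipaddress

KEY = b"demo-key-not-for-production"

def prf_bit(prefix_bits: str) -> int:
    digest = hmac.new(KEY, prefix_bits.encode(), hashlib.sha256).digest()
    return digest[0] & 1

def encrypt_ipv4(addr: str) -> str:
    bits = format(int(ipaddress.IPv4Address(addr)), "032b")
    out = "".join(str(int(b) ^ prf_bit(bits[:i])) for i, b in enumerate(bits))
    return str(ipaddress.IPv4Address(int(out, 2)))

print(encrypt_ipv4("192.168.1.10"))
print(encrypt_ipv4("192.168.1.77"))  # shares the /24 prefix of the line above
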
Yuange Li, Xiong Fan
ePrint Report
Zero-knowledge proofs of training (zkPoT) enable a prover to certify that a model was trained on a committed dataset under a prescribed algorithm without revealing the model or data. Proving recurrent neural network (RNN) training is challenging due to hidden-state recurrence and cross-step weight sharing, which require proofs to enforce recurrence, gradients, and nonlinear activations across time.

We present SUMMER (SUMcheck and MERkle tree), a recursive zkPoT for scalable RNNs. SUMMER generates sumcheck-based proofs that backpropagation through time (BPTT) was computed correctly over a quantized finite field, while nonlinearities such as $\tanh$ and softmax are validated by lookup arguments. Per-step commitments and proofs are folded with Merkle trees, yielding a final commitment and a succinct proof whose size and verification time are independent of the number of iterations.
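
For orientation, the sumcheck relation these proofs are built on (the generic protocol, not SUMMER's specific arithmetization): to certify a claimed sum $H = \sum_{x \in \{0,1\}^n} g(x)$ of an $n$-variate polynomial $g$, the prover sends in round $i$ a univariate polynomial $g_i$, and the verifier checks $$g_1(0) + g_1(1) = H, \qquad g_i(0) + g_i(1) = g_{i-1}(r_{i-1}) \quad (2 \le i \le n),$$ finishing with a single evaluation of $g$ at the random point $(r_1, \dots, r_n)$.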

SUMMER offers (i) the first end-to-end zkPoT for RNN training, including forward, backward, and parameter updates; (ii) the first use of LogUp for nonlinear operations with a batched interface; and (iii) efficient recursive composition of lookup and sumcheck proofs. On a Mini-Char-RNN with 12M parameters, the prover runs in 70.1 seconds per iteration, $8.5\times$ faster and $11.6\times$ more memory efficient than the IVC baseline, with 165 kilobyte proofs verified in 20 milliseconds.
Easwar Vivek Mangipudi, Panagiotis Chatzigiannis, Konstantinos Chalkias, Aniket Kate, Mohsen Minaei, Mainack Mondal
ePrint Report
In a Web3 (blockchain) setting, account recovery allows users to regain access to their accounts after losing their authentication credentials. Although recovery mechanisms are well-established and extensively analyzed in the context of Web2 systems, Web3 presents distinct challenges. Web3 account access is typically tied to cryptographic key pairs, and private keys are not entrusted to centralized entities. This design improves security, but significantly complicates the recovery process, making it difficult or even impossible for users to regain access after loss of keys. Given the critical role that recovery plays in ensuring long-term feasibility and trust in digital systems, a range of recovery mechanisms has been proposed to accommodate the unique properties of Web3. These mechanisms aim to help users manage key loss without introducing undue friction or risk.

Although there has been an exponential increase in the use of cryptocurrency wallets in the last decade, the popularity and usage of the corresponding recovery mechanisms remain unclear, as does how users perceive these mechanisms and what they expect from them. In this work, our objective is to empirically understand and analyze user perceptions of the various recovery mechanisms. To this end, we conducted a user survey of 331 participants and asked them to rate different mechanisms on usability, security, and availability. The results reveal interesting aspects of user preferences, including their views on sharing keys among different devices and on trusting friends or family. Based on our findings, we provide insights and future directions for the developer and research communities.
Ole Martin Edstrøm, Kristian Gjøsteen, Hans Heum, Sjouke Mauw, Felix Stutz
ePrint Report
Electronic identification (eID) protocols and federated identity management systems play an increasingly important role in our modern society, both on the internet through services from Google and others, and through the eIDAS regulation in Europe. A key feature of eID protocols is that humans are intimately involved in the protocol, often responsible for critical security steps. Traditional security analyses of such protocols typically assume flawless user behaviour, yet widespread real-world adoption makes user mistakes inevitable.

We present a framework for analysing the security of eID protocols that can model users making mistakes. It is suitable for automated analysis with Tamarin and supports fine-grained corruption modelling of protocol actors. We demonstrate the framework's utility by describing and analysing common eID protocols based on passwords, mobile applications and authentication tokens, as well as by systematically evaluating the impact of various combinations of user mistakes on security.
Lucien K. L. Ng, Vladimir Kolesnikov
ePrint Report
Garbled Circuits (GC) are a foundational primitive for secure two-party computation (2PC). Garbled Private Information Retrieval (GPIR) is a GC technique for looking up a public array or database (DB) on a private index unknown to either party. GPIR immediately enables GC evaluation of functions implemented as a publicly known lookup table (LUT).

However, GPIR is costly. It can be realized by a linear scan, by adapting Garbled RAM, by stacking GC branches implementing access to table elements, and, most recently, via the GC lookup table scheme logrow (Heath et al., Eurocrypt 2024). For an $N$-row DB lookup of $m$-bit rows, logrow has computation cost $\approx O(N m \kappa)$ and communication cost $O(m(\log N \cdot \kappa + N))$. This makes logrow practical only for tables up to about $2^{15}$ rows.

We propose Toss, a new efficient GPIR protocol with dramatically reduced bandwidth consumption—an especially scarce resource in MPC—both asymptotically and concretely. Our communication cost is $O\!\left(\sqrt{N}\, m \sqrt{\kappa}\right)$ with a small constant, which is sublinear in both $N$ and the security parameter $\kappa$. Our computation cost is $O\!\left(N m \kappa + \bigl(\sqrt{N/\kappa}\, m + N\bigr) c_\kappa \right)$, where $c_\kappa$ is the cost of a hash evaluation. This computation cost is comparable to, or slightly lower than, that of logrow.
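
As a rough illustration of the asymptotic gap (hidden constants are ignored, so only the growth trend is meaningful, not the paper's measured $31\times$ figure), one can plug the example parameters into the two communication expressions:

# Evaluate the stated asymptotic communication costs at the example
# parameters (N = 2**20 rows, m = 8 bits, kappa = 128). Hidden constants
# are ignored, so only the trend is meaningful, not the exact ratio.
from math import log2, sqrt

N, m, kappa = 2**20, 8, 128
logrow_comm = m * (log2(N) * kappa + N)  # O(m(log N * kappa + N))
toss_comm = sqrt(N) * m * sqrt(kappa)    # O(sqrt(N) * m * sqrt(kappa))
print(f"logrow ~ 2^{log2(logrow_comm):.1f}, Toss ~ 2^{log2(toss_comm):.1f}")
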

In concrete terms, for a $2^{20}$-row LUT of 8-bit items, we achieve more than a $31\times$ reduction in communication compared to logrow. On a laptop over a 100 Mbps channel, throughput increases from approximately $10.6$ lookups/s to $81$ lookups/s, a $>7.5\times$ improvement. On a 10 Mbps channel, Toss achieves more than $28\times$ better throughput. The improvement grows with $N$; for example, for $N=2^{25}$ and $m=32$, the gain exceeds $512\times$.

Toss builds on stacked garbling (SGC) and logrow, incorporating multiple low-level optimizations and requiring a reworking of their internals and interfaces. We emphasize that constructing GPIR directly from SGC incurs logarithmic computational overhead, which decreases throughput in typical “laptop + LAN” testbeds. Our design avoids this pitfall. We implement Toss and report on its performance, demonstrating its substantial communication savings and practical efficiency.
B Pradeep Kumar Reddy, Sameeksha Goyal, Ruchika Meel, Ayantika Chatterjee
ePrint Report
Machine learning (ML) has revolutionized various industries by leveraging predictive models and data-driven insights, often relying on cloud computing for large-scale data processing. However, this dependence introduces challenges such as bandwidth constraints and network latency. Edge computing mitigates these issues by enabling localized processing, reducing reliance on continuous cloud connectivity, and optimizing resource allocation for dynamic workloads. Given the limited computational capacity of sensory nodes in ML systems, edge devices provide an effective solution by offloading processing tasks. A critical challenge in this paradigm, however, is ensuring user privacy while handling sensitive data in both cloud and edge processing. To address this, we propose a Fully Homomorphic Encryption (FHE) enabled framework that performs ML computations directly on encrypted data, eliminating the need for decryption.

The main challenge in designing such a framework is that complex ML implementation steps must be revisited, with suitable optimizations, to match FHE processing requirements. Several standard libraries supply the basic computation blocks on which encrypted ML processing can be built. These libraries vary in their supported operators, computational complexity, and memory demands, which in turn introduces latency and throughput challenges, especially on resource-constrained edge nodes. For example, the general-purpose HE scheme CKKS (Cheon-Kim-Kim-Song), with its support for packing and approximate homomorphic operations, is widely considered the best choice for implementing privacy-preserving AI algorithms. However, our analysis shows that leveled CKKS is limited in implementing complex operators and is therefore unsuitable, without approximation, for certain ML algorithms such as KNN, logistic regression, or the general activations in neural networks. To avoid the accuracy drops associated with such approximations, the Torus-based FHE library (TFHE) can be a better choice for making these ML implementations feasible. Moreover, our study shows that, compared to TFHE, CKKS's large memory requirement makes it unsuitable for the resource-constrained edge. The choice of underlying library is thus crucial, given the trade-off between latency and accuracy.

In this work, we propose FHEMaLe, an integrated framework for encrypted ML processing that takes the model architecture, desired accuracy, and platform preference as inputs and selects the appropriate execution environment accordingly: a cloud platform leveraging the CKKS homomorphic encryption library, or an edge platform using the TFHE library. Our analysis further shows the limitations of performing FHE ML on a single edge device; our framework therefore partitions encrypted data, transmits it via a fabric API, and performs distributed encrypted ML computations across an edge cluster. We implement distributed ML inference for algorithms such as $k$-nearest neighbors (KNN; cloud CKKS: 248 s, edge TFHE: 37 min), support vector machines (SVM; cloud CKKS: 18 s, edge TFHE: 4.15 min), and logistic regression (LR; cloud CKKS: 17 s, edge TFHE: 7.82 min) on a cluster of 11 edge nodes. We explain why KNN suffers a major performance bottleneck in the encrypted domain and may not be a good choice for encrypted ML processing without application-specific optimizations. Furthermore, our encrypted operators support encrypted NN processing (cloud CKKS: 57 s), and we explain why CKKS is the preferred choice in this case. The distributed nature of our implementation shows promise for further improvement and scalability with the support of a larger cluster.
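
The selection rule described above can be summarized as a small dispatch, sketched below; the function name and signature are hypothetical illustrations, not FHEMaLe's actual API:

# Tiny dispatch sketch of the library-selection rule described above:
# cloud workloads default to packed, approximate CKKS, while edge clusters
# and accuracy-critical models fall back to TFHE.
def select_backend(platform: str, needs_exact_ops: bool) -> str:
    if platform == "edge":
        return "TFHE"  # CKKS's memory footprint is too large for edge nodes
    if needs_exact_ops:
        return "TFHE"  # avoid the accuracy drop of CKKS approximations
    return "CKKS"      # packing + approximate ops suit cloud-scale inference

assert select_backend("cloud", False) == "CKKS"  # e.g. SVM inference
assert select_backend("edge", True) == "TFHE"    # e.g. exact KNN
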
Benedikt Wagner, Arantxa Zapico
ePrint Report
Data availability sampling (DAS) enables clients to verify availability of data without downloading it entirely. This concept is crucial to Ethereum's roadmap. An instantiation of this concept, known as PeerDAS, relies at its core on a variant of KZG polynomial commitments and is set to be integrated into Ethereum. To assess the security of PeerDAS, Wagner and Zapico (ePrint 2024) provided a formal analysis, proving its security as a cryptographic primitive. However, their proof relies on the algebraic group model - an idealized framework known to be uninstantiable (Zhandry, CRYPTO 2022).

In this work, we establish the security of PeerDAS in the standard model under falsifiable assumptions. Specifically, we eliminate reliance on the algebraic group model and instead base our proof on the ARSDH assumption (Lipmaa et al., EUROCRYPT 2024), thus strengthening the theoretical foundations of PeerDAS and enhancing confidence in its security.
Orestis Alpos, Bernardo David, Jakov Mitrovski, Odysseas Sofikitis, Dionysis Zindros
ePrint Report
This work addresses the inherent issues of high latency in blockchains and low scalability in traditional consensus protocols. We present pod, a novel notion of consensus whose first priority is to achieve the physically optimal latency of $2\delta$, i.e., one round trip: a single network trip (duration $\delta$) to write a transaction and a single trip to read it.

To accomplish this, we first eliminate inter-replica communication. Instead, clients send transactions directly to all replicas, which independently process transactions and append them to local logs. Each replica assigns a timestamp and a sequence number to every transaction in its log, allowing clients to extract valuable metadata about the transactions and the system state. Later on, clients retrieve these logs and extract transactions (and associated metadata) from them.
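
A minimal sketch of the replica-side behaviour just described (field and method names are our own, for illustration):

# Replicas do not talk to each other: each one independently timestamps,
# sequences, and appends incoming client transactions to a local log that
# clients later retrieve and aggregate.
import time

class Replica:
    def __init__(self):
        self.log = []
        self.next_seq = 0

    def write(self, tx: bytes):
        entry = {"tx": tx, "timestamp": time.time(), "seq": self.next_seq}
        self.next_seq += 1
        self.log.append(entry)
        return entry           # clients extract metadata from these entries

    def read(self):
        return list(self.log)  # clients aggregate logs from all replicas
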

Necessarily, this construction achieves weaker properties than a total-order broadcast protocol, due to existing lower bounds. Our work models the primitive of pod and defines its security properties. We then show pod-core, a protocol that satisfies properties such as transaction confirmation within $2\delta$, censorship resistance against Byzantine replicas, and accountability for safety violations. We show that single-shot auctions can be realized using the pod notion and observe that it is also sufficient for other popular applications.
Trey Li
ePrint Report
We introduce modular forms and Hecke operators to cryptography and propose the Hecke problem as a new foundation for post-quantum cryptography. Given two modular forms, the Hecke problem asks to recover the Hecke operator that maps one to the other. While there is a deep relation to isogeny problems through the modularity theorem, this problem is rooted in arithmetic geometry and differs fundamentally in structure and mechanism. We prove NP-hardness of this problem and use it to construct a non-interactive key exchange scheme that achieves higher efficiency than isogeny-based schemes and smaller key sizes than lattice-based and code-based schemes.
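
For orientation, recall the classical action of a Hecke operator on $q$-expansions (stated here for level $1$ and prime index; the paper's precise setting may differ): for $f = \sum_{n \ge 0} a_n q^n$ of weight $k$ and a prime $p$, $$T_p f = \sum_{n \ge 0} \bigl(a_{pn} + p^{k-1} a_{n/p}\bigr) q^n, \qquad a_{n/p} := 0 \text{ if } p \nmid n.$$ The Hecke problem then asks, given $f$ and $g = Tf$, to recover the operator $T$.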
Dmitrii A. Gerasimov
ePrint Report
We present ChipmunkRing, a post-quantum ring signature scheme designed for blockchain deployment. Built upon the Chipmunk lattice-based signature scheme, ChipmunkRing achieves signature sizes of 20.5-279.7KB with signing times of 1.1-15.1ms and verification times of 0.4-4.5ms for rings of 2-64 participants. Our key innovation is Acorn Verification, a novel zero-knowledge scheme that replaces the Fiat-Shamir transform, enabling O(n) verification complexity with 96-byte proofs per participant and achieving 17.7× speedup for 32-participant rings compared to traditional approaches. We provide formal security proofs demonstrating 112-bit post-quantum security (NIST Level 1), comprehensive performance analysis, and support for both standard and threshold ring signatures with arbitrary threshold values.
Martin Zbudila, Ajith Suresh, Hossein Yalame, Omid Mirzamohammadi, Aysajan Abidin, Bart Preneel
ePrint Report
Privacy-preserving machine learning (PPML) has become increasingly important due to the need to protect sensitive data during training and inference. Secure multiparty computation (MPC) and homomorphic encryption (HE) have emerged as foundational technologies, enabling secure computation over private data. In this work, we provide a systematic comparative overview of MPC frameworks for PPML, focusing on protocols that introduce novel approaches rather than incremental improvements. Frameworks are analyzed based on computational and communication complexity, throughput, security guarantees, and applicability in small-party settings. Each underlying primitive in PPML is examined from an MPC perspective, highlighting its role and trade-offs. We also emphasize the diversity of secret-sharing schemes and associated interoperability challenges, proposing scheme conversions to facilitate efficient hybrid solutions. This Systematization of Knowledge guides researchers in identifying open problems and practitioners in selecting effective MPC-based frameworks for real-world PPML deployment.
Shreya Dey, Avijit Dutta, Kazuhiko Minematsu
ePrint Report
In EUROCRYPT '20, Bao et al. proved that three rounds of the cascaded LRW1 construction provide security up to $2^{2n/3}$ queries. However, a recent work by Khairallah et al. showed that the construction provides only birthday-bound security, exhibiting a distinguishing attack and thereby invalidating the claim of Bao et al. In an independent and concurrent work, Datta et al. showed that four rounds of cascading the $\textsf{LRW1}$ construction, dubbed $\textsf{CLRW1}^4$ and based on four independently keyed block ciphers, achieves $3n/4$-bit CCA security. In this paper, we show that a key-reduced variant of the $\textsf{CLRW1}^4$ construction, dubbed $\textsf{R}\mbox{-}\textsf{CLRW1}^4$ and based on two independently keyed block ciphers, achieves $2n/3$-bit CCA security. The security proof of our construction relies on heavy use of the H-coefficient technique and a non-trivial analysis lower-bounding the real interpolation probability for good transcripts.
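
For reference, the cascaded construction in the notation above: with $r$ independently keyed block ciphers and tweak $t$, $$\textsf{CLRW1}^r[E](t, m) = E_{k_r}\bigl(t \oplus E_{k_{r-1}}(t \oplus \cdots E_{k_2}(t \oplus E_{k_1}(m)) \cdots)\bigr),$$ so $\textsf{CLRW1}^4$ uses four keys, while the key-reduced $\textsf{R}\mbox{-}\textsf{CLRW1}^4$ reuses two independent keys across the four block-cipher calls (the exact key assignment is specified in the paper).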