International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

11 September 2025

Michele Ciampi, Divya Ravi, Luisa Siniscalchi, Yu Xia
ePrint Report
When investigating the round complexity of multi-party computation (MPC) protocols, it is common to assume that in each round parties can communicate over broadcast channels. However, broadcast is an expensive resource, and as such its use should be minimized. For this reason, Cohen, Garay, and Zikas (Eurocrypt 2020) investigated the tradeoffs between the use of broadcast in two-round protocols assuming setup and the achievable security guarantees. Despite the prolific line of research that followed the results of Cohen, Garay, and Zikas, none of the existing results considered the problem of minimizing the use of broadcast while relying in a black-box way on the underlying cryptographic primitives. In this work, we fully characterize the necessary and sufficient requirements in terms of broadcast usage in the dishonest-majority setting for round-optimal MPC with black-box use of minimal cryptographic assumptions. Our main result shows that to securely realize any functionality with unanimous abort in the common reference string model with black-box use of two-round oblivious transfer, it is necessary and sufficient for the parties to adhere to the following pattern: in the first two rounds the parties exchange messages over peer-to-peer channels, and in the last round the messages are sent over a broadcast channel. We also extend our results to the correlated randomness setting, where we prove that one round of peer-to-peer interaction and one round of broadcast is optimal to evaluate any functionality with unanimous abort.
Shuai Han, Hongxu Yi, Shengli Liu, Dawu Gu
ePrint Report
Currently, the only tightly secure inner-product functional encryption (IPFE) schemes in the multi-user and multi-challenge setting are the IPFE scheme due to Tomida (Asiacrypt 2019) and its derivatives. However, these tightly secure schemes have large ciphertext expansion and are all based on the matrix decisional Diffie-Hellman (DDH) assumption.

To improve the efficiency of tightly secure IPFE and enrich the diversity of its underlying assumptions, we construct a set of tightly secure IPFE schemes, with very compact ciphertexts and efficient algorithms, from the matrix DDH, the decisional composite residuosity (DCR), and the learning-with-errors (LWE) assumptions, respectively. In particular,

-- our DDH-based scheme has about 5×~100× shorter public/secret keys, 2×~2.9× shorter ciphertexts and about 5×~100× faster algorithms than all existing tightly secure IPFE schemes, at the price of 3~7 bits of decreased security level, resolving an open problem raised by Tomida (Asiacrypt 2019);

-- our DCR-based scheme constitutes the first tightly secure IPFE scheme from the DCR assumption;

-- our LWE-based scheme is the first almost tightly secure IPFE scheme from lattices, hence achieving post-quantum security.

We obtain our schemes by proposing a generic and flexible construction of IPFE with a core technical tool called two-leveled inner-product hash proof system (TL-IP-HPS). Specifically, our IPFE construction is in a compact design, and to achieve tight security, we propose an economic proof strategy to reuse the spaces in such compact IPFE. Interestingly, our new proof strategy also applies to the loosely secure IPFE schemes proposed by Agrawal, Libert and Stehlé (Crypto 2016), and yields a tighter bound on their security loss.
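As background, the functionality every IPFE scheme computes is the plain inner product: a functional key for a vector y lets its holder learn only ⟨x, y⟩ about an encrypted x. A toy sketch of that functionality alone (no encryption, no security; all values are hypothetical and not from the paper):

```python
# Toy illustration of the *functionality* an IPFE scheme computes:
# a functional key for y should reveal only the inner product <x, y>
# of the encrypted vector x -- nothing else about x. No security here.
def inner_product(x, y, q):
    """Inner product of two equal-length vectors modulo q."""
    assert len(x) == len(y)
    return sum(a * b for a, b in zip(x, y)) % q

# Hypothetical example values:
val = inner_product([3, 1, 4], [2, 7, 1], q=97)  # 3*2 + 1*7 + 4*1 = 17
```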
Onur Gunlu, Maciej Skorski, H. Vincent Poor
ePrint Report
Semantic communication systems focus on the transmission of meaning rather than the exact reconstruction of data, reshaping communication network design to achieve transformative efficiency in latency-sensitive and bandwidth-limited scenarios. Within this context, we investigate the rate-distortion-perception (RDP) problem for image compression, which plays a central role in producing perceptually realistic outputs under rate constraints. By employing the randomized distributed function computation (RDFC) framework, we derive an achievable non-asymptotic RDP region that captures the finite blocklength trade-offs among rate, distortion, and perceptual quality, aligning with the objectives of semantic communications. This region is further generalized to include either side information or a secrecy requirement, the latter ensuring strong secrecy against eavesdroppers through physical-layer security mechanisms and maintaining robustness in the presence of quantum-capable adversaries. The main contributions of this work are: (i) achievable bounds for non-asymptotic RDP regions subject to realism and distortion constraints; (ii) extensions to cases where side information is available at both the encoder and decoder; (iii) achievable, non-asymptotic RDP bounds with strong secrecy guarantees; (iv) characterization of the asymptotic secure RDP region under a perfect realism constraint; and (v) demonstrations of significant rate reductions and the effects of finite blocklengths, side information, and secrecy constraints. These findings offer concrete guidelines for the design of low-latency, secure, and high-fidelity image compression and generative modeling systems that generate realistic outputs, with relevance for, e.g., privacy-critical applications.
Marc Fischlin, Moritz Huppert, Sam A. Markelon
ePrint Report
Probabilistic data structures like hash tables, skip lists, and treaps support efficient operations through randomized hierarchies that enable "skipping" elements, achieving sub-linear query complexity on average for perfectly correct responses. They serve as critical components in performance-sensitive systems where correctness is essential and efficiency is highly desirable. While simpler than deterministic alternatives like balanced search trees, these structures traditionally assume that input data are independent of the structure's internal randomness and state - an assumption questionable in malicious environments - potentially leading to a significantly increased query complexity. We present adaptive attacks on all three aforementioned structures that, in the case of hash tables and skip lists, cause exponential degradation compared to the input-independent setting. While efficiency-targeting attacks on hash tables are well-studied, our attacks on skip lists and treaps provide new insights into vulnerabilities of skipping-based probabilistic data structures. Next, we propose simple and efficient modifications to the original designs of these data structures to provide provable security against adaptive adversaries. Our approach is formalized through Adaptive Adversary Property Conservation (AAPC), a general security notion that captures deviation from the expected efficiency guarantees in adversarial scenarios. We use this notion to present rigorous robustness proofs for our versions of the data structures. Lastly, we perform experiments whose empirical results closely agree with our analytical results.
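For context on the randomized hierarchies mentioned above: a skip-list node's level is drawn by independent coin flips, and the expected O(log n) query cost holds only while those flips remain independent of the inserted keys, which is exactly the assumption the adaptive attacks break. A minimal sketch of the level draw (parameters are illustrative, not the paper's hardened design):

```python
import random

def random_level(max_level: int, p: float = 0.5, rng=random.Random()) -> int:
    """Draw a skip-list node level: level >= k with probability p**(k-1).

    The sub-linear query cost of a skip list relies on these coin flips
    being independent of the inserted keys; an adaptive adversary who can
    correlate inputs with the structure's internal state voids that
    guarantee, which is what the attacks in the paper exploit.
    """
    level = 1
    while level < max_level and rng.random() < p:
        level += 1
    return level

# With input-independent randomness, the expected level is ~1/(1-p) = 2
# for p = 0.5, which is what yields O(log n) expected search time.
levels = [random_level(16) for _ in range(10_000)]
avg = sum(levels) / len(levels)
```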
Rujia Li, Mingfei Zhang, Xueqian Lu, Wenbo Xu, Ying Yan, Sisi Duan
ePrint Report
Ethereum, a leading blockchain platform, relies on incentive mechanisms to improve its stability. Recently, several attacks targeting the incentive mechanisms have been proposed. Examples include the so-called reorganization attacks that cause blocks proposed by honest validators to be discarded to gain more rewards. Finding these attacks, however, heavily relies on expert knowledge and may involve substantial manual effort.

We present BunnyFinder, a semi-automated framework for finding incentive flaws in Ethereum. BunnyFinder is inspired by failure injection, a technique commonly used in software testing for finding implementation vulnerabilities. Instead of finding implementation vulnerabilities, we aim to find design flaws. Our main technical contributions involve a carefully designed strategy generator that generates a large pool of attack instances, an automatic workflow that launches attacks and analyzes the results, and a workflow that integrates reinforcement learning to fine-tune the attack parameters and identify the most profitable attacks. We simulate a total of 9,354 attack instances using our framework and find the following results. First, our framework reproduces five known incentive attacks that were previously found manually. Second, we find three new attacks that can be identified as incentive flaws. Finally and surprisingly, one of our experiments also identified two implementation flaws.
Nam Tran, Khoa Nguyen, Dongxi Liu, Josef Pieprzyk, Willy Susilo
ePrint Report
Linkable ring signatures (Liu et al., ACISP'04) are ring signatures equipped with a linking mechanism for detecting signatures from the same signer. This functionality has found many practical applications in electronic voting, cryptocurrencies, and whistleblowing systems. However, existing linkable ring signature schemes impose a fundamental limitation: users can issue only one signature, after which their anonymity is no longer guaranteed. This one-time usage limit is inadequate for many real-world scenarios.

This work introduces the notion of Many-time Linkable Ring Signatures, extending the anonymity guarantees of standard linkable ring signatures. Specifically, many-time linkable ring signatures ensure that signers remain anonymous as long as the number of their signatures stays below a system-global threshold. Only when a signer exceeds this threshold is anonymity lost. We formalize this via a security notion called T-anonymity, which guarantees that adversaries cannot distinguish signatures from users who have each produced at most T signatures. This new notion generalizes the one-time anonymity of previous linkable schemes while providing stronger guarantees than existing constructions. We also present a lattice-based construction with proven security in the quantum random oracle model (QROM).
Haiyue Dong, Qian Guo
ePrint Report
The Hamming Quasi-Cyclic (HQC) key encapsulation mechanism (KEM), recently selected by NIST for standardization in the Post-Quantum Cryptography (PQC) process, distinguishes itself through its efficiency, robust design based on hard decoding problems in coding theory, and well-characterized decryption failure rates. Despite its selection, practical security concerns arise from implementation threats, particularly those exploiting plaintext-checking (PC) oracles. While multi-value PC (MV-PC) and full decryption (FD) oracle attacks have been extensively studied in the context of lattice-based cryptography, their applicability to code-based schemes like HQC has remained relatively unexplored.

In this work, we present the first MV-PC and FD oracle attacks targeting code-based KEMs, specifically HQC. Our MV-PC attack significantly reduces the required oracle queries compared to previous PC oracle-based methods and has implications for side-channel, key-mismatch, and fault-injection attacks. Our FD attack exhibits remarkable efficiency in trace complexity, achieving secret key recovery for hqc-128 with just two queries to a perfect oracle, and four queries for hqc-192 and hqc-256. Simulations further demonstrate the robustness of our MV-PC and FD oracle attacks against imperfect oracle responses. We experimentally validate the new attacks on an ARM Cortex-M4 microcontroller, highlighting the critical need for robust countermeasures. In particular, on such a platform, substantial leakage during operations like syndrome computation poses significant challenges to the efficiency of masking techniques in mitigating FD oracle attacks.
José Bacelar Almeida, Gustavo Xavier Delerue Marinho Alves, Manuel Barbosa, Gilles Barthe, Luı́s Esquı́vel, Vincent Hwang, Tiago Oliveira, Hugo Pacheco, Peter Schwabe, Pierre-Yves Strub
ePrint Report
We propose a hybrid formal verification approach that combines high-level deductive reasoning and circuit-based reasoning, and apply it to highly optimized cryptographic assembly code. Our approach permits scaling up formal verification in two complementary directions: 1) it reduces the proof effort required for low-level functions whose computation logic is obfuscated by the intricate use of architecture-specific instructions, and 2) it permits amortizing the effort of proving one implementation by using equivalence checking to propagate the guarantees to other implementations of the same computation using different optimizations or targeting different architectures. We demonstrate our approach via an extension to the EasyCrypt proof assistant and by revisiting formally verified implementations of ML-KEM in Jasmin. As a result, we obtain the first formally verified implementation of ML-KEM that offers performance comparable to the fastest non-verified implementations on x86-64 architectures.
Shaurya Pratap Singh, Bhupendra Singh, Alok Mishra
ePrint Report
In this paper, we introduce a new cryptographic hash algorithm based on the Collatz problem, focusing on its conditional branching structure, an element often overlooked despite the fame of the 3x + 1 conjecture. The hash algorithm takes an arbitrary-length input and produces a fixed-length 256-, 384-, or 512-bit output. It falls in the category of collision-resistant hash functions (CRHF) and is designed with applications such as password storage, HMAC, and digital signatures in mind. To date, quantum computers have shown no speedup on the Collatz problem, and proving or disproving the Collatz conjecture remains a major open question; we therefore believe there is no structural attack on this hash algorithm.
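The conditional branching referred to above is the parity test performed at every Collatz step. As a plain illustration of that branching structure (this is only a sketch of the underlying map, not the proposed hash), the branch taken at each step already produces an input-dependent bit sequence:

```python
def collatz_branch_bits(n: int, steps: int) -> list[int]:
    """Record which Collatz branch is taken at each step.

    The 3x+1 map branches on parity: halve if even, 3n+1 if odd.
    The resulting branch sequence depends sensitively on the input,
    which is the structural feature the paper builds on. This helper
    is purely illustrative and is not the proposed hash algorithm.
    """
    bits = []
    for _ in range(steps):
        if n % 2 == 0:
            bits.append(0)
            n //= 2
        else:
            bits.append(1)
            n = 3 * n + 1
    return bits

trace = collatz_branch_bits(27, 8)  # 27 -> 82 -> 41 -> 124 -> 62 -> ...
```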
Eda Kırımlı, Gaurish Korpal
ePrint Report
In this paper, we discuss refined Humbert invariants of principally polarized superspecial abelian surfaces. Kani introduced the refined Humbert invariant of a principally polarized abelian surface in 1994. The main contribution of this paper is to compute the refined Humbert invariant of a principally polarized superspecial abelian surface. We present three applications of this computation in the context of isogeny-based cryptography. First, we discuss the maximum of the minimum degrees of isogenies between two uniformly random supersingular elliptic curves, independent of their endomorphism ring structures. Second, we provide a different perspective on the fixed isogeny degree problem using refined Humbert invariants, and analyze this problem on average without endomorphism rings. Third, we give experimental evidence for the proven upper bound that the minimum distance is $\approx \sqrt{p}$; our work verifies this claim up to $p = 727$.
Giacomo Borin, Maria Corte-Real Santos, Jonathan Komada Eriksen, Riccardo Invernizzi, Marzio Mula, Sina Schaeffler, Frederik Vercauteren
ePrint Report
The main building block in isogeny-based cryptography is an algorithmic version of the Deuring correspondence, called $\mathsf{IdealToIsogeny}$. This algorithm takes as input left ideals of the endomorphism ring of a supersingular elliptic curve and computes the associated isogeny. Building on ideas from $\mathsf{QFESTA}$, the $\mathsf{Clapoti}$ framework by Page and Robert reduces this problem to solving a certain norm equation. The current state of the art, however, is unable to solve this equation efficiently and resorts to a relaxed version of it instead. This impacts not only the efficiency of the $\mathsf{IdealToIsogeny}$ procedure, but also its success probability. The latter issue has to be mitigated with complex and memory-heavy rerandomization procedures, and still leaves a gap between the security analysis and the actual implementation of cryptographic schemes employing $\mathsf{IdealToIsogeny}$ as a subroutine. For instance, in $\mathsf{SQIsign}$ the failure probability is still $2^{-60}$, which is not cryptographically negligible.

The main contribution of this paper is a very simple and efficient algorithm called $\mathsf{Qlapoti}$ which approaches the norm equation from $\mathsf{Clapoti}$ directly, solving all the aforementioned problems at once. First, it makes the $\mathsf{IdealToIsogeny}$ subroutine between $2.2$ and $2.6$ times faster. This significantly improves the speed of schemes using this subroutine, including notably $\mathsf{SQIsign}$ and $\mathsf{PRISM}$. On top of that, $\mathsf{Qlapoti}$ has a cryptographically negligible failure probability. This eliminates the need for rerandomization, drastically reducing memory consumption, and allows for cleaner security reductions.
Jyotirmoy Basak, Ritam Bhaumik, Amit Kumar Chauhan, Ravindra Jejurikar, Ashwin Jha, Anandarup Roy, André Schrottenloher, Suprita Talnikar
ePrint Report
Since Kuwakado and Morii's work (ISIT 2010 & ISITA 2012), it is known that the classically secure 3-round Luby-Rackoff PRP and Even-Mansour cipher become insecure against an adversary equipped with quantum query access. However, while this query model (the so-called Q2 model) has led to many more attacks, it seems that restricting the adversary to classical query access prevents such breaks (the so-called Q1 model). Indeed, at EUROCRYPT 2022, Alagic et al. proved the Q1-security of the Even-Mansour cipher. Notably, such a proof needs to take into account the dichotomy between construction queries, which are classical, and primitive queries, which are quantum (since the random oracle / permutation models a public function that the adversary can compute).

In this paper, we focus on Feistel ciphers. More precisely, we consider Key-Alternating Feistels built from random functions or permutations. We borrow the tools used by Alagic et al. and adapt them to this setting, showing that in the Q1 setting: $\bullet$ the 3-round Key-Alternating Feistel, even when the round functions are the same random oracle, is a pseudo-random permutation; $\bullet$ similarly the 4-round KAF is a strong pseudo-random permutation.
Kohei Nakagawa, Hiroshi Onuki
ePrint Report
PRISM is an isogeny-based cryptographic framework that relies on the hardness of computing a large prime-degree isogeny from a supersingular elliptic curve with an unknown endomorphism ring. It includes both an identification scheme PRISM-id and a signature scheme PRISM-sig. In this work, we present two attacks on PRISM-id. First, we analyze the probability that a randomly sampled prime $q$ in PRISM-id results in a $q$-torsion subgroup defined over a small extension field, and we show that this probability is higher than claimed in the original proposal. Exploiting this observation, we construct classical forgery attacks on PRISM-id. The first handles the scenario where the attacker is not allowed to reject a challenge. It succeeds with probability $\Theta(2^{-(\lambda + \log\lambda)(1-\varepsilon)})$ and runs in expected time $\tilde{O}(\max\{2^{3\lambda\varepsilon}, 2^{\lambda(\varepsilon + 1/2)}\})$ for any positive real number $\varepsilon < 1/3$. Taking $\varepsilon = 1/4$, the success probability becomes $\Theta(2^{-3(\lambda + \log\lambda)/4})$ and the expected time complexity $\tilde O(2^{3\lambda/4})$. The second forgery attack works in the scenario where the attacker is allowed to reject challenges. It always succeeds and runs in expected time $\tilde O(2^{6\lambda/7})$. Finally, we describe an attack on the underlying hardness assumption of PRISM-id, achieving expected time complexity $\tilde O(2^{\lambda/2})$. Note that our results do not affect the security of PRISM-sig.
Eran Lambooij, Patrick Neumann, Michiel Verbauwhede
ePrint Report
This work presents attacks on full ChiLow-32, a tweakable block cipher presented at EUROCRYPT'25. We first show that an attack on full ChiLow-32 is possible with a straightforward Meet-in-the-Middle attack on the data path. Here, we introduce a method based on linear structures of the round functions to optimally select the meeting point in our attack. Then, we improve this attack using novel high-correlation non-linear approximations of the inverse of the $\chi$ map. This results in a drastic reduction in the time complexity of the attack, in exchange for a reduced success probability. The final attack has a time complexity of $2^{111}$, a success probability of 7%, and requires 165 messages encrypted under the same tweak. Applying the same techniques to ChiLow-40 yields a deterministic attack on 7 rounds with a time complexity of $2^{125}$ and 29 messages, and a probabilistic attack on 6 rounds with a time complexity of $2^{95}$, a 14% success probability, and $128$ messages encrypted under the same tweak.
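For readers unfamiliar with the attack pattern, a generic Meet-in-the-Middle key recovery tabulates one half of the cipher forward and meets it with the other half computed backward; ChiLow's linear-structure meeting-point selection is far more involved. A toy sketch on a hypothetical two-stage 8-bit cipher (not ChiLow):

```python
# Generic Meet-in-the-Middle on a toy cipher ct = E2_k2(E1_k1(pt)):
# tabulate all midpoints of E1 forward, then meet E2 backward.
# E1, E2 and the 8-bit keys are hypothetical, chosen only for illustration.
def e1(k, x):
    return (x + k) % 256          # toy first stage

def e2(k, x):
    return x ^ k                  # toy second stage (self-inverse)

def mitm(pt, ct):
    # Forward table: midpoint -> candidate first-stage keys.
    forward = {}
    for k1 in range(256):
        forward.setdefault(e1(k1, pt), []).append(k1)
    # Backward pass: invert E2 and look up matching midpoints.
    hits = []
    for k2 in range(256):
        mid = e2(k2, ct)          # e2 is its own inverse
        for k1 in forward.get(mid, []):
            hits.append((k1, k2))
    return hits

ct = e2(99, e1(42, 5))            # encrypt pt = 5 under key pair (42, 99)
candidates = mitm(5, ct)          # the true key pair survives the meeting
```

With only one plaintext/ciphertext pair every k2 meets some k1 in this toy, so extra pairs would be needed to filter the survivors; the point is the time/memory trade-off of meeting in the middle, not the filtering.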
Utkarsh Sahai, Arijit Saha, Ramprasad Sarkar, Mriganka Mandal
ePrint Report
Distributed Broadcast Encryption (DBE) is a registration-based cryptographic paradigm in which users generate their own public/secret keys and register their public keys on a public bulletin board. Any sender can encrypt messages for any subset of registered users, and only intended recipients can decrypt, thereby enabling scalable, trustless, and provably secure communication. While Wee et al. (CRYPTO 2025) constructed the first unbounded optimal DBE from the succinct LWE assumption, their construction achieves only selective security. Until now, it was unknown how to achieve adaptive security for unbounded optimal DBE under standard lattice assumptions without relying on heavy primitives such as indistinguishability obfuscation or witness encryption. In this work, we present the first adaptively secure unbounded optimal DBE schemes from falsifiable lattice assumptions. Specifically, for the security parameter $\lambda$ and number of users $N$, we achieve the following:

- A semi-statically secure DBE in the plain model for an arbitrary polynomial number of users, where the sizes of public parameters, user public/secret keys and ciphertext are all optimal (i.e., have size $\textsf{poly}(\lambda,\log N)$), based on the falsifiable $\textsf{poly}(\lambda,\log N)$-succinct LWE assumption.

- An adaptively-secure DBE in the random oracle model supporting an arbitrary polynomial number of users, with optimal public parameters, user public/secret keys and ciphertext sizes, again under $\textsf{poly}(\lambda,\log N)$-succinct LWE assumption.

- An adaptively-secure DBE in the plain model supporting an a priori bounded polynomial number of users under the $\textsf{poly}(\lambda,\log N)$-succinct LWE assumption. Our construction achieves optimal sizes for both the user public/secret keys and the ciphertext, whereas the public parameters grow linearly with the number of users (i.e., have size $N \cdot \textsf{poly}(\lambda,\log N)$).
Hiroshi Amagasa, Rei Ueno, Naofumi Homma
ePrint Report
QR-UOV is a multivariate signature scheme selected as one of the candidates in the second round of the NIST PQC Additional Digital Signatures process. This paper presents software acceleration methods for QR-UOV optimized for modern x86 architectures. Unlike other multivariate candidates, QR-UOV operates over small odd prime-power extension fields such as $\mathrm{GF}(31^3)$ and $\mathrm{GF}(127^3)$. This property allows direct use of hardware multipliers for field arithmetic, offering a distinctive advantage for high-performance implementations; yet how to exploit it efficiently on modern CPUs has so far remained unclear. Our implementation benefits from two proposed optimizations: (1) reducing the computational overhead of the QR-UOV algorithm through algorithm-level optimization, and (2) leveraging advanced SIMD instruction-set extensions (e.g., AVX2, AVX-512) to accelerate main operations such as matrix multiplication. Our implementation achieves substantial speedups over the Round 2 reference: for the parameter set $(q,\ell)=(127,3)$ at NIST security level I, it delivers a $5.1\times$ improvement in key generation, $3.6\times$ in signature generation, and $5.7\times$ in signature verification. These results demonstrate that QR-UOV achieves performance comparable to or higher than that of UOV implementations, particularly at higher security levels.
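One reason small prime fields like GF(127) map well onto native hardware multipliers: since 127 = 2^7 - 1, reduction modulo 127 needs only shifts and adds. A scalar Python sketch of this folding reduction (illustrative only; the paper's kernels are vectorized AVX2/AVX-512 code, and the exact reduction used there may differ):

```python
def mod127(x: int) -> int:
    """Reduce a non-negative integer modulo 127 = 2**7 - 1.

    Because 2**7 = 1 (mod 127), the bits above position 7 can be
    folded back onto the low 7 bits with a shift and an add --
    the kind of cheap reduction that lets small prime fields reuse
    ordinary integer multipliers instead of generic division.
    """
    while x > 127:
        x = (x & 127) + (x >> 7)
    return 0 if x == 127 else x

# Reduce a product of two field elements:
r = mod127(126 * 125)  # 126 = -1 and 125 = -2 (mod 127), so r = 2
```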
Wasilij Beskorovajnov, Jörn Müller-Quade
ePrint Report
We study “send this data to that device now” exchanges under an active network adversary without PKI or pre-shared secrets. We model a human-verifiable out-of-band channel for comparing short codes as a first-class UC functionality. Building on this, we give two commitment-style protocols, a one-sided and a mutual one, that are direct UC equivalents of MANA-IV, and prove they realize authenticated channels with explicit misbinding parameter \(\varepsilon=2^{-t}\). We then compose SAS-based authentication with standard KEM/DEM encryption to obtain a UC-secure message-transfer functionality, preserving the explicit \(\varepsilon\) under composition, and we detail practical instantiations over signatures or MACs. Complementing the theory, we systematize real-world tooling: popular file-transfer utilities either form unauthenticated WebRTC/DTLS channels or use single-code PAKE designs that couple rendezvous with a longer password string; none deploys a session-bound SAS. Our approach decouples rendezvous from authentication and reduces the out-of-band burden to comparing a short \(t\)-bit string. We also sketch an RO-free variant (coin-flip plus non-malleable commitments) with the same user interface.
Dounia M'Foukh, María Naya-Plasencia, Patrick Neumann
ePrint Report
The state-test technique, originally introduced in the context of impossible-differential cryptanalysis and recently used as an improvement for truncated-differential Meet-in-the-Middle attacks, has proven useful for reducing the complexity of attacks. In essence, the idea is to guess parts of the state instead of the key during the key-guessing stage of an attack, ultimately reducing the number of guesses needed. We generalize the state-test technique, allowing it to be applied not only to impossible-differential and (truncated-)differential Meet-in-the-Middle attacks, but also to differential and differential-linear cryptanalysis, and we propose a new performant technique combining the state-test technique with probabilistic key recovery. Additionally, we provide insights on the interaction between cipher and difference needed for the state-test technique to be applicable, finding it to be a promising option for many ciphers. To illustrate our findings, we provide three new applications of the state-test technique: we show how it can be used to improve the best known attack on the block cipher Pride, how it can improve a step in the best known attack on Serpent, and we use it to present the first known attacks on 24, 25, and 26 rounds of CRAFT (out of 32), improving by up to three rounds over the previous best ones.
Daniel Römer, Gero Knoblauch, Alexander Wiesmaier
ePrint Report
The rise of quantum computers means many cryptographic systems are no longer considered sufficiently secure. Algorithms from the field of post-quantum cryptography (PQC) promise security against these new machines. However, PQC algorithms are generally more computationally intensive than classical cryptography. To increase the suitability of PQC for everyday use, this paper investigates their acceleration on GPUs. For this purpose, we analyzed research in the field and closed some gaps with our own implementations. We implemented Dilithium, FrodoKEM, and SPHINCS+ on GPUs using CUDA and benchmarked them together with an existing GPU implementation of Kyber on a Tesla A100 and an RTX 2070 Super. Dilithium performed convincingly on both GPUs, achieving speed-ups in key generation, signing, and verification by factors of around 820, 2,724, and 2,609 on the A100 and 198, 714, and 802 on the RTX 2070 at the optimal batch sizes. SPHINCS+ achieved speed-ups by factors of around 715, 4,114, and 5,915 on the A100 and 193, 193, and 134 on the RTX 2070. FrodoKEM's key generation, encapsulation, and decapsulation were accelerated by factors of 9,989, 4,726, and 3,566 on the A100 and 107, 108, and 206 on the RTX 2070, respectively. For comparison, Kyber's acceleration factors are 476, 513, and 1,782 on the A100 and 18.5, 17.4, and 184.5 on the RTX 2070. In addition, we investigated the effect of a variable number of CUDA streams for FrodoKEM; with 8 streams, a further speedup of about 2% was achieved.
Vipul Goyal, Xiao Liang, Omkant Pandey, Yuhao Tang, Takashi Yamakawa
ePrint Report
We study secure computation in the plain model against fully concurrent quantum adversaries. While classical simulation-based notions --- such as Super-Polynomial Simulation (SPS) security --- have enabled meaningful forms of concurrent security, very little is known about their quantum counterparts, particularly under standard polynomial-time hardness assumptions.

Our main result is the first post-quantum two-party computation protocol that achieves concurrent SPS security, based solely on the minimal assumption of semi-honest post-quantum oblivious transfer (PQ-OT). Moreover, our protocol has constant round complexity when the underlying PQ-OT protocol is constant-round. This can be viewed as a post-quantum analog of the classical result by Garg et al. [Eurocrypt'12], but with a crucial difference: our security proof completely avoids rewinding, making it suitable for quantum settings where rewinding is notoriously challenging due to the no-cloning principle.

By leveraging a compiler of Bartusek et al. [Crypto'21], we further extend our result to the fully quantum setting, yielding the first constant-round concurrent SPS two-party computation for quantum functionalities in the plain model.

Additionally, we construct a two-round, public-coin, concurrent SPS post-quantum zero-knowledge protocol for languages in $\mathsf{NP} \cap \mathsf{coNP}$, under the quantum polynomial-time hardness of LWE. This result is notable even in the classical setting.