IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
28 August 2025
Dennis Dayanikli, Laura Holz, Anja Lehmann
Doctolib is a popular healthcare platform, used by over 90 million users across France, Italy, and Germany. One of its main features is the secure data exchange between patients and doctors, with 7 million documents shared per month. Doctolib claims to provide the "world's first end-to-end encryption platform built for health applications". The encryption protocol, described in a whitepaper and GitHub repository, relies on envelope encryption and lets users upload ciphertexts for the secure data exchange. The ciphertexts are stored and retrieved through a distributed system, consisting of a data server and a key server. To access the data, recipients fetch the ciphertexts and decrypt them with their private key. However, the platform does not require end-users to maintain any cryptographic keys themselves and instead relies on a virtual device that leverages the two-server setting. The virtual device splits the user's private key over both servers, and uses password-based authentication for its retrieval. Overall, the goal of the protocol is to ensure confidentiality of the uploaded medical records as long as at most one server is corrupt. In this work, we analyze the security of Doctolib's distributed encryption protocol. First, we define a set of formal security models for such password-based distributed envelope encryption that capture the optimal security properties under different corruption settings. We then analyze the protocol -- abstracted from the available information -- in our model, and show that it does not achieve the desired security guarantees. We finally propose a simple modification, based on a distributed oblivious pseudorandom function, that strengthens the original protocol and provably achieves all our security properties.
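As a rough illustration of the two-server virtual-device idea described above (this is not Doctolib's actual protocol, whose details are in the whitepaper; it is a minimal sketch of splitting a key so that neither server alone learns anything, with password-gated release):

```python
import secrets, hashlib

def split_key(priv_key: bytes):
    # XOR-split the user's private key into two shares, one per server.
    share1 = secrets.token_bytes(len(priv_key))
    share2 = bytes(a ^ b for a, b in zip(priv_key, share1))
    return share1, share2

def reassemble(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

def authenticate(password: str, salt: bytes, verifier: bytes) -> bool:
    # Hypothetical stand-in for the protocol's password-based
    # authentication that gates the release of each share.
    return hashlib.sha256(salt + password.encode()).digest() == verifier

priv = secrets.token_bytes(32)
s1, s2 = split_key(priv)
assert reassemble(s1, s2) == priv
# A single share is a uniformly random string independent of the key,
# so confidentiality holds as long as at most one server is corrupt.
```

The point of the sketch is the last comment: the corruption threshold comes entirely from the secret sharing, while the password check only controls when shares are released.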
Dennis Dayanikli, Anja Lehmann
Asymmetric Password-Authenticated Key Exchange (aPAKE) enables secure key establishment between a client and a server using a pre-shared password, while providing security against offline attacks. However, aPAKE does not guarantee any precomputation resistance, and considers passwords to become immediately available upon server compromise. A recent work by Dayanikli and Lehmann (EuroS&P'24) observed that many existing aPAKE protocols provide stronger precomputation attack resistance than what is guaranteed through the aPAKE model: they often rely on salted password hashes, where a unique salt makes precomputation attacks more difficult. While these salts are sent in the clear to the client during authentication, and are thus trivial for an attacker to obtain, they still make a difference in multi-user settings with millions of user accounts per server: in order to run bulk precomputation attacks on all users' passwords, the attacker needs to start an authentication session on behalf of every user to obtain their salts. However, this protection is still limited, as salts are static and the attacker can gradually extract all salt values for precomputation attacks.
In this work, we build upon the observation that many aPAKE protocols include salts for their password protection, and propose a new aPAKE variant that makes such bulk precomputation attacks practically infeasible. We propose updatable aPAKE, which employs updatable salts. In updatable aPAKE, the salt is implicitly refreshed with each successful user authentication, forcing an attacker to rebuild their precomputation table after every honest user's login -- offering a level of precomputation resistance similar to that of strong aPAKE protocols. We formalize the security of updatable aPAKE in the Universal Composability framework and show how OKAPE-HMQV, the currently most efficient aPAKE protocol, can be lifted to the updatable aPAKE setting in a provably secure way. The core idea is that the salt update can be integrated by relying on the password-based server-side authentication that aPAKE already guarantees. We also observe that OKAPE-HMQV is very similar to SRP-6a, the currently most widely deployed aPAKE protocol, and explain how the same idea can be used to upgrade this legacy protocol to achieve strong bulk precomputation attack resistance with minimal overhead.
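The updatable-salt mechanism can be sketched as follows. This is a toy model of the idea, not the OKAPE-HMQV-based construction: in the real protocol the server never learns the password, and the salt refresh is woven into the authenticated key exchange rather than computed server-side.

```python
import os, hashlib

def verifier(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 10_000)

class Server:
    """Stores a salted verifier; refreshes the salt on every successful login."""
    def __init__(self, password: str):
        self.salt = os.urandom(16)
        self.vrf = verifier(password, self.salt)

    def login(self, password: str) -> bool:
        # The password appears here only to keep the toy short; the actual
        # protocol performs this check and the update without revealing it.
        if verifier(password, self.salt) != self.vrf:
            return False
        # Successful authentication: switch to a fresh salt, invalidating
        # any precomputation table built against the old one.
        self.salt = os.urandom(16)
        self.vrf = verifier(password, self.salt)
        return True

srv = Server("hunter2")
old_salt = srv.salt
assert srv.login("hunter2") and srv.salt != old_salt
assert not srv.login("wrong")
```

An attacker who extracted `old_salt` and built a precomputation table for it gains nothing once the honest user logs in again, which is exactly the bulk-attack resistance described above.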
Ke Cheng, Yuheng Xia, Anxiao Song, Jiaxuan Fu, Wenjie Qu, Yulong Shen, Jiaheng Zhang
Transformer-based models like BERT and GPT have achieved state-of-the-art performance across a wide range of AI tasks but raise serious privacy concerns when deployed as cloud inference services.
To address this, secure multi-party computation (MPC) is commonly employed, encrypting both user inputs and model parameters to enable inference without revealing any private information.
However, existing MPC-based secure transformer inference protocols are predominantly designed under the semi-honest security model. Extending these protocols to support malicious security remains a significant challenge, primarily due to the substantial overhead introduced by securely evaluating complex non-linear functions required for adversarial resilience.
We introduce Mosformer, the first maliciously secure three-party (3PC) inference framework that efficiently supports large transformers such as BERT and GPT.
We first design constant-round comparison and lookup table protocols with malicious security, leveraging verifiable distributed point functions (VDPFs). Building on these, we develop a suite of 3PC protocols for efficient and secure evaluation of complex non-linear functions in transformers. Together with optimized modulus conversion, our approach substantially reduces the overhead of secure transformer inference while preserving model accuracy.
Experimental results on the vanilla transformer block show that Mosformer achieves up to a $5.3\times$ speedup and a $4.3\times$ reduction in communication over prior maliciously secure protocols.
Despite offering stronger security guarantees, Mosformer achieves comparable or even superior online performance to state-of-the-art semi-honest 2PC and 3PC frameworks, including BOLT (Oakland 2024), BumbleBee (NDSS 2025), SHAFT (NDSS 2025), and Ditto (ICML 2024), on full-scale models such as BERT and GPT-2.
Yu Zhang, Xianhui Lu, Yijian Liu, Yongjian Yin, Kunpeng Wang
At EUROCRYPT 2012, Banerjee, Peikert, and Rosen introduced the Ring Learning With Rounding (RLWR) problem and constructed lattice-based pseudorandom functions for the first time. Subsequently, Banerjee, Brenner, Leurent, Peikert, and Rosen named this family of lattice-based pseudorandom functions SPRING, reanalyzed its security, and gave two practical instances. Building upon the SPRING family, Bouillaguet, Delaplace, Fouque, and Kirchner further extended it to a pseudorandom number generator called SPRING-RS. It is quite fast, but still lags behind classical pseudorandom number generators based on symmetric cryptography, and its key size is large.
In this work, we present LEAP, a lattice-based pseudorandom number generation scheme characterized by high performance, adaptable parameter selection, and extensive support for parallel processing. Unlike the RLWR problem as used in public-key cryptography, LEAP treats the public parameter of the RLWR problem as part of the key as well. Hiding the public parameter leads to larger lattice dimensions and higher standard deviations of error in the concrete security analysis compared to RLWR under identical parameters. These adjustments imply enhanced security, allowing smaller parameters while maintaining the same security level, thereby improving performance. Additionally, we introduce a novel framework that reuses multiple parameters, significantly enhancing overall performance. To mitigate the increased key size caused by treating the public parameter as the key, we design a pseudorandom number generator leveraging the small key size of a variant of the NTRU assumption, which provides the key required for the high-performance pseudorandom number generator.
Compared with SPRING-RS, LEAP reduces the key size by 1.71× while improving performance by 3.30× at the same security level. Under the AVX2 and AVX512 implementations, the performance reaches 1.61 cycles/byte and 1.14 cycles/byte, and the throughput reaches 16.12 Gbps and 22.60 Gbps, respectively.
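As a quick sanity check on these figures (our arithmetic, not the paper's; the abstract does not state the benchmark machine's clock), the cycles/byte and throughput numbers should imply the same clock frequency for both instruction sets:

```python
# Back-of-the-envelope consistency check: throughput (Gbps) times
# cycles/byte, divided by 8 bits/byte, gives the implied clock in GHz.
def implied_clock_ghz(gbps: float, cycles_per_byte: float) -> float:
    bytes_per_sec = gbps * 1e9 / 8
    return bytes_per_sec * cycles_per_byte / 1e9

avx2 = implied_clock_ghz(16.12, 1.61)     # ~3.24 GHz
avx512 = implied_clock_ghz(22.60, 1.14)   # ~3.22 GHz
assert abs(avx2 - avx512) < 0.1           # both figures imply the same clock
```

Both measurements point to a machine clocked near 3.2 GHz, so the two sets of numbers are internally consistent.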
Yubo Zeng, Kang Yang, Dengguo Feng, Min Zhang
Secure Multi-Party Computation (MPC) in the classical setting requires parties to stay online through the whole computation, which engenders significant inconvenience, especially when dealing with large-scale and complex tasks. The notion of fluid MPC, introduced by Choudhuri et al. (Crypto 2021), aims to tackle this obstacle by presenting a dynamic participation model where parties have the flexibility to join and leave as needed. The best-known honest-majority MPC protocol by Bienstock et al. (Crypto 2023) in the fluid setting achieves linear communication complexity, but still requires significantly larger communication than MPC in the classical setting.
In this paper, we present two concretely efficient fluid MPC protocols with semi-honest security and linear communication in the honest-majority setting. To achieve low concrete communication, we propose a separation-generation approach for random double sharings, along with a linear-communication resharing technique for transmitting sharings across two committees. For evaluating multiplication gates, our protocols in the fluid setting achieve (almost) the same communication as the state-of-the-art protocol ATLAS by Goyal et al. (Crypto 2021) in the classical setting. We also extend the two protocols to achieve malicious security while doubling the communication cost. Compared to the best-known fluid MPC protocol, we reduce the communication cost per multiplication gate by a factor of 6 ∼ 9× (resp., 19 ∼ 28×) for semi-honest security (resp., malicious security). As a trade-off, we relax maximal fluidity, where parties only need to be active in a single round, to submaximal fluidity, which allows an extra internal communication round for each committee. We believe that the new setting is a reasonable relaxation for applications and allows us to achieve practical efficiency for fluid MPC.
Yu-Yuan Chou, Wen-Ching Wu, Jue-Sam Chou
In this paper, we specifically review Xu et al.’s quantum blind signature scheme for distributed e-voting systems, which primarily focuses on simulating real-life e-voting. The scheme aims to ensure voter anonymity in an e-voting system. However, we found that it not only suffers from identity impersonation attacks but also lacks the blindness property essential to a blind quantum signature. To address these shortcomings, we propose a new quantum blind signature scheme that leverages quantum mechanical properties and a one-way hash function. Considering that a voting scheme naturally involves an election committee member blindly signing a ballot embedded with the name of the selected candidate, we use our quantum blind signature as the foundation to design a quantum voting system. This system effectively prevents the repudiation and counterfeiting issues present in Xu et al.’s scheme. Additionally, we provide relevant security analyses to support our theoretical framework. The results demonstrate that our scheme outperforms existing literature not only in terms of e-voting security properties—such as undeniability, anonymity, and untraceability—but also in conceptual simplicity and computational efficiency.
Carlos Cid, David Elkouss, Manuel Goulão
Quantum security most commonly encompasses only offline passive quantum attacks, where a quantum computer is used by an adversary to solve some computationally hard problem, e.g. factoring or discrete logarithm. However, we are witnessing major efforts for the development and deployment of quantum communication networks, and in this environment, cryptographic protocols may also be implemented in quantum devices. In this new setting, a wider range of online active attacks may become possible, for example against targets that may, either deliberately or inadvertently, run a cryptographic scheme in superposition. In this work, we demonstrate that authentication protocols whose security is based on the difficulty of learning linear functions subject to errors, may be vulnerable to attacks where adversaries can make queries in superposition -- that is, under the so-called "Q2" adversarial model. We do so by describing superposition attacks against a family of symmetric-key authentication protocols based on the LPN problem, a post-quantum cryptography assumption. Our attacks against the HB+ and HB# protocols, both of which have classical proofs of security against active attacks, are based on the Bernstein-Vazirani algorithm, and can efficiently recover the secret key. Despite being conceptually simple, we suggest that our attack techniques might be extended and adapted to also allow for superposition attacks against some modern lattice-based identification and post-quantum signature schemes.
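The core quantum step behind these attacks can be illustrated with a small classical simulation of the Bernstein-Vazirani algorithm: a single superposition (phase-oracle) query to the linear function $x \mapsto \langle s, x\rangle \bmod 2$ recovers the secret $s$ exactly. The sketch below is an illustration of that primitive only; it ignores the LPN noise that the paper's attacks on HB+ and HB# must additionally handle.

```python
def hadamard_all(state, n):
    # Apply a Hadamard gate to each of the n qubits
    # (the n-fold Walsh-Hadamard transform on the state vector).
    for q in range(n):
        step = 1 << q
        for i in range(0, 1 << n, step << 1):
            for j in range(i, i + step):
                a, b = state[j], state[j + step]
                state[j], state[j + step] = (a + b) / 2**0.5, (a - b) / 2**0.5

def bernstein_vazirani(s: int, n: int) -> int:
    # One phase-oracle query to f(x) = <s, x> mod 2 recovers s exactly.
    state = [0.0] * (1 << n)
    state[0] = 1.0
    hadamard_all(state, n)                  # uniform superposition over all x
    for x in range(1 << n):                 # oracle: |x> -> (-1)^{s.x} |x>
        if bin(s & x).count("1") % 2:
            state[x] = -state[x]
    hadamard_all(state, n)                  # interference collapses to |s>
    return max(range(1 << n), key=lambda x: abs(state[x]))

assert bernstein_vazirani(0b1011, 4) == 0b1011
```

A classical adversary needs $n$ queries to learn an $n$-bit secret of a linear function; in the Q2 model one superposition query suffices, which is what makes the protocols' classical active-attack security proofs insufficient.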
Marie Bolzer, Sébastien Duval, Marine Minier
The problem of finding a minimal circuit to implement a given function is one of the oldest in electronics. It is known to be NP-hard. Still, many tools exist to find sub-optimal circuits to implement a function. In electronics, such tools are known as synthesisers. However, these synthesisers aim to implement very large functions (a whole electronic chip). In cryptography, the focus is on small functions, hence the necessity for new dedicated tools for small functions.
Several tools exist to implement small functions. They differ by their algorithmic approach (some are based on Depth-First-Search as introduced by Ullrich in 2011, some are based on SAT-solvers like the tool designed by Stoffelen in 2016, some non-generic tools use subfield decomposition) and by their optimisation criteria (some optimise for circuit size, others for circuit depth, and some for side-channel-protected implementations). However, these tools are limited to functions operating on fewer than 5 bits, sometimes 6 bits for quadratic functions, or to very simple functions. The limitation lies in a high computing time.
We propose a new tool to implement quadratic functions of up to 9 bits within AND-depth 1, minimising the number of AND gates. This tool is more time-efficient than previous ones, allowing it to explore larger implementations than other tools on 6 bits or fewer, and to reach larger sizes, up to 9 bits.
Hyun Ji Kwag, Jonghyun Kim, Changmin Lee, Jong Hwan Park
Achieving (at least) a worst-case correctness error is essential for an underlying public-key encryption (PKE) scheme to which the Fujisaki-Okamoto (FO) transformation is applied. There are three average-case to worst-case (ACWC) transformations—denoted as $\mathsf{ACWC}_{0}$, $\mathsf{ACWC}_{1}$ (PKC 2023), and $\mathsf{ACWC}_{2}$ (TIFS 2023)—which generically convert a PKE scheme with an average-case correctness error into one with a worst-case correctness error. However, for these ACWC transformations, the \textit{$\gamma$-spreadness}, a critical factor in determining explicit rejection ($\mathsf{FO}^{\perp}$) or implicit rejection ($\mathsf{FO}^{\not\perp}$), has not been rigorously established: existing analyses lack rigorous proofs, include analytical flaws, or fail to achieve the tightest possible bounds. In this work, we reprove the $\gamma$-spreadness of ACWC-transformed PKE schemes by leveraging two key facts: the random oracle is chosen at random, and the encoding mechanism used in the ACWC framework is message-hiding. Our new proofs are applied to the previous NTRU-based PKE schemes, called $\mathsf{NTRU}\text{-}\mathsf{C}$, $\mathsf{NTRU}\text{-}\mathsf{B}$, and $\mathsf{NTRU+}$, giving the corrected $\gamma$-spreadness for those PKE schemes with concrete parameters.
Jens Groth, Harjasleen Malvai, Andrew Miller, Yi-Nuo Zhang
Hashing to elliptic curve groups is a fundamental operation used in many cryptographic applications, including multiset hashing and BLS signatures. With the recent rise of zero-knowledge applications, such hashes are increasingly used in constraint programming settings. For example, multiset hashing enables memory consistency checks in zkVMs, while BLS signatures are used in proof-of-stake protocols.
In such cases, it becomes critical for hash-to-elliptic-curve-group constructions to be constraint-friendly such that one can efficiently generate succinct proofs of correctness. However, existing constructions rely on cryptographic hash functions that are expensive to represent in arithmetic constraint systems, resulting in high proving costs.
We propose a constraint-efficient alternative: a map-to-elliptic-curve-group relation that bypasses the need for cryptographic hash functions and can serve as a drop-in replacement for hash-to-curve constructions in practical settings, including the aforementioned applications. Our relation naturally supports non-deterministic map-to-curve choices, making it more efficient in constraint programming frameworks and enabling efficient integration into zero-knowledge proofs. We formally analyze the security of our approach in the elliptic curve generic group model (EC-GGM).
Our implementation in Noir/Barretenberg demonstrates the efficiency of our construction in constraint programming: it achieves over $23\times$ fewer constraints than the best hash-to-elliptic-curve-group alternatives and enables $50$-$100\times$ faster proving times at scale.
Sayon Duttagupta, Dave Singelée, Xavier Carpent, Volkan Guler, Takahito Yoshizawa, Seyed Farhad Aghili, Aysajan Abidin, Bart Preneel
Multiple authentication solutions are widely deployed, such as OTP/TOTP/HOTP codes, hardware tokens, PINs, or biometrics. However, in practice, one sometimes needs to authenticate not only the user but also their location. The current state-of-the-art secure localisation schemes are either unreliable or insecure, or require additional hardware to reliably prove the user's location. This paper proposes CARPOOL, a novel, secure, and reliable approach to affirm the location of the user by solely relying on location-bounded interactions with commercial off-the-shelf devices. Our solution does not require any additional hardware, leverages devices already present in a given environment, and can be integrated effortlessly with existing security components, such as identity and access control systems. To demonstrate the feasibility of our work and to show that it can be deployed in a realistic closed environment setting, we implemented a proof of concept realisation of CARPOOL on an Android phone and multiple Raspberry Pi boards and integrated CARPOOL with Amazon Web Services (AWS) Cognito.
Riddhi Ghosal, Isaac M. Hair, Aayush Jain, Amit Sahai
We give a public key encryption scheme that is provably secure against poly-size adversaries, assuming $n^{\log^\alpha n}$ hardness of the standard planted clique conjecture, for any $\alpha \in (0,1)$, and a relatively mild hardness conjecture about noisy $k\mbox{-}\mathsf{LIN}$ over expanders that is not known to imply public-key encryption on its own. Both of our conjectures correspond to natural average-case variants of NP-complete problems and have been studied for multiple decades, with unconditional lower bounds supporting them in a variety of restricted models of computation. Our encryption scheme answers an open question in a seminal work by Applebaum, Barak, and Wigderson [STOC'10].
Dmitry Khovratovich, Mikhail Vladimirov, Benedikt Wagner
SNARKs enable compact proofs that an NP statement is true and that the prover knows a valid witness. They have become a key building block in modern smart contract applications, including rollups and privacy-focused cryptocurrencies.
In the widely used Groth16 framework, however, long statements incur high costs.
A common workaround is to pass the statement’s hash to the SNARK and move the statement into the witness. The smart contract then hashes the statement first, and the circuit that is proven additionally checks consistency of the hash and the statement.
Unfortunately, virtually any hash function is expensive to call either in a smart contract (in terms of gas) or in the proven circuit (in terms of prover time).
We demonstrate a novel solution to this dilemma, which we call hybrid compression. Our method allows us to use two different hash functions—one optimized for the proof circuit, and another optimized for on-chain verification—thereby combining the efficiency advantages of both. We prove the security of this approach in the standard model under reasonable assumptions about the two hash functions, and our benchmarks show that it achieves near-optimal performance in both gas usage and prover time. As an example, compressing an 8 KB statement with our approach results in a 10-second prover time and a smart contract spending 270K gas, whereas the existing approaches either need a much longer proof generation (290 seconds for SHA-256 hashing) or a much more expensive contract (5M gas for Poseidon hashing).
Along the way, we develop a two-party protocol of independent interest in communication complexity: an efficient deterministic method for checking input equality when the two parties do not share the same hash function.
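The example figures quoted above can be compared directly (our arithmetic, derived only from the numbers in the abstract):

```python
# The abstract's figures for compressing an 8 KB statement.
hybrid        = {"prover_s": 10,  "gas": 270_000}
sha256_only   = {"prover_s": 290}      # cheap on-chain, slow prover
poseidon_only = {"gas": 5_000_000}     # fast prover, costly on-chain

prover_speedup = sha256_only["prover_s"] / hybrid["prover_s"]   # 29x
gas_savings = poseidon_only["gas"] / hybrid["gas"]              # ~18.5x
print(f"{prover_speedup:.0f}x faster proving, {gas_savings:.1f}x cheaper on-chain")
```

That is, hybrid compression is simultaneously about 29× faster to prove than the SHA-256 baseline and about 18.5× cheaper on-chain than the Poseidon baseline, rather than trading one for the other.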
Qi Cheng, Hongru Cao, Sian-Jheng Lin, Nenghai Yu, Yunghsiang S. Han, Xianhong Xie
The threshold secret sharing scheme enables a dealer to distribute a share to every participant such that the secret is correctly recovered from a certain number of shares. The traditional $(k, n)$ threshold secret sharing scheme requires that the number of participants $n$ be known in advance. In contrast, the evolving secret sharing scheme allows $n$ to be uncertain and even ever-growing. In this paper, we consider the evolving secret sharing scenario. Based on prefix codes, we propose a brand-new construction of an evolving $k$-threshold secret sharing scheme for an $\ell$-bit secret over a polynomial ring, with correctness and perfect security. The proposed scheme is the first evolving $k$-threshold secret sharing scheme obtained by generalizing Shamir's scheme to a polynomial ring. Besides, the proposed scheme also establishes a connection between prefix codes and evolving schemes for $k\geq2$. The analysis shows that the size of the $t$-th share is $(k-1)(\ell_t-1)+\ell$ bits, where $\ell_t$ denotes the length of a binary prefix code encoding the integer $t$. In particular, when the $\delta$ code is chosen as the prefix code, the share size is $(k-1)\lfloor\lg t\rfloor+2(k-1)\lfloor\lg ({\lfloor\lg t\rfloor+1}) \rfloor+\ell$, which improves the prior best result $(k-1)\lg t+6k^4\ell\lg{\lg t}\cdot\lg{\lg {\lg t}}+ 7k^4\ell\lg k$, where $\lg$ denotes the binary logarithm. Specifically, when $k=2$, the proposal also provides a unified mathematical description of prior evolving $2$-threshold secret sharing schemes and achieves the minimal share size for a single-bit secret, which is the same as the best-known scheme.
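To make the share-size expression above concrete, here is a small calculation of $(k-1)\lfloor\lg t\rfloor+2(k-1)\lfloor\lg(\lfloor\lg t\rfloor+1)\rfloor+\ell$ for sample parameters (the parameter values are our own illustrative choices):

```python
from math import floor, log2

def share_size_bits(t: int, k: int, ell: int) -> int:
    # Share size of the t-th participant when the delta code is used:
    # (k-1)*floor(lg t) + 2(k-1)*floor(lg(floor(lg t) + 1)) + ell
    lg_t = floor(log2(t))
    return (k - 1) * lg_t + 2 * (k - 1) * floor(log2(lg_t + 1)) + ell

# Example: a 128-bit secret, threshold k = 3, the 1000th participant.
print(share_size_bits(1000, 3, 128))  # -> 158 bits
```

The logarithmic dependence on $t$ is the key point: the 1000th share is only 30 bits longer than the secret itself, whereas the prior best result quoted above carries $k^4\ell$ factors in its lower-order terms.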
Yimeng Sun, Jiamin Cui, Shiyao Chen, Meiqin Wang, Longzheng Cui, Chao Niu
Motivated by the LowMC cryptanalysis challenge, research in recent years has focused more on attacking LowMC in the Picnic application setting, i.e., where an attacker can see only a single plaintext/ciphertext pair. Since the security proof of Picnic requires LowMC to be secure under two plaintexts, it is meaningful to investigate the security of LowMC in this direction. Pioneering this direction at Crypto 2021, Liu, Isobe and Meier combined algebraic techniques with the difference enumeration attack, which could attack all three 4-round LowMC instances adopting full S-box layers in Picnic with only two chosen plaintexts. However, the cryptanalysis of LowMC using two plaintext/ciphertext pairs is far from complete: previous works using a single known plaintext outperform those with two plaintexts in terms of attack complexity or attacked rounds at a comparable success probability.
In this paper, to address this counter-intuitive gap between existing attacks on LowMC with full S-box layers using a single versus two plaintext/ciphertext pairs, we first develop an algebraic key-derived attack framework, where an algebraic property of the key-derived difference is utilized to build an equation system of lower algebraic degree. This directly reduces the cost of solving the equation system and naturally works in the known-plaintext setting; it can be further enhanced in the chosen-plaintext setting. We then present an improved difference enumeration attack framework: instead of enumerating all possible differences in the second round, variables for part of the S-boxes in the second and third rounds are introduced to derive cubic equations, leading to fewer variables for the last round. Finally, applying our new attack frameworks to LowMC, we propose 8-round attacks on LowMC for the very first time, which remain in the known-plaintext setting. Moreover, we give the first attacks on three LowMC instances, i.e., 129-bit block size with 6 rounds and 129-/192-bit block size with 7 rounds, which cannot be obtained using previous attack methods. Previous attacks on LowMC from 4 to 7 rounds are also improved for almost all three LowMC instances. All these results, we believe, give a positive answer that, given one more pair, more information can indeed be gained to improve attacks on LowMC compared to those using only a single plaintext. We also hope that our newly proposed algebraic key-derived attack framework can provide more insights into the cryptanalysis of LowMC with low allowable data complexity.
Yanyi Liu, Rafael Pass
We consider the question of whether worst-case hardness of the time-bounded Kolmogorov complexity problem, $\KpolyA$---that is, determining whether a string is time-bounded Kolmogorov random ($K^t$-random) or not---suffices to imply the existence of one-way functions (OWF).
Roughly speaking, our main result shows that under a natural strengthening of standard-type derandomization assumptions, worst-case hardness of the \emph{boundary} version of this classic problem characterizes OWFs.
In more detail, let $\bKtA$ denote the problem of, given an instance $x$, deciding whether (a) $K^{t_2}(x)\geq n-1$, or (b) $K^{t_1}(x) < n-1$ \emph{but} $K^{t_2}(x) > n - \log n$; that is, deciding whether $x$ is $K^t$-random, or just ``near" $K^t$-random. We say that $\bKpolyA \notin \ioBPP$ if $\bKtA \notin \ioBPP$ for all polynomials $t_1,t_2$.
We show that under a natural strengthening of standard derandomization assumptions (namely, there exists a constant $\varepsilon > 0$ such that $\E \not\subseteq {\sf ioNTIME}[2^{kn}] \slash 2^{\varepsilon n}$ for every $k \in \N$), OWF exist iff $\bKpolyA \notin \ioBPP$. Along the way, we also demonstrate that if we consider the probabilistic version of Kolmogorov complexity (referred to as $pK^t$) instead, then the characterization holds unconditionally.
We finally observe that for most standard optimization problems, hardness ``along boundary" is equivalent to ``plain" worst-case hardness, indicating that assuming hardness along the boundary may be WLOG.
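Spelled out as a promise problem (this merely restates the definition given above, in the abstract's own notation), the boundary problem distinguishes:

```latex
\bKtA:\qquad
\text{(a)}\ K^{t_2}(x) \ge n-1
\qquad\text{vs.}\qquad
\text{(b)}\ K^{t_1}(x) < n-1 \ \text{and}\ K^{t_2}(x) > n-\log n .
```

Instances satisfying neither (a) nor (b) fall outside the promise, which is what makes hardness ``along the boundary" a potentially weaker assumption than plain worst-case hardness.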
Julius Hermelink, Erik Mårtensson, Maggie Tran
Plaintext-checking (PC) oracles are among the most prominent types of attacks against the recently standardized ML-KEM. Previous works have drastically reduced the number of queries needed to recover the secret key. While the number of traces is close to the information-theoretic bound, current attacks have not yet been adapted to settings with increased noise and highly imperfect oracles.
In attacks targeting real-world protected implementations, we have to expect noisy information leading to oracles that only give a small advantage over guessing. In this work, we show how to deal with imperfect oracles arising from side-channels under a high noise level. We present several new highly noise-tolerant parallel PC-oracle attacks. Our attacks rely on soft-analytic techniques and can deal with just a slight advantage over guessing. We present several attacks that either optimally update the sub-key distribution or compute approximations to the marginals. In addition, we extend the generic framework for side-channel information from Eurocrypt 2025.
We then discuss several oracle instantiations based on the noisy Hamming weight model. These oracles rely on widely accepted assumptions while also being easy to simulate and allowing for fair comparisons between different attacks. Furthermore, we take masking countermeasures into account. Our evaluations in these and previous models show that PC-oracle attacks are highly noise-tolerant -- on an entirely different scale compared to previous work. These improvements are of algorithmic nature and orthogonal to the fact that the Fujisaki-Okamoto transform in ML-KEM offers a large attack surface. We discuss the implications of our findings for protected ML-KEM implementations.
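To make the idea of optimally updating a sub-key distribution from an imperfect oracle concrete, here is a toy Bayesian sketch. The 4-bit sub-key, the checking predicate, and the deterministic 20%-error pattern are all hypothetical stand-ins for illustration, not the ML-KEM setting analyzed in the paper:

```python
# Toy sketch: optimal Bayesian update of a sub-key distribution from an
# imperfect plaintext-checking oracle.
SECRET = 11
CANDIDATES = range(16)

def predicted_bit(subkey, challenge):
    # Hypothetical stand-in for "does decapsulation of this crafted
    # ciphertext succeed?" -- it depends jointly on sub-key and challenge.
    return 1 if (subkey * 7 + challenge) % 16 < 8 else 0

def oracle(i, challenge):
    # Imperfect oracle: every 5th answer is flipped (20% error rate).
    bit = predicted_bit(SECRET, challenge)
    return 1 - bit if i % 5 == 0 else bit

posterior = {k: 1.0 / 16 for k in CANDIDATES}
for i in range(160):
    c = i % 16
    obs = oracle(i, c)
    # Bayes: scale each hypothesis by the likelihood of the observation.
    for k in CANDIDATES:
        posterior[k] *= 0.8 if predicted_bit(k, c) == obs else 0.2
    s = sum(posterior.values())
    posterior = {k: v / s for k, v in posterior.items()}

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 4))
```

Even with one in five answers wrong, the posterior concentrates on the correct sub-key after a few times more queries than the error-free case would need; the paper's attacks handle far noisier oracles with soft-analytic techniques.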
Tim Beyne, Gregor Leander, Immo Schütt
We show that $4r + 4$ rounds of a variant of the AES with independent and uniform random round keys are $\varepsilon$-pairwise independent with $\varepsilon = 2^{14}\, 2^{-30r}$. We deduce this bound from a two-norm version of pairwise independence for SHARK-type ciphers based on the third-largest singular value of the difference-distribution table of the S-box. This approach was worked out in the master's thesis of Immo Schütt. Our bounds leave room for improvement, both in the constant prefactor $2^{14}$ (due to a rough conversion between norms) and in the exponent. These improvements will be worked out in an extended version of this note.
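The quantity driving such a bound can be computed directly: build the difference-distribution table (DDT) of an S-box, normalize it, and take its singular values. The sketch below uses PRESENT's 4-bit S-box as a small stand-in for the 8-bit AES S-box (purely illustrative); the two largest singular values are trivially 1, and the third-largest is the one entering the analysis:

```python
import numpy as np

# PRESENT's 4-bit S-box, standing in for the AES S-box.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
n = 16

# DDT[a][b] = #{x : S(x ^ a) ^ S(x) = b}
ddt = np.zeros((n, n))
for a in range(n):
    for x in range(n):
        ddt[a][SBOX[x ^ a] ^ SBOX[x]] += 1

# Singular values of the normalized DDT. The two largest equal 1 (one
# from the zero difference, one from the doubly stochastic structure of
# the nonzero block); the third-largest is the interesting quantity.
sv = np.linalg.svd(ddt / n, compute_uv=False)
print([round(float(s), 3) for s in sv[:3]])
```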
Haoyu Liao, Qingbin Luo
Symmetric cryptography is confronting threats posed by quantum computing, including Grover's search algorithm and Simon's algorithm. In fault-tolerant quantum computation, the limited qubit count, connectivity constraints, and error rates of quantum hardware impose stringent requirements on the implementation of cryptographic quantum circuits. Constructing low-resource quantum circuit models forms the foundation for evaluating algorithmic resistance to quantum threats. In this work, we address the fundamental limitations of in-place implementations of AES quantum circuits by proposing a set of in-place synthesis methods centered on DW-cost optimization. First, we prove that within the composite field arithmetic framework, intermediate circuit states can be utilized to uncompute S-box input states, and introduce a novel design pathway and circuit structure for in-place S-box quantum circuits. Second, we establish the necessary conditions for maximizing parallelization of Toffoli gates under minimal-width constraints in binary field multiplication. Through co-design and optimization of multiple nonlinear components, we construct a compact in-place S-box with a DW-cost of merely 276. Finally, building on this, we achieve quantum circuit implementations for AES-128, AES-192, and AES-256 via co-optimization of key expansion and round functions, reducing their DW-cost values to 65,280, 87,552, and 112,896 respectively. These results represent a reduction of at least 46%, 45%, and 45% compared to existing state-of-the-art solutions. These advancements establish new technical benchmarks for low-quantum-resource, fault-tolerant implementations of symmetric cryptography in the post-quantum era.
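For readers unfamiliar with the metric: DW-cost is the product of circuit depth and width (qubit count). A minimal sketch using the figures quoted in the abstract; the 4 × 69 depth/width split of the S-box cost is a hypothetical factorization for illustration, and the implied-previous-cost calculation only unpacks the stated "at least 46%" claim:

```python
def dw_cost(depth, width):
    # DW-cost: circuit depth multiplied by the number of qubits (width).
    return depth * width

# A hypothetical depth/width split reproducing the reported S-box DW-cost.
assert dw_cost(4, 69) == 276

def implied_previous_cost(new_cost, reduction_pct):
    # "A reduction of at least r%" implies the previous cost was at least
    # new_cost / (1 - r/100).
    return new_cost / (1 - reduction_pct / 100)

# Lower bound on the prior state-of-the-art AES-128 DW-cost implied by
# the claimed >= 46% reduction from the new figure of 65,280.
print(round(implied_previous_cost(65_280, 46)))  # → 120889
```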
Yao Sun, Ting Li
The efficiency of a cryptographic algorithm's circuit implementation is critical for its practical deployment. The implementation cost of the linear layer can be evaluated by the number of XOR operations, typically measured in terms of s-XOR (serial XOR) and g-XOR (generalized XOR). In this work, we reduce the g-XOR count of the AES linear layer from 91 to 90 by enhancing the Boyar–Peralta (BP) algorithm and incorporating sub-graph reconstruction. We propose an improved framework for the BP algorithm that significantly accelerates the optimization process. Furthermore, we improve the local optimization strategy of [LinDa2021] by sub-graph reconstruction, implemented by solving a mixed-integer linear programming (MILP) model with Gurobi. As a result, we achieve the first implementation of the AES linear layer with 90 g-XOR operations, improving upon the previous best result of 91 reported in [LinDa2021].
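The XOR-count metrics simply count the gates of a straight-line program implementing the linear layer over GF(2). A toy example (the 3-bit map is invented for illustration) showing how sharing an intermediate value lowers the count, the same effect the BP algorithm exploits at AES scale:

```python
def run(program, inputs):
    # Execute a straight-line XOR program; each step assigns one new wire
    # as the XOR of two existing wires, so the XOR count is len(program).
    wires = dict(inputs)
    for out, a, b in program:
        wires[out] = wires[a] ^ wires[b]
    return wires

# Target map over GF(2): y0 = x0^x1^x2, y1 = x1^x2, y2 = x0^x2.
naive = [("t", "x0", "x1"), ("y0", "t", "x2"),
         ("y1", "x1", "x2"), ("y2", "x0", "x2")]   # 4 XOR gates
shared = [("y1", "x1", "x2"), ("y0", "x0", "y1"),  # reuse y1 = x1^x2
          ("y2", "x0", "x2")]                      # 3 XOR gates

print(len(naive), len(shared))  # → 4 3
```

Finding the minimum-gate program for a given matrix is hard in general, which is why heuristics like BP, and refinements such as the sub-graph reconstruction above, are used.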