International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

20 August 2025

Michael Adjedj, Geoffroy Couteau, Arik Galansky, Nikolaos Makriyannis, Oren Yomtov
ePrint Report
The industry is moving away from passwords for authentication and authorization, with hardware devices for storing long-term cryptographic keys emerging as the leading alternative. However, these devices often have limited displays and remain vulnerable to theft, malware, and attacks that trick users into signing malicious payloads. Current systems provide little fallback security in such cases. Any solution must also meet strict requirements: compatibility with industry standards, scalability to handle high request volumes, and high availability.

We present a novel design for authentication and authorization that meets these demands. Our approach virtualizes the authenticating/authorizing party via a two-party signing protocol with a helper entity, ensuring that keys remain secure even if a device is compromised and that every signed message conforms to a security policy.

We formalize the required properties for such protocols and show how they are met by existing schemes (e.g., FROST for Schnorr, Boneh–Haitner–Lindell-Segev'25 for ECDSA). Motivated by the widespread use of ECDSA (FIDO2/Passkeys, blockchains), we introduce a new, optimized two-party ECDSA protocol that is significantly more efficient than prior work. At its core is a new variant of exponent-VRF, improving on earlier constructions and of independent interest. We validate our design with a proof-of-concept virtual authenticator for the FIDO2 Passkeys framework.
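The additive two-party signing idea at the heart of such designs can be illustrated with a toy Schnorr example: the long-term key and the per-signature nonce are split additively between the device and the helper, so neither party ever reconstructs the full secret and the helper can enforce a policy before contributing its share. This is a minimal sketch with toy parameters, not the paper's optimized ECDSA protocol (ECDSA's nonlinear signing equation is precisely what makes the two-party case harder).

```python
import hashlib
import random

# Toy Schnorr group: p = 2q + 1 with q prime; g generates the order-q subgroup.
q = 1019
p = 2 * q + 1          # 2039, prime
g = pow(2, 2, p)       # squaring lands in the order-q subgroup

def H(R, m):
    d = hashlib.sha256(f"{R}|{m}".encode()).digest()
    return int.from_bytes(d, "big") % q

rng = random.Random(42)

# The key is additively split: neither party ever holds x = x1 + x2.
x1, x2 = rng.randrange(q), rng.randrange(q)
y = pow(g, (x1 + x2) % q, p)            # joint public key

def sign_two_party(m):
    k1, k2 = rng.randrange(1, q), rng.randrange(1, q)
    R = pow(g, (k1 + k2) % q, p)        # combined nonce commitment
    c = H(R, m)
    s1 = (k1 + c * x1) % q              # device's share of the response
    s2 = (k2 + c * x2) % q              # helper's share (a policy check goes here)
    return R, (s1 + s2) % q             # shares combine into one Schnorr signature

def verify(m, R, s):
    c = H(R, m)
    return pow(g, s, p) == (R * pow(y, c, p)) % p

R, s = sign_two_party("login-challenge")
assert verify("login-challenge", R, s)
```

The combined response verifies against the joint public key because $g^{s_1+s_2} = g^{k_1+k_2} \cdot g^{c(x_1+x_2)} = R \cdot y^c$; this linearity is what FROST-style protocols exploit.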
Jonas Janneck, Jonas Meers, Massimo Ostuzzi, Doreen Riepel
ePrint Report
An Authenticated Key Encapsulation Mechanism (AKEM) combines public-key encryption and digital signatures to provide confidentiality and authenticity. AKEMs build the core of Hybrid Public Key Encryption (RFC 9180) and serve as a useful abstraction for messaging applications like the Messaging Layer Security (MLS) protocol (RFC 9420) and Signal's X3DH protocol. To date, most existing AKEM constructions either rely on classical (non post-quantum) assumptions or on unoptimized black-box approaches leading to suboptimal efficiency.

In this work, we choose a different abstraction level to combine KEMs and identification schemes more efficiently by leveraging randomness reuse. We construct a generic scheme and identify the necessary security requirements on the underlying KEM and identification scheme when reusing parts of their randomness. This allows for a concrete instantiation from isogenies based on the POKÉ KEM (EUROCRYPT'25) and the SQIsignHD identification scheme (EUROCRYPT'24). To be used in our black-box construction, the identification scheme requires the more advanced security property of response non-malleability. Hence, we further show that a slight modification of SQIsignHD satisfies this notion, which might be of independent interest.

Putting everything together, our final scheme yields the most compact AKEM from PQ assumptions with public keys of 366 bytes and ciphertexts of 216 bytes while fulfilling the strongest confidentiality and authenticity notions.

14 August 2025

Anubhav Baweja, Alessandro Chiesa, Elisabetta Fedele, Giacomo Fenzi, Pratyush Mishra, Tushar Mopuri, Andrew Zitek-Estrada
ePrint Report
The sumcheck protocol is a fundamental building block in the design of probabilistic proof systems, and has become a key component of recent work on efficient succinct arguments.

We study time-space tradeoffs for the prover of the sumcheck protocol in the streaming model, and provide upper and lower bounds that tightly characterize the efficiency achievable by the prover.

$\bullet{}$ For sumcheck claims about a single multilinear polynomial, we demonstrate an algorithm that runs in time $O(kN)$ and uses space $O(N^{1/k})$ for any $k \geq 1$. For non-adaptive provers (a class which contains all known sumcheck prover algorithms) we show this tradeoff is optimal.

$\bullet{}$ For sumcheck claims about products of multilinear polynomials, we describe a prover algorithm that runs in time $O(N(\log \log N + k))$ and uses space $O(N^{1/k})$ for any $k \geq 1$. We show that, conditioned on the hardness of a natural problem about multiplication of multilinear polynomials, any ``natural'' prover algorithm that uses space $O(N^{1/2 - \varepsilon})$ for some $\varepsilon > 0$ must run in time $\Omega(N(\log \log N + \log \varepsilon))$.

We implement and evaluate the prover algorithm for products of multilinear polynomials. We show that our algorithm consumes up to $120\times$ less memory compared to the linear-time prover algorithm, while incurring a time overhead of less than $2\times$.

The foregoing algorithms and lower bounds apply in the interactive proof model. We show that in the polynomial interactive oracle proof model one can in fact design a new protocol that achieves a better time-space tradeoff of $O(N^{1/k})$ space and $O(N(\log^* N + k))$ time for any $k \geq 1$.
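For reference, the baseline linear-time sumcheck prover whose time-space behavior the paper studies can be sketched in a few lines: it keeps the full evaluation table in memory ($O(N)$ space) and folds it after each verifier challenge. This is a minimal sketch over a toy prime field, not the paper's streaming algorithm.

```python
import random

P = 2**61 - 1          # a Mersenne prime field for the toy example
rng = random.Random(1)

def sumcheck(table):
    """Run prover and verifier for the claim sum_{x in {0,1}^n} f(x),
    with f given by its evaluation table over the hypercube (len 2^n)."""
    T = [t % P for t in table]
    claim = sum(T) % P
    transcript = []
    while len(T) > 1:
        half = len(T) // 2
        g0 = sum(T[:half]) % P         # g_j(0): first free variable set to 0
        g1 = sum(T[half:]) % P         # g_j(1): first free variable set to 1
        assert (g0 + g1) % P == claim  # verifier's round check
        r = rng.randrange(P)           # verifier's challenge
        claim = (g0 + r * (g1 - g0)) % P
        # prover folds the table, fixing the first free variable to r
        T = [((1 - r) * T[i] + r * T[half + i]) % P for i in range(half)]
        transcript.append((g0, g1, r))
    assert T[0] % P == claim           # final oracle check: f(r_1, ..., r_n)
    return transcript

table = [rng.randrange(P) for _ in range(8)]   # n = 3 variables, N = 8
transcript = sumcheck(table)                    # all round checks pass
```

Each fold halves the table using the multilinear identity $f(r, x') = (1-r)f(0, x') + r f(1, x')$; the paper's contribution is achieving $O(N^{1/k})$ space instead of the $O(N)$ this sketch uses.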
Katharina Boudgoust, Corentin Jeudy, Erkan Tairi, Weiqiang Wen
ePrint Report
The Module Learning With Errors (M-LWE) problem has become a fundamental hardness assumption for lattice-based cryptography. It offers an attractive trade-off between strong robustness guarantees, sometimes directly based on worst-case lattice problems, and efficiency of the subsequent cryptographic primitives. Different flavors of M-LWE have then been introduced towards improving performance. Such variants look at different secret-error distributions and might allow for additional hints on the secret-error vector. Existing hardness results however only cover restricted classes of said distributions, or are tailored to specific leakage models. This lack of generality hinders the design of efficient and versatile cryptographic schemes, as each new distribution or leakage model requires a separate and nontrivial hardness evaluation.

In this work, we address this limitation by establishing the hardness of M-LWE under general distributions. As a first step, we show that M-LWE remains hard when the error vector follows an arbitrary bounded distribution with sufficient entropy, with some restriction on the number of samples. Building on this, we then reduce to the Hermite Normal Form (HNF) where the secret-error vector follows said arbitrary distribution. Overall, our result shows that the actual shape of the distribution does not matter, as long as it retains sufficient entropy.

To demonstrate the versatility of our framework, we further analyze a range of leakage scenarios. By examining the residual entropy given the leakage, we show that our results of M-LWE with general distributions encompass various types of leakage. More precisely, we cover exact and approximate linear hints which are widely used in recent cryptographic designs, as well as quadratic, and even non-algebraic forms, some of which were not yet covered by any theoretical hardness guarantees. The generality of our results aims at facilitating future cryptographic designs and security analyses.
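The shape of an M-LWE sample can be made concrete; the toy rank-one sketch below works in $\mathbb{Z}_q[x]/(x^n + 1)$ with hypothetical small parameters, and draws the secret and error from an arbitrary bounded distribution, which is the setting the hardness result above covers.

```python
import random

# Toy M-LWE-style sample in R_q = Z_q[x]/(x^n + 1), module rank 1 for brevity.
n, q = 8, 3329
rng = random.Random(7)

def poly_mul(a, b):
    """Negacyclic convolution: multiplication mod (x^n + 1, q)."""
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                c[k] = (c[k] + ai * bj) % q
            else:
                c[k - n] = (c[k - n] - ai * bj) % q   # x^n = -1
    return c

def small(bound):
    """Some bounded distribution; per the result above, only the bound
    and entropy matter, not the particular shape."""
    return [rng.randrange(-bound, bound + 1) % q for _ in range(n)]

a = [rng.randrange(q) for _ in range(n)]    # uniform public polynomial
s, e = small(2), small(2)                   # secret and error, both small
b = [(x + y) % q for x, y in zip(poly_mul(a, s), e)]   # b = a*s + e

# sanity check: b - a*s recovers an error with small centered coefficients
diff = [(bi - asi) % q for bi, asi in zip(b, poly_mul(a, s))]
assert all(min(d, q - d) <= 2 for d in diff)
```

The pair $(a, b)$ is the published sample; the hardness claim is that it is indistinguishable from uniform even when $s$ and $e$ come from such general distributions.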
Jakub Mielczarek, Małgorzata Zajęcka
ePrint Report
In this article, we introduce a new post-quantum cryptosystem, NTWR Prime, which is based on the NTRU Prime and Learning With Rounding (LWR) problems. This scheme is inspired by the NTWE construction proposed by Joel Gartner in 2023. Unlike NTWE, our algorithm employs an irreducible, non-cyclotomic polynomial whose Galois group is isomorphic to the symmetric group. Additionally, the LWR problem is used in place of the LWE problem, offering potential advantages for structural security due to its deterministic nature. We conduct a security analysis demonstrating that solving the NTWR Prime problem requires solving both the underlying NTRU Prime and LWR problems. Consequently, given the absence of definitive post-quantum security proofs for these problems, our construction offers redundancy, which may fulfill the requirements of applications with exceptionally high security standards. Importantly, we show that there exists a set of parameters satisfying the hardness assumptions for both contributing problems.

13 August 2025

Dung Bui, Kelong Cong
ePrint Report
Fuzzy-labeled private set intersection (PSI) outputs the corresponding label if there is a ``fuzzy'' match between two items, for example, when the Hamming distance between the two items is low. Such protocols can be applied in privacy-preserving biometric authentication, proximity testing, and so on. The only fuzzy-labeled PSI protocol designed for practical purposes is by Uzun et al. (USENIX'21), which is based on homomorphic encryption. This design puts constraints on the item size, label size, and communication cost, since it is difficult for homomorphic encryption to support a large plaintext space and its ciphertext-expansion factor is known to be large.

Our construction begins with a new primitive which we call vector ring-oblivious linear evaluation (vector ring-OLE). This primitive does not rely on existing instantiations of ring-OLE over the quotient ring, but leverages the more efficient vector-OLE. It is ideal for building unbalanced threshold-labeled PSI and is also of independent interest.

Our main contribution, fuzzy-labeled PSI, is bootstrapped from our threshold-labeled PSI protocol. Through a prototype implementation, we demonstrate our communication cost is up to $4.6\times$ better than the prior state-of-the-art with comparable end-to-end latency while supporting a significantly higher label size.
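The ideal (non-private) functionality that fuzzy-labeled PSI computes can be stated in a few lines; the protocols above realize the same behavior without revealing the items or the database. A hypothetical sketch over bit strings:

```python
def hamming(a, b):
    """Hamming distance between equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def fuzzy_labeled_match(query, labeled_items, threshold):
    """Ideal functionality: return the label of any database item within
    Hamming distance `threshold` of the query, else None. A fuzzy-labeled
    PSI protocol computes this without either side seeing the other's data."""
    for item, label in labeled_items:
        if hamming(query, item) <= threshold:
            return label
    return None

db = [("10110100", "alice"), ("01101011", "bob")]
assert fuzzy_labeled_match("10110101", db, threshold=1) == "alice"
assert fuzzy_labeled_match("11111111", db, threshold=1) is None
```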
Andrej Bogdanov, Alon Rosen, Kel Zin Tan
ePrint Report
The $k$LIN problem concerns solving noisy systems of random sparse linear equations mod 2. It gives rise to natural candidate hard CSP distributions and is a cornerstone of local cryptography. Recently, it was used in advanced cryptographic constructions, under the name 'sparse LPN'.

For constant sparsity $k$ and inverse polynomial noise rate, both search and decision versions of $k$LIN are statistically possible and conjectured to be computationally hard for $n\ll m\ll n^{k/2}$, where $m$ is the number of $k$-sparse linear equations, and $n$ is the number of variables.

We show an algorithm that given access to a distinguisher for $(k-1)$LIN with $m$ samples, solves search $k$LIN with roughly $O(nm)$ samples. Previously, it was only known how to reduce from search $k$LIN with $O(m^3)$ samples, yielding meaningful guarantees for decision $k$LIN only when $m \ll n^{k/6}$.

The reduction succeeds even if the distinguisher has sub-constant advantage at a small additive cost in sample complexity. Our technique applies with some restrictions to Goldreich's function and $k$LIN with random coefficients over other finite fields.
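A $k$LIN instance as described above is straightforward to sample; this sketch with illustrative parameters plants a secret and emits $m$ noisy $k$-sparse linear equations mod 2:

```python
import random

def sample_klin(n, m, k, noise_rate, rng):
    """Sample a kLIN instance: m noisy k-sparse linear equations mod 2
    in n variables; each equation's RHS is flipped with prob. noise_rate."""
    secret = [rng.randrange(2) for _ in range(n)]
    eqs = []
    for _ in range(m):
        idx = rng.sample(range(n), k)                # k-sparse support
        rhs = sum(secret[i] for i in idx) % 2
        if rng.random() < noise_rate:
            rhs ^= 1                                 # noisy equation
        eqs.append((idx, rhs))
    return secret, eqs

rng = random.Random(3)
secret, eqs = sample_klin(n=32, m=200, k=3, noise_rate=0.0, rng=rng)
# with zero noise, the planted secret satisfies every equation
assert all(sum(secret[i] for i in idx) % 2 == rhs for idx, rhs in eqs)
```

The search problem is to recover `secret` from `eqs` under nonzero noise; the decision problem is to distinguish such instances from ones with uniformly random right-hand sides.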
Sam Buxbaum, Lucas M. Tassis, Lucas Boschelli, Giovanni Comarela, Mayank Varia, Mark Crovella, Dino P. Christenson
ePrint Report
We present a real-world deployment of secure multiparty computation to predict political preference from private web browsing data. To estimate aggregate preferences for the 2024 U.S. presidential election candidates, we collect and analyze secret-shared data from nearly 8000 users from August 2024 through February 2025, with over 2000 daily active users sustained throughout the bulk of the survey. The use of MPC allows us to compute over sensitive web browsing data that users would otherwise be more hesitant to provide. We collect data using a custom-built Chrome browser extension and perform our analysis using the CrypTen MPC library. To our knowledge, we provide the first implementation under MPC of a model for the learning from label proportions (LLP) problem in machine learning, which allows us to train on unlabeled web browsing data using publicly available polling and election results as the ground truth. The client code is open source, and the remaining code will be open source in the future.
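The core MPC idea, computing an aggregate over secret-shared inputs so that no single server ever sees an individual user's data, can be sketched with additive secret sharing. This is a simplified stand-in for the CrypTen-based pipeline, with hypothetical parameters:

```python
import random

Q = 2**31 - 1            # shares live in Z_Q
rng = random.Random(11)

def share(value, n_servers):
    """Additively secret-share `value`: any n-1 shares are uniformly random,
    so a single server learns nothing about the user's input."""
    shares = [rng.randrange(Q) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

# each user splits its private value between the two servers
user_values = [17, 42, 5, 99]
server_totals = [0, 0]
for v in user_values:
    for s_id, sh in enumerate(share(v, 2)):
        server_totals[s_id] = (server_totals[s_id] + sh) % Q

# servers each publish only their running total; combining the totals
# reveals the aggregate, and nothing about any individual input
assert sum(server_totals) % Q == sum(user_values)
```

Linear operations like this sum cost no interaction; the nonlinear steps of model training are what require the heavier MPC machinery of a library like CrypTen.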
Randy Kuang
ePrint Report
In this paper, we present an optimized construction of the Homomorphic Polynomial Public Key (HPPK) cryptosystem, a novel framework designed to provide enhanced security and efficiency in the post-quantum era. Our work introduces a layered cryptographic design that combines modular arithmetic permutations with an innovative additive random masking technique. This approach effectively obscures the underlying factorizable structure of the public key, thereby mitigating vulnerabilities to known lattice reduction attacks and other algebraic cryptanalyses. The security of our scheme is formally grounded in the computational hardness of three new problems: the Hidden Modulus Product Problem (HMPP), the HPPK Key Recovery Problem (HKRP), and the HPPK Secret Recovery Problem (HSRP). We demonstrate through rigorous analysis that the optimal attacks on our scheme are computationally infeasible for appropriately chosen parameters. Furthermore, we show that HPPK achieves remarkably compact key, ciphertext, and signature sizes, offering a significant advantage over leading NIST post-quantum finalists such as Kyber, Dilithium, and Falcon, particularly in bandwidth-constrained environments. The HPPK cryptosystem offers a compelling and mathematically-grounded solution for next-generation cryptography, delivering both provable security and practical efficiency.
Expand
Weidan Ji, Zhedong Wang, Lin Lyu, Dawu Gu
ePrint Report
Most adaptively secure identity-based encryption (IBE) constructions from lattices in the standard model follow the framework proposed by Agrawal et al. (EUROCRYPT 2010). However, this framework has an inherent restriction: the modulus is quadratic in the trapdoor norm. This leads to an unnecessarily large modulus, reducing the efficiency of the IBE scheme. In this paper, we propose a novel framework for adaptively secure lattice-based IBE in the standard model that removes this quadratic restriction on the modulus while keeping the dimensions of the master public key, secret keys, and ciphertexts unchanged. More specifically, our key observation is that the original framework has a \textit{natural} cross-multiplication structure of the trapdoor. Building on this observation, we design two novel algorithms with non-spherical Gaussian outputs that fully utilize this structure and thus remove the restriction. Furthermore, we apply our framework to various IBE schemes with different partitioning functions in both integer and ring settings, demonstrating its significant improvements and broad applicability. Besides, compared to a concurrent and independent work by Ji et al. (PKC 2025), our framework is significantly simpler in design, and enjoys a smaller modulus, a more compact master public key and shorter ciphertexts.
Felix Dörre, Marco Liebel, Jeremias Mechler, Jörn Müller-Quade
ePrint Report
If the system of an honest user is corrupted, all of its security may be lost: The system may perform computations using different inputs, report different outputs or perform a different computation altogether, including the leakage of secrets to an adversary. In this paper, we present an approach that complements arbitrary computations to protect against the consequences of malicious systems. To this end, we adapt a well-known technique traditionally used to increase fault tolerance, namely redundant executions on different machines that are combined by a majority vote on the results. However, using this conceptually very simple technique for general computations is surprisingly difficult due to non-determinism on the hardware and software level that may cause the executions to deviate. The CoRReCt approach, short for Compute, Record, Replay, Compare, considers two synchronized executions on different machines. Only if both executions lead to the same result is this result returned. Our realization uses virtual machines (VMs): On one VM, the software is executed and non-deterministic events are recorded. On a second VM, the software is executed in lockstep and non-deterministic events are replayed. The outputs of both VMs, which are hosted on different machines, are compared by a dedicated trusted entity and only allowed if they match. The following security guarantees can be proven:

– Integrity: If at most one host is corrupted, then the computation is performed using the correct inputs and returns either the correct result or no result at all.

– Privacy: If timing side-channels are not considered and at most one host is corrupted, the additional leakage introduced by our approach can be bounded by $\log_2(n)$ bits, where $n$ is the number of messages sent. If timing side-channels are considered and the recording system is honest, the same leakage bound can be obtained.

As VMs can be run on completely different host platforms, e.g. Windows on Intel x86-64 or OpenBSD on ARM, the assumption of at least one system being honest is very plausible. To prove our security guarantees, we provide a proof within a formal model. To demonstrate the viability of our approach, we provide a ready-to-use implementation that allows the execution of arbitrary (networked) x86-64 Linux programs and discuss different real-world applications.
Bernardo David, Arup Mondal, Rahul Satish
ePrint Report
Constructing MPC with ephemeral committees has gained a lot of attention since the seminal works on Fluid MPC and YOSO MPC (CRYPTO'21). However, most protocols in this setting focus on the extreme case of ephemeral committees who can only act for one round (i.e. the maximally fluid case). The Layered MPC model (CRYPTO'23) recasts this notion as a protocol execution against an adaptive rushing adversary over a layered interaction graph, where each committee sits on a layer and can only communicate with the immediate next committee. Although protocols with abort allow for linear communication complexity (CRYPTO'23, CiC'24), Perfect Layered MPC with guaranteed output delivery (GOD) and its statistically secure counterpart (TCC'24) suffer from $O(n^9)$ and $O(\kappa n^{18})$ communication complexity for $n$ parties per committee, respectively. In this work, we investigate communication complexity improvements gained in a relaxed Multi-Layered MPC model that allows for limited interaction among the parties in each committee. However, committees have only one round to communicate with the immediate next committee. We construct Rumors MPC protocols, where the interaction among each committee's members is constant-round. Our protocols achieve GOD and optimal corruption threshold in the perfect (resp. statistical) security setting with committees acting for $\delta=5$ (resp. $\delta=13$) rounds and $O(n^6)$ (resp. $O(\kappa n^8)$) communication.
Yuyu Wang
ePrint Report
In this study, we revisit leakage-resilient circuits (LRCs) against NC1-leakage and propose new constructions that minimize the reliance on leak-free hardware. Specifically, we first present a stateless LRC scheme that is resilient to NC1-leakage, and then extend it to a leakage-tolerant circuit with auxiliary input (AI-LTC). By integrating this with a 2-adaptive leakage-resilient encoding scheme, we achieve a stateful LRC scheme that uses a secure hardware component. In comparison to the state-of-the-art constructions against NC1-leakage by Miles and Viola (STOC 2013), both the encoder during the leak-free phase in our stateless LRC and the secure hardware component in our stateful LRC are typically much smaller, as their sizes are independent of the original circuit size. Additionally, we provide a non-black-box instantiation of stateful LRC, resulting in a smaller compiled circuit. The security of all our constructions is based on the very mild worst-case assumption NC1⊊⊕L/poly, which is strictly weaker than the assumption NC1⊊L used by Miles and Viola. Furthermore, we propose a generic conversion from AI-LTCs to non-interactive zero-knowledge proofs with offline simulation (oNIZK) for all NP in the fine-grained setting. Our instantiation derived from it has small common reference strings, perfect soundness, zero-knowledge against adversaries in NC1 under NC1⊊⊕L/poly, and minimal verification complexity. Finally, we show that any fine-grained oNIZK cannot simultaneously achieve perfect soundness and verifiable common reference strings, thereby ruling out the possibility of constructing stateful LRCs without secure hardware by eliminating the trusted setup of our AI-LTC.
Erik Mulder, Bruno Sterner, Wessel van Woerden
ePrint Report
Finding the largest pair of consecutive $B$-smooth integers is computationally challenging. Current algorithms to find such pairs have an exponential runtime -- and the search has only been completed provably for $B \leq 100$ and heuristically for $100 < B \leq 113$. We improve on this by detailing a new algorithm to find such large pairs. The core idea is to solve the shortest vector problem (SVP) in a well-constructed lattice. With this we are able to significantly increase $B$ and notably report the heuristically largest pair with $B = 751$, which has $196$ bits. By slightly modifying the lattice, we are able to find larger pairs for which one cannot conclusively say whether they are the largest for a given $B$. This notably includes a $213$-bit pair with $B = 997$, which is the largest pair found in this work.
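For tiny $B$ the pairs in question can be found by brute force, which also illustrates why the problem becomes computationally challenging as $B$ grows. A minimal sketch:

```python
def largest_smooth_part(x, B):
    """Divide out every factor <= B; returns 1 iff x was B-smooth."""
    for p in range(2, B + 1):
        while x % p == 0:
            x //= p
    return x

def largest_consecutive_smooth_pair(B, bound):
    """Brute-force the largest n <= bound with n and n+1 both B-smooth."""
    best = None
    for n in range(1, bound):
        if largest_smooth_part(n, B) == 1 and largest_smooth_part(n + 1, B) == 1:
            best = n
    return best

# Størmer's theorem guarantees only finitely many such pairs per B;
# for B = 5 the largest is (80, 81) = (2^4 * 5, 3^4).
assert largest_consecutive_smooth_pair(5, 10_000) == 80
```

This exhaustive search is hopeless at the sizes above ($196$-bit pairs), which is what motivates the lattice/SVP approach of the paper.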
Christopher Battarbee, Arman Darbinyan, Delaram Kahrobaei
ePrint Report
Let f be an arbitrary positive-integer-valued function. The goal of this note is to show that one can construct a finitely generated group in which the discrete log problem is polynomially equivalent to computing the function f. In particular, we provide infinite, but finitely generated, groups in which the discrete logarithm problem is arbitrarily hard. As another application, we construct a family of two-generated groups that have a polynomial-time word problem and an NP-complete discrete log problem. Additionally, using our framework, we propose a generic scheme of cryptographic protocols, which might be of independent interest.
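For context, in any group where elements and operations are efficiently representable, the discrete log can always be found in roughly $\sqrt{|G|}$ group operations, e.g. by baby-step giant-step; the construction above produces groups where the problem can be made arbitrarily harder than that generic bound suggests for the parameters at hand. A standard BSGS sketch in $\mathbb{Z}_p^*$:

```python
from math import isqrt

def bsgs(g, h, p):
    """Baby-step giant-step: find x with g^x = h (mod p), O(sqrt(p)) steps."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}       # baby steps g^j
    giant = pow(g, -m, p)                             # g^(-m) mod p
    gamma = h
    for i in range(m):
        if gamma in baby:                             # h * g^(-im) = g^j
            return i * m + baby[gamma]
        gamma = (gamma * giant) % p
    return None                                       # no solution exists

p, g = 1019, 2
x = bsgs(g, pow(g, 777, p), p)
assert pow(g, x, p) == pow(g, 777, p)
```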
Clemens Krüger, Bhavinkumar Moriya, Dominik Schoop
ePrint Report
Homomorphic encryption (HE) is a promising technique for privacy-preserving data analysis. Several HE schemes have been developed, with the CKKS and TFHE schemes being two of the most advanced. However, due to their differences, it is hard to compare their performance and suitability for a given application. We therefore conducted an empirical study of the performance of the two schemes in a comparable scenario. We benchmarked the commonly used operations addition, multiplication, division, square root, evaluation of a polynomial and a comparison function, each on a common pair of datasets with 65536 32-bit integers. Since the CKKS scheme is an approximate scheme, we set a requirement of at least 32 bits of precision to match that of the input data. Our results show that CKKS outperforms TFHE in most operations. TFHE's only advantage is its fast bootstrapping. Even though TFHE performs bootstrapping after every operation, while CKKS typically performs bootstrapping only after a certain number of multiplications, CKKS's bootstrapping still presents a bottleneck. This can be seen specifically with the comparison operation, where TFHE is much faster than CKKS in many settings, since the comparison's multiplicative depth forces several bootstrapping operations in CKKS. Generally speaking, CKKS should be preferred in applications which can be parallelized. CKKS's advantages decrease in applications with a large depth that require many bootstrapping operations.
Hayato Kimura, Ryoma Ito, Kazuhiko Minematsu, Shogo Shiraki, Takanori Isobe
ePrint Report
Distributed social networking services (SNSs) recently received significant attention as an alternative to traditional, centralized SNSs, which have inherent limitations on user privacy and freedom. We provide the first in-depth security analysis of Nostr, an open-source, distributed SNS protocol developed in 2019 with more than 1.1 million registered users. We investigate the specification of Nostr and the client implementations and present a number of practical attacks allowing forgeries on various objects, such as encrypted direct messages (DMs), by a malicious user or a malicious server. Even more, we show a confidentiality attack against encrypted DMs by a malicious user exploiting a flaw in the link preview mechanism and the CBC malleability. Our attacks are due to cryptographic flaws in the protocol specification and client implementation, some of which in combination elevate the forgery attack to a violation of confidentiality. We verify the practicality of our attacks via Proof-of-Concept implementations and discuss how to mitigate them.
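The CBC malleability exploited in the confidentiality attack is a structural property of the mode: flipping a bit in ciphertext block $C_i$ flips the same bit in plaintext block $P_{i+1}$, with no knowledge of the key. The demonstration below uses a hypothetical one-byte substitution "block cipher" purely to exhibit this structure; Nostr's encrypted DMs use AES-CBC, and no real cipher details are modeled here.

```python
import random

# Toy 1-byte block cipher: a keyed substitution table (illustration only)
key_rng = random.Random(99)
SBOX = list(range(256))
key_rng.shuffle(SBOX)
INV = [0] * 256
for i, v in enumerate(SBOX):
    INV[v] = i

def cbc_encrypt(pt, iv):
    ct, prev = [], iv
    for b in pt:
        c = SBOX[b ^ prev]          # C_i = E(P_i xor C_{i-1})
        ct.append(c)
        prev = c
    return ct

def cbc_decrypt(ct, iv):
    pt, prev = [], iv
    for c in ct:
        pt.append(INV[c] ^ prev)    # P_i = D(C_i) xor C_{i-1}
        prev = c
    return pt

pt = list(b"pay $001 to mallory?")
iv = 0x55
ct = cbc_encrypt(pt, iv)

# flip one bit of C_5: the same bit flips in P_6, without the key
ct[5] ^= 0x01
tampered = cbc_decrypt(ct, iv)
assert tampered[6] == pt[6] ^ 0x01   # targeted, predictable change in P_6
assert tampered[5] != pt[5]          # block 5 itself is garbled
```

Because $P_{i+1} = D(C_{i+1}) \oplus C_i$, the attacker's XOR into $C_i$ passes straight through to $P_{i+1}$; this is why CBC without authentication cannot guarantee integrity, and why combining it with automatic link previews can leak plaintext.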
Hyeonhak Kim, Seokhie Hong, Suhri Kim
ePrint Report
POKÉ (Point-Based Key Exchange), proposed by Basso and Maino in Eurocrypt 2025, is currently the fastest known isogeny-based public key encryption scheme, combining a SIDH-like protocol with higher-dimensional isogenies. However, the higher-dimensional representation inherently requires discrete logarithm computations, which restricts the use of torsion points to smooth subgroups. As a result, reducing the size of the underlying prime $p$ is difficult, limiting further efficiency gains. In this work, we propose a variant of POKÉ that retains the higher-dimensional representation but avoids discrete logarithm computations. By replacing the point with the intermediate curve as a shared secret, we are able to reduce the size of the prime $p$ and reduce the number of point evaluations, resulting in faster key generation, encryption, and decryption at the cost of a larger public key. We provide optimized C implementations of both POKÉ and our variant. Our results demonstrate that the proposed method improves key generation and encryption by 16% and 22% respectively, and decryption by more than 50%, for all security levels. These results highlight the practicality of our approach, particularly in computation-constrained environments and applications where fast decryption is essential, such as data processing, network communication, and database encryption.

12 August 2025

Yin Li, Sharad Mehrota, Shantanu Sharma, Komal Kumari
ePrint Report
This paper presents a novel key-based access control technique for securely outsourcing key-value stores, where values correspond to documents that are indexed and accessed using keys. The proposed approach adopts Shamir's secret-sharing, which offers unconditional or information-theoretic security. It supports keyword-based document retrieval while preventing leakage of the data, the access rights of users, or the size (i.e., the volume of the output that satisfies a query). The proposed approach allows servers to detect (and abort) malicious clients attempting to gain unauthorized access to data, and prevents malicious servers from altering data undetected, while ensuring efficient access: it takes 231.5ms over 5,000 keywords across 500,000 files.
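Shamir's secret sharing, the building block named above, can be sketched in a few lines: the secret is the constant term of a random degree-$(t-1)$ polynomial, shares are evaluations at distinct points, and any $t$ shares reconstruct the secret by Lagrange interpolation at zero. This is a minimal sketch with toy parameters, not the paper's access-control protocol.

```python
import random

P = 2**61 - 1          # prime field for the shares
rng = random.Random(5)

def share(secret, t, n):
    """Shamir (t, n) sharing: any t shares reconstruct; t-1 reveal nothing."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[2:]) == 123456789   # any 3 shares suffice
```

Because every set of $t-1$ shares is consistent with every possible secret, the scheme is information-theoretically secure, which is the "unconditional security" property the abstract refers to.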
Jeremiah Blocki, Nathan Smearsoll
ePrint Report
A Proof of Work (PoW) is an important construction for spam mitigation and distributed consensus protocols. Intuitively, a PoW is a short proof that is easy for the verifier to check but moderately expensive for a prover to generate. However, existing proofs of work are not egalitarian in the sense that the amortized cost to generate a PoW proof using customized hardware is often several orders of magnitude lower than the cost for an honest party to generate a proof on a personal computer. Because Memory-Hard Functions (MHFs) appear to be egalitarian, there have been multiple attempts to construct Memory-Hard Proofs of Work (MHPoW) which require memory-hard computation to generate, but are efficient to verify. Biryukov and Khovratovich (Usenix, 2016) developed a MHPoW candidate called Merkle Tree Proofs (MTP) using the Argon2d MHF. However, they did not provide a formal security proof, and Dinur and Nadler (Crypto, 2017) found an attack which exploited the data-dependencies of the underlying Argon2d graph.

We revisit the security of the MTP framework and formally prove, in the parallel random oracle model, that the MTP framework is sound when instantiated with a suitable {\em data-independent} Memory-Hard Function. We generically lower bound the cumulative memory cost (cmc) of any prover for the protocol by the pebbling cost of the ex-post-facto graph. We also prove that, as long as the underlying graph of the original iMHF is sufficiently depth-robust, then, except with negligible probability, the ex-post-facto graph will have high cumulative memory cost. In particular, if we instantiate the iMHF with DRSample then we obtain a MHPoW with the following properties: (1) an honest prover for the protocol can run in sequential time $O(N)$; (2) the proofs have size $\mathtt{polylog}(N)$ and can be verified in time $\mathtt{polylog}(N)$; (3) any malicious prover who produces a valid proof must incur cumulative memory complexity at least $\Omega\left(\frac{N^2}{\log N}\right)$. We also develop general pebbling attacks which we use to show that (1) any iMHF-based MHPoW using the MTP framework has proof size at least $\Omega\left(\log^2 N/\log \log N \right)$, and (2) at least $\tilde{\Omega}(N^{0.32})$ when the iMHF is instantiated with Argon2i, the data-independent version of Argon2.
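The Merkle-tree commitments underlying the MTP framework can be sketched directly: the prover commits to its memory array with a hash tree, and the verifier's challenged entries are opened with logarithmic-size authentication paths, which is what keeps proofs at size $\mathtt{polylog}(N)$. A minimal sketch of the commitment layer only, not the full MHPoW protocol:

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Commit to a list of memory labels (length a power of two)."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def open_leaf(leaves, idx):
    """Authentication path: one sibling hash per level, O(log N) total."""
    level, path = [h(x) for x in leaves], []
    while len(level) > 1:
        path.append(level[idx ^ 1])                  # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify_leaf(root, leaf, idx, path):
    """Recompute the root from the leaf and its path."""
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if idx % 2 == 0 else h(sib + node)
        idx //= 2
    return node == root

leaves = [bytes([i]) * 4 for i in range(8)]          # 8 toy memory labels
root = merkle_root(leaves)
path = open_leaf(leaves, 5)
assert verify_leaf(root, leaves[5], 5, path)
assert not verify_leaf(root, b"fake", 5, path)
```

In an MHPoW, the prover must commit to the labels *before* seeing the challenges, so answering correctly forces it to have actually stored (or expensively recomputed) the memory-hard labels.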