IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
17 October 2025
Shibam Ghosh, Bastien Michel, María Naya-Plasencia
ARADI is a low-latency block cipher introduced by the U.S. National Security Agency (NSA) for secure and efficient memory encryption applications. In contrast to most ciphers proposed in the academic community, the design rationale for ARADI has not been publicly disclosed, limiting external evaluation to independent cryptanalysis. Several such analyses have already been published, with the most effective attacks to date reaching up to 12 out of 16 rounds. In this work, we present a differential meet-in-the-middle attack on ARADI that incorporates several new optimizations and dedicated techniques, enabling, for the first time, an attack extending to 14 rounds of the cipher.
Thomas Marquet, Elisabeth Oswald
Deep learning has become a powerful tool for profiled side-channel analysis, especially for its ability to defeat masking countermeasures. However, obtaining a successful deep learning model when the attacker cannot access the internal randomness of the profiling device remains a challenge. The "plateau effect" hinders training: the optimization stalls in flat regions of the loss landscape at initialization, making the outcome of a training run uncertain. Previous works showed that multi-task learning can overcome this problem by leveraging redundant features across multiple tasks, such as shared randomness or a common masking flow. We continue this line of work by using belief propagation on a larger graph to guide the learning. We introduce a multi-task learning model that explicitly integrates a factor graph reflecting the algebraic dependencies among intermediates in the computation of Kyber's inverse Number Theoretic Transform (iNTT). This framework allows the model to learn a joint, mutually beneficial representation of the related tasks and provides a mechanism for escaping such plateaus. For the first time, we show that one can perform belief propagation over the masked shares during training even without access to the internal randomness, potentially improving the performance of the attack considerably.
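As a minimal illustration of the kind of message such a factor graph passes (a sketch, not the authors' model): for an arithmetic masking relation $z = x_1 + x_2 \bmod q$, one sum-product step through the addition factor turns the shares' leakage-derived marginals into a marginal on the unmasked value via cyclic convolution. The toy modulus and the random "leakage" priors below are hypothetical.

```python
import numpy as np

def propagate_sum_mod_q(p_x1, p_x2, q):
    """Sum-product message through the factor z = x1 + x2 (mod q):
    P(z) is the cyclic convolution of the two share distributions."""
    p_z = np.zeros(q)
    for x1 in range(q):
        # np.roll(p_x2, x1)[z] == p_x2[(z - x1) mod q]
        p_z += p_x1[x1] * np.roll(p_x2, x1)
    return p_z

q = 17  # toy modulus; Kyber uses q = 3329
rng = np.random.default_rng(0)
# stand-ins for leakage-derived marginals over the two shares
p_x1 = rng.dirichlet(np.ones(q))
p_x2 = rng.dirichlet(np.ones(q))
p_z = propagate_sum_mod_q(p_x1, p_x2, q)
assert abs(p_z.sum() - 1.0) < 1e-9
print(p_z.argmax(), p_z.max())  # most likely unmasked value and its belief
```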
Ziyu Zhao, Antonio Sanso, Giuseppe Vitto, Jintai Ding
Poseidon and Poseidon2 are cryptographic hash functions crafted for efficient zero-knowledge proof systems and have seen wide adoption in practical applications. We introduce the use of the Graeffe transform in univariate polynomial solving within this line of work. The proposed method streamlines the root recovery process in interpolation attacks and achieves several orders of magnitude acceleration in practical settings, enabling a new and more efficient class of attacks against Poseidon targeting round-reduced permutations and constrained input/output instances. We release open-source code and describe our method in detail, demonstrating substantial improvements over prior approaches: reductions in wall time by a factor of $2^{13}$ and in memory usage by a factor of $2^{4.5}$. Memory-access costs for NTTs turn out to be a dominant barrier in practice, and we prove that this cost grows at least as the $4/3$-power of the input size (up to logarithmic factors), which suggests that the commonly used pseudo-linear cost model may underestimate the true resource requirements. This behavior contrasts with multivariate equation solving, whose main bottleneck remains finite-field linear algebra. We argue that, when selecting parameters, designers should account for interpolation-based attacks explicitly, since their practical hardness is determined by different, and sometimes stronger, resource constraints than those of multivariate techniques.
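For readers unfamiliar with the primitive: one Graeffe iteration maps a polynomial with roots $r_i$ to one with roots $r_i^2$, the operation that underlies the accelerated root recovery. A minimal sketch of one iteration over a small prime field (illustrative helper, not the released code, which is heavily optimized):

```python
def graeffe_step(f, p):
    """One Graeffe iteration over GF(p). f[i] is the coefficient of x^i.
    If f has roots r_i, the result has roots r_i^2, because (up to sign
    for odd degrees) f(x) * f(-x) = g(x^2)."""
    n = len(f) - 1
    f_neg = [(c if i % 2 == 0 else (-c) % p) for i, c in enumerate(f)]
    prod = [0] * (2 * n + 1)
    for i, a in enumerate(f):
        for j, b in enumerate(f_neg):
            prod[i + j] = (prod[i + j] + a * b) % p
    assert all(c == 0 for c in prod[1::2])  # product is an even function
    return prod[::2]  # read off g(y), where y stands for x^2

p = 101
f = [21, (-10) % p, 1]  # (x - 3)(x - 7) = x^2 - 10x + 21 over GF(101)
g = graeffe_step(f, p)
# g must vanish at the squared roots 3^2 = 9 and 7^2 = 49
assert sum(c * pow(9, i, p) for i, c in enumerate(g)) % p == 0
assert sum(c * pow(49, i, p) for i, c in enumerate(g)) % p == 0
```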
Kyungbae Jang, Anubhab Baksi, Hwajeong Seo
Quantum depth plays a critical role in determining the performance of quantum implementations, yet quantum programming tools often fail to produce depth-optimal compilations of linear layers. In this work, we present a systematic and automated framework that reorders quantum gate sequences of linear layers to obtain depth-efficient quantum implementations. Our method consistently produces circuits with lower depth compared to prior implementations.
We apply the framework to a range of cryptographic operations, including the AES MixColumn, internal layers of the AES S-box, binary field squaring, and modular reduction in binary field multiplication. In all these cases, our method achieves meaningful reductions in quantum depth—for example, lowering the depth of the AES MixColumn and S-box circuits.
This work explores optimal quantum circuit designs for quantum programming tools, improves the accuracy of quantum resource estimation for cryptanalysis, and supports more realistic evaluations of post-quantum security.
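As a rough illustration of the quantity being optimized (a sketch, not the authors' framework): the depth of a CNOT network under greedy as-soon-as-possible layering depends on the order of the gate list, so reordering pairwise-commuting gates, which leaves the implemented linear map over GF(2) unchanged, can shorten the critical path. The gate set below is illustrative.

```python
def cnot_depth(gates, n_qubits):
    """Greedy (ASAP) depth of a CNOT sequence: each gate is placed one
    layer after the last gate that touched either of its qubits."""
    free = [0] * n_qubits  # next free layer per qubit
    depth = 0
    for c, t in gates:
        layer = max(free[c], free[t]) + 1
        free[c] = free[t] = layer
        depth = max(depth, layer)
    return depth

# Three pairwise-commuting CNOTs (same target, same control, disjoint
# qubits), so both orders implement the same linear map over GF(2).
order_a = [(0, 2), (1, 2), (1, 3)]
order_b = [(0, 2), (1, 3), (1, 2)]
print(cnot_depth(order_a, 4))  # 3: (1,2) then (1,3) chain on qubit 1
print(cnot_depth(order_b, 4))  # 2: (1,3) runs alongside (0,2)
```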
We apply the framework to a range of cryptographic operations, including the AES MixColumn, internal layers of the AES S-box, binary field squaring, and modular reduction in binary field multiplication. In all these cases, our method achieves meaningful reductions in quantum depth—for example, lowering the depth of the AES MixColumn and S-box circuits.
This work explores optimal quantum circuit designs for quantum programming tools, improves the accuracy of quantum resource estimation for cryptanalysis, and supports more realistic evaluations of post-quantum security.
Zhengjun Cao, Lihua Liu
We show that the authentication scheme (IEEE Internet Things J., 24310-24322, 2024) cannot be practically implemented, because it misuses the elliptic curve group law: point multiplication is written as $kP$, where $k$ is an integer and $P$ is a point on the curve, but the scheme constructs its verification equations from the ill-defined expression $Q_iQ_j$, where $Q_i$ and $Q_j$ are both points. Besides, we find that an adversary can retrieve the target relay device's secret key from messages intercepted on open channels.
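To make the distinction concrete, here is a minimal sketch (toy curve and point, chosen purely for illustration): scalar multiplication $kP$ is well defined as repeated point addition, whereas a "product" $Q_iQ_j$ of two points has no meaning under the group law.

```python
P_MOD, A = 17, 0          # toy curve y^2 = x^3 + 7 over GF(17)

def ec_add(P, Q):
    """Affine group law; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, P):
    """Double-and-add: the only multiplication the group law provides
    combines an INTEGER with a point, never two points."""
    R = None
    while k:
        if k & 1: R = ec_add(R, P)
        P, k = ec_add(P, P), k >> 1
    return R

G = (1, 5)                # a point on the toy curve
print(ec_mul(5, G))       # well defined: 5G = G + G + G + G + G
# "G * G" (a product of two points) is simply not an operation here
```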
Liyan Chen, Cody Freitag, Zhengzhong Jin, Daniel Wichs
We construct the first unambiguous succinct non-interactive arguments (SNARGs) for P and incrementally verifiable computation (IVC) for P from the polynomial hardness of learning with errors (LWE). Unambiguity guarantees that it is computationally hard to find two distinct accepting proofs for the same statement.
As an application, we establish the first PPAD hardness result based on the polynomial hardness of LWE combined with a widely believed complexity assumption.
Central to our approach is a new notion of rate-1 witness-unambiguous batch arguments for NP, for which we give the first construction from the polynomial hardness of LWE. This notion may be of independent interest.
Tianyu Zheng, Shang Gao, Yu Guo, Bin Xiao
Accumulation is a core technique in state-of-the-art Incrementally Verifiable Computation (IVC), as it avoids recursively implementing costly SNARK verification within circuits. However, the recursion overhead in existing IVCs remains significant due to the accumulation verifier complexity, which scales linearly with the number of accumulated instances. In this work, we present a novel accumulation scheme for multiple instances based on polynomial commitment schemes, achieving a theoretical verifier complexity that is sublinear in the number of instances. Technically, our scheme leverages partial evaluation of polynomials in place of random linear combinations, thereby minimizing the costly Commitment Random Linear Combination (CRC) operations on the verifier side. Building on this accumulation scheme, we introduce Quasar, a multi-instance IVC with small recursion overhead in practice.
Notably, Quasar reduces the number of costly CRC operations in the recursive circuit from linear to sublinear, substantially improving practical performance. By instantiating Quasar with appropriate polynomial commitment schemes, it can achieve linear-time accumulation prover complexity, plausible post-quantum security, and support for parallelizable proving at each step.
Bastien Michel, Dounia M'foukh, María Naya-Plasencia
Differential meet-in-the-middle attacks, introduced by Boura et al. in 2023, propose a new way of dealing with differential distinguishers. In particular, they make it possible to combine differential attacks with initial structures, which were previously used exclusively in meet-in-the-middle attacks. Several applications of this new technique have been published, but so far the results on Feistel constructions have not improved much upon the previous best known attacks. In this paper, we apply these attacks to Feistel constructions with all the improvements proposed so far, and we introduce additional new ideas that generically improve this kind of attack. We also propose an automated tool for optimizing the attacks on Simon-like constructions; the tool outputs a graphical representation of the attack that makes it easy to verify. Altogether, this allows us to provide improved single-key key-recovery attacks on most variants of Simon, Simeck, and CLEFIA-256, increasing the highest number of rounds attacked by 1 or 2 in nearly all cases.
Alexander Karenin, Elena Kirshanova, Julian Nowakowski, Alexander May
We study the practicality of a primal hybrid lattice attack on LWE. Despite significant research efforts, the state-of-the-art in practical LWE record computations so far is the plain primal attack with Kannan's embedding.
Building on an idea by Espitau and Kirchner, Bernstein proposed in 2023 an LWE hybrid attack that asymptotically outperforms the primal attack. In a nutshell, Bernstein's attack enumerates some coordinates of an LWE key and uses the sophisticated Batch-CVP {\em (Randomized) Slicer} algorithm to solve LWE in a dimension-reduced lattice. The practical implications of this attack, however, remain largely unclear. One of the major obstacles to judging practicality is the lack of a fast, fully functional Slicer implementation. For the first time, we provide an efficient Slicer implementation that includes all required algorithmic ingredients, such as locality-sensitive hashing.
Building on our Slicer implementation, we implement a generalization of Bernstein's algorithm. While Bernstein's attack works only for LWE, ours also applies to a more general BDD setting. Let $(\mathbf{B}, \mathbf{t})$ be a BDD instance, where the target $\mathbf{t}$ is off from the $d$-dimensional lattice $\mathcal{L}(\mathbf{B})$ by some error~$\mathbf{e}$, sampled coordinate-wise from a distribution $\mathcal{D}$. We show that for hard BDD instances, our BDD hybrid asymptotically speeds up the primal attack's complexity of $T=2^{0.292d + o(d)}$ down to $T^{1-\mathcal{K}}$, where $\mathcal{K} \approx \big(1+\frac{H(\mathcal{D})}{0.058}\big)^{-1}$ with $H(\cdot)$ the Shannon entropy. Depending on $\mathcal{D}$, the constant $\mathcal{K}$ can be small, making practical improvements difficult. We test our Slicer-based implementation inside an implementation of our BDD hybrid lattice attack to tackle LWE instances. We choose two ternary LWE secrets with different entropies $H(\mathcal{D})$, as used in NTRU, and the centered binomial distribution, as used in Kyber. For all three distributions, in all tested LWE dimensions $n \in [160, 210]$, our Slicer-based implementation demonstrates measurable practical speedups over the primal attack, up to a factor of $5$. We also show that for parameters as originally suggested by Regev, the hybrid attack cannot improve over the primal attack.
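Plugging the distributions named above into the formula for $\mathcal{K}$ gives a quick feel for the achievable exponent savings (a back-of-the-envelope sketch, not the paper's estimator; the uniform ternary and $\eta = 2$ centered binomial distributions below are the textbook ones):

```python
from math import log2

def shannon_entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

def K(H):
    """Asymptotic exponent saving K = (1 + H/0.058)^(-1) from the abstract."""
    return 1 / (1 + H / 0.058)

ternary_uniform = [1/3] * 3                    # NTRU-style {-1, 0, 1}
cbd_eta2 = [1/16, 4/16, 6/16, 4/16, 1/16]      # Kyber's centered binomial
for name, dist in [("ternary", ternary_uniform), ("cbd(2)", cbd_eta2)]:
    H = shannon_entropy(dist)
    print(f"{name}: H = {H:.3f} bits, K = {K(H):.4f}, "
          f"hybrid exponent 0.292*(1-K) = {0.292 * (1 - K(H)):.4f}")
```

For uniform ternary secrets ($H \approx 1.585$) this yields $\mathcal{K} \approx 0.035$, i.e., the exponent drops only from $0.292$ to about $0.282$, consistent with the abstract's remark that $\mathcal{K}$ can be small.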
Jesús-Javier Chi-Domínguez
The Matrix Code Equivalence Problem has recently shown potential for constructing efficient and secure advanced digital signatures, in particular linkable ring signatures, threshold signatures, and blind signatures. Current constructions of these advanced signatures rely on relaxed instantiations of the Matrix Code Equivalence Problem: given two pairs of equivalent matrix codes, find (if it exists) the secret isometry connecting the pairs. For example, the linkable ring signature construction by Chou et al. (AFRICACRYPT, 2023) builds on the Inverse Matrix Code Equivalence Problem: given three equivalent matrix codes, where one pair of the codes is connected by the secret isometry and another by the inverse of that isometry, find the secret isometry.
This paper studies the Inverse Matrix Code Equivalence Problem, focusing on the family of instances where the secret isometry is (skew-)symmetric. Our main contribution is a new algorithm for solving these instances. As a consequence, we identify weak instances of this instantiation of the problem: for around 70% of the possible parameter choices (i.e., code dimension $k$ and code lengths $m$ and $n$), our algorithm runs (heuristically) in polynomial time. In addition, our results spotlight an additional 35% of parameter sets where the best algorithm for solving the Matrix Code Equivalence Problem, proposed by Couvreur and Levrat (Crypto, 2025), does not apply.
Our results have a significant security impact on the recent blind signature construction proposed by Kuchta, LeGrow, and Persichetti (IACR ePrint, 2025), whose security is closely tied to the hardness of solving these kinds of instances of the Inverse Matrix Code Equivalence Problem.
Michele Battagliola, Ethan Chen, Hugo Sauerbier Couvée, Violetta Weger
CROSS is a code-based signature scheme based on the Restricted Syndrome Decoding Problem (R-SDP) that is currently among the fourteen candidates in the NIST standardization process. While CROSS enjoys a very competitive verification time, its primary drawback is its large signature size. In this work, we introduce a new Multi-Party Computation in the Head (MPCitH) protocol for the R-SDP with the primary goal of reducing the CROSS signature size. To do so, we design a publicly verifiable secret sharing scheme tailored to restricted vectors and a new multiplicative-to-additive conversion for it. These new cryptographic gadgets may be of independent interest, as they can serve as building blocks for future research in multi-party computation, such as a threshold version of CROSS.
Pierre Guillot, Auguste Hoang Duc, Michel Koskas, Florian Méhats
We present GRAFHEN, a new cryptographic scheme that offers Fully Homomorphic Encryption without the need for bootstrapping (in other words, without noise). Building on the work of Nuida and others, we achieve this using encodings in groups.
The groups are represented on a machine using rewriting systems. In this way, the subgroup membership problem, which an attacker would have to solve in order to break the scheme, becomes maximally hard, while performance is preserved. In fact, we include a simple benchmark demonstrating that our implementation runs several orders of magnitude faster than existing standards.
We review many possible attacks against our protocol and explain how to protect the scheme in each case.
Andrew Huang, Vinod Vaikuntanathan
One-shot signatures (OSS) are a powerful and uniquely quantum cryptographic primitive which allows anyone, given a common reference string, to come up with a public verification key $\mathsf{pk}$ and a secret signing state $\ket{\mathsf{sk}}$. With the secret signing state, one can produce a signature on any single message, but no more. In a recent breakthrough work, Shmueli and Zhandry (CRYPTO 2025) constructed one-shot signatures, either unconditionally in a classical oracle model or assuming post-quantum indistinguishability obfuscation and the hardness of Learning with Errors (LWE) in the plain model.
In this work, we address the inefficiency of the Shmueli-Zhandry construction, which signs messages bit by bit, resulting in signing keys of $\Theta(\lambda^4)$ qubits and signatures of size $\Theta(\lambda^3)$ bits for polynomially long messages, where $\lambda$ is the security parameter. We construct a new, simple, direct, and efficient one-shot signature scheme which can sign messages of any polynomial length using signing keys of $\Theta(\lambda^2)$ qubits and signatures of size $\Theta(\lambda^2)$ bits. We achieve corresponding savings in runtimes, in both the oracle model and the plain model. In addition, unlike the Shmueli-Zhandry construction, our scheme achieves perfect correctness.
Our scheme also achieves strong signature incompressibility, which, among other applications, implies a public-key quantum fire scheme with perfect correctness; this corrects an error in a recent work of Çakan, Goyal and Shmueli (QCrypt 2025) and recovers their applications.
Binyi Chen
Folding schemes are a powerful tool for building scalable proof systems. However, existing folding-based SNARKs require embedding hash functions (modeled as random oracles) into SNARK circuits, introducing both security concerns and significant proving overhead.
We re-envision how to use folding, and introduce Symphony, the first folding-based SNARK that avoids embedding hashes in SNARK circuits. It is memory-efficient, parallelizable, and streaming-friendly, is plausibly post-quantum secure, has polylogarithmic proof size and verification time, and has a prover dominated by the cost of committing to the input witnesses.
As part of our construction, we introduce a new lattice-based folding scheme that compresses a large number of NP-complete statements into one in a single shot, which may be of independent interest. Furthermore, we design a generic compiler that converts a folding scheme into a SNARK without embedding the Fiat-Shamir circuit into proven statements. Our evaluation shows its concrete efficiency, making Symphony a promising candidate for applications such as zkVM, proof of learning, and post-quantum aggregate signatures.
Léo Ducas, Lynn Engelberts, Paola de Perthuis
Is module-lattice reduction better than unstructured lattice reduction? This question was highlighted as 'Q8' in the Kyber NIST standardization submission (Avanzi et al., 2021), as potentially affecting the concrete security of Kyber and other module-lattice-based schemes. Foundational works on module-lattice reduction (Lee, Pellet-Mary, Stehlé, and Wallet, ASIACRYPT 2019; Mukherjee and Stephens-Davidowitz, CRYPTO 2020) confirmed the existence of such module variants of LLL and block-reduction algorithms, but focused only on provable worst-case asymptotic behavior.
In this work, we present a concrete average-case analysis of module-lattice reduction. Specifically, we address the question of the expected slope after running module-BKZ, and pinpoint the discriminant $\Delta_K$ of the number field at hand as the main quantity driving this slope. We convert this back into a gain or loss on the blocksize $\beta$: module-BKZ in a number field $K$ of degree $d$ requires an SVP oracle of dimension $\beta + \log(|\Delta_K| / d^d)\beta /(d\log \beta) + o(\beta / \log \beta)$ to reach the same slope as unstructured BKZ with blocksize $\beta$. This asymptotic summary hides further terms that we predict concretely using experimentally verified heuristics. Incidentally, we provide the first open-source implementation of module-BKZ for some cyclotomic fields.
For power-of-two cyclotomic fields, we have $|\Delta_K| = d^d$, and conclude that module-BKZ requires a blocksize larger than its unstructured counterpart by $d-1+o(1)$. On the contrary, for all other cyclotomic fields we have $|\Delta_K| < d^d$, so module-BKZ provides a sublinear $\Theta(\beta/\log \beta)$ gain on the required blocksize, yielding a subexponential speedup of $\exp(\Theta(\beta/\log \beta))$.
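The leading blocksize-shift term $\log(|\Delta_K| / d^d)\beta /(d\log \beta)$ is easy to evaluate for concrete cyclotomic fields, since the base of the logarithm cancels between numerator and denominator. A sketch using the standard conductor-discriminant formula (lower-order terms, such as the $d-1+o(1)$ correction for power-of-two fields, are omitted):

```python
from math import log
from sympy import totient, primefactors

def cyclotomic_disc_abs(f):
    """|discriminant| of Q(zeta_f): f^phi(f) / prod_{p | f} p^(phi(f)/(p-1))."""
    d = int(totient(f))
    num = f ** d
    for p in primefactors(f):
        num //= p ** (d // (p - 1))
    return num, d

def blocksize_shift(f, beta):
    """Leading term log(|Delta_K| / d^d) * beta / (d * log(beta));
    negative values mean module-BKZ needs a SMALLER blocksize."""
    disc, d = cyclotomic_disc_abs(f)
    return log(disc / d ** d) * beta / (d * log(beta))

for f in (64, 7, 9):  # power-of-two vs prime and prime-power conductors
    print(f, round(blocksize_shift(f, beta=400), 2))
# conductor 64: 0.0 (|Delta_K| = d^d, no leading-order gain)
# conductors 7 and 9: negative, i.e., a blocksize gain, as stated above
```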
Lizhen Zhang, Shang Gao, Sherman S. M. Chow, Kurt Pan, Bin Xiao
We present $\mathsf{HyperWolf}$, a lattice-based, fully transparent polynomial commitment scheme (PCS) for univariate and multilinear polynomials.
To the best of our knowledge, it is the first lattice PCS to simultaneously achieve logarithmic proof size and verification time with standard soundness under standard lattice assumptions over polynomial rings.
Building on sublinear schemes such as $\mathsf{Greyhound}$ (CRYPTO'24) and $\mathsf{BrakeDown}$ (CRYPTO'23), we generalize the two-dimensional approach to a $k$-dimensional witness-folding recursion, yielding a $k$-round hyperdimensional proof. Each round folds the witness along one axis, reducing the tensor arity by one, giving overall cost $O(k N^{1/k})$; choosing $k = \log N$ yields $O(\log N)$ verification time and proof size. For standard $\ell_2$ soundness, we give an exact Euclidean-norm proof tailored to lattice relations: we prove $\langle \vec{f}, \vec{f}\rangle \bmod q$ via an inner-product argument and enforce a small-coefficient bound on $\|\vec{f}\|_\infty$ so that $\langle \vec{f}, \vec{f}\rangle \bmod q = \langle \vec{f}, \vec{f}\rangle$ over $\mathbb{Z}$. Both sub-proofs admit the same structure for $O(\log N)$ complexity.
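To see why $k = \log N$ is the natural choice in the cost $O(k N^{1/k})$, a quick numeric check (constants ignored; purely illustrative):

```python
def fold_cost(N, k):
    """Cost model k * N^(1/k) for the k-round witness-folding recursion."""
    return k * N ** (1 / k)

N = 2 ** 30
for k in (2, 4, 8, 15, 30):
    print(k, round(fold_cost(N, k)))
# k = 2 gives ~2 * sqrt(N) = 65536, the Greyhound-style sqrt(N) regime;
# k = log2(N) = 30 gives 30 * 2 = 60, i.e., N^(1/k) becomes a constant
# and the cost collapses to O(log N)
```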
We further compact the proof using a proof-of-proof IPA à la LaBRADOR (CRYPTO'23), attaining $O(\log\log\log N)$ proof size while preserving logarithmic verification and linear proving. We also describe a candidate optimization that achieves $O(\log\log N)$ proofs without LaBRADOR. For $N = 2^{30}$, $\mathsf{HyperWolf}$ features a ${\sim}53$ KB proof size and, compared to $\mathsf{Greyhound}$, reduces verifier work from $\Theta(\sqrt{N})$ to $\Theta(\log N)$, yielding a $2$ to $3$ orders-of-magnitude improvement for large $N$ while maintaining comparable proof size.