IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
19 July 2025
Elie Eid, Aurélien Greuet, Nathan Reboud, Rina Zeitoun
FrodoKEM is a conservative lattice-based KEM based on the Learning With Errors problem. While it was not selected for NIST standardization, it remains a strong candidate for high-security applications and is recommended by several national agencies, including BSI, ANSSI, and the EUCC. Its reliance on CDT-based Gaussian sampling presents a significant challenge for side-channel secure implementations. While recent work by Gérard and Guerreau [GG25] has shown that masking FrodoKEM is feasible, the Gaussian sampler remains a major bottleneck, accounting for between 34% and 65% of the execution time. In this work, we introduce a new high-order masking gadget for CDT sampling, provably secure in the ISW probing model and significantly more efficient than previous approaches. We instantiate and evaluate our design in the context of FrodoKEM, with a complete first-order implementation on Cortex-M3, reflecting the most relevant threat model in practice.
Compared with [GG25] at first order, the cost of the sampler is reduced by at least 82% and the number of random generations by at least 69%.
Higher-order security is also fully supported through a generic C implementation, with some selected gadgets hand-optimized in assembly to improve efficiency.
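At its core, CDT sampling is a full scan of a cumulative distribution table using branch-free comparisons. A minimal unmasked sketch in Python follows; the table and parameter choices below are illustrative, not FrodoKEM's actual distribution, and the paper's contribution is a high-order masked version of exactly this primitive:

```python
import secrets

# Illustrative cumulative distribution table (CDT); NOT FrodoKEM's real table.
# Each entry is a 15-bit threshold; the sampled magnitude is the number of
# thresholds that the uniform random value reaches.
CDT = [4643, 13363, 20579, 25843, 29227, 31145, 32103,
       32525, 32689, 32745, 32762, 32766, 32767]

def cdt_sample():
    """Sample one value from the toy discrete Gaussian, scanning the whole
    table with no data-dependent branches (the constant-time pattern that
    masked implementations must preserve)."""
    r = secrets.randbits(16)
    sign = r & 1
    r >>= 1                            # 15-bit uniform value
    z = 0
    for t in CDT:
        z += (t - r - 1) >> 15 & 1     # adds 1 exactly when r >= t
    return (z ^ -sign) + sign          # branch-free conditional negation
```

The full-table scan and arithmetic comparison are what make the routine constant-time, and also what make it expensive once every operation must additionally be masked.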
Tim Ruffing
As of November 2021, Bitcoin supports “Taproot” spending policies whose on-chain format is a single elliptic curve point. A transaction spending the funds associated with a Taproot policy can be authorized by interpreting the curve point either (a) as a public key of the Schnorr signature scheme and providing a suitable signature, or (b) as a commitment to alternative spending conditions and satisfying those.
Since a sufficiently powerful quantum adversary would be able to forge Schnorr signatures, an upgrade to Bitcoin may, at some point in the future, disable the ability to spend existing funds via Schnorr signatures in order to prevent the havoc created by leaving a large fraction of the currency supply prone to theft. However, to avoid irrevocably losing all funds not migrated in time to (yet to be added) post-quantum signature schemes, it will be desirable for an upgrade disabling Schnorr signatures to retain the ability to spend funds by interpreting the curve point in a Taproot policy as a commitment to alternative spending conditions.
This paper justifies such an upgrade strategy by demonstrating the post-quantum security of Taproot as a commitment scheme. Specifically, it provides concrete upper bounds on the probability that a quantum adversary making some number of queries to a quantum random oracle can break the binding or hiding property. Since the bounds follow from powerful existing results, which enable reasoning as if dealing with a classical adversary, the proofs are accessible without a background in quantum computing.
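The commitment structure being analyzed can be sketched in a few lines. The toy below uses plain SHA-256 and textbook affine secp256k1 arithmetic; real Taproot (BIP 341) uses tagged hashes and x-only public keys, so this shows only the shape of the commitment, not the exact consensus rules, and the function names are ours:

```python
import hashlib

# secp256k1 parameters (standard public constants).
P = 2 ** 256 - 2 ** 32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    """Affine point addition; None is the point at infinity."""
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0:
        return None
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def ec_mul(k, pt):
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

def taproot_commit(inner_key, script_root):
    """Output key = inner_key + H(inner_key || root) * G."""
    t = int.from_bytes(hashlib.sha256(
        inner_key[0].to_bytes(32, 'big') + script_root).digest(), 'big') % N
    return ec_add(inner_key, ec_mul(t, G))

def taproot_verify(output_key, inner_key, script_root):
    return output_key == taproot_commit(inner_key, script_root)
```

Binding here means that producing a second (inner key, script root) pair opening the same output key requires breaking the hash or the discrete logarithm; the paper's contribution is bounding precisely this for quantum adversaries querying a quantum random oracle.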
Yufei Yuan, Haiyi Xu, Lei Zhang, Wenling Wu
In this paper, we revisit the standard approach to constructing neural distinguishers in symmetric cryptanalysis and introduce a game-like model, the Coin-Tossing model, to generalize this methodology. From the perspective of Boolean functions, we show that many classical cryptanalytic techniques can be generalized as a specific family of Boolean functions, termed the CPF class. We further explore the connection between learning CPF Boolean functions in the Coin-Tossing model and the well-known Learning Parity with Noise (LPN) problem. Leveraging this theoretical analysis, we identify key attributes of CPF functions that significantly affect how effectively machine learning algorithms can learn them. To validate our conclusions, we conduct extensive experiments with machine learning algorithms. Incorporating our theoretical insights, we propose advanced 8-round and 9-round neural distinguishers for SPECK32/64 by reducing the problem complexity. Additionally, we propose a method based on the Goldreich-Levin algorithm to analyze and interpret what black-box distinguishers learn. Using this approach, we reinterpret several established neural distinguishers in terms of their Fourier expansion, resolving them into a small number of Fourier terms. Notably, we identify a new type of distinguisher from neural networks that had not been discovered by cryptanalysts, which can be considered a variant of the Differential-Linear distinguisher. We also demonstrate that the neural network not only learned the optimal Differential-Linear (DL) distinguishers found using existing MILP/MIQCP models, but also discovered superior ones.
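The Fourier expansion used to interpret the distinguishers is the standard Walsh-Hadamard decomposition of a ±1-valued Boolean function. A brute-force sketch for small n is below (the paper's Goldreich-Levin approach instead finds the large coefficients without enumerating all 2^n of them):

```python
from itertools import product

def fourier_coefficients(f, n):
    """Fourier (Walsh-Hadamard) coefficients of f: {0,1}^n -> {-1,+1}.
    coeffs[S] = average over all x of f(x) * (-1)^<S, x>."""
    coeffs = {}
    for S in product((0, 1), repeat=n):
        total = 0
        for x in product((0, 1), repeat=n):
            chi = (-1) ** sum(s & xi for s, xi in zip(S, x))
            total += f(x) * chi
        coeffs[S] = total / 2 ** n
    return coeffs
```

For example, the 2-bit parity function f(x) = (-1)^(x0 xor x1) concentrates all of its Fourier mass on the single coefficient at S = (1, 1), which is the sense in which a distinguisher can be "resolved into a few Fourier terms."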
Keewoo Lee
A Private Information Retrieval (PIR) scheme allows a client to retrieve data from a database hosted on a remote server without revealing which location is being accessed. In Doubly-Efficient PIR (DEPIR), the server preprocesses the database offline into a data structure that enables it to answer any client query in sublinear time with respect to the database size $N$. The breakthrough work of Lin-Mook-Wichs (STOC’23) presented the first DEPIR construction from the Ring-LWE assumption. This remains essentially the only known construction to date, and realizing DEPIR from alternative foundations is an open problem. In particular, constructing DEPIR from plain LWE is still open.
In this work, we present a simple LWE-based construction of DEPIR in the CRS model. Our construction is inspired by SimplePIR (USENIX’23) and leverages the matrix–vector multiplication preprocessing of Williams (SODA’07). By applying the compiler of Lin–Mook–Wichs (Eurocrypt’25), we also obtain an sk-DEPIR scheme, a keyed variant of DEPIR, based on LWE in the standard model. All existing sk-DEPIR constructions, except the one implied by LMW23 based on Ring-LWE, rely on non-standard code-based assumptions. While our constructions are only barely doubly-efficient, with server computation of $O(N / \log{N})$, it was previously unknown whether even such modest sublinear efficiency could be achieved from unstructured, plain LWE.
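The SimplePIR pattern that the construction builds on fits in a few lines: the query is an LWE encryption of a unit vector, the server's online work is a single matrix-vector product, and a database-dependent hint is precomputed offline. The toy sketch below uses parameters far too small for security, omits Williams' preprocessing that the paper layers on top, and all names are ours:

```python
import random

# Toy parameters: far too small for security, illustration only.
n, q, p = 6, 2 ** 16, 2          # LWE dimension, ciphertext/plaintext moduli
rows, cols = 4, 8                # database shape; client fetches one column
delta = q // p

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) % q for row in M]

# Public randomness and the server's bit database.
A = [[random.randrange(q) for _ in range(n)] for _ in range(cols)]
D = [[random.randrange(p) for _ in range(cols)] for _ in range(rows)]

# Offline preprocessing: the database-dependent hint = D * A.
hint = [[sum(D[i][k] * A[k][t] for k in range(cols)) % q for t in range(n)]
        for i in range(rows)]

def query(j):
    """Client: LWE-encrypt the j-th unit vector, one entry per column."""
    s = [random.randrange(q) for _ in range(n)]
    qu = [(sum(A[k][t] * s[t] for t in range(n))     # A * s
           + random.randrange(-1, 2)                 # small noise e_k
           + (delta if k == j else 0)) % q           # + Delta * u_j
          for k in range(cols)]
    return s, qu

def answer(qu):
    """Server online work: one matrix-vector product D * qu."""
    return matvec(D, qu)

def decode(s, ans):
    """Client: subtract hint * s, then round away the noise."""
    return [round(((ans[i] - sum(hint[i][t] * s[t] for t in range(n))) % q)
                  / delta) % p for i in range(rows)]
```

The server never sees which column was requested: the query is indistinguishable from an LWE encryption of zero. The open question the paper addresses is pushing the online product below linear time from plain LWE.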
Anders Lindman
Cascader, a novel key-exchange protocol based on an iterative multiplicative recurrence over a finite field, is introduced. In contrast to standard methods, e.g., traditional Diffie–Hellman and ECC, it replaces exponentiation and scalar multiplication with layered products, achieving commutativity and deterministic pseudorandom behavior.
Hugo Beeloo-Sauerbier Couvée, Antonia Wachter-Zeh, Violetta Weger
The seminal work of [Bardet et al., 2020] has shown the potential of algebraic modelings to solve the Rank Syndrome Decoding Problem (R-SDP). For most parameter ranges, their algorithm first needs to perform a guessing step, making it a hybrid approach between combinatorial and algebraic solvers. In this paper, we present a novel guessing technique, which, when applied to [Bardet et al., 2020], is able to attack the proposed parameters of RYDE, a second-round candidate for the additional NIST standardization process. In particular, we show that all of the proposed RYDE parameters have to be updated: We can reduce the security of RYDE-1 to 104 bits, RYDE-3 to 153 bits, and RYDE-5 to 210 bits.
Janis Adamek, Aikata Aikata, Ahmad Al Badawi, Andreea Alexandru, Armen Arakelov, Philipp Binfet, Victor Correa, Jules Dumezy, Sergey Gomenyuk, Valentina Kononova, Dmitrii Lekomtsev, Vivian Malone ...
Fully Homomorphic Encryption (FHE) enables computation over encrypted data and is considered a fundamental tool for privacy-preserving systems. Despite significant theoretical progress, its practical adoption remains limited. One contributing factor is the absence of reusable, application-level components suitable for integration into real-world systems.
This work introduces a library of FHE components developed through a competition-based framework. The components are outcomes of a series of formalized challenges published on the FHERMA platform, each targeting a specific task—such as comparison, sorting, or matrix operations—under concrete cryptographic and performance constraints.
This initial release includes contributions from independent researchers and reflects a variety of approaches across different FHE schemes. The library is intended to expand over time as new challenges are introduced and solved, forming a foundation for building and evaluating privacy-preserving applications.
16 July 2025
Jules Dumezy, Andreea Alexandru, Yuriy Polyakov, Pierre-Emmanuel Clet, Olive Chakraborty, Aymen Boudguiga
The Cheon--Kim--Kim--Song (CKKS) scheme is a fully homomorphic encryption scheme that traditionally supports only the evaluation of smooth functions. Recent works have enabled the evaluation of arbitrary (discontinuous) integer functions represented as lookup tables (LUT) on small inputs using the method of functional bootstrapping (FBT). Although well-suited for small integers (up to around 10 bits), the efficiency of FBT quickly declines for large LUTs, with a considerable increase in both runtime and memory requirements. Building on CKKS functional bootstrapping, we propose two functional bootstrapping algorithms specifically designed to target larger LUTs (up to 20 bits). For a 16-bit LUT, our implementation in OpenFHE achieves a speed-up of 47.5× in amortized time and 95.1× in latency for single-threaded execution, compared to the state-of-the-art CKKS-based functional bootstrapping method of Alexandru et al. (CRYPTO'25).
Pierre Daix-Moreux, Chengru Zhang
Despite the growing popularity of blockchains, their scalability remains a significant challenge. Layer-2s (L2s) aim to address this by introducing an operator to process transactions off-chain and post compact summaries to the Layer-1 (L1). However, existing L2 designs struggle with unsatisfactory throughput improvements, complex exit games, limited data availability, or high computational overhead for users.
This paper introduces PlasmaFold, a novel L2 designed to overcome these limitations. PlasmaFold utilizes a hybrid architecture: an operator (aggregator) generates proofs on server side for the honest construction of blocks, while users maintain balance proofs on their own devices. This separation of concerns enables instant, non-interactive exits via balance proofs, while block proofs handle most of the validations, minimizing users’ costs. By leveraging Incrementally Verifiable Computation (IVC), PlasmaFold achieves concrete efficiency. Users can update their balance proofs within a browser in under 1 second per transaction using less than 1 GB of RAM. Furthermore, only the identities of users who have acknowledged data receipt are posted to L1, ensuring data availability with a minimal on-chain footprint. This design keeps L1 costs extremely low, enabling a theoretical throughput of over 14000 transactions per second.
Décio Luiz Gazzoni Filho, Gora Adj, Slim Bettaieb, Alessandro Budroni, Jorge Chávez-Saab, Francisco Rodríguez-Henríquez
Many post-quantum cryptosystems require generating an $n$-bit binary vector with a prescribed Hamming weight $\omega$, a process known as \emph{fixed-weight sampling}. When $\omega = O(n)$, we call this \emph{dense} fixed-weight sampling, which commonly appears in lattice-based cryptosystems, like those in the NTRU family. In contrast, code-based cryptosystems typically use \emph{sparse} fixed-weight sampling with $\omega = O(\sqrt{n})$. Sparse fixed-weight sampling generally involves three constant-time steps to keep the sampled vector secret: 1. sample $\omega$ nearly uniform random integers from a series of decreasing intervals; 2. map these integers into a set of $\omega$ distinct indices in $[0, n)$, called the \emph{support}; 3. generate a binary $n$-bit vector with bits set only at the support indices. Remarkably, some of the core algorithms employed in fixed-weight sampling date back nearly a century, yet developing efficient and secure techniques remains essential for modern post-quantum cryptographic applications. In this paper, we present novel algorithms for steps two and three of the fixed-weight sampling process. We demonstrate their practical applicability by replacing the current fixed-weight sampling routine in the HQC post-quantum key encapsulation mechanism, recently selected for NIST standardization. We rigorously prove that our procedures are sound, secure, and introduce little to no bias. Our implementation of the proposed algorithms accelerates step 2 by up to $2.63\times$ and step 3 by up to $5.20\times$ compared to an optimized version of the fixed-weight sampler currently used in HQC. Since fixed-weight sampling constitutes a significant portion of HQC's execution time, these speedups translate into protocol-level improvements of up to $1.36\times$, $1.23\times$ and $1.17\times$ for key generation, encapsulation and decapsulation, respectively.
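The three steps can be written down directly. The reference sketch below is functionally correct but NOT constant-time (it uses secret-dependent memory accesses), which is precisely the gap the paper's algorithms for steps 2 and 3 close; it realizes steps 1 and 2 as a partial Fisher-Yates shuffle, which yields a uniformly random support:

```python
import secrets

def fixed_weight_sample(n, w):
    """Reference (NOT constant-time) fixed-weight sampler: a partial
    Fisher-Yates shuffle picks w distinct indices uniformly at random,
    then bits are set at those positions."""
    # Steps 1+2: sample integers from decreasing intervals [i, n) and
    # map them to w distinct support indices via swaps.
    perm = list(range(n))
    for i in range(w):
        j = i + secrets.randbelow(n - i)    # uniform in [i, n)
        perm[i], perm[j] = perm[j], perm[i]
    support = perm[:w]
    # Step 3: expand the support into an n-bit vector.
    vec = [0] * n
    for idx in support:
        vec[idx] = 1
    return vec
```

In a secure implementation both the swaps and the bit-setting loop must be performed with secret-independent memory access patterns, which is where the bulk of the engineering effort and the paper's speedups lie.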
Jung Hee Cheon, Jihwan Kim, Yongdong Yeo
The Cheon-Kim-Kim-Song (CKKS) homomorphic encryption scheme is widely adopted for securely evaluating circuits over real numbers, such as those arising in privacy-preserving machine learning (PPML), because it efficiently supports approximate floating-point arithmetic of messages. A CKKS ciphertext has a finite level, which corresponds to the budget for how many multiplicative operations can be applied. Once these levels are consumed, the ciphertext must be refreshed through a bootstrapping procedure to restore its capacity for further computation. However, bootstrapping itself also consumes a significant number of levels, leaving fewer levels after each bootstrapping.
In this work, we propose three techniques—OverModRaise1, OverModRaise2, and Tuple-C2S/S2C—that target reductions in the modulus consumption of C2S/S2C among the CKKS bootstrapping procedures, without introducing substantial overhead or compromising security. By combining these techniques, our implementation demonstrates a 27–61% throughput improvement compared to the state-of-the-art bootstrapping.
Takeshi Yoshida, Keita Emura
Ateniese et al. (CRYPTO 2019/JoC 2021) introduced a cryptographic primitive which they call matchmaking encryption (ME); identity-based ME (IB-ME) is its identity-based variant. IB-ME supports equality matching, where a sender (encryptor) indicates a receiver's (decryptor's) identity (rcv) in addition to their own identity ($\sigma$), and a receiver indicates a sender's identity (snd) in addition to their own identity ($\rho$). A ciphertext is decrypted if $(\sigma,\rho)=$(snd,rcv). In this paper, we observe that the search condition of public key authenticated encryption with keyword search (PAEKS) (Huang-Li, Information Sciences 2017) is reminiscent of this equality matching. We introduce a public key variant of ME, which we call PK-ME, and propose a generic construction of PK-ME from PAEKS. Our conceptual contribution lies in revealing a connection between ME and public key searchable encryption, which have so far been researched independently. Due to the generic construction of PAEKS (Li-Boyen, IACR CiC 2024), we can instantiate the proposed generic construction from pairings or lattices. We also introduce a weaker version of authenticity and show that it can be instantiated from several complexity assumptions. Finally, we discuss the advantages and disadvantages of PK-ME compared to IB-ME.
Rahul Ilango
A zero-knowledge proof demonstrates that a fact (like that a Sudoku puzzle has a solution) is true while, counterintuitively, revealing nothing else (like what the solution actually is). This remarkable guarantee is extremely useful in cryptographic applications, but it comes at a cost. A classical impossibility result by Goldreich and Oren [J. Cryptol. '94] shows that zero-knowledge proofs must necessarily sacrifice basic properties of traditional mathematical proofs --- namely perfect soundness (that no proof of a false statement exists) and non-interactivity (that a proof can be transmitted in a single message).
Contrary to this impossibility, we show that zero-knowledge with perfect soundness and no interaction is effectively possible. We do so by defining and constructing a powerful new relaxation of zero-knowledge. Intuitively, while the classical zero-knowledge definition requires that an object called a simulator actually exists, our new definition only requires that one cannot rule out that a simulator exists (in a particular logical sense). Using this, we show that **every falsifiable security property of (classical) zero-knowledge can be achieved with no interaction, no setup, and perfect soundness.** This enables us to remove interaction and setup from (classical) zero-knowledge in essentially all of its applications in the literature, at the relatively mild cost that such applications now have security that is "game-based" instead of "simulation-based."
Our construction builds on the work of Kuykendall and Zhandry [TCC '20] and relies on two central, longstanding, and well-studied assumptions that we show are also necessary. The first is the existence of non-interactive witness indistinguishable proofs, which follows from standard assumptions in cryptography. The second is Krajícek and Pudlák's 1989 conjecture that no optimal proof system exists. This is one of the main conjectures in the field of proof complexity and is the natural finitistic analogue of the impossibility of Hilbert's second problem (and, hence, also Gödel's incompleteness theorem). Our high-level idea is to use these assumptions to construct a prover and verifier where no simulator exists, but the non-existence of a simulator is independent (in the logical sense of unprovability) of an arbitrarily strong logical system. One such logical system is the standard axioms of mathematics: ZFC.
Eda Kırımlı, Chloe Martindale
In this paper, we discuss refined Humbert invariants, introduced by Kani in 1994. We show that in the contexts of SQISign and CSIDH, under GRH, the computational isogeny problem is polynomially equivalent to the computational refined Humbert invariant problem. This includes an algorithm to compute the basis of a maximal order of a quaternion algebra given its norm form. Finally, we also review what is known in the literature about computing refined Humbert invariants.
Jieyi Long
In this work, we present Interstellar, a novel folding and IVC framework built on a technique we call circuit interpolation, designed specifically for circuit satisfiability. By incorporating the GKR protocol, our approach avoids commitments to full computation traces and cross-term vectors, requiring instead only commitments to the actual circuit witness and optionally a small subset of intermediate gate values. This design significantly reduces the size of the vectors to be committed to in each folding step, which is an important advantage over existing schemes, as vector commitments typically incur costly group multi-scalar multiplications. Moreover, Interstellar is highly flexible. It can be extended naturally to handle high-degree and lookup gates, enable multi-instance folding, and support non-uniform IVC efficiently, making it well-suited for practical applications ranging from zkML to proving program execution for zkVMs. We instantiate our protocol with various vector/polynomial commitment schemes and provide detailed cost analyses, demonstrating substantial reductions in prover overhead compared to existing approaches.
Vojtech Suchanek, Jan Jancar, Jan Kvapil, Petr Svenda, Łukasz Chmielewski
Developers implementing elliptic curve cryptography (ECC) face a wide range of implementation choices created by decades of research into elliptic curves. The literature offers a plethora of curve models, scalar multipliers, and addition formulas, but this comes at a price: attacks can exploit the same rich structure. Navigating this area is not an easy task, and developers often obscure their choices, especially in black-box hardware implementations. Since side-channel attackers rely on knowledge of the implementation details, reverse engineering becomes a crucial part of attacks.
This work presents ECTester -- a tool for testing black-box ECC implementations. Through various test suites, ECTester observes the behavior of the target implementation against known attacks but also non-standard inputs and elliptic curve parameters. We analyze popular ECC libraries and smartcards and show that some libraries and most smartcards do not check the order of the input points and improperly handle the infinity point. Based on these observations, we design new techniques for reverse engineering scalar randomization countermeasures that are able to distinguish between group scalar randomization, additive, multiplicative or Euclidean splitting. Our techniques do not require side-channel measurements; they only require the ability to set custom domain parameters, and are able to extract not only the size but also the exact value of the random mask used. Using the techniques, we successfully reverse-engineered the countermeasures on 13 cryptographic smartcards from 5 major manufacturers -- all but one we tested on. Finally, we discuss what mitigations can be applied to prevent such reverse engineering, and whether it is possible at all.
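The four countermeasure families being fingerprinted differ only in how the secret scalar k is re-encoded before the scalar multiplication. Real implementations apply these identities to elliptic-curve points; in the sketch below ordinary modular integers stand in (using the public NIST P-256 group order as an example), and the variable names are ours:

```python
import secrets

# NIST P-256 group order (any prime-order group behaves the same way).
order = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

k = secrets.randbelow(order)          # the secret scalar
r = secrets.randbelow(2 ** 64) + 1    # a fresh random mask, r >= 1

# Group scalar randomization: k' = k + r*order; [k']P = [k]P since [order]P = O.
k_grp = k + r * order
assert k_grp % order == k

# Additive splitting: k = k1 + k2 mod order; compute [k1]P + [k2]P.
k1 = secrets.randbelow(order)
k2 = (k - k1) % order
assert (k1 + k2) % order == k

# Multiplicative splitting: k = r * (r^-1 * k); compute [r]([r^-1 * k]P).
r_inv_k = pow(r, -1, order) * k % order
assert r * r_inv_k % order == k

# Euclidean splitting: k = floor(k/r)*r + (k mod r); compute [rem]P + [q]([r]P).
q_part, rem = divmod(k, r)
assert q_part * r + rem == k
```

Because each encoding misbehaves differently when fed non-standard domain parameters (for example a curve whose stated order is wrong), an attacker who controls those parameters can tell the four apart, and even recover the mask r, without any side-channel measurement.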
Anmoal Porwal, Antonia Wachter-Zeh, Pierre Loidreau
We introduce a new key recovery attack on the public-key encryption scheme using matrix codes proposed by Aragon et al. in Asiacrypt 2024. The secret key is a matrix code obtained by expanding an $\mathbb{F}_{q^m}$-linear Gabidulin code over an $\mathbb{F}_{q}$-basis of $\mathbb{F}_{q^m}$. This code is hidden by appending random rows and columns to a basis and then left- and right-multiplying by scrambling matrices. We show how to recover the secret code with an exponential complexity that is generally better than the current best distinguisher. This also breaks a few of their proposed parameters. Our attack does not rely on the Gabidulin structure and thus applies to most $\mathbb{F}_{q^m}$-linear codes hidden by their transform.
Ariel Futoransky, Gabriel Larotonda, Fadi Barbara
We provide minimal counterexamples for the security of the BitVM3 garbling scheme: our attack allows the evaluator to forge input and output wires. Then we use the same idea to exhibit an attack on the forward label propagation garbling scheme proposed in a more recent paper. In both cases, the authenticity property of the garbling scheme is broken.
Oriol Farràs, Vincent Grosso, Miquel Guiot, Carlos Andres Lara-Nino
Remote power analysis is a novel threat to information systems. Under this attack model, the adversary requires neither direct physical access to the platform nor specialized sensing equipment. Most of the literature in this field deals with advanced acquisition methods and adversarial models; in contrast, side-channel analysis techniques for remote attacks have not been sufficiently explored. We bridge this gap by examining the characteristics of the data recovered from remote power analysis. We use these insights to propose a novel selection rule for correlation-based attacks that boosts success confidence. This improvement stems from the observation that the samples in a power trace are not independent. We show that adjacent samples can also provide useful information by proposing a post-processing step that capitalizes on these additional leakages. In contrast to previous work, the proposed technique does not rely on the selection of points of interest within the power traces. We further investigate the characteristics of "remote" power traces and their effect on the proposed selection rule through experiments with real (TDC, ChipWhisperer) and synthetic data sets. To assess the advantage of the proposed improvement, we also introduce novel performance metrics that depart from known-key evaluation techniques.
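The baseline that the new selection rule improves on is standard correlation power analysis: rank each key-byte guess by the Pearson correlation between a hypothetical leakage model and the measured traces, one sample at a time. A self-contained sketch against synthetic Hamming-weight leakage of the AES S-box is below (one sample per trace; the paper's point is that neighboring samples carry exploitable information this per-sample rule ignores):

```python
import random

def gf_mul(a, b):
    """Multiplication in GF(2^8) with the AES modulus 0x11B."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def rotl8(b, s):
    return ((b << s) | (b >> (8 - s))) & 0xFF

# Build the AES S-box: field inverse (x^254) followed by the affine map.
SBOX = []
for x in range(256):
    inv = gf_pow(x, 254) if x else 0
    SBOX.append(inv ^ rotl8(inv, 1) ^ rotl8(inv, 2)
                ^ rotl8(inv, 3) ^ rotl8(inv, 4) ^ 0x63)

HW = [bin(v).count("1") for v in range(256)]     # Hamming-weight table

# Synthetic traces: HW of the S-box output plus Gaussian noise.
random.seed(42)
secret_key = 0x2A
plaintexts = [random.randrange(256) for _ in range(300)]
traces = [HW[SBOX[pt ^ secret_key]] + random.gauss(0, 1.0) for pt in plaintexts]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def cpa_best_guess():
    """Classic selection rule: the guess maximizing |correlation|."""
    return max(range(256),
               key=lambda g: abs(pearson([HW[SBOX[pt ^ g]] for pt in plaintexts],
                                         traces)))
```

The proposed improvement replaces the single-sample `pearson` score with a rule that also aggregates the correlated leakage of adjacent trace samples, which matters for the coarse traces obtained remotely.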
Yufan Jiang, Maryam Zarezadeh, Tianxiang Dai, Stefan Köpsell
Federated learning (FL) proposes to train a global machine learning model across distributed datasets. However, the aggregation protocol as the core component in FL is vulnerable to well-studied attacks, such as inference attacks, poisoning attacks [71] and malicious participants who try to deviate from the protocol [24]. Therefore, it is crucial to achieve both malicious security and poisoning resilience from cryptographic and FL perspectives, respectively. Prior works either achieve incomplete malicious security [76], address issues by using expensive cryptographic tools [22, 59] or assume the availability of a clean dataset on the server side [32].
In this work, we propose AlphaFL, a two-server secure aggregation protocol achieving both malicious security in the universal composability (UC) framework [19] and poisoning resilience in FL (thus malicious$^2$) against a dishonest majority. We design maliciously secure multi-party computation (MPC) protocols [24, 26, 48] and introduce an efficient input commitment protocol tolerating server-client collusion (dishonest majority). We also propose an efficient input commitment protocol for the non-collusion case (honest majority), which triples the efficiency in time and quadruples that in communication, compared to the state-of-the-art solution in MP-SPDZ [46]. To achieve poisoning resilience, we carry out $L_\infty$ and $L_2$-Norm checks with a dynamic $L_2$-Norm bound by introducing a novel silent select protocol, which improves the runtime by at least two times compared to the classic select protocol. Combining these, AlphaFL achieves malicious$^2$ security at a cost of 25% − 79% more runtime overhead than the state-of-the-art semi-malicious counterpart Elsa [76], with even less communication cost.