International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

04 December 2025

Juan Garay, Clint Givens, Rafail Ostrovsky
ePrint Report
Secure multiparty computation (MPC) is perhaps the most popular paradigm in the area of cryptographic protocols. It allows several mutually untrustworthy parties to jointly compute a function of their private inputs, without revealing to each other information about those inputs. In the case of unconditional (information-theoretic) security, protocols are known which tolerate a dishonest minority of players, who may coordinate their attack and deviate arbitrarily from the protocol specification. It is typically assumed in these results that parties are connected pairwise by authenticated, private channels, and that in addition they have access to a “broadcast” channel. Broadcast allows one party to send a consistent message to all other parties, guaranteeing consistency even if the broadcaster is corrupted. Because broadcast cannot be simulated on the point-to-point network when more than a third of the parties are corrupt, it is impossible to construct general MPC protocols in this setting without using a broadcast channel (or some equivalent addition to the model).

A great deal of research has focused on increasing the efficiency of MPC, primarily in terms of round complexity and communication complexity. In this work we propose a refinement of the round complexity which we term broadcast complexity. We view the broadcast channel as an expensive resource and seek to minimize the number of rounds in which it is invoked.

1. We construct an MPC protocol which uses the broadcast channel only three times in a preprocessing phase, after which it is never required again. Ours is the first unconditionally secure MPC protocol for $t < n/2$ to achieve such a low number of broadcast rounds. In contrast, combining the best previous techniques yields a protocol with twenty-four broadcast rounds.

2. In the negative direction, we show a lower bound of two broadcast rounds for the specific functionality of Weak Secret Sharing (a.k.a. Distributed Commitment), itself a very natural functionality and a central building block of many MPC protocols.

The broadcast-efficient MPC protocol relies on new constructions of Pseudosignatures and Verifiable Secret Sharing, both of which might be of independent interest.
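
The preprocessing-only use of broadcast is meaningful because broadcast can, in principle, be emulated over point-to-point channels once (pseudo)signatures are in place. As a minimal illustration of that classic idea, here is a toy sketch of the Dolev-Strong protocol over simulated channels, with signatures idealized as unforgeable signer chains; all names are hypothetical and this is not the paper's construction:

```python
# Toy Dolev-Strong broadcast over simulated point-to-point channels.
# Signatures are idealized as chains of distinct signer ids; a real protocol
# would use a PKI or, information-theoretically, pseudosignatures.
def dolev_strong(n, t, sender, value, corrupt=frozenset()):
    """Honest parties agree on the sender's value after t+1 rounds."""
    extracted = {p: set() for p in range(n)}          # values accepted so far
    inbox = {p: [] for p in range(n)}
    inbox_next = {p: [(value, (sender,))] for p in range(n)}  # sender signs and sends

    for r in range(1, t + 2):                          # rounds 1 .. t+1
        inbox, inbox_next = inbox_next, {p: [] for p in range(n)}
        for p in range(n):
            if p in corrupt:
                continue                               # corrupt parties may do anything
            for v, chain in inbox[p]:
                # accept a value only with r distinct signatures, the first
                # being the sender's, and only if not already extracted
                if (len(chain) == r and chain[0] == sender
                        and len(set(chain)) == len(chain)
                        and v not in extracted[p]):
                    extracted[p].add(v)
                    if r <= t and p not in chain:      # add own signature, relay
                        for q in range(n):
                            inbox_next[q].append((v, chain + (p,)))

    # output the unique extracted value, or a default (None) on ambiguity
    return {p: (next(iter(extracted[p])) if len(extracted[p]) == 1 else None)
            for p in range(n) if p not in corrupt}

print(dolev_strong(n=4, t=1, sender=0, value="m"))
```

With an honest sender every honest party extracts exactly one value; with a corrupt sender all honest parties still agree (possibly on the default), which is the consistency property that broadcast rounds provide.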
Noé Amiot, Quentin Meunier, Karine Heydemann, Emmanuelle Encrenaz
ePrint Report
Verifying the security of masked hardware and software implementations, under advanced leakage models, remains a significant challenge, especially when accounting for glitches, transitions, and CPU micro-architectural specifics. Existing verification approaches are either restricted to small hardware gadgets, to small programs running on CPUs (such as S-boxes), or to limited leakage models, or they require hardware-specific prior knowledge. In this work, we present aLEAKator, an open-source framework for the automated formal verification of masked cryptographic accelerators and of software running on CPUs, directly from their HDL descriptions. Our method introduces mixed-domain simulation, enabling precise modeling and verification under various (including robust and relaxed) 1-probing leakage models, and supports variable signal granularity without being restricted to 1-bit wires. aLEAKator also supports verification in the presence of lookup tables, and does not require prior knowledge of the target CPU architecture. Our approach is validated against existing tools and real-world measurements, while providing novel results such as the verification of a full, first-order masked AES on various CPUs.
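
For readers unfamiliar with the object being verified, here is a minimal sketch (not from the paper) of first-order Boolean masking: each secret is split into two shares so that any single probed value is uniformly distributed, and linear operations act share-wise:

```python
import secrets

def mask(x, bits=8):
    """Split secret x into two Boolean shares; each share alone is uniform."""
    r = secrets.randbits(bits)
    return r, x ^ r                    # share0 ^ share1 == x

def xor_gadget(a, b):
    """Linear operations act share-wise and never recombine the secret."""
    return a[0] ^ b[0], a[1] ^ b[1]

a, b = mask(0x3A), mask(0xC5)
c = xor_gadget(a, b)
assert c[0] ^ c[1] == 0x3A ^ 0xC5
```

Nonlinear gates require randomized gadgets, and hardware effects such as glitches or transitions can momentarily recombine shares; checking that no such recombination leaks is precisely what tools like aLEAKator automate.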
Andrea Basso, Chenfeng He, David Jacquemin, Fatna Kouider, Péter Kutas, Anisha Mukherjee, Sina Schaeffler, Sujoy Sinha Roy
ePrint Report
SQIsign, the only isogeny-based signature competing in the ongoing NIST call for additional signatures, offers the most compact key and signature sizes among all candidates. It combines isogenies with quaternion arithmetic for its signing procedure. In this work, we address a gap in the current implementation of SQIsign: the absence of constant-time algorithms for quaternion arithmetic. We propose constant-time algorithmic formulations for three fundamental routines in SQIsign's quaternion layer.

First, we discuss a constant-time Hermite Normal Form (HNF) algorithm. We then present a new constant-time approach for computing a generator of a quaternion ideal, replacing the exhaustive search-based approach used in SQIsign. Our approach eliminates the need for coefficient scanning, coprimality tests, and norm evaluation loops, yielding a data-independent and deterministic procedure.

Finally, we design a constant-time version of the GeneralizedRepresentInteger algorithm for solving norm equations in special extremal orders. We circumvent timing dependencies arising from primality checks, modular square root calculations, and Euclidean division steps by introducing a regularized control flow with fixed-iteration sampling and branch-free arithmetic. We also show that the tools developed along the way enable a constant-time version of the recently introduced Qlapoti algorithm.

The cost of large-operand arithmetic remains a bottleneck for our constant-time HNF and GeneralizedRepresentInteger algorithms. We believe our work will facilitate secure and efficient implementations and inspire further work on deployment-level optimizations.
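
For intuition, the branch-free style these routines rely on replaces data-dependent branches with mask arithmetic. A minimal sketch of the pattern (Python itself offers no timing guarantees; this only illustrates the technique):

```python
W = (1 << 64) - 1                      # word mask for 64-bit arithmetic

def ct_mask(cond):
    """All-ones mask if cond == 1, all-zeros if cond == 0, branch-free."""
    return (-cond) & W

def ct_select(cond, a, b):
    """Return a if cond == 1 else b, without a data-dependent branch."""
    m = ct_mask(cond)
    return (b ^ (m & (a ^ b))) & W

assert ct_select(1, 7, 9) == 7 and ct_select(0, 7, 9) == 9
```

Fixed-iteration sampling follows the same idea: run a loop for a worst-case number of iterations and commit the result with a constant-time select, so the control flow is independent of secret data.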
Hanyue Dou, Peifang Ni, Yingzi Gao, Jing Xu
ePrint Report
A Single Secret Leader Election (SSLE) protocol facilitates the election of a single leader per round among a group of registered nodes while ensuring unpredictability. Ethereum has identified SSLE as an essential component in its development roadmap and has adopted it as a potential solution to counteract potential attacks. However, we identify a new form of attack, termed the state uniqueness attack, caused by malicious leaders proposing multiple publicly verifiable states. This attack undermines the property of uniqueness in subsequent leader elections and, with high probability, leads to violations of fundamental security properties, such as liveness, of the overlying protocol. The vulnerability stems inherently from designs that reduce the uniqueness guarantee to a unique state per election, and it generalizes to existing SSLE constructions. We further quantify the severity of this attack based on theoretical analysis and real-world executions on Ethereum, highlighting the critical challenges in designing provably secure SSLE protocols. To address the state uniqueness attack while ensuring both security and practical performance, we present a universal SSLE protocol called Mobius that does not rely on extra trust assumptions. Specifically, Mobius prevents the generation of multiple verifiable states for each election and achieves a unique state across consecutive executions through an innovative approximately-unique randomization mechanism. In addition to providing a comprehensive security analysis in the Universal Composability framework, we develop a proof-of-concept implementation of Mobius and conduct extensive experiments to evaluate its security and overhead. The experimental results show that Mobius exhibits enhanced security while significantly reducing communication complexity throughout the protocol execution, achieving over an 80% reduction in the registration phase.
Pedro Branco, Pratik Soni, Sri AravindaKrishnan Thyagarajan, Ke Wu
ePrint Report
Secure coin-tossing is typically modeled as an input-less functionality, where parties with no private inputs jointly generate a fair coin. In the dishonest majority setting, however, a strongly fair coin-tossing protocol is impossible. To circumvent this barrier, recent work has adopted the weaker notion of game-theoretic fairness, where adversaries are rational parties with preferences for specific outcomes, seeking to bias the coin in their favor. Yet these preferences may encode secret information, making prior protocols, which assume preferences are public, fundamentally incompatible with privacy.

We initiate a comprehensive study of privacy-preserving game-theoretically fair coin-tossing, where the preferences of honest parties remain private. We propose a simulation-based security framework and a new ideal functionality that reconciles both preference privacy and game-theoretic fairness. A key ingredient is a certifying authority that authenticates each party’s preference and publishes only aggregate statistics, preventing misreporting while hiding parties' preferences. The functionality guarantees that every honest party receives an output: either a uniform coin or, if an adversary deviates, a coin that strictly decreases the adversarial coalition's expected utility.

Within this framework, we construct a protocol realizing our ideal functionality under standard cryptographic assumptions that works for both binary and general $m$-sided coin-tossing. Our schemes tolerate the same optimal (or nearly optimal) corruption thresholds as the best known protocols with public preferences (Wu-Asharov-Shi, EUROCRYPT '22; Thyagarajan-Wu-Soni, CRYPTO '24). Technically, our protocols combine authenticated preferences with an anonymous communication layer that decouples identities from preference-dependent actions, together with a deviation-penalty mechanism that enforces game-theoretic fairness.

Our work is the first to reconcile game-theoretic fairness with preference privacy, offering new definitional tools and efficient protocols for rational multi-party computation in dishonest majority settings.
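
For contrast, the textbook commit-then-reveal coin toss achieves only a weak form of fairness: the party who reveals last can abort after seeing the outcome, which is exactly the kind of rational deviation that game-theoretic fairness penalizes. A minimal sketch with hash commitments (an illustrative baseline, not the paper's protocol):

```python
import secrets, hashlib

def commit(bit):
    """Hash commitment to a single bit with a fresh 32-byte nonce."""
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + bytes([bit])).hexdigest(), nonce

# Each party commits to a random bit, then both reveal; the coin is the XOR.
b0, b1 = secrets.randbits(1), secrets.randbits(1)
c0, n0 = commit(b0)
c1, n1 = commit(b1)

# Reveal phase: verify the commitments before accepting the bits.
assert hashlib.sha256(n0 + bytes([b0])).hexdigest() == c0
assert hashlib.sha256(n1 + bytes([b1])).hexdigest() == c1
coin = b0 ^ b1   # a party who sees the other's reveal first can abort to bias this
```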
Lynn Engelberts, Yanlin Chen, Amin Shiraz Gilani, Maya-Iggy van Hoof, Stacey Jeffery, Ronald de Wolf
ePrint Report
The assumed hardness of the Shortest Vector Problem in high-dimensional lattices is one of the cornerstones of post-quantum cryptography. The fastest known heuristic attacks on SVP are via so-called sieving methods. While these still take exponential time in the dimension $d$, they are significantly faster than non-heuristic approaches and their heuristic assumptions are verified by extensive experiments. $k$-Tuple sieving is an iterative method where each iteration takes as input a large number of lattice vectors of a certain norm, and produces an equal number of lattice vectors of slightly smaller norm, by taking sums and differences of $k$ of the input vectors. Iterating these "sieving steps" sufficiently many times produces a short lattice vector. The fastest attacks (both classical and quantum) are for $k=2$, but taking larger $k$ reduces the amount of memory required for the attack. In this paper we improve the quantum time complexity of 3-tuple sieving from $2^{0.3098 d}$ to $2^{0.2846 d}$, using a two-level amplitude amplification aided by a preprocessing step that associates the given lattice vectors with nearby "center points" to focus the search on the neighborhoods of these center points. Our algorithm uses $2^{0.1887d}$ classical bits and QCRAM bits, and $2^{o(d)}$ qubits. This is the fastest known quantum algorithm for SVP when total memory is limited to $2^{0.1887d}$.
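
To make the sieving step concrete, here is a toy pass for the simplest case $k=2$ over small integer vectors: replace a vector whenever a sum or difference with another vector is shorter. This ignores every heuristic (and all the nearest-neighbor and amplitude-amplification machinery) that real sieves, classical or quantum, depend on:

```python
import random

def norm2(v):
    """Squared Euclidean norm."""
    return sum(x * x for x in v)

def sieve_step(vecs):
    """One pass: replace vectors by shorter v +/- w combinations when found."""
    out = [list(v) for v in vecs]
    for i in range(len(out)):
        for j in range(len(out)):
            if i == j:
                continue
            for sign in (-1, 1):
                cand = [a + sign * b for a, b in zip(out[i], out[j])]
                if 0 < norm2(cand) < norm2(out[i]):
                    out[i] = cand   # keep the shorter combination
    return out

# Toy lattice: random integer vectors generate (a sublattice of) Z^d.
random.seed(0)
d = 6
vecs = [[random.randint(-20, 20) for _ in range(d)] for _ in range(40)]
for _ in range(5):
    vecs = sieve_step(vecs)
print(min(vecs, key=norm2))         # norms shrink with each pass
```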
Ye Dong, Xiangfu Song, W.j Lu, Xudong Chen, Yaxi Yang, Ruonan Chen, Tianwei Zhang, Jin-Song Dong
ePrint Report
Secure two-party computation (2PC)-based privacy-preserving machine learning (ML) has made remarkable progress in recent years. However, most existing works overlook the privacy challenges that arise during the data preprocessing stage. Although some recent studies have introduced efficient techniques for privacy-preserving feature selection and data alignment on well-structured datasets, they still fail to address the privacy risks involved in transforming raw data features into ML-effective numerical representations.

In this work, we present ALIOTH, an efficient 2PC framework that securely transforms raw categorical and numerical features into Weight-of-Evidence (WoE)-based numerical representations under both vertical and horizontal data partitions. By incorporating our proposed partition-aware 2PC protocols and vectorization optimizations, ALIOTH efficiently generates WoE-transformed datasets in secret. To demonstrate scalability, we conduct experiments on diverse datasets. Notably, ALIOTH can transform 3 million data samples with 100 features securely within half an hour over a wide-area network. Furthermore, ALIOTH can be seamlessly integrated with existing 2PC-based ML frameworks. Empirical evaluations on real-world financial datasets show ALIOTH improves both the predictive performance of logistic regression and 2PC training efficiency.
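
For readers unfamiliar with the transform itself: the Weight of Evidence of a category compares its share of positive labels to its share of negative labels, $\mathrm{WoE}(c) = \ln\big((g_c/g)/(b_c/b)\big)$. A plaintext reference computation follows; the paper's contribution is computing this under 2PC, which this sketch does not attempt:

```python
import math
from collections import Counter

def woe_table(categories, labels):
    """Map each category to its Weight-of-Evidence value (plaintext).
    Assumes every category has at least one good and one bad sample;
    real pipelines smooth zero counts."""
    good = Counter(c for c, y in zip(categories, labels) if y == 1)
    bad  = Counter(c for c, y in zip(categories, labels) if y == 0)
    g_tot, b_tot = sum(good.values()), sum(bad.values())
    return {c: math.log((good[c] / g_tot) / (bad[c] / b_tot))
            for c in set(categories)}

cats   = ["A", "A", "B", "B", "B", "C", "C", "A"]
labels = [ 1,   0,   1,   1,   0,   0,   1,   1 ]
print(woe_table(cats, labels))   # e.g. WoE("A") = ln((2/5)/(1/3))
```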
Zhongming Wang, Tao Xiang, Xiaoguo Li, Guomin Yang, Biwen Chen, Ze Jiang, Jiacheng Wang, Chuan Ma, Robert H. Deng
ePrint Report
Encrypted messaging systems provide end-to-end security for users but obstruct content moderation, making it difficult to combat online abuses. Traceability offers a promising solution by enabling platforms to identify the originator/spreader of messages, yet this capability can be abused for mass surveillance of innocent messages. To mitigate this risk, existing approaches restrict traceability to (problematic) messages that are reported by multiple users or are on a predefined blocklist. However, these solutions either overtrust a specific entity (e.g., the party defining the blocklist) or rely on the unrealistic assumption of non-collusion between servers run by a single platform.

In this paper, we propose an abuse-resistant source tracing scheme that distributes traceability across distinct real-world entities. Specifically, we formally define its syntax and prove its security properties. Our scheme realizes two essential principles: minimal trust, which ensures that traceability cannot be abused as long as a single participant involved in tracing is honest, even if all others collude; and minimal information disclosure, which prevents participants from acquiring any information (e.g., communication parties' identities) unnecessary for tracing. We implemented our scheme using techniques deployed by Signal, and our evaluation shows it offers comparable performance to state-of-the-art schemes that are vulnerable to abuse.
Simon Gerhalter, Samir Hodžić, Marcel Medwed, Marcel Nageler, Artur Folwarczny, Ventzi Nikov, Jan Hoogerbrugge, Tobias Schneider, Gary McConville, Maria Eichlseder
ePrint Report
Modern CPU architectures include various security features to mitigate software attacks, such as logical isolation, memory tagging, or shadow stacks. Basing such features on cryptographic isolation instead of logical checks can have many advantages, such as lower memory overhead and more robustness against misconfiguration or low-cost physical attacks. The disadvantage of this approach, however, is that the cipher that has to be introduced has a severe impact on system performance, either in terms of additional cycles or a decrease of the maximum achievable frequency. Moreover, as of today, there is no suitable low-latency cipher design available for encrypting 32-bit words, as is common in microcontrollers. In this paper, we propose a 32-bit tweakable block cipher tailored to memory encryption for microcontroller units. We optimize this cipher for low latency, which we achieve by a careful selection of components for the round function and by leveraging an attack scenario similar to the one used to analyze the cipher SCARF. To mitigate some attack vectors introduced by this attack scenario, we deploy a complex tweak-key schedule. Due to the shortage of suitable 32-bit designs, we compare our design to various low-latency ciphers with different block sizes. Our hardware implementation shows competitive latency numbers.
Jiayun Yan, Yu Li, Jie Chen, Haifeng Qian, Xiaofeng Chen, Debiao He
ePrint Report
We present fully adaptively secure threshold IBE and threshold signatures, which rely on the $k$-Linear assumption in the standard model over asymmetric pairing groups. In particular, our threshold signature scheme achieves a non-interactive signing process and an adaptive security guarantee as strong as that of Das-Ren (CRYPTO'24), whose proof relies on the random oracle model. We achieve our results via the following steps: First, we design two threshold IBE schemes secure against adaptive corruptions in composite-order and prime-order groups by adopting the dual system groups encoding. Second, we provide a generic transform from threshold IBE to threshold signatures, following Naor's paradigm, which reduces the fully adaptive corruption security of threshold signatures to that of the threshold IBE. Third, we present two threshold signature instantiations in composite-order and prime-order groups.
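
For reference, Naor's paradigm invoked above converts any IBE scheme into a signature scheme (shown here in its basic, non-threshold form; the paper extends it to the threshold setting):

```latex
\begin{itemize}
  \item $\mathsf{Sign}(\mathsf{msk}, m)$: output
        $\sigma \gets \mathsf{IBE.KeyGen}(\mathsf{msk}, \mathsf{id} = m)$;
        i.e., a signature on $m$ is the IBE secret key for identity $m$.
  \item $\mathsf{Verify}(\mathsf{mpk}, m, \sigma)$: sample a random message $r$,
        compute $c \gets \mathsf{IBE.Enc}(\mathsf{mpk}, \mathsf{id} = m, r)$,
        and accept iff $\mathsf{IBE.Dec}(\sigma, c) = r$.
\end{itemize}
```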
Yanyi Liu, Rafael Pass
ePrint Report
We revisit the question of whether worst-case hardness of the time-bounded Kolmogorov complexity problem, $\KpolyA$---that is, determining whether a string is ``structured'' (i.e., $K^t(x) < n-1$) or ``random'' (i.e., $K^{\poly(t)}(x) \geq n-1$)---suffices to imply the existence of one-way functions (OWF). Liu-Pass (CRYPTO'25) recently showed that worst-case hardness of a \emph{boundary} version of $\KpolyA$---where, roughly speaking, the goal is to decide whether a given instance $x$ is (a) $K^\poly$-random (i.e., $K^{\poly(t)}(x) \geq n-1$), or just close to $K^\poly$-random (i.e., $K^{t}(x) < n-1$ \emph{but} $K^{\poly(t)}(x) > n - \log n$)---characterizes OWF, but with either of the following caveats: (1) considering a non-standard notion of \emph{probabilistic $K^t$}, as opposed to the standard notion of $K^t$, or (2) assuming somewhat strong, and non-standard, derandomization assumptions. In this paper, we present an alternative method for establishing their result which enables significantly weakening the caveats. First, we show that boundary hardness of the more standard \emph{randomized} $K^t$ problem suffices (where randomized $K^t(x)$ is defined just like $K^t(x)$ except that the program generating the string $x$ may be randomized). As a consequence of this result, we can provide a characterization also in terms of just ``plain'' $K^t$ under the most standard derandomization assumption (used to derandomize just $\BPP$ into $\P$)---namely, $\E \not\subseteq {\sf ioSIZE}[2^{o(n)}]$.

Our proof relies on language compression schemes of Goldberg-Sipser (STOC'85); using the same technique, we also present the first worst-case to average-case reduction for the \emph{exact} $\KpolyA$ problem (under the same standard derandomization assumption), improving upon Hirahara's celebrated results (STOC'18, STOC'21), which only applied to a \emph{gap} version of the $\KpolyA$ problem, referred to as $\GapKpolyA$, where the goal is to decide whether $K^t(x) \leq n-O(\log n)$ or $K^{\poly(t)}(x) \geq n-1$.
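
For orientation, the underlying quantity is the $t$-time-bounded Kolmogorov complexity; the standard definition (the paper's $\KpolyA$ notation refers to its decision-problem variant) is:

```latex
K^{t}(x) \;=\; \min\bigl\{\, |\Pi| \;:\; U(\Pi) \text{ outputs } x
\text{ within } t(|x|) \text{ steps} \,\bigr\},
```

where $U$ is a fixed universal Turing machine; in the randomized variant, the program $\Pi$ may toss coins and is required to output $x$ with high probability.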
Suraj Mandal, Prasanna Ravi, M Dhilipkumar, Debapriya Basu Roy, Anupam Chattopadhyay
ePrint Report
The threat of practical quantum attacks has catapulted viable alternatives like Post-Quantum Cryptography (PQC) into prominence. The adoption and integration of standardized PQC primitives across the entire digital stack are promoted by various standardization bodies, governments, and major corporations. A serious challenge in quantum migration is to ensure that there is no hidden backdoor in the PQC implementations of a hybrid cryptosystem (one supporting both pre-quantum and post-quantum algorithms), which are often procured from a third-party vendor. In this manuscript, we investigate the possibility of a kleptographic backdoor in the NIST-recommended key-encapsulation mechanism CRYSTALS-Kyber. The modified Kyber key-generation algorithm achieves a decryption failure probability indistinguishable from that of the original CRYSTALS-Kyber. The kleptographic module is also implemented in FPGA, embedded inside the CRYSTALS-Kyber accelerator with a very low area overhead (283 LUTs, or 2% of total area), and thus can easily pass performance and functionality tests.

03 December 2025

Ottawa, Canada, 24 August - 28 August 2026
Event Calendar
Event date: 24 August to 28 August 2026
Submission deadline: 11 May 2026
Notification: 25 June 2026
Ottawa, Canada, 24 August - 28 August 2026
Event Calendar
Event date: 24 August to 28 August 2026
Submission deadline: 2 February 2026
Notification: 19 March 2026
Monash University, Melbourne, Australia
Job Posting
The post-quantum cryptography research group at Monash University, Australia, has multiple Ph.D. student scholarship openings for research projects, in particular in the following areas:

1. FHE Private Computation and zk-SNARKs: to devise practical cryptographic tools for securing FHE-based private cloud computation applications, including theory and application of zk-SNARKs,

2. Design of practical Post-Quantum Symmetric-key-based digital signatures (including Legendre PRF based) with privacy enhanced properties using MPC and SNARK techniques,

3. Design of practical lattice-based cryptographic protocols,

4. Secure and efficient implementation of lattice-based cryptography.

Students will have the opportunity to work in an excellent research environment. Monash University is among the leading universities in Australia and is located in Melbourne, ranked as Australia's most liveable city and among the most liveable cities in the world.

Applicants should have (or expect to complete within the next 12 months) a Masters or Honours equivalent qualification with a research thesis, with excellent grades in mathematics, theoretical computer science, cryptography, or closely related areas. They should have excellent English verbal and written communication skills. Programming experience and skills, especially in Sagemath/python/Magma and/or C/C++, are also highly desirable.

To apply, please fill in the following form; applications will be assessed as they are received:

https://docs.google.com/forms/d/e/1FAIpQLSetFZLvDNug5SzzE-iH97P9TGzFGkZB-ly_EBGOrAYe3zUYBw/viewform?usp=sf_link

Closing date for applications:

Contact: Ron Steinfeld

More information: https://docs.google.com/forms/d/e/1FAIpQLSetFZLvDNug5SzzE-iH97P9TGzFGkZB-ly_EBGOrAYe3zUYBw/viewform?usp=sf_link


02 December 2025

Koki Jimbo
ePrint Report
We study several asymmetric structured key agreement schemes based on noncommutative matrix operations, including the recent proposal of Lizama as well as the strongly asymmetric algorithms SAA-3 and SAA-5 of Accardi et al. We place them in a common algebraic framework for public key agreement and identify simple structural conditions under which an eavesdropper can reconstruct an effective key-derivation map and reduce key recovery to solving linear systems over finite fields. We then show that the three matrix-based schemes mentioned above all instantiate our algebraic framework and can therefore be broken in polynomial time from public information alone. In particular, their security reduces to the hardness of linear-algebraic problems and does not exceed that of the underlying discrete logarithm problem. Our results demonstrate that the weakness of these schemes is structural rather than parametric, and that minor algebraic modifications are insufficient to repair them.
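
The final step of such an attack is routine linear algebra; below is a minimal Gaussian-elimination solver over $\mathbb{F}_p$ (a generic illustration, not the specific key-recovery systems derived in the paper):

```python
def solve_mod_p(A, b, p):
    """Solve A x = b over F_p by Gaussian elimination (A square, invertible)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]        # augmented matrix
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] % p)
        M[col], M[piv] = M[piv], M[col]                 # pivot to the top
        inv = pow(M[col][col], -1, p)                   # modular inverse (Py 3.8+)
        M[col] = [v * inv % p for v in M[col]]          # scale pivot row to 1
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]                           # eliminate column entry
                M[r] = [(vr - f * vc) % p for vr, vc in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# Example: recover x from A x = b mod 7
A, b = [[2, 3], [1, 4]], [1, 4]
print(solve_mod_p(A, b, 7))   # -> [4, 0]
```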
Isaac M Hair, Amit Sahai
ePrint Report
We prove that SVP$_p$ is NP-hard to approximate within a factor of $2^{\log^{1 - \varepsilon} n}$, for all constants $\varepsilon > 0$ and $p > 2$, under standard deterministic Karp reductions. This result is also the first proof that \emph{exact} SVP$_p$ is NP-hard in a finite $\ell_p$ norm. Hardness for SVP$_p$ with $p$ finite was previously only known if NP $\not \subseteq$ RP, and under that assumption, hardness of approximation was only known for all constant factors. As a corollary to our main theorem, we show that under the Sliding Scale Conjecture, SVP$_p$ is NP-hard to approximate within a small polynomial factor, for all constants $p > 2$. Our proof techniques are surprisingly elementary; we reduce from a regularized PCP instance directly to the shortest vector problem by using simple gadgets related to Vandermonde matrices and Hadamard matrices.
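
The two matrix families mentioned are elementary to write down; the paper's gadgets built from them are of course more involved. A quick sketch of the standard constructions:

```python
def hadamard(k):
    """Sylvester's construction: a 2^k x 2^k Hadamard matrix H with H H^T = 2^k I."""
    H = [[1]]
    for _ in range(k):
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

def vandermonde(points, m):
    """Vandermonde matrix: row i is (1, x_i, x_i^2, ..., x_i^{m-1})."""
    return [[x ** j for j in range(m)] for x in points]

# Rows of a Hadamard matrix are pairwise orthogonal.
H = hadamard(2)
assert all(sum(a * b for a, b in zip(r1, r2)) == (4 if i == j else 0)
           for i, r1 in enumerate(H) for j, r2 in enumerate(H))
```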
Laila El Aimani
ePrint Report
We consider the following problem: given two random polynomials $x$ and $y$ in the ring $\mathbb{F}_2[X]/(X^n+1)$, our goal is to compute the expectation and variance of the weight of their product $x\cdot y$, where the weight of a binary polynomial is defined as the number of its nonzero coefficients.

We consider two models for random polynomials $x$ and $y$: (1) the uniform slice case with fixed weights $w_x,w_y$, and (2) the binomial case where their coefficients are independent Bernoulli variables with success probabilities $p_x$ and $p_y$ respectively.

Our work finds a direct application in the accurate analysis of the decryption failure rate for the HQC code-based encryption scheme. The original construction relied on heuristic arguments supported by experimental data. Later, Kawachi provided a formally proven security bound, albeit a much weaker one than the heuristic estimate in the original construction. A fundamental limitation of both analyses is their restriction to the binomial case, a simplification that compromises the resulting security guarantees. Our analysis provides the first precise computation of the expectation and variance of weight($x\cdot y$) across both the uniform slice and binomial models. The results confirm the soundness of the HQC security guarantees and allow for a more informed choice of the scheme parameters that optimizes the trade-off between security and efficiency.
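
Such weight statistics are easy to sanity-check by simulation. The following Monte Carlo sketch estimates the mean and variance of weight($x\cdot y$) in $\mathbb{F}_2[X]/(X^n+1)$ under the binomial model (an illustrative check, not the paper's closed-form analysis; the parameters are arbitrary):

```python
import random, statistics

def mul_mod(x, y, n):
    """Carry-less product of bit-mask polynomials modulo X^n + 1."""
    mask, out = (1 << n) - 1, 0
    for i in range(n):
        if (x >> i) & 1:
            out ^= ((y << i) | (y >> (n - i))) & mask   # rotate y left by i
    return out

def sample_binomial(n, p):
    """Polynomial with i.i.d. Bernoulli(p) coefficients, as a bit mask."""
    return sum((random.random() < p) << i for i in range(n))

random.seed(1)
n, px, py, trials = 64, 0.1, 0.1, 2000
weights = [bin(mul_mod(sample_binomial(n, px),
                       sample_binomial(n, py), n)).count("1")
           for _ in range(trials)]
print(statistics.mean(weights), statistics.variance(weights))
```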
Joël Alwen, Xiaohui Ding, Sanjam Garg, Yiannis Tselekounis
ePrint Report
We initiate the holistic study of Policy Compliant Secure Messaging (PCSM). A content policy is a predicate over messages, deciding which messages are considered harmful and which are not. A PCSM protocol is a type of end-to-end encrypted (E2EE) messaging system that guarantees E2EE privacy and authenticity for all policy-compliant messages but detects and verifiably reports harmful content prior to its delivery. This stands in contrast to prior content moderation systems for E2EE messaging, where detection relies on receivers reporting the harmful content themselves, which makes them unsuited for most PCSM applications (e.g., for preventing the wilful distribution of harmful content). Our holistic PCSM notion explicitly captures several new roles, such as policy creator, auditor, and judge, to more accurately separate and model the different goals and security concerns of stakeholders when deploying PCSM.

We present efficient PCSM constructions for arbitrary policy classes, as well as for hash-based ones, achieving various levels of security while maintaining the core security properties of the underlying E2EE layer. For hash-based PCSM, we encapsulate Apple’s recent PSI protocol used in their content moderation system, properly adapt it to realize the desired PCSM functionality, and analyze the resulting protocol’s security. To our knowledge, our work is the first to rigorously study Apple’s PSI for server-side content moderation within the broader context of secure messaging, addressing the diverse goals and security considerations of stakeholders when deploying larger systems.
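
The strawman that PSI-based designs improve upon is worth keeping in mind: a plain hash-blocklist check reveals (the hash of) every message to the checking server, compliant or not. A minimal sketch of that strawman, with hypothetical blocklist contents:

```python
import hashlib

BLOCKLIST = {hashlib.sha256(m).hexdigest()
             for m in [b"known harmful payload 1", b"known harmful payload 2"]}

def naive_check(message: bytes) -> bool:
    """Strawman: the checker learns the hash of *every* message.
    PSI-based designs reveal matches only, which is the point of PCSM."""
    return hashlib.sha256(message).hexdigest() in BLOCKLIST

assert naive_check(b"known harmful payload 1")
assert not naive_check(b"hello")
```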
Xavier Carril, Alicia Manuel Pasoot, Emanuele Parisi, Carlos Andrés Lara-Niño, Oriol Farràs, Miquel Moretó
ePrint Report
Recent advances in quantum computing pose a threat to the security of digital communications, as large-scale quantum machines can break commonly used cryptographic algorithms, such as RSA and ECC. To mitigate this risk, post-quantum cryptography (PQC) schemes are being standardized, with recent NIST recommendations selecting two lattice-based algorithms: ML-KEM for key encapsulation and ML-DSA for digital signatures. Two computationally intensive kernels dominate the execution of these schemes: the Number-Theoretic Transform (NTT) for polynomial multiplication and the Keccak-f1600 permutation function for polynomial sampling and hashing. This paper presents PQCUARK, a scalar RISC-V ISA extension that accelerates these key operations. PQCUARK integrates two novel accelerators within the core pipeline: (i) a packed SIMD butterfly unit capable of performing NTT butterfly operations on 2×32-bit or 4×16-bit polynomial coefficients, and (ii) a permutation engine that delivers two Keccak rounds per cycle, hosting a private state and a direct interface to the core Load Store Unit, eliminating the need for a custom register file interface. We have integrated PQCUARK into an RV64 core and deployed it on an FPGA. Experimental results demonstrate that PQCUARK provides up to 10.1× speedup over the NIST baselines and 2.3× over the optimized software, and it outperforms similar state-of-the-art approaches by 1.4-12.3× in performance. ASIC synthesis in GF22-FDSOI technology shows a moderate core area increase of 8% at 1.2 GHz, with PQCUARK units being outside the critical path.
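
The butterfly that the packed SIMD unit executes is the standard Cooley-Tukey step over $\mathbb{Z}_q$, sketched below in reference form for the ML-KEM modulus (the hardware performs two or four such butterflies per instruction on packed coefficients; this is plain reference arithmetic, not the paper's implementation):

```python
Q = 3329  # ML-KEM (Kyber) modulus

def ct_butterfly(a, b, zeta, q=Q):
    """Cooley-Tukey butterfly: (a, b) -> (a + zeta*b, a - zeta*b) mod q."""
    t = zeta * b % q
    return (a + t) % q, (a - t) % q

def gs_butterfly(a, b, zeta, q=Q):
    """Gentleman-Sande butterfly, used in the inverse NTT."""
    return (a + b) % q, (a - b) * zeta % q

a, b = 1234, 2345
x, y = ct_butterfly(a, b, 17)
# Inverting with zeta^{-1} recovers (2a, 2b); the extra factor 2 is folded
# into the n^{-1} scaling at the end of a full inverse NTT.
assert gs_butterfly(x, y, pow(17, -1, Q)) == (2 * a % Q, 2 * b % Q)
```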