International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

24 June 2025

Erkan Uslu, Oğuz Yayla
ePrint Report
Verifiable Timed Signatures (VTS) are cryptographic primitives that enable the creation of a signature that can only be retrieved after a specific time delay, while also providing verifiable evidence of its existence. This framework is particularly useful in blockchain applications. Current VTS schemes rely on signature algorithms such as BLS, Schnorr, and ECDSA, which are vulnerable to quantum attacks because the underlying discrete logarithm problem can be solved by Shor's algorithm. We introduce VT-UOV, a novel VTS scheme based on the Salt-Unbalanced Oil and Vinegar (Salt-UOV) Digital Signature Algorithm. As a multivariate polynomial-based cryptographic primitive, Salt-UOV provides strong security against both classical and quantum adversaries. Adapting Salt-UOV to the VTS framework requires addressing challenges such as complex parameters instead of a single integer, the computational complexity of solving multivariate equations, and the integration of Time-Lock Puzzles (TLPs) for enforcing delayed signature generation. Our experimental results show that VT-UOV exhibits a unique performance profile among existing VTS constructions. This paper offers a detailed exploration of the VT-UOV scheme and its overall security and performance properties.
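VTS schemes enforce the delay with a Time-Lock Puzzle. As a point of reference (the classic Rivest-Shamir-Wagner puzzle, not the VT-UOV construction itself), a minimal sketch looks as follows: the creator, who knows the factorization of $N$, prepares the puzzle cheaply, while the solver must perform $t$ sequential squarings.

```python
# Toy Rivest-Shamir-Wagner time-lock puzzle (a reference construction,
# not VT-UOV itself). The creator, who knows phi(N), prepares the puzzle
# cheaply; the solver must perform t sequential modular squarings.

def make_puzzle(p, q, t, secret):
    N = p * q
    phi = (p - 1) * (q - 1)
    e = pow(2, t, phi)              # fast path: reduce the exponent mod phi(N)
    mask = pow(2, e, N)             # 2^(2^t) mod N, computed cheaply
    return N, t, (secret + mask) % N

def solve_puzzle(N, t, ciphertext):
    x = 2
    for _ in range(t):              # slow path: t sequential squarings
        x = x * x % N
    return (ciphertext - x) % N

# Toy parameters; a real deployment needs large, secret primes.
N, t, c = make_puzzle(1000003, 1000033, t=4096, secret=42)
assert solve_puzzle(N, t, c) == 42
```

The squarings are believed to be inherently sequential, which is what turns computation time into wall-clock delay.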
Alexander Bille, Elmar Tischhauser
ePrint Report
We describe key recovery attacks on the authenticated stream cipher HiAE, which was recently proposed for future high-throughput communication networks such as 6G by Huawei. HiAE uses a 2048-bit state, a 256-bit key and produces 128-bit tags, targeting 256-bit security against key and state recovery. As a nonce-based AEAD scheme, it relies on the uniqueness of the nonce per key for these security claims. Our analysis indicates that a complete recovery of the 256-bit key of HiAE is possible with a complexity of $2^{128}$ data and at most $2^{129.585}$ time in the nonce-respecting attack setting, with various small tradeoffs concerning the data and time complexity. While infeasible in practice, this attack therefore violates the 256-bit security claim for HiAE. We describe further complete key-recovery attacks in the nonce-misuse and release of unverified plaintext (RUP) settings which require only a small constant number of repeated nonces or unverified decryption queries, respectively.
Pascal Lafourcade, Dhekra Mahmoud, Sylvain Ruhault, Abdul Rahman Taleb
ePrint Report
PQ-WireGuard is a post-quantum variant of the WireGuard Virtual Private Network (VPN), in which Diffie-Hellman-based key exchange is replaced by key exchange based on post-quantum Key Encapsulation Mechanisms. In this paper, we first conduct a thorough formal analysis of PQ-WireGuard's original design, in which we point out and fix a number of weaknesses. This leads us to an improved construction, PQ-WireGuard*. Second, we propose and formally analyze a new protocol based on both WireGuard and PQ-WireGuard*, named Hybrid-WireGuard, compliant with current best practices for the post-quantum transition regarding hybridization techniques. For our analysis, we use the SAPIC+ framework, which enables the generation of three state-of-the-art protocol models for the verification tools ProVerif, DeepSec and Tamarin from a single specification, leveraging the strengths of each tool. We formally prove that Hybrid-WireGuard is secure. Finally, we propose a generic, efficient and usable Rust implementation of our new protocol.
Ilias Cherkaoui, Ciaran Clarke, Indrakshi Dey
ePrint Report
This paper builds the foundation for a cryptosystem based on p-adic representations of supersingular elliptic curve isogenies generated through Engel expansions of Laurent series. This mathematical framework manifests as a lightweight encryption scheme implemented on ESP32 microcontrollers for IoT applications. Efficient isogeny paths are constructed for quantum-resistant primitives secured against Shor's algorithm by decomposing elements into Engel sequences. Performance analysis confirms linear computational scaling with message size and speed gain at a higher clock rate, along with power trace signatures corroborating theoretical computational models. Consequently, we confirm the practical feasibility of our proposed p-adic isogeny cryptography on resource-constrained embedded systems while offering rigorous post-quantum security assurances.
Sayan Das, Aarav Varshney, Prasanna Ravi, Anupam Chattopadhyay
ePrint Report
Quantum Key Distribution (QKD) is a promising technology that enables information-theoretic secure key exchange using quantum principles. It is being increasingly deployed in critical sectors through emerging Quantum Key-as-a-Service (QKaaS) models. However, current standards like ETSI GS QKD 014 assume that QKD keys are consumed within trusted environments—an assumption that breaks down in real-world deployments where keys are delivered over classical networks to remote, potentially untrusted endpoints. This creates a security gap at the interface between QKD systems and key-consuming applications. In this paper, we identify this gap and propose a proxy-based solution that secures QKD key delivery using post-quantum cryptography (PQC). Our proxy transparently applies PQC-based signatures and key encapsulation to ETSI-compliant QKD APIs, without requiring changes to existing infrastructure. It supports cryptographic agility, allowing runtime switching between multiple PQC schemes. We benchmark our design using both QKD simulators and production-grade QKD hardware, and show that it introduces minimal overhead with efficient NIST PQC algorithms. Our findings highlight the need for stronger protection of the QKD interface in practical deployments. We advocate for a revision to ETSI GS QKD 014 to include an addendum that addresses this critical gap and promotes end-to-end quantum-safe integration.

23 June 2025

Wenwen Xia, Geng Wang, Dawu Gu
ePrint Report
The approximate Closest Vector Problem (CVP) is a core computational problem underlying many post-quantum lattice-based signature schemes, including Dilithium, one-more-ISIS, and HuFu. While the security of these schemes is typically expressed in terms of the Inhomogeneous Short Integer Solution (ISIS) problem, it is well-known that ISIS can be efficiently reduced to approximate CVP. Despite its foundational role, approximate CVP with non-negligible approximation factors remains far less explored than other lattice problems such as SVP or LWE, creating a critical gap in both theory and practice.

In this work, we bridge this gap by advancing the Colattice framework for solving approximate CVP with large approximation factors. More concretely: (1) We define a practical version of the Colattice algorithm and propose a randomized Nearest Colattice for generating more than one approximate closest vector. (2) We define a formal strategy space for blockwise approximate CVP. (3) We propose a polynomial-time strategy selection algorithm and prove its correctness under standard lattice heuristics. (4) Building on this, we design an efficient security estimator for approximate CVP in both the Euclidean and infinity norms, and extend it to approximate batch-CVP attack settings. (5) By applying this estimator, we perform concrete security evaluations of Dilithium, HuFu, and one-more-ISIS. Our results reveal that the security of Dilithium is 20 $\log_2({\rm gates})$ below the required security thresholds at NIST levels 3 and 5, and that none of the evaluated schemes withstands approximate batch-CVP attacks with $2^{64}$ queries. (6) We integrate a slicer and Colattice into G6K-CPU, leveraging the Locality-Sensitive Hashing (LSH) technique for nearest neighbor search (NNS). This is the first practical implementation of an NNS-accelerated slicer. Our results demonstrate the practical efficiency of approximate CVP and batch-CVP attacks, highlighting the need for more accurate security estimation.

These findings underscore the practical importance of accurate approximate CVP modeling and call for a reassessment of current parameter sets in post-quantum signature schemes.
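For readers unfamiliar with approximate CVP, the textbook baseline is Babai's rounding algorithm: write the target in basis coordinates and round to the nearest integers. The toy two-dimensional sketch below is illustrative only; it has none of the sophistication of the Colattice framework or an NNS-accelerated slicer, but it shows the problem being solved.

```python
# Babai's rounding algorithm for approximate CVP in dimension 2
# (a textbook baseline, not the paper's Colattice/slicer machinery).

def babai_round(basis, target):
    """Round the coordinates of `target` in the given 2x2 basis."""
    (b11, b12), (b21, b22) = basis          # rows are the basis vectors
    det = b11 * b22 - b21 * b12
    # Cramer's rule for target = c1*b1 + c2*b2.
    c1 = (target[0] * b22 - target[1] * b21) / det
    c2 = (b11 * target[1] - b12 * target[0]) / det
    r1, r2 = round(c1), round(c2)
    return (r1 * b11 + r2 * b21, r1 * b12 + r2 * b22)

# The lattice point (10, 5) = 1*(7,0) + 1*(3,5) lies near the target.
assert babai_round([(7, 0), (3, 5)], (10.2, 4.9)) == (10, 5)
```

The quality of the answer degrades sharply with a skewed basis, which is why serious approximate-CVP solvers combine basis reduction with far more careful search.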
Victor Shoup
ePrint Report
We present a scheme for verifiably encrypting a Shamir secret sharing to a committee of shareholders. Such a scheme can be used to easily implement distributed key generation (DKG) and resharing protocols used in threshold signing and decryption protocols. Our scheme is a minor variation on known techniques, and is not the most efficient in terms of communication and computational complexity. However, it is extremely simple and easy to implement. Moreover, for moderately sized shareholder committees of up to, say, 13 parties or so, and for applications where a DKG/resharing only needs to be performed occasionally, its performance should be acceptable in practice.
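The object being verifiably encrypted is a standard Shamir sharing. A minimal plaintext sketch over a prime field is below; note that the shares here are neither encrypted to shareholders nor proven correct, which is precisely what the paper's scheme adds.

```python
import random

# Minimal Shamir secret sharing over GF(P) (illustrative only; the
# paper additionally encrypts such shares and proves them correct).
P = 2**127 - 1  # a Mersenne prime

def share(secret, threshold, n):
    # Random degree-(threshold-1) polynomial with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(1234, threshold=3, n=5)
assert reconstruct(shares[:3]) == 1234
assert reconstruct(shares[1:4]) == 1234
```

Any `threshold` shares reconstruct the secret; fewer reveal nothing, which is what makes the primitive suitable for DKG and resharing.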
Min Xie, Zhengzhou Tu, Man Ho Au, Junbin Fang, Xuan Wang, Zoe Lin Jiang
ePrint Report
Linkable Ring Signatures (LRS) allow users to anonymously sign messages on behalf of ad-hoc rings, while ensuring that multiple signatures from the same user can be linked. This feature makes LRS widely used in privacy-preserving applications like e-voting and e-cash. To scale to systems with large user groups, efficient schemes with short signatures and fast verification are essential. Recent works, such as DualDory (ESORICS’22) and LLRing (ESORICS’24), improve verification efficiency through offline precomputations but rely on static rings, limiting their applicability in ad-hoc ring scenarios. Similarly, constant-size ring signature schemes based on accumulators face the same limitation.

In this paper, we propose a framework for constructing constant-size LRS suitable for large ad-hoc rings. We introduce a novel pairing-based Set Membership Argument (SMA) with a proof size of only three group elements. By leveraging KZG polynomial commitments, we optimize the verification to require only constant group exponentiations and pairings, as well as linear field multiplications. Utilizing the SMA, our framework achieves constant-size signatures with verification dominated by linear field operations, outperforming existing schemes that require linear group exponentiations in ad-hoc ring settings. Moreover, it exhibits strong scalability: (i) compatibility with any PKI-based cryptosystem and (ii) scoped linkability, enabling flexible definitions of linking scope.

We instantiate our framework using a discrete logarithm public key structure. On the BN254 curve, our signature size is fixed at 687 bytes, which to the best of our knowledge is the shortest LRS for ring sizes larger than 32. For a ring size of 1024, our verification cost is only 10.4 ms, achieving 48.6×, 2.6×–467×, 7.9×–13.2×, and 2.2×–102.5× improvements over Omniring (CCS’19), DualDory (with and without precomputation), LLRing-DL (with and without precomputation), and LLRing-P (with and without precomputation), respectively. Moreover, this performance gap continues to grow as the ring size increases.
Akasha Shafiq, Abhishek Kesarwani, Dimitrios Vasilopoulos, Paolo Palmieri
ePrint Report
Differential privacy (DP) has emerged as a preferred solution for privacy-preserving data analysis, having been adopted by several leading Internet companies. DP is a privacy-preserving mechanism that protects against re-identification of individuals within aggregated datasets. It is known that the privacy budget $\varepsilon$ determines the trade-off between privacy and utility. In this paper, we propose a novel set of metrics and an easy-to-implement, step-by-step framework to facilitate the implementation of DP mechanisms on real-world datasets and to guide the selection of $\varepsilon$ based on the desired privacy-utility trade-off. Currently, for a given query there is no widely accepted methodology for selecting $\varepsilon$ and choosing the DP mechanism that offers an optimal trade-off between privacy and utility. To address this gap, we perform experiments on three real-world datasets, aiming to identify the optimal $\varepsilon$ and the suitable mechanism (Laplace or Gaussian) for the commonly used count, sum and average queries, according to the use case for each dataset. Based on our experimental results, we observe that using our metrics and framework, one can analyse the noise distribution charts of multiple queries and choose the $\varepsilon$ and DP mechanism that balance privacy and utility. Additionally, we show that the optimal $\varepsilon$ depends on the particular query, the desired accuracy and the context in which DP is implemented, which suggests that an arbitrary, a priori selection of $\varepsilon$ cannot provide adequate results. Our framework prioritises the plotting and visualisation of values and results in the DP analysis, making its adoption easy for a wider audience.
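To make the role of $\varepsilon$ concrete, here is a generic Laplace-mechanism sketch for a count query (sensitivity 1); this is standard DP machinery, not the paper's framework. The noise scale is sensitivity divided by $\varepsilon$, so halving $\varepsilon$ doubles the expected error.

```python
import random

# Generic Laplace mechanism for a count query (sensitivity 1).
# Standard DP machinery, not the paper's framework: smaller epsilon
# means more noise, i.e. stronger privacy and lower utility.

def laplace_noise(scale):
    # The difference of two i.i.d. exponentials is Laplace-distributed.
    lam = 1.0 / scale
    return random.expovariate(lam) - random.expovariate(lam)

def dp_count(data, predicate, epsilon):
    true_count = sum(1 for row in data if predicate(row))
    return true_count + laplace_noise(scale=1.0 / epsilon)  # sensitivity / epsilon

ages = [23, 35, 17, 41, 64, 12, 29]
noisy = dp_count(ages, lambda a: a >= 18, epsilon=0.5)  # true answer is 5
```

Repeating the query with several values of $\varepsilon$ and plotting the resulting noise distributions is the kind of visual analysis the paper's framework systematizes.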
Yevgeniy Dodis, Bernardo Magri, Noah Stephens-Davidowitz, Yiannis Tselekounis
ePrint Report
Secure messaging protocols allow users to communicate asynchronously over untrusted channels with strong guarantees of privacy, authenticity, forward secrecy, and post-compromise security. However, traditional security analyses of these protocols assume complete trust in the hardware and software of honest participants, overlooking a significant class of real-world threats known as subversion attacks. These attacks alter cryptographic algorithms to compromise security, exfiltrating secrets or creating vulnerabilities that often go undetected.

The notion of reverse firewalls (EC'15) aims to protect against subversion attacks by introducing a third party, called a "reverse firewall" (RF), which sits between a party and the outside world and modifies its outgoing and incoming messages such that, even if the party's machine has been corrupted (in a way that maintains functionality), security is still preserved. Importantly, the firewall shares no private information with the parties, and parties put no more trust in the firewall than they do in the communication channel. In this work, we address the existing gap between secure messaging and subversion attacks by presenting several key contributions:

- We design the first subversion-resilient secure messaging protocol based on the model of RF. Our protocol is based on the Signal protocol---the current state-of-the-art in two-party secure messaging, though it lacks subversion resilience---and achieves subversion resilience with only constant overhead over Signal.

- We develop a subversion-resilient version of the X3DH protocol in the RF model. X3DH is a core component that facilitates secure initial key agreement in Signal's protocol.

- We introduce and formalize the notion of Continuous Key Agreement with Tamper Detection, an essential concept for subversion-resilient secure messaging. Our notion enables parties to continuously agree on keys, even in the presence of active adversaries capable of partially tampering with the key exchange transcript. We present a construction of our notion and prove its subversion resilience in the model of RF.
Alberto Leporati, Lorenzo Rovida, Wessel van Woerden
ePrint Report
We suggest a generalization of homomorphic encryption (HE) schemes from a purely geometrical and lattice-based perspective. All current reference HE schemes are based on the ring version of the Learning with Errors (LWE) problem. In this proposal, we first investigate LWE-based cryptosystems from a lattice point of view and present a framework that makes it possible to obtain the same result, in geometrical terms, from any lattice — as long as it contains a sufficiently short trapdoor vector.

More precisely, we generalize the classical BGV (Brakerski, Gentry and Vaikuntanathan, ITCS '12) and GSW (Gentry, Sahai and Waters, CRYPTO '14) schemes to purely lattice-based variants, which we call Lattice-BGV and Lattice-GSW. By abstracting away the particular hardness assumption, our lattice framework can be instantiated with a broader range of lattices and hardness assumptions. For example, LWE gives a natural trapdoor for random $q$-ary lattices, and when plugged into our framework one obtains the original BGV and GSW schemes, while in this work we also consider an instantiation based on the Lattice Isomorphism Problem (LIP), leading to the first more advanced cryptographic scheme built from LIP$^*$. Our framework also gives a geometrical and natural explanation of HE procedures and generalizes some properties, such as the ability to store many messages in a single ciphertext, one for each short trapdoor vector, without relying on any particular algebraic structure.

$^*$ In a concurrent work Branco, Malavolta and Maradni (ePrint 2025/993) propose an alternative LIP-based FHE construction.
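For intuition on the $q$-ary starting point that the framework generalizes, a toy symmetric LWE encryption of a single bit looks as follows (illustrative only; these parameters are far from secure).

```python
import random

# Toy symmetric LWE encryption of one bit (the q-ary-lattice view the
# paper generalizes): b = <a, s> + e + bit*(q//2) mod q. Decryption
# succeeds while the accumulated error stays below q/4.
q, n = 3329, 16

def encrypt(s, bit):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-3, 4)             # small error term
    b = (sum(ai * si for ai, si in zip(a, s)) + e + bit * (q // 2)) % q
    return a, b

def decrypt(s, ct):
    a, b = ct
    centered = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return 1 if q // 4 < centered < 3 * q // 4 else 0

s = [random.randrange(q) for _ in range(n)]
assert decrypt(s, encrypt(s, 0)) == 0
assert decrypt(s, encrypt(s, 1)) == 1
```

Homomorphic addition adds the errors of the operands, which is why error growth management dominates HE design; the paper recasts this picture for arbitrary lattices with a short trapdoor vector.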
Seunghu Kim, Eymen Ünay, Ayse Yilmazer-Metin, Hyung Tae Lee
ePrint Report
Sorting arrays encrypted under fully homomorphic encryption remains a fundamental challenge due to the high cost of private comparisons and the incompatibility of conventional sorting algorithms with the encrypted domain. Recently, Hong et al. (IEEE Transactions on Information Forensics and Security, 2021) proposed a $k$-way sorting network tailored to encrypted real numbers, but its reliance on multiple comparison stages incurs substantial multiplicative depth and significant bootstrapping overhead, even for modest array sizes.

In this work, we propose a novel rank-based sorting algorithm for encrypted real numbers that performs only a single comparison stage, thereby eliminating the need for bootstrapping operations. Our empirical evaluation demonstrates that the proposed method significantly outperforms the $k$-way approach for small to medium array sizes $(n\leq 1024)$, achieving a $46.91\times$ speedup at $n=256$ with a total runtime of $79$ seconds. Furthermore, we examine the recent matrix-based rank sort method by Mazzone et al. (USENIX Security '25) and show that integrating our optimized rank construction improves its efficiency. Specifically, we achieve $1.77\times$ and $2.43\times$ performance gains for $n=128$ and $n=512$, respectively.
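The single-comparison-stage idea can be illustrated in plaintext: compare every pair once, read each element's rank off the comparison matrix, and route it to that position. Under FHE the comparisons become homomorphic operations; the sketch below (not the authors' implementation) only shows the data flow.

```python
# Plaintext emulation of rank-based sorting with a single comparison
# stage (not the authors' FHE implementation). Every pair is compared
# once, ranks are row sums, and each element goes to its rank.

def rank_sort(xs):
    n = len(xs)
    # One comparison stage: cmp[i][j] = 1 if xs[j] precedes xs[i]
    # (ties broken by index, so the sort is stable).
    cmp = [[1 if (xs[j], j) < (xs[i], i) else 0 for j in range(n)]
           for i in range(n)]
    ranks = [sum(row) for row in cmp]
    out = [None] * n
    for i, r in enumerate(ranks):
        out[r] = xs[i]
    return out

assert rank_sort([3.2, -1.0, 7.5, 3.2]) == [-1.0, 3.2, 3.2, 7.5]
```

The quadratic number of comparisons all sit at the same multiplicative depth, which is the property that lets the authors avoid bootstrapping.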
Oleg Fomenko, Anton Levochko
ePrint Report
In 2023, Srinath Setty, Justin Thaler, and Riad Wahby published a paper describing a novel lookup argument with efficient verification, called Lasso. We present a focused and accessible overview of the Lasso lookup argument, a foundational component of the Jolt ZK-VM. This article distills the core principles behind Lasso: the sum-check protocol, multilinear polynomials and their extensions, the Spark commitment, offline memory-checking, and Surge, the evolution of Spark. By clarifying the underlying protocols and their relationship to innovations like Spark and Surge, we aim to provide researchers and engineers with practical insights into the cryptographic foundations powering both Lasso and the Jolt virtual machine.
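As a taste of the machinery, the multilinear extension (MLE) that the sum-check protocol operates on can be evaluated directly from its defining formula. The naive evaluator below is illustrative only (Lasso's point is precisely to avoid such linear-time work on huge tables); it agrees with the table on Boolean inputs.

```python
# Naive evaluation of the multilinear extension (MLE) of a table over
# F_p, the object the sum-check protocol in Lasso manipulates:
# MLE(r) = sum_x table[x] * eq(x, r), with eq the Lagrange basis
# over the Boolean hypercube {0,1}^m (bits of x taken little-endian).
p = 2**61 - 1

def mle_eval(table, r):
    m = len(r)
    assert len(table) == 1 << m
    acc = 0
    for x in range(1 << m):
        eq = 1
        for k in range(m):
            bit = (x >> k) & 1
            eq = eq * (r[k] if bit else (1 - r[k]) % p) % p
        acc = (acc + table[x] * eq) % p
    return acc

# On Boolean points the MLE agrees with the table itself.
table = [5, 9, 2, 7]
assert mle_eval(table, [0, 0]) == 5
assert mle_eval(table, [1, 0]) == 9
assert mle_eval(table, [1, 1]) == 7
```

Sum-check lets a verifier check claims about such sums with logarithmically many rounds, touching the polynomial at only a few random points.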

20 June 2025

Eunchan Park, Taeung Yoon, Hocheol Nam, Deepak Maram, Min Suk Kang
ePrint Report
In timing-sensitive blockchain applications, such as decentralized finance (DeFi), achieving first-come-first-served (FCFS) transaction ordering among decentralized nodes is critical to prevent frontrunning attacks. Themis [CCS'23], a state-of-the-art decentralized FCFS ordering system, has become a key reference point for high-throughput fair ordering systems for real-world blockchain applications, such as rollup chains and decentralized sequencing, and has influenced the design of several subsequent proposals. In this paper, we critically analyze its core system property of practical batch-order fairness and evaluate the frontrunning resistance claim of Themis. We present the Ambush attack, a new frontrunning technique that achieves nearly 100% success against the practical batch-order fair system with only a single malicious node and negligible attack costs. This attack causes a subtle temporary information asymmetry among nodes, which is allowed due to the heavily optimized communication model of the system. A fundamental trade-off we identify is the challenge of balancing security and performance in these systems; namely, enforcing timely dissemination of transaction information among nodes (to mitigate frontrunning) can easily lead to non-negligible network overheads (thus degrading overall throughput). We show that it is nevertheless possible to balance the two by delaying transaction dissemination to a level tolerable for frontrunning mitigation while maintaining high throughput. Our evaluation demonstrates that the proposed delayed gossiping mechanism can be seamlessly integrated into existing systems with only minimal changes.
Mizuki Hayashi, Keita Emura
ePrint Report
Gao et al. (IEEE Internet of Things Journal 2024) proposed public-key inverted-index keyword search with designated tester as an extension of public key encryption with keyword search (PEKS). In their scheme, a server (a tester) holds a secret key and uses it to run the search algorithm, owing to the designated-tester setting. They proved that no keyword information is revealed by trapdoors under the decisional Diffie-Hellman (DDH) assumption. However, they also employed a symmetric pairing, which can be seen as a DDH solver. It is therefore to be expected that keyword information leaks from trapdoors, since the underlying complexity assumption does not hold. In this paper, we demonstrate two attacks against Gao et al.'s scheme in which keyword information is revealed by a trapdoor. The first attack requires only the server's secret key in addition to the challenge trapdoor, without any additional encryption or trapdoor queries. We remark that an adversary is not allowed to obtain the server's secret key in their security model, so this attack lies outside that model; we therefore discuss the roles of the server and stress that our attack scenario is reasonable. The second attack does not employ the server's secret key and instead exploits the linkability of two trapdoors. In both attacks, the cost is just two pairing computations, making them entirely feasible.
Giacomo Borin, Sofía Celi, Rafael del Pino, Thomas Espitau, Guilhem Niot, Thomas Prest
ePrint Report
Threshold signatures enable multiple participants to collaboratively produce a digital signature, ensuring both fault tolerance and decentralization. As we transition to the post-quantum era, lattice-based threshold constructions have emerged as promising candidates. However, existing approaches often struggle to scale efficiently, lack robustness guarantees, or are incompatible with standard schemes — most notably, the NIST-standard ML-DSA. In this work, we explore the design space of Fiat-Shamir-based lattice threshold signatures and introduce the two most practical schemes to date. First, we present an enhanced TRaccoon-based [DKM+24] construction that supports up to 64 participants with identifiable aborts, leveraging novel short secret-sharing techniques to achieve greater scalability than previous state-of-the-art methods. Second — and most importantly — we propose the first practical ML-DSA-compatible threshold signature scheme, supporting up to 6 users. We provide full implementations and benchmarks of our schemes, demonstrating their practicality and efficiency for real-world deployment as protocol messages are computed in at most a few milliseconds, and communication cost ranges from 10.5 kB to 525 kB depending on the threshold.
Stefan Milius, Dominik Paulus, Dominique Schröder, Lutz Schröder, Julian Thomas
ePrint Report
Message Authentication Codes (MACs) are a fundamental symmetric-key primitive, serving to ensure the authenticity and integrity of transmitted data. As a building block in authenticated encryption and in numerous deployed standards, including TLS, IPsec, and SSH, MACs play a central role in practice. Owing to this importance, MACs have been the subject of extensive research, leading to prominent schemes such as HMAC, CBC-MAC, and LightMAC. Despite the existence of various MACs, there is still considerable interest in creating schemes that are more efficient, potentially parallelizable, or have specific non-cryptographic attributes, such as being patent-free.

In this context, we introduce an automated method for analyzing and synthesizing MAC schemes. In order to achieve this goal, we have constructed a framework that restricts the class of MACs in such a way that it is sufficiently expressive to cover known constructions, yet also admits automated reasoning about the security guarantees of both known and new schemes. Our automated analysis has identified a novel category of MACs, termed "hybrid" MACs. These MACs operate by processing multiple blocks concurrently, with each block managed by a different, specified MAC scheme. A key finding is that in certain scenarios, the hybrid MAC marginally outperforms the simultaneous operation of the individual MACs. This improvement is attributed to the hybrid approach exploiting the strengths and compensating for the weaknesses of each distinct MAC scheme involved. Our implementation confirms that we have successfully identified new schemes that have comparable performance with state-of-the-art schemes and in some settings seem to be slightly more efficient.
Nick Aquina, Simon Rommel, Idelfonso Tafur Monroy
ePrint Report
The Q-problem has been introduced as a new post-quantum hard problem. We present two man-in-the-middle and three key recovery attacks against the key exchange protocol based on the Q-problem. The man-in-the-middle attacks take negligible time and allow the attacker to recover the exchanged key. The most effective key recovery attack has a computational complexity of $2^{40}$. We also propose countermeasures against all attacks.
Alexander Bienstock, Leo de Castro, Daniel Escudero, Antigoni Polychroniadou, Akira Takahashi
ePrint Report
A threshold signature is an advanced protocol that splits a secret signing key among multiple parties, allowing any subset above a threshold to jointly generate a signature. While post-quantum (PQ) threshold signatures are actively being studied --- especially in response to NIST's recent call for threshold schemes --- most existing solutions are tailored to specially designed, threshold-friendly signature schemes. In contrast, many real-world applications, such as distributed certificate authorities and digital currencies, require signatures that remain verifiable under the standardized verification procedures already in use. Given NIST's recent standardization of PQ signatures and ongoing industry deployment efforts, designing an efficient threshold scheme that interoperates with NIST-standardized verification remains a critical open problem.

In this work, we present the first efficient and scalable solution for multi-party generation of the module-lattice digital signature algorithm (ML-DSA), one of NIST's PQ signature standards. Our contributions are two-fold. First, we present a variant of the ML-DSA signing algorithm that is amenable to efficient multi-party computation (MPC) and prove that this variant achieves the same security as the original ML-DSA scheme. Second, we present several efficient and scalable MPC protocols to instantiate the threshold signing functionality. Our protocols can produce threshold signatures with as little as 100 KB (per party) of online communication per rejection-sampling round. In addition, we instantiate our protocols in the honest-majority setting, which allows us to avoid any additional public key assumptions.

The signatures produced by our protocols verify under the same implementation of ML-DSA verification for all three security levels. Thus, signatures and verification keys of our scheme are (naturally) the same size as that of ML-DSA; previous lattice-based threshold schemes could not match both of these sizes. Overall, our solution is the only method for producing threshold ML-DSA signatures compatible with NIST-standardized verification that scales to an arbitrary number of parties, without any new assumptions.
Dipayan Saha, Shams Tarek, Hasan Al Shaikh, Khan Thamid Hasan, Pavan Sai Nalluri, Md. Ajoad Hasan, Nashmin Alam, Jingbo Zhou, Sujan Kumar Saha, Mark Tehranipoor, Farimah Farahmandi
ePrint Report
Ensuring the security of complex system-on-chips (SoCs) designs is a critical imperative, yet traditional verification techniques struggle to keep pace due to significant challenges in automation, scalability, comprehensiveness, and adaptability. The advent of large language models (LLMs), with their remarkable capabilities in natural language understanding, code generation, and advanced reasoning, presents a new paradigm for tackling these issues. Moving beyond monolithic models, an agentic approach allows for the creation of multi-agent systems where specialized LLMs collaborate to solve complex problems more effectively. Recognizing this opportunity, we introduce SV-LLM, a novel multi-agent assistant system designed to automate and enhance SoC security verification. By integrating specialized agents for tasks like verification question answering, security asset identification, threat modeling, test plan and property generation, vulnerability detection, and simulation-based bug validation, SV-LLM streamlines the workflow. To optimize their performance in these diverse tasks, agents leverage different learning paradigms, such as in-context learning, fine-tuning, and retrieval-augmented generation (RAG). The system aims to reduce manual intervention, improve accuracy, and accelerate security analysis, supporting proactive identification and mitigation of risks early in the design cycle. We demonstrate its potential to transform hardware security practices through illustrative case studies and experiments that showcase its applicability and efficacy.