IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
03 October 2025
Ryo Ohashi, Hiroshi Onuki
In 2023, LeGrow, Ti, and Zobernig proposed a cryptographic hash function (we refer to it as the LTZ hash function in this paper) based on a certain (2,2)-isogeny graph between supersingular non-superspecial abelian surfaces over $\mathbb{F}_{p^4}$. The authors estimated that the time and space complexities required to find a collision in the LTZ hash function are both given by $\widetilde{O}(p^3)$.
In this paper, we first propose a mathematical conjecture on the number of vertices defined over the smaller field $\mathbb{F}_{p^2}$ in the isogeny graphs used in the LTZ hash function. Based on this conjecture, we then construct a collision-finding algorithm for the LTZ hash function which preserves the time complexity $\widetilde{O}(p^3)$ while reducing the required memory to $O(\log(p)^2)$. We implemented this algorithm in Rust and successfully found a collision for parameters claimed to provide 35-37 bits of security, using 26.2 MB of memory on average over 20.2 hours.
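The memory profile reported above is characteristic of cycle-finding collision search. As a rough illustration only (not the paper's algorithm, and over a toy 32-bit hash rather than the LTZ construction), here is a minimal Python sketch of Floyd-style memoryless collision finding:

```python
import hashlib

def h(x: bytes) -> bytes:
    """Toy 32-bit hash standing in for the real hash function (hypothetical)."""
    return hashlib.sha256(x).digest()[:4]

def find_collision(x0: bytes):
    """Floyd cycle detection on the iteration x -> h(x): constant memory,
    roughly 2^(32/2) hash evaluations expected for a 32-bit range."""
    tortoise, hare = h(x0), h(h(x0))
    while tortoise != hare:                     # phase 1: meet inside the cycle
        tortoise, hare = h(tortoise), h(h(hare))
    tortoise = x0                               # phase 2: walk to the cycle entry
    prev_t = prev_h = None
    while tortoise != hare:
        prev_t, prev_h = tortoise, hare
        tortoise, hare = h(tortoise), h(hare)
    return prev_t, prev_h                       # two distinct preimages of the entry

a, b = find_collision(b"seed-0")
if a is not None and a != b:                    # fails only if the seed sits on the cycle
    assert h(a) == h(b)
    print("collision:", a.hex(), b.hex())
```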
Rama Yadavalli, Jeffery Solomon, Vrinda Sharma
Cloud computing has matured into the default substrate for data processing, yet confidentiality demands often force a hard trade-off between the utility of outsourced computation and the privacy of sensitive inputs. Homomorphic encryption (HE) [1] promises to dissolve that trade-off by enabling computation directly on ciphertexts, returning encrypted results that only the data owner can decrypt. Despite remarkable progress in fully homomorphic encryption (FHE) and leveled variants suitable for bounded-depth circuits, deploying HE at cloud scale remains challenging. The cost of ciphertext arithmetic is orders of magnitude higher than plaintext compute; noise growth and rescaling impose algorithmic discipline; vectorization and rotation keys complicate program layout; and the lack of verifiability in bare HE obliges trust in a correct-but-curious cloud.
This paper develops a systems perspective on how to apply modern HE in cloud environments without reducing it to a boutique feature. We introduce a reference architecture that marries approximate-arithmetic HE for analytics with exact-arithmetic HE for integrity-critical operations, composes HE with succinct proofs for verifiability, and integrates a cost model into the scheduler so that elastically provisioned serverless workers can meet latency objectives under price constraints. The design begins with a compiler that lowers dataflow graphs to operator sequences parameterized by multiplicative depth $L$ and rotation sets; it then chooses schemes and parameters (CKKS for floating-point-style analytics and signal processing, BFV/BGV for integer operations and counters, TFHE-style bootstrapping for comparisons) that minimize the total time-to-result under explicit error and security budgets. A cryptographic key service supports threshold issuance and rotation-key escrow without learning plaintexts, while a storage layer packs columns into ciphertext SIMD lanes to amortize cost across tenants. For verifiability, we attach homomorphic message authentication tags to intermediate ciphertexts and wrap end-to-end executions in succinct non-interactive proofs specialized to the bilinear equations that certify correct key switching, rescaling, and bootstrapping.
Analytically, we characterize latency by a linear model in the counts of core homomorphic primitives and show how to saturate GPUs or vector units with batched number-theoretic transforms to bend throughput toward practical regimes. Under realistic traces of analytic queries and encrypted inference, the architecture achieves sub-second P95 latency for circuits of depth six to eight with one or two bootstraps, while sustaining 128-bit security under RLWE. By treating HE not as an exotic afterthought but as a first-class cloud programming and scheduling primitive, the proposed approach demonstrates a path to confidential-by-default services in which the cloud never sees data in the clear yet remains efficient, elastic, and auditable.
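As a rough illustration of the kind of linear latency model the abstract describes, the sketch below assumes hypothetical per-primitive costs; the operation names and timings are placeholders, not measurements from the paper:

```python
# Hypothetical per-operation costs (ms). Real values depend on parameters,
# hardware, and library; the point is that latency is linear in the counts.
COST_MS = {
    "ct_mult": 4.0,       # ciphertext-ciphertext multiplication
    "ct_add": 0.05,       # ciphertext addition
    "rotation": 3.0,      # slot rotation (key switch)
    "rescale": 1.0,       # CKKS rescaling
    "bootstrap": 400.0,   # bootstrapping
}

def estimate_latency_ms(op_counts: dict[str, int]) -> float:
    """Latency estimate as a linear function of homomorphic primitive counts."""
    return sum(COST_MS[op] * n for op, n in op_counts.items())

# A depth-6 circuit with one bootstrap, in the abstract's target regime.
query = {"ct_mult": 48, "ct_add": 200, "rotation": 64, "rescale": 48, "bootstrap": 1}
print(f"estimated latency: {estimate_latency_ms(query):.1f} ms")
```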
Lui Zheng, Roger Zhu, Amit Agrawal, Carol Lamore
Mutual Transport Layer Security (mTLS) underpins authenticated, confidential communication across modern service meshes, but its deployment stance in machine-learning platforms is typically static: fixed cipher suites, certificate lifetimes, and re-authentication schedules chosen for worst-case threats rather than observed risk. Large Language Model (LLM) serving pipelines exacerbate this rigidity: traffic is bursty, topologies reconfigure dynamically under autoscaling, and sensitive artifacts such as prompts, training features, and evaluation data traverse heterogeneous substrates. In this paper we argue that mTLS for LLM systems [1] should be governed by adaptive control rather than static policy. We formalize a feedback loop that ingests multi-modal telemetry (connection error codes, handshake latencies, anomaly scores from request semantics, workload attestation freshness, and service-level objective (SLO) drift) and outputs fine-grained adjustments to transport posture: client-certificate renewal cadence, certificate path length and key type selection, session resumption eligibility, early data gating, proof-of-possession challenges, and revocation propagation thresholds. The controller targets two coupled objectives: maintain cryptographic assurances (mutual authentication, forward secrecy, and replay resistance) while bounding the cost of security on tail latency and throughput during high-load inference.
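To make the feedback loop concrete, here is a minimal sketch of a threshold-style controller; all telemetry fields, thresholds, and posture knobs are hypothetical illustrations, not the paper's formalization:

```python
from dataclasses import dataclass

@dataclass
class Telemetry:                  # field names are illustrative, not the paper's
    handshake_p95_ms: float
    anomaly_score: float          # 0.0 (benign) .. 1.0 (suspicious)
    attestation_age_s: float

@dataclass
class Posture:
    cert_lifetime_s: int
    allow_resumption: bool
    allow_early_data: bool

def adjust(t: Telemetry, p: Posture) -> Posture:
    """One step of a toy feedback controller: tighten on risk, relax under load."""
    if t.anomaly_score > 0.8 or t.attestation_age_s > 3600:
        # high risk: short-lived certs, force full handshakes
        return Posture(cert_lifetime_s=300, allow_resumption=False,
                       allow_early_data=False)
    if t.handshake_p95_ms > 50:
        # latency pressure with low risk: permit resumption and early data
        return Posture(cert_lifetime_s=p.cert_lifetime_s,
                       allow_resumption=True, allow_early_data=True)
    return p

posture = Posture(cert_lifetime_s=3600, allow_resumption=True, allow_early_data=False)
posture = adjust(Telemetry(handshake_p95_ms=72.0, anomaly_score=0.1,
                           attestation_age_s=120.0), posture)
print(posture)
```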
Hamza Abusalah, Karen Azari, Chethan Kamath, Erkan Tairi, Maximilian von Consbruch
This paper is concerned with the question of whether Verifiable Delay Functions (VDFs), as introduced by Boneh et al. [CRYPTO 2018], can be constructed in the plain Random Oracle Model (ROM) without any computational assumptions. A first partial answer to this question is due to Mahmoody, Smith, and Wu [ICALP 2020], and rules out the existence of perfectly unique VDFs in the ROM. Building on this result, Guan, Riazanov, and Yuan [CRYPTO 2025] very recently demonstrated that even VDFs with computational uniqueness are impossible under a public-coin setup. However, the case of computationally unique VDFs with private-coin setup remained open. We close this gap by showing that even a computationally expensive private-coin setup does not allow one to construct VDFs in the ROM.
Pranav Garimidi, Joachim Neu, Max Resnick
Traditional single-proposer blockchains suffer from miner extractable value (MEV), where validators exploit their serial monopoly on transaction inclusion and ordering to extract rents from users. While there have been many developments at the application layer to reduce the impact of MEV, these approaches largely require auctions as a subcomponent. Running auctions efficiently on chain requires two key properties of the underlying consensus protocol: selective-censorship resistance and hiding. These properties guarantee that an adversary can neither selectively delay transactions nor see their contents before they are confirmed. We propose a multiple concurrent proposer (MCP) protocol offering exactly these properties.
Foteini Baldimtsi, Rishab Goyal, Aayush Yadav
Non-interactive blind signatures (NIBS; Eurocrypt '23) allow a signer to asynchronously generate presignatures for a recipient, ensuring that only the intended recipient can extract a "blinded" signature for a random message.
We introduce a new generalization called non-interactive batched blind signatures (NIBBS). Our goal is to reduce the computation and communication costs for signers and receivers by batching multiple blind signature queries. More precisely, we define the property of 'succinct communication', which requires that the communication cost from signer to receiver be independent of the batch size. This makes NIBBS well suited to large-scale deployments, as it requires only minimal signer-side effort.
We design a NIBBS scheme and prove its security based on the hardness of lattice assumptions (in the random oracle model). When instantiated with the low-depth PRF candidate "Crypto Dark Matter" (TCC '18) and the succinct lattice-based proof system for rank-1 constraint systems (Crypto '23), our final signature size is 308 KB with <1 KB communication.
Aditya Singh Rawat, Mahabir Prasad Jhanwar
The proposed $\mathsf{SL\text{-}DNSSEC}$ protocol (AsiaCCS'25) uses a quantum-safe KEM and a MAC to perform \textit{signature-less} $\mathsf{(SL)}$ DNSSEC validations in a single UDP query/response exchange. In this paper, we give a formal analysis of $\mathsf{SL\text{-}DNSSEC}$ in the reductionist model.
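As a rough sketch of the signature-less idea (a KEM-derived shared key authenticates the response via a MAC instead of a signature), the toy flow below uses a deliberately insecure placeholder KEM purely so the example runs end to end; the message formats and helper names are hypothetical, not the protocol's wire format:

```python
import hmac, hashlib, os

# Placeholder KEM (INSECURE, for demonstration only): SL-DNSSEC would use a
# quantum-safe KEM such as ML-KEM here.
def kem_encaps(pk: bytes):
    ss = os.urandom(32)
    ct = bytes(a ^ b for a, b in zip(ss, pk))   # toy "encryption" by XOR
    return ct, ss

def kem_decaps(sk: bytes, ct: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ct, sk))

sk = pk = os.urandom(32)          # toy keypair (identical on purpose: NOT a real KEM)

# Resolver: encapsulate to the zone's key and send ct alongside the DNS query.
ct, ss_resolver = kem_encaps(pk)
query = b"example.com. IN A"

# Name server: decapsulate, answer, and MAC the response rather than signing it.
ss_server = kem_decaps(sk, ct)
response = b"example.com. 300 IN A 192.0.2.1"
tag = hmac.new(ss_server, query + response, hashlib.sha256).digest()

# Resolver: verify the MAC under its own decapsulated shared secret.
expected = hmac.new(ss_resolver, query + response, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)
```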
Andrey Boris Khesin, Jonathan Z. Lu, Alexander Poremba, Akshar Ramkumar, Vinod Vaikuntanathan
Random classical linear codes are widely believed to be hard to decode.
While slightly sub-exponential time algorithms exist when the coding rate vanishes sufficiently rapidly, all known algorithms at constant rate require exponential time. By contrast, the complexity of decoding a random quantum stabilizer code has remained an open question for quite some time. This work closes the gap in our understanding of the algorithmic hardness of decoding random quantum versus random classical codes. We prove that decoding a random stabilizer code with even a single logical qubit is at least as hard as decoding a random classical code at constant rate—the maximally hard regime. This result suggests that the easiest random quantum decoding problem is at least as hard as the hardest random classical decoding problem, and shows that any sub-exponential algorithm decoding a typical stabilizer code, at any rate, would immediately imply a breakthrough in cryptography.
More generally, we also characterize many other complexity-theoretic properties of stabilizer codes. While classical decoding admits a random self-reduction, we prove significant barriers for the existence of random self-reductions in the quantum case. This result follows from new bounds on Clifford entropies and Pauli mixing times, which may be of independent interest. As a complementary result, we demonstrate various other self-reductions which are in fact achievable, such as between search and decision. We also demonstrate several ways in which quantum phenomena, such as quantum degeneracy, force several reasonable definitions of stabilizer decoding—all of which are classically identical—to have distinct or non-trivially equivalent complexity.
01 October 2025
Virtual event, Anywhere on Earth, 23 March - 26 March 2026
Event date: 23 March to 26 March 2026
Submission deadline: 24 October 2025
Notification: 3 December 2025
University of Klagenfurt
Please use this link:
https://jobs.aau.at/en/job/university-assistant-predoctoral-and-project-researcher-all-genders-welcome-2/
Level of employment: 100 % (40 hours/week)
Minimum salary: € 52,007.20 per annum (gross); classification according to collective agreement: B1
Limited to: 4 years
Application deadline: November 5, 2025
Reference code: 702-1/24
Closing date for applications: 5 November 2025
Contact: Prof. Gerhard Friedrich, University of Klagenfurt
More information: https://jobs.aau.at/en/job/university-assistant-predoctoral-and-project-researcher-all-genders-welcome-2/
30 September 2025
Mingshu Cong, Tsz Hon Yuen, Siu-Ming Yiu
We present DualMatrix, a zkSNARK solution for large-scale matrix multiplication. Classical zkSNARK protocols typically underperform in data-analytic contexts, hampered by the large size of datasets and the superlinear nature of matrix multiplication. DualMatrix excels in its scalability: its prover time scales linearly with the number of non-zero elements in the input matrices. For $n \times n$ matrix multiplication with $N$ non-zero elements across the three input matrices, DualMatrix employs a structured reference string (SRS) of size $O(n)$, and achieves RAM usage of $O(N+n)$, transcript size of $O(\log n)$, prover time of $O(N+n)$, and verifier time of $O(\log n)$. The prover time of $O(N+n)$, which surpasses all existing protocols, comprises $O(N+n)$ field multiplications and $O(n)$ exponentiations and pairings within bilinear groups. These efficiencies make DualMatrix effective for linear algebra on the large matrices common in real-world applications.
We evaluated DualMatrix on $2^{15} \times 2^{15}$ input matrices each containing $1G$ non-zero integers, which necessitate $32T$ integer multiplications in naive matrix multiplication. DualMatrix recorded prover and verifier times of $150.84$s and $0.56$s, respectively. When applied to $1M \times 1M$ sparse matrices each containing $1G$ non-zero integers, it demonstrated prover and verifier times of $1,384.45$s and $0.67$s. Our approach outperforms current zkSNARK solutions by successfully handling these large matrix multiplication tasks in our experiments.
We extend matrix operations from field matrices to group matrices, formalizing a group matrix algebra. This mathematical advancement brings notable symmetries beneficial for high-dimensional elliptic curve cryptography. By leveraging the bilinear properties of our group matrix algebra in the context of a two-tier commitment scheme, DualMatrix achieves efficiency gains over previous matrix multiplication arguments. To accomplish this, we extend and enhance Bulletproofs to construct an inner product argument featuring a transparent setup and logarithmic verifier time.
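For intuition about why verifying a matrix product can cost far less than computing it, the classical Freivalds check below (not DualMatrix's protocol) probabilistically verifies $AB = C$ using a number of field multiplications linear in the non-zero count:

```python
import random

P = 2**61 - 1  # an illustrative prime field

def freivalds(A, B, C, n, reps=20):
    """Freivalds' check: accepts a false C with probability <= 1/P per repetition.
    Matrices are sparse dicts {(row, col): value}; each repetition costs
    O(N + n) field multiplications, linear in the non-zeros N."""
    for _ in range(reps):
        r = [random.randrange(P) for _ in range(n)]
        Br = [0] * n
        for (i, j), v in B.items():          # Br = B @ r
            Br[i] = (Br[i] + v * r[j]) % P
        ABr = [0] * n
        for (i, j), v in A.items():          # ABr = A @ (B @ r)
            ABr[i] = (ABr[i] + v * Br[j]) % P
        Cr = [0] * n
        for (i, j), v in C.items():          # Cr = C @ r
            Cr[i] = (Cr[i] + v * r[j]) % P
        if ABr != Cr:
            return False
    return True

n = 4
A = {(0, 1): 3, (2, 3): 5}
B = {(1, 2): 7, (3, 0): 2}
C = {(0, 2): 21, (2, 0): 10}                 # C = A @ B
assert freivalds(A, B, C, n)
```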
Zhuo Wu, Xinxuan Zhang, Yi Deng, Yuanju Wei, Zhongliang Zhang, Liuyu Yang
This paper introduces the first multilinear polynomial commitment scheme (PCS) over Galois rings achieving $O(\log^2 n)$ verification cost, with $O(n\log n)$ committing time and $O(n)$ evaluation opening prover time. This PCS can be used to construct zero-knowledge proofs for arithmetic circuits over Galois rings, facilitating verifiable computation in applications requiring proofs of polynomial ring operations (e.g., verifiable fully homomorphic encryption). First, we construct random foldable linear codes over Galois rings with sufficient code distance and present a distance preservation theorem over Galois rings. Second, we extend the $\textsf{Basefold}$ commitment (Zeilberger et al., Crypto 2024) to multilinear polynomials over Galois rings. Our approach reduces proof size and verifier time from $O(\sqrt{n})$ to $O(\log^2 n)$ compared to Wei et al. (PKC 2025). Furthermore, we give a batched multipoint opening protocol for the evaluation phase that collapses the proof size and verifier time for $N$ polynomials at $M$ points from $O(NM \log^2 n)$ to $O(\log^2 n)$, and the prover time from $O(NMn)$ to $O(n)$, further enhancing efficiency.
Marc Damie, Florian Hahn, Andreas Peter, Jan Ramon
Distributed Point Functions (DPFs) enable sharing secret point functions across multiple parties, supporting privacy-preserving technologies such as Private Information Retrieval, and anonymous communications. While 2-party PRG-based schemes with logarithmic key sizes have been known for a decade, extending these solutions to multi-party settings has proven challenging. In particular, PRG-based multi-party DPFs have historically struggled with practicality due to key sizes growing exponentially with the number of parties and the field size.
Our work addresses this efficiency bottleneck by optimizing the PRG-based multi-party DPF scheme of Boyle et al. (EUROCRYPT'15). By leveraging the honest-majority assumption, we eliminate the exponential factor present in this scheme. Our construction is the first PRG-based multi-party DPF scheme with practical key sizes, and provides keys up to $3\times$ smaller than the best known multi-party DPF. This work demonstrates that, with careful optimization, PRG-based multi-party DPFs can achieve practical, and even leading, performance.
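For context on what the key-size battle is about, the naive baseline below XOR-shares the full truth table of a point function: it is correct for any number of parties but has keys linear in the domain size, which is exactly the cost that PRG-based DPFs (including the construction optimized here) avoid:

```python
import os

def xor_all(values):
    acc = 0
    for v in values:
        acc ^= v
    return acc

def naive_dpf_gen(alpha, beta, domain_bits, parties):
    """Share f(alpha) = beta (and 0 elsewhere) as XOR shares of the full
    truth table. Each key is O(2^domain_bits); PRG-based DPFs shrink this."""
    size = 1 << domain_bits
    truth = [beta if x == alpha else 0 for x in range(size)]
    keys = [[int.from_bytes(os.urandom(4), "big") for _ in range(size)]
            for _ in range(parties - 1)]
    last = [truth[x] ^ xor_all(k[x] for k in keys) for x in range(size)]
    return keys + [last]

def naive_dpf_eval(key, x):
    return key[x]

keys = naive_dpf_gen(alpha=5, beta=0xDEADBEEF, domain_bits=4, parties=3)
assert xor_all(naive_dpf_eval(k, 5) for k in keys) == 0xDEADBEEF  # hits alpha
assert xor_all(naive_dpf_eval(k, 6) for k in keys) == 0           # zero elsewhere
```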
Jeffrey Champion, Fuyuki Kitagawa, Ryo Nishimaki, Takashi Yamakawa
We initiate the study of untelegraphable encryption (UTE), founded on the no-telegraphing principle, which allows an encryptor to encrypt a message such that a binary string representation of the ciphertext cannot be decrypted by a user with the secret key, a task that is classically impossible. This is a natural relaxation of unclonable encryption (UE), inspired by the recent work of Nehoran and Zhandry (ITCS 2024), who showed a computational separation between the no-cloning and no-telegraphing principles.
In this work, we define and construct UTE information-theoretically in the plain model. Building off this, we give several applications of UTE and study the interplay of UTE with UE and well-studied tasks in quantum state learning, yielding the following contributions:
- A construction of collusion-resistant UTE from standard secret-key encryption (SKE). We additionally show that hyper-efficient shadow tomography (HEST) is impossible assuming collusion-resistant UTE exists. By considering a relaxation of collusion-resistant UTE, we are able to show the impossibility of HEST assuming only pseudorandom state generators (which may not imply one-way functions). This almost unconditionally answers an open inquiry of Aaronson (STOC 2018).
- A construction of UTE from a quasi-polynomially secure one-shot message authentication code (OSMAC) in the classical oracle model, such that there is an explicit attack that breaks UE security for an unbounded polynomial number of decryptors.
- A construction of everlasting secure collusion-resistant UTE, where the decryptor adversary can run in unbounded time, in the quantum random oracle model (QROM), and formal evidence that a construction in the plain model is a challenging task. We additionally show that HEST with unbounded post-processing time (which we call weakly-efficient shadow tomography) is impossible assuming everlasting secure collusion-resistant UTE exists.
- A construction of secret sharing for all polynomial-size policies that is resilient to joint and unbounded classical leakage from collusion-resistant UTE and classical secret sharing for all policies.
- A construction (and definition) of collusion-resistant untelegraphable secret-key functional encryption (UTSKFE) from single-decryptor functional encryption and plain secret-key functional encryption, and a construction of collusion-resistant untelegraphable public-key functional encryption from UTSKFE, plain SKE, and plain public-key functional encryption.
Marcin Kostrzewa, Matthew Klein, Ara Adkins, Grzegorz Świrski, Wojciech Żmuda
Keccak, the hash function at the core of the Ethereum ecosystem, is computationally expensive to reason about in SNARK circuits, creating a critical bottleneck in the ability of the ZK ecosystem to reason about blockchain state. The recent state-of-the-art in proving Keccak permutations relies on proof systems that can perform lookup arguments, which—while exhibiting better performance than directly proving the hash operations in circuit—still typically require tens of thousands of constraints to prove a single keccak-f permutation.
This paper introduces a new method, termed keccacheck, which builds upon sum-check with influence from GKR to create circuits that can batch-verify Keccak permutations with fewer than 4000 constraints per instance. Keccacheck achieves this by exploiting the logarithmic scaling of recursive verification of the sum-check protocol, reducing the computational cost of verifying large enough batches to be only slightly higher than evaluating the multilinear extension of the input and output states. Its performance becomes competitive for a batch containing 16 permutations and offers more than a 10x cost reduction for batches of 512 or more permutations. This approach enables new levels of efficiency for the ZK ecosystem, providing the performant storage proofs that are essential to light clients, cross-chain bridges, privacy-focused protocols, and roll-ups.
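For readers unfamiliar with the underlying primitive, here is a minimal sketch of the sum-check protocol itself over a toy prime field (an honest-prover simulation, not keccacheck's circuit layout):

```python
import random

P = 2**61 - 1  # prime field for the toy protocol

def mle_eval(table, point):
    """Evaluate the multilinear extension of `table` (length 2^v) at `point`."""
    evals = list(table)
    for r in point:                       # fold out one variable per round
        half = len(evals) // 2
        evals = [(evals[i] + r * (evals[i + half] - evals[i])) % P
                 for i in range(half)]
    return evals[0]

def sumcheck(table):
    """Verify claim = sum_{x in {0,1}^v} f(x) in v rounds; f is multilinear,
    so each round's univariate g is degree 1 and determined by g(0), g(1)."""
    claim = sum(table) % P
    evals, point = list(table), []
    for _ in range(len(table).bit_length() - 1):    # v rounds
        half = len(evals) // 2
        g0 = sum(evals[:half]) % P                  # g(0): current variable = 0
        g1 = sum(evals[half:]) % P                  # g(1): current variable = 1
        assert (g0 + g1) % P == claim               # verifier's round check
        r = random.randrange(P)
        point.append(r)
        evals = [(evals[i] + r * (evals[i + half] - evals[i])) % P
                 for i in range(half)]
        claim = (g0 + r * (g1 - g0)) % P            # next claim is g(r)
    assert claim == mle_eval(table, point)          # one final MLE evaluation
    return True

assert sumcheck([random.randrange(P) for _ in range(2**5)])
```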
29 September 2025
Jonas Bertels, Ingrid Verbauwhede
NIST recently selected Kyber as a standard for key encapsulation and decapsulation. As such, servers will soon need dedicated hardware for these encapsulation protocols. The computationally critical operation of Kyber is its Number Theoretic Transform, which is commonly accelerated by dedicated hardware such as FPGAs.
This work presents an extremely high-throughput design for the Kyber NTT. By utilizing the LUT-based modular multiplication technique we presented at CHES 2025, its area-delay product is generally between one and two orders of magnitude better than that of similar designs in the literature. For instance, where Yaman et al. with 16 processing elements require 9500 LUTs and 16 DSPs to perform an NTT in 69 clock cycles, our design requires 67210 LUTs and no DSPs to perform an NTT every clock cycle.
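As background, the sketch below shows a textbook radix-2 NTT over a toy field; it illustrates the transform being accelerated, not the paper's hardware design or Kyber's exact parameters (Kyber uses $q = 3329$ with an incomplete negacyclic NTT):

```python
Q, N, W = 17, 8, 9        # toy field: 9 is a primitive 8th root of unity mod 17

def ntt(a, w=W):
    """Recursive radix-2 Cooley-Tukey NTT: evaluates a at the powers of w."""
    n = len(a)
    if n == 1:
        return a
    even = ntt(a[0::2], w * w % Q)
    odd = ntt(a[1::2], w * w % Q)
    out, t = [0] * n, 1
    for i in range(n // 2):
        out[i] = (even[i] + t * odd[i]) % Q
        out[i + n // 2] = (even[i] - t * odd[i]) % Q   # uses w^(n/2) = -1
        t = t * w % Q
    return out

def intt(a):
    """Inverse NTT: run the NTT with w^-1, then scale by n^-1."""
    n_inv = pow(N, Q - 2, Q)
    return [x * n_inv % Q for x in ntt(a, pow(W, Q - 2, Q))]

# Cyclic polynomial multiplication mod (X^8 - 1, 17) via pointwise products.
f = [1, 2, 3, 0, 0, 0, 0, 0]
g = [5, 6, 0, 0, 0, 0, 0, 0]
h = intt([x * y % Q for x, y in zip(ntt(f), ntt(g))])
print(h)  # [5, 16, 10, 1, 0, 0, 0, 0]
```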
Pedro Branco, Giulio Malavolta
A threshold signature scheme allows a party to delegate its signing rights to $n$ parties, such that any subset of size $t$ can sign a message on its behalf. In this work, we show how to construct threshold signatures for any $t$ and $n$ from one-way functions, thus establishing the latter as a necessary and sufficient computational assumption. Our protocol makes non-black-box use of one-way functions, and can be generalized to other access structures, such as monotone policies.
Geng Wang, Ruoyi Kong, Dawu Gu
Quadratic functional encryption (QFE for short) is a cryptographic primitive which outputs the value of a quadratic function of two vectors, without leaking other information about the plaintext vectors. Since the first breakthrough of Baltico et al. (Crypto 2017), many constructions of QFE from bilinear groups have appeared. However, constructing more efficient QFE schemes and proving their security has always been a challenging task. While the generic bilinear group model (GBGM for short) can be used to construct highly efficient QFE schemes and prove their security, obtaining a security proof under the GBGM is difficult, and such proofs may contain undiscovered mistakes.
In this paper, we address this problem by presenting new techniques that culminate in an automated proof tool for QFE schemes, which can also be used to find potential attacks. Our automated proof tool shows that the RPB+19 scheme (Riffel et al., NIPS'19), the most efficient QFE scheme in the literature and one already used in several works, is in fact insecure, and it also produces an attack on the scheme. Finally, we present two new QFE schemes, each of which matches the efficiency of the RPB+19 scheme in one respect, and prove their security using our automated proof tool. Our new schemes are more efficient than all existing QFE schemes other than the RPB+19 scheme, which makes them the most efficient among all existing "secure" QFE schemes.
Chris Peikert, Zachary Pepin
The vast majority of work on the efficiency of lattice-based cryptography, including fully homomorphic encryption~(FHE), has relied on *cyclotomic* number fields and rings. This is because cyclotomics offer a wide variety of benefits, including good geometrical properties, fast ring arithmetic, and rich homomorphic operations like vectorized (SIMD) operations on "packed" plaintexts, automorphisms, and ring-switching. However, selecting a suitable cyclotomic that has the desired number and type of plaintext "slots," while also balancing security and efficiency, is a highly constrained problem that often lacks an ideal solution, resulting in wasted SIMD capacity and lost efficiency.
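A standard worked example illustrates the constraint. Over the $m$-th cyclotomic ring with plaintext prime $p \nmid m$, the number and type of SIMD slots are determined by the multiplicative order of $p$ modulo $m$:
$$\mathbb{Z}_p[X]/(\Phi_m(X)) \;\cong\; \mathbb{F}_{p^d}^{\,\phi(m)/d}, \qquad d = \operatorname{ord}_{(\mathbb{Z}/m\mathbb{Z})^\times}(p).$$
For instance, with $m = 16$ (so $\phi(m) = 8$), the prime $p = 17$ satisfies $17 \equiv 1 \pmod{16}$, giving $d = 1$ and eight slots of $\mathbb{F}_{17}$, whereas $p = 7$ has $7^2 \equiv 1 \pmod{16}$, giving $d = 2$ and only four slots of $\mathbb{F}_{49}$; the slot count and type thus depend rigidly on the arithmetic of $p$ and $m$.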
This work provides a suite of tools for instantiating ring-based lattice cryptography to work over *subfields* of cyclotomics, which provide more flexibility and better-fitting parameters for applications. A particular focus is on realizing FHE with *optimal plaintext packing* and homomorphic SIMD parallelism for *any* plaintext characteristic, along with efficient *packed bootstrapping* that fully exploits this parallelism.
Toward this end, this (two-part) work makes the following main technical contributions, all of which are catalyzed by Galois theory:
-- For sampling and decoding errors in encryption and decryption (respectively), we construct geometrically short, structured bases for the number rings of arbitrary subfields of prime-power cyclotomics (and hence their composites as well).
-- For fast ring arithmetic, we define and establish analogous structural properties for Chinese Remainder Theorem (CRT) bases in *abelian* number rings, and give specialized fast transforms that map between CRT bases and any similarly structured bases.
-- For packed bootstrapping and homomorphic linear algebra, we give a general framework for *homomorphic evaluation of structured linear transforms* in abelian number rings, and show that CRT transforms can be evaluated using relatively few homomorphic operations.
26 September 2025
Leiden University, LIACS; Leiden, The Netherlands
We are looking for a PhD student to work on Cryptographic Hardware and Design Automation. The project is oriented towards the development of a domain-specific design automation framework for cryptographic hardware. Your work will focus on identifying the mathematical knowledge and properties to guide hardware optimizations tailored to different environments. The optimizations range from algebraic optimizations (e.g., term rewriting) to algorithmic optimizations (e.g., group-level algorithms), and to hardware optimizations (e.g., automated pipelining).
Closing date for applications:
Contact: Nusa Zidaric
More information: https://www.universiteitleiden.nl/en/vacancies/2025/q3/16010-phd-candidate-cryptographic-hardware-and-design-automation