IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
05 September 2025
Gilad Asharov, Eliran Eiluz, Ilan Komargodski, Wei-Kai Lin
Oblivious RAM (ORAM) is a central cryptographic primitive that enables secure memory access while hiding access patterns.
Among existing ORAM paradigms, hierarchical ORAMs were long considered impractical despite their asymptotic optimality. However, recent advancements (FutORAMa, CCS'23) demonstrate that hierarchical ORAM-based schemes can be made efficient given sufficient client-side memory. In this work, we present a new hierarchical ORAM construction that achieves practical performance without requiring large local memory.
From a theoretical standpoint, we identify a gap in the literature concerning the asymmetric setting, where the logical word size is asymptotically smaller than the physical memory block size. In this scenario, the best-known construction (OptORAMa, J.\ ACM '23) turns every logical query into $O(\log N)$ physical memory accesses (a quantity known as ``I/O overhead''), whereas the lower bound of Komargodski and Lin (CRYPTO'21) implies that $\Omega(\log N /\log\log N)$ accesses are needed.
We close this gap by constructing an optimal ORAM for the asymmetric setting, achieving an I/O overhead of $O(\log N / \log\log N)$. Our construction features exceptionally small constants (between 1 and 4, depending on the block size) and operates without requiring large local memory. We implement our scheme and compare it to PathORAM (CCS'13) and FutORAMa, demonstrating significant improvement. For 1TB logical memory, our construction obtains a $\times 10$--$\times 30$ reduction in I/O overhead and bandwidth compared to PathORAM, and a $\times 7$--$\times 26$ improvement over FutORAMa. This improvement arises in part because those schemes were not designed to operate on large blocks, as in our setting; the exact improvement depends on the physical block size and the local memory available.
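As a rough illustration of the asymptotic gap described above (our sketch, not from the paper), the following compares the $O(\log N)$ I/O overhead of the prior construction with the $O(\log N / \log\log N)$ overhead matching the Komargodski--Lin lower bound, ignoring constants:

```python
import math

def io_overhead_log(n_words: int) -> float:
    # Overhead of a log N scheme (the OptORAMa-style bound cited above).
    return math.log2(n_words)

def io_overhead_optimal(n_words: int) -> float:
    # Overhead matching the lower bound: log N / log log N.
    lg = math.log2(n_words)
    return lg / math.log2(lg)

# Hypothetical parameters: 1 TB of logical memory with 8-byte words -> N = 2^37.
N = 2 ** 37
print(io_overhead_log(N))      # 37.0
print(round(io_overhead_optimal(N), 1))  # 7.1
```

Even at this scale the asymptotic separation is a factor of about 5; the concrete $\times 7$--$\times 30$ figures reported in the abstract also reflect constants and block-size effects.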
Thomas Schneider, Huan-Chih Wang, Hossein Yalame
Energy-efficient edge devices are essential for the widespread deployment of machine learning (ML) services. However, their limited computational capabilities make local model training infeasible. While cloud-based training offers a scalable alternative, it raises serious privacy concerns when sensitive data is outsourced. Homomorphic Encryption (HE) enables computation directly on encrypted data and has emerged as a promising solution to this privacy challenge. Yet, current HE-based training frameworks face several shortcomings: they often lack support for complex models and non-linear functions, struggle to train over multiple epochs, and require cryptographic expertise from end users.
We present HE-SecureNet, a novel framework for privacy-preserving model training on encrypted data in a single-client–server setting, using hybrid HE cryptosystems. Unlike prior HE-based solutions, HE-SecureNet supports advanced models such as Convolutional Neural Networks and handles non-linear operations including ReLU, Softmax, and MaxPooling. It introduces a level-aware training strategy that eliminates costly ciphertext level alignment across epochs. Furthermore, HE-SecureNet automatically converts ONNX models into optimized secure C++ training code, enabling seamless integration into privacy-preserving ML pipelines without requiring cryptographic knowledge.
Experimental results demonstrate the efficiency and practicality of our approach. On the Breast Cancer dataset, HE-SecureNet achieves a 5.2× speedup and 33% higher accuracy compared to ConcreteML (Zama) and TenSEAL (OpenMined). On the MNIST dataset, it reduces CNN training latency by 2× relative to Glyph (Lou et al., NeurIPS’20), and cuts communication overhead by up to 66× on MNIST and 42× on CIFAR-10 compared to MPC-based solutions.
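To see why ciphertext level management across epochs matters, here is a toy bookkeeping model of leveled HE (purely illustrative; not the HE-SecureNet implementation, and the parameter choices are ours): each ciphertext multiplication consumes one level, and when the budget runs out an expensive refresh (e.g., bootstrapping) is required.

```python
def refreshes_needed(epochs: int, mults_per_epoch: int, level_budget: int) -> int:
    """Count costly ciphertext refreshes in a toy leveled-HE training loop.

    Each ciphertext-ciphertext multiplication consumes one level; when no
    levels remain, a refresh restores the full budget. Hypothetical model.
    """
    levels, refreshes = level_budget, 0
    for _ in range(epochs * mults_per_epoch):
        if levels == 0:
            refreshes += 1
            levels = level_budget
        levels -= 1
    return refreshes

# One epoch of multiplicative depth 4 fits a budget of 20 levels with no
# refresh, but ten epochs force a refresh unless levels are managed.
print(refreshes_needed(epochs=1, mults_per_epoch=4, level_budget=20))   # 0
print(refreshes_needed(epochs=10, mults_per_epoch=4, level_budget=20))  # 1
```

A level-aware strategy, as the abstract describes, plans the computation so that such mid-training refreshes (and cross-epoch level alignment) are avoided entirely.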
MINKA MI NGUIDJOI Thierry Emmanuel
We introduce the Affine Iterated Inversion Problem (AIIP), a new candidate hard problem for post-quantum cryptography, based on inverting iterated polynomial maps over finite fields. Given a polynomial $f \in \mathbb{F}_q[x]$ of degree $d \geq 2$, an iteration parameter $n$, and a target $y \in \mathbb{F}_q$, AIIP requires finding an input $x$ such that $f^{(n)}(x) = y$, where $f^{(n)}$ denotes the $n$-fold composition of $f$. We establish the computational hardness of AIIP through two independent analytical frameworks: first, by establishing a formal connection to the Discrete Logarithm Problem in the Jacobian of hyperelliptic curves of exponentially large genus; second, via a polynomial-time reduction to solving structured systems of multivariate quadratic (MQ) equations. The first construction provides number-theoretic evidence for hardness by embedding an AIIP instance into the arithmetic of a high-genus curve, while the second reduction proves worst-case hardness relative to the NP-hard MQ problem. For the quadratic case $f(x) = x^2 + \alpha$, we show that the induced MQ system is heuristically indistinguishable from a random system, and we formalize a sufficient condition for its pseudorandomness under a standard cryptographic assumption. We provide a detailed security analysis against classical and quantum attacks, derive concrete parameters for standard security levels, and discuss the potential of AIIP as a foundation for digital signatures and public-key encryption. This dual hardness foundation, rooted in both algebraic geometry and multivariate algebra, positions AIIP as a versatile and promising primitive for post-quantum cryptography.
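To make the problem statement concrete, here is a toy AIIP instance (our sketch with deliberately tiny parameters; real parameters are far larger): iterate $f(x) = x^2 + \alpha$ over $\mathbb{F}_q$, which is easy in the forward direction, and recover a preimage by brute force, which is only feasible because $q$ is small.

```python
def iterate(f, x, n, q):
    # Compute the n-fold composition f^(n)(x) over F_q.
    for _ in range(n):
        x = f(x) % q
    return x

q, alpha, n = 101, 3, 5          # toy parameters, chosen for illustration
f = lambda x: x * x + alpha      # the quadratic map f(x) = x^2 + alpha

y = iterate(f, 7, n, q)          # forward direction: n modular squarings
# AIIP asks for some x with f^(n)(x) = y; brute force works only for tiny q.
preimages = [x for x in range(q) if iterate(f, x, n, q) == y]
assert 7 in preimages
```

For cryptographic parameters the search space is exponential, and the abstract's two reductions are what give evidence that no structured shortcut exists.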
Kaveh Dastouri
We introduce a novel public-key cryptosystem based on the symmetric groups $S_{p_1} \times S_{p_2}$, where $p_1, p_2$ are large primes. The modulus $N = f(\lambda_1) \cdot f(\lambda_2)$, with partitions $\lambda_1 \in P(p_1)$, $\lambda_2 \in P(p_2)$, and $f(\lambda_i) = |C_{\lambda_i}| \cdot m_1(\lambda_i)$, leverages conjugacy class sizes to ensure large prime factors, including $p_1, p_2$. A partition selection strategy using non-repeated composition numbers guarantees robust security, surpassing RSA by supporting multiple large primes and deterministic key generation. Efficient decryption is achieved via known factorizations, and a lightweight symmetric hash primitive provides message authentication. We provide rigorous security analysis, practical implementation, and comparisons to multi-prime RSA, advancing algebraic cryptography for modern applications.
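The quantities in this abstract can be computed with standard symmetric-group arithmetic (a sketch under the abstract's definitions; the partition and $n$ below are our toy choices): for a partition $\lambda$ of $n$ with $m_i$ parts of size $i$, the textbook formula gives $|C_\lambda| = n! / \prod_i i^{m_i} m_i!$, and $m_1(\lambda)$ counts the parts equal to 1.

```python
from math import factorial
from collections import Counter

def conjugacy_class_size(partition):
    """|C_lambda| for a partition of n in S_n: n! / prod_i (i^{m_i} * m_i!)."""
    n = sum(partition)
    counts = Counter(partition)        # m_i = multiplicity of part size i
    denom = 1
    for i, m in counts.items():
        denom *= (i ** m) * factorial(m)
    return factorial(n) // denom

def f_of_lambda(partition):
    # f(lambda) = |C_lambda| * m_1(lambda), per the abstract's definition.
    m1 = list(partition).count(1)
    return conjugacy_class_size(partition) * m1

# Toy partition of n = 5: lambda = (3, 1, 1), so m_1(lambda) = 2.
lam = (3, 1, 1)
print(conjugacy_class_size(lam))  # 20
print(f_of_lambda(lam))           # 40
```

As a sanity check, the class sizes over all partitions of $n$ sum to $n!$, since the conjugacy classes partition $S_n$.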
Anubhav Baweja, Pratyush Mishra, Tushar Mopuri, Matan Shtepel
We present the first IOPP for a linear-time encodable code that achieves linear prover time and $O(\lambda)$ query complexity, for a broad range of security parameters $\lambda$. No prior work is able to simultaneously achieve this efficiency: it either supports linear-time encodable codes but with worse query complexity [FICS; ePrint 2025], or achieves $O(\lambda)$ query complexity but only for quasilinear-time encodable codes [Minzer, Zheng; FOCS 2025]. Furthermore, we prove a matching lower bound that shows that the query complexity of our IOPP is asymptotically optimal (up to additive factors) for codes with constant rate.
We obtain our result by tackling a ubiquitous subproblem in IOPP constructions: checking that a batch of claims hold. Our novel solution to this subproblem is twofold. First, we observe that it is often sufficient to ensure that, with all but negligible probability, most of the claims hold. Next, we devise a new `lossy batching' technique which convinces a verifier of the foregoing promise with lower query complexity than that required to convince it that all the claims hold. This method differs significantly from the line-versus-point test used to achieve query-optimal IOPPs (for quasilinear-time encodable codes) in prior work [Minzer, Zheng; FOCS 2025], and may be of independent interest.
Our IOPP can handle all codes that support efficient codeswitching [Ron-Zewi, Rothblum; JACM 2024], including several linear-time encodable codes. Via standard techniques, our IOPP can be used to construct the first (to the best of our knowledge) IOP for NP with $O(n)$ prover time and $O(\lambda)$ query complexity. We additionally show that our IOPP (and by extension the foregoing IOP) is round-by-round tree-extractable and hence can be used to construct a SNARK in the random oracle model with $O(n)$ prover time and $O(\lambda \log n)$ proof size.
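The relaxation behind `lossy batching' -- certifying only that *most* claims hold -- can be illustrated with a standard sampling argument (our toy sketch, not the paper's protocol): to catch a batch in which more than a $\delta$ fraction of claims are false, with soundness error $2^{-\lambda}$, roughly $\lambda / \log_2\frac{1}{1-\delta}$ uniform queries suffice, independent of the batch size.

```python
import math
import random

def queries_needed(delta: float, security_bits: int) -> int:
    # If > delta fraction of claims are false, each uniform query hits a
    # true claim with probability < 1 - delta, so k queries give soundness
    # error < (1 - delta)^k.  Solve (1 - delta)^k <= 2^(-security_bits).
    return math.ceil(security_bits / -math.log2(1 - delta))

def sample_check(claims, k, rng):
    # Accept iff k uniformly random claims all hold (True = claim holds).
    return all(rng.choice(claims) for _ in range(k))

k = queries_needed(delta=0.1, security_bits=40)
print(k)  # 264

# A batch where 20% of claims are false is rejected with overwhelming probability.
rng = random.Random(0)
claims = [True] * 800 + [False] * 200
print(sample_check(claims, k, rng))  # False (rejected)
```

Note the query count here still scales linearly in the security parameter; the paper's contribution is achieving $O(\lambda)$ queries together with linear prover time for linear-time encodable codes.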
Nakul Khambhati, Joonwon Lee, Gary Song, Rafail Ostrovsky, Sam Kumar
Organizations increasingly need to pool their sensitive data for collaborative computation while keeping their own data private from each other. One approach is to use a family of cryptographic protocols called Secure Multi-Party Computation (MPC). Another option is to use a set of cloud services called clean rooms. Unfortunately, neither approach is satisfactory. MPC is orders of magnitude more resource-intensive than regular computation, making it impractical for workloads like data analytics and AI. Clean rooms do not give users the flexibility to perform arbitrary computations.
We propose and develop an approach and system called a secure agent and utilize it to create a virtual clean room, Flexroom, that is both performant and flexible. Secure agents enable parties to create a phantom identity that they can collectively control, using maliciously secure MPC, which issues API calls to external services with parameters that remain secret from all participating parties. Importantly, in Flexroom, the secure agent uses MPC not to perform the computation itself, but instead merely to orchestrate the computation in the cloud, acting as a distinct trusted entity jointly governed by all parties. As a result, Flexroom enables collaborative computation with unfettered flexibility, including the ability to use convenient cloud services. By design, the collaborative computation runs at plaintext speeds, so the overhead of Flexroom will be amortized over a long computation.
Ritam Bhaumik, Avijit Dutta, Tetsu Iwata, Ashwin Jha, Kazuhiko Minematsu, Mridul Nandi, Yu Sasaki, Meltem Sönmez Turan, Stefano Tessaro
We consider FB-PRF, one of the key derivation functions defined in NIST SP 800-108, constructed from a pseudorandom function in feedback mode. The standard allows some flexibility in the specification, and we show that one specific instance of FB-PRF admits an efficient distinguishing attack.
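For context, a feedback-mode KDF chains each PRF output into the next PRF call. The sketch below is one illustrative instantiation with HMAC-SHA256 as the PRF (the standard leaves the IV, the optional counter, and the encoding widths open, so this is not necessarily the attacked instance):

```python
import hmac
import hashlib

def kdf_feedback(k_in: bytes, label: bytes, context: bytes, out_len: int,
                 iv: bytes = b"", use_counter: bool = True) -> bytes:
    """Sketch of an SP 800-108 feedback-mode KDF with HMAC-SHA256 as the PRF.

    K(i) = PRF(k_in, K(i-1) || [i] || Label || 0x00 || Context || [L]),
    where the counter [i] is optional and K(0) is the (possibly empty) IV.
    Parameter encodings here are illustrative choices, not normative.
    """
    L = (out_len * 8).to_bytes(4, "big")   # output length in bits
    out, k_prev, i = b"", iv, 1
    while len(out) < out_len:
        data = k_prev
        if use_counter:
            data += i.to_bytes(4, "big")   # optional iteration counter [i]
        data += label + b"\x00" + context + L
        k_prev = hmac.new(k_in, data, hashlib.sha256).digest()
        out += k_prev
        i += 1
    return out[:out_len]

okm = kdf_feedback(b"\x00" * 32, b"label", b"ctx", 48)
assert len(okm) == 48
```

The flexibility is visible above: dropping the counter, for example, makes each block depend only on the previous block and the fixed input data, which changes the structure the security analysis must cover.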
Yi-Fu Lai, Edoardo Persichetti
Recently, Hanzlik, Lai, Paracucchi, Slamanig, Tang proposed several blind signature frameworks, collectively named Tanuki(s) (Asiacrypt'25), built upon cryptographic group actions. Their work introduces novel techniques and culminates in a concurrently secure blind signature framework. Straightforward instantiations based on CSIDH (CSI-FiSh) and LESS yield signature sizes of 4.5 KB and 64 KB respectively, providing the first efficient blind signatures in the isogeny-based and code-based literature allowing concurrent executions.
In this work, we improve the code-based instantiations through a careful treatment of the canonical form of linearly equivalent codes. However, the canonical form does not naturally support a group action structure, which is central to the security proofs of Tanuki(s). Consequently, the original security guarantees do not directly apply. To address this, we develop two distinct non-black-box reductions, for blindness and for one-more unforgeability, showing that the improvements do not compromise security.
This results in a concurrently secure code-based blind signature scheme with a compact signature size of 4.4 KB, approximately 1% smaller than the isogeny-based one. We also provide a C implementation in which signing takes 99 ms and 268 Mcycles on an Intel i7 2.3 GHz CPU. We expect our approaches to also benefit advanced constructions built on top of LESS in the future.