International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

12 September 2025

Chi Feng, Lei Fan
ePrint Report
This paper presents BlockLens, a supervised, trace-level framework for detecting malicious Ethereum transactions using large language models (LLMs). Unlike previous approaches that rely on static features or storage-level abstractions, our method processes complete execution traces, capturing opcode sequences, memory information, gas usage, and call structures to accurately represent the runtime behavior of each transaction. The framework leverages the ability of LLMs to reason over long input sequences and is fine-tuned on transaction data.

We present a tokenization strategy aligned with Ethereum Virtual Machine (EVM) semantics that converts transaction execution traces into tokens. Each transaction captures its complete execution trace through simulated execution and is sliced into overlapping chunks using a sliding window, allowing for long-range context modeling within memory constraints. During inference, the model outputs both a binary decision and a probability score indicating the likelihood of malicious behavior.
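The sliding-window slicing described above can be sketched in a few lines. The function, window and stride sizes, and token names below are illustrative assumptions for exposition, not the paper's actual pipeline:

```python
def chunk_trace(tokens, window=8, stride=4):
    """Slice a token sequence into overlapping chunks.

    The overlap of (window - stride) tokens preserves context across
    chunk boundaries while bounding each chunk's length, which is the
    role the sliding window plays in the framework described above."""
    if len(tokens) <= window:
        return [tokens]
    return [tokens[i:i + window]
            for i in range(0, len(tokens) - window + stride, stride)]

# Toy EVM-like opcode trace (hypothetical tokenization).
trace = ["PUSH1", "PUSH1", "ADD", "SSTORE", "CALL", "GAS",
         "MLOAD", "MSTORE", "RETURN", "STOP"]
chunks = chunk_trace(trace, window=4, stride=2)  # 4 overlapping chunks
```

Each chunk is then classified independently, and the per-chunk scores support the chunk-level explainability mentioned below.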

We implemented the framework based on LLaMA 3.2-1B and fine-tuned the model using LoRA. We evaluated it on a curated dataset that includes both real-world attacks and normal DeFi transactions. Our model outperforms representative baselines, achieving higher F1 scores and recall at top-k thresholds. Additionally, this work offers interpretable chunk-level outputs that enhance explainability and facilitate actionable decision-making in security-critical environments.
Sohyun Jeon, Calvin Abou Haidar, Mehdi Tibouchi
ePrint Report
In this paper, we construct the first lattice-based threshold ring signature scheme with signature size scaling logarithmically in the size of the ring while supporting arbitrary thresholds. Our construction is also concretely efficient, achieving signature sizes of less than 150kB for ring sizes up to $N = 4096$ (with threshold size $T=N/2$, say). This is substantially more compact than previous work.

Our approach is inspired by the recent work of Aardal et al. (CRYPTO 2024) on the compact aggregation of $\mathsf{Falcon}$ signatures, which uses the $\mathsf{LaBRADOR}$ lattice-based SNARK to combine a collection of $\mathsf{Falcon}$ signatures into a single succinct argument of knowledge of those signatures. We proceed in a similar way to obtain compact threshold ring signatures from $\mathsf{Falcon}$, but crucially require that the proof system be zero-knowledge in order to ensure the privacy of signers. Since $\mathsf{LaBRADOR}$ is not a zkSNARK, we combine it with a separate (non-succinct) lattice-based zero-knowledge proof system to achieve our desired properties.
Cheng Che, Tian Tian
ePrint Report
Differential-linear cryptanalysis was introduced by Langford and Hellman at CRYPTO'94 and has since been an important cryptanalysis method against symmetric-key primitives. The current primary framework for constructing differential-linear distinguishers divides the cipher into three parts: the differential part $E_0$, the middle connection part $E_m$, and the linear part $E_1$. This framework was first proposed at EUROCRYPT 2019, where the differential-linear connectivity table (DLCT) was introduced to evaluate the differential-linear bias of $E_m$ over a single round. Recently, the TDT method and the generalized DLCT method were proposed at CRYPTO 2024 to evaluate the differential-linear bias of $E_m$ covering multiple rounds. Unlike the DLCT framework, the DATF technique can also handle an $E_m$ covering more rounds.

In this paper, we enhance the DATF technique in differential-linear cryptanalysis from three perspectives. First, we improve the precision of differential-linear bias estimation by introducing new transitional rules, a backtracking strategy, and a partitioning technique to DATF. Second, we present a general bias computation method for Boolean functions that significantly reduces computational complexity compared to the exhaustive search used by Liu et al. in the previous DATF technique. Third, we propose an effective method for searching for differential-linear distinguishers with good biases based on DATF. Additionally, the bias computation method has independent interests with a wide application in other cryptanalysis methods, such as differential cryptanalysis and cube attacks. Notably, all these enhancements to DATF are equally applicable to HATF. To show the validity and versatility of our new techniques, we apply the enhanced DATF to the NIST standard Ascon, the AES finalist Serpent, the NIST LWC finalist Xoodyak, and the eSTREAM finalist Grain v1. In all applications, we either present the first differential-linear distinguishers for more rounds or update the best-known ones.
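For intuition, the exhaustive bias computation that serves as the baseline here can be written directly; the toy function below is purely illustrative and is not the paper's algorithm:

```python
from itertools import product

def bias(f, n):
    """Exhaustively compute the bias Pr[f(x) = 0] - 1/2 of an
    n-variable Boolean function f: the brute-force baseline whose
    exponential cost the improved computation method avoids."""
    zeros = sum(1 for x in product((0, 1), repeat=n) if f(*x) == 0)
    return zeros / 2**n - 0.5

# The 3-bit majority function is balanced, so its bias is 0.
maj3 = lambda a, b, c: (a & b) | (a & c) | (b & c)
assert bias(maj3, 3) == 0.0
# AND is biased toward 0: Pr[AND = 0] = 3/4, so the bias is 1/4.
assert bias(lambda a, b: a & b, 2) == 0.25
```

The exhaustive loop visits all $2^n$ inputs, which is exactly what becomes infeasible for the state sizes arising in ciphers like Ascon or Grain v1.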
Akhil Bandarupalli, Xiaoyu Ji, Soham Jog, Aniket Kate, Chen-Da Liu-Zhang, Yifan Song
ePrint Report
Verifiable Secret Sharing (VSS) is a fundamental primitive in threshold cryptography and multi-party computation. It preserves secrecy, integrity, and availability of a shared secret for a fixed set of parties, even when a subset of them is malicious. In practical applications, especially when the secret sharing is expected to be maintained over long durations, the VSS scheme should cater to a dynamic setting where the involved parties may change. The primitive known as Dynamic Proactive Secret Sharing (DPSS) is beneficial here, as it facilitates the secure transfer of secrets from the original committee to a new committee. Nevertheless, prior DPSS protocols either rely on unrealistic time bounds on message delivery or have a high computational cost, which limits their scalability beyond tens of parties.

In this work, we present a scalable asynchronous DPSS protocol that utilizes lightweight cryptographic tools, such as hash functions and symmetric-key encryption. Our protocol achieves full security and optimal fault tolerance with amortized linear communication costs. Unlike existing solutions, our proposed protocol is also post-quantum secure. By balancing computation and communication, our approach offers practical performance at scale. Our implementation results demonstrate improved scalability and efficiency, surpassing the current state of the art with $22.1\times$ lower latency than the prior best work. Furthermore, our solution scales gracefully with increasing $n$, resharing a batch of $100,000$ secrets between committees of size $n=112$ parties in under a minute.
Akhil Bandarupalli, Xiaoyu Ji, Aniket Kate, Chen-Da Liu-Zhang, Daniel Pöllmann, Yifan Song
ePrint Report
Multi-party computation (MPC) enables a set of $n$ mutually distrusting parties to compute any function on their private inputs. MPC facilitates agreement on the function’s output while preserving the secrecy of honest inputs, even against a subset of $t$ parties controlled by an adversary. With applications spanning from anonymous broadcast to private auctions, MPC is considered a cornerstone of distributed cryptography, and significant research effort has been aimed at making MPC practical over the last decade. However, most libraries either make strong assumptions, such as the network being bounded synchronous, or incur high computation overhead from the extensive use of expensive public-key operations, which prevents them from scaling beyond a few dozen parties.

This work presents Velox, an asynchronous MPC protocol that offers fairness against an optimal adversary corrupting up to $t<\frac{n}{3}$ parties. Velox significantly enhances practicality by leveraging lightweight cryptographic primitives, such as symmetric-key encryption and hash functions, which are 2-3 orders of magnitude faster than public-key operations, resulting in substantial computational efficiency. Moreover, Velox is highly communication-efficient, with linear amortized communication relative to circuit size and only $\mathcal{O}(n^3)$ field elements of additive overhead. Concretely, Velox requires just $9.33$ field elements per party per multiplication gate, a more than $10\times$ reduction compared to the state of the art. Velox also offers post-quantum security, as lightweight cryptographic primitives retain their security against a quantum adversary.

We implement Velox comprehensively, covering both offline and online phases, and evaluate its performance on a geographically distributed testbed through a real-world application: anonymous broadcast. Our implementation securely shuffles a batch of $k=256$ messages in $4$ seconds with $n=16$ parties and $18$ seconds with $n=64$ parties, a $36\times$ and $28.6\times$ reduction in latency compared to the prior best work. At scale with $n=112$ parties, Velox is able to shuffle the same batch of messages in under $50$ seconds from end to end, illustrating its effectiveness and scalability. Overall, our work removes significant barriers faced by prior asynchronous MPC solutions, making asynchronous MPC practical and efficient for large-scale deployments involving $100$s of parties.
Simon Damm, Asja Fischer, Alexander May, Soundes Marzougui, Leander Schwarz, Henning Seidler, Jean-Pierre Seifert, Jonas Thietke, Vincent Quentin Ulitzsch
ePrint Report
Lattice-based signatures like Dilithium (ML-DSA) prove knowledge of a secret key $\vec s \in \mathbb{Z}^n$ by using Integer LWE (ILWE) samples $z = \langle \vec c, \vec s \rangle + y$, for some known hash value $\vec c \in \mathbb{Z}^n$ of the message and unknown error $y$. Rejection sampling guarantees zero-knowledge, which makes the ILWE problem, which asks to recover $\vec s$ from many $z$'s, unsolvable.

Side-channel attacks partially recover $y$, thereby obtaining more informative samples resulting in a—potentially tractable—ILWE problem. The standard method to solve the resulting problem is Ordinary Least Squares (OLS), which requires independence of $y$ from $\langle \vec c, \vec s \rangle$—an assumption that is violated by zero-knowledge samples.

We present efficient algorithms for a variant of the ILWE problem that was not addressed in prior work, which we coin Concealed ILWE (CILWE). In this variant, only a fraction of the ILWE samples is zero-knowledge. We call this fraction the concealment rate. This ILWE variant naturally occurs in side-channel attacks on lattice-based signatures. A case in point are profiling side-channel attacks on Dilithium implementations that classify whether $y = 0$. This gives rise to either zero-error ILWE samples $z = \langle \vec c, \vec s \rangle$ with $y = 0$ (in case of correct classification), or ordinary zero-knowledge ILWE samples (in case of misclassification).

As we show, OLS is not practical for CILWE instances, as it requires a prohibitively large number of samples even for small (under 10%) concealment rates. A known integer linear programming (ILP) based approach can solve some CILWE instances, but suffers from two shortcomings. First, it lacks provable efficiency guarantees, as ILP is NP-hard in the worst case. Second, it does not utilize samples with small, independent errors $y$, which can occur in addition to zero-knowledge samples. We introduce two statistical regression methods to cryptanalysis, Huber and Cauchy regression. Both are efficient and can handle instances with all three types of samples. At the same time, they are capable of handling high concealment rates, up to 90% in practical experiments. While Huber regression comes with theoretically appealing correctness guarantees, Cauchy regression performs best in practice. We use this efficacy to execute a novel profiling attack against a masked Dilithium implementation. The resulting ILWE instances suffer from both concealment and small, independent errors. As such, neither OLS nor ILP can recover the secret key. Cauchy regression, however, allows us to recover the secret key in under three minutes for all NIST security levels.
Pratish Datta, Junichi Tomida, Nikhil Vanjani
ePrint Report
We revisit decentralized multi‑authority attribute‑based encryption (MA‑ABE) through the lens of fully adaptive security -- the most realistic setting, in which an adversary can decide on‑the‑fly which users and which attribute authorities to corrupt. Previous constructions either tolerated only static authority corruption or relied on a highly complex “dual system with dual‑subsystems” proof technique that inflated ciphertexts and keys.

Our first contribution is a streamlined security analysis showing -- perhaps surprisingly -- that the classic Lewko–Waters MA-ABE scheme [EUROCRYPT 2011] already achieves full adaptive security, provided its design is carefully reinterpreted and, more crucially, its security proof is re-orchestrated to conclude with an information-theoretic hybrid in place of the original target-group-based computational step. By dispensing with dual subsystems and target-group-based assumptions, we achieve a significantly simpler and tighter security proof along with a more lightweight implementation. Our construction reduces ciphertext size by 33 percent, shrinks user secret keys by 66 percent, and requires 50 percent fewer pairing operations during decryption -- all while continuing to support arbitrary collusions of users and authorities. These improvements mark a notable advance over the state-of-the-art fully adaptive decentralized MA-ABE scheme of Datta et al. [EUROCRYPT 2023]. We instantiate the scheme in both composite‑order bilinear groups under standard subgroup‑decision assumptions and in asymmetric prime‑order bilinear groups under the Matrix‑Diffie–Hellman assumption. We further show how the Kowalczyk–Wee attribute‑reuse technique [EUROCRYPT 2019] seamlessly lifts our construction from ``one‑use’’ boolean span programs (BSP) to ``multi‑use’’ policies computable in $\mathsf{NC^{1}}$, resulting in a similarly optimized construction over the state-of-the-art by Chen et al. [ASIACRYPT 2023].

Going beyond the Boolean world, we present the first MA-ABE construction for arithmetic span program (ASP) access policies, capturing a richer class of Boolean, arithmetic, and combinatorial computations. This advancement also enables improved concrete efficiency by allowing attributes to be handled directly as field elements, thereby eliminating the overhead of converting arithmetic computations into Boolean representations. The construction -- again presented in composite and prime orders -- retains decentralization and adaptive user‑key security, and highlights inherent barriers to handling corrupted authorities in the arithmetic setting.
Zeyu Liu, Yunhao Wang, Ben Fisch
ePrint Report
Fully homomorphic encryption (FHE) is a powerful primitive used in many real-world applications, with IND-CPA as its standard security guarantee. Li and Micciancio [Eurocrypt'21] introduced IND-CPA-D security, which strengthens standard IND-CPA security by allowing the attacker to access a decryption oracle for honestly generated ciphertexts (generated via either an encryption oracle or an honest homomorphic circuit evaluation process).

More recently, Jung et al. [CCS'24] and Checri et al. [Crypto'24] have shown that even exact FHE schemes like FHEW/TFHE/BGV/BFV may still not be IND-CPA-D secure, by exploiting bootstrapping failures. However, such existing attacks can be mitigated by setting the bootstrapping failure probability to be negligible.

On the other hand, Liu and Wang [Asiacrypt'24] proposed relaxed functional bootstrapping, which offers orders-of-magnitude performance improvements and, furthermore, allows a free function evaluation during bootstrapping. These efficiency advantages make it a competitive choice in many applications, but its ``relaxed'' nature also opens new directions for IND-CPA-D attacks. In this work, we show that the underlying secret key can be recovered within 10 minutes against all existing relaxed functional bootstrapping constructions, and even within 1 minute for some of them. Moreover, our attack works even with a negligible bootstrapping failure probability, making it immune to existing mitigation methods.

Additionally, we propose a general fix that mitigates all the existing modulus-switching-error-based attacks, including ours, in the IND-CPA-D model. This is achieved by constructing a new modulus switching procedure with essentially no overhead. Lastly, we show that IND-CPA-D may not be sufficient for some applications, even in the passive adversary model. Thus, we extend this model to IND-CPA-D with randomness (IND-CPA-DR).
Kigen Fukuda, Shin’ichiro Matsuo
ePrint Report
The emergence of Cryptographically Relevant Quantum Computers (CRQCs) poses an existential threat to the security of contemporary blockchain networks, which rely on public-key cryptography vulnerable to Shor’s algorithm. While the need for a transition to Post-Quantum Cryptography (PQC) is widely acknowledged, the evolution of blockchains from simple transactional ledgers to complex, multi-layered financial ecosystems has rendered early, simplistic migration plans obsolete. This paper provides a comprehensive analysis of the blockchain PQC migration landscape as it stands in 2025. We dissect the core technical challenges, including the performance overhead of PQC algorithms, the loss of signature aggregation efficiency vital for consensus and scalability, and the cascading complexities within Layer 2 protocols and smart contracts. Furthermore, the analysis extends to critical operational and socio-economic hurdles, such as the ethical dilemma of dormant assets and the conflicting incentives among diverse stakeholders including users, developers, and regulators. By synthesizing ongoing community discussions and roadmaps for Bitcoin, Ethereum, and others, this work establishes a coherent framework to evaluate migration requirements, aiming to provide clarity for stakeholders navigating the path toward a quantum-secure future.
Véronique Cortier, Alexandre Debant, Olivier Esseiva, Pierrick Gaudry, Audhild Høgåsen, Chiara Spadafora
ePrint Report
Internet voting in Switzerland for political elections is strongly regulated by the Federal Chancellery (FCh). It puts great emphasis on individual verifiability: security against a corrupted voting device is ensured via return codes sent by postal mail. For a long time, the FCh accepted trusting an offline component to set up data, in particular the voting material. Today, the FCh aims to remove this strong trust assumption. We propose a protocol that complies with this new requirement. At the heart of our system lies a setup phase in which several parties create the voting material in a distributed way, while allowing one of the parties to remain offline during the voting phase. A complication arises from the fact that the voting material has to be printed, sent by postal mail, and then used by the voter to perform several operations that are critical for security. Usability constraints are taken into account in our design, both in terms of computational complexity (linear setup and tally) and in terms of user experience (we ask the voter to type a high-entropy string only once). The security of our scheme is proved in a symbolic setting, using the ProVerif prover, for various corruption scenarios, demonstrating that it fulfills the Chancellery’s requirements and sometimes goes slightly beyond them.
Sven Schäge, Marc Vorstermans
ePrint Report
We make progress on the foundational problem of determining the strongest security notion achievable by homomorphic encryption. Our results are negative. We prove that a wide class of semi-homomorphic public key encryption schemes (SHPKE) cannot be proven IND-PCA secure (indistinguishability against plaintext checkability attacks), a relaxation of IND-CCA security. This class includes widely used and versatile schemes like ElGamal PKE, Paillier PKE, and the linear encryption system by Boneh, Boyen, and Shacham (Crypto'04). Besides their homomorphic properties, these schemes have in common that they all provide efficient algorithms for recognizing valid ciphertexts (and public keys) from public values. In contrast to CCA security, where the adversary is given access to a decryption oracle, in a PCA attack the adversary solely has access to an oracle that decides for a given ciphertext/plaintext pair $(c,m)$ whether decrypting $c$ indeed gives $m$. Since the notion of IND-PCA security is weaker than IND-CCA security, it is not only easier to achieve, leading to potentially more efficient schemes in practice, but it also side-steps existing impossibility results that rule out IND-CCA security. To rule out IND-PCA security we thus have to rely on novel techniques. We provide two results, depending on whether the attacker is allowed to query the PCA oracle after it has received the challenge (IND-PCA2 security) or not (IND-PCA1 security -- the more challenging scenario). First, we show that IND-PCA2 security can be ruled out unconditionally if the number of challenges is smaller than the number of queries made after the challenge. Next, we prove that no Turing reduction can reduce the IND-PCA1 security of SHPKE schemes with $O(\kappa^3)$ PCA queries overall to interactive complexity assumptions that support $t$-time access to their challenger with $t=O(\kappa)$.
To obtain our second impossibility result, we develop a new meta-reduction-based methodology that can be used to tackle security notions where the attacker is granted access to a \emph{decision} oracle. This makes it challenging to utilize the techniques of existing meta-reduction-based impossibility results, which focus on definitions where the attacker is allowed to access an inversion oracle (e.g., long-term key corruptions or a signature oracle). To obtain our result, we have to overcome several technical challenges that are entirely novel to the setting of public key encryption.

11 September 2025

Ruida Wang, Jikang Bai, Xuan Shen, Xianhui Lu, Zhihao Li, Binwu Xiang, Zhiwei Wang, Hongyu Wang, Lutan Zhao, Kunpeng Wang, Rui Hou
ePrint Report
Fully Homomorphic Encryption (FHE) enables computation over encrypted data, but deployment is hindered by the gap between plaintext and ciphertext programming models. FHE compilers aim to automate this translation, with a promising approach being an FHE Instruction Set Architecture (FHE-ISA) based on homomorphic look-up tables (LUTs). However, existing FHE LUT techniques are limited to 16-bit precision and face critical performance bottlenecks. We introduce Tetris, a versatile TFHE LUT framework for high-precision FHE instructions. Tetris incorporates three advances: a) a GLWE-based design with refined noise control, enabling up to 32-bit LUTs; b) a batched TFHE circuit bootstrapping algorithm that enhances LUT performance; and c) adaptive parameterization and a parallel execution strategy optimized for high-precision evaluation. These techniques deliver: a) the first general FHE instruction set with 16-bit bivariate and 32-bit univariate operations; b) performance improvements of 2×-863× over modern TFHE LUT approaches, and 2901× lower latency than the leading CKKS-based solution [CRYPTO'25]; and c) up to 65× speedups over existing FHE-ISA implementations, and 3×-40× faster execution than related FHE compilers.
Fredrik Meisingseth, Christian Rechberger, Fabian Schmid
ePrint Report
At Crypto'24, Bell et al. propose a definition of Certified Differential Privacy (DP) and a quite general blueprint for satisfying it. The core of the blueprint is a new primitive called random variable commitment schemes (RVCS). Bell et al. construct RVCSs for fair coins and binomial distributions. In this work, we prove three lemmata that enable simple modular design of new RVCSs: First, we show that the properties of RVCSs are closed under polynomial sequential composition. Secondly, we show that homomorphically evaluating a function $f$ on the output of an RVCS for distribution $Z$ leads to an RVCS for distribution $f(Z)$. Thirdly, we show (roughly) that applying a `Commit-and-Prove'-style proof of knowledge for a function $f$ to the output of an RVCS for distribution $Z$ results in an RVCS for distribution $f(Z)$. Together, these lemmata imply that there exist RVCSs for any distribution that can be sampled exactly in strict polynomial time, under, for instance, the discrete logarithm assumption. This is, to the best of our knowledge, the first result establishing the possibility of RVCSs for arbitrary distributions. We demonstrate the usefulness of these lemmata by constructing RVCSs for arbitrarily biased coins and a discrete Laplace distribution, leading to a certified DP protocol for a discrete Laplace mechanism. We observe that the definitions of Bell et al. do not directly allow sampling algorithms with non-zero honest abort probability, which rules out many practical sampling algorithms. We propose a slight relaxation of the definitions which enables the use of sampling methods with negligible abort probability. It is with respect to these weakened definitions that we prove our certified discrete Laplace mechanism.
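The idea behind the second lemma (homomorphically evaluating $f$ on committed randomness) can be illustrated with a toy additively homomorphic Pedersen commitment, taking $f$ to be the sum that turns fair coins into a binomial sample. The tiny group parameters below are purely illustrative and offer no security:

```python
import secrets

# Tiny toy group for illustration only: p = 23 with a subgroup of prime
# order q = 11 generated by g = 2 and h = 3 (real schemes use
# cryptographically sized groups with an unknown dlog between g and h).
p, q, g, h = 23, 11, 2, 3

def commit(m, r):
    """Pedersen commitment g^m h^r mod p: hiding, binding, and
    additively homomorphic in both the message and the randomness."""
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

# An RVCS-style setup for fair coins: commit to each flip separately.
flips = [secrets.randbelow(2) for _ in range(5)]
rands = [secrets.randbelow(q) for _ in range(5)]
coms = [commit(m, r) for m, r in zip(flips, rands)]

# Homomorphically evaluate f = sum: the product of the commitments
# opens to the binomial sample sum(flips), mirroring the lemma that
# applying f to an RVCS for Z yields an RVCS for f(Z).
C_sum = 1
for c in coms:
    C_sum = (C_sum * c) % p
assert C_sum == commit(sum(flips), sum(rands))
```

Anyone holding the individual commitments can compute the committed binomial sample without learning the coins, which is the composability the lemma formalizes.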
Francesca Falzon, Zichen Gui, Michael Reichle
ePrint Report
Encrypted multi-maps (EMMs) allow a client to outsource a multi-map to an untrusted server and then later retrieve the values corresponding to a queried label. They are a core building block for various applications such as encrypted cloud storage and searchable encryption. One important metric of EMMs is memory-efficiency: most schemes incur many random memory accesses per search query, leading to larger overhead compared to plaintext queries. Memory-efficient EMMs reduce random accesses but, in most known solutions, this comes at the cost of higher query bandwidth.

This work focuses on EMMs run on SSDs, and we construct two page-efficient schemes---one static and one dynamic---both with optimal search bandwidth. Our static scheme achieves $\widetilde{O}(\log N/p)$ page-efficiency and $O(1)$ client storage, where $N$ denotes the size of the EMM and $p$ the SSD's page size. Our dynamic scheme achieves forward and backward privacy with $\widetilde{O}(\log N/p)$ page-efficiency and $O(M)$ client storage, where $M$ denotes the number of labels. Among schemes with optimal server storage, these are the first to combine optimal bandwidth with good page-efficiency, saving up to $O(p)$ and $\widetilde{O}(p\log\log (N/p))$ bandwidth over the state-of-the-art static and dynamic schemes, respectively. Our implementation on real-world data shows strong practical performance.
Danilo Francati, Yevin Nikhel Goonatilake, Shubham Pawar, Daniele Venturi, Giuseppe Ateniese
ePrint Report
We prove a sharp threshold for the robustness of cryptographic watermarking for generative models. This is achieved by introducing a coding abstraction, which we call messageless secret-key codes, that formalizes the sufficient and necessary requirements of robust watermarking: soundness, tamper detection, and pseudorandomness. Thus, we establish that robustness has a precise limit: for binary outputs, no scheme can survive if more than half of the encoded bits are modified, and for an alphabet of size $q$ the corresponding threshold is a $(1-1/q)$ fraction of the symbols.

Complementing this impossibility, we give explicit constructions that meet the bound up to a constant slack. For every $\delta > 0$, assuming pseudorandom functions and access to a public counter, we build linear-time codes that tolerate up to $(1/2)(1-\delta)$ errors in the binary case and $(1-1/q)(1-\delta)$ errors in the $q$-ary case. Together with the lower bound, these yield the maximum robustness achievable under standard cryptographic assumptions.
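A minimal simulation makes the binary threshold tangible: a keyed pseudorandom codeword is detected by counting disagreements, which survives error rates below $1/2$ but fails once half the bits are flipped. The counter-mode SHA-256 derivation and the detection threshold are illustrative assumptions, not the paper's constructions:

```python
import hashlib
import random

def prf_bits(key, n):
    """Derive n pseudorandom codeword bits from a secret key
    (counter-mode SHA-256, an illustrative stand-in for a PRF)."""
    out, ctr = [], 0
    while len(out) < n:
        d = hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        out.extend((byte >> i) & 1 for byte in d for i in range(8))
        ctr += 1
    return out[:n]

def detect(bits, key, threshold=0.4):
    """Declare the watermark present when the fraction of positions
    disagreeing with the keyed codeword stays below the threshold."""
    code = prf_bits(key, len(bits))
    flips = sum(a != b for a, b in zip(bits, code))
    return flips / len(bits) < threshold

def corrupt(bits, rate, rng):
    """Flip each bit independently with the given probability."""
    return [b ^ (rng.random() < rate) for b in bits]

rng = random.Random(1)
key, n = b"watermark-key", 4096
marked = prf_bits(key, n)

assert detect(marked, key)                          # untouched: detected
assert detect(corrupt(marked, 0.25, rng), key)      # 25% flips: still detected
assert not detect(corrupt(marked, 0.50, rng), key)  # half flipped: erased
```

At flip rate $1/2$ the corrupted bits are statistically independent of the codeword, so no detector can do better, matching the sharp threshold above.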

We then test experimentally whether this limit appears in practice by examining the recent watermarking scheme for images of Gunn, Zhao, and Song (ICLR 2025). We show that a simple crop-and-resize operation reliably flips about half of the latent signs and consistently prevents belief-propagation decoding from recovering the codeword, erasing the watermark while leaving the image visually intact.

These results provide a complete characterization of robust watermarking, identifying the threshold at which robustness fails, constructions that achieve it, and an experimental confirmation that the threshold is already reached in practice.
Lea Thiemt, Paul Rösler, Alexander Bienstock, Rolfe Schmidt, Yevgeniy Dodis
ePrint Report
Modern messengers use advanced end-to-end encryption protocols to protect message content even if user secrets are ever temporarily exposed. Yet, encryption alone does not prevent user tracking, as protocols often attach metadata, such as sequence numbers, public keys, or even plain user identifiers. This metadata reveals the social network as well as communication patterns between users. Existing protocols that hide metadata in Signal (i.e., Sealed Sender), for MLS-like constructions (Hashimoto et al., CCS 2022), or in mesh networks (Bienstock et al., CCS 2023) are relatively inefficient or specially tailored for only particular settings. Moreover, all existing practical solutions reveal crucial metadata upon exposures of user secrets.

In this work, we introduce a formal definition of Anonymity Wrappers (AW) that generically hide metadata of underlying two-party and group messaging protocols. Our definition captures forward and post-compromise anonymity as well as authenticity in the presence of temporary state exposures. Inspired by prior wrapper designs, the idea of our provably secure AW construction is to use shared keys of the underlying wrapped (group) messaging protocols to derive and continuously update symmetric keys for hiding metadata. Beyond hiding metadata on the wire, we also avoid and hide structural metadata in users' local states for stronger anonymity upon their exposure.

We implement our construction, evaluate its performance, and provide a detailed comparison with Signal's current approach based on Sealed Sender: our construction reduces the wire size of small 1:1 messages from 441 bytes to 114 bytes. For a group of 100 members, it reduces the wire size of outgoing group messages from 7240 bytes to 155 bytes. We see similar improvements in computation time for encryption and decryption, but these improvements come with substantial storage costs for receivers. For this reason, we develop extensions with a Bloom filter that compresses the receiver storage. Based on this, Signal is considering deploying our solution.
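The storage-compression idea can be sketched with a minimal Bloom filter: a constant-size bit array answers membership queries with no false negatives and a tunable false-positive rate. The sizes, hash construction, and stored items below are illustrative assumptions, not Signal's or the paper's parameters:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an m-bit array probed at k keyed positions.
    False positives are possible; false negatives are not, so a receiver
    never misses an entry it actually stored."""
    def __init__(self, m_bits=1 << 16, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _probes(self, item):
        # Derive k probe positions from SHA-256 with a per-probe prefix.
        for i in range(self.k):
            d = hashlib.sha256(i.to_bytes(1, "big") + item).digest()
            yield int.from_bytes(d[:8], "big") % self.m

    def add(self, item):
        for pos in self._probes(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] >> (pos % 8) & 1
                   for pos in self._probes(item))

bf = BloomFilter()
bf.add(b"sender-key-0042")   # hypothetical stored identifier
assert b"sender-key-0042" in bf
assert b"never-added" not in bf
```

The filter's size is fixed regardless of how many entries are inserted, which is what trades the receivers' linear storage cost for a small false-positive probability.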
Tabitha Ogilvie
ePrint Report
Approximate Homomorphic Encryption (AHE), introduced by Cheon et al. [CKKS17], offers a powerful solution for encrypting real-valued data by relaxing the correctness requirement and allowing small decryption errors. Existing constructions from (Ring) Learning with Errors achieve standard IND-CPA security, but this does not fully capture scenarios where an adversary observes decrypted outputs. Li and Micciancio [LiMic21] showed that when decryptions are passively leaked, these schemes become vulnerable to practical key-recovery attacks even against honest-but-curious attackers. They formalise security when decryptions are shared via the new notions of IND-CPA-D and KR-D security.

We propose new techniques to achieve provable IND-CPA-D and KR-D security for AHE, while adding substantially less additional decryption noise than the prior provable results. Our approach hinges on refined "game-hopping" tools in the bit-security framework, which allow bounding security loss with a lower noise overhead. We also give a noise-adding strategy independent of the number of oracle queries, removing a costly dependence inherent in the previous solution.

Beyond generic noise-flooding, we show that leveraging the recently introduced HintLWE problem [KLSS23] can yield particularly large security gains for AHE ciphertexts that result from "rescaling", a common operation in CKKS. Our analysis uses the fact that rescale-induced noise amounts to a linear "hint" on the secret to enable a tighter reduction to LWE (via HintLWE). In many practical parameter regimes where the rescaling noise dominates, our results imply that an additional precision loss of as little as two bits suffices to restore a high level of security against passive key-recovery attacks for standard parameters.
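The linear-hint structure of rescaling noise can be seen in standard CKKS notation (a textbook-style derivation for context, not the paper's specific analysis):

```latex
% Ciphertext (b, a) satisfies b + a s \equiv m + e \pmod{q}; rescale by \Delta.
b' = \left\lfloor b/\Delta \right\rceil = b/\Delta - \varepsilon_b,
\qquad
a' = \left\lfloor a/\Delta \right\rceil = a/\Delta - \varepsilon_a,
\]
\[
b' + a' s \equiv \frac{m + e}{\Delta}
  - \underbrace{\left(\varepsilon_b + \varepsilon_a s\right)}_{\text{rescaling noise}}
  \pmod{q/\Delta}.
```

Since the rounding error $\varepsilon_a$ is computable from the public ciphertext, the rescaling noise is an affine function of the secret $s$ with known coefficients, i.e. exactly a linear "hint" in the HintLWE sense.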

Overall, our results enable a provably secure and efficient real-world deployment of Approximate Homomorphic Encryption in scenarios with realistic security requirements.
Forest Zhang, Ke Wu
ePrint Report
Fair multi-party coin toss protocols are fundamental to many cryptographic applications. While Cleve's celebrated result (STOC'86) showed that strong fairness is impossible against half-sized coalitions, recent works have explored a relaxed notion of fairness called Cooperative-Strategy-Proofness (CSP-fairness). CSP-fairness ensures that no coalition has an incentive to deviate from the protocol. Previous research has established the feasibility of CSP-fairness against majority-sized coalitions, but these results focused on specific preference structures where each player prefers exactly one outcome out of all possible choices. In contrast, real-world scenarios often involve players with more complicated preference structures, rather than simple binary choices.

In this work, we initiate the study of CSP-fair multi-party coin toss protocols that accommodate arbitrary preference profiles, where each player may have an unstructured utility distribution over all possible outcomes. We demonstrate that CSP-fairness is attainable against majority-sized coalitions even under arbitrary preferences. In particular, we give the following results:

- We give a reduction from achieving CSP-fairness for arbitrary preference profiles to achieving CSP-fairness for structured split-bet profiles, where each player distributes the same amount of total utility across all outcomes.
- We present two CSP-fair protocols: (1) a size-based protocol, which defends against majority-sized coalitions, and (2) a bets-based protocol, which defends against coalitions controlling a majority of the total utilities.
- Additionally, we establish an impossibility result for CSP-fair binary coin toss with split-bet preferences, showing that our protocols are nearly optimal.
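For context, the classical baseline these protocols improve on is the commit-then-reveal coin toss: each player commits to a random bit, all reveal, and the outcome is the XOR. A player who sees the other openings can abort to bias the result, which is precisely the incentive gap that CSP-fairness addresses. A minimal sketch (not the paper's size-based or bets-based protocol):

```python
import hashlib
import secrets

def commit(bit: int, nonce: bytes) -> bytes:
    # Hash commitment: hiding and binding under standard assumptions.
    return hashlib.sha256(nonce + bytes([bit])).digest()

def coin_toss(n_players: int) -> int:
    """Commit-then-reveal n-party coin toss with outcome = XOR of bits.
    Honest execution only; an aborting player could bias the result."""
    bits = [secrets.randbelow(2) for _ in range(n_players)]
    nonces = [secrets.token_bytes(16) for _ in range(n_players)]
    commitments = [commit(b, r) for b, r in zip(bits, nonces)]
    # Reveal phase: every opening is checked against its commitment.
    for c, b, r in zip(commitments, bits, nonces):
        assert commit(b, r) == c
    outcome = 0
    for b in bits:
        outcome ^= b
    return outcome
```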
Nouhou Abdou Idris, Yunusa Abdulsalam, Mustapha Hedabou
ePrint Report
The anticipated advent of large-scale quantum computers threatens traditional public-key cryptographic systems. As a result, NIST has called for the development of post-quantum cryptography (PQC) algorithms. Isogeny-based schemes offer compact key sizes; however, fundamental vulnerabilities have been found in smooth-degree isogeny systems, so the literature has concentrated on higher-dimensional isogenies of non-smooth degrees. In this work, we present POKE-KEM, a key encapsulation mechanism (KEM) derived from the POKE public-key encryption (PKE) scheme via a Fujisaki-Okamoto (FO) transformation, with security levels optimized during the transformation. Despite POKE’s efficiency advantages, its current formulation provides only IND-CPA security, limiting practical deployment where adaptive chosen-ciphertext security is essential. The resulting POKE-KEM construction achieves IND-CCA security under the computational hardness of the C-POKE problem in both the random oracle model (ROM) and the quantum random oracle model (QROM). Our implementation demonstrates significant practical advantages across all NIST security levels (128, 192, and 256 bits), supporting the practical viability of POKE-KEM for post-quantum cryptographic deployments. We provide a comprehensive security analysis, including formal proofs of correctness, rigidity, and IND-CCA security. Our underlying construction leverages the difficulty of recovering isogenies with unknown, non-smooth degrees, which resists known quantum attacks.
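The FO transformation's control flow (derandomized encryption, re-encryption check, implicit rejection) can be sketched generically. Here a deliberately insecure toy cipher, with the key playing both public and secret roles, stands in for POKE's IND-CPA PKE; only the FO structure itself is the point, and all names are illustrative:

```python
import hashlib
import secrets

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Toy stand-in for the IND-CPA PKE (insecure; key used on both sides).
def pke_enc(key: bytes, m: bytes, coins: bytes) -> bytes:
    return coins + xor(m, H(b"pad", key, coins))

def pke_dec(key: bytes, c: bytes) -> bytes:
    coins, body = c[:32], c[32:]
    return xor(body, H(b"pad", key, coins))

def encaps(key: bytes):
    m = secrets.token_bytes(32)
    coins = H(b"G", m)            # derandomize: coins derived from m
    c = pke_enc(key, m, coins)
    return c, H(b"K", m, c)       # session key binds m and c

def decaps(key: bytes, c: bytes) -> bytes:
    m = pke_dec(key, c)
    if pke_enc(key, m, H(b"G", m)) != c:  # FO re-encryption check
        return H(b"reject", key, c)       # implicit rejection
    return H(b"K", m, c)
```

The re-encryption check is what lifts an IND-CPA scheme to IND-CCA in the (Q)ROM: any malformed ciphertext fails the check and yields an unrelated pseudorandom key.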
MINKA MI NGUIDJOI Thierry Emmanuel
ePrint Report
We introduce the Chaotic Entropic Expansion (CEE), a new one-way function based on iterated polynomial maps over finite fields. For polynomials f in a carefully defined class F_d, we prove that N iterations preserve min-entropy of at least log₂ q − N log₂ d bits and achieve statistical distance at most (q − 1)(d^N − 1)/(2√q) from uniform. We formalize security through the Affine Iterated Inversion Problem (AIIP) and provide reductions to the hardness of solving multivariate quadratic equations (MQ) and computing discrete logarithms (DLP). Against quantum adversaries, CEE achieves O(2^{λ/2}) security for λ-bit classical security. We provide comprehensive cryptanalysis and parameter recommendations for practical deployment. While slower than AES, CEE’s algebraic structure enables unique applications in verifiable computation and post-quantum cryptography within the CASH framework.
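The forward direction of such a construction is simply N-fold iteration of a degree-d polynomial over GF(q). A minimal sketch with a toy map f(x) = x^d + c standing in for a member of the class F_d (the paper's actual class and parameters are more constrained):

```python
def iterate_poly(x: int, q: int, d: int, c: int, n_iter: int) -> int:
    """Iterate the toy map f(x) = x^d + c over GF(q), q prime, n_iter
    times. Forward evaluation is cheap; inverting the composed map is
    the hard direction formalized by AIIP in the paper."""
    for _ in range(n_iter):
        x = (pow(x, d, q) + c) % q
    return x
```

The min-entropy bound log₂ q − N log₂ d reflects that each iteration of a degree-d map is at most d-to-1, so N iterations shrink preimage uncertainty by at most a factor of d^N.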