International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

02 June 2025

Pedro Branco, Giulio Malavolta, Zayd Maradni
ePrint Report
The lattice isomorphism problem (LIP) asks, given two lattices $\Lambda_0$ and $\Lambda_1$, to decide whether there exists an orthogonal linear map from $\Lambda_0$ to $\Lambda_1$. In this work, we show that the hardness of (a circular variant of) LIP implies the existence of a fully-homomorphic encryption scheme for all classical and quantum circuits. Prior to our work, LIP was only known to imply the existence of basic cryptographic primitives, such as public-key encryption or digital signatures.
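As a toy illustration of the problem statement (not code from the paper), the sketch below builds two bases of $\mathbb{Z}^2$ related by an orthogonal map and a unimodular basis change, and checks the standard criterion that isomorphic lattices have congruent Gram matrices. All matrix choices here are illustrative.

```python
def mat_mul(A, B):
    # Plain-Python matrix product for small integer matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_T(A):
    return [list(r) for r in zip(*A)]

def gram(B):
    # Gram matrix G = B^T B of a basis B.
    return mat_mul(mat_T(B), B)

O = [[0, -1], [1, 0]]   # orthogonal map: rotation by 90 degrees
U = [[1, 1], [0, 1]]    # unimodular matrix: basis change, det = 1
B0 = [[1, 0], [0, 1]]   # basis of Z^2
B1 = mat_mul(O, mat_mul(B0, U))   # basis of an isomorphic lattice

# Since O^T O = I, the Gram matrices are congruent: G1 = U^T G0 U.
G0, G1 = gram(B0), gram(B1)
assert G1 == mat_mul(mat_T(U), mat_mul(G0, U))
```

An actual LIP instance hides the pair $(O, U)$, and deciding whether such a pair exists is the presumed-hard problem.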
Koji Nuida
ePrint Report
Private Simultaneous Messages (PSM) is a kind of secure multiparty computation with a minimal interaction pattern and minimal security requirement. A PSM protocol is said to have universal reconstruction for a given function family if the algorithm of the referee (the output party) is independent of the function to be computed and the referee cannot infer the function from a protocol execution. Eriguchi and Shinagawa (EUROCRYPT 2025) proposed a compiler that obtains a PSM protocol for symmetric functions from PSM protocols with universal reconstruction for symmetric functions with smaller domains. They also constructed the latter PSM protocols with universal reconstruction, by which the former PSM protocol achieves communication complexity better than previously known protocols. In this paper, we construct the latter PSM protocols with universal reconstruction for symmetric functions more efficiently; the communication complexity is exponentially (in the input range) smaller than that of the protocols by Eriguchi and Shinagawa. As a consequence, we also obtain a PSM protocol for symmetric functions that is more efficient than theirs. Technically, a main ingredient of their protocols is a linear and injective encoding of histograms of the input elements, and our improvement is realized by finding a more efficient encoding of the histograms.
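The role of histograms here can be seen in a small sketch (illustrative only, not the paper's encoding): a symmetric function depends only on the histogram of its inputs, and the unit-vector encoding below is linear (the histogram is the sum of the encodings) and injective on histograms.

```python
from itertools import permutations

d = 3   # input domain {0, 1, 2}

def encode(x):
    # Unit-vector encoding: a linear, injective encoding of single inputs.
    return [1 if j == x else 0 for j in range(d)]

def histogram(inputs):
    # The histogram is the coordinate-wise sum of the encodings.
    h = [0] * d
    for x in inputs:
        h = [a + b for a, b in zip(h, encode(x))]
    return h

def f_sym(inputs):
    # A symmetric function (most frequent value), computed from the
    # histogram alone.
    h = histogram(inputs)
    return max(range(d), key=lambda j: h[j])

inputs = [2, 0, 2, 1, 2]
assert histogram(inputs) == [1, 1, 3]
# Symmetric: every permutation of the inputs yields the same output.
assert all(f_sym(list(p)) == f_sym(inputs) for p in permutations(inputs))
```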
Linru Zhang, Xiangning Wang, Jun Jie Sim, Zhicong Huang, Jiahao Zhong, Huaxiong Wang, Pu Duan, Kwok Yan Lam
ePrint Report
The advent of Large Language Models (LLMs) has brought about a new wave of productivity, revolutionizing business operations while keeping costs relatively low. The human-like interface of LLMs enables them to be easily integrated with business functions, thereby freeing up precious human resources for more complex, valuable tasks. However, due to the intensive computation and memory requirements of LLM inference, it is preferable and cheaper to deploy LLMs with Cloud Service Providers (CSPs) that offer high-performance computation resources and low-latency networking. Nevertheless, privacy concerns have been raised about the possibility of data leakage to the CSP. In this work, we seek to address such privacy concerns through the use of Fully Homomorphic Encryption (FHE). FHE enables the CSP to work on data in its encrypted form, thus ensuring that the data stay private and secure. We propose an implementation of LLM inference with FHE. While a series of prior works has demonstrated that it is possible to execute LLM inference in a private manner, it remains a challenge to design a solution that is practical. Our contributions are as follows: We provide the first end-to-end open-source implementation of non-interactive transformer inference with FHE. We report an amortized time of 9.6 minutes for one input with 128 tokens when evaluating the BERT model on CPU. Our packing methods for encrypted matrices remove the need to repack ciphertexts between encrypted matrix multiplication and activation layers. Additionally, we introduce interleaved batching to eliminate the internal rotations during ciphertext matrix multiplications. Our approach also avoids HE rotations in evaluations of the softmax and LayerNorm, leading to speedups of 4.22× and 122× over existing works, respectively. Our implementation supports arbitrary token lengths, in contrast with existing solutions that require a full token embedding. Our implementation can be found on GitHub.
Reo Eriguchi, Keitaro Hiwatashi
ePrint Report
Secure multiparty computation (MPC) is a cryptographic primitive which enables multiple parties to jointly compute a function without revealing any extra information on their private inputs. Bottleneck complexity is an efficiency measure that captures the load-balancing aspect of MPC protocols, defined as the maximum amount of communication required by any party. In this work, we study the problem of establishing lower bounds on the bottleneck complexity of MPC protocols. While the previously known techniques for lower bounding total communication complexity can also be applied to bottleneck complexity, they do not provide nontrivial bounds in the correlated randomness model, which is commonly assumed by existing protocols achieving low bottleneck complexity, or they are applied only to functions of limited practical interest. We propose several novel techniques for lower bounding the bottleneck complexity of MPC protocols. Our methods derive nontrivial lower bounds even in the correlated randomness model and apply to practically relevant functions including the sum function and threshold functions. Furthermore, our lower bounds demonstrate the optimality of some existing MPC protocols in terms of bottleneck complexity or the amount of correlated randomness.
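To make the bottleneck-complexity notion concrete, here is an in-the-clear toy simulation (illustrative, not a protocol from the paper) of a standard mask-and-forward sum protocol in the correlated randomness model: each party sends and receives a single field element, so the per-party (bottleneck) communication is constant regardless of the number of parties.

```python
import random

q = 2**61 - 1    # a prime modulus (illustrative field size)
n = 10
xs = [random.randrange(q) for _ in range(n)]   # private inputs

# Correlated randomness: masks dealt ahead of time that sum to zero.
r = [random.randrange(q) for _ in range(n - 1)]
r.append((-sum(r)) % q)

# Ring protocol: each party adds its masked input to the running total and
# forwards it to the next party. The masks hide the partial sums each
# party sees; they cancel in the final total.
acc = 0
for i in range(n):
    acc = (acc + xs[i] + r[i]) % q

assert acc == sum(xs) % q
```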
Reo Eriguchi, Kaoru Kurosawa, Koji Nuida
ePrint Report
Multi-server Private Information Retrieval (PIR) is a cryptographic primitive that allows a client to retrieve an item of a database shared by multiple servers without revealing the index. This paper addresses the problem of error correction in multi-server PIR, enabling the client to obtain the item even when some servers provide incorrect responses. In a dishonest-majority setting where the majority of servers may introduce errors, it is known that the client can no longer uniquely determine the correct value. Previous approaches in this setting have typically settled for relaxed guarantees that the client can only reject incorrect values. However, these guarantees are substantially weak, as they only indicate the presence of errors without providing any information about the desired item. In this paper, we explore a more natural alternative called list-decodable PIR, which ensures that the client receives a small list of candidate values one of which is correct. We provide the first formal definition of list-decodable PIR and study its basic properties including a fundamental lower bound on the number of servers and the difficulty of simulation-based security definitions. We propose generic constructions of list-decodable PIR from any semi-honest PIR protocols, each offering different trade-offs. Our constructions improve upon the communication complexity of the only previously known protocol satisfying our definition. Furthermore, they achieve communication complexity comparable to that of the currently best known semi-honest PIR protocols.
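As background for the semi-honest building block, the classic two-server XOR-based PIR scheme can be sketched in a few lines (a toy illustration, not the paper's construction): each server alone sees a uniformly random query mask, yet the XOR of the two answers recovers the desired bit.

```python
import random

db = [random.randrange(2) for _ in range(16)]   # public database of bits
i = 11                                          # index the client wants

# Client: a uniformly random 0/1 mask, and the same mask with bit i flipped.
q1 = [random.randrange(2) for _ in db]
q2 = list(q1)
q2[i] ^= 1

def answer(query):
    # Server: XOR of the database bits selected by the query.
    a = 0
    for bit, sel in zip(db, query):
        a ^= bit & sel
    return a

# The two answers differ exactly by db[i]; each query alone is a uniform
# mask, so neither server learns anything about i.
assert answer(q1) ^ answer(q2) == db[i]
```

List-decodable PIR asks what survives of this correctness guarantee when a majority of servers may answer incorrectly.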
Bar Alon, Naty Peter
ePrint Report
Secure multiparty computation allows multiple parties to jointly compute a function while maintaining security even in the presence of malicious adversaries. There are two types of adversaries in the literature: static adversaries, which choose the parties to corrupt before the protocol begins; and adaptive adversaries, which can corrupt parties during the execution of the protocol based on the messages exchanged by the parties. While adaptive security provides a more robust security guarantee, it may demand more than is necessary in certain scenarios. Indeed, the adversary must allocate some of its resources to corrupt the parties; however, certain parties might be more susceptible to corruption, for instance, if they have not updated their operating system to the latest version.

To address this, we introduce a new security notion called \emph{dynamic security}. Here, adversaries may corrupt new parties \emph{during and after} the protocol's execution, but \emph{cannot choose} targets based on the messages. A protocol is said to be $(t,h)$-dynamically secure if it is possible to simulate any adversary that can corrupt up to $t$ parties during the execution and $h$ thereafter. Dynamic security provides meaningful security for many real-world scenarios. Moreover, it circumvents known lower bounds on the communication complexity of adaptive security, allowing for more efficient protocols such as committee-based ones, which would be insecure against adaptive adversaries.

We further explore dynamic security and establish the following results. 1. We show a surprising connection between dynamic security and the seemingly unrelated notion of security with friends and foes (FaF security), introduced by Alon et al. (CRYPTO 2020), which aims to protect honest parties not only from adversaries but also against other honest parties. The notion of $(t,h)$-\emph{strong FaF security} strengthens this by requiring the simulatability of the joint view of any $t$ malicious parties alongside any $h$ honest parties to be indistinguishable from their real-world view. We show that $(t,h)$-dynamic security and $(t,h)$-strong FaF security are equivalent.

2. We consider the feasibility of $(t,h)$-dynamic security and show that every $n$-party functionality can be computed with computational $(t,h)$-dynamic security (with guaranteed output delivery) if and only if $2t+h
Utkarsh Gupta, Hessam Mahdavifar
ePrint Report
Secret sharing is a foundational cryptographic primitive for sharing secret keys in distributed systems. In a classical threshold setting, it involves a dealer who has a secret, a set of $n$ users to whom shares of the secret are sent, and a threshold $t$ which is the minimum number of shares required to recover the secret. These schemes offer an all-or-nothing security guarantee: fewer than $t$ shares reveal no information about the secret. But these guarantees are threatened by side-channel attacks, which can leak partial information from each share. Initiated by Benhamouda et al. (CRYPTO'18), the security of such schemes has been studied for precise and worst-case bounded leakage models. However, in practice, side-channel attacks are inherently noisy. In this work, we propose a noisy leakage model for secret sharing, where each share is independently leaked to the adversary, corrupted by additive noise in the underlying field $\mathbb{F}_q$. Under this model, we study the security of linear secret sharing schemes, and show bounds on the mutual information (MI) and statistical distance (SD) security metrics. We do this by using the MacWilliams identity from the theory of error-correcting codes. For a given secret, it enables us to bound the statistical deviation of the leaked shares from uniform as $\delta^t$, where $\delta$ is the Fourier bias of the added noise. Existing analyses for the security of linear $(n,t)$-threshold schemes only bound the SD metric, and show resilience for schemes with $t \ge 0.668n$. In this work, we show that these constraints are artifacts of the bounded leakage model. In particular, we show that $(n,t)$-threshold schemes over $\mathbb{F}_q$ with $t \ge \tau (n+1)$ leak $\mathcal{O}(q^{-2t(\gamma+1-1/\tau)})$ bits about the secret, given that the bias of the added noise satisfies $\delta \le q^{-\gamma}$.
To the best of our knowledge, this is the first attempt towards understanding the side-channel security of linear secret sharing schemes for the MI metric.
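For readers unfamiliar with linear threshold schemes, a minimal Shamir-style sketch over $\mathbb{F}_5$ (illustrative parameters, not from the paper) shows the all-or-nothing baseline that the noisy-leakage analysis refines: a single share is distributed identically for every secret, while $t$ shares determine it.

```python
from collections import Counter

q, t = 5, 2   # field F_5, threshold t = 2 (degree-1 sharing polynomials)

def shares(secret, coeff, points):
    # Sharing polynomial f(x) = secret + coeff * x; share at point i is f(i).
    return tuple((secret + coeff * i) % q for i in points)

def dist_of_one_share(secret, point):
    # Distribution of one share over the dealer's randomness (all coeffs).
    return Counter(shares(secret, a, [point])[0] for a in range(q))

# Fewer than t shares are independent of the secret: one share is
# uniformly distributed whatever the secret is.
assert dist_of_one_share(0, 1) == dist_of_one_share(3, 1)

# t shares determine the secret: Lagrange interpolation at x = 0 from
# points 1 and 2 gives f(0) = 2*f(1) - f(2) mod q.
s1, s2 = shares(4, 2, [1, 2])
assert (2 * s1 - s2) % q == 4
```

The paper's point is that under noisy (rather than bounded) leakage of every share, the deviation from this ideal behavior can be bounded via the Fourier bias of the noise.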
Cong Ling, Laura Luzzi, Hao Yan
ePrint Report
The smoothing parameter is a cornerstone concept in lattice-based cryptography. Traditionally defined using the \( L^{\infty} \) distance, this standard formulation can be overly stringent compared to the \( L^1 \) (or statistical) distance more commonly employed in cryptographic contexts. Recent work has proposed relaxed definitions based on Kullback-Leibler (KL) divergence and \( L^1 \) distance, thereby loosening the constraints required for the distance to vanish. However, the additive nature of the \( L^1 \) distance can be limiting for cryptographic applications where probability preservation is essential. In this paper, we introduce the Rényi smoothing parameter of a lattice, based on Rényi divergence, to address this limitation. The advantages of Rényi divergence in cryptographic settings are well known thanks to its multiplicative nature. The Rényi smoothing parameter provides a tunable framework that interpolates between the \( L^1 \) and \( L^{\infty} \) distances, offering enhanced flexibility. We present two complementary methods to study the averaging behavior of the Rényi flatness factor: one uses classical tools such as the Minkowski-Hlawka ensemble and Rogers’ formula for computing lattice function moments; the other employs Construction A lattices derived from random codes. Finally, we illustrate how this new perspective yields improvements in lattice-based cryptographic constructions.
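The interpolation the abstract refers to can be seen numerically with the standard Rényi divergence of order \( \alpha > 1 \) between finite distributions, \( R_\alpha(P\|Q) = \frac{1}{\alpha-1}\log \sum_x P(x)^\alpha / Q(x)^{\alpha-1} \) (a generic illustration; the distributions below are arbitrary, not lattice-related).

```python
from math import log

def renyi(P, Q, alpha):
    # Renyi divergence of order alpha > 1 between finite distributions.
    return log(sum(p**alpha / q**(alpha - 1) for p, q in zip(P, Q))) / (alpha - 1)

P = [0.5, 0.3, 0.2]
Q = [1/3, 1/3, 1/3]

# R_alpha is non-decreasing in alpha, and as alpha grows it approaches the
# maximum log-ratio max_x log(P(x)/Q(x)) -- an L-infinity-style quantity.
vals = [renyi(P, Q, a) for a in (2, 4, 8, 64)]
assert all(x <= y + 1e-12 for x, y in zip(vals, vals[1:]))
assert abs(vals[-1] - max(log(p / q) for p, q in zip(P, Q))) < 0.05
```

Tuning \( \alpha \) thus trades off between additive \( L^1 \)-style and worst-case \( L^\infty \)-style guarantees, which is the flexibility exploited by the Rényi smoothing parameter.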
Pouria Fallahpour, Serge Fehr, Yu-Hsuan Huang
ePrint Report
We revisit the quantum security (in the QROM) of digital signature schemes that follow the Fiat-Shamir-with-aborts (FSwA) or the probabilistic hash-and-sign with retry/abort (HSwA) design paradigm. Important examples of such signature schemes are Dilithium, SeaSign, Falcon+ and UOV. In particular, we are interested in the UF-CMA-to-UF-NMA reduction for such schemes. We observe that previous such reductions have a reduction loss that is larger than what one would hope for, or require a more stringent notion of zero-knowledge than one would hope for.

We resolve this matter here by means of a novel UF-CMA-to-UF-NMA reduction that applies to FSwA and HSwA signature schemes simultaneously, and that offers an improved reduction loss (without making the zero-knowledge assumption more stringent).
Bishwajit Chakraborty, Mridul Nandi, Soumit Pal, Thomas Peyrin, Quan Quan Tan
ePrint Report
After more than half a decade since its initiation, NIST declared Ascon the winner of the Lightweight Cryptography (LwC) competition. In the first public draft of AsconAEAD128, NIST recognized that Ascon has limitations when used in multi-user applications. To mitigate this, NIST prescribed the use of a \(256\)-bit key in multi-user applications and provided an instantiation showing how to process this extra key material in the current AsconAEAD128 API. While doing so, they identified a limitation of this new scheme (which we refer to as mu-Ascon in this document): mu-Ascon is vulnerable to committing attacks and hence cannot be used in cases where committing security is required. On the other hand, the full key-binding property of Ascon, which separated it from other sponge-type constructions, has been used to show that Ascon is much stronger in the sense that it provides key recovery resistance even in the case where some intermediate state is recovered. We remark that the current mu-Ascon has the limitation that only a partial key is bound during initialization and finalization. In this work, we propose alternative instantiations of the AsconAEAD128 API for multi-user applications. In comparison with the current mu-Ascon proposal, our first construction Ascon-256.v2 guarantees CMT-4 committing security up to 64 bits, and our second construction Ascon-256.v3 provides both CMT-4 committing security and full 256-bit key binding. Structurally, our instantiations use only one extra permutation call to provide these extra security features compared to mu-Ascon, which has a negligible overhead in terms of performance (given the lightweight nature of the Ascon permutation).
Pierre-Alain Jacqmin, Jean Liénardy
ePrint Report
Symmetric-key authenticated key establishment (AKE) protocols are particularly well suited to resource-constrained environments such as Internet of Things (IoT) devices. Moreover, they often rely on better-understood assumptions than asymmetric ones. In this paper, we review the security model for symmetric-key AKE protocols. We show why several existing models allow trivial attacks while failing to protect against some non-trivial ones. We fix these issues with our new security definitions.

We show that the protocols $\textrm{LP2}$ and $\textrm{LP3}$ of Boyd et al. do not satisfy the claimed security properties. We propose a new 2-message protocol based on them, called $\textrm{LP2+}$. This protocol is proved to satisfy correctness, weak synchronization robustness, entity authentication, key indistinguishability and, as a consequence, it admits perfect forward secrecy. An instantiation of $\textrm{LP2+}$ is presented, whose security only relies on that of a pseudo-random function (PRF). Its total execution time in normal cases is dominated by only 14 evaluations of the PRF, making it a lightweight protocol that is particularly well suited for resource-constrained environments such as IoT devices.

The flaws found in the security models as well as in the security arguments could have been avoided with precise and detailed proofs. We thus take this paper as an opportunity to advocate for thorough security proofs. Therefore, we have made the choice of rigor over concision.
Hans Heum
ePrint Report
Contrary to expectation, we show that simulation-based selective-opening security (SSO) does not imply indistinguishability-based selective opening security (ISO) in the CCA setting, making them incomparable in the presence of either encryption randomness leakage (sender opening) or secret key leakage (receiver opening). This contrasts the CPA case, where SSO-CPA is known to be strictly stronger than ISO-CPA in the presence of sender and/or receiver opening. Our separation result holds relative to all message distributions with sufficiently high min-entropy. On the other hand, restricting to message distributions with low enough min-entropy gives rise to an implication. Our separation result does not rely on the presence of selective openings. At a glance, this may seem to contradict known equivalence results between indistinguishability, semantic security, and selective opening security under trivial openings. We reconcile the apparent contradiction by showing that the selective-opening CCA landscape splits into a “high-entropy” and a “low-entropy” world which must be considered separately.
Hong-Sen Yang, Qun-Xiong Zheng, Jing Yang
ePrint Report
The LowMC-based post-quantum signature scheme Picnic was selected as a third-round candidate for NIST PQC, attracting wide attention to the design of efficient and secure post-quantum signature schemes using Symmetric Techniques for Advanced Protocols (STAP). Symmetric primitives designed for advanced protocols such as secure multi-party computation (MPC), fully homomorphic encryption (FHE), and zero-knowledge (ZK) proof systems, with the goal of reducing the number of multiplication operations, are referred to as arithmetic-oriented (AO) primitives. These cryptographic primitives are typically constructed over large finite fields, which makes classical statistical analysis methods like differential and linear cryptanalysis inefficient. Due to their inherent algebraic properties, the mainstream security evaluation approaches are based on algebraic attacks. In this paper, we analyze the security of the MPC-friendly primitives \textsc{Rain} (CCS 2022) and AIM (CCS 2023) used in the post-quantum signature schemes Rainier and AIMer. Existing algebraic attacks on \textsc{Rain} and AIM were conducted over $\mathbb{F}_2$. We propose a novel algebraic attack over $\mathbb{F}_{2^n}$ that uses polynomial decomposition to reduce the degrees of equations. By further combining it with the guess-and-determine technique, meet-in-the-middle modeling, and resultants, we are able to attack \textsc{Rain} and the full AIM. Specifically, we successfully attacked 2-round \textsc{Rain} with $2^{73.7}/2^{107.0}/2^{138.9}$ primitive calls and 3-round \textsc{Rain} with $2^{160.6}/2^{236.0}/2^{311.1}$ primitive calls, for the $128/192/256$-bit key. For the full AIM, we successfully attacked it with $2^{114.0}/2^{163.2}/2^{228.3}$ primitive calls for the $128/192/256$-bit key. The attack complexities mainly lie in solving univariate polynomial equations and computing resultants, and hence the complexity evaluations are accurate.
Karthikeyan Bhargavan, Lasse Letager Hansen, Franziskus Kiefer, Jonas Schneider-Bensch, Bas Spitters
ePrint Report
We present an effective methodology for the formal verification of practical cryptographic protocol implementations written in Rust. Within a single proof framework, we show how to develop machine-checked proofs of diverse properties like runtime safety, parsing correctness, and cryptographic protocol security. All analysis tasks are driven by the software developer, who writes annotations in the Rust source code and chooses a backend prover for each task, ranging from a generic proof assistant like F$\star$ to dedicated crypto-oriented provers like ProVerif and SSProve. Our main contribution is a demonstration of this methodology on Bert13, a portable, post-quantum implementation of TLS 1.3 written in Rust and verified both for security and functional correctness. To our knowledge, this is the first security verification result for a protocol implementation written in Rust, and the first verified post-quantum TLS 1.3 library.
Zhengrong Lu, Hongbo Yu, Xiaoen Lin, Sitong Yuan
ePrint Report
RIPEMD-128 is an ISO/IEC standard hash function based on a double-branch Merkle-Damgård structure. Its compression function includes two branches with distinct Boolean functions and message expansion permutations. To perform a collision attack, differential characteristics must be constructed simultaneously for both branches under the same message word difference, and the message modification order must align with conditions in both branches. These factors make collision attacks on (reduced) RIPEMD-128 highly challenging. In 2014, an attack on 40 steps of RIPEMD-128 was achieved by Wang with no state differences in round 3. In this work, we analyze message permutation properties and propose two new structures for creating message differences. These structures enable high-probability local collisions in both branches of round 3, extending the attack to more steps. Notably, the second structure can eliminate all state differences in round 3, allowing the attack to cover more than three whole rounds. To ensure practical attacks, we limit the number of conditions based on our message modification strategy and use multi-step message modification techniques to control more conditions. As a result, we successfully generate colliding message pairs for 46-step and 54-step reduced RIPEMD-128, with time complexities of approximately $2^{42}$ and $2^{54}$, respectively.
Toomas Krips, Pille Pullonen-Raudvere
ePrint Report
Distributed point functions (DPFs), introduced in 2014, are a widely used primitive in secure computation for a wide variety of applications. However, until now, constructions of DPFs with polylogarithmic-size keys have been known only for the two-party setting. We propose a scheme for a polylogarithmic-size DPF for an arbitrary number of parties. We use a technique where a secret-shared vector, mapped to collinear vectors by public matrices, serves as an invariant for off-path leaves. We show, using a technique by Shamir, that when we work over $\mathbb{Z}_{pq}$, these vectors are hard to compute if factoring is hard. We also show that our scheme is a secure DPF, provided that two new assumptions hold, one of which is related to the Generic Group Model and the other to MinRank. The output of our scheme is in the exponent of some group where Diffie-Hellman-type problems are hard. Although this limits the usability of our scheme, we believe that it is the first distributed point function for more than two parties with a key size that is polylogarithmic in the size of the domain and that does not use fully homomorphic encryption.
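For context, a DPF splits a point function $f_{\alpha,\beta}(x) = \beta$ if $x = \alpha$ (and $0$ otherwise) among $k$ parties. The trivial baseline below (an illustration, not the paper's scheme) additively shares the whole truth table, so its keys are linear in the domain size; the compact polylogarithmic keys are exactly what nontrivial DPF constructions achieve.

```python
import random

q = 2**31 - 1        # working modulo a prime (illustrative choice)
domain, k = 8, 3     # domain size and number of parties

def share_point_function(alpha, beta):
    # Trivial k-party "DPF": additively share the truth table of
    # f(x) = beta if x == alpha else 0. Key size is linear in the domain.
    table = [beta if x == alpha else 0 for x in range(domain)]
    keys = [[random.randrange(q) for _ in range(domain)] for _ in range(k - 1)]
    last = [(table[x] - sum(key[x] for key in keys)) % q for x in range(domain)]
    return keys + [last]

keys = share_point_function(alpha=5, beta=42)
# Each party evaluates only its own key; the evaluations sum to f(x).
assert sum(key[5] for key in keys) % q == 42
assert all(sum(key[x] for key in keys) % q == 0
           for x in range(domain) if x != 5)
```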
Xiaolin Duan, Fan Huang, Yaqi Wang, Honggang Hu
ePrint Report
At Crypto 2009, Heninger and Shacham presented a branch-and-prune algorithm for reconstructing RSA private keys given a random fraction of their private components. This method is widely adopted in side-channel attacks, and its complexity is closely related to the specific leakage pattern encountered. In this work, we identified a novel leakage model in the Miller-Rabin primality test implemented in OpenSSL. Under certain side-channel attacks against fixed-window modular exponentiation (e.g., recovering the least significant $b$ bits from each window), the proposed model enables staggered recovery of bits in $p$ and $q$, reducing uncertainty in key reconstruction. In particular, this model includes previously undocumented scenarios where full key recovery is achievable without branching. To understand how the proposed leakage model could contribute to attacks on modular exponentiation, we investigated the global and local behavior of key reconstruction. Our evaluation demonstrates that the proposed scenarios enable more efficient key reconstruction and retain this advantage when additional erasure bits are introduced. Moreover, in specific cases, successful reconstruction remains achievable within practical time even when fewer than 50% of the bits are obtained. Finally, we conducted a series of experiments to confirm the practicality of our assumption, successfully recovering the lower 4 bits from each 6-bit window.
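The branch-and-prune idea can be sketched on a toy modulus (illustrative primes and leaked-bit positions; real attacks operate on full-size RSA keys): candidates for $p$ and $q$ are grown bit by bit, kept only if $p \cdot q \equiv N \pmod{2^{i+1}}$ and consistent with the leaked bits.

```python
def branch_and_prune(N, nbits, known_p, known_q):
    # known_p / known_q: {bit_index: value} maps of bits assumed leaked.
    cands = [(0, 0)]
    for i in range(nbits):
        new = []
        for a, b in cands:
            for pa in (0, 1):
                if i in known_p and known_p[i] != pa:
                    continue
                for pb in (0, 1):
                    if i in known_q and known_q[i] != pb:
                        continue
                    a2, b2 = a | (pa << i), b | (pb << i)
                    # Prune: the product must match N modulo 2^(i+1).
                    if (a2 * b2) % (1 << (i + 1)) == N % (1 << (i + 1)):
                        new.append((a2, b2))
        cands = new
    return [(a, b) for a, b in cands if a * b == N]

# Toy example: N = 11 * 13, with one leaked bit of p and one of q.
assert branch_and_prune(143, 4, {1: 1}, {2: 1}) == [(11, 13)]
```

The number of surviving candidates per level is what the leakage pattern controls, which is why the staggered recovery described above can sometimes avoid branching entirely.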
Roberto Avanzi, Bishwajit Chakraborty, Eik List
ePrint Report
We introduce the large block cipher Vistrutah, featuring 256- and 512-bit block sizes. Vistrutah is an iterated cipher where each "step" applies two AES rounds to each 128-bit block of the state, followed by a cell permutation across the entire state. Building upon established design principles from Simpira, Haraka, Pholkos, and ASURA, Vistrutah achieves high performance by leveraging AES instructions.

For each component of Vistrutah, we conducted a systematic evaluation of functions that can be efficiently implemented across different vector instruction set architectures. Our evaluation methodology combines latency estimation with security analysis, aiming to maximize the ratio of "bits of security per unit of time." This approach ensures we achieve the highest security or, equivalently, the best performance for this class of designs. Implementation results confirm the accuracy of our latency model.

We support our security claims with a comprehensive cryptanalysis.

A core design principle is the use of an inline key schedule: all round keys are computed during each encryption or decryption operation without requiring memory storage. Key schedules like the AES's must precompute and store round keys in memory for acceptable performance. This creates security vulnerabilities: such implementations become more susceptible to memory read oracles and, as demonstrated by Kamal and Youssef, more vulnerable to cold boot attacks. We consider these designs unsuitable for modern software security requirements. Vistrutah's approach minimizes leakage to at most one value during context switches. Furthermore, expensive key schedules reduce key agility, limiting the design of modes of operation.

Vistrutah is particularly well-suited for Birthday-Bound modes of operation, including Synthetic IV modes and Accordion modes for 256-bit block ciphers. It also serves effectively as a building block for compression functions (such as Matyas-Meyer-Oseas) in wide Merkle–Damgård hash functions. Additionally, it can implement "ZIP" wide pseudo-random functions as recently proposed by Flórez-Gutiérrez et al. in 2024.

Our cryptanalysis includes consideration of related-key security for two key reasons. First, strong related-key security demonstrates the robustness of both the key schedule and the cipher as a whole. Second, in counter-mode-based modes, Vistrutah's ability to change key with no overhead may allow the designer of a mode of operation to place counters (or values obtained from other update functions) in the key input rather than the plaintext input. This approach makes it easier to achieve Beyond the Birthday Bound security.

As a by-product of this research, we identified design flaws in Ghidle that affect both its security and performance. Our security analysis accounts for these weaknesses. We also developed Vistrutah-B, a variant of Vistrutah that addresses Ghidle's issues while maintaining comparable performance in most aspects. Nevertheless, Vistrutah retains superior diffusion properties.
Eylon Yogev, Shany Ben-David
ePrint Report
Recently, the concept of incompressible encryption has emerged as a powerful enhancement to key-leakage resilience. In an incompressible encryption scheme, an adversary who intercepts ciphertexts is forced to dedicate a significant amount of memory to store them in full if they wish to extract any information about the plaintext later when the secret key becomes available. Given two messages, the security game involves two adversaries: the first adversary receives an encryption of one of the messages and produces a compressed state. Then, the second adversary, given both the secret key and the compressed state, attempts to determine which message was encrypted.

Several positive results exist in incompressible cryptography. On the one hand, there are constructions based on minimal assumptions but with a poor rate (i.e., rate tends to 0). On the other hand, there are rate-1 constructions that achieve optimal efficiency but rely on strong cryptographic assumptions, such as obfuscation.

A stronger security notion, known as everlasting security, has been proposed for incompressible encryption. In this formulation, the second adversary, who receives the compressed state and the secret key, is allowed to be computationally unbounded. While this notion is conceptually appealing, no constructions of everlasting incompressible encryption are currently known, regardless of the underlying assumption or even in idealized models.

In this work, we give the first construction of everlasting incompressible encryption. In fact, we show that everlasting incompressible encryption is inherent in any sufficiently secure public-key encryption scheme. Specifically, we prove that any public-key encryption scheme with subexponential security (when instantiated with an appropriate security parameter) already satisfies the definition of everlasting incompressible encryption with subexponential security. Furthermore, our scheme achieves rate-1, improving upon existing results even for the weaker notion of standard incompressible encryption.
Joshua G. Stern
ePrint Report
To prevent privacy-preserving digital assets from becoming instruments of despotism via unitary-executivist compliance regimes, we propose OptAttest, a hybrid zero-knowledge architecture. This system empowers users to optionally generate verifiable attestation history for the current (Hop 0) and immediately preceding (Hop 1) transactions involving their private commitments. For crucial 0-hop multi-list attestations, users employ Zero-Knowledge Proofs (ZKPs) of claims from selected Verifiable Credentials (VCs). Users achieve per-transaction efficiency with diverse VC types by pre-computing and caching proofs of their VC validity. This approach avoids mandated adherence to singular, fallible external standards. Opted-in lightweight updates create cryptographic accumulator summaries, verified by network infrastructure (e.g., Layer 2 scaling solutions using Zero-Knowledge Virtual Machines), and are paired with user-managed Intermediate Attestation Data Packets (IADPs) containing detailed evidence. For comprehensive verification, users can then generate full recursive proofs from these IADPs for their attestation-enabled funds, leveraging native zkVM recursion. The protocol facilitates optional attestation generation, not enforcement, allowing downstream policy application. Aiming to cultivate a permissionless ethos, we propose a user-centric balance between privacy and verifiable accountability, distinct from models compelling broader data access. Folding schemes are noted as potential future enhancements for recursive proof efficiency.