International Association for Cryptologic Research

IACR News

Updates on the COVID-19 situation are on the Announcement channel.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via RSS feed
via Twitter
via Weibo
via Facebook

18 April 2024

Shany Ben-David
ePrint Report
A probabilistically checkable argument (PCA) is a computational relaxation of PCPs, where soundness is guaranteed to hold only for false proofs generated by a computationally bounded adversary. The advantage of PCAs is that they are able to overcome the limitations of PCPs. A succinct PCA has a proof length that is polynomial in the witness length (and is independent of the non-deterministic verification time), which is impossible for PCPs, under standard complexity assumptions. Bronfman and Rothblum (ITCS 2022) constructed succinct PCAs for NC that are publicly-verifiable and have constant query complexity under the sub-exponential hardness of LWE.

We construct a publicly-verifiable succinct PCA with constant query complexity for all NP in the adaptive security setting. Our PCA scheme offers several improvements compared to the Bronfman and Rothblum construction: (1) it applies to all problems in NP, (2) it achieves adaptive security, and (3) it can be realized under any of the following assumptions: the polynomial hardness of LWE; $O(1)$-LIN on bilinear maps; or sub-exponential DDH.

Moreover, our PCA scheme has a succinct prover, which means that for any NP relation that can be verified in time $T$ and space $S$, the proof can be generated in time $O_{\lambda,m}(T\cdot\mathrm{polylog}(T))$ and space $O_{\lambda,m}(S\cdot\mathrm{polylog}(T))$. Here, $O_{\lambda,m}$ accounts for polynomial factors in the security parameter and in the size of the witness. En route, we construct a new complexity-preserving RAM delegation scheme that is used in our PCA construction and may be of independent interest.
Ganyuan Cao
ePrint Report
We introduce a novel notion, denoted IND-rCCA, to formalize the security and robustness of authenticated encryption. This notion augments common notions defined for AEAD schemes by considering the indistinguishability of potential leakage due to decryption failures in the presence of multiple error checks. We further extend this notion to IND-sf-rCCA to formalize stateful security in the presence of out-of-order ciphertexts. Additionally, we present a modification to the Encode-then-Encrypt-then-MAC (EEM) paradigm to boost its robustness. We then analyze the security of the modification and show that it satisfies IND-rCCA security.
Gewu BU, Serge Fdida, Maria Potop-Butucaru, Bilel Zaghdoudi
ePrint Report
This paper presents a novel blockchain-based decentralized identity system (DID), tailored for enhanced digital identity management in Internet of Things (IoT) and device-to-device (D2D) networks. The proposed system features a hierarchical structure that effectively merges a distributed ledger with a mobile D2D network, ensuring robust security while streamlining communication. Central to this design are the gateway nodes, which serve as intermediaries, facilitating DID registration and device authentication through smart contracts and distributed storage systems. A thorough security analysis underscores the system’s resilience to common cyber threats and adherence to critical principles like finality and liveness.
Djimnaibeye Sidoine, Guy Mobouale Wamba, Abiodoun Clement Hounkpevi, Tieudjo Daniel, Djiby Sow
ePrint Report
Cumplido et al. have recently shown that the Wang-Hu digital signature scheme is not secure and have presented a potential attack on the root extraction problem. The effectiveness of generic attacks for solving this problem for braids is still uncertain, and it is unknown whether it is possible to create braids that require exponential time to solve these problems. In 2023, Lin et al. proposed a post-quantum signature scheme similar to the Wang-Hu scheme that is proven to be able to withstand attacks from quantum computers. However, evidence is presented here for the existence of an algorithm based on mean-set attacks that can recover the private key in both schemes without solving the root extraction problem. For the post-quantum signature version, we prove that the attacker can forge a signature that passes verification without recovering the private key.
Morten Øygarden, Patrick Felke, Håvard Raddum
ePrint Report
A common strategy for constructing multivariate encryption schemes is to use a central map that is easy to invert over an extension field, along with a small number of modifications to thwart potential attacks. In this work we study the effectiveness of these modifications, by deriving estimates for the number of degree fall polynomials. After developing the necessary tools, we focus on encryption schemes using the $C^*$ and Dobbertin central maps, with the internal perturbation (ip) and $Q_+$ modifications. For these constructions we are able to accurately predict the number of degree fall polynomials produced in a Gröbner basis attack, up to and including degree five for the Dob encryption scheme and degree four for $C^*$. The predictions remain accurate even when fixing variables. Based on this new theory we design a novel attack on Dob, which completely recovers the secret key for the parameters suggested by its designers. Due to the generality of the presented techniques, we also believe that they are of interest to the analysis of other big field schemes.

17 April 2024

Luxembourg Institute of Science and Technology
Job Posting
Are you passionate about research? So are we! Come and join us.

The Luxembourg Institute of Science and Technology (LIST) is a Research and Technology Organization (RTO) active in the fields of materials, environment and IT. Discover our IT for Innovative Services department: https://www.list.lu/en/informatics/

How will you contribute? Your specific mission includes, but is not limited to, participating in the following activities along with the project partners:
• designing and developing DevSecOps solutions and data security solutions
• prototyping ML-based anti-fuzzing, vulnerability detection, information sharing technologies for cybersecurity, and anomaly detection solutions
• developing open-source software
• validating the effectiveness of the developed technologies

You are in charge of disseminating and promoting the research activities that will be carried out, whether through publications, prototype development or technical reports. You are highly motivated and have proven skills in machine learning and cybersecurity to address the security concerns in software development and data protection. You already have good experience with collaborative cyber-threat intelligence systems that use advanced analytics, which can offer significant advantages over local systems by detecting cyberattacks early and responding to them promptly. And last, but not least, you are a great practitioner of cybersecurity techniques such as vulnerability detection, information sharing, fuzzing, and anti-fuzzing.

To join us, you:
• hold a PhD degree in Computer Science or related disciplines
• have good programming skills (particularly experience with Python and C++)
• have a good track record in applied ML for cybersecurity, such as fuzzing, ML-based vulnerability detection, and anomaly detection
• have a good track record in data protection for applications such as data spaces and digital twins
• demonstrate strong interest and experience in fuzzing techniques such as mutation testing, symbolic execution, AFL, etc.
• are familiar with ML frameworks, such as PyTorch and TensorFlow

Closing date for applications:

Contact: jobs@list.lu

University of New Brunswick, Computer Science; Fredericton, Canada
Job Posting
We are looking for one PhD student to work on cryptography and privacy-preserving machine learning. The position is a fully funded PhD position. The candidate must hold a Master's degree in Computer Science, Electrical and Computer Engineering, or a related area, and have a strong background in mathematics and cryptography as well as good programming skills. The application materials should contain a curriculum vitae, a research statement, transcripts of Bachelor's and Master's degrees, and the names and contact information of two references. The starting date is September 2024 or January 2025; applications will be accepted until the position is filled.

Closing date for applications:

Contact: Kalikinkar Mandal


16 April 2024

Enrico Bottazzi
ePrint Report
Fully homomorphic encryption (FHE) allows for evaluating arbitrary functions over encrypted data. In multi-party FHE applications, different parties encrypt their secret data and submit ciphertexts to a server, which, according to the application logic, performs homomorphic operations on them. For example, in a secret voting application, the tally is computed by summing up the ciphertexts encoding the votes. Valid encrypted votes are of the form $E(0)$ and $E(1)$. A malicious voter could send an invalid encrypted vote such as $E(145127835)$, which can mess up the whole election. Because of that, users must prove that the ciphertext they submitted is a valid Ring Learning with Errors (RLWE) ciphertext and that the plaintext message they encrypted is a valid vote (for example, either a 1 or a 0). Greco uses zero-knowledge proofs to let a user prove that their RLWE ciphertext is well-formed, or, in other words, that the encryption operation was performed correctly. The resulting proof can therefore be composed with additional application-specific logic and subjected to public verification in a non-interactive setting. Considering the secret voting application, one can prove further properties of the message being encrypted, or even properties about the voter, allowing the application to support anonymous voting as well. The prover has been implemented using Halo2-lib as a proving system, and the benchmarks have shown that Greco can already be integrated into user-facing applications without creating excessive friction for the user. The implementation is available at https://github.com/privacy-scaling-explorations/greco
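
The binary-vote constraint mentioned above is typically encoded in arithmetic-circuit proof systems as the identity $m(m-1)=0$. A minimal sketch of that check, assuming a hypothetical plaintext modulus; Greco's actual Halo2 circuits are in the linked repository:

```python
# Sketch of the binary-plaintext constraint a voting circuit would enforce.
# T is a hypothetical plaintext modulus, not a parameter from the paper.
T = 65537

def is_valid_vote(m: int) -> bool:
    # m * (m - 1) == 0 (mod T) holds exactly when m is 0 or 1
    return (m * (m - 1)) % T == 0

assert is_valid_vote(0) and is_valid_vote(1)
assert not is_valid_vote(145127835 % T)   # the malicious vote from the abstract
```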
George Teseleanu
ePrint Report
In this paper we study the effect of using small prime numbers within the Okamoto-Uchiyama public key encryption scheme. We introduce two novel versions and prove their security. Then we show how to choose the system's parameters such that the security results hold. Moreover, we provide a practical comparison between the cryptographic algorithms we introduced and the original Okamoto-Uchiyama cryptosystem.
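
For context, a toy version of the original Okamoto-Uchiyama cryptosystem, with tiny and insecure parameters chosen purely for illustration (the paper's small-prime variants change how the parameters are chosen), looks like this:

```python
from math import gcd
import secrets

# Toy Okamoto-Uchiyama with tiny, insecure primes (illustration only).
p, q = 131, 139
n = p * p * q                 # public modulus n = p^2 * q

def L(x):                     # L(x) = (x - 1) / p, defined on x = 1 mod p
    return (x - 1) // p

# pick g whose order mod p^2 is divisible by p
while True:
    g = secrets.randbelow(n - 2) + 2
    if gcd(g, n) == 1 and pow(g, p - 1, p * p) != 1:
        break
h = pow(g, n, n)

def encrypt(m):               # m in [0, p)
    r = secrets.randbelow(n - 1) + 1
    return pow(g, m, n) * pow(h, r, n) % n

def decrypt(c):
    a = L(pow(c, p - 1, p * p))
    b = L(pow(g, p - 1, p * p))
    return a * pow(b, -1, p) % p

assert decrypt(encrypt(42)) == 42
```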
Daniel J. Bernstein
ePrint Report
Many proposals of lattice-based cryptosystems estimate security levels by following a recipe introduced in the New Hope proposal. This recipe, given a lattice dimension $n$, modulus $q$, and standard deviation $s$, outputs a "primal block size" $\beta$ and a security level growing linearly with $\beta$. This $\beta$ is minimal such that some $\kappa$ satisfies $((n+\kappa)s^2+1)^{1/2} < (d/\beta)^{1/2}\,\delta^{2\beta-d-1}\,q^{\kappa/d}$, where $d = n + \kappa + 1$ and $\delta = (\beta(\pi\beta)^{1/\beta}/(2\pi e))^{1/(2(\beta-1))}$.

This paper identifies how $\beta$ grows with $n$, with enough precision to show the impact of adjusting $q$ and $s$ by constant factors. Specifically, this paper shows that if $\lg q$ grows as $Q_0 \lg n + Q_1 + o(1)$ and $\lg s$ grows as $S_0 \lg n + S_1 + o(1)$, where $0 \le S_0 \le 1/2 < Q_0 - S_0$, then $\beta/n$ grows as $z_0 + (z_1+o(1))/\lg n$, where $z_0 = 2Q_0/(Q_0-S_0+1/2)^2$ and $z_1$ has a formula given in the paper. The paper provides a traditional-format proof and a proof verified by the HOL Light proof assistant.
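
For concreteness, the quoted recipe can be transcribed directly. A minimal sketch, assuming $\kappa$ ranges over $0..n$ and working with logarithms to avoid overflow; the parameter values in the example call are illustrative, not taken from the paper:

```python
import math

def lg_delta(beta):
    # lg of delta = (beta * (pi*beta)^(1/beta) / (2*pi*e))^(1/(2*(beta-1)))
    return (math.log2(beta) + math.log2(math.pi * beta) / beta
            - math.log2(2 * math.pi * math.e)) / (2 * (beta - 1))

def primal_block_size(n, q, s):
    """Minimal beta such that some kappa satisfies the New Hope condition
    ((n+kappa)s^2+1)^(1/2) < (d/beta)^(1/2) delta^(2beta-d-1) q^(kappa/d)."""
    for beta in range(50, 2 * n):
        for kappa in range(0, n + 1):
            d = n + kappa + 1
            lhs = 0.5 * math.log2((n + kappa) * s * s + 1)
            rhs = (0.5 * math.log2(d / beta)
                   + (2 * beta - d - 1) * lg_delta(beta)
                   + (kappa / d) * math.log2(q))
            if lhs < rhs:
                return beta
    return None

# illustrative call with Kyber512-like numbers: n=512, q=3329, s ~ 1.22
print(primal_block_size(512, 3329, 1.22))
```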
Thomas Aulbach, Samed Düzlü, Michael Meyer, Patrick Struck, Maximiliane Weishäupl
ePrint Report
In this work, we analyze the so-called Beyond UnForgeability Features (BUFF) security of the submissions to the current standardization process of additional signatures by NIST. The BUFF notions formalize security against maliciously generated keys and have various real-world use cases, where security can be guaranteed despite misuse potential on a protocol level. Consequently, NIST declared security against the BUFF notions a desirable feature. Despite NIST's interest, only $6$ out of $40$ schemes consider BUFF security at all, but none give a detailed analysis. We close this gap by analyzing the schemes based on codes, isogenies, lattices, and multivariate equations. The results vary from schemes that achieve neither notion (e.g., Wave) to schemes that achieve all notions (e.g., PROV). In particular, we dispute certain claims by SQUIRRELS and VOX regarding their BUFF security. Resulting from our analysis, we observe that three schemes (CROSS, HAWK and PROV) achieve BUFF security without having the hash of public key and message as part of the signature, as BUFF-transformed schemes would. HAWK and PROV essentially use the lighter PS-3 transform by Pornin and Stern (ACNS'05). We further point out whether this transform suffices for the other schemes to achieve the BUFF notions, with both positive and negative results.
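
To make the two transforms concrete, here is a minimal sketch over an abstract signature scheme; sign and verify are placeholder callables, not any submission's actual API:

```python
import hashlib

def bind(pk: bytes, msg: bytes) -> bytes:
    return hashlib.sha256(pk + msg).digest()

# BUFF transform: sign H(pk, m) and include the hash in the signature.
def buff_sign(sign, sk, pk, msg):
    h = bind(pk, msg)
    return (sign(sk, h), h)

def buff_verify(verify, pk, msg, sig):
    inner_sig, h = sig
    return h == bind(pk, msg) and verify(pk, h, inner_sig)

# PS-3 transform (Pornin-Stern): sign H(pk, m) but do not append the hash.
def ps3_sign(sign, sk, pk, msg):
    return sign(sk, bind(pk, msg))

def ps3_verify(verify, pk, msg, sig):
    return verify(pk, bind(pk, msg), sig)
```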
Quan Yuan, Chao Sun, Tsuyoshi Takagi
ePrint Report
The Fiat-Shamir transformation is a widely employed technique in constructing signature schemes, known as Fiat-Shamir signature schemes (FS-SIG), derived from secure identification (ID) schemes. However, the existing security proof only takes into account classical signing queries and does not consider superposition attacks, where the signing oracle is quantum-accessible to the adversaries. Alagic et al. proposed a security model called blind unforgeability (BUF, Eurocrypt'20), regarded as a preferable notion under superposition attacks.
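
For reference, the transformation derives the challenge by hashing the ID scheme's commitment together with the message. A toy Schnorr-style instance with deliberately tiny, insecure parameters:

```python
import hashlib, secrets

# Toy Schnorr-type ID scheme turned into a signature via Fiat-Shamir.
# Insecure parameters for illustration: q = 509 divides p - 1,
# and g = 4 generates the order-q subgroup of Z_p*.
p, q, g = 1019, 509, 4

def keygen():
    x = secrets.randbelow(q - 1) + 1       # secret key in [1, q-1]
    return x, pow(g, x, p)                 # (sk, pk)

def challenge(com, msg):
    data = f"{com}|{msg}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(x, msg):
    r = secrets.randbelow(q - 1) + 1
    com = pow(g, r, p)                     # prover's first ID-scheme message
    c = challenge(com, msg)                # Fiat-Shamir: hash replaces verifier
    resp = (r + c * x) % q                 # ID-scheme response
    return com, resp

def verify(y, msg, sig):
    com, resp = sig
    c = challenge(com, msg)
    return pow(g, resp, p) == com * pow(y, c, p) % p

sk, pk = keygen()
assert verify(pk, "hello", sign(sk, "hello"))
```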

In this paper, we conduct a thorough security analysis of FS-SIGs in the BUF model. First, we propose a special property for ID schemes called quantum special honest-verifier zero-knowledge (qsHVZK), which is stronger than classical HVZK. We prove that qsHVZK is a sufficient property for BUF (with implicit rejection) of the resulting FS-SIG in the quantum random oracle model (QROM). Next, we give an efficient construction of (a weaker variant of) a qsHVZK ID scheme based on the quantum hardness of LWE problems.

To avoid strengthening the HVZK requirement, we then turn to deterministic FS-SIGs (DFS) for more efficient constructions. We show that if the pseudorandom function is quantum-access-secure (QPRF), then we can prove the BUF security of the resulting DFS with only the standard (multi-)HVZK requirement in the QROM. A similar result can be extended to the hedged version of FS-SIG.
Xunyue Hu, Quentin L. Meunier, Emmanuelle Encrenaz
ePrint Report
Side-channel attacks target the recovery of key material in cryptographic implementations by measuring physical quantities such as power consumption during the execution of a program. Simple power attacks consist in deducing secret information from a trace using a single or a few samples, as opposed to differential attacks which require many traces. Software cryptographic implementations now all contain a data-independent execution path, but often do not consider variations in power consumption associated with data. In this work, we show that a technique commonly used to select a value from different possible values in a control-independent way leads to significant power differences depending on the value selected. This difference is so pronounced that a single sample suffices to attack one condition, and no training on other traces is required. We exploit this finding to propose the first single-trace attack without any knowledge gained from previous executions, using trace folding. We target the two modular exponentiation implementations in Libgcrypt, recovering on average 100% and 99.98% of the correct bits, respectively, over 30 executions using 2,048-bit exponents. We also use this technique to attack the scalar multiplication in ECDSA, successfully recovering all secret nonces in 1,000 executions. Finally, the insights gained from this work allow us to show that a countermeasure proposed in the literature for safely loading precomputed operands in the context of windowed implementations can be attacked as well.
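
The selection technique in question is typically a branchless, mask-based select; the observation is that the Hamming weight of the mask, and hence the power drawn, depends on the selected value. A sketch of the idiom, our illustration rather than Libgcrypt's actual code:

```python
def ct_select(bit: int, a: int, b: int, width: int = 64) -> int:
    """Branchless select: a if bit == 1 else b. The execution path is
    data-independent, but the Hamming weight of `mask` is not, which is
    the kind of data-dependent power difference the attack exploits."""
    full = (1 << width) - 1
    mask = (-bit) & full          # all-ones if bit == 1, all-zeros if bit == 0
    return (a & mask) | (b & ~mask & full)

assert ct_select(1, 0xAAAA, 0x5555) == 0xAAAA
assert ct_select(0, 0xAAAA, 0x5555) == 0x5555
```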
Simon Erfurth
ePrint Report
We construct a digital signature scheme for images that allows the image to be compressed without invalidating the signature. More specifically, given a JPEG image signed with our signature scheme, a third party can compress the image using JPEG compression and, as long as the quantization tables only include powers of two, derive a valid signature for the compressed image, without access to the secret signing key and without interaction with the signer. Our scheme is constructed using a standard digital signature scheme and a hash function as building blocks. This form of signature, which allows image compression, could be useful in mitigating some of the threats posed by generative AI and fake news, without interfering with all uses of generative AI.

Taking inspiration from related signature schemes, we define a notion of unforgeability and prove our construction secure. Additionally, we show that our signatures have a size of 32.5 kb under standard parameter choices. Using image quality assessment metrics, we show that JPEG compression with parameters as specified by our scheme does not result in perceptibly reduced visual fidelity compared to standard JPEG compression.
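
One way to see why powers of two are a natural restriction (our intuition, not the paper's construction): truncating quantization by a power of two is a right shift, and shifts compose, so coarser-quantized coefficients are a deterministic function of finer-quantized ones:

```python
import random

def quantize(coef: int, k: int) -> int:
    # truncating quantization by 2**k, i.e. an arithmetic right shift
    return coef >> k

# Right shifts compose: quantizing by 2**a and then by 2**b is exactly
# quantizing by 2**(a+b), so re-quantization never needs the original image.
for _ in range(1000):
    c = random.randrange(-(1 << 15), 1 << 15)   # a DCT coefficient
    a, b = random.randrange(8), random.randrange(8)
    assert quantize(quantize(c, a), b) == quantize(c, a + b)
```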
Saskia Bayreuther, Robin Berger, Felix Dörre, Jeremias Mechler, Jörn Müller-Quade
ePrint Report
Secure two-party computation allows two mutually distrusting parties to compute a joint function over their inputs, guaranteeing properties such as input privacy or correctness.

For many tasks, such as joint computation of statistics, it is important that when one party receives the result of the computation, the other party also receives the result. Unfortunately, this property, which is called fairness, is unattainable in the two-party setting for arbitrary functions. So weaker variants have been proposed.

One such notion, proposed by Pass et al. (EUROCRYPT 2017) is called $\Delta$-fairness. Informally, it guarantees that if a corrupt party receives the output in round $r$ and stops participating in the protocol, then the honest party receives the output by round $\Delta(r)$. This notion is achieved by using so-called secure enclaves.

In many settings, $\Delta$-fairness is not sufficient, because a corrupt party is guaranteed to receive its output before the honest party, giving the corrupt party an advantage in further interaction. Worse, as $\Delta$ is known to the corrupt party, it can abort the protocol when it is most advantageous.

We extend the concept of $\Delta$-fairness by introducing a new fairness notion, which we call hidden $\Delta$-fairness, that addresses these problems. First, under our new notion, a corrupt party may not benefit from aborting, because with probability $\frac{1}{2}$ it does not learn the result first. Moreover, $\Delta$ and other parameters are sampled according to a given distribution and remain unknown to the participants in the computation.

We propose a 2PC protocol that achieves hidden $\Delta$-fairness, also using secure enclaves, and prove its security in the Generalized Universal Composability (GUC) framework.
Yongge Wang
ePrint Report
Transformer neural networks have gained significant traction since their introduction, becoming pivotal across diverse domains. Particularly in large language models like Claude and ChatGPT, the transformer architecture has demonstrated remarkable efficacy. This paper provides a concise overview of transformer neural networks and delves into their security considerations, focusing on covert channel attacks and their implications for the safety of large language models. We present a covert channel utilizing encryption and demonstrate its efficacy in circumventing Claude.ai's security measures. Our experiment reveals that Claude.ai appears to log our queries and blocks our attack within two days of our initial successful breach. This raises two concerns within the community: (1) The extensive logging of user inputs by large language models could pose privacy risks for users. (2) It may deter academic research on the security of such models due to the lack of experiment repeatability.
Ardianto Satriawan, Rella Mareta
ePrint Report
The Number Theoretic Transform (NTT) is a powerful mathematical tool that has become increasingly important in developing Post-Quantum Cryptography (PQC) and Homomorphic Encryption (HE). Its ability to efficiently calculate polynomial multiplication using the convolution theorem, with quasi-linear complexity $O(n \log{n})$ instead of $O(n^2)$ when implemented with Fast Fourier Transform-style algorithms, has made it a key component in modern cryptography. The FFT-style NTT algorithm, or fast-NTT, is particularly useful in lattice-based cryptography. In this short note, we briefly introduce the basic concepts of linear, cyclic, and negacyclic convolutions via traditional schoolbook algorithms, traditional NTT, its inverse (INTT), and FFT-like versions of NTT/INTT. We then provide consistent toy examples across the different concepts and algorithms to illustrate the basics of NTT.
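
As a taste of such toy examples, here is a schoolbook $O(n^2)$ negacyclic NTT in the ring $\mathbb{Z}_{17}[x]/(x^4+1)$; a minimal sketch in the spirit of the note, with fast-NTT replacing the quadratic transforms by butterfly recursions:

```python
# Toy negacyclic NTT over Z_q[x]/(x^n + 1) with q = 17, n = 4 (so 2n | q - 1).
q, n = 17, 4
psi = 9                      # primitive 2n-th (8th) root of unity mod 17
omega = pow(psi, 2, q)       # primitive n-th root of unity

def ntt(a):
    # schoolbook O(n^2) forward transform; the psi powers give the
    # negacyclic ("x^n = -1") wrap-around
    return [sum(a[j] * pow(psi, j, q) * pow(omega, i * j, q) for j in range(n)) % q
            for i in range(n)]

def intt(A):
    n_inv = pow(n, -1, q)
    a = [sum(A[i] * pow(omega, -i * j, q) for i in range(n)) * n_inv % q
         for j in range(n)]
    return [a[j] * pow(psi, -j, q) % q for j in range(n)]

def negacyclic_mul(a, b):
    # polynomial product mod (x^n + 1): transform, multiply pointwise, invert
    return intt([x * y % q for x, y in zip(ntt(a), ntt(b))])

# x * x^3 = x^4 = -1 in Z_17[x]/(x^4 + 1)
assert negacyclic_mul([0, 1, 0, 0], [0, 0, 0, 1]) == [16, 0, 0, 0]
```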
Jianming Lin, Weize Wang, Chang-An Zhao, Yuhao Zheng
ePrint Report
In the implementation of isogeny-based cryptographic schemes, Vélu's formulas are essential for constructing and evaluating odd-degree isogenies. Bernstein et al. proposed an approach known as √élu, which computes an $\ell$-isogeny at a cost of $\tilde{O}(\sqrt{\ell})$ finite field operations. This paper presents two key improvements to enhance the efficiency of the implementation of √élu from two aspects: optimizing the partition involved in √élu and speeding up the computations of the sums of products used in polynomial multiplications over the finite field $\mathbb{F}_p$ with large prime characteristic $p$. To optimize the partition, we adjust it to enhance the utilization of $x$-coordinates and eliminate computational redundancy, which can ultimately reduce the number of $\mathbb{F}_p$-multiplications. The speedup of the sums of products comes from two techniques: lazy reduction (abbreviated as LZYR) and generalized interleaved Montgomery multiplication (abbreviated as INTL). These techniques aim to minimize the underlying operations such as $\mathbb{F}_p$-reductions and assembly memory instructions. We present an optimized C and assembly code implementation of √élu for the CTIDH512 instantiation. In terms of $\ell$-isogeny computations in CTIDH512, the performance in clock cycles when applying the new partition + INTL (resp. new partition + LZYR) offers an improvement of up to 16.05% (resp. 10.96%) compared to the previous work.
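
Of the two techniques, lazy reduction is the simpler to illustrate: partial products are accumulated without intermediate modular reductions, and a single reduction is applied at the end. A sketch with an illustrative prime (not CTIDH512's); in the paper's setting the savings come from avoided $\mathbb{F}_p$-reductions in assembly, which Python's big integers hide:

```python
p = 2**255 - 19  # illustrative large prime characteristic, not CTIDH512's

def sum_of_products_eager(pairs):
    acc = 0
    for a, b in pairs:
        acc = (acc + a * b) % p   # one reduction per product
    return acc

def sum_of_products_lazy(pairs):
    acc = 0
    for a, b in pairs:
        acc += a * b              # accumulate unreduced
    return acc % p                # single final reduction

pairs = [(3, 5), (p - 1, p - 2), (7, 11)]
assert sum_of_products_eager(pairs) == sum_of_products_lazy(pairs)
```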
Omri Shmueli
ePrint Report
Recently, a paper by Chen (eprint 2024/555) has claimed to construct a quantum polynomial-time algorithm that solves the Learning With Errors Problem (Regev, JACM 2009), for a range of parameters. As a byproduct of Chen's result, it follows that Chen's algorithm solves the Gap Shortest Vector Problem, for gap $g(n) = \tilde{O}\left( n^{4.5} \right)$. In this short note we point to an error in the claims of Chen's paper.
Navid Alamati, Guru-Vamsi Policharla, Srinivasan Raghuraman, Peter Rindal
ePrint Report
We revisit the alternating moduli paradigm for constructing symmetric key primitives with a focus on constructing highly efficient protocols to evaluate them using secure multi-party computation (MPC). The alternating moduli paradigm of Boneh et al. (TCC 2018) enables the construction of various symmetric key primitives with the common characteristic that the inputs are multiplied by two linear maps over different moduli, first over $\mathbb{F}_2$ and then over $\mathbb{F}_3$.

The first contribution focuses on efficient two-party evaluation of alternating moduli PRFs, effectively building an oblivious pseudorandom function. We present a generalization of the PRF proposed by Boneh et al. (TCC 2018) along with methods to lower the communication and computation. We then provide several variants of our protocols, with different computation and communication tradeoffs, for evaluating the PRF. Most are in the OT/VOLE hybrid model while one is based on specialized garbling. Our most efficient protocol is effectively about $3\times$ faster and requires $1.3\times$ less communication.

Our next contribution is the efficient evaluation of the OWF $f(x)=B\cdot_3 (A\cdot_2 x)$ proposed by Dinur et al. (CRYPTO 2021), where $A \in \mathbb{F}^{m\times n}_2$, $B\in\mathbb{F}^{t\times m}_3$, and $\cdot_p$ denotes multiplication mod $p$. This surprisingly simple OWF can be evaluated within MPC by secret sharing $[\![x]\!]$ over $\mathbb{F}_2$, locally computing $[\![v]\!]=A\cdot_2 [\![x]\!]$, performing a modulus-switching protocol to $\mathbb{F}_3$ shares, and then locally computing the output shares $[\![y]\!]=B\cdot_3 [\![v]\!]$. We design a bespoke MPC-in-the-Head (MPCitH) signature scheme that evaluates the OWF, achieving state-of-the-art performance. The resulting signature has a size ranging from 4.0 to 5.5 KB, a $2\text{-}3\times$ reduction compared to Dinur et al. To the best of our knowledge, this is only $\approx 5\%$ larger than the smallest signature based on symmetric-key primitives, including the latest NIST PQC competition submissions. We additionally show that our core techniques can be extended to build very small post-quantum ring signatures for small-to-medium-sized rings that are competitive with state-of-the-art lattice-based schemes. Our techniques are in fact more generally applicable to set membership in MPCitH.
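
The OWF itself is compact enough to state in a few lines. A sketch with illustrative dimensions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, t = 128, 128, 81        # illustrative sizes, not the paper's parameters

A = rng.integers(0, 2, size=(m, n), dtype=np.int64)  # public matrix over F_2
B = rng.integers(0, 3, size=(t, m), dtype=np.int64)  # public matrix over F_3

def owf(x):
    v = (A @ x) % 2           # A ._2 x : multiply over F_2
    return (B @ v) % 3        # B ._3 v : then over F_3

x = rng.integers(0, 2, size=n, dtype=np.int64)       # secret input
y = owf(x)                    # public output
```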