International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

13 November 2025

Matteo Campanelli, Dario Fiore, Mahak Pancholi
ePrint Report
Cryptographic proofs are a versatile primitive. They are useful in practice not only as a standalone tool (for example in verifiable computation), but also when applied $\textit{on top}$ of other cryptographic functionalities — hash functions, signature schemes, and even proofs themselves — to $\textit{enhance}$ their security guarantees (for example to provide succinctness). However, when the security of the other primitive is established in the Algebraic Group Model (AGM), the security of the resulting construction does not follow automatically. We introduce a general methodology of $\textit{provable security}$ for this setting. Our approach guarantees the security of $\Pi \circ X$, the composition of a cryptographic proof $\Pi$ with a functionality $X$, whenever the security of $X$ is analysed in the AGM. Our methodology has general applicability, with immediate relevance to IVC, proof aggregation, and aggregate signatures. We obtain:

- $\textbf{IVC for unbounded depth from AGM-secure proofs.}$ Incrementally Verifiable Computation (IVC) is a canonical example of composing cryptographic proofs with one another. Achieving provable security for IVC beyond constant-depth computations has remained a central open challenge. Using our methodology, we obtain new IVC instantiations that remain secure for unbounded-depth computations when built from proofs analysed in the AGM. This broadens the class of proof systems usable in the canonical IVC constructions to include prominent systems such as Groth16 and Marlin – proof systems not covered by prior analyses (e.g., Chiesa et al., TCC 2024).

- $\textbf{Succinct aggregation of AGM-secure signatures.}$ Applying our framework, we give the first provable security for the folklore proof-based construction of aggregate signatures from AGM-secure signatures. Prior analyses either exclude AGM-secure signatures or rely on heuristic assumptions. Establishing this result required resolving additional technical challenges beyond applying our framework – for example, reasoning about the security of proof systems in the presence of signing oracles.
Marshall Ball, Clément Ducros, Saroja Erabelli, Lisa Kohl, Nicolas Resch
ePrint Report
Understanding the minimal computational power needed to realize a pseudorandom function (PRF) is a long-standing question in cryptography. By the Razborov–Smolensky polynomial approximation method, it is known that $AC^0[2]$ cannot support strong pseudorandom functions with subexponential security, since any such function can be distinguished from random with quasipolynomially many samples.

In this work, we initiate the study of low-complexity strong PRFs under a refined framework that separates adversary query complexity from running time, and observe that distinguishing algorithms for $AC^0[2]$ do not apply if the number of queries is below the threshold implied by the Razborov–Smolensky approximation bound.

We propose the first candidate strong PRF in $AC^0[2]$, which plausibly offers subexponential security against adversaries limited to a fixed quasipolynomial number of queries. We show that our candidate lacks heavy Fourier coefficients, resists a natural class of adaptive attacks, has high rational degree, is non-sparse over $\mathbb{F}_2$ in expectation, and has low correlation with fixed function families.

Finally, we show that if any strong PRF exists in $AC^0[2]$ (or a superclass), then we can construct a universal PRF, i.e., a single, fixed function which is guaranteed to be a strong PRF in the same class.
Amir Moradi
ePrint Report
The Enhanced ISW (E-ISW) masking scheme, recently proposed at DATE 2024, was introduced as a refinement to the classical ISW construction to restore provable security guarantees in hardware implementations affected by glitches. By enforcing input-complete gate evaluations through the use of artificial delays, E-ISW seeks to mitigate the glitch-induced leakage that compromises standard masking techniques. However, in this work, we demonstrate that this modification is fundamentally insufficient to ensure robust side-channel resistance in realistic hardware environments. We conduct a detailed analysis and present concrete examples where E-ISW fails to prevent information leakage, even when the prescribed countermeasures are correctly applied. These vulnerabilities arise due to deeper conceptual shortcomings in the design, particularly the absence of compositional reasoning about the interaction between glitches and masking. Our results show that the security claims of E-ISW do not hold in practice, and they expose critical limitations in relying on heuristic delay-based fixes without formal and compositional proofs of security. This study serves as a cautionary note for the cryptographic engineering community, emphasizing the necessity of rigorous validation when proposing enhancements to established secure computation techniques.
Mike Hamburg
ePrint Report
Lucas sequences are a helpful tool in mathematical and cryptographic calculations, providing in particular an efficient way to exponentiate in a quotient ring $R[x]/(x^2 - Px + Q)$. As with exponentiation in other finite rings and fields, we can use the periodic nature of these sequences to find roots of polynomials. Since they behave differently in the ring $\mathbb{Z}/N$ depending on whether $N$ is prime, Lucas sequences are also useful for primality testing. In this paper, we discuss improvements to Lucas-sequence algorithms for square roots and heuristic primality testing.
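The "efficient way to exponentiate" mentioned above rests on the classical doubling identities $U_{2k} = U_k V_k$ and $V_{2k} = V_k^2 - 2Q^k$, which evaluate a Lucas pair at index $k$ in $O(\log k)$ ring operations. The following Python sketch is our own illustration of this standard technique, not code from the paper; the function name and interface are ours, and it assumes $N$ is odd so that 2 is invertible:

```python
def lucas_uv(P, Q, k, N):
    """Compute (U_k, V_k, Q^k) mod N for the Lucas sequences with
    parameters (P, Q), via a left-to-right double-and-add ladder.
    Requires k >= 1 and odd N (so 2 is invertible mod N)."""
    D = (P * P - 4 * Q) % N
    inv2 = pow(2, -1, N)           # modular inverse (Python 3.8+)
    U, V, Qk = 1, P % N, Q % N     # state for index 1
    for bit in bin(k)[3:]:         # bits of k after the leading 1
        # doubling step: index j -> 2j
        U, V, Qk = (U * V) % N, (V * V - 2 * Qk) % N, (Qk * Qk) % N
        if bit == '1':
            # addition step: index 2j -> 2j + 1, using
            # U_{j+1} = (P*U_j + V_j)/2, V_{j+1} = (D*U_j + P*V_j)/2
            U, V = ((P * U + V) * inv2) % N, ((D * U + P * V) * inv2) % N
            Qk = (Qk * Q) % N
    return U, V, Qk
```

With $P = 1$, $Q = -1$ this gives the Fibonacci and Lucas numbers, e.g. `lucas_uv(1, -1, 10, 1000003)` returns `(55, 123, 1)`.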

Our first application is modular square roots. It is straightforward to take square roots modulo primes $p \equiv 3, 5, 7 \pmod 8$. When $p \equiv 1 \pmod 8$, and especially when $p-1$ is divisible by a large power of 2, Müller's algorithm and the Kim-Koo-Kwon algorithm are attractive options. Both of these use Lucas sequences. Here we show how to simplify and speed up Kim-Koo-Kwon. We also show a variant of Müller's algorithm which works even when $p \equiv 3 \pmod 4$, which would be useful if $p$ were secret.
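The easy residue classes mentioned above have well-known closed-form solutions: $a^{(p+1)/4}$ for $p \equiv 3 \pmod 4$, and Atkin's method for $p \equiv 5 \pmod 8$. The sketch below is our own illustration of these classical cases, not the paper's algorithms, and assumes $a$ is a quadratic residue modulo an odd prime $p$:

```python
def sqrt_mod_p(a, p):
    """Square root of a quadratic residue a modulo an odd prime p,
    covering the easy cases p % 8 in {3, 5, 7}."""
    a %= p
    if p % 4 == 3:                  # covers p = 3, 7 (mod 8)
        return pow(a, (p + 1) // 4, p)
    if p % 8 == 5:                  # Atkin's method
        v = pow(2 * a, (p - 5) // 8, p)
        i = (2 * a * v * v) % p     # i^2 = -1 (mod p)
        return (a * v * (i - 1)) % p
    raise ValueError("p = 1 (mod 8) needs Tonelli-Shanks "
                     "or the Lucas-sequence algorithms above")
```

For example, `sqrt_mod_p(2, 7)` returns `4`, and indeed $4^2 = 16 \equiv 2 \pmod 7$.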

Our second application is heuristic primality testing. The Baillie-PSW primality test combines a strong Fermat test with a strong Lucas test. The recent Baillie-Fiori-Wagstaff variant strengthens Baillie-PSW. Here we show an improved variant, $\mathtt{SuperBFPSW}$, which is stronger than Baillie-Fiori-Wagstaff but also faster than the original Baillie-PSW.
Akif Mehmood, Nicola Tuveri
ePrint Report
The emergence of Cryptographically Relevant Quantum Computers (CRQCs) threatens traditional cryptographic systems, necessitating a transition to Post-Quantum Cryptography (PQC). OpenSSL 3.0 introduced `Providers`, enabling modular cryptographic integration. This work presents the concept of a "shallow `Provider`", facilitating integration of external implementations, to achieve a higher degree of cryptographic agility. `aurora`, which we introduce as an instance of the "shallow `Provider`" methodology, integrates standardized PQC algorithms in TLS 1.3 for both key establishment and authentication, to support the PQC transition. It enhances cryptographic agility by allowing OpenSSL to dynamically adapt to evolving PQC standards and the rapidly evolving ecosystem of PQC implementations.
Charanjit S. Jutla, Rohit Nema, Arnab Roy
ePrint Report
Partial fraction decomposition is a fundamental technique in mathematics where products of rational functions can be expressed as sums of fractions. While rational functions have been used in various cryptographic constructions, their rich algebraic structure has not been systematically explored as a direct foundation for building cryptographic primitives. In this work, we describe and exploit two key properties of partial fraction decomposition: (1) the decomposition property itself, which enables efficient set membership testing, and (2) a novel linear independence property arising from the non-singularity of Cauchy matrices, which enables threshold cryptography.

We present two main applications. First, we construct a key-value commitment scheme where a dictionary is represented as a linear combination of partial fractions. Our scheme achieves constant-size commitments (a single group element) and proofs, supports homomorphic updates enabling stateless operation, and provides efficient membership and non-membership proofs through simple pairing equations. We also introduce Credential-based Key-Value Commitments, where keys are registered via Boneh-Boyen signatures, enabling applications in permissioned settings.

Second, we construct a dynamic threshold encryption scheme leveraging the linear independence of partial fraction products. Our scheme achieves compact ciphertexts, supports public preprocessing of public keys to a succinct encryption key, enables dynamic threshold selection at encryption time, and provides robustness through share verification without random oracles. In particular, we achieve the shortest CPA-secure ciphertext size of 3 group elements, given a logarithmic-size preprocessed encryption key.

We prove security of our constructions in the standard model under new $q$-type assumptions and establish their generic hardness in the generic bilinear group model. Our work demonstrates that working directly with the algebraic structure of rational fractions, rather than converting to polynomial representations, yields elegant and efficient cryptographic constructions with concrete advantages over prior work.
Jonathan Katz, Marek Sefranek
ePrint Report
Anonymous credentials allow users to obtain credentials on various attributes, and then use those credentials to give unlinkable proofs about the values of some attributes without leaking anything about others. They have recently received interest from companies including Google, Apple, and Cloudflare, and are being actively evaluated both at the IETF and in the EU. Anonymous credentials based on BBS signatures are a leading candidate for standardization.

In some natural applications of anonymous credentials, it is beneficial to hide even the issuer of a credential, beyond revealing the fact that the issuer is in some pre-determined set specified by a verifier. Sanders and Traore recently showed a construction of such issuer-hiding anonymous credentials based on the Pointcheval-Sanders signature scheme.

In this work we show how to achieve issuer hiding for BBS-based anonymous credentials. Our construction satisfies a notion of everlasting issuer-hiding anonymity, and is unforgeable in the generic group model. It can be integrated into existing standards, and has several efficiency advantages compared to prior work.
Eugene Lau, Laura Shea, Nadia Heninger
ePrint Report
We analyze the security of RSA keys where the public exponent $e$ is larger than $\varphi(N)$. While nearly all real-world applications of RSA use a small set of pre-determined constant values for $e$, the literature contains a number of constructions involving large special-form exponents. Examples include proposed countermeasures against Wiener's attack on small RSA private exponents, exponent masking against side channels, a 2018 proposal by Joye and Michalevsky to extend the usefulness of hardware security modules, and a 2023 RSA blind signature construction by Amjad, Yeo, and Yung.

We give an efficient algorithm to factor an RSA modulus $N$ given an integer $a$ that is "close" to a multiple of $\varphi(N)$. That is, we can factor $N$ in polynomial time given $\varphi(N) < a \le N^{3/2}$ if there is an integer $y$ with $|y| \le a N^{-3/4}$ such that $a - y \equiv 0 \bmod \varphi(N)$. Our attack is a special case of Blömer and May's 2004 algorithm using Coppersmith's method that enables us to give stronger bounds for our application range of interest.

We instantiate our attack against several constructions and exhibit families of weak public exponents that do not appear to have been analyzed in the literature. In particular, the Joye and Michalevsky exponent transform permits full key recovery if used for small public exponents. While it is well known that RSA is vulnerable for small private exponent $d$, our work suggests that care must also be taken when generating large public exponents, or when publishing transformed exponents.
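The classical fact underlying such attacks is that an *exact* multiple of $\varphi(N)$ already yields the factorization of $N$ in randomized polynomial time (Miller's reduction); the paper's contribution is handling exponents that are merely *close* to such a multiple via Coppersmith's method. The following toy sketch of the exact-multiple case is our own illustration, not the paper's algorithm:

```python
import math
import random

def factor_with_phi_multiple(N, M):
    """Given an odd RSA modulus N = p*q and an exact multiple M of
    phi(N), recover a nontrivial factor of N with high probability
    (Miller's randomized reduction)."""
    # Write M = 2^s * t with t odd.
    s, t = 0, M
    while t % 2 == 0:
        s, t = s + 1, t // 2
    while True:
        a = random.randrange(2, N - 1)
        g = math.gcd(a, N)
        if g > 1:
            return g                       # lucky: a shares a factor with N
        x = pow(a, t, N)
        for _ in range(s):
            y = pow(x, 2, N)
            if y == 1 and x != 1 and x != N - 1:
                # x is a nontrivial square root of 1 mod N
                return math.gcd(x - 1, N)
            x = y
        # this a was unlucky; try another
```

Since each random base succeeds with probability at least 1/2, a handful of iterations suffices in practice; e.g. for $N = 1009 \cdot 1013$ and $M = 3\varphi(N)$ the function returns one of the two prime factors.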
Gabriel Dettling, Elisaweta Masserova, Chen-Da Liu-Zhang, Matthieu Rambaud, Antoine Urban
ePrint Report
A significant number of works have considered the problem of multi-party computation over dynamic committees in synchronous networks, including YOSO MPC [Crypto'21], Fluid MPC [Crypto'21], SCALES MPC [TCC'22] and Layered MPC [Crypto'23]. However, prior works assume that every party has access to an ideal synchronous broadcast channel towards the next committee.

While this assumption is partly justified due to the seminal work of Garay [WDAG'94] stating that deterministic broadcast with dynamic committees is impossible, it is open whether there are randomized solutions.

We answer this question in the affirmative, by providing a complete characterization of broadcast with dynamic committees. We use the formalization introduced in the Layered MPC setting to obtain our results for layered broadcast.
Pedro Capitão, Hila Dahari-Garbian, Lisa Kohl, Zhe Li
ePrint Report
Homomorphic Secret Sharing (CRYPTO 2016) allows a secret to be shared among two or more parties in such a way that the parties can locally evaluate a class of functions on their shares. Homomorphic secret sharing (HSS) schemes and their underlying techniques have facilitated a wide range of applications. To account for the fact that parties generating or evaluating the shares might act maliciously, variants of HSS schemes that allow detection of such malicious behavior have been introduced. However, all prior approaches to malicious HSS that capture the class of $\mathsf{NC}^1$ circuits either crucially rely on a random oracle or require a non-reusable setup.

In this work, we initiate the study of malicious public-key $2$-party HSS in the standard model with reusable setup, where any malicious behavior during share generation and share evaluation can be detected. Towards constructing malicious HSS, we introduce the notion of homomorphic secret sharing with robust linear reconstruction (RLR-HSS) and show that this notion readily implies malicious HSS. We outline challenges in instantiating RLR-HSS due to the error present in all current HSS constructions not relying on SHE/FHE, and show how to overcome these using derandomization techniques by Dwork et al. (EUROCRYPT 2004).

Finally, we show applications of malicious HSS to compact designated-verifier non-interactive zero-knowledge arguments and maliciously secure $2$-party computation in the standard model (supporting the same function class as the underlying malicious HSS).
Lucjan Hanzlik, Eugenio Paracucchi, Riccardo Zanotto
ePrint Report
Blind signatures have received increased attention from researchers and practitioners. They allow users to obtain a signature under a message without revealing it to the signer. One of the most popular applications of blind signatures is to use them as one-time tokens, where the issuing is not linkable to the redeeming phase, and the signature under a random identifier forms a valid token. This concept is the backbone of the Privacy Pass system, which uses it to identify honest but anonymous users and protect content delivery networks from botnets.

Non-interactive blind signatures for random messages were introduced by Hanzlik (Eurocrypt'23). They allow a signer to create a pre-signature with respect to a particular public key, while the corresponding secret key can later be used to finalize the signature. This non-interaction allows for more applications than in the case of blind signatures. In particular, the author suggested using regular PKI keys as the recipient public key, allowing for a distribution of one-time tokens to users outside the system, e.g., to public keys of GitHub users, similar to airdropping of cryptocurrencies. Unfortunately, despite introducing this concept, the paper fails to provide schemes that work with keys used in the wild.

We solve this open problem. We introduce a generic construction of non-interactive blind signatures that relies on Yao's garbled circuit techniques and provide particular improvements to this generic setting. We replace oblivious transfer with its non-interactive variant and show how to construct it so that the recipient's public key, encoding the $\mathsf{OT}$ choice, is a standard RSA public key $(e,N)$. To improve the efficiency of the garbling, we show how to garble the signing algorithm of the pairing-based Pointcheval-Sanders (PS) signatures and the RSA-based signature scheme with efficient protocols by Camenisch and Lysyanskaya. Our techniques also apply to the well-known BBS signatures. All our improvements are of independent interest and are central to our contribution.
Subham Das, Riccardo Invernizzi, Péter Kutas, Jonas Meers
ePrint Report
We define and analyze the Leveled Isogeny Problem with Hints (LIPH), which is a generalization of the Isogeny Problem with Level Structure first introduced by De Feo, Fouotsa and Panny at EUROCRYPT'24. In a LIPH instance we are tasked to recover a secret isogeny \(\varphi\) given masked torsion point images \(\Gamma\cdot(\varphi(P),\varphi(Q))^\top\) for some \((P,Q)\) of order \(N\) and unknown \(\Gamma\in GL_2(N)\). Additionally, we are provided a \emph{hint} on \(\Gamma\), revealing some bits of its entries. Instances of LIPH occur naturally in the case of modern isogeny-based key exchanges that use masked torsion points as part of their public key, when additionally some parts of the masking matrix \(\Gamma\) are revealed due to, for instance, a side-channel attack.

We provide efficient algorithms that solve various instances of LIPH, leading to efficient \emph{partial key recovery attacks} in practice. More specifically, we present Coppersmith-type attacks that are able to recover an M-SIDH/POKÉ secret key given \(50\%\) (resp. \(86\%\)) of the most-significant bits of an entry of \(\Gamma\), and a FESTA secret key given 67\% of the most-significant bits of \(\Gamma\). In the case of FESTA we also present a tailored combinatorial attack running in subexponential time $O(2^{\sqrt{n}})$ when $50\%$ of the bits of $\Gamma$ leak at random.
Chenyang Liu, Xukun Wang, Zhifang Zhang
ePrint Report
Private Information Retrieval (PIR) is a crucial component in many privacy-preserving systems, with Offline/Online PIR attracting significant attention. Recent works have focused on eliminating offline communication overhead. However, existing constructions incur high online communication costs as a trade-off. To address this, we propose VIA, a single-server PIR scheme that eliminates offline communication while achieving $O_\lambda(\log N)$ online communication complexity. Experimental evaluations demonstrate that for a 32 GB database, VIA requires only 690 KB of online communication---a $3.7\times$ reduction compared to state-of-the-art schemes without offline communication---while attaining a throughput of 3.11 GB/s. Furthermore, we introduce VIA-C, a variant of VIA that allows offline communication. Compared to previous communication-efficient schemes, VIA-C achieves a $24.5\times$ reduction in online communication, requiring only 2.1 KB for a 32 GB database (with 14.8 MB offline communication). Moreover, VIA-C extends naturally to VIA-B, which supports batch queries. Compared to previous communication-efficient batch PIR schemes, VIA-B achieves a $3.5\times$ reduction in query size and a $127\times$ reduction in response size for a 1 GB database of 1-byte records. The designs of our schemes rely on a novel DMux-CMux structure and LWE-to-RLWE conversion techniques.
Alessandro Budroni, Marco Defranceschi, Federico Pintore
ePrint Report
The Permuted Kernel Problem (PKP) is a computational problem for linear codes over finite fields that has emerged as a promising hard problem for constructing post-quantum cryptographic schemes, with its main application found in the digital signature scheme PERK, submitted to the NIST standardization process for quantum-secure additional signatures. Upon reviewing the first version of PERK, NIST recommended further research on the concrete complexity of PKP. In this work, we follow this recommendation and investigate algorithmic improvements to the known methods for solving PKP. Specifically, we build upon the state-of-the-art work of Santini, Baldi, and Chiaraluce (IEEE Trans. Inf. Theory, 2024), and introduce a new algorithm that outperforms it over a wide range of parameters, yielding double-digit bit reductions in estimated complexity on representative instances. Nevertheless, our analysis shows that these improvements do not affect the parameter-set choices in PERK, thereby reinforcing confidence in its security.
Christopher Goes, Yulia Khalniyazova, Enrique Larraia, Xuyang Song
ePrint Report
Fuzzy Message Detection (FMD), introduced by Beck et al. (CCS 2021), outsources the detection of messages to an untrusted server. In this paper, we extend FMD to the multi-key setting: several servers are given different detection keys, all extracted from a single secret key. Multi-key FMD allows each receiver to locally combine tests from multiple servers, making it possible to set high false-positive rates on the servers while attaining low rates on the receiver side, thereby striking a better balance between privacy and efficiency. We further formalize the notion of stealth public keys in the FMD setting. Finally, we provide two constructions, one with short public keys.
Bruno Cavalar, Eli Goldin, Matthew Gray, Taiga Hiroka, Tomoyuki Morimae
ePrint Report
One of the most fundamental problems in the field of hypothesis testing is the identity testing problem: deciding whether samples from some unknown distribution $\mathcal{G}$ are actually from some explicit distribution $\mathcal{D}$. It is known that when the distribution $\mathcal{D}$ has support $[N]$, the optimal sample complexity for the identity testing problem is roughly $O(\sqrt{N})$. However, many distributions of interest, including those which can be sampled efficiently, have exponential support size, and therefore the optimal identity tester also requires exponentially many samples.

In this paper, we bypass this lower bound by considering restricted settings. The $O(\sqrt{N})$-sample identity tester above is constructed so that it is not fooled by any (even inefficiently-sampled) distribution. However, in most applications, the distributions under consideration are efficiently samplable, and therefore it is enough to consider only identity testers that are not fooled by efficiently-sampled distributions. In this setting we can hope to construct efficient identity testers. We investigate relations between efficient verification of classical/quantum distributions and classical/quantum cryptography, showing the following results:

- Classically efficiently samplable distributions are verifiable if and only if one-way functions do not exist.
- Quantumly efficiently samplable distributions are verifiable by $\mathbf{P}^\mathbf{PP}$ with a polynomial number of samples.
- Sampling-based quantum advantage can be verified quantumly (with a polynomial number of samples) if one-way puzzles do not exist.
- If QEFID pairs exist, then some quantumly efficiently samplable distributions are not verifiable.
Hanbeom Shin, Insung Kim, Sunyeop Kim, Byoungjin Seok, Dongjae Lee, Deukjo Hong, Jaechul Sung, Seokhie Hong, Sangjin Lee
ePrint Report
At ASIACRYPT 2021, Baksi et al. introduced DEFAULT, a block cipher designed to algorithmically resist Differential Fault Attack (DFA), claiming 64-bit DFA security regardless of the number of injected faults. At EUROCRYPT 2022, Nageler et al. demonstrated that DEFAULT’s claimed DFA resistance can be broken by applying an information-combining technique. More recently, at ASIACRYPT 2024, Jana et al. improved DFA by searching for deterministic trails. They showed that, for DEFAULT with a simple key schedule, injecting five faults at the fifth-to-last round reduces the key space to one, and for BAKSHEESH, injecting twelve faults at the third-to-last round achieves the same result. In this paper, we propose a new DFA framework that utilizes a Mixed-Integer Linear Programming (MILP) solver. This framework makes it possible to attack more rounds than previously achieved, while simultaneously reducing the number of fault injections required for key recovery. Furthermore, we present a method to determine the most efficient fault injection bit positions by systematically analyzing the input differences from all possible single bit-flip faults, thereby further reducing the required number of faults. This systematic analysis has the significant advantage of allowing us to theoretically calculate the required number of faults. Applying our framework, for DEFAULT, injecting three faults at the sixth-to-last round and two faults at the seventh- and eighth-to-last rounds reduces the key space to one, and for BAKSHEESH, injecting six faults at the fourth-to-last round and nine faults at the fifth-to-last round achieves the same result.
Mehdi Abri, Jonathan Katz
ePrint Report
The stateless hash-based digital signature algorithm (SLH-DSA) is a post-quantum signature scheme based on the SPHINCS+ framework that was recently standardized by NIST. Although it offers many benefits, a drawback of SLH-DSA is that it has relatively large signatures. Several techniques have been proposed to reduce the signature size of SPHINCS-like schemes, and NIST is actively evaluating variants with shorter signatures for possible future standardization.

We explore using forced pruning in the few-time signature scheme used by SPHINCS+ to reduce the overall signature size. Prior work suggested similar ideas, but claimed that the improvement from forced pruning was small. We revisit this conclusion by performing a detailed theoretical analysis of forced pruning along with a more thorough exploration of its benefits. We show that forced pruning can improve upon SPHINCS+C (Oakland 2023) in all respects, and can reduce the overall signature size for the "smaller SPHINCS+" variants proposed by Fluhrer and Dang by up to 20% with minimal effect on signing time. Our results thus show that forced pruning can be a beneficial optimization for hash-based signatures.
Yicheng Liu, Rafail Ostrovsky, Scott Shenker, Sam Kumar
ePrint Report
Organizations increasingly need to collaborate by performing a computation on their combined dataset, while keeping their data hidden from each other. Certain kinds of collaboration, such as collaborative data analytics and AI, require a level of performance beyond what current cryptographic techniques for distributed trust can provide. This is because the organizations run software in different trust domains, which can require them to communicate over WANs or the public Internet. In this paper, we explore how to instead run such applications using fast datacenter-type LANs. We show that, by carefully redesigning distributed trust frameworks for LANs, we can achieve up to order-of-magnitude better performance than naïvely using a LAN. Then, we develop deployment models for Distributed But Proximate Trust (DBPT) that allow parties to use a LAN while remaining physically and logically distinct. These developments make secure collaborative data analytics and AI significantly more practical and set new research directions for developing systems and cryptographic theory for high-performance distributed trust.
Enis Golaszewski, Alan T. Sherman, Edward Zieglar, Jonathan D. Fuchs, Sophia Hamer
ePrint Report
As a case study in cryptographic binding, we present a formal-methods analysis of the cryptographic channel binding mechanisms in the Fast IDentity Online (FIDO) Universal Authentication Framework (UAF) authentication protocol, which seeks to reduce the use of traditional passwords in favor of authentication devices. First, we show that UAF's channel bindings fail to mitigate protocol interaction by a Dolev-Yao adversary, enabling the adversary to transfer the server's authentication challenge to alternate sessions of the protocol. As a result, in some contexts, the adversary can masquerade as a client and establish an authenticated session with a server (e.g., possibly a bank server). Second, we implement a proof-of-concept man-in-the-middle attack against eBay's open source FIDO UAF implementation. Third, we propose and formally verify improvements to UAF. The weakness we analyze is similar to the vulnerability discovered in the Needham-Schroeder protocol over 25 years ago. That this vulnerability appears in the FIDO UAF standard highlights the strong need for protocol designers to bind messages properly and to analyze their designs with formal-methods tools. To our knowledge, we are the first to carry out a formal-methods analysis of channel binding in UAF and the first to exhibit details of an attack on UAF that exploits the weaknesses of UAF's channel binding. Our case study illustrates the importance of cryptographically binding context to protocol messages to prevent an adversary from misusing messages out of context.