International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

20 August 2025

Liam Eagen
ePrint Report
Bitcoin is a decentralized, permissionless network for digital payments. Bitcoin also supports a limited set of smart contracts, which restrict how bitcoin can be spent, through bitcoin script. In order to support more expressive scripting functionality, Robin Linus introduced the BitVM family of protocols. These implement a weaker form of "optimistic" smart contracts and, for the first time, allowed bitcoin to verify arbitrary computation. BitVM allows a challenger to publish a "fraud proof" that the computation was carried out incorrectly, which can be verified on chain even when the entire computation cannot. Jeremy Rubin introduced an alternative optimistic smart contract protocol called Delbrag. This protocol uses Garbled Circuits (GC) to replace the BitVM fraud proof by simply revealing a secret. He also introduced the Grug technique for malicious security.

We introduce a new formalization of GC-based optimistic techniques called Garbled Locks, or Glocks. Much like Delbrag, we use the GC to leak a secret and produce a signature as a fraud proof. We further propose the first concretely practical construction that does not require Grug. Like BitVM2 and Delbrag, Glock25 reduces verification of arbitrary bounded computation to verification of a SNARK. In Glock25, we use a designated-verifier version of a modified version of the SNARK Pari with smaller proof size. We make Glock25 maliciously secure using a combination of Cut-and-Choose, Verifiable Secret Sharing (VSS), and Adaptor Signatures. These techniques reduce the communication, computational, and on-chain complexity of the protocol compared to other approaches to constructing a Glock, e.g., one based on Groth16.
Krishnendu Chatterjee, Jan Matyáš Křišťan, Stefan Schmid, Jakub Svoboda, Michelle Yeo
ePrint Report
Payment channel networks (PCNs) are a promising technology that alleviates blockchain scalability by shifting the transaction load from the blockchain to the PCN. Nevertheless, the network topology has to be carefully designed to maximise the transaction throughput in PCNs. Additionally, users in PCNs also have to make optimal decisions on which transactions to forward and which to reject to prolong the lifetime of their channels. In this work, we consider an input sequence of transactions over $p$ parties. Each transaction consists of a transaction size, source, and target, and can be either accepted or rejected (entailing a cost). The goal is to design a PCN topology among the $p$ cooperating parties, along with the channel capacities, and then output a decision for each transaction in the sequence to minimise the cost of creating and augmenting channels, as well as the cost of rejecting transactions. Our main contribution is an $\mathcal{O}(p)$ approximation algorithm for the problem with $p$ parties. We further show that with some assumptions on the distribution of transactions, we can reduce the approximation ratio to $\mathcal{O}(\sqrt{p})$. We complement our theoretical analysis with an empirical study of our assumptions and approach in the context of the Lightning Network.
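As a minimal illustration of the online decision component of this problem (not the paper's approximation algorithm), the sketch below processes a transaction sequence over a single fixed-capacity channel and rejects at a cost whenever the balance cannot cover a transfer; all names and parameters are hypothetical.

```python
# Illustrative sketch (not the paper's algorithm): the online decision
# component of the PCN problem. Channel capacity is fixed up front;
# each rejected transaction incurs a fixed cost.

class Channel:
    def __init__(self, capacity):
        # Balance held by each endpoint; initially split evenly.
        self.balance = {0: capacity / 2, 1: capacity / 2}

    def forward(self, direction, size):
        """Move `size` units from `direction` to the other side, if possible."""
        if self.balance[direction] < size:
            return False
        self.balance[direction] -= size
        self.balance[1 - direction] += size
        return True

def process(transactions, channel, reject_cost):
    """Greedily accept transactions while capacity allows; return total cost."""
    cost = 0.0
    for direction, size in transactions:
        if not channel.forward(direction, size):
            cost += reject_cost  # rejection entails a cost
    return cost

# Example: a single channel of capacity 10 between two parties.
txs = [(0, 3.0), (0, 3.0), (1, 4.0), (0, 6.0)]
print(process(txs, Channel(capacity=10.0), reject_cost=1.0))
```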
Yue Huang, Xin Wang, Haibin Zhang, Sisi Duan
ePrint Report
Conventional Byzantine fault-tolerant protocols focus on the workflow within a group of nodes. In recent years, many applications of consensus involve communication across groups. Examples include communication between infrastructures running replicated state machines, sharding-based protocols, and cross-chain bridges. Unfortunately, little effort has been made to model the properties of communication across groups.

In this work, we propose a new primitive called cross-consensus reliable broadcast (XRBC). The XRBC primitive models the security properties of communication between two groups, where at least one group executes a consensus protocol. We provide three constructions of XRBC under different assumptions and present three different applications for our XRBC protocols: a cross-shard coordination protocol via a case study of Reticulum (NDSS 2024), a protocol for cross-shard transactions via a case study of Chainspace (NDSS 2018), and a solution for cross-chain bridges. Our evaluation results show that our protocols are highly efficient and benefit different applications. For example, in our case study on Reticulum, our approach achieves 61.16% lower latency than the vanilla approach.
Charlotte Bonte, Georgio Nicolas, Nigel P. Smart
ePrint Report
We discuss how Fully Homomorphic Encryption (FHE), and in particular the TFHE scheme, can be used to define an e-voting scheme for the Alternative Vote (AV) election system. This system has a more complex tallying phase than traditional First-Past-The-Post (FPTP) election variants. Previous work on e-voting schemes that used homomorphic encryption has focused on FPTP systems only, and utilized mainly linearly homomorphic encryption. We show, by using FHE, that more complex electoral systems such as AV can also be supported by homomorphic encryption. We show security of our protocol by considering it as a simple MPC functionality, and we also show practicality by presenting some experimental runtimes using the tfhe-rs library.
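For readers unfamiliar with AV tallying, the sketch below shows the cleartext elimination logic that makes the tallying phase more complex than FPTP. The paper's contribution is evaluating this kind of logic homomorphically with tfhe-rs; the plain Python here does not attempt that, and the ballots and tie-breaking rule are illustrative.

```python
# Cleartext sketch of Alternative Vote (instant-runoff) tallying, the
# logic an FHE-based AV scheme must evaluate. Ballots rank candidates;
# the weakest candidate is eliminated each round until a majority.

from collections import Counter

def av_tally(ballots):
    """ballots: list of candidate rankings, most preferred first."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot for its highest-ranked remaining candidate.
        counts = Counter(
            next(c for c in ballot if c in remaining)
            for ballot in ballots
            if any(c in remaining for c in ballot)
        )
        total = sum(counts.values())
        leader, votes = counts.most_common(1)[0]
        if votes * 2 > total or len(remaining) == 1:
            return leader  # majority reached (or only one candidate left)
        remaining.remove(min(counts, key=counts.get))  # eliminate weakest (ties arbitrary)

ballots = [["A", "B"], ["B", "A"], ["C", "B"], ["B"]]
print(av_tally(ballots))  # B wins once the weakest candidate is eliminated
```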
Gopal Anantharaman, Jintai Ding
ePrint Report
A Symmetric Key Encryption scheme using Camera Zooming is presented using a familiar paper-and-pencil cipher. The Camera can have a magnification/scaling up to some integer. The encrypter and decrypter are two hardware systems that are assumed to have the capability to zoom a given image with text from the resolution of a single character to a page by applying an appropriate scaling factor and an appropriate polynomial-time zoom algorithm. Using the symmetric key, the Camera or a zooming algorithm implementation repeatedly zooms on different boxes in an image filled with ciphertext and spurious redundant non-correlated pseudorandom data. Then, it decrypts the cipher using the Merkle-Hellman Knapsack Cryptosystem (MHKC) trapdoor algorithm or variants for efficient encryption/decryption. As shown, the MHKC algorithm's cryptanalysis vulnerability to lattice-based LLL and other density attacks would not affect this scheme, as long as the key is private.
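Since the scheme delegates encryption/decryption to the MHKC trapdoor, a minimal textbook MHKC sketch may help; the parameters below are toy values, and, per the scheme's assumption, the knapsack is kept entirely private rather than published.

```python
# Minimal textbook Merkle-Hellman Knapsack Cryptosystem (MHKC) sketch.
# Classic MHKC is broken by LLL once the public knapsack is exposed,
# which is why the scheme above keeps the key private. Toy parameters.

from math import gcd

w = [2, 7, 11, 21, 42, 89, 180, 354]  # private superincreasing sequence
q = 881                               # private modulus, q > sum(w)
r = 588                               # private multiplier, gcd(r, q) == 1
assert q > sum(w) and gcd(r, q) == 1

knapsack = [(r * wi) % q for wi in w]  # the (here, kept secret) hard knapsack

def encrypt(bits):
    return sum(b * ki for b, ki in zip(bits, knapsack))

def decrypt(c):
    cp = (c * pow(r, -1, q)) % q       # undo the modular multiplication
    bits = []
    for wi in reversed(w):             # greedily solve the easy knapsack
        bits.append(1 if cp >= wi else 0)
        cp -= wi * bits[-1]
    return bits[::-1]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert decrypt(encrypt(msg)) == msg
```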
Ting-Yun Yeh
ePrint Report
Kleptography was first proposed by Adam Young and Moti Yung in 1996, while the algorithm substitution attack was introduced by Mihir Bellare et al. as a variation of kleptography in 2014, after the Dual EC incident and the revelation of confidential documents by Edward Snowden. These two paradigms share a common goal: to enable attackers to embed covert capabilities into cryptographic implementations while maintaining the appearance of normal functionality. The goal of this paper is to consolidate existing research on kleptographic attacks, integrate it into a unified definition, and explore future directions for research in this field. The paper begins by introducing and comparing the two major branches of kleptographic attacks, traditional kleptography and the post-Snowden algorithm substitution attack, highlighting their theoretical distinctions, threat models, and historical development. It then analyzes the specific goals that attackers aim to achieve through such subversions and proposes a generalized definition of algorithm substitution attack that includes all of these goals. The paper also presents practical examples framed within this definition and classifies prior research works as either strong or weak attacks based on their structure and undetectability. Finally, it discusses the current landscape of research in kleptographic attacks and suggests future directions from both the attack and defense perspectives.
Tianyao Gu, Afonso Tinoco, Sri Harish G Rajan, Elaine Shi
ePrint Report
Making 2-party computation scale up to big datasets is a long-cherished dream of our community. More than a decade ago, a line of work implemented and optimized interactive RAM-model 2-party computation (2PC), achieving somewhat reasonable concrete performance on large datasets, but unfortunately suffering from $\widetilde{O}(T)$ roundtrips for a $T$-time computation. Garbled RAM promises to compress the number of roundtrips to $2$, and encouragingly, a line of recent work has designed concretely efficient Garbled RAM schemes whose asymptotic communication and computation costs almost match the best known interactive RAM-model 2PC, but still leave $({\sf poly})\log\log$ gaps.

We present ${\sf PicoGRAM}$, a practical garbled RAM (GRAM) scheme that not only asymptotically matches the prior best RAM-model 2PC, but also achieves an order of magnitude concrete improvement in online time relative to interactive RAM-model 2PC, on a dataset of size $8$GB. Moreover, our work also gives the first Garbled RAM whose total cost (including bandwidth and computation) achieves an optimal dependency on the database size (up to an arbitrarily small super-constant factor).

Our work shows that for high-value real-life applications such as Signal, blockchains, and Meta that require oblivious accesses to large datasets, Garbled RAM is a promising direction towards eventually removing the trusted hardware assumption that exists in production implementations today. Our open source code is available at https://github.com/picogramimpl/picogram.
Paul Gerhart, Davide Li Calsi, Luigi Russo, Dominique Schröder
ePrint Report
Threshold Schnorr signatures enable $t$-out-of-$n$ parties to collaboratively produce signatures that are indistinguishable from standard Schnorr signatures, ensuring compatibility with existing verification systems. While static-secure constructions are well understood and achieve optimal round complexity, obtaining full adaptive security, i.e., withstanding up to $t-1$ dynamic corruptions under standard assumptions, has proven elusive: recent impossibility results (CRYPTO’25) either rule out known proof techniques for widely deployed schemes or require speculative assumptions and idealized models, while positive examples achieving full adaptivity from falsifiable assumptions incur higher round complexity (EUROCRYPT’25, CRYPTO’25).

We overcome these barriers with the first round-optimal threshold Schnorr signature scheme that, under a slightly relaxed security model, achieves full adaptive security from DDH in the random oracle model.

Our model is relaxed in the sense that the adversary may adaptively corrupt parties at any time, but each signer must refresh part of their public key after a fixed number of signing queries. These updates are executed via lightweight, succinct, stateless tokens, preserving the aggregated signature format. Our construction is enabled by a new proof technique, equivocal deterministic nonce derivation, which may be of independent interest.
Sourav Das, Ling Ren, Ziling Yang
ePrint Report
Threshold decryption schemes allow a group of decryptors, each holding a private key share, to jointly decrypt ciphertexts. Over the years, numerous threshold decryption schemes have been proposed for applications such as secure data storage, internet auctions, and voting, and recently as a tool to protect against miner-extractable value attacks in blockchain. Despite the importance and popularity of threshold decryption, many natural and practical threshold decryption schemes have only been proven secure against static adversaries.

In this paper, we present two threshold decryption schemes that withstand malicious adaptive corruption. Our first scheme is based on the standard ElGamal encryption scheme and is secure against chosen plaintext attack (CPA). Our second scheme, based on the chosen ciphertext attack (CCA) secure Shoup-Gennaro encryption scheme, is also CCA secure. Both of our schemes have non-interactive decryption protocols and comparable efficiency to their static secure counterparts. Building on the technique introduced by Das and Ren (CRYPTO 2024), our threshold ElGamal decryption scheme relies on the hardness of Decisional Diffie-Hellman and the random oracle model.
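To fix ideas, here is a toy, statically secure, semi-honest threshold ElGamal decryption in a small Schnorr group; the group parameters and threshold are illustrative, and this is the textbook baseline rather than the paper's adaptively secure construction.

```python
# Toy 2-of-3 threshold ElGamal decryption (textbook baseline, not the
# paper's scheme). Each party publishes a partial decryption c1^{s_i};
# Lagrange interpolation in the exponent recovers c1^s.

import random

p, q, g = 10007, 5003, 4            # safe prime p = 2q + 1; g generates the order-q subgroup

def shamir_shares(secret, t, ids):
    coeffs = [secret] + [random.randrange(q) for _ in range(t)]
    return {i: sum(c * pow(i, k, q) for k, c in enumerate(coeffs)) % q for i in ids}

def lagrange_at_zero(i, ids):
    num = den = 1
    for j in ids:
        if j != i:
            num, den = num * (-j) % q, den * (i - j) % q
    return num * pow(den, -1, q) % q

s = random.randrange(q)             # secret key, Shamir-shared among 3 parties
shares = shamir_shares(s, t=1, ids=[1, 2, 3])
h = pow(g, s, p)                    # public key

m = pow(g, 42, p)                   # message encoded in the group
k = random.randrange(q)
c1, c2 = pow(g, k, p), m * pow(h, k, p) % p   # ElGamal ciphertext

quorum = [1, 3]                     # any t + 1 = 2 parties suffice
partials = {i: pow(c1, shares[i], p) for i in quorum}   # non-interactive partials c1^{s_i}
c1_s = 1
for i in quorum:                    # combine in the exponent
    c1_s = c1_s * pow(partials[i], lagrange_at_zero(i, quorum), p) % p
assert m == c2 * pow(c1_s, -1, p) % p
```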
Hanlin Liu, Xiao Wang, Kang Yang, Longhui Yin, Yu Yu
ePrint Report
The Regular Syndrome Decoding (RSD) problem, introduced nearly two decades ago, is a regular version of the Syndrome Decoding (SD) problem, where the noise vector is divided into consecutive, equally sized blocks, each containing exactly one noisy coordinate. Recently, RSD has gained renewed attention for its applications to Pseudorandom Correlation Generators (PCGs) and more. Very recently, several works presented improved algebraic (AGB, for short) and ISD approaches (including regular-ISD and regular-RP) that utilize the regular structure of noise vectors and provide a more precise security evaluation of the RSD problem.
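The regular noise structure is easy to picture; the sketch below contrasts a regular noise vector (exactly one noisy coordinate per block) with the unstructured noise of plain SD. Parameters are illustrative.

```python
# The noise structure that distinguishes RSD from SD: a regular noise
# vector of length n and weight t has one noisy coordinate in each of
# its t consecutive blocks of size n // t.

import random

def regular_noise(n, t):
    block = n // t
    e = [0] * n
    for b in range(t):
        e[b * block + random.randrange(block)] = 1  # exactly one per block
    return e

def unstructured_noise(n, t):
    e = [0] * n
    for i in random.sample(range(n), t):            # any t positions (plain SD)
        e[i] = 1
    return e

e = regular_noise(n=16, t=4)
print(e, sum(e))  # weight is exactly t, with one 1 per block of 4
```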

In this paper, we refine the AGB algorithm from a one-round process to a two-round process, and refer to the new algebraic algorithm as AGB 2.0. For each round, we guess a few noise-free positions, followed by a tailored partial XL algorithm. This interleaving strategy increases the probability of success by reducing the number of guessed noise-free positions and effectively lowers the problem's dimension in each round. By fine-tuning position guesses in each round and optimizing the aggregate running time, our AGB 2.0 algorithm reduces the concrete security of the RSD problem by up to $6$ bits for the parameter sets used in prior works, compared to the best-known attack. In particular, for a specific parameter set in Wolverine, the RSD security is $7$ bits below the $128$-bit target. We analyze the asymptotic complexity of algebraic attacks on the RSD problem over a finite field $\mathbb{F}$ with the field size $|\mathbb{F}|>2$, when the noise rate $\rho$ and code rate $R$ satisfy $\rho + R < 1$. If $n \rho R^2 = O(1)$, where $n$ is the noise length, the RSD problem over $\mathbb{F}$ can be solved in polynomial time; the same does not hold for the SD problem. We show that the ISD and its variants, including regular-ISD and regular-RP, are asymptotically less efficient than AGB for solving RSD problems with $R = o(1/\log(n))$ and $|\mathbb{F}| > e^{n\rho R}$.
Michael Adjedj, Geoffroy Couteau, Arik Galansky, Nikolaos Makriyannis, Oren Yomtov
ePrint Report
The industry is moving away from passwords for authentication and authorization, with hardware devices for storing long-term cryptographic keys emerging as the leading alternative. However, these devices often have limited displays and remain vulnerable to theft, malware, or attacks that trick users into signing malicious payloads. Current systems provide little fallback security in such cases. Any solution must also meet strict requirements: compatibility with industry standards, scalability to handle high request volumes, and high availability.

We present a novel design for authentication and authorization that meets these demands. Our approach virtualizes the authenticating/authorizing party via a two-party signing protocol with a helper entity, ensuring that keys remain secure even if a device is compromised and that every signed message conforms to a security policy.

We formalize the required properties for such protocols and show how they are met by existing schemes (e.g., FROST for Schnorr, Boneh–Haitner–Lindell-Segev'25 for ECDSA). Motivated by the widespread use of ECDSA (FIDO2/Passkeys, blockchains), we introduce a new, optimized two-party ECDSA protocol that is significantly more efficient than prior work. At its core is a new variant of exponent-VRF, improving on earlier constructions and of independent interest. We validate our design with a proof-of-concept virtual authenticator for the FIDO2 Passkeys framework.
Jonas Janneck, Jonas Meers, Massimo Ostuzzi, Doreen Riepel
ePrint Report
An Authenticated Key Encapsulation Mechanism (AKEM) combines public-key encryption and digital signatures to provide confidentiality and authenticity. AKEMs build the core of Hybrid Public Key Encryption (RFC 9180) and serve as a useful abstraction for messaging applications like the Messaging Layer Security (MLS) protocol (RFC 9420) and Signal's X3DH protocol. To date, most existing AKEM constructions either rely on classical (non post-quantum) assumptions or on unoptimized black-box approaches leading to suboptimal efficiency.

In this work, we choose a different abstraction level to combine KEMs and identification schemes more efficiently by leveraging randomness reuse. We construct a generic scheme and identify the necessary security requirements on the underlying KEM and identification scheme when reusing parts of their randomness. This allows for a concrete instantiation from isogenies based on the POKÉ KEM (EUROCRYPT'25) and the SQIsignHD identification scheme (EUROCRYPT'24). To be used in our black-box construction, the identification scheme requires the more advanced security property of response non-malleability. Hence, we further show that a slight modification of SQIsignHD satisfies this notion, which might be of independent interest.

Putting everything together, our final scheme yields the most compact AKEM from PQ assumptions with public keys of 366 bytes and ciphertexts of 216 bytes while fulfilling the strongest confidentiality and authenticity notions.

14 August 2025

Anubhav Baweja, Alessandro Chiesa, Elisabetta Fedele, Giacomo Fenzi, Pratyush Mishra, Tushar Mopuri, Andrew Zitek-Estrada
ePrint Report
The sumcheck protocol is a fundamental building block in the design of probabilistic proof systems, and has become a key component of recent work on efficient succinct arguments.

We study time-space tradeoffs for the prover of the sumcheck protocol in the streaming model, and provide upper and lower bounds that tightly characterize the efficiency achievable by the prover.

$\bullet{}$ For sumcheck claims about a single multilinear polynomial we demonstrate an algorithm that runs in time $O(kN)$ and uses space $O(N^{1/k})$ for any $k \geq 1$. For non-adaptive provers (a class which contains all known sumcheck prover algorithms) we show this tradeoff is optimal.

$\bullet{}$ For sumcheck claims about products of multilinear polynomials, we describe a prover algorithm that runs in time $O(N(\log \log N + k))$ and uses space $O(N^{1/k})$ for any $k \geq 1$. We show that, conditioned on the hardness of a natural problem about multiplication of multilinear polynomials, any ``natural'' prover algorithm that uses space $O(N^{1/2 - \varepsilon})$ for some $\varepsilon > 0$ must run in time $\Omega(N(\log \log N + \log \varepsilon))$.

We implement and evaluate the prover algorithm for products of multilinear polynomials. We show that our algorithm consumes up to $120\times$ less memory compared to the linear-time prover algorithm, while incurring a time overhead of less than $2\times$.

The foregoing algorithms and lower bounds apply in the interactive proof model. We show that in the polynomial interactive oracle proof model one can in fact design a new protocol that achieves a better time-space tradeoff of $O(N^{1/k})$ space and $O(N(\log^* N + k))$ time for any $k \geq 1$.
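As a point of reference for the tradeoffs above, the toy below implements the standard linear-time, linear-space honest sumcheck prover for a single multilinear polynomial, i.e., the baseline that the $O(kN)$-time, $O(N^{1/k})$-space streaming algorithms improve on; the field and sizes are illustrative.

```python
# Toy honest sumcheck prover for a multilinear polynomial given as its
# table of 2^n evaluations over {0,1}^n (variable 1 = high bit). Each
# round message g is linear, so sending g(0), g(1) suffices. This
# baseline stores the whole table, i.e., linear space.

import random

P = 2**61 - 1  # a Mersenne prime field

def round_message(table):
    """Return (g(0), g(1)) for the current round's univariate message."""
    half = len(table) // 2
    return sum(table[:half]) % P, sum(table[half:]) % P

def fold(table, r):
    """Fix the current variable to the verifier's challenge r."""
    half = len(table) // 2
    return [((1 - r) * a + r * b) % P for a, b in zip(table[:half], table[half:])]

n = 4
table = [random.randrange(P) for _ in range(2**n)]  # evaluations on {0,1}^n
claim = sum(table) % P                              # the sumcheck claim
for _ in range(n):
    g0, g1 = round_message(table)
    assert (g0 + g1) % P == claim                   # verifier's per-round check
    r = random.randrange(P)                         # verifier's challenge
    claim = (g0 * (1 - r) + g1 * r) % P             # new claim: g(r)
    table = fold(table, r)
assert claim == table[0]                            # final check: f(r_1, ..., r_n)
```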
Katharina Boudgoust, Corentin Jeudy, Erkan Tairi, Weiqiang Wen
ePrint Report
The Module Learning With Errors (M-LWE) problem has become a fundamental hardness assumption for lattice-based cryptography. It offers an attractive trade-off between strong robustness guarantees, sometimes directly based on worst-case lattice problems, and efficiency of the subsequent cryptographic primitives. Different flavors of M-LWE have since been introduced to improve performance. Such variants look at different secret-error distributions and might allow for additional hints on the secret-error vector. Existing hardness results however only cover restricted classes of said distributions, or are tailored to specific leakage models. This lack of generality hinders the design of efficient and versatile cryptographic schemes, as each new distribution or leakage model requires a separate and nontrivial hardness evaluation.
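For reference, a standard formulation of the decision problem (the ring and rank are left generic here; the secret-error distributions are exactly what this work generalizes):

\[
(\mathbf{A},\ \mathbf{A}\mathbf{s} + \mathbf{e}) \ \approx_c\ (\mathbf{A},\ \mathbf{u}), \qquad \mathbf{A} \gets U\!\left(R_q^{m \times k}\right),\ \ \mathbf{u} \gets U\!\left(R_q^{m}\right),
\]

where, e.g., $R_q = \mathbb{Z}_q[X]/(X^d + 1)$ for a power-of-two degree $d$, the secret $\mathbf{s} \in R_q^k$ and error $\mathbf{e} \in R_q^m$ follow prescribed distributions, and in the Hermite Normal Form variant the secret is itself drawn from the error distribution.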

In this work, we address this limitation by establishing the hardness of M-LWE under general distributions. As a first step, we show that M-LWE remains hard when the error vector follows an arbitrary bounded distribution with sufficient entropy, with some restriction on the number of samples. Building on this, we then reduce to the Hermite Normal Form (HNF), where the secret-error vector follows said arbitrary distribution. Overall, our result shows that the actual shape of the distribution does not matter, as long as it keeps sufficient entropy.

To demonstrate the versatility of our framework, we further analyze a range of leakage scenarios. By examining the residual entropy given the leakage, we show that our results of M-LWE with general distributions encompass various types of leakage. More precisely, we cover exact and approximate linear hints which are widely used in recent cryptographic designs, as well as quadratic, and even non-algebraic forms, some of which were not yet covered by any theoretical hardness guarantees. The generality of our results aims at facilitating future cryptographic designs and security analyses.
Jakub Mielczarek, Małgorzata Zajęcka
ePrint Report
In this article, we introduce a new post-quantum cryptosystem, NTWR Prime, which is based on the NTRU Prime and Learning With Rounding (LWR) problems. This scheme is inspired by the NTWE construction proposed by Joel Gartner in 2023. Unlike NTWE, our algorithm employs an irreducible, non-cyclotomic polynomial whose Galois group is isomorphic to the symmetric group. Additionally, the LWR problem is used in place of the LWE problem, offering potential advantages for structural security due to its deterministic nature. We conduct a security analysis demonstrating that solving the NTWR Prime problem requires solving both the underlying NTRU Prime and LWR problems. Consequently, given the absence of definitive post-quantum security proofs for these problems, our construction offers redundancy, which may fulfill the requirements of applications with exceptionally high security standards. Importantly, we show that there exists a set of parameters satisfying the hardness assumptions for both contributing problems.

13 August 2025

Dung Bui, Kelong Cong
ePrint Report
Fuzzy-labeled private set intersection (PSI) outputs the corresponding label if there is a "fuzzy" match between two items, for example, when the Hamming distance between the two items is low. Such protocols can be applied in privacy-preserving biometric authentication, proximity testing, and so on. The only fuzzy-labeled PSI protocol designed for practical purposes is by Uzun et al. (USENIX’21), which is based on homomorphic encryption. This design puts constraints on the item size, label size, and communication cost, since it is difficult for homomorphic encryption to support a large plaintext space and it is well known that the ciphertext-expansion factor is large.
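The functionality itself is simple to state in the clear; the sketch below is that ideal functionality, not a protocol: the receiver obtains the label only when some sender item is a low-Hamming-distance match for its query, and learns nothing otherwise. Item encoding and threshold are illustrative.

```python
# The fuzzy-labeled PSI ideal functionality, computed in the clear.
# Real protocols compute this securely, so the sender's database and
# the receiver's query stay hidden from each other.

def hamming(x, y):
    return bin(x ^ y).count("1")     # items encoded as bit strings / ints

def fuzzy_labeled_psi(sender_db, query, threshold):
    """sender_db: dict item -> label. Returns the label on a fuzzy match."""
    for item, label in sender_db.items():
        if hamming(item, query) <= threshold:
            return label
    return None                      # no match: receiver learns nothing

db = {0b10110100: "alice", 0b01011011: "bob"}
print(fuzzy_labeled_psi(db, 0b10110110, threshold=2))  # "alice"
```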

Our construction begins with a new primitive which we call vector ring-oblivious linear evaluation (vector ring-OLE). This primitive does not rely on existing instantiations of ring-OLE over the quotient ring, but leverages the more efficient vector-OLE. It is ideal for building unbalanced threshold-labeled PSI and is also of independent interest.

Our main contribution, fuzzy-labeled PSI, is bootstrapped from our threshold-labeled PSI protocol. Through a prototype implementation, we demonstrate our communication cost is up to $4.6\times$ better than the prior state-of-the-art with comparable end-to-end latency while supporting a significantly higher label size.
Andrej Bogdanov, Alon Rosen, Kel Zin Tan
ePrint Report
The $k$LIN problem concerns solving noisy systems of random sparse linear equations mod 2. It gives rise to natural candidate hard CSP distributions and is a cornerstone of local cryptography. Recently, it was used in advanced cryptographic constructions, under the name 'sparse LPN'.

For constant sparsity $k$ and inverse polynomial noise rate, both search and decision versions of $k$LIN are statistically possible and conjectured to be computationally hard for $n\ll m\ll n^{k/2}$, where $m$ is the number of $k$-sparse linear equations, and $n$ is the number of variables.
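Concretely, an instance of this distribution can be sampled as below; the parameters are illustrative, with eta standing in for the inverse-polynomial noise rate.

```python
# Sampling the kLIN / sparse-LPN distribution: m noisy k-sparse linear
# equations mod 2 in n variables, each flipped with probability eta.

import random

def klin_samples(n, m, k, eta, secret):
    samples = []
    for _ in range(m):
        support = random.sample(range(n), k)     # a random k-sparse equation
        b = sum(secret[i] for i in support) % 2
        if random.random() < eta:                # inverse-polynomial noise
            b ^= 1
        samples.append((support, b))
    return samples

n = 64
s = [random.randrange(2) for _ in range(n)]
eqs = klin_samples(n=n, m=256, k=3, eta=1 / n, secret=s)
print(eqs[0])  # e.g. ([5, 41, 17], 1)
```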

We show an algorithm that given access to a distinguisher for $(k-1)$LIN with $m$ samples, solves search $k$LIN with roughly $O(nm)$ samples. Previously, it was only known how to reduce from search $k$LIN with $O(m^3)$ samples, yielding meaningful guarantees for decision $k$LIN only when $m \ll n^{k/6}$.

The reduction succeeds even if the distinguisher has sub-constant advantage at a small additive cost in sample complexity. Our technique applies with some restrictions to Goldreich's function and $k$LIN with random coefficients over other finite fields.
Sam Buxbaum, Lucas M. Tassis, Lucas Boschelli, Giovanni Comarela, Mayank Varia, Mark Crovella, Dino P. Christenson
ePrint Report
We present a real-world deployment of secure multiparty computation to predict political preference from private web browsing data. To estimate aggregate preferences for the 2024 U.S. presidential election candidates, we collect and analyze secret-shared data from nearly 8000 users from August 2024 through February 2025, with over 2000 daily active users sustained throughout the bulk of the survey. The use of MPC allows us to compute over sensitive web browsing data that users would otherwise be more hesitant to provide. We collect data using a custom-built Chrome browser extension and perform our analysis using the CrypTen MPC library. To our knowledge, we provide the first implementation under MPC of a model for the learning from label proportions (LLP) problem in machine learning, which allows us to train on unlabeled web browsing data using publicly available polling and election results as the ground truth. The client code is open source, and the remaining code will be open source in the future.
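A minimal cleartext sketch of an LLP-style objective may help: bags of unlabeled feature vectors are scored by a model that is trained to match each bag's known label proportion. Everything below (logistic model, squared-error loss, shapes) is a hypothetical stand-in, not the paper's MPC implementation.

```python
# Learning from label proportions (LLP), sketched in the clear: the
# model never sees per-user labels, only per-bag positive rates (for
# which public polling results can serve as ground truth).

import numpy as np

def llp_loss(w, bags, proportions):
    """Squared error between predicted and known per-bag positive rates."""
    loss = 0.0
    for X, p in zip(bags, proportions):
        preds = 1 / (1 + np.exp(-X @ w))     # per-user logistic scores
        loss += (preds.mean() - p) ** 2      # match the bag's proportion
    return loss / len(bags)

rng = np.random.default_rng(0)
bags = [rng.normal(size=(50, 8)) for _ in range(4)]   # 4 bags of 50 users
proportions = [0.48, 0.52, 0.51, 0.47]                # known bag-level rates
w = rng.normal(size=8)
print(llp_loss(w, bags, proportions))
```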
Randy Kuang
ePrint Report
In this paper, we present an optimized construction of the Homomorphic Polynomial Public Key (HPPK) cryptosystem, a novel framework designed to provide enhanced security and efficiency in the post-quantum era. Our work introduces a layered cryptographic design that combines modular arithmetic permutations with an innovative additive random masking technique. This approach effectively obscures the underlying factorizable structure of the public key, thereby mitigating vulnerabilities to known lattice reduction attacks and other algebraic cryptanalyses. The security of our scheme is formally grounded in the computational hardness of three new problems: the Hidden Modulus Product Problem (HMPP), the HPPK Key Recovery Problem (HKRP), and the HPPK Secret Recovery Problem (HSRP). We demonstrate through rigorous analysis that the optimal attacks on our scheme are computationally infeasible for appropriately chosen parameters. Furthermore, we show that HPPK achieves remarkably compact key, ciphertext, and signature sizes, offering a significant advantage over leading NIST post-quantum finalists such as Kyber, Dilithium, and Falcon, particularly in bandwidth-constrained environments. The HPPK cryptosystem offers a compelling and mathematically grounded solution for next-generation cryptography, delivering both provable security and practical efficiency.
Weidan Ji, Zhedong Wang, Lin Lyu, Dawu Gu
ePrint Report
Most adaptively secure identity-based encryption (IBE) constructions from lattices in the standard model follow the framework proposed by Agrawal et al. (EUROCRYPT 2010). However, this framework has an inherent restriction: the modulus is quadratic in the trapdoor norm. This leads to an unnecessarily large modulus, reducing the efficiency of the IBE scheme. In this paper, we propose a novel framework for adaptively secure lattice-based IBE in the standard model that removes this quadratic restriction on the modulus while keeping the dimensions of the master public key, secret keys, and ciphertexts unchanged. More specifically, our key observation is that the original framework has a natural cross-multiplication structure of the trapdoor. Building on this observation, we design two novel algorithms with non-spherical Gaussian outputs that fully utilize this structure and thus remove the restriction. Furthermore, we apply our framework to various IBE schemes with different partitioning functions in both integer and ring settings, demonstrating its significant improvements and broad applicability. Besides, compared to a concurrent and independent work by Ji et al. (PKC 2025), our framework is significantly simpler in design, and enjoys a smaller modulus, a more compact master public key, and shorter ciphertexts.