International Association for Cryptologic Research

CryptoDB

Papers from EPRINT 2010

2010
EPRINT
(If) Size Matters: Size-Hiding Private Set Intersection
Modern society is increasingly dependent on, and fearful of, the availability of electronic information. There are numerous examples of situations where sensitive data must be – sometimes reluctantly – shared between two or more entities without mutual trust. As often happens, the research community has foreseen the need for mechanisms to enable limited (privacy-preserving) sharing of sensitive information, and a number of effective (if not always efficient) solutions have been proposed. Among them, Private Set Intersection (PSI) techniques are particularly appealing for scenarios where two parties wish to compute an intersection of their respective sets of items without revealing to each other any other information. Thus far, "any other information" has been interpreted to mean any information about items not in the intersection. In this paper, we motivate the need for Private Set Intersection with stronger privacy properties that include hiding of the set size held by one of the two entities (Client). This new and important privacy feature turns out to be attainable at relatively low additional cost. We illustrate a pair of concrete SHI-PSI (Size-Hiding Private Set Intersection) protocols that offer a trade-off between stronger privacy and better efficiency. Both protocols are provably secure under very standard cryptographic assumptions. We demonstrate their practicality via experimental results obtained from a prototype implementation. We also consider size-hiding in a group PSI setting and construct a Group SHI-PSI extension that incurs surprisingly low overhead.
2010
EPRINT
1024XKS - A High Security Software Oriented Block Cipher Revisited
The block cipher 1024 has a key schedule that somewhat resembles that of IDEA. The user key is cyclically shifted by a fixed amount to form the round keys. In the key schedule of IDEA, this has led to weak keys. The primitive key schedule of 1024 may also lead to related-key attacks. Although, to the knowledge of the author, weak keys or related-key attacks have not yet been published, there is a need to put things right. The new one-way key schedule of 1024XKS (eXtended Key Schedule) has pseudo-random round keys, which are obtained by using the cipher as a randomizer. Apart from that, the user key now has two sizes, 2048 bit and 4096 bit. Also, the order of the S-boxes has been changed to thwart attacks based on symmetry.
2010
EPRINT
2-round Substitution-Permutation and 3-round Feistel Networks have bad Algebraic Degree
We study the algebraic degree profile of reduced-round block cipher schemes. We show, with elementary combinatorial and algebraic arguments, that the degree is not maximal. We discuss how this can be turned into distinguishers from balanced random functions.
2010
EPRINT
A calculus for game-based security proofs
The game-based approach to security proofs in cryptography is a widely-used methodology for writing proofs rigorously. However, a unifying language for writing games is still missing. In this paper we show how CSLR, a probabilistic lambda-calculus with a type system that guarantees that computations are probabilistic polynomial time, can be equipped with a notion of game indistinguishability. This allows us to define cryptographic constructions, effective adversaries, security notions, computational assumptions, game transformations, and game-based security proofs in the unified framework provided by CSLR. Our code for cryptographic constructions is close to implementation in the sense that we do not assume primitive uniform distributions but use a realistic algorithm to approximate them. We illustrate our calculus on cryptographic constructions for public-key encryption and pseudorandom bit generation.
2010
EPRINT
A Certifying Compiler for Zero-Knowledge Proofs of Knowledge Based on $\Sigma$-Protocols
Zero-knowledge proofs of knowledge (ZK-PoK) are important building blocks for numerous cryptographic applications. Although ZK-PoK have very useful properties, their real world deployment is typically hindered by their significant complexity compared to other (non-interactive) crypto primitives. Moreover, their design and implementation is time-consuming and error-prone. We contribute to overcoming these challenges as follows: We present a comprehensive specification language and a certifying compiler for ZK-PoK protocols based on $\Sigma$-protocols and composition techniques known in the literature. The compiler allows the fully automatic translation of an abstract description of a proof goal into an executable implementation. Moreover, the compiler overcomes various restrictions of previous approaches, e.g., it supports the important class of exponentiation homomorphisms with hidden-order co-domain, needed for privacy-preserving applications such as idemix. Finally, our compiler is certifying, in the sense that it automatically produces a formal proof of security (soundness) of the compiled protocol (currently covering special homomorphisms) using the Isabelle/HOL theorem prover.
2010
EPRINT
A Class of 1-Resilient Function with High Nonlinearity and Algebraic Immunity
In this paper, we propose a class of 1-resilient Boolean functions with optimal algebraic degree and high nonlinearity. Moreover, based on the conjecture proposed in [4], it can be proved that the algebraic immunity of our functions is at least suboptimal.
2010
EPRINT
A Combinatorial Analysis of HC-128
We show that the knowledge of any one of the two internal state arrays of HC-128, along with the knowledge of 2048 keystream words, is sufficient to construct the other state array completely in $2^{42}$ time complexity. Though our analysis does not lead to any attack on HC-128, it reveals a structural insight into the cipher. In the process, we theoretically establish certain combinatorial properties of the HC-128 keystream generation algorithm. We also suggest a modification to HC-128 that takes care of the recently known cryptanalytic results with little reduction in speed.
2010
EPRINT
A Compact FPGA Implementation of the SHA-3 Candidate ECHO
We propose a compact architecture of the SHA-3 candidate ECHO for the Virtex-5 FPGA family. Our architecture is built around an 8-bit datapath. We show that a careful organization of the chaining variable and the message block in the register file allows one to design a compact control unit based on a 4-bit counter, an 8-bit counter, and a simple Finite State Machine. A fully autonomous implementation of ECHO on a Xilinx Virtex-5 FPGA requires $127$ slices and a single memory block to store the internal state, and achieves a throughput of $72$ Mbps.
2010
EPRINT
A Comparison of Cryptanalytic Tradeoff Algorithms
The three major time-memory tradeoff algorithms are compared in this paper. Specifically, the Hellman tradeoff algorithm, the distinguished point tradeoff method, and the rainbow table method, in their non-perfect table versions, are considered. We show that, under parameters that are typically considered in theoretic discussions of the tradeoff algorithms, the Hellman and distinguished point tradeoffs perform very close to each other and the rainbow table method performs somewhat better than the other two algorithms. Our method of comparison can easily be applied to other situations, where the conclusions could be different. The analysis presented in this paper takes the effects of false alarms into account and also fully considers techniques for reducing storage, such as the ending point truncation method and index files.
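To make the structure these tradeoffs share concrete, here is a minimal, illustrative Python sketch (not taken from the paper) of a single Hellman-style table for a toy one-way function: an offline phase builds chains and stores only start and end points, and an online phase walks forward from the target value and rebuilds candidate chains, re-checking to weed out false alarms. The toy function, the identity reduction function, and the parameters m = t = 128 are assumptions chosen purely for illustration; distinguished-point tables, rainbow chains, and perfect-table variants change exactly the details that the paper's comparison quantifies.

import hashlib

N_BITS = 20                       # toy key space of size 2^20
M, T = 1 << 7, 1 << 7             # m chains of length t (illustrative values)

def f(x: int) -> int:
    """Toy one-way function mapping 20-bit values to 20-bit values."""
    digest = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big") % (1 << N_BITS)

def r(y: int) -> int:
    """Reduction function mapping outputs back into the key space (identity here)."""
    return y

def build_table() -> dict:
    """Offline phase: store only (endpoint -> startpoint) for m chains of length t."""
    table = {}
    for start in range(M):
        x = start
        for _ in range(T):
            x = r(f(x))
        table[x] = start
    return table

def lookup(table: dict, y: int):
    """Online phase: try to invert y = f(x); may fail, since one small table covers little."""
    z = r(y)
    for _ in range(T):
        if z in table:                       # candidate chain: rebuild it from its start
            x = table[z]
            for _ in range(T):
                if f(x) == y:
                    return x                 # genuine preimage found
                x = r(f(x))                  # otherwise it was a false alarm; keep going
        z = r(f(z))
    return None

table = build_table()
secret = 12345
print(lookup(table, f(secret)))              # prints the recovered preimage or None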
2010
EPRINT
A DAA Scheme Requiring Less TPM Resources
Direct anonymous attestation (DAA) is a special digital signature primitive, which provides a balance between signer authentication and privacy. One of the most interesting properties that makes this primitive attractive in practice is its construction of signers. The signer role of DAA is split between two entities, a principal signer (a trusted platform module (TPM)) with limited computational capability and an assistant signer (a computer platform into which the TPM is embedded) with more computational power but less security tolerance. Our first contribution in this paper is a new DAA scheme that requires very few TPM resources. In fact, the TPM has only to perform two exponentiations for the DAA Join algorithm and three exponentiations for the DAA Signing algorithm. We show that this new scheme has better performance than the existing DAA schemes and is provably secure based on the $q$-SDH problem and the DDH problem in the random oracle model. Our second contribution is a modification of the DAA game-based security model to cover the property of non-frameability.
2010
EPRINT
A Digital Signature Using Multivariate Functions on Quaternion Ring
In this paper, we propose a digital signature scheme on a non-commutative quaternion ring over finite fields. We generate a multivariate function F(X) of high degree and construct the digital signature scheme using F(X). Our system is immune to Gröbner basis attacks because obtaining the parameters of F(X) that serve as secret keys amounts to solving a system of multivariate algebraic equations, which is one of the NP-complete problems.
2010
EPRINT
A Distinguisher for High Rate McEliece Cryptosystems
The purpose of this paper is to study the difficulty of the so-called Goppa Code Distinguishing (GD) problem introduced by Courtois, Finiasz and Sendrier at Asiacrypt 2001. GD is the problem of distinguishing the public matrix in the McEliece cryptosystem from a random matrix. It is widely believed that this problem is computationally hard, as witnessed by the increasing number of papers using this hardness assumption. From our point of view, disproving or mitigating this hardness assumption would be a breakthrough in code-based cryptography and may open a new direction for attacking McEliece cryptosystems. In this paper, we present an efficient distinguisher for alternant and Goppa codes of high rate over binary and non-binary fields. Our distinguisher is based on a recent algebraic attack against compact variants of McEliece which reduces key recovery to the problem of solving an algebraic system of equations. We exploit a defect of rank in the (linear) system obtained by linearizing this algebraic system. It turns out that our distinguisher is highly discriminant. Indeed, we are able to precisely quantify the defect of rank for ``generic'' binary and non-binary random, alternant and Goppa codes. We have verified these formulas with practical experiments, and a theoretical explanation for such a defect of rank is also provided. We believe that this work sheds some light on the choice of secure parameters for McEliece cryptosystems, a topic thoroughly investigated recently. In particular, our technique distinguishes a public key of the CFS signature scheme for all parameters proposed by Finiasz and Sendrier at Asiacrypt 2009. Moreover, some realistic parameters of the McEliece scheme also fall within the range of validity of our distinguisher.
2010
EPRINT
A Family of Implementation-Friendly BN Elliptic Curves
We describe a class of Barreto-Naehrig (BN) curves that are not only computationally very simple to generate, but also specially suitable for efficient implementation on the broadest possible range of platforms.
2010
EPRINT
A Flaw in The Internal State Recovery Attack on ALPHA-MAC
A distinguisher was constructed by utilizing a 2-round collision differential path of ALPHA-MAC, with about $2^{65.5}$ chosen messages and $2^{65.5}$ queries. This distinguisher was then used to recover the internal state (\cite{Yuan1},\cite{Yuan2}). However, a flaw is found in the internal state recovery attack: the complexity of recovering the internal state rises to a $2^{81}$ exhaustive search, and the complexity of the whole attack rises to $2^{67}$ chosen messages and a $2^{81}$ exhaustive search. To repair the flaw, a modified 2-round differential path of ALPHA-MAC is presented and a new distinguisher based on this path is proposed. Finally, an attack with about $2^{65.5}$ chosen messages and $2^{65.5}$ queries is obtained under the new distinguisher.
2010
EPRINT
A Framework for Efficient Signatures, Ring Signatures and Identity Based Encryption in the Standard Model
In this work, we present a generic framework for constructing efficient signature schemes, ring signature schemes, and identity based encryption schemes, all in the standard model (without relying on random oracles). We start by abstracting the recent work of Hohenberger and Waters (Crypto 2009), and specifically their ``prefix method''. We show a transformation taking a signature scheme with a very weak security guarantee (a notion that we call a-priori-message unforgeability under static chosen message attack) and producing a fully secure signature scheme (i.e., existentially unforgeable under adaptive chosen message attack). Our transformation uses the notion of chameleon hash functions, defined by Krawczyk and Rabin (NDSS 2000) and the ``prefix method''. Constructing such weakly secure schemes seems to be significantly easier than constructing fully secure ones, and we present {\em simple} constructions based on the RSA assumption, the {\em short integer solution} (SIS) assumption, and the {\em computational Diffie-Hellman} (CDH) assumption over bilinear groups. Next, we observe that this general transformation also applies to the regime of ring signatures. Using this observation, we construct new (provably secure) ring signature schemes: one is based on the {\em short integer solution} (SIS) assumption, and the other is based on the CDH assumption over bilinear groups. As a building block for these constructions, we define a primitive that we call {\em ring trapdoor functions}. We show that ring trapdoor functions imply ring signatures under a weak definition, which enables us to apply our transformation to achieve full security. Finally, we show a connection between ring signatures and identity based encryption (IBE) schemes. Using this connection, and using our new constructions of ring signature schemes, we obtain two IBE schemes: The first is based on the {\em learning with errors} (LWE) assumption, and is similar to the recently introduced IBE schemes of Peikert, Agrawal-Boyen and Cash-Hofheinz-Kiltz (2009); The second is based on the $d$-linear assumption over bilinear groups.
2010
EPRINT
A Framework For Fully-Simulatable $h$-Out-Of-$n$ Oblivious Transfer
In this paper, we present a framework for efficient, fully-simulatable $h$-out-of-$n$ oblivious transfer ($OT^{n}_{h}$) with security against non-adaptive malicious adversaries. The framework requires six communication rounds. Compared with existing fully-simulatable $OT^{n}_{h}$, our framework is round-efficient. When no trusted common string is available, our DDH-based instantiation is the most efficient protocol for $OT^{n}_{h}$. Our framework uses three abstract tools, i.e., perfectly binding commitment, perfectly hiding commitment and our new smooth projective hash. This allows a simple and intuitive understanding of its security. We instantiate the new smooth projective hash under the lattice, decisional Diffie-Hellman, decisional N-th residuosity, and decisional quadratic residuosity assumptions. This shows that the folklore that it is technically difficult to instantiate the projective hash framework under the lattice assumption is not true. What's more, by using this lattice-based instantiation and Brassard's commitment scheme, we obtain an $OT^{n}_{h}$ instantiation which is secure against any quantum algorithm.
2010
EPRINT
A Hardware Wrapper for the SHA-3 Hash Algorithms
The second round of the NIST public competition is underway to find a new hash algorithm(s) for inclusion in the NIST Secure Hash Standard (SHA-3). Computational efficiency of the algorithms in hardware is to be addressed during the second round of the contest. For software implementations, NIST specifies an application programming interface (API) along with a reference implementation for each of the designs, thereby enabling quick and easy comparison and testing on software platforms; however, no such specification was given for hardware analysis. In this paper we present a hardware wrapper interface which attempts to encompass all the competition entries (and indeed, hash algorithms in general) across any number of both FPGA and ASIC hardware platforms. This interface comprises communications and padding, and attempts to standardise the hashing algorithms to allow accurate and fair area, timing and power measurements between the different designs.
2010
EPRINT
A Low-Area yet Performant FPGA Implementation of Shabal
In this paper, we present an efficient FPGA implementation of the SHA-3 hash function candidate Shabal. Targeted at the recent Xilinx Virtex-5 FPGA family, our design achieves a relatively high throughput of 2 Gbit/s at a cost of only 153 slices, yielding a throughput-vs.-area ratio of 13.4 Mbit/s per slice. Our work can also be ported to Xilinx Spartan-3 FPGAs, on which it supports a throughput of 800 Mbit/s for only 499 slices, or equivalently 1.6 Mbit/s per slice. According to the SHA-3 Zoo website, this work is among the smallest reported FPGA implementations of SHA-3 candidates, and ranks first in terms of throughput per area.
2010
EPRINT
A Meet-in-the-Middle Attack on ARIA
In this paper, we study the meet-in-the-middle attack against the block cipher ARIA. We find some new 3-round and 4-round distinguishing properties of ARIA. Based on the 3-round distinguishing property, we can apply the meet-in-the-middle attack with up to 6 rounds for all versions of ARIA. Based on the 4-round distinguishing property, we can mount a successful attack on 8-round ARIA-256. Furthermore, the 4-round distinguishing property can be improved, which leads to a 7-round attack on ARIA-192. The data and time complexities of the 7-round attack are $2^{120}$ and $2^{185.3}$, respectively. The data and time complexities of the 8-round attack are $2^{56}$ and $2^{251.6}$, respectively. Compared with the existing cryptanalytic results on ARIA, our 5-round attack has the lowest data and time complexities and the 6-round attack has the lowest data complexity. Moreover, it is shown that 8-round ARIA-256 is not immune to the meet-in-the-middle attack.
2010
EPRINT
A modified eCK model with stronger security for tripartite authenticated key exchange
Since Bellare and Rogaway presented the first formal security model for authenticated key exchange (AKE) protocols in 1993, many formal security models have been proposed. The extended Canetti-Krawczyk (eCK) model proposed by LaMacchia et al. is currently regarded as the strongest security model for two-party AKE protocols. In this paper, we first generalize the eCK model to tripartite AKE protocols, yielding the teCK model, and enhance the security of the new model by adding a new reveal query. In the teCK model, the adversary has stronger powers and can learn more secret information. We then present a new tripartite AKE protocol based on the NAXOS protocol, called the T-NAXOS protocol, and analyze its security in the teCK model under the random oracle assumption.
2010
EPRINT
A New Chaos-Based Cryptosystem for Secure Transmitted Images
This paper presents a novel and robust chaos-based cryptosystem for secure image transmission, together with four other versions. In the proposed block encryption/decryption algorithms, a 2D chaotic map is used to shuffle the image pixel positions. Then, substitution (confusion) and permutation (diffusion) operations on every block, with multiple rounds, are combined using two perturbed chaotic PWLCM maps. The perturbing-orbit technique improves the dynamical statistical properties of the generated chaotic sequences. The error propagation observed in various standard cipher block modes demonstrates that the proposed cryptosystem, in OFB or CTR mode, is suitable for transmitting cipher data over a corrupted digital channel. Finally, to quantify the security level of the proposed cryptosystem, many standard tests are applied, and experimental results show that the suggested cryptosystem has a high security level.
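For readers unfamiliar with the map this kind of scheme is built on, the following is a minimal Python sketch of the basic (unperturbed) piecewise linear chaotic map (PWLCM) and a naive quantization of its orbit to bytes. The control parameter, seed, burn-in length, and quantization below are illustrative assumptions; the paper's perturbation technique and its full substitution/permutation block structure are not reproduced here.

def pwlcm(x: float, p: float) -> float:
    """One iteration of the piecewise linear chaotic map, control parameter p in (0, 0.5)."""
    if x > 0.5:
        x = 1.0 - x                 # the map is symmetric about x = 0.5
    return x / p if x < p else (x - p) / (0.5 - p)

def chaotic_bytes(x0: float, p: float, n: int, burn_in: int = 100) -> bytes:
    """Discard a transient, then quantize the orbit to bytes (naive, illustrative)."""
    x = x0
    for _ in range(burn_in):
        x = pwlcm(x, p)
    out = bytearray()
    for _ in range(n):
        x = pwlcm(x, p)
        out.append(int(x * 256) % 256)
    return bytes(out)

print(chaotic_bytes(x0=0.3141592, p=0.251, n=16).hex())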
2010
EPRINT
A New Chaotic Image Encryption Algorithm using a New Way of Permutation Methods
This paper presents a novel chaos-based cryptosystem for secure image transmission. In the proposed block encryption/decryption algorithm, two chaotic permutation methods (a key-dependent shift approach and the Socek method) are used to shuffle the image pixel bits. These methods are controlled using a perturbed chaotic PWLCM map. The perturbing-orbit technique improves the dynamical statistical properties of the generated chaotic sequences. Our algorithm is based on three encryption cryptosystems (the Socek, Yang and Xiang algorithms). In this paper, we prove that the proposed cryptosystem overcomes the drawbacks of these algorithms. Finally, many standard tests are applied to quantify the security level of the proposed cryptosystem, and experimental results show that the suggested cryptosystem has a high security level.
2010
EPRINT
A New Class of Public Key Cryptosystems Constructed Based on Error-Correcting Codes, Using K(III) Scheme
In this paper, we present a new scheme, referred to as the K(III) scheme, which would be effective for improving a certain class of PKCs. Using the K(III) scheme, we propose a new method for constructing public-key cryptosystems based on error-correcting codes. The constructed PKC is referred to as K(V)SE(1)PKC. We also present a more secure version of K(V)SE(1)PKC, referred to as K*(V)SE(1)PKC, using the K(I) scheme previously proposed by the present author, as well as the K(III) scheme.
2010
EPRINT
A New Class of Public Key Cryptosystems Constructed Based on Perfect Error-Correcting Codes Realizing Coding Rate of Exactly 1.0
In this paper, we propose a new method for constructing public-key cryptosystems based on a class of perfect error-correcting codes. The constructed PKC is referred to as K(IV)SE(1)PKC. In K(IV)SE(1)PKC, members of the class of perfect error-correcting codes, such as the (7,4,3) cyclic Hamming code and the (3,1,3) code {000,111}, are used, yielding a simple process of encryption and decryption. K(IV)SE(1)PKC has the remarkable feature that the coding rate can be exactly 1.0 due to the use of perfect codes. Besides, the size of the public key for K(IV)SE(1)PKC can be made smaller than that of the McEliece PKC.
2010
EPRINT
A New Framework for Password-Based Authenticated Key Exchange
Protocols for password-based authenticated key exchange (PAKE) allow two users who share only a short, low-entropy password to agree on a cryptographically strong session key. The challenge in designing such protocols is that they must be immune to off-line dictionary attacks in which an eavesdropping adversary exhaustively enumerates the dictionary of likely passwords in an attempt to match a password to the set of observed transcripts. To date, few general frameworks for constructing PAKE protocols in the standard model are known. Here, we abstract and generalize a protocol by Jiang and Gong to give a new methodology for realizing PAKE without random oracles, in the common reference string model. In addition to giving a new approach to the problem, the resulting construction offers several advantages over prior work. We also describe an extension of our protocol that is secure within the universal composability~(UC) framework and, when instantiated using El Gamal encryption, is more efficient than a previous protocol of Canetti et al.
2010
EPRINT
A New Framework for RFID Privacy
Formal RFID security and privacy frameworks are fundamental to the design and analysis of robust RFID systems. In this paper, we develop a new definitional framework for RFID privacy in a rigorous and precise manner. Our framework is based on a zero-knowledge (ZK) formulation [7, 5] and incorporates the notions of adaptive completeness and mutual authentication. We provide meticulous justification of the new framework and contrast it with existing ones in the literature. In particular, we prove that our framework is stronger than the ind-privacy model of [14], which answers an open question posed in [14] for developing stronger RFID privacy models. Along the way we also try to clarify certain confusions and rectify several defects in the existing frameworks. Based on the protocol of [16], we propose an efficient RFID mutual authentication protocol and analyze its security and privacy. The methodology used in our analysis is of independent interest and can be applied to analyze other RFID protocols within the new framework.
2010
EPRINT
A New Human Identification Protocol and Coppersmith's Baby-Step Giant-Step Algorithm
We propose a new protocol providing cryptographically secure authentication for unaided humans against passive adversaries. We also propose a new generic passive attack on human identification protocols. The attack is an application of Coppersmith's baby-step giant-step algorithm to human identification protocols. Under this attack, the achievable security of some of the best candidates for human identification protocols in the literature is further reduced. We show that our protocol preserves similar usability while achieving better security than these protocols. A comprehensive security analysis is provided which suggests parameters guaranteeing desired levels of security.
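As background, the classic (Shanks) baby-step giant-step algorithm is the meet-in-the-middle time/memory split that the Coppersmith variant builds on; a minimal Python sketch for the textbook discrete-logarithm setting is given below. This is only the generic idea: the paper's attack applies a Coppersmith-style variant to the secrets of human identification protocols, which is not reproduced here, and the group and parameters below are illustrative assumptions.

import math

def bsgs(g: int, h: int, p: int):
    """Return x with g^x = h (mod p), or None; classic O(sqrt(p)) time/memory split."""
    m = math.isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: store g^j for all j < m
    step = pow(g, -m, p)                         # g^(-m) mod p (Python 3.8+)
    gamma = h % p
    for i in range(m):                           # giant steps: h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]           # x = i*m + j
        gamma = gamma * step % p
    return None

print(bsgs(2, pow(2, 100, 1019), 1019))          # recovers the exponent 100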
2010
EPRINT
A New Joint Fingerprinting and Decryption Scheme based on a Lattice Problem
We propose a new encryption scheme that supports joint fingerprinting and decryption. The scheme is remarkably resistant to known-plaintext attacks and collusion attacks on keys (e.g., the average attack or other linear-combination attacks). Interestingly, the security of our scheme relies on a lattice problem: given a collection of random lattice points generated from a short basis of a lattice, find the short basis. The scheme can be used as a traitor-tracing scheme or a buyer-seller watermarking scheme.
2010
EPRINT
A new one-time signature scheme from syndrome decoding
We describe a one-time signature scheme based on the hardness of the syndrome decoding problem, and prove it secure in the random oracle model. Our proposal can be instantiated on general linear error correcting codes, rather than restricted families like alternant codes for which a decoding trapdoor is known to exist.
2010
EPRINT
A New Scheme for Zero Knowledge Proof based on Multivariate Quadratic Problem and Quaternion Algebra
This paper introduces a new intractable security problem whose intractability is due to the NP-completeness of the multivariate quadratic (MQ) problem. This novel problem uses quaternion algebra in conjunction with MQ. Starting with simultaneous multivariate equations, we transform these equations into simultaneous quaternion-based multivariate quadratic equations. A new scheme for computational zero-knowledge proof based on this problem is proposed. It is proved that, according to the black-box definition of a zero-knowledge proof (ZKP) system, the proposed scheme is a ZKP. The proof is done through two lemmas. In the first lemma it is shown that the expected polynomial-time machine $M_{V^*}$ halts in polynomial time. In the second lemma, it is shown that the probability ensembles $\{M_{V^*}(x)\}_{x \in L}$ and $\{\langle P(x), V^*(x)\rangle\}_{x \in L}$ are polynomially indistinguishable. The scheme has low computational overhead and is particularly useful in cryptographic applications such as digital signatures and key agreement.
2010
EPRINT
A New Security Model for Authenticated Key Agreement
The Canetti--Krawczyk (CK) and extended Canetti--Krawczyk (eCK) security models are widely used to provide security arguments for key agreement protocols. We discuss security shades in the (e)CK models, and some practical attacks not considered in (e)CK security arguments. We propose a strong security model which encompasses the eCK one. We also propose a new protocol, called Strengthened MQV (SMQV), which, in addition to providing the same efficiency as the (H)MQV protocols, is particularly suited for distributed implementations wherein a tamper-proof device is used to store long-lived keys, while session keys are used on an untrusted host machine. The SMQV protocol meets our security definition under the Gap Diffie--Hellman assumption in the random oracle model.
2010
EPRINT
A note on ``Improved Fast Correlation Attacks on Stream Ciphers"
At SAC'08, an improved fast correlation attack on stream ciphers was proposed. This attack is based on the fast correlation attack proposed at Crypto'00 combined with the fast Walsh transform. However, we found that the attack results are wrong. In this paper, we correct the results of the attack algorithm by analyzing it theoretically. We also propose a threshold for the valid bias.
2010
EPRINT
A Note On Gottesman-Chuang Quantum Signature Scheme
In 2001, Gottesman and Chuang proposed a quantum signature scheme. Unlike classical signature schemes, the public keys in the scheme can only be used once. The authors claim that the scheme is somewhat cumbersome but serves as a good model and suggests novel research directions for the field of quantum cryptography. In this note, we remark that the Gottesman-Chuang quantum signature scheme is so commonplace and cumbersome that it cannot suggest the potential of quantum public-key cryptography. The authors ignore an essential fact, namely, that the cost to guarantee the authenticity of a user's public key is expensive in the Public Key Infrastructure scenario. This entails that a user's public key should be repeatedly usable over its lifetime.
2010
EPRINT
A novel k-out-of-n Oblivious Transfer Protocols Based on Bilinear Pairings
Low bandwidth consumption is an important issue in a busy commercial network whereas time may not be so crucial, for example, the end-of-day financial settlement for commercial transactions in a day. In this paper, we construct a secure and low bandwidth-consumption k-out-of-n oblivious transfer scheme based on bilinear pairings. We analyze the security and efficiency of our scheme and conclude that our scheme is more secure and efficient in communication bandwidth consumption than most of the other existing oblivious transfer schemes that we know.
2010
EPRINT
A Pairing-Based DAA Scheme Further Reducing TPM Resources
Direct Anonymous Attestation (DAA) is an anonymous signature scheme designed for anonymous attestation of a Trusted Platform Module (TPM) while preserving the privacy of the device owner. Since the TPM has limited bandwidth and computational capability, one interesting feature of DAA is to split the signer role between two entities: a TPM and a host platform where the TPM is attached. Recently, Chen proposed a new DAA scheme that is more efficient than previous DAA schemes. In this paper, we construct a new DAA scheme requiring even fewer TPM resources. Our DAA scheme is about 5 times more efficient than Chen's scheme for the TPM implementation using the Barreto-Naehrig curves. In addition, our scheme requires a much smaller amount of software code to be implemented in the TPM. This makes our DAA scheme ideal for the TPM implementation. Our DAA scheme is efficient and provably secure in the random oracle model under the strong Diffie-Hellman assumption and the decisional Diffie-Hellman assumption.
2010
EPRINT
A Practical-Time Attack on the A5/3 Cryptosystem Used in Third Generation GSM Telephony
The privacy of most GSM phone conversations is currently protected by the A5/1 and A5/2 stream ciphers, which are more than 20 years old and were repeatedly shown to be cryptographically weak. They will soon be replaced in third generation networks by a new A5/3 block cipher called KASUMI, which is a modified version of the MISTY cryptosystem. In this paper we describe a new type of attack called a sandwich attack, and use it to construct a simple distinguisher for 7 of the 8 rounds of KASUMI with an amazingly high probability of $2^{-14}$. By using this distinguisher and analyzing the single remaining round, we can derive the complete 128 bit key of the full KASUMI by using only 4 related keys, $2^{26}$ data, $2^{30}$ bytes of memory, and $2^{32}$ time. These complexities are so small that we have actually simulated the attack in less than two hours on a single PC, and experimentally verified its correctness and complexity. Interestingly, neither our technique nor any other published attack can break MISTY in less than the $2^{128}$ complexity of exhaustive search, which indicates that the changes made by the GSM Association in moving from MISTY to KASUMI resulted in a much weaker cryptosystem.
2010
EPRINT
A Principle for Cryptographic Protocols Beyond Security, Less Parameters
Almost all cryptographic protocols are presented with security arguments. None of them, however, explains why a protocol should be like this and not like that. The reason is that we are short of principles for designing and analyzing cryptographic protocols. In this paper, we put forth such a principle beyond security, called Less Parameters, which says that the parameters involved should be reduced as much as possible. The principle ensures that a protocol has a better cost. In different scenarios, the principle is not easy to grasp. Intuitively, we advise introducing as few public parameters as possible. In the light of the principle, we investigate some signatures. We believe the techniques developed in this paper will be helpful for improving some cryptographic protocols.
2010
EPRINT
A Privacy-Flexible Password Authentication Scheme for Multi-Server Environment
Since Kerberos suffers from KDC (Key Distribution Center) compromise and impersonation attacks, a multi-server password authentication protocol with no verification table on the server end could be an alternative. Typically, there are three roles in a multi-server password authentication protocol: clients, servers, and a register center which plays a role like the KDC in Kerberos. In this paper, we explore the theoretical basis for implementing a multi-server password authentication system under two constraints: no verification table and user privacy protection. We find that if a system succeeds in privacy protection, it should be implemented either by using a public-key cryptosystem or by a register center keeping a table that records the information shared with the corresponding users. Based on this finding, we propose a privacy-flexible system that lets a user employ either a random-looking dynamic identity or a pseudonym, with the register center online or offline respectively, to log in to a server according to his privacy requirement. Compared with other related work, our scheme is not only efficient but also the most conformant to the requirements that previous work suggests.
2010
EPRINT
A Random Number Generator Based on Isogenies Operations
A random number generator based on the operation of isogenies between elliptic curves over finite fields $F_p$ is proposed. By using the proposed generator together with the isogeny cryptography algorithm, which is resistant to attacks by quantum computers, we can save hardware and software components. Theoretical analyses show that the periods of the proposed random number generator are sufficiently long. Moreover, the generated sequences have passed the U.S. NIST statistical tests.
2010
EPRINT
A Reflection on the Security of Two-Party Key Establishment Protocols
Two-party key establishment has been a very fruitful research area in cryptography, with many security models and numerous protocols proposed. In this paper, we take another look at the YAK protocol and the HMQV protocols and present some extended analysis. Motivated by our analysis, we reflect on the security properties that are desired by two-party key establishment protocols, and their formalizations. In particular, we take into account the interface between a key establishment protocol and the applications which may invoke it, and emphasize the concept of session and the usage of session identifier. Moreover, we show how to design a two-party key establishment protocol to achieve both key authentication and entity authentication properties in our security model.
2010
EPRINT
A Reflection on the Security Proofs of Boneh-Franklin Identity-Based Encryption
Boneh and Franklin constructed the first practical Identity-Based Encryption scheme (BF-IBE) [1] and proved its security based on the computational Bilinear Diffie-Hellman assumption (CBDH) in 2001. Its security proof was long believed to be correct until, in 2005, Galindo [2] noticed a flawed step in the original proof. In the same paper, Galindo provided a new proof with a looser security reduction. Shortly afterwards, Nishioka [3] improved Galindo's proof to achieve a tighter security reduction. In the same year, Zhang and Imai [4] gave another proof of BF-IBE. Unfortunately, we find that none of their proofs is flawless. In this paper, besides identifying and fixing the lapses in previous proofs, we present two new proofs for the CCA security of BF-IBE. The first proof proceeds via selective-identity security, imposing a natural constraint on the original scheme. The second proof directly reduces the security to a stronger assumption, namely the gap Bilinear Diffie-Hellman (GBDH) assumption.
2010
EPRINT
A SAT-based preimage analysis of reduced KECCAK hash functions
In this paper, we present a preimage attack on reduced versions of the Keccak hash functions. We use our recently developed toolkit CryptLogVer for generating CNF (conjunctive normal form) formulas, which are passed to the SAT solver PrecoSAT. We found preimages for some reduced versions of the function and showed that the full Keccak function is secure against the presented attack.
2010
EPRINT
A secure anonymous communication scheme in vehicular ad hoc networks from pairings
Security and efficiency are two crucial issues in vehicular ad hoc networks. Much research has been devoted to these issues. However, we found that most of the protocols proposed in this area are insecure and cannot satisfy the anonymity property. Based on this observation, we propose a secure and anonymous method based on bilinear pairings to resolve these problems. After analysis, we conclude that our scheme is the most secure when compared with other protocols proposed so far.
2010
EPRINT
A secure Deniable Authentication Protocol based on Bilinear Diffie-Hellman Algorithm
This paper describes a new deniable authentication protocol whose security is based on the Computational Diffie-Hellman (CDH) problem, together with the Decisional Diffie-Hellman (DDH) and Hash Diffie-Hellman (HDDH) problems. The protocol can be implemented on low-power, small-processor mobile devices such as smart cards and PDAs. A deniable authentication protocol enables a receiver to identify the true source of a given message, but not to prove the identity of the sender to a third party. This property is very useful for providing secure negotiation over the Internet. Our proposed protocol achieves three main security requirements: deniable authentication, confidentiality, and resistance against man-in-the-middle attacks.
2010
EPRINT
A Security Enhancement and Proof for Authentication and Key Agreement (AKA)
In this work, we consider Authentication and Key Agreement (AKA), a popular client-server Key Exchange (KE) protocol, commonly used in wireless standards (e.g., UMTS), and widely considered for new applications. We discuss natural potential usage scenarios for AKA, attract attention to subtle vulnerabilities, propose a simple and efficient AKA enhancement, and provide its formal proof of security. The vulnerabilities arise due to the fact that AKA is not a secure KE in the standard cryptographic sense, since Client C does not contribute randomness to the session key. We argue that AKA remains secure in current deployments where C is an entity controlled by a single tamper-resistant User Identity Module (UIM). However, we also show that AKA is insecure if several Client's devices/UIMs share his identity and key. We show practical applicability and efficiency benefits of such multi-UIM scenarios. As our main contribution, we adapt AKA for this setting, with only the minimal changes, while adhering to AKA design goals, and preserving its advantages and features. Our protocol involves one extra PRFG evaluation and no extra messages. We formally prove security of the resulting protocol. We discuss how our security improvement allows simplification of some of AKA security heuristics, which may make our protocol more efficient and robust than AKA even for the current deployment scenarios.
2010
EPRINT
A Security Evaluation of DNSSEC with NSEC3
Domain Name System Security Extensions (DNSSEC) and Hashed Authenticated Denial of Existence (NSEC3) are slated for adoption by important parts of the DNS hierarchy, including the root zone, as a solution to vulnerabilities such as "cache-poisoning" attacks. We study the security goals and operation of DNSSEC/NSEC3 using Murphi, a finite-state enumeration tool, to analyze security properties that may be relevant to various deployment scenarios. Our systematic study reveals several subtleties and potential pitfalls that can be avoided by proper configuration choices, including resource records that may remain valid after the expiration of relevant signatures and potential insertion of forged names into a DNSSEC-enabled domain via the opt-out option. We demonstrate the exploitability of DNSSEC opt-out options in an enterprise setting by constructing a browser cookie-stealing attack on a laboratory domain. Under recommended configuration settings, further Murphi model checking finds no vulnerabilities within our threat model, suggesting that DNSSEC with NSEC3 provides significant security benefits.
2010
EPRINT
A Security Weakness in a Generic Construction of a Group Key Exchange Protocol
Protocols for group key exchange are cryptographic algorithms that allow a group of parties communicating over a public network to come up with a common secret key. One of the interesting results of research on group key exchange is the protocol compiler presented by Abdalla et al.~in TCC '07. Abdalla et al.'s compiler shows how one can transform any authenticated 2-party key exchange protocol into an authenticated group key exchange protocol with 2 more rounds of communication. This compiler is certainly elegant in its genericity, symmetry, simplicity and efficiency. However, the situation completely changes when it comes to security. In this work, we reveal a major security weakness in Abdalla et al.'s compiler and show how to address it. The security weakness uncovered here implies that Abdalla et al.'s proof of security for their compiler is invalid.
2010
EPRINT
A Security Weakness in Composite-Order Pairing-Based Protocols with Imbedding Degree $k>2$
In this note we describe a security weakness in pairing-based protocols when the group order is composite and the imbedding degree $k$ is greater than $2$.
2010
EPRINT
A Simple BGN-type Cryptosystem from LWE
We construct a simple public-key encryption scheme that supports polynomially many additions and one multiplication, similar to the cryptosystem of Boneh, Goh, and Nissim (BGN). Security is based on the hardness of the learning with errors (LWE) problem, which is known to be as hard as certain worst-case lattice problems. Some features of our cryptosystem include support for large message space, an easy way of achieving formula-privacy, a better message-to-ciphertext expansion ratio than BGN, and an easy way of multiplying two encrypted polynomials. Also, the scheme can be made identity-based and leakage-resilient (at the cost of a higher message-to-ciphertext expansion ratio).
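As a plain-Python illustration of the flavor of such schemes, the sketch below shows only the additive homomorphism of a Regev-style LWE encryption of single bits: adding ciphertexts adds the underlying plaintexts (here modulo 2). The parameters are toy values with no real security, and the scheme's distinguishing features, the single supported multiplication and the larger message space, are not reproduced; this is an assumption-laden sketch, not the paper's construction.

import random

Q, N = 2**16 + 1, 32                      # toy modulus and dimension (no real security)

def keygen():
    return [random.randrange(Q) for _ in range(N)]            # secret s in Z_q^n

def encrypt(s, bit):
    """Encrypt one bit as (a, b) with b = <a, s> + e + bit * floor(q/2) mod q."""
    a = [random.randrange(Q) for _ in range(N)]
    e = random.randint(-4, 4)                                  # small noise term
    b = (sum(ai * si for ai, si in zip(a, s)) + e + bit * (Q // 2)) % Q
    return a, b

def add(ct1, ct2):
    """Component-wise ciphertext addition; decrypts to the XOR of the two bits."""
    (a1, b1), (a2, b2) = ct1, ct2
    return [(x + y) % Q for x, y in zip(a1, a2)], (b1 + b2) % Q

def decrypt(s, ct):
    a, b = ct
    v = (b - sum(ai * si for ai, si in zip(a, s))) % Q
    return 0 if min(v, Q - v) < Q // 4 else 1                  # round toward 0 or q/2

s = keygen()
print(decrypt(s, add(encrypt(s, 1), encrypt(s, 0))))           # prints 1
print(decrypt(s, add(encrypt(s, 1), encrypt(s, 1))))           # prints 0 (1 + 1 mod 2)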
2010
EPRINT
A supplement to Liu et al.'s certificateless signcryption scheme in the standard model
Recently, Liu et al. proposed the first certificateless signcryption scheme without random oracles and proved it semantically secure in the standard model. However, Selvi et al. launched a fatal attack on its confidentiality by replacing users' public keys, thus showing that this scheme does not actually achieve the claimed semantic security. In this paper, we come up with a rescue scheme based on Liu et al.'s original proposal. A Schnorr-based one-time signature is added to each user's public key, which is used to resist Selvi et al.'s attack. In addition, in view of the mistake made in Liu et al.'s security proof, we show that our improvement is indeed secure in the standard model under the intractability of the decisional bilinear Diffie-Hellman assumption.
2010
EPRINT
A Two-Party Protocol with Trusted Initializer for Computing the Inner Product
We propose the first protocol for securely computing the inner product modulo an integer $m$ between two distrustful parties based on a trusted initializer, i.e. a trusted party that interacts with the players solely during a setup phase. We obtain a very simple protocol with universally composable security. As an application of our protocol, we obtain a solution for securely computing linear equations.
2010
EPRINT
A Unified Method for Improving PRF Bounds for a Class of Blockcipher based MACs
This paper provides a unified framework for {\em improving} \PRF(pseudorandom function) advantages of several popular MACs (message authentication codes) based on a blockcipher modeled as \tx{RP} (random permutation). In many known MACs, the inputs of the underlying blockcipher are defined to be some deterministic affine functions of previously computed outputs of the blockcipher. Keeping the similarity in mind, we introduce a class of \tx{ADE}s (affine domain extensions) and a wide subclass of \tx{SADE}s (secure \tx{ADE}) containing $\mathcal{C} = \{ \tx{CBC-MAC},\ \tx{GCBC}^*,\ \tx{OMAC},\ \tx{PMAC} \}$. We define a parameter $N(t,q)$ for each domain extension and show that all \tx{SADE}s have \PRF advantages $O(tq/2^n + N(t,q)/2^n)$ where $t$ is the total number of blockcipher computations needed for all $q$ queries. We prove that \PRF advantage of any \tx{SADE} is $O(t^2/2^n)$ by showing that $N(t,q)$ is always at most ${t \choose 2}$. We provide a better estimate $O(tq)$ of $N(t,q)$ for all members of $\mathcal{C}$ and hence these MACs have {\em improved advantages $O(tq / 2^n)$}. Our proposed bounds for \tx{CBC-MAC} and $\tx{GCBC}^*$ are better than previous best known bounds.
2010
EPRINT
A variant of the F4 algorithm
Algebraic cryptanalysis usually requires to find solutions of several similar polynomial systems. A standard tool to solve this problem consists of computing the Gröbner bases of the corresponding ideals, and Faugère's F4 and F5 are two well-known algorithms for this task. In this paper, we present a new variant of the F4 algorithm which is well suited to algebraic attacks of cryptosystems since it is designed to compute Gröbner bases of a set of polynomial systems having the same shape. It is faster than F4 as it avoids all reductions to zero, but preserves its simplicity and its computation efficiency, thus competing with F5.
2010
EPRINT
A Zero-One Law for Deterministic 2-Party Secure Computation
We use security in the Universal Composition framework as a means to study the ``cryptographic complexity'' of 2-party secure computation tasks (functionalities). We say that a functionality $F$ {\em reduces to} another functionality $G$ if there is a UC-secure protocol for $F$ using ideal access to $G$. This reduction is a natural and fine-grained way to compare the relative complexities of cryptographic tasks. There are two natural ``extremes'' of complexity under the reduction: the {\em trivial} functionalities, which can be reduced to any other functionality; and the {\em complete} functionalities, to which any other functionality can be reduced. In this work we show that under a natural computational assumption (the existence of a protocol for oblivious transfer secure against semi-honest adversaries), there is a {\bf zero-one law} for the cryptographic complexity of 2-party deterministic functionalities. Namely, {\em every such functionality is either trivial or complete.} No other qualitative distinctions exist among functionalities, under this computational assumption. While nearly all previous work classifying multi-party computation functionalities has been restricted to the case of secure function evaluation, our results are the first to consider completeness of arbitrary {\em reactive} functionalities, which receive input and give output repeatedly throughout several rounds of interaction. One important technical contribution in this work is to initiate the comprehensive study of the cryptographic properties of reactive functionalities. We model these functionalities as finite automata and develop an automata-theoretic methodology for classifying and studying their cryptographic properties. Consequently, we completely characterize the reactive behaviors that lead to cryptographic non-triviality. Another contribution of independent interest is to optimize the hardness assumption used by Canetti et al.\ (STOC 2002) in showing that the common random string functionality is complete (a result independently obtained by Damg{\aa}rd et al.\ (TCC 2010)).
2010
EPRINT
Acceleration of Differential Fault Analysis of the Advanced Encryption Standard Using Single Fault
In this paper we present a speed-up of the existing fault attack [2] on the Advanced Encryption Standard (AES) using a single faulty ciphertext. The paper suggests a parallelization technique to reduce the complexity of the attack from $2^{32}$ to $2^{30}$.
2010
EPRINT
Accountability: Definition and Relationship to Verifiability
Many cryptographic tasks and protocols, such as non-repudiation, contract-signing, voting, auction, identity-based encryption, and certain forms of secure multi-party computation, involve the use of (semi-)trusted parties, such as notaries and authorities. It is crucial that such parties can be held accountable in case they misbehave as this is a strong incentive for such parties to follow the protocol. Unfortunately, there does not exist a general and convincing definition of accountability that would allow to assess the level of accountability a protocol provides. In this paper, we therefore propose a new, widely applicable definition of accountability, with interpretations both in symbolic and computational models. Our definition reveals that accountability is closely related to verifiability, for which we also propose a new definition. We prove that verifiability can be interpreted as a restricted form of accountability. Our findings on verifiability are of independent interest. As a proof of concept, we apply our definitions to the analysis of protocols for three different tasks: contract-signing, voting, and auctions. Our analysis unveils some subtleties and unexpected weaknesses, showing in one case that the protocol is unusable in practice. However, for this protocol we propose a fix to establish a reasonable level of accountability.
2010
EPRINT
Achieving Leakage Resilience Through Dual System Encryption
In this work, we show that strong leakage resilience for cryptosystems with advanced functionalities can be obtained quite naturally within the methodology of dual system encryption, recently introduced by Waters. We demonstrate this concretely by providing fully secure IBE, HIBE, and ABE systems which are resilient to bounded leakage from each of many secret keys per user, as well as many master keys. This can be realized as resilience against continual leakage if we assume keys are periodically updated and no (or logarithmic) leakage is allowed during the update process. Our systems are obtained by applying a simple modification to previous dual system encryption constructions: essentially this provides a generic tool for making dual system encryption schemes leakage-resilient.
2010
EPRINT
Adaptive Concurrent Non-Malleability with Bare Public-Keys
Concurrent non-malleability (CNM) is central for cryptographic protocols running concurrently in environments such as the Internet. In this work, we formulate CNM in the bare public-key (BPK) model, and show that round-efficient concurrent non-malleable cryptography with full adaptive input selection can be established, in general, with bare public-keys (where, in particular, no trusted assumption is made).
2010
EPRINT
Adaptively Secure Broadcast Encryption with Short Ciphertexts
We propose an adaptively secure broadcast encryption scheme with short ciphertexts; that is, the size of the broadcast encryption message is fixed, regardless of the size of the broadcast group. In our proposed scheme, members can join and leave the group without requiring any change to the public parameters of the system or the private keys of existing members. Our construction has a twofold improvement over the best previously known broadcast encryption schemes. First, we propose a scheme that immediately yields adaptive security in the CCA model without any (sub-linear) increase in the size of ciphertexts or use of a random oracle. Secondly, the security model in our system allows decryption queries for any member, even members in the challenge set. This is a more secure model, as it is closer to a real-world adversary.
2010
EPRINT
Advanced Meet-in-the-Middle Preimage Attacks: First Results on Full Tiger, and Improved Results on MD4 and SHA-2
We revisit narrow-pipe designs that are in practical use, and their security against preimage attacks. Our results are the best known preimage attacks on Tiger, MD4, and reduced SHA-2, with the result on Tiger being the first cryptanalytic shortcut attack on the full hash function. Our attacks run in time $2^{188.8}$ for finding preimages, and $2^{188.2}$ for second-preimages. Both have a memory requirement of order $2^{8}$, which is much less than in any other recent preimage attacks on reduced Tiger. Using pre-computation techniques, the time complexity for finding a new preimage or second-preimage for MD4 can now be as low as $2^{78.4}$ and $2^{69.4}$ MD4 computations, respectively. The second-preimage attack works for all messages longer than 2 blocks. To obtain these results, we extend the meet-in-the-middle framework recently developed by Aoki and Sasaki in a series of papers. In addition to various algorithm-specific techniques, we use a number of conceptually new ideas that are applicable to a larger class of constructions. Among them are (1) incorporating multi-target scenarios into the MITM framework, leading to faster preimages from pseudo-preimages, (2) a simple precomputation technique that allows for finding new preimages at the cost of a single pseudo-preimage, and (3) probabilistic initial structures, compared with deterministic ones, to enable more neutral words and hence reduce the attack time complexity. All the techniques developed await application to other hash functions. To illustrate this, we give as another example improved preimage attacks on SHA-2 members.
2010
EPRINT
Algebraic Pseudorandom Functions with Improved Efficiency from the Augmented Cascade
We construct an algebraic pseudorandom function (PRF) that is more efficient than the classic Naor-Reingold algebraic PRF. Our PRF is the result of adapting the cascade construction, which is the basis of HMAC, to the algebraic setting. To do so we define an augmented cascade and prove it secure when the underlying PRF satisfies a property called parallel security. We then use the augmented cascade to build new algebraic PRFs. The algebraic structure of our PRF leads to an efficient large-domain Verifiable Random Function (VRF) and a large-domain simulatable VRF.
2010
EPRINT
An Analysis of Affine Coordinates for Pairing Computation
In this paper we analyze the use of affine coordinates for pairing computation. We observe that in many practical settings, for example when implementing optimal ate pairings in high security levels, affine coordinates are faster than using the best currently known formulas for projective coordinates. This observation relies on two known techniques for speeding up field inversions which we analyze in the context of pairing computation. We give detailed performance numbers for a pairing implementation based on these ideas, including timings for base field and extension field arithmetic with relative ratios for inversion-to-multiplication costs, timings for pairings in both affine and projective coordinates, and average timings for multiple pairings and products of pairings.
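To make the inversion-to-multiplication trade-off concrete, here is a minimal sketch of affine point addition and doubling on a short Weierstrass curve over a prime field: each group operation costs exactly one field inversion, which is the cost that the I/M ratio in the abstract weighs against the extra multiplications of projective formulas. The curve below (secp256k1) and the omitted special cases are illustrative assumptions; the paper works with pairing-friendly curves and extension-field arithmetic, which are not shown here.

# Affine point arithmetic on y^2 = x^3 + A*x + B over F_P (special cases omitted).
P = 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f   # secp256k1 prime
A, B = 0, 7

def inv(x: int) -> int:
    """One field inversion via Fermat's little theorem."""
    return pow(x, P - 2, P)

def ec_add(p1, p2):
    """Add two distinct, non-opposite affine points: one inversion per addition."""
    (x1, y1), (x2, y2) = p1, p2
    lam = (y2 - y1) * inv(x2 - x1) % P
    x3 = (lam * lam - x1 - x2) % P
    return x3, (lam * (x1 - x3) - y1) % P

def ec_double(p1):
    """Double an affine point with y != 0: again exactly one inversion."""
    x1, y1 = p1
    lam = (3 * x1 * x1 + A) * inv(2 * y1) % P
    x3 = (lam * lam - 2 * x1) % P
    return x3, (lam * (x1 - x3) - y1) % P

G = (0x79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798,
     0x483ada7726a3c4655da4fbfc0e1108a8fd17b448a68554199c47d08ffb10d4b8)
print(ec_add(ec_double(G), G))    # 3*G in affine coordinates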
2010
EPRINT
An Anonymous ID-based Encryption Revisited
In 2006, Boyen and Waters proposed an anonymous ID-based encryption scheme. Notably, the system secret key in their scheme is a tuple of five elements, and so is each user's secret key. The authors did not explain why so many parameters need to be introduced. In this paper, we simulate a general attempt to attack the scheme, which shows which parameters are essential to the scheme and which can reasonably be discarded. Based on this analysis, we present a simplified version and an efficient version of the Boyen-Waters scheme. The analysis technique developed in this paper can also help improve other cryptographic protocols.
2010
EPRINT
An Efficient and Parallel Gaussian Sampler for Lattices
At the heart of many recent lattice-based cryptographic schemes is a polynomial-time algorithm that, given a `high-quality' basis, generates a lattice point according to a Gaussian-like distribution. Unlike most other operations in lattice-based cryptography, however, the known algorithm for this task (due to Gentry, Peikert, and Vaikuntanathan; STOC 2008) is rather inefficient, and is inherently sequential. We present a new Gaussian sampling algorithm for lattices that is \emph{efficient} and \emph{highly parallelizable}. We also show that in most cryptographic applications, the algorithm's efficiency comes at almost no cost in asymptotic security. At a high level, our algorithm resembles the ``perturbation'' heuristic proposed as part of NTRUSign (Hoffstein \etal, CT-RSA 2003), though the details are quite different. To our knowledge, this is the first algorithm and rigorous analysis demonstrating the security of a perturbation-like technique.
2010
EPRINT
AN EFFICIENT PARALLEL ALGORITHM FOR SKEIN HASH FUNCTIONS
Recently, cryptanalysts have found collisions on the MD4, MD5, and SHA-0 algorithms; moreover, a method for finding SHA-1 collisions with less than the expected amount of work has been published. The National Institute of Standards and Technology (http://www.nist.gov/index.html) has decided that it is prudent to develop a new hash algorithm, to be referred to as "SHA-3", through a public competition (http://www.nist.gov/itl/csd/ct/hash_competition.cfm). From the set of proposals accepted for the second round of the competition, the solution we have chosen to explore in this paper, with the goal of providing an efficient parallel algorithm, is the Skein [10] hash function family. Its design combines speed, security, simplicity, and a great deal of flexibility in a modular package that is easy to analyze. The main reason for parallelizing such an algorithm is to obtain optimal performance in critical applications that require implementation on multi-core target processors. To parallelize Skein we have used its tree hash mode, which virtually creates one thread for each node of the tree. We claim that this is one of the first parallel implementations, with an associated performance evaluation, of this SHA-3 candidate algorithm.
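To give a flavour of how a tree hash mode exposes parallelism, here is a minimal Python sketch of a generic binary hash tree with leaves hashed in a thread pool; SHA-256 from hashlib stands in for Skein, and the block size, node tags, and padding rule are illustrative choices, not Skein's actual tree-mode parameters.

```python
# Generic binary hash-tree sketch illustrating how a tree mode exposes
# parallelism: leaves are hashed independently (here in a thread pool) and
# then combined pairwise up to a single root. SHA-256 stands in for Skein;
# the block size and node encoding are illustrative, not Skein's tree mode.
import hashlib
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4096  # leaf size in bytes (illustrative)

def leaf_hash(block: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + block).digest()          # 0x00 = leaf tag

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()   # 0x01 = inner-node tag

def tree_hash(message: bytes) -> bytes:
    blocks = [message[i:i + BLOCK] for i in range(0, len(message), BLOCK)] or [b""]
    with ThreadPoolExecutor() as pool:            # leaves hashed in parallel
        level = list(pool.map(leaf_hash, blocks))
    while len(level) > 1:                         # combine pairwise up to the root
        if len(level) % 2:
            level.append(level[-1])               # duplicate last node (one padding choice)
        level = [node_hash(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

print(tree_hash(b"x" * 100_000).hex())
```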
2010
EPRINT
An enhanced ID-based remote mutual authentication with key agreement protocol for mobile devices on elliptic curve cryptosystem
Recently, Yoon et al. and Wu proposed two improved remote mutual authentication and key agreement schemes for mobile devices based on elliptic curve cryptosystems. In this paper, we show that Yoon et al.'s protocol fails to provide perfect forward secrecy and fails to achieve explicit key confirmation. We also point out that Wu's scheme loses efficiency by using double secret keys and is vulnerable to the password guessing attack and the forgery attack. To overcome these drawbacks, we propose an improved scheme. Through comparison with other protocols, we believe our improved scheme is more suitable for real-life applications.
2010
EPRINT
An Improved Timestamp-Based Password Remote User Authentication Scheme
In 2003, Shen et al. [4] proposed a timestamp-based password authentication scheme in which the remote server does not need to store passwords or a verification table for user authentication. Unfortunately, Wang and Li [6], E.J. Yoon [8], and Lieu et al. [3] independently analyzed the Shen-Lin scheme [4] and found it vulnerable to several serious attacks. Continuing this line of work, this paper analyzes a few further attacks and finally proposes an improved timestamp-based password remote user authentication scheme that withstands the existing forgery attacks.
2010
EPRINT
An Improved Timing Attack with Error Detection on RSA-CRT
Several types of timing attacks have been published, but they are either theoretical or hard to carry out in practice. To improve the feasibility of the attack, this paper proposes an advanced timing attack on RSA-CRT using the t-test as a statistical tool. Similar timing attacks have been presented, such as the BB-Attack and Schindler's attack; however, none of them applied a statistical tool with such efficiency, nor demonstrated complete key recovery in practice by attacking RSA-CRT. With the t-test, we enlarge the 0-1 gap, reduce the neighborhood size, and improve the precision of the decision. The main contribution of this paper, however, is that our algorithm has an error-detection property which can detect an erroneous decision when guessing $q_k$ and correct it. We achieve a success rate of 100% in recovering $q$ for the interprocess timing attack, completely recovering a 1024-bit RSA key in practice.
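As a hedged illustration of the statistical step (not the attack itself), the sketch below computes Welch's t-statistic to decide whether two sets of timing samples have different means; the sample data and the decision threshold are invented for the example.

```python
# Welch's t-statistic for deciding whether two sets of timing samples have
# measurably different means, the kind of statistical decision used to sharpen
# the 0-1 gap in the attack. Sample data and threshold below are illustrative.
from statistics import mean, variance

def welch_t(xs, ys):
    nx, ny = len(xs), len(ys)
    return (mean(xs) - mean(ys)) / ((variance(xs) / nx + variance(ys) / ny) ** 0.5)

# Hypothetical timing samples (clock cycles) for two candidate guesses of a key bit.
timings_guess0 = [10012, 10041, 9998, 10020, 10035, 10007]
timings_guess1 = [10110, 10154, 10098, 10132, 10121, 10144]

t = welch_t(timings_guess0, timings_guess1)
print(f"t = {t:.2f}")
print("means differ significantly:", abs(t) > 4.5)  # threshold chosen for illustration
```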
2010
EPRINT
An Information Theoretic Perspective on the Differential Fault Analysis against AES
Differential Fault Analysis (DFA) against AES has been actively studied in recent years. Based on similar fault-injection assumptions, different DFA attacks against AES have been proposed. However, it is difficult to understand how different attack results are obtained for the same fault injection, and how similar fault-injection assumptions relate to the corresponding attack results. This paper reviews previous DFA attacks against AES from an information-theoretic perspective and provides a general, accessible understanding of them. We apply the analysis to DFA attacks on AES-192 and AES-256, and we propose attack procedures that reach the theoretically minimal number of fault injections.
2010
EPRINT
Analysis of an internet voting protocol
The Norwegian government is planning trials of internet voting in the 2011 local government elections. We describe and analyse the cryptographic protocol that will be used. In our opinion, the protocol is suitable for trials of internet voting, even though it is not perfect. This paper is a second step in an ongoing evaluation of the cryptographic protocol.
2010
EPRINT
Analysis of Efficient Techniques for Fast Elliptic Curve Cryptography on x86-64 based Processors
In this work, we analyze and present experimental data evaluating the efficiency of several techniques for speeding up the computation of elliptic curve point multiplication on emerging x86-64 processor architectures. In particular, we study the efficient combination of such techniques as elimination of conditional branches and incomplete reduction to achieve fast field arithmetic over GF(p). Furthermore, we study the impact of (true) data dependencies on these processors and propose several generic techniques to reduce the number of pipeline stalls, memory reads/writes and function calls. We also extend these techniques to field arithmetic over GF(p^2), which is utilized as the underlying field by the recently proposed Galbraith-Lin-Scott (GLS) method to achieve higher performance in the point multiplication. By efficiently combining all these methods with state-of-the-art elliptic curve algorithms we obtain high-speed implementations of point multiplication that are up to 31% faster than the best previously published results on similar platforms. This research is crucial for advancing high-speed cryptography on new emerging processor architectures.
2010
EPRINT
Applications of SAT Solvers to AES key Recovery from Decayed Key Schedule Images
The cold boot attack is a side-channel attack which exploits the data remanence property of random access memory (RAM) to retrieve its contents, which remain readable shortly after power has been removed. Given the nature of the cold boot attack, only a corrupted image of the memory contents will be available to the attacker. In this paper, we investigate the use of an off-the-shelf SAT solver, CryptoMiniSat, to improve the recovery of AES-128 key schedules from their corresponding decayed memory images. By exploiting the asymmetric decay of the memory images and the redundancy of key material inherent in the AES key schedule, rectifying the faults in the corrupted memory images of the AES-128 key schedule is formulated as a Boolean satisfiability problem which can be solved efficiently for relatively large decay factors. Our experimental results show that this approach improves upon the previously known results.
2010
EPRINT
Approximating Addition by XOR: how to go all the way
In this paper, we study the approximation of addition by XOR, taking P. Sarkar's publication~\cite{bib:sarkar} as the reference work and starting point. In that work, among various results, it was claimed that explicit formulas seemed difficult to obtain when the number $n$ of summands is more than $5$. In the first part of our work, we show a systematic way to find explicit formulas: the complexity to compute them is $O(n^3)$, which allows large values of $n$. We present some numerical computations and point out a conjectural observation on the coefficients. In the second part, we study a generalization of P. Sarkar's work to $q$-ary addition instead of binary. We show that the mechanics of the addition is essentially the same as in the binary case. In particular, the sequence of carries behaves very similarly: it is a Markov chain whose transition matrix can be computed. Running some experiments on small values of $n$ leads us to a conjecture, the first part of which is intuitive and the second part of which reveals an amazing coincidence (which is probably not one!). Finally, in a section titled ``very last news'', we refer to a paper published by Holte in 1997, which was brought to our attention after our first post and which we had missed before. It happens that this paper studies the topic and solves a major part of our open problems. Hence, the present post is an updated version of our previous ``Approximating Addition by XOR: how to go (a little) further than P. Sarkar'', taking this reference by Holte into account.
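For readers who want to experiment, the following sketch gives a Monte Carlo estimate of the probability that adding n random w-bit words (modulo 2^w) agrees with their XOR; the parameters are illustrative and the exact formulas studied in the paper are not reproduced here.

```python
# Monte Carlo estimate of the probability that integer addition of n random
# w-bit words agrees with their XOR modulo 2^w (i.e., the carries never change
# the retained bits). The parameters n, w and trial count are illustrative.
import random

def sum_equals_xor_rate(n: int, w: int, trials: int = 200_000) -> float:
    mask = (1 << w) - 1
    hits = 0
    for _ in range(trials):
        words = [random.getrandbits(w) for _ in range(n)]
        s = sum(words) & mask
        x = 0
        for word in words:
            x ^= word
        hits += (s == x)
    return hits / trials

for n in (2, 3, 4, 5):
    print(n, sum_equals_xor_rate(n, w=8))
```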
2010
EPRINT
Arithmetic of Supersingular Koblitz Curves in Characteristic Three
We consider digital expansions of scalars for supersingular Koblitz curves in characteristic three. These are positional representations of integers to the base of $\tau$, where $\tau$ is a zero of the characteristic polynomial $T^2 \pm 3\,T + 3$ of a Frobenius endomorphism. They are then applied to the improvement of scalar multiplication on the Koblitz curves. A simple connection between $\tau$-adic expansions and balanced ternary representations is given. Windowed non-adjacent representations are considered whereby the digits are elements of minimal norm. We give an explicit description of the elements of the digit set, allowing for a very simple and efficient precomputation strategy, whereby the rotational symmetry of the digit set is also used to reduce the memory requirements. With respect to the current state of the art for computing scalar multiplications on supersingular Koblitz curves we achieve the following improvements: \rm{(i)} speed-ups of up to 40\%, \rm{(ii)} a reduction of memory consumption by a factor of three, \rm{(iii)} our methods apply to all window sizes without requiring operation sequences for the precomputation stage to be determined offline first. Additionally, we explicitly describe the action of some endomorphisms on the Koblitz curve as a scalar multiplication by an explicitly given integer.
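Since the abstract mentions a connection to balanced ternary representations, here is a small, self-contained sketch of the classical balanced ternary expansion (digits in {-1, 0, 1}); it does not compute the tau-adic expansions themselves.

```python
# Balanced ternary expansion of an integer: digits d_i in {-1, 0, 1} with
# n = sum d_i * 3^i. This illustrates only the classical representation that
# the abstract relates tau-adic expansions to; it does not compute tau-adic digits.
def balanced_ternary(n: int) -> list[int]:
    digits = []  # least significant digit first
    while n != 0:
        r = n % 3
        if r == 0:
            digits.append(0)
        elif r == 1:
            digits.append(1)
            n -= 1
        else:              # r == 2 is represented as digit -1 plus a carry
            digits.append(-1)
            n += 1
        n //= 3
    return digits

d = balanced_ternary(100)
assert sum(di * 3**i for i, di in enumerate(d)) == 100
print(d)  # [1, 0, -1, 1, 1]
```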
2010
EPRINT
Attacking M&M Collective Signature Scheme
A collective signature scheme aims to solve the problem of signing a message by multiple signers. Recently, Moldovyan and Moldovyan [1] proposed a scheme for collective signatures based on Schnorr signatures. We show some security weaknesses of the scheme.
2010
EPRINT
Attribute-based Authenticated Key Exchange
We introduce the concept of attribute-based authenticated key exchange (AB-AKE) within the framework of ciphertext policy attribute-based systems. A notion of AKE-security for AB-AKE is presented based on the security models for group key exchange protocols and also taking into account the security requirements generally considered in the ciphertext policy attribute-based setting. We also extend the paradigm of hybrid encryption to the ciphertext policy attribute-based encryption schemes. A new primitive called encapsulation policy attribute-based key encapsulation mechanism (EP-AB-KEM) is introduced and a notion of chosen ciphertext security is defined for EP-AB-KEMs. We propose an EP-AB-KEM from an existing attribute-based encryption scheme and show that it achieves chosen ciphertext security in the generic group and random oracle models. We present a generic one-round AB-AKE protocol that satisfies our AKE-security notion. The protocol is generically constructed from any EP-AB-KEM that satisfies chosen ciphertext security. Instantiating the generic AB-AKE protocol with our EP-AB-KEM will result in a concrete one-round AB-AKE protocol also secure in the generic group and random oracle models.
2010
EPRINT
Attribute-based group key establishment
Motivated by the problem of establishing a session key among parties based on the possession of certain credentials only, we discuss a notion of attribute-based key establishment. A number of new issues arise in this setting that are not present in the usual settings of group key establishment where unique user identities are assumed to be publicly available. After detailing the security model, we give a two-round solution in the random oracle model. As main technical tool we introduce a notion of attribute-based signcryption, which may be of independent interest. We show that the type of signcryption needed can be realized through the encrypt-then-sign paradigm. Further, we discuss additional guarantees of the proposed protocol, that can be interpreted in terms of deniability and privacy.
2010
EPRINT
Authenticating Aggregate Range Queries over Dynamic Multidimensional Dataset
We are interested in the integrity of the query results from an outsourced database service provider. Alice passes a set $\set{D}$ of $d$-dimensional points, together with some authentication tag $\set{T}$, to an untrusted service provider Bob. Later, Alice issues some query over $\set{D}$ to Bob, and Bob should produce a query result and a proof based on $\set{D}$ and $\set{T}$. Alice wants to verify the integrity of the query result with the help of the proof, using only the private key. Xu J.~\emph{et al.}~\cite{maia-full} proposed an authentication scheme to solve this problem for multidimensional aggregate range queries, including {\SUM, \COUNT, \MIN, \MAX} and {\MEDIAN}, and multidimensional range selection queries, with $O(d^2)$ communication overhead. However, their scheme only applies to static databases. This paper extends their method to support dynamic operations on the dataset, including inserting or deleting a point from the dataset. The communication overhead of our scheme is $O(d^2 \log N)$, where $N$ is the number of data points in the dataset.
2010
EPRINT
Authenticating Aggregate Range Queries over Multidimensional Dataset
We are interested in the integrity of the query results from an outsourced database service provider. Alice passes a set $\mathtt{D}$ of $d$-dimensional points, together with some authentication tag $\mathtt{T}$, to an untrusted service provider Bob. Later, Alice issues some query over $\mathtt{D}$ to Bob, and Bob should produce a query result and a proof based on $\mathtt{D}$ and $\mathtt{T}$. Alice wants to verify the integrity of the query result with the help of the proof, using only the private key. In this paper, we consider aggregate query conditional on multidimensional range selection. In its basic form, a query asks for the total number of data points within a $d$-dimensional range. We are concerned about the number of communication bits required and the size of the tag $\mathtt{T}$. We give a method that requires $O(d^2)$ communication bits to authenticate an aggregate query conditional on $d$-dimensional range selection. Besides counting, summing and finding of the minimum can also be supported. Furthermore, our scheme can be extended slightly to authenticate $d$-dimensional usual (non-aggregate) range selection query with $O(d^2)$ bits communication overhead, improving known results that require $O(\log^{d-1} N)$ communication overhead, where $N$ is the number of data points in the dataset.
2010
EPRINT
Authentication protocols based on low-bandwidth unspoofable channels: a comparative survey
One of the main challenges in pervasive computing is how to establish secure communication over an untrusted high-bandwidth network without any initial knowledge or a Public Key Infrastructure. An approach studied by a number of researchers is to build security through human work, creating a low-bandwidth empirical (or authentication) channel where the transmitted information is authentic and cannot be faked or modified. In this paper, we give an analytical survey of authentication protocols of this type. We start with non-interactive authentication schemes, and then move on to analyse a number of strategies used to build interactive pair-wise and group protocols that minimise the human work relative to the amount of security obtained, as well as optimising the computational cost. In studying these protocols, we will discover that their security is underpinned by the idea of commitment before knowledge, which is refined by two protocol design principles introduced in this survey.
2010
EPRINT
Authentication schemes from actions on graphs, groups, or rings
We propose a couple of general ways of constructing authentication schemes from actions of a semigroup on a set, without exploiting any specific algebraic properties of the set acted upon. Then we give several concrete realizations of this general idea, and in particular, we describe several authentication schemes with long-term private keys where forgery (a.k.a. impersonation) is NP-hard. Computationally hard problems that can be employed in these realizations include Graph Colorability, Diophantine Problem, and many others.
2010
EPRINT
Automatic Search for Related-Key Differential Characteristics in Byte-Oriented Block Ciphers: Application to AES, Camellia, Khazad and Others
While differential behavior of modern ciphers in a single secret-key scenario is relatively well understood, and simple techniques for computation of security lower bounds are readily available, the security of modern block ciphers against related-key attacks is still very ad hoc. In this paper we make a first step towards provable security of block ciphers against related-key attacks by presenting an efficient search tool for finding differential characteristics both in the state and in the key (note that due to similarities between block ciphers and hash functions such a tool will be useful in the analysis of hash functions as well). We use this tool to search for the best possible (in terms of the number of rounds) related-key differential characteristics in AES, byte-Camellia, Khazad, FOX, and Anubis. We show the best related-key differential characteristics for 5, 11, and 14 rounds of AES-128, AES-192, and AES-256 respectively. We use the optimal differential characteristics to design the best related-key and chosen-key attacks on AES-128 (7 out of 10 rounds), AES-192 (full 12 rounds), byte-Camellia (full 18 rounds) and Khazad (7 and 8 out of 8 rounds). We also show that the ciphers FOX and Anubis have no related-key attacks on more than 4-5 rounds.
2010
EPRINT
Automorphism group of the set of all bent functions
A Boolean function in an even number of variables is called {\it bent} if it is at the maximal possible Hamming distance from the class of all affine Boolean functions. We prove that every isometric mapping of the set of all Boolean functions into itself that transforms bent functions into bent functions is a combination of an affine transform of coordinates and an affine shift.
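As a concrete, hedged illustration of the distance-to-affine characterization (using the standard Walsh-Hadamard criterion rather than anything specific to this paper), the sketch below checks bentness and computes the nonlinearity of a small example function.

```python
# Walsh-Hadamard check of bentness: for an n-variable Boolean function f
# (n even), f is bent iff every Walsh coefficient has absolute value 2^(n/2);
# equivalently, its distance to every affine function is 2^(n-1) +/- 2^(n/2 - 1).
# The example function below (x1*x2 XOR x3*x4) is a classical bent function.
from itertools import product

def walsh_spectrum(f, n):
    # W_f(a) = sum over x of (-1)^(f(x) XOR <a, x>)
    spectrum = []
    for a in product((0, 1), repeat=n):
        w = 0
        for x in product((0, 1), repeat=n):
            dot = sum(ai & xi for ai, xi in zip(a, x)) & 1
            w += (-1) ** (f(x) ^ dot)
        spectrum.append(w)
    return spectrum

n = 4
f = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])
W = walsh_spectrum(f, n)
nonlinearity = 2 ** (n - 1) - max(abs(w) for w in W) // 2
print("bent:", all(abs(w) == 2 ** (n // 2) for w in W))  # True
print("nonlinearity:", nonlinearity)                     # 6 for this example
```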
2010
EPRINT
Avoiding Full Extension Field Arithmetic in Pairing Computations
The most costly operations encountered in pairing computations are those that take place in the full extension field $\mathbb{F}_{p^k}$. At high levels of security, the complexity of operations in $\mathbb{F}_{p^k}$ dominates the complexity of the operations that occur in the lower-degree subfields. Consequently, full extension field operations have the greatest effect on the runtime of Miller's algorithm. Many recent optimizations in the literature have focussed on improving the overall operation count by presenting new explicit formulas that reduce the number of subfield operations encountered throughout an iteration of Miller's algorithm. Unfortunately, almost all of these operations are far outweighed by the operations in the full extension field. In this paper, we propose a new way of carrying out Miller's algorithm, involving new explicit formulas which reduce the number of full extension field operations that occur in an iteration of the Miller loop, resulting in significant speedups of between 5 and 30 percent in most practical situations.
2010
EPRINT
Balanced Boolean Functions with (Almost) Optimal Algebraic Immunity and Very High Nonlinearity
In this paper, we present a class of $2k$-variable balanced Boolean functions and a class of $2k$-variable $1$-resilient Boolean functions for an integer $k\ge 2$, both of which have maximal algebraic degree and very high nonlinearity. Based on a conjecture recently proposed by Tu and Deng, it is shown that the proposed balanced Boolean functions have optimal algebraic immunity and the $1$-resilient Boolean functions have almost optimal algebraic immunity. Among all known balanced Boolean functions and $1$-resilient Boolean functions, our new functions possess the highest nonlinearity. Since the conjecture has been verified by computer for all $k\le 29$, we have at least constructed a class of balanced Boolean functions and a class of $1$-resilient Boolean functions in an even number of variables $\le 58$ that are cryptographically optimal or almost optimal in terms of balancedness, algebraic degree, nonlinearity, and algebraic immunity.
2010
EPRINT
Barreto-Naehrig Curve With Fixed Coefficient - Efficiently Constructing Pairing-Friendly Curves -
This paper describes a method for constructing Barreto-Naehrig (BN) curves and twists of BN curves that are pairing-friendly and have the embedding degree $12$ by using just primality tests without a complex multiplication (CM) method. Specifically, this paper explains that the number of points of elliptic curves $y^2=x^3\pm 16$ and $y^2=x^3 \pm 2$ over $\Fp$ is given by 6 polynomials in $z$, $n_0(z),\cdots, n_5(z)$, two of which are irreducible, classified by the value of $z\bmod{12}$ for a prime $p(z)=36z^4+36z^3+24z^2+6z+1$ with $z$ an integer. For example, elliptic curve $y^2=x^3+2$ over $\Fp$ always becomes a BN curve for any $z$ with $z \equiv 2,11\!\!\!\pmod{12}$. Let $n_i(z)$ be irreducible. Then, to construct a pairing-friendly elliptic curve, it is enough to find an integer $z$ of appropriate size such that $p(z)$ and $n_i(z)$ are primes.
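A minimal sketch of the kind of search this enables is shown below, assuming the standard BN parameterization with group order r(z) = 36z^4 + 36z^3 + 18z^2 + 6z + 1; the paper's finer classification of the curve equation by z mod 12 is not reproduced. The sketch uses sympy for primality testing.

```python
# Minimal sketch of a BN parameter search by primality testing alone:
# p(z) = 36z^4 + 36z^3 + 24z^2 + 6z + 1 is the field characteristic and
# r(z) = 36z^4 + 36z^3 + 18z^2 + 6z + 1 the standard BN group order.
# The paper's finer case analysis (which curve y^2 = x^3 + c applies,
# depending on z mod 12) is not reproduced here.
from sympy import isprime

def p_of(z): return 36*z**4 + 36*z**3 + 24*z**2 + 6*z + 1
def r_of(z): return 36*z**4 + 36*z**3 + 18*z**2 + 6*z + 1

def find_bn_z(bits: int) -> int:
    # Search z of roughly bits/4 bits so that p(z) has about `bits` bits.
    z = 1 << (bits // 4)
    while True:
        if isprime(p_of(z)) and isprime(r_of(z)):
            return z
        z += 1

z = find_bn_z(64)  # toy size; real curves use ~256-bit p(z)
print(z, p_of(z).bit_length())
```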
2010
EPRINT
Batch Groth-Sahai
In 2008, Groth and Sahai proposed a general methodology for constructing non-interactive zero-knowledge (and witness-indistinguishable) proofs in bilinear groups. While avoiding expensive NP-reductions, these proof systems are still inefficient due to a number of pairing computations required for verification. We apply recent techniques of batch verification to the Groth-Sahai proof systems and manage to improve significantly the complexity of proof verification. We give explicit batch verification formulas for generic Groth-Sahai equations (whose cost is less than a tenth of the original) and also for specific popular protocols relying on their methodology (namely Groth's group signatures and Belenkiy-Chase-Kohlweiss-Lysyanskaya's P-signatures).
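The batching idea can be illustrated, in a plain multiplicative group rather than the pairing setting of Groth-Sahai, by the classical small-exponent test of Bellare, Garay, and Rabin; the sketch below is only an analogy for the principle, with toy parameters.

```python
# Small-exponent batch verification (Bellare-Garay-Rabin style), illustrated in
# a plain multiplicative group rather than the pairing setting of Groth-Sahai:
# instead of checking g^{x_i} == y_i separately for each i, draw random t-bit
# r_i and check g^(sum r_i x_i) == prod y_i^{r_i}; a false claim slips through
# with probability about 2^-t. Parameters below are illustrative.
import secrets

P = 2**127 - 1   # a Mersenne prime; any suitably large group would do
G = 3

def batch_verify(claims, t=32):
    """claims: list of (x_i, y_i) pairs allegedly satisfying y_i = G^x_i mod P."""
    rs = [secrets.randbits(t) for _ in claims]
    lhs = pow(G, sum(r * x for r, (x, _) in zip(rs, claims)), P)
    rhs = 1
    for r, (_, y) in zip(rs, claims):
        rhs = rhs * pow(y, r, P) % P
    return lhs == rhs

claims = [(x, pow(G, x, P)) for x in (5, 77, 123456)]
print(batch_verify(claims))         # True: all claims valid
claims[1] = (77, pow(G, 78, P))     # corrupt one claim
print(batch_verify(claims))         # False with overwhelming probability
```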
2010
EPRINT
Bent functions at the minimal distance and algorithms of constructing linear codes for CDMA
In this paper we study linear codes for CDMA (Code Division Multiple Access).
2010
EPRINT
Between Hashed DH and Computational DH: Compact Encryption from Weaker Assumption
In this paper, we introduce the intermediate hashed Diffie-Hellman (IHDH) assumption, which is weaker than the hashed DH (HDH) assumption (and thus the decisional DH assumption), and stronger than the computational DH assumption. We then present two public key encryption schemes with short ciphertexts, both of which are chosen-ciphertext secure under this assumption. The short-message scheme has smaller ciphertexts than the Kurosawa-Desmedt (KD) scheme, and the long-message scheme is a KD-size scheme for arbitrary plaintext lengths that is based on a weaker assumption than the HDH assumption.
2010
EPRINT
Bias in the nonlinear filter generator output sequence
Nonlinear filter generators are common components used in keystream generators for stream ciphers and, more recently, in authentication mechanisms. They consist of a Linear Feedback Shift Register (LFSR) and a nonlinear Boolean function to mask the linearity of the LFSR output. Properties of the output of a nonlinear filter are not well studied. Anderson noted that the $m$-tuple output of a nonlinear filter with consecutive taps to the filter function is unevenly distributed. Current designs use taps which are not consecutive. We examine $m$-tuple outputs from nonlinear filter generators constructed using various LFSRs and Boolean functions for both consecutive and uneven (full positive difference sets where possible) tap positions. The investigation reveals that in both cases the $m$-tuple output is not uniform. However, consecutive tap positions result in a more biased distribution than uneven tap positions, with some $m$-tuples not occurring at all. These biased distributions indicate a potential flaw that could be exploited for cryptanalysis.
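A toy experiment in this spirit is easy to set up: the sketch below filters a small LFSR with a simple nonlinear Boolean function and tabulates overlapping m-tuples of the output for consecutive versus spread tap positions; the LFSR, filter function, and tap sets are illustrative and not taken from the paper.

```python
# Toy experiment in the spirit of the paper: generate keystream from a small
# LFSR filtered by a nonlinear Boolean function, and tabulate overlapping
# m-tuples of the output for two choices of filter-tap positions. The LFSR
# feedback, filter function and tap sets below are illustrative only.
from collections import Counter

def lfsr_keystream(state, fb_taps, length, filt, filt_taps):
    state = list(state)   # state[0] is the most recent bit
    out = []
    for _ in range(length):
        out.append(filt(*(state[i] for i in filt_taps)))
        fb = 0
        for t in fb_taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

def mtuple_counts(bits, m):
    return Counter(tuple(bits[i:i + m]) for i in range(len(bits) - m + 1))

# Length-7 LFSR with a primitive feedback relation (new bit = stage 0 XOR stage 6),
# filtered by a simple nonlinear function of three stages.
filt = lambda a, b, c: (a & b) ^ c
stream_consec = lfsr_keystream([1, 0, 0, 1, 0, 1, 1], (6, 0), 4000, filt, (0, 1, 2))
stream_spread = lfsr_keystream([1, 0, 0, 1, 0, 1, 1], (6, 0), 4000, filt, (0, 2, 5))

for name, s in (("consecutive taps", stream_consec), ("spread taps", stream_spread)):
    print(name, sorted(mtuple_counts(s, 3).values()))
```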
2010
EPRINT
Binomial Sieve Series -- a Prospective Cryptographic Tool
A Binomial Sieve Series (BSS) is an infinite monotonically increasing sequence of natural numbers b1, b2, ..., bn (bi < b(i+1)) generated ('naturally') from any two natural numbers x and y <= x. If one repeatedly counts bi elements over the set X = 1, 2, ..., x (recycled counting) and each time eliminates the element of X on which the round of counting stops, then the surviving element of X is y. Every natural number, for any x, is associated with a certain survivor. We prove that for any x all BSS are infinite and approach an equal size, regardless of the identity of the survivor element y. These infinite series (in count and length) have no simple pattern; their disorder is reminiscent of the primes. We suggest some intriguing cryptographic applications based on the poor predictability of the next element in each series, combined with good predictability of the computational load to develop the series (by users and by the cryptanalyst). Using x as a shared secret, and a random, per-session y, Alice and Bob may mark successive messages between them with the next element of the respective BSS, thereby mutually authenticating themselves throughout their conversation. Other cryptographic possibilities are outlined.
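On our reading of the generation rule described above (the counting-and-elimination process is the classical Josephus elimination), the following sketch computes the survivor for a given step size and collects the first few members of the series for a pair (x, y); names and details are our interpretation, not the author's notation.

```python
# Sketch of the elimination rule as we read it from the abstract: counting b
# elements at a time (cyclically) over X = {1, ..., x} and removing the element
# on which each count stops is the classical Josephus elimination; the series
# for (x, y) then collects those step sizes b whose sole survivor is y.
# Function names and details are our interpretation, not the author's notation.
def survivor(x: int, b: int) -> int:
    elems = list(range(1, x + 1))
    idx = 0
    while len(elems) > 1:
        idx = (idx + b - 1) % len(elems)   # count b elements, cyclically
        elems.pop(idx)                     # eliminate the element the count stops on
    return elems[0]

def bss_prefix(x: int, y: int, count: int) -> list[int]:
    """First `count` step sizes b with survivor(x, b) == y."""
    series, b = [], 1
    while len(series) < count:
        if survivor(x, b) == y:
            series.append(b)
        b += 1
    return series

print(bss_prefix(x=10, y=7, count=8))
```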
2010
EPRINT
Black-Box Constructions of Protocols for Secure Computation
It is well known that secure computation without an honest majority requires computational assumptions. An interesting question that therefore arises relates to the way such computational assumptions are used. Specifically, can the secure protocol use the underlying primitive (e.g., a one-way trapdoor permutation) in a {\em black-box} way, treating it as an oracle, or must it be {\em nonblack-box} (by referring to the code that computes the primitive)? Despite the fact that many general constructions of cryptographic schemes refer to the underlying primitive in a black-box way only, there are some constructions that are inherently nonblack-box. Indeed, all known constructions of protocols for general secure computation that are secure in the presence of a malicious adversary and without an honest majority use the underlying primitive in a nonblack-box way (requiring one to prove, in zero knowledge, statements that relate to the primitive). In this paper, we study whether such nonblack-box use is essential. We answer this question in the negative. Concretely, we present a \emph{fully black-box reduction} from oblivious transfer with security against malicious parties to oblivious transfer with security against semi-honest parties. As a corollary, we get the first constructions of general multiparty protocols (with security against malicious adversaries and without an honest majority) which only make a {\em black-box} use of semi-honest oblivious transfer, or alternatively a black-box use of lower-level primitives such as enhanced trapdoor permutations or homomorphic encryption.
2010
EPRINT
BoostReduce - A Framework For Strong Lattice Basis Reduction
In this paper, we propose a new generic reduction framework BoostReduce for strong lattice basis reduction. At the core of our new framework is an iterative method which uses a newly-developed algorithm for finding short lattice vectors and integrating them efficiently into an improved lattice basis. We present BoostBKZ as an instance of BoostReduce using the Block-Korkine-Zolotarev (BKZ) reduction. BoostBKZ is tailored to make effective use of modern computer architectures in that it takes advantage of multiple threads. Experimental results of BoostBKZ show a significant reduction in running time while maintaining the quality of the reduced lattice basis in comparison to the traditional BKZ reduction algorithm.
2010
EPRINT
CCA-Secure Cryptosystem from Lattice
We propose a simple construction of a CCA-secure public-key encryption scheme based on lattices in the standard model. Our construction uses the lattice-based cryptosystem mR05 of [21], the multi-bit version of the single-bit cryptosystem R05 [20], as a building block, and makes use of its indistinguishable pseudo-homomorphism property, which is known to be achievable without random oracles and which is the crux of constructing a public-key encryption scheme that is CCA-secure in the standard model. This makes our construction approach quite different from existing ones. As far as we know, our construction is the first CCA-secure cryptosystem which is directly constructed from lattices and whose security is directly based on a standard lattice problem which is hard in the worst case for quantum algorithms.
2010
EPRINT
CCA-Secure PRE Scheme without Public Verifiability
In a proxy re-encryption (PRE) scheme, a semi-trusted proxy can transform a ciphertext under Alice's public key into another ciphertext that Bob can decrypt. However, the proxy cannot access the plaintext. Due to its transformation property, PRE can be used in many applications, such as encrypted email forwarding. All the existing CCA-secure PRE schemes have a crucial property: the public verifiability of the original ciphertext, i.e., everyone can check the validity of the original ciphertext. In this paper, we propose a novel CCA-secure PRE scheme without public verifiability. This proposal is proven-secure based on the DDH assumption in the standard model. To the best of our knowledge, our proposal is the first CCA-secure unidirectional PRE scheme without pairings in the standard model, which answers an open problem in the PRE field.
2010
EPRINT
CCA-Secure PRE Scheme without Random Oracles
In a proxy re-encryption scheme, a semi-trusted proxy can transform a ciphertext under Alice's public key into another ciphertext that Bob can decrypt. However, the proxy cannot access the plaintext. Due to its transformation property, proxy re-encryption can be used in many applications, such as encrypted email forwarding. In this paper, by using the techniques of Canetti-Hohenberger and Kurosawa-Desmedt, we propose a new single-use unidirectional proxy re-encryption scheme. Our proposal is secure against chosen ciphertext attack (CCA) and collusion attack in the standard model.
2010
EPRINT
CCA-Secure Unidirectional Proxy Re-Encryption in the Adaptive Corruption Model without Random Oracles
Proxy re-encryption (PRE), introduced by Blaze, Bleumer and Strauss in Eurocrypt'98, allows a semi-trusted proxy to convert a ciphertext originally intended for Alice into an encryption of the same message intended for Bob. PRE has recently drawn great interest, and many interesting PRE schemes have been proposed. However, up to now, it is still an important open question to come up with a chosen-ciphertext secure unidirectional PRE in the adaptive corruption model. To address this problem, we propose a new unidirectional PRE scheme, and prove its chosen-ciphertext security in the adaptive corruption model without random oracles. Compared with the best known unidirectional PRE scheme proposed by Libert and Vergnaud in PKC'08, our scheme enjoys the advantages of both higher efficiency and stronger security.
2010
EPRINT
CCA2 Secure Certificateless Encryption Schemes Based on RSA
Certificateless cryptography, introduced by Al-Riyami and Paterson, eliminates the key escrow problem inherent in identity-based cryptosystems. In this paper, we present two novel and completely different RSA-based adaptive chosen ciphertext secure (CCA2) certificateless encryption schemes. The new schemes are efficient when compared to other existing certificateless encryption schemes that are based on the costly bilinear pairing operation, and are quite comparable with the certificateless encryption scheme based on multiplicative groups (without bilinear pairing) by Sun et al. \cite{SZB07} and the RSA-based CPA-secure certificateless encryption scheme by Lai et al. \cite{LDLK09}. We consider a slightly stronger security model than the ones considered in \cite{LDLK09} and \cite{SZB07} to prove the security of our schemes.
2010
EPRINT
Certificateless generalized signcryption
Generalized signcryption is a relatively new cryptographic primitive that not only provides encryption and signature in a single operation, but also provides encryption or signature alone when needed. This paper gives a formal definition of certificateless generalized signcryption and presents its security model. A concrete certificateless generalized signcryption scheme is also proposed.
2010
EPRINT
Certificateless Signcryption without Pairing
Certificateless public key cryptography is receiving significant attention because it is a new paradigm that simplifies the traditional PKC and solves the inherent key escrow problem suffered by ID-PKC. Certificateless signcryption is one of the most important security primitives in CL-PKC. However, to the best of our knowledge, all constructions of certificateless signcryption (CLSC) in the literature are built from bilinear maps which need costly operations. In the paper, motivated by certificateless encryption schemes proposed in [3, 21], we present a pairing-free CLSC scheme, which is more efficient than all previous constructions.
2010
EPRINT
Chosen Ciphertext Secure Encryption over Semi-smooth Subgroup
In this paper we propose two public key encryption schemes over the semi-smooth subgroup introduced by Groth05. Both schemes are proved secure against chosen ciphertext attacks under the factoring assumption. Since the domain of exponents is much smaller, both our schemes are significantly more efficient than the Hofheinz-Kiltz 2009 encryption scheme.
2010
EPRINT
Circular and Leakage Resilient Public-Key Encryption Under Subgroup Indistinguishability (or: Quadratic Residuosity Strikes Back)
The main results of this work are new public-key encryption schemes that, under the quadratic residuosity (QR) assumption (or Paillier's decisional composite residuosity (DCR) assumption), achieve key-dependent message security as well as high resilience to secret key leakage and high resilience to the presence of auxiliary input information. In particular, under what we call the {\it subgroup indistinguishability assumption}, of which the QR and DCR are special cases, we can construct a scheme that has: * Key-dependent message (circular) security. Achieves security even when encrypting affine functions of its own secret-key (in fact, w.r.t. affine ``key-cycles'' of predefined length). Our scheme also meets the requirements for extending key-dependent message security to broader classes of functions beyond affine functions using the techniques of [BGK, ePrint09] or [BHHI, ePrint09]. * Leakage resiliency. Remains secure even if any adversarial low-entropy (efficiently computable) function of the secret-key is given to the adversary. A proper selection of parameters allows for a ``leakage rate'' of $(1-o(1))$ of the length of the secret-key. * Auxiliary-input security. Remains secure even if any sufficiently \emph{hard to invert} (efficiently computable) function of the secret-key is given to the adversary. Our scheme is the first to achieve key-dependent security and auxiliary-input security based on the DCR and QR assumptions. Previous schemes that achieved these properties relied either on the DDH or LWE assumptions. The proposed scheme is also the first to achieve leakage resiliency for leakage rate $(1-o(1))$ of the secret-key length, under the QR assumption. We note that leakage resilient schemes under the DCR and the QR assumptions, for the restricted case of composite modulus product of safe primes, were implied by the work of [NS, Crypto09], using hash proof systems. However, under the QR assumption, known constructions of hash proof systems only yield a leakage rate of $o(1)$ of the secret-key length.
2010
EPRINT
Class Invariants by the CRT Method
We adapt the CRT approach to computing Hilbert class polynomials to handle a wide range of class invariants. For suitable discriminants D, this improves its performance by a large constant factor, more than 200 in the most favourable circumstances. This has enabled record-breaking constructions of elliptic curves via the CM method, including examples with |D| > 10^{15}.
2010
EPRINT
Co-Z Addition Formulae and Binary Ladders on Elliptic Curves
Meloni recently introduced a new type of arithmetic on elliptic curves when adding projective points sharing the same Z-coordinate. This paper presents further co-Z addition formulae (and register allocations) for various point additions on Weierstrass elliptic curves. It explains how the use of conjugate point addition and other implementation tricks allow one to develop efficient scalar multiplication algorithms making use of co-Z arithmetic. Specifically, this paper describes efficient co-Z based versions of Montgomery ladder and Joye’s double-add algorithm. Further, the resulting implementations are protected against a large variety of implementation attacks.
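To illustrate the regular structure that makes the ladder attractive against implementation attacks (one addition and one doubling per scalar bit, regardless of the bit value), here is a generic Montgomery ladder over an abstract group given by an addition callable; the co-Z formulas themselves and any curve arithmetic are not implemented in this sketch.

```python
# Generic Montgomery ladder: each scalar bit triggers exactly one group
# addition and one doubling, independent of the bit value, which is the
# regularity that co-Z arithmetic exploits. The group here is abstracted by an
# `add` callable and an identity element; the co-Z formulas themselves (and any
# curve arithmetic) are not implemented in this sketch.
def montgomery_ladder(k: int, P, add, identity):
    R0, R1 = identity, P
    for i in reversed(range(k.bit_length())):
        if (k >> i) & 1:
            R0, R1 = add(R0, R1), add(R1, R1)
        else:
            R1, R0 = add(R0, R1), add(R0, R0)
    return R0  # equals [k]P

# Toy check in the additive group of integers modulo n (so [k]P = k*P mod n).
n = 1_000_003
add_mod = lambda a, b: (a + b) % n
P = 123456
for k in (1, 2, 5, 97, 2**20 + 7):
    assert montgomery_ladder(k, P, add_mod, 0) == (k * P) % n
print("ladder OK")
```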
2010
EPRINT
Collisions for 72-step and 73-step SHA-1: Improvements in the Method of Characteristics
We present a brief report on the collision search for the reduced SHA-1. With a few improvements to the De Canni\`ere-Rechberger automatic collision search method we managed to construct two new collisions for 72- and 73-step reduced SHA-1 hash function.
2010
EPRINT
Collusion Free Protocol for Correlated Element Selection Problem
A common problem in many markets is that competing firms cannot plan joint business strategies which are socially beneficial, as each firm has its own preferable business strategy which would yield higher profits for it and lower profits for the others. The solution to this problem becomes complex because each firm need not stick to its commitment to follow the pre-designated strategy. Game theory suggests to us a way to enforce this commitment: when every player chooses his actions according to his observation of the value of a common public signal and, assuming that the others do not deviate, no player is willing to deviate from his recommended strategy. The players do not deviate from their recommended strategies, as following them yields a much higher expected pay-off than playing individually. The common public channel can be a trusted external mediator which may send each player his recommended strategy. This mediator can be simulated by a cryptographic protocol, which all the players agree to implement. This problem of suggesting the protocol is known as the \textit{Correlated Element Selection Problem}. The first two-player protocol was proposed by Dodis et al.~\cite{dhr00} in Crypto 2000. The extension of the two-player protocol to an $n$-player protocol is highly prone to collusions, as two firms can collude and cheat the rest of the firms. The main contribution of the paper is the first $n$-player collusion-free protocol for the \textit{correlated element selection problem} that does not use hardware primitives. We assume that players are honest but curious.
2010
EPRINT
Collusion Free Protocol for Rational Secret Sharing
We consider the \textit{rational secret sharing problem} introduced by Halpern and Teague\cite{ht04}, where players prefer to get the secret rather than not to get the secret and, with lower preference, prefer that as few of the other players as possible get the secret. Some positive results have been derived by Kol and Naor\cite{stoc08} by considering that players only prefer to learn. They proposed an efficient $m$-out-of-$n$ protocol for rational secret sharing without using cryptographic primitives. Their solution considers that players are of two types: one player is the short player and the rest of the players are long players. However, their protocol is susceptible to coalitions if the short player colludes with any of the long players. We extend their protocol and propose a completely collusion-free, $\varepsilon$-Nash equilibrium protocol when $n \geq 2m-1$, where $n$ is the number of players and $m$ is the number of shares needed to construct the secret.
2010
EPRINT
Combining leak--resistant arithmetic for elliptic curves defined over $\F_p$ and RNS representation
In this paper we combine the residue number system (RNS) representation and leak-resistant arithmetic on elliptic curves. These two techniques are relevant for implementations of elliptic curve cryptography on embedded devices. It is well known that RNS multiplication is very efficient whereas the reduction step is costly. Hence, we optimize formulae for basic operations arising in leak-resistant arithmetic on elliptic curves (unified addition, Montgomery ladder) in order to minimize the number of modular reductions. We also improve the complexity of the RNS modular reduction step. As a result, we show how to obtain a competitive secure implementation. Finally, we show that, contrary to other approaches, ours takes optimal advantage of a dedicated parallel architecture.
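A minimal sketch of the RNS idea, with toy moduli, is given below: an integer is represented by its residues modulo pairwise-coprime moduli, so multiplication acts channel-wise (hence in parallel), and conversion back uses the CRT; the paper's optimized reduction step is not reproduced.

```python
# Residue Number System (RNS) sketch: an integer is represented by its
# residues modulo pairwise-coprime moduli, so addition and multiplication act
# channel-wise (hence in parallel); conversion back uses the CRT. The moduli
# below are toy-sized and illustrative.
from math import prod

MODULI = (2**13 - 1, 2**17 - 1, 2**19 - 1)   # pairwise coprime (Mersenne primes)
M = prod(MODULI)

def to_rns(x: int) -> tuple:
    return tuple(x % m for m in MODULI)

def rns_mul(a: tuple, b: tuple) -> tuple:
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(residues: tuple) -> int:        # CRT reconstruction
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

a, b = 123456789, 987654
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % M
print("RNS multiply OK")
```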
2010
EPRINT
Comment on four two-party authentication protocols
In this paper, we analyze the protocols of Bindu et al., Goriparthi et al., Wang et al. and Hölbl et al. After analysis, we find that Bindu et al.'s protocol suffers from an insider attack if the smart card is lost; both Goriparthi et al.'s and Wang et al.'s protocols cannot withstand a denial-of-service attack on the password change phase, which makes the password invalid after the protocol run; and Hölbl et al.'s protocol is vulnerable to an insider attack, since a malicious legitimate user can deduce the KGC's secret key xs.
2010
EPRINT
Comments on five smart card based password authentication protocols
In this paper, we use the ten security requirements proposed by Liao et al. for smart card based authentication protocols to examine five recent works in this area. After analysis, we find that the protocols of Juang et al., Hsiang et al., Kim et al., and Li et al. all suffer from a password guessing attack if the smart card is lost, and that the protocol of Xu et al. suffers from an insider attack.
2010
EPRINT
Communication Efficient Perfectly Secure VSS and MPC in Asynchronous Networks with Optimal Resilience
Verifiable Secret Sharing (VSS) is a fundamental primitive used in many distributed cryptographic tasks, such as Multiparty Computation (MPC) and Byzantine Agreement (BA). It is a two-phase (sharing, reconstruction) protocol. The VSS and MPC protocols are carried out among n parties, where t out of n parties can be under the influence of a Byzantine (active) adversary having unbounded computing power. It is well known that protocols for perfectly secure VSS and perfectly secure MPC exist in an asynchronous network iff n \geq 4t+1. Hence, we call any perfectly secure VSS (MPC) protocol designed over an asynchronous network with n = 4t+1 an optimally resilient VSS (MPC) protocol. A secret is d-shared among the parties if there exists a random degree-d polynomial whose constant term is the secret and each honest party possesses a distinct point on the degree-d polynomial. Typically VSS is used as a primary tool to generate t-sharing of secret(s). In this paper, we present an optimally resilient, perfectly secure Asynchronous VSS (AVSS) protocol that can generate d-sharing of secrets for any d, where t \leq d \leq 2t. This is the first optimally resilient, perfectly secure AVSS of its kind in the literature. Specifically, our AVSS can generate d-sharing of \ell \geq 1 secrets from F concurrently, with a communication cost of O(\ell n^2 \log{|F|}) bits, where F is a finite field. In terms of communication complexity, the best known optimally resilient, perfectly secure AVSS is reported in [BH07]. The protocol of [BH07] can generate t-sharing of \ell secrets concurrently, with the same communication complexity as our AVSS. However, the AVSS of [BH07] and [BCG93] (the only known optimally resilient perfectly secure AVSS, other than [BH07]) do not generate d-sharing, for any d > t. Interpreting this in a different way, we may also say that our AVSS shares \ell(d+1-t) secrets simultaneously with a communication cost of O(\ell n^2 \log{|F|}) bits. Putting d=2t (the maximum value of d), we notice that the amortized cost of sharing a single secret using our AVSS is only O(n \log{|F|}) bits. This is a clear improvement over the AVSS of [BH07], whose amortized cost of sharing a single secret is O(n^2 \log{|F|}) bits. As an interesting application of our AVSS, we propose a new optimally resilient, perfectly secure Asynchronous Multiparty Computation (AMPC) protocol that communicates O(n^2 \log|F|) bits per multiplication gate. The best known optimally resilient perfectly secure AMPC is due to [BH07], which communicates O(n^3 \log|F|) bits per multiplication gate. Thus our AMPC improves the communication complexity of the best known AMPC of [BH07] by a factor of \Omega(n).
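As a hedged illustration of the d-sharing notion defined above (and not of the AVSS protocol itself), the following sketch shares a secret with a random degree-d polynomial and reconstructs it from d+1 shares by Lagrange interpolation over a toy prime field.

```python
# d-sharing as defined above: a random degree-d polynomial with the secret as
# constant term, each party holding one evaluation; any d+1 correct shares
# reconstruct the secret by Lagrange interpolation at 0. This sketches only the
# sharing semantics over a toy prime field, not the AVSS protocol itself.
import random

PRIME = 2**61 - 1   # a Mersenne prime, standing in for the field F

def share(secret: int, d: int, n: int):
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(d)]
    poly = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(i, poly(i)) for i in range(1, n + 1)]    # party i holds point (i, f(i))

def reconstruct(points) -> int:
    # Lagrange interpolation at x = 0 from d+1 shares.
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

n, t = 13, 3                 # n = 4t + 1 parties
d = 2 * t                    # the largest d the AVSS above supports is 2t
shares = share(secret=42, d=d, n=n)
assert reconstruct(shares[: d + 1]) == 42
print("d-sharing OK")
```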
2010
EPRINT
Commuting Signatures and Verifiable Encryption and an Application to Non-Interactively Delegatable Credentials
Verifiable encryption allows to encrypt a signature and prove that the plaintext is valid. We introduce a new primitive called commuting signature that extends verifiable encryption in multiple ways: a signer can encrypt both signature and message and prove validity; more importantly, given a ciphertext, a signer can create a verifiably encrypted signature on the encrypted message; thus signing and encrypting commute. We instantiate commuting signatures using the proof system by Groth and Sahai (EUROCRYPT '08) and the automorphic signatures by Fuchsbauer (ePrint report 2009/320). As an application, we give an instantiation of delegatable anonymous credentials, a powerful primitive introduced by Belenkiy et al. (CRYPTO '09). Our instantiation is arguably simpler than theirs and it is the first to provide non-interactive issuing and delegation, which is a standard requirement for non-anonymous credentials. Moreover, the size of our credentials and the cost of verification are less than half of those of the only previous construction, and efficiency of issuing and delegation is increased even more significantly. All our constructions are proved secure in the standard model.
2010
EPRINT
Compact hardware for computing the Tate pairing over 128-bit-security supersingular curves
This paper presents a novel method for designing compact yet efficient hardware implementations of the Tate pairing over supersingular curves in small characteristic. Since such curves are usually restricted to lower levels of security because of their bounded embedding degree, aiming for the recommended security of 128 bits implies considering them over very large finite fields. We however manage to mitigate this effect by considering curves over field extensions of moderately-composite degree, hence taking advantage of a much easier tower field arithmetic. This technique of course lowers the security on the curves, which are then vulnerable to Weil descent attacks, but a careful analysis allows us to maintain their security above the 128-bit threshold. As a proof of concept of the proposed method, we detail an FPGA accelerator for computing the Tate pairing on a supersingular curve over GF(3^(5*97)), which satisfies the 128-bit security target. On a mid-range Xilinx Virtex-4 FPGA, this accelerator computes the pairing in 2.2 ms while requiring no more than 4755 slices.
2010
EPRINT
Compact Implementations of BLAKE-32 and BLAKE-64 on FPGA
We propose compact architectures of the SHA-$3$ candidates BLAKE-32 and BLAKE-64 for several FPGA families. We harness the intrinsic parallelism of the algorithm to interleave the computation of four instances of the $G_i$ function. This approach allows us to design an Arithmetic and Logic Unit with four pipeline stages and to achieve high clock frequencies. With careful scheduling, we completely avoid pipeline bubbles. For the time being, the designs presented in this work are the most compact ones for any of the SHA-3 candidates. We show for instance that a fully autonomous implementation of BLAKE-32 on a Xilinx Virtex-5 device requires 56 slices and two memory blocks.
2010
EPRINT
Comparing Hardware Performance of Fourteen Round Two SHA-3 Candidates Using FPGAs
Performance in hardware has been demonstrated to be an important factor in the evaluation of candidates for cryptographic standards. Up to now, no consensus exists on how such an evaluation should be performed in order to make it fair, transparent, practical, and acceptable for the majority of the cryptographic community. In this report, we formulate a proposal for a fair and comprehensive evaluation methodology, and apply it to the comparison of hardware performance of 14 Round~2 SHA-3 candidates. The most important aspects of our methodology include the definition of clear performance metrics, the development of a uniform and practical interface, generation of multiple sets of results for several representative FPGA families from two major vendors, and the application of a simple procedure to convert multiple sets of results into a single ranking. The VHDL codes for 256 and 512-bit variants of all 14 SHA-3 Round 2 candidates and the old standard SHA-2 have been developed and thoroughly verified. These codes have been then used to evaluate the relative performance of all aforementioned algorithms using seven modern families of Field Programmable Gate Arrays (FPGAs) from two major vendors, Xilinx and Altera. All algorithms have been evaluated using four performance measures: the throughput to area ratio, throughput, area, and the execution time for short messages. Based on these results, the 14 Round 2 SHA-3 candidates have been divided into several groups depending on their overall performance in FPGAs.
2010
EPRINT
Composable Security Analysis of OS Services
We provide an analytical framework for analyzing basic integrity properties of file systems, namely the binding of files to filenames and writing capabilities. A salient feature of our modeling and analysis is that it is *composable*: In spite of the fact that we analyze the filesystem in isolation, security is guaranteed even when the file system operates as a component within an arbitrary, and potentially adversarial system. Such secure composability properties seem essential when trying to assert the security of large systems. Our results are obtained by adapting the *Universally Composable* (UC) security framework to the analysis of software systems. Originally developed for cryptographic protocols, the UC framework allows the analysis of simple components in isolation, and provides assurance that these components maintain their behavior when combined in a large system, potentially under adversarial conditions.
2010
EPRINT
Computationally Sound Verification of Source Code
Increasing attention has recently been given to the formal verification of the source code of cryptographic protocols. The standard approach is to use symbolic abstractions of cryptography that make the analysis amenable to automation. This leaves the possibility of attacks that exploit the mathematical properties of the cryptographic algorithms themselves. In this paper, we show how to conduct the protocol analysis on the source code level (F# in our case) in a computationally sound way, i.e., taking into account cryptographic security definitions. We build upon the prominent F7 verification framework (Bengtson et al., CSF 2008) which comprises a security type-checker for F# protocol implementations using symbolic idealizations and the concurrent lambda calculus RCF to model a core fragment of F#. To leverage this prior work, we give conditions under which symbolic security of RCF programs using cryptographic idealizations implies computational security of the same programs using cryptographic algorithms. Combined with F7, this yields a computationally sound, automated verification of F# code containing public-key encryptions and signatures. For the actual computational soundness proof, we use the CoSP framework (Backes, Hofheinz, and Unruh, CCS 2009). We thus inherit the modularity of CoSP, which allows for easily extending our proof to other cryptographic primitives.
2010
EPRINT
Computing genus 2 curves from invariants on the Hilbert moduli space
We give a new method for generating genus 2 curves over a finite field with a given number of points on the Jacobian of the curve. We define two new invariants for genus 2 curves as values of modular functions on the Hilbert moduli space and show how to compute them. We relate them to the usual three Igusa invariants on the Siegel moduli space and give an algorithm to construct curves using these new invariants. Our approach simplifies the complex analytic method for computing genus 2 curves for cryptography and reduces the amount of computation required.
2010
EPRINT
Concurrent composition in the bounded quantum storage model
We define the BQS-UC model, a variant of the UC model, that deals with protocols in the bounded quantum storage model. We present a statistically secure commitment protocol in the BQS-UC model that composes concurrently with other protocols and an (a-priori) polynomially-bounded number of instances of itself. Our protocol has an efficient simulator which is important if one wishes to compose our protocol with protocols that are only computationally secure. Combining our result with prior results, we get a statistically BQS-UC secure constant-round protocol for general two-party computation without the need for any setup assumption.
2010
EPRINT
Concurrent Knowledge Extraction in the Public-Key Model
Knowledge extraction is a fundamental notion, modeling machine possession of values (witnesses) in a computational complexity sense and enabling one to argue about the internal state of a party in a protocol without probing its internal secret state. However, when transactions are concurrent (e.g., over the Internet) with players possessing public keys (as is common in cryptography), assuring that entities ``know'' what they claim to know, where adversaries may be well coordinated across different transactions, turns out to be much more subtle and in need of re-examination. Here, we investigate how to formally treat knowledge possession by parties (with registered public keys) interacting over the Internet. Stated more technically, we look into the relative power of the notion of ``concurrent knowledge-extraction'' (CKE) in the concurrent zero-knowledge (CZK) bare public-key (BPK) model, where the statements being proven can be dynamically and adaptively chosen by the prover. We show that potential man-in-the-middle (MIM) attacks turn out to be a real security threat to existing natural protocols running concurrently in the public-key model, which motivates us to introduce and formalize the notion of CKE, along with clarification of various subtleties. Then, both generic (based on standard polynomial assumptions) and efficient (employing complexity leveraging in a novel way) implementations for NP are presented for constant-round (in particular, round-optimal) concurrently knowledge-extractable concurrent zero-knowledge (CZK-CKE) arguments in the BPK model. The efficient implementation can be further practically instantiated for specific number-theoretic languages.
2010
EPRINT
Constructing Verifiable Random Functions with Large Input Spaces
We present a family of verifiable random functions which are provably secure for exponentially-large input spaces under a non-interactive complexity assumption. Prior constructions required either an interactive complexity assumption or one that could tolerate a factor 2^n security loss for n-bit inputs. Our construction is practical and inspired by the pseudorandom functions of Naor and Reingold and the verifiable random functions of Lysyanskaya. Set in a bilinear group, where the Decisional Diffie-Hellman problem is easy to solve, we require the Decisional Diffie-Hellman Exponent assumption in the standard model, without a common reference string. Our core idea is to apply a simulation technique where the large space of VRF inputs is collapsed into a small (polynomial-size) input in the view of the reduction algorithm. This view, however, is information-theoretically hidden from the attacker. Since the input space is exponentially large, we can first apply a collision-resistant hash function to handle arbitrarily-large inputs.
2010
EPRINT
Construction of 1-Resilient Boolean Functions with Optimal Algebraic Immunity and Good Nonlinearity
This paper presents a construction for a class of 1-resilient Boolean functions with optimal algebraic immunity on an even number of variables by dividing them into two correlation classes, i.e. equivalence classes. From these classes, a nontrivial pair of functions has been found by applying the generating matrix. For small $n$ (e.g. $n=6$), some of these functions achieve almost optimal nonlinearity. Apart from their good nonlinearity, the functions reach Siegenthaler's \cite{Siegenthaler} upper bound on algebraic degree. Furthermore, a class of 1-resilient functions on any number $n>2$ of variables with at least sub-optimal algebraic immunity is provided.
2010
EPRINT
Construction of Balanced Boolean Functions with High Nonlinearity and Good Autocorrelation Properties
Boolean functions with high nonlinearity and good autocorrelation properties play an important role in the design of block ciphers and stream ciphers. In this paper, we give a method to construct balanced Boolean functions on $n$ variables, where $n\ge 10$ is an even integer, satisfying the strict avalanche criterion (SAC). Compared with the known balanced Boolean functions with the SAC property, the constructed functions possess the highest nonlinearity and the best global avalanche characteristics (GAC) property.
2010
EPRINT
Cooperative Provable Data Possession
Provable data possession (PDP) is a technique for ensuring the integrity of data in outsourced storage services. In this paper, we address the construction of efficient PDP schemes on hybrid clouds to support scalability of service and data migration, in which we consider the existence of multiple cloud service providers (CSPs) that cooperatively store and maintain the clients' data. The proposed PDP schemes include an interactive PDP (IPDP) scheme and a cooperative PDP (CPDP) scheme, adopting a zero-knowledge property and a three-layered index hierarchy, respectively. In particular, we present an efficient method for selecting the optimal number of sectors in each block to minimize the computation costs of clients and storage service providers. Our experiments show that verification requires a small, constant amount of overhead, which minimizes communication complexity.
2010
EPRINT
Correlated Product Security From Any One-Way Function and the New Notion of Decisional Correlated Product Security
It is well-known that the k-wise product of one-way functions remains one-way, but may no longer be one-way when the k inputs are correlated. At TCC 2009, Rosen and Segev introduced a new notion known as Correlated Product secure functions. These functions have the property that a k-wise product of them remains one-way even under correlated inputs. Rosen and Segev gave a construction of injective trapdoor functions which are correlated product secure from the existence of Lossy Trapdoor Functions (introduced by Peikert and Waters in STOC 2008). The first main result of this work shows the surprising fact that a family of correlated product secure functions can be constructed from any one-way function. Because correlated product secure functions are trivially one-way, this shows an equivalence between the existence of these two cryptographic primitives. In the second main result of this work, we consider a natural decisional variant of correlated product security. Roughly, a family of functions is Decisional Correlated Product (DCP) secure if $f_1(x_1),\ldots,f_k(x_1)$ is indistinguishable from $f_1(x_1),\ldots,f_k(x_k)$ when $x_1,\ldots,x_k$ are chosen uniformly at random. We argue that the notion of Decisional Correlated Product security is a very natural one. To this end, we draw a parallel from the Discrete Log Problem and the Decision Diffie-Hellman Problem to Correlated Product security and its decisional variant. This intuition gives very simple constructions of PRGs and IND-CPA encryption from DCP secure functions. Furthermore, we strengthen our first result by showing that the existence of DCP secure one-way functions is also equivalent to the existence of any one-way function. When considering DCP secure functions with trapdoors, we give a construction based on Lossy Trapdoor Functions, and show that any DCP secure function family with trapdoors satisfies the security requirements for Deterministic Encryption as defined by Bellare, Boldyreva and O'Neill in CRYPTO 2007. In fact, we also show that, definitionally, DCP secure functions with trapdoors are a strict subset of Deterministic Encryption functions, by exhibiting a Deterministic Encryption function which, according to the definition, is not a DCP secure function.
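To make the stated parallel with Decision Diffie-Hellman concrete, the following toy Python sketch (our own illustration, not code or parameters from the paper) samples the two DCP distributions for $k=2$ with exponentiation-based functions $f_i(x)=g_i^x$ in a small prime-order subgroup; telling the two distributions apart is precisely a DDH-style problem. All group parameters are hypothetical toy values.

```python
import random

# Toy illustration of the two Decisional Correlated Product (DCP) distributions
# for k = 2, using f_i(x) = g_i^x mod p.  Distinguishing them is a DDH-style
# problem.  All parameters are tiny, hypothetical values for illustration only.
p = 2039                       # safe prime: p = 2q + 1
q = (p - 1) // 2               # 1019, prime order of the quadratic-residue subgroup
g1 = pow(3, 2, p)              # 9 is a quadratic residue != 1, so it generates the subgroup
g2 = pow(g1, 123, p)           # a second, related base

def sample(correlated: bool):
    """Return (f1(x1), f2(x1)) if correlated, else (f1(x1), f2(x2))."""
    x1 = random.randrange(1, q)
    x2 = x1 if correlated else random.randrange(1, q)
    return pow(g1, x1, p), pow(g2, x2, p)

if __name__ == "__main__":
    print("correlated inputs:  ", sample(True))
    print("independent inputs: ", sample(False))
```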
2010
EPRINT
Correlation-Enhanced Power Analysis Collision Attack
Side-channel based collision attacks are a mostly disregarded alternative to DPA for analyzing unprotected implementations. The advent of strong countermeasures, such as masking, has made further research in collision attacks seem futile. In this work, we show that the principles of collision attacks can be adapted to efficiently break some masked hardware implementations of the AES which still have first-order leakage. The proposed attack breaks an AES implementation based on the corrected version of the masked S-box of Canright and Batina presented at ACNS 2008, which is supposed to be resistant against first-order attacks. It requires only six times the number of traces necessary for breaking a comparable unprotected implementation. At the same time, the presented attack has minimal requirements on the abilities and knowledge of an adversary. The attack requires no detailed knowledge about the design, nor does it require a training phase.
2010
EPRINT
CPA and CCA-Secure Encryption Systems that are not 2-Circular Secure
Traditional definitions of encryption guarantee security for plaintexts which can be derived by the adversary. In some settings, such as anonymous credential or disk encryption systems, one may need to reason about the security of messages potentially unknown to the adversary, such as secret keys encrypted in a self-loop or a cycle. A public-key cryptosystem is n-circular secure if it remains secure when the ciphertexts E(pk_1, sk_2), E(pk_2, sk_3), ... , E(pk_{n-1}, sk_n), E(pk_n, sk_1) are revealed, for independent key pairs. A natural question to ask is: what does it take to realize circular security in the standard model? Are all CPA-secure (or CCA-secure) cryptosystems also n-circular secure for n > 1? One way to resolve this question is to produce a CPA-secure (or CCA-secure) cryptosystem which is demonstrably insecure for key cycles larger than self-loops. Recently and independently, Acar, Belenkiy, Bellare and Cash provided a CPA-secure cryptosystem, under the SXDH assumption, that is not 2-circular secure. In this paper, we present a different CPA-secure counterexample (under SXDH) as well as the first CCA-secure counterexample (under SXDH and the existence of certain NIZK proof systems) for n > 1. Moreover, our 2-circular attacks recover the secret keys of both parties and thus exhibit a catastrophic failure of the system, whereas the attack in Acar et al. provides a test whereby the adversary can distinguish whether it is given a 2-cycle or two random ciphertexts. These negative results are an important step in answering deep questions about which attacks are prevented by commonly-used definitions and systems of encryption.
2010
EPRINT
Credential Authenticated Identification and Key Exchange
Secure two-party authentication and key exchange are fundamental problems. Traditionally, the parties authenticate each other by means of their identities, using a public-key infrastructure (PKI). However, this is not always feasible or desirable: an appropriate PKI may not be available, or the parties may want to remain anonymous, and not reveal their identities. To address these needs, we introduce the notions of credential-authenticated identification (CAID) and key exchange (CAKE), where the compatibility of the parties' \emph{credentials} is the criterion for authentication, rather than the parties' \emph{identities} relative to some PKI. We formalize CAID and CAKE in the universal composability (UC) framework, with natural ideal functionalities, and we give practical, modularly designed protocol realizations. We prove all our protocols UC-secure in the adaptive corruption model with erasures, assuming a common reference string (CRS). The proofs are based on standard cryptographic assumptions and do not rely on random oracles. CAKE includes password-authenticated key exchange (PAKE) as a special case, and we present two new PAKE protocols. The first one is interesting in that it uses completely different techniques than known practical PAKE protocols, and also achieves UC-security in the adaptive corruption model with erasures; the second one is the first practical PAKE protocol that provides a meaningful form of resilience against server compromise without relying on random oracles.
2010
EPRINT
Cryptanalysis and Improvement of A New Electronic Traveler’s Check Scheme Based on One-way Hash Function
Recently, Liaw et al. proposed a hash-based electronic traveler’s check system. They claimed that their scheme is secure. However, after analysis, we found that their scheme is vulnerable to key compromise impersonation and parallel session attacks. Furthermore, we improve their scheme to avoid such attacks.
2010
EPRINT
Cryptanalysis and Improvement of a New Gateway-Oriented Password-Based Authenticated Key Exchange Protocol
Abdalla et al. proposed the first gateway-oriented password-based authenticated key exchange (GPAKE) protocol. The security goal of GPAKE is to securely establish a session key between the client and the gateway with the help of the authentication server, without revealing any information about the password to the gateway. However, Byun et al. showed that the original GPAKE protocol was susceptible to an undetectable on-line dictionary attack by a malicious gateway. Recently, Abdalla et al. presented a new variant of the original GPAKE protocol to resist Byun et al.'s attack. In this letter, we show that the new GPAKE protocol is still vulnerable to another simple but powerful undetectable on-line dictionary attack. We then make a suggestion for improvement.
2010
EPRINT
Cryptanalysis of Libert-Vergnaud Proxy Re-encryption Scheme
In 2008, Libert and Vergnaud put forth a proxy re-encryption scheme (LV08 for short). Unlike some earlier PRE schemes, the LV08 scheme specifies a validity-checking process to guarantee that the received ciphertext is well-formed. In this paper, we point out that in every encryption scheme the received message being well-formed is a prerequisite for decryption; the underlying mechanism for keeping the communicated message well-formed is a separate issue. The authors overlooked this fact and proposed a cumbersome presentation. We simplify the LV08 scheme and show that its security level is the same as that of the scheme proposed by Ateniese et al. in 2005. Therefore, the LV08 scheme cannot ensure chosen-ciphertext security as claimed.
2010
EPRINT
Cryptanalysis of a DoS-resistant ID-based password authentication
Remote authentication is a method to authenticate remote users over an insecure communication channel. Password-based authentication schemes have been widely deployed to verify the legitimacy of remote users. Very recently, Hwang et al. proposed a DoS-resistant ID-based password authentication scheme using smart cards. In the current work, we are concerned with the password security of Hwang et al.’s scheme. We first show that their scheme is vulnerable to a password guessing attack in which an attacker exhaustively enumerates all possible passwords in an off-line manner to determine the correct one. We then show how to eliminate the security vulnerability of their scheme.
2010
EPRINT
Cryptanalysis of an Exquisite Mutual Authentication Scheme with Key Agreement Using Smart Card
The weaknesses of an exquisite authentication scheme based on smart cards and passwords proposed by Liao et al. [C. H. Liao, H. C. Chen, and C. T. Wang, An Exquisite Mutual Authentication Scheme with Key Agreement Using Smart Card, Informatica, Vol. 33, No. 2, 2009, 125-132.] are analyzed. Five kinds of weaknesses are presented in different scenarios. The analyses show that Liao et al.’s scheme is insecure for practical applications.
2010
EPRINT
Cryptanalysis of Cryptosystems Based on Noncommutative Skew Polynomials
We describe an attack on the family of Diffie-Hellman- and ElGamal-like cryptosystems recently presented at PQCrypto 2010. We show that the reference hard problem is not hard.
2010
EPRINT
Cryptanalysis of the Compression Function of SIMD
SIMD is one of the second-round candidates of the SHA-3 competition hosted by NIST. In this paper, we present some results on the compression function of SIMD 1.1 (the tweaked version) using the modular difference method. For SIMD-256, we give a free-start near-collision attack on the compression function reduced to 20 steps with complexity $2^{107}$. For SIMD-512, we give a free-start near-collision attack on the 24-step compression function with complexity $2^{208}$. Furthermore, we give a distinguishing attack on the full compression function of SIMD-512 with complexity $2^{398}$. Our attacks are also applicable to the final compression function of SIMD.
2010
EPRINT
Cryptanalysis of Two Efficient HIBE Schemes in the Standard Model
In Informatica 32 (2008), Ren and Gu proposed an anonymous hierarchical identity based encryption scheme based on the q-ABDHE problem with full security in the standard model. Later, at Indocrypt'08, they proposed another secure hierarchical identity based encryption scheme based on the q-TBDHE problem with full security in the standard model. They claimed that their schemes have short parameters, high efficiency and tight reductions. However, in this paper we give attacks showing that their schemes are not secure at all. Concretely, from any first-level private key, the adversary can easily derive a proper ``private key'' which can decrypt any ciphertext for the target identity. That is to say, one key generation query on any first-level identity, excluding the target's first-level identity, is enough to break their schemes.
2010
EPRINT
Cryptanalysis of XXTEA
XXTEA, or Corrected Block TEA, is a simple block cipher in Roger Needham and David Wheeler's TEA series of algorithms. We describe a chosen plaintext attack for XXTEA using about $2^{59}$ queries and negligible work.
2010
EPRINT
Cryptographic Agility and its Relation to Circular Encryption
We initiate a provable-security treatment of cryptographic \emph{agility}. A primitive (for example PRFs, authenticated encryption schemes or digital signatures) is agile when multiple, individually secure schemes can securely share the same key. We provide a surprising connection between two seemingly unrelated but challenging questions. The first, new to this paper, is whether wPRFs (weak-PRFs) are agile. The second, already posed several times in the literature, is whether every secure (IND-R) encryption scheme is secure when encrypting cycles. We resolve the second question in the negative and thereby the first as well. We go on to provide a comprehensive treatment of agility, with definitions for various different primitives. We explain the practical motivations for agility. We provide foundational results that show to what extent it is achievable and practical constructions to achieve it to the best extent possible. On the theoretical side our work uncovers new notions and relations and settles stated open questions, and on the practical side it serves to guide developers.
2010
EPRINT
Cryptographic Aspects of Real Hyperelliptic Curves
In this paper, we give an overview of cryptographic applications using real hyperelliptic curves. We review previously proposed cryptographic protocols, and discuss the infrastructure of a real hyperelliptic curve, the mathematical structure underlying all these protocols. We then describe recent improvements to infrastructure arithmetic, including explicit formulas for divisor arithmetic in genus 2; and advances in solving the infrastructure discrete logarithm problem, whose presumed intractability is the basis of security for the related cryptographic protocols.
2010
EPRINT
Cryptographic Extraction and Key Derivation: The HKDF Scheme
In spite of the central role of key derivation functions (KDF) in applied cryptography, there has been little formal work addressing the design and analysis of general multi-purpose KDFs. In practice, most KDFs (including those widely standardized) follow ad-hoc approaches that treat cryptographic hash functions as perfectly random functions. In this paper we close some gaps between theory and practice by contributing to the study and engineering of KDFs in several ways. We provide detailed rationale for the design of KDFs based on the extract-then-expand approach; we present the first general and rigorous definition of KDFs and their security which we base on the notion of computational extractors; we specify a concrete fully practical KDF based on the HMAC construction; and we provide an analysis of this construction based on the extraction and pseudorandom properties of HMAC. The resultant KDF design can support a large variety of KDF applications under suitable assumptions on the underlying hash function; particular attention and effort is devoted to minimizing these assumptions as much as possible for each usage scenario. Beyond the theoretical interest in modeling KDFs, this work is intended to address two important and timely needs of cryptographic applications: (i) providing a single hash-based KDF design that can be standardized for use in multiple and diverse applications, and (ii) providing a conservative, yet efficient, design that exercises much care in the way it utilizes a cryptographic hash function. (The HMAC-based scheme presented here, named HKDF, is being standardized by the IETF.)
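To illustrate the extract-then-expand approach in code, here is a minimal Python sketch built on HMAC from the standard library. It follows the commonly standardized HKDF layout (a single extraction step followed by an iterated expansion) rather than any detail taken from the paper itself, and the salt/info strings are placeholder values.

```python
import hashlib
import hmac

# Minimal extract-then-expand sketch in the HKDF style (assumed to follow the
# commonly standardized layout; not taken verbatim from the paper).
HASH = hashlib.sha256
HASH_LEN = HASH().digest_size  # 32 bytes for SHA-256

def extract(salt: bytes, ikm: bytes) -> bytes:
    """Extract step: condense the input keying material into a pseudorandom key."""
    if not salt:
        salt = b"\x00" * HASH_LEN
    return hmac.new(salt, ikm, HASH).digest()

def expand(prk: bytes, info: bytes, length: int) -> bytes:
    """Expand step: stretch the pseudorandom key into `length` bytes of output."""
    okm, block = b"", b""
    counter = 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), HASH).digest()
        okm += block
        counter += 1
    return okm[:length]

if __name__ == "__main__":
    prk = extract(b"example-salt", b"input keying material")
    key = expand(prk, b"application context", 42)
    print(key.hex())
```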
2010
EPRINT
Cryptographic Pairings Based on Elliptic Nets
In 2007, Stange proposed a novel method of computing the Tate pairing on an elliptic curve over a finite field. This method is based on elliptic nets, which are maps from $\mathbb{Z}^n$ to a ring that satisfy a certain recurrence relation. In this paper, we explicitly give formulae for computing some variants of the Tate pairing: Ate, Ate$_i$, R-Ate and Optimal pairings, based on elliptic nets. We also discuss their efficiency by using some experimental results.
2010
EPRINT
Cryptographic Role-based Security Mechanisms based on Role-Key Hierarchy
Even though role-based access control (RBAC) can tremendously help us minimize the complexity of administering users, it is still necessary to realize the notion of roles at the resource level. In this paper, we propose a practical cryptographic RBAC model, called the role-key hierarchy model, to support various security features, including signature, identification and encryption, based on a role-key hierarchy. With the help of the rich algebraic structure of elliptic curves, we introduce a role-based cryptosystem construction to verify the rationality and validity of our proposed model. Also, a proof-of-concept prototype implementation and performance evaluation are discussed to demonstrate the feasibility and efficiency of our mechanisms.
2010
EPRINT
Cryptography Against Continuous Memory Attacks
We say that a cryptographic scheme is Continuous Leakage-Resilient (CLR) if it allows users to refresh their secret keys, using only fresh local randomness, such that: 1. The scheme remains functional after any number of key refreshes, although the public key never changes. Thus, the “outside world” is neither affected by these key refreshes, nor needs to know about their frequency. 2. The scheme remains secure even if the adversary can continuously leak arbitrary information about the current secret-key of the system, as long as the amount of leaked information is bounded in between any two successive key refreshes. There is no bound on the total amount of information that can be leaked during the lifetime of the system. In this work, we construct a variety of practical CLR schemes, including CLR one-way relations, CLR signatures, CLR identification schemes, and CLR authenticated key agreement protocols. For each of the above, we give general constructions, and then show how to instantiate them efficiently using a well established assumption on bilinear groups, called the K-Linear assumption (for any constant K >= 1). Our constructions are highly modular, and we develop many interesting techniques and building-blocks along the way, including: leakage-indistinguishable re-randomizable relations, homomorphic NIZKs, and leakage-of-ciphertext non-malleable encryption schemes. Prior to our work, no “truly CLR” schemes were known, as previous leakage-resilient schemes suffer from one or more of the following drawbacks: (a) restrictions are placed on the type of allowed leakage, such as the axiom that “only computation leaks information”; (b) the overall amount of key leakage is bounded a-priori for the lifetime of the system and there is no method for refreshing keys; (c) the efficiency of the scheme degrades proportionally with the number of refreshes; (d) the key updates require an additional leak-free “master secret key” to be stored securely; (e) the scheme is only proven secure under a strong non-standard assumption.
2010
EPRINT
Cryptography Resilient to Continual Memory Leakage
In recent years, there has been a major effort to design cryptographic schemes that remain secure even if part of the secret key is leaked. This is due to a recent proliferation of side channel attacks which, through various physical means, can recover part of the secret key. We explore the possibility of achieving security even with continual leakage, i.e., even if some information is leaked each time the key is used. We show how to securely update a secret key while information is leaked: We construct schemes that remain secure even if an attacker, {\em at each time period}, can probe the entire memory (containing a secret key) and ``leak'' up to a $(1-o(1))$ fraction of the secret key. The attacker may also probe the memory during the updates, and leak $O(\log k)$ bits, where $k$ is the security parameter (relying on subexponential hardness allows $k^\epsilon$ bits of leakage during each update process). All of the above is achieved without restricting the model as is done in previous works (e.g. by assuming that ``only computation leaks information'' [Micali-Reyzin, TCC04]). Specifically, under the decisional linear assumption on bilinear groups (which allows for a leakage rate of $(1/2-o(1))$) or the symmetric external Diffie-Hellman assumption (which allows for a leakage rate of $(1-o(1))$), we achieve the above for public-key encryption, identity-based encryption, and signature schemes. Prior to this work, it was not known how to construct public-key encryption schemes even in the more restricted model of [MR]. The main contributions of this work are (1) showing how to securely update a secret key while information is leaked (in the more general model) and (2) giving public-key encryption (and IBE) schemes that are resilient to continual leakage.
2010
EPRINT
Cube Test Analysis of the Statistical Behavior of CubeHash and Skein
This work analyzes the statistical properties of the SHA-3 candidate cryptographic hash algorithms CubeHash and Skein to try to find nonrandom behavior. Cube tests were used to probe each algorithm's internal polynomial structure for a large number of choices of the polynomial input variables. The cube test data were calculated on a 40-core hybrid SMP cluster parallel computer. The cube test data were subjected to three statistical tests: balance, independence, and off-by-one. Although isolated statistical test failures were observed, the balance and off-by-one tests did not find nonrandom behavior overall in either CubeHash or Skein. However, the independence test did find nonrandom behavior overall in both CubeHash and Skein.
2010
EPRINT
CyclicRainbow - A multivariate Signature Scheme with a Partially Cyclic Public Key based on Rainbow
Multivariate Cryptography is one of the alternatives to guarantee the security of communication in the post-quantum world. One major drawback of such schemes is the huge size of their keys. In \cite{PB10} Petzoldt et al. proposed a way to reduce the public key size of the UOV scheme by a large factor. In this paper we extend this idea to the Rainbow signature scheme of Ding and Schmidt \cite{DS05}. With our construction it is possible to reduce the size of the public key by up to 62%.
2010
EPRINT
Decentralizing Attribute-Based Encryption
We propose a Multi-Authority Attribute-Based Encryption (ABE) system. In our system, any party can become an authority and there is no requirement for any global coordination other than the creation of an initial set of common reference parameters. A party can simply act as an ABE authority by creating a public key and issuing private keys to different users that reflect their attributes. A user can encrypt data in terms of any boolean formula over attributes issued from any chosen set of authorities. Finally, our system does not require any central authority. In constructing our system, our largest technical hurdle is to make it collusion resistant. Prior Attribute-Based Encryption systems achieved collusion resistance when the ABE system authority ``tied'' together different components (representing different attributes) of a user's private key by randomizing the key. However, in our system each component will come from a potentially different authority, where we assume no coordination between such authorities. We create new techniques to tie key components together and prevent collusion attacks between users with different global identifiers. We prove our system secure using the recent dual system encryption methodology where the security proof works by first converting the challenge ciphertexts and private keys to a semi-functional form and then arguing security. We follow a recent variant of the dual system proof technique due to Lewko and Waters and build our system using bilinear groups of composite order. We prove security under similar static assumptions to the LW paper in the random oracle model.
2010
EPRINT
Decoding square-free Goppa codes over $\F_p$
We propose a new, efficient decoding algorithm for square-free (irreducible or otherwise) Goppa codes over $\F_p$ for any prime $p$. If the code in question has degree $t$ and its average code distance is at least $(4/p)t + 1$, the proposed decoder can uniquely correct up to $(2/p)t$ errors with high probability. The correction capability is higher if the distribution of error magnitudes is not uniform, approaching or reaching $t$ errors when any particular error value occurs much more often than others or exclusively. This makes the method interesting for (semantically secure) cryptosystems based on the decoding problem for permuted and punctured Goppa codes.
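As a quick sanity check of these bounds, here is our own instantiation (not an example from the paper) at the two smallest primes, with $d$ denoting the average code distance; the binary case recovers the familiar $t$-error correction of binary Goppa codes.

```latex
% Instantiating the stated bounds (illustrative only).
\[
  p = 2:\quad d \ge \tfrac{4}{2}\,t + 1 = 2t + 1
  \;\Longrightarrow\; \text{up to } \tfrac{2}{2}\,t = t \text{ errors correctable;}
\]
\[
  p = 3:\quad d \ge \tfrac{4}{3}\,t + 1
  \;\Longrightarrow\; \text{up to } \tfrac{2}{3}\,t \text{ errors correctable (more if the error magnitudes are skewed).}
\]
```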
2010
EPRINT
Delaying Mismatched Field Multiplications in Pairing Computations
Miller's algorithm for computing pairings involves performing multiplications between elements that belong to different finite fields. Namely, elements in the full extension field $\mathbb{F}_{p^k}$ are multiplied by elements contained in proper subfields $\mathbb{F}_{p^{k/d}}$, and by elements in the base field $\mathbb{F}_{p}$. We show that significant speedups in pairing computations can be achieved by delaying these ``mismatched'' multiplications for an optimal number of iterations. Importantly, we show that our technique can be easily integrated into traditional pairing algorithms; implementers can exploit the computational savings herein by applying only minor changes to existing pairing code.
2010
EPRINT
Deterministic Encoding and Hashing to Odd Hyperelliptic Curves
In this paper we propose a very simple and efficient encoding function from F_q to points of a hyperelliptic curve over F_q of the form H: y^2=f(x) where f is an odd polynomial. Hyperelliptic curves of this type have been frequently considered in the literature to obtain Jacobians of good order and pairing-friendly curves. Our new encoding is nearly a bijection to the set of F_q-rational points on H. This makes it easy to construct well-behaved hash functions to the Jacobian J of H, as well as injective maps to J(F_q) which can be used to encode scalars for such applications as ElGamal encryption. The new encoding is already interesting in the genus 1 case, where it provides a well-behaved encoding to Joux's supersingular elliptic curves.
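The basic trick enabled by an odd $f$ can already be seen in the genus-1 case mentioned at the end of the abstract. The Python sketch below is our own toy illustration under the assumption $q \equiv 3 \pmod 4$ (so that $-1$ is a non-square); it is not claimed to be the paper's exact map. Since $f(-u) = -f(u)$, exactly one of $f(u)$ and $-f(u)$ is a square, so every $u$ yields a point on $y^2 = f(x)$.

```python
# Toy illustration of encoding into y^2 = f(x) with f odd, over F_q with
# q = 3 (mod 4) so that -1 is a non-square.  Field and polynomial are toy
# choices; this sketches the general idea only.
q = 1019                      # prime with q % 4 == 3

def f(x):                     # odd polynomial: f(-x) = -f(x) mod q
    return (x**3 + x) % q

def sqrt_mod(a):
    """Square root modulo q for q = 3 (mod 4); assumes a is a square."""
    return pow(a, (q + 1) // 4, q)

def encode(u):
    """Map u in F_q to a point (x, y) with y^2 = f(x)."""
    fu = f(u)
    if fu == 0:
        return (u % q, 0)
    if pow(fu, (q - 1) // 2, q) == 1:        # f(u) is a square
        return (u % q, sqrt_mod(fu))
    # otherwise -f(u) = f(-u) is a square, so (-u, sqrt(f(-u))) is a point
    return (-u % q, sqrt_mod(-fu % q))

if __name__ == "__main__":
    for u in range(1, 6):
        x, y = encode(u)
        assert (y * y - f(x)) % q == 0       # check the point lies on the curve
        print(u, "->", (x, y))
```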
2010
EPRINT
Differential and invertibility properties of BLAKE (full version)
BLAKE is a hash function selected by NIST as one of the 14 second round candidates for the SHA-3 Competition. In this paper, we follow a bottom-up approach to exhibit properties of BLAKE and of its building blocks: based on differential properties of the internal function G, we show that a round of BLAKE is a permutation on the message space, and present an efficient inversion algorithm. For 1.5 rounds we present an algorithm that finds preimages faster than in previous attacks. Discovered properties lead us to describe large classes of impossible differentials for two rounds of BLAKE’s internal permutation, and particular impossible differentials for five and six rounds, respectively for BLAKE-32 and BLAKE-64. Then, using a linear and rotation-free model, we describe near-collisions for four rounds of the compression function. Finally, we discuss the problem of establishing upper bounds on the probability of differential characteristics for BLAKE.
2010
EPRINT
Differential Cache Trace Attack Against CLEFIA
The paper presents a differential cache trace attack against CLEFIA, a $128$-bit block cipher designed by Sony Corporation. The attack shows that such ciphers based on generalized Feistel structures leak information about the secret key if the cache trace pattern is revealed to an adversary. The attack that we propose is a three-stage attack and reveals the entire key with $2^{43}$ CLEFIA encryptions. The attack is simulated on an Intel Core 2 Duo processor with a cache architecture with $32$-byte lines as the target platform.
2010
EPRINT
Differential Cryptanalysis of SMS4 Block Cipher
SMS4 is a 128-bit block cipher used in the WAPI standard for wireless networks in China. In this paper, we analyze the security of the SMS4 block cipher against differential cryptanalysis. First, we prove three theorems and one corollary that reflect relationships between 5- and 6-round SMS4. Next, using these relationships, we clarify the minimum number of differentially active S-boxes in 6-, 7- and 12-round SMS4, respectively. Finally, based on the above results, we present a family of about $2^{14}$ differential characteristics for 19-round SMS4, which leads to an attack on 23-round SMS4 with $2^{115}$ chosen plaintexts and $2^{124.3}$ encryptions. Our attack is the best known attack on SMS4 so far.
2010
EPRINT
Differential Fault Analysis on AES with 192 and 256-Bit Keys
This paper describes a differential fault analysis (DFA) on AES with 192 and 256-bit keys. We show a new attack in which both 192 and 256-bit keys are retrieved within a feasible computational time. In order to verify the proposed attack and estimate the calculation time, we implement the proposed attack using C code on a PC. As a result, we successfully recover the original 192-bit key using 3 pairs of correct and faulty ciphertexts within 5 minutes, and 256-bit key using 2 pairs of correct and faulty ciphertexts and 2 pairs of correct and faulty plaintexts within 10 minutes.
2010
EPRINT
Differential Fault Analysis on SMS4 Using a Single Fault
Differential Fault Analysis (DFA) attack is a powerful cryptanalytic technique that could be used to retrieve the secret key by exploiting computational errors in the encryption (decryption) procedure. In the present paper, we propose a new DFA attack on SMS4 using a single fault. We show that if a random byte fault is induced into either the second, third, or fourth word register at the input of the $28$-th round, the $128$-bit master key could be recovered with an exhaustive search of $22.11$ bits on average. The proposed attack makes use of the characteristic of the cipher's structure, the speciality of the diffusion layer, and the differential property of the S-box. Furthermore, it can be tailored to any block cipher employing a similar structure and an SPN-style round function as that of SMS4.
2010
EPRINT
Dismantling SecureMemory, CryptoMemory and CryptoRF
The Atmel chip families SecureMemory, CryptoMemory, and CryptoRF use a proprietary stream cipher to guarantee authenticity, confidentiality, and integrity. This paper describes the cipher in detail and points out several weaknesses. One is the fact that the three components of the cipher operate largely independently; another is that the intermediate output generated by two of those components is strongly correlated with the generated keystream. For SecureMemory, a single eavesdropped trace is enough to recover the secret key with probability 0.57 in 2^{39} cipher ticks. This is a factor of 2^{31.5} faster than a brute force attack. On a 2 GHz laptop, this takes around 10 minutes. With more traces, the secret key can be recovered with virtual certainty without significant additional cost in time. For CryptoMemory and CryptoRF, if one has 2640 traces it is possible to recover the key in 2^{52} cipher ticks, which is 2^{19} times faster than brute force. On a 50 machine cluster of 2 GHz quad-core machines this would take less than 2 days.
2010
EPRINT
Distinguisher for Shabal's Permutation Function
In this note we consider the Shabal permutation function $\mathcal{P}$ as a block cipher with input $A_p$,$B_p$ and key $C$,$M$ and describe a distinguisher with a data complexity of $2^{23}$ random inputs with a given difference. If the attacker can control one chosen bit of $B_p$, only $2^{21}$ inputs with a given difference are required on average. This distinguisher does not appear to lead directly to an attack on the full Shabal construction.
2010
EPRINT
Distinguishers for the Compression Function and Output Transformation of Hamsi-256
Hamsi is one of 14 remaining candidates in NIST's Hash Competition for the future hash standard SHA-3. Until now, little analysis has been published on its resistance to differential cryptanalysis, the main technique used to attack hash functions. We present a study of Hamsi's resistance to differential and higher-order differential cryptanalysis, with focus on the 256-bit version of Hamsi. Our main results are efficient distinguishers and near-collisions for its full (3-round) compression function, and distinguishers for its full (6-round) finalization function, indicating that Hamsi's building blocks do not behave ideally.
2010
EPRINT
Distinguishing Attacks on MAC/HMAC Based on A New Dedicated Compression Function Framework
Using the birthday attack, we first present a new distinguisher based on an inner partial collision. This distinguisher can be used to attack MAC/HMAC based on a dedicated compression function framework proposed at ChinaCrypt 2008, with $2^{16.5}$ data complexity and $2^{16.5}$ MAC queries. More importantly, the new distinguishing attack can be used to recover the secret key of NMAC with a data complexity of $2^{16.5}$.
2010
EPRINT
Distinguishing Properties of Higher Order Derivatives of Boolean Functions
Higher order differential cryptanalysis is based on the property of higher order derivatives of Boolean functions that the degree of a Boolean function can be reduced by at least 1 by taking a derivative of the function at any point. We define a \emph{fast point} as a point at which the degree can be reduced by at least 2. In this paper, we show that the fast points of an $n$-variable Boolean function form a linear subspace and that its dimension plus the algebraic degree of the function is at most $n$. We also show that non-trivial fast points exist in every $n$-variable Boolean function of degree $n-1$, every symmetric Boolean function of degree $d$ where $n \not\equiv d \pmod{2}$, and every quadratic Boolean function in an odd number of variables. Moreover, we show the property of fast points for $n$-variable Boolean functions of degree $n-2$.
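The fast-point notion is easy to check on small examples. The following brute-force Python sketch (our own toy code, not from the paper) computes algebraic degrees via the binary Möbius transform and searches for non-trivial fast points of a quadratic function in three variables, in line with the statement about quadratic functions in an odd number of variables.

```python
N = 3                                    # number of variables (odd, as in the statement)

def f(x0, x1, x2):                       # toy quadratic Boolean function, degree 2
    return (x0 & x1) ^ x2

def truth_table(func, n):
    return [func(*(((x >> i) & 1) for i in range(n))) for x in range(1 << n)]

def degree(tt, n):
    """Algebraic degree, computed via the binary Moebius transform (truth table -> ANF)."""
    c = list(tt)
    for i in range(n):
        for x in range(1 << n):
            if x & (1 << i):
                c[x] ^= c[x ^ (1 << i)]
    degs = [bin(m).count("1") for m in range(1 << n) if c[m]]
    return max(degs, default=-1)         # degree of the zero function taken as -1

tt = truth_table(f, N)
d = degree(tt, N)
print("deg f =", d)
for a in range(1, 1 << N):
    # derivative D_a f(x) = f(x) + f(x + a)
    da = [tt[x] ^ tt[x ^ a] for x in range(1 << N)]
    if degree(da, N) <= d - 2:
        print("fast point a =", format(a, "03b"), "deg D_a f =", degree(da, N))
```

Running this reports the single non-trivial fast point a = 100 (shifting only the linear variable), whose derivative is constant.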
2010
EPRINT
Double Ciphertext Mode : A Proposal for Secure Backup
Security of data stored in bulk storage devices like the hard disk has gained a lot of importance in recent days. Among the variety of paradigms which are available for disk encryption, low-level disk encryption is well accepted because of the high security guarantees it provides. In this paper we view the problem of disk encryption from a different direction. We explore how one can maintain secure backups of the data, such that loss of a physical device means neither loss of the data nor the fact that the data gets revealed to the adversary. We propose an efficient solution to this problem through a new cryptographic scheme which we call the double ciphertext mode (DCM). In this paper we describe the syntax of DCM, define security for it and give some efficient constructions. Moreover, we argue for the suitability of DCM for the secure backup application and also explore other application areas where a DCM can be useful.
2010
EPRINT
ECC2K-130 on Cell CPUs
This paper describes an implementation of Pollard's rho algorithm to compute the elliptic curve discrete logarithm for the Synergistic Processor Elements of the Cell Broadband Engine Architecture. Our implementation targets the elliptic curve discrete logarithm problem defined in the Certicom ECC2K-130 challenge. We compare a bitsliced implementation to a non-bitsliced implementation and describe several optimization techniques for both approaches. In particular, we address the question whether normal-basis or polynomial-basis representation of field elements leads to better performance. Using our software, the ECC2K-130 challenge can be solved in one year using the Synergistic Processor Units of less than 2700 Sony Playstation~3 gaming consoles.
2010
EPRINT
Effect of the Dependent Paths in Linear Hull
A linear hull is the phenomenon that there are many linear paths with the same data mask but different key masks for a block cipher. In 1994, Nyberg presented the effect of the linear hull on key-recovery attacks such as Algorithm 2, in which the required number of known plaintexts can be decreased compared with an attack using an individual linear path. In 2009, Murphy proved that Nyberg's results can only be used to give a lower bound on the data complexity and are of no use in actual linear cryptanalysis. In fact, the linear hull has this kind of positive effect in linear cryptanalysis only for some keys rather than for the whole key space, so the linear hull can be used to improve traditional linear cryptanalysis for some weak keys. In the same year, Ohkuma gave a linear hull analysis of the PRESENT block cipher and pointed out that $32\%$ of the keys of PRESENT are weak keys, for which the bias of a given linear hull with multiple paths is larger than that of any individual linear path. However, Murphy and Ohkuma did not consider the dependency of the multiple paths, and their results are based on the assumption that the linear paths are independent. Actually, most of the linear paths in a linear hull are dependent, and the dependency of the linear paths means dependency among the equivalent key bits. In this paper, we analyze the dependency of the linear paths in a linear hull and present the real effect of a linear hull with dependent linear paths. Firstly, we give the relation between the bias of a linear hull and that of its linear paths in linear cryptanalysis. Secondly, we present an algorithm to compute the rate of weak keys corresponding to the expected bias of the linear hull. Finally, we verify our algorithm by cryptanalyzing reduced-round PRESENT. Compared with the rate of weak keys under the assumption of independent linear paths, the dependency of the linear paths greatly reduces the rate of weak keys for a given linear hull.
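For reference, the fixed-key relation between the bias of a hull and the biases of its constituent paths that underlies this discussion is commonly written as follows, where $\varepsilon_i$ is the bias of the $i$-th path and $\kappa_i$ its key mask. This is the standard formulation from the linear-hull literature, not a formula taken from the paper; the paper's point is to analyze what happens when the key masks, and hence the signs below, are dependent.

```latex
% Standard fixed-key trail-summation relation (as commonly stated in the
% linear-hull literature); each sign depends on the key bits selected by the
% path's key mask.
\[
  \varepsilon_{\text{hull}}(K) \;=\; \sum_{i} (-1)^{\langle \kappa_i, K\rangle}\,\varepsilon_i ,
  \qquad
  \mathbb{E}_K\!\left[\varepsilon_{\text{hull}}(K)^2\right] \;=\; \sum_{i} \varepsilon_i^2
  \quad\text{(only if the signs are independent and uniform).}
\]
```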
2010
EPRINT
Efficiency-Improved Fully Simulatable Adaptive OT under the DDH Assumption
At Asiacrypt 2009, Kurosawa and Nojima showed a fully simulatable adaptive oblivious transfer (OT) protocol under the DDH assumption in the standard model. However, Green and Hohenberger pointed out that the communication cost of each transfer phase is O(n), where n is the number of the sender's messages. In this paper, we show that the cost can be reduced to O(1) by utilizing a verifiable shuffle protocol.
2010
EPRINT
Efficient Access Control of Sensitive Data Service in Outsourcing Scenarios
With the rapid adoption of service-oriented technologies, service and data outsourcing has become a practical and useful computing paradigm. Combined use of access control and cryptography has been proposed by many researchers to protect information in this outsourcing scenario. However, existing approaches often limit dynamic updates of the access control policy, or have security weaknesses in practical use. In this paper, we propose a new solution to realize efficient access control of sensitive data services in outsourcing scenarios by using a new re-encryption execution model. Our solution realizes selective access control, dynamic policy updating, simple key management, and collusion prevention between the outsourcee and customers. We also give some proofs for our implementation.
2010
EPRINT
Efficient and Provably Secure Identity Based Aggregate Signature Schemes With Partial and Full Aggregation
An identity based signature allows users to sign their documents using their private keys, and the signature can be verified by any user using the identity of the signer and the public parameters of the system. This allows secure communication between users without any exchange of certificates. An aggregate signature scheme is a digital signature scheme which allows the aggregation of different signatures by different users on different messages. An aggregate signature on $n$ messages $m_{i}$ by $n$ users $U_{i}$ convinces the verifier that each user $U_{i}$ has signed the corresponding message $m_{i}$. The primary objective of an aggregate signature scheme is to achieve both computational and communication efficiency. Here we discuss two identity based aggregate signature schemes. The first aggregate scheme, IBAS-1, uses a variation of a lightweight Schnorr-based signature. IBAS-1 does not involve any pairing operations in signature verification. IBAS-1 is computationally efficient since it avoids the costlier operations in elliptic curve groups (pairings). Also, because of the lightweight property of IBAS-1, it is well suited for practice. The second aggregate signature scheme, IBAS-2, which also has a Schnorr-type key construct, achieves full aggregation of signatures without agreeing on common randomness and without any kind of interaction among the signers. IBAS-2 achieves communication efficiency. However, the computational complexity of IBAS-2 is higher than that of IBAS-1 because it involves bilinear pairings.
2010
EPRINT
Efficient chaotic permutations for image encryption algorithms
Permutations are widely used in cryptographic algorithms. Recently, a number of candidate instructions have been proposed to efficiently compute arbitrary bit permutations. Among these, we present the most attractive methods, which have good inherent cryptographic properties. We propose to control them by the perturbed chaotic maps that we studied in [1]. Then, we measure the efficiency of the obtained chaotic permutation methods on a standard image. This study allows choosing a good chaotic permutation method to be used in a chaotic cryptosystem.
2010
EPRINT
Efficient Differential Fault Analysis for AES
This paper proposes improved post-analysis methods for Differential Fault Analysis (DFA) against AES. In detail, we propose three techniques to improve the attack efficiency: 1) combining previous DFA methods, 2) performing a divide-and-conquer attack by considering the AES key-schedule structure, and 3) taking the linearity of the MixColumns operation into account. As a result, the expected analysis time of the previous work can be reduced to about one sixteenth. Note that these improvements are based on a detailed analysis of the previous DFA methods and of the calculation time and memory cost in practical implementations. Moreover, the proposed techniques can be widely applied to DFA attacks under different assumptions.
2010
EPRINT
Efficient Generalized Signcryption Schemes
Generalized signcryption is a new cryptographic primitive which works as a signcryption scheme, a signature scheme or an encryption scheme as per need. Recently, Ji et al. proposed a security model for certificateless generalized signcryption schemes and also proposed a scheme which they claim is secure under the proposed security model. In this paper we show that Ji et al.'s scheme is not existentially unforgeable against a Type-I adversary, and we propose a simplified certificateless generalized signcryption scheme. We also present an efficient identity based generalized signcryption scheme.
2010
EPRINT
Efficient Implementation of Elliptic Curve Point Operations Using Binary Edwards Curves
This paper presents a deterministic algorithm for converting points on an ordinary elliptic curve (defined over a field of characteristic 2) to points on a complete binary Edwards curve. This avoids the problem of choosing curve parameters at random. When implemented on a large (512 bit) hardware multiplier, computation of point multiplication using this algorithm performs significantly better, in terms of code complexity, code coverage and timing, than the standard implementation. In addition, we propose a simple modification to the birational equivalence detailed in the paper by Bernstein et al. which both reduces the number of inversions required in the affine mapping and has fewer exceptional points. Finally, we compare software implementations using this efficient point multiplication for binary Edwards curves with computations on elliptic curves in Weierstrass form.
2010
EPRINT
Efficient Implementation of the Orlandi Protocol Extended Version
We present an efficient implementation of the Orlandi protocol, the first implementation of a protocol for multiparty computation on arithmetic circuits that is secure against up to $n-1$ static, active adversaries. An efficient implementation of an actively secure self-trust protocol enables a number of multiparty computation applications where one or more of the parties trust only themselves. Examples include auctions, negotiations, and online gaming. The efficiency of the implementation is largely obtained through an efficient implementation of the Paillier cryptosystem, also described in this paper.
2010
EPRINT
Efficient Online/Offline Identity-Based Signature for Wireless Sensor Network
In this paper, we present an \emph{online/offline identity-based signature} scheme for the wireless sensor network (WSN). We argue that due to significant reduction in computational and storage costs, our scheme is particularly suitable for the WSN environment with severely constrained resources. One of the interesting features of our scheme is that it provides \textit{multi-time} usage of the offline storage, which allows the signer to re-use the offline pre-computed information in polynomial time, in contrast to \textit{one-time} usage in all previous online/offline signature schemes. As evidence of the practicality and feasibility of our scheme to be used in the WSN environment, we provide an actual implementation result of our scheme on the MicaZ platform.
2010
EPRINT
Efficient Public-Key Cryptography in the Presence of Key Leakage
We study the design of cryptographic primitives resistant to a large class of side-channel attacks, called "memory attacks", where an attacker can repeatedly and adaptively learn information about the secret key, subject *only* to the constraint that the *overall amount* of such information is bounded by some parameter $\ell$. Although the study of such primitives was initiated only recently by Akavia et al. [AGV09], subsequent work already produced many such "leakage-resilient" primitives [NS09,ADW09,KV09], including signature, encryption, identification (ID) and authenticated key agreement (AKA) schemes. Unfortunately, every existing scheme, --- for any of the four fundamental primitives above, --- fails to satisfy at least one of the following desirable properties: - Efficiency. While the construction may be generic, it should have some *efficient* instantiations, based on standard cryptographic assumptions, and without relying on random oracles. - Strong Security. The construction should satisfy the strongest possible definition of security (even in the presence of leakage). For example, encryption schemes should be secure against chosen *ciphertext* attack (CCA), while signatures should be *existentially* unforgeable. - Leakage Flexibility. It should be possible to set the parameters of the schemes so that the leakage bound $\ell$ can come arbitrarily close to the size of the secret key $sk$. In this work we design the first signature, encryption, ID and AKA schemes which overcome these limitations, and satisfy all the properties above. Moreover, all our constructions are generic, in several cases elegantly simplifying and generalizing the prior constructions (which did not have any efficient instantiations). We also introduce several tools of independent interest, such as the abstraction (and constructions) of *simulation extractable* NIZK arguments, and a new *deniable* DH-based AKA protocol based on any CCA-secure encryption.
2010
EPRINT
Efficient Techniques for High-Speed Elliptic Curve Cryptography
In this paper, a thorough bottom-up optimization process (field, point and scalar arithmetic) is used to speed up the computation of elliptic curve point multiplication, and new speed records are reported on modern x86-64 based processors. Our different implementations include elliptic curves using Jacobian coordinates, extended Twisted Edwards coordinates and the recently proposed Galbraith-Lin-Scott (GLS) method. Compared to state-of-the-art implementations on identical platforms, the proposed techniques provide up to 30% speed improvements. Additionally, compared to the best previously published results on similar platforms, improvements of up to 31% are observed. This research is crucial for advancing high-speed cryptography on new emerging processor architectures.
2010
EPRINT
Elliptic Curve Discrete Logarithm Problem over Small Degree Extension Fields. Application to the static Diffie-Hellman problem on $E(\F_{q^5})$
In 2008 and 2009, Gaudry and Diem proposed an index calculus method for the resolution of the discrete logarithm on the group of points of an elliptic curve defined over a small degree extension field $\F_{q^n}$. In this paper, we study a variation of this index calculus method, improving the overall asymptotic complexity when $\log q \leq c n^3$. In particular, we are able to successfully obtain relations on $E(\F_{p^5})$, whereas the more expensive computational complexity of Gaudry and Diem's initial algorithm makes it impractical in this case. An important ingredient of this result is a new variation of Faugère's Gröbner basis algorithm F4, which significantly speeds up the relation computation and might be of independent interest. As an application, we show how this index calculus leads to a practical example of an oracle-assisted resolution of the elliptic curve static Diffie-Hellman problem over a finite field on $130$ bits, which is faster than birthday-based discrete logarithm computations on the same curve.
2010
EPRINT
Elliptic curves in Huff's model
This paper introduces the generalized Huff curves $x(ay^2-1)=y(bx^2-1)$, which contain Huff's model $ax(y^2-1)=by(x^2-1)$ as a special case. It is shown that every elliptic curve over a finite field with three points of order $2$ is isomorphic to a general Huff curve. Some fast explicit formulae for general Huff curves in projective coordinates are presented. These explicit formulae for addition and doubling are almost as fast in the general case as they are for the Huff curves in \cite{Joye}. Finally, the number of isomorphism classes of general Huff curves defined over the finite field $\mathbb{F}_q$ is enumerated.
2010
EPRINT
Embedded Extended Visual Cryptography Schemes
A visual cryptography scheme (VCS) is a kind of secret sharing scheme which allows the encoding of a secret image into n shares that are distributed to n participants. The beauty of such a scheme is that a set of qualified participants is able to recover the secret image without any cryptographic knowledge or computation devices. An extended visual cryptography scheme (EVCS) is a kind of VCS which consists of meaningful shares (compared to the random shares of traditional VCS). In this paper, we propose a construction of EVCS which is realized by embedding random shares into meaningful covering shares, and we call it the embedded extended visual cryptography scheme (embedded EVCS). Experimental results systematically compare some of the well-known EVCSs proposed in recent years and show that the proposed embedded EVCS has competitive visual quality compared with many of the well-known EVCSs in the literature. Besides, it has many specific advantages over each of these well-known EVCSs.
2010
EPRINT
Enhanced Security Notions for Dedicated-Key Hash Functions: Definitions and Relationships
In this paper, we revisit security notions for dedicated-key hash functions, considering two essential theoretical aspects; namely, formal definitions for security notions, and the relationships among them. Our contribution is twofold. First, we provide a new set of enhanced security notions for dedicated-key hash functions. The provision of this set of enhanced properties has been motivated by the introduction of enhanced target collision resistance (eTCR) property by Halevi and Krawczyk at Crypto 2006. We notice that the eTCR property does not belong to the set of the seven security notions previously investigated by Rogaway and Shrimpton at FSE 2004, namely: Coll, Sec, aSec, eSec, Pre, aPre and ePre. The fact that eTCR, as a new useful property, is the enhanced variant of the well-known TCR (a.k.a. eSec or UOWHF) property motivates one to investigate the possibility of providing enhanced variants for the other properties. We provide such an enhanced set of properties. Interestingly, there are six enhanced variants of security notions available, excluding ``ePre'' which can be demonstrated to be non-enhanceable. As the second and main part of our contribution, we provide a full picture of relationships (i.e. implications and separations) among the (thirteen) security properties including the (six) enhanced properties and the previously considered seven properties. The implications and separations are supported by formal proofs (reductions) and/or counterexamples in the concrete-security framework.
2010
EPRINT
Estimating the Security of Lattice-based Cryptosystems
Encryption and signature schemes based on worst-case lattice problems are promising candidates for the post-quantum era, where classic number-theoretic assumptions are rendered false. Although there have been many important results and breakthroughs in lattice cryptography, the questions of how to systematically evaluate their security in practice and how to choose secure parameters are still open. This is mainly due to the fact that most security proofs are essentially asymptotic statements. In addition, the hardness of the underlying complexity assumption is controlled by several interdependent parameters rather than just a simple bit length as in classic schemes. With our work, we close this gap by providing a handy framework that (1) distills a hardness estimate out of a given parameter set and (2) relates the complexity of practical lattice-based attacks to symmetric ``bit security'' for the first time. Our approach takes various security levels, or attacker types, into account. Moreover, we use it to predict long-term security in a similar fashion as the results that are collected on \url{www.keylength.com}. In contrast to the experiments by Gama and Nguyen (Eurocrypt 2008), our estimates are based on precisely the family of lattices that is relevant in cryptography. Our framework can be applied in two ways: Firstly, to assess the hardness of the (few) proposed parameter sets so far and secondly, to propose secure parameters in the first place. Our methodology is applicable to essentially all lattice-based schemes that are based on the learning with errors problem (LWE) or the small integer solution problem (SIS) and it allows us to compare efficiency and security across different schemes and even across different types of cryptographic primitives.
2010
EPRINT
Estimating the Size of the Image of Deterministic Hash Functions to Elliptic Curves
Let E be a non-supersingular elliptic curve over a finite field F_q. At CRYPTO 2009, Icart introduced a deterministic function F_q->E(F_q) which can be computed efficiently, and allowed him and Coron to define well-behaved hash functions with values in E(F_q). Some properties of this function rely on a conjecture which was left as an open problem in Icart's paper. We prove this conjecture as well as analogues for other hash functions. See also Farahashi, Shparlinski and Voloch, _On Hashing into Elliptic Curves_, for independent results of a similar form.
2010
EPRINT
Evaluation of Hardware Performance for the SHA-3 Candidates Using SASEBO-GII
As a result of extensive analyses of cryptographic hash functions, NIST started an open competition to select a new standard hash function, SHA-3. One important aspect of this competition is the evaluation of hardware implementations, which has attracted much attention from researchers in this area. For a fair comparison of hardware performance, we propose an evaluation platform, a hardware design strategy, and evaluation criteria that are consistent across all SHA-3 candidates. First, we define interface specifications for the SASEBO-GII platform that are suitable for evaluating performance in real-life hash applications, while one can also evaluate the performance of the SHA-3 core function with an ideal interface. Second, we discuss the design strategy for high-throughput hardware implementations. Lastly, we explain the evaluation criteria used to compare the cost and speed performance of eight of the fourteen SHA-3 candidates.
2010
EPRINT
Every Vote Counts: Ensuring Integrity in Large-Scale DRE-based Electronic Voting
The Direct Recording Electronic (DRE) system commonly uses touch-screen technology to directly record votes. It can provide several benefits in large-scale electronic voting, including usability, accessibility and efficiency. Unfortunately, a lack of tallying integrity in many existing products has largely discredited the entire approach along with its merits. To address this problem, we propose a cryptographic protocol called DRE-i, where i stands for integrity. We take a broad interpretation of the DRE, which includes not only touch-screen machines, as deployed at polling stations, but also remote voting systems conducted over the Internet or mobile phones. In all cases, the system records electronic votes directly, although the implementations are different. Our DRE-i protocol provides a drop-in solution to add integrity assurance to any DRE voting system without altering the voter's intuitive voting experience. It preserves election tallying integrity even if the DRE machine is completely corrupted, although in that case, vote secrecy will be compromised. The protocol requires a medium (e.g., an attached printer, email, or SMS) to which the DRE machine can write the commitment data. In addition, it requires a public bulletin board that everyone can read. Whilst past electronic voting protocols generally assume trusted computing or rely on trustees (i.e., tallying authorities), our proposal depends on neither. The protocol is self-tallying -- that is, anyone can tally the votes without involving tallying authorities at all.
2010
EPRINT
Evolutionary Cipher against Differential Power Attack
The DPA attack is one of the most threatening SCA attacks, and this paper focuses on DPA resistance. A DPA attack has two phases, collection and analysis, and different countermeasures can be built around either phase; balancing techniques, for example, target the analysis phase. We propose a new approach based on a dynamic-structure algorithm to resist DPA, which we call an evolutionary cipher; it can effectively resist DPA attacks by destroying the differential power computation model proposed by Kocher. Moreover, the evolutionary cipher opens up a new way to design secure cryptographic algorithms, since it can resist both DPA attacks and some mathematical attacks as well. The design principles of the evolutionary cipher can serve as a reference for other dynamic cryptographic algorithms. This paper demonstrates, both theoretically and practically, the security and effectiveness of the evolutionary cipher against DPA.
2010
EPRINT
Exponential Bounds for Information Leakage in Unknown-Message Side-Channel Attacks
In Backes & Kopf (2008), the authors introduced an important new information theoretic numerical measure for assessing a system's resistance to unknown-message side-channel attacks and computed a formula for the limit of the numerical values defined by this measure as the number of side-channel observations tends to infinity. Here, we present corresponding quantitative (exponential) bounds that yield an actual rate-of-convergence for this limit, something not given in Backes & Kopf (2008). Such rate-of-convergence results can potentially be used to significantly strengthen the utility of the limit formula of Backes & Kopf (2008) as a tool to reduce computational complexity difficulties associated with calculating the side-channel attack resistance measure presented there. In addition, our arguments here show how the arguments used in Backes & Kopf (2008) to prove the limit formula can be substantially simplified.
2010
EPRINT
Factorization of a 768-bit RSA modulus
This paper reports on the factorization of the 768-bit number RSA-768 by the number field sieve factoring method and discusses some implications for RSA.
2010
EPRINT
Factorization of RSA-180
We present a brief report on the factorization of RSA-180, currently the smallest unfactored RSA number. We show that numbers of similar size can be factored in a reasonable time at home using open-source factoring software running on a few Intel Core i7 PCs.
2010
EPRINT
Fair Blind Signatures without Random Oracles
A fair blind signature is a blind signature with revocable anonymity and unlinkability, i.e., an authority can link an issuing session to the resulting signature and trace a signature to the user who requested it. In this paper we first revisit the security model for fair blind signatures given by Hufschmitt and Traor\'e in 2007. We then give the first practical fair blind signature scheme with a security proof in the standard model. Our scheme satisfies a stronger variant of the Hufschmitt-Traor\'e model.
2010
EPRINT
Fast Exhaustive Search for Polynomial Systems in $F_2$
We analyze how fast we can solve general systems of multivariate equations of various low degrees over \GF{2}; this is a well-known hard problem which is important both in itself and as part of many types of algebraic cryptanalysis. Compared to the standard exhaustive-search technique, our improved approach is more efficient both asymptotically and practically. We implemented several optimized versions of our techniques on CPUs and GPUs. Modern graphics cards allow our technique to run more than 10 times faster than the most powerful CPU available. Today, we can solve 48+ quadratic equations in 48 binary variables on an NVIDIA GTX 295 video card (USD 500) in 21 minutes. With this level of performance, solving systems of equations supposed to ensure a security level of 64 bits turns out to be feasible in practice with a modest budget. This is a clear demonstration of the power of GPUs in solving many types of combinatorial and cryptanalytic problems.
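For illustration, the brute-force baseline that this line of work improves upon can be written in a few lines of Python. The toy solver below simply enumerates all assignments of a small system over GF(2); the paper's contribution is a far more efficient enumeration (and its GPU implementation), which this sketch does not attempt to reproduce.

# Toy brute-force solver for polynomial systems over GF(2).
# Illustrative only: names and the input encoding are hypothetical.
from itertools import product

def eval_poly(poly, x):
    """poly is a list of monomials; each monomial is a tuple of variable
    indices (the empty tuple is the constant 1). Evaluation is over GF(2)."""
    acc = 0
    for mono in poly:
        term = 1
        for i in mono:
            term &= x[i]
        acc ^= term
    return acc

def solve_gf2(system, n):
    """Return all assignments in {0,1}^n satisfying every polynomial."""
    return [x for x in product((0, 1), repeat=n)
            if all(eval_poly(p, x) == 0 for p in system)]

# Example: x0*x1 + x2 + 1 = 0 and x0 + x1*x2 = 0 in 3 variables.
system = [[(0, 1), (2,), ()], [(0,), (1, 2)]]
print(solve_gf2(system, 3))   # [(0, 0, 1)]

For 48 variables this naive loop is of course hopeless as written; the point of the paper is to organize exactly this enumeration so that each candidate costs only a few bit operations.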
2010
EPRINT
Faster Computation of Self-pairings
Self-pairings have found interesting applications in cryptographic schemes. In this paper, we present a novel method for constructing a self-pairing on supersingular elliptic curves with even embedding degrees, which we call the Ateil pairing. This new pairing improves the efficiency of the self-pairing computation on supersingular curves over finite fields of large characteristic. Based on the $\eta_T$ pairing, we propose a generalization of the Ateil pairing, which we call the Ateil$_i$ pairing. The optimal Ateil$_i$ pairing, which has the shortest Miller loop, is faster than previously known self-pairings on supersingular elliptic curves over finite fields of small characteristic. We also present a new self-pairing based on the Weil pairing which is faster than the self-pairing based on the Tate pairing on ordinary elliptic curves with embedding degree one.
2010
EPRINT
Faster Fully Homomorphic Encryption
We describe two improvements to Gentry's fully homomorphic scheme based on ideal lattices and its analysis: we provide a refined analysis of one of the hardness assumptions (the one related to the Sparse Subset Sum Problem) and we introduce a probabilistic decryption algorithm that can be implemented with an algebraic circuit of low multiplicative degree. Combined together, these improvements lead to a faster fully homomorphic scheme, with a~$\softO(\lambda^{3})$ bit complexity per elementary binary add/mult gate, where~$\lambda$ is the security parameter. These improvements also apply to the fully homomorphic schemes of Smart and Vercauteren [PKC'2010] and van Dijk et al. [Eurocrypt'2010].
2010
EPRINT
Fault Resistant RSA Signatures: Chinese Remaindering in Both Directions
Fault attacks are one of the most severe attacks against secure embedded cryptographic implementations. Block ciphers such as AES, DES or public key algorithms such as RSA can be broken with as few as a single or a handful of erroneous computation results. Many countermeasures have been proposed both at the algorithmic level and using ad-hoc methods. In this paper, we address the problem of finding efficient countermeasures for RSA signature computations based on the Chinese Remainder Theorem for which one uses the inverse operation (verification) in order to secure the algorithm against fault attacks. We propose new efficient methods with associated security proofs in two different models; our methods protect against run-time errors, computation errors, and most permanent errors in the key parameters as well. We also extend our methods with infective computation strategies to secure the algorithm against double faults.
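As context, the classical verification-based countermeasure that the abstract alludes to can be pictured as follows: sign with RSA-CRT, then check the result with the public exponent before releasing it. This is a generic textbook sketch with toy parameters and raw (unpadded) RSA, not the paper's own construction (which performs Chinese remaindering in both directions and comes with security proofs).

# Minimal sketch of RSA-CRT signing with a verification check before
# release; illustrative only, not the scheme proposed in the paper.

def crt_sign(m, p, q, d, n, e):
    dp, dq = d % (p - 1), d % (q - 1)
    qinv = pow(q, -1, p)                  # q^{-1} mod p (Python >= 3.8)
    sp = pow(m, dp, p)                    # half-size exponentiation mod p
    sq = pow(m, dq, q)                    # half-size exponentiation mod q
    s = (sq + q * ((qinv * (sp - sq)) % p)) % n   # CRT recombination
    if pow(s, e, n) != m % n:             # inverse operation: verify
        raise RuntimeError("fault detected; signature withheld")
    return s

# Toy example (parameters far too small for real use).
p, q, e = 1009, 1013, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))
print(crt_sign(123456, p, q, d, n, e))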
2010
EPRINT
Feasible Attack on the 13-round AES-256
In this note we present the first attack with feasible complexity on the 13-round AES-256. The attack runs in the related-subkey scenario with four related keys, in 2^{76} time, data, and memory.
2010
EPRINT
Finding discrete logarithms with a set orbit distinguisher
We consider finding discrete logarithms in a group $\GG$ when the help of an algorithm $D$ that distinguishes certain subsets of $\GG$ from each other is available. For a group $\GG$ of prime order $p$, if algorithm $D$ is polynomial-time with complexity $c(\log(p))$, we can find discrete logarithms faster than square-root algorithms. We consider two variations on this idea and give algorithms solving the discrete logarithm problem in $\GG$ with complexity ${\cal O}(p^{\frac{1}{3}}\log(p)^3 + p^{\frac{1}{3}}c(\log(p)))$ and ${\cal O}(p^{\frac{1}{4}}\log(p)^3 + p^{\frac{1}{4}}c(\log(p)))$ in the best cases. When multiple distinguishers are available logarithms can be found in polynomial time. We discuss natural classes of algorithms $D$ that distinguish the required subsets, and prove that for {\em some} of these classes no algorithm for distinguishing can be efficient. The subsets distinguished are also relevant in the study of error correcting codes, and we give an application of our work to bounds for error-correcting codes.
2010
EPRINT
First-Order Side-Channel Attacks on the Permutation Tables Countermeasure –Extended Version–
The use of random permutation tables as a side-channel attack countermeasure was recently proposed by Coron [6]. The countermeasure operates by ensuring that during the execution of an algorithm, each intermediate variable that is handled is in a permuted form described by the random permutation tables. In this paper, we examine the application of this countermeasure to the AES algorithm as described in [6], and show that certain operations admit first-order side-channel leakage. New side-channel attacks are developed to exploit these flaws, using correlation-based and mutual information-based methods. The attacks have been verified in simulation, and in practice on a smart card.
2010
EPRINT
Fixed Argument Pairings
A common scenario in many pairing-based cryptographic protocols is that one argument in the pairing is fixed as a long term secret key or a constant parameter in the system. In these situations, the runtime of Miller's algorithm can be significantly reduced by storing precomputed values that depend on the fixed argument, prior to the input or existence of the second argument. In light of recent developments in pairing computation, we show that the computation of the Miller loop can be sped up by up to 37% if precomputation is employed, with our method being up to 19.5% faster than the previous precomputation techniques.
2010
EPRINT
Flaws in Differential Cryptanalysis of Reduced Round PRESENT
In this paper, we present flaws in the differential cryptanalysis of reduced-round variants of PRESENT given by M. Wang in [3] [4] for the 80-bit key length, and we show that it is not possible to recover 32 subkey bits by differential cryptanalysis of 16-round PRESENT as claimed in [3] [4]. We also show that at most 30 subkey bits can be recovered by the attack given in [4] after some modifications to the algorithm presented in [3] [4].
2010
EPRINT
Founding Cryptography on Tamper-Proof Hardware Tokens
A number of works have investigated using tamper-proof hardware tokens as tools to achieve a variety of cryptographic tasks. In particular, Goldreich and Ostrovsky considered the goal of software protection via oblivious RAM. Goldwasser, Kalai, and Rothblum introduced the concept of \emph{one-time programs}: in a one-time program, an honest sender sends a set of {\em simple} hardware tokens to a (potentially malicious) receiver. The hardware tokens allow the receiver to execute a secret program specified by the sender's tokens exactly once (or, more generally, up to a fixed $t$ times). A recent line of work initiated by Katz examined the problem of achieving UC-secure computation using hardware tokens. Motivated by the goal of unifying and strengthening these previous notions, we consider the general question of basing secure computation on hardware tokens. We show that the following tasks, which cannot be realized in the ``plain'' model, become feasible if the parties are allowed to generate and exchange tamper-proof hardware tokens. Unconditional non-interactive secure computation: We show that by exchanging simple stateful hardware tokens, any functionality can be realized with unconditional security against malicious parties. In the case of two-party functionalities $f(x,y)$ which take their inputs from a sender and a receiver and deliver their output to the receiver, our protocol is non-interactive and only requires a unidirectional communication of simple stateful tokens from the sender to the receiver. This strengthens previous feasibility results for one-time programs both by providing unconditional security and by offering general protection against malicious senders. As is typically the case for unconditionally secure protocols, our protocol is in fact UC-secure. This improves over previous works on UC-secure computation based on hardware tokens, which provided computational security under cryptographic assumptions. Interactive secure computation from stateless tokens based on one-way functions: We show that stateless hardware tokens are sufficient to base general secure (in fact, UC-secure) computation on the existence of one-way functions. One cannot hope for security against unbounded adversaries with stateless tokens since an unbounded adversary could query the token multiple times to ``learn'' the functionality it contains. Non-interactive secure computation from stateless tokens: We consider the problem of designing non-interactive secure computation from stateless tokens for stateless oblivious reactive functionalities, i.e., reactive functionalities which allow unlimited queries from the receiver (these are the only functionalities one can hope to realize non-interactively with stateless tokens). By building on recent techniques from resettably secure computation, we give a general positive result for stateless oblivious reactive functionalities under standard cryptographic assumptions. This result generalizes the notion of (unlimited-use) obfuscation by providing security against a malicious sender, and also provides the first general feasibility result for program obfuscation using stateless tokens.
2010
EPRINT
From AES-128 to AES-192 and AES-256, How to Adapt Differential Fault Analysis Attacks
Since its announcement, AES has been subject to different DFA attacks. Most of these attacks target the AES with a 128-bit key. However, the two other variants are nowadays deployed in various applications and are also exposed to the same attack path. In this paper, we adapt the DFA techniques originally used on AES-128 in order to obtain the keys of AES-192 and AES-256. To illustrate this method, we propose efficient attacks on AES-192 and AES-256 based on a known DFA on the KeyExpansion.
2010
EPRINT
Fully Secure Anonymous HIBE and Secret-Key Anonymous IBE with Short Ciphertexts
Lewko and Waters [Eurocrypt 2010] presented a fully secure HIBE with short ciphertexts. In this paper we show how to modify their construction to achieve anonymity. We prove the security of our scheme under static (and generically secure) assumptions formulated in composite order bilinear groups. In addition, we present a fully secure Anonymous IBE in the secret-key setting. Secret-Key Anonymous IBE was implied by the work of [Shen-Shi-Waters - TCC 2009] which can be shown secure in the selective-id model. No previous fully secure construction of secret-key Anonymous IBE is known.
2010
EPRINT
Fully Secure Functional Encryption: Attribute-Based Encryption and (Hierarchical) Inner Product Encryption
In this paper, we present two fully secure functional encryption schemes. Our first result is a fully secure attribute-based encryption (ABE) scheme. Previous constructions of ABE were only proven to be selectively secure. We achieve full security by adapting the dual system encryption methodology recently introduced by Waters and previously leveraged to obtain fully secure IBE and HIBE systems. The primary challenge in applying dual system encryption to ABE is the richer structure of keys and ciphertexts. In an IBE or HIBE system, keys and ciphertexts are both associated with the same type of simple object: identities. In an ABE system, keys and ciphertexts are associated with more complex objects: attributes and access formulas. We use a novel information-theoretic argument to adapt the dual system encryption methodology to the more complicated structure of ABE systems. We construct our system in composite order bilinear groups, where the order is a product of three primes. We prove the security of our system from three static assumptions. Our ABE scheme supports arbitrary monotone access formulas. Our second result is a fully secure (attribute-hiding) predicate encryption (PE) scheme for inner-product predicates. As for ABE, previous constructions of such schemes were only proven to be selectively secure. Security is proven under a non-interactive assumption whose size does not depend on the number of queries. The scheme is comparably efficient to existing selectively secure schemes. We also present a fully secure hierarchical PE scheme under the same assumption. The key technique used to obtain these results is an elaborate combination of the dual system encryption methodology (adapted to the structure of inner product PE systems) and a new approach on bilinear pairings using the notion of dual pairing vector spaces (DPVS) proposed by Okamoto and Takashima.
2010
EPRINT
Fully Secure Identity-Based Encryption Without Random Oracles: A variant of Boneh-Boyen HIBE
We present an Identity-Based Encryption (IBE) scheme that is fully secure without random oracles and has several advantages over previous such schemes - namely, computational efficiency, shorter public parameters, and a simple assumption. The construction is remarkably simple and the security reduction is straightforward. We first give our CPA construction based on the decisional Bilinear Diffie-Hellman (BDH) problem, then achieve CCA security by employing a secure symmetric-key encryption algorithm. Additionally, we transform the CPA construction into a new signature scheme that is secure under the computational Diffie-Hellman assumption without random oracles.
2010
EPRINT
Further Improved Differential Fault Analysis on Camellia by Exploring Fault Width and Depth
In this paper, we present two further improved differential fault analysis methods on Camellia by exploring fault width and depth. Our first method broadens the fault width of previous Camellia attacks: it injects multiple byte faults into the r-th round left register to recover multiple bytes of the r-th round equivalent key, and obtains the Camellia-128 and Camellia-192/256 keys with at least 8 and 12 faulty ciphertexts respectively. Our second method extends the fault depth of previous Camellia attacks: it injects a one-byte fault into the (r-2)-th round left register to recover all 8 bytes of the r-th round equivalent key, 5-6 bytes of the (r-1)-th round equivalent key, and 1 byte of the (r-2)-th round equivalent key, and obtains the Camellia-128 and Camellia-192/256 keys with 4 and 6 faulty ciphertexts respectively. Simulation experiments demonstrate that, due to its reversible permutation function, Camellia is vulnerable to multiple-byte fault attacks and the attack efficiency increases with the fault width, which greatly improves the practicality of the fault attack; due to its Feistel structure, Camellia is also vulnerable to deep single-byte fault attacks, where 4 and 6 faulty ciphertexts are enough to reduce the Camellia-128 and Camellia-192/256 key hypotheses to 2^{22.2} and 2^{31.8} respectively.
2010
EPRINT
Garbled Circuits for Leakage-Resilience: Hardware Implementation and Evaluation of One-Time Programs
The power of side-channel leakage attacks on cryptographic implementations is evident. Today's practical defenses are typically attack-specific countermeasures against certain classes of side-channel attacks. The demand for a more general solution has given rise to the recent theoretical research that aims to build provably leakage-resilient cryptography. This direction is, however, very new and still largely lacks practitioners' evaluation with regard to both efficiency and practical security. A recent approach, One-Time Programs (OTPs), proposes using Yao's Garbled Circuit (GC) and very simple tamper-proof hardware to securely implement oblivious transfer, to guarantee leakage resilience. Our main contributions are (i) a generic architecture for using GC/OTP modularly, and (ii) hardware implementation and efficiency analysis of GC/OTP evaluation. We implemented two FPGA-based prototypes: a system-on-a-programmable-chip with access to hardware crypto accelerator (suitable for smartcards and future smartphones), and a stand-alone hardware implementation (suitable for ASIC design). We chose AES as a representative complex function for implementation and measurements. As a result of this work, we are able to understand, evaluate and improve the practicality of employing GC/OTP as a leakage-resistance approach. Last, but not least, we believe that our work contributes to bringing together the results of both theoretical and practical communities.
2010
EPRINT
Generating more Kawazoe-Takahashi Genus 2 Pairing-friendly Hyperelliptic Curves
Constructing pairing-friendly hyperelliptic curves with small $\rho$-values is one of the challenges for the practicality of pairing-friendly hyperelliptic curves. In this paper, we describe a method that extends the Kawazoe-Takahashi method of generating families of genus $2$ ordinary pairing-friendly hyperelliptic curves by parameterizing the parameters as polynomials. With this approach we construct genus $2$ ordinary pairing-friendly hyperelliptic curves with $2 <\rho \le 3$.
2010
EPRINT
Generic Collision Attacks on Narrow-pipe Hash Functions Faster than Birthday Paradox, Applicable to MDx, SHA-1, SHA-2, and SHA-3 Narrow-pipe Candidates
In this note we show a consequence of the recent observation that narrow-pipe hash designs manifest an aberration from ideal random functions: collisions for those functions can be found with complexities much lower than the so-called generic birthday-paradox lower bound. The problem is generic for narrow-pipe designs, including classic Merkle-Damgard designs but also recent narrow-pipe SHA-3 candidates. Our finding does not reduce the generic collision security of n/2 bits that narrow-pipe functions declare, but it clearly shows that narrow-pipe designs have the property that, when we count the calls to the hash function as a whole, the birthday-paradox bound of 2^{n/2} calls to the hash function is clearly broken. This is yet another property in a series of similar non-ideal random properties (like HMAC or PRF constructions) that narrow-pipe hash functions manifest and that are described in [1] and [2].
2010
EPRINT
Generic Constructions for Verifiably Encrypted Signatures without Random Oracles or NIZKs
Verifiably encrypted signature schemes (VES) allow a signer to encrypt his or her signature under the public key of a trusted third party, while maintaining public signature verifiability. With our work, we propose two generic constructions based on Merkle authentication trees that do not require non-interactive zero-knowledge proofs (NIZKs) for maintaining verifiability. Both are stateful and secure in the standard model. Furthermore, we extend the specification for VES, bringing it closer to real-world needs. We also argue that statefulness can be a feature in common business scenarios. Our constructions rely on the assumption that CPA (even slightly weaker) secure encryption, ``maskable'' CMA secure signatures, and collision resistant hash functions exist. ``Maskable'' means that a signature can be hidden in a verifiable way using a secret masking value. Unmasking the signature is hard without knowing the secret masking value. We show that our constructions can be instantiated with a broad range of efficient signature and encryption schemes, including two lattice-based primitives. Thus, VES schemes can be based on the hardness of worst-case lattice problems, making them secure against subexponential and quantum-computer attacks. Among others, we provide the first efficient pairing-free instantiation in the standard model.
2010
EPRINT
Genus 2 Curves with Complex Multiplication
Genus 2 curves are useful in cryptography for both discrete-log based and pairing-based systems, but a method is required to compute genus 2 curves whose Jacobian has a given number of points. Currently, all known methods involve constructing genus 2 curves with complex multiplication via computing their 3 Igusa class polynomials. These polynomials have rational coefficients and require extensive computation and precision to compute. Both the computation and the complexity analysis of these algorithms can be improved by a more precise understanding of the denominators of the coefficients of the polynomials. The main goal of this paper is to give a bound on the denominators of Igusa class polynomials of genus 2 curves with CM by a primitive quartic CM field $K$. We give an overview of Igusa's results on the moduli space of genus two curves and the method to construct genus 2 curves via their Igusa invariants. We also give a complete characterization of the reduction type of a CM abelian surface, for biquadratic, cyclic, and non-Galois quartic CM fields, and for any type of prime decomposition of the prime, including ramified primes.
2010
EPRINT
Golay Complementary Sequences Over the QAM Constellation
In this paper, we present new constructions for $M^{2}$-QAM and $2M$ $Q$-PAM Golay complementary sequences of length $2^n$ for integer $n$, where $M=2^{m}$ for integer $m$. New decision conditions are proposed to judge whether an offset pair can be used to construct Golay complementary sequences over the constellation, and with these new decision conditions we prove Conjecture 1 proposed by Ying Li~\cite{16}. We describe a new offset pair and construct new $64$-QAM Golay sequences based on it. We also study $128$-QAM Golay complementary sequences, and propose a new decision condition to judge whether sequences are $128$-QAM Golay complementary.
2010
EPRINT
Halving on Binary Edwards Curves
Edwards curves have attracted great interest for their efficient addition and doubling formulas. Furthermore, the addition formulas are strongly unified or even complete, i.e., they work without change for all inputs. In this paper, we propose the first halving algorithm on binary Edwards curves, which can be used for scalar multiplication. We present a point halving algorithm on binary Edwards curves in the case $d_1\neq d_2$. The halving algorithm costs about $3I+5M+4S$, which is slower than doubling. We also give a theorem proving that binary Edwards curves have no minimal two-torsion in the case $d_1= d_2$, and we briefly explain how to achieve point halving using an improved algorithm in this case. Finally, we apply our halving algorithm to scalar multiplication with the $\omega$-coordinate using the Montgomery ladder.
2010
EPRINT
Hash-based Multivariate Public Key Cryptosystems
Many efficient attacks have appeared in recent years, which have dealt a serious blow to traditional multivariate public key cryptosystems. For example, the signature scheme SFLASH was broken by Dubois et al. at CRYPTO'07, and the Square signature (or encryption) scheme by Billet et al. at ASIACRYPT'09. Most multivariate schemes known so far are insecure, except maybe the signature schemes UOV and HFEv-. Following these new developments, it seems that the general design principle of multivariate schemes has been seriously questioned, and there is a rather pressing desire to find new trapdoor constructions or mathematical tools and ideas. In this paper, we introduce hash authentication techniques and combine them with traditional MQ-trapdoors to propose a novel hash-based multivariate public key cryptosystem. The resulting scheme, called EMC (Extended Multivariate Cryptosystem), can also be seen as a novel hash-based cryptosystem, like the Merkle tree signature, and it offers double security protection for signing or encrypting. By our analysis, we can construct secure and efficient signature as well as encryption schemes by combining the EMC scheme with some modification methods summarized by Wolf. We thus present two new schemes: the EMC signature scheme (with the Minus method ``-") and the EMC encryption scheme (with the Plus method ``+"). In addition, we also propose a reduced variant of the EMC signature scheme (a light-weight signature scheme). Precise complexity estimates for these schemes are provided, but their security proofs in the random oracle model are still an open problem.
2010
EPRINT
Hashing into Hessian Curves
We describe a hashing function from the elements of the finite field $\F_q$ into points on a Hessian curve. Our function features uniform and smaller cardinalities for almost all fibers compared with the other known hashing functions for elliptic curves. Moreover, a point in the image set of the function is uniquely determined by its abscissa. For ordinary Hessian curves, the cardinality of the image set of the function is exactly $(q+i)/2$ for some $i=1,2,3$.
2010
EPRINT
Heraclitus: A LFSR-based Stream Cipher with Key Dependent Structure
We describe Heraclitus as an example of a stream cipher that uses a 128 bit index string to specify the structure of each instance in real time: each instance of Heraclitus will be a stream cipher based on mutually clocked shift registers. Ciphers with key-dependent structures have been investigated and are generally based on Feistel networks. Heraclitus, however, is based on mutually clocked shift registers. Ciphers of this type have been extensively analysed, and published attacks on them will be infeasible against any instance of Heraclitus. The speed and security of Heraclitus makes it suitable as a session cipher, that is, an instance is generated at key exchange and used for one session.
2010
EPRINT
High-Speed Software Implementation of the Optimal Ate Pairing over Barreto-Naehrig Curves
This paper describes the design of a fast software library for the computation of the optimal ate pairing on a Barreto--Naehrig elliptic curve. Our library is able to compute the optimal ate pairing over a $254$-bit prime field $\mathbb{F}_{p}$ in just $2.63$ million clock cycles on a single core of an Intel Core i7 $2.8$GHz processor, which implies that the pairing computation takes $0.942$ msec. We are able to achieve this performance by a careful implementation of the base field arithmetic through the use of the customary Montgomery multiplier for prime fields. The prime field is constructed via the Barreto--Naehrig polynomial parametrization of the prime $p$ given as $p = 36t^4 +36t^3 +24t^2 +6t+1$, with $t = 2^{62} - 2^{54} + 2^{44}$. This selection of $t$ allows us to obtain important savings for both the Miller loop as well as the final exponentiation steps of the optimal ate pairing.
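The quoted parametrization is easy to check directly. The snippet below (assuming the sympy package is available for the primality test) recomputes p from t and confirms its 254-bit size; the group-order polynomial is the standard Barreto-Naehrig companion formula, included for completeness rather than taken from the abstract.

# Recompute the Barreto-Naehrig prime from the parameters quoted above.
from sympy import isprime   # any primality test would do

t = 2**62 - 2**54 + 2**44
p = 36*t**4 + 36*t**3 + 24*t**2 + 6*t + 1
r = 36*t**4 + 36*t**3 + 18*t**2 + 6*t + 1   # standard BN group-order polynomial

print(p.bit_length())        # expected: 254, matching the abstract
print(isprime(p), isprime(r))  # both must be prime for a valid BN curve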
2010
EPRINT
Homomorphic Encryption Over Cyclic Groups Implies Chosen-Ciphertext Security
Chosen-Ciphertext (IND-CCA) security is generally considered the right notion of security for a cryptosystem. Because of its central importance much effort has been devoted to constructing IND-CCA secure cryptosystems. In this work, we consider the problem of constructing IND-CCA secure cryptosystems from (group) homomorphic encryption. Our main results are natural and efficient constructions of IND-CCA secure cryptosystems from any homomorphic encryption scheme that satisfies weak cyclic properties, either in the plaintext, ciphertext or randomness space. Our results have the added benefit of simple and elegant proofs.
2010
EPRINT
Homomorphic One-Way Function Trees and Application in Collusion-Free Multicast Key Distribution
Efficient multicast key distribution (MKD) is essential for secure multicast communications. Although Sherman et al. claimed that their MKD scheme — OFT (One-way Function Tree) achieves both perfect forward and backward secrecy, several types of collusion attacks on it still have been found. Solutions to prevent these attacks have also been proposed, but at the cost of a higher communication overhead. In this paper, we prove falsity of a recently-proposed necessary and sufficient condition for existence of collusion attack on the OFT scheme by a counterexample and give a new necessary and sufficient condition for nonexistence of any type of collusion attack on it. We extend the notion of OFT to obtain a new type of cryptographic construction — homomorphic one-way function tree (HOFT). We propose two graph operations on HOFTs, tree product as well as tree blinding, and prove that both are structure-preserving. We provide algorithms for adding/removing leaf nodes in a HOFT by performing a tree product of the HOFT and a corresponding incremental tree. Employing HOFTs and related algorithms, we provide a collusion-free MKD scheme, which has not only the same leave-rekeying communication efficiency as the original OFT scheme, but also even better join-rekeying communication efficiency.
2010
EPRINT
Homomorphic Signatures over Binary Fields: Secure Network Coding with Small Coefficients
We propose a new signature scheme that can be used to authenticate data and prevent pollution attacks in networks that use network coding. At its core, our system is a homomorphic signature scheme that authenticates vector subspaces of a given ambient space. Our system has several novel properties not found in previous proposals: - It is the first such scheme that authenticates vectors defined over *binary fields*; previous proposals could only authenticate vectors with large or growing coefficients. - It is the first such scheme based on the problem of finding short vectors in integer lattices, and thus enjoys the worst-case security guarantees common to lattice-based cryptosystems. Security of our scheme (in the random oracle model) is based on a new hard problem on lattices, called k-SIS, that reduces to standard average-case and worst-case lattice problems. Our construction gives an example of a cryptographic primitive -- homomorphic signatures over F_2 -- that can be built using lattice methods, but cannot currently be built using bilinear maps or other traditional algebraic methods based on factoring or discrete-log type problems.
2010
EPRINT
Horizontal Correlation Analysis on Exponentiation
Power Analysis has been widely studied since Kocher et al. presented in 1998 the initial Simple and Differential Power Analysis (SPA and DPA). Correlation Power Analysis (CPA) is nowadays one of the most powerful techniques which requires, as classical DPA, many execution curves for recovering secrets. We introduce in this paper a technique in which we apply correlation analysis using only one execution power curve during an exponentiation to recover the whole secret exponent manipulated by the chip. As in the Big Mac attack from Walter, longer keys may facilitate this analysis and success will depend on the chip arithmetic characteristics. We present the theory of the attack with some practical successful results on an embedded device and analyze the efficiency of classical countermeasures with respect to our attack. Our technique, which uses a single exponentiation curve, cannot be prevented by exponent blinding. Also, contrarily to the Big Mac attack, it applies even in the case of regular implementations such as the square and multiply always or the Montgomery ladder. We also point out that DSA and Diffie-Hellman schemes are no longer immune against CPA. Then we discuss the efficiency of known countermeasures, and we finally present some new ones.
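To fix ideas, the correlation step on a single curve can be mimicked on simulated data: segments of one trace are correlated against a Hamming-weight leakage model. Everything below (the leakage model, the noise level, the word size) is an illustrative assumption and not the attack described in the paper.

# Toy horizontal CPA: correlate a Hamming-weight model against the
# per-operation segments of a single simulated power curve.
import numpy as np

rng = np.random.default_rng(0)
hw = lambda x: bin(int(x)).count("1")          # Hamming weight

# Simulate one exponentiation trace: one leakage sample per word
# multiplication, leaking HW(intermediate word) plus Gaussian noise.
words = rng.integers(0, 2**16, size=512)        # intermediate words
trace = np.array([hw(w) for w in words]) + rng.normal(0, 1.0, size=512)

# The correct hypothesis (HW of the true words) against a wrong one
# (HW of unrelated words).
model_good = np.array([hw(w) for w in words], dtype=float)
model_bad = np.array([hw(w) for w in rng.integers(0, 2**16, size=512)],
                     dtype=float)

corr = lambda a, b: np.corrcoef(a, b)[0, 1]     # Pearson correlation
print("correct hypothesis :", round(corr(trace, model_good), 3))
print("wrong hypothesis   :", round(corr(trace, model_bad), 3))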
2010
EPRINT
How to Construct Space Efficient Revocable IBE from Non-monotonic ABE
Since in practice there always exist some users whose private keys are stolen or expired, it is important for an identity based encryption (IBE) system to provide a solution for revocation. The currently most efficient revocable IBE system has a private key of size $\mathcal{O}(\log n)$ and update information of size $\mathcal{O}(r \log(\frac{n}{r}))$ where $r$ is the number of revoked users. We describe a new revocable IBE system where the private key only contains two group elements and the update information size is $\mathcal{O}(r)$. To the best of our knowledge, the proposed constructions serve as the most efficient revocable IBE constructions in terms of space cost. Besides, this construction also provides a generic methodology to transform a non-monotonic attribute based encryption scheme into a revocable IBE scheme. This paper also demonstrates how the proposed method can be employed to present an efficient revocable hierarchical IBE scheme.
2010
EPRINT
How to Tell if Your Cloud Files Are Vulnerable to Drive Crashes
This paper presents a new challenge---verifying that a remote server is storing a file in a fault-tolerant manner, i.e., such that it can survive hard-drive failures. We describe an approach called the Remote Assessment of Fault Tolerance (RAFT). The key technique in a RAFT is to measure the time taken for a server to respond to a read request for a collection of file blocks. The larger the number of hard drives across which a file is distributed, the faster the read-request response. Erasure codes also play an important role in our solution. We describe a theoretical framework for RAFTs and show experimentally that RAFTs can work in practice.
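The measurement at the heart of a RAFT is simply the latency of a batch of block reads; a minimal local sketch (the file path, block size and block count are placeholders) could look like the following. A real RAFT issues the reads remotely and compares the observed latency against the response-time profile expected for the claimed number of drives.

# Minimal sketch of the RAFT measurement step: time a server-side read
# of a random collection of file blocks. Parameters are placeholders.
import os, random, time

def time_block_reads(path, block_size=4096, num_blocks=64, seed=None):
    random.seed(seed)
    size = os.path.getsize(path)          # assumes size > block_size
    offsets = [random.randrange(0, size - block_size) for _ in range(num_blocks)]
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(block_size)
    return time.perf_counter() - start

# A verifier would compare this latency against the profile expected when
# the file is spread over the claimed number of drives.
# print(time_block_reads("/path/to/stored_file"))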
2010
EPRINT
Huff's Model for Elliptic Curves
This paper revisits a model for elliptic curves over Q introduced by Huff in 1948 to study a diophantine problem. Huff's model readily extends over fields of odd characteristic. Every elliptic curve over such a field and containing a copy of Z/4Z×Z/2Z is birationally equivalent to a Huff curve over the original field. This paper extends and generalizes Huff's model. It presents fast explicit formulas for point addition and doubling on Huff curves. It also addresses the problem of the efficient evaluation of pairings over Huff curves. Remarkably, the formulas we obtain feature some useful properties, including completeness and independence of the curve parameters.
2010
EPRINT
i-Hop Homomorphic Encryption and Rerandomizable Yao Circuits
Homomorphic encryption (HE) schemes enable computing functions on encrypted data, by means of a public $\Eval$ procedure that can be applied to ciphertexts. But the evaluated ciphertexts so generated may differ from freshly encrypted ones. This brings up the question of whether one can keep computing on evaluated ciphertexts. An \emph{$i$-hop} homomorphic encryption scheme is one where $\Eval$ can be called on its own output up to $i$~times, while still being able to decrypt the result. A \emph{multi-hop} homomorphic encryption is a scheme which is $i$-hop for all~$i$. In this work we study $i$-hop and multi-hop schemes in conjunction with the properties of function-privacy (i.e., $\Eval$'s output hides the function) and compactness (i.e., the output of $\Eval$ is short). We provide formal definitions and describe several constructions. First, we observe that "bootstrapping" techniques can be used to convert any (1-hop) homomorphic encryption scheme into an $i$-hop scheme for any~$i$, and the result inherits the function-privacy and/or compactness of the underlying scheme. However, if the underlying scheme is not compact (such as schemes derived from Yao circuits) then the complexity of the resulting $i$-hop scheme can be as high as $k^{O(i)}$. We then describe a specific DDH-based multi-hop homomorphic encryption scheme that does not suffer from this exponential blowup. Although not compact, this scheme has complexity linear in the size of the composed function, independently of the number of hops. The main technical ingredient in this solution is a \emph{re-randomizable} variant of the Yao circuits. Namely, given a garbled circuit, anyone can re-garble it in such a way that even the party that generated the original garbled circuit cannot recognize it. This construction may be of independent interest.
2010
EPRINT
Ideal Key Derivation and Encryption in Simulation-based Security
Many real-world protocols, such as SSL/TLS, SSH, IPsec, IEEE 802.11i, DNSSEC, and Kerberos, derive new keys from other keys. To be able to analyze such protocols in a composable way, in this paper we extend an ideal functionality for symmetric and public-key encryption proposed in previous work by a mechanism for key derivation. We also equip this functionality with message authentication codes (MACs) and ideal nonce generation. We show that the resulting ideal functionality can be realized based on standard cryptographic assumptions and constructions, hence, providing a solid foundation for faithful, composable cryptographic analysis of real-world security protocols. Based on this new functionality, we identify sufficient criteria for protocols to provide universally composable key exchange and secure channels. Since these criteria are based on the new ideal functionality, checking the criteria requires merely information-theoretic or even only syntactical arguments, rather than involved reduction arguments. As a case study, we use our method to analyze two central protocols of the IEEE 802.11i standard, namely the 4-Way Handshake Protocol and the CCM Protocol, proving composable security properties. To the best of our knowledge, this constitutes the first rigorous cryptographic analysis of these protocols.
2010
EPRINT
Identity Based Online/Offline Encryption Scheme
Consider the situation where a low power device with limited computational power has to perform cryptographic operations in order to communicate securely with a base station where computational power is not limited. The most obvious way is to split each cryptographic operation into resource-consuming, heavy operations (which are performed when the device is idle) and fast, lightweight operations (which are executed on the fly). This concept is called online/offline cryptography. In this paper, we show the security weakness of an identity based online/offline encryption scheme proposed at ACNS 09 by Liu et al. \cite{LiuZ09}. The scheme in \cite{LiuZ09} is the first identity based online/offline encryption scheme in the random oracle model in which the message and recipient are not known during the offline phase. We show that this scheme is not CCA secure. We also show a weakness in the security proof of the CCA secure online/offline encryption system proposed by Chow et al. in \cite{Chow10}. We propose a new provably secure identity based online/offline encryption scheme in which the message and receiver are not known during the offline phase. Since all existing CCA secure identity based online/offline encryption schemes are shown to have weaknesses, ours is the first provably secure scheme with the aforementioned properties.
2010
EPRINT
Identity Based Online/Offline Signcryption Scheme
Online/offline signcryption is a cryptographic primitive where the signcryption process is divided into two phases - an online and an offline phase. Most of the computations are carried out offline (where the message and the receiver identity are unavailable). The online phase does not require any heavy computations like pairing or multiplication on elliptic curves and is very efficient. To the best of our knowledge there exist three online/offline signcryption schemes in the literature; we propose various attacks on all of them. Then, we give the first efficient and provably secure identity based online/offline signcryption scheme. We formally prove the security of the new scheme in the random oracle model \cite{BellareR93}. The main advantage of the new scheme is that it does not require the knowledge of the message or receiver during the offline phase. This property is very useful since it is not required to pre-compute offline signcryptions for different anticipated receivers during the offline phase. Hence, any value generated during the offline phase can be used online to signcrypt the message to any receiver. This helps in reducing the number of values stored during the offline phase. To the best of our knowledge, the scheme in this paper is the first provably secure scheme with this property.
2010
EPRINT
Identity Based Public Verifiable Signcryption Scheme
Signcryption as a single cryptographic primitive offers both confidentiality and authentication simultaneously. Generally in signcryption schemes, the message is hidden and thus the validity of the ciphertext can be verified only after unsigncrypting the ciphertext. Thus, a third party will not be able to verify whether the ciphertext is valid or not. Signcryption schemes that allow any user to verify the validity of the ciphertext without the knowledge of the message are called public verifiable signcryption schemes. Third Party verifiable signcryption schemes allow the receiver to convince a third party, by providing some additional information along with the signcryption other than his private key with/without exposing the message. In this paper, we show the security weaknesses in three existing schemes \cite{BaoD98}, \cite{TsoOO08} and \cite{ChowYHC03}. The schemes in \cite{BaoD98} and \cite{TsoOO08} are in the Public Key Infrastructure (PKI) setting and the scheme in \cite{ChowYHC03} is in the identity based setting. More specifically, \cite{TsoOO08} is based on elliptic curve digital signature algorithm (ECDSA). We also, provide a new identity based signcryption scheme that provides public verifiability and third party verification. We formally prove the security of the newly proposed scheme in the random oracle model.
2010
EPRINT
Identity Based Self Delegated Signature - Self Proxy Signatures
A proxy signature scheme is a variant of digital signature scheme in which a signer delegates his signing rights to another person called the proxy signer, so that the proxy signer can generate the signature of the actual signer in his absence. Self Proxy Signature (SPS) is a type of proxy signature wherein the original signer delegates the signing rights to himself (self delegation), thereby generating temporary public and private key pairs for himself. Thus, in SPS the user can prevent the exposure of his private key from repeated use. In this paper, we propose the first identity based self proxy signature scheme. We give a generic scheme and a concrete instantiation in the identity based setting. We define the appropriate security model for the same and prove both the generic and identity based schemes secure in the defined security model.
2010
EPRINT
Identity-Based Authenticated Asymmetric Group Key Agreement Protocol
In identity-based public-key cryptography, an entity's public key can be easily derived from its identity. The direct derivation of public keys in identity-based public-key cryptography eliminates the need for certificates and solves certain public key management problems in traditional public-key cryptosystems. Recently, the notion of asymmetric group key agreement was introduced, in which the group members merely negotiate a common encryption key which is accessible to any entity, but they hold respective secret decryption keys. In this paper, we first propose a security model for identity-based authenticated asymmetric group key agreement (IB-AAGKA) protocols. We then propose an IB-AAGKA protocol which is proven secure under the Bilinear Diffie-Hellman Exponent assumption. Our protocol is also efficient, and readily adaptable to provide broadcast encryption.
2010
EPRINT
Identity-Based Encryption Secure under Selective Opening Attack
We present the first Identity-Based Encryption (IBE) scheme that is proven secure against selective opening attack (SOA). This means that if an adversary, given a vector of ciphertexts, adaptively corrupts some fraction of the senders, exposing not only their messages but also their coins, the privacy of the unopened messages is guaranteed. Achieving security against such attacks is well-known to be challenging and was only recently solved in the PKE case via lossy encryption. We explain why those methods won’t work for IBE and instead rely on an approach based on encryption schemes that have a property we call one-sided public openability. Our SOA-secure IBE scheme is quite efficient and proven secure without random oracles based on the Decision Linear assumption.
2010
EPRINT
Identity-Based Online/Offline Key Encapsulation and Encryption
An identity-based online/offline encryption (IBOOE) scheme splits the encryption process into two phases. The first phase performs most of the heavy computations, such as modular exponentiation or pairing over points on an elliptic curve. The knowledge of the plaintext or the receiver's identity is not required until the second phase, where the ciphertext is produced by only light computations, such as integer addition/multiplication or hashing. This division of computations makes encryption affordable by devices with limited computation power since the preparation work can be executed ``offline'' or possibly by some powerful devices. Since efficiency is the main concern, smaller ciphertext size and less burden in the computation requirements of all phases (i.e., both phases of encryption and the decryption phase) are desirable. In this paper, we propose new schemes with improved efficiency over previous schemes by assuming random oracles. Our first construction is a very efficient scheme which is secure against chosen-plaintext attack (CPA). This scheme is slightly modified from an existing scheme; in particular, the setup and the user private key remain the same. We then proceed to propose the notion of ID-based Online/Offline KEM (IBOOKEM) that allows the key encapsulation process to be split into offline and online stages, in the same way as IBOOE does. We also present a generic transformation to get security against chosen-ciphertext attack (CCA) for IBOOE from any IBOOKEM scheme with one-wayness only. Our schemes (both CPA and CCA) are the most efficient ones in the state of the art in terms of online computation and ciphertext size, which are the two main focuses of online/offline schemes. Our schemes are very suitable to be deployed on embedded devices such as smartcards or wireless sensors, which have very limited computation power and for which communication bandwidth is very expensive.
2010
EPRINT
Impossible Differential Cryptanalysis of SPN Ciphers
Impossible differential cryptanalysis is a very popular tool for analyzing the security of modern block ciphers and the core of such an attack is based on the existence of impossible differentials. Currently, most methods for finding impossible differentials are based on the miss-in-the-middle technique and they are very ad hoc. In this paper, we concentrate on SPN ciphers and propose several criteria on the linear transformation $P$ and its inverse $P^{-1}$ to characterize the existence of $3/4$-round impossible differentials. We further discuss the possibility of extending these methods to analyze $5/6$-round impossible differentials. Using these criteria, impossible differentials for reduced-round Rijndael are found that are consistent with the ones found before. New $4$-round impossible differentials are discovered for the block cipher ARIA. And many $4$-round impossible differentials are detected for the first time for a kind of SPN cipher that employs a $32\times32$ binary matrix proposed at ICISC 2006 as its diffusion layer.
2010
EPRINT
Impossible Differential Cryptanalysis on E2
E2 is a 128-bit block cipher which employs a Feistel structure and a 2-round SPN in its round function. It was an AES candidate designed by NTT. In previous publications, E2 was believed to have no more than 5-round impossible differentials. In this paper, we describe some 6-round impossible differentials of E2. Using one of these 6-round impossible differentials, we present an attack on a 9-round reduced version of E2-256 without the IT function (the initial transformation) and the FT function (the final transformation).
2010
EPRINT
Improved Agreeing-Gluing Algorithm
A system of algebraic equations over a finite field is called sparse if each equation depends on a low number of variables. Finding solutions to such a system efficiently is an underlying hard problem in the cryptanalysis of modern ciphers. In this paper a deterministic Improved Agreeing-Gluing Algorithm is introduced. The expected running time of the new algorithm on uniformly random instances of the problem is rigorously estimated. The estimate is at present the best theoretical bound on the complexity of solving average instances of the problem. In particular, this is a significant improvement over those in our earlier papers [10, 11]. For sparse Boolean equations, the gap between the worst-case and the average-case time complexity of the problem has thus significantly widened.
2010
EPRINT
Improved Algebraic Cryptanalysis of QUAD, Bivium and Trivium via Graph Partitioning on Equation Systems
We present a novel approach for solving systems of polynomial equations via graph partitioning. The concept of a variable-sharing graph of a system of polynomial equations is defined. If such a graph is disconnected, then the corresponding system of equations can be split into smaller ones that can be solved individually. This can provide a significant speed-up in computing the solution to the system, but is unlikely to occur either randomly or in applications. However, by deleting certain vertices of the graph, the variable-sharing graph can be disconnected in a balanced fashion, and in turn the system of polynomial equations is separated into smaller ones of similar sizes. In graph theory terms, this process is equivalent to finding balanced vertex partitions with minimum-weight vertex separators. The techniques for finding these vertex partitions are discussed, and experiments are performed to evaluate their practicality for general graphs and systems of polynomial equations. Applications of this approach to algebraic cryptanalysis of symmetric ciphers are presented. For the QUAD family of stream ciphers, we show how a malicious party can manufacture conforming systems that can be easily broken. For the stream cipher Trivium and its variants, we achieve significant speedups in algebraic attacks launched against them. In each of these cases, the systems of polynomial equations involved are well-suited to our graph partitioning method. These results may open a new avenue for evaluating the security of symmetric ciphers against algebraic attacks.
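The variable-sharing graph and the disconnection check described in the abstract can be sketched in a few lines of Python, representing each equation only by the set of variables it involves; the balanced-separator search that the paper actually relies on is a much harder step and is not shown here.

# Sketch: split a polynomial system into independent subsystems whenever
# its variable-sharing graph is disconnected.
from collections import defaultdict

def connected_components(equations):
    """equations: list of sets of variable names. Returns groups of
    equation indices whose variables lie in one connected component."""
    parent = {}                      # union-find over the variables
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    def union(a, b):
        parent[find(a)] = find(b)

    for eq in equations:             # variables in one equation share an edge
        vs = list(eq)
        for v in vs[1:]:
            union(vs[0], v)

    groups = defaultdict(list)
    for i, eq in enumerate(equations):
        groups[find(next(iter(eq)))].append(i)
    return list(groups.values())

# Two independent subsystems: the x-variables never meet the y-variables.
system = [{"x0", "x1"}, {"x1", "x2"}, {"y0", "y1"}]
print(connected_components(system))   # e.g. [[0, 1], [2]]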
2010
EPRINT
Improved Cache Trace Attack on AES and CLEFIA by Considering Cache Miss and S-box Misalignment
This paper presents an improved Cache trace attack on AES and CLEFIA by considering Cache miss trace information and S-box misalignment. In 2006, O. Acıiçmez et al. presented a trace-driven Cache attack on the first two rounds of AES, and pointed out that if the number of Cache elements per Cache block is 16, at most 48 bits of the AES key can be obtained in the first-round attack. Their attack is based on the ideal case in which the S-box elements are perfectly aligned in the Cache block. However, this paper discovers that the S-box elements are usually misaligned, and due to this feature and by considering Cache miss trace information, about 200 samples are enough to obtain the full 128-bit AES key within seconds. In 2010, Chester Rebeiro et al. presented the first trace-driven Cache attack on CLEFIA by considering Cache hit information and obtained the 128-bit key with 2^43 CLEFIA encryptions. In this paper, we present a new attack on CLEFIA by considering Cache miss information and S-box misalignment, and finally successfully obtain the CLEFIA-128 key with about 220 samples within seconds.
2010
EPRINT
Improved Collision Attacks on the Reduced-Round Gr{\o}stl Hash Function
We analyze the Gr{\o}stl hash function, which is a 2nd-round candidate of the SHA-3 competition. Using the start-from-the-middle variant of the rebound technique, we show collision attacks on the Gr{\o}stl-256 hash function reduced to 5 and 6 out of 10 rounds with time complexities $2^{48}$ and $2^{112}$, respectively. Furthermore, we demonstrate semi-free-start collision attacks on the Gr{\o}stl-224 and -256 hash functions reduced to 7 rounds and the Gr{\o}stl-224 and -256 compression functions reduced to 8 rounds. Our attacks are based on differential paths between the two permutations $P$ and $Q$ of Gr{\o}stl, a strategy introduced by Peyrin to construct distinguishers for the compression function. In this paper, we extend this approach to construct collision and semi-free-start collision attacks for both the hash and the compression function. Finally, we present improved distinguishers for reduced-round versions of the Gr{\o}stl-224 and -256 permutations.
2010
EPRINT
Improved Delegation of Computation using Fully Homomorphic Encryption
Following Gennaro, Gentry, and Parno (Cryptology ePrint Archive 2009/547), we use fully homomorphic encryption to design improved schemes for delegating computation. In such schemes, a delegator outsources the computation of a function $F$ on many, dynamically chosen inputs $x_i$ to a worker in such a way that it is infeasible for the worker to make the delegator accept a result other than $F(x_i)$. The "online stage" of the Gennaro et al. scheme is very efficient: the parties exchange two messages, the delegator runs in time $poly(log T)$, and the worker runs in time $poly(T)$, where $T$ is the time complexity of $F$. However, the "offline stage" (which depends on the function $F$ but not the inputs to be delegated) is inefficient: the delegator runs in time $poly(T)$ and generates a public key of length $poly(T)$ that needs to be accessed by the worker during the online stage. Our first construction eliminates the large public key from the Gennaro et al. scheme. The delegator still invests $poly(T)$ time in the offline stage, but does not need to communicate or publish anything. Our second construction reduces the work of the delegator in the offline stage to $poly(log T)$ at the price of a 4-message (offline) interaction with a $poly(T)$-time worker (which need not be the same as the workers used in the online stage). Finally, we describe a "pipelined" implementation of the second construction that avoids the need to re-run the offline construction after errors are detected (assuming errors are not too frequent).
2010
EPRINT
Improved Differential Attacks for ECHO and Grostl
We present improved cryptanalysis of two second-round SHA-3 candidates: the AES-based hash functions ECHO and GROSTL. We explain methods for building better differential trails for ECHO by increasing the granularity of the truncated differential paths previously considered. In the case of GROSTL, we describe a new technique, the internal differential attack, which shows that when using parallel computations designers should also consider the differential security between the parallel branches. Then, we exploit the recently introduced start-from-the-middle or Super-Sbox attacks, which proved to be very efficient when attacking AES-like permutations, to achieve a very efficient utilization of the available degrees of freedom. Finally, we obtain the best known attacks so far for both ECHO and GROSTL. In particular, we are able to mount a distinguishing attack for the full GROSTL-256 compression function.
2010
EPRINT
Improved Fault Attack on FOX
In this paper, based on a differential property of the two-round Lai-Massey scheme in a fault model, we present an improved fault attack on the block cipher FOX64. Our improved method can deduce any round subkey with 4.25 faults on average (4 in the best case), and retrieve all the round subkeys with 45.45 faults on average (38 in the best case). The technique of the proposed attack can also be easily extended to the other ciphers of the FOX series.
2010
EPRINT
Improved Single-Key Attacks on 8-round AES
AES is the most widely used block cipher today, and its security is one of the most important issues in cryptanalysis. After 13 years of analysis, related-key attacks were recently found against two of its flavors (AES-192 and AES-256). However, such a strong type of attack is not universally accepted as a valid attack model, and in the more standard single-key attack model at most 8 rounds of these two versions can be currently attacked. In the case of 8-round AES-192, the only known attack (found 10 years ago) is extremely marginal, requiring the evaluation of essentially all the 2^{128} possible plaintext/ciphertext pairs in order to speed up exhaustive key search by a factor of 16. In this paper we introduce three new cryptanalytic techniques, and use them to get the first non-marginal attack on 8-round AES-192 (making its time complexity about a million times faster than exhaustive search, and reducing its data complexity to about 1/32,000 of the full codebook). In addition, our new techniques can reduce the best known time complexities for all the other combinations of 7-round and 8-round AES-192 and AES-256.
2010
EPRINT
Improved Trace-Driven Cache-Collision Attacks against Embedded AES Implementations
In this paper we present two attacks that exploit cache events, which are visible in some side channel, to derive a secret key used in an implementation of AES. The first is an improvement of an adaptive chosen plaintext attack presented at ACISP 2006. The second is a new known plaintext attack that can recover a 128-bit key with approximately 30 measurements to reduce the number of key hypotheses to $2^{28}$. This is comparable to classical Differential Power Analysis; however, our attacks are able to overcome certain masking techniques. We also show how to deal with unreliable cache event detection in the real-life measurement scenario and present practical explorations on a 32-bit ARM microprocessor.
2010
EPRINT
Improving the performance of Luffa Hash Algorithm
Luffa is a new hash algorithm that has been accepted for round two of the NIST hash function competition SHA-3. Computational efficiency is the second most important evaluation criterion used to compare candidate algorithms. In this paper, we describe a fast software implementation of the Luffa hash algorithm for the Intel Core 2 Duo platform. We explore the use of the perfect shuffle operation to improve the performance of the 64-bit implementation and of a 128-bit implementation with the Intel Supplemental SSSE3 instructions. In addition, we introduce a new way of implementing Luffa based on a Parallel Table Lookup instruction. The timings of our 64-bit implementation (C code) show a 16 to 32% speed improvement over the previous fastest implementation.
2010
EPRINT
Increased Resilience in Threshold Cryptography: Sharing a Secret with Devices That Cannot Store Shares
Threshold cryptography has been used to secure data and control access by sharing a private cryptographic key over different devices. This means that a minimum number of these devices, the threshold $t+1$, need to be present to use the key. The benefits are increased security, because an adversary can compromise up to $t$ devices, and resilience, since any subset of $t+1$ devices is sufficient. Many personal devices are not suitable for threshold schemes, because they do not offer secure storage, which is needed to store shares of the private key. This article presents several protocols in which shares are stored in protected form (possibly externally). This makes them suitable for low-cost devices with a factory-embedded key, e.g., car keys and access cards. All protocols are verifiable through public broadcast, thus without private channels. In addition, distributed key generation does not require all devices to be present.
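For readers unfamiliar with the threshold notion used above, here is a minimal sketch of plain (t+1)-out-of-n Shamir secret sharing over a prime field; it is only the textbook primitive, not the paper's protocols, and the prime and parameters are illustrative.

```python
# Minimal sketch of (t+1)-of-n Shamir secret sharing over GF(p).
# This only illustrates the threshold notion; it is NOT the paper's protocol,
# which additionally protects the shares themselves for storage-less devices.
import random

P = 2**127 - 1  # a Mersenne prime, large enough for a toy example

def share(secret, t, n):
    """Split `secret` into n shares; any t+1 of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=1234567, t=2, n=5)
assert reconstruct(shares[:3]) == 1234567               # any t+1 = 3 shares suffice
assert reconstruct(random.sample(shares, 3)) == 1234567
```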
2010
EPRINT
Insecure ``Provably Secure Network Coding'' and Homomorphic Authentication Schemes for Network Coding
Network coding allows routers to mix the received information before forwarding it to the next nodes. Though this information mixing has been proven to maximize network throughput, it also introduces security challenges such as pollution attacks. A malicious node could insert a malicious packet into the system, and this corrupted packet will propagate more quickly than in traditional copy-and-forward networks. Several authors have studied secure network coding from both information-theoretic and probabilistic viewpoints. In this paper, we show that there are serious flaws in several of these schemes (the security ``proofs'' for these schemes were presented in the corresponding publications). Furthermore, we propose a secure homomorphic authentication scheme for network coding.
2010
EPRINT
Interactive Locking, Zero-Knowledge PCPs, and Unconditional Cryptography
Motivated by the question of basing cryptographic protocols on stateless tamper-proof hardware tokens, we revisit the question of unconditional two-prover zero-knowledge proofs for $NP$. We show that such protocols exist in the {\em interactive PCP} model of Kalai and Raz (ICALP '08), where one of the provers is replaced by a PCP oracle. This strengthens the feasibility result of Ben-Or, Goldwasser, Kilian, and Wigderson (STOC '88) which requires two stateful provers. In contrast to previous zero-knowledge PCPs of Kilian, Petrank, and Tardos (STOC '97), in our protocol both the prover and the PCP oracle are efficient given an $NP$ witness. Our main technical tool is a new primitive that we call {\em interactive locking}, an efficient realization of an unconditionally secure commitment scheme in the interactive PCP model. We implement interactive locking by adapting previous constructions of {\em interactive hashing} protocols to our setting, and also provide a direct construction which uses a minimal amount of interaction and improves over our interactive hashing based constructions. Finally, we apply the above results towards showing the feasibility of basing unconditional cryptography on {\em stateless} tamper-proof hardware tokens, and obtain the following results: *) We show that if tokens can be used to encapsulate other tokens, then there exist unconditional and statistically secure (in fact, UC secure) protocols for general secure computation. *) Even if token encapsulation is not possible, there are unconditional and statistically secure commitment protocols and zero-knowledge proofs for $NP$. *) Finally, if token encapsulation is not possible, then no protocol can realize statistically secure oblivious transfer.
2010
EPRINT
Intractable Problems in Cryptography
We examine several variants of the Diffie-Hellman and Discrete Log problems that are connected to the security of cryptographic protocols. We discuss the reductions that are known between them and the challenges in trying to assess the true level of difficulty of these problems, particularly if they are interactive or have complicated input.
2010
EPRINT
Introduction to Mirror Theory: Analysis of Systems of Linear Equalities and Linear Non Equalities for Cryptography
In this paper we first study two closely related problems: 1. The problem of distinguishing $f(x\Vert 0)\oplus f(x \Vert 1)$ where $f$ is a random permutation on $n$ bits; this problem was first studied by Bellare and Impagliazzo in~\cite{BI}. 2. The so-called ``Theorem $P_i \oplus P_j$'' of Patarin (cf~\cite{P05}). Then, we will see many variants and generalizations of this ``Theorem $P_i \oplus P_j$'' that are useful in cryptography. In fact, all these results can be seen as part of the theory that analyzes the number of solutions of systems of linear equalities and linear non-equalities in finite groups. We have nicknamed this analysis ``Mirror Theory'' due to the multiple induction properties that appear in it.
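A small empirical sketch of problem 1 (not taken from the paper): because $f$ is a permutation, $f(x\Vert 0)$ and $f(x\Vert 1)$ are always distinct, so their XOR never equals 0, which already separates this construction from a random function; the toy parameters below are chosen for illustration.

```python
# Toy illustration of problem 1 above: g(x) = f(x||0) XOR f(x||1) never takes
# the value 0 when f is a permutation, which already distinguishes it from a
# random function on the same domain.  Illustrative only.
import random

n = 8                                  # f permutes n-bit strings
f = list(range(2**n))
random.shuffle(f)                      # a random permutation on n bits

g = [f[2*x] ^ f[2*x + 1] for x in range(2**(n-1))]   # x||0 = 2x, x||1 = 2x+1
print(0 in g)                          # always False for a permutation

# For comparison: a random function hits 0 among 2^(n-1) outputs with
# probability about 1 - (1 - 2^-n)^(2^(n-1)), roughly 0.39 for n = 8.
h = [random.randrange(2**n) for _ in range(2**(n-1))]
print(0 in h)                          # True in roughly 39% of runs
```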
2010
EPRINT
J-PAKE: Authenticated Key Exchange Without PKI
Password Authenticated Key Exchange (PAKE) is one of the important topics in cryptography. It aims to address a practical security problem: how to establish secure communication between two parties solely based on a shared password, without requiring a Public Key Infrastructure (PKI). After more than a decade of extensive research in this field, several PAKE protocols have become available. The EKE and SPEKE schemes are perhaps the two most notable examples. Both techniques are, however, patented. In this paper, we review these techniques in detail and summarize various theoretical and practical weaknesses. In addition, we present a new PAKE solution called J-PAKE. Our strategy is to depend on well-established primitives such as the Zero-Knowledge Proof (ZKP). So far, almost all past solutions have avoided using ZKPs out of concern for efficiency. We demonstrate how to effectively integrate the ZKP into the protocol design while still achieving good efficiency. Our protocol has computational efficiency comparable to the EKE and SPEKE schemes, with clear advantages in security.
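As a hedged sketch of the main building block mentioned above, the following Python code implements a Fiat-Shamir (non-interactive) Schnorr proof of knowledge of a discrete logarithm; the group parameters are toy values chosen for illustration and are not taken from the paper or from any standard.

```python
# Sketch of a Schnorr zero-knowledge proof (Fiat-Shamir variant) of knowledge
# of the exponent of a sent group element -- the kind of ZKP J-PAKE relies on.
# Toy parameters only; a real deployment uses standardized groups.
import hashlib, random

p = 2**255 - 19          # a prime, but NOT a recommended DL group -- toy modulus only
q = (p - 1) // 2         # exponents are reduced mod q; ord(g) divides q since g is a square
g = 4                    # g = 2^2 is a quadratic residue mod p

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    """Prove knowledge of x such that X = g^x mod p, without revealing x."""
    X = pow(g, x, p)
    v = random.randrange(q)
    V = pow(g, v, p)
    c = H(g, V, X)                     # Fiat-Shamir challenge
    r = (v - c * x) % q
    return X, (V, r)

def verify(X, proof):
    V, r = proof
    c = H(g, V, X)
    return V == (pow(g, r, p) * pow(X, c, p)) % p   # g^r * X^c = g^v = V

x = random.randrange(q)
X, pi = prove(x)
assert verify(X, pi)
```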
2010
EPRINT
Key Agreement Protocols Based on Multivariate Algebraic Equations on Quaternion Ring
In this paper we propose new key agreement protocols based on multivariate algebraic equations. We choose a multivariate function F(X) of high degree on the non-commutative quaternion ring H over the finite field Fq. Common keys are generated by using the public key F(X). Our system is immune to Gröbner basis attacks because obtaining the parameters of F(X) that serve as secret keys amounts to solving multivariate algebraic equations, which is one of the NP-complete problems. Our protocols are also believed to be immune to differential attacks and rank attacks.
2010
EPRINT
Key Agreement Protocols Using Multivariate Equations on Non-commutative Ring
In this paper we propose two KAPs (key agreement protocols) using multivariate equations. As the enciphering functions we select multivariate functions of high degree on a non-commutative ring H over the finite field Fq. The two enciphering functions are slightly different from the enciphering function previously proposed by the present author. In the proposed systems we can adopt not only the quaternion ring but also the non-associative octonion ring as the basic ring. Common keys are generated by using the enciphering functions. The proposed systems are immune to Gröbner basis attacks because obtaining the parameters of the enciphering functions that serve as secret keys amounts to solving multivariate algebraic equations, that is, one of the NP-complete problems. Our protocols are also believed to be immune to differential attacks because of the equations of high degree. We can construct our system on some non-commutative rings, for example the quaternion ring, matrix rings or the octonion ring.
2010
EPRINT
Key Exchange with Anonymous Authentication using DAA-SIGMA Protocol
Anonymous digital signatures such as Direct Anonymous Attestation (DAA) and group signatures have been a fundamental building block for anonymous entity authentication. In this paper, we show how to incorporate DAA schemes into a key exchange protocol between two entities to achieve anonymous authentication and to derive a shared key between them. We propose a modification to the SIGMA key exchange protocol used in the Internet Key Exchange (IKE) standards to support anonymous authentication using DAA. Our key exchange protocol can also be extended to support group signature schemes instead of DAA. We present a security model for key exchange with anonymous authentication derived from the Canetti-Krawczyk key-exchange security model. We formally prove that our DAA-SIGMA protocol is secure under our security model.
2010
EPRINT
Key-Controlled Order-Preserving Encryption
In this paper we study order-preserving encryption (OPE), a primitive from the database community that allows efficient range queries on encrypted data. OPE was suggested by Agrawal et al. [1] and was thoroughly studied by Boldyreva et al. [2]. In this paper we present a practical OPE scheme, which is a key-controlled algorithm based on simple computation. A preliminary analysis shows that our algorithm is sufficiently secure.
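To make the order-preserving property concrete, here is a toy key-controlled order-preserving map (it is neither the scheme of [1] or [2] nor the scheme proposed in the paper; the gap range and key are purely illustrative).

```python
# Toy key-controlled order-preserving map: illustrates only the OPE property,
# namely that m1 < m2 implies E(m1) < E(m2).  Not any scheme from the entry above.
import hmac, hashlib

def gap(key, i):
    """Pseudorandom positive gap for position i, derived from the key."""
    d = hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()
    return (int.from_bytes(d[:4], "big") % 1000) + 1

def encrypt(key, m):
    """Ciphertext = sum of the first m key-dependent gaps (strictly increasing in m)."""
    return sum(gap(key, i) for i in range(1, m + 1))

key = b"demo key"
cts = [encrypt(key, m) for m in range(10)]
assert cts == sorted(cts)          # order of plaintexts is preserved
assert len(set(cts)) == len(cts)   # and the map is injective
```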
2010
EPRINT
KIST: A new encryption algorithm based on splay
In this paper, we propose a new encryption algorithm called KIST. This algorithm uses an asynchronous key sequence and a splay tree. It is very efficient in its usage of both space and time. Some elementary security tests have been done.
2010
EPRINT
LAB Form for Iterated Hash Functions
In this paper, we propose an efficient and concise mode for iterated hash functions that aims to fix the flaws of the Merkle-Damgård construction and to prevent a variety of generic attacks, such as the multicollision attack, the second-preimage attack and the herding attack. The structure of this new mode differs from HAIFA and other proposals: it contains a new method, ``Locking Abutting Blocks'' (LAB) with checksum, which yields a larger implicit chaining value without requiring intricate computation or larger memory, and it allows an online computation in one pass with fixed memory. It also easily avoids the generic attacks (presented by Praveen Gauravaram and John Kelsey) that apply to hash functions with a linear-XOR/additive checksum.
2010
EPRINT
Lattice Reduction and Polynomial Solving
In this paper, we suggest a generalization of Coppersmith's method for finding integer roots of a multivariate polynomial. Our generalization allows finding integer solutions of a system of $k$ multivariate polynomial equations. We then apply our method to the so-called implicit factoring problem, which constitutes the main contribution of this paper. The problem is as follows: let $N_1 = p_1 q_1$ and $N_2 = p_2 q_2$ be two RSA moduli of the same bit-size, where $q_1, q_2$ are $\alpha$-bit primes. We are given the \emph{implicit} information that $p_1$ and $p_2$ share $t$ most significant bits. We present a novel and rigorous lattice-based method that leads to the factorization of $N_1$ and $N_2$ in polynomial time as soon as $t \ge 2 \alpha + 3$. Subsequently, we heuristically generalize the method to $k$ RSA moduli $N_i = p_i q_i$ where the $p_i$'s all share $t$ most significant bits (MSBs) and obtain an improved bound on $t$ that converges to $t \ge \alpha + 3.55\ldots$ as $k$ tends to infinity. This paper extends the work of May and Ritzenhofen in \cite{DBLP:conf/pkc/MayR09}, where similar results were obtained when the $p_i$'s share least significant bits (LSBs). In \cite{sarkar2009further}, Sarkar and Maitra describe an alternative but heuristic method for only two RSA moduli, when the $p_i$'s share LSBs and/or MSBs, or bits in the middle. In the case of shared MSBs or bits in the middle and two RSA moduli, they get better experimental results in some cases, but we use much lower (at least 23 times lower) lattice dimensions. Our results rely on the following surprisingly simple algebraic relation in which the shared MSBs of $p_1$ and $p_2$ cancel out: $q_1 N_2 - q_2 N_1 = q_1 q_2 (p_2 - p_1)$. This relation allows us to build a lattice whose shortest vector yields the factorization of the $N_i$'s.
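The algebraic relation quoted at the end of the abstract is easy to check numerically; the sketch below builds two toy RSA moduli whose factors $p_1, p_2$ share their most significant bits (the prime sizes and offsets are arbitrary illustrative choices, using sympy's prime utilities) and verifies both the identity $q_1 N_2 - q_2 N_1 = q_1 q_2 (p_2 - p_1)$ and the fact that its value is much smaller than the moduli.

```python
# Numerical check of the relation q1*N2 - q2*N1 = q1*q2*(p2 - p1) used for
# implicit factoring.  Toy-sized primes; in the paper the factors p1, p2
# share their t most significant bits, which makes the right-hand side small.
from sympy import nextprime, randprime

q1 = randprime(2**16, 2**17)          # "alpha-bit" primes (alpha = 17 here)
q2 = randprime(2**16, 2**17)
top = randprime(2**40, 2**41)         # the shared most-significant part of p1, p2
p1 = nextprime((top << 20) + 12345)   # p1 and p2 agree on their high bits
p2 = nextprime((top << 20) + 67890)

N1, N2 = p1 * q1, p2 * q2
lhs = q1 * N2 - q2 * N1
rhs = q1 * q2 * (p2 - p1)
assert lhs == rhs                     # the shared MSBs of p1 and p2 cancel out
print(lhs.bit_length(), N1.bit_length())   # lhs is much shorter than the moduli
```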
2010
EPRINT
Lattice-based Identity-Based Broadcast Encryption Scheme
Motivated by the lattice basis delegation technique due to [8], we propose an adaptively secure identity-based broadcast encryption (IBBE) scheme based on hard worst-case lattice problems. Our construction can easily be generalized to obtain a hierarchical IBBE (HIBBE) scheme. To the best of the authors' knowledge, our construction and its variants constitute the first adaptively secure IBBE schemes from lattices, which are believed to be secure in the post-quantum setting.
2010
EPRINT
Lattice-Based Public Key Cryptosystem Provably Secure against Adaptive Chosen Ciphertext Attack
We propose a simple and efficient construction of a CCA-secure public-key encryption scheme based on lattices. Our construction needs an encryption scheme, which we call ``matrix encryption'', as a building block, and requires the underlying matrix encryption scheme to satisfy only a relatively weak notion of security which is achievable without random oracles. Using the pseudohomomorphism property of mR04 of [3], which is the multi-bit version of the single-bit cryptosystem R04 [1], we design a matrix encryption scheme which satisfies the above requirements; thus, our construction provides a new approach for constructing CCA-secure encryption schemes in the standard model. So far as we know, our construction is the first CCA-secure cryptosystem which is directly constructed from lattices and whose security is based on the unique shortest vector problem (uSVP). In addition, the method of designing the matrix encryption scheme from mR04 also adapts to mR05, mA05 and mADGGH of [3], which are the multi-bit versions of the single-bit cryptosystems R05 [2], A05 [5], and ADGGH [7], respectively, since they have the same pseudohomomorphism property as mR04. This makes our approach to constructing CCA-secure cryptosystems generic and universal.
2010
EPRINT
Lattice-theoretic Characterization of Secret Sharing Representable Connected Matroids
Necessary and sufficient conditions for a connected matroid to be secret-sharing (ss-)representable are obtained. We show that the flat lattices of ss-representable matroids are closely related to well-studied algebraic objects called linear lattices. This fact implies that new powerful methods (from lattice theory and mathematical logic) can be applied to the investigation of ss-representable matroids. We also obtain some necessary conditions for a connected matroid to be ss-representable. Namely, we construct an infinite set of sentences (similar to Haiman’s “higher Arguesian identities”) which hold in all ss-representable matroids.
2010
EPRINT
Links Between Theoretical and Effective Differential Probabilities: Experiments on PRESENT
Recent iterated ciphers have been designed to be resistant to differential cryptanalysis. This implies that cryptanalysts have to deal with differentials having such small probabilities that, for a fixed key, the whole codebook may not be sufficient to detect them. The question is then: do these theoretically computed small probabilities have any meaning? We propose here an in-depth study of differential and differential trail probabilities supported by experimental results obtained on a reduced version of PRESENT.
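The theoretically computed probabilities in question are built from per-S-box differential probabilities read off a difference distribution table (DDT); as a small illustration (not the paper's experiments), the sketch below computes the DDT of the 4-bit PRESENT S-box.

```python
# Difference distribution table (DDT) of the 4-bit PRESENT S-box: entry
# DDT[a][b] counts inputs x with S(x) ^ S(x^a) == b, so the theoretical
# single-S-box differential probability of (a -> b) is DDT[a][b] / 16.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

DDT = [[0] * 16 for _ in range(16)]
for a in range(16):
    for x in range(16):
        DDT[a][SBOX[x] ^ SBOX[x ^ a]] += 1

best = max(DDT[a][b] for a in range(1, 16) for b in range(16))
print("max DDT entry:", best, "-> probability", best / 16)   # 4 -> 2^-2
```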
2010
EPRINT
Logical cryptoanalysis on the example of the cryptosystem DES
In this paper, a successful method of cryptanalysis is presented using the cryptosystem DES as an example. As a result, it is proposed to use the complexity of building and solving the system of Boolean functions describing the cipher construction procedure as a criterion of cryptographic security.
2010
EPRINT
Low Voltage Fault Attacks to AES and RSA on General Purpose Processors
Fault injection attacks have proven in recent times to be a powerful tool for exploiting implementation weaknesses of robust cryptographic algorithms. A number of different techniques aimed at disturbing the computation of a cryptographic primitive have been devised and have been successfully employed to leak secret information by inferring it from the erroneous results. In particular, many of these techniques involve directly tampering with the computing device to alter the content of the embedded memory, e.g. through irradiating it with laser beams. In this contribution we present a low-cost, non-invasive and effective technique to inject faults into an ARM9 general purpose CPU by lowering its supply voltage. This is the first result available in the fault attack literature that attacks a software implementation of a cryptosystem running on a full-fledged CPU with a complete operating system. The platform under consideration (an ARM9 CPU running a full Linux 2.6 kernel) is widely used in mobile computing devices such as smartphones, gaming platforms and network appliances. We fully characterise both the fault model and the errors induced in the computation, in terms of both their frequency and the corruption patterns on the computed results. At first, we validate the effectiveness of the proposed fault model for mounting practical attacks on implementations of the RSA and AES cryptosystems, using techniques known in the open literature. Then we devise two new attack techniques, one for each cryptosystem. The attack on AES is able to retrieve all the round keys regardless of both their derivation strategy and the number of rounds. A known-ciphertext attack on RSA encryption has been devised: the plaintext is retrieved knowing the result of a correct and a faulty encryption of the same plaintext, and assuming the fault corrupts the public key exponent. Through experimental validation, we show that we can break any AES with roughly 4 kb of ciphertext, RSA encryption with 3 to 5 faults, and RSA signatures with 1 to 2 faults.
2010
EPRINT
Low-weight Pseudo Collision Attack on Shabal and Preimage Attack on Reduced Shabal-512
This paper studies two types of attacks on the hash function Shabal. The first is a low-weight pseudo-collision attack on Shabal. Since a pseudo-collision attack is trivial for Shabal, we focus on a low-weight pseudo-collision attack, meaning that only a low-weight difference in the chaining value is considered. By analyzing the difference propagation in the underlying permutation, we construct a low-weight (45-bit) pseudo-collision attack on the full compression function with complexity 2^84. The second attack is a preimage attack on variants of Shabal-512. We utilize a guess-and-determine technique, originally developed for the cryptanalysis of stream ciphers, and customize it for a preimage attack on Shabal-512. As a result, for the weakened variant of Shabal-512 using security parameters (p, r) = (2, 12), a preimage can be found with complexity 2^497 and memory 2^400. Moreover, for Shabal-512 using security parameters (p, r) = (1.5, 8), a preimage can be found with complexity 2^497 and memory 2^272. To the best of our knowledge, these are the best preimage attacks on Shabal variants, and the second result is the first preimage attack on Shabal-512 with reduced security parameters.
2010
EPRINT
Lower Bounds for Straight Line Factoring
Straight line factoring algorithms include a variant of Lenstra's elliptic curve method. This note proves lower bounds on the length of straight line factoring algorithms.
2010
EPRINT
Mean value formulas for twisted Edwards curves
R. Feng and H. Wu recently established a certain mean-value formula for the x-coordinates of the n-division points on an elliptic curve given in Weierstrass form (A mean value formula for elliptic curves, 2010, available at http://eprint.iacr.org/2009/586.pdf). We prove a similar result for both the x- and y-coordinates on a twisted Edwards elliptic curve.
2010
EPRINT
Message Recovery and Pseudo-Preimage Attacks on the Compression Function of Hamsi-256
Hamsi is one of the second round candidates of the SHA-3 competition. In this study, we present non-random differential properties for the compression function of the hash function Hamsi-256. Based on these properties, we first demonstrate a distinguishing attack that requires a few evaluations of the compression function and extend the distinguisher to 5 rounds with complexity $2^{83}$. Then, we present a message recovery attack with complexity of $2^{10.48}$ compression function evaluations. Also, we present a pseudo-preimage attack for the compression function with complexity $2^{254.25}$. The pseudo-preimage attack on the compression function is easily converted to a pseudo second preimage attack on Hamsi-256 hash function with the same complexity.
2010
EPRINT
Modeling Attacks on Physical Unclonable Functions
We show in this paper how several proposed Physical Unclonable Functions (PUFs) can be broken by numerical modeling attacks. Given a set of challenge-response pairs (CRPs) of a PUF, our attacks construct a computer algorithm which behaves indistinguishably from the original PUF on almost all CRPs. This algorithm can subsequently impersonate the PUF, and can be cloned and distributed arbitrarily. This breaks the security of essentially all applications and protocols that are based on the respective PUF. The PUFs we attacked successfully include standard Arbiter PUFs and Ring Oscillator PUFs of arbitrary sizes, and XOR Arbiter PUFs, Lightweight Secure PUFs, and Feed-Forward Arbiter PUFs of up to a given size and complexity. Our attacks are based upon various machine learning techniques, including Logistic Regression and Evolution Strategies. Our work will be useful to PUF designers and attackers alike.
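As a hedged illustration of the modeling approach (using the standard additive-delay model of an arbiter PUF with simulated, noiseless challenge-response pairs, not the measured data or exact setups of the paper), the sketch below learns a simulated arbiter PUF with plain logistic regression.

```python
# Modeling a simulated arbiter PUF with logistic regression (illustrative only).
# Additive delay model: response = sign(w . phi(c)), where phi maps the
# challenge bits to +/-1 parity features plus a constant component.
import numpy as np

rng = np.random.default_rng(0)
n, n_crps = 64, 6000

def features(challenges):
    """Standard arbiter-PUF feature map: phi_i(c) = prod_{j>=i} (1 - 2 c_j)."""
    signs = 1 - 2 * challenges                        # 0/1 bits -> +/-1
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((len(challenges), 1))])

w_true = rng.normal(size=n + 1)                       # the PUF's hidden delay vector
C = rng.integers(0, 2, size=(n_crps, n))              # collected challenges
X = features(C)
y = (X @ w_true > 0).astype(float)                    # observed (noiseless) responses

w = np.zeros(n + 1)                                   # plain batch logistic regression
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
    w -= 0.05 * X.T @ (p - y) / n_crps

C_test = rng.integers(0, 2, size=(2000, n))
X_test = features(C_test)
accuracy = np.mean((X_test @ w > 0) == (X_test @ w_true > 0))
print("model prediction accuracy:", accuracy)         # typically well above 90%
```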
2010
EPRINT
Modular Design of Efficient Secure Function Evaluation Protocols
Two-party Secure Function Evaluation (SFE) allows mutually distrusting parties to (jointly) correctly compute a function on their private input data, without revealing the inputs. SFE, properly designed, guarantees to satisfy the most stringent security requirements, even for interactive computation. Two-party SFE can benefit almost any client-server interaction where privacy is required, such as privacy-preserving credit checking, medical classification, or face recognition. Today, SFE is the subject of an immense amount of research in a variety of directions, and this body of work is not easy to navigate. In this paper, we systematize some of the vast research knowledge on \emph{practically} efficient SFE. It turns out that the most efficient SFE protocols are obtained by combining several basic techniques, such as garbled circuits and computation under homomorphic encryption. As an important practical contribution, we present a framework in which these techniques can be viewed as building blocks with well-defined interfaces. These components can be easily combined to establish a complete efficient solution. Further, our approach naturally lends itself to automated protocol generation (compilation). We believe that, today, this approach is the best candidate for implementation and deployment.
2010
EPRINT
MQ^*-IP: An Identity-based Identification Scheme without Number-theoretic Assumptions
In this article, we propose an identification scheme which is based on the two combinatorial problems Multivariate Quadratic equations (MQ) and Isomorphism of Polynomials (IP). We show that this scheme is statistical zero-knowledge. Using a trapdoor for the MQ-problem, it is possible to make it also identity-based, i.e., there is no need for distributing public keys or for certificates within this scheme. The size of the public keys and the communication complexity are within the range of other non-number-theoretic identification schemes. In contrast to MQ^*-IP, these schemes usually do not permit identity-based public keys.
2010
EPRINT
Multi-property-preserving Domain Extension Using Polynomial-based Modes of Operation
In this paper, we propose a new double-piped mode of operation for multi-property-preserving domain extension of MACs~(message authentication codes), PRFs~(pseudorandom functions) and PROs~(pseudorandom oracles). Our mode of operation performs twice as fast as the original double-piped mode of operation of Lucks while providing comparable security. Our construction, which uses a class of polynomial-based compression functions proposed by Stam, makes a single call to a $3n$-bit to $n$-bit primitive at each iteration and uses a finalization function $f_2$ at the last iteration, producing an $n$-bit hash function $H[f_1,f_2]$ satisfying the following properties. \begin{enumerate} \item $H[f_1,f_2]$ is unforgeable up to $O(2^n/n)$ query complexity as long as $f_1$ and $f_2$ are unforgeable. \item $H[f_1,f_2]$ is pseudorandom up to $O(2^n/n)$ query complexity as long as $f_1$ is unforgeable and $f_2$ is pseudorandom. \item $H[f_1,f_2]$ is indifferentiable from a random oracle up to $O(2^{2n/3})$ query complexity as long as $f_1$ and $f_2$ are public random functions. \end{enumerate} To our knowledge, our result constitutes the first time $O(2^n/n)$ unforgeability has been achieved using only an unforgeable primitive of $n$-bit output length. (Yasuda showed unforgeability of $O(2^{5n/6})$ for Lucks' construction assuming an unforgeable primitive, but the analysis is sub-optimal; in this paper, we show how Yasuda's bound can be improved to $O(2^n)$.) In related work, we strengthen Stam's collision resistance analysis of polynomial-based compression functions (showing that unforgeability of the primitive suffices) and discuss how to implement our mode by replacing $f_1$ with a $2n$-bit key blockcipher in Davies-Meyer mode or by replacing $f_1$ with the cascade of two $2n$-bit to $n$-bit compression functions.
2010
EPRINT
Multiparty Computation for Dishonest Majority: from Passive to Active Security at Low Cost
Multiparty computation protocols have been known for more than twenty years now, but due to their lack of efficiency their use is still limited in real-world applications: the goal of this paper is the design of efficient two- and multi-party computation protocols aimed at filling the gap between theory and practice. We propose a new protocol to securely evaluate reactive arithmetic circuits that offers security against an active adversary in the universally composable security framework. Instead of the ``do-and-compile'' approach (where the parties use zero-knowledge proofs to show that they are following the protocol), our key ingredient is an efficient version of the ``cut-and-choose'' technique, which allows us to achieve active security for just a (small) constant amount of work more than for passive security.
2010
EPRINT
Multiparty Computation for Modulo Reduction without Bit-Decomposition and a Generalization to Bit-Decomposition
Bit-decomposition, which is proposed by Damg{\aa}rd \emph{et al.}, is a powerful tool for multi-party computation (MPC). Given a sharing of secret \emph{a}, it allows the parties to compute the sharings of the bits of \emph{a} in constant rounds. With the help of bit-decomposition, constant rounds protocols for various MPC problems can be constructed. However, bit-decomposition is relatively expensive, so constructing protocols for MPC problems without relying on bit-decomposition is a meaningful work. In multi-party computation, it remains an open problem whether the "modulo reduction" problem can be solved in constant rounds without bit-decomposition. In this paper, we propose a protocol for (public) modulo reduction without relying on bit-decomposition. This protocol achieves constant round complexity and linear communication complexity. Moreover, we also propose a generalization to bit-decomposition which can, in constant rounds, convert the sharing of secret \emph{a} into the sharings of the "digits" of \emph{a}, along with the sharings of the bits of every "digit". The "digits" can be base-\emph{m} for any $m\geq2$. Obviously, when \emph{m} is a power of 2, this (generalized) protocol is just the original bit-decomposition protocol.
2010
EPRINT
Multiple Bytes Differential Fault Analysis on CLEFIA
This paper examines the strength of CLEFIA against a multiple-byte differential fault attack. First, it presents the principles of the CLEFIA algorithm and of differential fault analysis; then, according to the three cases of injecting faults into the r-th, (r-1)-th and (r-2)-th CLEFIA rounds, it proposes three fault models and corresponding analysis methods; finally, all of the fault models and analysis methods above are verified through software simulation. The experimental results demonstrate that CLEFIA is vulnerable to differential fault attack due to its Feistel structure and S-box feature: 5-6, 6-8 and 2 faults are needed to recover CLEFIA-128 under the three fault models respectively, the multiple-byte fault model can greatly improve the attack's practicality and even its efficiency, and the fault analysis methods in this paper can provide ideas for the fault analysis of other block ciphers using S-boxes.
2010
EPRINT
Near Collisions for the Compress Function of Hamsi-256 Found by Genetic Algorithm
Hamsi is one of the 14 remaining candidates in NIST's hash competition for the future hash standard SHA-3, and Hamsi-256 is one of four variants of Hamsi. In this paper we present a genetic algorithm to search for near collisions for the compress function of Hamsi-256, give a near collision on (256 − 20) bits and a near collision on (256 − 21) bits with four differences in the chaining value, and obtain differential paths for three rounds of Hamsi-256 with probabilities 1/2^24 and 1/2^23 respectively, which improve on previously reported near collisions.
2010
EPRINT
Near-Collisions on the Reduced-Round Compression Functions of Skein and BLAKE
The SHA-3 competition organized by NIST aims to find a new hash standard as a replacement of SHA-2. Till now, 14 submissions have been selected as the second round candidates, including Skein and BLAKE, both of which have components based on modular addition, rotation and bitwise XOR (ARX). In this paper, we propose improved near-collision attacks on the reduced-round compression functions of Skein and a variant of BLAKE. The attacks are based on linear differentials of the modular additions. The computational complexity of near-collision attacks on a 4-round compression function of BLAKE-32, 4-round and 5-round compression functions of BLAKE-64 are 2^{21}, 2^{16} and 2^{216} respectively, and the attacks on a 24-round compression functions of Skein-256, Skein-512 and Skein-1024 have a complexity of 2^{60}, 2^{230} and 2^{395} respectively.
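The differentials in question propagate through the modular additions; for small word sizes the exact XOR-differential probability of an addition can be obtained by brute force, as in this 8-bit toy sketch (illustrative only; the analysis in the paper concerns the 32/64-bit additions of BLAKE and Skein).

```python
# Exact XOR-differential probability of addition modulo 2^n, by brute force
# over a small word size.  xdp+(alpha, beta -> gamma) is the fraction of
# pairs (x, y) with ((x^alpha) + (y^beta)) ^ (x + y) == gamma (mod 2^n).
n = 8
MASK = (1 << n) - 1

def xdp_add(alpha, beta, gamma):
    hits = 0
    for x in range(1 << n):
        for y in range(1 << n):
            if (((x ^ alpha) + (y ^ beta)) & MASK) ^ ((x + y) & MASK) == gamma:
                hits += 1
    return hits / (1 << (2 * n))

print(xdp_add(0x80, 0x80, 0x00))   # MSB differences cancel with probability 1
print(xdp_add(0x01, 0x00, 0x01))   # an LSB difference passes through with probability 1/2
```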
2010
EPRINT
New Advances on Privacy-Preserving Policy Reconciliation
Entities define their own set of rules under which they are willing to collaborate, e.g., interact, share and exchange resources or information with others. Typically, these individual policies differ for different parties. Thus, collaboration requires resolving differences and reaching a consensus. This process is generally referred to as policy reconciliation. Current solutions for policy reconciliation do not take into account the privacy concerns of the reconciling parties. This paper addresses the problem of preserving privacy during policy reconciliation. We introduce new protocols that meet the privacy requirements of the organizations and allow parties to find a common policy rule which optimizes their individual preferences.
2010
EPRINT
New Construction of Identity-based Proxy Re-encryption
A proxy re-encryption (PRE) scheme involves three parties: Alice, Bob, and a proxy. PRE allows the proxy to translate a ciphertext encrypted under Alice's public key into one that can be decrypted by Bob's secret key. We present a general method to construct an identity-based proxy re-encryption scheme from an existing identity-based encryption scheme. The transformed scheme satisfies the properties of PRE, such as unidirectionality, non-interactivity and multi-use. Moreover, the proposed scheme has master key security and allows the encryptor to decide whether the ciphertext can be re-encrypted.
2010
EPRINT
New generic algorithms for hard knapsacks
In this paper, we study the complexity of solving hard knapsack problems, i.e., knapsacks with a density close to $1$ where lattice-based low density attacks are not an option. For such knapsacks, the current state-of-the-art is a 31-year old algorithm by Schroeppel and Shamir which is based on birthday paradox techniques and yields a running time of $\tilde{O}(2^{n/2})$ for knapsacks of $n$ elements and uses $\tilde{O}(2^{n/4})$ storage. We propose here two new algorithms which improve on this bound, finally lowering the running time down to $\tilde{O}(2^{0.3113\, n})$ for almost all knapsacks of density $1$. We also demonstrate the practicality of these algorithms with an implementation.
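For orientation, here is a hedged sketch of the simpler textbook meet-in-the-middle subset-sum attack, with time and memory about $2^{n/2}$; the Schroeppel-Shamir algorithm and the new algorithms of the paper improve on this baseline, and the toy instance below is illustrative only.

```python
# Textbook meet-in-the-middle subset-sum solver: ~2^(n/2) time and memory.
# This is only the classical baseline that Schroeppel-Shamir and the new
# algorithms in the paper improve upon.
from itertools import combinations
import random

def subset_sums(elems):
    """Map: achievable sum -> one index subset achieving it, over all subsets."""
    sums = {}
    idx = list(range(len(elems)))
    for r in range(len(elems) + 1):
        for combo in combinations(idx, r):
            sums.setdefault(sum(elems[i] for i in combo), combo)
    return sums

def solve_knapsack(a, s):
    """Find I with sum(a[i] for i in I) == s, by splitting a into two halves."""
    half = len(a) // 2
    left, right = a[:half], a[half:]
    table = subset_sums(left)                          # 2^(n/2) entries
    for target, combo in subset_sums(right).items():
        if s - target in table:
            return sorted(table[s - target]) + [half + i for i in combo]
    return None

random.seed(1)
n = 24
a = [random.getrandbits(n) for _ in range(n)]          # density ~ 1 instance
secret = random.sample(range(n), n // 2)
s = sum(a[i] for i in secret)
sol = solve_knapsack(a, s)
assert sol is not None and sum(a[i] for i in sol) == s
```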
2010
EPRINT
New Impossible Differential Attacks on AES
Some new near-$5$-round impossible differential properties of AES are first presented in this paper, in which the active bytes of the $1^{st}$ or $5^{th}$ round lie in different columns, which favors extension. Additionally, we propose, for the first time, complexity expressions for a universal impossible differential attack, which help us to rapidly search for suitable impossible differential paths. More importantly, our near-$5$-round impossible differential properties and complexity expressions lead to a series of new impossible differential attacks on 7-round AES-128, 7-9-round AES-192, and 8-12-round AES-256.
2010
EPRINT
New Methodologies for Differential-Linear Cryptanalysis and Its Extensions
In 1994 Langford and Hellman introduced differential-linear cryptanalysis, which involves building a differential-linear distinguisher by concatenating a linear approximation with a (truncated) differential that, with probability 1, does not affect the bit(s) involved in the input mask of the linear approximation. In 2002 Biham, Dunkelman and Keller presented an enhanced approach to include the case when the differential has a probability smaller than 1; and in 2005 they proposed several extensions of differential-linear cryptanalysis, including high-order differential-linear analysis, differential-bilinear analysis and differential-bilinear-boomerang analysis. In this paper, we show that Biham et al.'s methodologies for computing the probabilities of a differential-linear distinguisher, a high-order differential-linear distinguisher, a differential-bilinear distinguisher and a differential-bilinear-boomerang distinguisher do not have the generality to describe the analytic techniques. Thus the previous cryptanalytic results obtained by using these techniques of Biham et al. are questionable. Finally, from a mathematical point of view we give general methodologies for computing the probabilities. The new methodologies lead to some better cryptanalytic results, for example, differential-linear attacks on 13-round DES and 10-round CTC2 with a 255-bit block size and key.
2010
EPRINT
New Methods to Construct Golay Complementary Sequences Over the $QAM$ Constellation
In this paper, based on binary Golay complementary sequences, we propose some methods to construct Golay complementary sequences of length $2^n$ for any integer $n$, over the $M^2$-$QAM$ constellation and $2M$-$Q$-$PAM$ constellations, where $M=2^m$ for an integer $m$. A method is proposed to judge whether a sequence constructed using the new general offset pairs over the $QAM$ constellation is a Golay complementary sequence. Based on this judging rule, we can construct many new Golay complementary sequences. In particular, we study Golay complementary sequences over the $16$-$QAM$ and $64$-$QAM$ constellations, and many new Golay complementary sequences over these constellations have been found.
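The defining property underlying these constructions is that a pair of sequences is Golay complementary when their aperiodic autocorrelations sum to zero at every nonzero shift; the sketch below checks this for a classical length-4 binary pair (the QAM constructions of the paper are not reproduced here).

```python
# Check the defining property of a binary Golay complementary pair: the
# aperiodic autocorrelations of the two sequences sum to zero at every
# nonzero shift.  (The paper's QAM constructions build on such binary pairs.)
def aperiodic_autocorrelation(seq, shift):
    return sum(seq[i] * seq[i + shift] for i in range(len(seq) - shift))

def is_golay_pair(a, b):
    assert len(a) == len(b)
    return all(aperiodic_autocorrelation(a, u) + aperiodic_autocorrelation(b, u) == 0
               for u in range(1, len(a)))

a = [1, 1, 1, -1]      # a classical Golay complementary pair of length 4
b = [1, 1, -1, 1]
print(is_golay_pair(a, b))        # True
print(is_golay_pair(a, a))        # False: a sequence is not complementary with itself
```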
2010
EPRINT
New Montgomery-based Semi-systolic Multiplier for Even-type GNB of GF(2^m)
Efficient finite field multiplication is crucial for implementing public key cryptosystems. Based on a new Gaussian normal basis Montgomery (GNBM) representation, this paper presents a semi-systolic even-type GNBM multiplier. Compared with the only existing semi-systolic even-type GNB multiplier, the proposed multiplier saves about 57% in space complexity and 50% in time complexity.
2010
EPRINT
New software speed records for cryptographic pairings
This paper presents new software speed records for the computation of cryptographic pairings. More specifically, we present details of an implementation which computes the optimal ate pairing on a 256-bit Barreto-Naehrig curve in only 4,379,912 cycles on one core of an Intel Core 2 Quad Q9550 processor. This speed is achieved by combining 1.) state-of-the-art high-level optimization techniques, 2.) a new representation of elements in the underlying finite fields which makes use of the special modulus arising from the Barreto-Naehrig curve construction, and 3.) implementing arithmetic in this representation using the double-precision floating-point SIMD instructions of the AMD64 architecture.
2010
EPRINT
Non-Transferable Proxy Re-Encryption
A proxy re-encryption (PRE) scheme allows a proxy to re-encrypt a ciphertext for Alice (the delegator) into a ciphertext for Bob (the delegatee) without seeing the underlying plaintext. With the help of the proxy, Alice can delegate the decryption right to any delegatee. However, existing PRE schemes generally suffer from one of the following problems. Some schemes fail to provide the non-transferability property, whereby the proxy and the delegatee can collude to further delegate the decryption right to anyone. Other schemes assume the existence of a fully trusted private key generator (PKG) to generate the re-encryption key to be used by the proxy for re-encrypting a given ciphertext for a target delegatee. But this poses two problems in PRE schemes: the PKG in these schemes may decrypt all ciphertexts (referred to as the key escrow problem), and the PKG can generate re-encryption keys for arbitrary delegatees (we refer to this as the PKG despotism problem). In this paper, we provide a more satisfactory solution to these problems. We follow the idea of using a PKG to generate re-encryption keys in order to achieve the non-transferability property. To tackle the PKG despotism problem in our scheme, if the PKG generates a re-encryption key for an unauthorized party, the delegator is able to retrieve the master secret of the PKG. We also show that, with a tamper-proof hardware device, we can guarantee that the PKG cannot transfer the decryption right to an unauthorized delegatee. In addition, we solve the key escrow problem as well.
2010
EPRINT
Number of Jacobi quartic curves over finite fields
In this paper the number of $\overline{\mathbb{F}}_q$-isomorphism classes of Jacobi quartic curves, i.e., the number of Jacobi quartic curves with distinct $j$-invariants, over finite field $\mathbb{F}_q$ is enumerated.
2010
EPRINT
Oblivious RAM Revisited
We reinvestigate the oblivious RAM concept introduced by Goldreich and Ostrovsky, which enables a client that can store locally only a constant amount of data to store remotely $n$ data items and access them while hiding the identities of the items being accessed. Oblivious RAM is often cited as a powerful tool, which can be used, for example, for search on encrypted data or for preventing cache attacks. However, oblivious RAM is also commonly considered to be impractical due to its overhead, which is asymptotically efficient but quite high: each data request is replaced by $O(\log^4 n)$ requests, or by $O(\log^3 n)$ requests where the constant in the ``$O$'' notation is a few thousands. In addition, $O(n \log n)$ external memory is required in order to store the $n$ data items. We redesign the oblivious RAM protocol using modern tools, namely Cuckoo hashing and a new oblivious sorting algorithm. The resulting protocol uses only $O(n)$ external memory, and replaces each data request by only $O(\log^2 n)$ requests (with a small constant). This analysis is validated by experiments that we ran.
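Cuckoo hashing, one of the two tools named above, keeps every key in one of two candidate slots and resolves collisions by eviction; the following is a minimal, non-oblivious sketch of the plain data structure (table size, seeds and kick limit are arbitrary illustrative choices), not the oblivious variant used in the protocol.

```python
# Minimal cuckoo hash table: every key lives in one of its two candidate
# slots; if both are occupied, one resident is evicted and re-inserted into
# its other slot.  This is only the plain data structure, not the oblivious
# construction used in the protocol above.
import random

class CuckooTable:
    def __init__(self, size=64):
        self.size = size
        self.slots = [None] * size
        self.seeds = (random.getrandbits(32), random.getrandbits(32))

    def _positions(self, key):
        return [hash((seed, key)) % self.size for seed in self.seeds]

    def lookup(self, key):
        return any(self.slots[p] == key for p in self._positions(key))

    def insert(self, key, max_kicks=200):
        for _ in range(max_kicks):
            p0, p1 = self._positions(key)
            if self.slots[p0] is None:
                self.slots[p0] = key
                return True
            if self.slots[p1] is None:
                self.slots[p1] = key
                return True
            victim = random.choice((p0, p1))          # evict a resident key
            self.slots[victim], key = key, self.slots[victim]
        return False      # in practice: rebuild the table with fresh seeds

table = CuckooTable()
for k in range(20):                                   # load factor well below 1/2
    assert table.insert(k)
assert all(table.lookup(k) for k in range(20)) and not table.lookup(999)
```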
2010
EPRINT
Okamoto-Tanaka Revisited: Fully Authenticated Diffie-Hellman with Minimal Overhead
The Diffie-Hellman protocol (DHP) is one of the most studied protocols in cryptography. Much work has been dedicated to armor the original protocol against active attacks while incurring a minimal performance overhead relative to the basic (unauthenticated) DHP. This line of work has resulted in some remarkable protocols, e.g., MQV, where the protocol's communication cost is identical to that of the basic DHP and the computation overhead is small. Unfortunately, MQV and similar 2-message ``implicitly authenticated" protocols do not achieve full security against active attacks since they cannot provide forward secrecy (PFS), a major security goal of DHP, against active attackers. In this paper we investigate the question of whether one can push the limits of authenticated DHPs even further, namely, to achieve communication complexity as in the original DHP (two messages with a single group element per message), maintain low computational overhead, and yet achieve full PFS against active attackers in a provable way. We answer this question in the affirmative by resorting to an old and elegant key agreement protocol: the Okamoto-Tanaka protocol \cite{okta}. We present a variant of the protocol (denoted mOT) which achieves the above minimal communication, incurs a computational overhead relative to the basic DHP that is practically negligible, and yet achieves full provable key agreement security, including PFS, against active attackers. Moreover, due to the identity-based properties of mOT, even the sending of certificates (typical for authenticated DHPs) can be avoided in the protocol. As additional contributions, we apply our analysis to prove the security of a recent multi-domain extension of the Okamoto-Tanaka protocol by Schridde et al. and show how to adapt mOT to the (non id-based) certificate-based setting.
2010
EPRINT
On lower bounds of second-order nonlinearities of cubic bent functions constructed by concatenating Gold functions
In this paper we consider cubic bent functions obtained by Leander and McGuire (J. Comb. Th. Series A, 116 (2009) 960-970) which are concatenations of quadratic Gold functions. A lower bound of second-order nonlinearities of these functions is obtained. This bound is compared with the lower bounds of second-order nonlinearities obtained for functions belonging to some other classes of functions which are recently studied.
2010
EPRINT
On a conjecture about binary strings distribution
It is a difficult challenge to find Boolean functions used in stream ciphers achieving all of the necessary criteria, and the research on such functions has taken a significant delay with respect to cryptanalyses. Many attacks have led to design criteria for these functions, mainly: balancedness, a high algebraic degree, a high nonlinearity, a good behavior against fast algebraic attacks and also a high algebraic immunity (which is now an absolutely necessary, but not sufficient, criterion for cryptographic Boolean functions). Very recently, an infinite class of Boolean functions has been proposed by Tu and Deng having many very nice cryptographic properties under the assumption that the following combinatorial conjecture about binary strings is true: let $S_{t,k}$ be the set \[ S_{t,k}=\left\{(a,b) \in \left(\mathbb{Z}/(2^k-1)\mathbb{Z}\right)^2 \mid a + b = t \text{ and } w(a) + w(b) < k\right\}; \] then $|S_{t,k}| \leq 2^{k-1}$. The main contribution of the present paper is the reformulation of the problem in terms of {\em carries}, which gives more insight into it than simple counting arguments. Successful applications of our tools include explicit formulas for $|S_{t,k}|$ for numbers whose binary expansion is made of one block (see theorem \ref{thm:one}), a proof that the conjecture is {\em asymptotically} true (see theorem \ref{thm:asymptotic}) and a proof that a family of numbers (whose binary expansion has a high number of 1s and isolated 0s) reaches the bound of the conjecture (see theorem \ref{thm:extremal}). We also conjecture that the numbers in that family are the only ones reaching the bound (see conjecture \ref{cjt:extremal}).
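Reading the set in the conjecture over $\mathbb{Z}/(2^k-1)\mathbb{Z}$, with $w$ the Hamming weight of the representative in $[0, 2^k-2]$ (the setting of the Tu-Deng conjecture; this reading is assumed here), the bound can be checked exhaustively for small $k$:

```python
# Exhaustive check of the conjectured bound |S_{t,k}| <= 2^(k-1) for small k,
# reading the set as pairs (a, b) in Z/(2^k - 1)Z with a + b = t (mod 2^k - 1)
# and w(a) + w(b) < k, where w is the Hamming weight of the representative
# in [0, 2^k - 2].  (This reading of the abstract's notation is assumed.)
def S_size(t, k):
    mod = 2**k - 1
    return sum(1 for a in range(mod)
               if bin(a).count("1") + bin((t - a) % mod).count("1") < k)

for k in range(2, 11):
    worst = max(S_size(t, k) for t in range(2**k - 1))
    print(k, worst, 2**(k - 1), worst <= 2**(k - 1))
```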
2010
EPRINT
On Achieving the "Best of Both Worlds" in Secure Multiparty Computation
Two settings are traditionally considered for secure multiparty computation, depending on whether or not a majority of the parties are assumed to be honest. Protocols designed under this assumption provide ``full security'' (and, in particular, guarantee output delivery and fairness) when this assumption holds; unfortunately, these protocols are completely insecure if this assumption is violated. On the other hand, protocols tolerating an arbitrary number of corruptions do not guarantee fairness or output delivery even if only a \emph{single} party is dishonest. It is natural to wonder whether it is possible to achieve the ``best of both worlds'': namely, a single protocol that simultaneously achieves the best possible security in both the above settings. Here, we rule out this possibility (at least for general functionalities) but show some positive results regarding what \emph{can} be achieved.
2010
EPRINT
On Designated Verifier Signature Schemes
Designated verifier signature schemes allow a signer to convince only the designated verifier that a signed message is authentic. We define attack models on the unforgeability property of such schemes and analyze relationships among the models. We show that the no-message model, where an adversary is given only public keys, is equivalent to the model, where an adversary has also oracle access to the verification algorithm. We also show a separation between the no-message model and the chosen-message model, where an adversary has access to the signing algorithm. Furthermore, we present a modification of the Yang-Liao designated verifier signature scheme and prove its security. The security of the modified scheme is based on the computational Diffie-Hellman problem, while the original scheme requires strong Diffie-Hellman assumption.
2010
EPRINT
On E-Vote Integrity in the Case of Malicious Voter Computers
Norway has started to implement e-voting (over the Internet, using voters' own computers) within the next few years. The vulnerability of voters' computers was identified as a serious threat to e-voting. Therefore, in this paper, we study the vote integrity of e-voting when the voter computers cannot be trusted. For this, we first make a number of assumptions---which arose from discussions with the representatives of the Norwegian government, and have been approved by them---about the available infrastructure. In particular, we assume the existence of two out-of-band channels that do not depend on the voter computers. The first channel is used to transmit integrity check codes to the voters prior to the election, and the second channel is used to transmit a check code that corresponds to her vote back to the voter just after the e-vote was cast. For this we also introduce a new cryptographic protocol. We present the new protocol with enough detail to facilitate an implementation, and also present the timings of an actual implementation.
2010
EPRINT
On Efficient Ciphertext-Policy Attribute Based Encryption and Broadcast Encryption
Ciphertext Policy Attribute Based Encryption (CP-ABE) enforces an expressive data access policy, which consists of a number of attributes connected by logical gates. Only those decryptors whose attributes satisfy the data access policy can decrypt the ciphertext. CP-ABE is very appealing since the ciphertext and data access policies are integrated together in a natural and effective way. However, all existing CP-ABE schemes incur very large ciphertext size, which increases linearly with respect to the number of attributes in the access policy. Large ciphertexts prevent CP-ABE from being adopted in communication-constrained environments. In this paper, we propose a new construction of CP-ABE, named Constant-size CP-ABE (denoted CCP-ABE), that significantly reduces the ciphertext to a constant size for an AND-gate access policy with any given number of attributes. Each ciphertext in CCP-ABE requires only 2 elements of a bilinear group. Based on CCP-ABE, we further propose an Attribute-Based Broadcast Encryption (ABBE) scheme. Compared to existing Broadcast Encryption (BE) schemes, ABBE is more flexible because a broadcast message can be encrypted under an expressive access policy, either with or without explicitly specifying the receivers. Moreover, ABBE significantly reduces the storage and communication overhead to the order of $O(\log N)$, where $N$ is the system size. Also, we prove, using information-theoretic approaches, that ABBE attains the minimal bound on storage overhead for each user to construct all possible subgroups in the communication system.
2010
EPRINT
On Efficiently Transferring the Linear Secret-Sharing Scheme Matrix in Ciphertext-Policy Attribute-Based Encryption
Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is a system for realizing complex access control on encrypted data, in which attributes are used to describe a user's credentials and a party encrypting data determines a policy over attributes for who can decrypt. In CP-ABE schemes, the access policy is attached to the ciphertext and serves as an input to the decryption algorithm. An access policy can be expressed as a monotone boolean formula or a monotone access structure, and can be realized by a linear secret-sharing scheme (LSSS). Recent provably secure and efficient CP-ABE schemes use the LSSS induced from a monotone span program (MSP), where the LSSS is a matrix whose rows are labeled by attributes, and a general algorithm for converting a boolean formula into the corresponding LSSS matrix was described recently. However, when the access structure contains threshold gates, the number of rows of the LSSS matrix generated by that algorithm is unnecessarily large, and consequently so is the ciphertext size. In this paper, we give a more general and efficient algorithm for which the number of rows of the LSSS matrix is as small as possible. Moreover, by letting the boolean formula itself act as the labeling function, only the boolean formula needs to be attached to the ciphertext, which drastically decreases the communication cost.
2010
EPRINT
On Exponential Sums, Newton Identities and Dickson Polynomials over Finite Fields
Let $\mathbb{F}_{q}$ be a finite field and $\mathbb{F}_{q^s}$ an extension of $\mathbb{F}_q$, and let $f(x)\in \mathbb{F}_q[x]$ be a polynomial of degree $n$ with $\gcd(n,q)=1$. We present a recursive formula for evaluating the exponential sum $\sum_{c\in \mathbb{F}_{q^s}}\chi^{(s)}(f(c))$. Let $a$ and $b$ be two elements in $\mathbb{F}_q$ with $a\neq 0$, and let $u$ be a positive integer. We obtain an estimate for the exponential sum $\sum_{c\in \mathbb{F}^*_{q^s}}\chi^{(s)}(ac^u+bc^{-1})$, where $\chi^{(s)}$ is the lifting of an additive character $\chi$ of $\mathbb{F}_q$. Some properties of the sequences constructed from these exponential sums are also provided.
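For the base case $q=p$ prime and $s=1$, the canonical additive character is $\chi(x)=e^{2\pi i x/p}$, and such sums can be evaluated numerically. The sketch below (the polynomial and the prime are arbitrary toy choices, not taken from the paper) computes the sum and compares its absolute value against the Weil bound $(n-1)\sqrt{p}$.

    # Minimal sketch for q = p prime, s = 1: evaluate S(f) = sum_{c in F_p} chi(f(c))
    # with the canonical additive character chi(x) = exp(2*pi*i*x/p).
    import cmath

    def exp_sum(f, p):
        chi = lambda x: cmath.exp(2j * cmath.pi * (x % p) / p)
        return sum(chi(f(c) % p) for c in range(p))

    p = 11
    f = lambda x: x**3 + 2*x + 1        # degree 3, gcd(3, 11) = 1
    S = exp_sum(f, p)
    print(abs(S))                        # Weil bound: |S| <= (deg f - 1) * sqrt(p)
    print((3 - 1) * p**0.5)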
2010
EPRINT
On extended algebraic immunity
In this paper, two sufficient conditions for a Boolean function to have optimal extended algebraic immunity are given. It is shown that almost all of the known functions possess the maximum possible algebraic immunity. However, about half of them do not possess optimal extended algebraic immunity.
2010
EPRINT
On Foundation and Construction of Physical Unclonable Functions
Physical Unclonable Functions (PUFs) have been introduced as a new cryptographic primitive, and while a large number of PUF designs and applications have been proposed, few studies have been undertaken on the theoretical foundation of PUFs. At the same time, several PUF designs have been found to be insecure, raising questions about their design methodology. Moreover, PUFs with efficient implementations are needed to enable many applications in practice. In this paper, we present novel results on the theoretical foundation and practical construction of PUFs. First, we prove that, for an $\ell$-bit-input and $m$-bit-output PUF containing $n$ silicon components, if $n< \frac{m2^{\ell}}{c}$ where $c$ is a constant, then 1) the PUF cannot be a random function, and 2) confusion and diffusion are necessary for the PUF to be a pseudorandom function. Then, we propose a helper data algorithm (HDA) that is secure against active attacks and significantly reduces PUF implementation overhead compared to previous HDAs. Finally, we integrate PUF construction into block cipher design to implement an efficient physical unclonable pseudorandom permutation (PUPRP); to the best of our knowledge, this is the first practical PUPRP using an integrated approach.
2010
EPRINT
On FPGA-based implementations of Gr{\o}stl
The National Institute of Standards and Technology (NIST) has started a competition for a new secure hash standard. To make a meaningful comparison between the submitted candidates, third-party implementations of all proposed hash functions are needed. This is one of the reasons why the SHA-3 candidate Gr{\o}stl has been chosen for an FPGA-based implementation. Our work is mainly motivated by current and future developments in the automotive market (e.g. car-2-car communication systems), which will further increase the need for a suitable cryptographic infrastructure in modern vehicles (cf. the AUTOSAR project). One core component of such an infrastructure is a secure cryptographic hash function, which is used in many applications such as challenge-response authentication systems or digital signature schemes. Another motivation to evaluate Gr{\o}stl is its resemblance to AES. The automotive market demands, like any mass market, low-budget and therefore compact implementations, so our evaluation of Gr{\o}stl focuses on area optimizations. It is shown that, while Gr{\o}stl is inherently quite large compared to AES, it is still possible to implement the Gr{\o}stl algorithm on small and low-budget FPGAs such as the second-smallest available Spartan-3, while maintaining a reasonably high throughput.
2010
EPRINT
On generalized Feistel networks
We prove beyond-birthday-bound security for the well-known types of generalized Feistel networks, including: (1) unbalanced Feistel networks, where the $n$-bit to $m$-bit round functions may have $n\ne m$; (2) alternating Feistel networks, where the round functions alternate between contracting and expanding; (3) type-1, type-2, and type-3 Feistel networks, where $n$-bit to $n$-bit round functions are used to encipher $kn$-bit strings for some $k\ge2$; and (4) numeric variants of any of the above, where one enciphers numbers in some given range rather than strings of some given size. Using a unified analytic framework we show that, in any of these settings, for any $\varepsilon>0$, with enough rounds, the subject scheme can tolerate CCA attacks of up to $q\sim N^{1-\varepsilon}$ adversarial queries, where $N$ is the size of the round functions' domain (the size of the larger domain for alternating Feistel). This is asymptotically optimal. Prior analyses for generalized Feistel networks established security to only $q\sim N^{0.5}$ adversarial queries.
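As a concrete illustration of one of the structures listed above, the following toy sketch shows one round of a 4-branch type-2 generalized Feistel network: an $n$-bit round function is applied to every other branch, XORed into its neighbour, and the branches are then cyclically rotated. The round function and parameters below are placeholders, not taken from the paper.

    # Toy sketch of one type-2 generalized Feistel round on four 16-bit branches.
    N = 16
    MASK = (1 << N) - 1

    def toy_round_function(x, key):
        # placeholder n-bit to n-bit keyed function (any such function would do here)
        return ((x * 0x9E37 + key) ^ (x >> 3)) & MASK

    def type2_round(state, round_keys):
        x0, x1, x2, x3 = state
        x1 ^= toy_round_function(x0, round_keys[0])
        x3 ^= toy_round_function(x2, round_keys[1])
        return (x1, x2, x3, x0)          # cyclic rotation of the four branches

    state = (0x0123, 0x4567, 0x89AB, 0xCDEF)
    for r in range(8):                   # more rounds => stronger CCA security bound
        state = type2_round(state, (2 * r, 2 * r + 1))
    print([hex(b) for b in state])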
2010
EPRINT
On isotopisms of commutative presemifields and CCZ-equivalence of functions
A function $F$ from \textbf{F}$_{p^n}$ to itself is planar if for any $a\in$\textbf{F}$_{p^n}^*$ the function $F(x+a)-F(x)$ is a permutation. CCZ-equivalence is the most general known equivalence relation of functions that preserves the planar property. This paper considers two possible extensions of CCZ-equivalence for functions over fields of odd characteristic, one proposed by Coulter and Henderson and the other by Budaghyan and Carlet, and we show that they in fact coincide with CCZ-equivalence. We prove that two finite commutative presemifields of odd order are isotopic if and only if they are strongly isotopic. This result implies that two isotopic commutative presemifields always define CCZ-equivalent planar functions (previously unknown in the general case). Further, we prove that, for any odd prime $p$ and any positive integers $n$ and $m$, the indicators of the graphs of functions $F$ and $F'$ from \textbf{F}$_{p^n}$ to \textbf{F}$_{p^m}$ are CCZ-equivalent if and only if $F$ and $F'$ are CCZ-equivalent. We also prove that, for any odd prime $p$, CCZ-equivalence of functions from \textbf{F}$_{p^n}$ to \textbf{F}$_{p^m}$ is strictly more general than EA-equivalence when $n\ge3$ and $m$ is greater than or equal to the smallest positive divisor of $n$ different from 1.
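The planarity definition above can be checked directly over a small prime field. The sketch below tests whether $x\mapsto F(x+a)-F(x)$ is a permutation for every nonzero $a$; the example $F(x)=x^2$ is the classical planar function for odd $p$ and is not one of the constructions studied in the paper.

    # Direct check of the planarity definition: F is planar over F_p if, for every
    # nonzero a, the map x -> F(x+a) - F(x) is a permutation of F_p.
    def is_planar(F, p):
        for a in range(1, p):
            image = {(F((x + a) % p) - F(x)) % p for x in range(p)}
            if len(image) != p:          # not a permutation of F_p
                return False
        return True

    p = 7
    print(is_planar(lambda x: x * x % p, p))         # True: x^2 is planar for odd p
    print(is_planar(lambda x: x * x * x % p, p))     # False: x^3 over F_7 is not planar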
2010
EPRINT
On Protecting Cryptographic Keys Against Continual Leakage
Side-channel attacks have often proven to have a devastating effect on the security of cryptographic schemes. In this paper, we address the problem of storing cryptographic keys and computing on them in a manner that preserves security even when the adversary is able to obtain information leakage during the computation on the key. Using the recently achieved fully homomorphic encryption, we show how to encapsulate a key and repeatedly evaluate arbitrary functions on it so that no adversary can gain any useful information from a large class of side-channel attacks. We work in the model of Micali and Reyzin, assuming that only the active part of memory during computation leaks information. Similarly to previous works, our construction makes use of a single ``leak-free'' hardware token that samples from a globally-fixed distribution that does not depend on the key. Our construction is the first general compiler to achieve resilience against polytime leakage functions without performing any leak-free computation on the underlying secret key. Furthermore, the amount of computation our construction must perform does not grow with the amount of leakage the adversary is able to obtain; instead, it suffices to make a stronger assumption about the security of the fully homomorphic encryption.
2010
EPRINT
On Representable Matroids and Ideal Secret Sharing
In secret sharing, the exact characterization of ideal access structures is a longstanding open problem. Brickell and Davenport (J. of Cryptology, 1991) proved that ideal access structures are induced by matroids. Subsequently, ideal access structures and access structures induced by matroids have attracted a lot of attention. Due to the difficulty of finding general results, the characterization of ideal access structures has been studied for several particular families of access structures. In all these families, all the matroids that are related to access structures in the family are representable and, therefore, the matroid-related access structures coincide with the ideal ones. In this paper, we study the characterization of representable matroids. By using the well-known connection between ideal secret sharing and matroids and, in particular, the recent results on ideal multipartite access structures and the connection between multipartite matroids and discrete polymatroids, we obtain a characterization of a family of representable multipartite matroids, which implies a sufficient condition for an access structure to be ideal. By using this result and further introducing reduced discrete polymatroids, we provide a complete characterization of quadripartite representable matroids, which was until now an open problem; hence, all access structures related to quadripartite representable matroids are ideal. In addition, using our results, we give a new and simple proof that all access structures related to unipartite, bipartite and tripartite matroids coincide with the ideal ones.
2010
EPRINT
On Robust Key Agreement Based on Public Key Authentication
This paper discusses public-key authenticated key agreement protocols. First, we critically analyze several authenticated key agreement protocols and uncover various theoretical and practical flaws. In particular, we present two new attacks on the HMQV protocol, which is currently being standardized by IEEE P1363. The first attack presents a counterexample to invalidate the basic authentication in HMQV. The second attack is applicable to almost all past schemes, despite the fact that many of them have formal security proofs. These attacks highlight the difficulty of designing a cryptographic protocol correctly and suggest the caution one should always take. We further point out that many of the design errors are caused by sidestepping an important engineering principle, namely ``Do not assume that a message you receive has a particular form (such as $g^{r}$ for known $r$) unless you can check this''. Past constructions have generally disregarded this principle on the grounds of efficiency: checking knowledge of the exponent is commonly seen as too expensive. Using a concrete example, we demonstrate how to effectively integrate the zero-knowledge proof primitive into the protocol design while still achieving good efficiency. Our new key agreement protocol, YAK, has computational efficiency comparable to the MQV and HMQV protocols, with clear security advantages. Among all the related techniques, our protocol appears to be the simplest so far. We believe simplicity is also an important engineering principle.
2010
EPRINT
On second-order nonlinearities of some $\mathcal{D}_0$ type bent functions
In this paper we study the lower bounds of second-order nonlinearities of bent functions constructed by modifying certain cubic Maiorana-McFarland type bent functions.
2010
EPRINT
On security of a remote user authentication scheme without using smart cards
The security of a password authentication scheme using smart cards proposed by Rhee et al. is analyzed. A kind of impersonation attack is presented. The analyses show that the scheme is insecure for practical application. In order to eliminate the security vulnerability, an efficient countermeasure is proposed.
2010
EPRINT
On Small Subgroup Non-confinement Attack
The small subgroup confinement attack works by confining cryptographic operations within a small subgroup, in which exhaustive search is feasible. This attack is overt and hence can be easily thwarted by adding public key validation: verifying that the received group element has the proper order. In this paper, we present a different aspect of the small subgroup attack. Sometimes, the fact that an operation does not fall into the small subgroup confinement may provide an oracle to an attacker, leaking partial information about the long-term secrets. This attack is subtle and reflects a structural weakness of a protocol; the question of whether the protocol has public key validation is completely irrelevant. As a concrete example, we show how this attack works on the Secure Remote Password (SRP-6) protocol.
2010
EPRINT
On Strong Simulation and Composable Point Obfuscation
The Virtual Black Box (VBB) property for program obfuscators provides a strong guarantee: Anything computable by an efficient adversary given the obfuscated program can also be computed by an efficient simulator with only oracle access to the program. However, we know how to achieve this notion only for very restricted classes of programs. This work studies a simple relaxation of VBB: Allow the simulator unbounded computation time, while still allowing only polynomially many queries to the oracle. We then demonstrate the viability of this relaxed notion, which we call Virtual Grey Box (VGB), in the context of fully composable obfuscators for point programs: It is known that, w.r.t. VBB, if such obfuscators exist then there exist multi-bit point obfuscators (aka ``digital lockers'') and subsequently also very strong variants of encryption that are resilient to various attacks, such as key leakage and key-dependent messages. However, no composable VBB-obfuscators for point programs have been shown. We show fully composable {\em VGB}-obfuscators for point programs under a strong variant of the Decision Diffie-Hellman assumption. We show they suffice for the above applications and even for extensions to the public key setting as well as for encryption schemes with resistance to certain related key attacks (RKA).
2010
EPRINT
On Symmetric Encryption and Point Obfuscation
We show tight connections between several cryptographic primitives, namely encryption with weakly random keys, encryption with key-dependent messages (KDM), and obfuscation of point functions with multi-bit output (which we call multi-bit point functions, or MBPFs, for short). These primitives, which have been studied mostly separately in recent works, bear some apparent similarities, both in the flavor of their security requirements and in the flavor of their constructions and assumptions. Still, rigorous connections have not been drawn. Our results can be interpreted as indicating that MBPF obfuscators imply a very strong form of encryption that simultaneously achieves security for weakly random keys and key-dependent messages as special cases. Similarly, each one of the other primitives implies a certain restricted form of MBPF obfuscation. Our results carry both constructions and impossibility results from one primitive to others. In particular: The recent impossibility result for KDM security of Haitner and Holenstein (TCC ’09) carries over to MBPF obfuscators. The Canetti-Dakdouk construction of MBPF obfuscators based on a strong variant of the DDH assumption (EC ’08) gives an encryption scheme which is secure w.r.t. any weak key distribution of super-logarithmic min-entropy (and in particular, also has very strong leakage-resilient properties). All the recent constructions of encryption schemes that are secure w.r.t. weak keys imply a weak form of MBPF obfuscators.
2010
EPRINT
On the Complexity of the Herding Attack and Some Related Attacks on Hash Functions
In this paper, we analyze the complexity of the construction of the 2^k-diamond structure proposed by Kelsey and Kohno. We point out a flaw in their analysis and show that their construction may not produce the desired diamond structure. We then give a more rigorous and detailed complexity analysis of the construction of a diamond structure. For this, we appeal to random graph theory, which allows us to determine sharp necessary and sufficient conditions on the message complexity (i.e., the number of hash computations required to build the required structure). We also analyze the computational complexity of constructing a diamond structure, which has not previously been studied in the literature. Finally, we study the impact of our analysis on herding and other attacks that use the diamond structure as a subroutine. Precisely, our results show the following: 1. The message complexity for the construction of a diamond structure is \sqrt{k} times larger than previously stated in the literature. 2. The time complexity is n times the message complexity, where n is the size of the hash value. As a consequence of these two results, the herding attack and the second preimage attack on iterated hash functions have higher complexity than previously claimed. We also show that the message complexity of herding and second preimage attacks on "hash twice'' is n times the claimed complexity, by giving a more detailed analysis of the attack.
2010
EPRINT
On The Broadcast and Validity-Checking Security of PKCS \#1 v1.5 Encryption
This paper describes new attacks on PKCS \#1 v1.5, a deprecated but still widely used RSA encryption standard. The first cryptanalysis is a broadcast attack, allowing the opponent to reveal an identical plaintext sent to different recipients. This is nontrivial because different randomizers are used for different encryptions (in other words, plaintexts coincide only partially). The second attack predicts, using a single query to a validity checking oracle, which of two chosen plaintexts corresponds to a challenge ciphertext. The attack's success odds are very high. The two new attacks rely on different mathematical tools and underline the need to accelerate the phase out of PKCS \#1 v1.5.
2010
EPRINT
On the claimed privacy of EC-RAC III
In this paper we show how to break the most recent version of EC-RAC with respect to privacy. We show that both the ID-Transfer and ID&PWD-Transfer schemes from EC-RAC do not provide the claimed privacy levels by using a man-in-the-middle attack. The existence of these attacks voids the presented privacy proofs for EC-RAC.
2010
EPRINT
On the Efficiency and Security of Pairing-Based Protocols in the Type 1 and Type 4 Settings
We focus on the implementation and security aspects of cryptographic protocols that use Type 1 and Type 4 pairings. On the implementation front, we report improved timings for Type 1 pairings derived from supersingular elliptic curves in characteristic 2 and 3 and the first timings for supersingular genus-2 curves in characteristic 2 at the 128-bit security level. In the case of Type 4 pairings, our main contribution is a new method for hashing into ${\mathbb G}_2$ which makes the Type 4 setting almost as efficient as Type 3. On the security front, for some well-known protocols we discuss to what extent the security arguments are tenable when one moves to genus-2 curves in the Type 1 case. In Type 4, we observe that the Boneh-Shacham group signature scheme, the very first protocol for which the Type 4 setting was introduced in the literature, is trivially insecure, and we describe a small modification that appears to restore its security.
2010
EPRINT
On the Impossibility of Cryptography Alone for Privacy-Preserving Cloud Computing
Cloud computing denotes an architectural shift toward thin clients and conveniently centralized provision of computing resources. Clients’ lack of direct resource control in the cloud prompts concern about the potential for data privacy violations, particularly abuse or leakage of sensitive information by service providers. Cryptography is an oft-touted remedy. Among its most powerful primitives is fully homomorphic encryption (FHE), dubbed by some the field’s “Holy Grail,” and recently realized as a fully functional construct with seeming promise for cloud privacy. We argue that cryptography alone can’t enforce the privacy demanded by common cloud computing services, even with such powerful tools as FHE. We formally define a hierarchy of natural classes of private cloud applications, and show that no cryptographic protocol can implement those classes where data is shared among clients. We posit that users of cloud services will also need to rely on other forms of privacy enforcement, such as tamperproof hardware, distributed computing, and complex trust ecosystems.
2010
EPRINT
On the Indifferentiability of the Gr{\o}stl Hash Function
The notion of indifferentiability, introduced by Maurer et al., is an important criterion for the security of hash functions. Concretely, it ensures that a hash function has no structural design flaws and thus guarantees security against generic attacks up to the exhibited bounds. In this work we prove the indifferentiability of Gr{\o}stl, a second round SHA-3 hash function candidate. Gr{\o}stl combines characteristics of the wide-pipe and chop-Merkle-Damg{\aa}rd iterations and uses two distinct permutations P and Q internally. Under the assumption that P and Q are random l-bit permutations, where l is the iterated state size of Gr{\o}stl, we prove that the advantage of a distinguisher to differentiate Gr{\o}stl from a random oracle is upper bounded by O((Kq)^4/2^l), where the distinguisher makes at most q queries of length at most K blocks. For the specific Gr{\o}stl parameters, this result implies that Gr{\o}stl behaves like a random oracle up to q=O(2^{n/2}) queries, where n is the output size. Furthermore, we show that the output transformation of Gr{\o}stl, as well as `Gr{\o}stail' (the composition of the final compression function and the output transformation), are clearly differentiable from a random oracle. This rules out indifferentiability proofs which rely on the idealness of a final state transformation.
2010
EPRINT
On the Insecurity of Parallel Repetition for Leakage Resilience
A fundamental question in leakage-resilient cryptography is: can leakage resilience always be amplified by parallel repetition? It is natural to expect that if we have a leakage-resilient primitive tolerating $\ell$ bits of leakage, we can take $n$ copies of it to form a system tolerating $n\ell$ bits of leakage. In this paper, we show that this is not always true. We construct a public key encryption system which is secure when at most $\ell$ bits are leaked, but if we take $n$ copies of the system and encrypt a share of the message under each using an $n$-out-of-$n$ secret-sharing scheme, leaking $n\ell$ bits renders the system insecure. Our results hold either in composite order bilinear groups under a variant of the subgroup decision assumption \emph{or} in prime order bilinear groups under the decisional linear assumption. We note that the $n$ copies of our public key systems share a common reference parameter.
2010
EPRINT
On the order of the polynomial $x^p-x-a$
In this note, we prove that the order of $x^p-x-1\in \mathbb{F}_p[x]$ is $\frac{p^p-1}{p-1}$, where $p$ is a prime and $\mathbb{F}_p$ is the finite field of size $p$. As a consequence, it is shown that $x^p-x-a\in \mathbb{F}_p[x]$ is primitive if and only if $a$ is a primitive element in $\mathbb{F}_p$.
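For a small prime, the stated order can be verified by brute force: since $f(x)=x^p-x-1$ is irreducible with $f(0)\neq 0$, its order (the least $e$ with $f\mid x^e-1$) equals the multiplicative order of $x$ modulo $f$, which can be found by repeatedly multiplying by $x$ and reducing with $x^p\equiv x+1$. A minimal sketch, assuming plain coefficient-list arithmetic over $\mathbb{F}_p$ and the arbitrary choice $p=5$:

    # Brute-force check that the order of x^p - x - 1 in F_p[x] is (p^p - 1)/(p - 1).
    def poly_order(p):
        # state: coefficients c[0..p-1] of the current power of x modulo f(x) = x^p - x - 1
        state = [0] * p
        state[1] = 1                      # start from x^1
        e = 1
        one = [1] + [0] * (p - 1)
        while state != one:
            # multiply by x: shift up, then reduce the overflow via x^p = x + 1 (mod f)
            top = state[-1]
            state = [0] + state[:-1]
            state[0] = (state[0] + top) % p
            state[1] = (state[1] + top) % p
            e += 1
        return e

    p = 5
    print(poly_order(p), (p**p - 1) // (p - 1))   # both print 781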
2010
EPRINT
On the Public Key Replacement and Universal Forgery Attacks of Short Certificateless Signature
Certificateless cryptography eliminates the need for certificates in the PKI and solves the inherent key escrow problem of ID-based cryptography. Recently, Du and Wen proposed a short certificateless signature scheme without the MapToPoint hash function, whose signatures are only half the size of DSA signatures. In this paper, after detailing the formal model of certificateless signature schemes, we show that Du and Wen's short certificateless signature scheme is insecure: it can be broken by a Type-I adversary who has the ability to replace users' public keys and access the signing oracle, and it also cannot resist a universal forgery attack by any third user.
2010
EPRINT
On the q-Strong Diffie-Hellman Problem
This note is an exposition of reductions among the q-strong Diffie-Hellman problem and related problems.
2010
EPRINT
On the Round Complexity of Covert Computation
In STOC'05, von Ahn, Hopper and Langford introduced the notion of covert computation. In covert computation, a party runs a secure computation protocol over a covert (or steganographic) channel without knowing if the other parties are participating as well or not. At the end of the protocol, if all parties participated in the protocol and if the function output is "favorable" to all parties, then the output is revealed (along with the fact that everyone participated). All covert computation protocols known so far require a large polynomial number of rounds. In this work, we first study the question of the round complexity of covert computation and obtain the following results: 1) There does not exist a constant round covert computation protocol with respect to black box simulation even for the case of two parties. (In comparison, such protocols are known even for the multi-party case if there is no covertness requirement.) 2) By relying on the two slot non-black-box simulation technique of Pass (STOC'04) and techniques from cryptography in NC^0 (Applebaum et al, FOCS'04), we obtain a construction of a constant round covert multi-party computation protocol. Put together, the above adds one more example to the growing list of tasks for which non-black-box simulation techniques (introduced in the work of Barak in FOCS'01) are necessary. Finally, we study the problem of covert multi-party computation in the setting where the parties only have point to point (covert) communication channels. We observe that our covert computation protocol for the broadcast channel inherits, from the protocol of Pass, the property of secure composition in the bounded concurrent setting. Then, as an application of this protocol, somewhat surprisingly we show the existence of covert multi-party computation with point to point channels (assuming that the number of parties is a constant).
2010
EPRINT
On the Security of a Bidirectional Proxy Re-Encryption Scheme from PKC 2010
In PKC 2010, Matsuda, Nishimaki and Tanaka proposed a bidirectional proxy re-encryption (PRE) scheme without bilinear maps, and claimed that their scheme is chosen-ciphertext secure in the standard model. However, by giving a concrete attack, in this paper we indicate that their PRE scheme fails to achieve the chosen-ciphertext security. The purpose of this paper is to clarify the fact that, it is still an open problem to come up with a chosen-ciphertext secure PRE scheme without bilinear maps in the standard model.
2010
EPRINT
On the Security of a Novel Remote User Authentication Scheme using Smart Card based on ECDLP
In 2009, Jena et al. proposed a novel remote user authentication scheme using smart cards based on the ECDLP and claimed that the proposed scheme withstands security threats. This paper shows that Jena et al.'s scheme is vulnerable to serious security threats and also does not satisfy the attributes of an ideal password authentication scheme.
2010
EPRINT
On the Security of an Efficient Mobile Authentication Scheme for Wireless Networks
Tang and Wu proposed an efficient mobile authentication scheme for wireless networks and claimed that the scheme can effectively defend against all known attacks on mobile networks, including the denial-of-service attack. This article presents an existential replication attack on the scheme; as a result, an attacker can obtain the communication key between a mobile user and the accessed VLR. We then show how to improve the scheme and restore its security.
2010
EPRINT
On the Security of Identity Based Threshold Unsigncryption Schemes
Signcryption is a cryptographic primitive that provides confidentiality and authenticity simultaneously at a cost significantly lower than that of the naive combination of encrypting and signing the message. Threshold signcryption is used when a message to be sent needs the authentication of a certain number of members of an organisation: unless a given number of members (known as the threshold) join the signcryption process, a message cannot be signcrypted. Threshold unsigncryption is used when this constraint applies during the unsigncryption process. In this work, we cryptanalyze two threshold unsigncryption schemes. We show that neither scheme meets the stringent requirements of insider security and propose attacks on both confidentiality and unforgeability. We also propose an improved identity-based threshold unsigncryption scheme and give a formal proof of security in a new, stronger security model.
2010
EPRINT
On the Security of Non-Linear HB (NLHB) Protocol Against Passive Attack
As a variant of the HB authentication protocol for RFID systems, which relies on the complexity of decoding linear codes against passive attacks, Madhavan et al. presented the Non-Linear HB (NLHB) protocol. In contrast to HB, NLHB relies on the complexity of decoding a class of non-linear codes to render the passive attacks proposed against HB ineffective. In this paper, we show that passive attacks against the HB protocol are still applicable to NLHB, and that this protocol does not provide the desired security margin. In our attack, we first linearize the non-linear part of NLHB to obtain an HB equivalent of NLHB, and then exploit the passive attack techniques proposed for HB to evaluate the security margin of NLHB. The results show that although NLHB's security margin is somewhat higher than HB's against similar passive attack techniques, it has been overestimated and, contrary to what is claimed, NLHB is vulnerable to passive attacks against HB, especially when the noise vector in the protocol has low weight.
2010
EPRINT
On the Security of Pseudorandomized Information-Theoretically Secure Schemes
In this article, we discuss a naive method of randomness reduction for cryptographic schemes, which replaces the required perfect randomness with the output distribution of a computationally secure pseudorandom generator (PRG). We propose novel ideas and techniques for evaluating the indistinguishability between the random and pseudorandom cases, even against an adversary with a computationally unbounded attack algorithm. Hence the PRG-based randomness reduction can be effective even for information-theoretically secure cryptographic schemes, especially when the amount of information received by the adversary is small. In comparison to a preceding result of Dubrov and Ishai (STOC 2006), our result removes the requirement of the generalized notion of ``nb-PRGs'' and is effective for more general kinds of protocols. We give some numerical examples to show the effectiveness of our result in practical situations, and we also propose a further idea for improving the effect of the PRG-based randomness reduction.
2010
EPRINT
On the Static Diffie-Hellman Problem on Elliptic Curves over Extension Fields
We show that for any elliptic curve $E(\F_{q^n})$, if an adversary has access to a Static Diffie-Hellman Problem (Static DHP) oracle, then by making $O(q^{1-\frac{1}{n+1}})$ Static DHP oracle queries during an initial learning phase, for fixed $n>1$ and $q \rightarrow \infty$ the adversary can solve {\em any} further instance of the Static DHP in {\em heuristic} time $\tilde{O}(q^{1-\frac{1}{n+1}})$. While practical only for very small $n$, our algorithm reduces the security of elliptic curves defined over $\F_{p^2}$ and $\F_{p^4}$ proposed by Galbraith, Lin and Scott at EUROCRYPT 2009, should these curves be used in any protocol where a user can be made to act as a proxy Static DHP oracle. Our proposal also solves the {\em Delayed Target DHP} as defined by Freeman, and naturally extends to provide algorithms for solving the {\em Delayed Target DLP}, the {\em One-More DHP} and {\em One-More DLP} as studied by Koblitz and Menezes in the context of Jacobians of hyperelliptic curves of small genus. Lastly, we argue that for {\em any} group in which index calculus can be effectively applied, the above problems have a natural relationship, and will {\em always} be easier than the DLP.
2010
EPRINT
On the Use of Financial Data as a Random Beacon
In standard voting procedures, random audits are one method for increasing election integrity. In the case of cryptographic (or end-to-end) election verification, random challenges are often used to establish that the tally was computed correctly. In both cases, a source of randomness is required. In two recent binding cryptographic elections, this randomness was drawn from stock market data. This approach allows anyone with access to financial data to verify that the challenges were generated correctly and, assuming market fluctuations are unpredictable to some degree, that the challenges were generated at the correct time. However, the degree to which these fluctuations are unpredictable is not known to be sufficient for generating a fair and unpredictable challenge. In this paper, we use tools from computational finance to provide an estimate of the amount of entropy in the closing price of a stock. We estimate that for each of the 30 stocks in the Dow Jones Industrial Average, the entropy is between 6 and 9 bits per trading day. We then propose a straightforward protocol for regularly publishing verifiable 128-bit random seeds with entropy harvested over time from stock prices. These "beacons" can be used as challenges directly, or as a seed to a deterministic pseudorandom generator for creating larger challenges.
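The following is a hedged sketch of the beacon idea, not the paper's exact protocol: canonically encode the closing prices observed on a trading day, hash them, and truncate to 128 bits. The tickers and prices are made up for illustration, and the real protocol would accumulate entropy over several days.

    # Hedged sketch: derive a 128-bit verifiable seed from published closing prices.
    import hashlib

    def beacon_seed(closing_prices, date_label):
        """closing_prices: dict ticker -> closing price as published (e.g. in cents)."""
        data = date_label.encode()
        for ticker in sorted(closing_prices):            # canonical ordering
            data += f"|{ticker}:{closing_prices[ticker]}".encode()
        return hashlib.sha256(data).digest()[:16]        # truncate to 128 bits

    prices = {"AAA": 10342, "BBB": 2781, "CCC": 45210}   # hypothetical closes
    seed = beacon_seed(prices, "2010-06-30")
    print(seed.hex())    # use directly as a challenge, or to seed a PRG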
2010
EPRINT
On zero practical significance of “Key recovery attack on full GOST block cipher with zero time and memory”
In this paper we show that the related-key boomerang attack by E. Fleischmann et al. in the paper mentioned in the title does not allow recovery of the master key of the GOST block cipher with complexity less than that of exhaustive search. We then present modified attacks. Finally, we argue that these attacks, and the related-key approach itself, are of extremely limited practical applicability and do not represent a fundamental obstacle to the practical use of block ciphers such as GOST, AES and Kasumi.
2010
EPRINT
One Round Group Key Exchange with Forward Security in the Standard Model
Constructing a one round group key exchange (GKE) protocol that provides forward secrecy is an open problem in the literature. In this paper, we investigate whether or not the security of one round GKE protocols can be enhanced with any form of forward secrecy without increasing the number of rounds. We apply the {\em key evolving} approach used for forward secure encryption/signature schemes and then model the notion of forward security for the first time for key exchange protocols. This notion is slightly weaker than forward secrecy, considered traditionally for key exchange protocols. We then revise an existing one round GKE protocol to propose a GKE protocol with forward security. In the security proof of the revised protocol we completely avoid reliance on the random oracle assumption that was needed for the proof of the base protocol. Our security proof can be directly applied to the base protocol, making it the most efficient one round GKE protocol secure in the standard model. Our one round GKE protocol is generically constructed from the primitive of forward secure encryption. We also propose a concrete forward secure encryption scheme with constant size ciphertext that can be used to efficiently instantiate our protocol.
2010
EPRINT
One-round and authenticated three-party multiple key exchange protocol from pairings
One-round three-party authenticated key exchange protocols are extremely important for securing communications and are now extensively adopted in network communications. These protocols allow users to communicate securely over public networks simply by using easy-to-remember long-term private keys. In 2001, Harn and Lin proposed an authenticated key exchange protocol in which two parties generate four shared keys in one round, three of which provide perfect forward secrecy. This work, which aims to generalize two-party multiple key agreement to the three-party setting, presents a three-party multiple key exchange protocol based on bilinear pairings. The proposed protocol does not require the server's public key and requires only a single round. Compared with existing protocols, the proposed protocol is more efficient and provides greater security.
2010
EPRINT
One-Round Password-Based Authenticated Key Exchange
We show a general framework for constructing password-based authenticated key exchange protocols with optimal round complexity --- one message per party, sent simultaneously --- in the standard model, assuming the existence of a common reference string. When our framework is instantiated using bilinear-map cryptosystems, the resulting protocol is also (reasonably) efficient. Somewhat surprisingly, our framework can be adapted to give protocols (still in the standard model) that are universally composable, while still using only one (simultaneous) round.
2010
EPRINT
Online/Offline Identity-Based Signcryption Re-visited
In this paper, we re-define a cryptographic notion called Online/Offline Identity-Based Signcryption. It is an ``online/offline'' version of identity-based signcryption, where most of the computation is carried out offline, while the online part does not require any heavy computations such as pairings or multiplications on an elliptic curve. It is particularly suitable for power-constrained devices such as smart cards. We give a concrete implementation of online/offline identity-based signcryption. The construction is very efficient and flexible. Unlike all previous schemes in the literature, our scheme does not require knowledge of the receiver's information (either public key or identity) in the offline stage. The receiver's identity and the message to be signcrypted are only needed in the online stage. This feature provides great flexibility and makes the scheme practical for real-world applications. We prove that the proposed scheme meets strong security requirements in the random oracle model, assuming the Strong Diffie-Hellman (SDH) and Bilinear Diffie-Hellman Inversion (BDHI) problems are computationally hard.
2010
EPRINT
Optimal Adversary Behavior for the Serial Model of Financial Attack Trees
Attack tree analysis is used to estimate different parameters of general security threats based on information available for atomic subthreats. We focus on estimating the expected gains of an adversary based on both the cost and likelihood of the subthreats. Such a multi-parameter analysis is considerably more complicated than separate probability or skill level estimation, requiring exponential time in general. However, this paper shows that under reasonable assumptions a completely different type of optimal substructure exists which can be harnessed into a linear-time algorithm for optimal gains estimation. More concretely, we use a decision-theoretic framework in which a rational adversary sequentially considers and performs the available attacks. The assumption of rationality serves as an upper bound as any irrational behavior will just hurt the end result of the adversary himself. We show that if the attacker considers the attacks in a goal-oriented way, his optimal expected gains can be computed in linear time. Our model places the least restrictions on adversarial behavior of all known attack tree models that analyze economic viability of an attack and, as such, provides for the best efficiently computable estimate for the potential reward.
2010
EPRINT
Optimal Authentication of Operations on Dynamic Sets
We study the problem of authenticating outsourced set operations performed by an untrusted server over a dynamic collection of sets that are owned by a trusted source. We present efficient methods for authenticating fundamental set operations, such as \emph{union} and \emph{intersection} so that the client can verify the correctness of the received answer. Based on a novel extension of the security properties of \emph{bilinear-map accumulators}, our authentication scheme is the first to achieve \emph{optimality} in several critical performance measures: (1) \emph{the verification overhead at the client is optimal}, that is, the client can verify an answer in time proportional to the size of the query parameters and answer; (2) \emph{the update overhead at the source is constant}; (3) \emph{the bandwidth consumption is optimal}, namely constant between the source and the server and operation-sensitive between the client and the server (i.e., proportional only to the size of the query parameters and the answer); and (4) \emph{the storage usage is optimal}, namely constant at the client and linear at the source and the server. Updates and queries are also efficient at the server. In contrast, existing schemes entail high bandwidth and verification costs or high storage usage since they recompute the query over authentic data or precompute answers to all possible queries. We also show applications of our techniques to the authentication of \emph{keyword searches} on outsourced document collections (e.g., inverted-index queries) and of queries in outsourced \emph{databases} (e.g., equi-join queries). Since set intersection is heavily used in these applications, we obtain new authentication schemes that compare favorably to existing approaches.
2010
EPRINT
Optimal Average Joint Hamming Weight and Minimal Weight Conversion of d Integers
In this paper, we propose the minimal joint Hamming weight conversion for arbitrary binary expansions of $d$ integers. With redundant representations, a number can be represented by many expansions, and the minimal joint Hamming weight conversion is the algorithm that selects the expansion with the least joint Hamming weight. As the computation time of a cryptosystem strongly depends on the joint Hamming weight, the conversion can make the cryptosystem faster. Most existing conversions are limited to specific representations and are difficult to apply to other representations. Our conversion, on the other hand, is applicable to any binary expansion. The proposed conversion can also be used to determine minimal average weights in classes of representations for which they were not previously known. One of the most interesting results is that, for the expansion of integer pairs with digit set $\{0, \pm 1, \pm 3\}$, we show that the minimal average joint Hamming weight is $0.3575$. This improves the upper bound of $0.3616$ proposed by Dahmen, Okeya, and Takagi.
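The joint Hamming weight of $d$ expansions is the number of digit positions in which at least one expansion has a nonzero digit. The sketch below only computes this quantity for the plain binary expansions of $d$ integers; the paper's minimal-weight conversion over redundant digit sets such as $\{0,\pm1,\pm3\}$ is more involved and is not reproduced here.

    # Joint Hamming weight of the plain binary expansions of a list of integers:
    # count the columns that contain at least one nonzero digit.
    def joint_hamming_weight(values):
        weight = 0
        while any(values):
            if any(v & 1 for v in values):   # this column has at least one nonzero digit
                weight += 1
            values = [v >> 1 for v in values]
        return weight

    print(joint_hamming_weight([23, 14]))   # 10111 and 01110 -> every column nonzero -> 5
    print(joint_hamming_weight([16, 1]))    # 10000 and 00001 -> 2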
2010
EPRINT
Pair-wise Cryptographic Models for Secure Data Exchange in P2P Database Management Systems
A peer-to-peer database management system (P2PDBMS) is a collection of autonomous data sources, called peers. In this system each peer augments a conventional database management system with an inter-operability layer (i.e. mappings/policies) for sharing data and services. Peers exchange data in a pair-wise fashion on-the-fly in response to a query, without any centralized control. Generally, the communication link between two peers is insecure and peers create a temporary session while exchanging data. When peers exchange highly confidential data over an insecure communication network, such as the Internet, the data might be intercepted and disclosed by intruders. In a P2PDBMS there is no centralized control for data exchange, hence we cannot assume any central third-party security infrastructure (e.g. PKI) to protect confidential data. To date, there is no existing security protocol for secure data exchange in P2PDBMSs. In this paper we propose three models for secure data exchange in P2PDBMSs and the corresponding security protocols. The proposed protocols allow the peers to compute their secret session keys dynamically during data exchange, based on the policies between them. Our proposed protocols are robust against the man-in-the-middle attack, the masquerade attack, and the replay attack.
2010
EPRINT
Pairing computation on curves with efficiently computable endomorphism and small embedding degree
Scott uses an efficiently computable isomorphism to optimize pairing computation on a particular class of curves with embedding degree 2. He points out that pairing implementation thus becomes faster on these curves than on their supersingular equivalents, originally recommended by Boneh and Franklin for Identity-Based Encryption. We extend Scott's method to other classes of curves with small embedding degree and an efficiently computable endomorphism. In particular, we optimize pairing computation on a class of curves with embedding degree 4 and discriminant 1, which are interesting for pairing-based cryptography because they have very efficient arithmetic.
2010
EPRINT
Parallel Enumeration of Shortest Lattice Vectors
Lattice basis reduction is the problem of finding short vectors in lattices. The security of lattice-based cryptosystems is based on the hardness of lattice reduction. Furthermore, lattice reduction is used to attack well-known cryptosystems such as RSA. One of the algorithms used in lattice reduction is the enumeration algorithm (ENUM), which provably finds a shortest vector of a lattice. We present a parallel version of the lattice enumeration algorithm. Using multi-core CPU systems with up to 16 cores, our implementation achieves a speed-up factor of up to 14. Compared to the currently best public implementation, our parallel algorithm saves more than 90% of the runtime.
2010
EPRINT
Parallelizing the Camellia and SMS4 Block Ciphers - Extended version
The n-cell GF-NLFSR (Generalized Feistel-NonLinear Feedback Shift Register) structure [8] is a generalized unbalanced Feistel network that can be considered as a generalization of the outer function FO of the KASUMI block cipher. An advantage of this structure over other n-cell generalized Feistel networks, e.g. SMS4 [11] and Camellia [5], is that it is parallelizable for up to n rounds. In hardware implementations, this benefit translates to speeding up encryption by up to n times while consuming similar area and significantly less power. At the same time, n-cell GF-NLFSR structures offer similar proofs of security against differential cryptanalysis as conventional n-cell Feistel structures. We also ensure that the parallelized versions of Camellia and SMS4 are resistant against other block cipher attacks such as linear, boomerang, integral, impossible differential, higher order differential, interpolation, slide, XSL and related-key differential attacks.
2010
EPRINT
Perfectly Balanced Boolean Functions and Goli\'c Conjecture
The Goli\'c conjecture states that a necessary condition for a function to be perfectly balanced for any choice of tapping sequence is linearity of the function in its first or its last essential variable. In the current paper we prove the Goli\'c conjecture.
2010
EPRINT
Perfectly Secure Multiparty Computation and the Computational Overhead of Cryptography
We study the following two related questions: - What are the minimal computational resources required for general secure multiparty computation in the presence of an honest majority? - What are the minimal resources required for two-party primitives such as zero-knowledge proofs and general secure two-party computation? We obtain a nearly tight answer to the first question by presenting a perfectly secure protocol which allows $n$ players to evaluate an arithmetic circuit of size $s$ by performing a total of $\O(s\log s\log^2 n)$ arithmetic operations, plus an additive term which depends (polynomially) on $n$ and the circuit depth, but only logarithmically on $s$. Thus, for typical large-scale computations whose circuit width is much bigger than their depth and the number of players, the amortized overhead is just polylogarithmic in $n$ and $s$. The protocol provides perfect security with guaranteed output delivery in the presence of an active, adaptive adversary corrupting a $(1/3-\epsilon)$ fraction of the players, for an arbitrary constant $\epsilon>0$ and sufficiently large $n$. The best previous protocols in this setting could only offer computational security with a computational overhead of $\poly(k,\log n,\log s)$, where $k$ is a computational security parameter, or perfect security with a computational overhead of $\O(n\log n)$. We then apply the above result towards making progress on the second question. Concretely, under standard cryptographic assumptions, we obtain zero-knowledge proofs for circuit satisfiability with $2^{-k}$ soundness error in which the amortized computational overhead per gate is only {\em polylogarithmic} in $k$, improving over the $\omega(k)$ overhead of the best previous protocols. Under stronger cryptographic assumptions, we obtain similar results for general secure two-party computation.
2010
EPRINT
Perfectly Secure Oblivious RAM Without Random Oracles
We present an algorithm for implementing a secure oblivious RAM where the access pattern is perfectly hidden in the information-theoretic sense, without assuming that the CPU has access to a random oracle. In addition, we prove a lower bound on the amount of randomness needed for information-theoretically secure oblivious RAM.
2010
EPRINT
Piret and Quisquater's DFA on AES Revisited
At CHES 2003, Piret and Quisquater published a very efficient DFA on AES which has served as a basis for many variants published afterwards. In this paper, we revisit P&Q's DFA on AES and we explain how this attack can be much more efficient than originally claimed. In particular, we show that only 2 (resp. 3) faulty ciphertexts allow an attacker to efficiently recover the key in the case of AES-192 (resp. AES-256). Our attack on AES-256 is the most efficient attack on this key length published so far.
2010
EPRINT
Plaintext-Dependent Decryption: A Formal Security Treatment of SSH-CTR
This paper presents a formal security analysis of SSH in counter mode in a security model that accurately captures the capabilities of real-world attackers, as well as security-relevant features of the SSH specifications and the OpenSSH implementation of SSH. Under reasonable assumptions on the block cipher and MAC algorithms used to construct the SSH Binary Packet Protocol (BPP), we are able to show that the SSH BPP meets a strong and appropriate notion of security: indistinguishability under buffered, stateful chosen-ciphertext attacks. This result helps to bridge the gap between the existing security analysis of the SSH BPP by Bellare et al. and the recently discovered attacks against the SSH BPP by Albrecht et al. which partially invalidate that analysis.
2010
EPRINT
Position-Based Quantum Cryptography
In this work, we initiate the study of position-based cryptography in the quantum setting. The aim of position-based cryptography is to use the geographical position of a party as its only credential. This has interesting applications, e.g., it enables two military bases to talk to each other over insecure (i.e. neither private nor authenticated) channels and without having any pre-shared key, with the guarantee that only parties within the bases learn the content of the conversation. We present schemes for several important position-based cryptographic tasks: positioning, authentication, and key exchange, and we prove them unconditionally secure, i.e., without assuming any restriction on the adversaries (beyond the laws of quantum mechanics). At the core of our security proofs lies the strong complementary information tradeoff recently introduced by Renes and Boileau. An attractive feature of all our schemes is that they only involve ``simple'' quantum operations, namely to prepare, communicate and measure-upon-arrival individual qubits. We stress that the above position-based tasks are impossible in the classical setting without limiting the adversary. Therefore, our work shows that position-based quantum cryptography is one of the rare examples besides QKD for which there is such a strong separation between classical and quantum cryptography. Besides the schemes for which we give rigorous security proofs, we also present a couple of significantly more efficient schemes for which we can merely conjecture security; proving them secure remains an interesting challenge. Our results open a fascinating new direction for position-based security in cryptography where security of protocols is solely based on the laws of physics and proofs of security do not require any pre-existing infrastructure.
2010
EPRINT
Practical Adaptive Oblivious Transfer from a Simple Assumption
We present the first efficient, adaptive oblivious transfer protocol which is fully-simulatable under a simple assumption in the standard model. The sole complexity assumption required is that given (g, g^a, g^b, g^c, Q), where g generates a bilinear group of prime order p and a, b, c are selected randomly from Zp, it is hard to decide if Q = g^{abc}. In an adaptive oblivious transfer protocol, a sender with a database of messages and a receiver repeatedly interact in such a way that the receiver obtains one message per interaction of his choice (and nothing more) while the sender learns nothing about any of the choices. All prior protocols in the standard model require dynamic "q-based" assumptions, where the number of group elements in the assumption input grows with the size of the sender's database. Our construction makes an important change to the established "assisted decryption" technique for designing adaptive OT. As in prior works, the sender commits to a database of n messages by publishing an encryption of each message and a signature on each encryption. Then, each transfer phase can be executed in time independent of n as the receiver blinds one of the encryptions and proves knowledge of the blinding factors and a signature on this encryption, after which the sender helps the receiver decrypt the chosen ciphertext. One of the main obstacles to designing an adaptive OT scheme from a simple assumption is realizing a suitable signature for this purpose (i.e., enabling signatures on group elements in a manner that later allows for efficient proofs.) We make the observation that a secure signature scheme is not necessary for this paradigm, provided that signatures can only be forged in certain ways. We then show how to efficiently integrate an insecure signature into a secure adaptive OT construction. We believe this construction and its underlying techniques may be of interest in designing other privacy-preserving protocols from simple complexity assumptions.
2010
EPRINT
Practical consequences of the aberration of narrow-pipe hash designs from ideal random functions
In a recent note to the NIST hash-forum list, the following observation was presented: narrow-pipe hash functions differ significantly from ideal random functions $H:\{0,1\}^{N} \rightarrow \{0,1\}^n$ that map bit strings from a big domain where $N=n+m,\ m\geq n$ ($n=256$ or $n=512$). Namely, for an ideal random function with a big domain space $\{0,1\}^{N}$ and a finite co-domain space $Y=\{0,1\}^n$, for every element $y \in Y$, the probability $Pr\{H^{-1}(y) = \varnothing\} \approx e^{-2^{m}} \approx 0$ where $H^{-1}(y) \subseteq \{0,1\}^{N}$ and $H^{-1}(y) = \{x \ |\ H(x)=y \}$ (in words, the probability that elements of $Y$ are ``unreachable'' is negligible). However, for narrow-pipe hash functions, for certain values of $N$ (values for which the last padded block processed by the compression function contains no message bits), there exists a huge non-empty subset $Y_\varnothing \subseteq Y$ with a volume $|Y_\varnothing|\approx e^{-1}|Y|\approx 0.36 |Y|$ such that for every $y \in Y_\varnothing,\ H^{-1}(y) = \varnothing$. In this paper we extend the same finding to SHA-2 and show consequences of this aberration when narrow-pipe hash functions are employed in HMAC and in two widely used protocols: 1. The pseudo-random function defined in SSL/TLS 1.2 and 2. The Password-based Key Derivation Function No.1, i.e. PBKDF1.
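The $e^{-1}$ fraction quoted above is easy to reproduce empirically: a random function from a set of size $N$ to itself misses a $(1-1/N)^N\approx e^{-1}$ fraction of its codomain, which models the degenerate narrow-pipe case in which the final compression call carries no message bits. A small simulation, with an arbitrary toy size $N$:

    # Fraction of unreachable outputs of a random function from a set of size N to itself.
    import math, random

    def unreachable_fraction(N, trials=20):
        total = 0.0
        for _ in range(trials):
            image = {random.randrange(N) for _ in range(N)}   # image of one random function
            total += 1 - len(image) / N
        return total / trials

    N = 1 << 16
    print(unreachable_fraction(N))   # roughly 0.368
    print(math.exp(-1))              # 0.3678...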
2010
EPRINT
Practical ID-based Encryption for Wireless Sensor Network
In this paper, we propose a new practical identity-based encryption scheme which is suitable for wireless sensor networks (WSNs). We call it \textit{Receiver-Bounded Online/Offline Identity-based Encryption} (RB-OOIBE). It splits the encryption process into two parts -- the offline and the online part. In the offline part, all heavy computations are done without knowledge of the receiver's identity and the plaintext message. In the online stage, only light computations such as modular operations and symmetric-key encryption are required, together with the receiver's identity and the plaintext message. Moreover, since each offline ciphertext can be re-used for the same receiver, the number of offline ciphertexts the encrypter holds bounds only the number of receivers rather than the number of messages to be encrypted. In this way, a sensor node (with limited computation power and limited storage) in a WSN can send encrypted data easily: a few offline ciphertexts can be computed in the manufacturing stage, while the online part is light enough for the sensor to process. We propose an efficient construction for this new notion. The scheme can be proven selective-ID CCA secure in the standard model. Compared to previous online/offline identity-based encryption schemes, our scheme avoids a storage requirement proportional to the number of messages to be sent. The improvement is very significant if many messages are sent to few receivers.
2010
EPRINT
Practical Improvements of Profiled Side-Channel Attacks on a Hardware Crypto-Accelerator
This article investigates the relevance of the theoretical framework for profiled side-channel attacks presented by F.-X. Standaert et al. at Eurocrypt 2009. The analysis consists of a case study based on side-channel measurements acquired experimentally from a hardwired cryptographic accelerator. Therefore, with respect to previous formal analyses carried out on software measurements or on simulated data, the investigations we describe are more complex, due to the underlying chip's architecture and to the large amount of algorithmic noise. In this difficult context, we show, however, that with an engineer's mindset two techniques can greatly improve both the off-line profiling and the on-line attack. First, we explore the appropriateness of different choices for the sensitive variables. We show that a skilled attacker aware of the register transfers occurring during the cryptographic operations can select the most adequate distinguisher, thus increasing the success rate. Second, we introduce a method based on thresholding of the leakage data to accelerate the profiling or the matching stages. Indeed, leveraging an engineer's common sense, it is possible to visually foresee the shape of some eigenvectors and thus push their estimation towards its asymptotic value by deliberately zeroing weak components that contain mainly non-informational noise. This method empowers the attacker in that it saves traces when converging towards the correct value of the secret. Concretely, we demonstrate a fivefold speed-up in the on-line phase of the attack.
2010
EPRINT
Practical NFC Peer-to-Peer Relay Attack using Mobile Phones
NFC is a standardised technology providing short-range RFID communication channels for mobile devices. Peer-to-peer applications for mobile devices are receiving increased interest and in some cases these services are relying on NFC communication. It has been suggested that NFC systems are particularly vulnerable to relay attacks, and that the attacker's proxy devices could even be implemented using off-the-shelf NFC-enabled devices. This paper describes how a relay attack can be implemented against systems using legitimate peer-to-peer NFC communication by developing and installing suitable MIDlets on the attacker's own NFC-enabled mobile phones. The attack does not need to access secure program memory nor use any code signing, and can use publicly available APIs. We go on to discuss how relay attack countermeasures using device location could be used in the mobile environment. These countermeasures could also be applied to prevent relay attacks on contactless applications using `passive' NFC on mobile phones.
2010
EPRINT
Practical-time Attack on the Full MMB Block Cipher
Modular Multiplication based Block Cipher (MMB) is a block cipher designed by Daemen \emph{et al.} as an alternative to the IDEA block cipher. In this paper, we give a practical-time attack on the full MMB with adaptive chosen plaintexts and ciphertexts. Using a constructive sandwich distinguisher for 5 of the 6 rounds of MMB that holds with probability 1, we give a key-recovery attack on the full MMB with data complexity $2^{40}$ and time complexity $2^{13.4}$ MMB encryptions. Then a rectangle-like sandwich attack on the full MMB is presented, with $2^{66.5}$ chosen plaintexts, $2^{64}$ MMB encryptions and $2^{70.5}$ memory bytes. In addition, we show an improved differential attack on the full MMB with data complexity of $2^{96}$ chosen plaintexts and ciphertexts, time complexity $2^{64}$ encryptions and $2^{66}$ bytes of memory.
2010
EPRINT
Predicate-Based Key Exchange
We provide the first description of and security model for authenticated key exchange protocols with predicate-based authentication. In addition to the standard goal of session key security, our security model also provides for credential privacy: a participating party learns nothing more about the other party's credentials than whether they satisfy the given predicate. Our model also encompasses attribute-based key exchange since it is a special case of predicate-based key exchange. We demonstrate how to realize a secure predicate-based key exchange protocol by combining any secure predicate-based signature scheme with the basic Diffie-Hellman key exchange protocol, providing an efficient and simple solution.
2010
EPRINT
Preventing Pollution Attacks in Multi-Source Network Coding
Network coding is a method for achieving channel capacity in networks. The key idea is to allow network routers to linearly mix packets as they traverse the network so that recipients receive linear combinations of packets. Network coded systems are vulnerable to pollution attacks where a single malicious node floods the network with bad packets and prevents the receiver from decoding correctly. Cryptographic defenses to these problems are based on homomorphic signatures and MACs. These proposals, however, cannot handle mixing of packets from multiple sources, which is needed to achieve the full benefits of network coding. In this paper we address integrity of multi-source mixing. We propose a security model for this setting and provide a generic construction.
2010
EPRINT
Privacy-friendly Incentives and their Application to Wikipedia (Extended Version)
Double-blind peer review is a powerful method to achieve high quality and thus trustworthiness of user-contributed content. Facilitating such reviews requires incentives as well as privacy protection for the reviewers. In this paper, we present the concept of privacy-friendly incentives and discuss the properties required from it. We then propose a concrete cryptographic realization based on ideas from anonymous e-cash and credential systems. Finally, we report on our software's integration into the MediaWiki software.
2010
EPRINT
Privacy-Preserving Matching Protocols for Attributes and Strings
In this technical report we present two new privacy-preserving matching protocols for singular attributes and strings, respectively. The first one is used for matching of common attributes without revealing unmatched ones to each other. The second protocol is used to discover the longest common sub-string of two input strings in a privacy-preserving manner. Compared with previous work, our solutions are efficient and suitable to implement for many different applications, e.g., discovery of common worm signatures, computation of similarity of IP payloads.
2010
EPRINT
Privacy-Preserving Multi-Objective Evolutionary Algorithms
Existing privacy-preserving evolutionary algorithms are limited to specific problems securing only cost function evaluation. This lack of functionality and security prevents their use for many security sensitive business optimization problems, such as our use case in collaborative supply chain management. We present a technique to construct privacy-preserving algorithms that address multi-objective problems and secure the entire algorithm including survivor selection. We improve performance over Yao's protocol for privacy-preserving algorithms and achieve solution quality only slightly inferior to the multi-objective evolutionary algorithm NSGA-II.
2010
EPRINT
Privacy-Preserving RFID Systems: Model and Constructions
In this paper, we study systems where a reader wants to authenticate and identify legitimate RFID tags. Such a system thus needs to be correct (legitimate tags are accepted) and sound (fake tags are rejected). Moreover, an RFID tag in a privacy-preserving system should be anonymous and untraceable, except for the legitimate reader. We here present the first security model for RFID authentication/identification privacy-preserving systems which is at the same time complete and easy to use. Our correctness property takes into account active adversaries. Our soundness property incorporates the case of adversaries realizing relay attacks. Finally, our privacy model includes adversaries with no restrictions on their interactions with the system and moreover takes into account the case of ``future correlations''. We next propose several constructions, based on the work of Vaudenay, proving that (i) our strongest property is at least as strong as those of Vaudenay and (ii) this property is achievable by efficient schemes.
2010
EPRINT
Private and Continual Release of Statistics
We ask the question – how can websites and data aggregators continually release updated statistics, and meanwhile preserve each individual user’s privacy? Suppose we are given a stream of 0’s and 1’s. We propose a differentially private continual counter that outputs at every time step the approximate number of 1’s seen thus far. Our counter construction has error that is only poly-log in the number of time steps. We can extend the basic counter construction to allow websites to continually give top-k and hot items suggestions while preserving users’ privacy.
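The abstract does not spell out the construction; as a rough illustration of how poly-logarithmic error can be achieved for continual counting, the sketch below implements the generic binary-tree (dyadic-interval) technique with Laplace noise, under assumed parameters T and epsilon. It is a textbook mechanism shown for context, not the authors' specific counter: each output equals the true prefix sum plus O(log T) noise terms.

    import math, random

    def laplace(scale):
        # sample Laplace(0, scale) as the difference of two exponentials
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    class PrivateCounter:
        """Continual 0/1 counter: every released value is a true prefix sum plus cached Laplace noise."""
        def __init__(self, T, epsilon):
            self.levels = max(1, math.ceil(math.log2(T)) + 1)
            self.scale = self.levels / epsilon   # epsilon is split across the tree levels
            self.bits = []                       # the 0/1 stream seen so far
            self.noise = {}                      # dyadic interval -> cached Laplace noise

        def _dyadic_cover(self, t):
            # disjoint aligned dyadic intervals whose union is [1, t]
            cover, hi = [], t
            while hi > 0:
                size = hi & -hi
                cover.append((hi - size + 1, hi))
                hi -= size
            return cover

        def step(self, bit):
            self.bits.append(bit)
            estimate = 0.0
            for lo, hi in self._dyadic_cover(len(self.bits)):
                if (lo, hi) not in self.noise:
                    self.noise[(lo, hi)] = laplace(self.scale)
                estimate += sum(self.bits[lo - 1:hi]) + self.noise[(lo, hi)]
            return estimate

    counter = PrivateCounter(T=1024, epsilon=1.0)
    outputs = [counter.step(random.randint(0, 1)) for _ in range(1000)]

Since each stream bit touches at most one cached interval per level, and each interval's noise has scale (levels / epsilon), the whole stream of outputs is epsilon-differentially private while the error at time t involves only about log t noise terms.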
2010
EPRINT
Proposal of a Signature Scheme based on STS Trapdoor
A new digital signature scheme based on the Stepwise Triangular Scheme (STS) is proposed. The proposed trapdoor resolves the vulnerability of STS and is secure against both Gröbner basis and rank attacks. In addition, as a basic trapdoor, it is more efficient than existing systems. With an efficient implementation, the resulting Multivariate Public Key Cryptosystem (MPKC) signature scheme has signatures longer than the message by less than 25%, for example.
2010
EPRINT
Protocols for Reliable and Secure Message Transmission
Consider the following problem: a sender S and a receiver R are part of an unreliable, connected, distributed network. The distrust in the network is modelled by an entity called the adversary, who has unbounded computing power and who can corrupt some of the nodes of the network (excluding S and R) in a variety of ways. S wishes to send to R a message m that consists of $\ell$ elements, where $\ell \geq 1$, selected uniformly from a finite field F. The challenge is to design a protocol such that, after interacting with S as per the protocol, R outputs m without any error (perfect reliability). Moreover, this should hold irrespective of the disruptive actions of the adversary. This problem is called reliable message transmission, or RMT for short. The problem of secure message transmission, or SMT for short, requires the additional constraint that the adversary should not get any information about the message whatsoever in the information-theoretic sense (perfect secrecy). Security against an adversary with infinite computing power is also known as non-cryptographic, information-theoretic or Shannon security, and this is the strongest notion of security. Notice that since the adversary has unbounded computing power, we cannot solve the RMT and SMT problems by using classical cryptographic primitives such as public-key cryptography, digital signatures, authentication schemes, etc., as the security of all these primitives holds only against an adversary with polynomially bounded computing power. The RMT and SMT problems can be studied in various network models and adversarial settings. We may use the following parameters to describe the different settings/models for studying RMT/SMT: (1) type of underlying network --- undirected graph, directed graph, hypergraph; (2) type of communication --- synchronous, asynchronous; (3) adversary capacity --- threshold static, threshold mobile, non-threshold static, non-threshold mobile; (4) type of faults --- fail-stop, passive, Byzantine, mixed. Irrespective of the setting in which RMT/SMT is studied, the following issues are common: (1) Possibility: what are the necessary and sufficient structural conditions to be satisfied by the underlying network for the existence of any RMT/SMT protocol tolerating a given type of adversary? (2) Feasibility: once the existence of an RMT/SMT protocol in a network is ascertained, the next natural question is whether there exists an efficient protocol on the given network. (3) Optimality: given a message of a specific length, what is the minimum communication complexity (lower bound) needed by any RMT/SMT protocol to transmit the message, and how can one design a polynomial-time RMT/SMT protocol whose total communication complexity matches this lower bound (an optimal protocol)? In this dissertation, we look into the above issues in several network models and adversarial settings. The thesis reports several new/improved/efficient/optimal solutions, gives affirmative/negative answers to several significant open problems and, last but not least, provides the first solutions to several newly formulated problems.
2010
EPRINT
Provably Secure Higher-Order Masking of AES
Implementations of cryptographic algorithms are vulnerable to Side Channel Analysis (SCA). To counteract it, masking schemes are usually involved which randomize key-dependent data by the addition of one or several random value(s) (the masks). When $d$th-order masking is involved (i.e. when $d$ masks are used per key-dependent variable), the complexity of performing an SCA grows exponentially with the order $d$. The design of generic $d$th-order masking schemes taking the order $d$ as security parameter is therefore of great interest for the physical security of cryptographic implementations. This paper presents the first generic $d$th-order masking scheme for AES with a provable security and a reasonable software implementation overhead. Our scheme is based on the hardware-oriented masking scheme published by Ishai et al. at Crypto 2003. Compared to this scheme, our solution can be efficiently implemented in software on any general-purpose processor. This result is of importance considering the lack of solution for $d\geq 3$.
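For context, the hardware-oriented gadget of Ishai et al. that the abstract builds on can be sketched in a few lines; the Python below splits a byte into d+1 XOR shares and multiplies two shared values with the ISW gadget. This is a toy bitwise illustration of the underlying masking idea only, not the authors' software masking of the AES S-box (which works with multiplication in GF(256) rather than a plain AND).

    import secrets

    def share(x, d):
        # dth-order Boolean masking: d random bytes plus one dependent share, XOR-summing to x
        masks = [secrets.randbelow(256) for _ in range(d)]
        last = x
        for m in masks:
            last ^= m
        return masks + [last]

    def unshare(shares):
        out = 0
        for s in shares:
            out ^= s
        return out

    def isw_and(a, b):
        # ISW (CRYPTO 2003) secure multiplication of two Boolean-shared bytes (bitwise AND)
        n = len(a)
        r = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                r[i][j] = secrets.randbelow(256)
                r[j][i] = (r[i][j] ^ (a[i] & b[j])) ^ (a[j] & b[i])
        c = []
        for i in range(n):
            ci = a[i] & b[i]
            for j in range(n):
                if j != i:
                    ci ^= r[i][j]
            c.append(ci)
        return c

    d = 3                                    # masking order
    x, y = share(0xA7, d), share(0x3C, d)
    assert unshare(isw_and(x, y)) == 0xA7 & 0x3C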
2010
EPRINT
Pseudo-Linear Approximations for ARX Ciphers: With Application to Threefish
The operations addition modulo 2^n and exclusive-or have recently been combined to obtain an efficient mechanism for nonlinearity in block cipher design. In this paper, we show that ciphers using this approach may be approximated by pseudo-linear expressions relating groups of contiguous bits of the round key, round input, and round output. The bias of an approximation can be large enough for known plaintext attacks. We demonstrate an application of this concept to a reduced-round version of the Threefish block cipher, a component of the Skein entry in the secure hash function competition.
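As a quick sanity check of the underlying phenomenon (a generic experiment, not the paper's approximations for Threefish), the sketch below measures how often a contiguous 8-bit window of x + y mod 2^32 coincides with the same window of x XOR y. The empirical rate is far above the 2^{-8} one would expect from a random function, which is exactly the kind of bias a pseudo-linear approximation on groups of contiguous bits can exploit.

    import random

    def window(v, lo, w):
        return (v >> lo) & ((1 << w) - 1)

    def agreement_rate(n=32, lo=12, w=8, trials=200_000):
        mask = (1 << n) - 1
        hits = 0
        for _ in range(trials):
            x, y = random.getrandbits(n), random.getrandbits(n)
            if window((x + y) & mask, lo, w) == window(x ^ y, lo, w):
                hits += 1
        return hits / trials

    if __name__ == "__main__":
        print(agreement_rate(), "vs. random-function baseline", 2 ** -8)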
2010
EPRINT
Pseudorandom Functions and Permutations Provably Secure Against Related-Key Attacks
This paper fills an important foundational gap with the first proofs, under standard assumptions and in the standard model, of the existence of pseudorandom functions (PRFs) and pseudorandom permutations (PRPs) resisting rich and relevant forms of related-key attacks (RKA). An RKA allows the adversary to query the function not only under the target key but under other keys derived from it in adversary-specified ways. Based on the Naor-Reingold PRF we obtain an RKA-PRF whose keyspace is a group and that is proven, under DDH, to resist attacks in which the key may be operated on by arbitrary adversary-specified group elements. Previous work was able only to provide schemes in idealized models (ideal cipher, random oracle), under new, non-standard assumptions, or for limited classes of attacks. The reason was technical difficulties that we resolve via a new approach and framework that, in addition to the above, yields other RKA-PRFs including a DLIN-based one derived from the Lewko-Waters PRF. Over the last 15 years cryptanalysts and blockcipher designers have routinely and consistently targeted RKA-security; it is visibly important for abuse-resistant cryptography; and it helps protect against fault-injection sidechannel attacks. Yet ours are the first significant proofs of existence of secure constructs. We warn that our constructs are proofs-of-concept in the foundational style and not practical.
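For readers unfamiliar with the base object, here is a toy sketch of the Naor-Reingold PRF on which the abstract's DDH-based RKA-PRF is built. The group parameters are deliberately tiny and the paper's RKA machinery is omitted; the point is only that the key is a vector of exponents (living in a group, as the abstract notes) and the output is g raised to their selective product.

    import secrets

    # Toy Schnorr group (far too small for real use): p = 2q + 1, g generates the order-q subgroup.
    p, q, g = 2039, 1019, 4

    def keygen(n):
        # Naor-Reingold key: exponents a_0, ..., a_n drawn from Z_q^*
        return [secrets.randbelow(q - 1) + 1 for _ in range(n + 1)]

    def nr_prf(key, x_bits):
        # F(x) = g^(a_0 * prod_{x_i = 1} a_i) mod p
        e = key[0]
        for a_i, bit in zip(key[1:], x_bits):
            if bit:
                e = (e * a_i) % q
        return pow(g, e, p)

    key = keygen(8)
    print(nr_prf(key, [1, 0, 1, 1, 0, 0, 1, 0]))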
2010
EPRINT
Pushing the Limits of ECM
This paper describes our implementation of phase one of the elliptic curve method on the Cell processor and reports on actual record factors obtained. Our implementation uses a new and particularly efficient variable radix multiplication of independent interest.
2010
EPRINT
Quantifying Trust
Trust is a central concept in public-key cryptography infrastructure and in security in general. We study its initial quantification and its spread patterns. There is empirical evidence that in trust-based reputation models for virtual communities, it pays to restrict the clusters of agents to small sets with high mutual trust. We propose and motivate a mathematical model where this phenomenon emerges naturally. In our model, we separate trust values from their weights. We motivate this separation using real examples, and show that in this model, trust converges to the extremes, agreeing with and accentuating the observed phenomenon. Specifically, in our model, cliques of agents of maximal mutual trust are formed, and the trust between any two agents that do not maximally trust each other converges to zero. We offer initial practical relaxations to the model that preserve some of the theoretical flavor.
2010
EPRINT
Quantum Proofs of Knowledge
We motivate, define and construct quantum proofs of knowledge, proofs of knowledge secure against quantum adversaries. Our constructions are based on a new quantum rewinding technique that allows us to extract witnesses in many classical proofs of knowledge. We give criteria under which a classical proof of knowledge is a quantum proof of knowledge. Combining our results with Watrous' results on quantum zero-knowledge, we show that there are zero-knowledge quantum proofs of knowledge for all languages in NP.
2010
EPRINT
Random Oracles in a Quantum World
Once quantum computers reach maturity, most of today's traditional cryptographic schemes based on RSA or discrete logarithms become vulnerable to quantum-based attacks. Hence, schemes which are more likely to resist quantum attacks, such as lattice-based systems or code-based primitives, have recently gained significant attention. Interestingly, a vast number of such schemes also deploy random oracles, which have mainly been analyzed in the classical setting. Here we revisit the random oracle model in cryptography in light of quantum attackers. We show that there are protocols using quantum-immune primitives and random oracles, such that the protocols are secure in the classical world, but insecure if a quantum attacker can access the random oracle via quantum states. We argue that most of the proof techniques related to the random oracle model in the classical case cannot be transferred immediately to the quantum case. Yet, we show that "quantum random oracles" can nonetheless be used to show, for example, that the basic Bellare-Rogaway encryption scheme is quantum-immune against plaintext attacks (assuming quantum-immune primitives).
2010
EPRINT
Rational Secret Sharing AS Extensive Games
Some punishments in rational secret sharing schemes turn out to be empty threats. In this paper, we first model 2-out-of-2 rational secret sharing as an extensive game with imperfect information, and then provide a strategy for achieving secret recovery in this game. Moreover, we prove that the strategy is a sequential equilibrium, which means that after any history of the game no player can benefit from deviating so long as the other players stick to the strategy. Therefore, by considering rational secret sharing as an extensive game, we design a scheme which eliminates empty threats. Apart from assuming the existence of a simultaneous broadcast channel, our scheme allows the dealer to be off-line, extends to t-out-of-n rational secret sharing, and also satisfies computational equilibria in some sense.
2010
EPRINT
Rational Secret Sharing without Broadcast
We consider the concept of rational secret sharing, initially introduced by Halpern and Teague \cite{ht04}, where players prefer to learn the secret rather than not, and moreover prefer that as few others learn the secret as possible. This paper introduces a rational secret sharing scheme which differs from previous RSS schemes in that it does not rely on broadcast to send messages but instead uses point-to-point transmissions. Moreover, the protocol does not rely on any cryptographic primitives and is coalition-resilient, except when the short player colludes with a long player.
2010
EPRINT
Recursive Information Hiding in Visual Cryptography
Visual Cryptography is a secret sharing scheme that uses the human visual system to perform computations. This paper presents a recursive hiding scheme for 3 out of 5 secret sharing. The idea used is to hide smaller secrets in the shares of a larger secret without an expansion in the size of the latter.
2010
EPRINT
Related Key Cryptanalysis of the LEX Stream Cipher
LEX is a stream cipher proposed by Alex Biryukov. It was selected for phase 3 of the eSTREAM competition. LEX is based on the Advanced Encryption Standard (AES) block cipher and uses a methodology called "Leak Extraction", proposed by Biryukov himself. In this paper, we cryptanalyze LEX using two related keys. We mount a key recovery attack on LEX which, using $2^{54.3}$ key streams, yields a complete round key with $2^{102}$ operations. This improves on the best existing cryptanalysis of LEX, which needs $2^{112}$ operations to ascertain the key.
2010
EPRINT
Related-Key Boomerang and Rectangle Attacks
This paper introduces the related-key boomerang and the related-key rectangle attacks. These new attacks can expand the cryptanalytic toolbox, and can be applied to many block ciphers. The main advantage of these new attacks is the ability to exploit the related-key model twice. Hence, even ciphers which were considered resistant to either boomerang or related-key differential attacks may be broken using the new techniques. In this paper we present a rigorous treatment of the related-key boomerang and the related-key rectangle distinguishers. Following this treatment, we devise optimal distinguishing algorithms using the LLR (Logarithmic Likelihood Ratio) statistics. We then analyze the success probability under reasonable independence assumptions, and verify the computation experimentally by implementing an actual attack on a 6-round variant of KASUMI. The paper ends with a demonstration of the strength of our newly proposed techniques with attacks on 10-round AES-192 and the full KASUMI.
2010
EPRINT
Related-Key Boomerang Attack on Block Cipher SQUARE
Square is an 8-round SPN block cipher; its round function and key schedule were slightly modified to design the building blocks of Rijndael. The key schedule of Square is simple and efficient but fully affine, so we apply a related-key attack to it. We find a 3-round related-key differential trail with probability 2^{-28}, which has zero differences on both its input and output states; this trail is called a local collision in [5]. By extending this related-key differential, we construct a 7-round related-key boomerang distinguisher and a successful attack on full-round Square. The best previously known attack on Square is the square attack on a 6-round reduced variant. In this paper, we present a key-recovery attack on the full round of Square using a related-key boomerang distinguisher. We construct a 7-round related-key boomerang distinguisher with probability 2^{-119} by finding local collisions, and calculate its probability using the ladder switch and local amplification techniques. As a result, one round is added on top of the distinguisher to construct a full-round attack on Square which recovers 16 bits of key information with 2^{36} encryptions and 2^{123} data.
2010
EPRINT
Relation for Algebraic Attack on E0 combiner
The low degree relation for algebraic attacks on E0 combiner given in \cite{DBLP:conf/crypto/ArmknechtK03} had an error. The correct version of low degree relation for the E0 combiner for use in algebraic attack is given.
2010
EPRINT
Relay Attacks on Passive Keyless Entry and Start Systems in Modern Cars
We demonstrate a relay attack on Passive Keyless Entry and Start (PKES) systems used in modern cars. The attack allows the attacker to enter and start a car by relaying messages between the car and the smart key. We build two attack realizations, wired and wireless physical layer relays, demonstrating that this attack is both practical and inexpensive. We further show that, for the attack to work, it is sufficient that the attacker's devices are placed within a meter from both the key and the car. Moreover, on the cars we tested, relaying the signal in one direction only (from the car to the key) is sufficient as the responses of the key are transmitted in UHF, which has a longer range. As the signals are relayed at the physical layer, the attack is completely independent of the modulation scheme, protocols, or the presence of strong authentication and encryption. We demonstrate the attack on recent car models from different manufacturers. Our attack works for a set of PKES systems that we evaluated and whose operation is described in this paper. However, given the generality of the relay attack, it is likely that PKES systems based on similar designs are also vulnerable to the same attack. In this work, we further propose simple countermeasures that minimize the risk of relay attacks and that can be immediately deployed by the car owners; however, these countermeasures also disable the operation of the PKES systems. Finally, we discuss countermeasures against relay attacks that were suggested in the open literature and we sketch a new PKES system that prevents relay attacks. This system preserves convenience of use, for which PKES systems were initially introduced.
2010
EPRINT
Ring Signature and Identity-Based Ring Signature from Lattice Basis Delegation
In this paper, we propose ring signature (RS) and identity-based ring signature (IBRS) schemes using the lattice basis delegation technique due to [10,22]. The schemes are unforgeable and satisfy anonymity in the random oracle model. Using the method in [28,29], we also extend our constructions to obtain RS and IBRS schemes in the standard model. To the best of the authors' knowledge, our proposed constructions constitute the first ring signature and identity-based ring signature schemes from lattices.
2010
EPRINT
Ring signature with divided private key
A ring signature is a group signature without a group manager, in which a signer produces a signature in the name of the group. In some situations it is necessary for a message to be signed by more than one person. The ring signature scheme with a divided key is an algorithm which allows a signature to be produced by a group of k entities out of a group of n entities. By construction, each signer has his own private key, which he uses in the signing phase. Verification is performed using a single common public key. The signature scheme is based on the discrete logarithm problem. This cryptographic primitive ensures the anonymity of the signature, which is a ring signature.
2010
EPRINT
Robust Combiner for Obfuscators
Practical software hardening schemes are heuristic and are not proven to be secure. One technique to enhance security is {\em robust combiners}. An algorithm $C$ is a robust combiner for specification $S$, e.g., privacy, if for any two implementations $X$ and $Y$ of a cryptographic scheme, the combined scheme $C(X,Y)$ satisfies $S$ provided {\em either} $X$ {\em or} $Y$ satisfies $S$. We present the first robust combiner for software hardening, specifically for obfuscation \cite{barak:obfuscation}. Obfuscators are software hardening techniques that are employed to protect execution of programs in a remote, hostile environment. Obfuscators protect the code (and secret data) of the program that is sent to the remote host for execution. Robust combiners are particularly important for software hardening, where there is no standard whose security is established. In addition, robust combiners for software hardening are interesting from a software engineering perspective since they introduce new techniques for software-only fault tolerance.
2010
EPRINT
Robust Fuzzy Extractors and Authenticated Key Agreement from Close Secrets
Consider two parties holding samples from correlated distributions W and W', respectively, that are within distance t of each other in some metric space. These parties wish to agree on a uniformly distributed secret key R by sending a single message over an insecure channel controlled by an all-powerful adversary. We consider both the keyless case, where the parties share no additional secret information, and the keyed case, where the parties share a long-term secret SK that they can use to generate a sequence of session keys {R_j} using multiple pairs {W_j, W'_j}. The former has applications to, e.g., biometric authentication, while the latter arises in, e.g., the bounded storage model with errors. Our results improve upon previous work in several respects: -- The best previous solution for the keyless case with no errors (i.e., t=0) requires the min-entropy of W to exceed 2n/3, where n is the bit-length of W. Our solution applies whenever min-entropy of W exceeds the minimal possible threshold n/2, and yields a longer key. -- Previous solutions for the keyless case in the presence of errors (i.e., t>0) required random oracles. We give the first constructions (for certain metrics) in the standard model. -- Previous solutions for the keyed case were stateful. We give the first stateless solution.
2010
EPRINT
Robust RFID Authentication Protocol with Formal Proof and Its Feasibility
The proliferation of RFID tags enhances everyday activities, for example by letting us reference the price, origin and circulation route of specific goods. On the other hand, this level of traceability gives rise to new privacy issues, and the topic of developing cryptographic protocols for RFID tags is garnering much attention. A large amount of research has been conducted in this area. In this paper, we reconsider the security model of RFID authentication with a man-in-the-middle adversary and communication faults. We define the model and security proofs via a game-based approach, which makes our security models compatible with formal security analysis tools. We show that an RFID authentication protocol is robust against the above attacks, and then provide game-based (hand-written) proofs and their verification using CryptoVerif.
2010
EPRINT
Round-Efficient Perfectly Secure Message Transmission Scheme Against General Adversary
In the model of Perfectly Secure Message Transmission Schemes (PSMTs), there are $n$ channels between a sender and a receiver, and they share no key. An infinitely powerful adversary $A$ can corrupt (observe and forge) the messages sent through some subset of the $n$ channels. For non-threshold adversaries called $Q^2$, Kumar et al. showed a many-round PSMT \cite{KGSR}. In this paper, we show round-efficient PSMTs against $Q^2$-adversaries. We first give a $3$-round PSMT which runs in polynomial time in the size of the underlying linear secret sharing scheme. We next present a $2$-round PSMT which is inefficient in general. (However, it is efficient in some special cases.)
2010
EPRINT
Sanitizable signatures with strong transparency in the standard model
Sanitizable signatures provide several security features which are useful in many scenarios, including military and medical applications. Sanitizable signatures allow a semi-trusted party to update some part of a digitally signed document without interacting with the original signer. Such schemes, where the verifier cannot identify whether the message has been sanitized, are said to possess strong transparency. In this paper, we describe the first efficient and provably secure sanitizable signature scheme with strong transparency in the standard model.
2010
EPRINT
Scalability and Security Conflict for RFID Authentication Protocols
Many RFID authentication protocols have been proposed to preserve security and privacy. Nevertheless, most of these protocols have been analyzed and shown not to provide security against some RFID attacks. Moreover, some of the secure ones are criticized because they suffer from scalability problems at the reader/server side, as the tag identification or authentication phase requires a linear search depending on the number of tags in the system. Recently, new authentication protocols have been presented to solve the scalability issue, i.e., they require constant time for tag identification while providing security. In this paper, we analyze two of these new RFID authentication protocols, SSM (very recently proposed by Song and Mitchell) and LRMAP (proposed by Ha et al.); to the best of our knowledge, no attacks on them have been published yet. These schemes take O(1) work to authenticate a tag and are designed to meet the privacy and security requirements. The common point of these protocols is that normal and abnormal states are defined for tags. In the normal state, the server authenticates the tag in constant time, while in the abnormal state, which occurs rarely, authentication is realized with a linear search. We show, however, that these authentication protocols do not provide untraceability, which is one of their design objectives. We also discover that the SSM protocol is vulnerable to a desynchronization attack that prevents a legitimate reader/server from authenticating a legitimate tag. Furthermore, in the light of these attacks, we conclude that allowing tags to be in different states may give a clue to an adversary in tracing tags, although such a design is preferred to achieve scalability and efficiency at the server side.
2010
EPRINT
Secrecy-Oriented First-Order Logical Analysis of Cryptographic Protocols
We present a computationally sound first-order system for security analysis of protocols that places secrecy of nonces and keys at its center. Even trace properties such as agreement and authentication are proven by first proving a non-trace property, namely secrecy. This results in a very powerful system, the working of which we illustrate on the agreement and authentication proofs for the Needham-Schroeder-Lowe public-key and the amended Needham-Schroeder shared-key protocols in the case of unlimited sessions. Unlike other available formal verification techniques, computational soundness of our approach does not require any idealizations about parsing of bitstrings or unnecessary tagging. In particular, we have total control over detecting or eliminating the possibility of type-flaw attacks.
2010
EPRINT
Secret Sharing Extensions based on the Chinese Remainder Theorem
In this paper, we investigate how to achieve verifiable secret sharing (VSS) schemes by using the Chinese Remainder Theorem (CRT). We first show that two schemes proposed earlier are not secure against an attack in which the dealer is able to distribute inconsistent shares to the users. Then we propose a new VSS scheme based on the CRT and prove its security. Using the proposed VSS scheme, we develop joint random secret sharing (JRSS) and proactive secret sharing protocols, which, to the best of our knowledge, are the first secure protocols of their kind based on the CRT.
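For readers unfamiliar with CRT-based sharing, the sketch below shows the classic (non-verifiable) Asmuth-Bloom threshold scheme that CRT-based VSS and JRSS constructions of this kind build on. The tiny moduli and threshold are illustrative choices only, not parameters from the paper.

    import random
    from functools import reduce

    # Toy Asmuth-Bloom (t = 2 out of 4) CRT secret sharing.
    m0 = 7                        # secrets live in Z_7
    moduli = [11, 13, 17, 19]     # pairwise-coprime share moduli
    t = 2                         # Asmuth-Bloom condition: 11 * 13 = 143 > m0 * 19 = 133

    def deal(secret):
        bound = moduli[0] * moduli[1]
        y = secret + m0 * random.randrange(bound // m0)   # lift the secret below the bound
        return [(m, y % m) for m in moduli]

    def crt(pairs):
        # standard Chinese Remainder reconstruction
        M = reduce(lambda acc, m: acc * m, (m for m, _ in pairs), 1)
        x = 0
        for m, r in pairs:
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)
        return x % M

    def reconstruct(shares_subset):
        # any t shares determine the lifted value exactly (their moduli product exceeds the bound)
        return crt(shares_subset) % m0

    shares = deal(5)
    assert reconstruct(shares[:2]) == 5 and reconstruct(shares[2:]) == 5

A VSS construction adds public commitments so that users can check the dealer distributed consistent shares, which is exactly the gap the attack on the earlier schemes exploits.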
2010
EPRINT
Secure and Fast Implementations of Two Involution Ciphers
Anubis and Khazad are closely related involution block ciphers. Building on two recent AES software results, this work presents a number of constant-time software implementations of Anubis and Khazad for processors with a byte-vector shuffle instruction, such as those that support SSSE3. For Anubis, the first is serial in the sense that it employs only one cipher instance and is compatible with all standard block cipher modes. Efficiency is largely due to the S-box construction that is simple to realize using a byte shuffler. The equivalent for Khazad runs two parallel instances in counter mode. The second for each cipher is a parallel bit-slice implementation in counter mode.
2010
EPRINT
Secure Code Update for Embedded Devices via Proofs of Secure Erasure
Remote attestation is the process of verifying the internal state of a remote embedded device. It is an important component of many security protocols and applications. Although techniques assisted by specialized secure hardware are effective, they are not yet viable for low-cost embedded devices. One notable alternative is software-based attestation, which is both less costly and more efficient. However, recent results identified weaknesses in some proposed methods, thus showing that security of remote software attestation remains a challenge. Inspired by these developments, this paper explores a different approach that relies neither on secure hardware nor on tight timing constraints. By taking advantage of the bounded memory/storage model of low-cost embedded devices and assuming a small amount of read-only memory (ROM), our approach uses a new primitive -- Proofs of Secure Erasure (PoSE-s). We show that, even though our PoSE-based approach is effective and provably secure, it is not cheap. However, it is particularly well-suited and practical for two other related tasks: secure code update and secure memory/storage erasure. We consider several flavors of PoSE-based protocols and demonstrate their feasibility in the context of existing commodity embedded devices.
2010
EPRINT
Secure Connectivity Model In Wireless Sensor Network(WSN) Using 1st Order Reed Muller Codes
In this paper, we suggest the idea of separately treating the connectivity and communication models of a Wireless Sensor Network (WSN). We then propose a novel connectivity model for a WSN using first-order Reed-Muller codes. While the model has a hierarchical structure, we show that it works equally well for distributed WSNs. Though one can use any communication model, we prefer to use the communication model suggested by Ruj and Roy [1] for all computations and results in our work. One might use two suitable secure (symmetric) cryptosystems on the two different models, viz. connectivity and communication. By doing so we show how resiliency and scalability are appreciably improved as compared to Ruj and Roy [1].
2010
EPRINT
Secure Guaranteed Computation
We introduce secure committed computation, where n parties commit in advance to compute a function over their private inputs; we focus on two party computations (n = 2). In committed computation, parties initially commit to the computation by providing some (validated) compensation, such that if a party fails to provide an appropriate input during protocol execution, then the peer receives the compensation. Enforcement of the commitments requires a trusted enforcement authority (TEA); however, the protocol protects confidentiality even from the TEA. Secure committed computation has direct practical applications, such as sensitive trading of financial products, and could also be used as a building block to motivate parties to complete protocols, e.g., ensuring unbiased coin tossing. The commitment can be either symmetric (both parties commit) or asymmetric (e.g., only a server commits to a client). Symmetric commitment should also be fair, i.e., one party cannot obtain commitment by the other party without committing as well. Our secure committed computation protocols are optimistic, i.e., the TEA is involved only if and when a party fails to participate (correctly). The protocols we present use two new building blocks, which may be of independent interest. The first is a protocol for optimistic fair secure computation, which is simpler and more efficient than previously known. The second is a protocol for two party computation secure against malicious participants, which is simple and efficient, and relies on a weakly-trusted third party. This protocol can be useful where a trusted third party is unavoidable, e.g., in secure committed or fair computation protocols.
2010
EPRINT
Secure Two-Party Computation via Cut-and-Choose Oblivious Transfer
Protocols for secure two-party computation enable a pair of parties to compute a function of their inputs while preserving security properties such as privacy, correctness and independence of inputs. Recently, a number of protocols have been proposed for the efficient construction of two-party computation secure in the presence of malicious adversaries (where security is proven secure under the standard simulation-based ideal/real model paradigm for defining security). In this paper, we present a protocol for this task that follows the methodology of using cut-and-choose to boost Yao's protocol to be secure in the presence of malicious adversaries. Relying on specific assumptions (DDH), we construct a protocol that is significantly more efficient and far simpler than the protocol of Lindell and Pinkas (Eurocrypt 2007) that follows the same methodology. We provide an exact, concrete analysis of the efficiency of our scheme and demonstrate that (at least for not very small circuits) our protocol is more efficient than any other known today.
2010
EPRINT
Security Analysis of a Threshold Proxy Signature Scheme
A t-out-of-n threshold proxy signature scheme allows an original signer to delegate his signing capability to a group of proxy signers, so that t or more proxy signers can generate valid signatures by cooperating. Recently, Liu and Huang proposed a variant of the threshold proxy signature scheme in which all proxy signers remain anonymous. The authors claimed that their construction satisfies unforgeability, proxy signer's deviation, identifiability, undeniability and verifiability. In this paper, however, we show that their scheme does not satisfy the proxy signer's deviation and identifiability requirements.
2010
EPRINT
Security Analysis of SIMD
In this paper we study the security of the SHA-3 candidate SIMD. We first show a new free-start distinguisher based on symmetry relations. It allows one to distinguish the compression function of SIMD from a random function with a single evaluation. However, we also show that this property is very hard to exploit to mount any attack on the hash function because of the mode of operation of the compression function. Essentially, if one can build a pair of symmetric states, the symmetry property can only be triggered once. In the second part, we show that a class of free-start distinguishers is not a threat to wide-pipe hash functions. In particular, this means that our distinguisher has a minimal impact on the security of the hash function, and we still have a security proof for the SIMD hash function. Intuitively, the reason why this distinguisher does not weaken the function is that getting into a symmetric state is about as hard as finding a preimage. Finally, in the third part we study differential paths in SIMD, and give an upper bound on the probability of related-key differential paths. Our bound is in the order of $2^{n/2}$ using very weak assumptions. Resistance to related-key attacks is often overlooked, but it is very important for hash function designs.
2010
EPRINT
Security Improvement on a Password-Authenticated Group Key Exchange Protocol
A group key exchange (GKE) protocol is designed to allow a group of parties communicating over a public network to establish a common secret key. As group-oriented applications gain popularity over the Internet, a number of GKE protocols have been suggested to provide those applications with a secure multicast channel. Among the many protocols is Yi et al.'s password-authenticated GKE protocol in which each participant is assumed to hold their individual password registered with a trusted server. A fundamental requirement for password-authenticated key exchange is security against off-line dictionary attacks. However, Yi et al.'s protocol fails to meet the requirement. In this work, we report this security problem with Yi et al.'s protocol and show how to solve it.
2010
EPRINT
Security of balanced and unbalanced Feistel Schemes with Linear Non Equalities
In this paper we study two security results ``above the birthday bound'' related to secret-key cryptographic problems: 1. the classical problem of the security of 4-, 5- and 6-round balanced random Feistel schemes; 2. the problem of the security of unbalanced Feistel schemes with contracting functions from $2n$ bits to $n$ bits. The latter problem was studied by Naor and Reingold \cite{NR99} and by \cite{YPL} with a proof of security up to the birthday bound. These two problems are treated in the same paper since their analysis is closely related, as we will see. For problem 1 we obtain security results very near the information bound (in $O(\frac{2^n}{n})$) with improved proofs and stronger explicit security bounds than previously known. For problem 2 we cross the birthday bound of Naor and Reingold. For some of our proofs we use \cite{A2}, submitted to Crypto 2010.
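To make the two objects of study concrete (a generic illustration with a hash-based stand-in for the random round functions, nothing from the paper's proofs), the sketch below evaluates a balanced Feistel scheme on a 2n-bit state and a contracting unbalanced scheme whose round function maps 2n bits to n bits; the 3n-bit state used here is one common form of such contracting schemes and is an assumption of the sketch.

    import hashlib

    def f(key, data, outlen):
        # stand-in for a random round function producing outlen bytes
        return hashlib.sha256(key + data).digest()[:outlen]

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def balanced_feistel(round_keys, block):
        # 2n-bit state (L, R); one round: (L, R) -> (R, L xor f_k(R))
        n = len(block) // 2
        L, R = block[:n], block[n:]
        for k in round_keys:
            L, R = R, xor(L, f(k, R, n))
        return L + R

    def contracting_feistel(round_keys, block):
        # 3n-bit state (A, B, C); one round: (A, B, C) -> (B, C, A xor f_k(B || C)),
        # where f_k contracts 2n bits to n bits
        n = len(block) // 3
        A, B, C = block[:n], block[n:2 * n], block[2 * n:]
        for k in round_keys:
            A, B, C = B, C, xor(A, f(k, B + C, n))
        return A + B + C

    keys = [bytes([i]) * 4 for i in range(6)]
    print(balanced_feistel(keys[:4], b"0123456789abcdef").hex())
    print(contracting_feistel(keys, b"0123456789ab").hex())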
2010
EPRINT
Security of Encryption Schemes in Weakened Random Oracle Models
Liskov proposed several weakened versions of the random oracle model, called {\em weakened random oracle models} (WROMs), to capture the vulnerability of ideal compression functions, which are expected to have the standard security of hash functions, i.e., collision resistance, second-preimage resistance, and one-wayness properties. The WROMs offer additional oracles to break such properties of the random oracle. In this paper, we investigate whether public-key encryption schemes in the random oracle model essentially require the standard security of hash functions by the WROMs. In particular, we deal with four WROMs associated with the standard security of hash functions; the standard, collision tractable, second-preimage tractable, first-preimage tractable ones (ROM, CT-ROM, SPT-ROM, and FPT-ROM, respectively), done by Numayama et al. for digital signature schemes in the WROMs. We obtain the following results: (1) The OAEP is secure in all the four models. (2) The encryption schemes obtained by the Fujisaki-Okamoto conversion (FO) are secure in the SPT-ROM. However, some encryption schemes with FO are insecure in the FPT-ROM. (3) We consider two artificial variants wFO and dFO of FO for separation of the WROMs in the context of encryption schemes. The encryption schemes with wFO (dFO, respectively) are secure in the CT-ROM (ROM, respectively). However, some encryption schemes obtained by wFO (dFO, respectively) are insecure in the SPT-ROM (CT-ROM, respectively). These results imply that standard encryption schemes such as the OAEP and FO-based one do not always require the standard security of hash functions. Moreover, in order to make our security proofs complete, we construct an efficient sampling algorithm for the binomial distribution with exponentially large parameters, which was left open in Numayama et al.'s paper.
2010
EPRINT
Security Proof of AugPAKE
In this paper, we show that the AugPAKE protocol provides the semantic security of session keys under the strong Diffie-Hellman (SDH) assumption in the random oracle model.
2010
EPRINT
Security Reductions of the Second Round SHA-3 Candidates
In 2007, the US National Institute for Standards and Technology announced a call for the design of a new cryptographic hash algorithm in response to vulnerabilities identified in existing hash functions, such as MD5 and SHA-1. NIST received many submissions, 51 of which got accepted to the first round. At present, 14 candidates are left in the second round. An important criterion in the selection process is the SHA-3 hash function security and more concretely, the possible security reductions of the hash function to the security of its underlying building blocks. While some of the candidates are supported with firm security reductions, for most of the schemes these results are still incomplete. In this paper, we compare the state of the art provable security reductions of the second round candidates. We discuss all SHA-3 candidates at a high functional level, and analyze and summarize the security reduction results. Surprisingly, we derive some security bounds from the literature, which the hash function designers seem to be unaware of. Additionally, we generalize the well-known proof of collision resistance preservation, such that all SHA-3 candidates with a suffix-free padding are covered.
2010
EPRINT
Security weakness of two authenticated key exchange protocols from pairings
Recently, Liu proposed two authenticated multiple key exchange protocols using pairings, and claimed two protocols featured many security attributes. In this paper, we show that Liu’s protocols are insecure. Both of Liu’s protocols cannot provide perfect forward secrecy.
2010
EPRINT
Security Weaknesses in Two Certificateless Signcryption Schemes
Recently, a certificateless signcryption scheme in the standard model was proposed by Liu et al. in \cite{LiuHZM10}. Another certificateless signcryption scheme in the standard model was proposed by Xie et al. in \cite{WZ09}. Here, we show that the schemes in \cite{LiuHZM10} and \cite{WZ09} are not secure against a Type-I adversary.
2010
EPRINT
Selecting Parameters for Secure McEliece-based Cryptosystems
In 1994, P. Shor showed that quantum computers will be able to break cryptosystems based on integer factorization and on the discrete logarithm, e.g. RSA or ECC. Code-based cryptosystems are promising alternatives to public key schemes based on these problems, and they are believed to be secure against quantum computer attacks. In this paper, we solve the problem of selecting optimal parameters for the McEliece cryptosystem that provide security until a given year and give detailed recommendations. Our analysis is based on the lower bound complexity estimates by Sendrier and Finiasz, and the security requirements model proposed by Lenstra and Verheul.
2010
EPRINT
Selecting Parameters for the Rainbow Signature Scheme - Extended Version -
Multivariate public key cryptography is one of the main approaches to guarantee the security of communication in a post-quantum world. One of the most promising candidates in this area is the Rainbow signature scheme, which was first proposed by J. Ding and D. Schmidt in 2005. In this paper we develop a model of security for the Rainbow signature scheme. We use this model to find parameters for Rainbow over GF(16), GF(31) and GF(256) which, under certain assumptions, guarantee the security of the scheme for now and the near future.
2010
EPRINT
Separable Hash Functions
We introduce a class of hash functions with the property that messages with the same hash are well separated in terms of their Hamming distance. We provide an example of such a function that uses cyclic codes and an elliptic curve group over a finite field. \smallskip A related problem is ensuring that the {\it consecutive distance} between messages with the same hash is as large as possible. We derive bounds on the c.d. separability factor of such hash functions.
2010
EPRINT
Sequential Rationality in Cryptographic Protocols
Much of the literature on rational cryptography focuses on analyzing the strategic properties of cryptographic protocols. However, due to the presence of computationally-bounded players and the asymptotic nature of cryptographic security, a definition of sequential rationality for this setting has thus far eluded researchers. We propose a new framework for overcoming these obstacles, and provide the first definitions of computational solution concepts that guarantee sequential rationality. We argue that natural computational variants of subgame perfection are too strong for cryptographic protocols. As an alternative, we introduce a weakening called threat-free Nash equilibrium that is more permissive but still eliminates the undesirable ``empty threats'' of non-sequential solution concepts. To demonstrate the applicability of our framework, we revisit the problem of implementing a mediator for correlated equilibria (Dodis-Halevi-Rabin, Crypto '00), and propose a variant of their protocol that is sequentially rational for a non-trivial class of correlated equilibria. Our treatment provides a better understanding of the conditions under which mediators in a correlated equilibrium can be replaced by a stable protocol.
2010
EPRINT
Short One-Time Signatures
We present a new one-time signature scheme having short signatures. Our new scheme supports aggregation, batch verification, and admits efficient proofs of knowledge. It has a fast signing algorithm, requiring only modular additions, and its verification cost is comparable to ECDSA verification. These properties make our scheme suitable for applications on resource-constrained devices such as smart cards and sensor nodes. Along the way, we give a unified description of five previous one-time signature schemes and improve parameter selection for these schemes, and as a corollary we give a fail-stop signature scheme with short signatures.
2010
EPRINT
Side-channel Analysis of Six SHA-3 Candidates
In this paper we study six 2nd round SHA-3 candidates from a side-channel cryptanalysis point of view. For each of them, we give the exact procedure and appropriate choice of selection functions to perform the attack. Depending on their inherent structure and the internal primitives used (Sbox, addition or XOR), some schemes are more prone to side channel analysis than others, as shown by our simulations.
2010
EPRINT
Signatures for Multi-source Network Coding
We consider the problem of securing inter-flow network coding with multiple sources. We present a practical homomorphic signature scheme that makes it possible to verify network-coded packets composed of data originating from different sources. The multi-source signature scheme circumvents the need for a secret key shared by all sources. Our solution is an extension of the pairing-based homomorphic signature scheme by Boneh et al. We prove the security of the extended scheme by showing a reduction to the single-source case. We evaluated the performance of the required computations, and our results imply that the solution is applicable in practice.
2010
EPRINT
Signing on Elements in Bilinear Groups for Modular Protocol Design
This paper addresses the construction of signature schemes whose verification keys, messages, and signatures are group elements and whose verification predicate is a conjunction of pairing product equations. We answer the open problem of constructing constant-size signatures by presenting an efficient scheme. The security is proven in the standard model based on a novel non-interactive assumption called the Simultaneous Flexible Pairing Assumption, which can be justified and has an optimal bound in the generic bilinear group model. We also present efficient schemes with advanced properties, including signing an unbounded number of group elements, allowing simulation in the common reference string model, signing messages from mixed groups in the asymmetric bilinear group setting, and strong unforgeability. Among many applications, we show two examples: an adaptively secure round-optimal blind signature scheme and a group signature scheme with efficient concurrent join. As a by-product, several homomorphic trapdoor commitment schemes and one-time signature schemes are also presented. In combination with the Groth-Sahai proof system, these schemes contribute to an efficient instantiation of modular constructions of cryptographic protocols.
2010
EPRINT
Simple and Efficient Public-Key Encryption from Computational Diffie-Hellman in the Standard Model
This paper proposes practical chosen-ciphertext secure public-key encryption systems that are provably secure under the computational Diffie-Hellman assumption, in the standard model. Our schemes are conceptually simpler and more efficient than previous constructions. We also show that in bilinear groups the size of the public-key can be shrunk from n to 2\sqrt{n} group elements, where n is the security parameter.
2010
EPRINT
Skew-Frobenius map on twisted Edwards curve
In this paper, we consider the Frobenius endomorphism on twisted Edwards curves and give the characteristic polynomial of the map. Applying the Frobenius endomorphism, we construct a skew-Frobenius map defined on the quadratic twist of a twisted Edwards curve. Our results show that the Frobenius endomorphism on a twisted Edwards curve and the skew-Frobenius endomorphism on the quadratic twist of a twisted Edwards curve can be exploited to devise fast point multiplication algorithms that do not use any point doubling. As an application, the GLV method can be used to speed up point multiplication on twisted Edwards curves.
2010
EPRINT
Small Scale Variants Of The Block Cipher PRESENT
In this note we define small-scale variants of the block cipher PRESENT [1]. The main reason for this is that the running time of some recent attacks (e.g. [2, 3]) remains unclear, as they are based on heuristics that are hard or even impossible to verify in practice. Those attacks usually require the full codebook of PRESENT to be available, and they work only if some independence assumptions hold in practice. While those assumptions are clearly wrong from a theoretical point of view, their impact on the running times of the attacks in question is not clear. With versions of PRESENT with smaller block size it might be possible to verify how those attacks scale and hopefully learn something about PRESENT itself.
2010
EPRINT
Solinas primes of small weight for fixed sizes
We give a list of the Solinas prime numbers of the form $f(2^k)=2^m - 2^n \pm 1$, $m \leq 2000$, with small modular reduction weight $wt < 15$, and $k=8,16,32,64$, i.e., $k$ is a multiple of the computer integer arithmetic word size. These can be useful in the construction of cryptographic protocols.
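As a rough illustration of how such a list can be regenerated (this is not the authors' code, and the modular-reduction-weight filter from the paper is omitted), the following Python sketch enumerates candidates 2^m - 2^n ± 1 with m and n multiples of a word size k and keeps the primes; the bound m_max and the use of sympy's primality test are assumptions of the sketch.

    # Sketch only: enumerate 2^m - 2^n +/- 1 with m, n multiples of the word size k
    # and keep the prime ones.  The paper's weight filter (wt < 15) is not applied here.
    from sympy import isprime

    def solinas_candidates(k, m_max):
        found = []
        for m in range(2 * k, m_max + 1, k):        # m is a multiple of k
            for n in range(k, m, k):                # 0 < n < m, also a multiple of k
                for s in (+1, -1):
                    if isprime((1 << m) - (1 << n) + s):
                        found.append((m, n, s))
        return found

    if __name__ == "__main__":
        for m, n, s in solinas_candidates(k=8, m_max=128):
            print(f"2^{m} - 2^{n} {'+' if s > 0 else '-'} 1 is prime")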
2010
EPRINT
Solving a 676-bit Discrete Logarithm Problem in $GF(3^{6n})$
Pairings on elliptic curves over finite fields are crucial for constructing various cryptographic schemes. The \eta_T pairing on supersingular curves over GF(3^n) is particularly popular since it is efficiently implementable. Taking into account the Menezes-Okamoto-Vanstone (MOV) attack, the discrete logarithm problem (DLP) in GF(3^{6n}) becomes a concern for the security of cryptosystems using \eta_T pairings in this case. In 2006, Joux and Lercier proposed a new variant of the function field sieve in the medium prime case, named JL06-FFS. We have, however, not found any practical implementations of JL06-FFS over GF(3^{6n}). Therefore, we carried out such an implementation and successfully set a new record for solving the DLP in GF(3^{6n}), namely the DLP in GF(3^{6 \cdot 71}) of 676-bit size. In addition, we compared JL06-FFS and an earlier version, named JL02-FFS, with practical experiments. Our results confirm that the former is several times faster than the latter under certain conditions.
2010
EPRINT
Solving Generalized Small Inverse Problems
We introduce a ``generalized small inverse problem (GSIP)'' and present an algorithm for solving this problem. GSIP is formulated as finding small solutions of $f(x_0, x_1, \ldots , x_n)=x_0 h(x_1, \ldots , x_n)+C=0 (\bmod \; M)$ for an $n$-variate polynomial $h$ and non-zero integers $C$ and $M$. Our algorithm is based on the lattice-based Coppersmith technique. We provide a strategy for constructing a lattice basis for solving $f=0$, which is systematically transformed from a lattice basis for solving $h=0$. Then, we derive an upper bound, in explicit form, such that the target problem can be solved in time polynomial in $\log M$. Since GSIPs include some RSA-related problems, our algorithm is applicable to them. For example, the small secret exponent attacks by Boneh and Durfee are rediscovered automatically.
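To make the problem formulation concrete, the following toy Python sketch (with hypothetical h, C and M, not taken from the paper) plants and then exhibits a small root of f(x_0, x_1) = x_0 h(x_1) + C mod M by brute force; the paper's lattice-based algorithm finds such roots without exhaustive search.

    # Toy illustration (hypothetical parameters) of the GSIP equation
    # f(x0, x1) = x0 * h(x1) + C = 0 (mod M) with a planted small root.
    M = 10007

    def h(x1):                  # an illustrative univariate polynomial h
        return x1 * x1 + 3 * x1 + 7

    C = (-5 * h(11)) % M        # planted so that (x0, x1) = (5, 11) is a root

    roots = [(x0, x1)
             for x0 in range(1, 64) for x1 in range(1, 64)
             if (x0 * h(x1) + C) % M == 0]
    print(roots)                # contains (5, 11)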
2010
EPRINT
Some Applications of Lattice Based Root Finding Techniques
In this paper we present some problems and their solutions exploiting lattice-based root finding techniques. In CaLC 2001, Howgrave-Graham proposed a method to find the Greatest Common Divisor (GCD) of two large integers when one of the integers is known exactly and the other one is known approximately. We present three applications of this technique. The first is to show deterministic polynomial time equivalence between factoring $N$ ($N = pq$, where $p > q$ or $p, q$ are of the same bit size) and knowledge of $q^{-1} \bmod p$. Next, we consider the problem of finding smooth integers in a short interval. The third is to factorize $N$ given a multiple of the decryption exponent in RSA. In Asiacrypt 2006, Jochemsz and May presented a general strategy for finding roots of a polynomial. We apply that technique to solve the following two problems. The first is to factorize $N$ given an approximation of a multiple of the decryption exponent in RSA. The second is to solve the implicit factorization problem given three RSA moduli, under the condition that certain portions of the LSBs as well as MSBs of the corresponding three secret primes are the same.
2010
EPRINT
Some Observations on Indifferentiability
At Crypto 2005, Coron et al. introduced a formalism to study the presence or absence of structural flaws in iterated hash functions: if one cannot differentiate a hash function using ideal primitives from a random oracle, it is considered structurally sound, while the ability to differentiate it from a random oracle indicates a structural weakness. This model was devised as a tool to expose subtle real-world weaknesses while working in the random oracle world. In this paper we take a practical point of view. We show, using well-known examples like NMAC and the Mix-Compress-Mix (MCM) construction, how we can prove a hash construction secure and insecure at the same time in the indifferentiability setting. These constructions do not differ in their implementation but only on an abstract level. Naturally, this gives rise to the question of what to conclude for the implemented hash function. Our results cast doubt on the notion of “indifferentiability from a random oracle” as a mandatory, practically relevant criterion (as, e.g., proposed by Knudsen [16] for the SHA-3 competition) to separate good hash structures from bad ones.
2010
EPRINT
Some Observations on TWIS Block Cipher
The 128-bit block cipher TWIS was proposed by Ojha et al. in 2009. It is a lightweight block cipher and its design is inspired by CLEFIA. In this paper, we first study the properties of the TWIS structure, and as an extension we also consider a generalized TWIS-type structure, which we call the G-TWIS cipher, whose block size and number of rounds can be arbitrary. We then present a series of 10-round differential distinguishers for TWIS and an n-round differential distinguisher for G-TWIS whose probabilities are all equal to 1. By utilizing these differential distinguishers, we can break the full 10-round TWIS cipher and the n-round G-TWIS cipher.
2010
EPRINT
Speeding Up The Widepipe: Secure and Fast Hashing
In this paper we propose a new sequential mode of operation -- the \emph{Fast wide pipe} or FWP for short -- to hash messages of arbitrary length. The mode is shown to be (1) \emph{preimage-resistance preserving}, (2) \emph{collision-resistance preserving} and, most importantly, (3) \emph{indifferentiable} from a random oracle up to $\mathcal{O}(2^{n/2})$ compression function invocations. In addition, our rigorous investigation suggests that any variants of Joux's multi-collision attack, the Kelsey-Schneier second-preimage attack and the herding attack are also ineffective on this mode. This fact leads us to conjecture that the indifferentiability security bound of FWP can be extended beyond the birthday barrier. From the point of view of efficiency, this new mode, for example, is \textit{always} faster than the Wide-pipe mode when both modes use an identical compression function. In particular, it is nearly twice as fast as the Wide-pipe for a reasonable selection of the input and output size of the compression function. We also compare the FWP with several other modes of operation.
2010
EPRINT
Stange's Elliptic Nets and Coxeter Group F4
Stange, generalizing Ward's elliptic divisibility sequences, introduced elliptic nets, and showed an equivalence between elliptic nets and elliptic curves. This note relates Stange's recursion for elliptic nets and the Coxeter group F4.
2010
EPRINT
Starfish on Strike
This paper improves the price-performance ratio of ECM, the elliptic-curve method of integer factorization. In particular, this paper constructs "a = -1" twisted Edwards curves having Q-torsion group Z/2 x Z/4, Z/8, or Z/6 and having a known non-torsion point; demonstrates that, compared to the curves used in previous ECM implementations, some of the new curves are more effective at finding small primes despite being faster; and precomputes particularly effective curves for several specific sizes of primes.
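For readers unfamiliar with the curve shape, here is a toy Python sketch of the unified addition law on an "a = -1" twisted Edwards curve over a small prime field; the prime p and coefficient d are illustrative assumptions of the sketch, not the ECM-friendly curves with prescribed torsion constructed in the paper.

    # Toy sketch of "a = -1" twisted Edwards arithmetic over a small prime field.
    # The prime and d are illustrative only; the paper's ECM curves are defined
    # over Q with prescribed torsion and a known non-torsion point.
    p = 101                        # p % 4 == 1, so a = -1 is a square mod p

    def on_curve(P, d):            # curve: -x^2 + y^2 = 1 + d*x^2*y^2
        x, y = P
        return (-x * x + y * y - 1 - d * x * x * y * y) % p == 0

    def add(P, Q, d):              # unified (complete) addition law for a = -1
        x1, y1 = P
        x2, y2 = Q
        t = d * x1 * x2 * y1 * y2 % p
        x3 = (x1 * y2 + y1 * x2) * pow(1 + t, -1, p) % p
        y3 = (y1 * y2 + x1 * x2) * pow(1 - t, -1, p) % p
        return (x3, y3)

    # pick a non-square d (so the law is complete) and enumerate points by brute force
    d = next(c for c in range(2, p) if pow(c, (p - 1) // 2, p) == p - 1)
    points = [(x, y) for x in range(p) for y in range(p) if on_curve((x, y), d)]
    P, Q = points[1], points[2]
    R = add(P, Q, d)
    assert on_curve(R, d)          # the sum stays on the curve
    print(P, "+", Q, "=", R)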
2010
EPRINT
Strongly Unforgeable Signatures and Hierarchical Identity-based Signatures from Lattices without Random Oracles
We propose a variant of Peikert's lattice-based existentially unforgeable signature scheme in the standard model. Our construction offers the same efficiency as Peikert's but supports the stronger notion of strong unforgeability. Strong unforgeability demands that the adversary is unable to produce a new message-signature pair (m, s), even if he or she is allowed to see a different signature sig' for m. In particular, we provide the first treeless signature scheme that supports strong unforgeability for the post-quantum era in the standard model. Moreover, we show how to directly implement identity-based, and even hierarchical identity-based, signatures (IBS) in the same strong security model without random oracles. An additional advantage of this direct approach over the usual generic conversion of hierarchical identity-based encryption to IBS is that we can exploit the efficiency of ideal lattices without significantly harming security. We equip all constructions with strong security proofs based on mild worst-case assumptions on lattices and we also propose concrete security parameters.
2010
EPRINT
Studies on Verifiable Secret Sharing, Byzantine Agreement and Multiparty Computation
This dissertation deals with three of the most important and fundamental problems in secure distributed computing, namely Verifiable Secret Sharing (VSS), Byzantine Agreement (BA) and Multiparty Computation (MPC). VSS is a two-phase protocol (Sharing and Reconstruction) carried out among $n$ parties in the presence of a centralized adversary who can corrupt up to $t$ parties. Informally, the goal of the VSS protocol is to share a secret $s$ among the $n$ parties during the sharing phase in a way that would later allow a unique reconstruction of this secret in the reconstruction phase, while preserving the secrecy of $s$ until the reconstruction phase. VSS is used as a key tool in MPC, BA and many other secure distributed computing problems. It can take many different forms, depending on the underlying network (synchronous or asynchronous), the nature (passive or active) and computing power (bounded or unbounded) of the adversary, the type of security (cryptographic or information-theoretic), etc. We study VSS in the information-theoretic setting over both synchronous and asynchronous networks, considering an active adversary with unbounded computing power. Our main contributions for VSS are: \begin{itemize} \item In synchronous networks, we carry out an in-depth investigation of the round complexity of VSS by allowing a probability of error in computation, and show that existing lower bounds for the round complexity of error-free VSS can be circumvented by introducing a negligible probability of error. \item We study the communication and round efficiency of VSS in synchronous networks and present a robust VSS protocol that is simultaneously communication-efficient and round-efficient. In addition, our protocol is the most communication- and round-efficient protocol known in the literature. \item In asynchronous networks, we study the communication complexity of VSS and propose a number of VSS protocols. Our protocols are highly communication-efficient and show significant improvement over the existing protocols in terms of communication complexity. \end{itemize} The next problem that we deal with is Byzantine Agreement (BA). BA is considered one of the most fundamental primitives for fault-tolerant distributed computing and cryptographic protocols. BA among a set of $n$ parties, each having a private input value, allows them to reach agreement on a common value even if some of the malicious parties (at most $t$) try to prevent agreement among the parties. As in the case of VSS, several models for BA have been proposed during the last three decades, considering various aspects such as the underlying network, the nature and computing power of the adversary, and the type of security. One of these models is BA over asynchronous networks, which on many occasions is considered a more realistic network model than the synchronous one. Though important, research on BA in asynchronous networks has received much less attention than BA protocols in synchronous networks. Even the existing protocols for asynchronous BA involve high communication complexity and are in general very inefficient in comparison to their synchronous counterparts. We focus on BA in the information-theoretic setting over asynchronous networks, tolerating an active adversary having unbounded computing power, and mainly work towards the communication efficiency of the problem.
Our contributions for BA are as follows: \begin{itemize} \item We propose communication-efficient asynchronous BA protocols that show a huge improvement over the existing protocols in the same setting. Our protocols for asynchronous BA use our VSS protocols in asynchronous networks as their vital building blocks. \item We also construct a communication-optimal asynchronous BA protocol for sufficiently long messages. Precisely, our asynchronous BA communicates O(\ell n) bits for an \ell-bit message, for sufficiently large \ell. \end{itemize} The studies on VSS and BA naturally lead one towards MPC problems. MPC can model almost any known cryptographic application and uses VSS as well as BA as building blocks. MPC enables a set of $n$ mutually distrusting parties to compute some function of their private inputs, such that the privacy of the inputs of the honest parties is guaranteed (except for what can be derived from the function output) even in the presence of an adversary corrupting up to $t$ of the parties and making them misbehave arbitrarily. Much like VSS and BA, MPC can also be studied in various models. Here, we study MPC in the information-theoretic setting over synchronous as well as asynchronous networks, tolerating an active adversary with unbounded computing power. Our main contributions for MPC are: \begin{itemize} \item Using one of our synchronous VSS protocols, we design a synchronous MPC protocol that minimizes the communication and round complexity simultaneously, whereas existing MPC protocols minimize one complexity measure at a time (i.e., either communication complexity or round complexity). \item We study the communication complexity of asynchronous MPC protocols and design a number of protocols that show a significant gain in communication complexity in comparison to the existing asynchronous MPC protocols. \item We also study a specific instance of the MPC problem called Multiparty Set Intersection (MPSI) and provide protocols for it. \end{itemize} In brief, the work in this thesis advances the state of the art on VSS, BA and MPC by presenting several inherent lower bounds and efficient or optimal solutions for these problems in terms of their key parameters, such as communication complexity and time/round complexity. It thereby makes a significant contribution to the field of secure distributed computing through foundational research on three of its most important problems.
2010
EPRINT
Subspace Distinguisher for 5/8 Rounds of the ECHO-256 Hash Function
In this work we present the first results for the hash function of ECHO. We provide a subspace distinguisher for 5 rounds, near-collisions on 4.5 rounds and collisions for 4 out of 8 rounds of the ECHO-256 hash function. The complexities are $2^{96}$ compression function calls for the distinguisher and near-collision attack, and $2^{64}$ for the collision attack. The memory requirements are $2^{64}$ for all attacks. Furthermore, we provide improved compression function attacks on ECHO-256 to get distinguishers on 7 rounds and near-collisions for 6 and 6.5 rounds. The compression function attacks also apply to ECHO-512. To get these results, we consider new and sparse truncated differential paths through ECHO. We are able to construct these paths by analyzing the combined MixColumns and BigMixColumns transformation. Since in these sparse truncated differential paths at most one fourth of all bytes of each ECHO state are active, missing degrees of freedom are not a problem. Therefore, we are able to mount a rebound attack with multiple inbound phases to efficiently find conforming message pairs for ECHO.
2010
EPRINT
Symmetric States and their Structure: Improved Analysis of CubeHash
This paper provides three improvements over previous work on analyzing CubeHash, based on its classes of symmetric states: (1) We present a detailed analysis of the hierarchy of symmetry classes. (2) We point out some flaws in previously claimed attacks which tried to exploit the symmetry classes. (3) We present and analyze new multicollision and preimage attacks. For the default parameter setting of CubeHash, namely a message block size of b = 32, the new attacks are slightly faster than 2^384 operations. If one increases the size of a message block by a single byte to b = 33, our multicollision and preimage attacks become much faster – they only require about 2^256 operations. This demonstrates how sensitive the security of CubeHash is to minor changes of the tunable security parameter b.
2010
EPRINT
Synchronized Aggregate Signatures: New Definitions, Constructions and Applications
An aggregate signature scheme is a digital signature scheme where anyone given n signatures on n messages from n users can aggregate all these signatures into a single short signature. Unfortunately, no ``fully non-interactive'' aggregate signature schemes are known outside of the random oracle heuristic; that is, signers must pass messages between themselves, sequentially or otherwise, to generate the signature. Interaction is too costly for some interesting applications. In this work, we consider the task of realizing aggregate signatures in the model of Gentry and Ramzan (PKC 2006) when all signers share a synchronized clock, but do not need to be aware of or interact with one another. Each signer may issue at most one signature per time period and signatures aggregate only if they were created during the same time period. We call this synchronized aggregation. We present a practical synchronized aggregate signature scheme secure under the Computational Diffie-Hellman assumption in the standard model. Our construction is based on the stateful signatures of Hohenberger and Waters (Eurocrypt 2009). Those signatures do not aggregate since each signature includes unique randomness for a chameleon hash and those random values do not compress. To overcome this challenge, we remove the chameleon hash from their scheme and find an alternative method for moving from weak to full security that enables aggregation. We conclude by discussing applications of this construction to sensor networks and software authentication.
2010
EPRINT
TASTY: Tool for Automating Secure Two-partY computations
Secure two-party computation allows two untrusting parties to jointly compute an arbitrary function on their respective private inputs while revealing no information beyond the outcome. Existing cryptographic compilers can automatically generate secure computation protocols from high-level specifications, but are often limited in their use and in the efficiency of the generated protocols, as they are based on either garbled circuits or (additively) homomorphic encryption only. In this paper we present TASTY, a novel tool for automating, i.e., describing, generating, executing, benchmarking, and comparing, efficient secure two-party computation protocols. TASTY is a new compiler that can generate protocols based on homomorphic encryption and efficient garbled circuits as well as combinations of both, which often yields the most efficient protocols available today. The user provides a high-level description of the computations to be performed on encrypted data in a domain-specific language, which is automatically transformed into a protocol. TASTY provides the most recent techniques and optimizations for practical secure two-party computation with low online latency. Moreover, it allows one to efficiently evaluate circuits generated by the well-known Fairplay compiler. We use TASTY to compare protocols for secure multiplication based on homomorphic encryption with those based on garbled circuits and highly efficient Karatsuba multiplication. Further, we show how TASTY improves the online latency for securely evaluating the AES functionality by an order of magnitude compared to previous software implementations. TASTY can automatically generate efficient secure protocols for many privacy-preserving applications; we consider the use cases of private set intersection and face recognition protocols.
2010
EPRINT
Terrorists in Parliament, Distributed Rational Consensus
Consensus is a very important problem in distributed computing, where among the $n$ players the honest players try to come to an agreement even in the presence of $t$ malicious players. In a game-theoretic environment, \textit{the group choice problem} is similar to the \textit{rational consensus problem}, where every player $p_i$ prefers to come to consensus on his value $v_i$, or on a value which is as close to it as possible. All the players need to come to an agreement on one value by amalgamating individual preferences to form a group or social choice. In the rational consensus problem, there are no malicious players. We consider the rational consensus problem in the presence of a few malicious players; that is, the players are assumed to be rational rather than honest, and a few malicious players exist among them. Every rational player primarily prefers to come to consensus on his own value and secondarily prefers to come to consensus on another player's value. In other words, if $w_1$, $w_2$ and $w_3$ are the payoffs obtained when $p_i$ comes to consensus on his own value, when $p_i$ comes to consensus on another's value, and when $p_i$ does not come to consensus at all, respectively, then $w_1 > w_2 > w_3$. We name this the \textit{distributed rational consensus problem} (DRC). The situation resembles that of a parliament, where two political parties fight for their choice to be followed, and there are a few terrorists among them whose main objective is that the parliament should not make any decision. The players can have two values, either 1 or 0, i.e., binary consensus. The rational majority is defined as the set of players who want to agree on one particular value and who number more than half of the rational players; the rational minority is defined similarly. We consider the EIG protocol, characterize the rational behaviour, and show that the EIG protocol will not work in a rational environment. We prove that there exists no protocol which solves the distributed rational consensus problem in fixed running time when players have knowledge of other players' values during the protocol. This proof is based on Maskin's monotonicity property. The good news is that if the players do not have knowledge of other players' values, then the problem can be solved. This can be achieved by verifiable rational secret sharing, where players do not exchange their values directly, but as pieces of them.
2010
EPRINT
The analytical property for $\zeta(s)$
In this article the analytic properties of $\zeta(s)$ are discussed. The popular opinion is disputed.
2010
EPRINT
The collision security of Tandem-DM in the ideal cipher model
We prove that Tandem-DM, one of the two ``classical'' schemes for turning a blockcipher of $2n$-bit key into a double block length hash function, has birthday-type collision resistance in the ideal cipher model. A collision resistance analysis for Tandem-DM achieving a similar birthday-type bound was already proposed by Fleischmann, Gorski and Lucks at FSE 2009. As we detail, however, the latter analysis is wrong, thus leaving the collision resistance of Tandem-DM as an open problem until now.
2010
EPRINT
The Discrete Logarithm Problem Modulo One: Cryptanalysing the Ariffin--Abu cryptosystem
The paper provides a cryptanalysis of the $AA_\beta$-cryptosystem recently proposed by Ariffin and Abu. The scheme is in essence a key agreement scheme whose security is based on a discrete logarithm problem in the infinite (additive) group $\mathbb{R}/\mathbb{Z}$ (the reals modulo $1$). The paper breaks the $AA_\beta$-cryptosystem (in a passive adversary model) by showing that this discrete logarithm problem can be efficiently solved in practice.
2010
EPRINT
The Effects of the Omission of Last Round's MixColumns on AES
The Advanced Encryption Standard (AES) is the most widely deployed block cipher. It follows the modern iterated block cipher approach, iterating a simple round function multiple times. The last round of AES slightly differs from the others, as a linear mixing operation (called MixColumns) is omitted from it. Following a statement of the designers, it is widely believed that the omission of the last round MixColumns has no security implications. As a result, the majority of attacks on reduced-round variants of AES assume that the last round of the reduced-round version is free of the MixColumns operation. In this note we refute this belief, showing that the omission of MixColumns does affect the security of (reduced-round) AES. First, we consider a simple example of 1-round AES, where we show that the omission reduces the time complexity of an attack with a single known plaintext from 2^{48} to 2^{16}. Then, we examine several previously known attacks on 7-round AES-192 and show that the omission reduces their time complexities by a factor of 2^{16}.
2010
EPRINT
The Eris hybrid cipher
An earlier paper by the same author (IACR Eprint 2008/473) suggested combining a block cipher and a stream cipher to get a strong hybrid cipher. This paper proposes a specific cipher based on those ideas, using the HC-128 stream cipher and a tweakable block cipher based on Serpent.
2010
EPRINT
The Extended Access Control for Machine Readable Travel Documents
Machine readable travel documents have been rapidly put in place since 2004. The initial standard was made by the ICAO and it has been quickly followed by the Extended Access Control (EAC). In this paper we discuss the evolution of these standards and, more precisely, the evolution of EAC. We intend to give a realistic survey of these standards. We discuss their problems, such as the absence of a clock in biometric passports and the absence of a switch preventing a closed passport from being read. We also look at the issue of backward compatibility, which could easily be solved, and the issue of terminal revocation, which is harder.
2010
EPRINT
The Fiat--Shamir Transform for Group and Ring Signature Schemes
The Fiat-Shamir (FS) transform is a popular tool to produce particularly efficient digital signature schemes out of identification protocols. It is known that the resulting signature scheme is secure (in the random oracle model) if and only if the identification protocol is secure against passive impersonators. A similar result holds for constructing ID-based signature schemes out of ID-based identification protocols. The transformation has also been applied to identification protocols with additional privacy properties. So, via the FS transform, ad-hoc group identification schemes yield ring signatures and identity escrow schemes yield group signature schemes. Unfortunately, results akin to those above are not known to hold for these latter settings: the security of the schemes obtained this way does not clearly follow from that of the base identification protocol and needs to be proved from scratch; even worse, some papers simply assume that the transformation works without proof. In this paper we provide the missing foundations for the use of the FS transform in these more complex settings. We start by defining a formal security model for identity escrow schemes (a concept proposed earlier but never rigorously formalized). Our main result consists of necessary and sufficient conditions for an identity escrow scheme to yield (via the FS transform) a secure group signature scheme. We also discuss several variants of this result that account for constructions of group signatures that fulfill weaker notions of security. Finally, using the similarity between group and ring signature schemes, we give analogous results for the latter primitive.
2010
EPRINT
The impossibility of computationally sound XOR
We give a simple example showing that there is no symbolic theory for exclusive or (XOR) that is computationally sound.
2010
EPRINT
The Improbable Differential Attack: Cryptanalysis of Reduced Round CLEFIA
In this paper we present a new statistical cryptanalytic technique that we call improbable differential cryptanalysis, which uses a differential that is less probable when the correct key is used. We provide data complexity estimates for this kind of attack and we also show a method to expand impossible differentials to improbable differentials. By using this expansion method, we cryptanalyze 13-, 14-, and 15-round CLEFIA for key sizes of 128, 192, and 256 bits, respectively. These are the best cryptanalytic results on CLEFIA to date.
2010
EPRINT
The Lower Bounds on the Second Order Nonlinearity of Cubic Boolean Functions
It is a difficult task to compute the $r$-th order nonlinearity of a given function with algebraic degree strictly greater than $r>1$. Even lower bounds on the second-order nonlinearity are known only for a few particular functions. We investigate lower bounds on the second-order nonlinearity of cubic Boolean functions $F_u(x)=Tr(\sum_{l=1}^{m}\mu_{l}x^{d_{l}})$, where $\mu_{l} \in F_{2^n}^{*}$, $d_{l}=2^{i_{l}}+2^{j_{l}}+1$, $i_{l}$ and $j_{l}$ are positive integers, and $n>i_{l}> j_{l}$. In particular, for a class of Boolean functions $G_u(x)=Tr(\sum_{l=1}^{m}\mu_{l}x^{d_{l}})$, we deduce a tighter lower bound on the second-order nonlinearity, where $\mu_{l} \in F_{2^n}^{*}$, $d_{l}=2^{i_{l}\gamma}+2^{j_{l}\gamma}+1$, $i_{l}> j_{l}$ and $\gamma\neq 1$ is a positive integer such that $gcd(n,\gamma)=1$. \\Lower bounds on the second-order nonlinearity of cubic monomial Boolean functions, represented by $f_\mu(x)=Tr(\mu x^{2^i+2^j+1})$, $\mu\in F_{2^n}^*$, where $i$ and $j$ are positive integers such that $i>j$, were recently (2009) obtained by Gode and Gangopadhyay. Our results improve on those of Gode and Gangopadhyay as follows: we first extend the results from monomial Boolean functions to Boolean functions with more trace terms; we further generalize and improve the results to a wider range of $n$; and our bounds are better than those of Gode and Gangopadhyay for the monomial functions $f_\mu(x)$.
2010
EPRINT
The PASSERINE Public Key Encryption and Authentication Mechanism
PASSERINE is a lightweight public key encryption mechanism which is based on a hybrid, randomized variant of the Rabin public key encryption scheme. Its design is targeted at extremely low-resource applications such as wireless sensor networks, RFID tags, embedded systems, and smart cards. As is the case with the Rabin scheme, the security of PASSERINE can be shown to be equivalent to factoring the public modulus. On most low-resource implementation platforms PASSERINE offers smaller transmission latency, a smaller hardware and software footprint and better encryption speed when compared to RSA or Elliptic Curve Cryptography. This is mainly due to the fact that PASSERINE implementations can avoid expensive big integer arithmetic in favor of a fully parallelizable CRT randomized-square operation. In order to reduce latency and memory requirements, PASSERINE uses Naccache-Shamir randomized multiplication, which is implemented with a system of simultaneous congruences modulo small coprime numbers. The PASSERINE private key operation is of comparable computational complexity to the RSA private key operation. The private key operation is typically performed by a computationally superior recipient such as a base station. The PASSERINE project is entirely open source (hardware and software).
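The following toy Python sketch illustrates only the textbook Rabin core that PASSERINE builds on (encryption by squaring, decryption by CRT square roots); the toy primes are assumptions of the sketch, and the hybrid layer, randomization, padding and Naccache-Shamir residue arithmetic described above are not shown.

    # Toy Rabin core: encryption is one squaring modulo N = p*q, decryption takes
    # square roots modulo p and q and combines them with the CRT.  Parameters are
    # illustrative; real use needs large primes, randomization and padding.
    p, q = 1019, 1031            # toy primes, both congruent to 3 mod 4
    N = p * q

    def encrypt(m):
        return (m * m) % N

    def decrypt(c):
        mp = pow(c, (p + 1) // 4, p)     # square root mod p (valid since p = 3 mod 4)
        mq = pow(c, (q + 1) // 4, q)     # square root mod q
        roots = set()
        for sp in (mp, p - mp):
            for sq in (mq, q - mq):
                r = (sp * q * pow(q, -1, p) + sq * p * pow(p, -1, q)) % N  # CRT
                roots.add(r)
        return roots                     # the four square roots of c modulo N

    m = 123456
    assert m in decrypt(encrypt(m))
    print(sorted(decrypt(encrypt(m))))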
2010
EPRINT
The Rebound Attack and Subspace Distinguishers: Application to Whirlpool
We introduce the rebound attack as a variant of differential cryptanalysis on hash functions and apply it to the hash function Whirlpool, standardized by ISO/IEC. We give attacks on reduced variants of the Whirlpool hash function and the Whirlpool compression function. Next, we introduce the subspace problems as generalizations of near-collision resistance. Finally, we present distinguishers based on the rebound attack, that apply to the full compression function of Whirlpool and the underlying block cipher $W$.
2010
EPRINT
The Upper Bounds on Differential Characteristics in Block Cipher SMS4
SMS4 is a 128-bit block cipher with a 128-bit user key and 32 rounds, which is used in the Chinese National Standard for Wireless LAN WAPI. In this paper, all possible differential patterns are divided into several sections by six designed rules. In order to evaluate the security against the differential cryptanalysis of SMS4, we calculate the lower bounds on the number of active S-Boxes for all kinds of sections, based on which the lower bounds on the number of active S-Boxes in all possible differential patterns can be derived. Finally, the upper bounds on differential characteristic probabilities of arbitrary round numbers are given, which can be used to estimate the strength of SMS4 against differential attack and linear attack.
2010
EPRINT
The World is Not Enough: Another Look on Second-Order DPA
In a recent work, Mangard et al. showed that under certain assumptions, the (so-called) standard univariate side-channel attacks using a distance-of-means test, correlation analysis and Gaussian templates are essentially equivalent. In this paper, we show that in the context of multivariate attacks against masked implementations, this conclusion does not hold anymore. In other words, while a single distinguisher can be used to compare the susceptibility of different unprotected devices to first-order DPA, understanding second-order attacks requires carefully investigating the information leakages and the adversaries exploiting these leakages separately. Using a framework put forward by Standaert et al. at Eurocrypt 2009, we provide the first analysis that considers these two questions in the case of a masked device exhibiting a Hamming weight leakage model. Our results lead to new intuitions regarding the efficiency of various practically-relevant distinguishers. Further, we also investigate the case of second- and third-order masking (i.e. using three and four shares to represent one value). It turns out that moving to higher-order masking only leads to significant security improvements if the secret sharing is combined with a sufficient amount of noise. Finally, we show that an information-theoretic analysis allows determining this necessary noise level, for different masking schemes and target security levels, with high accuracy and smaller data complexity than previous methods.
2010
EPRINT
Throughput-Optimal Routing in Unreliable Networks
We demonstrate the feasibility of throughput-efficient routing in a highly unreliable network. Modeling a network as a graph with vertices representing nodes and edges representing the links between them, we consider two forms of unreliability: unpredictable edge-failures, and deliberate deviation from protocol specifications by corrupt nodes. The first form of unpredictability represents networks with dynamic topology, whose links may be constantly going up and down; while the second form represents malicious insiders attempting to disrupt communication by deliberately disobeying routing rules, by e.g. introducing junk messages or deleting or altering messages. We present a robust routing protocol for end-to-end communication that is simultaneously resilient to both forms of unreliability, achieving provably optimal throughput performance. Our proof proceeds in three steps: 1) We use competitive-analysis to find a lower-bound on the optimal throughput-rate of a routing protocol in networks susceptible to only edge-failures (i.e. networks with no malicious nodes); 2) We prove a matching upper bound by presenting a routing protocol that achieves this throughput rate (again in networks with no malicious nodes); and 3) We modify the protocol to provide additional protection against malicious nodes, and prove the modified protocol performs (asymptotically) as well as the original.
2010
EPRINT
Time-Specific Encryption
This paper introduces and explores the new concept of Time-Specific Encryption (TSE). In (Plain) TSE, a Time Server broadcasts a key at the beginning of each time unit, a Time Instant Key (TIK). The sender of a message can specify any time interval during the encryption process; the receiver can decrypt to recover the message only if it has a TIK that corresponds to a time in that interval. We extend Plain TSE to the public-key and identity-based settings, where receivers are additionally equipped with private keys and either public keys or identities, and where decryption now requires the use of the private key as well as an appropriate TIK. We introduce security models for the plain, public-key and identity-based settings. We also provide constructions for schemes in the different settings, showing how to obtain Plain TSE using identity-based techniques, how to combine Plain TSE with public-key and identity-based encryption schemes, and how to build schemes that are chosen-ciphertext secure from schemes that are chosen-plaintext secure. Finally, we suggest applications for our new primitive, and discuss its relationships with existing primitives, such as Timed Release Encryption and Broadcast Encryption.
2010
EPRINT
Towards a Theory of Trust Based Collaborative Search
Trust Based Collaborative Search is an interactive metasearch engine, presenting the user with clusters of results, based not only on the similarity of content, but also on the similarity of the recommending agents. The theory presented here is broad enough to cover search, browsing, recommendations, demographic profiling, and consumer targeting; we use search as the running example. We develop a novel general trust theory. In this context, as a special case, we equate trust between agents with the similarity between their search behaviors. The theory suggests that clusters should be close to maximal similarity within a tolerance dictated by the amount of uncertainty about the vectors of probabilities of attributes representing queries, pages and agents. In addition, we give a new theoretical analysis of clustering tolerances, enabling more judicious decisions about optimal tolerances. Specifically, we show that tolerances should at least be divided by a constant > 1 as we descend from one layer in the hierarchical clustering to the next. We also show a promising connection between collaborative search and cryptography: a query plays the role of a cryptogram, the search engine is the cryptanalyst, and the user's intention is the cleartext. Shannon's unicity distance is the length of the search; it is needed to quantify the clustering tolerance.
2010
EPRINT
Towards provable security of the Unbalanced Oil and Vinegar signature scheme under direct attacks
In this paper we show that solving systems coming from the public key of the Unbalanced Oil and Vinegar (UOV) signature scheme is on average at least as hard as solving a certain quadratic system with completely random quadratic part. In providing lower bounds on direct attack complexity we rely on the empirical fact that complexity of solving a non-linear polynomial system is determined by the homogeneous part of this system of the highest degree. Our reasoning explains, in particular, the results on solving the UOV systems presented by J.-C. Faugere and L. Perret at the SCC conference in 2008.
2010
EPRINT
Towards Side-Channel Resistant Block Cipher Usage or Can We Encrypt Without Side-Channel Countermeasures?
Based on re-keying techniques by Abdalla, Bellare, and Borst [1,2], we consider two black-box secure block-cipher-based symmetric encryption schemes, which we prove secure in the physically observable cryptography model. They are proven side-channel secure against a strong type of adversary that can adaptively choose the leakage function as long as the leaked information is bounded. It turns out that our simple construction is side-channel secure against all types of attacks that satisfy some reasonable assumptions. In particular, the adversary's advantage turns out to be negligible in the block cipher's block size n, for all attacks. We also show that our ideas result in an interesting alternative to the implementation of block ciphers using different logic styles or masking countermeasures.
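As a rough illustration of the re-keying principle such schemes build on (not the paper's construction), the sketch below derives a fresh session key from a master key and a counter so that no single key is reused across invocations; HMAC-SHA-256 standing in for the block-cipher-based key derivation is an assumption of this sketch.

    # Illustrative re-keying sketch: derive one session key per counter value, so a
    # side-channel adversary only ever observes each key being used once.  HMAC-SHA-256
    # is a stand-in PRF here, not the block-cipher-based derivation used in the paper.
    import hmac, hashlib, os

    def derive_session_key(master_key: bytes, counter: int) -> bytes:
        return hmac.new(master_key, counter.to_bytes(8, "big"), hashlib.sha256).digest()

    master = os.urandom(32)
    k0 = derive_session_key(master, 0)
    k1 = derive_session_key(master, 1)
    assert k0 != k1          # each invocation uses a distinct key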
2010
EPRINT
Tracker: Security and Privacy for RFID-based Supply Chains
The counterfeiting of pharmaceutics or luxury objects is a major threat to supply chains today. As the different facilities of a supply chain are distributed and difficult to monitor, malicious adversaries can inject fake objects into the supply chain. This paper presents Tracker, a protocol for object genuineness verification in RFID-based supply chains. More precisely, Tracker allows one to securely identify which (legitimate) path an object/tag has taken through a supply chain. Tracker provides privacy: an adversary can neither learn details about an object's path, nor trace and link objects in the supply chain. Tracker's security and privacy are based on an extension of polynomial signature techniques for run-time fault detection using homomorphic encryption. Contrary to related work, RFID tags in this paper are not required to perform \emph{any computation}, but only feature a few bytes of storage such as ordinary EPC Class 1 Gen 2 tags.
2010
EPRINT
Transfinite Cryptography
\begin{abstract} Let us assume that Alice, Bob, and Charlie, the three classical people of cryptography, are no longer limited to performing a finite number of computations on real computers, but are instead limited to $\alpha$ computations and to $\alpha$ bits of memory, where $\alpha$ is a fixed infinite cardinal. For example $\alpha = \aleph _0$ (the countable cardinal, i.e. the cardinal of $\mathbb {N}$, the set of integers), or $\alpha = \mathfrak {C}$ (the cardinal of the set $\mathbb {R}$ of real numbers). Is it possible to do secret key cryptography? Public key cryptography? Encryption? Authentication? Signatures? Is it possible to generalize the notion of one-way function? The aim of this paper is to give some elements of answers to these questions. We will see, for example, that for secret key cryptography there are some simple solutions. However, for public key cryptography the results are much less clear. \end{abstract}
2010
EPRINT
Two improved authenticated multiple key exchange protocols
Many authenticated multiple key exchange protocols have been published in recent years. In 2008, Lee et al. presented an authenticated multiple key exchange protocol based on bilinear pairings. However, Vo et al. demonstrated an impersonation attack on the protocol, showing that it fails to provide authenticity and perfect forward secrecy as claimed. Later, Vo et al. proposed an enhanced protocol which conforms to all desirable security properties. However, Vo et al.'s protocol requires every party to hold every other party's public key, which requires a large amount of storage. In this paper, we propose two new authenticated multiple key exchange protocols based on Lee et al.'s protocol and make them immune to Vo et al.'s attacks.
2010
EPRINT
Type-II Optimal Polynomial Bases
In the 1990s and early 2000s several papers investigated the relative merits of polynomial-basis and normal-basis computations for $\F_{2^n}$. Even for particularly squaring-friendly applications, such as implementations of Koblitz curves, normal bases fell behind in performance unless a type-I normal basis existed for $\F_{2^n}$. In 2007 Shokrollahi proposed a new method of multiplying in a type-II normal basis. Shokrollahi's method efficiently transforms the normal-basis multiplication into a single multiplication of two size-$(n+1)$ polynomials. This paper speeds up Shokrollahi's method in several ways. It first presents a simpler algorithm that uses only size-$n$ polynomials. It then explains how to reduce the transformation cost by dynamically switching to a `type-II optimal polynomial basis' and by using a new reduction strategy for multiplications that produce output in type-II polynomial basis. As an illustration of its improvements, this paper explains in detail how the multiplication overhead in Shokrollahi's original method has been reduced by a factor of $1.4$ in a major cryptanalytic computation, the ongoing attack on the ECC2K-130 Certicom challenge. The resulting overhead is also considerably smaller than the overhead in a traditional low-weight-polynomial-basis approach. This is the first state-of-the-art binary-elliptic-curve computation in which type-II bases have been shown to outperform traditional low-weight polynomial bases.
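As background for readers, the following Python sketch shows the kind of size-n polynomial arithmetic over F_2 that such methods reduce a normal-basis product to; the field size and reduction polynomial are illustrative assumptions, and the sketch is neither Shokrollahi's transform nor the paper's optimized basis conversion.

    # Background sketch: F_2[x] arithmetic with polynomials encoded as Python ints
    # (bit i = coefficient of x^i).  This is the plain size-n polynomial
    # multiplication such basis-conversion methods reduce to; it is not the
    # type-II transform itself.
    def clmul(a, b):
        """Carry-less (F_2[x]) product of two binary polynomials."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            b >>= 1
        return r

    def reduce_mod(a, f, n):
        """Reduce a modulo the degree-n polynomial f (top bit of f is x^n)."""
        for i in range(a.bit_length() - 1, n - 1, -1):
            if (a >> i) & 1:
                a ^= f << (i - n)
        return a

    # illustrative field F_{2^8} with f(x) = x^8 + x^4 + x^3 + x + 1 (the AES polynomial)
    n, f = 8, 0b1_0001_1011
    a, b = 0b0101_0111, 0b1000_0011
    print(bin(reduce_mod(clmul(a, b), f, n)))   # prints 0b11000001, i.e. 0xc1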
2010
EPRINT
Unconditionally Secure Rational Secret Sharing in Standard Communication Networks
Rational secret sharing protocols in both the two-party and multi-party settings are proposed. These protocols are built in standard communication networks and with unconditional security. Namely, the protocols run over standard point-to-point networks without requiring physical assumptions or simultaneous channels, and even a computationally unbounded player cannot gain more than $\epsilon$ by deviating from the protocol. More precisely, for the $2$-out-of-$2$ protocol the $\epsilon$ is a negligible function in the size of the secret, which is caused by the information-theoretic MACs used for authentication. The $t$-out-of-$n$ protocol is $(t-1)$-resilient and the $\epsilon$ is exponentially small in the number of participants. Although secret recovery cannot be guaranteed in this setting, a participant can at least reduce the Shannon entropy of the secret to less than $1$ after the protocol. When the secret-domain is large, every rational player has great incentive to participate in the protocol.
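For context, the sketch below shows plain t-out-of-n Shamir secret sharing over a prime field, the standard primitive underlying such protocols; the prime and parameters are illustrative assumptions, and the paper's rational reconstruction procedure and information-theoretic MACs are not shown.

    # Background sketch: plain t-out-of-n Shamir secret sharing over a prime field.
    # The rational reconstruction protocol and the MACs from the paper are not shown.
    import random

    P = 2**61 - 1                      # a Mersenne prime used as the field size (illustrative)

    def share(secret, t, n):
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):           # Lagrange interpolation at x = 0
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    s = 31337
    shares = share(s, t=3, n=5)
    assert reconstruct(shares[:3]) == s and reconstruct(shares[1:4]) == s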
2010
EPRINT
Unfolding Method for Shabal on Virtex-5 FPGAs: Concrete Results
Recent cryptanalysis of the SHA-1 family has led NIST to call for a public competition, the SHA-3 contest. Efficient implementations on various platforms are a criterion for ranking the performance of all the candidates in this competition. It appears that most of the hardware architectures proposed for SHA-3 candidates are basic. In this paper, we focus on an optimized implementation of the Shabal candidate. We improve on the state of the art using the unfolding method. This transformation unrolls a part of the Shabal core. More precisely, our design can produce a throughput over 3 Gbps on Virtex-5 FPGAs, with reasonable area usage.
2010
EPRINT
Universal One-Way Hash Functions via Inaccessible Entropy
This paper revisits the construction of Universal One-Way Hash Functions (UOWHFs) from any one-way function due to Rompel (STOC 1990). We give a simpler construction of UOWHFs which also obtains better efficiency and security. The construction exploits a strong connection to the recently introduced notion of *inaccessible entropy* (Haitner et al. STOC 2009). With this perspective, we observe that a small tweak of any one-way function f is already a weak form of a UOWHF: consider F(x, i) that outputs the i-bit long prefix of f(x). If F were a UOWHF then given a random x and i it would be hard to come up with x' \neq x such that F(x, i) = F(x', i). While this may not be the case, we show (rather easily) that it is hard to sample x' with almost full entropy among all the possible such values of x'. The rest of our construction simply amplifies and exploits this basic property. With this and other recent works we have that the constructions of three fundamental cryptographic primitives (Pseudorandom Generators, Statistically Hiding Commitments and UOWHFs) out of one-way functions are to a large extent unified. In particular, all three constructions rely on and manipulate computational notions of entropy in similar ways. Pseudorandom Generators rely on the well-established notion of pseudoentropy, whereas Statistically Hiding Commitments and UOWHFs rely on the newer notion of inaccessible entropy.
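A minimal sketch of the F(x, i) family described above, with SHA-256 standing in for the one-way function f (an assumption of this sketch); the inaccessible-entropy amplification steps of the actual construction are not shown.

    # Sketch of the prefix family F(x, i) from the abstract: output the i-bit
    # prefix of f(x).  SHA-256 is only a stand-in for a one-way function here.
    import hashlib, os

    def f(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def F(x: bytes, i: int) -> int:
        """The i-bit prefix of f(x), as an integer (0 <= i <= 256)."""
        full = int.from_bytes(f(x), "big")
        return full >> (256 - i)

    x, i = os.urandom(32), 40
    print(F(x, i))   # finding x' != x with F(x', i) == F(x, i) is the target-collision game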
2010
EPRINT
Universally Composable Symbolic Analysis of Diffie-Hellman based Key Exchange
Canetti and Herzog (TCC'06) show how to efficiently perform fully automated, computationally sound security analysis of key exchange protocols with an unbounded number of sessions. A key tool in their analysis is {\em composability}, which allows deducing security of the multi-session case from the security of a single session. However, their framework only captures protocols that use public key encryption as the only cryptographic primitive, and only handles static corruptions. We extend the [CH'06] modeling in two ways. First, we handle also protocols that use digital signatures and Diffie-Hellman exchange. Second, we handle also forward secrecy under fully adaptive party corruptions. This allows us to automatically analyze systems that use an unbounded number of sessions of realistic key exchange protocols such as the ISO 9798-3 or TLS protocol. A central tool in our treatment is a new abstract modeling of plain Diffie-Hellman key exchange. Specifically, we show that plain Diffie-Hellman securely realizes an idealized version of Key Encapsulation.
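For reference, the toy Python sketch below shows the plain (unauthenticated) Diffie-Hellman exchange that the paper abstracts as an idealized key encapsulation; the group parameters are illustrative assumptions, and the signatures and key derivation of ISO 9798-3 or TLS are not shown.

    # Toy plain Diffie-Hellman exchange.  The modulus (2^127 - 1, a Mersenne prime)
    # and generator are illustrative only and far too small for real use.
    import secrets

    p = 2**127 - 1
    g = 3

    a = secrets.randbelow(p - 2) + 1          # Alice's ephemeral exponent
    b = secrets.randbelow(p - 2) + 1          # Bob's ephemeral exponent
    A, B = pow(g, a, p), pow(g, b, p)         # exchanged in the clear

    assert pow(B, a, p) == pow(A, b, p)       # both sides derive g^(a*b) mod p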
2010
EPRINT
Update-Optimal Authenticated Structures Based on Lattices
We study the problem of authenticating a \emph{dynamic table} with $n$ entries in the authenticated data structures model, which is related to memory checking. We present the first dynamic authenticated table that is \emph{update-optimal}, using a \emph{lattice-based} construction. In particular, the update time is $O(1)$, improving in this way the ``a priori'' $O(\log n)$ update bounds for previous constructions, such as the Merkle tree. Moreover, the space used by our data structure is $O(n)$ and logarithmic bounds hold for the other complexity measures, such as \emph{proof size}. To achieve this result, we exploit the \emph{linearity} of lattice-based hash functions and show how the security of lattice-based digests can be guaranteed under updates. This is the first construction achieving constant update bounds without causing other time complexities to increase beyond logarithmic. All previous solutions enjoying constant update complexity have $\Omega(n^\epsilon)$ proof or query bounds. As an application of our lattice-based authenticated table, we provide the first construction of an authenticated Bloom filter, an update-intensive data structure that falls into our model.
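For contrast, the sketch below shows the classical Merkle-tree baseline whose O(log n) update cost the paper improves to O(1); the lattice-based construction itself is not shown, and the table size and hash choice are illustrative assumptions.

    # Baseline sketch: a Merkle hash tree over a table of n entries, where an update
    # re-hashes one leaf-to-root path in O(log n) time.  This is the classical
    # construction the paper's lattice-based authenticated table improves upon.
    import hashlib

    H = lambda data: hashlib.sha256(data).digest()

    def build(leaves):
        """Return all levels of the tree; levels[0] = hashed leaves, levels[-1] = [root]."""
        level = [H(leaf) for leaf in leaves]
        levels = [level]
        while len(level) > 1:
            level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            levels.append(level)
        return levels

    def update(levels, index, new_leaf):
        """Replace one leaf and re-hash its path to the root (O(log n) hashes)."""
        levels[0][index] = H(new_leaf)
        for depth in range(1, len(levels)):
            index //= 2
            left, right = levels[depth - 1][2 * index], levels[depth - 1][2 * index + 1]
            levels[depth][index] = H(left + right)

    table = [b"entry-%d" % i for i in range(8)]     # n is a power of two in this sketch
    tree = build(table)
    update(tree, 5, b"new-entry")
    print(tree[-1][0].hex())                        # the new authenticated digest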
2010
EPRINT
Using the Inhomogeneous Simultaneous Approximation Problem for Cryptographic Design
Since the introduction of the concept of provable security, there has been a steady search for suitable problems that can be used as a foundation for cryptographic schemes. Indeed, identifying such problems is a challenging task. First, the problem should allow building cryptographic applications on top of it. Second, it should have been open and investigated for a long time, making its hardness assumption plausible. Third, it should be easy to construct hard problem instances. Not surprisingly, only a few problems are known today that satisfy all conditions, e.g., factorization, discrete logarithm, and lattice problems. In this work, we investigate another candidate: the Inhomogeneous Simultaneous Approximation Problem (ISAP), an old problem from the field of analytic number theory. Although this problem is already known in cryptography, it has mainly been considered for attacks, while we take a look at its hardness and applicability for cryptographic design. More precisely, we define a decisional problem related to ISAP, called DISAP, and show that it is NP-complete. As a starting point for concrete parameter ranges, we review the hardness of a related problem, a computational and homogeneous variant of DISAP. Regarding applicability, we describe as a proof of concept a bit commitment scheme whose hiding property is directly reducible to DISAP. An implementation confirms its usability in principle (e.g., the size of one commitment is slightly more than 6 KB and the execution time is in the milliseconds). From our point of view, DISAP is an interesting problem that can be used for cryptographic designs. We hope to encourage further research on (D)ISAP in particular and possibly other problems from analytic number theory in general.
2010
EPRINT
Virtual Secure Circuit: Porting Dual-Rail Pre-charge Technique into Software on Multicore
This paper discusses a novel direction for multicore cryptographic software, namely the use of multicore execution to protect a design against side-channel attacks. We present a technique which is based on the principle of dual-rail pre-charge, but which can be completely implemented in software. The resulting protected software is called a Virtual Secure Circuit (VSC). Similar to the dual-rail pre-charge technique, a VSC executes as two complementary programs on two identical processor cores. Our key contributions include (1) the analysis of the security properties of a VSC, (2) the construction of a VSC AES prototype on a dual-PowerPC architecture, and (3) the demonstration of VSC's protection effectiveness with real side-channel attack experiments. The attack results showed that the VSC-protected AES needs 80 times more measurements than the unprotected AES to find the first correct key byte. Even one million measurements were not sufficient to fully break VSC-protected AES, while unprotected AES was broken using only 40000 measurements. We conclude that VSC can provide a similar side-channel resistance as WDDL, the dedicated hardware equivalent of dual-rail pre-charge. However, in contrast to WDDL, VSC is a software technique, and therefore it is flexible.
2010
EPRINT
Weaknesses of a dynamic ID-based remote user authentication scheme
The security of a password authentication scheme using smart cards proposed by Khan et al. is analyzed. Four kinds of attacks are presented in different scenarios. The analyses show that the scheme is insecure for practical application.
2010
EPRINT
White-Box Cryptography and SPN ciphers. LRC method
A method of concealing a linear relationship between elements of a finite field (the LRC method) is described. An approach based on the LRC method to the problem of creating secure white-box implementations is considered. Characteristics of an SPN cipher needed to create a secure white-box implementation of it are revealed.
2010
EPRINT
Wild McEliece
The original McEliece cryptosystem uses length-n codes over F_2 with dimension >=n-mt efficiently correcting t errors where 2^m>=n. This paper presents a generalized cryptosystem that uses length-n codes over small finite fields F_q with dimension >=n-m(q-1)t efficiently correcting floor(qt/2) errors where q^m>=n. Previously proposed cryptosystems with the same length and dimension corrected only floor((q-1)t/2) errors for q>=3. This paper also presents list-decoding algorithms that efficiently correct even more errors for the same codes over F_q. Finally, this paper shows that the increase from floor((q-1)t/2) errors to more than floor(qt/2) errors allows considerably smaller keys to achieve the same security level against all known attacks.
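A small arithmetic sketch of the parameter formulas stated above, with illustrative sample parameters (not recommendations from the paper):

    # Evaluate the dimension and error-capacity formulas from the abstract:
    # a length-n code over F_q with q^m >= n has dimension >= n - m(q-1)t, and the
    # new decoder corrects floor(q*t/2) errors versus floor((q-1)*t/2) previously.
    # The sample parameters below are illustrative only.
    def wild_mceliece_params(q, m, t, n):
        assert q ** m >= n
        return {
            "dimension_lower_bound": n - m * (q - 1) * t,
            "errors_corrected_new": (q * t) // 2,
            "errors_corrected_old": ((q - 1) * t) // 2,
        }

    print(wild_mceliece_params(q=3, m=7, t=20, n=2187))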
2010
EPRINT
Zero-Knowledge Proofs, Revisited: The Simulation-Extraction Paradigm
The concept of zero-knowledge proofs has been around for about 25 years. It has been redefined over and over to suit the special security requirements of protocols and systems. Common among all definitions is the requirement of the existence of some efficient machine simulating the view of the verifier (or the transcript of the protocol), such that the simulation is indistinguishable from the reality. In this paper, we scrutinize the philosophy behind this definition, and show that the indistinguishability requirement is stated in a \emph{conceptually} wrong way: present definitions allow the knowledge of the \emph{verifier} and the \emph{distinguisher} to be independent, while the two entities are essentially coupled. Therefore, our main take on the problem will be \emph{conceptual} and \emph{semantic}, rather than \emph{literal}. We formalize the concept by introducing a ``knowledge extractor'' into the definition, which tries to extract the extra knowledge hard-coded into the distinguisher (if any), and then helps the simulator to construct the view of the verifier. The new paradigm is termed the \emph{Simulation-Extraction Paradigm}. We also provide an important application of the new formalization: using the simulation-extraction paradigm, we construct one-round (i.e. two-move) zero-knowledge protocols for proving ``the computational ability to invert some trapdoor permutation'' in the Random-Oracle Model. It is shown that the protocol cannot be proven zero-knowledge in the classical \emph{simulation paradigm}. The proof of the zero-knowledge property in the new paradigm is interesting in that it does not require knowing the internal structure of the trapdoor permutation, or a polynomial-time reduction from it to another (e.g. an $\mathcal{NP}$-complete) problem.