International Association for Cryptologic Research

CryptoDB

Papers from EPRINT 2008

2008
EPRINT
Threshold Homomorphic Encryption in the Universally Composable Cryptographic Library
Protocol security analysis has become an active research topic in recent years. Researchers have been trying to build theories sufficient for building automated tools that give security proofs for cryptographic protocols. There are two approaches for analysing protocols: formal and computational. The former, often called Dolev-Yao style, uses abstract terms to model cryptographic messages under the assumption that the cryptographic primitives are perfectly secure. The latter uses indistinguishability to prove mathematically that computationally bounded adversaries cannot gain anything significant. The first method is easy to automate, while the second gives sound proofs of security. There is therefore a demand to bridge the gap between the two methods in order to have better security-proof tools. One idea is to prove that some Dolev-Yao style cryptographic primitives used in formal tools are computationally sound for arbitrary active attacks in arbitrary reactive environments, i.e., universally composable. As a consequence, protocols that use such primitives can also be proved secure by formal tools. In this paper, we prove that a homomorphic encryption used together with a non-interactive zero-knowledge proof in Dolev-Yao style is a sound abstraction of the real implementation under certain conditions. This helps to automatically design and analyze a class of protocols that use homomorphic encryption together with non-interactive zero-knowledge proofs, such as e-voting.
2008
EPRINT
2-Adic Complexity of a Sequence Obtained from a Periodic Binary Sequence by Either Inserting or Deleting k Symbols within One Period
In this paper, we propose a method to obtain lower bounds on the 2-adic complexity of a sequence derived from a periodic sequence over GF(2) by either inserting or deleting $k$ symbols within one period. The results show how the distribution of the 2-adic complexity varies as $k$ increases. In particular, we discuss the lower bounds for specific values of $k$.
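As a point of reference for the quantity being bounded, the Python sketch below computes the 2-adic complexity of one period of a binary sequence using the standard Klapper-Goresky definition $\log_2\bigl((2^N-1)/\gcd(S(2),\,2^N-1)\bigr)$; the function name and the example sequences are illustrative and not taken from the paper.

import math

def two_adic_complexity(bits):
    # bits: one period s_0, ..., s_{N-1} of a binary sequence
    N = len(bits)
    S = sum(b << i for i, b in enumerate(bits))  # S(2) = sum_i s_i * 2^i
    M = (1 << N) - 1                             # 2^N - 1
    return math.log2(M // math.gcd(S, M))

# Inserting one symbol within the period changes the complexity, as studied above.
print(two_adic_complexity([0, 1, 1, 0, 1]))
print(two_adic_complexity([0, 1, 1, 0, 1, 1]))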
2008
EPRINT
A CCA2 Secure Public Key Encryption Scheme Based on the McEliece Assumptions in the Standard Model
We show that a recently proposed construction by Rosen and Segev can be used for obtaining the first public key encryption scheme based on the McEliece assumptions which is secure against adaptive chosen ciphertext attacks in the standard model.
2008
EPRINT
A Chosen IV Attack Using Phase Shifting Equivalent Keys against DECIM v2
DECIM v2 is a stream cipher submitted to the ECRYPT stream cipher project (eSTREAM) and ISO/IEC 18033-4. No attack against DECIM v2 has been proposed yet. In this paper, we propose a chosen IV attack against DECIM v2 using a new equivalent key class. Our attack can recover an $80$-bit key with a time complexity of $2^{79.90}$ when all bits of the IV are zero. This result is the best one on DECIM v2.
2008
EPRINT
A Combinatorial Analysis of Recent Attacks on Step Reduced SHA-2 Family
We perform a combinatorial analysis of the SHA-2 compression function. This analysis explains in a unified way the recent attacks against reduced-round SHA-2. We start with a general class of local collisions and show that the previously used local collisions by Nikoli\'{c} and Biryukov (NB) and by Sanadhya and Sarkar (SS) are special cases. The study also clarifies several advantages of the SS local collision over the NB local collision. Deterministic constructions of up to 22-round SHA-2 collisions are described using the SS local collision, and up to 21-round SHA-2 collisions are described using the NB local collision. For 23 and 24-round SHA-2, we describe a general strategy and then apply the SS local collision to this strategy. The resulting attacks are faster than those proposed by Indesteege et al. using the NB local collision. We provide colliding message pairs for 22, 23 and 24-round SHA-2. Although these attacks improve upon the existing reduced-round SHA-256 attacks, they do not threaten the security of the full SHA-2 family. \footnote{This work builds upon and subsumes previous work done by us. Whereas the previous works focused on obtaining collisions for a fixed number of rounds, the current work provides the combinatorial framework for understanding how such collisions arise.}
2008
EPRINT
A Comparison Between Hardware Accelerators for the Modified Tate Pairing over $\mathbb{F}_{2^m}$ and $\mathbb{F}_{3^m}$
In this article we propose a study of the modified Tate pairing in characteristics two and three. Starting from the $\eta_T$ pairing introduced by Barreto {\em et al.} (Des Codes Crypt, 2007), we detail various algorithmic improvements in the case of characteristic two. As far as characteristic three is concerned, we refer to the survey by Beuchat {\em et al.} (ePrint 2007-417). We then show how to get back to the modified Tate pairing at almost no extra cost. Finally, we explore the trade-offs involved in the hardware implementation of this pairing for both characteristics two and three. From our experiments, characteristic three appears to have a slight advantage over characteristic two.
2008
EPRINT
A Complete Treatment of 2-party SFE in the Information-Theoretic Setting with Applications to Long-Term Security
It is well known that general secure function evaluation (SFE) with information-theoretical (IT) security is infeasible in the secure channels model in the presence of a corrupted majority \cite{Cle86, Kil91, Kus92, Kil00, IKLP06, Kat06}. In particular, these results extend to, and are derived from, the 2-party scenario, where any corrupted party is already a corrupted majority. On the other hand, \cite{BroTap07} have recently demonstrated that a wealth of interesting functions can be computed securely even in the presence of a corrupted majority, at least if one is willing to sacrifice robustness, thus raising interest in a general description of these functions. In this work we give a complete combinatorial classification of 2-party functions, by their secure computability under active, semi-honest, passive and quantum adversaries. Our treatment is constructive, in the sense that, if a function is computable in a given setting, then we exhibit a protocol. We then proceed to apply our results to gain insight into long-term security, where we admit computational assumptions for the duration of a computation, but require information-theoretical security (privacy) once the computation is concluded.
2008
EPRINT
A correction to ``Efficient and Secure Comparison for On-Line Auctions''
In this note, we describe a correction to the cryptosystem proposed by the authors in "Efficient and Secure Comparison for On-Line Auctions". Although the correction is small and does not affect the performance of the protocols proposed in that paper, it is necessary as the cryptosystem is not secure without it.
2008
EPRINT
A Framework for the Development Playfair Cipher Considering Probability of Occurrence of Characters in English Literature
The Playfair cipher was the first practical digraph substitution cipher. The scheme was invented in 1854 by Charles Wheatstone, but was named after Lord Playfair, who promoted the use of the cipher. In this paper, two algorithms are proposed for rewriting the Playfair cipher. The first effort writes the Playfair cipher in a new way by considering the probability of occurrence of English characters in English literature. At each step, the proposed algorithm checks whether the plaintext contains an odd number of characters. If it does, the algorithm concatenates at the end a dummy character with the lowest probability of occurrence, to form a word of even length in the ciphertext, and then searches the dictionary to find out whether the word with the dummy character is present. If that word is present in the dictionary, the dummy character is removed and the same effort is made with the dummy character of next lowest probability of occurrence. The second method adds another new approach: it replaces each blank between two consecutive words by the character which has the least probability of appearing at that position in the sentence, and searches the dictionary to check whether this new word has a different meaning. If the new word is present, the blank is replaced with the character of next higher probability. Finally, a time comparison of encryption and decryption of some sentences has been made between the original Playfair algorithm and the two proposed algorithms; the decryption times are found to be much better than those of the original Playfair algorithm, and the original message is produced from the encrypted message directly.
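As a rough Python illustration of the odd-length handling described in this abstract (and only of that step), the sketch below pads a plaintext word with the least-probable dummy character that does not produce a dictionary word; the frequency order and the dictionary used here are hypothetical stand-ins, not the paper's tables.

ENGLISH_FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz"  # hypothetical, most to least frequent
DICTIONARY = {"cat", "dog", "catz"}                # hypothetical toy word list

def pad_if_odd(word):
    # Append the least-probable dummy character, skipping any that would
    # accidentally form a word already present in the dictionary.
    if len(word) % 2 == 0:
        return word
    for dummy in reversed(ENGLISH_FREQ_ORDER):     # least probable first
        candidate = word + dummy
        if candidate not in DICTIONARY:
            return candidate
    return word                                    # unreachable with a sane dictionary

print(pad_if_odd("cat"))  # 'catz' is rejected (it is in the toy dictionary), so 'catq' is returned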
2008
EPRINT
A Generalized Brezing-Weng Algorithm for Constructing Pairing-Friendly Ordinary Abelian Varieties
We give an algorithm that produces families of Weil numbers for ordinary abelian varieties over finite fields with prescribed embedding degree. The algorithm uses the ideas of Freeman, Stevenhagen, and Streng to generalize the Brezing-Weng construction of pairing-friendly elliptic curves. We discuss how CM methods can be used to construct these varieties, and we use our algorithm to give examples of pairing-friendly ordinary abelian varieties of dimension 2 and 3 that are absolutely simple and have smaller $\rho$-values than any previous such example.
2008
EPRINT
A Generic Method to Extend Message Space of a Strong Pseudorandom Permutation
In this paper we present an efficient and secure generic method which can encrypt messages of size at least $n$. This generic encryption algorithm needs a secure encryption algorithm for messages whose length is a multiple of $n$. The first generic construction, XLS, was proposed by Ristenpart and Rogaway at FSE-07. It needs two extra invocations of an independently chosen strong pseudorandom permutation or SPRP defined over $\{0,1\}^n$ for encryption of an incomplete message block, whereas our construction needs only one invocation of a weak pseudorandom function and two multiplications over a finite field (equivalently, two invocations of a universal hash function). We prove here that the proposed method preserves (tweakable) SPRP. This new construction is meaningful for two reasons. Firstly, it is based on a weak pseudorandom function, which is a weaker security notion than SPRP; thus we are able to achieve stronger security from a weaker primitive. Secondly, in practice, finite field multiplication is more efficient than an invocation of an SPRP. Hence our method can be more efficient than XLS.
2008
EPRINT
A Hardware Interface for Hashing Algorithms
The submissions to the SHA-3 competition include a reference implementation in C, built on top of a standard programmer's interface (API). This greatly improves the evaluation process: it enables portability across platforms, and it makes performance comparison of the algorithms easy. For hardware crypto-implementations, such a standard interface does not exist. As a result, the evaluation and comparison of hardware hashing algorithms becomes complex and error prone. The first step to improve the evaluation process for hardware is the definition of an interface. This document describes a general hardware interface for hashing algorithms. The operation of the interface is discussed, and the appendix lists a SHA-256 reference implementation that uses the interface.
2008
EPRINT
A Modular Security Analysis of the TLS Handshake Protocol
We study the security of the widely deployed Secure Sockets Layer/Transport Layer Security (TLS) key agreement protocol. Our analysis identifies, justifies, and exploits the modularity present in the design of the protocol: the {\em application keys} offered to higher level applications are obtained from a {\em master key}, which in turn is derived, through interaction, from a {\em pre-master key}. Our first contribution consists of formal models that clarify the security level enjoyed by each of these types of keys. The models that we provide fall under well established paradigms in defining execution, and security notions. We capture the realistic setting where only one of the two parties involved in the execution of the protocol (namely the server) has a certified public key, and where the same master key is used to generate multiple application keys. The main contribution of the paper is a modular and generic proof of security for the application keys established through the TLS protocol. We show that the transformation used by TLS to derive master keys essentially transforms an {\em arbitrary} secure pre-master key agreement protocol into a secure master-key agreement protocol. Similarly, the transformation used to derive application keys works when applied to an arbitrary secure master-key agreement protocol. These results are in the random oracle model. The security of the overall protocol then follows from proofs of security for the basic pre-master key generation protocols employed by TLS.
2008
EPRINT
A new almost perfect nonlinear function which is not quadratic
We show how to change one coordinate function of an almost perfect nonlinear (APN) function in order to obtain new examples. It turns out that this is a very powerful method to construct new APN functions. In particular, we show that the approach can be used to construct ``non-quadratic'' APN functions. This new example is in remarkable contrast to all recently constructed functions which have all been quadratic.
2008
EPRINT
A New Approach for Algebraically Homomorphic Encryption
The existence of an efficient and provably secure algebraically homomorphic scheme (AHS), i.e., one that supports both addition and multiplication operations, is a long-standing open problem. All proposals so far are either insecure or not provably secure, inefficient, or allow only one multiplication (and arbitrary additions). As only very limited progress has been made on the existing approaches in recent years, the question arises whether new methods can lead to more satisfactory solutions. In this paper we show how to construct a provably secure AHS based on a coding theory problem. It allows for arbitrarily many additions and for a fixed, but arbitrary, number of multiplications, and works over arbitrary finite fields. Besides, it possesses some useful properties: i) the plaintext space can be extended adaptively without the need for re-encryption, ii) it operates over arbitrary infinite fields as well, e.g., the rational numbers, although the hardness of the underlying decoding problem in such cases is less studied, and iii) depending on the parameter choice, the scheme has inherent error-correction up to a certain number of transmission errors in the ciphertext. However, since our scheme is symmetric and its ciphertext size grows exponentially with the expected total number of encryptions, its deployment is limited to specific client-server applications with a small number of multiplications. Nevertheless, we believe there is room for improvement due to the huge number of alternative coding schemes that can serve as the underlying hardness problem. For these reasons, and because of the interesting properties of our scheme, we believe that using coding theory to design AHS is a promising approach and hope to encourage further investigations.
2008
EPRINT
A New Approach to Secure Logging
The need for secure logging is well understood by security professionals, including both researchers and practitioners. The ability to efficiently verify all (or some) log entries is important to any application employing secure logging techniques. In this paper, we begin by examining the state of the art in secure logging and identify some problems inherent to systems based on trusted third-party servers. We then propose a different approach to secure logging based upon recently developed Forward-Secure Sequential Aggregate (FssAgg) authentication techniques. Our approach offers both space-efficiency and provable security. We illustrate two concrete schemes -- one private-verifiable and one public-verifiable -- that offer practical secure logging without any reliance on on-line trusted third parties or secure hardware. We also investigate the concept of immutability in the context of forward-secure sequential aggregate authentication to provide finer grained verification. Finally, we report on some experience with a prototype built upon a popular code version control system.
2008
EPRINT
A New Blind Identity-Based Signature Scheme with Message Recovery
Anonymity of consumers is an essential functionality that should be supported in e-cash systems, location-based services, electronic voting systems, as well as digital rights management (DRM) systems. Privacy protection is an important aspect for wider acceptance of DRM systems by consumers. The concept of a blind signature is one possible cryptographic solution, yet it has not received much attention in the identity-based setting. In the identity-based setting, the public key of a user is derived from his identity, thus simplifying the certificate management process compared to traditional public key cryptosystems. In this paper, a new blind identity-based signature scheme with message recovery based on bilinear pairings on elliptic curves is presented. The use of bilinear pairings over elliptic curves enables smaller key sizes, while achieving the same level of security compared to schemes not utilizing elliptic curves. The scheme achieves computational savings compared to other schemes in the literature. The correctness of the proposed scheme is validated and a proof of the blindness property is provided. Performance and other security related issues are also addressed.
2008
EPRINT
A new class of Bent functions in Polynomial Forms
This paper is a contribution to the construction of bent functions of the form $f(x) = \mathrm{Tr}^{o(s_1)}(a x^{s_1}) + \mathrm{Tr}^{o(s_2)}(b x^{s_2})$, where $o(s_i)$ denotes the cardinality of the cyclotomic class of 2 modulo $2^n-1$ which contains $s_i$, and whose coefficients $a$ and $b$ are, respectively, in $\mathbb{F}_{2^{o(s_1)}}$ and $\mathbb{F}_{2^{o(s_2)}}$. Many constructions of monomial bent functions are presented in the literature, but very few are known even in the binomial case. We prove that the exponents $s_1=2^{\frac{n}{2}}-1$ and $s_2=\frac{2^n-1}{3}$, with $a\in\mathbb{F}_{2^n}$ and $b\in\mathbb{F}_4$, provide a construction of a new infinite class of bent functions over $\mathbb{F}_{2^n}$ with maximum algebraic degree. For $m$ odd, we give an explicit characterization of the bentness of these functions in terms of the Kloosterman sums of the corresponding coefficients. For $m$ even, we give a necessary condition in terms of these Kloosterman sums.
2008
EPRINT
A New Collision Differential For MD5 With Its Full Differential Path
Since the first collision differential for MD5, with its full differential path, was presented by Wang et al. in 2004, renewed interest in collision attacks on the MD family of hash functions has surged in the world of cryptology. To date, however, no cryptanalyst has given a second computationally feasible collision differential for MD5 with its full differential path, and even no improved differential paths based on Wang's MD5 collision differential have appeared in the literature. Firstly in this paper, a new differential cryptanalysis called signed difference is defined, some principles and recipes for finding collision differentials and designing differential paths are proposed, and the signed-difference generation and elimination rules implicit in the auxiliary functions are derived. Then, based on these newly found properties and rules, this paper comes up with a new computationally feasible collision differential for MD5 with its full differential path, which is simpler and thus more understandable than Wang's, and a set of sufficient conditions considering carries that guarantees a full collision is derived from the full differential path. Finally, a multi-message-modification-based fast collision attack algorithm for searching collision messages is specialized for the full differential path, resulting in computational complexities of $2^{36}$ and $2^{32}$ MD5 operations, respectively, for the first and second blocks. As examples, two collision message pairs with different first blocks are obtained.
2008
EPRINT
A New Family of Perfect Nonlinear Binomials
We prove that the binomials $x^{p^s+1}-\alpha x^{p^k+p^{2k+s}}$ define perfect nonlinear mappings in $GF(p^{3k})$ for an appropriate choice of the integer $s$ and $\alpha \in GF(p^{3k})$. We show that these binomials are inequivalent to known perfect nonlinear monomials. As a consequence we obtain new commutative semifields for $p\geq 5$ and odd $k$.
2008
EPRINT
A NEW HASH ALGORITHM: Khichidi-1
This is a technical report describing a new hash algorithm called Khichidi-1, written in response to the hash competition (SHA-3) called by the National Institute of Standards and Technology (NIST), USA. This algorithm generates a condensed representation of a message called a Message Digest. A group of functions used in the development of Khichidi-1 is described, followed by a detailed explanation of the preprocessing approach. Differences between Khichidi-1 and NIST SHA-2, the algorithm that is in wide use, have also been highlighted. Analytical proofs, implementations with examples, various attack scenarios, entropy tests, security strength and computational complexity of the algorithm are described in separate sections of this document.
2008
EPRINT
A new identity based proxy signature scheme
Proxy signature schemes allow a proxy signer to generate proxy signatures on behalf of an original signer. Mambo et al. first introduced the notion of proxy signature and a lot of research work can be found on this topic nowadays. Recently, many identity based proxy signature schemes were proposed. However, some schemes are vulnerable to proxy key exposure attack. In this paper, we propose a security model for identity based proxy signature schemes. Then an efficient scheme from pairings is presented. The presented scheme is provably secure in the random oracle model. In particular, the new scheme is secure against proxy key exposure attack.
2008
EPRINT
A New Message Recognition Protocol for Ad Hoc Pervasive Networks
We propose a message recognition protocol which is suitable for ad hoc pervasive networks without the use of hash chains. Hence, we no longer require the sensor motes to save values of a hash chain in their memories. This relaxes the memory requirements. Moreover, we do not need to fix the total number of times the protocol can be executed which implies a desired flexibility in this regard. Furthermore, our protocol is secure without having to consider families of assumptions that depend on the number of sessions the protocol is executed. Hence, the security does not weaken as the protocol is executed over time. Last but not least, we provide a practical procedure for resynchronization in case of any adversarial disruption or communication failure.
2008
EPRINT
A New Proxy Identity-Based Signcryption Scheme for Partial Delegation of Signing Rights
In this paper, a new identity-based proxy signcryption scheme is presented. The proposed scheme allows partial delegation of signing rights. Consequently, a signature created by the proxy signer is distinguishable from that created by the principal signer. This level of security is a common requirement in many applications to prevent malicious proxy agents from impersonating the principal signer. Moreover, the scheme is based on bilinear pairings over elliptic curves and thus smaller key sizes are required compared to schemes not utilizing elliptic curves. A revocation protocol of dishonest agents is given together with a renewal procedure for the proxies of honest agents. Finally, an application scenario for the proposed scheme is presented.
2008
EPRINT
A New Randomness Extraction Paradigm for Hybrid Encryption
We present a new approach to the design of IND-CCA2 secure hybrid encryption schemes in the standard model. Our approach provides an efficient generic transformation from 1-universal to 2-universal hash proof systems. The transformation involves a randomness extractor based on a 4-wise independent hash function as the key derivation function. Our methodology can be instantiated with efficient schemes based on standard intractability assumptions such as DDH, QR and Paillier. Interestingly, our framework also allows us to prove IND-CCA2 security of a hybrid version of Damgaard's ElGamal public-key encryption scheme from 1991 under the DDH assumption.
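For readers unfamiliar with the extractor ingredient mentioned above, the Python sketch below gives one standard, well-known 4-wise independent family (a uniformly random degree-3 polynomial over a prime field), which is the kind of hash the key-derivation step relies on; it is not the paper's concrete instantiation, and the field size and inputs are arbitrary.

import secrets

P = (1 << 127) - 1  # a Mersenne prime, used here as the field size

def sample_4wise_hash():
    coeffs = [secrets.randbelow(P) for _ in range(4)]  # a0, a1, a2, a3 chosen uniformly
    def h(x):
        acc = 0
        for a in reversed(coeffs):  # Horner evaluation of a3*x^3 + a2*x^2 + a1*x + a0 mod P
            acc = (acc * x + a) % P
        return acc
    return h

h = sample_4wise_hash()
print(h(42), h(43))  # values at any 4 distinct points are jointly uniform over the family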
2008
EPRINT
A New Universal Hash Function and Other Cryptographic Algorithms Suitable for Resource Constrained Devices
A new multi-linear universal hash family is described. Messages are sequences over a finite field $\mathbb{F}_q$ while keys are sequences over an extension field $\mathbb{F}_{q^n}$. A linear map $\psi$ from $\mathbb{F}_{q^n}$ to itself is used to compute the output digest. Of special interest is the case $q=2$. For this case, we show that there is an efficient way to implement $\psi$ using a tower field representation of $\mathbb{F}_{q^n}$. Such a $\psi$ corresponds to a word oriented LFSR. We describe a method of combining the new universal hash function and a stream cipher with IV to obtain a MAC algorithm. Further, we extend the basic universal hash function to an invertible blockwise universal hash function. Following the Naor-Reingold approach, this is used to construct a tweakable enciphering scheme which uses a single layer of encryption and no finite field multiplications. From an efficiency viewpoint, the focus of all our constructions is small hardware and other resource constrained applications. For such platforms, our constructions compare favourably to previous work.
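As a toy Python illustration of the multi-linear part of this construction for $q=2$ (omitting the linear map $\psi$ and the MAC/enciphering layers described in the paper), the digest of a short message is simply the XOR of the key words selected by the message bits; the key length and word size below are arbitrary.

import secrets

n = 64  # bit-length of an element of F_{2^n}, represented as a Python int

def keygen(msg_len):
    return [secrets.randbits(n) for _ in range(msg_len)]

def multilinear_hash(key, msg_bits):
    digest = 0
    for k_i, m_i in zip(key, msg_bits):
        if m_i:
            digest ^= k_i  # m_i * K_i summed in F_{2^n}, with m_i in {0,1}
    return digest

key = keygen(8)
print(hex(multilinear_hash(key, [1, 0, 1, 1, 0, 0, 1, 0])))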
2008
EPRINT
A New Variant of the Cramer-Shoup KEM Secure against Chosen Ciphertext Attack
We propose a new variant of the Cramer-Shoup KEM (key encapsulation mechanism). The proposed variant is more efficient than the original Cramer-Shoup KEM scheme in terms of public key size and encapsulation cost, but is proven to be (still) secure against chosen ciphertext attack in the standard model, relative to the Decisional Diffie-Hellman problem.
2008
EPRINT
A non-delegatable identity-based strong designated verifier signature scheme
In a strong designated verifier signature scheme, no third party can verify the validity of a signature. On the other hand, non-delegatability, proposed by Lipmaa, Wang and Bao, is another stronger notion for designated verifier signature schemes. In this paper, we formalize a security model for non-delegatable identity based strong designated verifier signature (IDSDVS) schemes. Then a novel non-delegatable IDSDVS scheme based on pairing is presented. The presented scheme is proved to be non-delegatable, non-transferable and unforgeable under the Gap Bilinear Diffie-Hellman assumption.
2008
EPRINT
A non-interactive deniable authentication scheme based on designated verifier proofs
A deniable authentication protocol enables a receiver to identify the source of a given message, but not to prove the identity of the sender to a third party. In recent years, several non-interactive deniable authentication schemes have been proposed in order to enhance efficiency. In this paper, we propose a security model for non-interactive deniable authentication schemes. Then a non-interactive deniable authentication scheme is presented based on designated verifier proofs. Furthermore, we prove the security of our scheme under the DDH assumption.
2008
EPRINT
A Note on Differential Privacy: Defining Resistance to Arbitrary Side Information
In this note we give a precise formulation of "resistance to arbitrary side information" and show that several relaxations of differential privacy imply it. The formulation follows the ideas originally due to Dwork and McSherry, stated implicitly in [Dwork06]. This is, to our knowledge, the first place such a formulation appears explicitly. The proof that relaxed definitions satisfy the Bayesian formulation is new.
2008
EPRINT
A Novel Probabilistic Passive Attack on the Protocols HB and HB+
We present a very simple probabilistic, passive attack against the protocols HB and HB+. Our attack presents some interesting features: it requires fewer captured transcripts of protocol executions when compared to previous results; it makes it possible to trade the number of required transcripts for computational complexity; and the value of the noise used in the protocols HB and HB+ need not be known.
2008
EPRINT
A Pipelined Karatsuba-Ofman Multiplier over GF($3^{97}$) Amenable for Pairing Computation
We present a subquadratic ternary field multiplier based on the combination of several recently published variants of the Karatsuba-Ofman scheme. Since one of the most relevant applications for this kind of multiplier is pairing computation, where several field multiplications need to be computed at once, we decided to design a $k$-stage pipeline structure for $k=1,\ldots,4$, where each stage is composed of a 49-trit polynomial multiplier unit. That architecture can compute an average of $k$ field multiplications every three clock cycles, which implies that our four-stage pipeline design can perform more than one field multiplication per clock cycle. When implemented in a Xilinx Virtex V XC5VLX330 FPGA device, this multiplier can compute one field multiplication over GF($3^{97}$) in just $11.47$ ns.
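The Python sketch below shows only the arithmetic idea behind a Karatsuba-Ofman multiplier for characteristic-three polynomials (one recursion level replaces four half-size products with three); the 49-trit pipelined datapath described in the paper is a hardware design and is not reproduced here, and the operand lengths are illustrative.

def poly_mul_schoolbook(a, b):
    # a, b: coefficient lists mod 3, lowest degree first
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % 3
    return res

def poly_add(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [(x + y) % 3 for x, y in zip(a, b)]

def poly_sub(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [(x - y) % 3 for x, y in zip(a, b)]

def karatsuba(a, b):
    # Assumes len(a) == len(b) and the common length is a power of two.
    if len(a) <= 2:
        return poly_mul_schoolbook(a, b)
    m = len(a) // 2
    a0, a1, b0, b1 = a[:m], a[m:], b[:m], b[m:]
    z0 = karatsuba(a0, b0)                                   # low halves
    z2 = karatsuba(a1, b1)                                   # high halves
    z1 = poly_sub(karatsuba(poly_add(a0, a1), poly_add(b0, b1)), poly_add(z0, z2))
    res = [0] * (2 * len(a) - 1)
    for i, c in enumerate(z0):
        res[i] = (res[i] + c) % 3
    for i, c in enumerate(z1):
        res[i + m] = (res[i + m] + c) % 3
    for i, c in enumerate(z2):
        res[i + 2 * m] = (res[i + 2 * m] + c) % 3
    return res

a, b = [1, 2, 0, 1], [2, 1, 1, 1]
print(karatsuba(a, b) == poly_mul_schoolbook(a, b))  # True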
2008
EPRINT
A Proof of Security in O(2^n) for the Xor of Two Random Permutations
Xoring two permutations is a very simple way to construct pseudorandom functions from pseudorandom permutations. The aim of this paper is to get precise security results for this construction. Since such a construction has many applications in cryptography (see \cite{BI,BKrR,HWKS,SL} for example), this problem is interesting both from a theoretical and from a practical point of view. In \cite{SL}, it was proved that Xoring two random permutations gives a secure pseudorandom function if $m << 2^{\frac {2n}{3}}$. By ``secure'' we mean here that the scheme will resist all adaptive chosen plaintext attacks limited to $m$ queries (even with unlimited computing power). More generally, in \cite{SL} it is also proved that with $k$ Xors, instead of 2, we have security when $m << 2^{\frac {kn}{k+1}}$. In this paper we will prove that for $k=2$ we in fact already have security when $m << O(2^n)$. Therefore we will obtain a proof of a similar result claimed in \cite{BI} (security when $m<<O(2^n /n^{2/3})$). Moreover our proof is very different from the proof strategy suggested in \cite{BI} (we do not use Azuma's inequality and Chernoff bounds, for example), and we will get precise and explicit $O$ functions. Another interesting point of our proof is that we will show that this (cryptographic) problem of security is directly related to a purely combinatorial problem that is very simple to describe.
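A minimal Python sketch of the object being analysed: two independent random permutations of an $n$-bit domain, XORed pointwise to give a random-looking function. The tiny explicit permutation tables below stand in for what would, in practice, be a block cipher under two independent keys; the parameters are illustrative only.

import random

n = 8
domain = list(range(1 << n))

p1 = domain[:]
p2 = domain[:]
random.shuffle(p1)  # P1: a uniformly random permutation of {0,1}^n
random.shuffle(p2)  # P2: an independent uniformly random permutation

def f(x):
    return p1[x] ^ p2[x]  # F(x) = P1(x) xor P2(x)

print([f(x) for x in range(4)])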
2008
EPRINT
A protocol for K-multiple substring matching
A protocol is introduced to show that $K$ copies of a pattern string are embedded in a host string. Commitments to both strings, and to the offsets of the copies of the pattern in the host, are the Verifier's input. The protocol does not leak useful information about the strings, and is zero-knowledge.
2008
EPRINT
A Proxy Signature Scheme over Braid Groups
Proxy signatures, introduced by Mambo, Usuda and Okamoto, allow a designated person to sign on behalf of an original signer. Braid groups have been playing an important role in the theory of cryptography, as they are non-commutative groups usable in cryptography. Some digital signature schemes have been given over braid groups, but no proxy signature scheme has been introduced. In this paper we propose a proxy signature scheme using the conjugacy search problem over braid groups. Our proxy signature scheme is a partially delegated protected proxy signature.
2008
EPRINT
A Public Key Encryption In Standard Model Using Cramer-Shoup Paradigm
We present a public-key encryption scheme which is provably secure against adaptive chosen ciphertext attack. The scheme is constructed using the Cramer-Shoup paradigm. The security of the scheme is based on the Decisional Bilinear Diffie-Hellman problem.
2008
EPRINT
A public key encryption scheme secure against key dependent chosen plaintext and adaptive chosen ciphertext attacks
Recently, at Crypto 2008, Boneh, Halevi, Hamburg, and Ostrovsky (BHHO) solved the long-standing open problem of ``circular encryption,'' by presenting a public key encryption scheme and proving that it is semantically secure against key dependent chosen plaintext attack (KDM-CPA security) under standard assumptions (and without resorting to random oracles). However, they left as an open problem that of designing an encryption scheme that \emph{simultaneously} provides security against both key dependent chosen plaintext \emph{and} adaptive chosen ciphertext attack (KDM-CCA2 security). In this paper, we solve this problem. First, we show that by applying the Naor-Yung ``double encryption'' paradigm, one can combine any KDM-CPA secure scheme with any (ordinary) CCA2 secure scheme, along with an appropriate non-interactive zero-knowledge proof, to obtain a KDM-CCA2 secure scheme. Second, we give a concrete instantiation that makes use of the above KDM-CPA secure scheme of BHHO, along with a generalization of the Cramer-Shoup CCA2 secure encryption scheme, and recently developed pairing-based NIZK proof systems. This instantiation increases the complexity of the BHHO scheme by just a small constant factor.
2008
EPRINT
A Real-World Attack Breaking A5/1 within Hours
In this paper we present a real-world hardware-assisted attack on the well-known A5/1 stream cipher which is (still) used to secure GSM communication in most countries all over the world. During the last ten years A5/1 has been intensively analyzed. However, most of the proposed attacks are only of theoretical interest since they lack practicality — due to strong preconditions, high computational demands and/or huge storage requirements — and have never been fully implemented. In contrast to these attacks, our attack, which is based on the work by Keller and Seitz [KS01], runs on an existing special-purpose hardware device called COPACOBANA. With the knowledge of only 64 bits of keystream the machine is able to reveal the corresponding internal 64-bit state of the cipher in about 7 hours on average. Besides providing a detailed description of our attack architecture as well as implementation results, we propose and analyze an optimization that leads to a further improvement of about 16% in computation time.
2008
EPRINT
A Recursive Threshold Visual Cryptography Scheme
This paper presents a recursive hiding scheme for 2 out of 3 secret sharing. In recursive hiding of secrets, the user encodes additional information about smaller secrets in the shares of a larger secret without an expansion in the size of the latter, thereby increasing the efficiency of secret sharing. We present applications of our proposed protocol to images as well as text.
2008
EPRINT
A Secure Remote User Authentication Scheme with Smart Cards
A remote user authentication scheme is one of the simplest and most convenient authentication mechanisms for dealing with secret data over insecure networks. These types of schemes are applicable to areas such as computer networks, wireless networks, remote login systems, operating systems and database management systems. The goal of a remote user authentication scheme is to identify a valid card holder as having the rights and privileges indicated by the issuer of the card. In recent years, many remote user authentication schemes have been proposed to authenticate a legitimate user, but none of them can solve all possible problems and withstand all possible attacks. This paper presents a secure remote user authentication scheme with smart cards. The proposed scheme provides the essential security requirements and achieves particular attributes.
2008
EPRINT
A Secure Threshold Anonymous Password-Authenticated Key Exchange Protocol
At Indocrypt 2005, Viet et al. [22] proposed an anonymous password-authenticated key exchange (PAKE) protocol and its threshold construction, both of which are designed for clients' password-based authentication and anonymity against a passive server, who does not deviate from the protocol. In this paper, we first point out that their threshold construction is completely insecure against off-line dictionary attacks. For the threshold t > 1, we propose a secure threshold anonymous PAKE (for short, TAP) protocol with the number of clients n upper-bounded, such that n \leq 2 \sqrt{N-1} -1, where N is the dictionary size of passwords. We rigorously prove that the TAP protocol has semantic security of session keys in the random oracle model by showing a reduction to the computational Diffie-Hellman problem. In addition, the TAP protocol provides unconditional anonymity against a passive server. For the threshold t=1, we propose an efficient anonymous PAKE protocol that significantly improves efficiency in terms of computation costs and communication bandwidth compared to the original (non-threshold) anonymous PAKE protocol [22].
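As a small worked instance of the stated client bound (purely arithmetic, with an illustrative dictionary size that is not taken from the paper):

import math

N = 10**6                          # illustrative password-dictionary size
n_max = 2 * math.isqrt(N - 1) - 1  # integer form of the bound n <= 2*sqrt(N-1) - 1
print(n_max)                       # at most 1997 clients for this N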
2008
EPRINT
A Short Proof of the PRP/PRF Switching Lemma
At Eurocrypt 2006, Bellare and Rogaway \cite{BeRo06} gave a proof of the PRP/PRF switching lemma using their game-based proof technique. In the appendix of the same paper, they also gave a proof without games. In this paper, we give another proof of the switching lemma, which is simple, mathematically clear and easy to understand. Our proof is based on \textit{the strong interpolation theorem}.
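For reference, the statement being proved is, in its usual form: for any adversary $A$ making at most $q$ distinct queries,
\[
\bigl|\Pr[A^{\pi}=1]-\Pr[A^{\rho}=1]\bigr| \;\le\; \frac{q(q-1)}{2^{n+1}},
\]
where $\pi$ is a uniformly random permutation of $\{0,1\}^n$ and $\rho$ is a uniformly random function from $\{0,1\}^n$ to $\{0,1\}^n$.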
2008
EPRINT
A Simple Derivation for the Frobenius Pseudoprime Test
Probabilistic compositeness tests are of great practical importance in cryptography. Besides prominent tests (like the well-known Miller-Rabin test) there are tests that use Lucas sequences for testing compositeness. One example is the so-called Frobenius test, which has a very low error probability. Using a slight modification of the above mentioned Lucas sequences, we present a simple derivation of the Frobenius pseudoprime test in the version proposed by Crandall and Pomerance.
2008
EPRINT
A simple generalization of the {E}l{G}amal cryptosystem to non-abelian groups II
In this paper I study the MOR cryptosystem using the special linear group over finite fields. At our current state of knowledge, I show that the MOR cryptosystem is more secure than the ElGamal cryptosystem over finite fields.
2008
EPRINT
A strategy for any DAA Issuer and an additional verification by a Host
A successful strategy is identified by which any Verifier colluding with any Issuer can distinguish honest Provers issuing DAA signatures. An additional verification equation is introduced for a Prover to detect 'tagged' credentials that may be issued during the Join protocol. This verification can be done by the Host and does not affect the TPM in any way.
2008
EPRINT
A Tamper-Evident Voting Machine Resistant to Covert Channels
To provide a high level of security guarantee, cryptography is introduced into the design of the voting machine. A voting machine based on cryptography is vulnerable to attacks through covert channels: an adversary may inject malicious code into the voting machine and make it leak vote information unnoticeably by exploiting the randomness used in encryptions and zero-knowledge proofs. In this paper a voting machine resistant to covert channels is designed. It has the following properties. Firstly, it is tamper-evident. The randomness used by the voting machine is generated by the election authority; inconsistent use of the randomness can be detected by the voter by examining a destroyable verification code. Even if malicious code is run in the voting machine, attacks through subliminal channels are thwarted. Next, it is voter-verifiable. The voter has the ability to verify whether the ballot cast by the machine is consistent with her intent without doing complicated cryptographic computation. Finally, the voting system is receipt-free, so vote-buying and coercion are prevented.
2008
EPRINT
Abelian varieties with prescribed embedding degree
We present an algorithm that, on input of a CM-field $K$, an integer $k \ge 1$, and a prime $r \equiv 1 \bmod k$, constructs a $q$-Weil number $\pi \in \mathcal{O}_K$ corresponding to an ordinary, simple abelian variety $A$ over the field $\mathbb{F}_q$ of $q$ elements that has an $\mathbb{F}_q$-rational point of order $r$ and embedding degree $k$ with respect to $r$. We then discuss how CM-methods over $K$ can be used to explicitly construct $A$.
2008
EPRINT
Accelerating the Scalar Multiplication on Elliptic Curve Cryptosystems over Prime Fields
Elliptic curve cryptography (ECC), independently introduced by Koblitz and Miller in the 80's, has attracted increasing attention in recent years due to its shorter key length requirement in comparison with other public-key cryptosystems such as RSA. Shorter key length means reduced power consumption and computing effort, and less storage requirement, factors that are fundamental in ubiquitous portable devices such as PDAs, cellphones, smartcards, and many others. To that end, a lot of research has been carried out to speed up and improve ECC implementations, mainly focusing on the most important and time-consuming ECC operation: scalar multiplication. In this thesis, we focus on optimizing such ECC operation at the point and scalar arithmetic levels, specifically targeting standard curves over prime fields. At the point arithmetic level, we introduce two innovative methodologies to accelerate ECC formulae: the use of new composite operations, which are built on top of basic point doubling and addition operations; and the substitution of field multiplications by squarings and other cheaper operations. These techniques are efficiently exploited, individually or jointly, in several contexts: to accelerate the computation of scalar multiplications, and the computation of pre-computed points for window-based scalar multiplications (up to 30% improvement in comparison with the previous best method); to speed up computations of simple side-channel attack (SSCA)-protected implementations using innovative atomic structures (up to 22% improvement in comparison with scalar multiplication using original atomic structures); and to develop parallel formulae for SIMD-based applications, which are able to execute three and four operations simultaneously (up to 72% improvement in comparison with a sequential scalar multiplication). At the scalar arithmetic level, we develop new sublinear (in terms of Hamming weight) multibase scalar multiplications based on NAF-like conversion algorithms that are shown to be faster than any previous scalar multiplication method. For instance, the proposed multibase scalar multiplications reduce computing times by 10.9% and 25.3% in comparison with traditional NAF for unprotected and SSCA-protected scenarios, respectively. Moreover, our conversion algorithms overcome the problem of converting any integer to multibase representation, solving an open problem that was defined as hard. Thus, our algorithms make the use of multiple bases practical for applications such as ECC scalar multiplication for the first time.
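As a concrete reminder of the single-base baseline that the multibase recodings above improve on, the Python sketch below computes the standard non-adjacent form (NAF) of a scalar; it is not the thesis' multibase conversion algorithm. The lower Hamming weight of the recoded scalar directly reduces the number of point additions in double-and-add scalar multiplication.

def naf(k):
    # Standard NAF recoding: digits in {-1, 0, 1}, no two adjacent digits nonzero.
    digits = []
    while k > 0:
        if k & 1:
            d = 2 - (k % 4)   # k mod 4 == 1 -> digit 1, k mod 4 == 3 -> digit -1
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits             # least significant digit first

print(naf(239))  # [-1, 0, 0, 0, -1, 0, 0, 0, 1], i.e. 239 = 256 - 16 - 1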
2008
EPRINT
Accountability of Perfect Concurrent Signature
Concurrent signatures provide a novel idea for fair exchange protocols without a trusted third party. Perfect concurrent signatures were proposed to strengthen the ambiguity of concurrent signatures. Wang et al. pointed out that there exists an attack against the fairness of perfect concurrent signatures and proposed an improved perfect concurrent signature. This paper finds that in the proposed (perfect) concurrent signature protocols, whether two-party or multi-party, a signer can bind multiple messages with one keystone set while letting the other signers know only one of the messages. This is a new unfair case in the application of concurrent signatures. Based on this observation, we propose that accountability should be one of the security properties of (perfect) concurrent signatures, and we give a definition of accountability for concurrent signatures. To illustrate this idea, we give an attack scenario against the accountability of the improved perfect concurrent signature proposed by Wang et al., and propose an updated version of perfect concurrent signature to avoid such an attack.
2008
EPRINT
Adaptive Security in Broadcast Encryption Systems
We present new techniques for achieving adaptive security in broadcast encryption systems. Previous work on fully-collusion resistant broadcast encryption with short ciphertexts was limited to only considering static security. First, we present a new definition of security that we call semi-static security and show a generic ``two-key" transformation from semi-statically secure systems to adaptively secure ones that have comparable-sized ciphertexts. Using bilinear maps, we then construct broadcast encryption systems that are semi-statically secure in the standard model and have constant size ciphertexts. Our semi-static constructions work when the number of indices or identifiers in the system is polynomial in the security parameter. For identity-based broadcast encryption, where the number of potential indices or identifiers may be exponential, we present the first adaptively secure system with sublinear ciphertexts. We prove security in the standard model.
2008
EPRINT
Additive Homomorphic Encryption with t-Operand Multiplications
Homomorphic encryption schemes are an essential ingredient for designing protocols where different users interact in order to obtain some information from the others, while at the same time each user keeps some of his information private. When the algebraic structure underlying these protocols is complicated, standard homomorphic encryption schemes are not enough, because they do not allow one to crypto-compute additions and products of plaintexts at the same time. In this work we define a theoretical object, $t$-chained encryption schemes, which can be used to design crypto-computers for the addition and product of $t$ integer values. Previous solutions in the literature worked for the case $t=2$. Our solution is not only theoretical: we show that some existing (pseudo-)homomorphic encryption schemes (some of them based on lattices) can be used to implement the concept of a $t$-chained encryption scheme in practice.
2008
EPRINT
Algebraic Attacks on the Crypto-1 Stream Cipher in MiFare Classic and Oyster Cards
MiFare Crypto 1 is a lightweight stream cipher used in London's Oyster card, the Netherlands' OV-Chipcard, Boston's CharlieCard in the US, and in numerous wireless access control and ticketing systems worldwide. Recently, researchers have been able to recover this algorithm by reverse engineering. We have examined MiFare from the point of view of the so-called "algebraic attacks". We can recover the full 48-bit key of the MiFare algorithm in 200 seconds on a PC, given 1 known IV (from one single encryption). The security of this cipher is therefore close to zero. This is particularly shocking, given the fact that, according to the Dutch press, 1 billion MiFare Classic chips are used worldwide, including in many governmental security systems.
2008
EPRINT
Algebraic Cryptanalysis of Curry and Flurry using Correlated Messages
In \cite{BPW}, Buchmann, Pyshkin and Weinmann have described two families of Feistel and SPN block ciphers called Flurry and Curry, respectively. These two families of ciphers are fully parametrizable and have a sound design strategy against basic statistical attacks, i.e. linear and differential attacks. The encryption process can be easily described by a set of algebraic equations. These ciphers are therefore targets of choice for algebraic attacks. In particular, the key-recovery problem has been reduced to changing the order of a Groebner basis \cite{BPW,BPWext}. This attack, although more efficient than linear and differential attacks, remains quite limited. The purpose of this paper is to overcome this limitation by using a small number of suitably chosen pairs of message/ciphertext to improve algebraic attacks. It turns out that this approach permits going one step further in the (algebraic) cryptanalysis of Flurry and Curry. To explain the behavior of our attack, we have established an interesting connection between algebraic attacks and high order differential cryptanalysis \cite{Lai}. From extensive experiments, we estimate that our approach, which we call an ``algebraic-high order differential'' cryptanalysis, is polynomial when the Sbox is a power function. As a proof of concept, we have been able to break Flurry -- up to $8$ rounds -- in a few hours.
2008
EPRINT
Algebraic Cryptanalysis of MQQ Public Key Cryptosystem by MutantXL
In this paper, we present an efficient attack on the multivariate Quadratic Quasigroups (MQQ) cryptosystem. Our cryptanalysis breaks MQQ cryptosystems by solving systems of multivariate quadratic polynomial equations using a modified version of the MutantXL algorithm. We present experimental results comparing the behavior of our implementation of MutantXL to Magma's implementation of $F_4$ on MQQ systems ($\geq$ 135 bit). Based on our results we show that the MutantXL implementation solves with much less memory than Magma's implementation of the $F_4$ algorithm.
2008
EPRINT
Algebraic Techniques in Differential Cryptanalysis
In this paper we propose a new cryptanalytic method against block ciphers, which combines both algebraic and statistical techniques. More specifically, we show how to use algebraic relations arising from differential characteristics to speed up and improve key-recovery differential attacks against block ciphers in some situations. To illustrate the new technique, we apply it to reduced round versions of the cipher PRESENT, an ultra lightweight block cipher proposed at CHES 2007, particularly suitable for deployment in RFID tags.
2008
EPRINT
All Pairings Are in a Group
In this paper, we suggest from an abstract angle that all pairings are in a group. It is possible that our observation can be applied to other aspects of pairing-based cryptosystems.
2008
EPRINT
Almost-Asynchronous MPC with Faulty Minority
Secure multiparty computation (MPC) allows a set of parties to securely evaluate any agreed function of their inputs, even when up to $t$ of the $n$ parties are faulty. Protocols for synchronous networks (where every sent message is assumed to arrive within a constant time) tolerate up to $t<n/2$ faulty parties, whereas in the more realistic asynchronous setting (with no \emph{a priori} information on the maximal message delay) only security against $t<n/3$ is possible. We present the first protocol that achieves security against $t<n/2$ without assuming a fully synchronous network. Actually our protocol guarantees security against any faulty minority in an \emph{almost asynchronous} network, i.e. in a network with one single round of synchronous broadcast (followed by a fully asynchronous communication). Furthermore our protocol takes inputs of all parties (in a fully asynchronous network only inputs of $n-t$ parties can be guaranteed), and so achieves everything that is possible in synchronous networks (but impossible in fully asynchronous networks) at the price of just one synchronous broadcast round. As tools for our protocol we introduce the notions of \emph{almost non-interactive verifiable secret-sharing} and \emph{almost non-interactive zero-knowledge proof of knowledge}, which are of independent interest as they can serve as efficient replacements for fully non-interactive verifiable secret-sharing and fully non-interactive zero-knowledge proof of knowledge.
2008
EPRINT
An Accumulator Based on Bilinear Maps and Efficient Revocation for Anonymous Credentials
The success of electronic authentication systems, be it e-ID card systems or Internet authentication systems such as CardSpace, highly depends on the provided level of user-privacy. Thereby, an important requirement is an efficient means for revocation of the authentication credentials. In this paper we consider the problem of revocation for certificate-based privacy-protecting authentication systems. To date, the most efficient solutions for revocation for such systems are based on cryptographic accumulators. Here, an accumulator of all currently valid certificates is published regularly and each user holds a {\em witness} enabling her to prove the validity of her (anonymous) credential while retaining anonymity. Unfortunately, the users' witnesses must be updated at least each time a credential is revoked. For the known solutions, these updates are computationally very expensive for users and/or certificate issuers, which is very problematic as practice shows that revocation is a frequent event. In this paper, we propose a new dynamic accumulator scheme based on bilinear maps and show how to apply it to the problem of revocation of anonymous credentials. In the resulting scheme, proving a credential's validity and updating witnesses both come at (virtually) no cost for credential owners and verifiers. In particular, updating a witness requires the issuer to do only one multiplication per addition or revocation of a credential and can also be delegated to untrusted entities from which a user could just retrieve the updated witness. We believe that we thereby provide the first authentication system offering privacy protection suitable for implementation with electronic tokens such as eID cards or drivers' licenses.
2008
EPRINT
An analysis of the infrastructure in real function fields
We construct a map injecting the set of infrastructure ideals in a real function field into the class group of the corresponding hyperelliptic curve. This map respects the `group-like' structure of the infrastructure; as a consequence of this construction, we show that calculating distances in the set of infrastructure ideals is equivalent to the DLP in the underlying hyperelliptic curve. We also give a precise description of the elements missing in the infrastructure for it to be a group.
2008
EPRINT
An Approach to ensure Information Security through 252-Bit Integrated Encryption System (IES)
In this paper, a block cipher, the Integrated Encryption System (IES), to be implemented at the bit level, is presented that requires a 252-bit secret key. In IES, there are at most sixteen rounds, implemented in a cascaded manner. RPSP, TE, RPPO, RPMS, RSBM and RSBP are the six independent block-ciphering protocols that are integrated to formulate IES. A round is constituted by implementing one of the protocols on the output produced in the preceding round. The process of encryption is an interactive activity, in which the encryption authority is given enough flexibility regarding the choice of protocols in any round. However, no protocol can be used in more than three rounds. Formation of the key is a run-time activity, which gets built as the interactive encryption process proceeds. Results of implementing all six protocols in a cascaded manner have been observed and analyzed. On achieving a satisfactory level of performance in this cascaded implementation, IES and its schematic and operational characteristics have been proposed.
2008
EPRINT
An argument for Hamiltonicity
A constant-round interactive argument is introduced to show the existence of a Hamiltonian cycle in a directed graph. The graph is represented by a characteristic polynomial; the top coefficient of a verification polynomial is tested to fit the cycle, and soundness follows from the Schwartz-Zippel lemma.
2008
EPRINT
An argument for rank metric
A protocol is introduced to show an upper bound on the rank of a square matrix.
2008
EPRINT
An Efficient and Provably Secure ID-Based Threshold Signcryption Scheme
Signcryption is a cryptographic primitive that performs digital signature and public key encryption simultaneously, at lower computational cost and communication overhead than the signature-then-encryption approach. Recently, two identity-based threshold signcryption schemes [12],[26] have been proposed by combining the concepts of identity-based threshold signature and signcryption. However, the formal models and security proofs for both schemes are not considered. In this paper, we formalize the concept of identity-based threshold signcryption and give a new scheme based on the bilinear pairings. We prove its confidentiality under the Decisional Bilinear Diffie-Hellman assumption and its unforgeability under the Computational Diffie-Hellman assumption in the random oracle model. Our scheme turns out to be more efficient than the two previously proposed schemes.
2008
EPRINT
An Efficient and Provably-Secure Identity-based Signcryption Scheme for Multiple PKGs
In this paper, based on the scheme proposed by Barreto et al. at ASIACRYPT 2005, an identity-based signcryption scheme for the multiple Private Key Generator (PKG) environment is proposed, which mitigates the problems of users' private key escrow and distribution that arise in a single-PKG system. Regarding security, the scheme is proved to satisfy message confidentiality and existential signature-unforgeability, assuming the intractability of the q-Strong Diffie-Hellman problem and the q-Bilinear Diffie-Hellman Inversion problem. Regarding efficiency, compared with the state-of-the-art signcryption schemes of the same kind, our proposal needs fewer pairing computations and is shown to be the most efficient identity-based signcryption scheme for multiple PKGs to date.
2008
EPRINT
An Efficient Authenticated Key Exchange Protocol with a Tight Security Reduction
In this paper, we present a new authenticated key exchange (AKE) protocol, called NETS, and prove its security in the extended Canetti-Krawczyk model under the random oracle assumption and the gap Diffie-Hellman (GDH) assumption. Our protocol enjoys a simpler and tighter security reduction than those of HMQV and CMQV, without using the Forking Lemma. Each session of the NETS protocol requires only three exponentiations per party, which is comparable to the efficiency of MQV, HMQV and CMQV.
2008
EPRINT
An Efficient ID-based Ring Signature Scheme from Pairings
A ring signature allows a user from a set of possible signers to convince the verifier that the author of the signature belongs to the set without the author's identity being disclosed. It protects the anonymity of the signer, since the verifier knows only that the signature comes from a member of the ring, but not exactly who the signer is. This paper proposes a new ID-based ring signature scheme based on bilinear pairings. The new scheme provides constant-size signatures, not counting the list of identities to be included in the ring. When using elliptic curve groups whose order is a 160-bit prime, our ring signature size is only about 61 bytes. No pairing operation is involved in the ring signing procedure, and only three pairing operations are involved in the verification procedure, so our scheme is more efficient than previously proposed schemes. The new scheme can be proved secure under the hardness assumption of the k-Bilinear Diffie-Hellman Inverse problem, in the random oracle model.
2008
EPRINT
An Efficient Identity-based Ring Signcryption Scheme
ID-based ring signcryption schemes (IDRSC) are usually derived from bilinear pairings, a powerful but computationally expensive primitive. The number of pairing computations in all existing ID-based ring signcryption schemes from bilinear pairings grows linearly with the group size, which makes the efficiency advantage of ID-based schemes over traditional schemes questionable. In this paper, we present a new identity-based ring signcryption scheme which takes only four pairing operations for any group size, and we prove it indistinguishable against adaptive chosen ciphertext ring attacks (IND-IDRSC-CCA2) and secure against existential forgery under adaptive chosen message attacks (EF-IDRSC-ACMA).
2008
EPRINT
An Efficient Protocol for Secure Two-Party Computation in the Presence of Malicious Adversaries
We show an efficient secure two-party protocol, based on Yao's construction, which provides security against malicious adversaries. Yao's original protocol is only secure in the presence of semi-honest adversaries, and can be transformed into a protocol that achieves security against malicious adversaries by applying the compiler of Goldreich, Micali and Wigderson (the ``GMW compiler''). However, this approach does not seem to be very practical as it requires using generic zero-knowledge proofs. Our construction is based on applying cut-and-choose techniques to the original circuit and inputs. Security is proved according to the {\sf ideal/real simulation paradigm}, and the proof is in the standard model (with no random oracle model or common reference string assumptions). The resulting protocol is computationally efficient: the only usage of asymmetric cryptography is for running $O(1)$ oblivious transfers for each input bit (or for each bit of a statistical security parameter, whichever is larger). Our protocol combines techniques from folklore (like cut-and-choose) along with new techniques for efficiently proving consistency of inputs. We remark that a naive implementation of the cut-and-choose technique with Yao's protocol does \emph{not} yield a secure protocol. This is the first paper to show how to properly implement these techniques, and to provide a full proof of security. Our protocol can also be interpreted as a constant-round black-box reduction of secure two-party computation to oblivious transfer and perfectly-hiding commitments, or a black-box reduction of secure two-party computation to oblivious transfer alone, with a number of rounds which is linear in a statistical security parameter. These two reductions are comparable to Kilian's reduction, which uses OT alone but incurs a number of rounds which is linear in the depth of the circuit~\cite{Kil}.
2008
EPRINT
An Efficient SPRP-secure Construction based on Pseudo Random Involution
Here we present a new security notion called pseudo-random involution, or PRI, which is associated with tweakable involution enciphering schemes, or TIES (i.e., schemes in which encryption and decryption are the same algorithm). This new security notion is important for two reasons. Firstly, it is the natural security notion for TIES, which are of practical importance. Secondly, we show that there is a generic method to obtain an sprp-secure tweakable enciphering scheme (TES) from a pri-secure construction. The generic method costs an extra xor with an extra key. In this paper, we also propose an efficient pri-secure construction, Hash-Counter Involution or HCI, and based on it we obtain an sprp-secure construction which is a real improvement over XCB. We call the new construction MXCB, or Modified-XCB. HCH, XCB and HCTR are some of the popular counter-based enciphering schemes; HCTR is the more efficient among them, while HCH and XCB guarantee more security compared to HCTR. The new proposal MXCB has efficiency similar to HCTR and guarantees security similar to that of HCH and XCB. We consider this new construction to be important in light of the current activities of the IEEE working group on storage security, which is working towards a standard for a wide-block TES.
2008
EPRINT
An ID-based Authenticated Key Exchange Protocol based on Bilinear Diffie-Hellman Problem
In recent years, a great number of ID-based authenticated key exchange protocols have been proposed. However, many of them have been broken or have no security proof. The main issue is that, without the static private key, it is difficult for the simulator to fully support the SessionKeyReveal and EphemeralKeyReveal queries. Some proposals that purport to be provably secure hold only in a relatively weak model which does not fully support the above-mentioned two queries. For protocols to be proven secure in a more desirable model, one must make use of the stronger gap assumption [15], which means that the computational problem remains hard even in the presence of an effective decision oracle. However, the gap assumption may not be acceptable at all, since the decision oracle on which the proofs rely may not exist in the real world. Cash, Kiltz and Shoup [14] recently proposed a new computational problem called the twin Diffie-Hellman problem; a nice feature of it, not enjoyed by the ordinary Diffie-Hellman problem, is that the twin Diffie-Hellman problem remains hard even with access to a decision oracle that recognizes solutions to the problem. At the heart of their method is the "trapdoor test" that allows one to implement an effective decision oracle for the twin Diffie-Hellman problem without knowing the corresponding discrete logarithm. In this paper, we present a new ID-based authenticated key exchange (ID-AKE) protocol based on the trapdoor test technique. Compared with previous ID-AKE protocols, our proposal is based on the Bilinear Diffie-Hellman (BDH) assumption, which is more standard than the Gap Bilinear Diffie-Hellman (GBDH) assumption on which previous protocols are based. Moreover, our scheme is shown to be secure in the enhanced Canetti-Krawczyk (eCK) model, which is currently the strongest AKE security model.
2008
EPRINT
An improved preimage attack on MD2
This paper describes an improved preimage attack on the cryptographic hash function MD2. The attack has complexity equivalent to about $2^{73}$ evaluations of the MD2 compression function. This is to be compared with the previous best known preimage attack, which has complexity about $2^{97}$.
2008
EPRINT
An Improved Robust Fuzzy Extractor
We consider the problem of building robust fuzzy extractors, which allow two parties holding similar random variables W, W' to agree on a secret key R in the presence of an active adversary. Robust fuzzy extractors were defined by Dodis et al. in Crypto 2006 to be noninteractive, i.e., only one message P, which can be modified by an unbounded adversary, can pass from one party to the other. This allows them to be used by a single party at different points in time (e.g., for key recovery or biometric authentication), but also presents an additional challenge: what if R is used, and thus possibly observed by the adversary, before the adversary has a chance to modify P? Fuzzy extractors secure against such a strong attack are called post-application robust. We construct a fuzzy extractor with post-application robustness that extracts a shared secret key of up to (2m-n)/2 bits (depending on error-tolerance and security parameters), where n is the bit-length and m is the entropy of W. The previously best known result, also of Dodis et al., extracted up to (2m-n)/3 bits (depending on the same parameters).
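As a purely illustrative numerical comparison (the parameters below are chosen only as an example and are not taken from the paper): for a source $W$ of bit-length $n = 1000$ with entropy $m = 700$, the new bound allows up to
$$\frac{2m-n}{2} = \frac{2\cdot 700 - 1000}{2} = 200 \quad\text{bits, versus}\quad \frac{2m-n}{3} = \frac{400}{3} \approx 133 \quad\text{bits}$$
under the earlier result, before subtracting the losses due to the error-tolerance and security parameters.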
2008
EPRINT
An improvement of discrete Tardos fingerprinting codes
It has been known that the code lengths of Tardos's collusion-secure fingerprinting codes are of theoretically minimal order with respect to the number of adversarial users (pirates). However, the code lengths can be further reduced, as some preceding studies on Tardos's codes have already revealed. In this article we improve a recent discrete variant of Tardos's codes and give a security proof of our codes under an assumption weaker than the original assumption (the Marking Assumption). Our analysis shows that our codes have significantly shorter lengths than Tardos's codes. For example, in a practical setting, the code lengths of our codes are about 3.01%, 4.28%, and 4.81% of those of Tardos's codes if the numbers of pirates are 2, 4, and 6, respectively.
2008
EPRINT
Analysis and Details of the Random Cipher Output Mode Of Operation Primitives
Consider that hardware and software attack technologies seem to be advancing at an exponential pace. Should it be acceptable to believe that all of the current Modes of Operation (MOO) will still be 100% safe from technology attacks 5 to 30 years or more into the future? Predictions about DES's security when it was first developed proved to be wrong; with the volume of information and data being protected by current MOOs, the security industry cannot afford to be wrong again. This is not to say that just because the experts were wrong about DES they are wrong now; they have never had, and do not have, perfect vision into what will develop in the security-attacking technology arena. Suppose some 'brainiac' teenager devises a sophisticated attack technology that no one thought of and one or more of the accepted MOOs are broken; then we will all be racing to recover. With these potential advances in hardware and attack technology could come a time when none of the currently accepted modes of operation are safe from attack. We ought to consider not designing ciphers that are ever more complex, as this will just escalate the 'leap-frog' race between cipher developers and attackers. The problem is not the complexity; rather, the mathematical connection between a plaintext/ciphertext pair and a single key needs to be expanded to connections with multiple keys. This MOO is presented as one potential solution along this path. The proposal does not involve any new cipher engine technology.
2008
EPRINT
Analysis and Improvement of Authenticatable Ring Signcryption Scheme
Ring signcryption is an anonymous signcryption which allows a user to anonymously signcrypt a message on behalf of a set of users including himself. In an ordinary ring signcryption scheme, even the ring member who generates a signcryption cannot prove that the signcryption was produced by himself. In 2008, Zhang, Yang, Zhu, and Zhang solved the problem by introducing an identity-based authenticatable ring signcryption scheme (denoted the ZYZZ scheme). In the ZYZZ scheme, the actual signcrypter can prove that the ciphertext was generated by himself, and no one else can authenticate it. However, in this paper, we show that the ZYZZ scheme is not secure against chosen plaintext attacks. Furthermore, we propose an improved scheme that remedies the weakness of the ZYZZ scheme. The improved scheme has shorter ciphertext size than the ZYZZ scheme. We then prove that the improved scheme satisfies confidentiality, unforgeability, anonymity and authenticatability.
2008
EPRINT
Analysis of RC4 and Proposal of Additional Layers for Better Security Margin
In this paper, the RC4 Key Scheduling Algorithm (KSA) is theoretically studied to reveal non-uniformity in the expected number of times each value of the permutation is touched by the indices $i, j$. Based on our analysis and the results available in the literature regarding the existing weaknesses of RC4, a few additional layers over the RC4 KSA and RC4 Pseudo-Random Generation Algorithm (PRGA) are proposed. Analysis of the modified cipher (we call it RC4$^+$) shows that this new strategy avoids existing weaknesses of RC4.
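For reference, a minimal sketch of the standard (textbook) RC4 KSA and PRGA is given below, showing the indices $i$ and $j$ whose behaviour is analysed; this is plain RC4, not the modified RC4$^+$ proposed in the paper.

# Minimal sketch of standard RC4 (textbook version, not the RC4+ variant from the paper).

def rc4_ksa(key):
    """Key Scheduling Algorithm: builds the initial permutation S using indices i and j."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]          # each swap 'touches' S[i] and S[j]
    return S

def rc4_prga(S, n):
    """Pseudo-Random Generation Algorithm: emits n keystream bytes."""
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

# Example: 5 keystream bytes under an arbitrary test key.
S = rc4_ksa(list(b"Key"))
print(rc4_prga(S, 5))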
2008
EPRINT
Analysis of Step-Reduced SHA-256
This is the first article analyzing the security of SHA-256 against fast collision search which considers the recent attacks by Wang et al. We show the limits of applying techniques known so far to SHA-256. Next we introduce a new type of perturbation vector which circumvents the identified limits. This new technique is then applied to the unmodified SHA-256. Exploiting the combination of Boolean functions and modular addition together with the newly developed technique allows us to derive collision-producing characteristics for step-reduced SHA-256, which was not possible before. Although our results do not threaten the security of SHA-256, we show that the low probability of a single local collision may give rise to a false sense of security.
2008
EPRINT
Analyzing the Galbraith-Lin-Scott Point Multiplication Method for Elliptic Curves over Binary Fields
Galbraith, Lin and Scott recently constructed efficiently-computable endomorphisms for a large family of elliptic curves defined over F_{q^2} and showed, in the case where q is prime, that the Gallant-Lambert-Vanstone point multiplication method for these curves is significantly faster than point multiplication for general elliptic curves over prime fields. In this paper, we investigate the potential benefits of using Galbraith-Lin-Scott elliptic curves in the case where q is a power of 2. The analysis differs from the q prime case because of several factors, including the availability of the point halving strategy for elliptic curves over binary fields. Our analysis and implementations show that Galbraith-Lin-Scott offers significant acceleration for curves over binary fields, in both doubling- and halving-based approaches. Experimentally, the acceleration surpasses that reported for prime fields (for the platform in common), a somewhat counterintuitive result given the relative costs of point addition and doubling in each case.
2008
EPRINT
Anonymous Consecutive Delegation of Signing Rights: Unifying Group and Proxy Signatures
We define a general model for consecutive delegations of signing rights with the following properties: The delegatee actually signing and all intermediate delegators remain anonymous. As for group signatures, in case of misuse, a special authority can open signatures to reveal the chain of delegations and the signer's identity. The scheme satisfies a strong notion of non-frameability generalizing the one for dynamic group signatures. We give formal definitions of security and show them to be satisfiable by constructing an instantiation proven secure under general assumptions in the standard model. Our primitive is a proper generalization of both group signatures and proxy signatures and can be regarded as non-frameable dynamic hierarchical group signatures.
2008
EPRINT
Another approach to pairing computation in Edwards coordinates
The recent introduction of Edwards curves has significantly reduced the cost of addition on elliptic curves. This paper presents new explicit formulae for pairing implementation in Edwards coordinates. We prove that our method gives performance similar to that of Miller's algorithm in Jacobian coordinates and is thus of cryptographic interest when one chooses Edwards curve implementations of protocols in elliptic curve cryptography. The method is faster than the recent proposal of Das and Sarkar for computing pairings on supersingular curves using Edwards coordinates.
2008
EPRINT
Another Glance At Blockcipher Based Hashing
In this note we revisit the rate-1 blockcipher based hash functions as first studied by Preneel, Govaerts and Vandewalle (Crypto'93) and later extensively analysed by Black, Rogaway and Shrimpton (Crypto'02). We analyze a further generalization where any pre- and postprocessing is considered. By introducing a new tweak to earlier proof methods, we obtain a simpler proof that is both more general and more tight than existing results. As added benefit, this also leads to a clearer understanding of the current classification of rate-1 blockcipher based schemes as introduced by Preneel et al. and refined by Black et al.
2008
EPRINT
Argument of knowledge of a bounded error
A protocol is introduced to show knowledge of a codeword of a Goppa code and of the Goppa polynomial. The protocol does not disclose any useful information about the codeword or the polynomial coefficients. A related protocol is introduced to show that the Hamming weight of an error is below a threshold; this protocol discloses neither the codeword nor the weight of the error. The verifier only uses commitments to codeword components and coefficients while testing the validity of the statements. Both protocols are honest-verifier zero knowledge.
2008
EPRINT
Attack on Kang et al.'s Identity-Based Strong Designated Verifier Signature Scheme
In this paper, we propose a universal forgery attack on Kang et al.'s identity-based strong designated verifier signature (IBSDVS) scheme. We show that anyone can forge a valid IBSDVS on an arbitrary message without knowledge of the private key of either the signer or the designated verifier.
2008
EPRINT
Attacking and defending the McEliece cryptosystem
This paper presents several improvements to Stern's attack on the McEliece cryptosystem and achieves results considerably better than those of Canteaut et al. This paper shows that the system with the originally proposed parameters can be broken in just 1400 days by a single 2.4GHz Core 2 Quad CPU, or 7 days by a cluster of 200 CPUs. This attack has been implemented and is now in progress. This paper proposes new parameters for the McEliece and Niederreiter cryptosystems achieving standard levels of security against all known attacks. The new parameters take account of the improved attack; the recent introduction of list decoding for binary Goppa codes; and the possibility of choosing code lengths that are not a power of 2. The resulting public-key sizes are considerably smaller than previous parameter choices for the same level of security.
2008
EPRINT
Attacking Reduced Round SHA-256
The SHA-256 hash function has started getting attention recently by the cryptanalysis community due to the various weaknesses found in its predecessors such as MD4, MD5, SHA-0 and SHA-1. We make two contributions in this work. First we describe message modification techniques and use them to obtain an algorithm to generate message pairs which collide for the actual SHA-256 reduced to 18 steps. Our second contribution is to present differential paths for 19, 20, 21, 22 and 23 steps of SHA-256. We construct parity check equations in a novel way to find these characteristics. Further, the 19-step differential path presented here is constructed by using only 15 local collisions, as against the previously known 19-step near collision differential path which consists of interleaving of 23 local collisions. Our 19-step differential path can also be seen as a single local collision at the message word level. We use a linearized local collision in this work. These results do not cause any threat to the security of the SHA-256 hash function.
2008
EPRINT
Attacking Step Reduced SHA-2 Family in a Unified Framework
In this work, we make a detailed analysis of local collisions and their applicability to obtain collisions for step reduced SHA-2 hash family. Our analysis explains previously reported collisions for up to 22-step SHA-2 hash functions with probability one. We provide new and improved attacks against 23 and 24-step SHA-256 using a local collision given by Sanadhya and Sarkar (SS) at ACISP '08. The computational efforts for the 23-step and 24-step attacks are respectively $2^{12.5}$ and $2^{28.5}$ calls to the corresponding step reduced SHA-256. Using a look-up table having $2^{32}$ entries the computational effort for finding 24-step collisions can be reduced to $2^{14.5}$ calls. We exhibit colliding message pairs for both the 23 and 24-step attacks. The previous work on 23 and 24-step SHA-256 attacks is due to Indesteege et al. and utilizes the local collision presented by Nikoli\'{c} and Biryukov (NB) at FSE '08. The reported computational efforts are $2^{18}$ and $2^{28.5}$ respectively. The previous 23 and 24-step attacks first constructed a pseudo-collision and later converted it into a collision for the reduced round SHA-256. We show that this two step procedure is unnecessary. Although these attacks improve upon the existing reduced round SHA-256 attacks, they do not threaten the security of the full SHA-2 family.
2008
EPRINT
Attacks on RFID Protocols
This document consists of a collection of attacks upon RFID protocols and is meant to serve as a quick and easy reference. This document will be updated as new attacks are found. Currently the only attacks on protocols shown are the authors' original attacks with references to similar attacks on other protocols. The main security properties considered are authentication, untraceability, and - for stateful protocols - desynchronization resistance.
2008
EPRINT
Attacks on Singelee and Preneel's protocol
Singelee and Preneel have recently proposed an enhancement of Hancke and Kuhn's distance bounding protocol for RFID. The authors claim that their protocol offers a substantial reduction in the number of rounds while preserving its advantages: it is suitable for noisy wireless environments and requires so few resources that it can be implemented on a low-cost device. Subsequently, the same authors have also proposed it as an efficient key establishment protocol for wireless personal area networks. Nevertheless, in this paper we show effective relay attacks on this protocol, which dramatically increase the success probability of an adversary. As a result, the effectiveness of Singelee and Preneel's protocol is seriously questioned.
2008
EPRINT
Attribute-Based Encryption with Key Cloning Protection
In this work, we consider the problem of key cloning in attribute-based encryption schemes. We introduce a new type of attribute-based encryption scheme, called token-based attribute-based encryption, that provides strong deterrence against key cloning, in the sense that delegation of keys reveals some personal information about the user. We formalize the security requirements for such a scheme in terms of indistinguishability of the ciphertexts and two new security requirements, which we call uncloneability and privacy preservation. We construct a privacy-preserving uncloneable token-based attribute-based encryption scheme based on Cheung and Newport's ciphertext-policy attribute-based encryption scheme and prove that it satisfies the above three security requirements. We discuss our results and show directions for future research.
2008
EPRINT
Attribute-Based Ring Signatures
Ring signatures were proposed to preserve the signer's anonymity when signing messages on behalf of a ``ring" of possible signers. In this paper, we propose a novel notion of ring signature called attribute-based ring signature. This kind of signature allows the signer to sign a message with its attributes obtained from an attribute center. All users that possess these attributes form a ring. The identity of the signer is kept anonymous within this ring. Furthermore, no one outside this ring can forge a signature on behalf of the ring. Two constructions of attribute-based ring signatures are also presented in this paper. The first scheme is proved secure in the random oracle model, with a large universe of attributes. We also present another scheme that avoids the random oracle model; it relies neither on non-standard hardness assumptions nor on the random oracle model. Both schemes in this paper are based on the standard computational Diffie-Hellman assumption.
2008
EPRINT
Attribute-Based Signatures: Achieving Attribute-Privacy and Collusion-Resistance
We introduce a new and versatile cryptographic primitive called {\em Attribute-Based Signatures} (ABS), in which a signature attests not to the identity of the individual who endorsed a message, but instead to a (possibly complex) claim regarding the attributes she possesses. ABS offers: * A strong unforgeability guarantee for the verifier, that the signature was produced by a {\em single} party whose attributes satisfy the claim being made; i.e., not by a collusion of individuals who pooled their attributes together. * A strong privacy guarantee for the signer, that the signature reveals nothing about the identity or attributes of the signer beyond what is explicitly revealed by the claim being made. We formally define the security requirements of ABS as a cryptographic primitive, and then describe an efficient ABS construction based on groups with bilinear pairings. We prove that our construction is secure in the generic group model. Finally, we illustrate several applications of this new tool; in particular, ABS fills a critical security requirement in attribute-based messaging (ABM) systems. A powerful feature of our ABS construction is that unlike many other attribute-based cryptographic primitives, it can be readily used in a {\em multi-authority} setting, wherein users can make claims involving combinations of attributes issued by independent and mutually distrusting authorities.
2008
EPRINT
Authenticated Adversarial Routing
The aim of this paper is to demonstrate the feasibility of authenticated throughput-efficient routing in an unreliable and dynamically changing synchronous network in which a majority of malicious insiders try to destroy and alter messages or disrupt communication in any way. More specifically, in this paper we seek to answer the following question: Given a network in which the majority of nodes are controlled by a malicious adversary and whose topology is changing every round, is it possible to develop a protocol with polynomially-bounded memory per processor that guarantees throughput-efficient and correct end-to-end communication? We answer the question affirmatively for extremely general corruption patterns: we only require that the topology of the network and the corruption pattern of the adversary leave, in each round, at least one path connecting the sender and receiver through honest nodes (though this path may change at every round). Our construction works in the public-key setting and enjoys bounded memory per processor (which does not depend on the amount of traffic and is polynomial in the network size). Our protocol achieves optimal transfer rate with negligible decoding error. We stress that our protocol assumes no knowledge of which nodes are corrupted nor which path is reliable at any round, and is also fully distributed, with nodes making decisions locally, so that they need not know the topology of the network at any time. The optimality that we prove for our protocol is very strong. Given any routing protocol, we evaluate its efficiency (rate of message delivery) in the ``worst case,'' that is, with respect to the worst possible graph and against the worst possible (polynomially bounded) adversarial strategy (subject to the above-mentioned connectivity constraints). Using this metric, we show that there does not exist any protocol that can be asymptotically superior (in terms of throughput) to ours in this setting. We remark that the aim of our paper is to demonstrate via explicit example the feasibility of throughput-efficient authenticated adversarial routing. However, we stress that our protocol is not intended to provide a practical solution, as, due to its complexity, no attempt has thus far been made to make the protocol practical by reducing constants or the large (though polynomial) memory requirements per processor. Our result is related to recent work of Barak, Goldberg and Xiao in 2008, who studied fault localization in networks assuming a private-key trusted-setup setting. Our work, in contrast, assumes a public-key PKI setup and aims at not only fault localization but also transmission optimality. Among other things, our work answers one of the open questions posed in the Barak et al. paper regarding fault localization on multiple paths. The use of a public-key setting to achieve strong error-correction results in networks was inspired by the work of Micali, Peikert, Sudan and Wilson, who showed that classical error-correction against a polynomially-bounded adversary can be achieved with surprisingly high precision. Our work is also related to an interactive coding theorem of Rajagopalan and Schulman, who showed that in noisy-edge static-topology networks a constant overhead in communication can also be achieved (provided none of the processors are malicious), thus establishing an optimal-rate routing theorem for static-topology networks.
Finally, our work is closely related to, and builds upon, the problem of end-to-end communication in distributed networks, studied by Afek and Gafni; Awerbuch, Mansour, and Shavit; and Afek, Awerbuch, Gafni, Mansour, Rosen, and Shavit, though none of these papers consider or ensure correctness in the setting of a malicious adversary controlling the majority of the network.
2008
EPRINT
Authenticated Byzantine Generals Strike Again
Pease {\em et al.}\/ introduced the problem of \emph{Authenticated Byzantine General} (ABG) where players could use digital signatures (or similar tools) to thwart the challenge posed by Byzantine faults in distributed protocols for agreement. Subsequently it is well known that ABG among $n$ players tolerating up to $t$ faults is (efficiently) possible if and only if $t < n$ (which is a huge improvement over the $n > 3t$ condition in the absence of authentication for the same functionality). However, in this paper we argue that the extant result of $n>t$ does not \emph{truly} simulate a broadcast channel as intended. Thus, we re-initiate the study of ABG. We show that in a completely connected synchronous network of $n$ players where up to any $t$ players are controlled by an adversary, ABG is possible if and only if $n > 2t-1$. The result is pleasantly surprising since it deviates from the standard template of ``$n > \beta t$" for some integer $\beta$, particularly $\beta = 1,2$ or $3$. The result uses new proof/design techniques which may be of independent interest.
2008
EPRINT
Authenticated Key Exchange Secure under the Computational Diffie-Hellman Assumption
In this paper, we present a new authenticated key exchange (AKE) protocol and prove its security under the random oracle assumption and the computational Diffie-Hellman (CDH) assumption. In the extended Canetti-Krawczyk model, there has been no known AKE protocol based on the CDH assumption. Our protocol, called NAXOS+, is obtained by slightly modifying the NAXOS protocol proposed by LaMacchia, Lauter and Mityagin. We establish a formal security proof of NAXOS+ in the extended Canetti-Krawczyk model, using as a main tool the trapdoor test presented by Cash, Kiltz and Shoup.
2008
EPRINT
Authenticated Wireless Roaming via Tunnels: Making Mobile Guests Feel at Home
In wireless roaming, a mobile device obtains a service from some foreign network while being registered for a similar service at its own home network. Recent proposals, however, try to keep the service provider role with the home network and let the foreign network create a tunnel connection through which all service requests of the mobile device are sent to and answered directly by the home network. Such Wireless Roaming via Tunnels (WRT) offers several (security) benefits but also poses new security challenges for authentication and key establishment, as the goal is not only to protect the end-to-end communication between the tunnel peers but also the tunnel itself. In this paper we formally specify mutual authentication and key establishment goals for WRT and propose an efficient and provably secure protocol that can be used to secure such roaming sessions. Additionally, we describe some modular protocol extensions to address resistance against DoS attacks, anonymity of the mobile device and unlinkability of its roaming sessions, as well as the accounting claims of the foreign network in commercial scenarios.
2008
EPRINT
Authenticating with Attributes
Attribute-based group signature (ABGS) is a new generation of group signature schemes. In such a scheme, the verifier can define the role of a signer within the group; he determines the attributes possessed by the member signing the document. In this paper we propose the first ABGS scheme with multiple authorities and define its security notions. We construct an ABGS that is proved to be fully traceable and fully anonymous. Our scheme is efficient and secure.
2008
EPRINT
Automatic Generation of Sound Zero-Knowledge Protocols
Efficient zero-knowledge proofs of knowledge (ZK-PoK) are basic building blocks of many practical cryptographic applications such as identification schemes, group signatures, and secure multiparty computation. Currently, the first applications that essentially rely on ZK-PoKs are being deployed in the real world. The most prominent example is Direct Anonymous Attestation (DAA), which was adopted by the Trusted Computing Group (TCG) and implemented as one of the functionalities of the cryptographic chip Trusted Platform Module (TPM). Implementing systems that use ZK-PoKs turns out to be challenging, since ZK-PoKs are, loosely speaking, significantly more complex than standard crypto primitives, such as encryption and signature schemes. As a result, implementation cycles of ZK-PoKs are time-consuming and error-prone, in particular for developers with little or no cryptographic background. To overcome these challenges, we have designed and implemented a compiler with corresponding languages that, given a high-level ZK-PoK protocol specification, automatically generates a sound implementation of it. The output is given in the form of $\Sigma$-protocols, which are the most efficient protocols for ZK-PoK currently known. Our compiler translates ZK-PoK protocol specifications, written in a high-level protocol description language, into Java code or \LaTeX\ documentation of the protocol. The compiler is based on a unified theoretical framework that encompasses a large number of existing ZK-PoK techniques. Within this framework we present a new efficient ZK-PoK protocol for exponentiation homomorphisms in hidden-order groups. Our protocol overcomes several limitations of the existing proof techniques.
2008
EPRINT
BGKM: An Efficient Secure Broadcasting Group Key Management Scheme
The Broadcasting Group Key Management (BGKM) scheme is designed to reduce the communication, storage, and computation overhead of existing Broadcasting Encryption Schemes (BES). To this end, BGKM proposes a new broadcasting group management scheme by utilizing the basic construction of ciphertext-policy attribute-based encryption and flat table identity management. Compared to previous approaches, our performance evaluation shows that BGKM greatly improves performance in communication (O(log n) for a single revocation and O((log n)(log m)) for bulk revocations), storage (O(log n) for each group member), and computation (O((log n)(log m)) for both encryption and decryption), where n is the ID space size and m is the number of group members (GMs) in the group. Moreover, the BGKM scheme provides group forward/backward secrecy and is resilient to collusion attacks.
2008
EPRINT
Binary Edwards Curves
This paper presents a new shape for ordinary elliptic curves over fields of characteristic 2. Using the new shape, this paper presents the first complete addition formulas for binary elliptic curves, i.e., addition formulas that work for all pairs of input points, with no exceptional cases. If n >= 3 then the complete curves cover all isomorphism classes of ordinary elliptic curves over F_2^n. This paper also presents dedicated doubling formulas for these curves using 2M + 6S + 3D, where M is the cost of a field multiplication, S is the cost of a field squaring, and D is the cost of multiplying by a curve parameter. These doubling formulas are also the first complete doubling formulas in the literature, with no exceptions for the neutral element, points of order 2, etc. Finally, this paper presents complete formulas for differential addition, i.e., addition of points with known difference. A differential addition and doubling, the basic step in a Montgomery ladder, uses 5M + 4S + 2D when the known difference is given in affine form.
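As a rough back-of-the-envelope illustration of the operation counts quoted above (the S/M and D/M cost ratios used below are assumptions chosen purely for illustration, not figures from the paper), the per-bit cost of a Montgomery ladder built from the complete differential addition-and-doubling formulas can be estimated as follows.

# Illustrative cost estimate for a Montgomery-ladder scalar multiplication on a
# binary Edwards curve, using the 5M + 4S + 2D per-step count quoted above.
# The ratios below (a squaring ~ 0.2 M in F_2^n, a multiplication by a small
# curve parameter ~ 0.5 M) are assumptions for illustration only.

M = 1.0        # cost of one field multiplication (taken as the unit)
S = 0.2 * M    # assumed cost of a field squaring in characteristic 2
D = 0.5 * M    # assumed cost of multiplying by a curve parameter

step = 5 * M + 4 * S + 2 * D          # one ladder step: differential add + double
bits = 233                            # example scalar length
print(f"per-step cost  : {step:.2f} M-equivalents")
print(f"233-bit ladder : {bits * step:.0f} M-equivalents")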
2008
EPRINT
Blind HIBE and its Applications to Identity-Based Blind Signature and Blind Decryption
We explicitly describe and analyse \textit{blind} hierarchical identity-based encryption (\textit{blind} HIBE) schemes, which are natural generalizations of blind IBE schemes \cite{gh07}. We then use the blind HIBE schemes to construct: (1) an identity-based blind signature scheme secure in the standard model, under the computational Diffie-Hellman (CDH) assumption, and with much shorter signature size and lower communication cost compared to existing proposals; (2) a new mechanism supporting a user who wants to buy digital information over the Internet without revealing what he/she has bought, while protecting the providers from cheating users.
2008
EPRINT
Blind Signature Scheme over Braid Groups
A blind signature scheme is a cryptographic protocol for obtaining a signature from a signer such that the signer's view of the protocol cannot be linked to the resulting message-signature pair. In this paper we propose two blind signature schemes using braid groups. The security of the given schemes depends upon the conjugacy search problem in braid groups.
2008
EPRINT
Block Ciphers Implementations Provably Secure Against Second Order Side Channel Analysis
In recent years, side channel analysis has received a lot of attention, and attack techniques have been improved. Second-order side channel analysis is now successful in breaking implementations of block ciphers that were supposed to be effectively protected. This progress shows not only the practicability of second-order attacks, but also the need for provably secure countermeasures. Surprisingly, while many studies have been dedicated to the attacks, only a few papers have been published about dedicated countermeasures. In fact, only the method proposed by Schramm and Paar at CT-RSA 2006 makes it possible to thwart second-order side channel analysis. In this paper, we introduce two new methods which constitute a worthwhile alternative to Schramm and Paar's proposition. We prove their security in a strong security model and we exhibit a way to significantly improve their efficiency by using the particularities of the targeted architectures. Finally, we argue that the introduced methods allow efficient protection of a wide variety of block ciphers, including AES.
2008
EPRINT
Breaking One-Round Key-Agreement Protocols in the Random Oracle Model
In this work we deal with one-round key-agreement protocols in the style of Merkle's Puzzles, in the random oracle model, where the players Alice and Bob are allowed to query a random permutation oracle $n$ times. We prove that Eve can always break the protocol by querying the oracle $O(n^2)$ times. The optimality of the quadratic bound in the fully general, multi-round scenario, long unproven, has recently been established by Barak and Mahmoody-Ghidary. The results in this paper were found independently of their work.
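As a toy illustration of the quadratic gap in the classic Merkle-puzzle construction (a simplified sketch only; the paper analyses the exact random-permutation-oracle model, and all parameter choices and helper names below are illustrative assumptions):

# Toy Merkle puzzles: Alice and Bob each spend O(N) work, while an eavesdropper
# needs O(N^2). Simplified sketch, not the paper's random-permutation-oracle model.
import hashlib, os, random

N = 500  # number of puzzles; also the size of each puzzle's weak-key space

def pad(weak_key: int) -> bytes:
    """Stand-in for an oracle: expands a weak key into a 32-byte one-time pad."""
    return hashlib.sha256(weak_key.to_bytes(4, "big")).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# --- Alice: O(N) work to create N puzzles --------------------------------------
ids      = [os.urandom(8) for _ in range(N)]          # random public identifiers
sess_key = [os.urandom(8) for _ in range(N)]          # one session key per puzzle
weak     = [random.randrange(N) for _ in range(N)]    # weak keys from a size-N space
puzzles  = [xor(b"PZL!" + ids[j] + sess_key[j], pad(weak[j])) for j in range(N)]

# --- Bob: picks one puzzle and brute-forces it, O(N) work ----------------------
j = random.randrange(N)
for guess in range(N):
    pt = xor(puzzles[j], pad(guess))
    if pt.startswith(b"PZL!"):                         # redundancy marks the right key
        bob_id, bob_key = pt[4:12], pt[12:20]
        break
# Bob announces bob_id in the clear; Alice finds j from ids and uses sess_key[j].
assert bob_id == ids[j] and bob_key == sess_key[j]

# --- Eve: sees bob_id and all puzzles, but not which puzzle hides bob_id. ------
# She must brute-force puzzles one by one (expected N/2 of them, N guesses each),
# i.e. Theta(N^2) work, versus the O(N) spent by each honest party.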
2008
EPRINT
Breaking RSA Generically is Equivalent to Factoring
We show that a generic ring algorithm for breaking RSA in $\mathbb{Z}_N$ can be converted into an algorithm for factoring the corresponding RSA-modulus $N$. Our results imply that any attempt at breaking RSA without factoring $N$ will be non-generic and hence will have to manipulate the particular bit-representation of the input in $\mathbb{Z}_N$. This provides new evidence that breaking RSA may be equivalent to factoring the modulus.
2008
EPRINT
Breaking the Akiyama-Goto cryptosystem
Akiyama and Goto have proposed a cryptosystem based on rational points on curves over function fields (stated in the equivalent form of sections of fibrations on surfaces). It is easy to construct a curve passing through a few given points, but finding the points, given only the curve, is hard. We show how to break their original cryptosystem by using algebraic points instead of rational points and discuss possibilities for changing their original system to create a secure one.
2008
EPRINT
Buying random votes is as hard as buying no-votes
In voting systems where a mark in a fixed position may mean a vote for Alice on one ballot and a vote for Bob on another ballot, an attacker may coerce voters to put their mark at a certain position, effectively enforcing a random vote. This attack is meaningful if the voting system allows voters to take receipts with them and/or posts the receipts to a bulletin board. The coercer may also ask for a blank receipt. We analyze this kind of attack and prove that it requires the same effort as a comparable attack would require against any voting system, even one without receipts.
2008
EPRINT
CCA2 Secure IBE: Standard Model Efficiency through Authenticated Symmetric Encryption
We propose two constructions of chosen-ciphertext secure identity-based encryption (IBE) schemes. Our schemes have a security proof in the standard model, yet they offer performance competitive with all known random-oracle based schemes. The efficiency improvement is obtained by combining modifications of the IBE schemes by Waters and Gentry with authenticated symmetric encryption.
2008
EPRINT
Certificate-Based Signature Schemes without Pairings or Random Oracles
In this paper, we propose two new certificate-based signature (CBS) schemes with new features and advantages. The first one is very efficient as it does not require any pairing computation, and its security can be proven under the Discrete Logarithm assumption in the random oracle model. We also propose another scheme whose security can be proven in the standard model without random oracles. To the best of our knowledge, these are the \emph{first} CBS schemes in the literature with such features.
2008
EPRINT
Certificateless Signcryption
Certificateless cryptography achieves the best of the two worlds: it inherits from identity-based techniques a solution to the certificate management problem in public-key encryption, whilst removing the secret key escrow functionality inherent to the identity-based setting. Signcryption schemes achieve confidentiality and authentication simultaneously by combining public-key encryption and digital signatures, offering better overall performance and security. In this paper, we introduce the notion of certificateless signcryption and present an efficient construction which guarantees security under insider attacks, and therefore provides forward secrecy and non-repudiation. The scheme is shown to be secure using random oracles under a variant of the bilinear Diffie-Hellman assumption.
2008
EPRINT
Cheon's algorithm, pairing inversion and the discrete logarithm problem
We relate the fixed argument pairing inversion problems (FAPI) and the discrete logarithm problem on an elliptic curve. This is done using the reduction from the DLP to the Diffie-Hellman problem developed by Boneh, Lipton, Maurer and Wolf. This approach fails when only one of the FAPI problems can be solved. In this case we use Cheon's algorithm to get a reduction.
2008
EPRINT
Chosen ciphertext secure public key encryption under DDH assumption with short ciphertext
An efficient variant of the ElGamal public key encryption scheme is proposed which is provably secure against adaptive chosen ciphertext attacks (IND-CCA2) under the decisional Diffie-Hellman (DDH) assumption. Compared to the previously most efficient scheme under the DDH assumption, by Kurosawa and Desmedt [Crypto 2004], its ciphertexts are one group element shorter, its secret keys are 50\% shorter, its public keys are 25\% shorter, and it is 28.6\% more efficient in terms of encryption speed and 33.3\% more efficient in terms of decryption speed. A new security proof logic is used, which shows directly that the decryption oracle will not help the adversary in the IND-CCA2 game. Compared to the previous security proof, the decryption simulation is not needed in the new logic. This makes the security proof simple and easy to understand.
2008
EPRINT
Chosen-Ciphertext Secure Fuzzy Identity-Based Key Encapsulation without ROM
We use hybrid encryption with Fuzzy Identity-Based Encryption (Fuzzy-IBE) schemes and present the first efficient fuzzy identity-based key encapsulation mechanism (Fuzzy-IB-KEM) schemes which are chosen-ciphertext secure (CCA) without random oracles in the selective-ID model. To achieve these goals, we consider Fuzzy-IBE schemes as consisting of separate key and data encapsulation mechanisms (KEM-DEM), and then give the definition of Fuzzy-IB-KEM. Our main idea is to enhance Sahai and Waters' "large universe" construction (Sahai and Waters, 2005), a chosen-plaintext secure (CPA) Fuzzy-IBE, by adding some redundant information to the ciphertext to make it CCA-secure.
2008
EPRINT
Chosen-Ciphertext Security via Correlated Products
We initiate the study of one-wayness under {\em correlated products}. We are interested in identifying necessary and sufficient conditions for a function $f$ and a distribution on inputs $(x_1, \ldots, x_k)$, so that the function $(f(x_1), \ldots, f(x_k))$ is one-way. The main motivation of this study is the construction of public-key encryption schemes that are secure against chosen-ciphertext attacks (CCA). We show that any collection of injective trapdoor functions that is secure under very natural correlated products can be used to construct a CCA-secure public-key encryption scheme. The construction is simple, black-box, and admits a direct proof of security. We provide evidence that security under correlated products is achievable by demonstrating that any collection of lossy trapdoor functions, a powerful primitive introduced by Peikert and Waters (STOC '08), yields a collection of injective trapdoor functions that is secure under the above mentioned natural correlated products. Although we eventually base security under correlated products on lossy trapdoor functions, we argue that the former notion is potentially weaker as a general assumption. Specifically, there is no fully-black-box construction of lossy trapdoor functions from trapdoor functions that are secure under correlated products.
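Stated here only as a plausible formalization of the notion sketched above (the paper's exact definition may differ in details), one-wayness under a correlated product asks that for the input distribution $\mathcal{C}_k$ over tuples $(x_1,\dots,x_k)$ and every efficient adversary $\mathcal{A}$,
$$\Pr_{(x_1,\dots,x_k)\leftarrow \mathcal{C}_k}\Big[\mathcal{A}\big(f, f(x_1),\dots,f(x_k)\big) = (x_1,\dots,x_k)\Big] \le \mathrm{negl}(n),$$
i.e., seeing the function applied to several correlated inputs at once must not help in recovering them.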
2008
EPRINT
Ciphertext-Policy Attribute-Based Encryption: An Expressive, Efficient, and Provably Secure Realization
We present new techniques for realizing Ciphertext-Policy Attribute-Based Encryption (CP-ABE) under concrete and noninteractive cryptographic assumptions. Our solutions allow any encryptor to specify access control in terms of an LSSS matrix, $M$, over the attributes in the system. We present three different constructions that allow different tradeoffs between the system's efficiency and the complexity of the assumptions used. All three constructions use a common methodology of ``directly'' solving the CP-ABE problem, which enables us to get much better efficiency than prior approaches.
2008
EPRINT
Classification and Generation of Disturbance Vectors for Collision Attacks against SHA-1
In this paper, we present a deterministic algorithm to produce disturbance vectors for collision attacks against SHA-1. We show that all published disturbance vectors can be classified into two types of vectors, type-I and type-II. We define a cost function, close to those described in \cite{Mendel06}, to evaluate the complexity of a collision attack for a given disturbance vector. Using the classification and the cost function we made an exhaustive search which allowed us to retrieve all known vectors. We also found new vectors which have lower cost. This may lead to the best collision attack against SHA-1, with a theoretical attack complexity of $2^{51}$ hash function calls.
2008
EPRINT
CM construction of genus 2 curves with p-rank 1
We present an algorithm for constructing cryptographic hyperelliptic curves of genus $2$ and $p$-rank $1$, using the CM method. We also present an algorithm for constructing such curves that, in addition, have a prescribed small embedding degree. We describe the algorithms in detail, and discuss other aspects of $p$-rank 1 curves too, including the reduction of the class polynomials modulo $p$.
2008
EPRINT
Collision attack on NaSHA-512
The hash function NaSHA is a new algorithm proposed for SHA-3. It follows the wide-pipe structure and its compression function adopts quasigroup transformations. These properties of operating in a quasigroup raise obstacles to analysis. However, a high-probability difference causing an inner collision can be found in the quasigroup transformations. We propose a collision attack on NaSHA-512 with complexity 2^{192}, which is lower than the complexity of a birthday attack on NaSHA-512. Using a similar method, we can find free-start collisions on all versions with negligible complexity.
2008
EPRINT
Collision Attack on the Waterfall Hash Function
We give a method that appears to be able to find colliding messages for the Waterfall hash function with approximately $O(2^{70})$ work for all hash sizes. If correct, this would show that the Waterfall hash function does not meet the required collision resistance.
2008
EPRINT
Collision attacks against 22-step SHA-512
In this work, we present two attacks against 22-step SHA-512. Our first attack succeeds with probability about $2^{-8}$ whereas the second attack is deterministic. To construct the attack, we use a single local collision and handle conditions on the colliding pair of messages. All but one condition can be satisfied deterministically in our first attack while in the second attack all conditions can be satisfied deterministically. There are four free words in our second attack and hence we get exactly $2^{256}$ collisions for 22-step SHA-512. Recently, attacks against up to 24-step SHA-256 have been reported in the literature which use a local collision given earlier by Nikoli\'{c} and Biryukov at FSE'08. We provide evidence which shows that using this local collision is unlikely to produce collisions for step reduced SHA-512. Consequently, our attacks are currently the best against reduced round SHA-512. The same attacks also work against SHA-256. Since our second attack is a deterministic construction, it is also the best attack against 22-step SHA-256.
2008
EPRINT
Collisions and other Non-Random Properties for Step-Reduced SHA-256
We study the security of step-reduced but otherwise unmodified SHA-256. We show the first collision attacks on SHA-256 reduced to 23 and 24 steps with complexities $2^{18}$ and $2^{28.5}$, respectively. We give example colliding message pairs for 23-step and 24-step SHA-256. The best previous, recently obtained result was a collision attack for up to 22 steps. We extend our attacks to 23 and 24-step reduced SHA-512 with respective complexities of $2^{44.9}$ and $2^{53.0}$. Additionally, we show non-random behaviour of the SHA-256 compression function in the form of free-start near-collisions for up to 31 steps, which is 6 more steps than the recently obtained non-random behaviour in the form of a free-start near-collision. Even though this represents a step forwards in terms of cryptanalytic techniques, the results do not threaten the security of applications using SHA-256.
2008
EPRINT
Collisions for Round-Reduced LAKE
LAKE is a family of cryptographic hash functions presented at FSE 2008. It is an iterated hash function and defines two main instances with a 256 bit and 512 bit hash value. In this paper, we present the first security analysis of LAKE. We show how collision attacks, exploiting the non-bijectiveness of the internal compression function of LAKE, can be mounted on reduced variants of LAKE. We show an efficient attack on the 256 bit hash function LAKE-256 reduced to 3 rounds and present an actual colliding message pair. Furthermore, we present a theoretical attack on LAKE-256 reduced to 4 rounds with a complexity of $2^{109}$. By using more sophisticated message modification techniques we expect that the attack can be extended to 5 rounds. However, for the moment our approach does not appear to be applicable to the full LAKE-256 hash function (with all 8 rounds).
2008
EPRINT
Collusion-Free Multiparty Computation in the Mediated Model
Collusion-free protocols prevent subliminal communication (i.e., covert channels) between parties running the protocol. In the standard communication model (and assuming the existence of one-way functions), protocols satisfying any reasonable degree of privacy cannot be collusion-free. To circumvent this impossibility result, Alwen et al. recently suggested the mediated model where all communication passes through a mediator; the goal is to design protocols where collusion-freeness is guaranteed as long as the mediator is honest, while standard security guarantees continue to hold if the mediator is dishonest. In this model, they gave constructions of collusion-free protocols for commitments and zero-knowledge proofs in the two-party setting. We strengthen the definition of Alwen et al. in several ways, and resolve the key open questions in this area by showing a collusion-free protocol (in the mediated model) for computing any multi-party functionality.
2008
EPRINT
Combinatorial batch codes
In this paper, we study batch codes, which were introduced by Ishai, Kushilevitz, Ostrovsky and Sahai. A batch code specifies a method to distribute a database of n items among m devices (servers) in such a way that any k items can be retrieved by reading at most t items from each of the servers. It is of interest to devise batch codes that minimize the total storage, denoted by N, over all m servers. In this paper, we study the special case t=1, under the assumption that every server stores a subset of the items. This is purely a combinatorial problem, so we call this kind of batch code a "combinatorial batch code''. For various parameter situations, we are able to present batch codes that are optimal with respect to the storage requirement, N. We also study uniform codes, where every item is stored in precisely c of the m servers (such a code is said to have rate 1/c). Interesting new results are presented in the cases c = 2, k-2 and k-1. In addition, we obtain improved existence results for arbitrary fixed c using the probabilistic method.
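As an illustration of the definition in the case t = 1 (a small sketch; the storage assignment below is a toy example chosen here for illustration, not taken from the paper), any k items are retrievable with at most one read per server exactly when every k-subset of items has a system of distinct representatives among the servers storing them, which can be tested with bipartite matching.

# Checks whether a storage assignment is a combinatorial batch code for t = 1:
# every k-subset of items must be retrievable by reading at most one item per
# server, i.e. must have a system of distinct representatives (Hall / matching).
from itertools import combinations

def has_sdr(item_subset, storage):
    """Bipartite matching: can each item in the subset be read from a distinct server?"""
    match = {}                                   # server -> item currently assigned to it
    def augment(item, seen):
        for srv in storage[item]:
            if srv in seen:
                continue
            seen.add(srv)
            if srv not in match or augment(match[srv], seen):
                match[srv] = item
                return True
        return False
    return all(augment(it, set()) for it in item_subset)

def is_batch_code(storage, k):
    """storage[i] = set of servers holding item i; checks the t = 1 batch-code property."""
    return all(has_sdr(sub, storage) for sub in combinations(range(len(storage)), k))

# Toy example: n = 4 items on m = 3 servers, total storage N = 8, aiming for k = 3.
storage = [{0, 1}, {1, 2}, {0, 2}, {0, 1}]       # illustrative assignment, not from the paper
print(is_batch_code(storage, 3))                 # True: every 3 items are retrievable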
2008
EPRINT
Combined (identity-based) public key schemes
Consider a scenario in which parties use a public key encryption scheme and a signature scheme with a single public key/private key pair---so the private key sk is used for both signing and decrypting. Such a simultaneous use of a key is in general considered poor cryptographic practice, but from an efficiency point of view looks attractive. We offer security notions to analyze such violations of key separation. For both the identity- and the non-identity-based setting, we show that---although being insecure in general---for schemes of interest the resulting combined (identity-based) public key scheme can offer strong security guarantees.
2008
EPRINT
Comments on two multi-server authentication protocols
Recently, Tsai and Liao et al. each proposed a multi-server authentication protocol. They claimed that their protocols are secure and can withstand various attacks. However, we have found security loopholes in each protocol, and we show attacks on both schemes.
2008
EPRINT
Comments on two password based protocols
Recently, M. Hölbl et al. and I. E. Liao et al. each proposed a user authentication protocol. Both claimed that their schemes can withstand password guessing attacks. However, T. Xiang et al. pointed out that I. E. Liao et al.'s protocol suffers from three kinds of attacks, including password guessing attacks. In this paper, we first point out the security loopholes of M. Hölbl et al.'s protocol and review T. Xiang et al.'s cryptanalysis of I. E. Liao et al.'s protocol. Then, we present improvements to M. Hölbl et al.'s protocol and I. E. Liao et al.'s protocol, respectively, which resist password guessing attacks.
2008
EPRINT
Compact Proofs of Retrievability
In a proof-of-retrievability system, a data storage center must prove to a verifier that he is actually storing all of a client's data. The central challenge is to build systems that are both efficient and provably secure -- that is, it should be possible to extract the client's data from any prover that passes a verification check. All previous provably secure solutions require that a prover send O(l) authenticator values (i.e., MACs or signatures) to verify a file, for a total of O(l^2) bits of communication, where l is the security parameter. The extra cost over the ideal O(l) communication can be prohibitive in systems where a verifier needs to check many files. We create the first compact and provably secure proof of retrievability systems. Our solutions allow for compact proofs with just one authenticator value -- in practice this can lead to proofs with as little as 40 bytes of communication. We present two solutions with similar structure. The first one is privately verifiable and builds elegantly on pseudorandom functions (PRFs); the second allows for publicly verifiable proofs and is built from the signature scheme of Boneh, Lynn, and Shacham in bilinear groups. Both solutions rely on homomorphic properties to aggregate a proof into one small authenticator value.
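To illustrate the aggregation idea behind the privately verifiable solution, the following is a heavily simplified sketch (toy parameters, no erasure coding, no per-block sectors, a hash used as a stand-in PRF; it is not the paper's full scheme): each block m_i is tagged with sigma_i = f_key(i) + alpha*m_i mod p, and an entire challenge is answered with just the two aggregates mu and sigma, which the verifier checks with a single equation.

    import hashlib, random

    p = 2**61 - 1                      # toy prime modulus, for illustration only

    def prf(key, i):                   # stand-in PRF: hash of key and block index
        return int.from_bytes(hashlib.sha256(key + i.to_bytes(8, "big")).digest(), "big") % p

    def tag_file(key, alpha, blocks):  # per-block authenticator: sigma_i = f_key(i) + alpha*m_i
        return [(prf(key, i) + alpha * m) % p for i, m in enumerate(blocks)]

    def prove(blocks, tags, challenge):   # prover sends only two field elements
        mu    = sum(nu * blocks[i] for i, nu in challenge) % p
        sigma = sum(nu * tags[i]   for i, nu in challenge) % p
        return mu, sigma

    def verify(key, alpha, challenge, mu, sigma):
        return sigma == (sum(nu * prf(key, i) for i, nu in challenge) + alpha * mu) % p

    key, alpha = b"secret-key", random.randrange(p)
    blocks = [random.randrange(p) for _ in range(16)]
    tags = tag_file(key, alpha, blocks)
    challenge = [(i, random.randrange(p)) for i in random.sample(range(16), 4)]
    print(verify(key, alpha, challenge, *prove(blocks, tags, challenge)))   # True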
2008
EPRINT
Compact Signatures for Network Coding
Network coding offers increased throughput and improved robustness to random faults in completely decentralized networks. Since it does not require centralized control, network coding has been suggested for routing packets in ad-hoc networks, for content distribution in P2P file systems, and for improving the efficiency of large-scale data dissemination over the Internet. In contrast to traditional routing schemes, however, network coding requires intermediate nodes to process and modify data packets en route. For this reason, standard signature schemes are inapplicable and it is therefore a challenge to provide resilience to tampering by malicious nodes in the network. Here, we propose a novel homomorphic signature scheme that can be used in conjunction with network coding to prevent malicious modification of data. The overhead of our scheme is small and independent of the file or packet size: both public keys and signatures in our scheme consist of only a single group element.
2008
EPRINT
Compartmented Threshold RSA Based on the Chinese Remainder Theorem
In this paper we combine the compartmented secret sharing schemes based on the Chinese remainder theorem with the RSA scheme in order to obtain, as a novelty, a dedicated solution for compartmented threshold decryption or compartmented threshold digital signature generation. AMS Subject Classification: 94A60, 94A62, 11A07 Keywords and phrases: threshold cryptography, secret sharing, Chinese remainder theorem
2008
EPRINT
Complete Fairness in Secure Two-Party Computation
In the setting of secure two-party computation, two mutually distrusting parties wish to compute some function of their inputs while preserving, to the extent possible, security properties such as privacy, correctness, and more. One desirable property is fairness which guarantees, informally, that if one party receives its output, then the other party does too. Cleve (STOC 1986) showed that complete fairness cannot be achieved, in general, without an honest majority. Since then, the accepted folklore has been that nothing non-trivial can be computed with complete fairness in the two-party setting, and the problem has been treated as closed since the late '80s. In this paper, we demonstrate that this folklore belief is false by showing completely-fair protocols for various non-trivial functions in the two-party setting based on standard cryptographic assumptions. We first show feasibility of obtaining complete fairness when computing any function over polynomial-size domains that does not contain an ``embedded XOR''; this class of functions includes boolean AND/OR as well as Yao's ``millionaires' problem''. We also demonstrate feasibility for certain functions that do contain an embedded XOR, and prove a lower bound showing that any completely-fair protocol for such functions must have round complexity super-logarithmic in the security parameter. Our results demonstrate that the question of completely-fair secure computation without an honest majority is far from closed.
2008
EPRINT
Complexity Analysis of a Fast Modular Multiexponentiation Algorithm
Recently, a fast modular multiexponentiation algorithm for computing A^X B^Y (mod N) was proposed. The authors claimed that on average their algorithm requires only 1.306k modular multiplications (MMs), where k is the bit length of the exponents. This claimed performance is significantly better than that of all other comparable algorithms, the best of which requires 1.503k MMs. In this paper, we give a formal complexity analysis and show that the claimed performance does not hold. The actual computational complexity of the algorithm is 1.556k MMs. This means that the best modular multiexponentiation algorithm based on the canonical-signed-digit technique is still not able to overcome the 1.5k barrier.
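For readers unfamiliar with how such per-bit operation counts arise, the sketch below implements the classical simultaneous square-and-multiply (Shamir/Straus) method for A^X B^Y (mod N) and counts modular multiplications; with the single precomputed product AB it needs about 1.75k MMs on average, the textbook baseline that signed-digit recodings improve towards the 1.5k mark. This is only that baseline, not the algorithm analyzed in the paper.

    def multi_exp(A, B, X, Y, N):
        # Classical simultaneous square-and-multiply for A^X * B^Y mod N,
        # counting modular multiplications (squarings counted as MMs as well).
        AB = (A * B) % N                         # one precomputed product
        mults, result = 1, 1
        k = max(X.bit_length(), Y.bit_length())
        table = {(1, 0): A, (0, 1): B, (1, 1): AB}
        for i in reversed(range(k)):
            if result != 1:
                result = (result * result) % N   # one squaring per exponent bit
                mults += 1
            bits = ((X >> i) & 1, (Y >> i) & 1)
            if bits != (0, 0):                   # a multiplication for 3 of 4 bit patterns on average
                result = (result * table[bits]) % N
                mults += 1
        return result, mults

    N = (1 << 127) - 1
    r, cost = multi_exp(1234567, 7654321, 0xDEADBEEF, 0xCAFEBABE, N)
    assert r == (pow(1234567, 0xDEADBEEF, N) * pow(7654321, 0xCAFEBABE, N)) % N
    print(cost, "modular multiplications for 32-bit exponents")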
2008
EPRINT
Complexity of Multiparty Computation Problems: The Case of 2-Party Symmetric Secure Function Evaluation
In symmetric secure function evaluation (SSFE), Alice has an input $x$, Bob has an input $y$, and both parties wish to securely compute $f(x,y)$. We classify these functions $f$ according to their ``cryptographic complexities,'' and show that the landscape of complexity among these functions is surprisingly rich. We give combinatorial characterizations of the SSFE functions $f$ that have passive-secure protocols, and of those which have protocols secure in the standalone setting. With respect to universally composable security (for unbounded parties), we show that there is an infinite hierarchy of increasing complexity for SSFE functions. That is, we describe a family of SSFE functions $f_1, f_2, \ldots$ such that there exists a UC-secure protocol for $f_i$ in the $f_j$-hybrid world if and only if $i \le j$. Our main technical tool for deriving complexity separations is a powerful protocol simulation theorem which states that, even in the strict setting of UC security, the canonical protocol for $f$ is as secure as any other protocol for $f$, as long as $f$ satisfies a certain combinatorial characterization. We can then show intuitively clear impossibility results by establishing the combinatorial properties of $f$ and then describing attacks against the very simple canonical protocols, which by extension are also feasible attacks against {\em any} protocol for the same functionality.
2008
EPRINT
Computational Soundness of Symbolic Zero-Knowledge Proofs Against Active Attackers
The abstraction of cryptographic operations by term algebras, called Dolev-Yao models, is essential in almost all tool-supported methods for proving security protocols. Recently significant progress was made in proving that Dolev-Yao models offering the core cryptographic operations such as encryption and digital signatures can be sound with respect to actual cryptographic realizations and security definitions. Recent work, however, has started to extend Dolev-Yao models to more sophisticated operations with unique security features. Zero-knowledge proofs arguably constitute the most amazing such extension. In this paper, we first identify which additional properties a cryptographic zero-knowledge proof needs to fulfill in order to serve as a computationally sound implementation of symbolic (Dolev-Yao style) zero-knowledge proofs; this leads to the novel definition of a symbolically-sound zero-knowledge proof system. We prove that even in the presence of arbitrary active adversaries, such proof systems constitute computationally sound implementations of symbolic zero-knowledge proofs. This yields the first computational soundness result for symbolic zero-knowledge proofs and the first such result against fully active adversaries of Dolev-Yao models that go beyond the core cryptographic operations.
2008
EPRINT
Computing Almost Exact Probabilities of Differential Hash Collision Paths by Applying Appropriate Stochastic Methods
Generally speaking, the probability of a differential path determines an upper bound for the expected workload and thus for the true risk potential of a differential attack. In particular, if the expected workload seems to be in a borderline region between practical feasibility and non-feasibility, it is desirable to know the path probability as exactly as possible. We present a generally applicable approach to determine at least almost exact probabilities of differential paths, where we focus on (near-)collision paths for Merkle-Damgård-type hash functions. Our results show both that the number of bit conditions provides only a rough estimate for the true path probability and that the IV may have significant impact on the path probability. For MD5 we verified the effectiveness of our approach experimentally. An abbreviated version [GIS4], which in particular omits proofs, technical details and several examples, will appear in the proceedings of the security conference 'Sicherheit 2008'.
2008
EPRINT
Computing Hilbert Class Polynomials
We present and analyze two algorithms for computing the Hilbert class polynomial H_D(X). The first is a p-adic lifting algorithm for inert primes p in the order of discriminant D < 0. The second is an improved Chinese remainder algorithm which uses the class group action on CM-curves over finite fields. Our run time analysis gives tighter bounds for the complexity of all known algorithms for computing H_D(X), and we show that all methods have comparable run times.
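For orientation, a few small and well-known examples (standard values, quoted here only as an illustration): $H_{-3}(X) = X$, $H_{-4}(X) = X - 1728$, $H_{-7}(X) = X + 3375$, $H_{-8}(X) = X - 8000$, and $H_{-15}(X) = X^2 + 191025\,X - 121287375$. The degree of $H_D(X)$ equals the class number $h(D)$, which is 1 for the first four discriminants and 2 for $D = -15$; the growth of this degree and of the coefficients is what drives the cost of computing $H_D(X)$.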
2008
EPRINT
Computing Pairings Using x-Coordinates Only
To reduce bandwidth in elliptic curve cryptography one can transmit only $x$-coordinates of points (or $x$-coordinates together with an extra bit). For further computation using the points one can either recover the $y$-coordinates by taking square roots or one can use point multiplication formulae which use $x$-coordinates only. We consider how to efficiently use point compression in pairing-based cryptography. We give a method to compute compressed Weil pairings using $x$-coordinates only. We also show how to compute the compressed Tate and ate pairings using only one $y$-coordinate. Our methods are more efficient than taking square roots when the embedding degree is small. We implemented the algorithms in the case of embedding degree 2 curves over $\F_p$ where $p \equiv 3 \pmod{4}$ and found that our methods are $10-15\%$ faster than the analogous methods using square roots.
2008
EPRINT
Computing the Bilinear Pairings on Elliptic Curves with Automorphisms
In this paper, we propose an efficient super-optimal pairing based on the Weil pairing. This is the first approach that reduces the Miller iteration loop when computing variants of the Weil pairing. The proposed super-optimal pairing can be computed rather fast, although it is slightly slower than the previously fastest pairing on the corresponding elliptic curves.
2008
EPRINT
Constant-Round Concurrent Non-Malleable Commitments and Decommitments
In this paper we consider commitment schemes that are secure against concurrent poly-time man-in-the-middle (cMiM) attacks. Under such attacks, two possible notions of security for commitment schemes have been proposed in the literature: concurrent non-malleability with respect to commitment and concurrent non-malleability with respect to decommitment (i.e., opening). After the original notion of non-malleability introduced by [Dolev, Dwork and Naor STOC 91] that is based on the independence of the committed and decommitted message, a new and stronger notion of non-malleability has been given in [Pass and Rosen STOC 05] by requiring that for any man-in-the-middle adversary there is a stand-alone adversary that succeeds with the same probability. Under this stronger security notion, a constant-round commitment scheme that is concurrent non-malleable only with respect to commitment has been given in [Pass and Rosen FOCS 05] for the plain model, thus leaving as an open problem the construction of constant-round concurrent non-malleable commitments with respect to decommitment. In other words, in [Pass and Rosen FOCS 05] security against adversaries that mount concurrent man-in-the-middle attacks is guaranteed only during the commitment phase (under their stronger notion of non-malleability). The main result of this paper is a commitment scheme that is concurrent non-malleable with respect to both commitment and decommitment, under the stronger notion of [Pass and Rosen STOC 05]. This property protects against cMiM attacks mounted during both commitments and decommitments, which is a crucial security requirement in several applications, as in some digital auctions, in which players have to perform both commitments and decommitments. Our scheme uses a constant number of rounds of interaction in the plain model and is the first scheme that enjoys all these properties under the definitions of [Pass and Rosen FOCS 05]. We stress that, exactly as in [Pass and Rosen FOCS 05], we assume that commitments and decommitments are performed in two distinct phases that do not overlap in time.
2008
EPRINT
Constant-Size Dynamic $k$-TAA
$k$-times anonymous authentication ($k$-TAA) schemes allow members of a group to be authenticated anonymously by application providers for a bounded number of times. Dynamic $k$-TAA allows application providers to independently grant or revoke users from their own access group so as to provide better control over their clients. In terms of time and space complexity, existing dynamic $k$-TAA schemes are of complexity O($k$), where $k$ is the allowed number of authentications. In this paper, we construct a dynamic $k$-TAA scheme with space and time complexities of $O(\log(k))$. We also outline how to construct a dynamic $k$-TAA scheme with constant proving effort. The public key size of this variant, however, is $O(k)$. We then describe a trade-off between efficiency and setup freeness of the AP, in which the AP does not need to hold any secret while maintaining control over their clients. To build our system, we modify the short group signature scheme into a signature scheme and provide efficient protocols that allow one to prove in zero-knowledge the knowledge of a signature and to obtain a signature on a committed block of messages. We prove that the signature scheme is secure in the standard model under the $q$-SDH assumption. Finally, we show that our dynamic $k$-TAA scheme, constructed from bilinear pairing, is secure in the random oracle model.
2008
EPRINT
Constructing Variable-Length PRPs and SPRPs from Fixed-Length PRPs
We create variable-length pseudorandom permutations (PRPs) and strong PRPs (SPRPs) accepting any input length chosen from the range of b to 2b bits from fixed-length, b-bit PRPs. We utilize the elastic network that underlies the recently introduced concrete design of elastic block ciphers, exploiting it as a network of PRPs. We prove that three- and four-round elastic networks are variable-length PRPs and that five-round elastic networks are variable-length SPRPs, accepting any input length that is fixed in the range of b to 2b bits, when the round functions are independently chosen fixed-length PRPs on b bits. We also prove that these are the minimum numbers of rounds required.
2008
EPRINT
Construction of Resilient Functions with Multiple Cryptographic Criteria
In this paper, we describe a method to construct (n, m, t) resilient functions which satisfy multiple cryptographic criteria including high nonlinearity, good resiliency, high algebraic degree, and nonexistence of nonzero linear structure. Given a [u, m, t+1] linear code, we show that it is possible to construct (n, m, t) resilient functions with multiple good cryptographic criteria, where 2m<u<n.
2008
EPRINT
Controlling access to personal data through Accredited Symmetrically Private Information Retrieval
With the digitization of society and the continuous migration of services to the electronic world, individuals have lost significant control over their data. In this paper, we consider the problem of protecting personal information according to privacy policies defined by the data subjects. More specifically, we propose a new primitive allowing a data subject to decide when, how, and by whom his data can be accessed, without the database manager learning anything about his identity, at the time the data is retrieved. The proposed solution, which we call Accredited SPIR, combines symmetrically private information retrieval and privacy-preserving digital credentials. We present three constructions based on the discrete logarithm and RSA problems. Despite the added privacy safeguards, the extra cost incurred by our constructions is negligible compared to that of the underlying building blocks.
2008
EPRINT
Could The 1-MSB Input Difference Be The Fastest Collision Attack For MD5 ?
So far, two different 2-block collision differentials, both with 3-bit input differences for MD5, have been found: by Wang et al. in 2005 and by Xie et al. in 2008. These differentials have since been improved to generate a collision within around one minute and half an hour, respectively, on a desktop PC. Are there more collision differentials for MD5? Can a more efficient algorithm be developed to find MD5 collisions? In this paper, we list the whole set of 1-bit to 3-bit input difference patterns that could qualify for constructing a feasible collision differential, and from this set a new collision differential with only a 1-MSB input difference is analyzed in detail; finally, its performance is compared with the prior two 3-bit collision attacks according to seven criteria proposed in this paper. In our approach, a two-block message is still needed to produce a collision, with the first block differing in only one MSB while the second block remains the same. Although the differential path appears to be computationally infeasible at first sight, most of the conditions that a collision differential path must satisfy can be fulfilled by multi-step modifications, and the collision searching efficiency can be improved much further by a specific divide-and-conquer technique, which transforms a multiplicative accumulation of the computational complexities into an addition by properly grouping the conditional bits. In particular, a tunneling-like technique is applied to enhance the attack algorithm by introducing some additional conditions. As a result, the fastest attack algorithm is obtained, with an average computational complexity of 2^20.96 MD5 compressions, which implies that it is able to find a collision within a second on a common PC for arbitrary random initial values. With reasonable probability a collision can be found within milliseconds, allowing an attack to be mounted during the execution of a practical protocol. The collision searching algorithm is rather complex, but it has been implemented; the implementation is available from http://www.is.iscas.ac.cn/gnomon for readers who wish to try it.
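Schematically (an editorial gloss on the complexity bookkeeping, not the paper's exact accounting): if the remaining conditional bits can be split into groups that are satisfied independently at costs of roughly $2^{a}$ and $2^{b}$ trials, the divide-and-conquer search costs about $2^{a} + 2^{b}$ trials instead of the $2^{a+b}$ needed when all conditions must hold simultaneously in a single pass; for instance, with $a = 12$ and $b = 9$ this is $2^{12} + 2^{9} = 4608$ trials rather than $2^{21} \approx 2.1 \times 10^{6}$.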
2008
EPRINT
Cryptanalysing the Critical Group
Biggs has recently proposed the critical group of a certain class of finite graphs as a platform group for cryptosystems relying on the difficulty of the discrete log problem. The paper uses techniques from the theory of Picard groups on finite graphs to show that the discrete log problem can be efficiently solved in Biggs's groups. Thus this class of groups is not suitable as a platform for discrete log based cryptography.
2008
EPRINT
Cryptanalysis and Improvement of a Recently Proposed Remote User Authentication Scheme Using Smart Cards
Recently, Debasis et al. [1] proposed an improvement to prevent the offline attack on Fang et al.'s [2] scheme, where [2] was itself an improvement of Das et al.'s [3] scheme. However, the improved scheme is insecure against side-channel attacks. In this paper we propose an enhancement of [1]. The enhanced scheme is secure against substitution, impersonation, spoofing, replay, side-channel and password guessing attacks.
2008
EPRINT
Cryptanalysis of a client-to-client password-authenticated key agreement protocol
Recently, Byun et al. proposed an efficient client-to-client password-authenticated key agreement protocol (EC2C-PAKA), which was provably secure in a formally defined security model. This letter shows that the EC2C-PAKA protocol is vulnerable to a password compromise impersonation attack and a man-in-the-middle attack if the key between servers is compromised.
2008
EPRINT
Cryptanalysis of an Authentication Scheme Using Truncated Polynomials
An attack on a recently proposed authentication scheme of Shpilrain and Ushakov is presented. The public information allows the derivation of a system of polynomial equations for the secret key bits. Our attack uses simple elimination techniques to distill linear equations. For the proposed parameter choice, the attack often finds secret keys or alternative secret keys within minutes with moderate resources.
2008
EPRINT
Cryptanalysis of Bohio et al.'s ID-Based Broadcast Signcryption (IBBSC) Scheme for Wireless Ad-hoc Networks
Broadcast signcryption enables the broadcaster to simultaneously encrypt and sign the content meant for a specific set of users in a single logical step. It provides a very efficient solution to the dual problem of achieving confidentiality and authentication during content distribution. Among other alternatives, ID-based schemes are arguably the best suited for its implementation in wireless ad-hoc networks because of the unique advantage that they provide - any unique, publicly available parameter of a user can be his public key, which eliminates the need for a complex public key infrastructure. In 2004, Bohio et al. [4] proposed an ID-based broadcast signcryption (IBBSC) scheme which achieves constant ciphertext size. They claim that their scheme provides both message authentication and confidentiality, but do not give formal proofs. In this paper, we demonstrate how a legitimate user of the scheme can forge a valid signcrypted ciphertext, as if generated by the broadcaster. Moreover, we show that their scheme is not IND-CCA secure. Following this, we propose a fix for Bohio et al.'s scheme, and formally prove its security under the strongest existing security models for broadcast signcryption (IND-CCA2 and EUF-CMA). While fixing the scheme, we also improve its efficiency by reducing the ciphertext size to two elements compared to three in [4].
2008
EPRINT
Cryptanalysis of CRUSH hash structure
In this paper, we present a cryptanalysis of the CRUSH hash structure. Surprisingly, our attack can find a pre-image for any desired length of the internal message. The time complexity of this attack is completely negligible: we show that the time complexity of finding a pre-image of any length is O(1). In this attack, an adversary can freely find a pre-image of a length of his own choice for any given message digest. We can also find second pre-images, collisions, and multi-collisions with the same complexity. In this paper, we also introduce a stronger variant of the algorithm, and show that an adversary is still able to produce collisions for this stronger variant of the CRUSH hash structure with a time complexity lower than that of a birthday attack.
2008
EPRINT
Cryptanalysis of ID-Based Signcryption Scheme for Multiple Receivers
In ATC 2007, an identity-based signcryption scheme for multiple receivers was proposed by Yu et al. They prove confidentiality of their scheme and also claim unforgeability without any proof. In this paper, we show that their signcryption scheme is insecure by demonstrating a universal forgeability attack - anyone can generate a valid signcrypted ciphertext on any message on behalf of any legal user for any set of legal receivers without knowing the secret keys of the legal users. Further, we propose a corrected version of their scheme and formally prove its security (confidentiality and unforgeability) under the existing security model for signcryption. We also analyze the efficiency of the corrected scheme by comparing it with existing signcryption schemes for multiple receivers.
2008
EPRINT
Cryptanalysis of Li et al.'s Identity-Based Threshold Signcryption Scheme
Signcryption is a cryptographic primitive that aims at providing confidentiality and authentication simultaneously. Recently, in May 2008, a scheme for identity-based threshold signcryption was proposed by Fagen Li and Yong Yu. They proved the confidentiality of their scheme and also claimed unforgeability without providing a satisfactory proof. In this paper, we show that in their signcryption scheme the secret key of the sender is exposed (a total break) to the clerk during signcryption, and hence the scheme is insecure in the presence of malicious clerks. Further, we propose a corrected version of the scheme and formally prove its security under the existing security model for signcryption.
2008
EPRINT
Cryptanalysis of LU Decomposition-based Key Pre-distribution Scheme for Wireless Sensor Networks
S. J. Choi and H. Y. Youn proposed a key pre-distribution scheme for wireless sensor networks based on the LU decomposition of a symmetric matrix, and many researchers have since built on this scheme. However, we find a mathematical relationship between the L and U matrices obtained by decomposing a symmetric matrix, by which one matrix can be calculated from the other regardless of their product -- the key matrix K. This relationship profoundly harms any secure implementation of this decomposition scheme in the real world. In this paper, we first present and prove the mathematical theorem. Next, we give examples illustrating how to break such networks using this theorem. Finally, we state the conclusion and some directions for improving the security of the key pre-distribution scheme.
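One way to see why the two factors of a symmetric matrix cannot be independent (an illustrative argument; the theorem proved in the paper may be stated differently): if $K = K^T$ has nonzero leading principal minors and $K = LU$ with $L$ unit lower triangular, then writing $D = \mathrm{diag}(U)$ and invoking the uniqueness of the $LDU$ factorization of $K = K^T$ forces $U = D\,L^{T}$, so $L = (D^{-1}U)^{T}$ is computable from $U$ alone, and $U$ is determined by $L$ together with the diagonal $D$. For example, $K = \begin{pmatrix} 2 & 4 \\ 4 & 11 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}\begin{pmatrix} 2 & 4 \\ 0 & 3 \end{pmatrix}$, and indeed $U = \mathrm{diag}(2,3)\,L^{T}$.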
2008
EPRINT
Cryptanalysis of RadioGatun
In this paper we study the security of the RadioGatun family of hash functions, and more precisely the collision resistance of this proposal. We show that it is possible to find differential paths with acceptable probability of success. Then, by using the degrees of freedom available from the incoming message words, we provide a significant improvement over the best previously known cryptanalysis. As a proof of concept, we provide a colliding pair of messages for RadioGatun with 2-bit words. We finally argue that, under a mild assumption, our technique is very likely to provide the first collision attack on RadioGatun.
2008
EPRINT
Cryptanalysis of Short Exponent RSA with Primes Sharing Least Significant Bits
LSBS-RSA denotes an RSA system whose modulus primes, p and q, share a large number of least significant bits. In ISC 2007, Zhao and Qi analyzed the security of short exponent LSBS-RSA. They claimed that short exponent LSBS-RSA is much more vulnerable to the lattice attack than standard RSA. In this paper, we point out that there are errors in the calculation of Zhao and Qi's attack. After recalculating, the result shows that their attack cannot be applied to RSA with primes sharing bits. Consequently, we give a revised version that makes their attack feasible. We also propose a new method to further extend the security boundary, compared with the revised version. The proposed attack also supports the result of the analogue of Fermat factoring on LSBS-RSA, which states that p and q cannot share more than (n/4) least significant bits, where n is the bit-length of pq. In conclusion, there is a trade-off between the number of shared bits and the security level in LSBS-RSA. One should be more careful when using LSBS-RSA with short exponents.
2008
EPRINT
Cryptanalysis of the Cai-Cusick Lattice-based Public-key Cryptosystem
In 1998, Cai and Cusick proposed a lattice-based public-key cryptosystem based on ideas similar to those of the Ajtai-Dwork cryptosystem, but with much less data expansion. However, they did not give any security proof. In this paper, we present an efficient ciphertext-only attack against the cryptosystem which runs in polynomial time and recovers the message, so the Cai-Cusick lattice-based public-key cryptosystem is not secure. We also present two chosen-ciphertext attacks that recover a similar private key which acts like the real private key.
2008
EPRINT
Cryptanalysis of the Hash Function LUX-256
LUX is a new hash function submitted to NIST's SHA-3 competition. In this paper, we identify some non-random properties of LUX due to the weakness of its origin shift vector. We also give a reduced blank-round collision attack, a free-start collision attack and a free-start preimage attack on LUX-256. The two collision attacks are trivial. The free-start preimage attack has a complexity of about 2^80 and requires negligible memory.
2008
EPRINT
Cryptanalysis of the Improved Cellular Message Encryption Algorithm
This paper analyzes the Improved Cellular Message Encryption Algorithm (CMEA-I) which is an improved version of the Telecommunication Industry Association's Cellular Message Encryption Algorithm (CMEA). We present a chosen-plaintext attack of CMEA-I which requires less than 850 plaintexts in its adaptive version. This demonstrates that the improvements made over CMEA are ineffective to thwart such attacks and confirms that the security of CMEA and its variants must be reconsidered from the beginning.
2008
EPRINT
Cryptanalysis of White-Box Implementations
A white-box implementation of a block cipher is a software implementation from which it is difficult for an attacker to extract the cryptographic key. Chow et al. published white-box implementations for AES and DES that both have been cryptanalyzed. However, these white-box implementations are based on ideas that can easily be used to derive white-box implementations for other block ciphers as well. As the cryptanalyses published use typical properties of AES and DES, it remains an open question whether the white-box techniques proposed by Chow et al. can result in a secure white-box implementation for other ciphers than AES and DES. In this paper we identify a generic class of block ciphers for which the white-box techniques of Chow et al. do not result in a secure white-box implementation. The result can serve as a basis to design block ciphers and to develop white-box techniques that do result in secure white-box implementations.
2008
EPRINT
Cube Attacks on Tweakable Black Box Polynomials
Almost any cryptographic scheme can be described by \emph{tweakable polynomials} over $GF(2)$, which contain both secret variables (e.g., key bits) and public variables (e.g., plaintext bits or IV bits). The cryptanalyst is allowed to tweak the polynomials by choosing arbitrary values for the public variables, and his goal is to solve the resultant system of polynomial equations in terms of their common secret variables. In this paper we develop a new technique (called a \emph{cube attack}) for solving such tweakable polynomials, which is a major improvement over several previously published attacks of the same type. For example, on the stream cipher Trivium with a reduced number of initialization rounds, the best previous attack (due to Fischer, Khazaei, and Meier) requires a barely practical complexity of $2^{55}$ to attack $672$ initialization rounds, whereas a cube attack can find the complete key of the same variant in $2^{19}$ bit operations (which take less than a second on a single PC). Trivium with $735$ initialization rounds (which could not be attacked by any previous technique) can now be broken with $2^{30}$ bit operations. Trivium with $767$ initialization rounds can now be broken with $2^{45}$ bit operations, and the complexity of the attack can almost certainly be further reduced to about $2^{36}$ bit operations. Whereas previous attacks were heuristic, had to be adapted to each cryptosystem, had no general complexity bounds, and were not expected to succeed on random-looking polynomials, cube attacks are provably successful when applied to random polynomials of degree $d$ over $n$ secret variables whenever the number $m$ of public variables exceeds $d+\log_d n$. Their complexity is $2^{d-1}n+n^2$ bit operations, which is polynomial in $n$ and amazingly low when $d$ is small. Cube attacks can be applied to any block cipher, stream cipher, or MAC which is provided as a black box (even when nothing is known about its internal structure) as long as at least one output bit can be represented by (an unknown) polynomial of relatively low degree in the secret and public variables.
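The core observation can be demonstrated on a toy polynomial (an illustrative example with made-up terms, not drawn from the paper): summing the black-box polynomial over all assignments to a chosen cube of public variables yields its superpoly, which for a good cube is linear in the secret variables and can be solved for the key once enough such linear relations are collected.

    from itertools import product

    def f(k1, k2, v1, v2, v3):
        # Toy "black box" polynomial over GF(2), degree 3, with secret bits k1, k2
        # and public bits v1, v2, v3 (addition is XOR, multiplication is AND).
        return v1*v2*(k1 ^ k2) ^ v1*v3*k1 ^ v2*k2 ^ k1*k2 ^ v3 ^ 1

    def cube_sum(oracle, k1, k2):
        # Sum (XOR) the oracle over the cube {v1, v2}, fixing the remaining bit v3 = 0.
        s = 0
        for v1, v2 in product((0, 1), repeat=2):
            s ^= oracle(k1, k2, v1, v2, 0)
        return s

    # The superpoly of the cube {v1, v2} is k1 XOR k2: four chosen-IV queries
    # per key reveal one linear relation in the secret variables.
    for k1, k2 in product((0, 1), repeat=2):
        assert cube_sum(f, k1, k2) == (k1 ^ k2)
    print("superpoly of cube {v1, v2} equals k1 xor k2 for every key")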
2008
EPRINT
David and Goliath Commitments: UC Computation for Asymmetric Parties Using Tamper-Proof Hardware
Designing secure protocols in the Universal Composability (UC) framework confers many advantages. In particular, it allows the protocols to be securely used as building blocks in more complex protocols, and assists in understanding their security properties. Unfortunately, most existing models in which universally composable computation is possible (for useful functionalities) require a trusted setup stage. Recently, Katz [Eurocrypt '07] proposed an alternative to the trusted setup assumption: tamper-proof hardware. Instead of trusting a third party to correctly generate the setup information, each party can create its own hardware tokens, which it sends to the other parties. Each party is only required to trust that its own tokens are tamper-proof. Katz designed a UC commitment protocol that requires both parties to generate hardware tokens. In addition, his protocol relies on a specific number-theoretic assumption. In this paper, we construct UC commitment protocols for ``David'' and ``Goliath'': we only require a single party (Goliath) to be capable of generating tokens. We construct a version of the protocol that is secure for computationally unbounded parties, and a more efficient version that makes computational assumptions only about David (we require only the existence of a one-way function). Our protocols are simple enough to be performed by hand on David's side. These properties may allow such protocols to be used in situations which are inherently asymmetric in real-life, especially those involving individuals versus large organizations. Classic examples include voting protocols (voters versus ``the government'') and protocols involving private medical data (patients versus insurance-agencies or hospitals).
2008
EPRINT
Degradation and Amplification of Computational Hardness
What happens when you use a partially defective bit-commitment protocol to commit to the same bit many times? For example, suppose that the protocol allows the receiver to guess the committed bit with advantage $\epsilon$, and that you used that protocol to commit to the same bit more than $1/\epsilon$ times. Or suppose that you encrypted some message many times (to many people), only to discover later that the encryption scheme that you were using is partially defective, and an eavesdropper has some noticeable advantage in guessing the encrypted message from the ciphertext. Can we at least show that even after many such encryptions, the eavesdropper could not have learned the message with certainty? In this work we take another look at amplification and degradation of computational hardness. We describe a rather generic setting where one can argue about amplification or degradation of computational hardness via sequential repetition of interactive protocols, and prove that in all the cases that we consider, it behaves as one would expect from the corresponding information theoretic bounds. In particular, for the example above we can prove that after committing to the same bit $n$ times, the receiver's advantage in guessing the committed bit is negligibly close to $1-(1-\epsilon)^n$. Our results for hardness amplification follow just by observing that some of the known proofs for Yao's lemmas can be easily extended also to handle interactive protocols. On the other hand, the question of hardness degradation was never considered before as far as we know, and we prove these results from scratch.
2008
EPRINT
Delegatable Anonymous Credentials
We construct an efficient delegatable anonymous credential system. Users can anonymously and unlinkably obtain credentials from any authority, delegate their credentials to other users, and prove possession of a credential $L$ levels away from the given authority. The size of the proof (and time to compute it) is $O(Lk)$, where $k$ is the security parameter. The only other construction of delegatable anonymous credentials (Chase and Lysyanskaya, Crypto 2006) relies on general non-interactive proofs for NP-complete languages of size $k \Omega(2^{L})$. We revise the entire approach to constructing anonymous credentials and identify \emph{randomizable} zero-knowledge proof of knowledge systems as the key building block. We formally define the notion of randomizable non-interactive zero-knowledge proofs, and give the first construction by showing how to appropriately rerandomize Groth and Sahai (Eurocrypt 2008) proofs. We show that such proof systems, in combination with an appropriate authentication scheme and a few other protocols, allow us to construct delegatable anonymous credentials. Finally, we instantiate these building blocks under appropriate assumptions about groups with bilinear maps.
2008
EPRINT
Delegating Capabilities in Predicate Encryption Systems
In predicate encryption systems, given a capability, one can evaluate one or more predicates on the encrypted data, while all other information about the plaintext remains hidden. We consider the first such systems to permit delegation of capabilities. In a system that supports delegation, a user Alice who has a capability can delegate to Bob a more restrictive capability, which allows him to learn less about encrypted data than she did. We formally define delegation in predicate encryption systems, and propose a new security definition for delegation. In addition, we present an efficient construction supporting conjunctive queries. The security of our construction can be reduced to the general 3-party Bilinear Diffie-Hellman assumption, and the Bilinear Decisional Diffie-Hellman assumption in composite-order bilinear groups.
2008
EPRINT
Democratic Group Signatures with Threshold Traceability
Recently, democratic group signatures(DGSs) particularly catch our attention due to their great flexibilities, \emph{i.e}., \emph{no group manager}, \emph{anonymity}, and \emph{individual traceability}. In existing DGS schemes, individual traceability says that any member in the group can reveal the actual signer's identity from a given signature. In this paper, we formally describe the definition of DGS, revisit its security notions by strengthening the requirement for the property of traceability, and present a concrete DGS construction with $(t, n)$-\emph{threshold traceability} which combines the concepts of group signatures and of threshold cryptography. The idea behind the $(t, n)$-threshold traceability is to distribute between $n$ group members the capability of tracing the actual signer such that any subset of not less than $t$ members can jointly reconstruct a secret and reveal the identity of the signer while preserving security even in the presence of an active adversary which can corrupt up to $t-1$ group members.
2008
EPRINT
Detection of Algebraic Manipulation with Applications to Robust Secret Sharing and Fuzzy Extractors
Consider an abstract storage device $\Sigma(\G)$ that can hold a single element $x$ from a fixed, publicly known finite group $\G$. Storage is private in the sense that an adversary does not have read access to $\Sigma(\G)$ at all. However, $\Sigma(\G)$ is non-robust in the sense that the adversary can modify its contents by adding some offset $\Delta \in \G$. Due to the privacy of the storage device, the value $\Delta$ can only depend on an adversary's {\em a priori} knowledge of $x$. We introduce a new primitive called an {\em algebraic manipulation detection} (AMD) code, which encodes a source $s$ into a value $x$ stored on $\Sigma(\G)$ so that any tampering by an adversary will be detected, except with a small error probability $\delta$. We give a nearly optimal construction of AMD codes, which can flexibly accommodate arbitrary choices for the length of the source $s$ and security level $\delta$. We use this construction in two applications: \begin{itemize} \item We show how to efficiently convert any linear secret sharing scheme into a {\em robust secret sharing scheme}, which ensures that no \emph{unqualified subset} of players can modify their shares and cause the reconstruction of some value $s'\neq s$. \item We show how to build nearly optimal {\em robust fuzzy extractors} for several natural metrics. Robust fuzzy extractors enable one to reliably extract and later recover random keys from noisy and non-uniform secrets, such as biometrics, by relying only on {\em non-robust public storage}. In the past, such constructions were known only in the random oracle model, or required the entropy rate of the secret to be greater than half. Our construction relies on a randomly chosen common reference string (CRS) available to all parties. \end{itemize}
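The flavour of a polynomial-based AMD encoding can be seen in the following toy sketch (simplified parameters over a small prime field; the paper's construction and its exact security bound are more general): a source $s = (s_1, \ldots, s_d)$ is stored as $(s, x, x^{d+2} + \sum_i s_i x^i)$ for a random $x$, and a fixed additive offset applied to the stored triple goes undetected with probability at most about $(d+1)/q$.

    import random

    p, d = 101, 3        # toy prime field and source length; p must not divide d + 2

    def encode(s):
        x = random.randrange(p)
        tag = (pow(x, d + 2, p) + sum(si * pow(x, i + 1, p) for i, si in enumerate(s))) % p
        return list(s), x, tag

    def verify(s, x, tag):
        return tag == (pow(x, d + 2, p) + sum(si * pow(x, i + 1, p) for i, si in enumerate(s))) % p

    # A fixed additive offset (chosen before x is drawn) should be detected almost always.
    trials, detected = 2000, 0
    for _ in range(trials):
        s = [random.randrange(p) for _ in range(d)]
        ds, dx, dtag = [random.randrange(p) for _ in range(d)], random.randrange(p), random.randrange(p)
        stored = encode(s)
        tampered = ([(a + b) % p for a, b in zip(stored[0], ds)],
                    (stored[1] + dx) % p, (stored[2] + dtag) % p)
        if not verify(*tampered):
            detected += 1
    print(detected, "of", trials, "random offsets detected (expect roughly 96% or more here)")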
2008
EPRINT
Deterministic Encryption: Definitional Equivalences and Constructions without Random Oracles
We strengthen the foundations of deterministic public-key encryption via definitional equivalences and standard-model constructs based on general assumptions. Specifically we consider seven notions of privacy for deterministic encryption, including six forms of semantic security and an indistinguishability notion, and show them all equivalent. We then present a deterministic scheme for the secure encryption of uniformly and independently distributed messages based solely on the existence of trapdoor one-way permutations. We show a generalization of the construction that allows secure deterministic encryption of independent high-entropy messages. Finally we show relations between deterministic and standard (randomized) encryption.
2008
EPRINT
DISH: Distributed Self-Healing in Unattended Sensor Networks
Unattended wireless sensor networks (UWSNs) operating in hostile environments face the risk of compromise. Unable to off-load collected data to a sink or some other trusted external entity, sensors must protect themselves by attempting to mitigate potential compromise and safeguard their data. In this paper, we focus on techniques that allow unattended sensors to recover from intrusions by soliciting help from peer sensors. We define a realistic adversarial model and show how certain simple defense methods can result in sensors re-gaining secrecy and authenticity of collected data, despite the adversary's efforts to the contrary. We present an extensive analysis and a set of simulation results that support our observations and demonstrate the effectiveness of the proposed techniques.
2008
EPRINT
Disjunctive Multi-Level Secret Sharing
A disjunctive multi-level secret sharing scheme divides users into different levels. Each level L is associated with a threshold t_L, and a group of users can only recover the secret if, for some L, there are at least t_L users at levels 0, ..., L in the group. We present a simple ideal disjunctive multi-level secret sharing scheme -- in fact, the simplest and most direct scheme to date. It is the first polynomial-time solution that allows the dealer to add new users dynamically. Our solution is by far the most efficient; the dealer must perform O(t) field operations per user, where t is the highest threshold in the system. We demonstrate the simplicity of our scheme by extending our construction into a distributed commitment scheme using standard techniques.
2008
EPRINT
Distinguishing and Forgery Attacks on Alred and Its AES-based Instance Alpha-MAC
In this paper, we present new distinguishers of the MAC construction \textsc{Alred} and its specific instance \textsc{Alpha}-MAC based on AES, which was proposed by Daemen and Rijmen in 2005. For the \textsc{Alred} construction, we describe a general distinguishing attack which leads directly to a forgery attack. The complexity is $2^{64.5}$ chosen messages and $2^{64.5}$ queries with success probability 0.63. We also use a two-round collision differential path for \textsc{Alpha}-MAC to construct a new distinguisher requiring about $2^{65.5}$ queries. Most importantly, the new distinguisher can be used to recover the internal state, which is an equivalent secret subkey, and leads to a second preimage attack. Moreover, the distinguisher on the \textsc{Alred} construction is also applicable to MACs based on the CBC and CFB encryption modes.
2008
EPRINT
Distinguishing Attack and Second-Preimage Attack on the CBC-like MACs
In this paper, we first present a new distinguisher for CBC-MAC based on a block cipher in Cipher Block Chaining (CBC) mode. It can also be used to distinguish other CBC-like MACs from random functions. The main results of this paper are second-preimage attacks on CBC-MAC and CBC-like MACs, including TMAC, OMAC, CMAC, PC-MAC and MACs based on three-key enciphered CBC mode. Instead of exhaustive search, this attack can be performed with birthday-attack complexity.
2008
EPRINT
Divisible On-line/Off-line Signatures
On-line/Off-line signatures are used in a particular scenario where the signer must respond quickly once the message to be signed is presented. The idea is to split the signing procedure into two phases: the off-line and on-line phases. The signer can do some pre-computation in the off-line phase, before he sees the message to be signed. In most of these schemes, when signing a message $m$, a partial signature of $m$ is computed in the off-line phase. We call this part of the signature the off-line signature token of message $m$. In some special applications, the off-line signature tokens might be exposed in the off-line phase. For example, some signers might want to transmit off-line signature tokens in the off-line phase in order to save on-line transmission bandwidth. Another example is the case of on-line/off-line threshold signature schemes, where off-line signature tokens are unavoidably exposed to all the players in the off-line phase. This paper discusses this exposure problem and introduces a new notion: divisible on-line/off-line signatures, in which exposure of off-line signature tokens in the off-line phase is allowed. An efficient construction of this type of signature is also proposed. Furthermore, we show an important application of divisible on-line/off-line signatures in the area of on-line/off-line threshold signatures.
2008
EPRINT
Double-Base Number System for Multi-Scalar Multiplications
The Joint Sparse Form is currently the standard representation system to perform multi-scalar multiplications of the form $[n]P+[m]Q$. We introduce the concept of a Joint Double-Base Chain, a generalization of the Double-Base Number System to represent $n$ and $m$ simultaneously. This concept is relevant because of the high redundancy of Double-Base systems, which ensures that we can find a chain of reasonable length that uses exactly the same terms to compute both $n$ and $m$. Furthermore, we discuss an algorithm to produce such a Joint Double-Base Chain. Because of its simplicity, this algorithm is straightforward to implement, efficient, and also quite easy to analyze. Namely, in our main result we show that the average number of terms in the expansion is less than $0.3945\log_2 n$. With respect to the Joint Sparse Form, this induces a reduction by more than $20\%$ of the number of additions. As a consequence, the total number of multiplications required for a multi-scalar multiplication is minimal for our method, across all the methods using two precomputations, $P+Q$ and $P-Q$. This is the case even with coordinate systems offering very cheap doublings, in contrast with recent results on scalar multiplications. Several variants are discussed, including methods using more precomputed points and a generalization relevant for Koblitz curves. Our second contribution is a new way to evaluate $\overline\phi$, the dual endomorphism of the Frobenius. Namely, we propose formulae to compute $\pm\overline\phi(P)$ with at most 2 multiplications and 2 squarings in $\F_{2^d}$. This represents a speed-up of about $50\%$ with respect to the fastest techniques known. This has very concrete consequences on scalar and multi-scalar multiplications on Koblitz curves.
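As a toy illustration of the flavour of such a joint representation (a made-up example, not taken from the paper): using the common terms $2\cdot3^2$, $2\cdot3$ and $1$, one can write $17 = 2\cdot3^2 - 1$ and $11 = 2\cdot3^2 - 2\cdot3 - 1$, so that $[17]P + [11]Q = [2\cdot3^2](P+Q) - [2\cdot3]Q - (P+Q)$; since the terms form a chain under divisibility, the expression is evaluated Horner-style with a few doublings, triplings and additions of $P+Q$ and $Q$.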
2008
EPRINT
Dynamic Provable Data Possession
As online storage-outsourcing services (e.g., Amazon's S3) and resource-sharing networks (e.g., peer-to-peer and grid networks) have become popular, the problem of efficiently proving the integrity of data stored at untrusted servers has received increased attention. Ateniese et al. \cite{pdp} formalized this problem with a model called provable data possession (PDP). In this model, data is preprocessed by the client and then sent to an untrusted server for storage. The client can then challenge the server to prove that the data has not been tampered with or deleted (without sending the actual data). However, their PDP scheme applies only to static (or append-only) files. In reality, many outsourced storage applications (including network file systems and outsourced databases) need to handle dynamic data. This paper presents a definitional framework and an efficient construction for dynamic provable data possession (DPDP), which extends the PDP model to support provable updates to stored data (modifications to a block or insertion/deletion of a block). To achieve efficient DPDP, we use a new version of authenticated dictionaries based on rank information. The price of dynamic updates is a performance change from $O(1)$ to $O(\log{n})$, for a file consisting of $n$ blocks, while maintaining the same probability of detection. Yet, our experiments show that this price is very low in practice, and hence our system is applicable to real scenarios. Our contributions can be summarized as defining the DPDP framework formally, and constructing the first fully dynamic PDP solution, which performs verification without downloading actual data and is very efficient. We also show how our DPDP scheme can be extended to construct complete file systems and version control systems (e.g., CVS) at untrusted servers, so that it can be used in complex outsourcing applications.
2008
EPRINT
Dynamic SHA-2
In this paper I describe the construction of the Dynamic SHA-2 family of cryptographic hash functions. They are built with design components from the SHA-2 family, but the bits of the message are used as parameters of the functions G and R and of the ROTR operation in the new hash function. This enables a novel design principle: when the message changes, the computation changes as well. This makes the system resistant against all extant attacks.
2008
EPRINT
Dynamic Threshold Cryptosystem without Group Manager
In dynamic networks with flexible memberships, group signatures and distributed signatures are an important problem. Dynamic threshold cryptosystems are best suited to realize distributed signatures in dynamic (e.g. meshed) networks. Without a group manager or a trusted third party, even more flexible scenarios can be realized. Gennaro et al. showed that it is possible to dynamically increase the size of the signer group without altering the public key. We extend this idea by removing members from the group, also without changing the public key. This is an important feature for dynamic groups, since it is very common, e.g. in meshed networks, that members leave a group. Gennaro et al. used RSA and bi-variate polynomials for their scheme. In contrast, we develop a DL-based scheme that uses ideas from the field of proactive secret sharing (PSS). One advantage of our scheme is the possibility to use elliptic curve cryptography and thereby decrease the communication and computation complexity through a smaller security parameter. Our proposal is an efficient threshold cryptosystem that is able to adapt the group size in both directions. Since it is not possible to realize a non-interactive scheme with the ability to remove members (while the public key stays unchanged), we realize an interactive scheme whose communication efficiency is highly optimized to compete with non-interactive schemes. Our contribution also includes a security proof for our threshold scheme.
2008
EPRINT
ECM on Graphics Cards
This paper reports record-setting performance for the elliptic-curve method of integer factorization: for example, 926.11 curves/second for ECM stage 1 with B1=8192 for 280-bit integers on a single PC. The state-of-the-art GMP-ECM software handles 124.71 curves/second for ECM stage 1 with B1=8192 for 280-bit integers using all four cores of a 2.4 GHz Core 2 Quad Q6600. The extra speed takes advantage of extra hardware, specifically two NVIDIA GTX 295 graphics cards, using a new ECM implementation introduced in this paper. Our implementation uses Edwards curves, relies on new parallel addition formulas, and is carefully tuned for the highly parallel GPU architecture. On a single GTX 295 the implementation performs 41.88 million modular multiplications per second for a general 280-bit modulus. GMP-ECM, using all four cores of a Q6600, performs 13.03 million modular multiplications per second. This paper also reports speeds on other graphics processors: for example, 2414 280-bit elliptic-curve scalar multiplications per second on an older NVIDIA 8800 GTS (G80), again for a general 280-bit modulus. For comparison, the CHES 2008 paper ``Exploiting the Power of GPUs for Asymmetric Cryptography'' reported 1412 elliptic-curve scalar multiplications per second on the same graphics processor despite having fewer bits in the scalar (224 instead of 280), fewer bits in the modulus (224 instead of 280), and a special modulus (2^{224}-2^{96}+1).
2008
EPRINT
ECM using Edwards curves
This paper introduces GMP-EECM, a fast implementation of the elliptic-curve method of factoring integers. GMP-EECM is based on, but faster than, the well-known GMP-ECM software. The main changes are as follows: (1) use Edwards curves instead of Montgomery curves; (2) use twisted inverted Edwards coordinates; (3) use signed-sliding-window addition chains; (4) batch primes to increase the window size; (5) choose curves with small parameters $a,d,X_1,Y_1,Z_1$; (6) choose curves with larger torsion.
2008
EPRINT
Efficient and Generalized Pairing Computation on Abelian Varieties
In this paper, we propose a new method for constructing a bilinear pairing over (hyper)elliptic curves, which we call the R-ate pairing. This pairing is a generalization of the Ate and Ate_i pairing, and also improves efficiency of the pairing computation. Using the R-ate pairing, the loop length in Miller's algorithm can be as small as ${\rm log}(r^{1 / \phi(k)})$ for some pairing-friendly elliptic curves which have not reached this lower bound. Therefore we obtain from 29 % to 69 % savings in overall costs compared to the Ate_i pairing. On supersingular hyperelliptic curves of genus 2, we show that this approach makes the loop length in Miller's algorithm shorter than that of the Ate pairing.
2008
EPRINT
Efficient arithmetic on elliptic curves using a mixed Edwards-Montgomery representation
From the viewpoint of x-coordinate-only arithmetic on elliptic curves, switching between the Edwards model and the Montgomery model is quasi cost-free. We use this observation to speed up Montgomery's algorithm, reducing the complexity of a doubling step from 2M + 2S to 1M + 3S for suitably chosen curve parameters.
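For context, the 2M + 2S doubling cost that this work improves on comes from the classical x-only Montgomery ladder. The sketch below shows that standard ladder step; Curve25519 parameters are used only as a familiar example and are an assumption of this sketch, not taken from the paper.

```python
# Sketch of standard x-only Montgomery arithmetic in XZ coordinates, to show
# where the classical 2M + 2S doubling cost comes from.  Curve25519 parameters
# are used only as a familiar example; they are not from the paper.
p = 2**255 - 19
A24 = (486662 + 2) // 4                       # (A + 2)/4 for A = 486662

def xdbl(X1, Z1):
    """Doubling: 2 multiplications, 2 squarings, 1 multiplication by a24."""
    a = (X1 + Z1) % p;  aa = a * a % p        # S
    b = (X1 - Z1) % p;  bb = b * b % p        # S
    c = (aa - bb) % p
    X2 = aa * bb % p                          # M
    Z2 = c * (bb + A24 * c) % p               # M (plus the mul by a constant)
    return X2, Z2

def xadd(XP, ZP, XQ, ZQ, Xdiff):
    """Differential addition: x(P+Q) from x(P), x(Q) and affine x(P-Q)."""
    u = (XP - ZP) * (XQ + ZQ) % p
    v = (XP + ZP) * (XQ - ZQ) % p
    X3 = (u + v) * (u + v) % p
    Z3 = Xdiff * (u - v) * (u - v) % p
    return X3, Z3

def ladder(k, x):
    """Montgomery ladder computing x([k]P) for P with x-coordinate x."""
    X0, Z0 = 1, 0                             # point at infinity
    X1, Z1 = x, 1                             # P
    for bit in bin(k)[2:]:
        if bit == '0':
            X1, Z1 = xadd(X0, Z0, X1, Z1, x)
            X0, Z0 = xdbl(X0, Z0)
        else:
            X0, Z0 = xadd(X0, Z0, X1, Z1, x)
            X1, Z1 = xdbl(X1, Z1)
    return X0 * pow(Z0, p - 2, p) % p

# Self-consistency check: x([4]P) via the ladder equals two doublings of P.
x = 9
X, Z = xdbl(*xdbl(x, 1))
assert ladder(4, x) == X * pow(Z, p - 2, p) % p
```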
2008
EPRINT
Efficient Asynchronous Multiparty Computation with Optimal Resilience
We propose an efficient information-theoretically secure asynchronous multiparty computation (AMPC) protocol with optimal fault tolerance, i.e., with $n = 3t+1$, where $n$ is the total number of parties and $t$ is the number of parties that can be under the influence of a Byzantine (active) adversary ${\cal A}_t$ having unbounded computing power. Our protocol communicates $O(n^5 \kappa)$ bits per multiplication and involves a negligible error probability of $2^{- O(\kappa)}$, where $\kappa$ is the error parameter. To the best of our knowledge, the only known AMPC protocol with $n = 3t+1$ providing information theoretic security with negligible error probability is due to \cite{BenOrPODC94AsynchronousMPC}, which communicates $\Omega(n^{11} \kappa^4)$ bits and A-Casts $\Omega(n^{11} \kappa^2 \log(n))$ bits per multiplication. Here A-Cast is an asynchronous broadcast primitive, which allows a party to send the same information to all other parties identically. Thus our AMPC protocol shows significant improvement in communication complexity over the AMPC protocol of \cite{BenOrPODC94AsynchronousMPC}. As a tool for our AMPC protocol, we introduce a new asynchronous primitive called Asynchronous Complete Verifiable Secret Sharing (ACVSS), which is the first of its kind and is of independent interest. For designing our ACVSS, we employ a new asynchronous verifiable secret sharing (AVSS) protocol which is better than all known communication-efficient AVSS protocols with $n=3t+1$.
2008
EPRINT
Efficient Chosen Ciphertext Secure Public Key Encryption under the Computational Diffie-Hellman Assumption
Recently Cash, Kiltz, and Shoup showed a variant of the Cramer-Shoup (CS) public key encryption (PKE) scheme whose chosen-ciphertext (CCA) security relies on the computational Diffie-Hellman (CDH) assumption. The cost of this high security is that the ciphertexts are much longer than in the CS scheme. In this paper, we show how to achieve CCA security under the CDH assumption without increasing the size of ciphertexts. We further show a more efficient scheme under the hashed Diffie-Hellman (HDH) assumption such that the size of ciphertexts is the same as that of the Kurosawa-Desmedt (KD) scheme. Note that the CDH and HDH assumptions are weaker than the decisional Diffie-Hellman assumption, on which the CS and KD schemes rely. Both of our schemes are based on a certain broadcast encryption (BE) scheme, while the Cash-Kiltz-Shoup scheme is based on a different paradigm called the Twin DH problem. As an independent interest, we also show a generic method of constructing CCA-secure PKE schemes from BE schemes such that the existing CCA-secure constructions can be viewed as special cases.
2008
EPRINT
Efficient Conversion of Secret-shared Values Between Different Fields
We show how to effectively convert a secret-shared bit $b$ over a prime field to another field. If initially given a random replicated secret share, this conversion can be done at the cost of revealing one secret-shared value. By using a pseudo-random function, it is possible to convert arbitrarily many bit values from one initial random replicated share. Furthermore, we generalize the conversion to handle general values of bounded size.
2008
EPRINT
Efficient Fully-Simulatable Oblivious Transfer
Oblivious transfer, first introduced by Rabin, is one of the basic building blocks of cryptographic protocols. In an oblivious transfer (or more exactly, in its 1-out-of-2 variant), one party known as the sender has a pair of messages and the other party known as the receiver obtains one of them. Somewhat paradoxically, the receiver obtains exactly one of the messages (and learns nothing of the other), and the sender does not know which of the messages the receiver obtained. Due to its importance as a building block for secure protocols, the efficiency of oblivious transfer protocols has been extensively studied. However, to date, there are almost no known oblivious transfer protocols that are secure in the presence of \emph{malicious adversaries} under the \emph{real/ideal model simulation paradigm} (without using general zero-knowledge proofs). Thus, \emph{efficient protocols} that reach this level of security are of great interest. In this paper we present efficient oblivious transfer protocols that are secure according to the ideal/real model simulation paradigm. We achieve constructions under the DDH, $N$th residuosity and quadratic residuosity assumptions, as well as under the assumption that homomorphic encryption exists.
2008
EPRINT
Efficient Hyperelliptic Arithmetic using Balanced Representation for Divisors
We discuss arithmetic in the Jacobian of a hyperelliptic curve $C$ of genus $g$. The traditional approach is to fix a point $P_\infty \in C$ and represent divisor classes in the form $E - d(P_\infty)$ where $E$ is effective and $0 \le d \le g$. We propose a different representation which is balanced at infinity. The resulting arithmetic is more efficient than previous approaches when there are 2 points at infinity. This is a corrected and extended version of the article presented in ANTS 2008. We include an appendix with explicit formulae to compute a very efficient `baby step' in genus 2 hyperelliptic curves given by an imaginary model.
2008
EPRINT
Efficient ID-Based Signcryption Schemes for Multiple Receivers
This paper puts forward new efficient constructions for multi-receiver signcryption in the identity-based setting. We consider a scenario where a user wants to securely send a message to a dynamically changing subset of the receivers in such a way that non-members of this subset cannot learn the message. The obvious solution is to transmit an individually signcrypted message to every member of the subset. This requires a very long transmission (the number of receivers times the length of the message) and high computation cost. Another simple solution is to provide every possible subset of receivers with a key. This requires every user to store a huge number of keys, so storage efficiency is compromised. The goal of this paper is to provide solutions which are efficient in all three measures, i.e. transmission length, key storage and computation at both ends. We propose three new schemes that achieve both confidentiality and authenticity simultaneously in this setting and are the most efficient schemes to date in the parameters described above. The first construction achieves optimal computational and storage cost. The second construction achieves a much smaller transmission length than the previous scheme (down to a ratio of one-third), while still maintaining optimal storage cost. The third scheme breaks the barrier of ciphertext length of linear order in the number of receivers, and achieves constant-size ciphertext, independent of the size of the receiver set. This is the first multi-receiver signcryption scheme to do so. We support all three schemes with security proofs under a precisely defined formal security model.
2008
EPRINT
Efficient Key Distribution Schemes for Large Scale Mobile Computing Applications
In emerging networks consisting of large-scale deployments of mobile devices, efficient security mechanisms are required to facilitate cryptographic authentication. While computation and bandwidth overheads are expensive for mobile devices, the cost of storage resources continues to fall rapidly. We propose a simple novel key predistribution scheme, \textit{key subset and symmetric certificates} (KSSC), which can take good advantage of inexpensive storage resources and has many compelling advantages over other approaches for facilitating ad hoc establishment of pairwise secrets in mobile computing environments. We argue that a combination of KSSC with a variant of an elegant KDS proposed by Leighton and Micali is an appealing choice for securing large scale deployments of mobile devices.
2008
EPRINT
Efficient Lossy Trapdoor Functions based on the Composite Residuosity Assumption
Lossy trapdoor functions (Peikert and Waters, STOC '08) are an intriguing and powerful cryptographic primitive. Their main applications are simple and black-box constructions of chosen-ciphertext secure encryption, as well as collision-resistant hash functions and oblivious transfer. An appealing property of lossy trapdoor functions is the ability to realize them from a variety of number-theoretic assumptions, such as the hardness of the decisional Diffie-Hellman problem, and the worst-case hardness of lattice problems. In this short note we propose a new construction of lossy trapdoor functions based on the Damg{\aa}rd-Jurik encryption scheme (whose security relies on Paillier's decisional composite residuosity assumption). Our approach also yields a direct construction of all-but-one trapdoor functions, an important ingredient of the Peikert-Waters encryption scheme. The functions we propose enjoy short public descriptions, which in turn yield more efficient encryption schemes.
2008
EPRINT
Efficient One-round Key Exchange in the Standard Model
We consider one-round identity-based key exchange protocols secure in the standard model. The security analysis uses the powerful security model of Canetti and Krawczyk and a natural extension of it to the ID-based setting. It is shown how KEMs can be used in a generic way to obtain two different protocol designs with progressively stronger security guarantees. A detailed analysis of the performance of the protocols is included; surprisingly, when instantiated with specific KEM constructions, the resulting protocols are competitive with the best previous schemes that have proofs only in the random oracle model.
2008
EPRINT
Efficient Perfectly Reliable and Secure Communication Tolerating Mobile Adversary
We study the problems of Perfectly Reliable Message Transmission (PRMT) and Perfectly Secure Message Transmission (PSMT) between two nodes S and R in an undirected synchronous network, a part of which is under the influence of an all-powerful mobile Byzantine adversary. In ACISP 2007, Srinathan et al. proved that the connectivity requirement for PSMT protocols is the same for both static and mobile adversaries, thus showing that mobility of the adversary has no effect on the possibility of PSMT (and also PRMT) protocols. Similarly, in CRYPTO 2004, Srinathan et al. showed that the lower bound on the communication complexity of any multiphase PSMT protocol is the same for static and mobile adversaries. The authors also designed an $O(t)$-phase protocol (a phase is a send from S to R, or from R to S, or both) satisfying this bound, where $t$ is the maximum number of nodes corrupted by the Byzantine adversary. In this work, we design a three-phase bit-optimal PSMT protocol using a novel technique, whose communication complexity matches the lower bound proved in CRYPTO 2004, thus significantly reducing the number of phases from $O(t)$ to three. Further, using our novel technique, we design a three-phase bit-optimal PRMT protocol which achieves reliability with constant factor overhead against a mobile adversary. These are the first constant-phase optimal PRMT and PSMT protocols against a mobile Byzantine adversary. We also characterize PSMT protocols in directed networks tolerating a mobile adversary. All existing PRMT and PSMT protocols abstract the paths between S and R as wires, neglecting the intermediate nodes in the paths. However, this causes significant overestimation in the communication complexity as well as the round complexity of protocols (a round is different from a phase: a round is a send from one node to its immediate neighbor in the network). Here, we consider the underlying paths as a whole instead of abstracting them as wires and derive a tight bound on the number of rounds required to achieve reliable communication from S to R tolerating a mobile adversary with arbitrary roaming speed (by roaming speed we mean the speed with which the adversary changes the set of corrupted nodes). We show how our constant-phase PRMT and PSMT protocols can be easily adapted to design round-optimal and bit-optimal PRMT and PSMT protocols, provided the network is given as a collection of vertex-disjoint paths.
2008
EPRINT
Efficient Post-quantum Blind Signatures
We present the first efficient post-quantum blind signature scheme. Our scheme is provably secure in the random oracle model, unconditionally blind, and round-optimal. We propose it as a replacement for current blind signature schemes for the post-quantum era. Its basis of security is a problem related to finding short vectors in a lattice.
2008
EPRINT
Efficient Quantum-immune Blind Signatures
We present the first quantum-immune blind signature scheme. Our scheme is provably secure, efficient, and round-optimal.
2008
EPRINT
Efficient Rational Secret Sharing in the Standard Communication Model
We propose a new methodology for rational secret sharing leading to various instantiations that are simple and efficient in terms of computation, share size, and round complexity. Our protocols do not require physical assumptions or simultaneous channels, and can even be run over asynchronous, point-to-point networks. Of additional interest, we propose new equilibrium notions for this setting (namely, computational versions of strict Nash equilibrium and stability with respect to trembles), show relations between them, and prove that our protocols satisfy them.
2008
EPRINT
Efficient Receipt-Free Ballot Casting Resistant to Covert Channels
We present an efficient, covert-channel-resistant, receipt-free ballot casting scheme that can be used by humans without trusted hardware. In comparison to the recent Moran-Naor proposal, our scheme produces a significantly shorter ballot, prevents covert channels in the ballot, and opts for statistical soundness rather than everlasting privacy (achieving both seems impossible). The human interface remains the same, based on Neff's MarkPledge scheme, and requires of the voter only short-string operations.
2008
EPRINT
Efficient RFID authentication protocols based on pseudorandom sequence generators
In this paper, we introduce a new class of PRSGs, called \emph{partitioned pseudorandom sequence generators} (PPRSGs), and propose an RFID authentication protocol using a PPRSG, called the {\em $S$-protocol}. Since most existing stream ciphers can be regarded as secure PPRSGs, and stream ciphers outperform other types of symmetric key primitives such as block ciphers and hash functions in terms of power, performance and gate size, the $S$-protocol is expected to be suitable for use in highly constrained environments such as RFID systems. We present a formal proof that guarantees resistance of the $S$-protocol to desynchronization and tag-impersonation attacks. Specifically, we reduce the availability of the $S$-protocol to the pseudorandomness of the underlying PPRSG, and the security of the protocol to its availability. Finally, we give a modification of the $S$-protocol, called the $S^*$-protocol, that provides mutual authentication of tag and reader.
2008
EPRINT
Efficient Sequential Aggregate Signed Data
We generalize the concept of sequential aggregate signatures (SAS), proposed by Lysyanskaya, Micali, Reyzin, and Shacham (LMRS) at Eurocrypt~2004, to a new primitive called "sequential aggregate signed data" (SASD) that tries to minimize the total amount of transmitted data, rather than just signature length. We present SAS and SASD schemes that offer numerous advantages over the LMRS scheme. Most importantly, our schemes can be instantiated with uncertified claw-free permutations, thereby allowing implementations based on low-exponent RSA and factoring, and drastically reducing signing and verification costs. Our schemes support aggregation of signatures under keys of different lengths, and the SASD scheme even has as little as 160~bits of bandwidth overhead. Finally, we present a multi-signed data scheme that, when compared to the state-of-the-art multi-signature schemes, is the first scheme with non-interactive signature generation not based on pairings. All of our constructions are proved secure in the random oracle model based on families of claw-free permutations.
2008
EPRINT
Efficient Tweakable Enciphering Schemes from (Block-Wise) Universal Hash Functions
This paper describes several constructions of tweakable strong pseudorandom permutations (SPRPs) built from different modes of operations of a block cipher and suitable universal hash functions. For the electronic codebook (ECB) based construction, an invertible blockwise universal hash function is required. We simplify an earlier construction of such a function described by Naor and Reingold. The other modes of operations considered are the counter mode and the output feedback (OFB) mode. All the constructions make the same number of block cipher calls and the same number of multiplications. Combined with a class of polynomials defined by Bernstein, the new constructions provide the currently best known algorithms for the important practical problem of disk encryption.
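As a rough illustration of the kind of component such modes are combined with, here is a minimal polynomial-evaluation (Horner) hash over GF(2^128). The reduction polynomial and block conventions are arbitrary choices for this sketch; it is neither the paper's blockwise-universal construction nor Bernstein's specific polynomial class.

```python
# Minimal sketch of a polynomial-evaluation universal hash over GF(2^128),
# the kind of component these constructions combine with a counter/OFB mode.
# The reduction polynomial and conventions are arbitrary choices here.
R = (1 << 128) | 0x87          # x^128 + x^7 + x^2 + x + 1, a standard choice

def gf128_mul(a, b):
    """Carry-less multiplication followed by reduction modulo R."""
    r = 0
    for i in range(128):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(r.bit_length() - 1, 127, -1):
        if (r >> i) & 1:
            r ^= R << (i - 128)
    return r

def poly_hash(key, blocks):
    """Horner evaluation m_1*k^t + m_2*k^(t-1) + ... + m_t*k in GF(2^128)."""
    acc = 0
    for m in blocks:
        acc = gf128_mul(acc ^ m, key)
    return acc

# Two distinct t-block messages collide under a random key with
# probability at most t/2^128 (the difference polynomial has <= t roots).
k = 0x0123456789abcdef0123456789abcdef
print(hex(poly_hash(k, [1, 2, 3])))
```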
2008
EPRINT
Elliptic Curve Cryptography: The Serpentine Course of a Paradigm Shift
Over a period of sixteen years elliptic curve cryptography went from being an approach that many people mistrusted or misunderstood to being a public key technology that enjoys almost unquestioned acceptance. We describe the sometimes surprising twists and turns in this paradigm shift, and compare this story with the commonly accepted Ideal Model of how research and development function in cryptography. We also discuss to what extent the ideas in the literature on "social construction of technology" can contribute to a better understanding of this history.
2008
EPRINT
Elliptic Curves Scalar Multiplication Combining Multi-base Number Representation with Point halving
Elliptic curve scalar multiplication over finite fields is an attractive research area that has received much attention in recent years, and research is still in progress to improve elliptic curve cryptography implementations and reduce their complexity. The elliptic curve point-halving algorithm, and later the double-base chain and the step multi-base chain, are among the efficient techniques offered in this field. Our paper proposes a new algorithm combining the step multi-base number representation with point halving. We extend the work of K. W. Wong, which combined the double-base chain with the point-halving technique. The experimental results show that our contribution enhances elliptic curve scalar multiplication.
2008
EPRINT
Elliptic divisibility sequences and the elliptic curve discrete logarithm problem
We use properties of the division polynomials of an elliptic curve $E$ over a finite field $\mathbb{F}_q$ together with a pure result about elliptic divisibility sequences from the 1940s to construct a very simple alternative to the Menezes-Okamoto-Vanstone algorithm for solving the elliptic curve discrete logarithm problem in the case where $\#E(\mathbb{F}_q) = q-1$.
2008
EPRINT
Embedding in Two Least Significant Bits with Wet Paper Coding
In this paper, we present three embedding schemes that extend least-significant-bit overwriting to both of the two lowest bit planes in digital images. Our approaches are inspired by the work of Fridrich et al. [8], who proposed wet paper coding as an efficient method for steganographic schemes. Our new work generalizes it to embedding in the two least significant bits; that is, we combine two novel extensions of least-significant-bit embedding, and the double-layered embedding developed in [16], with wet paper coding, respectively. The proposed schemes improve steganographic security and are less vulnerable to steganalytic attacks compared with the original schemes, in which the selection channel is shared between the sender and the recipient.
2008
EPRINT
Encrypting Proofs on Pairings and Its Application to Anonymity for Signatures
We give a generic methodology to unlinkably anonymize cryptographic schemes in bilinear groups using the Boneh-Goh-Nissim cryptosystem and NIZK proofs in the line of Groth, Ostrovsky and Sahai. We illustrate our techniques by presenting the first instantiation of anonymous proxy signatures, a recent primitive unifying the functionalities and strong security notions of group and proxy signatures. To construct our scheme, we introduce various efficient NIZK and witness-indistinguishable proofs, and a relaxed version of simulation soundness.
2008
EPRINT
Encryption-On-Demand: Practical and Theoretical Considerations
Alice and Bob may develop a spontaneous, yet infrequent, need for online confidential exchange. They may be served by an 'encryption-on-demand' (EoD) service which enables them to communicate securely with no prior preparations and no after-effects. We delineate a possible EoD service and describe some of its theoretical and practical features. The proposed framework is a website which could tailor-make an encryption package to be downloaded by both Alice and Bob for their ad-hoc use. The downloaded package will include the cryptographic algorithm and a unique key, which may be of any size; since Alice and Bob will not have to enter, or regard, the key per se, they would simply use the downloaded page to encrypt and decrypt their data. After their secure exchange, both Alice and Bob may ignore or discard the downloaded software, and restart the same procedure, with a different tailor-made package, exactly when needed. This framework allows for greater flexibility in managing the complexity aspects that ensure security. Alice and Bob will not have to know which encryption scheme they use. The server-based tailoring program could pseudo-randomly pick AES, DES, RSA, or ECC, select a short or long key, and otherwise greatly increase the variability that would have to be negotiated by a cryptanalyst. Encryption-on-demand is offered at http://youdeny.com. Features are described.
2008
EPRINT
Endomorphisms for faster elliptic curve cryptography on a large class of curves
Efficiently computable homomorphisms allow elliptic curve point multiplication to be accelerated using the Gallant-Lambert-Vanstone (GLV) method. We extend results of Iijima, Matsuo, Chao and Tsujii which give such homomorphisms for a large class of elliptic curves by working over quadratic extensions and demonstrate that these results can be applied to the GLV method. Our implementation runs in between 0.70 and 0.84 the time of the previous best methods for elliptic curve point multiplication on curves without small class number complex multiplication. Further speedups are possible when using more special curves.
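The identity these methods exploit is that an efficiently computable endomorphism phi acts on a point P of prime order r as multiplication by some scalar lambda mod r, so [k]P can be rewritten as [k1]P + [k2]phi(P) with shorter k1, k2. Below is a brute-force toy verification of that identity on the classical j = 0 curve family; the toy curve is an assumption for illustration only and is not the paper's quadratic-extension construction.

```python
# Tiny brute-force illustration of the identity behind GLV: an efficiently
# computable endomorphism phi acts on a point P of prime order r as
# multiplication by a scalar lambda.  Toy curve y^2 = x^3 + 7 over F_103
# (classical j = 0 example), not the paper's quadratic-extension curves.
p, b = 103, 7                                   # small prime with p % 3 == 1

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                             # P + (-P) = O
    if P == Q:
        lam = 3 * x1 * x1 * pow(2 * y1, p - 2, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, p - 2, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

points = [None] + [(x, y) for x in range(p) for y in range(p)
                   if (y * y - x ** 3 - b) % p == 0]
N = len(points)                                 # group order by counting
r = max(q for q in range(2, N + 1)              # largest prime factor of N
        if N % q == 0 and all(q % i for i in range(2, q)))
Q0 = next(Q for Q in points[1:] if mul(N // r, Q) is not None)
P = mul(N // r, Q0)                             # point of prime order r

beta = next(t for t in range(2, p) if pow(t, 3, p) == 1)   # cube root of 1 mod p
phi_P = (beta * P[0] % p, P[1])                 # phi(x, y) = (beta*x, y)

lam = next(l for l in range(r) if mul(l, P) == phi_P)
print("order r =", r, " lambda =", lam)
assert mul(lam, P) == phi_P
```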
2008
EPRINT
Entropy Bounds for Traffic Confirmation
Consider an open MIX-based anonymity system with $N$ participants and a batch size of $b$. Assume a global passive adversary who targets a given participant Alice with a set ${\cal R}_A$ of $m$ communicating partners. Let $H( {\cal R}_A \mid {\cal B}_t)$ denote the entropy of ${\cal R}_A$ as calculated by the adversary given $t$ message sets (MIX batches) where Alice is a sender in each message set. Our main result is to express the rate at which the anonymity of Alice (as measured by ${\cal R}_A$) degrades over time as a function of the main parameters $N$, $b$ and $m$.
2008
EPRINT
Enumeration of Balanced Symmetric Functions over GF(p)
It is proved that the construction and enumeration of balanced symmetric functions over GF(p) are equivalent to solving an equation system and enumerating its solutions. Furthermore, we give a lower bound on the number of balanced symmetric functions over GF(p), and this lower bound gives the best results known so far.
2008
EPRINT
Enumeration of Homogeneous Rotation Symmetric functions over GF(p)
Rotation symmetric functions have been used as components of different cryptosystems. This class of functions is invariant under circular translation of indices. In this paper, we carry out some enumeration of homogeneous rotation symmetric functions over GF(p). We give a formula counting homogeneous rotation symmetric functions when the greatest common divisor of the number of input variables n and the degree d is a prime power, which solves the open problem in [7].
2008
EPRINT
Essentially Optimal Universally Composable Oblivious Transfer
Oblivious transfer is one of the most important cryptographic primitives, both for theoretical and practical reasons, and several protocols have been proposed over the years. We provide the first oblivious transfer protocol which is simultaneously optimal on the following list of parameters. Security: it is universally composable. Trust in setup assumptions: only one of the parties needs to trust the setup (and some setup is needed for UC security). Trust in computational assumptions: only one of the parties needs to trust a computational assumption. Round complexity: it uses only two rounds. Communication complexity: it communicates O(1) group elements to transfer one out of two group elements. The big-O notation hides a constant of 32, meaning that the communication is probably not optimal, but it is essentially optimal in the sense that the overhead is only a constant factor. Our construction is based on pairings, and we assume the presence of a key registration authority.
2008
EPRINT
Explicit hard instances of the shortest vector problem
Building upon a famous result due to Ajtai, we propose a sequence of lattice bases with growing dimension, which can be expected to be hard instances of the shortest vector problem (SVP) and which can therefore be used to benchmark lattice reduction algorithms. The SVP is the basis of security for potentially post-quantum cryptosystems. We use our sequence of lattice bases to create a challenge, which may be helpful in determining appropriate parameters for these schemes.
2008
EPRINT
Exponentiation in pairing-friendly groups using homomorphisms
We present efficiently computable homomorphisms of the groups $G_2$ and $G_T$ for pairings $G_1 \times G_2 \rightarrow G_T$. This allows exponentiation in $G_2$ and $G_T$ to be accelerated using the Gallant-Lambert-Vanstone method.
2008
EPRINT
FACTORING IS EQUIVALENT TO GENERIC RSA
We show that a generic ring algorithm for breaking RSA in $\mathbb{Z}_N$ can be converted into an algorithm for factoring the corresponding RSA-modulus $N$. Our results imply that any attempt at breaking RSA without factoring $N$ will be non-generic and hence will have to manipulate the particular bit-representation of the input in $\mathbb{Z}_N$. This provides new evidence that breaking RSA may be equivalent to factoring the modulus.
2008
EPRINT
Factoring Polynomials for Constructing Pairing-friendly Elliptic Curves
In this paper we present a new method to construct a polynomial $u(x) \in \mathbb{Z}[x]$ which will make $\mathrm{\Phi}_{k}(u(x))$ reducible. We construct a finite separable extension of $\mathbb{Q}(\zeta_{k})$, denoted as $\mathbb{E}$. By primitive element theorem, there exists a primitive element $\theta \in \mathbb{E}$ such that $\mathbb{E}=\mathbb{Q}(\theta)$. We represent the primitive $k$-th root of unity $\zeta_{k}$ by $\theta$ and get a polynomial $u(x) \in \mathbb{Q}[x]$ from the representation. The resulting $u(x)$ will make $\mathrm{\Phi}_{k}(u(x))$ factorable.
2008
EPRINT
Fair Traceable Multi-Group Signatures
This paper presents fair traceable multi-group signatures (FTMGS), which have enhanced capabilities, compared to group and traceable signatures, that are important in real world scenarios combining accountability and anonymity. The main goal of the primitive is to allow multiple groups that are managed separately (managers are not even aware of one another), while allowing users (in the spirit of the Identity 2.0 initiative) to manage by themselves what they reveal about their identity with respect to these groups. This new primitive incorporates the following additional features. - While considering multiple groups, it discourages users from sharing their private membership keys through two orthogonal and complementary approaches. In fact, it merges functionality similar to credential systems with anonymous signing with revocation. - The group manager now mainly manages joining procedures, and new entities (called fairness authorities, possibly consisting of various representatives) are involved in opening and revealing procedures. In many system scenarios, assuring fairness in anonymity revocation is required. We specify the notion and implement it in the random oracle model.
2008
EPRINT
Fairness with an Honest Minority and a Rational Majority
We provide a simple protocol for secret reconstruction in any threshold secret sharing scheme, and prove that it is fair when executed with many rational parties together with a small minority of honest parties. That is, all parties will learn the secret with high probability when the honest parties follow the protocol and the rational parties act in their own self-interest (as captured by the notion of a Bayesian subgame perfect equilibrium). The protocol only requires a standard (synchronous) broadcast channel, and tolerates fail-stop deviations (i.e. early stopping, but not incorrectly computed messages). Previous protocols for this problem in the cryptographic or economic models have either required an honest majority, used strong communication channels that enable simultaneous exchange of information, or settled for approximate notions of security/equilibria.
2008
EPRINT
Fast Algorithms for Arithmetic on Elliptic Curves Over Prime Fields
We present here a thorough discussion of the problem of fast arithmetic on elliptic curves over prime order finite fields. Since elliptic curves were independently proposed as a setting for cryptography by Koblitz [53] and Miller [67], the group of points on an elliptic curve has been widely used for discrete logarithm based cryptosystems. In this thesis, we survey, analyse and compare the fastest known serial and parallel algorithms for elliptic curve scalar multiplication, the primary operation in discrete logarithm based cryptosystems. We also introduce some new algorithms for the basic group operation and several new parallel scalar multiplication algorithms. We present a mathematical basis for comparing the various algorithms and make recommendations for the fastest algorithms to use in different circumstances.
2008
EPRINT
Fast Arithmetic on ATmega128 for Elliptic Curve Cryptography
Authentication protocols are indispensable in wireless sensor networks. Commonly they are based on asymmetric cryptographic algorithms. In this paper we investigate all categories of finite fields suitable for elliptic curve cryptography on the ATmega128 microcontroller: $\F{p}$, $\F{2^d}$, and $\F{p^d}$. It turns out that binary fields enable the most efficient implementations.
2008
EPRINT
Fast explicit formulae for genus 2 hyperelliptic curves using projective coordinates (Updated)
This contribution proposes a modification of the divisor group operation in the Jacobian of hyperelliptic curves over fields of even and odd characteristic, in projective coordinates. The hyperelliptic curve cryptosystem (HECC) enhances cryptographic security and efficiency in, e.g., information and telecommunication systems.
2008
EPRINT
Fast hashing to G2 on pairing friendly curves
When using pairing-friendly ordinary elliptic curves over prime fields to implement identity-based protocols, there is often a need to hash identities to points on one or both of the two elliptic curve groups of prime order $r$ involved in the pairing. Of these $G_1$ is a group of points on the base field $E(\F_p)$ and $G_2$ is instantiated as a group of points with coordinates on some extension field, over a twisted curve $E'(\F_{p^d})$, where $d$ divides the embedding degree $k$. While hashing to $G_1$ is relatively easy, hashing to $G_2$ has been less considered, and is regarded as likely to be more expensive as it appears to require a multiplication by a large cofactor. In this paper we introduce a fast method for this cofactor multiplication on $G_2$ which exploits an efficiently computable homomorphism.
2008
EPRINT
Fast Multiple Point Multiplication on Elliptic Curves over Prime and Binary Fields using the Double-Base Number System
Multiple-point multiplication on elliptic curves is the most computationally complex operation in elliptic-curve-based digital signature schemes. We describe three algorithms for multiple-point multiplication on elliptic curves over prime and binary fields, based on representations of the two scalars as sums of mixed powers of 2 and 3. Our approaches include a sliding-window mechanism and some precomputed points on the curve. We give a proof of the formulae for the number of double-base elements, doublings and triplings below 2^n. Affine coordinates and Jacobian coordinates are considered for both prime fields and binary fields. We achieve up to 24% improvement with the new algorithms for multiple-point multiplication.
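A minimal sketch of the underlying idea, a greedy expansion of a scalar as a sum of mixed powers of 2 and 3, is given below; the paper's algorithms additionally use signed terms, sliding windows and precomputed points, none of which are reproduced here.

```python
# Sketch of the greedy algorithm that writes a scalar as a sum of mixed powers
# of 2 and 3 (an unsigned double-base representation).

def largest_2_3_term(k):
    """Largest 2^a * 3^b not exceeding k, returned as (value, a, b)."""
    best = (1, 0, 0)
    b, pow3 = 0, 1
    while pow3 <= k:
        a = (k // pow3).bit_length() - 1      # largest a with 2^a * 3^b <= k
        if (1 << a) * pow3 > best[0]:
            best = ((1 << a) * pow3, a, b)
        b += 1
        pow3 *= 3
    return best

def double_base_rep(k):
    """Greedy double-base representation: k = sum of 2^a * 3^b terms."""
    terms = []
    while k > 0:
        t, a, b = largest_2_3_term(k)
        terms.append((a, b))
        k -= t
    return terms

k = 314159
terms = double_base_rep(k)
assert sum(2**a * 3**b for a, b in terms) == k
print(len(terms), "terms:", terms)
```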
2008
EPRINT
Fast Point Multiplication Formulae on Elliptic Curves of Weierstrass Form
In this paper, we present fast point multiplication formulae for two different families of elliptic curves in Weierstrass form, $y^2=x^3+ax^2+bx$ and $y^2=x^3+ax^2+2atx+at^2$. The costs of the doubling and tripling formulae are $5S+2M+3C$ and $6S+6M+4C$ respectively, and even $5S+2M$ and $6S+6M$ when the curve parameters are suitably chosen.
2008
EPRINT
Flaws in Some Efficient Self-Healing Key Distribution Schemes with Revocation
Dutta and Mukhopadhyay have recently proposed some very efficient self-healing key distribution schemes with revocation. The parameters of these schemes contradict some results (lower bounds) presented by Blundo et al. In this paper different attacks against the schemes of Dutta and Mukhopadhyay are explained: one of them can be easily avoided with a slight modification in the schemes, but the other one is really serious. The conclusion is that the results of Dutta and Mukhopadhyay are wrong.
2008
EPRINT
Formal Proof of Relative Strengths of Security between ECK2007 Model and other Proof Models for Key Agreement Protocols
In 2005, Choo, Boyd and Hitchcock compared four well-known indistinguishability-based proof models for key agreement protocols: the Bellare-Rogaway (1993, 1995) models, the Bellare-Pointcheval-Rogaway (2000) model and the Canetti-Krawczyk (2001) model. Subsequently, researchers from Microsoft presented a stronger security model, the extended Canetti-Krawczyk model (2007). In this paper, we point out the differences between this new proof model and the four previous models, and analyze the relative strengths of security of these models. To support each implication or non-implication relation between these models, we provide a proof or a counter-example.
2008
EPRINT
Formally Bounding the Side-Channel Leakage in Unknown-Message Attacks
We propose a novel approach for quantifying a system's resistance to unknown-message side-channel attacks. The approach is based on a measure of the secret information that an attacker can extract from a system from a given number of side-channel measurements. We provide an algorithm to compute this measure, and we use it to analyze the resistance of hardware implementations of cryptographic algorithms with respect to power and timing attacks. In particular, we show that message-blinding -- the common countermeasure against timing attacks -- reduces the rate at which information about the secret is leaked, but that the complete information is still eventually revealed. Finally, we compare information measures corresponding to unknown-message, known-message, and chosen-message attackers and show that they form a strict hierarchy.
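As a toy instance of such an information measure, the snippet below computes how many bits a single noiseless Hamming-weight observation reveals about a uniform secret byte; the leakage model is an illustrative assumption, not the paper's system model or algorithm.

```python
# Toy instance of an information measure for side-channel leakage: how much
# does one noiseless Hamming-weight measurement reveal about a uniform secret
# byte?  The Hamming-weight leakage model is only an illustrative assumption.
from collections import Counter
from math import log2

secrets = range(256)                          # uniform 8-bit secret
leak = lambda k: bin(k).count("1")            # observed side-channel value

prior_entropy = log2(len(secrets))            # 8 bits
counts = Counter(leak(k) for k in secrets)

# H(K | L) = sum over l of Pr[L = l] * log2(#{k : leak(k) = l}),
# since the posterior given a deterministic leakage value is uniform.
cond_entropy = sum(c / 256 * log2(c) for c in counts.values())
print("information leaked per measurement: %.3f bits"
      % (prior_entropy - cond_entropy))       # about 2.54 bits
```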
2008
EPRINT
Foundations of Group Key Management – Framework, Security Model and a Generic Construction
Group Key Management (GKM) solves the problem of efficiently establishing and managing secure communication in dynamic groups. Many GKM schemes that have been proposed so far have been broken, as they rely on ambiguous arguments and lack formal proofs. In fact, no concrete framework and security model for GKM exists in the literature. This paper addresses this serious problem by providing firm foundations for Group Key Management. We provide a generalized framework for centralized GKM along with a formal security model and strong definitions for the security properties that dynamic groups demand. We also show a generic construction of a centralized GKM scheme from any given multi-receiver ID-based key encapsulation mechanism (mID-KEM). By doing so, we unify two concepts that are significantly different in terms of what they achieve. Our construction is simple and efficient. We prove that the resulting GKM inherits the security of the underlying mID-KEM up to CCA security. We also illustrate our general conversion using the mID-KEM proposed in 2007 by Delerablée.
2008
EPRINT
FPGA and ASIC Implementations of the $\eta_T$ Pairing in Characteristic Three
Since their introduction in constructive cryptographic applications, pairings over (hyper)elliptic curves are at the heart of an ever increasing number of protocols. As they rely critically on efficient algorithms and implementations of pairing primitives, the study of hardware accelerators became an active research area. In this paper, we propose two coprocessors for the reduced $\eta_T$ pairing introduced by Barreto {\it et al.} as an alternative means of computing the Tate pairing on supersingular elliptic curves. We prototyped our architectures on FPGAs. According to our place-and-route results, our coprocessors compare favorably with other solutions described in the open literature. We also present the first ASIC implementation of the reduced $\eta_T$ pairing.
2008
EPRINT
From Weaknesses to Secret Disclosure in a Recent Ultra-Lightweight RFID Authentication Protocol
A recent research trend, motivated by the massive deployment of RFID technology, looks at cryptographic protocols for securing communication between entities in which at least one of the parties has very limited computing capabilities. In this paper we focus our attention on SASI, a new ultra-lightweight RFID authentication protocol designed for providing Strong Authentication and Strong Integrity. The protocol, suitable for passive tags with limited computational power and storage, involves simple bitwise operations like $and$, $or$, exclusive $or$, modular addition, and cyclic shift operations. It is efficient, fits the hardware constraints, and can be seen as an example of the above research trend. We start by showing some weaknesses in the protocol and then describe how such weaknesses, through a sequence of simple steps, can be used to compute in an efficient way all secret data used for the authentication process. More precisely, we describe three attacks: - A de-synchronisation attack, through which an adversary can break the synchronisation between the RFID reader and the tag. - An identity disclosure attack, through which an adversary can compute the identity of the tag. - A full disclosure attack, which enables an adversary to retrieve all secret data stored in the tag. Then we present some experimental results, obtained by running several tests on an implementation of the protocol, in order to evaluate the performance of the proposed attacks. The results confirm that the attacks are effective and efficient.
2008
EPRINT
Full Cryptanalysis of LPS and Morgenstern Hash Function
Collisions in the LPS cryptographic hash function of Charles, Goren and Lauter have been found by Zémor and Tillich, but it was not clear whether computing preimages was also easy for this hash function. We present a probabilistic polynomial time algorithm solving this problem. Subsequently, we study the Morgenstern hash, an interesting variant of LPS hash, and break this function as well. Our attacks build upon the ideas of Zémor and Tillich but are not straightforward extensions of it. Finally, we discuss fixes for the Morgenstern hash function and other applications of our results.
2008
EPRINT
Full Security: Fuzzy Identity Based Encryption
At EUROCRYPT 2005, Sahai and Waters presented Fuzzy Identity Based Encryption (Fuzzy-IBE), which can be used for biometrics and attribute-based encryption in the selective-identity model. When a Fuzzy-IBE scheme secure in the selective-identity model is transformed to the full-identity model, there is an exponential loss of security. In this paper, we use the CPA-secure Gentry IBE (an exponent-inversion IBE) to construct the first Fuzzy IBE that is fully secure without random oracles. In addition, the same technique is applied to the modification of the CCA-secure Gentry IBE introduced by Kiltz and Vahlis to obtain a CCA-secure Fuzzy IBE in the full-identity model.
2008
EPRINT
Full Security:Fuzzy Identity Based Encryption
At EUROCRYPT 2005, Sahai and Waters presented Fuzzy Identity Based Encryption (Fuzzy-IBE), which can be used for biometrics and attribute-based encryption in the selective-identity model. When a Fuzzy-IBE scheme secure in the selective-identity model is transformed to the full-identity model, there is an exponential loss of security. In this paper, we use the CPA-secure Gentry IBE (an exponent-inversion IBE) to construct the first Fuzzy IBE that is fully secure without random oracles. In addition, the same technique is applied to the modification of the CCA-secure Gentry IBE introduced by Kiltz and Vahlis to obtain a CCA-secure Fuzzy IBE in the full-identity model.
2008
EPRINT
Fuzzy Identity Based Signature
We introduce a new cryptographic primitive which is the signature analogue of fuzzy identity based encryption (IBE). We call it fuzzy identity based signature (IBS). It possesses an error-tolerance property similar to that of fuzzy IBE, which allows a user with the private key for identity $\omega$ to decrypt a ciphertext encrypted for identity $\omega'$ if and only if $\omega$ and $\omega'$ are within a certain distance judged by some metric. A fuzzy IBS is useful whenever we need to allow a user to issue a signature on behalf of a group that has certain attributes. Fuzzy IBS can also be applied to biometric identity based signatures. To the best of our knowledge, this primitive has not been considered in the identity based signature setting before. We give the definition and security model of the new primitive and present the first practical implementation based on the Sahai-Waters construction \cite{6} and the two-level hierarchical signature of Boyen and Waters \cite{9}. We prove that our scheme is existentially unforgeable against adaptively chosen message attacks without random oracles.
2008
EPRINT
General Certificateless Encryption and Timed-Release Encryption
While recent timed-release encryption (TRE) schemes are implicitly supported by a certificateless encryption (CLE) mechanism, the security models of CLE and TRE differ and there is no generic transformation from a CLE to a TRE. This paper gives a generalized model for CLE that fulfills the requirements of TRE. This model is secure against adversaries with adaptive trapdoor extraction capabilities, decryption capabilities for arbitrary public keys, and partial decryption capabilities. It also supports hierarchical identifiers. We propose a concrete scheme under our generalized model and prove it secure without random oracles, yielding the first strongly-secure security-mediated CLE and the first TRE in the standard model. In addition, our technique of partial decryption is different from the previous approach.
2008
EPRINT
Generalized Universal Circuits for Secure Evaluation of Private Functions with Application to Data Classification
Secure Evaluation of Private Functions (PF-SFE) allows two parties to compute a private function which is known by one party only on private data of both. It is known that PF-SFE can be reduced to Secure Function Evaluation (SFE) of a Universal Circuit (UC). Previous UC constructions only simulated circuits with gates of $d=2$ inputs while gates with $d>2$ inputs were decomposed into many gates with $2$ inputs which is inefficient for large $d$ as the size of UC heavily depends on the number of gates. We present generalized UC constructions to efficiently simulate any circuit with gates of $d \ge 2$ inputs having efficient circuit representation. Our constructions are non-trivial generalizations of previously known UC constructions. As application we show how to securely evaluate private functions such as neural networks (NN) which are increasingly used in commercial applications. Our provably secure PF-SFE protocol needs only one round in the semi-honest model (or even no online communication at all using non-interactive oblivious transfer) and evaluates a generalized UC that entirely hides the structure of the private NN. This enables applications like privacy-preserving data classification based on private NNs without trusted third party while simultaneously protecting user's data and NN owner's intellectual property.
2008
EPRINT
Generating genus two hyperelliptic curves over large characteristic finite fields
In hyperelliptic curve cryptography, finding a suitable hyperelliptic curve is an important fundamental problem. One of the necessary conditions is that the order of its Jacobian is a product of a large prime number and a small number. In this paper, we give a probabilistic polynomial time algorithm to test whether the Jacobian of a given hyperelliptic curve of the form $Y^2 = X^5 + uX^3 + vX$ satisfies this condition and, if so, to output the largest prime factor. Our algorithm enables us to generate random curves of this form until the order of the Jacobian is almost prime in the above sense. A key idea is to obtain candidates for its zeta function over the base field from its zeta function over the extension field where the Jacobian splits.
2008
EPRINT
Generating Shorter Bases for Hard Random Lattices
We revisit the problem of generating a ``hard'' random lattice together with a basis of relatively short vectors. This problem has gained in importance lately due to new cryptographic schemes that use such a procedure for generating public/secret key pairs. In these applications, a shorter basis directly corresponds to milder underlying complexity assumptions and smaller key sizes. The contributions of this work are twofold. First, using the \emph{Hermite normal form} as an organizing principle, we simplify and generalize an approach due to Ajtai (ICALP 1999). Second, we improve the construction and its analysis in several ways, most notably by tightening the length of the output basis essentially to the optimum value.
2008
EPRINT
Generators of Jacobians of Genus Two Curves
We prove that in most cases relevant to cryptography, the Frobenius endomorphism on the Jacobian of a genus two curve is represented by a diagonal matrix with respect to an appropriate basis of the subgroup of l-torsion points. From this fact we get an explicit description of the Weil-pairing on the subgroup of l-torsion points. Finally, the explicit description of the Weil-pairing provides us with an efficient, probabilistic algorithm to find generators of the subgroup of l-torsion points on the Jacobian of a genus two curve.
2008
EPRINT
Generic Attacks for the Xor of k random permutations
Xoring the output of $k$ permutations, $k\geq 2$, is a very simple way to construct pseudo-random functions (PRF) from pseudo-random permutations (PRP). Moreover, this construction has many applications in cryptography (see \cite{BI,BKrR,HWKS,SL} for example). Therefore it is interesting, both from a theoretical and from a practical point of view, to get precise security results for this construction. In this paper, we describe the best attacks that we have found on the Xor of $k$ random $n$-bit to $n$-bit permutations. When $k=2$, we get an attack of computational complexity $O(2^n)$. This result was already stated in \cite{BI}. On the contrary, for $k \geq 3$, our analysis is new. We show that the best known attacks require much more than $2^n$ computations when not all of the $2^n$ outputs are given, or when the function is changed on a few points. We thus obtain a new and very simple design that can be very useful when a security level larger than $2^n$ is wanted, for example when $n$ is very small.
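The $O(2^n)$ attack mentioned for $k=2$ can be made concrete with a full-codomain observation: the XOR of all $2^n$ outputs of $P_1(x) \oplus P_2(x)$ is always zero, while for a random function it is zero only with probability $2^{-n}$. A toy demonstration (with n = 8, an arbitrary choice) follows.

```python
# Demo of the full-codomain distinguisher behind the O(2^n) attack for k = 2:
# if F(x) = P1(x) xor P2(x) for permutations P1, P2 of {0,1}^n, then XORing
# F(x) over all 2^n inputs always gives 0 (each permutation's outputs XOR to
# the same constant), while a random function gives 0 with probability 2^-n.
import random
from functools import reduce

n = 8
domain = list(range(1 << n))

def xor_all(values):
    return reduce(lambda a, b: a ^ b, values, 0)

P1, P2 = list(domain), list(domain)
random.shuffle(P1)
random.shuffle(P2)

F = [P1[x] ^ P2[x] for x in domain]                  # xor of two permutations
R = [random.randrange(1 << n) for _ in domain]       # random function

print("xor of all F(x):", xor_all(F))   # always 0
print("xor of all R(x):", xor_all(R))   # 0 only with probability 2^-n
```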
2008
EPRINT
Generic Attacks on Feistel Schemes
Let $A$ be a Feistel scheme with $5$ rounds from $2n$ bits to $2n$ bits. In the present paper we show that for most such schemes $A$: (1) it is possible to distinguish $A$ from a random permutation on $2n$ bits after at most ${\cal O}(2^{n})$ computations with ${\cal O}(2^{n})$ non-adaptive {\bf chosen} plaintexts; (2) it is possible to distinguish $A$ from a random permutation on $2n$ bits after at most ${\cal O}(2^{\frac{3n}{2}})$ computations with ${\cal O}(2^{\frac{3n}{2}})$ {\bf random} plaintext/ciphertext pairs. Since these complexities are smaller than the number $2^{2n}$ of possible inputs, they show that some generic attacks always exist on Feistel schemes with $5$ rounds. Therefore we recommend using Feistel schemes with at least $6$ rounds in the design of pseudo-random permutations. We also show in this paper that it is possible to distinguish most generators of $6$-round Feistel permutations from a truly random permutation generator by using a few (i.e. ${\cal O}(1)$) permutations of the generator and a total of ${\cal O}(2^{2n})$ queries and ${\cal O}(2^{2n})$ computations. This result is not really useful for attacking a single $6$-round Feistel permutation, but it shows that when several pseudo-random permutations on a small number of bits have to be generated, we recommend using more than $6$ rounds. We also show that these results can be extended to any number of rounds, however with an even larger complexity.
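For reference, the object under attack is easy to state in code: an r-round Feistel permutation on 2n bits built from independent random round functions. The sketch below only constructs that permutation with toy parameters; it does not implement the distinguishers.

```python
# Minimal sketch of the object being analysed: an r-round Feistel permutation
# on 2n bits built from independent random round functions.  The distinguishers
# described above exploit the structure for r <= 5; this code only constructs
# the permutation.
import random

def feistel(rounds, n, seed=0):
    rng = random.Random(seed)
    # each round function is a random table from n bits to n bits
    fs = [[rng.randrange(1 << n) for _ in range(1 << n)] for _ in range(rounds)]
    def enc(x):                       # x is a 2n-bit integer
        L, R = x >> n, x & ((1 << n) - 1)
        for f in fs:
            L, R = R, L ^ f[R]        # one Feistel round
        return (L << n) | R
    return enc

n = 4
E = feistel(5, n)
# A Feistel network is a permutation of the 2n-bit space for any round functions:
assert len({E(x) for x in range(1 << (2 * n))}) == 1 << (2 * n)
```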
2008
EPRINT
GUC-Secure Set-Intersection Computation
Secure set-intersection computation is one of the important problems in the field of secure multiparty computation, with valuable applications. We propose a very general construction for 2-party set-intersection computation based on an anonymous IBE scheme and techniques for blind generation of its user private keys. Compared with recently proposed protocols, e.g., those of Freedman-Nissim-Pinkas, Kissner-Song and Hazay-Lindell, this construction is provably GUC-secure in the standard model with acceptable efficiency. For this goal, a new notion of non-malleable zero-knowledge proofs of knowledge and an efficient general construction of them are presented. In addition, we present an efficient instantiation of this general construction via the anonymous Boyen-Waters IBE scheme.
2008
EPRINT
HAIL: A High-Availability and Integrity Layer for Cloud Storage
We introduce HAIL (High-Availability and Integrity Layer), a distributed cryptographic system that permits a set of servers to prove to a client that a stored file is intact and retrievable. Proofs in HAIL are efficiently computable by servers and highly compact---typically tens or hundreds of bytes, irrespective of file size. HAIL cryptographically verifies and reactively reallocates file shares. It is robust against an active, mobile adversary, i.e., one that may progressively corrupt the full set of servers. We propose a strong, formal adversarial model for HAIL, and rigorous analysis and parameter choices. We also report on a prototype implementation. HAIL strengthens, formally unifies, and streamlines distinct approaches from the cryptographic and distributed-systems communities. HAIL also includes an optional new tool for proactive protection of stored files. HAIL is primarily designed to protect static stored objects, such as backup files or archives.
2008
EPRINT
Hash Functions from Sigma Protocols and Improvements to VSH
We present a general way to get a provably collision-resistant hash function from any (suitable) $\Sigma$-protocol. This enables us to both get new designs and to unify and improve previous work. In the first category, we obtain, via a modified version of the Fiat-Shamir protocol, the fastest known hash function that is provably collision-resistant based on the \textit{standard} factoring assumption. In the second category, we provide a modified version VSH^* of VSH which is faster when hashing short messages. (Most Internet packets are short.) We also show that $\Sigma$-hash functions are chameleon, thereby obtaining several new and efficient chameleon hash functions with applications to on-line/off-line signing, chameleon signatures and designated-verifier signatures.
2008
EPRINT
HB#: Increasing the Security and Efficiency of HB+
The innovative HB+ protocol of Juels and Weis [10] extends device authentication to low-cost RFID tags. However, despite the very simple on-tag computation there remain some practical problems with HB+ and despite an elegant proof of security against some limited active attacks, there is a simple man-in-the-middle attack due to Gilbert et al. [8]. In this paper we consider improvements to HB+ in terms of both security and practicality. We introduce a new protocol that we denote random-HB#. This proposal avoids many practical drawbacks of HB+, remains provably resistant to attacks in the model of Juels and Weis, and at the same time is provably resistant to a broader class of active attacks that includes the attack of [8]. We then describe an enhanced variant called HB# which offers practical advantages over HB+.
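For background, a toy sketch of the HB+ rounds that HB# and random-HB# build on is given below: the tag holds secrets x and y, sends a blinding vector b, receives a challenge a, and answers with a noisy inner product. All parameters are illustrative assumptions, and the paper's modified protocols are not reproduced here.

```python
# Toy sketch of HB+ rounds (the protocol that HB# / random-HB# build on):
# the tag holds secrets x, y; per round it sends a blinding vector b, the
# reader sends a challenge a, and the tag answers z = <a,x> xor <b,y> xor v
# with noise v ~ Bernoulli(eta).  Parameters are illustrative only.
import random

k, eta, rounds = 32, 0.125, 256
dot = lambda a, b: bin(a & b).count("1") & 1       # inner product over GF(2)

x = random.getrandbits(k)
y = random.getrandbits(k)

errors = 0
for _ in range(rounds):
    b = random.getrandbits(k)                      # tag -> reader (blinding)
    a = random.getrandbits(k)                      # reader -> tag (challenge)
    v = 1 if random.random() < eta else 0          # tag's Bernoulli noise
    z = dot(a, x) ^ dot(b, y) ^ v                  # tag -> reader (response)
    errors += z ^ (dot(a, x) ^ dot(b, y))          # reader re-computes and checks

# The reader accepts if the error rate is consistent with eta (far below 1/2).
print("observed error rate: %.3f" % (errors / rounds))
```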
2008
EPRINT
HENKOS Cryptanalysis-Related keys attack
This paper describes a series of vulnerabilities of the HENKOS algorithm (described at http://eprint.iacr.org/080) with respect to related-key attacks. We mount this type of attack for a particular relation between keys and show that it is a practical attack, with results obtained in real time.
2008
EPRINT
High Performance Architecture for Elliptic Curve Scalar Multiplication over GF(2^m)
We propose a new architecture for performing Elliptic Curve Scalar Multiplication (ECSM) on elliptic curves over GF(2^m). This architecture maximizes the parallelism that the projective version of the Montgomery ECSM algorithm can achieve. It completes one ECSM operation in about $2(m-1)( \lceil m/D \rceil +4)+m$ cycles, and is at least three times the speed of the best known result currently available. When implemented on a Virtex-4 FPGA, it completes one ECSM operation over GF(2^163) in 12.5us with the maximum achievable frequency of 222MHz. Two other implementation variants for less resource consumption are also proposed. Our first variant reduces the resource consumption by almost 50% while still maintaining the utilization efficiency, which is measured by a performance to resource consumption ratio. Our second variant achieves the best utilization efficiency and in our actual implementation on an elliptic curve group over GF(2^163), it gives more than 30% reduction on resource consumption while maintaining almost the same speed of computation as that of our original design. For achieving this high performance, we also propose a modified finite field inversion algorithm which takes only m cycles to invert an element over GF(2^m), rather than 2m cycles as the traditional Extended Euclid algorithm does, and this new design yields much better utilization of the cycle time.
2008
EPRINT
High Performance Implementation of a Public Key Block Cipher - MQQ, for FPGA Platforms
We have implemented in FPGA a recently published class of public key algorithms, MQQ, that are based on quasigroup string transformations. Our implementation achieves a decryption throughput of 399 Mbps on a Xilinx Virtex-5 FPGA running at 249.4 MHz. The encryption throughput of our implementation achieves 44.27 Gbps on four Xilinx Virtex-5 chips running at 276.7 MHz. Compared to an RSA implementation on the same FPGA platform, this implementation of MQQ is 10,000 times faster in decryption and more than 17,000 times faster in encryption.
2008
EPRINT
Higher Order Differential Cryptanalysis of Multivariate Hash Functions
In this paper we propose an attack against multivariate hash functions based on higher order differential cryptanalysis. The attack finds preimages of the compression function faster than brute force, and it makes selective forgeries easy when a MAC is constructed from multivariate polynomials. This gives evidence that families of multivariate hash functions are neither pseudo-random nor unpredictable, and that one can distinguish such a function from a random function, regardless of the finite field and the degree of the polynomials.
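The core property such an attack exploits can be illustrated in a few lines: a Boolean function of algebraic degree d sums to zero over any coset of a (d+1)-dimensional subspace. The degree-2 polynomial below is an arbitrary toy example, not one of the attacked hash functions.

```python
from itertools import product

def f(x):                                   # toy degree-2 Boolean function in 5 variables
    return (x[0] & x[1]) ^ (x[2] & x[4]) ^ x[3] ^ 1

def xor_vec(u, v):
    return tuple(a ^ b for a, b in zip(u, v))

base = (1, 0, 1, 1, 0)                      # coset representative
dirs = [(1, 0, 0, 0, 0), (0, 1, 0, 0, 0), (0, 0, 1, 1, 0)]  # 3 independent directions

acc = 0
for bits in product((0, 1), repeat=3):      # sum f over the 2^3 points of the coset
    x = base
    for b, d in zip(bits, dirs):
        if b:
            x = xor_vec(x, d)
    acc ^= f(x)
print(acc)   # 0: the 3rd-order derivative of a degree-2 function vanishes
```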
2008
EPRINT
History-Independent Cuckoo Hashing
Cuckoo hashing is an efficient and practical dynamic dictionary. It provides expected amortized constant update time, worst case constant lookup time, and good memory utilization. Various experiments demonstrated that cuckoo hashing is highly suitable for modern computer architectures and distributed settings, and offers significant improvements compared to other schemes. In this work we construct a practical {\em history-independent} dynamic dictionary based on cuckoo hashing. In a history-independent data structure, the memory representation at any point in time yields no information on the specific sequence of insertions and deletions that led to its current content, other than the content itself. Such a property is significant when preventing unintended leakage of information, and was also found useful in several algorithmic settings. Our construction enjoys most of the attractive properties of cuckoo hashing. In particular, no dynamic memory allocation is required, updates are performed in expected amortized constant time, and membership queries are performed in worst case constant time. Moreover, with high probability, the lookup procedure queries only two memory entries which are independent and can be queried in parallel. The approach underlying our construction is to enforce a canonical memory representation on cuckoo hashing. That is, up to the initial randomness, each set of elements has a unique memory representation.
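For readers unfamiliar with the underlying dictionary, a minimal sketch of plain cuckoo hashing (two tables, insertion by eviction). The history-independent construction of the paper additionally enforces a canonical placement, which is not reflected here; the table size and hash functions below are arbitrary choices.

```python
import random

class Cuckoo:
    def __init__(self, size=11):
        self.size = size
        self.tables = [[None] * size, [None] * size]
        self.salts = [random.randrange(1 << 30) for _ in range(2)]

    def _h(self, i, key):
        return hash((self.salts[i], key)) % self.size

    def lookup(self, key):
        return any(self.tables[i][self._h(i, key)] == key for i in (0, 1))

    def insert(self, key, max_kicks=32):
        if self.lookup(key):
            return True
        cur, i = key, 0
        for _ in range(max_kicks):
            pos = self._h(i, cur)
            cur, self.tables[i][pos] = self.tables[i][pos], cur
            if cur is None:
                return True
            i ^= 1                     # evicted element moves to the other table
        return False                   # a full implementation would rehash here

d = Cuckoo()
for k in ["a", "b", "c", "d"]:
    d.insert(k)
print(d.lookup("c"), d.lookup("z"))    # expected: True False
```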
2008
EPRINT
Homomorphic Encryption with CCA Security
We address the problem of constructing public-key encryption schemes that meaningfully combine useful {\em computability features} with {\em non-malleability}. In particular, we investigate schemes in which anyone can change an encryption of an unknown message $m$ into an encryption of $T(m)$ (as a {\em feature}), for a specific set of allowed functions $T$, but the scheme is ``non-malleable'' with respect to all other operations. We formulate precise definitions that capture these intuitive requirements and also show relationships among our new definitions and other more standard ones (IND-CCA, gCCA, and RCCA). We further justify our definitions by showing their equivalence to a natural formulation of security in the Universally Composable framework. We also consider extending the definitions to features which combine {\em multiple} ciphertexts, and show that a natural definition is unattainable for a useful class of features. Finally, we describe a new family of encryption schemes that satisfy our definitions for a wide variety of allowed transformations $T$, and which are secure under the standard Decisional Diffie-Hellman (DDH) assumption.
2008
EPRINT
How Risky is the Random-Oracle Model?
RSA-FDH and many other schemes provably secure in the Random-Oracle Model (ROM) require a cryptographic hash function whose output size does not match any of the standard hash functions. We show that the random-oracle instantiations proposed in the literature for such general cases are insecure, including the two historical instantiations proposed by Bellare and Rogaway themselves in their seminal papers from ACM~CCS~'93 and EUROCRYPT~'96: for instance, for 1024-bit digests, we present a $2^{68}$ preimage attack on BR93 and a $2^{106}$ collision attack on BR96. This leads us to study the potential security impact of such defects. While one might think that a hash collision may at worst give rise to an existential forgery on a signature scheme, we show that for several (real-world) schemes secure in the ROM, collisions or slight hash function defects can have much more dramatic consequences, namely key-recovery attacks. For instance, we point out that a hash collision discloses the master key in the Boneh-Gentry-Hamburg identity-based cryptosystem from FOCS~'07, and the secret key in the Rabin-Williams signature scheme for which Bernstein proved tight security at EUROCRYPT~'08. This problem can be fixed, but still, such schemes, as well as the Rabin-Williams variant implemented in the IEEE P1363 standard, strongly require that the hash function is immune to malleability variants of collision attacks, which does not hold for the BR93 instantiation. Our results suggest an additional criterion to compare schemes secure in the ROM: assessing the risks by carefully studying the impact of potential flaws in the random-oracle instantiation. In this light, RSA-PSS seems more robust than other RSA signatures secure in the ROM.
2008
EPRINT
How to Build a Hash Function from any Collision-Resistant Function
Recent collision-finding attacks against hash functions such as MD5 and SHA-1 motivate the use of provably collision-resistant (CR) functions in their place. Finding a collision in a provably CR function implies the ability to solve some hard problem (e.g., factoring). Unfortunately, existing provably CR functions make poor replacements for hash functions as they fail to deliver behaviors demanded by practical use. In particular, they are easily distinguished from a random oracle. We initiate an investigation into building hash functions from provably CR functions. As a method for achieving this, we present the Mix-Compress-Mix (MCM) construction; it envelopes any provably CR function H (with suitable regularity properties) between two injective ``mixing'' stages. The MCM construction simultaneously enjoys (1) provable collision-resistance in the standard model, and (2) indifferentiability from a monolithic random oracle when the mixing stages themselves are indifferentiable from a random oracle that observes injectivity. We instantiate our new design approach by specifying a blockcipher-based construction that appropriately realizes the mixing stages.
2008
EPRINT
How To Ensure Forward and Backward Untraceability of RFID Identification Schemes By Using A Robust PRBG
In this paper, we analyze an RFID identification scheme which is designed to provide forward untraceability and backward untraceability. We show that if a standard cryptographic pseudorandom bit generator (PRBG) is used in the scheme, then the scheme may fail to provide forward untraceability and backward untraceability. To achieve the desired untraceability features, the scheme can use a robust PRBG which provides forward security and backward security. We also note that the backward security is stronger than necessary for the backward untraceability of the scheme.
2008
EPRINT
How to Launch A Birthday Attack Against DES
We present a birthday attack against DES. It is entirely based on the relationship $L_{i+1}=R_{i}$ and the simple key schedule in DES. It requires about $2^{16}$ ciphertexts of the same $R_{16}$, encrypted under the same key $K$. We conjecture it has a computational complexity of $2^{48}$. Since the requirement for the birthday attack is more easily met than that for differential cryptanalysis, linear cryptanalysis or Davies' attack, it is of more practical significance.
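A back-of-the-envelope check of the data requirement, using only the generic birthday estimate (this is not the attack itself): among about $2^{16}$ samples of a 32-bit quantity, a collision already occurs with constant probability.

```python
from math import exp

q, n = 2**16, 2**32                 # number of samples, size of the value space
p_collision = 1 - exp(-q * (q - 1) / (2 * n))
print(f"collision probability ~ {p_collision:.2f}")   # roughly 0.39
```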
2008
EPRINT
How to Protect Yourself without Perfect Shredding
Erasing old data and keys is an important tool in cryptographic protocol design. It is useful in many settings, including proactive security, adaptive security, forward security, and intrusion resilience. Protocols for all these settings typically assume the ability to perfectly erase information. Unfortunately, as amply demonstrated in the systems literature, perfect erasures are hard to implement in practice. We propose a model of partial erasures where erasure instructions leave almost all of the data to be erased intact, thus giving the honest players only a limited capability for disposing of old data. Nonetheless, we provide a general compiler that transforms any secure protocol using perfect erasures into one that maintains the same security properties when only partial erasures are available. The key idea is a new redundant representation of secret data which can still be computed on, and yet is rendered useless when partially erased. We prove that any such compiler must incur a cost in additional storage, and that our compiler is near optimal in terms of its storage overhead.
2008
EPRINT
Hybrid Binary-Ternary Joint Sparse Form and its Application in Elliptic Curve Cryptography
Multi-exponentiation is a common and time-consuming operation in public-key cryptography. Its elliptic curve counterpart, called multi-scalar multiplication, is extensively used for digital signature verification. Several algorithms have been proposed to speed up those critical computations. They are based on simultaneously recoding a set of integers in order to minimize the number of general multiplications or point additions. When signed-digit recoding techniques can be used, as in the world of elliptic curves, the Joint Sparse Form (JSF) and interleaving $w$-NAF are the most efficient algorithms. In this paper, a novel recoding algorithm for a pair of integers is proposed, based on a decomposition that mixes powers of 2 and powers of 3. The so-called Hybrid Binary-Ternary Joint Sparse Form requires fewer digits and is sparser than the JSF and the interleaving $w$-NAF. Its advantages are illustrated for elliptic curve double-scalar multiplication; the operation counts show a gain of up to 18\%.
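To see why joint recodings pay off, here is the baseline joint binary ("Straus/Shamir") double-and-add that JSF-style recodings improve upon, written multiplicatively modulo an arbitrary integer as a stand-in for elliptic-curve point operations; the hybrid binary-ternary recoding itself is not reproduced here, and the toy values are assumptions.

```python
def shamir_trick(g, h, u, v, mod):
    """Compute g^u * h^v (mod `mod`) in one shared left-to-right pass."""
    gh = g * h % mod
    acc, ops = 1, 0
    for i in range(max(u.bit_length(), v.bit_length()) - 1, -1, -1):
        acc = acc * acc % mod; ops += 1                 # one shared "doubling" per bit
        bits = (u >> i & 1, v >> i & 1)
        if bits != (0, 0):                              # at most one "addition" per bit
            acc = acc * {(1, 0): g, (0, 1): h, (1, 1): gh}[bits] % mod
            ops += 1
    return acc, ops

mod, g, h, u, v = 1_000_003, 5, 7, 123456, 654321       # toy values
res, ops = shamir_trick(g, h, u, v, mod)
assert res == pow(g, u, mod) * pow(h, v, mod) % mod
print(res, ops)    # two separate ladders would roughly double the squarings
```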
2008
EPRINT
ID based generalized signcryption
Generalized signcryption is a new cryptographic primitive in which a signcryption scheme can work as an encryption scheme as well as a signature scheme. This paper presents an identity based generalized signcryption scheme based on bilinear pairing and discusses its security for message confidentiality, non-repudiation and ciphertext authentication.
2008
EPRINT
Identification and Privacy: Zero-Knowledge is not Enough
At first glance, privacy and zero-knowledgeness seem to be similar properties. A scheme is private when no information is revealed on the prover and in a zero-knowledge scheme, communications should not leak provers' secrets. Until recently, privacy threats were only partially formalized and some zero-knowledge (ZK) schemes have been proposed so far to ensure privacy. We here explain why the intended goal is not reached. Following the privacy model proposed by Vaudenay at Asiacrypt 2007, we then reconsider the analysis of these schemes and thereafter introduce a general framework to modify identification schemes leading to different levels of privacy. Our new protocols can be useful, for instance, for identity documents, where privacy is a great issue. Furthermore, we propose efficient implementations of zero-knowledge and private identification schemes based on modifications of the GPS scheme. The security and the privacy are based on a new problem: the Short Exponent Strong Diffie-Hellman (SESDH) problem. The hardness of this problem is related to the hardness of the Strong Diffie-Hellman (SDH) problem and to the hardness of the Discrete Logarithm with Short Exponent (DLSE) problem. The security and privacy of these new schemes are proved in the random oracle paradigm.
2008
EPRINT
Identity Based Strong Bi-Designated Verifier Proxy Signature Schemes
Proxy signature schemes allow delegation of signing rights. The paper proposes the notion of Identity Based Strong Bi-Designated Verifier Proxy Signature (ID-SBDVPS) schemes. In such schemes, only the two designated verifiers can verify that the proxy signer signed the message on behalf of the original signer, but neither of them is able to convince anyone else of this fact. The paper proposes nine such schemes and analyses the computational efficiency of each.
2008
EPRINT
Identity-Based Directed Signature Scheme from Bilinear Pairings
In a directed signature scheme, a verifier can exclusively verify the signatures designated to himself, and shares with the signer the ability to prove correctness of the signature to a third party when necessary. Directed signature schemes are suitable for applications such as bill of tax and bill of health. This paper studies directed signatures in the identity-based setting. We first present the syntax and security notion that includes unforgeability and invisibility, then propose a concrete identity-based directed signature scheme from bilinear pairings. We then prove our scheme existentially unforgeable under the computational Diffie-Hellman assumption, and invisible under the decisional Bilinear Diffie-Hellman assumption, both in the random oracle model.
2008
EPRINT
Identity-Based Proxy Re-encryption Schemes with Multiuse, Unidirection, and CCA Security
A proxy re-encryption (PRE) scheme allows a proxy to transform a ciphertext under Alice's public key into a ciphertext under Bob's public key on the same message. In 2006, Green and Ateniese extended the above notion to identity-based proxy re-encryption (IB-PRE), and proposed two open problems \cite{GA06}: building 1. IB-PRE schemes which are CCA-secure in the standard model; 2. multi-use CCA-secure IB-PRE schemes. Chu and Tzeng proposed two identity-based proxy re-encryption schemes afterwards in \cite{CT07}, aiming at solving the two open problems. However, in this paper, we show that they don't solve these two open problems by revealing a security flaw in their solution. Furthermore, we propose an improvement which is a solution to the open problems.
2008
EPRINT
IEEE P1363.1 Draft 10: Draft Standard for Public Key Cryptographic Techniques Based on Hard Problems over Lattices
Engineering specifications and security considerations for NTRUEncrypt, secure against the lattice attacks presented at Crypto 2007
2008
EPRINT
Imaginary quadratic orders with given prime factor of class number
The abelian class group Cl(D) of an imaginary quadratic order with odd squarefree discriminant D is used in public key cryptosystems based on the discrete logarithm problem in the class group, and in cryptosystems based on isogenies of elliptic curves. The discrete logarithm problem in Cl(D) is hard if #Cl(D) is prime or has a large prime divisor, but no algorithms for generating such D are known. We propose a probabilistic algorithm that gives the discriminant of an imaginary quadratic order O_D with a subgroup of given prime order l. The algorithm is based on properties of the Hilbert class field polynomial H_D for an elliptic curve over a field of p^l elements. Let the trace of the Frobenius endomorphism be T, let the discriminant of the Frobenius endomorphism be D = T^2-4p^l, and let j not lie in the prime field. Then deg(H_D) = #Cl(O_D) and #Cl(D) = 0 (mod l). If the Diophantine equation D = T^2-4p^l, with variables l < O(|D|^(1/4)), prime p and T, has a solution only for l=1, then the class number is prime.
2008
EPRINT
Impossible Differential Cryptanalysis of CLEFIA
This paper mainly discusses impossible differential cryptanalysis of CLEFIA, which was proposed at FSE 2007. New 9-round impossible differentials, different from the previous ones, are discovered, and these differentials are then applied to attacks on reduced-round CLEFIA. For the 128-bit key case, an impossible differential attack applies to 12-round CLEFIA, requiring 2^110.93 chosen plaintexts with a time complexity of 2^111. For the 192/256-bit key cases, an impossible differential attack applies to 13-round CLEFIA, with 2^111.72 chosen plaintexts and a time complexity of 2^158. For the 256-bit key case, attacking 14-round CLEFIA needs 2^112.3 chosen plaintexts and no more than 2^199 encryptions, and attacking 15-round CLEFIA needs 2^113 chosen plaintexts with a time complexity of less than 2^248 encryptions.
2008
EPRINT
Improved Cryptanalysis of APOP-MD4 and NMAC-MD4 using New Differential Paths
In the security analysis of hash functions, attention has so far focused only on finding good collision-inducing differential paths. However, it is not clear how the differential paths of a hash function influence the security of schemes based on that hash function. In this paper, we show that any differential path of a hash function can influence the security of schemes based on it. We illustrate this fact with the MD4 hash function. We first show that APOP-MD4 with a nonce of fixed length can be analyzed efficiently with a new differential path. Then we improve the key-recovery attack on NMAC-MD4 described by Fouque {\em et al.} \cite{FoLeNg07} by combining new differential paths. Our results mean that good hash functions should have the following property: \textit{it is computationally infeasible to find a differential path of the hash function with a high probability}.
2008
EPRINT
Improved Cryptanalysis of SHAMATA-BC
We state the design flaws of the 1-round block cipher SHAMATA-BC, designed by Fleishmann and Gorski using the building blocks of the SHAMATA hash function. We fix the flaws and then show that the amended version of SHAMATA-BC is much weaker. We believe that there is no connection between the security level of SHAMATA as a hash function and that of SHAMATA-BC as a block cipher.
2008
EPRINT
Improved efficiency of Kiltz07-KEM
Kiltz proposed a practical key encapsulation mechanism (Kiltz07-KEM) which is secure against adaptive chosen ciphertext attacks (IND-CCA2) under the gap hashed Diffie-Hellman (GHDH) assumption \cite{Kiltz2007}. We show a variant of Kiltz07-KEM which is more efficient than Kiltz07-KEM in encryption. The new scheme can be proved to be IND-CCA2 secure under the same assumption, GHDH.
2008
EPRINT
Improved lower bound on the number of balanced symmetric functions over GF(p)
The lower bound on the number of n-variable balanced symmetric functions over finite fields GF(p) presented in \cite{Cusick} is improved in this paper.
2008
EPRINT
Improving the Farnel, Threeballot, and Randell-Ryan Voting Schemes
A number of recent voting schemes provide the property of voter verifiability: voters can confirm that their votes are accurately counted in the tally. The Farnel type voting schemes are based on the observation that to achieve voter-verifiability it is not necessary for the voter to carry away a receipt corresponding to their own vote. The Farnel approach then is to provide voters, when they cast their vote, with copies of receipts of one or more randomly selected, previously cast votes. This idea has a number of attractive features: ballot secrecy is achieved up front and does not have to be provided by anonymising mixes etc. during tabulation. In fact, plaintext receipts can be used, in contrast to the encrypted receipts of many other voter-verifiable schemes. Furthermore, any fears that voters might have that their vote is not truly concealed in an encrypted receipt are mitigated. The Farnel mechanism also mitigates randomization style attacks. In this paper we explore some enhancements to the original Farnel scheme and ways that the Farnel concept can be combined with some existing voter-verifiable schemes, namely Prêt-à-Voter, ThreeBallot, and Randell-Ryan.
2008
EPRINT
Improving the Rules of the DPA Contest
A DPA contest has been launched at CHES 2008. The goal of this initiative is to make it possible for researchers to compare different side-channel attacks in an objective manner. For this purpose, a set of 80000 traces corresponding to the encryption of 80000 different plaintexts with the Data Encryption Standard and a fixed key has been made available. In this short note, we discuss the rules that the contest uses to rate the effectiveness of different distinguishers. We first describe practical examples of attacks in which these rules can be misleading. Then, we suggest an improved set of rules that can be implemented easily in order to obtain a better interpretation of the comparisons performed.
2008
EPRINT
Improving upon HCTR and matching attacks for Hash-Counter-Hash approach
McGrew and Fluhrer first proposed the hash-counter-hash approach to encrypt arbitrary length messages. By its nature, the counter layer can handle incomplete message blocks in the same manner as complete message blocks. HCTR is to date the most efficient strong pseudorandom permutation (SPRP) among all known counter-based SPRPs. However, only a cubic security bound for HCTR is known so far. Moreover, not all invocations of the underlying block cipher can be made in parallel. Our new proposal (we call it HMC, or Hash Modified Counter) provides a quadratic security bound, and all block cipher invocations are parallel in nature even if we have an incomplete message block. We also present a prp-distinguishing attack on a generic counter based encryption, which makes $q$ non-adaptive encryption queries consisting of $(\ell+1)$ $n$-bit blocks and has success probability roughly $\ell^2q^2/2^n$. Loosely speaking, the success probability matches the upper bound on the distinguishing probability. As a result, we prove that the known quadratic bounds for XCB, HCH and HMC are tight.
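A small sketch of the point that the counter layer handles incomplete blocks "for free": keystream blocks are simply truncated to the remaining message length. A hash-based PRF stands in for the block cipher here, and this shows only the counter part, not HCTR or HMC themselves.

```python
import hashlib

def ctr_xor(key: bytes, nonce: bytes, data: bytes, block: int = 16) -> bytes:
    out = bytearray()
    for i in range(0, len(data), block):
        ks = hashlib.sha256(key + nonce + i.to_bytes(8, "big")).digest()[:block]
        chunk = data[i:i + block]
        out += bytes(c ^ k for c, k in zip(chunk, ks))   # zip truncates for the last block
    return bytes(out)

key, nonce = b"k" * 16, b"n" * 12
msg = b"37 bytes: not a multiple of the block"
ct = ctr_xor(key, nonce, msg)
assert ctr_xor(key, nonce, ct) == msg and len(ct) == len(msg)
```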
2008
EPRINT
Indifferentiable Security Analysis of choppfMD, chopMD, a chopMDP, chopWPH, chopNI, chopEMD, chopCS, and chopESh Hash Domain Extensions
We provide simple and unified indifferentiable security analyses of the choppfMD, chopMD, chopMDP (where the permutation $P$ is xored with any non-zero constant), chopWPH (the chopped version of the Wide-Pipe Hash proposed in \cite{Lucks05}), chopEMD, chopNI, chopCS, and chopESh hash domain extensions. Even though there are security analyses of them in the case of no-bit chopping (i.e., $s=0$), there has been no unified way to give the security proofs. All our proofs in this paper follow the technique introduced in \cite{BeDaPeAs08}. These proofs are simple and easy to follow.
2008
EPRINT
Information Leakage in Optimal Anonymized and Diversified Data
To reconcile the demands of information dissemination and preservation of privacy, a popular approach generalizes the attribute values in the dataset, for example by dropping the last digit of the postal code, so that the published dataset meets certain privacy requirements, such as the notions of k-anonymity and l-diversity. On the other hand, the published dataset should remain useful and not be over-generalized. Hence it is desirable to disseminate a database with high "usefulness", measured by a utility function. This leads to a generic framework whereby the optimal dataset (w.r.t. the utility function) among all the generalized datasets that meet certain privacy requirements is chosen to be disseminated. In this paper, we observe that the fact that a generalized dataset is optimal may leak information about the original. Thus, an adversary who is aware of how the dataset is generalized may be able to derive more information than what the privacy requirements constrain. This observation challenges the widely adopted approach that treats the generalization process as an optimization problem. We illustrate the observation by giving counter-examples in the context of k-anonymity and l-diversity.
2008
EPRINT
Information Leakage of Flip-Flops in DPA-Resistant Logic Styles
This contribution discusses the information leakage of flip-flops for different DPA-resistant logic styles. We show that many of the proposed side-channel resistant logic styles still employ flip-flops that leak data-dependent information. Furthermore, we apply simple models for the leakage of masked flip-flops to design a new attack on circuits implemented using masked logic styles. Contrary to previous attacks on masked logic styles, our attack does not predict the mask bit and does not need detailed knowledge about the attacked device, e.g., the circuit layout. Moreover, our attack works even if all the load capacitances of the complementary logic signals are perfectly balanced and even if the PRNG is ideally unbiased. Finally, after performing the attack on DRSL, MDPL, and iMDPL circuits we show that single-bit masks do not influence the exploitability of the revealed leakage of the masked flip-flops.
2008
EPRINT
Information Theoretic Evaluation of Side-Channel Resistant Logic Styles
We propose to apply an information theoretic metric to the evaluation of side-channel resistant logic styles. Due to the long design and development time required for the physical evaluation of such hardware countermeasures, our analysis is based on simulations. Although they do not aim to replace the need of actual measurements, we show that simulations can be used as a meaningful first step in the validation chain of a cryptographic product. For illustration purposes, we apply our methodology to gate-level simulations of different logic styles and stress that it allows a significant improvement of the previously considered evaluation methods. In particular, our results allow putting forward the respective strengths and weaknesses of actual countermeasures and determining to which extent they can practically lead to secure implementations (with respect to a noise parameter), if adversaries were provided with simulation-based side-channel traces. Most importantly, the proposed methodology can be straightforwardly adapted to adversaries provided with any other kind of leakage traces (including physical ones).
2008
EPRINT
Information-Theoretically Secure Voting Without an Honest Majority
We present three voting protocols with unconditional privacy and information-theoretic correctness, without assuming any bound on the number of corrupt voters or voting authorities. All protocols have polynomial complexity and require private channels and a simultaneous broadcast channel. Our first protocol is a basic voting scheme which allows voters to interact in order to compute the tally. Privacy of the ballot is unconditional, but any voter can cause the protocol to fail, in which case information about the tally may nevertheless transpire. Our second protocol introduces voting authorities which allow the implementation of the first protocol, while reducing the interaction and limiting it to be only between voters and authorities and among the authorities themselves. The simultaneous broadcast is also limited to the authorities. As long as a single authority is honest, the privacy is unconditional, however, a single corrupt authority or a single corrupt voter can cause the protocol to fail. Our final protocol provides a safeguard against corrupt voters by enabling a verification technique to allow the authorities to revoke incorrect votes. We also discuss the implementation of a simultaneous broadcast channel with the use of temporary computational assumptions, yielding versions of our protocols achieving everlasting security.
2008
EPRINT
Infringing and Improving Password Security of a Three-Party Key Exchange Protocol
Key exchange protocols allow two or more parties communicating over a public network to establish a common secret key called a session key. Due to their significance in building a secure communication channel, a number of key exchange protocols have been suggested over the years for a variety of settings. Among these is the so-called S-3PAKE protocol proposed by Lu and Cao for password-authenticated key exchange in the three-party setting. In the current work, we are concerned with the password security of the S-3PAKE protocol. We first show that S-3PAKE is vulnerable to an off-line dictionary attack in which an attacker exhaustively enumerates all possible passwords in an attempt to determine the correct one. We then figure out how to eliminate the security vulnerability of S-3PAKE.
2008
EPRINT
Inside the Hypercube
Bernstein's CubeHash is a hash function family that includes four functions submitted to the NIST Hash Competition. A CubeHash function is parametrized by a number of rounds r, a block byte size b, and a digest bit length h (the compression function makes r rounds, while the finalization function makes 10r rounds). The 1024-bit internal state of CubeHash is represented as a five-dimensional hypercube. The submissions to NIST recommend r=8, b=1, and h in {224,256,384,512}. This paper presents the first external analysis of CubeHash, with: improved standard generic attacks for collisions and preimages; a multicollision attack that exploits fixed points; a study of the round function symmetries; a preimage attack that exploits these symmetries; a practical collision attack on a weakened version of CubeHash; a study of fixed points and an example of a nontrivial fixed point; and high-probability truncated differentials over 10 rounds. Since the first publication of these results, several collision attacks for reduced versions of CubeHash were published by Dai, Peyrin, et al. Our results are more general, since they apply to any choice of the parameters, and show intrinsic properties of the CubeHash design, rather than attacks on specific versions.
2008
EPRINT
Investigating the DPA-Resistance Property of Charge Recovery Logics
The threat of DPA attacks is of crucial importance when designing cryptographic hardware. As a result, several DPA countermeasures at the cell level have been proposed in recent years, but none of them offers perfect protection against DPA attacks. Moreover, all of these DPA-resistant logic styles increase the power consumption and the area consumption significantly. On the other hand, there are logic styles which provide less power dissipation (so-called charge recovery logic) that can be considered as a DPA countermeasure. In this article we examine them from the DPA-resistance point of view. As an example of charge recovery logic styles, 2N-2N2P is evaluated. It is shown that the usage of this logic style leads to an improvement of the DPA-resistance and at the same time reduces the energy consumption, which makes it especially suitable for pervasive devices. In fact, it is the first time that a proposed DPA-resistant logic style consumes less power than the corresponding standard CMOS circuit.
2008
EPRINT
Iterative Probabilistic Reconstruction of RC4 Internal States
It is shown that an improved version of a previously proposed iterative probabilistic algorithm, based on forward and backward probability recursions along a short keystream segment, is capable of reconstructing the RC4 internal states from a relatively small number of known initial permutation entries. Given a modulus $N$, it is argued that about $N/3$ and $N/10$ known entries are sufficient for success, for consecutive and specially generated entries, respectively. The complexities of the corresponding guess-and-determine attacks are analyzed and, e.g., for $N=256$, the data and time complexities are (conservatively) estimated to be around $D \approx 2^{41}$, $C \approx 2^{689}$ and $D \approx 2^{211}$, $C \approx 2^{262}$, for the two types of guessed entries considered, respectively.
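For reference, standard RC4 key scheduling and keystream generation, included only to make concrete what "internal state" means in the abstract: the N-entry permutation S plus the two indices i and j that the reconstruction algorithm targets. The key below is arbitrary.

```python
def rc4_keystream(key: bytes, n: int, N: int = 256):
    S = list(range(N))
    j = 0
    for i in range(N):                       # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) % N
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(n):                       # pseudo-random generation algorithm (PRGA)
        i = (i + 1) % N
        j = (j + S[i]) % N
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % N])
    return out

print(rc4_keystream(b"Key", 8))              # first 8 keystream bytes for a toy key
```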
2008
EPRINT
Joint State Theorems for Public-Key Encryption and Digital Signature Functionalities with Local Computation
Composition theorems in simulation-based approaches allow complex protocols to be built from sub-protocols in a modular way. However, as first pointed out and studied by Canetti and Rabin, this modular approach often leads to impractical implementations. For example, when using a functionality for digital signatures within a more complex protocol, parties have to generate new verification and signing keys for every session of the protocol. This motivates generalizing composition theorems to so-called joint state theorems, where different copies of a functionality may share some state, e.g., the same verification and signing keys. In this paper, we present a joint state theorem which is more general than the original theorem of Canetti and Rabin, for which several problems and limitations are pointed out. We apply our theorem to obtain joint state realizations for three functionalities: public-key encryption, replayable public-key encryption, and digital signatures. Unlike most other formulations, our functionalities model that ciphertexts and signatures are computed locally, rather than being provided by the adversary. To obtain the joint state realizations, the functionalities have to be designed carefully. Other formulations are shown to be unsuitable. Our work is based on a recently proposed, rigorous model for simulation-based security by K{\"u}sters, called the IITM model. Our definitions and results demonstrate the expressivity and simplicity of this model. For example, unlike Canetti's UC model, in the IITM model no explicit joint state operator needs to be defined and the joint state theorem follows immediately from the composition theorem in the IITM model.
2008
EPRINT
Key-Private Proxy Re-Encryption
Proxy re-encryption (PRE) allows a proxy to convert a ciphertext encrypted under one key into an encryption of the same message under another key. The main idea is to place as little trust and reveal as little information to the proxy as necessary to allow it to perform its translations. At the very least, the proxy should not be able to learn the keys of the participants or the content of the messages it re-encrypts. However, in all prior PRE schemes, it is easy for the proxy to determine between which participants a re-encryption key can transform ciphertexts. This can be a problem in practice. For example, in a secure distributed file system, content owners may want to use the proxy to help re-encrypt sensitive information *without* revealing to the proxy the *identity* of the recipients. In this work, we propose key-private (or anonymous) re-encryption keys as an additional useful property of PRE schemes. We formulate a definition of what it means for a PRE scheme to be secure and key-private. Surprisingly, we show that this property is not captured by prior definitions or achieved by prior schemes, including even the secure *obfuscation* of PRE by Hohenberger, Rothblum, shelat and Vaikuntanathan (TCC 2007). Finally, we propose the first key-private PRE construction and prove its security under a simple extension of the Decisional Bilinear Diffie Hellman assumption and its key-privacy under the Decision Linear assumption in the standard model.
2008
EPRINT
Knapsack cryptosystems built on NP-hard instances
We construct three public key knapsack cryptosystems. Standard knapsack cryptosystems hide easy instances of the knapsack problem and have been broken. The systems considered in this article face this problem: they hide a random (possibly hard) instance of the knapsack problem. We provide both complexity results (size of the key, time needed to encipher/decipher...) and experimental results. Security results are given for the second cryptosystem (the fastest one and the one with the shortest key). Probabilistic polynomial reductions show that finding the private key is as difficult as factoring a product of two primes. We also consider heuristic attacks. First, the density of the cryptosystem can be chosen arbitrarily close to one, discarding low density attacks. Finally, we consider explicit heuristic attacks based on the LLL algorithm and we prove that with respect to these attacks, the public key is as secure as a random key.
2008
EPRINT
Leakage-Resilient Cryptography in the Standard Model
We construct a stream-cipher $\mathsf{SC}$ whose \emph{implementation} is secure even if arbitrary (adversarially chosen) information on the internal state of $\mathsf{SC}$ is leaked during computation. This captures \emph{all} possible side-channel attacks on $\mathsf{SC}$ where the amount of information leaked in a given period is bounded, but overall can be arbitrarily large, in particular much larger than the internal state of $\mathsf{SC}$. The only other assumption we make on the \emph{implementation} of $\mathsf{SC}$ is that only data that is accessed during computation leaks information. The construction can be based on any pseudorandom generator, and the only computational assumption we make is that this PRG is secure against non-uniform adversaries in the classical sense (i.e. when there are no side-channels). The stream-cipher $\mathsf{SC}$ generates its output in chunks $K_1,K_2,\ldots$, and arbitrary but bounded information leakage is modeled by allowing the adversary to adaptively choose a function $f_\ell:\{0,1\}^*\rightarrow\{0,1\}^\lambda$ before $K_\ell$ is computed; she then gets $f_\ell(\tau_\ell)$ where $\tau_\ell$ is the internal state of $\mathsf{SC}$ that is accessed during the computation of $K_\ell$. One notion of security we prove for $\mathsf{SC}$ is that $K_\ell$ is indistinguishable from random when given $K_1,\ldots,K_{\ell-1}$, $f_1(\tau_1),\ldots, f_{\ell-1}(\tau_{\ell-1})$ and also the complete internal state of $\mathsf{SC}$ after $K_{\ell}$ has been computed (i.e. our cipher is forward-secure). The construction is based on alternating extraction (previously used in the intrusion-resilient secret-sharing scheme from FOCS'07). We move this concept to the computational setting by proving a lemma that states that the output of any PRG has high HILL pseudoentropy (i.e. is indistinguishable from some distribution with high min-entropy) even if arbitrary information about the seed is leaked. The amount of leakage $\lambda$ that we can tolerate in each step depends on the strength of the underlying PRG; it is at least logarithmic, but can be as large as a constant fraction of the internal state of $\mathsf{SC}$ if the PRG is exponentially hard.
2008
EPRINT
LEGO for Two Party Secure Computation
The first and still most popular solution for secure two-party computation relies on Yao's garbled circuits. Unfortunately, Yao's construction provides security only against passive adversaries. Several constructions (zero-knowledge compiler, cut-and-choose) are known that provide security against active adversaries, but most of them are not efficient enough to be considered practical. In this paper we propose a new approach called LEGO (Large Efficient Garbled-circuit Optimization) for two-party computation, which allows the construction of more efficient protocols secure against active adversaries. The basic idea is the following: Alice constructs and provides Bob with a set of garbled NAND gates. A fraction of them is checked by Alice giving Bob the randomness used to construct them. When the check goes through, with overwhelming probability there are very few bad gates among the non-checked gates. Bob permutes these gates and connects them into a Yao circuit, according to a fault-tolerant circuit design which computes the desired function even in the presence of a few random faulty gates. Finally he evaluates this Yao circuit in the usual way. For large circuits, our protocol offers better performance than any other existing protocol. The protocol is universally composable (UC) in the OT-hybrid model.
2008
EPRINT
Linear and Differential Cryptanalysis of Reduced SMS4 Block Cipher
SMS4 is a 128-bit block cipher with a 128-bit user key and 32 rounds, which is used in WAPI, the Chinese WLAN national standard. In this paper, we present a linear attack and a differential attack on 22-round reduced SMS4; our 22-round linear attack has a data complexity of 2^{117} known plaintexts, a memory complexity of 2^{109} bytes and a time complexity of 2^{109.86} 22-round SMS4 encryptions and 2^{120.39} arithmetic operations, while our 22-round differential attack requires 2^{118} chosen plaintexts, 2^{123} memory bytes and 2^{125.71} 22-round SMS4 encryptions. Both of our attacks are better than any previously known cryptanalytic results on SMS4 in terms of the number of attacked rounds. Furthermore, we present boomerang and rectangle attacks on 18-round reduced SMS4. These results are better than previously known rectangle attacks on reduced SMS4. The methods presented to attack SMS4 can be applied to other unbalanced Feistel ciphers with incomplete diffusion.
2008
EPRINT
Linear Bandwidth Naccache-Stern Encryption
The Naccache-Stern (NS) knapsack cryptosystem is an original yet little-known public-key encryption scheme. In this scheme, the ciphertext is obtained by multiplying public-keys indexed by the message bits modulo a prime $p$. The cleartext is recovered by factoring the ciphertext raised to a secret power modulo $p$. NS encryption requires a multiplication per two plaintext bits on the average. Decryption is roughly as costly as an RSA decryption. However, NS features a bandwidth sublinear in $\log p$, namely $\log p/\log \log p$. As an example, for a $2048$-bit prime $p$, NS encryption features a 233-bit bandwidth for a 59-kilobyte public key size. This paper presents new NS variants achieving bandwidths {\sl linear} in $\log p$. As linear bandwidth requires a public key of size $\log^3 p/\log \log p$, we recommend combining our scheme with other bandwidth optimization techniques presented here. For a $2048$-bit prime $p$, we obtain figures such as a 169-bit plaintext for a 10-kilobyte public key, a 255-bit plaintext for a 20-kilobyte public key or a 781-bit plaintext for a 512-kilobyte public key. Encryption and decryption remain unaffected by our optimizations: as an example, the 781-bit variant requires 152 multiplications per encryption.
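A toy instantiation of the original NS mechanism the abstract describes (encryption multiplies bit-indexed public keys modulo p; decryption raises to the secret power and factors over the small primes). The parameters below are tiny illustrative assumptions, nowhere near the 2048-bit setting discussed.

```python
from math import gcd, prod

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

small_primes = [2, 3, 5, 7, 11, 13, 17, 19]       # one small prime per message bit
p = prod(small_primes) + 1                        # need p > product of all small primes
while not is_prime(p):
    p += 1
s = 5                                             # secret exponent with gcd(s, p-1) = 1
while gcd(s, p - 1) != 1:
    s += 2
pk = [pow(v, pow(s, -1, p - 1), p) for v in small_primes]   # public key: v_i^(1/s) mod p

msg = [1, 0, 1, 1, 0, 0, 1, 0]
c = 1
for m, k in zip(msg, pk):                         # encrypt: multiply the selected keys mod p
    if m:
        c = c * k % p
t = pow(c, s, p)                                  # decrypt: c^s mod p = product of chosen v_i
recovered = [1 if t % v == 0 else 0 for v in small_primes]
assert recovered == msg
print(p, c, recovered)
```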
2008
EPRINT
Local Affinity Based Inversion of Filter Generators
We propose a novel efficient cryptanalytic technique allowing an adversary to recover an initial state of filter generator given its output sequence. The technique is applicable to filter generators possessing local affinity property.
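To fix the object being inverted, a toy filter generator: an LFSR whose state is filtered through a nonlinear Boolean function at each clock. The register length, tap positions, and filter function below are arbitrary examples and are not claimed to have the local affinity property the attack requires.

```python
def filter_generator(state, taps, filter_positions, f, n_bits):
    """state: list of bits, taps/filter_positions: index lists, f: Boolean filter."""
    out = []
    for _ in range(n_bits):
        out.append(f(*(state[i] for i in filter_positions)))   # filtered output bit
        fb = 0
        for t in taps:                 # linear feedback
            fb ^= state[t]
        state = [fb] + state[:-1]      # shift the register
    return out

f = lambda a, b, c: (a & b) ^ c        # example nonlinear filtering function
print(filter_generator([1, 0, 0, 1, 0, 1, 1],
                       taps=[0, 6], filter_positions=[0, 2, 5], f=f, n_bits=16))
```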
2008
EPRINT
Lower Bounds on Black-Box Ring Extraction
The black-box ring extraction problem is the problem of extracting a secret ring element from a black-box by performing only the ring operations and testing for equality of elements. An efficient algorithm for the black-box ring extraction problem implies the equivalence of the discrete logarithm and the Diffie-Hellman problem. At the same time this implies the inexistence of secure ring-homomorphic encryption schemes. It is unknown whether the known algorithms for the black-box ring extraction problem are optimal. In this paper we derive exponential-time lower complexity bounds for a large class of rings satisfying certain conditions. The existence of these bounds is surprising, having in mind that there are subexponential-time algorithms for certain rings which are very similar to the rings considered in this work. In addition, we introduce a novel technique to reduce the problem of factoring integers to the black-box ring extraction problem, extending previous work to a more general class of algorithms and obtaining a much tighter reduction.
2008
EPRINT
Lower Bounds on Signatures From Symmetric Primitives
We show that every construction of one-time signature schemes from a random oracle achieves black-box security at most 2^{(1+o(1))q}, where q is the total number of oracle queries asked by the key generation, signing, and verification algorithms. That is, any such scheme can be broken with probability close to 1 by a (computationally unbounded) adversary making 2^{(1+o(1))q} queries to the oracle. This is tight up to a constant factor in the number of queries, since a simple modification of Lamport's one-time signatures (Lamport '79) achieves 2^{(0.812-o(1))q} black-box security using q queries to the oracle. Our result extends (with a loss of a constant factor in the number of queries) also to the random permutation and ideal-cipher oracles. Since symmetric primitives (e.g. block ciphers, hash functions, and message authentication codes) can be constructed by a constant number of queries to the mentioned oracles, as a corollary we get lower bounds on the efficiency of signature schemes from symmetric primitives when the construction is black-box. This can be taken as evidence of an inherent efficiency gap between signature schemes and symmetric primitives.
2008
EPRINT
Machine Learning Attacks Against the ASIRRA CAPTCHA
The ASIRRA CAPTCHA [EDHS2007], recently proposed at ACM CCS 2007, relies on the problem of distinguishing images of cats and dogs (a task that humans are very good at). The security of ASIRRA is based on the presumed difficulty of classifying these images automatically. In this paper, we describe a classifier which is 82.7% accurate in telling apart the images of cats and dogs used in ASIRRA. This classifier is a combination of support-vector machine classifiers trained on color and texture features extracted from images. Our classifier allows us to solve a 12-image ASIRRA challenge automatically with probability 10.3%. This probability of success is significantly higher than the estimate given in [EDHS2007] for machine vision attacks. The weakness we expose in the current implementation of ASIRRA does not mean that ASIRRA cannot be deployed securely. With appropriate safeguards, we believe that ASIRRA offers an appealing balance between usability and security. One contribution of this work is to inform the choice of safeguard parameters in ASIRRA deployments.
2008
EPRINT
Maximizing data survival in Unattended Wireless Sensor Networks against a focused mobile adversary
Some sensor network settings involve disconnected or unattended operation with periodic visits by a mobile sink. An unattended sensor network operating in a hostile environment can collect data that represents a high-value target for the adversary. Since an unattended sensor can not immediately off-load sensed data to a safe external entity (such as a sink), the adversary can easily mount a focused attack aiming to erase or modify target data. To maximize chances of data survival, sensors must collaboratively attempt to mislead the adversary and hide the location, the origin and the contents of collected data. In this paper, we focus on applications of well-known security techniques to maximize chances of data survival in unattended sensor networks, where sensed data can not be off-loaded to a sink in real time. Our investigation yields some interesting insights and surprising results. The highlights of our work are: (1) thorough exploration of the data survival challenge, (2) exploration of the design space for possible solutions, (3) construction of several practical and effective techniques, and (4) their evaluation.
2008
EPRINT
Merkle Puzzles are Optimal
We prove that every key exchange protocol in the random oracle model in which the honest users make at most n queries to the oracle can be broken by an adversary making O(n^2) queries to the oracle. This improves on the previous Omega(n^6) query attack given by Impagliazzo and Rudich (STOC '89). Our bound is optimal up to a constant factor since Merkle (CACM '78) gave an n query key exchange protocol in this model that cannot be broken by an adversary making o(n^2) queries.
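A toy rendering of the Merkle puzzle protocol this bound refers to: Alice publishes n puzzles that each take about n work to solve, Bob solves one and announces its id, so each honest party spends O(n) work while a passive eavesdropper expects to solve about n/2 puzzles, i.e. Theta(n^2) work. The hash-based "puzzle" encoding and all names below are illustrative assumptions.

```python
import hashlib, random

def prf(seed: bytes, label: bytes, length: int) -> bytes:
    return hashlib.sha256(seed + label).digest()[:length]

n = 2**8                                     # toy security parameter

# Alice: each puzzle hides (id, key) behind a secret drawn from a space of size n.
checks, table = [], {}
for _ in range(n):
    secret = random.randrange(n).to_bytes(2, "big")
    checks.append(prf(secret, b"check", 8))                     # published puzzle
    table[prf(secret, b"id", 8)] = prf(secret, b"key", 16)      # Alice's lookup table

# Bob: pick one puzzle and brute-force its size-n secret space (~n work).
target = random.choice(checks)
for guess in range(n):
    secret = guess.to_bytes(2, "big")
    if prf(secret, b"check", 8) == target:
        bob_id, bob_key = prf(secret, b"id", 8), prf(secret, b"key", 16)
        break

# Bob announces bob_id in the clear; Alice looks up the shared key.
assert table[bob_id] == bob_key
print("shared key established:", bob_key.hex())
```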
2008
EPRINT
Modeling Computational Security in Long-Lived Systems, Version 2
For many cryptographic protocols, security relies on the assumption that adversarial entities have limited computational power. This type of security degrades progressively over the lifetime of a protocol. However, some cryptographic services, such as timestamping services or digital archives, are long-lived in nature; they are expected to be secure and operational for a very long time (i.e. super-polynomial). In such cases, security cannot be guaranteed in the traditional sense: a computationally secure protocol may become insecure if the attacker has a super-polynomial number of interactions with the protocol. This paper proposes a new paradigm for the analysis of long-lived security protocols. We allow entities to be active for a potentially unbounded amount of real time, provided they perform only a polynomial amount of work per unit of real time. Moreover, the space used by these entities is allocated dynamically and must be polynomially bounded. We propose a new notion of long-term implementation, which is an adaptation of computational indistinguishability to the long-lived setting. We show that long-term implementation is preserved under polynomial parallel composition and exponential sequential composition. We illustrate the use of this new paradigm by analyzing some security properties of the long-lived timestamping protocol of Haber and Kamat.
2008
EPRINT
Modified Huang-Wang's Convertible Nominative Signature Scheme
At ACISP 2004, Huang and Wang first introduced the concept of convertible nominative signatures and also proposed a concrete scheme. However, it was pointed out by many works that Huang-Wang's scheme is in fact not a nominative signature. In this paper, we first present a security model for convertible nominative signatures. The properties of Unforgeability, Invisibility, Non-impersonation and Non-repudiation in the setting of convertible nominative signatures are defined formally. Then we modify Huang-Wang's scheme into a secure one. Formal proofs are provided to show that the modified Huang-Wang's scheme satisfies all the security properties under some conventional assumptions in the random oracle model.
2008
EPRINT
Modular polynomials for genus 2
Modular polynomials are an important tool in many algorithms involving elliptic curves. In this article we generalize this concept to the genus 2 case. We give the theoretical framework describing the genus 2 modular polynomials and discuss how to explicitly compute them.
2008
EPRINT
More Discriminants with the Brezing-Weng Method
The Brezing-Weng method is a general framework to generate families of pairing-friendly elliptic curves. Here, we introduce an improvement which can be used to generate more curves with larger discriminants. Apart from the number of curves this yields, it provides an easy way to avoid endomorphism rings with small class number.
2008
EPRINT
Multi-Factor Password-Authenticated Key Exchange
We consider a new form of authenticated key exchange which we call multi-factor password-authenticated key exchange, where session establishment depends on successful authentication of multiple short secrets that are complementary in nature, such as a long-term password and a one-time response, allowing the client and server to be mutually assured of each other's identity without directly disclosing private information to the other party. Multi-factor authentication can provide an enhanced level of assurance in higher security scenarios such as online banking, virtual private network access, and physical access because a multi-factor protocol is designed to remain secure even if all but one of the factors has been compromised. We introduce the first formal security model for multi-factor password-authenticated key exchange protocols, propose an efficient and secure protocol called MFPAK, and provide a formal argument to show that our protocol is secure in this model. Our security model is an extension of the Bellare-Pointcheval-Rogaway security model for password-authenticated key exchange and the formal analysis proceeds in the random oracle model.
2008
EPRINT
Multi-PKG ID based signcryption
Here we propose an identity based signcryption scheme in the multi-PKG environment, where the sender and receiver obtain their public keys from different PKGs. We also define security models for our scheme and give security proofs in the random oracle model.
2008
EPRINT
Multi-Recipient Signcryption for Secure Wireless Group Communication
Secure group communication is significant for wireless and mobile computing. Overheads can be reduced efficiently when a sender sends multiple messages to multiple recipients using multi-recipient signcryption schemes. In this paper, we propose the formal definition and security model of multi-recipient signcryption, present the definition of reproducible signcryption and propose security theorems for randomness-reusing based multi-recipient signcryption schemes. We find that a secure reproducible signcryption scheme can be used to construct an efficient multi-recipient signcryption scheme which has the same security level as the underlying base signcryption scheme. We construct a multi-recipient scheme, based on a new BLS-type signcryption scheme, which is provably secure in the random oracle model assuming that the GDH problem is hard. Overheads of the new scheme are only (n+1)/(2n) times those of traditional approaches when a sender sends different messages to n distinct recipients. It is more efficient than other known schemes. It creates a possibility for the practice of public key cryptosystems in ubiquitous computing.
2008
EPRINT
Multiparty Computation Goes Live
In this note, we briefly report on the first large-scale and practical application of multiparty computation, which took place in January 2008.
2008
EPRINT
New AES software speed records
This paper presents new speed records for AES software, taking advantage of (1) architecture-dependent reduction of instructions used to compute AES and (2) microarchitecture-dependent reduction of cycles used for those instructions. A wide variety of common CPU architectures---amd64, ppc32, sparcv9, and x86---are discussed in detail, along with several specific microarchitectures.
2008
EPRINT
New Applications of Differential Bounds of the SDS Structure
In this paper, we present some new applications of the bounds for the differential probability of an SDS (Substitution-Diffusion-Substitution) structure by Park et al. at FSE 2003. Park et al. applied their result to the AES cipher, which uses the SDS structure based on MDS matrices. We shall apply their result to practical ciphers that use SDS structures based on $n \times n$ {0,1}-matrices. These structures are useful because they can be efficiently implemented in hardware. We prove a bound on {0,1}-matrices to show that they cannot be MDS and are almost-MDS only when n=2, 3 or 4. Thus we have to apply Park's result whenever {0,1}-matrices with $n \geq 5$ are used, because previous results only hold for MDS and almost-MDS diffusion matrices. Based on our bound, we also show that the {0,1}-matrix used in E2 is almost-optimal among {0,1}-matrices. Using Park's result, we prove differential bounds for E2 and an MCrypton-like cipher, from which we can deduce their security against the boomerang attack and some of its variants. At ICCSA 2006, Khoo and Heng constructed block cipher-based universal hash functions, from which they derived Message Authentication Codes (MACs) which are faster than CBC-MAC. Park's result provides us with the means to obtain a more accurate bound for their universal hash function. With this bound, we can restrict the number of MACs performed before a change of MAC key is needed.
2008
EPRINT
New attacks on ISO key establishment protocols
Cheng and Comley demonstrated type flaw attacks against the key establishment mechanism 12 standardized in ISO/IEC 11770-2:1996. They also proposed enhancements to fix the security flaws in the mechanism. We show that the enhanced version proposed by Cheng and Comley is still vulnerable to type flaw attacks. As well we show that the key establishment mechanism 13 in the above standard is vulnerable to a type flaw attack.
2008
EPRINT
New balanced Boolean functions satisfying all the main cryptographic criteria
We study an infinite class of functions which provably achieve an optimum algebraic immunity, an optimum algebraic degree and a good nonlinearity. We checked that it has also a good behavior against fast algebraic attacks.
2008
EPRINT
New Collision attacks Against Up To 24-step SHA-2
In this work, we provide new and improved attacks against 22, 23 and 24-step SHA-2 family using a local collision given by Sanadhya and Sarkar (SS) at ACISP '08. The success probability of our 22-step attack is 1 for both SHA-256 and SHA-512. The computational efforts for the 23-step and 24-step SHA-256 attacks are respectively $2^{11.5}$ and $2^{28.5}$ calls to the corresponding step reduced SHA-256. The corresponding values for the 23 and 24-step SHA-512 attack are respectively $2^{16.5}$ and $2^{32.5}$ calls. Using a look-up table having $2^{32}$ (resp. $2^{64}$) entries the computational effort for finding 24-step SHA-256 (resp. SHA-512) collisions can be reduced to $2^{15.5}$ (resp. $2^{22.5}$) calls. We exhibit colliding message pairs for 22, 23 and 24-step SHA-256 and SHA-512. This is the \emph{first} time that a colliding message pair for 24-step SHA-512 is provided. The previous work on 23 and 24-step SHA-2 attacks is due to Indesteege et al. and utilizes the local collision presented by Nikoli\'{c} and Biryukov (NB) at FSE '08. The reported computational efforts are $2^{18}$ and $2^{28.5}$ for 23 and 24-step SHA-256 respectively and $2^{43.9}$ and $2^{53}$ for 23 and 24-step SHA-512. The previous 23 and 24-step attacks first constructed a pseudo-collision and later converted it into a collision for the reduced round SHA-2 family. We show that this two step procedure is unnecessary. Although these attacks improve upon the existing reduced round SHA-2 attacks, they do not threaten the security of the full SHA-2 family.
2008
EPRINT
New Composite Operations and Precomputation Scheme for Elliptic Curve Cryptosystems over Prime Fields (full version)
We present a new methodology to derive faster composite operations of the form dP+Q, where d is a small integer >= 2, for generic ECC scalar multiplications over prime fields. In particular, we present an efficient Doubling-Addition (DA) operation that can be exploited to accelerate most scalar multiplication methods, including multiscalar variants. We also present a new precomputation scheme useful for window-based scalar multiplications that is shown to achieve the lowest cost among all known methods using only one inversion. In comparison to the remaining approaches, which use either no inversions or several, our scheme offers higher performance for most common I/M ratios. By combining the benefits of our precomputation scheme and the new DA operation, we can save up to 6.2% in the scalar multiplication using fractional wNAF.
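As a rough illustration of where a composite dP+Q operation plugs into a left-to-right scalar multiplication, here is a sketch of ours (not the paper's formulae). To stay self-contained it uses the multiplicative group modulo a prime as a stand-in for the elliptic-curve group, so doubling is squaring and addition is multiplication; the fused double_add below is written naively, whereas the point of the paper's DA operation is an elliptic-curve formula for 2P+Q that is cheaper than a doubling followed by an addition.

```python
p = 2**127 - 1           # illustrative prime; the group (Z/p)^* stands in for the curve group

def double(x):           # plays the role of point doubling 2P
    return (x * x) % p

def add(x, y):           # plays the role of point addition P + Q
    return (x * y) % p

def double_add(x, y):    # composite 2P + Q; the paper fuses this into one cheaper formula
    return add(double(x), y)

def naf(k):
    """Non-adjacent form of k, most significant digit first, digits in {-1, 0, 1}."""
    digits = []
    while k > 0:
        if k & 1:
            d = 2 - (k % 4)
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits[::-1]

def scalar_mul(k, P):
    """Left-to-right NAF scalar multiplication built around the fused double-add."""
    neg_P = pow(P, -1, p)            # stand-in for the negated point -P
    R = 1                            # group identity
    for d in naf(k):
        if d == 0:
            R = double(R)
        else:                        # one fused 2R +/- P instead of a double then an add
            R = double_add(R, P if d == 1 else neg_P)
    return R

assert scalar_mul(123456789, 3) == pow(3, 123456789, p)
```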
2008
EPRINT
New construction of Boolean functions with maximum algebraic immunity
Because of algebraic attacks, a high algebraic immunity is now an important criterion for Boolean functions used in stream ciphers. In this paper, by using the relationship between some flats and the support of an $n$-variable Boolean function $f$, we introduce a general method to determine the algebraic immunity of a Boolean function and finally construct some balanced functions with optimum algebraic immunity.
2008
EPRINT
New Differential-Algebraic Attacks and Reparametrization of Rainbow
A recently proposed class of multivariate quadratic schemes, the Rainbow-like signature schemes, in which successive sets of central variables are obtained from previous ones by solving linear equations, seems to lead to efficient schemes (TTS, TRMS, and Rainbow) that perform well on systems with low computational resources. Recently SFLASH ($C^{\ast-}$) was broken by Dubois, Fouque, Shamir, and Stern via a differential attack. In this paper, we exhibit similar attacks based on differentials that reduce published Rainbow-like schemes below their claimed security levels. We also present a new type of construction of Rainbow-like schemes and design signature schemes with new parameters for practical applications.
2008
EPRINT
New Directions in Cryptanalysis of Self-synchronizing Stream Ciphers
In cryptology we commonly face the problem of finding an unknown key K from the output of an easily computable keyed function F(C,K), where the attacker has the power to choose the input parameter C. In this work we focus on self-synchronizing stream ciphers. First we show how to model these primitives in the above-mentioned general problem by relating appropriate functions F to the underlying ciphers. Then we apply the framework recently proposed at AfricaCrypt '08 [4] (for dealing with this kind of problem) to the T-function-based self-synchronizing stream cipher proposed by Klimov and Shamir at FSE '05 [5] and show how to deduce some non-trivial information about the key. We also open a new window for answering an open question raised in [4].
2008
EPRINT
New ID-based Fair Blind Signatures
A blind signature is a cryptographic primitive in which a user can obtain a signature from the signer without revealing any information about the message-signature pair. Blind signatures are used in electronic payment systems, electronic voting machines, etc. The anonymity can be misused by criminals for money laundering or for moving dubious money. To prevent these crimes, the idea of a fair blind signature scheme was given by Stadler et al. In a fair blind signature scheme, there is a trusted third party, the judge, who can provide a linking protocol to the signer to link his view to the message-signature pair. In this paper we propose some identity-based fair blind signature schemes.
2008
EPRINT
New Impossible Differential Attacks on AES
In this paper we apply impossible differential attacks to reduced round AES. Using various techniques, including the early abort approach and key schedule considerations, we significantly improve previously known attacks due to Bahrak-Aref and Phan. The improvement of these attacks leads to the best known impossible differential attacks on 7-round AES-128 and AES-192, as well as to the best known impossible differential attacks on 8-round AES-256.
2008
EPRINT
New Impossible Differential Cryptanalysis of ARIA
This paper studies the security of ARIA against impossible differential cryptanalysis. Firstly, an algorithm is given to find many new 4-round impossible differentials of ARIA. Using these impossible differentials, we improve the previous impossible differential attacks on 5/6-round ARIA. We also point out that the existence of such impossible differentials is due to the bad properties of the binary matrix employed in the diffusion layer.
2008
EPRINT
New Multibase Non-Adjacent Form Scalar Multiplication and its Application to Elliptic Curve Cryptosystems (extended version)
In this paper we present a new method for scalar multiplication that uses a generic multibase representation to reduce the number of required operations. Further, a multibase NAF-like algorithm that efficiently converts numbers to such a representation without impacting memory or speed performance is developed and shown to be sublinear in terms of the number of nonzero terms. Additional representation reductions are discussed with the introduction of window-based variants that use an extended set of precomputations. To realize the proposed multibase scalar multiplication with or without precomputations in the setting of Elliptic Curve Cryptosystems (ECC) over prime fields, we also present a methodology to derive fast composite operations such as tripling or quintupling of a point that require less memory than previous point formulae. Point operations are then protected against simple side-channel attacks using a highly efficient atomic structure. Extensive testing is carried out to show that our multibase scalar multiplication is the fastest method to date in the setting of ECC and exhibits a small footprint, which makes it ideal for implementation on constrained devices.
2008
EPRINT
New proofs for old modes
We study the standard block cipher modes of operation: CBC, CFB, and OFB and analyse their security. We don't look at ECB other than briefly to note its insecurity, and we have no new results on counter mode. Our results improve over those previously published in that (a) our bounds are better, (b) our proofs are shorter and easier, (c) the proofs correct errors we discovered in previous work, or some combination of these. We provide a new security notion for symmetric encryption which turns out to be rather useful when analysing block cipher modes. Finally, we pay attention to different methods for selecting initialization vectors for the block cipher modes, and prove security for a number of different selection policies. In particular, we introduce the concept of a `generalized counter', and prove that generalized counters suffice for security in (full-width) CFB and OFB modes and that generalized counters encrypted using the block cipher (with the same key) suffice for all three modes.
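As a small illustration of the last point, the sketch below (ours; the toy keyed function is only a stand-in for a real block cipher and has no security) runs OFB mode with an "encrypted counter" IV policy, i.e. the i-th IV is $E_K(\mathrm{ctr}_i)$, which is one flavour of the IV-selection policies the abstract discusses.

```python
import hashlib

def toy_E(key: bytes, block: bytes) -> bytes:
    """16-byte keyed toy function standing in for a block cipher (NOT a real PRP)."""
    return hashlib.sha256(key + block).digest()[:16]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def ofb_encrypt(key: bytes, iv: bytes, msg: bytes) -> bytes:
    """OFB mode: the keystream is the iterated encryption of the IV."""
    out, state = b"", iv
    for i in range(0, len(msg), 16):
        state = toy_E(key, state)
        out += xor(msg[i:i + 16], state)
    return out

def encrypted_counter_iv(key: bytes, ctr: int) -> bytes:
    """'Encrypted counter' IV policy: IV_i = E_K(ctr_i)."""
    return toy_E(key, ctr.to_bytes(16, "big"))

key = b"k" * 16
msg = b"sixteen byte blk" * 2
ct = ofb_encrypt(key, encrypted_counter_iv(key, 0), msg)
# In OFB, decryption is the same operation as encryption.
assert ofb_encrypt(key, encrypted_counter_iv(key, 0), ct) == msg
```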
2008
EPRINT
New Related-Key Boomerang Attacks on AES
In this paper we present two new attacks on round-reduced versions of the AES. We present the first application of the related-key boomerang attack to 7 and 9 rounds of AES-192. The 7-round attack requires only 2^{18} chosen plaintexts and ciphertexts and needs 2^{67.5} encryptions. We extend our attack to nine rounds of AES-192. This leads to a data complexity of 2^{67} chosen plaintexts and ciphertexts using about 2^{143.33} encryptions to break 9 rounds of AES-192.
2008
EPRINT
New Results on Unconditionally Secure Multireceiver Manual Authentication
Manual authentication is a recently proposed model of communication motivated by settings where the only trusted infrastructure is a low bandwidth authenticated channel, possibly realized by the aid of a human, that connects the sender and the receiver, who are otherwise connected through an insecure channel and do not have any shared key or public key infrastructure. A good example of such a scenario is the pairing of devices in Bluetooth. Manual authentication systems have been studied in the computational and information-theoretic security models, and protocols with provable security have been proposed. In this paper we extend the results in the information-theoretic model in two directions. Firstly, we extend the single receiver scenario to the multireceiver case, where the sender wants to authenticate the same message to a group of receivers. We show new attacks (compared to the single receiver case) that can be launched in this model and demonstrate that the single receiver lower bound $2\log(1/\epsilon)+O(1)$ on the bandwidth of the manual channel stays valid in the multireceiver scenario. We further propose a protocol that achieves this bound and provides security, in the sense that we define, if up to $c$ receivers are corrupted. The second direction is the study of non-interactive protocols in the unconditionally secure model. We prove that, unlike in the computational security framework, without interaction a secure authentication protocol requires the bandwidth of the manual channel to be at least the same as the message size; hence non-trivial protocols do not exist.
2008
EPRINT
New State Recovery Attack on RC4
The stream cipher RC4 was designed by R.~Rivest in 1987, and it has a very simple and elegant structure. It is probably the most widely deployed cipher on Earth. In this paper we analyse the class RC4-$N$ of RC4-like stream ciphers, where $N$ is the modulus of operations, as well as the length of the internal arrays. Our new attack is a state recovery attack which accepts the keystream of a certain length, and recovers the internal state. For the original RC4-256, our attack has a total complexity of around $2^{241}$ operations, whereas the best previous attack needs $2^{779}$ time. Moreover, we show that if the secret key is of length $N$ bits or longer, the new attack works faster than an exhaustive search. The algorithm of the attack was implemented and verified on small cases.
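For readers unfamiliar with the internals, the following sketch (ours) spells out the natural RC4-$N$ generalisation described in the abstract: the usual KSA and PRGA with all indices and array entries taken modulo $N$ rather than 256. Key-handling details may differ from the paper's exact parametrisation.

```python
def rc4n_keystream(key, N, nwords):
    """Keystream of the RC4-N family: standard RC4 with arithmetic modulo N."""
    # Key-scheduling algorithm (KSA), modulo N
    S = list(range(N))
    j = 0
    for i in range(N):
        j = (j + S[i] + key[i % len(key)]) % N
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), modulo N
    i = j = 0
    out = []
    for _ in range(nwords):
        i = (i + 1) % N
        j = (j + S[i]) % N
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % N])
    return out

# N = 256 recovers the original RC4; smaller N gives the reduced versions on
# which state-recovery attacks are typically verified in practice.
print(rc4n_keystream([1, 2, 3, 4, 5], N=256, nwords=8))
print(rc4n_keystream([1, 2, 3], N=16, nwords=8))
```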
2008
EPRINT
Non-black-box Techniques Are Not Necessary for Constant Round Non-malleable Protocols
Recently, non-black-box techniques have enjoyed great success in cryptography. In particular, they have led to the construction of \emph{constant round} protocols for two basic cryptographic tasks (in the plain model): non-malleable zero-knowledge (NMZK) arguments for NP, and non-malleable commitments. Earlier protocols, whose security proofs relied only on black-box techniques, required a non-constant (e.g., $O(\log n)$) number of rounds. Given the inefficiency (and complexity) of existing non-black-box techniques, it is natural to ask whether they are \emph{necessary} for achieving constant-round non-malleable cryptographic protocols. In this paper, we answer this question in the \emph{negative}. Assuming the validity of a recently introduced assumption, namely the \emph{Gap Discrete Logarithm} (Gap-DL) assumption [MMY06], we construct a constant round \emph{simulation-extractable} argument system for NP, which implies NMZK. The Gap-DL assumption also leads to a very simple and natural construction of \emph{non-interactive non-malleable commitments}. In addition, plugging our simulation-extractable argument into the construction of Katz, Ostrovsky, and Smith [KOS03] yields the first $O(1)$-round secure multiparty computation with a dishonest majority using only black-box techniques. Although the Gap-DL assumption is relatively new and non-standard, in addition to answering some long standing open questions, it brings a new approach to non-malleability which is simpler and very natural. We also demonstrate that the Gap-DL assumption holds unconditionally against \emph{generic} adversaries.
2008
EPRINT
Non-Cyclic Subgroups of Jacobians of Genus Two Curves
Let E be an elliptic curve defined over a finite field. Balasubramanian and Koblitz have proved that if the group m_l of l-th roots of unity is not contained in the ground field, then a field extension of the ground field contains m_l if and only if the l-torsion points of E are rational over the same field extension. We generalize this result to Jacobians of genus two curves. In particular, we show that the Weil- and the Tate-pairing are non-degenerate over the same field extension of the ground field. From this generalization we get a complete description of the l-torsion subgroups of Jacobians of supersingular genus two curves. In particular, we show that for l>3, the l-torsion points are rational over a field extension of degree at most 24.
2008
EPRINT
Non-Cyclic Subgroups of Jacobians of Genus Two Curves with Complex Multiplication
Let E be an elliptic curve defined over a finite field. Balasubramanian and Koblitz have proved that if the group m_l of l-th roots of unity is not contained in the ground field, then a field extension of the ground field contains m_l if and only if the l-torsion points of E are rational over the same field extension. We generalize this result to Jacobians of genus two curves with complex multiplication. In particular, we show that the Weil- and the Tate-pairing on such a Jacobian are non-degenerate over the same field extension of the ground field.
2008
EPRINT
Non-Linear Reduced Round Attacks Against SHA-2 Hash family
Most of the attacks against (reduced) SHA-2 family in literature have used local collisions which are valid for linearized version of SHA-2 hash functions. Recently, at FSE '08, an attack against reduced round SHA-256 was presented by Nikoli\'{c} and Biryukov which used a local collision which is valid for the actual SHA-256 function. It is a 9-step local collision which starts by introducing a modular difference of 1 in the two messages. It succeeds with probability roughly 1/3. We build on the work of Nikoli\'{c} and Biryukov and provide a generalized nonlinear local collision which accepts an arbitrary initial message difference. This local collision succeeds with probability 1. Using this local collision we present attacks against 18-step SHA-256 and 18-step SHA-512 with arbitrary initial difference. Both of these attacks succeed with probability 1. We then present special cases of our local collision and show two different differential paths for attacking 20-step SHA-256 and 20-step SHA-512. One of these paths is the same as presented by Nikoli\'{c} and Biryukov while the other one is a new differential path. Messages following both these differential paths can be found with probability 1. This improves on the previous result where the success probability of 20-step attack was 1/3. Finally, we present two differential paths for 21-step collisions for SHA-256 and SHA-512, one of which is a new path. The success probability of these paths for SHA-256 is roughly $2^{-15}$ and $2^{-17}$ which improves on the 21-step attack having probability $2^{-19}$ reported earlier. We show examples of message pairs following all the presented differential paths for up to 21-step collisions in SHA-256. We also show first real examples of colliding message pairs for up to 20-step reduced SHA-512.
2008
EPRINT
Non-Malleable Obfuscation
Existing definitions of program obfuscation do not rule out malleability attacks, where an adversary that sees an obfuscated program is able to generate another (potentially obfuscated) program that is related to the original one in some way. We formulate two natural flavors of non-malleability requirements for program obfuscation, and show that they are incomparable in general. We also construct non-malleable obfuscators of both flavors for some program families of interest. Some of our constructions are in the Random Oracle model, whereas another one is in the common reference string model. We also define the notion of verifiable obfuscation which is of independent interest.
2008
EPRINT
Nonlinear Piece In Hand Matrix Method for Enhancing Security of Multivariate Public Key Cryptosystems
It is widely believed to take exponential time to find a solution of a system of random multivariate polynomials because of the NP-completeness of such a task. On the other hand, in most multivariate public key cryptosystems proposed so far, the computational complexity of cryptanalysis is apt to be polynomial time due to the trapdoor structure. In this paper, we develop the concept of the piece in hand matrix (PH matrix, for short), which aims to bring the computational complexity of cryptanalysis of multivariate public key cryptosystems close to exponential time by adding random polynomial terms to the original cryptosystems. This is a general concept which is applicable to any reasonable type of multivariate public key cryptosystem for the purpose of enhancing its security. There are two types of PH matrices: a linear matrix, whose elements are constants, and a nonlinear matrix, whose elements are polynomial functions of the plaintext or random numbers. In the present paper, we focus on the nonlinear PH matrix method and develop its framework. The nonlinear PH matrix method is obtained by generalizing the linear PH matrix method, and the nonlinearity may bring an additional randomization to the original linear PH matrix method. Thus, the nonlinear PH matrix method may enhance the security of the original multivariate public key cryptosystem more than the linear PH matrix method. We show, in an experimental manner, that this actually holds for the enhancement of the security of the Matsumoto-Imai cryptosystem and RSE(2)PKC against the Gr\"obner basis attack.
2008
EPRINT
Nonlinear Piece In Hand Perturbation Vector Method for Enhancing Security of Multivariate Public Key Cryptosystems
The piece in hand (PH) is a general scheme which is applicable to any reasonable type of multivariate public key cryptosystem for the purpose of enhancing its security. In this paper, we propose a new class of PH methods called the NLPHPV (NonLinear Piece in Hand Perturbation Vector) method. Although NLPHPV uses perturbation vectors similar to those used in the previously known internal perturbation method, the new method avoids redundant repetitions in the decryption process. With properly chosen parameter sizes, NLPHPV achieves an observable gain in security over the original multivariate public key cryptosystem. We demonstrate this by both theoretical analyses and computer simulations against major known attacks, and provide concrete sizes of security parameters, with which we even expect greater security against potential quantum attacks.
2008
EPRINT
Novel Precomputation Schemes for Elliptic Curve Cryptosystems
We present an innovative technique to add elliptic curve points of the form $P \pm Q$, and discuss its application to the generation of precomputed tables for scalar multiplication. Our analysis shows that the proposed schemes offer, to the best of our knowledge, the lowest costs for precomputing points in both single and multiple scalar multiplication and for various elliptic curve forms, including the highly efficient Jacobi quartics and Edwards curves.
2008
EPRINT
Oblivious Transfer based on the McEliece Assumptions
We implement one-out-of-two bit oblivious transfer (OT) based on the assumptions used in the McEliece cryptosystem: the hardness of decoding random binary linear codes, and the difficulty of distinguishing a permuted generating matrix of Goppa codes from a random matrix. To our knowledge this is the first OT reduction to these problems only.
2008
EPRINT
Oblivious Transfer from Weak Noisy Channels
Various results show that oblivious transfer can be implemented using the assumption of noisy channels. Unfortunately, this assumption is not as weak as one might think, because in a cryptographic setting, these noisy channels must satisfy very strong security requirements. Unfair noisy channels, introduced by Damgaard, Kilian and Salvail [Eurocrypt '99], reduce these limitations: They give the adversary an unfair advantage over the honest player, and therefore weaken the security requirements on the noisy channel. However, this model still has many shortcomings: For example, the adversary's advantage is only allowed to have a very special form, and no error is allowed in the implementation. In this paper we generalize the idea of unfair noisy channels. We introduce two new models of cryptographic noisy channels that we call the weak erasure channel and the weak binary symmetric channel, and show how they can be used to implement oblivious transfer. Our models are more general and use much weaker assumptions than unfair noisy channels, which makes implementation a more realistic prospect.
2008
EPRINT
Obtaining and solving systems of equations in key variables only for the small variants of AES
This work is devoted to attacking the small scale variants of the Advanced Encryption Standard (AES) via systems of equations that contain only the initial key variables. To this end, we introduce a system of equations that naturally arises in the AES, and then eliminate all the intermediate variables via normal form reductions. The resulting system, in the key variables only, is then solved. We also consider the possibility of applying our method in the meet-in-the-middle scenario, especially with several plaintext/ciphertext pairs. We elaborate on the method further by looking for subsystems which contain fewer variables and are overdetermined, thus facilitating the solution of the large system.
2008
EPRINT
Odd-Char Multivariate Hidden Field Equations
We present a multivariate version of Hidden Field Equations (HFE) over a finite field of odd characteristic, with an extra ``embedding'' modifier. Combining these known ideas makes our new MPKC (multivariate public key cryptosystem) more efficient and scalable than any other extant multivariate encryption scheme. Switching to odd characteristic in HFE-like schemes affects how an attacker can make use of field equations. Extensive empirical tests (using MAGMA-2.14, the best commercially available \mathbold{F_4} implementation) suggest that our new construction is indeed secure against algebraic attacks using Gr\"obner Basis algorithms. The ``embedding'' serves both to narrow down choices of pre-images and to guard against a possible Kipnis-Shamir type (rank-based) attack. We may hence reasonably argue that for practical sizes, prior attacks take exponential time. We demonstrate that our construction is in fact efficient by implementing practical-sized examples of our ``odd-char HFE'' with 3 variables (``THFE'') over $\mathrm{GF}(31)$. To be precise, our preliminary THFE implementation is $15\times$--$20\times$ the speed of RSA-1024.
2008
EPRINT
ON A CRYPTOGRAPHIC IDENTITY IN OSBORN LOOPS
This study digs out some new algebraic properties of an Osborn loop that will help in the future to unveil the mystery behind the middle inner mappings $T_{(x)}$ of an Osborn loop. These new algebraic properties will open our eyes more to the study of Osborn loops like CC-loops, which have received tremendous attention in this $21^\textrm{st}$ century, and VD-loops, whose study is yet to be explored. In this study, some algebraic properties of non-WIP Osborn loops have been investigated in a broad manner. Huthnance was able to deduce some algebraic properties of Osborn loops with the WIP, i.e. universal weak WIPLs. So this work exempts the WIP. Two new loop identities, namely the left self inverse property loop (LSIPL) identity and the right self inverse property loop (RSIPL) identity, are introduced for the first time, and it is shown that in an Osborn loop they are equivalent. A CC-loop is shown to be power associative if and only if it is an RSIPL or LSIPL. Among the few identities that have been established for Osborn loops, one of them is recognized and recommended for cryptography, in a similar spirit to that in which the cross inverse property has been used by Keedwell, following the observation that Osborn loops that do not have the LSIP or RSIP or 3-PAPL or weaker forms of inverse property, power associativity and diassociativity, to mention a few, will have cycles (even long ones). This identity is called an Osborn cryptographic identity (or just a cryptographic identity).
2008
EPRINT
On a New Formal Proof Model for RFID Location Privacy
We discuss a new formal proof model for RFID location privacy, recently proposed at ESORICS 2008. We show that protocols which intuitively and in several other models are considered not to be location private (or untraceable), are provably location private in this model, and vice-versa. Specifically, we prove a protocol in which every tag transmits the same constant message to not be location private in the proposed model. Then we prove a protocol in which a tag's ID is transmitted in clear text to be weakly location private in the model. Finally, we consider a protocol with known weaknesses with respect to location privacy and show it to be location private in the model.
2008
EPRINT
On Black-Box Ring Extraction and Integer Factorization
The black-box extraction problem over rings has (at least) two important interpretations in cryptography: An efficient algorithm for this problem implies (i) the equivalence of computing discrete logarithms and solving the Diffie-Hellman problem and (ii) the non-existence of secure ring-homomorphic encryption schemes. In the special case of a finite field, Boneh/Lipton and Maurer/Raub showed that there exist algorithms solving the black-box extraction problem in subexponential time. It is unknown whether there exist more efficient algorithms. In this work we consider the black-box extraction problem over finite rings of characteristic $n$, where $n$ has at least two different prime factors. We provide a polynomial-time reduction from factoring $n$ to the black-box extraction problem for a large class of finite commutative unitary rings. Under the factoring assumption, this implies the non-existence of certain efficient generic reductions from computing discrete logarithms to the Diffie-Hellman problem on the one side, and might be an indicator that secure ring-homomorphic encryption schemes exist on the other side.
2008
EPRINT
On CCA1-Security of Elgamal And Damg{\aa}rd Cryptosystems
Denote by $X^{Y[i]}$ the assumption that the adversary, given non-adaptive oracle access to the $Y$ oracle with $i$ free variables, cannot break the assumption $X$. We show that Elgamal is CCA1-secure under the $DDH^{CDH[1]}$ assumption. We then give a simple proof that the Damg{\aa}rd cryptosystem is CCA1-secure under the $DDH^{DDH[2]}$ assumption, where the proof uses a recent trapdoor test trick by Cash, Kiltz and Shoup.
2008
EPRINT
On CCA1-Security of Elgamal And Damg{\aa}rd's Elgamal
We establish the complete complexity landscape surrounding CCA1-security of Elgamal and Damg{\aa}rd's Elgamal (DEG). Denote by $X^{Y[i]}$ the assumption that the adversary, given non-adaptive oracle access to the $Y$ oracle with $i$ free variables, cannot break the assumption $X$. We show that the CCA1-security of Elgamal is equivalent to the $DDH^{CDH[1]}$ assumption. We then give a simple alternative to Gj{\o}steen's proof that the DEG cryptosystem is CCA1-secure under the $DDH^{DDH[2]}$ assumption. We also provide several separations. We show that $DDH$ cannot be reduced to $DDH^{CDH[1]}$ in the generic group model. We give an irreduction showing that $DDH$ cannot be reduced to $DDEG^{CDEG[2]}$ (unless $DDH$ is easy), $DDEG^{CDEG[2]}$ cannot be reduced to $DDH^{DDH[2]}$ (unless $DDEG^{CDEG[2]}$ is easy) and $DDH^{DDH[2]}$ cannot be reduced to $DDH^{CDH[1]}$ (unless $DDH^{DDH[2]}$ is easy). All those irreductions are optimal in the sense that they show that if assumption $X$ can be reduced to $Y$ in polynomial time then $X$ has to be solvable in polynomial time itself and thus \emph{both} assumptions are broken.
2008
EPRINT
On Collisions of Hash Functions Turbo SHA-2
In this paper we do not examine the security of Turbo SHA-2 completely; we only show new collision attacks on it, with smaller complexity than was expected by the Turbo SHA-2 authors. In [1] they consider Turbo SHA-224/256-r and Turbo SHA-384/512-r with a variable number of rounds r from 1 to 8. The authors of [1] show a collision attack on Turbo SHA-256-1 with one round which has a complexity of 2^64. For other r from 2 to 8 they do not find a better attack than one with complexity 2^128. Similarly, for Turbo SHA-512 they find only a collision attack on Turbo SHA-512-1 with one round, which has a complexity of 2^128. For r from 2 to 8 they do not find a better attack than one with complexity 2^256. In this paper we show a collision attack on Turbo SHA-256-r for r = 1, 2,..., 8 with complexity 2^{16*r}. We also show a collision attack on Turbo SHA-512-r for r = 1, 2,..., 8 with complexity 2^{32*r}. It follows that the only remaining candidate from the Turbo SHA hash family is Turbo SHA-256 (and Turbo SHA-512) with 8 rounds. The original security reserve of 6 rounds has been lost.
2008
EPRINT
On Communication Complexity of Perfectly Reliable and Secure Communication in Directed Networks
In this paper, we revisit the problem of {\it perfectly reliable message transmission} (PRMT) and {\it perfectly secure message transmission} (PSMT) in a {\it directed network} under the presence of a threshold adaptive Byzantine adversary having {\it unbounded computing power}. Desmedt et al. have given the necessary and sufficient condition for the existence of PSMT protocols in directed networks. In this paper, we first show that the necessary and sufficient condition (characterization) given by Desmedt et al. does not hold for two phase\footnote{A phase is a send from sender to receiver or vice-versa.} PSMT protocols. Hence we provide a different necessary and sufficient condition for two phase PSMT in directed networks. We also derive the lower bound on the communication complexity of two phase PSMT and show that our lower bound is {\it asymptotically tight} by designing a two phase PSMT protocol whose communication complexity satisfies the lower bound. Though the characterization for three or more phase PSMT is resolved by the result of Desmedt et al., the lower bound on the communication complexity for the same has not been investigated yet. Here we derive the lower bound on the communication complexity of three or more phase PSMT in directed networks and show that our lower bound is {\it asymptotically tight} by designing {\it communication optimal} PSMT protocols. Finally, we characterize the class of directed networks over which communication optimal PRMT or PRMT with constant factor overhead is possible. By communication optimal PRMT or PRMT with constant factor overhead, we mean that the PRMT protocol is able to send $\ell$ field elements by communicating $O(\ell)$ field elements from a finite field $\mathbb F$. To design our protocols, we use several techniques, which are of independent interest.
2008
EPRINT
On construction of signature schemes based on birational permutations over noncommutative rings
In the present paper, we give a noncommutative version of Shamir's birational permutation signature scheme proposed at Crypto '93, in terms of square matrices. The original idea for constructing the multivariate quadratic signature is to hide a quadratic triangular system using two secret linear transformations. However, the weakness of the triangular system remains even after applying the two transformations, and indeed Coppersmith et al. broke the scheme linear-algebraically. In the noncommutative case, such a linear algebraic weakness does not appear. We also give several examples of noncommutative rings to use in our scheme: the ring of all square matrices, the quaternion ring, and a subring of the three-by-three matrix ring generated by the symmetric group of degree three. Note that the advantage of Shamir's original scheme is its efficiency; our scheme largely preserves this efficiency.
2008
EPRINT
On DDoS Attack against Proxy in Re-encryption and Re-signature
In 1998, Blaze, Bleumer, and Strauss proposed a new kind of cryptographic primitive called proxy re-encryption and proxy re-signature [BBS98]. In proxy re-encryption, a proxy can transform a ciphertext computed under Alice's public key into one that can be opened under Bob's decryption key. In proxy re-signature, a proxy can transform a signature computed under Alice's secret key into one that can be verified by Bob's public key. In 2005, Ateniese et al. proposed a few new re-encryption schemes and discussed several of their potential applications, especially in secure distributed storage [AFGH05]. In 2006, they proposed a few re-signature schemes and also discussed several potential applications [AH06]. They predicted that re-encryption and re-signature will play an important role in our lives. Since then, researchers have been inspired to shed new light on this area, and many excellent schemes have been proposed. In this paper, we introduce a new attack - a DDoS attack against the proxy in proxy re-cryptography. Although this attack can also be mounted against other cryptographic primitives, the danger it causes in proxy re-cryptography seems more serious. We revisit the current literature, paying attention to the schemes' ability to resist DDoS attacks. We suggest a solution to reduce the impact of DDoS attacks. We also give a new efficient re-encryption scheme which achieves CCA2 security based on the Cramer-Shoup encryption scheme and prove its security. We point out that this is the most proxy-efficient CCA2-secure proxy re-encryption scheme in the literature. Finally, we give our conclusions, hoping that researchers will pay more attention to this attack.
2008
EPRINT
On differences of quadratic residues
Factoring an integer is equivalent to expressing the integer as the difference of two squares. We verify that, for any odd modulus, in the corresponding ring of remainders any element can be realized as the difference of two quadratic residues, and also that, for a fixed remainder value, the map assigning to each modulus the number of ways to express the remainder as a difference of quadratic residues is non-decreasing with respect to the divisibility ordering on the odd numbers. Reducing the problem of expressing a remainder as the difference of two quadratic residues to rings of remainders does not diminish the complexity of the factorization problem.
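A concrete way to see the first sentence is Fermat's factoring method, sketched below (our illustration, not the paper's construction): searching for $n = a^2 - b^2$ immediately yields the factorisation $(a-b)(a+b)$.

```python
from math import isqrt

def fermat_factor(n):
    """Return a nontrivial factorisation of an odd composite n by finding n = a^2 - b^2."""
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:              # found n = a^2 - b^2 = (a - b)(a + b)
            return a - b, a + b
        a += 1

print(fermat_factor(5959))   # (59, 101)
```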
2008
EPRINT
On Implementation of GHS Attack against Elliptic Curve Cryptosystems over Cubic Extension Fields of Odd Characteristics
In this paper, we present algorithms for implementing the GHS attack against elliptic curve cryptosystems (ECC). In particular, we consider two large classes of elliptic curves over cubic extension fields of odd characteristic which have weak covering curves against the GHS attack, whose existence has been shown recently. We show an algorithm to find the defining equation of the covering curve and an algorithm to transfer the DLP of the elliptic curve to the Jacobian of the covering curve. An algorithm to test whether the covering curve is hyperelliptic is also given in the appendix.
2008
EPRINT
On Kasami Bent Functions
It is proved that no non-quadratic Kasami bent function is affine equivalent to a Maiorana-MacFarland type bent function.
2008
EPRINT
ON MIDDLE UNIVERSAL $m$-INVERSE QUASIGROUPS AND THEIR APPLICATIONS TO CRYPTOGRAPHY
This study presents a special type of middle isotopism under which $m$-inverse quasigroups are isotopic invariant. A sufficient condition for an $m$-inverse quasigroup that is specially isotopic to a quasigroup to be isomorphic to the quasigroup isotope is established. It is shown that, under this special type of middle isotopism, if $n$ is a positive even integer, then a quasigroup is an $m$-inverse quasigroup with an inverse cycle of length $nm$ if and only if its quasigroup isotope is an $m$-inverse quasigroup with an inverse cycle of length $nm$. When $n$ is an odd positive integer, if a quasigroup is an $m$-inverse quasigroup with an inverse cycle of length $nm$, then its quasigroup isotope is an $m$-inverse quasigroup with an inverse cycle of length $nm$ if and only if the two quasigroups are isomorphic; hence, they are isomorphic $m$-inverse quasigroups. Explanations and procedures are given on how these results can be used to apply $m$-inverse quasigroups to cryptography, double cryptography and triple cryptography.
2008
EPRINT
ON MIDDLE UNIVERSAL WEAK AND CROSS INVERSE PROPERTY LOOPS WITH EQUAL LENGTH OF INVERSE CYCLES
This study presents a special type of middle isotopism under which the weak inverse property (WIP) is isotopic invariant in loops. A sufficient condition for a WIPL that is specially isotopic to a loop to be isomorphic to the loop isotope is established. Cross inverse property loops (CIPLs) need not satisfy this sufficient condition. It is shown that, under this special type of middle isotopism, if $n$ is a positive even integer, then a WIPL has an inverse cycle of length $n$ if and only if its isotope is a WIPL with an inverse cycle of length $n$. When $n$ is an odd positive integer, if a loop or its isotope is a WIPL with only $e$ and inverse cycles of length $n$, then its isotope or the loop is a WIPL with only $e$ and inverse cycles of length $n$ if and only if they are isomorphic, so that both are isomorphic CIPLs. Explanations and procedures are given on how these results can be used to apply CIPLs to cryptography.
2008
EPRINT
On Notions of Security for Deterministic Encryption, and Efficient Constructions without Random Oracles
The study of deterministic public-key encryption was initiated by Bellare et al. (CRYPTO~'07), who provided the ``strongest possible" notion of security for this primitive (called PRIV) and constructions in the random oracle (RO) model. We focus on constructing efficient deterministic encryption schemes \emph{without} random oracles. To do so, we propose a slightly weaker notion of security, saying that no partial information about encrypted messages should be leaked as long as each message is a-priori hard-to-guess \emph{given the others} (while PRIV did not have the latter restriction). Nevertheless, we argue that this version seems adequate for certain practical applications. We show equivalence of this definition to single-message and indistinguishability-based ones, which are easier to work with. Then we give general constructions of both chosen-plaintext (CPA) and chosen-ciphertext-attack (CCA) secure deterministic encryption schemes, as well as efficient instantiations of them under standard number-theoretic assumptions. Our constructions build on the recently-introduced framework of Peikert and Waters (STOC '08) for constructing CCA-secure \emph{probabilistic} encryption schemes, extending it to the deterministic-encryption setting and yielding some improvements to their original results as well.
2008
EPRINT
On Resettably-Sound Resettable Zero Knowledge Arguments
We study the simultaneous resettability problem, namely whether resettably-sound resettable ZK arguments for non-trivial languages exist (posed by Barak et al. [BGGL FOCS'01]), in both the plain model and the bare public-key (BPK for short) model. Under general hardness assumptions, we show: 1. In the BPK model, there exist constant-round (full-fledged) resettably-sound resettable ZK arguments for NP. This resolves a main problem in this model that has remained open since Micali and Reyzin's identification of notions of soundness [MR Crypto 2001] in the BPK model. 2. In the plain model, there exist constant-round (unbounded) resettably-sound class-bounded resettable ZK (as defined by Deng and Lin in [DL Eurocrypt 2007]) arguments for NP. This improves the previous result of Deng and Lin [Eurocrypt 2007] in that the DL construction for class-bounded resettable ZK arguments achieves only a weak notion of resettable-soundness. The crux of these results is a construction of a constant-round instance-dependent (full-fledged) resettably-sound resettable WI argument of knowledge (IDWIAOK for short) for any NP statement of the form x_0\in L_0 or x_1\in L_1, a notion also introduced by Deng and Lin [Eurocrypt 2007], whose construction, however, obtains only weak resettable-soundness when x_0\notin L_0. Our approach to the simultaneous resettability problem in the BPK model is to make a novel use of IDWIAOK, which gives rise to an elegant structure we call \Sigma-puzzles. Given the fact that all previously known resettable ZK arguments in the BPK model can be achieved in the plain model when ignoring round complexity, we believe this approach will shed light on the simultaneous resettability problem in the plain model.
2008
EPRINT
On Round Complexity of Unconditionally Secure VSS
In this work, we initiate the study of round complexity of {\it unconditionally secure weak secret sharing} (UWSS) and {\it unconditionally secure verifiable secret sharing} (UVSS) \footnote{In the literature, these problems are also called as statistical WSS and statistical VSS \cite{GennaroVSSSTOC01} respectively.} in the presence of an {\it all powerful} $t$-active adversary. Specifically, we show the following for UVSS: (a) 1-round UVSS is possible iff $t=1$ and $n>3$, (b) 2-round UVSS is possible if $n>3t$ and (c) 5-round {\it efficient} UVSS is possible if $n>2t$. For UWSS we show the following: (a) 1-round UWSS is possible iff $n>3t$ and (b) 3-round UWSS is possible if $n>2t$. Comparing our results with existing results for trade-off between fault tolerance and round complexity of perfect (zero error) VSS and WSS \cite{GennaroVSSSTOC01,FitziVSSTCC06,KatzVSSICALP2008}, we find that probabilistically relaxing the conditions of VSS/WSS helps to increase fault tolerance significantly.
2008
EPRINT
On Security Notions for Verifiable Encrypted Signature
First we revisit three verifiably encrypted signature schemes - BGLS, MBGLS and GZZ [2,3,6]. We find that none of them is strongly unforgeable. We remark that the notion of existential unforgeability is not sufficient for fair exchange protocols in most circumstances. So we propose three new verifiably encrypted signature schemes - NBGLS, MBGLS and NGZZ - which are strongly unforgeable. We also reconsider two other verifiably encrypted signature schemes - ZSS and CA [4,8] - and find that both fail to resist a public key replacement attack. So we strongly suggest that strong unforgeability may be a better notion than existential unforgeability for verifiably encrypted signatures, and that checking that the adjudicator knows its private key is a necessary step for a secure verifiably encrypted signature scheme.
2008
EPRINT
On Software Parallel Implementation of Cryptographic Pairings
A significant amount of research has focused on methods to improve the efficiency of cryptographic pairings; in part this work is motivated by the wide range of applications for such primitives. Although numerous hardware accelerators for pairing evaluation have used parallelism within extension field arithmetic to improve efficiency, similar techniques have not been examined in software thus far. In this paper we focus on parallelism within one pairing evaluation (intra-pairing), and parallelism between different pairing evaluations (inter-pairing). We identify several methods for exploiting such parallelism (extending previous results in the context of ECC) and show that it is possible to accelerate pairing evaluation by a significant factor in comparison to a naive approach.
2008
EPRINT
On the (Im)Possibility of Key Dependent Encryption
We study the possibility of constructing encryption schemes secure under messages that are chosen depending on the key k of the encryption scheme itself. We give the following separation results: 1. Let H be the family of poly(n)-wise independent hash-functions. There exists no fully-black-box reduction from an encryption scheme secure against key-dependent inputs to one-way permutations (and also to families of trapdoor permutations) if the adversary can obtain encryptions of h(k) for h \in H. 2. Let G be the family of polynomial sized circuits. There exists no reduction from an encryption scheme secure against key-dependent inputs to, seemingly, any cryptographic assumption, if the adversary can obtain an encryption of g(k) for g \in G, as long as the reduction's proof of security treats both the adversary and the function g as black box.
2008
EPRINT
On the Chikazawa-Inoue ID based key system
In this paper, we show that the Chikazawa-Inoue ID-based key system is insecure against collusion, where by the Chikazawa-Inoue ID-based key system we mean the key parameters established during the initiation phase. We describe an algorithm that factorizes a public key of the Trust Center. Since our attack is based only on the key system and has no relation to specific key sharing protocols, it applies to all variant protocols of the Chikazawa-Inoue ID-based key sharing protocol. From this analysis, we conclude that the Chikazawa-Inoue ID-based key system can no longer be used for any application protocols.
2008
EPRINT
On the Correctness of An Approach Against Side-channel attacks
Side-channel attacks are a very powerful cryptanalytic technique. Li and Gu [ProvSec'07] proposed an approach against side-channel attacks, which states that a symmetric encryption scheme is IND-secure in the side-channel model if it is IND-secure in the black-box model and there is no adversary who can computationally recover the whole key of the scheme in the side-channel model, i.e. WKR-SCA ^ IND -> IND-SCA. Our research shows that this is not the case. We analyze the notions of security against key recovery attacks and security against distinguishing attacks, and then construct a scheme which is WKR-SCA-secure and IND-secure, but not IND-SCA-secure in the same side-channel environment. Furthermore, even if the scheme is secure against partial key recovery attacks in the side-channel model, this approach still does not hold.
2008
EPRINT
On the Design of Secure and Fast Double Block Length Hash Functions
This paper reconsiders the security of rate-1 double block length hash functions, which are based on a block cipher with an $n$-bit block length and a $2n$-bit key length. Counter-examples and new attacks are presented for this general class of double block length hash functions with rate 1, which show that there are uncovered flaws in the earlier analyses given by Satoh \textit{et al.} and Hirose. Preimage and second preimage attacks are designed to break Hirose's two examples, which were left as an open problem. Some refined conditions are proposed for ensuring that this general class of rate-1 hash functions is optimally secure against the collision attack. In particular, two typical examples, designed under the proposed conditions, are proven to be indifferentiable from a random oracle in the ideal cipher model. The security results are extended to a new class of double block length hash functions with rate 1, where one block cipher used in the compression function has a key length equal to the block length, while the other has a key length twice the block length.
2008
EPRINT
On the Design of Secure Double Block Length Hash Functions with Rate 1
This paper reconsiders the security of rate-1 double block length hash functions, which are based on a block cipher with an $n$-bit block length and a $2n$-bit key length. Counter-examples and new attacks are presented for this general class of double block length hash functions with rate 1, which show that there are uncovered flaws in the earlier analyses given by Satoh \textit{et al.} and Hirose. Preimage and second preimage attacks are designed to break Hirose's two examples, which were left as an open problem. Some refined conditions are proposed for ensuring that this general class of rate-1 hash functions is optimally secure against the collision attack. In particular, two typical examples, designed under the proposed conditions, are proven to be indifferentiable from a random oracle in the ideal cipher model. The security results are extended to a new class of double block length hash functions with rate 1, where one block cipher used in the compression function has a key length equal to the block length, while the other has a key length twice the block length.
2008
EPRINT
On the economic payoff of forensic systems when used to trace Counterfeited Software and content
We analyze how well forensic systems reduce counterfeiting of software and content. We make a few idealized assumptions, and show that if the revenues of the producer before the introduction of forensics (Ro) are non-zero, then the payoff of forensics is independent of the overall market size and declines as the ratio between the penalty and the crime (both monetized) goes up, but that this behavior is reversed if Ro = 0. We also show that the payoff goes up as the ratio between the success probability with and without forensics grows; however, for typical parameters most of the payoff is already reached when this ratio is 5.
2008
EPRINT
On the Practicality of Short Signature Batch Verification
As pervasive communication becomes a reality, where everything from vehicles to heart monitors constantly communicates with its environment, system designers are facing a cryptographic puzzle in how to authenticate messages. These scenarios require that: (1) cryptographic overhead remain short, and yet (2) many messages from many different signers be verified very quickly. Pairing-based signatures have property (1) but not (2), whereas schemes like RSA have property (2) but not (1). As a solution to this dilemma, Camenisch, Hohenberger and Pedersen showed how to batch verify two pairing-based signatures so that the total number of pairing operations was independent of the number of signatures to verify. CHP left open the task of batching privacy-friendly authentication, which is desirable in many pervasive communication scenarios. In this work, we revisit this issue from a more practical standpoint and present the following results: 1. We describe a framework, consisting of general techniques, to help scheme and system designers understand how to {\em securely} and {\em efficiently} batch the verification of pairing equations. 2. We present a detailed study of when and how our framework can be applied to existing regular, identity-based, group, ring, and aggregate signature schemes. To our knowledge, these batch verifiers for group and ring signatures are the first proposals for batching privacy-friendly authentication, answering an open problem of Camenisch et al. 3. While prior work gave mostly asymptotic efficiency comparisons, we show that our framework is practical by implementing our techniques and giving detailed performance measurements. Additionally, we discuss how to deal with invalid signatures in a batch, and our empirical results show that when roughly less than 10% of the signatures are invalid, batching remains more efficient than individual verification. Indeed, our results show that batch verification for short signatures is an effective, efficient approach.
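For orientation, the standard small-exponents test that such batching frameworks build on looks as follows for BLS-style signatures $\sigma_i$ on messages $m_i$ under public keys $pk_i$ (this is the generic textbook form, not the paper's exact batch verifier): pick short random exponents $\delta_i$ and accept the whole batch iff
\[
e\Big(\prod_{i=1}^{\eta} \sigma_i^{\delta_i},\, g\Big) \;=\; \prod_{i=1}^{\eta} e\big(H(m_i),\, pk_i\big)^{\delta_i},
\qquad \delta_i \in_R \{1,\ldots,2^{\ell_b}\}.
\]
The random $\delta_i$ ensure that a batch containing an invalid signature passes with probability at most $2^{-\ell_b}$; the engineering work then lies in rearranging such products so that the number of pairing evaluations stays small.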
2008
EPRINT
On the Role of KGC for Proxy Re-encryption in Identity Based Setting
In 1998, Blaze, Bleumer, and Strauss proposed a kind of cryptographic primitive called proxy re-encryption\cite{Blaze:98}. In proxy re-encryption, a proxy can transform a ciphertext computed under Alice's public key into one that can be opened under Bob's decryption key. They predicted that proxy re-encryption and re-signature will play an important role in our lives. In 2007, Matsuo proposed the concept of four types of re-encryption schemes: CBE to IBE (type 1), IBE to IBE (type 2), IBE to CBE (type 3), and CBE to CBE (type 4)\cite{Matsuo:07}. CBE to IBE and IBE to IBE proxy re-encryption schemes are now being standardized by the IEEE P1363.3 working group\cite{P1363.3:08}. In this paper, based on \cite{Matsuo:07}, we pay attention to the role of the KGC for proxy re-encryption in the identity based setting. We find that if we introduce the KGC in the process of generating the re-encryption key for proxy re-encryption in the identity based setting, many open problems can be solved. Our main results are as follows: 1. One feature of the proxy re-encryption from CBE to IBE scheme in \cite{Matsuo:07} is that it inherits the key escrow problem from IBE, that is, the KGC can decrypt every re-encrypted ciphertext for IBE users. We ask the question: is it possible that the malicious KGC cannot decrypt the re-encrypted ciphertext? Surprisingly, the answer is affirmative. We construct such a scheme and prove its security in the standard model. 2. We propose a proxy re-encryption scheme from IBE to CBE. To the best of our knowledge, this is the first type 3 scheme. We give the security model for proxy re-encryption schemes from IBE to CBE and prove our scheme's security in this model without random oracles. 3. In \cite{Matsuo:08} it was concluded that it is hard to construct a proxy re-encryption scheme based on the BF and SK IBE schemes. When the KGC is considered in the proxy key generation, we can construct a proxy re-encryption scheme based on SK IBE. Interestingly, this proxy re-encryption scheme even achieves IND-Pr-ID-CCA2 security, which makes it a relatively efficient pairing-based proxy re-encryption scheme achieving CCA2 security.
2008
EPRINT
On the Secure Obfuscation of Deterministic Finite Automata
In this paper, we show how to construct secure obfuscation for Deterministic Finite Automata, assuming non-uniformly strong one-way functions exist. We revisit the software protection approaches originally proposed by [B79,G87,GO96,K80] and revise them to the current obfuscation setting of Barak et al. [BGI+01]. Under this model, we introduce an efficient oracle that retains some ``small" secret about the original program. Using this secret, we can construct an obfuscator and two-party protocol that securely obfuscates Deterministic Finite Automata against malicious adversaries. The security of this model retains the strong ``virtual black box" property originally proposed in [BGI+01] while incorporating the stronger condition of dependent auxiliary inputs in [GTK05]. Additionally, we further show that our techniques remain secure under concurrent self-composition with adaptive inputs and that Turing machines are obfuscatable under this model.
2008
EPRINT
On the Security of a Visual Cryptography Scheme for Color Images
In Pattern Recognition, vol. 36, 2003, Hou proposed a four-share visual cryptography scheme for color images. The scheme splits a secret image into four shares, the black mask and the other three shares. It was claimed that without knowing the black mask, no information about the secret image can be obtained even if all the other three shares are known. In this paper, we show that this may be true for a few specific two-color secret images only. In all other cases, however, security cannot be guaranteed. We show that an attacker can compromise a randomly chosen two-color secret image from any two of the other three shares with probability 4/7. The advantage will increase to 6/7 if all the other three shares are known. If the secret image has three or four colors, we show that the attacker can compromise it with probability 4/7 and 8/35, respectively. Finally, we show that our technique can be extended to compromising secret images with more than four colors.
2008
EPRINT
On the Security of Chien's Ultralightweight RFID Authentication Protocol
Recently, Chien proposed an ultralightweight RFID authentication protocol to prevent all possible attacks. However, we find two de-synchronization attacks to break the protocol.
2008
EPRINT
On the Security of Fully Collusion Resistant Traitor Tracing Schemes
This paper investigates the security of FTT (fully collusion resistant traitor tracing) schemes in terms of DOT (Denial Of Tracing) and framing. With a DOT attack, a decoder is able to detect tracing activity and then prolong the tracing process so that the tracer is unable to complete the tracing job in a realistic time and hence has to abort the effort. On the other hand, by merely embedding several bytes of non-volatile memory in the decoder, we demonstrate, for the FTT schemes, how the decoder can frame innocent users at will. Furthermore, we propose a countermeasure against the framing attack.
2008
EPRINT
On The Security of The ElGamal Encryption Scheme and Damgard's Variant
In this paper, we discuss the security of the ElGamal encryption scheme and its variant by Damgard. For ElGamal encryption, we show that (1) under the generalized knowledge-of-exponent assumption and the one-more discrete log assumption, ElGamal encryption is one-way under non-adaptive chosen ciphertext attacks; (2) one-wayness of ElGamal encryption under non-adaptive chosen ciphertext attacks is equivalent to the hardness of the one-more computational Diffie-Hellman problem. For a variant of ElGamal encryption proposed by Damgard (DEG), we give a new proof that DEG is semantically secure against non-adaptive chosen ciphertext attacks under the one-more decisional Diffie-Hellman assumption (although the same result for DEG security has been presented in the literature before, our proof is simpler). We also give a new security proof for DEG based on the decisional Diffie-Hellman assumption (DDHA) and a weaker version of the knowledge-of-exponent assumption (KEA), and note that KEA is stronger than necessary in the security proof of DEG, for which KEA was originally proposed.
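To fix notation for readers, here is a toy sketch (ours, with illustrative and insecure parameters) of Damgard's ElGamal as it is usually presented: a second key pair is added and decryption first checks the extra Diffie-Hellman component before recovering the message, which is the check the CCA1 analyses revolve around; plain ElGamal is the same scheme without x1, v and the check.

```python
import secrets

p = 2**127 - 1        # toy prime modulus; illustrative only, NOT a secure group choice
g = 3

def keygen():
    x1 = secrets.randbelow(p - 2) + 1
    x2 = secrets.randbelow(p - 2) + 1
    return (x1, x2), (pow(g, x1, p), pow(g, x2, p))      # sk, pk = (h1, h2)

def encrypt(pk, m):
    h1, h2 = pk
    r = secrets.randbelow(p - 2) + 1
    return pow(g, r, p), pow(h1, r, p), (m * pow(h2, r, p)) % p   # (u, v, w)

def decrypt(sk, ct):
    x1, x2 = sk
    u, v, w = ct
    if pow(u, x1, p) != v:        # DEG's validity check; plain ElGamal has no such check
        raise ValueError("invalid ciphertext")
    return (w * pow(u, p - 1 - x2, p)) % p   # w * u^{-x2}

sk, pk = keygen()
assert decrypt(sk, encrypt(pk, 42)) == 42
```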
2008
EPRINT
On the Strength of the Concatenated Hash Combiner when All the Hash Functions are Weak
At Crypto 2004 Joux showed a novel attack against the concatenated hash combiner instantiated with \md iterated hash functions. His method of producing multicollisions in the \md design was the first in a recent line of generic attacks against the \md construction. In the same paper, Joux raised an open question concerning the strength of the concatenated hash combiner and asked whether his attack can be improved when the attacker can efficiently find collisions in both underlying compression functions. We solve this open problem by showing that even in the powerful adversarial scenario first introduced by Liskov (SAC 2006) in which the underlying compression functions can be fully inverted (which implies that collisions can be easily generated), collisions in the concatenated hash cannot be created using fewer than $2^{n/2}$ queries. We then expand this result to include the double pipe hash construction of Lucks from Asiacrypt 2005. One of the intermediate results is of interest on its own and provides the first streamable construction provably indifferentiable from a random oracle in this model.
2008
EPRINT
On White-Box Cryptography and Obfuscation
We study the relationship between obfuscation and white-box cryptography. We capture the requirements of any white-box primitive using a \emph{White-Box Property (WBP)} and give some negative/positive results. Loosely speaking, the WBP is defined for some scheme and a security notion (we call the pair a \emph{specification}), and implies that w.r.t. the specification, an obfuscation does not leak any ``useful'' information, even though it may leak some ``useless'' non-black-box information. Our main result is a negative one - for most interesting programs, an obfuscation (under \emph{any} definition) cannot satisfy the WBP for every specification in which the program may be present. To do this, we define a \emph{Universal White-Box Property (UWBP)}, which if satisfied, would imply that under \emph{whatever} specification we conceive, the WBP is satisfied. We then show that for every non-approximately-learnable family, there exist certain (contrived) specifications for which the WBP (and thus, the UWBP) fails. On the positive side, we show that there exists an obfuscator for a non-approximately-learnable family that achieves the WBP for a certain specification. Furthermore, there exists an obfuscator for a non-learnable (but approximately-learnable) family that achieves the UWBP. Our results can also be viewed as formalizing the distinction between ``useful'' and ``useless'' non-black-box information.
2008
EPRINT
One-Round Authenticated Key Agreement from Weak Secrets
We study the question of information-theoretically secure authenticated key agreement from weak secrets. In this setting, Alice and Bob share an $n$-bit secret $W$, which might \emph{not} be uniformly random but the adversary has at least $k$ bits of uncertainty about it (formalized using conditional min-entropy). Alice and Bob wish to use $W$ to agree on a nearly uniform secret key $R$, over a public channel controlled by an \emph{active} adversary Eve. We show that non-interactive (single-message) protocols do not work when $k\le \frac{n}{2}$, and require poor parameters even when $\frac{n}{2} < k\ll n$. On the other hand, for arbitrary values of $k$, we design a communication efficient {\em two-message (i.e., one-round!)} protocol extracting nearly $k$ random bits. This dramatically improves the only previously known protocol of Renner and Wolf~\cite{RennerW03}, which required $O(\lambda)$ rounds where $\lambda$ is the security parameter. Our solution takes a new approach by studying and constructing \emph{``non-malleable'' seeded randomness extractors} --- if an attacker sees a random seed $X$ and comes up with an arbitrarily related seed $X'$, then we bound the relationship between $R= \Ext(W;X)$ and $R' = \Ext(W;X')$. We also extend our one-round key agreement protocol to the ``fuzzy'' setting, where Alice and Bob share ``close'' (but not equal) secrets $W_A$ and $W_B$, and to the Bounded Retrieval Model (BRM) where the size of the secret $W$ is huge.
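To make the non-malleability requirement concrete, the following display is a rough paraphrase in the notation of the abstract (our own formulation, not necessarily the paper's exact definition): for every seed-modification function $\mathcal{A}$ with $\mathcal{A}(x)\ne x$ for all $x$, the honestly extracted key should remain close to uniform even given the key extracted under the modified seed,
\[
\bigl(\Ext(W;X),\ \Ext(W;\mathcal{A}(X)),\ X\bigr)\ \approx\ \bigl(U,\ \Ext(W;\mathcal{A}(X)),\ X\bigr).
\]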
2008
EPRINT
One-Up Problem for (EC)DSA
The one-up problem is defined for any group and a function from the group to the integers. An instance of the problem consists of two points in the group. A solution consists of another point satisfying an equation involving the group and the function. The one-up problem must be hard for (EC)DSA to be secure. Heuristic arguments are given for the independence of the discrete log and one-up problems. DSA and ECDSA are conjectured to be secure if the discrete log and one-up problems are hard and the hash is collision resistant.
2008
EPRINT
Open Source Is Not Enough. Attacking the EC-package of Bouncycastle version 1.x_132
BouncyCastle is an open source Crypto provider written in Java which supplies classes for Elliptic Curve Cryptography (ECC). We have found a flaw in the class ECPoint resulting from an unhappy interaction of elementary algorithms. We show how to exploit this flaw to mount a real-world attack, e.g., on the encryption scheme ECIES. BouncyCastle has since fixed this flaw (version 1.x_133 and higher), but all older versions remain highly vulnerable to an active attacker, and the attack exposes a certain vulnerability of the validation algorithms involved.
2008
EPRINT
Optimal Discretization for High-Entropy Graphical Passwords
In click-based graphical password schemes that allow arbitrary click locations on an image, a click should be verified as correct if it is within a predefined distance of the originally chosen location. This condition should hold even when, for security reasons, the system stores a hash of the password rather than the password itself. To solve this problem, a robust discretization method was recently proposed. In this paper, we show that previous work on discretization does not give optimal results with respect to the entropy of the graphical passwords, and we propose a new discretization method to increase the password space. To improve security further, we also present several methods that use multiple hash computations for password verification.
2008
EPRINT
Optimal Pairings
In this paper we introduce the concept of an \emph{optimal pairing}, which by definition can be computed using only $\log_2 r/ \varphi(k)$ basic Miller iterations, with $r$ the order of the groups involved and $k$ the embedding degree. We describe an algorithm to construct optimal ate pairings on all parametrized families of pairing friendly elliptic curves. Finally, we conjecture that any non-degenerate pairing on an elliptic curve without efficiently computable endomorphisms different from powers of Frobenius requires at least $\log_2 r/ \varphi(k)$ basic Miller iterations.
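As a rough numeric illustration of the bound (our own example, not taken from the paper): Barreto-Naehrig curves have embedding degree $k=12$, so $\varphi(k)=4$, and for a 256-bit group order $r$ an optimal pairing needs only about
\[
\frac{\log_2 r}{\varphi(k)} = \frac{256}{4} = 64
\]
basic Miller iterations, whereas a textbook Tate pairing on the same curve iterates roughly $\log_2 r \approx 256$ times.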
2008
EPRINT
Optimal Subset-Difference Broadcast Encryption with Free Riders
Broadcast encryption (BE) deals with secure transmission of a message to a group of receivers such that only an authorized subset of receivers can decrypt the message. The transmission cost of a BE system can be reduced considerably if a limited number of free riders can be tolerated in the system. In this paper, we study the problem of how to optimally place a given number of free riders in a subset difference (SD) based BE system, which is currently the most efficient BE scheme in use and has also been incorporated in standards, and we propose a polynomial-time optimal placement algorithm and three more efficient heuristics for this problem. Simulation experiments show that SD-based BE schemes can benefit significantly from the proposed algorithms.
2008
EPRINT
Oracle-Assisted Static Diffie-Hellman Is Easier Than Discrete Logarithms
This paper extends Joux-Naccache-Thom\'e's $e$-th root algorithm to the static Diffie-Hellman problem ({\sc sdhp}). The new algorithm can be adapted to diverse finite fields by customizing it with an {\sc nfs}-like core or an {\sc ffs}-like core. In both cases, after a number of {\sc sdhp} oracle queries, the attacker builds up the ability to solve new {\sc sdhp} instances {\sl unknown before the query phase}. While sub-exponential, the algorithm is still significantly faster than all currently known {\sc dlp} and {\sc sdhp} resolution methods. We explore the applicability of the technique to various cryptosystems. The attacks were implemented in ${\mathbb F}_{2^{1025}}$ and also in ${\mathbb F}_{p}$, for a $516$-bit $p$.
2008
EPRINT
Pairing Lattices
We provide a convenient mathematical framework that essentially encompasses all known pairing functions based on the Tate pairing. We prove non-degeneracy and bounds on the lowest possible degree of these pairing functions and show how efficient endomorphisms can be used to achieve a further degree reduction.
2008
EPRINT
Pairing with Supersingular Trace Zero Varieties Revisited
A Trace Zero Variety is a specific subgroup of the group of divisor classes on a hyperelliptic curve $C/\F_q$, which are rational over a small degree extension $\F_{q^r}$ of the definition field. Trace Zero Varieties (\tzv) are interesting for cryptographic applications since they enjoy properties that can be exploited to achieve fast arithmetic and group construction. Furthermore, supersingular \tzv allow one to achieve higher MOV security per bit than supersingular elliptic curves, thus making them interesting for applications in pairing-based cryptography. In this paper we survey algorithms in the literature for computing bilinear pairings and we present a new algorithm for the Tate pairing over supersingular \tzv, which exploits the action of the $q$-Frobenius. We give explicit examples and provide experimental results for supersingular \tzv defined over fields of characteristic 2. Moreover, in the same settings, we propose a more efficient variant of Silverberg's point compression algorithm.
2008
EPRINT
Pairing-Based Onion Routing with Improved Forward Secrecy
This paper presents new protocols for onion routing anonymity networks. We define a provably secure privacy-preserving key agreement scheme in an identity-based infrastructure setting, and use it to design new onion routing circuit constructions. These constructions, based on a user's selection, offer immediate or eventual forward secrecy at each node in a circuit and require significantly less computation and communication than the telescoping mechanism used by Tor. Further, the use of the identity-based infrastructure also leads to a reduction in the required amount of authenticated directory information. Therefore, our constructions provide practical ways to allow onion routing anonymity networks to scale gracefully.
2008
EPRINT
Pairing-friendly Hyperelliptic Curves with Ordinary Jacobians of Type $y^2=x^5+ax$
An explicit construction of pairing-friendly hyperelliptic curves with ordinary Jacobians was first given by D.~Freeman. In this paper, we give other explicit constructions of pairing-friendly hyperelliptic curves with ordinary Jacobians, based on closed formulae for the order of the Jacobian of a hyperelliptic curve of type $y^2=x^5+ax$. We present two methods in this paper. One is an analogue of the Cocks-Pinch method and the other is a cyclotomic method. By using these methods, we construct a pairing-friendly hyperelliptic curve $y^2=x^5+ax$ over a finite prime field ${\mathbb F}_p$ whose Jacobian is ordinary and simple over ${\mathbb F}_p$ with a prescribed embedding degree. Moreover, the analogue of the Cocks-Pinch method produces curves with $\rho\approx 4$ and the cyclotomic method produces curves with $3\le \rho\le 4$.
2008
EPRINT
Pairings on hyperelliptic curves with a real model
We analyse the efficiency of pairing computations on hyperelliptic curves given by a real model using a balanced divisor at infinity. Several optimisations are proposed and analysed. Genus two curves given by a real model arise when considering pairing friendly groups of order dividing $p^{2}-p+1$. We compare the performance of pairings on such groups in both elliptic and hyperelliptic versions. We conclude that pairings can be efficiently computed on real models of hyperelliptic curves.
2008
EPRINT
Partial Fairness in Secure Two-Party Computation
Complete fairness is impossible to achieve, in general, in secure two-party computation. In light of this, various techniques for obtaining \emph{partial} fairness in this setting have been suggested. We explore the possibility of achieving partial fairness with respect to a strong, simulation-based definition of security within the standard real/ideal world paradigm. We show feasibility with respect to this definition for randomized functionalities where each player may possibly receive a different output, as long as at least one of the domains or ranges of the functionality is polynomial in size. When one of the domains is of polynomial size, our protocol is also secure-with-abort. In contrast to much of the earlier work on partial fairness, we rely on standard assumptions only (namely, enhanced trapdoor permutations). We also provide evidence that our results are, in general, optimal. Specifically, we show a boolean function defined on a domain of super-polynomial size for which it is impossible to achieve both partial fairness and security with abort, and provide evidence that partial fairness is impossible altogether for functions whose domains and ranges all have super-polynomial size.
2008
EPRINT
Password Mistyping in Two-Factor-Authenticated Key Exchange
We study the problem of Key Exchange (KE), where authentication is two-factor and based on both electronically stored long keys and human-supplied credentials (passwords or biometrics). The latter credential has low entropy and may be adversarially mistyped. Our main contribution is the first formal treatment of mistyping in this setting. Ensuring security in the presence of mistyping is subtle. We show mistyping-related limitations of previous KE definitions and constructions. We concentrate on the practical two-factor authenticated KE setting where servers exchange keys with clients, who use short passwords (memorized) and long cryptographic keys (stored on a card). Our work is thus a natural generalization of Halevi-Krawczyk and Kolesnikov-Rackoff. We discuss the challenges that arise due to mistyping. We propose the first KE definitions in this setting, and formally discuss their guarantees. We present efficient KE protocols and prove their security.
2008
EPRINT
Perfectly Hiding Commitment Scheme with Two-Round from Any One-Way Permutation
Commitment schemes are arguably among the most important and useful primitives in cryptography. According to the computational power of the receiver, commitments can be classified into three types: {\it computationally hiding commitments, statistically hiding commitments} and {\it perfectly hiding commitments}. The first type, with a constant number of rounds, was constructed from any one-way function in the last century; the second, with a non-constant number of rounds, was constructed from any one-way function in FOCS 2006, STOC 2006 and STOC 2007, respectively; furthermore, the round complexity of statistically hiding commitments based on one-way functions has been proven to have a lower bound of $\frac{n}{\log n}$ rounds. Since perfect hiding implies statistical hiding, it is also infeasible to construct a practical perfectly hiding commitment scheme with a constant number of rounds from one-way functions alone. In order to construct a perfectly hiding commitment scheme with a constant number of rounds, we have to make a stronger assumption than the mere existence of one-way functions. In this paper, we construct a practical perfectly hiding two-round commitment scheme from any one-way permutation. To the best of our knowledge, these are the best results so far.
2008
EPRINT
Perfectly Reliable and Secure Communication Tolerating Static and Mobile Mixed Adversary
In the problem of {\it perfectly reliable message transmission} (PRMT), a sender {\bf S} and a receiver {\bf R} are connected by $n$ bidirectional synchronous channels. A mixed adversary ${\mathcal A}_{(t_b,t_f,t_p)}$ with {\it infinite computing power} controls $t_b, t_f$ and $t_p$ channels in Byzantine, fail-stop and passive fashion respectively. In spite of the presence of ${\mathcal A}_{(t_b,t_f,t_p)}$, {\bf S} wants to reliably send a message $m$ to {\bf R}, using some protocol, without sharing any key with {\bf R} beforehand. After interacting in phases\footnote{A phase is a send from {\bf S} to {\bf R} or vice-versa.} as per the protocol, {\bf R} should output $m' = m$, without any error. In the problem of {\it perfectly secure message transmission} (PSMT), there is an additional constraint that ${\mathcal A}_{(t_b,t_f,t_p)}$ should not know {\it any} information about $m$ in an {\it information theoretic} sense. The adversary can be either static\footnote{A static adversary corrupts the same set of channels in each phase of the protocol. The choice of the channels to corrupt is decided before the beginning of the protocol.} or mobile.\footnote{A mobile adversary can corrupt different sets of channels in different phases of the protocol.} The {\it connectivity requirement}, {\it phase complexity} and {\it communication complexity} are three important parameters of any interactive PRMT/PSMT protocol and are well studied in the literature when the channels are controlled by a static/mobile Byzantine adversary. However, when the channels are controlled by the mixed adversary ${\mathcal A}_{(t_b,t_f,t_p)}$, we encounter several surprising consequences. In this paper, we study the problem of PRMT and PSMT tolerating a static/mobile mixed adversary. We prove that even though the connectivity requirement for PRMT is the same against both static and mobile mixed adversaries, the lower bound on communication complexity for PRMT tolerating a mobile mixed adversary is higher than its static mixed counterpart. This is interesting because against only a Byzantine adversary, the connectivity requirement and the lower bound on the communication complexity of PRMT protocols are the same in both the static and mobile cases. Thus our result shows that for PRMT, a mobile mixed adversary is more powerful than its static counterpart. As our second contribution, we design a four phase communication optimal PSMT protocol tolerating a static mixed adversary. Comparing this with the existing three phase communication optimal PSMT protocol against a static Byzantine adversary, we find that one additional phase is enough to design a communication optimal protocol against a static mixed adversary. Finally, we show that the connectivity requirement and the lower bound on communication complexity of any PSMT protocol are the same against both static and mobile mixed adversaries, thus proving that the mobility of the adversary has no effect in PSMT. To show that our bound is tight, we also present a worst case nine phase communication optimal PSMT protocol tolerating a mobile mixed adversary, which is the first of its kind. This also shows that the mobility of the adversary does not prevent the design of constant phase communication optimal PSMT protocols. In our protocols, we have used new techniques which can be effectively used against both static and mobile mixed adversaries and are of independent interest.
2008
EPRINT
Physical Cryptanalysis of KeeLoq Code Hopping Applications
KeeLoq remote keyless entry systems are widely used for access control purposes, such as garage door openers and car anti-theft systems. We present the first successful differential power analysis attacks on numerous commercially available products employing KeeLoq code hopping. Our new techniques combine side-channel cryptanalysis with specific properties of the KeeLoq algorithm. They allow for efficiently revealing both the secret key of a remote transmitter and the manufacturer key stored in a receiver. As a result, a remote control can be cloned from only ten power traces, allowing for a practical key recovery in a few minutes. Once the manufacturer key is known, we demonstrate how to disclose the secret key of a remote control and replicate it from a distance, just by eavesdropping on at most two messages. This key-cloning without physical access to the device has serious real-world security implications. Finally, we mount a denial-of-service attack on a KeeLoq access control system. All the proposed attacks have been verified on several commercial KeeLoq products.
2008
EPRINT
Playing Hide-and-Seek with a Focused Mobile Adversary: Maximizing Data Survival in Unattended Sensor Networks
Some sensor network settings involve disconnected or unattended operation with periodic visits by a mobile sink. An unattended sensor network operating in a hostile environment can collect data that represents a high-value target for the adversary. Since an unattended sensor can not immediately off-load sensed data to a safe external entity (such as a sink), the adversary can easily mount a focused attack aiming to erase or modify target data. To maximize chances of data survival, sensors must collaboratively attempt to mislead the adversary and hide the location, the origin and the contents of collected data. In this paper, we focus on applications of well-known security techniques to maximize chances of data survival in unattended sensor networks, where sensed data can not be off-loaded to a sink in real time. Our investigation yields some interesting insights and surprising results. The highlights of our work are: (1) thorough exploration of the data survival challenge, (2) exploration of the design space for possible solutions, (3) construction of several practical and effective techniques, and (4) their evaluation.
2008
EPRINT
Polynomials for Ate Pairing and $\mathbf{Ate}_{i}$ Pairing
The irreducible factor $r(x)$ of $\mathrm{\Phi}_{k}(u(x))$ and $u(x)$ are often used in constructing pairing-friendly curves. $u(x)$ and $u_{c} \equiv u(x)^{c} \pmod{r(x)}$ are selected to be the Miller loop control polynomial in the Ate pairing and the $\mathrm{Ate}_{i}$ pairing. In this paper we show that when $4 \mid k$ or the smallest prime dividing $k$ is larger than $2$, some $u(x)$ and $r(x)$ cannot be used as curve generation parameters if we want the $\mathrm{Ate}_{i}$ pairing to be efficient. We also show that the Miller loop length cannot reach the bound $\frac{\log_{2} r}{\varphi(k)}$ when we use the factorization of $\mathrm{\Phi}_{k}(u(x))$ to generate elliptic curves.
2008
EPRINT
Possibility and impossibility results for selective decommitments
The *selective decommitment problem* can be described as follows: assume an adversary receives a number of commitments and then may request openings of, say, half of them. Do the unopened commitments remain secure? Although this question arose more than twenty years ago, no satisfactory answer has been presented so far. We answer the question in several ways: - If simulation-based security is desired (i.e., if we demand that the adversary's output can be simulated by a machine that does not see the unopened commitments), then security is *not achievable* for non-interactive or perfectly binding commitment schemes via black-box reductions to standard cryptographic assumptions. *However,* we show how to achieve security in this sense with interaction and a non-black-box reduction to one-way permutations. - If only indistinguishability of the unopened commitments from random commitments is desired, then security is *not achievable* for (interactive or non-interactive) perfectly binding commitment schemes, via black-box reductions to standard cryptographic assumptions. *However,* any statistically hiding scheme *does* achieve security in this sense. Our results give an almost complete picture of when and how security under selective openings can be achieved. Applications of our results include: - Essentially, an encryption scheme *must* be non-committing in order to achieve provable security against an adaptive adversary. - When implemented with our commitment scheme, the interactive proof for graph 3-coloring due to Goldreich et al. becomes black-box zero-knowledge under parallel composition. On the technical side, we develop a technique to show very general impossibility results for black-box proofs.
2008
EPRINT
Practical Attacks on HB and HB+ Protocols
HB and HB+ are shared-key authentication protocols designed for low-cost devices such as RFID tags. HB+ was proposed by Juels and Weis at Crypto 2005. The security of the protocols relies on the ``learning parity with noise'' (LPN) problem, which was proved to be NP-hard. The best known attack on LPN (by Levieil and Fouque, SCN 2006) requires an exponential number of samples and an exponential number of operations to be performed. This makes the attack impractical because it is infeasible to collect exponentially many observations of the protocol execution. We present a passive attack on the HB protocol which requires only a linear (in the length of the secret key) number of samples. The number of operations performed is still exponential, but the attack is practical for some real-life values of the parameters, i.e., noise $\frac{1}{8}$ and key length $144$ bits.
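For background, the sketch below (in Python, with parameter names of our own choosing) shows the basic HB round structure that gives rise to LPN samples; it is a minimal illustration of why a passive eavesdropper obtains noisy inner products, not the authors' attack code.

import random

KEY_LEN = 144        # key length from the parameter set mentioned in the abstract
NOISE = 1.0 / 8.0    # Bernoulli noise rate eta = 1/8

def inner(a, x):
    """Inner product over GF(2)."""
    return sum(ai & xi for ai, xi in zip(a, x)) & 1

def hb_round(secret):
    """One round of the basic HB protocol: the reader sends a random challenge a,
    the tag replies z = <a, x> XOR nu with nu ~ Ber(eta)."""
    a = [random.randint(0, 1) for _ in range(KEY_LEN)]
    nu = 1 if random.random() < NOISE else 0
    return a, inner(a, secret) ^ nu

# A passive eavesdropper simply records (a, z) pairs; recovering `secret`
# from many such pairs is exactly an LPN instance.
secret = [random.randint(0, 1) for _ in range(KEY_LEN)]
samples = [hb_round(secret) for _ in range(1000)]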
2008
EPRINT
Preimage Attacks on 3-Pass HAVAL and Step-Reduced MD5
This paper presents preimage attacks on the hash functions 3-pass HAVAL and step-reduced MD5. Introduced in 1992 and 1991 respectively, these functions have undergone severe collision attacks, but no preimage attacks were known. We describe two preimage attacks on the compression function of 3-pass HAVAL. The attacks have a complexity of about $2^{224}$ compression function evaluations instead of $2^{256}$. Furthermore, we present several preimage attacks on the MD5 compression function that invert up to 47 (out of 64) steps within $2^{96}$ trials instead of $2^{128}$. Though our attacks are not practical, they show that the security margin of 3-pass HAVAL and step-reduced MD5 with respect to preimage attacks is not as high as expected.
2008
EPRINT
Privacy-Preserving Audit and Extraction of Digital Contents
A growing number of online services, such as Google, Yahoo!, and Amazon, are starting to charge users for their storage. Customers often use these services to store valuable data such as email, family photos and videos, and disk backups. Today, a customer must entirely trust such external services to maintain the integrity of hosted data and return it intact. Unfortunately, no service is infallible. To make storage services accountable for data loss, we present protocols that allow a third-party auditor to periodically verify the data stored by a service and assist in returning the data intact to the customer. Most importantly, our protocols are privacy-preserving, in that they never reveal the data contents to the auditor. Our solution removes the burden of verification from the customer, alleviates both the customer’s and storage service’s fear of data leakage, and provides a method for independent arbitration of data retention contracts.
2008
EPRINT
Privacy-Preserving Matching of DNA Profiles
In recent years, DNA sequencing techniques have advanced to the point that DNA identification and paternity testing have become almost a commodity. Due to the critical nature of DNA-related data, this raises substantial privacy issues. In this paper, we introduce cryptographic privacy enhancing protocols that allow one to perform the most common DNA-based identity, paternity and ancestry tests and thus to implement privacy-enhanced online genealogy services or research projects. In the semi-honest attacker model, the protocols guarantee that no sensitive information about the involved DNA is exposed, and they are resilient against common forms of measurement errors during DNA sequencing. The protocols are practical and efficient, both in terms of communication and computation complexity.
2008
EPRINT
Private Branching Programs: On Communication-Efficient Cryptocomputing
We polish a recent cryptocomputing method that makes it possible to cryptocompute every language in $\mathbf{L/poly}$. We give several nontrivial applications, including: (a) a CPIR protocol with log-squared communication and sublinear server computation, obtained by giving a secure function evaluation protocol for Boolean functions with similar performance; (b) a protocol that makes it possible to compute (say) how similar the client's input is to an element in the server's database, without revealing any information to the server; (c) a protocol for private database updating with low amortized complexity.
2008
EPRINT
Probabilistic Verifiable Secret Sharing Tolerating Adaptive Adversary
In this work we focus on two basic secure distributed computation tasks: Probabilistic Weak Secret Sharing (PWSS) and Probabilistic Verifiable Secret Sharing (PVSS). PVSS allows a dealer to share a secret among several players in a way that would later allow a unique reconstruction of the secret with negligible error probability. PWSS is a slightly weaker version of PVSS where the dealer can choose not to disclose his secret later. Both of them are well-studied problems. While PVSS is used as a building block in every general probabilistic secure multiparty computation protocol, PWSS can be used as a building block for PVSS protocols. Both these problems can be parameterized by the number of players ($n$) and the fault tolerance threshold ($t$), which bounds the total number of malicious (Byzantine) players having {\it unbounded computing power}. We focus on the standard {\it secure channel model}, where all players have access to secure point-to-point channels and a common broadcast medium. We show the following for PVSS: (a) 1-round PVSS is possible iff $t=1$ and $n>3$ (b) 2-round PVSS is possible if $n>3t$ (c) 4-round PVSS is possible if $n>2t$. For PWSS we show the following: (a) 1-round PWSS is possible iff $n>3t$ and (b) 3-round PWSS is possible if $n>2t$. All our protocols are {\it efficient}. Comparing our results with the existing trade-off results for perfect (zero error probability) VSS and WSS, we find that probabilistically relaxing the conditions of VSS/WSS helps to increase fault tolerance significantly.
2008
EPRINT
Proofs of Knowledge with Several Challenge Values
In this paper we consider the problem of increasing the number of possible challenge values from 2 to $s$ in various zero-knowledge cut-and-choose protocols. First we discuss doing this for the graph isomorphism protocol. Then we show how increasing this number improves the efficiency of protocols for the double discrete logarithm and the $e$-th root of the discrete logarithm, which are potentially very useful tools for constructing complex cryptographic protocols. The practical improvement given by our paper is a factor of 2-4 in terms of both time complexity and transcript size.
2008
EPRINT
Proofs of Retrievability: Theory and Implementation
A proof of retrievability (POR) is a compact proof by a file system (prover) to a client (verifier) that a target file F is intact, in the sense that the client can fully recover it. As PORs incur lower communication costs than transmission of F itself, they are an attractive building block for high-assurance remote storage systems. In this paper, we propose a theoretical framework for the design of PORs. This framework leads to improvements in the previously proposed POR constructions of Juels-Kaliski and Shacham-Waters, and also sheds light on the conceptual limitations of previous theoretical models for PORs. We propose a new variant on the Juels-Kaliski protocol with significantly improved efficiency and describe a prototype implementation. We demonstrate practical encoding even for files F whose size exceeds that of client main memory.
2008
EPRINT
Properties of Cryptographic Hash Functions
This paper extends the work of Rogaway and Shrimpton (2004), where they formalized seven security properties: notions of preimage resistance (Pre, aPre, ePre), second-preimage resistance (Sec, aSec, eSec) and collision resistance (Coll). They also gave all the implications and separations among these properties. In this paper we consider three additional security properties which are important in applications of hash functions: unforgeability (MAC), pseudo-random function (Prf) and pseudo-random oracle (Pro). We give a new type of implication and separation between security notions, since the ones defined by Rogaway and Shrimpton were too constraining, and we work out all the relationships among the ten security notions above. Some of the relations have been proven before; some of them appear to be new. We show that the pseudo-random oracle property (Pro), introduced by Coron, Dodis, Malinaud and Puniya, is (as expected) the strongest one, since it implies almost all of the other properties.
2008
EPRINT
Provable Security of Digital Signatures in the Tamper-Proof Device Model
We study the security of digital signature schemes in the tamper-proof device model. In this model each participant of the protocol has access to a tamper-proof device to encrypt hash values. This substitutes for the commonly used random oracle. In the tamper-proof device model we demonstrate the security of a modified version of the GOST signature scheme.
2008
EPRINT
Provably Secure ID-Based Broadcast Signcryption (IBBSC) Scheme
With the advent of mobile and portable devices such as cell phones and PDAs, wireless content distribution has become a major means of communications and entertainment. In such applications, a central authority needs to deliver encrypted data to a large number of recipients in such a way that only a privileged subset of users can decrypt it. A broadcasting news channel may face this problem, for example, when a large number of people subscribe to a daily exclusive news feature. This is exactly the kind of problem that \textit{broadcast encryption} attempts to efficiently solve. On top of this, especially in the current digital era, junk content or spam is a major turn off in almost every Internet application. If all the users who subscribe to the news feed receive meaningless noise or any unwanted content, then the broadcaster is going to lose them. This results in the additional requirement that subscribers have source authentication with respect to their broadcaster. \textit{Broadcast signcryption}, which enables the broadcaster to simultaneously encrypt and sign the content meant for a specific set of users in a single logical step, provides the most efficient solution to the dual problem of confidentiality and authentication. Efficiency is a major concern, because mobile devices have limited memory and computational power and wireless bandwidth is an extremely costly resource. While several alternatives exist in implementing broadcast signcryption schemes, identity-based (ID-based) schemes are arguably the best suited because of the unique advantage that they provide --- any unique, publicly available parameter of a user can be his public key, which eliminates the need for a complex public key infrastructure. In ASIAN 2004, Mu et al. \cite{MSLR04} propose what they call an ID-based authenticated broadcast encryption scheme, which is also a broadcast signcryption scheme, as the security goals are the same. They claim that their scheme provides message authentication and confidentiality and formally prove that the broadcaster's secret is not compromised, but in this paper, we demonstrate that even without knowing the broadcaster's secret, it is possible for a legal user to impersonate the broadcaster. We demonstrate this by mounting a universal forgeability attack --- any valid user, on receiving and decrypting a valid ciphertext from a broadcaster, can generate a valid ciphertext on any message on behalf of that broadcaster for the same set of legal receivers to which the broadcaster signcrypted the earlier message, without knowing any secrets. Following this, we propose a new ID-based broadcast signcryption (IBBSC) scheme, and formally prove its security under the strongest existing security models for broadcast signcryption (IND-CCA2 and EUF-CMA2).
2008
EPRINT
Proxy Key Re-encapsulation Mechanism for Group Communications
Many practical applications use a hybrid encryption mechanism to deal with large plaintext messages or real-time communication, since public key encryption alone performs poorly in these settings. Key encapsulation is a crucial part of a hybrid encryption mechanism; it allows a sender to generate a random session key and distribute it to the recipient. In this paper we present a proxy key re-encapsulation scheme for group communication. The proxy in our scheme is allowed to transform an encapsulated message corresponding to group A's public key into one that can be decapsulated by a member of group B. It can be used in cases where a group of users needs to perform a sensitive operation without holding the necessary secret key.
2008
EPRINT
Public Key Block Cipher Based on Multivariate Quadratic Quasigroups
We have designed a new class of public key algorithms based on quasigroup string transformations using a specific class of quasigroups called \emph{multivariate quadratic quasigroups (MQQ)}. Our public key algorithm is a bijective mapping, it does not perform message expansion and it can be used both for encryption and signatures. The public key consists of $n$ quadratic polynomials in $n$ variables where $n=140, 160, \ldots$. A particular characteristic of our public key algorithm is that it is very fast and highly parallelizable. More concretely, it has the speed of a typical modern symmetric block cipher -- the reason for the phrase \emph{"A Public Key Block Cipher"} in the title of this paper. Namely, the reference C code for the 160-bit variant of the algorithm performs decryption in less than 11,000 cycles (on an Intel Core 2 Duo -- using only one processor core), and around 6,000 cycles using two CPU cores and the OpenMP 2.0 library. Implemented in a Xilinx Virtex-5 FPGA running at 249.4 MHz, it achieves a decryption throughput of 399 Mbps, and implemented on four Xilinx Virtex-5 chips running at 276.7 MHz it achieves an encryption throughput of 44.27 Gbps. Compared to the fastest RSA implementations on similar FPGA platforms, the MQQ algorithm is more than 10,000 times faster.
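For readers unfamiliar with quasigroup string transformations, the following toy sketch (our own illustration over a 4-element Latin square, not the MQQ construction itself or its parameters) shows the basic e-transformation that such schemes build on.

# A quasigroup (Q, *) of order 4 given by its Latin square: MUL[a][b] = a * b.
MUL = [
    [0, 1, 2, 3],
    [1, 0, 3, 2],
    [2, 3, 0, 1],
    [3, 2, 1, 0],
]

def e_transform(leader, message):
    """Quasigroup e-transformation: b_1 = l * a_1 and b_i = b_{i-1} * a_i."""
    out, prev = [], leader
    for a in message:
        prev = MUL[prev][a]
        out.append(prev)
    return out

print(e_transform(2, [1, 3, 0, 2, 1]))  # a short example transformation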
2008
EPRINT
Public Key Cryptography from Different Assumptions
We construct a new public key encryption scheme based on two assumptions: 1) One can obtain a pseudorandom generator with small locality by connecting the outputs to the inputs using any sufficiently good unbalanced expander. 2) It is hard to distinguish between a random graph that is such an expander and a random graph where a (planted) random logarithmic-sized subset S of the outputs is connected to fewer than |S| inputs. The validity and strength of the assumptions raise interesting new algorithmic and pseudorandomness questions, and we explore their relation to the current state of the art.
2008
EPRINT
Public key encryption and encryption emulation attacks
The main purpose of this paper is to suggest that public key encryption can be secure against the "encryption emulation" attack (on the sender's encryption) by computationally unbounded adversary, with one reservation: a legitimate receiver decrypts correctly with probability that can be made arbitrarily close to 1, but not equal to 1.
2008
EPRINT
Public-Key Cryptosystems from the Worst-Case Shortest Vector Problem
We construct public-key cryptosystems that are secure assuming the \emph{worst-case} hardness of approximating the length of a shortest nonzero vector in an $n$-dimensional lattice to within a small $\poly(n)$ factor. Prior cryptosystems with worst-case connections were based either on the shortest vector problem for a \emph{special class} of lattices (Ajtai and Dwork, STOC 1997; Regev, J.~ACM 2004), or on the conjectured hardness of lattice problems for \emph{quantum} algorithms (Regev, STOC 2005). Our main technical innovation is a reduction from certain variants of the shortest vector problem to corresponding versions of the ``learning with errors'' ($\lwe$) problem; previously, only a \emph{quantum} reduction of this kind was known. In addition, we construct new cryptosystems based on the \emph{search} version of $\lwe$, including a very natural \emph{chosen ciphertext-secure} system that has a much simpler description and tighter underlying worst-case approximation factor than prior constructions.
2008
EPRINT
Public-Key Encryption with Efficient Amortized Updates
Searching and modifying public-key encrypted data (without having the decryption key) has received a lot of attention in recent literature. In this paper we revisit this important problem and achieve much better amortized communication-complexity bounds. Our solution resolves the main open question posed by Boneh et al.~\cite{BKOS07}. First, we consider the following much simpler to state problem (which turns out to be central for the above): A server holds a copy of Alice's database that has been encrypted under Alice's public key. Alice would like to allow other users in the system to replace a bit of their choice in the server's database by communicating directly with the server, despite other users not having Alice's private key. However, Alice requires that the server should not know which bit was modified. Additionally, she requires that the modification protocol should have ``small'' communication complexity (sub-linear in the database size). This task is referred to as private database modification, and is a central tool in building a more general protocol for modifying and searching over public-key encrypted data with small communication complexity. The problem was first considered by Boneh et al.~\cite{BKOS07}. The protocol of \cite{BKOS07} to modify $1$ bit of an $N$-bit database has communication complexity $\mathcal{O}(\sqrt N)$. Naturally, one can ask if we can improve upon this. Unfortunately, \cite{OS08} give evidence to the contrary, showing that using current algebraic techniques, this is not possible to do. In this paper, we ask the following question: what is the communication complexity when modifying $L$ bits of an $N$-bit database? Of course, one can achieve a naive communication complexity of $\mathcal{O}(L\sqrt N)$ by simply repeating the protocol of \cite{BKOS07}, $L$ times. Our main result is a private database modification protocol to modify $L$ bits of an $N$-bit database that has communication complexity $\mathcal{O}(\sqrt{NL^{1+\alpha}}\textrm{poly-log~} N)$, where $0<\alpha<1$ is a constant. (We remark that in contrast with recent work of Lipmaa \cite{L08} on the same topic, our database size {\em does not grow} with every update, and stays exactly the same size.) As sample corollaries to our main result, we obtain the following: \begin{itemize} \item First, we apply our private database modification protocol to answer the main open question of \cite{BKOS07}. More specifically, we construct a public key encryption scheme supporting PIR queries that allows every message to have a non-constant number of keywords associated with it. \item Second, we show that one can apply our techniques to obtain more efficient communication complexity when parties wish to increment or decrement multiple cryptographic counters (formalized by Katz et al.~\cite{KMO01}). \end{itemize} We believe that ``public-key encrypted'' amortized database modification is an important cryptographic primitive in its own right and will be useful in other applications.
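Ignoring the poly-logarithmic factor, the saving over naive repetition can be read off directly (a back-of-the-envelope comparison, not a statement from the paper):
\[
\frac{L\sqrt{N}}{\sqrt{N L^{1+\alpha}}} = L^{\frac{1-\alpha}{2}},
\]
so for any constant $0<\alpha<1$ the amortized protocol beats $L$-fold repetition of the single-bit protocol by a factor polynomial in $L$.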
2008
EPRINT
Recognition in Ad Hoc Pervasive Networks
We examine the problem of message and entity recognition in the context of ad hoc networks. We review the definitions and the security model described in the literature and examine previous recognition protocols described in ABCLMN98, HWGW05, LZWW05, M03, and WW03. We prove that there is a one-to-one correspondence between non-interactive message recognition protocols and digital signature schemes. Hence, we concentrate on designing interactive recognition protocols. We look at LZWW05 in more detail and suggest a variant to overcome a certain shortcoming. In particular, in case of communication failure or adversarial disruption, this protocol is not equipped with a practical resynchronization process and can fail to resume. We propose a variant of this protocol which is equipped with a resynchronization technique that allows users to resynchronize whenever they wish or when they suspect an intrusion.
2008
EPRINT
Reducing Complexity Assumptions for Oblivious Transfer
Reducing the minimum assumptions needed to construct various cryptographic primitives is an important and interesting task in theoretical cryptography. Oblivious Transfer, one of the most basic cryptographic building blocks, has also been studied under this scenario. Reducing the minimum assumptions for Oblivious Transfer is not an easy task, as there are a few impossibility results under black-box reductions. Until recently, it was widely believed that Oblivious Transfer could be constructed from trapdoor permutations but not from trapdoor functions in general. In this paper, we enhance previous results and show an Oblivious Transfer protocol based on a collection of trapdoor functions with some extra properties. We also provide reasons for adding the extra properties and argue that the assumptions in the protocol are nearly minimal.
2008
EPRINT
Reducing the Complexity of the Weil Pairing Computation
In this paper, we investigate how to compute variants of the Weil pairing with short Miller iteration loops.
2008
EPRINT
Redundant $\tau$-adic Expansions I: Non-Adjacent Digit Sets and their Applications to Scalar Multiplication
This paper investigates some properties of $\tau$-adic expansions of scalars. Such expansions are widely used in the design of scalar multiplication algorithms on Koblitz Curves, but at the same time they are much less understood than their binary counterparts. Solinas introduced the width-$w$ $\tau$-adic non-adjacent form for use with Koblitz curves. This is an expansion of integers $z=\sum_{i=0}^\ell z_i\tau^i$, where $\tau$ is a quadratic integer depending on the curve, such that $z_i\ne 0$ implies $z_{w+i-1}=\ldots=z_{i+1}=0$, like the sliding window binary recodings of integers. It uses a redundant digit set, i.e., an expansion of an integer using this digit set need not be uniquely determined if the syntactical constraints are not enforced. We show that the digit sets described by Solinas, formed by elements of minimal norm in their residue classes, are uniquely determined. Apart from this digit set of minimal norm representatives, other digit sets can be chosen such that all integers can be represented by a width-$w$ non-adjacent form using those digits. We describe an algorithm recognizing admissible digit sets. Results by Solinas and by Blake, Murty, and Xu are generalized. In particular, we introduce two new useful families of digit sets. The first set is syntactically defined. As a consequence of its adoption we can also present improved and streamlined algorithms to perform the precomputations in $\tau$-adic scalar multiplication methods. The latter use an improvement of the computation of sums and differences of points on elliptic curves with mixed affine and L\'opez-Dahab coordinates. The second set is suitable for low-memory applications, generalizing an approach started by Avanzi, Ciet, and Sica. It permits devising a scalar multiplication algorithm that dispenses with the initial precomputation stage and its associated memory space. A suitable choice of the parameters of the method leads to a scalar multiplication algorithm on Koblitz Curves that achieves sublinear complexity in the number of expensive curve operations.
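For orientation, the following is a minimal Python sketch of Solinas' classical (width-2) $\tau$-NAF recoding that the width-$w$ digit sets discussed above generalize; it is illustrative only and does not implement the new digit sets proposed in the paper.

def tau_naf(n0, n1, mu):
    """Solinas' tau-NAF recoding of the element n0 + n1*tau, where tau satisfies
    tau^2 = mu*tau - 2 with mu in {-1, +1} (Koblitz curves).
    Returns the digits z_i in {-1, 0, +1}, least significant first."""
    digits = []
    while n0 != 0 or n1 != 0:
        if n0 % 2:                           # element not divisible by tau
            r = 2 - ((n0 - 2 * n1) % 4)      # r in {-1, +1}
            n0 -= r
        else:
            r = 0
        digits.append(r)
        # divide the now tau-divisible element n0 + n1*tau by tau
        n0, n1 = n1 + mu * (n0 // 2), -(n0 // 2)
    return digits

# Example: recode an integer scalar (n1 = 0) for a curve with mu = 1.
print(tau_naf(314159, 0, 1))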
2008
EPRINT
Redundant $\tau$-adic Expansions II: Non-Optimality and Chaotic Behaviour
When computing scalar multiples on Koblitz curves, the Frobenius endomorphism can be used to replace the usual doublings on the curve. This involves digital expansions of the scalar to the complex base $\tau=(\pm 1\pm \sqrt{-7})/2$ instead of binary expansions. As in the binary case, this method can be sped up by enlarging the set of valid digits at the cost of precomputing some points on the curve. In the binary case, it is known that a simple syntactical condition (the so-called $w$-NAF-condition) on the expansion ensures that the number of curve additions is minimised. The purpose of this paper is to show that this is no longer true for the base $\tau$ and $w\in\{4,5,6\}$. Even worse, it is shown that there is no longer an online algorithm to compute an optimal expansion from the digits of some standard expansion from the least to the most significant digit, which can be interpreted as chaotic behaviour. The proofs heavily depend on symbolic computations involving transducer automata.
2008
EPRINT
Remarks on the Attack of Fouque et al. against the {\ell}IC Scheme
In 2007, $\ell$-Invertible Cycles ($\ell$IC) was proposed by Ding et al. This is one of the most efficient trapdoors for encryption/signature schemes, and it is of the mixed field type for multivariate quadratic public-key cryptosystems. Such schemes are suitable for implementation on low-cost smart cards or PDAs. In 2008, Fouque et al. proposed an efficient attack against the $\ell$IC signature scheme using Gr\"obner basis algorithms. However, they only explicitly dealt with the odd case, i.e. $\ell$ odd, but not the even case, and they only implemented their proposed attack in the odd case. In this paper, we propose another practical attack against the $\ell$IC encryption/signature scheme. Our proposed attack does not employ Gr\"obner basis algorithms and can be applied to both the even and odd cases. We show the efficiency of the attack using some experimental results. Furthermore, the attack can also be applied to the $\ell$IC- scheme. To the best of our knowledge, we are the first to show experimental results of a practical attack against the $\ell$IC- scheme for the even case.
2008
EPRINT
Remarks on the NFS complexity
In this contribution we investigate practical issues with implementing the NFS algorithm to solve the DLP arising in XTR-based cryptosystems. We can transform the original XTR-DLP into a DLP instance in $\mathbb{F}_{p^6},$ where $p$ is a medium sized prime. Unfortunately, for practical ranges of $p,$ the optimal degree of the NFS polynomial is less than the required degree 6. This leads to the problem of finding enough smooth equations during the sieve stage of the NFS algorithm. We discuss several techniques that can increase the NFS output, i.e. the number of equations produced during the sieve, without increasing the smoothness bound.
2008
EPRINT
Remote Integrity Check with Dishonest Storage Server
We are interested in this problem: a verifier, with a small and reliable storage, wants to periodically check whether a remote server is keeping a large file $\mathbf{x}$. A dishonest server, by adapting the challenges and responses, tries to discard partial information of $\mathbf{x}$ and yet evade detection. Besides the security requirements, there are considerations on communication, storage size and computation time. Juels et al. \cite{Pors} gave a security model for a {\em Proof of Retrievability} ({\POR}) system. The model imposes a requirement that the original ${\bf x}$ can be recovered from multiple challenge-response interactions. Such a requirement is not necessary in our problem. Hence, we propose an alternative security model for {\em Remote Integrity Check} ({\RIC}). We study a few schemes and analyze their efficiency and security. In particular, we prove the security of a proposed scheme {\simplePIR}. This scheme can be deployed as a {\POR} system and it also serves as an example of an effective {\POR} system whose ``extraction'' is not verifiable. We also propose a combination of the RSA-based scheme by Filho et al. \cite{DDPs} and the ECC-based authenticator by Naor et al. \cite{complex_memcheck}, which achieves good asymptotic performance. This scheme is not a {\POR} system and seems to be a secure {\RIC}. So far, all schemes that have been proven secure can also be adopted as {\POR} systems. This brings out the question of whether there are fundamental differences between the two models. To highlight the differences, we introduce a notion, {\em trap-door compression}, that captures a property of compressibility.
2008
EPRINT
Restricted Adaptive Oblivious Transfer
In this work we consider the following primitive, that we call {\it restricted adaptive oblivious transfer}. On the one hand, the owner of a database wants to restrict the access of users to this data according to some policy, in such a way that a user can only obtain information satisfying the restrictions imposed by the owner. On the other hand, a legitimate user wants to privately retrieve allowed parts of the data, in a sequential and adaptive way, without letting the owner know which part of the data is being obtained. After having formally described the components and required properties of a protocol for restricted adaptive oblivious transfer, we propose two schemes to realize this primitive. The first one is only of theoretical interest at the current time, because it uses a cryptographic tool which has not been realized yet: cryptosystems which are both multiplicatively and additively homomorphic. The second scheme, fully implementable, is based on secret sharing schemes.
2008
EPRINT
Results from a Search for the Best Linear Approximation of a Block Cipher
In this paper, we investigate the application of an algorithm to find the best linear approximation of a basic Substitution-Permutation Network block cipher. The results imply that, while it is well known that the S-box used for the Advanced Encryption Standard has good nonlinear properties, it is straightforward to randomly select other S-boxes which are able to provide a similar level of security, as indicated by the exact bias of the best linear approximation found by the algorithm, rather than a simple upper bound on the maximum bias.
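To make the notion of "exact bias" concrete at the component level, the sketch below exhaustively computes the largest bias over all nonzero linear approximations of a single 4-bit S-box (its linear approximation table); this is our own illustration of the bias notion, not the paper's search algorithm for a full SPN, and the example S-box (PRESENT's) is chosen only because it is well known.

from itertools import product

def best_sbox_bias(sbox, n):
    """Return the maximum |#{x : a.x = b.S(x)} / 2^n - 1/2| over all
    nonzero input masks a and output masks b of an n-bit S-box."""
    size = 1 << n
    best = 0.0
    for a, b in product(range(1, size), repeat=2):
        agree = sum(
            1 for x in range(size)
            if (bin(a & x).count("1") + bin(b & sbox[x]).count("1")) % 2 == 0
        )
        best = max(best, abs(agree / size - 0.5))
    return best

# 4-bit S-box of the PRESENT cipher, used purely as a familiar example.
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
print(best_sbox_bias(PRESENT_SBOX, 4))  # an optimal 4-bit S-box gives 0.25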
2008
EPRINT
Revisit of Group-based Unidirectional Proxy Re-encryption Scheme
Currently, researchers have focused their attention on proxy re-encryption schemes deployed between two entities. Many bidirectional schemes have been proposed, and this kind of scheme is suitable for scenarios in which the two entities have already established a relationship of trust. How to construct a unidirectional scheme is an open problem that is receiving increasing attention. In this paper, we present a unidirectional proxy re-encryption scheme for group communication. In this scheme, a proxy is only allowed to convert ciphertext for Alice into ciphertext for Bob without revealing any information about the plaintext or the private key. It is suitable for environments in which no mutual trust relationship exists and transitivity is not permitted. We prove the scheme secure against chosen ciphertext attack in the standard model.
2008
EPRINT
Revisiting Wiener's Attack -- New Weak Keys in RSA
In this paper we revisit Wiener's method (IEEE-IT, 1990) of continued fraction (CF) to find new weaknesses in RSA. We consider RSA with $N = pq$, $q < p < 2q$, public encryption exponent $e$ and private decryption exponent $d$. Our motivation is to find out when RSA is insecure given $d$ is $O(n^\delta)$, where we are mostly interested in the range $0.3 \leq \delta \leq 0.5$. We use both the upper and lower bounds on $\phi(N)$ and then try to find out in which cases $\frac{t}{d}$ is a convergent in the CF expression of $\frac{e}{N - \frac{3}{\sqrt{2}} \sqrt{N} + 1}$. First we show that the RSA keys are weak when $d = N^\delta$ and $\delta < \frac{3}{4} - \gamma - \tau$, where $2q - p = N^\gamma$ and $\tau$ is a small value based on certain parameters. This presents additional results over the work of de Weger (AAECC 2002). We also discuss how the idea of Boneh-Durfee (Eurocrypt 1999) works better to find weak keys beyond the bound $\delta < \frac{3}{4} - \gamma - \tau$. Further we show that the RSA keys are weak when $d < \frac{1}{2} N^\delta$ and $e$ is $O(N^{\frac{3}{2}-2\delta})$ for $\delta \leq \frac{1}{2}$. Using a similar idea we also present new results over the work of Bl{\"o}mer and May (PKC 2004).
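For reference, here is a minimal Python sketch of the classical continued-fraction (Wiener) attack that the paper refines; it tests convergents of $e/N$ rather than the modified fraction analysed above, and all names are our own.

from math import isqrt

def convergents(num, den):
    """Yield the continued-fraction convergents (t, d) of num/den."""
    h0, h1 = 0, 1     # numerators
    k0, k1 = 1, 0     # denominators
    a, b = num, den
    while b:
        q = a // b
        a, b = b, a - q * b
        h0, h1 = h1, q * h1 + h0
        k0, k1 = k1, q * k1 + k0
        yield h1, k1

def wiener(e, N):
    """Classical Wiener attack: recover a small private exponent d when t/d
    appears among the convergents of e/N (roughly d < N^{1/4})."""
    for t, d in convergents(e, N):
        if t == 0 or (e * d - 1) % t:
            continue
        phi = (e * d - 1) // t
        s = N - phi + 1                  # p + q if the guess is right
        disc = s * s - 4 * N
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            return d                     # p, q = (s +/- sqrt(disc)) / 2
    return None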
2008
EPRINT
Revocation Systems with Very Small Private Keys
In this work, we design a new public key broadcast encryption system, and we focus on a critical parameter, the device key size: the amount of cryptographic key material that must be stored securely on the receiving devices. Our new scheme has ciphertext size overhead $O(r)$, where $r$ is the number of revoked users, and the size of public and private keys is only a constant number of group elements from an elliptic-curve group of prime order. All previous work, even in the restricted case of systems based on symmetric keys, required at least $\lg(n)$ keys stored on each device. In addition, we show that our techniques can be used to realize Attribute-Based Encryption (ABE) systems with non-monotonic access formulas, where our key storage is significantly more efficient than in previous solutions. Our results are in the standard model under a new, but non-interactive, assumption.
2008
EPRINT
Robust Combiners for White-Box Security
{\em White-box} security techniques are employed to protect programs so that they can be executed securely in untrusted environments, e.g. for copyright protection. We present the first robust combiner for a white-box primitive, specifically for {\em White-Box Remote Program Execution (WBRPE)} schemes. The {\em WBRPE} combiner takes two input candidate {\em WBRPE} schemes, $W'$ and $W''$, and outputs a third candidate $W=W'\circ W''$. The combiner is $(1,2)$-{\em robust}, namely, $W$ is secure as long as either $W'$ or $W''$ is secure. The security of the combined scheme is established by presenting a reduction to the security of the white-box candidates. The {\em WBRPE} combiner is interesting since it presents new techniques of code manipulation, and in addition it provides both confidentiality and authentication, even though it is a $(1,2)$-robust combiner. Robust combiners are particularly important for {\em white-box} security, since no secure candidates are known to exist. Furthermore, robust combiners for white-box primitives are interesting since they introduce new techniques of reduction.
2008
EPRINT
RSA Cryptanalysis with Increased Bounds on the Secret Exponent using Less Lattice Dimension
We consider RSA with $N = pq$, $q < p < 2q$, public encryption exponent $e$ and private decryption exponent $d$. Boneh and Durfee (Eurocrypt 1999, IEEE-IT 2000) used Coppersmith's method (Journal of Cryptology, 1997) to factorize $N$ using $e$ when $d < N^{0.292}$, the theoretical bound. However, the experimental bound that has been reached so far is only $N^{0.280}$ for 1000-bit integers (and less for larger numbers of bits). The basic idea relies on the LLL algorithm, but the experimental bounds were constrained by large lattice dimensions. In this paper we present theoretical results and experimental evidence to extend the bound of $d$ for which RSA is weak. This requires the knowledge of a few most significant bits of $p$ (alternatively these bits need to be searched exhaustively). We provide experimental results to highlight that the problem can be solved with low lattice dimensions in practice. Our results outperform the existing experimental results by increasing the bounds of $d$, and we also provide clear evidence that RSA with a 1000-bit $N$ and $d$ of the order of $N^{0.3}$ can be cryptanalysed in practice from the knowledge of $N, e$.
2008
EPRINT
RSA-TBOS Signcryption with Proxy Re-encryption
The recent attack on Apple iTunes Digital Rights Management \cite{SJ05} has brought to light the usefulness of proxy re-encryption schemes for Digital Rights Management. It is known that the use of proxy re-encryption would have prevented the attack in \cite{SJ05}. With this utility in mind and with the added requirement of non-repudiation, we propose the first ever signcryption scheme with proxy re-encryption that does not involve bilinear maps. Our scheme is called RSA-TBOS-PRE and is based on the RSA-TBOS signcryption scheme of Mao and Malone-Lee \cite{MM03}. We adapt various models available in the literature concerning authenticity, unforgeability and non-repudiation, and propose a signature non-repudiation model suitable for signcryption schemes with proxy re-encryption. We show the non-repudiability of our scheme in this model. We also introduce and define a new security notion of Weak-IND-CCA2, a slightly weakened adaptation of the IND-CCA2 security model for signcryption schemes, and prove that RSA-TBOS-PRE is secure in this model. Our scheme is Weak-IND-CCA2 secure, unidirectional, extensible to multi-use and does not use bilinear maps. This represents significant progress towards solving the open problem, posed in \cite{CH07,SXC08}, of designing an IND-CCA2 secure, unidirectional, multi-use scheme that does not use bilinear maps.
2008
EPRINT
Scalable and Efficient Provable Data Possession
Storage outsourcing is a rising trend which prompts a number of interesting security issues, many of which have been extensively investigated in the past. However, Provable Data Possession (PDP) is a topic that has only recently appeared in the research literature. The main issue is how to frequently, efficiently and securely verify that a storage server is faithfully storing its client's (potentially very large) outsourced data. The storage server is assumed to be untrusted in terms of both security and reliability. (In other words, it might maliciously or accidentally erase hosted data; it might also relegate it to slow or off-line storage.) The problem is exacerbated by the client being a small computing device with limited resources. Prior work has addressed this problem using either public key cryptography or requiring the client to outsource its data in encrypted form. In this paper, we construct a highly efficient and provably secure PDP technique based entirely on symmetric key cryptography, while not requiring any bulk encryption. Also, in contrast with its predecessors, our PDP technique allows outsourcing of dynamic data, i.e., it efficiently supports operations such as block modification, deletion and append.
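The flavour of symmetric-key spot checking can be illustrated with a toy sketch (this is not the authors' construction): before outsourcing, the client precomputes a few HMAC tokens over randomly selected blocks, and later challenges the server to return exactly those blocks so the tokens can be recomputed.

```python
import hmac, hashlib, os, random

# Toy illustration of symmetric-key "spot checking" in the spirit of PDP
# (NOT the paper's construction): the client precomputes a few HMAC tokens
# over randomly selected blocks before outsourcing, then later challenges the
# server to return those blocks and recomputes the tokens.

BLOCK = 4096

def split_blocks(data):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def precompute_tokens(key, blocks, num_challenges, blocks_per_challenge=2):
    rng = random.Random(0)                       # deterministic for the demo
    tokens = []
    for c in range(num_challenges):
        idx = rng.sample(range(len(blocks)), blocks_per_challenge)
        mac = hmac.new(key, str(c).encode() + b"".join(blocks[i] for i in idx),
                       hashlib.sha256).digest()
        tokens.append((idx, mac))
    return tokens

def verify(key, tokens, challenge_no, returned_blocks):
    idx, mac = tokens[challenge_no]
    recomputed = hmac.new(key, str(challenge_no).encode() + b"".join(returned_blocks),
                          hashlib.sha256).digest()
    return hmac.compare_digest(mac, recomputed)

key = os.urandom(32)
blocks = split_blocks(os.urandom(20 * BLOCK))    # the outsourced file
tokens = precompute_tokens(key, blocks, num_challenges=5)
idx, _ = tokens[0]
print(verify(key, tokens, 0, [blocks[i] for i in idx]))   # True if the server is honest
```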
2008
EPRINT
Scratch, Click & Vote: E2E voting over the Internet
We present the Scratch, Click & Vote voting scheme, which is a modification of the Punchscan and ThreeBallot systems. The scheme is end-to-end verifiable and allows for voting over the Internet. It is secure against malicious hardware and software used by a voter because the voter's computer does not learn anything about the voter's choice; moreover, the computer can successfully change a voter's ballot only with small probability. As a side result, we present a modification of ThreeBallot that eliminates Strauss-like attacks on that scheme.
2008
EPRINT
Searching for Low Weight Codewords in Linear Binary Codes
In this work we revisit the known algorithms for searching for low weight codewords in linear binary codes. We propose some improvements on them and also propose a new efficient heuristic.
2008
EPRINT
Secure Adiabatic Logic: a Low-Energy DPA-Resistant Logic Style
Charge recovery logic families were designed several years ago not to eliminate side-channel leakage but to reduce power consumption. In this article, however, we present a new charge recovery logic style that not only achieves high energy efficiency but also provides resistance against side-channel attacks (SCA), especially differential power analysis (DPA) attacks. Simulation results show a significant improvement in DPA resistance as well as in power consumption compared with the DPA-resistant logic styles proposed so far.
2008
EPRINT
Secure Arithmetic Computation with No Honest Majority
We study the complexity of securely evaluating arithmetic circuits over finite rings. This question is motivated by natural secure computation tasks. Focusing mainly on the case of {\em two-party} protocols with security against {\em malicious} parties, our main goals are to: (1) only make black-box calls to the ring operations and standard cryptographic primitives, and (2) minimize the number of such black-box calls as well as the communication overhead. We present several solutions which differ in their efficiency, generality, and underlying intractability assumptions. These include: \begin{itemize} \item An {\em unconditionally secure} protocol in the OT-hybrid model which makes a black-box use of an arbitrary ring $R$, but where the number of ring operations grows linearly with (an upper bound on) $\log|R|$. \item Computationally secure protocols in the OT-hybrid model which make a black-box use of an underlying ring, and in which the number of ring operations does not grow with the ring size. The protocols rely on variants of previous intractability assumptions related to linear codes. In the most efficient instance of these protocols, applied to a suitable class of fields, the (amortized) communication cost is a constant number of field elements per multiplication gate and the computational cost is dominated by $O(\log k)$ field operations per gate, where $k$ is a security parameter. These results extend a previous approach of Naor and Pinkas for secure polynomial evaluation ({\em SIAM J.\ Comput.}, 35(5), 2006). \item A protocol for the rings $\mathbb{Z}_m=\mathbb{Z}/m\mathbb{Z}$ which only makes a black-box use of a homomorphic encryption scheme. When $m$ is prime, the (amortized) number of calls to the encryption scheme for each gate of the circuit is constant. \end{itemize} All of our protocols are in fact {\em UC-secure} in the OT-hybrid model and can be generalized to {\em multiparty} computation with an arbitrary number of malicious parties.
2008
EPRINT
Secure Biometric Authentication With Improved Accuracy
We propose a new hybrid protocol for cryptographically secure biometric authentication. The main advantages of the proposed protocol over previous solutions can be summarised as follows: (1) potential for much better accuracy using different types of biometric signals, including behavioural ones; and (2) improved user privacy, since user identities are not transmitted at any point in the protocol execution. The new protocol takes advantage of state-of-the-art identification classifiers, which provide not only better accuracy, but also the possibility to perform authentication without knowing who the user claims to be. Cryptographic security is based on the Paillier public key encryption scheme.
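As background for the last sentence, here is a toy sketch of the Paillier building block and its additive homomorphism (illustrative parameters only; this is not the paper's authentication protocol).

```python
from math import gcd

# Toy Paillier encryption with g = n + 1 (illustrative parameters only; this is
# just the building block mentioned above, not the paper's protocol).
p, q = 47, 59
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                              # valid because g = n + 1

def encrypt(m, r):
    # r must be coprime to n; it randomizes the ciphertext
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

c1, c2 = encrypt(12, 123), encrypt(30, 456)
print(decrypt(c1), decrypt(c2), decrypt(c1 * c2 % n2))    # 12 30 42 (additively homomorphic)
```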
2008
EPRINT
Secure Certificateless Public Key Encryption without Redundancy
Certificateless public key cryptography was introduced to solve the key escrow problem in identity based cryptography while enjoying the most attractive {\em certificateless} property. In this paper, we present the first {\em secure} certificateless public key encryption (CLPKE) scheme {\em without redundancy}. Our construction provides optimal bandwidth and a quite efficient decryption process among all the existing CLPKE schemes. It is provably secure against adaptive chosen ciphertext attacks in the random oracle model under a slightly stronger assumption.
2008
EPRINT
Secure Computability of Functions in the IT setting with Dishonest Majority and Applications to Long-Term Security
It is well known that general secure function evaluation (SFE) with information-theoretical (IT) security is infeasible in presence of a corrupted majority in the standard model. On the other hand, there are SFE protocols (Goldreich et al.~[STOC'87]) that are computationally secure (without fairness) in presence of an actively corrupted majority of the participants. Now, the issue with computational assumptions is not so much that they might be unjustified at the time of protocol execution. Rather, we are usually worried about a potential violation of the privacy of sensitive data by an attacker whose power increases over time (e.g.~due to new technical developments). Therefore, we ask which functions can be computed with long-term security, where we admit computational assumptions for the duration of a computation, but require IT security (privacy) once the computation is concluded. Toward this end we combinatorially characterize the classes of functions that can be computed IT securely in the authenticated channels model in presence of passive, semi-honest, active, and quantum adversaries (our results for quantum adversaries and in part for active adversaries are limited to the 2-party setting). In particular we obtain results on the fair computability of functions in the IT setting along the lines of the work of Gordon et al.~[STOC'08] for the computational setting. Our treatment is constructive in the sense that if a function is computable in a given setting, then we exhibit a protocol. We show that the class of functions computable with long-term security in a very practical setting where the adversary may be active and insecure channels and a public-key infrastructure are provided is precisely the class of functions computable with IT security in the authenticated channels model in presence of a semi-honest adversary. Finally, from our results and the work of Kushilevitz [SIAM Journal on Discrete Mathematics '92] and Kraschewski and Müller-Quade we can derive a complete combinatorial classification of functions, by secure computability and completeness under passive, semi-honest, active, and quantum adversaries.
2008
EPRINT
Secure Multiparty Computation for Privacy-Preserving Data Mining
In this paper, we survey the basic paradigms and notions of secure multiparty computation and discuss their relevance to the field of privacy-preserving data mining. In addition to reviewing definitions and constructions for secure multiparty computation, we discuss the issue of efficiency and demonstrate the difficulties involved in constructing highly efficient protocols. We also present common errors that are prevalent in the literature when secure multiparty computation techniques are applied to privacy-preserving data mining. Finally, we discuss the relationship between secure multiparty computation and privacy-preserving data mining, and show which problems it solves and which problems it does not.
2008
EPRINT
Secure Online Elections in Practice
Current remote e-voting schemes aim at a number of security objectives. However, this is not enough to provide secure online elections in practice. Beyond a secure e-voting protocol, there are many organizational and technical security requirements that have to be satisfied by the operational environment in which the scheme is implemented. We have investigated four state-of-the-art e-voting protocols in order to identify the organizational and technical requirements that must be met for these protocols to work correctly. Satisfying these requirements is a costly task which reduces the potential advantages of e-voting considerably. We introduce the concept of a Voting Service Provider (VSP) which carries out electronic elections as a trusted third party and is responsible for satisfying the organizational and technical requirements. We show which measures the VSP takes to meet these requirements. To establish trust in the VSP we propose a Common Criteria evaluation and a legal framework. Following this approach, we show that the VSP enables secure, cost-effective, and thus feasible online elections.
2008
EPRINT
Security needs in embedded systems
The paper discusses the hardware and software security requirements of an embedded device that is involved in the transfer of secure digital data. It gives an overview of the security processes, such as encryption/decryption, key agreement, digital signatures and digital certificates, that are used to achieve data protection during transfer. It also discusses the security requirements in the device needed to prevent physical attacks that could expose secure data, such as secret keys, from the device. Finally, it briefly describes the security enforced in a device by the use of proprietary security technology and the security measures taken during the production of the device.
2008
EPRINT
Security Proof for the Improved Ryu-Yoon-Yoo Identity-Based Key Agreement Protocol
Key agreement protocols are essential for secure communications in open and distributed environments. The protocol design is, however, extremely error-prone as evidenced by the iterative process of fixing discovered attacks on published protocols. We revisit an efficient identity-based (ID-based) key agreement protocol due to Ryu, Yoon and Yoo. The protocol is highly efficient and suitable for real-world applications despite offering no resilience against key-compromise impersonation (K-CI). We then show that the protocol is, in fact, insecure against reflection attacks. A slight modification to the protocol is proposed, which results in significant benefits for the security of the protocol without compromising on its efficiency. Finally, we prove the improved protocol secure in a widely accepted model.
2008
EPRINT
Semi-free start collision attack on Blender
Blender is a cryptographic hash function submitted to NIST's SHA3 competition. We have found a semi-free start collision attack on Blender with trivial complexity. One pair of semi-free start collision messages with zero initial values is presented.
2008
EPRINT
Session-state Reveal is stronger than Ephemeral Key Reveal: Breaking the NAXOS key exchange protocol
In the papers Stronger Security of Authenticated Key Exchange [LLM07, LLM06], a new security model for key exchange protocols is proposed. The new model is suggested to be at least as strong as previous models for key exchange protocols. In particular, the model includes a new notion of an Ephemeral Key Reveal adversary query, which is claimed in [LLM06, Oka07, Ust08] to be at least as strong as existing definitions of the Session-state Reveal query. We show that for some protocols, Session-state Reveal is strictly stronger than Ephemeral Key Reveal. In particular, we show that the NAXOS protocol from [LLM07, LLM06] does not meet its security requirements if the Session-state Reveal query is allowed in the security model.
2008
EPRINT
Setting Speed Records with the (Fractional) Multibase Non-Adjacent Form Method for Efficient Elliptic Curve Scalar Multiplication
In this paper, we introduce the Fractional Window-w Multibase Non-Adjacent Form (Frac-wmbNAF) method for performing scalar multiplication. This method generalizes the recently developed Window-w mbNAF (wmbNAF) method by allowing an unrestricted number of precomputed points. We then make a comprehensive analysis of the most recent and relevant methods in the literature for ECC scalar multiplication, including the presented generalization and its original non-window version, known as the Multibase Non-Adjacent Form (mbNAF). Moreover, we present new improvements in the point operation formulae. Specifically, we further reduce the cost of composite operations such as doubling-addition, tripling, quintupling and septupling of a point, which are relevant for speeding up methods using multiple bases. Next, we analyze the precomputation stage in scalar multiplications and present efficient schemes for the different studied scenarios. Our analysis includes standard elliptic curves using Jacobian coordinates, and also Edwards curves, which are gaining attention due to their high performance. We demonstrate with extensive tests that mbNAF is currently the most efficient method without precomputations, not only for the standard curves but also for the faster Edwards form. Similarly, Frac-wmbNAF is shown to attain the highest performance among window-based methods for all the studied curve forms.
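As background, the following minimal recoder computes the ordinary single-base width-w NAF of a scalar; the multibase and fractional-window recodings introduced in the paper are not reproduced here.

```python
def wnaf(k, w):
    """Width-w non-adjacent form of k, least-significant digit first.

    Minimal single-base sketch; the paper's (Frac-)wmbNAF recodes over
    multiple bases, which is not reproduced here.
    """
    digits = []
    while k > 0:
        if k & 1:
            d = k % (1 << w)
            if d >= 1 << (w - 1):
                d -= 1 << w               # choose the signed odd digit
            k -= d
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits

# A scalar multiplication then scans the digits from most to least significant,
# doubling at every position and adding the precomputed odd multiple d*P when d != 0.
print(wnaf(53, 3))    # [-3, 0, 0, -1, 0, 0, 1], i.e. 53 = 64 - 8 - 3
```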
2008
EPRINT
Sharemind: a framework for fast privacy-preserving computations
Gathering and processing sensitive data is a difficult task. In fact, there is no common recipe for building the necessary information systems. In this paper, we present a provably secure and efficient general-purpose computation system to address this problem. Our solution - SHAREMIND - is a virtual machine for privacy-preserving data processing that relies on share computing techniques, a standard way of securely evaluating functions in a multi-party computation environment. The novelty of our solution is in the choice of the secret sharing scheme and the design of the protocol suite. We have made many practical decisions to make large-scale share computing feasible in practice. The protocols of SHAREMIND are information-theoretically secure in the honest-but-curious model with three computing participants. Although the honest-but-curious model does not tolerate malicious participants, it still provides significantly increased privacy preservation when compared to standard centralised databases.
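A minimal sketch of the additive share computing that such three-party systems build on (the actual SHAREMIND protocol suite is far richer): every value is split into three random shares that sum to it modulo 2^32, and additions can be evaluated locally on the shares without any communication.

```python
import secrets

# Minimal sketch of three-party additive secret sharing over 32-bit integers.
# This only illustrates the sharing idea; it is not the SHAREMIND protocol suite.
MOD = 2 ** 32

def share(x):
    """Split x into three additive shares modulo 2^32."""
    s1, s2 = secrets.randbelow(MOD), secrets.randbelow(MOD)
    return s1, s2, (x - s1 - s2) % MOD

def reconstruct(shares):
    return sum(shares) % MOD

def add_shares(a, b):
    """Each party adds its own shares locally -- no communication needed."""
    return tuple((ai + bi) % MOD for ai, bi in zip(a, b))

a, b = share(1234), share(5678)
print(reconstruct(add_shares(a, b)))   # 6912
```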
2008
EPRINT
Sharing DSS by the Chinese Remainder Theorem
A new threshold scheme for the Digital Signature Standard is proposed using Asmuth-Bloom secret sharing based on the Chinese Remainder Theorem. The proposed scheme is simple and can be used practically in real life.
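Only the Asmuth-Bloom sharing primitive is sketched below (the paper builds threshold DSS on top of it); the toy moduli are illustrative and satisfy the (2, 4)-threshold condition.

```python
import secrets
from math import prod

# Asmuth-Bloom secret sharing with toy parameters (the paper builds threshold
# DSS on top of this primitive; only the sharing itself is sketched here).
m0 = 7                       # secret space Z_7
m = [11, 13, 17, 19]         # pairwise coprime share moduli, (t, n) = (2, 4)
t = 2
assert m[0] * m[1] > m0 * m[-1]          # Asmuth-Bloom condition for t = 2

def share(s):
    limit = prod(m[:t])                  # y must stay below the product of the t smallest moduli
    y = s + m0 * secrets.randbelow((limit - s) // m0)
    return [y % mi for mi in m]

def reconstruct(pairs):                  # any t pairs of (modulus, share)
    M = prod(mi for mi, _ in pairs)
    y = 0
    for mi, yi in pairs:
        Mi = M // mi
        y += yi * Mi * pow(Mi, -1, mi)   # Chinese Remainder Theorem
    return y % M % m0

shares = share(5)
print(reconstruct([(m[1], shares[1]), (m[3], shares[3])]))   # 5
```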
2008
EPRINT
Side Channel Attack Resistant Implementation of Multi-Power RSA using Hensel Lifting
Multi-Power RSA [1] is a fast variant of RSA [2] with a small decryption time, making it attractive for implementation on lightweight cryptographic devices such as smart cards. Hensel Lifting is a key component in the implementation of fast Multi-Power RSA Decryption. However, it is found that a naive implementation of this algorithm is vulnerable to a host of side channel attacks, some of them powerful enough to entirely break the cryptosystem by providing a factorisation of the public modulus $N$. We propose here a secure (under reasonable assumptions) implementation of the Hensel Lifting algorithm. We then use this algorithm to obtain a secure implementation of Multi-Power RSA Decryption.
2008
EPRINT
Signcryption with Proxy Re-encryption
Confidentiality and authenticity are two of the most fundamental problems in cryptography. Many applications require both confidentiality and authenticity, and hence an efficient way to get both together is very desirable. In 1997, Zheng proposed the notion of ``signcryption'', a single primitive which provides both confidentiality and authenticity in a way that is more efficient than signing and encrypting separately. Proxy re-encryption is a primitive that allows a semi-trusted entity called the ``proxy'' to convert ciphertexts addressed to a ``delegator'' into those that can be decrypted by a ``delegatee'', by using some special information given by the delegator, called the ``rekey''. In this work, we propose the notion of signcryption with proxy re-encryption (SCPRE) and motivate it. We define security models for SCPRE, and also propose a concrete unidirectional, non-interactive identity-based SCPRE construction. We also provide complete proofs of security for the scheme in the security models defined. We finally provide directions for further research in this area.
2008
EPRINT
Signing a Linear Subspace: Signature Schemes for Network Coding
Network coding offers increased throughput and improved robustness to random faults in completely decentralized networks. In contrast to traditional routing schemes, however, network coding requires intermediate nodes to modify data packets en route; for this reason, standard signature schemes are inapplicable and it is a challenge to provide resilience to tampering by malicious nodes. Here, we propose two signature schemes that can be used in conjunction with network coding to prevent malicious modification of data. In particular, our schemes can be viewed as signing linear subspaces in the sense that a signature on V authenticates exactly those vectors in V. Our first scheme is homomorphic and has better performance, with both public key size and per-packet overhead being constant. Our second scheme does not rely on random oracles and uses weaker assumptions. We also prove a lower bound on the length of signatures for linear subspaces showing that both of our schemes are essentially optimal in this regard.
2008
EPRINT
Simple and Efficient Asynchronous Byzantine Agreement with Optimal Resilience
Consider a completely asynchronous network consisting of n parties where every two parties are connected by a private channel. An adversary A_t with unbounded computing power actively controls at most t < n / 3 parties in Byzantine fashion. In these settings, we say that \pi is a $t$-resilient, $(1-\epsilon)$-terminating Asynchronous Byzantine Agreement (ABA) protocol, if $\pi$ satisfies all the properties of Byzantine Agreement (BA) in asynchronous settings tolerating A_t and terminates (i.e., every honest party terminates $\pi$) with probability at least $(1-\epsilon)$. In this work, we present a new $t$-resilient, $(1-\epsilon)$-terminating ABA protocol which privately communicates O(C n^{5} \kappa) bits and A-casts O(C n^{5} \kappa) bits, where $\epsilon = 2^{-O(\kappa)}$ and C is the expected running time of the protocol. Here A-Cast is a primitive in the asynchronous world, allowing a party to send the same value to all the other parties; hence A-Cast in the asynchronous world is the parallel notion of broadcast in the synchronous world. Moreover, conditioned on the event that our ABA protocol terminates, it does so in constant expected time; i.e., C = O(1). Our ABA protocol is to be compared with the only known $t$-resilient, $(1-\epsilon)$-terminating ABA protocol of \cite{CanettiSTOC93} in the same settings, which privately communicates O(C n^{11} \kappa^{4}) bits and A-casts O(C n^{11} \kappa^2 \log(n)) bits, where $\epsilon = 2^{-O(\kappa)}$ and $C = O(1)$. So our ABA achieves a huge gain in communication complexity in comparison to the ABA of \cite{CanettiSTOC93}, while keeping all other properties in place. In another landmark work, in PODC 2008, Abraham et al. \cite{DolevAsynchronousBAPODC2008} proposed a $t$-resilient, $1$-terminating (called almost-surely terminating in \cite{DolevAsynchronousBAPODC2008}) ABA protocol which communicates O(C n^{6} \log{n}) bits and A-casts O(C n^{6} \log{n}) bits. But the ABA protocol of Abraham et al. takes polynomial (C = O(n^2)) expected time to terminate. Hence the merits of our ABA protocol over the ABA of Abraham et al. are: (i) for any \kappa < n^3 \log{n}, our ABA is better in terms of communication complexity; (ii) conditioned on the event that our ABA protocol terminates, it does so in constant expected time (the constant is independent of n, t and \kappa), whereas the ABA of Abraham et al. takes polynomial expected time. However, it should be noted that our ABA is only $(1-\epsilon)$-terminating whereas the ABA of Abraham et al. is almost-surely terminating. Summing up, in a practical scenario where a faster and communication-efficient ABA protocol is required, our ABA fits the bill better than the ABA protocols of \cite{CanettiSTOC93,DolevAsynchronousBAPODC2008}. For designing our ABA protocol, we present a novel and simple asynchronous verifiable secret sharing (AVSS) protocol which significantly improves the communication complexity of the only known AVSS protocol of \cite{CanettiSTOC93} in the same settings. We believe that our AVSS can be used in many other applications for improving communication complexity and hence is of independent interest.
2008
EPRINT
Simplified Security Notions of Direct Anonymous Attestation and a Concrete Scheme from Pairings
Direct Anonymous Attestation (DAA) is a cryptographic mechanism that enables remote authentication of a user while preserving privacy under the user's control. The DAA scheme developed by Brickell, Camenisch, and Chen has been adopted by the Trusted Computing Group (TCG) for remote anonymous attestation of the Trusted Platform Module (TPM), a small hardware device with limited storage space and communication capability. In this paper, we provide two contributions to DAA. We first introduce simplified security notions of DAA, including formal definitions of user-controlled anonymity and traceability. We then propose a new DAA scheme from elliptic curve cryptography and bilinear maps. The lengths of private keys and signatures in our scheme are much shorter than the lengths in the original DAA scheme, with a similar level of security and computational complexity. Our scheme builds upon the Camenisch-Lysyanskaya signature scheme and is efficient and provably secure in the random oracle model under the LRSW (Lysyanskaya, Rivest, Sahai and Wolf) assumption and the decisional Bilinear Diffie-Hellman assumption.
2008
EPRINT
Simulatable Adaptive Oblivious Transfer
We study an adaptive variant of oblivious transfer in which a sender has N messages, of which a receiver can adaptively choose to receive k one-after-the-other, in such a way that (a) the sender learns nothing about the receiver’s selections, and (b) the receiver only learns about the k requested messages. We propose two practical protocols for this primitive that achieve a stronger security notion than previous schemes with comparable efficiency. In particular, by requiring full simulatability for both sender and receiver security, our notion prohibits a subtle selective-failure attack not addressed by the security notions achieved by previous practical schemes. Our first protocol is a very efficient generic construction from unique blind signatures in the random oracle model. The second construction does not assume random oracles, but achieves remarkable efficiency with only a constant number of group elements sent during each transfer. This second construction uses novel techniques for building efficient simulatable protocols.
2008
EPRINT
Simultaneous field divisions: an extension of Montgomery's trick
Montgomery's trick is a technique which can be used to quickly compute multiple field inversions simultaneously. We extend this technique to simultaneous field divisions (that is, combinations of field multiplications and field inversions). The generalized Montgomery's trick is faster in some fields than a simple inversion with Montgomery's trick followed by a simple field multiplication.
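For reference, here is a minimal sketch of the underlying trick (batched inversion with a single field inversion); the paper's generalization, which interleaves multiplications into the batch, is not reproduced here.

```python
def batch_invert(xs, p):
    """Montgomery's trick: invert all nonzero xs modulo a prime p using a
    single modular inversion plus 3(n-1) multiplications."""
    prefix = [1]
    for x in xs:
        prefix.append(prefix[-1] * x % p)        # running products x0*...*xi
    inv = pow(prefix[-1], p - 2, p)              # one inversion of the full product
    out = [0] * len(xs)
    for i in reversed(range(len(xs))):
        out[i] = inv * prefix[i] % p             # (x0*...*xi)^{-1} * (x0*...*x_{i-1}) = xi^{-1}
        inv = inv * xs[i] % p                    # strip xi from the accumulated inverse
    return out

p = 10007
xs = [3, 17, 123, 9999]
assert all(x * y % p == 1 for x, y in zip(xs, batch_invert(xs, p)))
```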
2008
EPRINT
Slid Pairs in Salsa20 and Trivium
The stream ciphers Salsa20 and Trivium are two of the finalists of the eSTREAM project and are in its final portfolio of new promising stream ciphers. In this paper we show that the initialization and key-stream generation of these ciphers are {\em slidable}, i.e. one can find distinct (Key, IV) pairs that produce identical (or closely related) key-streams. There are $2^{256}$ and more than $2^{39}$ such pairs in Salsa20 and Trivium respectively. We write out and solve the non-linear equations which describe such related (Key, IV) pairs. This allows us to sample the space of such related pairs efficiently as well as detect such pairs in large portions of key-stream very efficiently. We show that Salsa20 does not have 256-bit security if one considers general birthday and related-key distinguishing and key-recovery attacks.
2008
EPRINT
Slide Attacks on a Class of Hash Functions
This paper studies the application of slide attacks to hash functions. Slide attacks have mostly been used for block cipher cryptanalysis. But, as shown in the current paper, they also form a potential threat for hash functions, namely for sponge-function like structures. As it turns out, certain constructions for hash-function-based MACs can be vulnerable to forgery and even to key recovery attacks. In other cases, we can at least distinguish a given hash function from a random oracle. To illustrate our results, we describe attacks against the Grindahl-256 and Grindahl-512 hash functions. To the best of our knowledge, this is the first cryptanalytic result on Grindahl-512. Furthermore, we point out a slide-based distinguisher attack on a slightly modified version of RadioGatun. We finally discuss simple countermeasures as a defense against slide attacks.
2008
EPRINT
Small Odd Prime Field Multivariate PKCs
We show that Multivariate Public Key Cryptosystems (MPKCs) over fields of small odd prime characteristic, say 31, can be highly efficient. Indeed, at the same design security of $2^{80}$ under the best known attacks, odd-characteristic MPKCs are generally faster than prior MPKCs over \GF{2^k}, which are in turn faster than ``traditional'' alternatives. This seemingly counter-intuitive feat is accomplished by exploiting the comparative over-abundance of small integer arithmetic resources in commodity hardware, here embodied by SSE2 or more advanced special multimedia instructions on modern x86-compatible CPUs. We explain our implementation techniques and design choices in implementing our chosen MPKC instances modulo a small odd prime. The same techniques are also applicable in modern FPGAs, which often contain a large number of multipliers.
2008
EPRINT
SMS4 Encryption Algorithm for Wireless Networks
SMS4 is a Chinese block cipher standard, mandated for use in protecting wireless networks, and issued in January 2006. The input, output, and key of SMS4 are each 128 bits. The algorithm has 32 rounds, each of which modifies one of the four 32-bit words that make up the block by xoring it with a keyed function of the other three words. Encryption and decryption have the same structure except that the round key schedule for decryption is the reverse of the round key schedule for encryption.
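The word-oriented structure described above can be pictured with a short structural sketch; the round function F below is a placeholder (the real SMS4 round function applies an S-box layer and a linear transform), and the toy key schedule is an assumption.

```python
# Structural sketch of the SMS4-style round flow described above.  F is a
# PLACEHOLDER, not the real SMS4 round function (which applies an S-box layer
# followed by a linear transform), and the key schedule is a toy assumption.
# The point is only how each round updates one of the four 32-bit words and
# how decryption reuses the same structure with reversed round keys.

MASK = 0xFFFFFFFF

def F(a, b, c, rk):
    return (a ^ b ^ c ^ rk) * 0x9E3779B9 & MASK      # toy keyed mixing, assumption

def crypt(words, round_keys):
    x0, x1, x2, x3 = words
    for rk in round_keys:
        x0, x1, x2, x3 = x1, x2, x3, x0 ^ F(x1, x2, x3, rk)
    return [x3, x2, x1, x0]                           # final word reversal, as in SMS4

round_keys = [(i * 0x01234567) & MASK for i in range(32)]   # toy schedule, assumption
block = [0x01020304, 0x05060708, 0x090A0B0C, 0x0D0E0F10]
ct = crypt(block, round_keys)
pt = crypt(ct, list(reversed(round_keys)))            # decryption: reversed key order
assert pt == block
```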
2008
EPRINT
Software Implementation of Genus-2 Hyperelliptic Curve Cryptosystems Over Prime Fields
This paper describes the system parameters and software implementation of a HECDSA cryptosystem based on genus-2 hyperelliptic curves over prime fields. We show how to reduce the computational complexity for special cases and compare the given cryptosystem with the well-known ECDSA cryptosystem based on elliptic curves.
2008
EPRINT
Some Observations on SHAMATA
In this note we discuss some observations on the SHA-3 candidate SHAMATA. We observe that its internal block cipher is very weak, which could possibly lead to an attack on the hash function.
2008
EPRINT
Some Observations on Strengthening the SHA-2 Family
In this work, we study several properties of the SHA-2 design which have been utilized in recent collision attacks against reduced SHA-2. We suggest small modifications to the SHA-2 design to thwart these attacks. The cost of SHA-2 evaluations does not change significantly due to our modifications but the new design provides resistance to the recent collision attacks. Further, we describe an easy method of exhibiting non-randomness of the compression functions of the entire SHA family, that is SHA-0, SHA-1 and all the hash functions in SHA-2. Specifically, we show that given any $\IV_1$ and any pair of messages $M_1$ and $M_2$, an $\IV_2$ can be easily and deterministically constructed such that the relation $H(\IV_1,M_1)-\IV_1 = H(\IV_2,M_2)-\IV_2$ holds. For a truly random hash function $H$ outputting a $k$-bit digest, such a relation should hold with probability $2^{-k}$. We introduce the general idea of ``multiple feed-forward" in the context of construction of cryptographic hash functions. When used in SHA designs, this technique removes the non-randomness mentioned earlier. Perhaps more importantly, it provides increased resistance to the Chabaud-Joux type ``perturbation-correction'' collision attacks. The idea of feed-forward is taken further by introducing the idea of feed-forward across message blocks. This provides quantifiably better resistance to Joux type generic multi-collision attacks. For example, with our modification of SHA-256, finding $2^r$ messages which map to the same value will require $r\times 2^{384}$ invocations of the compression function.
2008
EPRINT
Somewhat Non-Committing Encryption and Efficient Adaptively Secure Oblivious Transfer
Designing efficient cryptographic protocols tolerating adaptive adversaries, who are able to corrupt parties on the fly as the computation proceeds, has been an elusive task. Indeed, thus far no \emph{efficient} protocols achieve adaptive security for general multi-party computation, or even for many specific two-party tasks such as oblivious transfer (OT). In fact, it is difficult and expensive to achieve adaptive security even for the task of \emph{secure communication}, which is arguably the most basic task in cryptography. In this paper we make progress in this area. First, we introduce a new notion called \emph{semi-adaptive} security which is slightly stronger than static security but \emph{significantly weaker than fully adaptive security}. The main difference between adaptive and semi-adaptive security is that, for semi-adaptive security, the simulator is not required to handle the case where \emph{both} parties start out honest and one becomes corrupted later on during the protocol execution. As such, semi-adaptive security is much easier to achieve than fully adaptive security. We then give a simple, generic protocol compiler which transforms any semi-adaptively secure protocol into a fully adaptively secure one. The compilation effectively decomposes the problem of adaptive security into two (simpler) problems which can be tackled separately: the problem of semi-adaptive security and the problem of realizing a weaker variant of secure channels. We solve the latter problem by means of a new primitive that we call {\em somewhat non-committing encryption} resulting in significant efficiency improvements over the standard method for realizing (fully) secure channels using (fully) non-committing encryption. Somewhat non-committing encryption has two parameters: an equivocality parameter $\ell$ (measuring the number of ways that a ciphertext can be ``opened'') and the message sizes $k$. Our implementation is very efficient for small values $\ell$, \emph{even} when $k$ is large. This translates into a very efficient compilation of many semi-adaptively secure protocols (in particular, for a task with small input/output domains such as bit-OT) into a fully adaptively secure protocol. Finally, we showcase our methodology by applying it to the recent Oblivious Transfer protocol by Peikert \etal\ [Crypto 2008], which is only secure against static corruptions, to obtain the first efficient (expected-constant round, expected-constant number of public-key operations), adaptively secure and composable bit-OT protocol.
2008
EPRINT
Sound and Fine-grain Specification of Cryptographic Tasks
The Universal Composability (UC) framework, introduced by Canetti, allows for the design of cryptographic protocols satisfying strong security properties, such as non-malleability and preservation of security under (concurrent) composition. In the UC framework (as in several other frameworks), the security of a protocol carrying out a given task is formulated via the ``trusted-party paradigm,'' where the protocol execution is compared with an ideal process where the outputs are computed by a trusted party that sees all the inputs. A protocol is said to securely carry out a given task if running the protocol with a realistic adversary amounts to ``emulating'' the ideal process with the appropriate trusted party. In the UC framework the program run by the trusted party is called an {\em ideal functionality}. However, while this simulation-based security formulation provides strong security guarantees, its usefulness is contingent on the properties and correct specification of the realized ideal functionality, which, as demonstrated in recent years by the coexistence of complex, multiple functionalities for the same task as well as by their ``unstable'' nature, does not seem to be an easy task. On the other hand, the more traditional, {\em game-based} definitions of cryptographic tasks, although providing a less satisfying level of security (stand-alone executions, or executions in very controlled settings), have been successful in terms of formalizing as well as capturing the underlying task's natural properties. In this paper we address this gap in the security modeling of cryptographic properties, and introduce a general methodology for translating game-based definitions of properties of cryptographic tasks to syntactically concise ideal functionality programs. Moreover, taking advantage of a suitable algebraic structure of the space of our ideal functionality programs, we are able to ``accumulate'' ideal functionalities based on many different game-based security notions. In this way, we can obtain a well-defined mapping of all the game-based security properties of a cryptographic task to its corresponding UC counterpart. In addition, the methodology allows us to ``debug'' existing ideal functionalities, establish relations between them, and make some critical observations about the modeling of the ideal process in the UC framework. We demonstrate the power of our approach by applying our methodology to a variety of basic cryptographic tasks, including commitments, digital signatures, public-key encryption, zero-knowledge proofs, and oblivious transfer. Instrumental in our translation methodology is the new notion of a {\em canonical functionality class} for a cryptographic task which is endowed with a bounded semilattice structure. This structure allows the grading of ideal functionalities according to the level of security they offer as well as their natural joining, enabling the modular combination of security properties.
2008
EPRINT
Sphinx: A Compact and Provably Secure Mix Format
Sphinx is a cryptographic message format used to relay anonymized messages within a mix network. It is more compact than any comparable scheme, and supports a full set of security features: indistinguishable replies, hiding the path length and relay position, as well as providing unlinkability for each leg of the message's journey over the network. We prove the full cryptographic security of Sphinx in the random oracle model, and we describe how it can be used as an efficient drop-in replacement in deployed remailer systems.
2008
EPRINT
Strongly Secure Authenticated Key Exchange Protocol Based on Computational Diffie-Hellman Problem
Currently, there are many authenticated key exchange (AKE) protocols in the literature. However, security proofs for this kind of protocol have proven to be a non-trivial task. The main issue is that without the static private key it is difficult for the simulator to fully support the SessionKeyReveal and EphemeralKeyReveal queries. Some proposals which have been proven secure either hold only in relatively weak models which do not fully support the two above-mentioned queries, or make use of the stronger gap assumption. In this paper, using a new technique named the twin Diffie-Hellman problem proposed by Cash, Kiltz and Shoup, we present a new AKE protocol based on the computational Diffie-Hellman (CDH) assumption, which is more standard than the gap Diffie-Hellman (GDH) assumption. Moreover, our scheme is shown to be secure in a strong security definition, the enhanced Canetti-Krawczyk (eCK) model introduced by LaMacchia, Lauter and Mityagin, which better supports the adversary's queries than previous models.
2008
EPRINT
Strongly Unforgeable ID-based Signatures Without Random Oracles
It is an open problem to construct ID-based signature schemes which satisfy strongly EUF-ID-CMA without random oracles; strongly EUF-ID-CMA is known to be the strongest security notion for ID-based signatures. In this paper, we propose a solution to this open problem: an ID-based signature scheme which, for the first time, satisfies strongly EUF-ID-CMA without random oracles. The security of the scheme is based on the difficulty of solving three problems related to the Diffie-Hellman problem and a one-way isomorphism.
2008
EPRINT
Strongly-Resilient and Non-Interactive Hierarchical Key-Agreement in MANETs
Key agreement is a fundamental security functionality by which pairs of nodes agree on shared keys to be used for protecting their pairwise communications. In this work we study key-agreement schemes that are well-suited for the mobile network environment. Specifically, we describe schemes with the following characteristics: -- Non-interactive: any two nodes can compute a unique shared secret key without interaction; -- Identity-based: to compute the shared secret key, each node only needs its own secret key and the identity of its peer; -- Hierarchical: the scheme is decentralized through a hierarchy where intermediate nodes can derive the secret keys for each of their children without any limitations or prior knowledge on the number of such children or their identities; -- Resilient: the scheme is fully resilient against compromise of {\em any number of leaves} in the hierarchy, and of a threshold number of nodes in each of the upper levels of the hierarchy. Several schemes in the literature have three of these four properties, but the schemes in this work are the first to possess all four. This makes them well-suited for environments such as MANETs and tactical networks, which are very dynamic, have significant bandwidth and energy constraints, and where many nodes are vulnerable to compromise. We provide rigorous analysis of the proposed schemes and discuss implementation aspects.
2008
EPRINT
Survival in the Wild: Robust Group Key Agreement in Wide-Area Networks
Group key agreement (GKA) allows a set of players to establish a shared secret and thus bootstrap secure group communication. GKA is very useful in many types of peer group scenarios and applications. Since all GKA protocols involve multiple rounds, robustness to player failures is important and desirable. A robust group key agreement (RGKA) protocol runs to completion even if some players fail during protocol execution. Previous work yielded constant-round RGKA protocols suitable for the LAN setting, assuming players are homogeneous, failure probability is uniform and player failures are independent. However, in a more general wide-area network (WAN) environment, heterogeneous hardware/software and communication facilities can cause wide variations in failure probability among players. Moreover, congestion and communication equipment failures can result in correlated failures among subsets of GKA players. In this paper, we construct the first RGKA protocol that supports players with different failure probabilities, spread across any LAN/WAN combination, while also allowing for correlated failures among subgroups of players. The proposed protocol is efficient (2 rounds) and provably secure. We evaluate its robustness and performance both analytically and via simulations.
2008
EPRINT
Template Attacks on ECDSA
Template attacks have been considered exclusively in the context of implementations of symmetric cryptographic algorithms on 8-bit devices. Within these scenarios, they have proven to be the most powerful attacks. This is not surprising because they assume the most powerful adversaries. In this article we investigate how template attacks can be applied to implementations of an asymmetric cryptographic algorithm on a 32-bit platform. The asymmetric cryptosystem under scrutiny is the elliptic curve digital signature algorithm (ECDSA). ECDSA is particularly suitable for 32-bit platforms. In this article we show that even SPA resistant implementations of ECDSA on a typical 32-bit platform succumb to template-based SPA attacks. The only way to secure such implementations against template-based SPA attacks is to make them resistant against DPA attacks.
2008
EPRINT
The $F_f$-Family of Protocols for RFID-Privacy and Authentication
In this paper, we present the design of the lightweight $F_f$ family of privacy-preserving authentication protocols for RFID-systems. $F_f$ is based on a new algebraic framework for reasoning about and analyzing this kind of authentication protocols. $F_f$ offers user-adjustable, strong authenticity and privacy against known algebraic and also recent SAT-solving attacks. In contrast to related work, $F_f$ achieves these two security properties without requiring an expensive cryptographic hash function. $F_f$ is designed for a challenge-response protocol, where the tag sends random nonces and the results of HMAC-like computations of one of the nonces together with its secret key. In this paper, the authenticity and privacy of $F_f$ is evaluated using analytical and experimental methods.
2008
EPRINT
The arithmetic of characteristic 2 Kummer surfaces
The purpose of this paper is a description of a model of Kummer surfaces in characteristic 2, together with the associated formulas for the pseudo-group law. Since the classical model has bad reduction, a renormalization of the parameters is required, that can be justified using the theory of algebraic theta functions. The formulas that are obtained are very efficient and may be useful in cryptographic applications. We also show that applying the same strategy to elliptic curves gives Montgomery-like formulas in odd characteristic that are of some interest, and we recover already known formulas by Stam in characteristic 2.
2008
EPRINT
The CCA2-Security of Hybrid Damgård's ElGamal
We consider a hybrid version of Damgård's ElGamal public-key encryption scheme that incorporates the use of a symmetric cipher and a hash function for key-derivation. We prove that under appropriate choice of the hash function this scheme is IND-CCA secure under the Decisional Diffie-Hellman assumption in the standard model. Our results can be generalized to universal hash proof systems where our main technical contribution can be viewed as an efficient generic transformation from 1-universal to 2-universal hash proof systems.
2008
EPRINT
The computational SLR: a logic for reasoning about computational indistinguishability
Computational indistinguishability is a notion in complexity-theoretic cryptography and is used to define many security criteria. However, in traditional cryptography, proving computational indistinguishability is usually informal and becomes error-prone when cryptographic constructions are complex. This paper presents a formal axiomatization system based on an extension of Hofmann's SLR language, which can capture probabilistic polynomial-time computations through typing and is sufficient for expressing cryptographic constructions. In particular, we define rules that directly justify the computational indistinguishability between programs, and prove that these rules are sound with respect to the set-theoretic semantics, and hence the standard definition of security. We also show that the system is applicable in cryptography by verifying, in our proof system, Goldreich and Micali's construction of a pseudorandom generator, and the equivalence between next-bit unpredictability and pseudorandomness.
2008
EPRINT
The Cost of False Alarms in Hellman and Rainbow Tradeoffs
With any cryptanalytic time memory tradeoff algorithm, false alarms pose a major obstacle in accurate assessment of its online time complexity. We study the size of pre-image set under function iterations and use the obtained theory to analyze the cost incurred by false alarms. We are able to present the expected online time complexities for the Hellman tradeoff and the rainbow table method in a manner that takes false alarms into account. The effect of checkpoints in reducing the cost of false alarms is also quantified.
2008
EPRINT
THE DESIGN OF BOOLEAN FUNCTIONS BY MODIFIED HILL CLIMBING METHOD
The design of Boolean functions is a broad area of cryptographic research, and Boolean functions play an important role in the construction of symmetric cryptosystems. In this paper a modified hill climbing method is considered. The method uses hill climbing techniques to modify bent functions in order to design balanced, highly nonlinear Boolean functions with high algebraic degree and low autocorrelation. Experimental results on constructing cryptographically strong Boolean functions are presented.
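A generic hill-climbing loop of the kind referred to above can be sketched as follows (this is not the paper's modified method, which starts from bent functions): nonlinearity is computed from the Walsh-Hadamard transform, and balance-preserving truth-table swaps are kept only when nonlinearity does not decrease.

```python
import random

# Generic hill-climbing sketch for Boolean functions (not the paper's modified
# method): swap a 0 and a 1 in the truth table (preserving balancedness) and
# keep the change when nonlinearity does not decrease.

def walsh(tt):
    """Walsh-Hadamard transform of a truth table given as a 0/1 list."""
    f = [1 - 2 * b for b in tt]              # 0/1 -> +1/-1
    n, step = len(f), 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                f[j], f[j + step] = f[j] + f[j + step], f[j] - f[j + step]
        step *= 2
    return f

def nonlinearity(tt):
    return len(tt) // 2 - max(abs(w) for w in walsh(tt)) // 2

n = 8                                        # 8-variable Boolean function
tt = [0] * (2 ** (n - 1)) + [1] * (2 ** (n - 1))
random.shuffle(tt)                           # random balanced starting point
best = nonlinearity(tt)
for _ in range(5000):
    i = random.choice([k for k, b in enumerate(tt) if b == 0])
    j = random.choice([k for k, b in enumerate(tt) if b == 1])
    tt[i], tt[j] = tt[j], tt[i]
    nl = nonlinearity(tt)
    if nl >= best:
        best = nl
    else:
        tt[i], tt[j] = tt[j], tt[i]          # revert the swap
print("nonlinearity reached:", best)
```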
2008
EPRINT
The Elliptic Curve Discrete Logarithm Problem and Equivalent Hard Problems for Elliptic Divisibility Sequences
We define three hard problems in the theory of elliptic divisibility sequences (EDS Association, EDS Residue and EDS Discrete Log), each of which is solvable in sub-exponential time if and only if the elliptic curve discrete logarithm problem is solvable in sub-exponential time. We also relate the problem of EDS Association to the Tate pairing and the MOV, Frey-R\"{u}ck and Shipsey EDS attacks on the elliptic curve discrete logarithm problem in the cases where these apply.
2008
EPRINT
The Encrypted Elliptic Curve Hash
Bellare and Micciancio's MuHASH applies a pre-existing hash function to map indexed message blocks into a secure group. The resulting hash is the product. Bellare and Micciancio proved, in the random oracle model, that MuHASH is collision-resistant if the group's discrete logarithm problem is infeasible. MuHASH, however, relies on a pre-existing hash being collision resistant. In this paper, we remove such a reliance by replacing the pre-existing hash with a block cipher under a fixed key. We adapt Bellare and Micciancio's collision-resistance proof to the ideal cipher model. Preimage resistance requires us to add a further modification.
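The multiplicative structure shared by MuHASH and the proposal above can be sketched as follows; SHA-256 stands in for the per-block map (the paper replaces it with a fixed-key block cipher), and the small safe-prime group is an illustrative assumption.

```python
import hashlib

# Sketch of the MuHASH-style multiplicative structure: each indexed block is
# mapped into a group and the digest is the product.  SHA-256 stands in for
# the per-block map (the paper replaces it with a fixed-key block cipher), and
# the tiny safe-prime group is an illustrative assumption.
p = 2039                                     # toy safe prime, subgroup order q = 1019

def to_group(i, block):
    h = int.from_bytes(hashlib.sha256(i.to_bytes(8, "big") + block).digest(), "big")
    return pow(h % (p - 1) + 1, 2, p)        # square into the order-q subgroup

def muhash(blocks):
    acc = 1
    for i, blk in enumerate(blocks):
        acc = acc * to_group(i, blk) % p
    return acc

print(muhash([b"alpha", b"beta", b"gamma"]))
# Incremental updates: replacing block i only needs dividing out the old
# factor and multiplying in the new one.
```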
2008
EPRINT
The Enigmatique Toolkit
This paper describes a new method of creating systems of poly-alphabetic, symmetric ciphers with a large set of algorithms. The method uses two translation tables containing the code-text character set, one for encryption and the other for decryption. After each character is encrypted or decrypted, these tables are permuted through a series of pseudo-random, pairwise swaps. The implementation of alternative swap formulas leads to systems with large sets of encryption/decryption algorithms. Systems that contain more than a googol (1E100) algorithms have been easily implemented. An algorithm set can easily be subsetted, producing hierarchical systems where a program may contain a single algorithm or a portion of the full set. The strength of these systems has not been determined. However, strength can be inferred through the use of a substantial number of numeric keys, pseudo-random number generators, algorithm switching, and other features.
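The table-permutation mechanism described above can be illustrated with a simplified sketch; the byte alphabet, the use of Python's seeded random as the swap source, and the single swap per character are assumptions of this illustration, not features of the toolkit.

```python
import random

# Simplified sketch of the table-permutation mechanism described above:
# encryption and decryption tables are kept mutually inverse, and after every
# character a keyed PRNG picks two positions of the table to swap.  The byte
# alphabet, the seeded `random` PRNG and the single swap per character are
# simplifications of this illustration, not features of the toolkit.

ALPHABET = 256                                   # work on raw bytes

def make_state(key):
    rng = random.Random(key)
    enc = list(range(ALPHABET))
    rng.shuffle(enc)                             # initial keyed translation table
    dec = [0] * ALPHABET
    for i, c in enumerate(enc):
        dec[c] = i
    return rng, enc, dec

def swap(rng, enc, dec):
    i, j = rng.randrange(ALPHABET), rng.randrange(ALPHABET)
    enc[i], enc[j] = enc[j], enc[i]
    dec[enc[i]], dec[enc[j]] = i, j              # keep the inverse table in sync

def encrypt(key, data):
    rng, enc, dec = make_state(key)
    out = bytearray()
    for b in data:
        out.append(enc[b])
        swap(rng, enc, dec)
    return bytes(out)

def decrypt(key, data):
    rng, enc, dec = make_state(key)
    out = bytearray()
    for b in data:
        out.append(dec[b])
        swap(rng, enc, dec)
    return bytes(out)

ct = encrypt(12345, b"attack at dawn")
assert decrypt(12345, ct) == b"attack at dawn"
```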
2008
EPRINT
The Generic Hardness of Subset Membership Problems under the Factoring Assumption
We analyze a large class of subset membership problems related to integer factorization. We show that there is no algorithm solving these problems efficiently without exploiting properties of the given representation of ring elements, unless factoring integers is easy. Our results imply that problems with high relevance for a large number of cryptographic applications, such as the quadratic residuosity and the subgroup decision problems, are generically equivalent to factoring.
2008
EPRINT
The Hidden Root Problem
In this paper we study a novel computational problem called the Hidden Root Problem, which appears naturally when considering fault attacks on pairing based cryptosystems. Furthermore, a variant of this problem is one of the main obstacles for efficient pairing inversion. We present an algorithm to solve this problem over extension fields and investigate for which parameters the algorithm becomes practical.
2008
EPRINT
The Multireceiver Commitment Schemes
Existing commitment schemes were designed for the classic two-party scenario. However, the popularity of secure multi-party computation in today's rich network communication motivates the adoption of more sophisticated commitment schemes. In this paper, we study for the first time multireceiver commitment in the unconditionally secure setting, i.e., one committer promises a group of verifiers a common secret value (in the computational setting this is trivial). We extend the Rivest model for this purpose and present a provably secure generic construction using multireceiver authentication codes (without secrecy) as a building block. Two concrete schemes are proposed as its immediate implementations, which are almost as efficient as an optimal MRA-code. Furthermore, to affirmatively answer the open question of Pinto, Souto, Matos and Antunes, we also present a generic construction (for the two-party case) using only an A-code with secrecy. Finally, we show the possibility of constructing multireceiver commitment schemes using other primitives such as verifiable secret sharing. We leave several open problems and believe the work will open doors for more interesting research.
2008
EPRINT
The Random Oracle Model and the Ideal Cipher Model are Equivalent
The Random Oracle Model and the Ideal Cipher Model are two well known idealised models of computation for proving the security of cryptosystems. At Crypto 2005, Coron et al. showed that security in the random oracle model implies security in the ideal cipher model; namely they showed that a random oracle can be replaced by a block cipher-based construction, and the resulting scheme remains secure in the ideal cipher model. The other direction was left as an open problem, i.e. constructing an ideal cipher from a random oracle. In this paper we solve this open problem and show that the Feistel construction with 6 rounds is enough to obtain an ideal cipher; we also show that 5 rounds are insufficient by providing a simple attack. This contrasts with the classical Luby-Rackoff result that 4 rounds are necessary and sufficient to obtain a (strong) pseudo-random permutation from a pseudo-random function.
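The object analysed in the paper can be pictured with a short sketch: a Feistel network whose round functions are independent random-oracle-like functions (SHA-256 keyed with the round index stands in for them here). The sketch only shows the construction itself; the indifferentiability argument for six rounds is, of course, not captured by it.

```python
import hashlib

# Sketch of the object the paper analyses: a Feistel network whose round
# functions are independent "random oracles" (SHA-256 keyed with the round
# index stands in for them).  The paper shows that 6 such rounds suffice to
# obtain an ideal cipher and that 5 do not; only the construction is shown.

HALF = 16                                        # 128-bit halves

def F(i, half):
    return hashlib.sha256(bytes([i]) + half).digest()[:HALF]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(block, rounds=6):
    L, R = block[:HALF], block[HALF:]
    for i in range(rounds):
        L, R = R, xor(L, F(i, R))
    return L + R

def feistel_decrypt(block, rounds=6):
    L, R = block[:HALF], block[HALF:]
    for i in reversed(range(rounds)):
        L, R = xor(R, F(i, L)), L
    return L + R

pt = bytes(range(32))
assert feistel_decrypt(feistel_encrypt(pt)) == pt
```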
2008
EPRINT
The SIP security enhanced by using pairing-assisted Massey-Omura signcryption
Voice over IP (VoIP) has been adopted progressively not only by a great number of companies but also by an expressive number of people, in Brazil and in other countries. However, this growing adoption of VoIP brings some concerns, such as security risks and threats, mainly regarding the privacy and integrity of the communication. These risks and threats (among which we can emphasize the man-in-the-middle attack, because of its high degree of danger) already exist in the signalling process for call establishment. This signalling process is executed by specific types of protocols, like SIP (Session Initiation Protocol). After a bibliographical review of the current SIP security mechanisms and an analysis of some proposals to improve them, we verified that the SIP vulnerability to the man-in-the-middle attack was not totally solved. We therefore propose a new security mechanism for SIP in this paper, aiming both to serve as an alternative security mechanism and to solve the vulnerability to the man-in-the-middle attack. In our proposal we use a protocol for secure information exchange -- the Massey-Omura protocol -- which, when combined with Pairing-based Cryptography (PBC), provides a high security level for SIP in all its aspects.
2008
EPRINT
The SIP Security Enhanced by Using Pairing-assisted Massey-Omura Signcryption
Voice over IP (VoIP) has been adopted progressively not only by a great number of companies but also by a significant number of people, in Brazil and in other countries. However, this growing adoption of VoIP brings some concerns, such as security risks and threats, mainly regarding the privacy and integrity of the communication. These risks and threats already exist in the signaling process used for call establishment. This signaling process is performed by specific protocols, such as H.323 and SIP (Session Initiation Protocol). Among those risks and threats, we emphasize the man-in-the-middle attack because of its high degree of danger. After reviewing the current SIP security mechanisms and analyzing some proposals to improve them, we verified that the SIP vulnerability to the man-in-the-middle attack has not been completely solved. We therefore propose a new security mechanism for SIP in this paper, intended both as an alternative security mechanism and as a solution to the man-in-the-middle vulnerability. In our proposal we use a protocol for secure information exchange -- the Massey-Omura protocol -- which, when combined with Pairing-based Cryptography (PBC), provides a better security level for SIP in all its aspects.
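For reference, the underlying Massey-Omura (three-pass) exchange works over a group of order $q$ as follows (notation ours, independent of the pairing-based instantiation used in the paper): with secret exponents $e_A, e_B$ and their inverses $d_A, d_B$ modulo $q$, A sends $c_1 = m^{e_A}$, B replies with $c_2 = c_1^{e_B} = m^{e_A e_B}$, A removes its layer and sends $c_3 = c_2^{d_A} = m^{e_B}$, and B recovers $m = c_3^{d_B}$, since $e_A d_A \equiv e_B d_B \equiv 1 \pmod{q}$. The unauthenticated three-pass exchange by itself does not resist man-in-the-middle attacks, hence the combination with pairing-based cryptography described above.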
2008
EPRINT
The Twin Diffie-Hellman Problem and Applications
We propose a new computational problem called the twin Diffie-Hellman problem. This problem is closely related to the usual (computational) Diffie-Hellman problem and can be used in many of the same cryptographic constructions that are based on the Diffie-Hellman problem. Moreover, the twin Diffie-Hellman problem is at least as hard as the ordinary Diffie-Hellman problem. However, we are able to show that the twin Diffie-Hellman problem remains hard, even in the presence of a decision oracle that recognizes solutions to the problem — this is a feature not enjoyed by the ordinary Diffie-Hellman problem. In particular, we show how to build a certain “trapdoor test” that allows us to effectively answer such decision oracle queries without knowing any of the corresponding discrete logarithms. Our new techniques have many applications. As one such application, we present a new variant of ElGamal encryption with very short ciphertexts, and with a very simple and tight security proof, in the random oracle model, under the assumption that the ordinary Diffie-Hellman problem is hard. We present several other applications as well, including: a new variant of Diffie and Hellman’s non-interactive key exchange protocol; a new variant of Cramer-Shoup encryption, with a very simple proof in the standard model; a new variant of Boneh-Franklin identity-based encryption, with very short ciphertexts; a more robust version of a password-authenticated key exchange protocol of Abdalla and Pointcheval.
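For orientation, using the standard notation $dh(X, Y) = g^{xy}$ for $X = g^x$ and $Y = g^y$: the twin Diffie-Hellman function is $2dh(X_1, X_2, Y) = (dh(X_1, Y), dh(X_2, Y))$, and the strong twin DH assumption states that computing $2dh(X_1, X_2, Y)$ for random $X_1, X_2, Y$ remains hard even when the adversary may query a decision oracle that, on input $(\tilde{Y}, \tilde{Z}_1, \tilde{Z}_2)$, reveals whether $2dh(X_1, X_2, \tilde{Y}) = (\tilde{Z}_1, \tilde{Z}_2)$. The trapdoor test mentioned above is what lets a reduction answer such oracle queries without knowing the discrete logarithms of $X_1$ and $X_2$.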
2008
EPRINT
The Walsh Spectrum of a New Family of APN Functions
The extended Walsh spectrum of a new family of APN functions is computed. It turns out that the Walsh spectrum of these functions is the same as that of the Gold functions.
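For comparison, and as a standard fact not taken from the paper: the Gold functions are $x \mapsto x^{2^k+1}$ on $GF(2^n)$ with $\gcd(k, n) = 1$, and for odd $n$ they are almost bent, so their extended Walsh spectrum takes only the values $0$ and $\pm 2^{(n+1)/2}$.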
2008
EPRINT
Threshold RSA for Dynamic and Ad-Hoc Groups
We consider the use of threshold signatures in ad-hoc and dynamic groups such as MANETs ("mobile ad-hoc networks"). While the known threshold RSA signature schemes have several properties that make them good candidates for deployment in these scenarios, we show that none of these schemes is practical enough for realistic use in these highly-constrained environments. In particular, this is the case of the most efficient of these threshold RSA schemes, namely, the one due to Shoup. Our contribution is in presenting variants of Shoup's protocol that overcome the limitations that make the original protocol unsuitable for dynamic groups. The resultant schemes provide the efficiency and flexibility needed in ad-hoc groups, and add the capability of incorporating new members (share-holders) to the group of potential signers without relying on central authorities. Namely, any threshold of existing members can cooperate to add a new member. The schemes are efficient, fully non-interactive and do not assume broadcast.
2008
EPRINT
Time-Area Optimized Public-Key Engines: MQ-Cryptosystems as Replacement for Elliptic Curves?
In this paper, ways to efficiently implement public-key schemes based on Multivariate Quadratic polynomials (MQ-schemes for short) are investigated. In particular, such schemes are claimed to resist quantum computer attacks. It is shown that they can have a much better time-area product than elliptic curve cryptosystems. For instance, an optimised FPGA implementation of amended TTS is estimated to be over 50 times more efficient with respect to this parameter. Moreover, a general framework for implementing small-field MQ-schemes in hardware is proposed, which includes a systolic architecture performing Gaussian elimination over composite binary fields.
2008
EPRINT
TinyECCK: Efficient Elliptic Curve Cryptography Implementation over $GF(2^m)$ on 8-bit MICAz Mote
In this paper, we revisit a generally accepted opinion: implementing an Elliptic Curve Cryptosystem (ECC) over $GF(2^m)$ on sensor motes using a small word size is not appropriate, because XOR multiplication over $GF(2^m)$ is not efficiently supported by current low-powered microprocessors. Although there are some implementations over $GF(2^m)$ on sensor motes, their performance is not satisfactory enough for use in wireless sensor networks (WSNs). We have found that a field multiplication over $GF(2^m)$ involves a number of redundant memory accesses, and that its inefficiency originates from this problem. Moreover, the field reduction process also requires many redundant memory accesses. Therefore, we propose some techniques for reducing unnecessary memory accesses. With the proposed strategies, the running time of field multiplication and reduction over $GF(2^{163})$ can be decreased by 21.1\% and 24.7\%, respectively. These savings noticeably decrease the execution times spent in Elliptic Curve Digital Signature Algorithm (ECDSA) operations (signing and verification) by around $15\% \sim 19\%$. We present TinyECCK (Tiny Elliptic Curve Cryptosystem with Koblitz curve -- a TinyOS package supporting elliptic curve operations), which is, as far as we know, the fastest ECC implementation over $GF(2^m)$ on 8-bit sensor motes using the ATmega128L. Through comparisons with existing software implementations of ECC built in C or a hybrid of C and inline assembly on sensor motes, we show that TinyECCK outperforms them in terms of running time, code size and supported services. Furthermore, we show that a field multiplication over $GF(2^m)$ can be faster than one over $GF(p)$ on the 8-bit ATmega128L processor by comparing TinyECCK with TinyECC, a well-known ECC implementation over $GF(p)$. TinyECCK with sect163k1 can compute a scalar multiplication within 1.14 secs on a MICAz mote at the expense of 5,592 bytes of ROM and 618 bytes of RAM. Furthermore, it can generate a signature and verify it in 1.37 and 2.32 secs, respectively, with 13,748 bytes of ROM and 1,004 bytes of RAM.
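To make the arithmetic in question concrete, here is a deliberately naive sketch (in Python, not the paper's optimized C/assembly routine) of polynomial-basis multiplication over $GF(2^{163})$ with the sect163k1 reduction polynomial $x^{163}+x^7+x^6+x^3+1$; it shows the shift-and-XOR structure whose memory-access pattern TinyECCK optimizes.

    # Reduction polynomial for sect163k1 (K-163): x^163 + x^7 + x^6 + x^3 + 1.
    M = 163
    F = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1

    def gf2m_mul(a: int, b: int) -> int:
        # Carry-less multiplication: XOR in a shifted copy of a for every set bit of b.
        product = 0
        while b:
            if b & 1:
                product ^= a
            a <<= 1
            b >>= 1
        # Reduce modulo the field polynomial, clearing bits from the top down.
        for bit in range(product.bit_length() - 1, M - 1, -1):
            if (product >> bit) & 1:
                product ^= F << (bit - M)
        return product

    assert gf2m_mul(0b10, 0b10) == 0b100  # x * x = x^2

On an 8-bit microcontroller each operand spans many machine words, so every shift and XOR above turns into a sequence of loads and stores; the savings reported in the abstract come from eliminating the redundant ones.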
2008
EPRINT
Topology Knowledge Versus Fault Tolerance: The Case of Probabilistic Communication Or: How Far Must You See to Hear Reliably?
We consider the problem of probabilistic reliable communication (PRC) over synchronous networks modeled as directed graphs in the presence of a Byzantine adversary when players' knowledge of the network topology is not complete. We show that the possibility of PRC is extremely sensitive to changes in the players' knowledge of the topology. This is in complete contrast with earlier known results on the possibility of perfectly reliable communication over undirected graphs, where the case of each player knowing only its neighbours gives the same result as the case where players have complete knowledge of the network. Specifically, in either case, $(2t+1)$-vertex connectivity is necessary and sufficient, where $t$ is the number of nodes that can be corrupted by the adversary [1][2]. We use a novel model for quantifying players' knowledge of the network topology, denoted by {$\mathcal K$}. Given a directed graph $G$, influenced by a Byzantine adversary that can corrupt up to any $t$ players, we give a necessary and sufficient condition for the possibility of PRC over $G$ for any arbitrary topology knowledge {$\mathcal K$}. As a corollary, we answer the question: {\em up to what neighbourhood level $\ell$ must the players know the topology for PRC to be possible?} Here, a player is said to know up to neighbourhood level $1$ if it knows only its neighbours, level $2$ if it knows its neighbours and its neighbours' neighbours, and so on. [1] Subramanian, L., Katz, R. H., Roth, V., Shenker, S., and Stoica, I. Reliable broadcast in unknown fixed-identity networks. In Proceedings of the Twenty-Fourth Annual ACM Symposium on Principles of Distributed Computing (PODC '05), Las Vegas, NV, USA, July 2005. ACM, New York, NY, 342-351. [2] Dolev, D. The Byzantine generals strike again. Journal of Algorithms (1982), pp. 14-30.
2008
EPRINT
Towards a Theory of White-Box Security
Program hardening for secure execution in a remote untrusted environment is an important yet elusive goal of security, with numerous attempts and efforts by the research community to produce secure solutions. Obfuscation is the prevailing practical technique employed to tackle this issue. Unfortunately, no provably secure obfuscation techniques currently exist. Moreover, Barak et al. showed that not all programs can be obfuscated. We present a rigorous approach to {\em program hardening}, based on a new white-box primitive, the {\em White Box Remote Program Execution (WBRPE)}, whose security specifications include confidentiality and integrity of both the local and the remote hosts. We then show how the {\em WBRPE} can be used to address the needs of a wide range of applications, e.g. grid computing and mobile agents. Next, we construct a specific program and show that if there exists a secure {\em WBRPE} for that program, then there is a secure {\em WBRPE} for {\em any} program, reducing its security to the underlying {\em WBRPE} primitive. This reduction between two white-box primitives introduces new techniques that employ program manipulation.
2008
EPRINT
Toy Factoring by Newton's Method
A theoretical approach to integer factoring using complex analysis is to apply Newton's method to an analytic function whose zeros, within a certain strip, correspond to factors of a given integer. A few successful toy experiments of factoring small numbers with this approach, implemented in a naive manner that amounts to complexified trial division, show that, at least for small numbers, Newton's method finds the requisite zeros at least some of the time (factors of nearby numbers were also sometimes found). Nevertheless, some heuristic arguments predict that this approach will be infeasible for factoring numbers of the size currently used in cryptography.
2008
EPRINT
Treatment of the Initial Value in Time-Memory-Data Tradeoff Attacks on Stream Ciphers
Time-Memory Tradeoff (TMTO) attacks on stream ciphers are a serious security threat, and resistance to this class of attacks is an important criterion in the design of a modern stream cipher. TMTO attacks are especially effective against stream ciphers where a variant of the TMTO attack can make use of multiple data to reduce the off-line and on-line time complexities of the attack (given a fixed amount of memory). In this paper we present a new approach to TMTO attacks against stream ciphers using a publicly known initial value (IV): we suggest not treating the IV as part of the secret key material (as done in current attacks), but rather choosing some IVs in advance and applying a TMTO attack to streams produced using these IVs. We show that while the obtained tradeoff curve is identical to the curve obtained by the current approach, the new technique allows mounting the TMTO attack in a larger variety of settings. For example, if both the secret key and the IV are of length n, it is possible to mount an attack with data, time, and memory complexities of 2^{4n/5}, while in the current approach, either the time complexity or the memory complexity is not less than 2^n. We conclude that if the IV length of a stream cipher is less than 1.5 times the key length, there exists an attack on the cipher with data, time, and memory complexities less than the complexity of exhaustive key search.
2008
EPRINT
Truly Efficient 2-Round Perfectly Secure Message Transmission Scheme
In the model of perfectly secure message transmission schemes (PSMTs), there are $n$ channels between a sender and a receiver. An infinitely powerful adversary $\A$ may corrupt (observe and forge) the messages sent through $t$ out of the $n$ channels. The sender wishes to send a secret $s$ to the receiver perfectly privately and perfectly reliably without sharing any key with the receiver. In this paper, we show the first $2$-round PSMT for $n=2t+1$ such that not only is the transmission rate $O(n)$, but the computational costs of the sender and the receiver are also both polynomial in $n$. This means that we solve the open problem raised by Agarwal, Cramer and de Haan at CRYPTO 2006.
2008
EPRINT
Trusted-HB: a low-cost version of HB+ secure against Man-in-The-Middle attacks
Since the introduction at Crypto'05 by Juels and Weis of the protocol HB+, a lightweight protocol secure against active attacks but only in a detection-based model, many works have tried to enhance its security. We propose here a new approach to achieve resistance against man-in-the-middle attacks. Our requirements -- in terms of extra communication and hardware -- are surprisingly low.
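For context, a single round of the underlying HB+ protocol (as introduced by Juels and Weis; the notation here is generic) runs between a tag holding secrets $x, y \in \{0,1\}^k$ and a reader: the tag sends a random blinding vector $b$, the reader replies with a random challenge $a$, and the tag answers $z = (a \cdot x) \oplus (b \cdot y) \oplus \nu$, where $\nu$ is a noise bit equal to 1 with probability $\eta$; after many rounds the reader accepts if the fraction of answers disagreeing with $(a \cdot x) \oplus (b \cdot y)$ is close to $\eta$. A man-in-the-middle can tamper with $a$ or $z$ and learn from whether the reader still accepts, which is the gap Trusted-HB sets out to close with little extra communication and hardware.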
2008
EPRINT
Twisted Ate Pairing on Hyperelliptic Curves and Applications
In this paper we show that the twisted Ate pairing on elliptic curves can be generalized to hyperelliptic curves; we also give a series of variations of the hyperelliptic Ate and twisted Ate pairings. Using the hyperelliptic Ate pairing and twisted Ate pairing, we propose a new approach to speed up the Weil pairing computation, and obtain an interesting result: for some hyperelliptic curves with a high-degree twist, computing the Weil pairing with this approach is faster than computing the Tate pairing, the Ate pairing, and all other known pairings.
2008
EPRINT
Twisted Edwards Curves
This paper introduces ``twisted Edwards curves,'' a generalization of the recently introduced Edwards curves; shows that twisted Edwards curves include more curves over finite fields, and in particular every elliptic curve in Montgomery form; shows how to cover even more curves via isogenies; presents fast explicit formulas for twisted Edwards curves in projective and inverted coordinates; and shows that twisted Edwards curves save time for many curves that were already expressible as Edwards curves.
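Concretely (standard formulation): a twisted Edwards curve over a field $k$ of characteristic not 2, with distinct nonzero $a, d \in k$, is $E_{a,d}: a x^2 + y^2 = 1 + d x^2 y^2$, which is an Edwards curve when $a = 1$. The affine addition law (with neutral element $(0,1)$) is $(x_1, y_1) + (x_2, y_2) = ((x_1 y_2 + y_1 x_2)/(1 + d x_1 x_2 y_1 y_2),\ (y_1 y_2 - a x_1 x_2)/(1 - d x_1 x_2 y_1 y_2))$; the projective and inverted coordinates mentioned above serve to avoid these field inversions.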
2008
EPRINT
Two attacks on a sensor network key distribution scheme of Cheng and Agrawal
A sensor network key distribution scheme for hierarchical sensor networks was recently proposed by Cheng and Agrawal. A feature of their scheme is that pairwise keys exist between any pair of high-level nodes (which are called cluster heads) and between any (low-level) sensor node and the nearest cluster head. We present two attacks on their scheme. The first attack can be applied for certain parameter sets. If it is applicable, then this attack can result in the compromise of most if not all of the sensor node keys after a small number of cluster heads are compromised. The second attack can always be applied, though it is weaker.
2008
EPRINT
Two New Efficient CCA-Secure Online Ciphers: MHCBC and MCBC
Online ciphers are ciphers whose ciphertexts can be computed in real time by using a length-preserving encryption algorithm. HCBC1 and HCBC2 are two known examples of Hash Cipher Block Chaining online ciphers. The first construction is secure against chosen-plaintext adversaries (CPA-secure), whereas the latter is secure against chosen-ciphertext adversaries (CCA-secure). In this paper, we provide a simple security analysis of these online ciphers. We also propose two new, more efficient chosen-ciphertext-secure online ciphers: modified-HCBC (MHCBC) and modified-CBC (MCBC). If one uses a finite-field-multiplication-based universal hash function, the former needs one less key and one less field multiplication compared to HCBC2. MCBC does not need any universal hash function, and it needs only one blockcipher key, unlike the other three online ciphers, which require two independent keys (for the hash function and the blockcipher).
2008
EPRINT
Unbalanced Digit Sets and the Closest Choice Strategy for Minimal Weight Integer Representations
An online algorithm is presented that produces an optimal radix-2 representation of an input integer $n$ using digits from the set $D_{\ell,u}=\{a\in\Z:\ell\le a\le u\}$, where $\ell \leq 0$ and $u \geq 1$. The algorithm works by scanning the digits of the binary representation of $n$ from left-to-right (\ie from most-significant to least-significant). The output representation is optimal in the sense that, of all radix-2 representations of $n$ with digits from $D_{\ell,u}$, it has as few nonzero digits as possible (\ie it has \emph{minimal weight}). Such representations are useful in the efficient implementation of elliptic curve cryptography. The strategy the algorithm utilizes is to choose an integer of the form $d 2^i$, where $d \in D_{\ell,u}$, that is closest to $n$ with respect to a particular distance function. It is possible to choose values of $\ell$ and $u$ so that the set $D_{\ell,u}$ is unbalanced in the sense that it contains more negative digits than positive digits, or more positive digits than negative digits. Our distance function takes the possible unbalanced nature of $D_{\ell,u}$ into account.
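As a point of reference only: for the balanced digit set $D_{-1,1} = \{-1, 0, 1\}$, the classic right-to-left non-adjacent form (NAF) already yields a minimal-weight representation, as in the short Python sketch below; the algorithm of this paper differs in that it scans left-to-right and handles arbitrary, possibly unbalanced, sets $D_{\ell,u}$.

    def naf(n: int) -> list[int]:
        # Right-to-left non-adjacent form: digits in {-1, 0, 1}, no two adjacent
        # nonzero digits, and minimal weight among all radix-2 representations of n.
        digits = []
        while n > 0:
            if n & 1:
                d = 2 - (n % 4)  # pick +1 or -1 so that n - d is divisible by 4
                n -= d
            else:
                d = 0
            digits.append(d)
            n //= 2
        return digits  # least-significant digit first

    # 7 = 8 - 1: NAF weight 2, versus weight 3 for the binary representation 111.
    assert naf(7) == [-1, 0, 0, 1]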
2008
EPRINT
Unconditionally Reliable and Secure Message Transmission in Directed Networks Revisited
In this paper, we re-visit the problem of {\it unconditionally reliable message transmission} (URMT) and {\it unconditionally secure message transmission} (USMT) in a {\it directed network} under the presence of a threshold adaptive Byzantine adversary having {\it unbounded computing power}. Desmedt et al. \cite{Desmedt} have given the necessary and sufficient condition for the existence of URMT and USMT protocols in directed networks. Though their protocols are efficient, they are not communication optimal. In this paper, we prove for the first time the lower bound on the communication complexity of URMT and USMT protocols in directed networks. Moreover, we show that our bounds are tight by giving efficient communication-optimal URMT and USMT protocols, whose communication complexity matches our proven lower bounds.
2008
EPRINT
Unconditionally Reliable and Secure Message Transmission in Undirected Synchronous Networks: Possibility, Feasibility and Optimality
We study the interplay of network connectivity and the issues related to the possibility, feasibility and optimality of {\it unconditionally reliable message transmission} (URMT) and {\it unconditionally secure message transmission} (USMT) in an undirected {\it synchronous} network, under the influence of an adaptive {\it mixed} adversary ${\mathcal A}_{(t_b,t_o,t_f,t_p)}$, who has {\it unbounded computing power} and can corrupt $t_b$, $t_o$, $t_f$ and $t_p$ nodes in the network in Byzantine, {\it omission}, {\it fail-stop} and passive fashion respectively. In the URMT problem, a sender {\bf S} and a receiver {\bf R} are part of a distributed network, where {\bf S} and {\bf R} are connected by intermediate nodes, of which at most $t_b, t_o, t_f$ and $t_p$ nodes can be under the control of ${\mathcal A}_{(t_b,t_o,t_f,t_p)}$. {\bf S} wants to send a message $m$, which is a sequence of $\ell$ field elements from a finite field ${\mathbb F}$, to {\bf R}. The challenge is to design a protocol such that after interacting in phases\footnote{A phase is a send from {\bf S} to {\bf R} or vice versa.} as per the protocol, {\bf R} outputs $m' = m$ with probability at least $1 - \delta$, where $0 < \delta < \frac{1}{2}$. Moreover, this should happen irrespective of the adversary strategy of ${\mathcal A}_{(t_b,t_o,t_f,t_p)}$. The USMT problem has the additional requirement that ${\mathcal A}_{(t_b,t_o,t_f,t_p)}$ should learn nothing about $m$ in the {\it information theoretic} sense. In this paper, we answer the following in the context of URMT and USMT: (a) {\sc Possibility:} when is a protocol possible in a given network? (b) {\sc Feasibility:} once the existence of a protocol is ensured, does there exist a polynomial time protocol on the given network? (c) {\sc Optimality:} given a message of specific length, what is the minimum communication complexity (lower bound) needed by any protocol to transmit the message, and how can one design a protocol whose total communication complexity matches this lower bound? Finally, we also show that {\it allowing a negligible error probability significantly helps in the possibility, feasibility and optimality of both reliable and secure message transmission protocols.} To design our protocols, we propose several new techniques which are of independent interest.
2008
EPRINT
Unconditionally Reliable Message Transmission in Directed Hypergraphs
We study the problem of unconditionally reliable message transmission (URMT), where two non-faulty players, the sender S and the receiver R, are part of a synchronous network modeled as a directed hypergraph, a part of which may be under the influence of an adversary having unbounded computing power. S intends to transmit a message $m$ to R, such that R should correctly obtain S's message with probability at least $(1-\delta)$ for arbitrarily small $\delta > 0$. However, unlike most of the literature on this problem, we assume the adversary modeling the faults is threshold mixed, and can corrupt different sets of nodes in Byzantine, passive and fail-stop fashion simultaneously. The main contribution of this work is the complete characterization of URMT in directed hypergraphs tolerating such an adversary. Working out a direct characterization of URMT over directed hypergraphs tolerating a threshold mixed adversary is highly unintuitive. So we first propose a novel technique, which takes as input a directed hypergraph and a threshold mixed adversary on that hypergraph, and outputs a corresponding digraph along with a non-threshold mixed adversary, such that URMT over the hypergraph tolerating the threshold mixed adversary is possible iff a special type of URMT is possible over the obtained digraph tolerating the corresponding non-threshold mixed adversary. Thus the characterization of URMT over directed hypergraphs tolerating a threshold mixed adversary reduces to characterizing a special type of URMT over arbitrary digraphs tolerating a non-threshold mixed adversary. We then characterize URMT in arbitrary digraphs tolerating a non-threshold mixed adversary and modify it to obtain the characterization for the special type of URMT over digraphs tolerating a non-threshold mixed adversary. This completes the characterization of URMT over the original hypergraph. Surprisingly, our results indicate that even passive corruption, in collusion with active faults, substantially affects the reliability of URMT protocols! This is interesting because it is a general belief that passive corruption (eavesdropping) does not affect reliable communication.
2008
EPRINT
Unconditionally Secure Message Transmission in Arbitrary Directed Synchronous Networks Tolerating Generalized Mixed Adversary
In this paper, we re-visit the problem of {\it unconditionally secure message transmission} (USMT) from a sender {\bf S} to a receiver {\bf R}, who are part of a distributed synchronous network modeled as an {\it arbitrary} directed graph. Some of the intermediate nodes between {\bf S} and {\bf R} can be under the control of an adversary having {\it unbounded} computing power. Desmedt and Wang \cite{Desmedt} have given a characterization of USMT in directed networks. However, in their model, the underlying network is abstracted as directed node-disjoint paths (also called wires or channels) between {\bf S} and {\bf R}, where the intermediate nodes are oblivious, message-passing nodes that perform no other computation. In this work, we first show that the characterization of USMT given by Desmedt et al. \cite{Desmedt} does not hold for {\it arbitrary} directed networks, where the intermediate nodes perform some computation besides acting as message-forwarding nodes. We then give the {\it true} characterization of USMT in arbitrary directed networks. To the best of our knowledge, this is the first ever {\it true} characterization of USMT in arbitrary directed networks.
2008
EPRINT
Unconditionally Secure Multiparty Set Intersection Re-Visited
In this paper, we re-visit the problem of unconditionally secure multiparty set intersection in the information theoretic model. Li et al. \cite{LiSetMPCACNS07} have proposed a protocol for the $n$-party set intersection problem, which provides unconditional security when $t < \frac{n}{3}$ players are corrupted by an active adversary having {\it unbounded computing power}. Moreover, they have claimed that their protocol takes six rounds of communication and incurs a communication complexity of ${\cal O}(n^4m^2)$, where each player has a set of size $m$. However, we show that the round complexity and communication complexity of the protocol in \cite{LiSetMPCACNS07} are much higher than what is claimed in \cite{LiSetMPCACNS07}. We then propose a {\it novel} unconditionally secure protocol for the multiparty set intersection problem with $n > 3t$ players, which significantly improves the "actual" round and communication complexity (as shown in this paper) of the protocol given in \cite{LiSetMPCACNS07}. To design our protocol, we use several tools which are of independent interest.
2008
EPRINT
Understanding Phase Shifting Equivalent Keys and Exhaustive Search
Recent articles~\cite{kucuk,ckp08,isobe,cryptoeprint:2008:128} introduce the concept of phase shifting equivalent keys in stream ciphers, and exploit this concept in order to mount attacks on some specific ciphers. The idea behind phase shifting equivalent keys is that, for many ciphers, each internal state can be considered the result of an injection of a key and an initialization vector. This makes it possible to speed up the standard exhaustive search algorithm over the $2^n$ possible keys by decreasing the constant factor in front of the $2^n$ term in the time complexity of the algorithm. However, this has erroneously been stated in~\cite{isobe,cryptoeprint:2008:128} as decreasing the complexity of the algorithm below $2^n$. In this note, we show why this type of attack, using phase shifting equivalent keys to improve exhaustive key search, can never reach a time complexity below $2^n$, where $2^n$ is the size of the key space.
2008
EPRINT
Unidirectional Key Distribution Across Time and Space with Applications to RFID Security
We explore the problem of secret-key distribution in _unidirectional_ channels, those in which a sender transmits information blindly to a receiver. We consider two approaches: (1) Key sharing across _space_, i.e., across simultaneously emitted values that may follow different data paths and (2) Key sharing across _time_, i.e., in temporally staggered emissions. Our constructions are of general interest, treating, for instance, the basic problem of constructing highly compact secret shares. Our main motivating problem, however, is that of practical key management in RFID (Radio-Frequency IDentification) systems. We describe the application of our techniques to RFID-enabled supply chains and a prototype privacy-enhancing system.
2008
EPRINT
Unique Shortest Vector Problem for max norm is NP-hard
The unique Shortest Vector Problem (uSVP) in lattice theory plays a crucial role in many public-key cryptosystems. The security of those cryptosystems is based on the hardness of uSVP. However, so far there is no proof of the hardness of uSVP, even in its exact version. In this paper, we show that the exact version of uSVP for the $\ell_\infty$ norm is NP-hard. Furthermore, many other lattice problems, including the unique Subspace Avoiding Problem, the unique Closest Vector Problem and the unique Generalized Closest Vector Problem, for any $\ell_p$ norm, are also shown to be NP-hard.
2008
EPRINT
Universally Composable Adaptive Oblivious Transfer
In an oblivious transfer (OT) protocol, a Sender with messages M_1,...,M_N and a Receiver with indices s_1,...,s_k interact in such a way that at the end the Receiver obtains M_{s_1},...,M_{s_k} without learning anything about the other messages, and the Sender does not learn anything about s_1,...,s_k. In an adaptive protocol, the Receiver may obtain M_{s_{i-1}} before deciding on $s_i$. Efficient adaptive OT protocols are interesting both as a building block for secure multiparty computation and for enabling oblivious searches on medical and patent databases. Historically, adaptive OT protocols were analyzed with respect to a ``half-simulation'' definition which Naor and Pinkas showed to be flawed. In 2007, Camenisch, Neven, and shelat, and subsequent other works, demonstrated efficient adaptive protocols in the full-simulation model. These protocols, however, all use standard rewinding techniques in their proofs of security and thus are not universally composable. Recently, Peikert, Vaikuntanathan and Waters presented universally composable (UC) non-adaptive OT protocols (for the 1-out-of-2 variant). However, it is not clear how to preserve UC security while extending these protocols to the adaptive k-out-of-N setting. Further, any such attempt would seem to require O(N) computation per transfer for a database of size N. In this work, we present an efficient and UC-secure adaptive k-out-of-N OT protocol, where after an initial commitment to the database, the cost of each transfer is constant. Our construction is secure under bilinear assumptions in the standard model.
2008
EPRINT
Universally Composable Security Analysis of TLS---Secure Sessions with Handshake and Record Layer Protocols
We present a security analysis of the complete TLS protocol in the Universal Composability security framework. This analysis evaluates the composition of the key exchange functionalities realized by the TLS handshake with the message transmission of the TLS record layer to emulate secure communication sessions, and is based on the adaptation of the secure channel model of Canetti and Krawczyk to the setting where peer identities are not necessarily known prior to the protocol invocation and may remain undisclosed. Our analysis shows that TLS, including the Diffie-Hellman and key transport suites in the uni-directional and bi-directional models of authentication, securely emulates secure communication sessions.
2008
EPRINT
Universally Composable Undeniable Signature
How to define the security of undeniable signature schemes is a challenging task. This paper presents two security definitions of undeniable signature schemes which are more useful or natural than the existing definition. It then proves their equivalence. We first define the UC-security, where UC means universal composability. We next show that there exists a UC-secure undeniable signature scheme which does not satisfy the standard definition of security that has been believed to be adequate so far. More precisely, it does not satisfy the invisibility defined by \cite{DP96}. We then show a more adequate definition of invisibility which captures a wider class of (naturally secure) undeniable signature schemes. We finally prove that the UC-security against non-adaptive adversaries is equivalent to this definition of invisibility and the strong unforgeability in $\cF_{ZK}$-hybrid model, where $\cF_{ZK}$ is the ideal ZK functionality. Our result of equivalence implies that all the known proven secure undeniable signature schemes (including Chaum's scheme) are UC-secure if the confirmation/disavowal protocols are both UC zero-knowledge.
2008
EPRINT
Usable Optimistic Fair Exchange
Fairly exchanging digital content is an everyday problem. It has been shown that fair exchange cannot be done without a trusted third party (called the Arbiter). Yet, even with a trusted party, it is still non-trivial to come up with an efficient solution, especially one that can be used in a p2p file sharing system with a high volume of data exchanged. We provide an efficient optimistic fair exchange mechanism for bartering digital files, where receiving a payment in return for a file (buying) is also considered fair. The exchange is optimistic, removing the need for the Arbiter's involvement unless a dispute occurs. While previous solutions employ costly cryptographic primitives for every file or block exchanged, our protocol employs them only once per peer, therefore achieving an O(n) efficiency improvement when n blocks are exchanged between two peers. The rest of our protocol uses very efficient cryptography, making it perfectly suitable for a p2p file sharing system where tens of peers exchange thousands of blocks and do not know beforehand which ones they will end up exchanging. Thus, for the first time, a provably secure (and privacy-respecting when payments are made using e-cash) fair exchange protocol can be used in real bartering applications (e.g., BitTorrent) [14] without sacrificing performance.
2008
EPRINT
User-Sure-and-Safe Key Retrieval
In a key retrieval scheme, a human user interacts with a client computer to retrieve a key. A scheme is user-sure if any adversary without access to the user cannot distinguish the retrieved key from a random key. A scheme is user-safe if any adversary without access to the client's keys or to simultaneous user and client access cannot exploit the user to distinguish the retrieved key from a random key. A multiple-round key retrieval scheme, where the user is given informative prompts to which the user responds, is proved to be user-sure and user-safe. Remote key retrieval involves a keyless client and a remote, keyed server. User-sure and user-safe are defined similarly for remote key retrieval. A scheme is user-anonymous if the server cannot identify the user. A remote version of the multiple-round key retrieval scheme is proved to be user-sure, user-safe and user-anonymous.
2008
EPRINT
Using Commutative Encryption to Share a Secret
It is shown how to use commutative encryption to share a secret. Suppose Alice wants to share a secret with Bob such that Bob cannot decrypt the secret unless a group of trustees agree. It is assumed that Alice, Bob and the trustees communicate over insecure channels. This paper presents a scheme that uses modular exponentiation and does not require key exchange. The security of the scheme rests on the difficulty of the discrete logarithm problem.
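A minimal sketch of the kind of commutative encryption such a scheme can rely on (SRA/Pohlig-Hellman-style exponentiation; the prime and exponents below are illustrative, not the paper's parameters): encrypt as $E_k(m) = m^k \bmod p$ with $\gcd(k, p-1) = 1$, so layers added by different parties commute and each party strips its own layer using $k^{-1} \bmod (p-1)$.

    from math import gcd

    P = 2**127 - 1  # a Mersenne prime, used here only as a toy modulus

    def keygen(e: int) -> tuple[int, int]:
        # The encryption exponent must be invertible modulo p - 1.
        assert gcd(e, P - 1) == 1
        return e, pow(e, -1, P - 1)

    def enc(m: int, e: int) -> int:
        return pow(m, e, P)

    def dec(c: int, d: int) -> int:
        return pow(c, d, P)

    # Layers added by Alice and a trustee can be removed in either order.
    alice_e, alice_d = keygen(65537)
    trustee_e, trustee_d = keygen(101)
    secret = 123456789
    wrapped = enc(enc(secret, alice_e), trustee_e)
    assert dec(dec(wrapped, alice_d), trustee_d) == secret
    assert dec(dec(wrapped, trustee_d), alice_d) == secret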
2008
EPRINT
Variants of the Distinguished Point Method for Cryptanalytic Time Memory Trade-offs (Full version)
The time memory trade-off (TMTO) algorithm, first introduced by Hellman, is a method for quickly inverting a one-way function, using pre-computed tables. The distinguished point method (DP) is a technique that reduces the number of table lookups performed by Hellman's algorithm. In this paper we propose a new variant of the DP technique, named variable DP (VDP), having properties very different from DP. It has an effect on the amount of memory required to store the pre-computed tables. We also show how to combine variable chain length techniques like DP and VDP with a more recent trade-off algorithm called the rainbow table method.
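To fix ideas, the following generic sketch (ordinary DP, not the variable-DP variant introduced in the paper) walks one precomputation chain of a Hellman-style table and stops at a distinguished point, here an output whose low-order bits are all zero, so that only the chain's start and end points need to be stored.

    import hashlib

    D_BITS = 12        # a point is distinguished if its 12 low-order bits are zero
    MAX_LEN = 1 << 16  # abort chains that run too long (they may have entered a loop)

    def step(x: int) -> int:
        # One application of the one-way function being inverted; a truncated hash here.
        return int.from_bytes(hashlib.sha256(x.to_bytes(8, "big")).digest()[:8], "big")

    def is_distinguished(x: int) -> bool:
        return (x & ((1 << D_BITS) - 1)) == 0

    def build_chain(start: int):
        # Iterate until a distinguished point is reached; store only (start, end).
        x = start
        for _ in range(MAX_LEN):
            x = step(x)
            if is_distinguished(x):
                return (start, x)
        return None  # chain discarded

    print(build_chain(42))

During the online phase, table lookups are needed only when the current value is distinguished, which is the reduction in lookups that DP provides; VDP, as described above, instead affects the amount of memory required for the tables.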
2008
EPRINT
Various Security Analysis of a pfCM-MD Hash Domain Extension and Applications based on the Extension
We propose a new hash domain extension, the \textit{prefix-free Counter-Masking-MD (pfCM-MD)}. Among security notions for hash functions, we focus on the indifferentiability security notion, by which we can check whether the structure of a given hash function has any weakness. We then consider the security of HMAC, two new PRF constructions, the NIST SP 800-56A key derivation function, and the randomized hashing in NIST SP 800-106, all of them based on the pfCM-MD. In particular, due to its counter, the pfCM-MD is secure against all generic second-preimage attacks such as the Kelsey-Schneier attack \cite{KeSc05} and the attack of Elena {\em et al.} \cite{AnBoFoHoKeShZi08}. Our proof techniques and most of our notation follow those in \cite{BeDaPeAs08,Bellare06,BeCaKr96a}.
2008
EPRINT
Weaknesses in HENKOS Stream Cipher
HENKOS is a synchronous stream cipher posted to ePrint by Marius Oliver Gheorghita. In this paper we present some weaknesses in the cipher. We first present a chosen-IV attack, which is a very straightforward attack on the cipher. Second, we present a class of weak keys.
2008
EPRINT
Yet Another Secure Distance-Bounding Protocol
Distance-bounding protocols were proposed by Brands and Chaum in 1993 in order to detect \emph{relay attacks}, also known as \emph{mafia fraud}. Although the idea was introduced fifteen years ago, distance-bounding protocols have only recently attracted the attention of researchers, and several new protocols have been proposed in the last five years. In this paper, a new secure distance-bounding protocol is presented. It is self-contained and composable with other protocols, for example for authentication or key negotiation. It allows periodic execution and achieves better use of the communication channels by exchanging authenticated nonces. The proposed protocol is thereby suitable for a wider class of devices, since the resource requirements on the prover are relaxed.
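The physical principle behind every protocol in this family (a standard observation, not specific to this proposal): during the rapid-bit-exchange phase the verifier measures the round-trip time $\Delta t$ of each challenge-response pair and, with $c$ the speed of light and $t_{proc}$ an upper bound on the prover's processing delay, concludes that the prover is within distance $d \le c \cdot (\Delta t - t_{proc}) / 2$; a relaying adversary located farther away cannot return the response in time, which is how relay attacks are detected.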
2008
EPRINT
Zcipher Algorithm Specification
This paper describes Zcipher, a symmetric encryption/decryption cryptographic algorithm designed to be secure and high-performance, with a small memory footprint and a simple structure.