International Association for Cryptologic Research

CryptoDB

Papers from EPRINT 2009

Year
Venue
Title
2009
EPRINT
1024 - A High Security Software Oriented Block Cipher
A cryptographer with a weak algorithm gets feedback almost instantly from the open crypto community. But what about government cryptanalysis? Given the fact that there is a considerable amount of cryptanalysis behind closed doors, what is to be done to render COMINT deaf? The NSA, as the most closely examined SIGINT agency, has a workforce of 38,000 [2], among them several thousand cryptologists. The actual wiretapping is done by the Central Security Service with 25,000 women and men. Other industrialised states also have thousands of cryptologists on their payrolls. The block cipher 1024 is an attempt to make cryptanalysis more difficult, especially differential (DC) and linear cryptanalysis. The assumption is that the increased security will also defeat other cryptanalytic techniques not yet known to the open crypto community. 1024 has a block size of 1024 bits and a key length of 2048 bits.
2009
EPRINT
A Brief History of Provably-Secure Public-Key Encryption
Public-key encryption schemes are a useful and interesting field of cryptographic study. The ultimate goal for the cryptographer in the field of public-key encryption would be the production of a very efficient encryption scheme with a proof of security in a strong security model using a weak and reasonable computational assumption. This ultimate goal has yet to be reached. In this invited paper, we survey the major results that have been achieved in the quest to find such a scheme.
2009
EPRINT
A Fast Implementation of $\eta_T$ Pairing in Characteristic Three on Intel Core 2 Duo Processor
We present an efficient implementation of the $\eta_T$ pairing on the Intel Core 2 Duo processor. The processing speed of our implementation achieves 92 $\mu$sec over ${\mathbb F}_{3^{97}}$ and 553 $\mu$sec over ${\mathbb F}_{3^{193}}$ on a 2.6 GHz processor.
2009
EPRINT
A general framework for computational soundness proofs - or - The computational soundness of the applied pi-calculus
We provide a general framework for conducting computational soundness proofs of symbolic models. Our framework considers arbitrary sets of constructors, deduction rules, and computational implementations, and it abstracts away from many details that are not core for proving computational soundness such as message scheduling, corruption models, and even the internal structure of a protocol. We identify several properties of a so-called simulator such that the existence of a simulator with all these properties already entails computational soundness in the sense of preservation of trace properties in our framework. This simulator-based characterization allows for proving soundness results in a conceptually modular and generic way. We exemplify the usefulness of our framework by proving the first computational soundness result for the full-fledged applied $\pi$-calculus under active attacks. Concretely, we embed the applied $\pi$-calculus into our framework and give a sound implementation of public-key encryption.
2009
EPRINT
A Hardware Analysis of Twisted Edwards Curves for an Elliptic Curve Cryptosystem
This paper presents implementation results of a reconfigurable elliptic curve processor defined over prime fields $GF(p)$. We use this processor to compare a new algorithm for point addition and point doubling operations on the twisted Edwards curves, against a current standard algorithm in use, namely the Double-and-Add. Secure power analysis versions of both algorithms are also examined and compared. The algorithms are implemented on an FPGA, and the speed, area and power performance of each are then evaluated for various modes of circuit operation using parallel processing. To the authors' knowledge, this work introduces the first documented FPGA implementation for computations on twisted Edwards curves over fields $GF(p)$.
2009
EPRINT
A note on Agrawal conjecture
We prove that Lenstra's proposition suggesting the existence of many counterexamples to the Agrawal conjecture is true in a more general case. At the same time we obtain a strictly ascending chain of subgroups of the group $(\mathbb{Z}_p[X]/(C_r(X)))^*$ and state the modified conjecture that the set $\{X-1, X+2\}$ generates a big enough subgroup of this group.
2009
EPRINT
A note on the security of MST3
In this paper, we study the recently proposed encryption scheme MST3, focusing on a concrete instantiation using Suzuki 2-groups. In a passive scenario, we argue that the one-wayness of this scheme may not, as claimed, be proven without the assumption that factoring group elements with respect to random covers for a subset of the group is hard. As a result, we conclude that for the proposed Suzuki 2-group instantiation, impractical key sizes should be used in order to prevent more or less straightforward factorization attacks.
2009
EPRINT
A Provably Secure And Efficient Countermeasure Against Timing Attacks
We show that the expected number of key bits that an unknown-message attacker can extract from a deterministic side-channel is bounded from above by $|\mathcal{O}| \log_2(n+1)$, where $n$ is the number of side-channel measurements and $\mathcal{O}$ is the set of possible observations. We use this bound to derive a novel countermeasure against timing attacks, where the strength of the security guarantee can be freely traded for the resulting performance penalty. We give algorithms that efficiently and optimally adjust this trade-off for given constraints on the side-channel leakage or on the efficiency of the cryptosystem. Finally, we perform a case study that shows that applying our countermeasure leads to implementations with minor performance overhead and strong security guarantees.
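As a quick illustration of how the bound behaves, here is a minimal sketch; the observation-set sizes and measurement counts below are made-up parameters, not values from the paper.

```python
import math

def leakage_bound(num_observations: int, n: int) -> float:
    """Upper bound |O| * log2(n + 1) on the expected number of key bits
    extractable from a deterministic side channel after n measurements,
    where |O| is the number of distinguishable observations."""
    return num_observations * math.log2(n + 1)

# Hypothetical numbers: a timing channel quantized into 4 buckets leaks
# at most ~39.9 key bits after 1000 measurements; coarsening it to 2
# buckets (a stronger countermeasure) halves the bound.
print(leakage_bound(4, 1000))   # ~39.87
print(leakage_bound(2, 1000))   # ~19.93
```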
2009
EPRINT
A Single Initialization Server for Multi-Party Cryptography
We present information-theoretically secure bit commitment, zero-knowledge and multi-party computation based on the assistance of an initialization server. In the initialization phase, the players interact with the server to gather resources that are later used to perform useful protocols. This initialization phase does not depend on the input of the protocol it will later enable. Once the initialization is complete, the server’s assistance is no longer required. This paper improves on previous work as there is only one server and it does not need to be trusted. If the server is honest, the protocols are secure against any coalition of dishonest players. If all players are honest, then there is an exponentially small probability that both the initialization phase succeeds and that later the protocol fails. That is, the server cannot create a situation in the initialization phase that would lead honest players to accuse each other. The protocols are built in a modular fashion and achieve linear complexity for the players in terms of the security parameter, number of players and the size of the circuit.
2009
EPRINT
A Step Towards QC Blind Signatures
In this paper we propose a conversion from signature schemes connected to coding theory into blind signature schemes. We give formal security reductions to combinatorial problems not connected to number theory. This is the first blind signature scheme which cannot be broken by quantum computers via cryptanalyzing the underlying signature scheme employing Shor's algorithms. We thus present a step towards diversifying the computational assumptions on which blind signatures can be based. We achieve blind signatures by a different concept of blinding: instead of blinding the message, we blind the public key, such that generating a (blind) signature for the blinded key requires the interaction of the holder of the original secret key. To verify the blind signature, the connection between the original and the blinded key is proven by a static ZK proof. The major ingredient for our conversion is the PKP protocol by Shamir.
2009
EPRINT
A Very Compact "Perfectly Masked" S-Box for AES (corrected)
Implementations of the Advanced Encryption Standard (AES), including hardware applications with limited resources (e.g., smart cards), may be vulnerable to "side-channel attacks" such as differential power analysis. One countermeasure against such attacks is adding a random mask to the data; this randomizes the statistics of the calculation at the cost of computing "mask corrections." The single nonlinear step in each AES round is the "S-box" (involving a Galois inversion), which incurs the majority of the cost for mask corrections. Oswald et al. showed how the "tower field" representation allows maintaining an additive mask throughout the Galois inverse calculation. This work applies a similar masking strategy to the most compact (unmasked) S-box to date. The result is the most compact masked S-box so far, with "perfect masking" (by the definition of Blomer) giving suitable implementations immunity to first-order differential side-channel attacks.
2009
EPRINT
Adaptive Preimage Resistance and Permutation-based Hash Functions
In this paper, we introduce a new notion of security, called adaptive preimage resistance. We prove that a compression function that is collision resistant and adaptive preimage resistant can be combined with a public random function to yield a hash function that is indifferentiable from a random oracle. Specifically, we analyze adaptive preimage resistance of 2n-bit to n-bit compression functions that use three calls to n-bit public random permutations. By using such compression functions as building blocks, we obtain a method for construction of permutation-based pseudorandom oracles that is comparable to the Sponge construction [4] both in terms of security and efficiency.
2009
EPRINT
Adaptively Secure Two-Party Computation with Erasures
In the setting of multiparty computation a set of parties with private inputs wish to compute some joint function of their inputs, whilst preserving certain security properties (like privacy and correctness). An adaptively secure protocol is one in which the security properties are preserved even if an adversary can adaptively and dynamically corrupt parties during a computation. This provides a high level of security, that is arguably necessary in today's world of active computer break-ins. Until now, the work on adaptively secure multiparty computation has focused almost exclusively on the setting of an honest majority, and very few works have considered the honest minority and two-party cases. In addition, significant computational and communication costs are incurred by most protocols that achieve adaptive security. In this work, we consider the two-party setting and assume that honest parties may \emph{erase} data. We show that in this model it is possible to securely compute any two-party functionality in the presence of \emph{adaptive semi-honest adversaries}. Furthermore, our protocol remains secure under concurrent general composition (meaning that it remains secure irrespective of the other protocols running together with it). Our protocol is based on Yao's garbled-circuit construction and, importantly, is as efficient as the analogous protocol for static corruptions. We argue that the model of adaptive corruptions with erasures has been unjustifiably neglected and that it deserves much more attention.
2009
EPRINT
An efficient fuzzy extractor for limited noise
A fuzzy extractor is a security primitive that allows for reproducible extraction of an almost uniform key from a noisy non-uniform source. We analyze a fuzzy extractor scheme that uses universal hash functions for both information reconciliation and privacy amplification. This is a useful scheme when the number of error patterns likely to occur is limited, regardless of the error probabilities. We derive a sharp bound on the uniformity of the extracted key, making use of the concatenation property of universal hash functions and a recent tight formulation of the leftover hash lemma.
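For orientation, the standard (non-tight) form of the leftover hash lemma that such analyses sharpen can be stated as follows; this is the textbook bound, not the paper's tighter formulation:

$$\Delta\bigl(\mathrm{Ext}(X,S),\,U_\ell \mid S\bigr) \;\le\; \tfrac{1}{2}\sqrt{2^{\ell-H_\infty(X)}},$$

where $\mathrm{Ext}$ is built from a universal hash family with public seed $S$, $H_\infty(X)$ is the min-entropy of the noisy source after information reconciliation, and $\ell$ is the length of the extracted key.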
2009
EPRINT
Anonymity in Shared Symmetric Key Primitives
We provide a stronger definition of anonymity in the context of shared symmetric key primitives, and show that existing schemes do not provide this level of anonymity. A new scheme is presented to share symmetric key operations amongst a set of participants according to a $(t,n)$-threshold access structure. We quantify the amount of information the output of the shared operation provides about the group of participants which collaborated to produce it.
2009
EPRINT
Anonymous signature scheme
In order to hide the identity of a signer, an anonymous signature scheme is presented in this paper. In this scheme, a signer located in a specified group produces a signature on behalf of the group. The recipient can verify whether the signature is valid and comes from the specified group, while tracing the signature to its source is impossible. The proposed anonymous signature is similar to a ring signature in some respects; for example, there is no manager and no revocation mechanism against the signer's anonymity. The main difference between these two kinds of signatures is that the group in a ring signature is adaptively constructed by the signer, while the group in our scheme is fixed.
2009
EPRINT
Applying Time-Memory-Data Trade-Off to Meet-in-the-Middle Attack
In this paper, we present several new attacks on multiple encryption block ciphers based on the meet-in-the-middle attack. In the first attack (GDD-MTM), we guess a certain number of secret key bits and apply the meet-in-the-middle attack on multiple ciphertexts. The second attack (TMTO-MTM) is derived from applying the time-memory trade-off attack to the meet-in-the-middle attack on a single ciphertext. We may also use rainbow chains in the table construction to get the Rainbow-MTM attack. The fourth attack (BS-MTM) is defined by combining the time-memory-data trade-off attack proposed by Biryukov and Shamir to the meet-in-the-middle attack on multiple ciphertexts. Lastly, for the final attack (TMD-MTM), we apply the TMTO-Data curve, which demonstrates the general methodology for multiple data trade-offs, to the meet-in-the-middle attack. GDD-MTM requires no pre-processing, but the attack complexity is high while memory requirement is low. In the last four attacks, pre-processing is required but we can achieve lower (faster) online attack complexity at the expense of more memory in comparison with the GDD-MTM attack. To illustrate how the attacks may be used, we applied them in the cryptanalysis of triple DES. In particular, for the BS-MTM attack, we managed to achieve pre-computation and data complexity which are much lower while maintaining almost the same memory and online attack complexity, as compared to a time-memory-data trade-off attack by Biryukov et al. at SAC 2005. In all, our new methodologies offer viable alternatives and provide more flexibility in achieving time-memory-data trade-offs.
2009
EPRINT
Attacking Cryptographic Schemes Based on "Perturbation Polynomials"
We show attacks on several cryptographic schemes that have recently been proposed for achieving various security goals in sensor networks. Roughly speaking, these schemes all use "perturbation polynomials" to add "noise" to polynomial-based systems that offer information-theoretic security, in an attempt to increase the resilience threshold while maintaining efficiency. We show that the heuristic security arguments given for these modified schemes do not hold, and that they can be completely broken once we allow even a slight extension of the parameters beyond those achieved by the underlying information-theoretic schemes. Our attacks apply to the key predistribution scheme of Zhang et al. (MobiHoc~2007), the access-control schemes of Subramanian et al. (PerCom~2007), and the authentication schemes of Zhang et~al. (INFOCOM~2008).
2009
EPRINT
Attacks on the DECT authentication mechanisms
Digital Enhanced Cordless Telecommunications (DECT) is a standard for connecting cordless telephones to a fixed telecommunications network over a short range. The cryptographic algorithms used in DECT are not publicly available. In this paper we reveal one of the two algorithms used by DECT, the DECT Standard Authentication Algorithm (DSAA). We give a very detailed security analysis of the DSAA including some very effective attacks on the building blocks used for DSAA as well as a common implementation error that can practically lead to a total break of DECT security. We also present a low cost attack on the DECT protocol, which allows an attacker to impersonate a base station and therefore listen to and reroute all phone calls made by a handset.
2009
EPRINT
Automatic Approach of Provable Security and its Application for OAEP+
Provable security is an important criterion for analyzing the security of cryptographic protocols. However, writing and verifying proofs by hand is prone to errors. This paper introduces the game-based approach to writing security proofs and a technique for automating it. It advocates an automatic security proof approach based on process calculus, and presents the initial game and observational equivalences for OAEP+.
2009
EPRINT
Avoid Mask Re-use in Masked Galois Multipliers
This work examines a weakness in re-using masks for masked Galois inversion, specifically in the masked Galois multipliers. Here we show that the mask re-use scheme included in our previous work [1] cannot result in "perfect masking," regardless of the order in which the terms are added; explicit distributions are derived for each step. The same problem requires new masks in the subfield calculations, not included in [1]. Hence, for resistance to first-order differential attacks, the masked S-box must use distinct, independent masks for input and output bytes of the masked inverter, and new masks in the subfields, resulting in a larger size. Reference [1]: Canright, D., Batina, L.: A Very Compact "Perfectly Masked" S-Box for AES. In: ACNS 2008, LNCS 5037, Springer-Verlag (2008), 446-459.
2009
EPRINT
Cascade Encryption Revisited
The security of cascade blockcipher encryption is an important and well-studied problem in theoretical cryptography with practical implications. It is well-known that double encryption improves the security only marginally, leaving triple encryption as the shortest reasonable cascade. In a recent paper, Bellare and Rogaway showed that in the ideal cipher model, triple encryption is significantly more secure than single and double encryption, stating the security of longer cascades as an open question. In this paper, we propose a new lemma on the indistinguishability of systems extending Maurer's theory of random systems. In addition to being of independent interest, it allows us to compactly rephrase Bellare and Rogaway's proof strategy in this framework, thus making the argument more abstract and hence easy to follow. As a result, this allows us to address the security of longer cascades as well as some errors in their paper. Our result implies that for blockciphers with smaller key space than message space (e.g. DES), longer cascades improve the security of the encryption up to a certain limit. This partially answers the open question mentioned above.
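For concreteness, the cascade construction itself is just composition under independent keys; here is a minimal sketch with a toy stand-in blockcipher (our own illustration, not code from the paper, and deliberately insecure):

```python
import hashlib

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    """Toy 4-round Feistel cipher on 16-byte blocks (NOT secure;
    a stand-in for a real blockcipher such as DES or AES)."""
    L, R = block[:8], block[8:]
    for rnd in range(4):
        f = hashlib.sha256(key + bytes([rnd]) + R).digest()[:8]
        L, R = R, bytes(a ^ b for a, b in zip(L, f))
    return L + R

def cascade_encrypt(keys: list, block: bytes) -> bytes:
    """l-fold cascade: apply the cipher once per independent key."""
    for k in keys:
        block = toy_block_encrypt(k, block)
    return block

# Triple encryption: the shortest cascade that meaningfully improves security.
ct = cascade_encrypt([b"k1", b"k2", b"k3"], b"0123456789abcdef")
```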
2009
EPRINT
CCZ-equivalence and Boolean functions
We study further CCZ-equivalence of $(n,m)$-functions. We prove that for Boolean functions (that is, for $m=1$), CCZ-equivalence coincides with EA-equivalence. On the contrary, we show that for $(n,m)$- functions, CCZ-equivalence is strictly more general than EA-equivalence when $n\ge5$ and $m$ is greater or equal to the smallest positive divisor of $n$ different from 1. Our result on Boolean functions allows us to study the natural generalization of CCZ-equivalence corresponding to the CCZ-equivalence of the indicators of the graphs of the functions. We show that it coincides with CCZ-equivalence.
2009
EPRINT
Collision Attack on NaSHA-384/512
In this paper, we present a collision attack on the hash function NaSHA for the output sizes 384-bit and 512-bit. The attack is based on a weakness in the generation of the state words and on the fact that the quasigroup operation used in the compression function is determined by only part of the state words. Its complexity is about $2^{128}$ (much lower than the complexity of the corresponding birthday attack) and its success probability is more than $(1- \frac{2}{{2^{64} - 1}})^2$ ($\gg \frac{1}{2}$).
2009
EPRINT
Combining Computational and Information-Theoretic Security in Multi-Party Computation
Most protocols for multi-party computation (MPC) are secure either against information-theoretic (IT) or against computationally bounded adversaries. In this work, we bring together the best of both worlds: for any robustness parameter $\rho<\frac{n}{2}$ we obtain one MPC protocol that is simultaneously IT secure with robustness for up to $t\leq\rho$ actively corrupted parties, IT secure with fairness (no robustness) for up to $t<\frac{n}{2}$, and computationally secure with agreement on abort (no fairness) for up to $t<n-\rho$. Our construction is secure in the universal composability (UC) framework, and achieves the bounds of Ishai et al. [CRYPTO'06], Katz [STOC'07], and Cleve [STOC'86] on trade-offs between robustness and privacy, and on fairness. For example, for the special case $\rho=0$ our protocol simultaneously achieves non-robust MPC for up to $t<n$ corrupted parties in the computational setting (like Goldreich et al. [STOC'87]) while providing security with fairness in the IT setting for up to $t<\frac{n}{2}$ corrupted parties (like Rabin and Ben-Or [STOC'89], though without robustness).
2009
EPRINT
Common Modulus Attacks on Small Private Exponent RSA and Some Fast Variants (in Practice)
In this work we re-examine two common modulus attacks on RSA. First, we show that Guo's continued fraction attack works much better in practice than previously expected. Given three instances of RSA with a common modulus $N$ and private exponents each smaller than $N^{0.33}$, the attack can factor the modulus about $93\%$ of the time in practice. The success rate of the attack can be increased up to almost $100\%$ by including a relatively small exhaustive search. Next, we consider Howgrave-Graham and Seifert's lattice-based attack and show that a second necessary condition for the attack exists that limits the bounds (beyond the original bounds) once $n \geq 7$ instances of RSA are used. In particular, by construction, the attack can only succeed when the private exponents are each smaller than $N^{0.5-\epsilon}$, given sufficiently many instances, instead of the original bound of $N^{1-\epsilon}$. In addition, we also consider the effectiveness of the attacks when mounted against multi-prime RSA and Takagi's variant of RSA. For multi-prime RSA, we show three (or more) instances with a common modulus and private exponents smaller than $N^{1/3-\epsilon}$ are unsafe. For Takagi's variant, we show that three or more instances with a common modulus $N=p^rq$ are unsafe when all the private exponents are smaller than $N^{2/(3(r+1))-\epsilon}$. The results, for both variants, are obtained using Guo's method and are successful almost always with the inclusion of a small exhaustive search. When only two instances are available, Howgrave-Graham and Seifert's attack can be mounted on multi-prime RSA when the private exponents are smaller than $N^{(3+r)/7r-\epsilon}$, where there are $r$ primes in the modulus.
2009
EPRINT
Communication-Efficient Private Protocols for Longest Common Subsequence
We design communication efficient two-party and multi-party protocols for the longest common subsequence (LCS) and related problems. Our protocols achieve privacy with respect to passive adversaries, under reasonable cryptographic assumptions. We benefit from the somewhat surprising interplay of an efficient block-retrieval PIR (Gentry-Ramzan, ICALP 2005) with the classic “four Russians” algorithmic design. This result is the first improvement to the communication complexity for this application over generic results (such as Yao’s garbled circuit protocol) and, as such, is interesting as a contribution to the theory of communication efficiency for secure two-party and multiparty applications.
2009
EPRINT
Comparing Two Pairing-Based Aggregate Signature Schemes
In 2003, Boneh, Gentry, Lynn and Shacham (BGLS) devised the first provably-secure aggregate signature scheme. Their scheme uses bilinear pairings and their security proof is in the random oracle model. The first pairing-based aggregate signature scheme which has a security proof that does not make the random oracle assumption was proposed in 2006 by Lu, Ostrovsky, Sahai, Shacham and Waters (LOSSW). In this paper, we compare the security and efficiency of the BGLS and LOSSW schemes when asymmetric pairings derived from Barreto-Naehrig (BN) elliptic curves are employed.
2009
EPRINT
Comparing With RSA
A multi-set (MS) is a set where an element can occur more than once. MS hash functions (MSHFs) map MSs of arbitrary cardinality to fixed-length strings. This paper introduces a new RSA-based MSHF. The new function is efficient and produces small hashes. We prove that the proposed MSHF is collision-resistant under the assumption of unforgeability of deterministic RSA signatures. In many practical applications, programmers need to compare two (unordered) sets of integers. A trivial solution consists in sorting both sets ($\mathcal{O}(n \log n)$) and comparing them linearly. We show how MS hash functions can be turned into a linear-time, constant-space integer set equality test.
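A minimal sketch of the resulting set-equality idea, using a generic additive multiset hash over SHA-256 as a stand-in for the paper's RSA-based MSHF (the modulus choice and helper names are ours):

```python
import hashlib

P = 2**127 - 1  # prime modulus for the accumulator (illustrative choice)

def ms_hash(elements) -> int:
    """Order-independent multiset hash: sum of per-element hashes mod P.
    Runs in O(n) time and O(1) extra space, with no sorting required."""
    acc = 0
    for x in elements:
        h = hashlib.sha256(str(x).encode()).digest()
        acc = (acc + int.from_bytes(h, "big")) % P
    return acc

def sets_probably_equal(a, b) -> bool:
    """Equal multisets always collide; unequal ones collide only with
    negligible probability (heuristically)."""
    return ms_hash(a) == ms_hash(b)

assert sets_probably_equal([3, 1, 2], [2, 3, 1])
assert not sets_probably_equal([1, 2], [1, 2, 2])
```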
2009
EPRINT
Comparison-Based Key Exchange and the Security of the Numeric Comparison Mode in Bluetooth v2.1
In this paper we study key exchange protocols in a model where the key exchange takes place between devices with limited displays that can be compared by a human user. If the devices display the same value then the human user is convinced that the key exchange terminated successfully and securely, and if they do not then the user knows that it came under attack. The main result of this paper is a rigorous proof that the numeric comparison mode for device pairing in Bluetooth version 2.1 is secure, under appropriate assumptions regarding the cryptographic functions used. Our proof is in the standard model and in particular does not model any of the functions as random oracles. In order to prove our main result, we present formal definitions for key exchange in this model and show our definition to be equivalent to a simpler definition. This is a useful result of independent interest that facilitates an easier security analysis of protocols in this model.
2009
EPRINT
Computational Oblivious Transfer and Interactive Hashing
We present a simple approach for constructing oblivious transfer (OT) using a trapdoor function (TDF) and interactive hashing (IH). In a nutshell, an OT-receiver inputs a (randomly chosen) function index (encoded as a binary string) into IH. The resulting output strings are interpreted by an OT-sender and used to encrypt his private inputs. Two functions are shown to be eligible: 1) a specific candidate function: a coding-based McEliece PKC; 2) a collection of TDFs with some special properties, loosely speaking: a succinctly representable index set and a unique trapdoor for each index. The aim of this presentation is to show a proof of concept in two ways: 1) introducing an apparent connection between OT and IH; 2) emphasizing the importance of IH as a cryptographic primitive in its own right and bringing up some aspects in which further development of IH may be required.
2009
EPRINT
Computing the endomorphism ring of an ordinary elliptic curve over a finite field
We present two algorithms to compute the endomorphism ring of an ordinary elliptic curve E defined over a finite field F_q. Under suitable heuristic assumptions, both have subexponential complexity. We bound the complexity of the first algorithm in terms of log q, while our bound for the second algorithm depends primarily on log |D_E|, where D_E is the discriminant of the order isomorphic to End(E). As a byproduct, our method yields a short certificate that may be used to verify that the endomorphism ring is as claimed.
2009
EPRINT
Construction of large families of pseudorandom subsets using elliptic curves
Recently, Dartyge and S\'{a}rk\"{o}zy investigated the measures, i.e., the well distribution measure and the correlation measure of order $k$, of pseudorandomness of subsets of the set $\{1, 2,\ldots, N\}$, and they presented several constructive examples for subsets with strong pseudorandom properties when $N$ is a prime number. In this article, we present a construction of pseudorandom subsets using elliptic curves over finite fields and estimate the pseudorandom measures. Character sums play an important role in the proofs.
2009
EPRINT
Constructions of Truly Practical Secure Protocols using Standard Smartcards
In this paper we show that using standard smartcards it is possible to construct truly practical secure protocols for a variety of tasks. Our protocols achieve full \emph{simulation-based security} in the presence of \emph{malicious adversaries}, and can be run on very large inputs. We present protocols for secure set intersection, oblivious database search and more. We have also implemented our set intersection protocol in order to show that it is truly practical: on sets of 30,000 elements it takes 20 seconds for one party and 30 minutes for the other (where the latter can be parallelized to further reduce the time). This demonstrates that in settings where physical smartcards can be sent between parties (as in the case of private data mining tasks between security and governmental agencies), it is possible to use secure protocols with proven simulation-based security.
2009
EPRINT
Correctness of Li Generalization of RSA Cryptosystem
For a given $N=pq$, with $p$ and $q$ distinct odd primes, and a natural number $m$, Li introduced a public-key cryptosystem. In the case $m=1$ the system is just the famous RSA system. We answer Li's question about the correctness of the system.
2009
EPRINT
Cryptanalysis of Ring Signature and Ring Signcryption Schemes
Ring signature and ring signcryption are cryptographic primitives that allow a user to sign and signcrypt a message, respectively, without revealing their identity; i.e. the verifier or the unsigncrypter is convinced that the message is valid and authentic but is prevented from knowing the identity of the actual signer or signcrypter. In this paper we consider three schemes: the first is a ring signature scheme and the second a ring signcryption scheme; we demonstrate attacks on them to show that both schemes lack anonymity and the latter does not provide confidentiality. The third scheme we consider is a secret authenticatable anonymous signcryption scheme with identity privacy, a ring signcryption scheme that allows the actual signcrypter to reveal his identity (if required in a dispute at a later point of time). We show that this scheme does not provide indistinguishability, because the ciphertext is distinguishable during the challenge phase of the confidentiality game.
2009
EPRINT
Cube Attacks on Trivium
This paper discusses the cube attacks proposed by Dinur and Shamir as applied to Trivium. Independent verification of the equations given in Dinur and Shamir's paper was carried out. Experimentation showed that the precomputed equations were not general: they are correct when applied to the class of IVs for which they were computed, where IV bits at locations other than those corresponding to the cube are fixed at 0, but when these IV bits are fixed at other values, the relations do not hold. The probable cause for this is given, and an extra step in the method for equation generation is suggested to take care of such cases.
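To make the cube terminology concrete, here is a toy cube summation on a known polynomial (our own illustrative example; a real attack treats the cipher output as a black-box polynomial in IV and key bits):

```python
from itertools import product

def f(k0, k1, v0, v1):
    """Toy output bit as a polynomial over GF(2):
    f = v0*v1*k0 + v0*k1 + k0*k1 + v1."""
    return (v0 & v1 & k0) ^ (v0 & k1) ^ (k0 & k1) ^ v1

def cube_sum(k0, k1):
    """XOR f over all assignments of the cube variables {v0, v1};
    all other IV bits (none in this toy) stay fixed at 0. The sum
    equals the superpoly of the cube, which here is simply k0."""
    s = 0
    for v0, v1 in product((0, 1), repeat=2):
        s ^= f(k0, k1, v0, v1)
    return s

# The cube sum yields a linear equation in the key bits, for every key:
for k0, k1 in product((0, 1), repeat=2):
    assert cube_sum(k0, k1) == k0
```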
2009
EPRINT
Davies-Meyer Merkle-Damg{\aa}rd Revisited:\\Variants of Indifferentiability and Random Oracles
In this paper, we succeed in analyzing practical cryptosystems that employ the Davies-Meyer Merkle-Damg{\aa}rd hash function $\mddm^E$ with ideal cipher $E$ by using two approaches: {\it indifferentiability from variants of random oracles} and {\it indifferentiability from a random oracle $\ro$ with conditions}. We show that RSA-KEM with $\mddm^E$ is secure by using the former approach and that OAEP with $\mddm^E$ is secure by using the latter approach. The public-use random oracle ($\pubro$) model is a variant of the random oracle model (proposed by Dodis et al. and Yoneyama et al.). We also show that cryptosystems secure in the $\pubro$ model, such as FDH, Fiat-Shamir, PSS and so on, are also secure under $\mddm^E$ by using the former approach. Note that Dodis et al. failed to analyze the security of cryptosystems with $\mddm^E$ in their EUROCRYPT 2009 paper, because they started by analyzing the underlying compression function, while our first approach starts by analyzing the hash function.
2009
EPRINT
Designing an ASIP for Cryptographic Pairings over Barreto-Naehrig Curves
This paper presents a design-space exploration of an application-specific instruction-set processor (ASIP) for the computation of various cryptographic pairings over Barreto-Naehrig curves (BN curves). Cryptographic pairings are based on elliptic curves over finite fields--in the case of BN curves a field $\mathbb{F}_p$ of large prime order $p$. Efficient arithmetic in these fields is crucial for fast computation of pairings. Moreover, computation of cryptographic pairings is much more complex than elliptic-curve cryptography (ECC) in general. Therefore, we facilitate programming of the proposed ASIP by providing a C compiler. In order to speed up $\mathbb{F}_p$-arithmetic, a RISC core is extended with additional functional units. The critical path delay of these units is adjusted to the base architecture in order to maintain the operating frequency. Independently from that adjustment, these units are scalable allowing for a trade-off between execution time and area consumption. Because the resulting speedup can be limited by the memory throughput, utilization of multiple data memories is proposed. However, developing a C compiler for multiple memories is a challenging task. Therefore, we introduce an enhanced memory system enabling multiple concurrent memory accesses while remaining totally transparent to the C compiler. The proposed design needs 15.8 ms for the computation of the Optimal-Ate pairing over a 256-bit BN curve at 338 MHz implemented with a 130 nm standard cell library. The processor core consumes 97 kGates making it suitable for the use in embedded systems.
2009
EPRINT
Deterministic Polynomial-Time Equivalence of Computing the CRT-RSA Secret Keys and Factoring
Let $N = pq$ be the product of two large primes. Consider CRT-RSA with the public encryption exponent $e$ and private decryption exponents $d_p, d_q$. It is well known that given any one of $d_p$ or $d_q$ (or both) one can factorize $N$ in probabilistic poly$(\log N)$ time with success probability almost equal to 1. Though this serves all the practical purposes, from theoretical point of view, this is not a deterministic polynomial time algorithm. In this paper, we present a lattice based deterministic poly$(\log N)$ time algorithm that uses both $d_p, d_q$ (in addition to the public information $e, N$) to factorize $N$.
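The probabilistic algorithm alluded to above is folklore and easy to sketch; this is our own illustration with toy parameters, not the paper's lattice-based deterministic algorithm:

```python
import math, random

def factor_with_dp(N: int, e: int, dp: int) -> int:
    """Factor N = p*q given the public exponent e and the CRT exponent
    dp = d mod (p-1). Since e*dp = 1 (mod p-1), for any a coprime to p
    we have a^(e*dp - 1) = 1 (mod p), so the gcd below is a multiple
    of p and, with high probability over a, a proper factor of N."""
    while True:
        a = random.randrange(2, N - 1)
        g = math.gcd((pow(a, e * dp - 1, N) - 1) % N, N)
        if 1 < g < N:
            return g

# Toy parameters (far below secure sizes, purely illustrative):
p, q, e = 1009, 1013, 17
N = p * q
dp = pow(e, -1, p - 1)          # modular inverse needs Python 3.8+
assert factor_with_dp(N, e, dp) == p
```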
2009
EPRINT
Efficient Protocols for Set Intersection and Pattern Matching with Security Against Malicious and Covert Adversaries
In this paper we construct efficient secure protocols for \emph{set intersection} and \emph{pattern matching}. Our protocols for securely computing the set intersection functionality are based on secure pseudorandom function evaluations, in contrast to previous protocols that are based on polynomials. In addition to the above, we also use secure pseudorandom function evaluation in order to achieve secure pattern matching. In this case, we utilize specific properties of the Naor-Reingold pseudorandom function in order to achieve high efficiency. Our results are presented in two adversary models. Our protocol for secure pattern matching and one of our protocols for set intersection achieve security against \emph{malicious adversaries} under a relaxed definition where one corruption case is simulatable and for the other only privacy (formalized through indistinguishability) is guaranteed. We also present a protocol for set intersection that is fully simulatable in the model of covert adversaries. Loosely speaking, this means that a malicious adversary can cheat, but will then be caught with good probability.
2009
EPRINT
Encryption Schemes Secure under Selective Opening Attack
The existence of encryption schemes secure under selective opening attack (SOA) has remained open despite considerable interest and attention. We provide the first public key encryption schemes secure against sender corruptions in this setting. The underlying tool is lossy encryption. The schemes have short keys. (Public and secret keys of a fixed length suffice for encrypting an arbitrary number of messages.) The schemes are stateless and noninteractive, and security does not rely on erasures. The schemes are without random oracles, proven secure under standard assumptions (DDH, Paillier’s DCR, QR, lattices), and even efficient. We are able to meet both an indistinguishability (IND-SO-ENC) and a simulation-style, semantic security (SEM-SO-ENC) definition.
2009
EPRINT
Enhanced Privacy ID from Bilinear Pairing
Enhanced Privacy ID (EPID) is a cryptographic scheme that enables the remote authentication of a hardware device while preserving the privacy of the device. EPID can be seen as a direct anonymous attestation scheme with enhanced revocation capabilities. In EPID, a device can be revoked if the private key embedded in the hardware device has been extracted and published widely so that the revocation manager finds the corrupted private key. In addition, the revocation manager can revoke a device based on the signatures the device has signed, if the private key of the device is not known. In this paper, we introduce a new security notion of EPID including the formal definitions of anonymity and unforgeability with revocation. We also give a construction of an EPID scheme from bilinear pairing. Our EPID scheme is efficient and provably secure in the random oracle model under the strong Diffie-Hellman assumption and the decisional Diffie-Hellman assumption.
2009
EPRINT
Enhanced Target Collision Resistant Hash Functions Revisited
Enhanced Target Collision Resistance (eTCR) property for a hash function was put forth by Halevi and Krawczyk in Crypto 2006, in conjunction with the randomized hashing mode that is used to realize such a hash function family. eTCR is a strengthened variant of the well-known TCR (or UOWHF) property for a hash function family (i.e. a dedicated-key hash function). The contributions of this paper are twofold. First, we compare the new eTCR property with the well-known collision resistance (CR) property, where both properties are considered for a dedicated-key hash function. We show there is a separation between the two notions, that is, in general, eTCR property cannot be claimed to be weaker (or stronger) than CR property for any arbitrary dedicated-key hash function. Second, we consider the problem of eTCR property preserving domain extension. We study several domain extension methods for this purpose, including (Plain, Strengthened, and Prefix-free) Merkle-Damg{\aa}rd, Randomized Hashing (considered in dedicated-key hash setting), Shoup, Enveloped Shoup, XOR Linear Hash (XLH), and Linear Hash (LH) methods. Interestingly, we show that the only eTCR preserving method is a nested variant of LH which has a drawback of having high key expansion factor. Therefore, it is interesting to design a new and efficient eTCR preserving domain extension in the standard model.
2009
EPRINT
Ensuring Data Storage Security in Cloud Computing
Cloud Computing has been envisioned as the next-generation architecture of IT Enterprise. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, Cloud Computing moves the application software and databases to the large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this article, we focus on cloud data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective and flexible distributed scheme with two salient features, as opposed to its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including: data update, delete and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.
2009
EPRINT
Extensions of the Cube Attack
At Crypto 2008, Shamir introduced a new algebraic attack called the cube attack, which allows us to solve black-box polynomials if we are able to tweak the inputs by varying an initialization vector. We offer a few extensions of this attack by applying it to Boolean functions for which we can find low-degree multiples. We then extend this to vectorial Boolean functions by finding relations with low-degree polynomials.
2009
EPRINT
Fast elliptic-curve cryptography on the Cell Broadband Engine
This paper is the first to investigate the power of the Cell Broadband Engine for state-of-the-art public-key cryptography. We present a high-speed implementation of elliptic-curve Diffie-Hellman (ECDH) key exchange for this processor, which needs 777000 cycles on one Synergistic Processor Unit for a scalar multiplication on a 255-bit elliptic curve, including the costs for key verification and key compression. This cycle count is independent of inputs therefore protecting against timing attacks. This speed relies on a new representation of elements of the underlying finite field suited for the unconventional instruction set of this architecture. Furthermore we demonstrate that an implementation based on the multi-precision integer arithmetic functions provided by IBM's multi-precision math (MPM) library would take at least 9660640 cycles. Comparison with implementations of the same function for other architectures shows that the Cell Broadband Engine is competitive in terms of cost-performance ratio to other recent processors such as the Core 2 for public-key cryptography. Specifically, the state-of-the-art Galbraith-Lin-Scott ECDH software performs 27370 scalar multiplications per second using all four cores of a 2.5GHz Intel Core 2 Quad Q9300 inside a \$400 computer, while the new software reported in this paper performs 24528 scalar multiplications per second on a Playstation 3 that costs just \$279. Both of these speed reports are for high-security 256-bit elliptic-curve cryptography.
2009
EPRINT
Foundations of Non-Malleable Hash and One-Way Functions
Non-malleability is an interesting and useful property which ensures that a cryptographic protocol preserves the independence of the underlying values: given for example an encryption Enc(m) of some unknown message m, it should be hard to transform this ciphertext into some encryption Enc(m*) of a related message m*. This notion has been studied extensively for primitives like encryption, commitments and zero-knowledge. Non-malleability of one-way functions and hash functions has surfaced as a crucial property in several recent results, but it has not undergone a comprehensive treatment so far. In this paper we initiate the study of such non-malleable functions. We start with the design of an appropriate security definition. We then show that non-malleability for hash and one-way functions can be achieved, via a theoretical construction that uses perfectly one-way hash functions and simulation-sound non-interactive zero-knowledge proofs of knowledge (NIZKPoK). We also discuss the complexity of non-malleable hash and one-way functions. Specifically, we give a black-box based separation of non-malleable functions from one-way permutations (which our construction bypasses due to the 'non-black-box' NIZKPoK). We exemplify the usefulness of our definition in cryptographic applications by showing that non-malleability is necessary and sufficient to securely replace one of the two random oracles in the IND-CCA encryption scheme by Bellare and Rogaway, and to improve the security of client-server puzzles.
2009
EPRINT
Framework for Analyzing Optimistic Fair Exchange with Distributed Arbiters
Fair exchange is one of the most fundamental problems in secure distributed computation. Alice has something that Bob wants, and Bob has something that Alice wants. A fair exchange protocol would guarantee that, even if one of them maliciously deviates from the protocol, either both of them get the desired content, or neither of them do. It is known that no two-party protocol can guarantee fairness in general; therefore the presence of a trusted arbiter is necessary. In optimistic fair exchange, the arbiter only gets involved in case of faults. To reduce the trust put in an arbiter, it is natural to consider employing multiple arbiters. Avoine and Vaudenay (AV) [6] employ multiple autonomous arbiters in their optimistic fair exchange protocol which uses timeout mechanisms. They leave two open questions: (1) Can an optimistic fair exchange protocol without timeouts provide fairness when employing multiple autonomous arbiters? (2) Can any other optimistic fair exchange protocol with timeouts achieve better bounds on the number of honest arbiters required? In this paper, we answer both questions negatively. To answer these questions, we define a general class of optimistic fair exchange protocols with multiple arbiters, called “distributed arbiter fair exchange” (DAFE) protocols. Informally, in a DAFE protocol, if a participant fails to send a correctly formed message, the other party must contact some subset of the arbiters and get correctly formed responses from them. The arbiters do not talk to each other, but only to Alice and Bob. We prove that no DAFE protocol can exist. However, our impossibility results can be overcome in the timeout model (where all arbiters have access to loosely synchronized clocks) and also in case the arbiters can communicate (e.g., using secure multi-party computation with Omega(n^2) communication).
2009
EPRINT
From Dolev-Yao to Strong Adaptive Corruption: Analyzing Security in the Presence of Compromising Adversaries
We formalize a hierarchy of adversary models for security protocol analysis, ranging from a Dolev-Yao style adversary to more powerful adversaries who can reveal different parts of principals' states during protocol execution. We define our hierarchy by a modular operational semantics describing adversarial capabilities. We use this to formalize various practically relevant notions of key and state compromise. We also use our semantics as a basis to extend an existing symbolic protocol-verification tool with our adversary models. This tool is the first that supports notions such as weak perfect forward secrecy, key compromise impersonation, and adversaries capable of so-called strong corruptions and state-reveal queries. As applications, we use our model hierarchy to relate different adversarial notions, gaining new insights on their relative strengths, and we use our tool to find new attacks on protocols.
2009
EPRINT
Homomorphic Trapdoor Commitments to Group Elements
We present a homomorphic trapdoor commitment to group elements. In contrast, previous homomorphic trapdoor commitment schemes only allow the messages to be exponents. Our commitment scheme is length-reducing, we can make a short commitment to many group elements at once, and it is perfectly hiding and computationally binding. The construction is based on groups with a bilinear map and the binding property follows from the simultaneous triple pairing assumption. While the simultaneous triple pairing assumption is new, we demonstrate that it is implied by the well-known decision linear assumption.
2009
EPRINT
How to Prove the Security of Practical Cryptosystems with Merkle-Damg{\aa}rd Hashing by Adopting Indifferentiability
In this paper, we show that major cryptosystems such as FDH, OAEP, and RSA-KEM are secure under a hash function $MD^h$ with Merkle-Damg{\aa}rd (MD) construction that uses a random oracle compression function $h$. First, we propose two new ideal primitives called Traceable Random Oracle ($\mathcal{TRO}$) and Extension Attack Simulatable Random Oracle ($\mathcal{ERO}$) which are weaker than a random oracle ($\mathcal{RO}$). Second, we show that $MD^h$ is indifferentiable from $\mathcal{LRO}$, $\mathcal{TRO}$ and $\mathcal{ERO}$, where $\mathcal{LRO}$ is Leaky Random Oracle proposed by Yoneyama et al. This result means that if a cryptosystem is secure in these models, then the cryptosystem is secure under $MD^h$ following the indifferentiability theory proposed by Maurer et al. Finally, we prove that OAEP is secure in the $\mathcal{TRO}$ model and RSA-KEM is secure in the $\mathcal{ERO}$ model. Since it is also known that FDH is secure in the $\mathcal{LRO}$ model, as a result, major cryptosystems, FDH, OAEP and RSA-KEM, are secure under $MD^h$, though $MD^h$ is not indifferentiable from $\mathcal{RO}$.
2009
EPRINT
Huge 2ndpreimages and collisions of khichidi-1
Khichidi-1 is a contestant in the SHA-3 competition. In this paper we exploit a weakness in the design of the algorithm, which allows us to propose two kinds of attacks: 1) a second-preimage attack, and 2) a collision attack. Our attacks are applicable to all the versions (224, 256, 384 and 512) and are potentially strong.
2009
EPRINT
Huge Multicollisions and Multipreimages of Hash Functions BLENDER-n
In this paper we present a multicollision and multipreimage attack on the hash function Blender-n [1] for all output sizes n = 224, 256, 384 and 512. The complexity and memory requirements for finding 2^{2n} multipreimages (multicollisions) of Blender-n are roughly 10 times those of finding a collision for an n/2-bit random hash function. All previous attacks were based on the trick by Joux [2] using many messages; our attacks are based on one message with several fixed points. The state register has eight words. By properly choosing message words we force half of the register to return to its original state. Then we find a collision in the rest with complexity 2^{n/4}. The collision creates a fixed point in the sequence of states of the state register. We use 10 such fixed points. Previously known attacks [4, 5] on Blender-n have complexity at least 2^{n/2}. Our 2^{2n}-multicollision and multipreimage attacks have complexity 10*2^{n/4}.
2009
EPRINT
ID-GC: An Efficient Identity-based Group Key Management Scheme
We present an efficient identity-based key management scheme for group communications (ID-GC). ID-GC outperforms all existing group key management schemes in terms of reducing the communication overhead for group management, e.g., adding or deleting group members. In the average case, ID-GC requires O(log N) messages for removing multiple members and 1 message for adding a new group member, where N is the group size. The ID-GC scheme provides group forward/backward secrecy, and it is resilient to colluding attackers.
2009
EPRINT
Identification of Multiple Invalid Signatures in Pairing-based Batched Signatures
This paper describes new methods in pairing-based signature schemes for identifying the invalid digital signatures in a batch, after batch verification has failed. These methods efficiently identify non-trivial numbers of invalid signatures in batches of (potentially large) numbers of signatures. Our methods use "divide-and-conquer" search to identify the invalid signatures within a batch, but prune the search tree to substantially reduce the number of pairing computations required. The methods presented in this paper require computing on average $O(w)$ products of pairings to identify $w$ invalid signatures within a batch of size $N$, compared with the $O(w(\log_2(N/w)+1))$ [for $w < N/2$] that traditional divide-and-conquer methods require. Our methods avoid the problem of exponential growth in expected computational cost that affects earlier proposals which also, on average, require computing $O(w)$ products of pairings. We compare the expected performance of our batch verification methods with previously published divide-and-conquer and exponential-cost methods for Cha-Cheon identity-based signatures [6]. However, our methods also apply to a number of short signature schemes as well as to other identity-based signature schemes.
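For reference, the traditional divide-and-conquer search that the tree-pruning methods improve on can be sketched as follows; `batch_verify` is an assumed black box standing in for one batched pairing-product check, and the toy model below is ours:

```python
def find_invalid(indices, batch_verify):
    """Recursively halve a failed batch to isolate the invalid signatures.
    batch_verify(indices) -> True iff all signatures at those indices
    verify as a single batched check (e.g. one product of pairings).
    Uses O(w*(log2(N/w)+1)) batched checks for w invalid among N."""
    if batch_verify(indices):
        return []                    # this sub-batch is entirely valid
    if len(indices) == 1:
        return list(indices)         # isolated one invalid signature
    mid = len(indices) // 2
    return (find_invalid(indices[:mid], batch_verify) +
            find_invalid(indices[mid:], batch_verify))

# Toy run: model signature validity as booleans.
sigs = [True] * 6 + [False] + [True] * 9          # index 6 is invalid
verify = lambda idx: all(sigs[i] for i in idx)
assert find_invalid(list(range(len(sigs))), verify) == [6]
```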
2009
EPRINT
Image Encryption by Pixel Property Separation
Pixels in an image are essentially constituted of two properties, position and colour. Pixel Property Separation, a radically different approach to symmetric-key image encryption, separates these properties to disturb the semantics of the image. The scheme operates in two orthogonal stages, each requiring an encryption key. The first stage creates the Position Vector, an ordered set of Pixel Position Information controlled by a set of plaintext-dependent Random Permutations. A bitmap flagging the presence of all the 24-bit colours is generated. The second stage randomly positions the image width and height within the ciphertext and finally applies a byte transposition on the ciphertext bytes. The complete set of image properties, including width, height and pixel position-colour correlation, is obscured, resulting in a practically unbreakable encryption. The orthogonality of the stages acts as an anti-catalyst for cryptanalysis: the information retrieved from compromising one stage is totally independent and cannot be used to derive the other. Classical cryptanalytic techniques demand a huge number of attempts, most failing to generate valid encryption information. Plaintext attacks are rendered ineffective due to the dependency of the Random Permutations on the plaintext. Linear and differential cryptanalysis are highly inefficient due to high diffusion and confusion. Although the paper describes the algorithm as applied to images, its applicability is not limited to images only. The cryptographic strength and the efficiency of operation are independent of the nature of the plaintext.
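As a rough analogue of the first stage (position and colour separated, positions scrambled under a key), here is a toy sketch; this is our own illustration, with a key-seeded PRNG standing in for the paper's plaintext-dependent Random Permutations:

```python
import random

def separate_and_permute(pixels, key: bytes):
    """Toy stage-1 analogue: record each pixel's colour, then shuffle
    the position information under a key-seeded permutation."""
    rng = random.Random(key)                  # keyed PRNG (illustrative)
    positions = list(range(len(pixels)))
    rng.shuffle(positions)                    # keyed permutation of positions
    colours = [pixels[i] for i in positions]  # colour stream, order hidden
    return positions, colours

def invert(positions, colours):
    """Reassemble the image from the two separated property streams."""
    out = [None] * len(colours)
    for pos, col in zip(positions, colours):
        out[pos] = col
    return out

img = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (10, 20, 30)]
pos, col = separate_and_permute(img, b"secret key")
assert invert(pos, col) == img
```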
2009
EPRINT
Implementing cryptographic pairings: a magma tutorial
In this paper we show an efficient implementation of the Tate, ate, and R-ate pairings in Magma. This is demonstrated using the KSS curves with embedding degree $k=18$.
2009
EPRINT
Impossible Differential Cryptanalysis of Pelican, MT-MAC-AES and PC-MAC-AES
In this paper, the impossible differential cryptanalysis is extended to MAC algorithms \textsc{Pelican}, MT-MAC and PC-MAC based on AES and 4-round AES. First, we collect message pairs that produce the inner near-collision with some specific differences by the birthday attack. Then the impossible differential attack on 4-round AES is implemented using a 3-round impossible differential property. For \textsc{Pelican}, our attack can recover the internal state, which is an equivalent subkey. For MT-MAC-AES, the attack turns out to be a subkey recovery attack directly. The data complexity of the two attacks is $2^{85.5}$ chosen messages, and the time complexity is about $2^{85.5}$ queries. For PC-MAC-AES, we can recover the 256-bit key with $2^{85.5}$ chosen messages and $2^{128}$ queries.
2009
EPRINT
Key Insulation and Intrusion Resilience Over a Public Channel
Key insulation (KI) and Intrusion resilience (IR) are methods to protect a user's key against exposure by utilizing periodic communications with an auxiliary helper. But existing work assumes a secure channel between user and helper. If we want to realize KI or IR in practice we must realize this secure channel. This paper looks at the question of how to do this when the communication is over what we are more likely to have in practice, namely a public channel such as the Internet or a wireless network. We explain why this problem is not trivial, introduce models and definitions that capture the desired security in a public channel setting, and provide a complete (and surprising) answer to the question of when KI and IR are possible over a public channel. The information we provide is important to guide practitioners with regard to the usage of KI and IR and also to guide future research in this area.
2009
EPRINT
Key Predistribution Techniques for Grid-Based Wireless Sensor Networks
We consider symmetric key predistribution in grid-based wireless sensor networks. Networks consisting of wireless sensor nodes arranged in a grid pattern have many useful applications, including environmental monitoring and agribusiness. The structured physical distribution of nodes in such networks facilitates efficient distribution of keys to the nodes prior to deployment. It has been shown that combinatorial objects known as distinct-difference configurations (DDCs) can be used to construct effective key predistribution schemes (KPSs) for grid-based networks. In this paper we observe that the regular topology of a grid-based network enables an efficient trade-off between the connectivity, resilience and storage requirements of a KPS, and we discuss the balancing of these properties to suit application requirements. We then show how recent results on the construction of DDCs can be used to produce KPSs that achieve the desired balance, and we provide explicit algorithms for the instantiation of these schemes.
2009
EPRINT
Key-Exposure Free Chameleon Hashing and Signatures Based on Discrete Logarithm Systems
Chameleon signatures are based on the well-established hash-and-sign paradigm, where a \emph{chameleon hash function} is used to compute the cryptographic message digest. Chameleon signatures simultaneously provide the properties of non-repudiation and non-transferability for the signed message. However, the initial constructions of chameleon signatures suffer from the problem of key exposure: a signature forgery results in the signer recovering the recipient's trapdoor information, i.e., the private key. This creates a strong disincentive for the recipient to forge signatures, partially undermining the concept of non-transferability. Recently, some specific constructions of key-exposure free chameleon hashing have been presented, based on RSA or pairings, using the idea of ``Customized Identities''. In this paper, we propose the first key-exposure free chameleon hash scheme based on discrete logarithm systems, without using gap Diffie-Hellman groups. Moreover, one distinguished advantage of the resulting chameleon signature scheme is that the property of ``message hiding'' or ``message recovery'' can be chosen freely by the signer. Another main contribution of this paper is the first identity-based chameleon hash scheme without key exposure, which gives a positive answer to the open problem introduced by Ateniese and de Medeiros in 2004.
2009
EPRINT
Knapsack Cryptosystem on Elliptic Curves
The LLL algorithm is a powerful tool for breaking additive knapsack cryptosystems. However, it does not apply to addition in the group of rational points of an elliptic curve over a finite field. We therefore consider a knapsack cryptosystem constructed on elliptic curves. By using pairings for decryption, we show that the computational complexity of decryption can be made polynomial time by building the decryption function from the pairing values.
2009
EPRINT
Low Complexity Cubing and Cube Root Computation over $\F_{3^m}$ in Standard Basis
We present low complexity formulae for the computation of cubing and cube root over $\F_{3^m}$ constructed using special classes of trinomials, tetranomials and pentanomials. We show that for all those special classes of polynomials, the cube root operation has the same area and time complexity as field cubing when implemented on hardware or software platforms.
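For intuition, cubing in characteristic 3 is a linear operation: since $(a+b)^3=a^3+b^3$ in characteristic 3 and $c^3=c$ for $c\in\F_3$, cubing just moves coefficient $i$ to position $3i$ before reduction modulo the defining polynomial. A minimal sketch in a polynomial basis follows; the trinomial below is an illustrative choice (its irreducibility is not checked here), not one of the paper's special classes:

```python
# Field cubing over F_{3^m} in standard (polynomial) basis.
def poly_mod(coeffs, f):
    """Reduce a coefficient list (index = degree) modulo monic f over F_3."""
    coeffs = coeffs[:]
    m = len(f) - 1
    for i in range(len(coeffs) - 1, m - 1, -1):
        c = coeffs[i] % 3
        if c:
            for j in range(m + 1):
                coeffs[i - m + j] = (coeffs[i - m + j] - c * f[j]) % 3
    return [c % 3 for c in coeffs[:m]]

def cube(a, f):
    """Cube field element a (coefficient list over F_3) modulo f."""
    spread = [0] * (3 * len(a))
    for i, c in enumerate(a):
        spread[3 * i] = c % 3          # a_i x^i  ->  a_i x^{3i}  (a_i^3 = a_i)
    return poly_mod(spread, f)

f = [1, 0, 2, 0, 0, 0, 0, 1]   # f(x) = x^7 + 2x^2 + 1 over F_3, illustrative
print(cube([1, 2, 0, 1, 0, 0, 0], f))
```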
2009
EPRINT
Multi-authority attribute based encryption with honest-but-curious central authority
An attribute based encryption scheme capable of handling multiple authorities was recently proposed by Chase. The scheme is built upon a single-authority attribute based encryption scheme presented earlier by Sahai and Waters. Chase’s construction uses a trusted central authority that is inherently capable of decrypting arbitrary ciphertexts created within the system. We present a multi-authority attribute based encryption scheme in which only the set of recipients defined by the encrypting party can decrypt a corresponding ciphertext. The central authority is viewed as “honest-but-curious”: on the one hand it honestly follows the protocol, and on the other hand it is curious to decrypt arbitrary ciphertexts thus violating the intent of the encrypting party. The proposed scheme, which like its predecessors relies on the Bilinear Diffie-Hellman assumption, has a complexity comparable to that of Chase’s scheme. We prove that our scheme is secure in the selective ID model and can tolerate an honest-but-curious central authority.
2009
EPRINT
NESHA-256, NEw 256-bit Secure Hash Algorithm
In this paper, we introduce a new dedicated 256-bit hash function: NESHA-256. The recent contest for hash functions held by NIST motivated us to design a new hash function with a parallel structure. The advantages of parallel structures, together with ideas from the design of block-cipher-based hash functions, strengthen our proposed hash function both in security and in efficiency. NESHA-256 is designed not only to have higher security but also to be faster than SHA-256: the performance of NESHA-256 is at least 38% better than that of SHA-256 in software. We give security proofs supporting our design against existing known cryptographic attacks on hash functions.
2009
EPRINT
New commutative semifields defined by PN multinomials
We introduce infinite families of perfect nonlinear Dembowski-Ostrom multinomials over $F_{p^{2k}}$ where $p$ is any odd prime. We prove that for $k$ odd and $p\ne3$ these PN functions define new commutative semifields (in part by studying the nuclei of these semifields). This implies that these functions are CCZ-inequivalent to all previously known PN mappings.
2009
EPRINT
Nofish - A new stream cipher
The proposed algorithm is a synchronous stream cipher, more precisely a binary additive stream cipher, because it uses the XOR function to encrypt the plaintext. The design is based on the HENKOS stream cipher (http://eprint.iacr.org/2004/080.pdf): the functions used in the internal state are kept, while the initialization and key-mixing parts are modified to address its revealed weaknesses. This stream cipher uses a 64-byte (512-bit) secret key and no initialization vector. Nofish is free to use for any non-commercial purpose, and the reference source code can be found in the appendix.
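For readers unfamiliar with the term, a binary additive stream cipher has the shape sketched below; this is a generic illustration with a stand-in keystream generator (SHA-256 in counter mode), not HENKOS or Nofish itself:

```python
# Generic binary additive stream cipher: ciphertext = plaintext XOR keystream.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = bytes(64)                               # 64-byte (512-bit) key, as in Nofish
ct = xor_crypt(key, b"attack at dawn")
assert xor_crypt(key, ct) == b"attack at dawn"  # XOR makes decryption identical
```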
2009
EPRINT
On a Conditional Collision Attack on NaSHA-512
A collision attack on NaSHA-512 was proposed by L. Ji et al. The claimed complexity of the attack is $2^{192}$. The proposed attack is realized by using a suitable differential pattern. In this note we show that the correct result that can be inferred from their differential pattern is in fact a conditional one. It can be stated correctly as follows: a collision attack on NaSHA-512 of complexity $k=1,2,\dots,2^{320}$ can be performed with an unknown probability of success $p_k$, where $0\le p_1\le p_2\le \dots \le p_{2^{320}}\le 1$. Consequently, the attack proposed by L. Ji et al. can be considered only as an indication of how a possible collision attack on NaSHA-512 might be realized. The birthday attack remains the best possible attack on NaSHA-512.
2009
EPRINT
On Algebraic Relations of Serpent S-Boxes
Serpent is a 128-bit block cipher designed by Ross Anderson, Eli Biham and Lars Knudsen as a candidate for the Advanced Encryption Standard (AES). It was a finalist in the AES competition: the winner, Rijndael, received 86 votes at the last AES conference while Serpent received 59 votes [1]. The designers of Serpent claim that Serpent is more secure than Rijndael. In this paper we observe that the nonlinear order of the output bits of the Serpent S-boxes is not 3 for all output bits, as claimed by the designers.
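The nonlinear order (algebraic degree) of each output bit can be checked mechanically from the algebraic normal form, obtained via the Möbius transform of the bit's truth table. A minimal sketch follows; the table is Serpent's S0 as commonly published (substitute any 4x4 S-box if in doubt):

```python
# Algebraic degree of S-box output bits via the Moebius (ANF) transform.
S0 = [3, 8, 15, 1, 10, 6, 5, 11, 14, 13, 4, 2, 7, 0, 9, 12]

def anf(truth_table):
    """Moebius transform over GF(2): truth table -> ANF coefficients."""
    a = truth_table[:]
    step = 1
    while step < len(a):
        for i in range(len(a)):
            if i & step:
                a[i] ^= a[i ^ step]
        step <<= 1
    return a

for bit in range(4):
    tt = [(S0[x] >> bit) & 1 for x in range(16)]
    coeffs = anf(tt)
    degree = max(bin(m).count("1") for m in range(16) if coeffs[m])
    print(f"output bit {bit}: algebraic degree {degree}")
```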
2009
EPRINT
On Approximating Addition by Exclusive OR
Let $X^{(1)},X^{(2)},\ldots,X^{(n)}$ be independent and uniformly distributed over the non-negative integers $\{0,1,\ldots,2^i-1\}$; $S^{(n)}=X^{(1)}+X^{(2)}+\cdots+X^{(n)}$ and $L^{(n)}=X^{(1)}\oplus X^{(2)}\oplus \cdots\oplus X^{(n)}$. Denote the $i$-th bits of $S^{(n)}$ and $L^{(n)}$ by $S^{(n)}_i$ and $L^{(n)}_i$ respectively. We show that as $i\rightarrow \infty$, $\Pr[S^{(n)}_i=L^{(n)}_i]\rightarrow \gamma^{(n)}= \frac{1}{2}+\frac{2^{n+1}(2^{n+1}-1)}{2(n+1)}\times\frac{b_{n+1}}{n!}$, where $b_n$ is the $n$-th Bernoulli number. As a consequence, $\gamma^{(2r)}=1/2$ for every $r$; and we show that $\gamma^{(2r+1)}\rightarrow 1/2$ as $r\rightarrow \infty$. For small values of $r$, $\gamma^{(2r+1)}$ is significantly different from $1/2$; for example $\gamma^{(3)}=1/3$ and $\gamma^{(5)}=17/30$. The behaviour of $\gamma^{(n)}$ for even and odd values of $n$ was earlier shown by Staffelbach and Meier without actually obtaining the formula mentioned above. For every fixed $n\geq 2$, we give a simple method for computing $\Pr[S^{(n)}_i=L^{(n)}_i]$ for each $i\geq 0$. The expression involving Bernoulli numbers is arrived at via the Eulerian number triangle which in turn arises in relation to the stationary distribution of a Markov chain formed by the carry values.
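The limiting values are easy to check empirically. A minimal Monte Carlo sketch (bit index and sample counts are illustrative): draw $n$ summands uniform in $[0,2^i)$ and compare bit $i$ of the integer sum with bit $i$ of the XOR, which is always 0 since every summand is below $2^i$:

```python
import random
from functools import reduce
from operator import xor

def gamma_estimate(n, i=20, trials=200_000):
    agree = 0
    for _ in range(trials):
        xs = [random.randrange(2 ** i) for _ in range(n)]
        s_bit = (sum(xs) >> i) & 1          # i-th bit of the integer sum
        l_bit = (reduce(xor, xs) >> i) & 1  # i-th bit of the XOR (0 here)
        agree += (s_bit == l_bit)
    return agree / trials

print(gamma_estimate(2))   # ~ 0.5
print(gamma_estimate(3))   # ~ 1/3
print(gamma_estimate(5))   # ~ 17/30 = 0.5667
```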
2009
EPRINT
On CCZ-equivalence and its use in secondary constructions of bent functions
We prove that, for bent vectorial functions, CCZ-equivalence coincides with EA-equivalence. However, we show that CCZ-equivalence can be used for constructing bent functions which are new up to CCZ-equivalence. Using this approach we construct classes of nonquadratic bent Boolean and bent vectorial functions.
2009
EPRINT
On fractional correlation immunity of majority functions
Correlation immunity is an important cryptographic measure of a Boolean function's resistance to the correlation attack. This paper generalizes the concept of correlation immunity to a fractional value between 0 and 1, called the fractional correlation immunity; a correlation immune function is the extreme case, with fractional correlation immunity 1. However, even when a function is not correlation immune in the traditional sense, it may have a nonzero fractional correlation immunity, which also indicates its resistance against the correlation attack. This paper first shows how this generalized concept of fractional correlation immunity is a reasonable measure of resistance against the correlation attack, then studies the fractional correlation immunity of a special class of Boolean functions, the majority functions, of which the subset of symmetric ones has been proved to have highest algebraic immunity. We show that none of the majority functions, symmetric or non-symmetric, is correlation immune; however, their fractional correlation immunity approaches 1 as the number of variables grows. This means that this class of functions also has good resistance against the correlation attack, although the functions are not correlation immune in the traditional sense.
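The non-immunity claim is easy to verify with the Walsh transform: first-order correlation immunity requires the Walsh transform to vanish on every input mask of Hamming weight 1, and for the majority function it does not. A small sketch for odd $n$:

```python
# Walsh-spectrum check that majority is not first-order correlation immune.
from itertools import product

def majority(bits):
    return int(sum(bits) > len(bits) // 2)

def walsh(f, n, a):
    total = 0
    for x in product((0, 1), repeat=n):
        dot = sum(ai & xi for ai, xi in zip(a, x)) & 1
        total += (-1) ** (f(x) ^ dot)
    return total

n = 5
for i in range(n):
    mask = tuple(1 if j == i else 0 for j in range(n))
    print(mask, walsh(majority, n, mask))  # nonzero => not correlation immune
```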
2009
EPRINT
On Generalization of Cheon's Algorithm
We consider a generalization of Cheon's algorithm for the strong Diffie-Hellman problem. More specifically, we consider the circumstance in which p^k-1 has a small divisor for k>=3, where p is the order of the group on which the strong Diffie-Hellman problem is considered. It appears that the algorithm is effective only for k=1, 2, that is, for the original Cheon's algorithm.
2009
EPRINT
On Second-Order Fault Analysis Resistance for CRT-RSA Implementations
Since their publication in 1996, fault attacks have been widely studied from both theoretical and practical points of view, and most cryptographic systems have been shown vulnerable to this kind of attack. Until recently, most theoretical fault attacks and countermeasures used a fault model which assumes that the attacker is able to disturb the execution of a cryptographic algorithm only once. However, this approach seems too restrictive since the publication in 2007 of a successful experimental attack based on the injection of two faults, namely a second-order fault attack. Amongst the few papers dealing with second-order fault analysis, three countermeasures were published at WISTP'07 and FDTC'07 to protect the RSA cryptosystem in CRT mode. In this paper, we analyse the security of these countermeasures with respect to the second-order fault model considered by their authors. We show that these countermeasures are not intrinsically resistant, and we propose a new method allowing us to implement a CRT-RSA that resists this kind of second-order fault attack.
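For context, the countermeasures in question defend against variants of the classic (first-order) Bellcore attack on unprotected CRT-RSA, where a single faulty half-exponentiation lets a gcd reveal a prime factor. A sketch with toy parameters:

```python
# Classic Bellcore fault attack on unprotected CRT-RSA (toy parameters).
from math import gcd

p, q, e = 10007, 10009, 65537
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))
dp, dq = d % (p - 1), d % (q - 1)
q_inv = pow(q, -1, p)

def crt_sign(m, fault=False):
    sp = pow(m, dp, p)
    if fault:
        sp ^= 1                        # flip one bit: a faulty s_p
    sq = pow(m, dq, q)
    h = (q_inv * (sp - sq)) % p        # Garner recombination
    return sq + q * h

m = 1234
s_ok, s_bad = crt_sign(m), crt_sign(m, fault=True)
assert pow(s_ok, e, N) == m
print(gcd(pow(s_bad, e, N) - m, N))    # prints q: the factorization leaks
```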
2009
EPRINT
On Stateless Schemes for Message Authentication Using Pseudorandom Functions
We consider the construction and analysis of pseudorandom functions (PRF) for message authentication. Earlier work due to Bernstein and Vaudenay shows how to reduce the analysis of PRFs to some probability calculations. We revisit this result and use it to prove some general results on constructions which use a PRF with ``small'' domain to build a PRF with ``large'' domain. These results are then used to analyse several existing and new constructions. Important among them is a simplified proof of a bound on the PRF-property of the cipher block chaining (CBC) mode of operation of a block cipher for message authentication code (MAC). Several existing variants of CBC-MAC are analysed using our framework and new schemes are described. One of the new schemes improves upon the NIST standard CMAC scheme by reducing the number of block cipher invocations by one for messages longer than $n$ bits. Next, we consider parallelizable constructions. An improved version of the well known PMAC scheme is described; the improvement consists of removing the requirement of a discrete log computation in the design stage of PMAC. An earlier parallel construction called the protected counter sum (PCS) had been proposed by Bernstein. PCS uses a keyed compressing function rather than a block cipher. We describe a variant of PMAC which works with a keyed compressing function and, compared to PCS, requires fewer invocations. All our constructions are in the stateless setting, i.e., a setting where the sender and the receiver do not share any state (apart from the common secret key). One aspect of our work is the simple and direct approach to the analysis of PRFs. In particular, we avoid the extensive and heavy machinery of the game-playing technique which is used in most papers on this topic.
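The CBC mode of operation for message authentication referred to above has the following shape; this sketch replaces the block cipher with a stand-in PRF built from HMAC so it is self-contained, and handles fixed-length messages only (plain CBC-MAC is insecure for variable-length messages):

```python
# CBC-MAC skeleton over an abstract PRF with n-bit (here 128-bit) outputs.
import hmac, hashlib

BLOCK = 16  # n = 128 bits

def prf(key: bytes, block: bytes) -> bytes:
    return hmac.new(key, block, hashlib.sha256).digest()[:BLOCK]

def cbc_mac(key: bytes, msg: bytes) -> bytes:
    assert len(msg) % BLOCK == 0          # fixed-length setting only
    state = bytes(BLOCK)
    for i in range(0, len(msg), BLOCK):
        block = bytes(a ^ b for a, b in zip(state, msg[i:i + BLOCK]))
        state = prf(key, block)           # chain: state_{i} = F_K(state_{i-1} XOR m_i)
    return state

print(cbc_mac(b"k" * 16, b"0123456789abcdef" * 4).hex())
```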
2009
EPRINT
On the Data Complexity of Statistical Attacks Against Block Ciphers (full version)
Many attacks on iterated block ciphers rely on statistical considerations, using plaintext/ciphertext pairs to distinguish some part of the cipher from a random permutation. We provide here a simple formula for estimating the number of plaintext/ciphertext pairs needed for such distinguishers, which applies to many different scenarios (linear cryptanalysis, differential-linear cryptanalysis, differential/truncated differential/impossible differential cryptanalysis). The asymptotic data complexities of all these attacks are then derived. Moreover, we give an efficient algorithm for computing the data complexity accurately.
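For orientation only (this is the classical Matsui-style rule of thumb that such analyses refine and unify, not the paper's formula): a linear approximation holding with probability $\frac{1}{2}+\varepsilon$ requires roughly $N \approx c/\varepsilon^{2}$ plaintext/ciphertext pairs to detect, where the small constant $c$ depends on the desired success probability; the paper's contribution is a single estimate of this kind covering the differential-style scenarios as well.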
2009
EPRINT
On the impossibility of graph secret sharing
A {\it perfect secret sharing scheme} based on a graph $G$ is a randomized distribution of a secret among the vertices of the graph so that the secret can be recovered from the information assigned to vertices at the endpoints of any edge, while the total information assigned to an independent set of vertices is independent (in the statistical sense) of the secret itself. The efficiency of a scheme is measured by the amount of information the most heavily loaded vertex receives divided by the amount of information in the secret itself. The (worst case) {\it information ratio} of $G$ is the infimum of this number. We calculate the best lower bound on the information ratio that the celebrated entropy method can give for an infinite family of graphs. We argue that all existing constructions for secret sharing schemes are special cases of the generalized vector space construction. We give direct constructions of this type for the first two members of the family, and show that for the other members no construction exists which would match the bound yielded by the entropy method.
2009
EPRINT
On the Lower Bounds of the Second Order Nonlinearity of some Boolean Functions
The $r$-th order nonlinearity of a Boolean function is an important cryptographic criterion in analyzing the security of stream as well as block ciphers. It is also important in coding theory, as it is related to the covering radius of the Reed-Muller code $\mathcal{R}(r, n)$. In this paper we deduce lower bounds on the second order nonlinearity of two classes of Boolean functions of the form \begin{enumerate} \item $f_{\lambda}(x) = Tr_1^n(\lambda x^{d})$ with $d=2^{2r}+2^{r}+1$ and $\lambda \in \mathbb{F}_{2^{n}}$ where $n = 6r$. \item $f(x,y)=Tr_1^t(xy^{2^{i}+1})$ where $x,y \in \mathbb{F}_{2^{t}}, n = 2t, n \ge 6$ and $i$ is an integer such that $1\le i < t$, $\gcd(2^t-1, 2^i+1) = 1$. \end{enumerate} For some $\lambda$, the first class gives bent functions, whereas Boolean functions of the second class are all bent, i.e., they achieve optimum first order nonlinearity.
2009
EPRINT
On the Portability of Generalized Schnorr Proofs
The notion of Zero Knowledge Proofs (of knowledge) [ZKP] is central to cryptography; it provides a set of security properties that proved indispensable in concrete protocol design. These properties are defined for any given input and also for any auxiliary verifier private state, as they are aimed at any use of the protocol as a subroutine in a bigger application. Many times, however, moving the theoretical notion to practical designs has been quite problematic. This is due to the fact that the most efficient protocols fail to provide the above ZKP properties {\em for all} possible inputs and verifier states. This situation has created various problems to protocol designers who have often either introduced imperfect protocols with mistakes or with lack of security arguments, or they have been forced to use much less efficient protocols in order to achieve the required properties. In this work we address this issue by introducing the notion of ``protocol portability,'' a property that identifies input and verifier state distributions under which a protocol becomes a ZKP when called as a subroutine in a sequential execution of a larger application. We then concentrate on the very efficient and heavily employed ``Generalized Schnorr Proofs'' (GSP) and identify the portability of such protocols. We also point to previous protocol weaknesses and errors that have been made in numerous applications throughout the years, due to employment of GSP instances while lacking the notion of portability (primarily in the case of unknown order groups). This demonstrates that cryptographic application designers who care about efficiency need to consider our notion carefully. We provide a compact specification language for GSP protocols that protocol designers can employ. Our specification language is consistent with the ad-hoc notation that is currently widely used and it offers automatic derivation of the proof protocol while dictating its portability (i.e., the proper initial state and inputs) and its security guarantees. Thus, our language specifications can be used modularly in designs and proofs. This assures that the protocol implementation can indeed be used as a subroutine that is ZKP in its context. Finally, as a second alternative to designers wishing to use GSPs, we present a modification of GSP protocols that is unconditionally portable (i.e., ZKP) and is still quite efficient. Our constructions are the first such protocols proven secure in the standard model (while the previously known efficient constructions relied on the Random Oracle model).
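The basic building block that Generalized Schnorr Proofs extend is the classic Schnorr proof of knowledge of a discrete logarithm, sketched below with toy (insecure) parameters; in practice one works in a standardized prime-order group:

```python
# Interactive Schnorr proof of knowledge of x such that y = g^x mod p.
import secrets

p = 2 ** 127 - 1            # toy Mersenne-prime modulus, illustrative only
q = p - 1                   # exponents taken modulo the group order
g = 3

x = secrets.randbelow(q)    # prover's secret
y = pow(g, x, p)            # public value

r = secrets.randbelow(q)
t = pow(g, r, p)            # 1. prover commits
c = secrets.randbelow(q)    # 2. verifier challenges
s = (r + c * x) % q         # 3. prover responds

# Verifier accepts iff g^s == t * y^c (mod p)
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("verified")
```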
2009
EPRINT
On the Security of Iterated Hashing based on Forgery-resistant Compression Functions
In this paper we re-examine the security notions suggested for hash functions, with an emphasis on the delicate notion of second preimage resistance. We start by showing that, in the random oracle model, both Merkle-Damgaard and HAIFA achieve second preimage resistance beyond the birthday bound, and actually up to the level of known generic attacks, hence demonstrating the optimality of HAIFA in this respect. We then try to distill a more elementary requirement out of the compression function to get some insight on the properties it should have to guarantee the second preimage resistance of its iteration. We show that if the (keyed) compression function is a secure FIL-MAC then the Merkle-Damgaard mode of iteration (or HAIFA) still maintains the same level of second preimage resistance. We conclude by showing that this ``new'' assumption (or security notion) implies the recently introduced Preimage-Awareness while ensuring all other classical security notions for hash functions.
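The Merkle-Damgaard mode of iteration under discussion has the following standard shape; the compression function below is a stand-in built from SHA-256, and the length-strengthening padding is the usual one:

```python
# Plain Merkle-Damgaard iteration over an abstract compression function.
import hashlib

BLOCK, STATE = 64, 32

def compress(chaining: bytes, block: bytes) -> bytes:
    # stand-in FIL compression function h: {0,1}^256 x {0,1}^512 -> {0,1}^256
    return hashlib.sha256(chaining + block).digest()

def md_hash(msg: bytes, iv: bytes = bytes(STATE)) -> bytes:
    # Merkle-Damgaard strengthening: 0x80, zero pad, 8-byte bit length
    pad_len = (-(len(msg) + 9)) % BLOCK
    padded = msg + b"\x80" + bytes(pad_len) + (8 * len(msg)).to_bytes(8, "big")
    state = iv
    for i in range(0, len(padded), BLOCK):
        state = compress(state, padded[i:i + BLOCK])
    return state

print(md_hash(b"hello world").hex())
```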
2009
EPRINT
On the Security of Tandem-DM
We provide the first proof of security for Tandem-DM, one of the oldest and most well-known constructions for turning a blockcipher with n-bit blocklength and 2n-bit keylength into a 2n-bit cryptographic hash function. We prove that, when Tandem-DM is instantiated with AES-256, i.e. blocklength 128 bits and keylength 256 bits, any adversary that asks fewer than 2^{120.4} queries cannot find a collision with success probability greater than 1/2. We also prove a bound for the preimage resistance of Tandem-DM. Interestingly, as there is only one other practical construction known (FSE'06, Hirose) turning such an (n,2n)-bit blockcipher into a 2n-bit compression function with provably birthday-type collision resistance, Tandem-DM is one of only two structures that possess this desirable feature.
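For concreteness, one common description of the Tandem-DM compression function, instantiated with AES-256 as in the bound above, is sketched here; treat the exact per-call ordering as an assumption and consult the paper for the formal definition. The example requires the third-party pyca/cryptography package:

```python
# Tandem-DM compression function (G, H, M) -> (G', H') with AES-256,
# n = 128-bit block, 2n = 256-bit key.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes256(key: bytes, block: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def tandem_dm(g: bytes, h: bytes, m: bytes):
    w = aes256(h + m, g)               # first call, key = H || M
    g_new = xor(g, w)                  # Davies-Meyer style feed-forward
    h_new = xor(h, aes256(m + w, h))   # second call, key = M || W
    return g_new, h_new

g, h = tandem_dm(bytes(16), bytes(16), b"A" * 16)
print(g.hex(), h.hex())
```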
2009
EPRINT
Overview of Turbo-Code Reconstruction Techniques
In this paper we analyze different techniques to blindly recover the parameters of turbo-code encoders with only the knowledge of a noisy eavesdropped binary stream. We compare different strategies to reconstruct convolutional encoders, particularly when both constituent encoders are recursive systematic coders. We explain the intrinsic indetermination inherent to these techniques, and also how to overcome it in the case of turbo-code encoders. Moreover, we generalize Burel et al.'s particular reconstruction of the second (n,1)-encoder to (n,k)-encoders.
2009
EPRINT
Point Compression for Koblitz Elliptic Curves
Elliptic curves over finite fields have applications in public key cryptography. A Koblitz curve is an elliptic curve $E$ over $\F_2$; the group $E( \Ftn )$ has convenient features for efficient implementation of elliptic curve cryptography. Wiener and Zuccherato and Gallant, Lambert and Vanstone showed that one can accelerate the Pollard rho algorithm for the discrete logarithm problem on Koblitz curves. This implies that when using Koblitz curves, one has a lower security per bit than when using general elliptic curves defined over the same field. Hence for a fixed security level, systems using Koblitz curves require slightly more bandwidth. We present a method to reduce this bandwidth. Our method is appropriate for applications such as Diffie-Hellman key exchange or Elgamal encryption. We show that, with a low probability of failure, our method gives the expected bandwidth for a given security level.
2009
EPRINT
Polynomial Runtime and Composability
To prove security of a multi-party cryptographic protocol, one often reduces attacks on the protocol to attacks on a suitable computational problem. Thus, if the computational problem is hard, then the protocol is secure. But to allow for a security reduction, the protocol itself and the attack on the protocol must be efficient, i.e., polynomial-time. Of course, the obvious way to enforce an overall polynomial runtime of the protocol is to require each individual protocol machine and adversarial entity to be polynomial-time. However, as the specific case of zero-knowledge protocols demonstrates, an a priori polynomial-time bound on all entities may not be an optimal choice because the running time of some machines needs to depend on that of others. As we want to be able to model arbitrary protocol tasks, we work in the Universal Composability framework (UC). This framework additionally provides strong composability guarantees. We will point out that in the UC setting, finding a useful notion of polynomial-time for the analysis of general protocols is a highly non-trivial task. Our goal in this work is to find a good and useful definition of polynomial-time for multi-party protocols in the UC setting that matches the intuition of what is feasible. A good definition should have the following properties: * Flexibility: All "intuitively feasible" protocols and protocol tasks should be considered polynomial-time. * Soundness: All "intuitively feasible" attacks (i.e., adversaries) should be considered polynomial-time. * Completeness: Only "intuitively feasible" attacks should be considered polynomial-time. In particular, this implies that the security of protocols can be reduced to computational hardness assumptions. * Composability: The induced security notion should support secure (universal) composition of protocols. * Simplicity: The notion should be easy to formulate, and for all practical cases, it should be easy to decide whether a protocol or attack runs in polynomial time. The problem of finding a good definition of polynomial time in the UC framework has been considered in a number of works, but no definition satisfying the five above criteria had been found so far. This seemingly simple problem is surprisingly elusive and it is hard to come up with a definition that does not involve many technical artifacts. In this contribution, we give a definition of polynomial time for cryptographic protocols in the UC model, called reactively polynomial, that satisfies all five properties. Our notion is simple and easy to verify. We argue for its flexibility, completeness and soundness with practical examples that are problematic with previous approaches. We give a very general composition theorem for reactively polynomial protocols. The theorem states that arbitrarily many instances of a secure protocol can be used in any larger protocol without sacrificing security. Our proof is technically different from and substantially more involved than proofs for previous protocol composition theorems (for previous definitions of polynomial runtime). We believe that it is precisely this additional proof complexity, which appears only once and for all in the proof of the composition theorem, that makes a useful definition as simple as ours possible.
2009
EPRINT
Proofs of Retrievability via Hardness Amplification
Proofs of Retrievability (PoR), introduced by Juels and Kaliski, allow the client to store a file $F$ on an untrusted server, and later run an efficient audit protocol in which the server proves that it (still) possesses the client's data. Constructions of PoR schemes attempt to minimize the client and server storage, the communication complexity of an audit, and even the number of file-blocks accessed by the server during the audit. In this work, we identify several different variants of the problem (such as bounded-use vs. unbounded-use, knowledge-soundness vs. information-soundness), and give nearly optimal PoR schemes for each of these variants. Our constructions either improve (and generalize) the prior PoR constructions, or give the first known PoR schemes with the required properties. In particular, we \begin{itemize} \item Formally prove the security of an (optimized) variant of the bounded-use scheme of Juels and Kaliski~\cite{JuelsK07}, without making any simplifying assumptions on the behavior of the adversary. \item Build the first unbounded-use PoR scheme where the communication complexity is linear in the security parameter and which does not rely on Random Oracles, resolving an open question of Shacham and Waters~\cite{ShachamW08}. \item Build the first bounded-use scheme with {\em information-theoretic} security. \end{itemize} The main insight of our work comes from a simple connection between PoR schemes and the notion of {\em hardness amplification}, extensively studied in complexity theory. In particular, our improvements come from first abstracting a purely information-theoretic notion of {\em PoR codes}, and then building nearly optimal PoR codes using state-of-the-art tools from coding and complexity theory.
2009
EPRINT
Re-randomizable Encryption implies Selective Opening Security
In this paper, we present new general constructions of commitments and encryptions secure against a Selective Opening Adversary (SOA). Although it was recognized almost twenty years ago that SOA security was important, it was not until the recent breakthrough works of Hofheinz (H08) and Bellare, Hofheinz and Yilek (BHY09) that any progress was made on this fundamental problem. The Selective Opening problem is as follows: suppose an adversary receives $n$ commitments (or encryptions) of (possibly) correlated messages, and now the adversary can choose $n/2$ of the messages, and receive decommitments (or decryptions \emph{and} the randomness used to encrypt them). Do the unopened commitments (encryptions) remain secure? A protocol which achieves this type of security is called \emph{secure against a Selective Opening Adversary (SOA)}. This question arises naturally in the context of Byzantine Agreement and Secure Multiparty Computation, where an active adversary is able to eavesdrop on all the wires, and then choose a subset of players to corrupt. Unfortunately, the traditional definitions of security (IND-CPA, IND-CCA) do not guarantee security in this setting. In this paper: We formally define re-randomizable encryption and show that every re-randomizable encryption scheme gives rise to efficient encryptions secure against a selective opening adversary. (Very informally, an encryption scheme is re-randomizable if any ciphertext can be publicly transformed into a fresh encryption of the same message.) We formally define re-randomizable one-way functions and show that every re-randomizable one-way function family gives rise to efficient commitments secure against a Selective Opening Adversary. Applying our constructions to the known cryptosystems of El-Gamal, Paillier, and Goldwasser and Micali, we obtain IND-SO secure commitments and encryptions from the Decisional Diffie-Hellman (DDH), Decisional Composite Residuosity (DCR) and Quadratic Residuosity (QR) assumptions, that are either simpler or more efficient than existing constructions of Bellare, Hofheinz and Yilek. Applying our general results to the Paillier Cryptosystem we demonstrate the first cryptosystem to achieve Semantic Selective Opening security from the DCR assumption. We give black-box constructions of Perfectly Binding SOA secure commitments, which is surprising given the negative results of Bellare, Hofheinz and Yilek. We define the notion of adaptive chosen ciphertext security (CCA-2) in the selective opening setting, and describe the first encryption scheme which is CCA-2 secure (and simultaneously SOA-secure).
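ElGamal is the canonical example of a re-randomizable encryption scheme: anyone can transform a ciphertext into a fresh, identically distributed encryption of the same message without the secret key. A toy-group sketch of this property:

```python
# ElGamal re-randomization: (c1, c2) -> (c1 * g^s, c2 * y^s), no secret needed.
import secrets

p = 2 ** 127 - 1   # toy prime; use a standard prime-order group in practice
g = 3
x = secrets.randbelow(p - 1)          # secret key
y = pow(g, x, p)                      # public key

def encrypt(m: int):
    r = secrets.randbelow(p - 1)
    return pow(g, r, p), (m * pow(y, r, p)) % p

def rerandomize(c1: int, c2: int):
    s = secrets.randbelow(p - 1)      # uses only the public key
    return (c1 * pow(g, s, p)) % p, (c2 * pow(y, s, p)) % p

def decrypt(c1: int, c2: int) -> int:
    return (c2 * pow(pow(c1, x, p), -1, p)) % p

c1, c2 = encrypt(42)
d1, d2 = rerandomize(c1, c2)
assert (d1, d2) != (c1, c2) and decrypt(d1, d2) == 42
```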
2009
EPRINT
Realizing Hash-and-Sign Signatures under Standard Assumptions
Currently, there are relatively few instances of ``hash-and-sign'' signatures in the standard model. Moreover, most current instances rely on strong and less studied assumptions such as the Strong RSA and q-Strong Diffie-Hellman assumptions. In this paper, we present a new approach for realizing hash-and-sign signatures in the standard model. In our approach, a signer associates each signature with an index i that represents how many signatures that signer has issued up to that point. Then, to make use of this association, we create simple and efficient techniques that restrict an adversary which makes q signature requests to forge on an index no greater than 2q. Finally, we develop methods for dealing with this restricted adversary. Our approach requires that a signer maintains a small amount of state --- a counter of the number of signatures issued. We achieve two new realizations for hash-and-sign signatures respectively based on the RSA assumption and the Computational Diffie-Hellman assumption in bilinear groups.
2009
EPRINT
Reducing RFID Reader Load with the Meet-in-the-Middle Strategy
In almost any RFID system, a reader needs to identify, and optionally authenticate, a multitude of tags. If each tag has a unique secret, identification and authentication are trivial; however, the reader (or a back-end server) needs to perform a brute-force search for each tag-reader interaction. In this paper, we suggest a simple, efficient and secure technique that reduces reader computation to $O(\sqrt N \cdot \log N)$. Our technique is based on the well-known ``meet-in-the-middle'' strategy used in the past to attack certain symmetric ciphers.
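The meet-in-the-middle strategy the authors borrow is easiest to see in its classic setting, recovering the two keys of a double encryption with about $2^k$ work and memory instead of $2^{2k}$ trials. A toy sketch (the 8-bit "cipher" below is illustrative only, not the paper's protocol):

```python
# Classic meet-in-the-middle on a toy double encryption with 8-bit keys.
def toy_enc(k, x):  return ((x ^ k) * 167 + k) % 256   # 167 is invertible mod 256
def toy_dec(k, y):  return (((y - k) * 23) % 256) ^ k  # 23 = 167^{-1} mod 256

pt, k1, k2 = 0x3A, 0x5C, 0xA7
ct = toy_enc(k2, toy_enc(k1, pt))

forward = {}
for ka in range(256):                  # table of size ~sqrt of the key space
    forward.setdefault(toy_enc(ka, pt), []).append(ka)

matches = []
for kb in range(256):                  # 2^k lookups instead of 2^{2k} trials
    mid = toy_dec(kb, ct)
    for ka in forward.get(mid, []):
        matches.append((ka, kb))
print((k1, k2) in matches, len(matches))   # True; a few false positives remain
```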
2009
EPRINT
Secret sharing on trees: problem solved
We determine the worst case information rate for all secret sharing schemes based on trees. It is the inverse of $2-1/c$, where $c$ is the size of the maximal core in the tree. A {\it core} is a connected subset of the vertices so that every vertex in the core has a neighbor outside the core. The upper bound comes from an application of the entropy method, while the lower bound is achieved by a construction using Stinson's decomposition theorem. It is shown that $2-1/c$ is also the {\it fractional cover number} of the tree where the edges of the tree are covered by stars, and the vertex cover should be minimized. We also give an $O(n^2)$ algorithm which finds an optimal cover on any tree, and thus a perfect secret sharing scheme with optimal rate.
2009
EPRINT
Security Enhancement of Various MPKCs by 2-layer Nonlinear Piece In Hand Method
Following the previous proposal of the nonlinear Piece in Hand method, which has a 3-layer structure, a 2-layer nonlinear Piece in Hand method is proposed. Both aim at enhancing the security of existing and future multivariate public key cryptosystems. The new nonlinear Piece in Hand method is compared with the 3-layer method and with PMI+, which was proposed by Ding et al.
2009
EPRINT
Security of Verifiably Encrypted Signatures
In a verifiably encrypted signature scheme, signers encrypt their signature under the public key of a trusted third party and prove that they did so correctly. The security properties are unforgeability and opacity. Unforgeability states that a malicious signer should not be able to forge verifiably encrypted signatures, and opacity prevents extraction from an encrypted signature. This paper proposes two novel fundamental requirements for verifiably encrypted signatures, called \emph{extractability} and \emph{abuse-freeness}, and analyzes their effects on the security model of Boneh et al. Extractability ensures that the trusted third party is always able to extract a valid signature from a valid verifiably encrypted signature, and abuse-freeness guarantees that a malicious signer, who cooperates with the trusted party, is not able to forge a verifiably encrypted signature. We further show that both properties are not covered by the model of Boneh et al., introduced at Eurocrypt 2003.
2009
EPRINT
Separating two roles of hashing in one-way message authentication
We analyse two new and related families of one-way authentication protocols, in which a party wants to authenticate its public information to another. In the first, the objective is to do without shared passwords or a PKI, making use of low-bandwidth empirical/authentic channels where messages cannot be faked or modified. The analysis of these leads to a new security principle, termed separation of security concerns, under which protocols should be designed to tackle one-shot attacks and combinatorial search separately. This also leads us to develop a new class of protocols for the case, such as PKI, where a relatively expensive signature mechanism exists. We demonstrate as part of this work that a popular protocol in the area, termed MANA I, neither optimises human effort nor offers as much security as had previously been believed. We offer a number of improved versions of MANA I that provide more security for half the empirical work, using a more general empirical channel.
2009
EPRINT
Short Redactable Signatures Using Random Trees
A redactable signature scheme for a string of objects supports verification even if multiple substrings are removed from the original string. It is important that the redacted string and its signature do not reveal anything about the content of the removed substrings. Existing schemes completely or partially leak a piece of information: the lengths of the removed substrings. Such length information could be crucial for many applications, especially when the removed substring has low entropy. We propose a scheme that can hide the length. Our scheme consists of two components. The first component $\mathcal{H}$, which is a ``collision resistant'' hash, maps a string to an unordered set, whereby existing schemes on unordered sets can then be applied. However, a sequence of random numbers has to be explicitly stored, and thus it produces a large signature of size at least $mk$ bits, where $m$ is the number of objects and $k$ is the size of a key sufficiently large for cryptographic operations. The second component uses the RGGM tree, a variant of the GGM tree, to generate the pseudorandom numbers from a short seed, expected to be of size $O(k+ tk\log m)$ where $t$ is the number of removed substrings. Unlike the GGM tree, the structure of the proposed RGGM tree is random. By an intriguing statistical property of the random tree, the redacted tree does not reveal the lengths of the substrings removed. The hash function $\mathcal{H}$ and the RGGM tree can be of independent interest.
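The underlying GGM construction expands a short seed down a binary tree, deriving each node's two children with a length-doubling PRG, so revealing an inner node releases a whole subtree of pseudorandom leaves without exposing the seed. A minimal sketch of standard GGM (the RGGM variant additionally randomizes the tree structure; the PRG below is a SHA-256 stand-in):

```python
# Standard GGM tree: seed -> 2^depth pseudorandom leaf values.
import hashlib

def prg(seed: bytes):
    left = hashlib.sha256(seed + b"\x00").digest()
    right = hashlib.sha256(seed + b"\x01").digest()
    return left, right

def ggm_leaves(seed: bytes, depth: int):
    level = [seed]
    for _ in range(depth):
        nxt = []
        for node in level:
            nxt.extend(prg(node))      # each node spawns two children
        level = nxt
    return level

leaves = ggm_leaves(b"short-seed", 4)
print(len(leaves), leaves[0].hex()[:16])
```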
2009
EPRINT
Simulation without the Artificial Abort: Simplified Proof and Improved Concrete Security for Waters' IBE Scheme
Waters' variant of the Boneh-Boyen IBE scheme is attractive because of its efficiency, applications, and security attributes, but suffers from a relatively complex proof with poor concrete security. This is due in part to the proof's ``artificial abort'' step, which has since been inherited by numerous derivative works. It has often been asked whether this step is necessary. We show that it is not, providing a new proof that eliminates it. The new proof is not only simpler than the original one but offers better concrete security for important ranges of the parameters. As a result, one can securely use smaller groups, resulting in significant efficiency improvements.
2009
EPRINT
The Brezing-Weng-Freeman Method for Certain Genus two Hyperelliptic Curves
We construct pairing-friendly curves of the form $Y^2 = X^5 + uX^3 + vX$ over large finite prime fields. The rho value of our family is always less than 4. Our method is based on the fact that, under a certain condition, the Jacobian $J$ of the curve splits into a square of an elliptic curve over the quadratic extension of the base field. However, the curves generated by our method are $F_p$-simple. A key ingredient is the construction of a pairing non-friendly elliptic curve by the modified Brezing-Weng-Freeman method in such a way that $J$ is pairing-friendly.
2009
EPRINT
The Case for Quantum Key Distribution
Quantum key distribution (QKD) promises secure key agreement by using quantum mechanical systems. We argue that QKD will be an important part of future cryptographic infrastructures. It can provide long-term confidentiality for encrypted information without reliance on computational assumptions. Although QKD still requires authentication to prevent man-in-the-middle attacks, it can make use of either information-theoretically secure symmetric key authentication or computationally secure public key authentication: even when using public key authentication, we argue that QKD still offers stronger security than classical key agreement.
2009
EPRINT
Thermocommunication
Looking up -- you realize that one of the twelve light bulbs of your living room chandelier has to be replaced. You turn electricity off, move a table to the center of the room, put a chair on top of the table and, new bulb in hand, you climb up on top of the chair. As you reach the chandelier, you notice that\ldots all bulbs look alike and that you have forgotten which bulb needed to be changed.\smallskip Restart all over again?\smallskip Luckily, an effortless creative solution exists. By just touching the light bulbs you can determine the cold one and replace it! Put differently, information about the device's malfunction leaked-out via its temperature...
2009
EPRINT
Traceability Codes
Traceability codes are combinatorial objects introduced by Chor, Fiat and Naor in 1994 to be used in traitor tracing schemes to protect digital content. A $k$-traceability code is used in a scheme to trace the origin of digital content under the assumption that no more than $k$ users collude. It is well known that an error correcting code of high minimum distance is a traceability code. When does this `error correcting construction' produce good traceability codes? The paper explores this question. The paper shows (using probabilistic techniques) that whenever $k$ and $q$ are fixed integers such that $k\geq 2$ and $q\geq k^2-\lceil k/2\rceil+1$, or such that $k=2$ and $q=3$, there exist infinite families of $q$-ary $k$-traceability codes of constant rate. These parameters are of interest since the error correcting construction cannot be used to construct $k$-traceability codes of constant rate for these parameters: suitable error correcting codes do not exist because of the Plotkin bound. This answers a question of Barg and Kabatiansky from 2004. Let $\ell$ be a fixed positive integer. The paper shows that there exists a constant $c$, depending only on $\ell$, such that a $q$-ary $2$-traceability code of length $\ell$ contains at most $cq^{\lceil \ell/4\rceil}$ codewords. When $q$ is a sufficiently large prime power, a suitable Reed--Solomon code may be used to construct a $2$-traceability code containing $q^{\lceil \ell/4\rceil}$ codewords. So this result may be interpreted as implying that the error correcting construction produces good $q$-ary $2$-traceability codes of length $\ell$ when $q$ is large when compared with $\ell$.
2009
EPRINT
Trade-Off Between Key Size and Efficiency in Universal Hashing Using Polynomials
Consider the following universal hashing set-up. Let $\rF$ be a finite field and suppose any message consists of at most $2^{30}$ elements of $\rF$. Suppose further that a collision bound of $2^{-128}$ is desired. This can be achieved using usual polynomial hashing with $|\rF|\approx 2^{160}$ and having a digest of $160$ bits and a key of length also $160$ bits; hashing an $n$-bit message requires approximately $n/160$ multiplications over $\rF$. We describe a new hashing method which can achieve the same collision bound of $2^{-128}$ for messages of at most $L$ elements by working over a field $\rF$ of size $\approx 2^{136}$. Hashing an $n$-bit message requires approximately $n/136$ multiplications over $\rF$. Supposing similar efficiency in implementation of field arithmetic in both cases, the ratio of the two processing times is $160/136\times[M_{136}]/[M_{160}]$, where $M_a$ is the time for one multiplication in a field of size $\approx 2^a$. Since $M_a$ is quadratic in $a$, the new algorithm is expected to be faster than usual polynomial hashing. Further, the size of the digest also reduces to $136$ bits. The trade-off is that the key size increases to $5\times 136=680$ bits compared to the key size of $160$ bits for usual polynomial hashing. The method mentioned above is a special case of a general method of combining independent universal hash functions at multiple levels to obtain a new universal hash function. Other consequences of the new algorithm are worked out including how it can be instantiated by a class of polynomials introduced by Bernstein.
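The "usual polynomial hashing" baseline is worth seeing concretely: the message blocks are the coefficients of a polynomial evaluated at the secret key point, typically via Horner's rule. A toy-field sketch (the paper's parameters are $|\rF|\approx 2^{160}$ for the classical scheme versus $2^{136}$ for the new multi-level one):

```python
# Usual polynomial universal hashing via Horner's rule over a prime field.
P = 2 ** 61 - 1                  # Mersenne prime standing in for F

def poly_hash(key: int, blocks) -> int:
    acc = 0
    for b in blocks:             # Horner: ((b1*k + b2)*k + ...) mod P
        acc = (acc * key + b) % P
    return acc

msg = [314159, 271828, 161803]
print(poly_hash(123456789, msg))
# Two distinct l-block messages collide for at most l key values, giving
# collision probability <= l / |F| over a uniformly random key.
```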
2009
EPRINT
UC-Secure Source Routing Protocol
Multi-path routing schemes provide a reliability guarantee for mobile ad hoc networks. We propose a new method for analyzing the security of multi-path routing protocols within the framework of Universally Composable (UC) security. Based on a topological model in which adversarial nodes exist, the concept of a plausible route is extended and the definition of a plausible-route set is presented. The plausible-route set is used in the description of multi-path routing for ad hoc networks, and a formal security definition based on UC-RP is given. A provably secure Multiple Node-Disjoint Paths source routing protocol (SMNDP) is proposed, addressing the security faults of MNDP in the active adversary model. The new approach shows that the security of SMNDP can be reduced to the security of the message authentication code and the digital signature scheme. SMNDP achieves the correctness of the route discovery process, the authentication of node identifiers, and the integrity of route information.
2009
EPRINT
Un-Trusted-HB: Security Vulnerabilities of Trusted-HB
With increased use of passive RFID tags, the need for secure lightweight identification protocols arose. HB+ is one such protocol, which was proven secure in the detection-based model, but shown breakable by man-in-the-middle attacks. Trusted-HB is a variant of HB+, specifically designed to resist man-in-the-middle attacks. In this paper, we discuss several weaknesses of Trusted-HB, show that the formal security proof provided by its designers is incorrect, and demonstrate how to break it in realistic scenarios.
2009
EPRINT
Unconditionally Secure Asynchronous Multiparty Computation with Quadratic Communication
Secure multiparty computation (MPC) allows a set of $n$ parties to securely compute an agreed function, even if up to $t$ parties are under the control of an adversary. In this paper, we propose a new asynchronous secure multiparty computation (AMPC) protocol that provides information theoretic security with $n = 4t+1$, where $t$ out of the $n$ parties can be under the influence of a Byzantine (active) adversary ${\cal A}_t$ having unbounded computing power. Our protocol communicates $O(n^2 \log|{\mathbb F}|)$ bits per multiplication and involves a negligible error probability of $2^{-\Omega(\kappa)}$, where $\kappa$ is the error parameter and ${\mathbb F}$ is the field over which the computation is carried out. The best known information theoretically secure AMPC with $n=4t+1$ communicates $O(n^3 \log|{\mathbb F}|)$ bits per multiplication and does not involve any error probability in computation. Though a negligible error probability is involved, our AMPC protocol provides the best communication complexity among all known AMPC protocols providing information theoretic security. Moreover, the communication complexity of our AMPC is the same as that of the best known AMPC protocol with cryptographic assumptions. As a tool for our AMPC protocol, we propose a new method of efficiently generating $(t,2t)$-sharings of multiple secrets concurrently in the asynchronous setting, which is of independent interest.
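The primitive behind $(t,2t)$-sharing is Shamir sharing with a chosen polynomial degree: the same secret is dealt once with a degree-$t$ and once with a degree-$2t$ polynomial. A minimal synchronous sketch (the field size and party count are illustrative; the paper's protocol is asynchronous and verifiable):

```python
# Shamir sharing with a chosen degree, plus Lagrange reconstruction at 0.
import secrets

P = 2 ** 61 - 1   # prime field

def share(secret: int, degree: int, n: int):
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(degree)]
    return [sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]          # share for party x

def reconstruct(points):
    s = 0
    for x_i, y_i in points:
        num, den = 1, 1
        for x_j, _ in points:
            if x_j != x_i:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        s = (s + y_i * num * pow(den, -1, P)) % P
    return s

n, t, secret = 9, 2, 42                        # n = 4t + 1
t_shares = share(secret, t, n)                 # degree-t sharing
t2_shares = share(secret, 2 * t, n)            # degree-2t sharing of same secret
assert reconstruct(list(enumerate(t_shares, 1))[: t + 1]) == secret
assert reconstruct(list(enumerate(t2_shares, 1))[: 2 * t + 1]) == secret
```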
2009
EPRINT
Universally Composable Symmetric Encryption
For most basic cryptographic tasks, such as public key encryption, digital signatures, authentication, key exchange, and many other more sophisticated tasks, ideal functionalities have been formulated in the simulation-based security approach, along with their realizations. Surprisingly, however, no such functionality exists for symmetric encryption, except for a more abstract Dolev-Yao style functionality. In this paper, we fill this gap. We propose two functionalities for symmetric encryption, an unauthenticated and an authenticated version, and show that they can be implemented based on standard cryptographic assumptions for symmetric encryption schemes, namely IND-CCA security and authenticated encryption, respectively. We also illustrate the usefulness of our functionalities in applications, both in simulation-based and game-based security settings.