International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2014-05-15
09:17 [Pub][ePrint]

The public key encryption scheme presented in this paper uses two values, called the shadow values of a base value, where the base value is in turn derived from the two shadow values. When two integers (the first shadow value and the second shadow value) are multiplied to produce a product, and one is subtracted from that product, the result is the first base value of the shadow pair. This first base value may be divided by any divisor that yields a positive integer quotient and zero remainder; every value used in such a division, together with its quotient, is a base value for the chosen shadow pair. One of the base values is then chosen along with the two shadow values, and this triplet serves as the public key to encrypt a message and the private key to decrypt the encrypted message.
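The arithmetic described above can be sketched directly. A minimal illustration (the function name and example values are illustrative, not from the paper):

```python
# Toy sketch of the "shadow/base value" derivation described above:
# the first base value is s1*s2 - 1, and every divisor of it (with its
# quotient) is also a base value for the shadow pair (s1, s2).

def base_values(s1: int, s2: int) -> list[int]:
    """Return all base values for the shadow pair (s1, s2):
    the divisors of s1*s2 - 1 (trial division; fine for toy sizes)."""
    first_base = s1 * s2 - 1
    return [d for d in range(1, first_base + 1) if first_base % d == 0]

# Example: shadows 4 and 7 give first base value 4*7 - 1 = 27,
# whose divisors 1, 3, 9, 27 are all base values for this pair.
bases = base_values(4, 7)
print(bases)  # [1, 3, 9, 27]
```

A triplet such as (9, 4, 7), i.e. one chosen base value plus the two shadows, would then play the role of the key material in the scheme's terms.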

09:17 [Pub][ePrint]

Increasingly, confidential medical records are being stored in data centers hosted by hospitals or large companies. As sophisticated algorithms for predictive analysis on medical data continue to be developed, it is likely that, in the future, more and more computation will be done on private patient data. While encryption provides a tool for assuring the privacy of medical information, it limits the functionality for operating on such data. Conventional encryption methods used today provide only very restricted possibilities, or none at all, to operate on encrypted data without decrypting it first. Homomorphic encryption provides a tool for handling such computations on encrypted data, without decrypting the data, and without even needing the decryption key.

In this paper, we discuss possible application scenarios for homomorphic encryption in order to ensure privacy of sensitive medical data. We describe how to privately conduct predictive analysis tasks on encrypted data using homomorphic encryption. As a proof of concept, we present a working implementation of a prediction service running in the cloud (hosted on Microsoft's Windows Azure), which takes as input private encrypted health data, and returns the probability of suffering cardiovascular disease in encrypted form. Since the cloud service uses homomorphic encryption, it makes this prediction while handling only encrypted data, learning nothing about the submitted confidential medical data.
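The "compute on ciphertexts" idea can be illustrated with an additively homomorphic scheme. Below is a toy Paillier implementation, a minimal sketch with insecurely small parameters (a real service of the kind described would use large parameters and a scheme supporting the richer computations needed for prediction): the server multiplies two ciphertexts, and only the key holder learns the sum of the plaintexts.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic). Multiplying two
# ciphertexts mod n^2 adds the underlying plaintexts mod n, so a server
# can aggregate encrypted values without ever seeing the decryption key.
# Parameters are far too small for real use; illustration only.

p, q = 293, 433                     # toy primes
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)        # Carmichael-style exponent
g = n + 1                           # standard choice of generator

def enc(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    def L(x: int) -> int:
        return (x - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)
    return (L(pow(c, lam, n2)) * mu) % n

# The "server" multiplies ciphertexts; plaintexts add underneath.
c = enc(20) * enc(22) % n2
print(dec(c))  # 42
```

Decryption of the product yields 20 + 22 = 42 even though the server only handled ciphertexts, which is the essence of the privacy guarantee claimed above.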

09:17 [Pub][ePrint]

Several recent and high-profile incidents give cause to believe that randomness failures of various kinds are endemic in deployed cryptographic systems. In the face of this, it behoves cryptographic researchers to develop methods to immunise cryptographic schemes, to the extent that this is possible, against such failures. This paper considers the practically-motivated situation where an adversary is able to force a public key encryption scheme to reuse random values, and functions of those values, in encryption computations involving adversarially chosen public keys and messages. It presents a security model appropriate to this situation, along with variants of this model. It also provides necessary conditions on the set of functions used in order to attain this security notion, and demonstrates that these conditions are also sufficient in the Random Oracle Model. Further standard model constructions achieving weaker security notions are also given, with these constructions having interesting connections to other primitives, including: pseudo-random functions that are secure in the related-key attack setting; Correlated Input Secure hash functions; and public key encryption schemes that are secure in the auxiliary input setting (a special type of leakage resilience).
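A common way to hedge against such randomness reuse, in the spirit of the random-oracle construction mentioned above, is to derive the actual encryption coins by hashing the supplied randomness together with the public key and message. A minimal sketch of that idea, where the "encryption" is a placeholder XOR rather than a real scheme and all names are illustrative:

```python
import hashlib
import os

# Hedging sketch: even if the adversary forces the same randomness r to be
# reused, the derived coins H(r || pk || m) differ whenever the public key
# or message differs, so ciphertexts do not repeat across those inputs.

def hedged_coins(r: bytes, pk: bytes, m: bytes) -> bytes:
    return hashlib.sha256(r + pk + m).digest()

def toy_encrypt(pk: bytes, m: bytes, r: bytes) -> bytes:
    coins = hedged_coins(r, pk, m)
    # Placeholder "encryption": XOR the message with the derived coins.
    return bytes(a ^ b for a, b in zip(m, coins))

r = os.urandom(16)   # suppose the adversary forces this value to repeat
c1 = toy_encrypt(b"pk-A", b"hello world!", r)
c2 = toy_encrypt(b"pk-B", b"hello world!", r)
print(c1 != c2)      # same message and randomness, different ciphertexts
```

The point is not the placeholder cipher but the coin derivation: the hash absorbs the adversarially controlled inputs so that reused randomness alone no longer forces repeated ciphertexts.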

09:17 [Pub][ePrint]

We present a "universal" Random Access Machine (RAM, in short) for tamper and leakage resilient computation. The RAM has one CPU that accesses three storages (called disks in the following); two of them are secret, while the other one is public. The CPU has constant size for each fixed value of the security parameter $k$. We construct a compiler for this architecture which transforms any keyed primitive into a RAM program where the key is encoded and stored on the two secret disks and the instructions for evaluating the functionality are stored on the public disk.

The compiled program tolerates arbitrary independent tampering of the disks. That is, the adversary can tamper with the intermediate values produced by the CPU, and the program code of the compiled primitive on the public disk. In addition, it tolerates bounded independent leakage from the disks and continuous leakage from the communication channels between the disks and the CPU.

Although it is required that the circuit of the CPU is tamper and leakage proof, its design is independent of the actual primitive being computed, and its internal storage is non-persistent, i.e., all secret registers are reset between invocations. Hence, our result can be interpreted as reducing the problem of shielding arbitrarily complex computations to protecting a single, simple and "universal" component. As a main ingredient of our construction we use continuous non-malleable codes that satisfy certain additional properties.

09:17 [Pub][ePrint]

This paper extends the certificateless public key infrastructure model proposed by Hassouna et al. with a new digital signature scheme that provides true non-repudiation. The proposed signature scheme is short and efficient, and has the additional strength that the KGC plays no part in the signature generation/verification process; therefore any compromise of the KGC does not affect the non-repudiation service of the system. Furthermore, even the KGC cannot forge signatures by (temporarily) replacing the user's public key.

09:17 [Pub][ePrint]

Mix nets with randomized partial checking (RPC mix nets) were introduced by Jakobsson, Juels, and Rivest as particularly simple and efficient verifiable mix nets. These mix nets have been used in several implementations of prominent e-voting systems to provide vote privacy and verifiability. In RPC mix nets, higher efficiency is traded for a lower level of privacy and verifiability. However, these mix nets have never undergone a rigorous formal analysis. Recently, Khazaei and Wikström even pointed out several severe problems in the original proposal and in implementations of RPC mix nets in e-voting systems, both for so-called re-encryption and Chaumian RPC mix nets. While Khazaei and Wikström proposed several fixes, the security status of Chaumian RPC mix nets (with the fixes applied) has been left open; re-encryption RPC mix nets, as they suggest, should not be used at all.

In this paper, we provide the first formal security analysis of Chaumian RPC mix nets. We propose security definitions that allow one to measure the level of privacy and verifiability RPC mix nets offer, and then, based on these definitions, carry out a rigorous analysis. Altogether, our results show that these mix nets provide a reasonable level of privacy and verifiability, and that they remain an interesting option for use in e-voting systems.
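The partial-checking idea can be sketched for a pair of toy Chaumian mixes. The sketch below uses plaintext stand-ins instead of onion-encrypted ciphertexts and commitments, and all names are illustrative: for each message, exactly one of its two links is opened at random, so each mix is audited on roughly half of its links while no complete input-to-output path is revealed.

```python
import random

# Toy randomized partial checking over two mixes. mix() permutes its
# inputs and returns both the output list and the permutation, where
# perm[i] is the output position of input i.

def mix(inputs):
    perm = list(range(len(inputs)))
    random.shuffle(perm)
    outputs = [None] * len(inputs)
    for i, j in enumerate(perm):
        outputs[j] = inputs[i]
    return outputs, perm

msgs = ["vote-%d" % i for i in range(6)]
mid, perm1 = mix(msgs)   # first mix
out, perm2 = mix(mid)    # second mix

# Audit: for each middle position, flip a coin and open EITHER the link
# into it (first mix) OR the link out of it (second mix) -- never both,
# so no full path from an input vote to an output vote is exposed.
for mid_pos in range(len(msgs)):
    if random.random() < 0.5:
        src = perm1.index(mid_pos)         # which input landed here
        assert mid[mid_pos] == msgs[src]   # first-mix link checks out
    else:
        dst = perm2[mid_pos]               # where this value went next
        assert out[dst] == mid[mid_pos]    # second-mix link checks out
print("partial check passed; no complete path was opened")
```

In a real RPC mix net the opened links are verified against published commitments on encrypted onions, and the trade-off analyzed in the paper is precisely how much privacy and verifiability survive this half-opening.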

00:19 [News]

## Statement of Principle from the IACR Membership on Mass Surveillance and the Subversion of Cryptography

The membership of the IACR repudiates mass surveillance and the undermining of cryptographic solutions and standards. Population-wide surveillance threatens democracy and human dignity. We call for expediting research and deployment of effective techniques to protect personal privacy against governmental and corporate overreach.

2014-05-13
09:17 [Pub][ePrint]

Authentication is the process of verifying whether a user is authorized. The identity consists mainly of a username and password used to verify the two entities. The authentication information is stored in encrypted form on a device that is properly registered with the server. During the authentication process between user and server, an intruder can eavesdrop on the communication channel and log into the system as an authorized user. To overcome this, the optimal strong password authentication (OSPA) protocol applies multiple hash operations at authentication time. The server chooses the hash function only when the user requests login, so the intruder cannot learn the information transferred during the authentication process.

OSPA improves the authentication process by providing mutual authentication between user and server. The authentication information is never revealed to the intruder, since the multiple hash operations keep it secure. OSPA protects the information of both user and server and guards against guessing attacks, which the server prevents using the multiple hash functions and a USB stick.
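The abstract does not spell out OSPA's exact hash schedule, but the core idea of authenticating with iterated hashes can be illustrated with a Lamport-style one-time password chain, a minimal sketch under that assumption (names are illustrative):

```python
import hashlib

# Iterated-hash authentication sketch: the server stores H^n(password);
# at each login the client sends the preimage of the stored value, which
# the server verifies with a single hash and then adopts as its new state.
# An eavesdropper who sees one login value cannot invert H to produce the
# next one.

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def iterate(x: bytes, times: int) -> bytes:
    for _ in range(times):
        x = H(x)
    return x

password, n = b"correct horse", 1000
server_state = iterate(password, n)     # registered once, out of band

# One login round: client reveals H^(n-1)(password).
client_otp = iterate(password, n - 1)
assert H(client_otp) == server_state    # server-side check
server_state = client_otp               # server rolls its state forward
print("login accepted")
```

Each value is usable only once, which is what blunts the replay and guessing attacks discussed above; the chain must be re-registered when it is exhausted.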

09:17 [Pub][ePrint]

In this work, we describe a simple and efficient construction of a large subset S of F_p, where p is a prime, such that the set A(S) for any non-identity affine map A over F_p has small intersection with S.

Such sets, called affine-evasive sets, were defined and constructed in [ADL14] as the central step in the construction of non-malleable codes against affine tampering over F_p, for a prime p. This was then used to obtain efficient non-malleable codes against split-state tampering.

Our result resolves one of the two main open questions in [ADL14]. It improves the rate of non-malleable codes against affine tampering over F_p from log log p to a constant, and consequently the rate for non-malleable codes against split-state tampering for n-bit messages is improved from n^6 log^7 n to n^6.
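The defining property can be checked by brute force over a small field. The set S below is arbitrary and serves only to illustrate the definition; it is not the paper's construction, which achieves good parameters at scale. The code enumerates every non-identity affine map A(x) = a*x + b over F_p and reports the worst-case overlap |A(S) ∩ S|.

```python
# Brute-force check of affine evasiveness over a small prime field F_p.
# An affine-evasive set S keeps |A(S) ∩ S| small for every non-identity
# affine map A; here we simply measure the worst case for a sample set.

p = 31
S = {3, 7, 12, 19, 26}   # illustrative subset of F_p, not a construction

worst = 0
for a in range(1, p):           # a = 0 would not be a permutation of F_p
    for b in range(p):
        if (a, b) == (1, 0):
            continue            # skip the identity map
        image = {(a * x + b) % p for x in S}
        worst = max(worst, len(image & S))
print("max |A(S) ∩ S| over non-identity affine maps:", worst)
```

For the paper's parameters this worst case must stay small relative to |S| even as p grows, which is what drives the constant-rate improvement claimed above.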

09:17 [Pub][ePrint]

We present explicit optimal binary pebbling algorithms for reversing one-way hash chains. For a hash chain of length $2^k$, the number of hashes performed per output round is at most $\lceil k/2 \rceil$, whereas the number of hash values stored throughout is at most $k$. This is optimal for binary pebbling algorithms characterized by the property that the midpoint of the hash chain is computed just once and stored until it is output, and that this property applies recursively to both halves of the hash chain.

We develop a framework for easy comparison of explicit binary pebbling algorithms, including simple speed-1 binary pebbles, Jakobsson's binary speed-2 pebbles, and our optimal binary pebbles. Explicit schedules describe for each pebble exactly how many hashes need to be performed in each round. The optimal schedule exhibits a nice recursive structure, which allows fully optimized implementations that can readily be deployed. In particular, we develop in-place implementations with minimal storage overhead (essentially, storing only hash values), and fast implementations with minimal computational overhead.
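For context, here is a minimal sketch of the underlying problem: a hash chain x_1 = H(seed), x_i = H(x_{i-1}) must be output in reverse order x_n, ..., x_1. The naive strategy below recomputes each value from the seed, using O(1) storage but up to n hashes per output round; the pebbling algorithms above trade this off to at most ceil(k/2) hashes per round with at most k stored values for n = 2^k.

```python
import hashlib

# Naive hash-chain reversal: store only the seed and recompute x_i from
# scratch for every output. Correct but slow -- this is the baseline that
# pebbling algorithms improve on.

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(seed: bytes, i: int) -> bytes:
    """Compute x_i = H^i(seed)."""
    x = seed
    for _ in range(i):
        x = H(x)
    return x

def reverse_chain_naive(seed: bytes, n: int):
    for i in range(n, 0, -1):
        yield chain(seed, i)    # i hashes this round; up to n per round

seed, n = b"seed", 8
outputs = list(reverse_chain_naive(seed, n))
assert outputs[0] == chain(seed, n) and outputs[-1] == H(seed)
print("reversed chain of length", n)
```

A pebbling algorithm instead keeps a few intermediate values ("pebbles") positioned along the chain, so each output round only advances pebbles a short distance rather than recomputing from the seed.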