[Event][New] FSE'14: Fast Software Encryption 2014

Submission: 12 November 2013

Notification: 18 January 2014

From March 3 to March 5

Location: London, United Kingdom

More Information: http://fse2014.isg.rhul.ac.uk


2013-07-19

We study various classical secure computation problems in the context of fairness and relate them to one another. We also systematically study fair sampling problems (i.e., inputless functionalities) and discover three levels of complexity among them.

Our results include the following:

- Fair exchange cannot be securely reduced to the problem of fair coin-tossing by an r-round protocol, except with an error that is $\Omega(1/r)$.

- Finite fair *sampling* problems with rational probabilities can all be reduced to fair coin-tossing and unfair 2-party computation (or, under computational assumptions, to fair coin-tossing alone). Thus, for this class of functionalities, fair coin-tossing is complete.

- The only sampling problems that have fair protocols without any fair setup are the trivial ones, in which the two parties can sample their outputs independently. All others incur an $\Omega(1/r)$ error, roughly matching an upper bound for fair sampling by Moran et al. (TCC 2009).

- We study communication-less protocols for sampling, given another sampling problem as setup, since such protocols are inherently fair. We use spectral graph-theoretic tools to show that it is impossible to reduce a sampling problem with *common information* (like fair coin-tossing) to one without (like 'noisy' coin-tossing, which has a small probability of disagreement).

The last result above is a slightly sharper version of a classical result by Witsenhausen from 1975. Our proof reveals the connection between the tool used by Witsenhausen, namely 'maximal correlation', and spectral graph-theoretic tools like the Cheeger inequality.
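The spectral view of maximal correlation can be made concrete in a small sketch (a generic textbook computation, not the paper's proof): for a finite joint distribution, the Hirschfeld-Gebelein-Rényi maximal correlation equals the second-largest singular value of the normalized joint-probability matrix, which is how Witsenhausen computed it.

```python
import numpy as np

def maximal_correlation(P):
    """Maximal correlation of a finite joint distribution P[x, y],
    computed as the second-largest singular value of the matrix
    P[x, y] / sqrt(Px[x] * Py[y]) (Witsenhausen, 1975)."""
    P = np.asarray(P, dtype=float)
    Px = P.sum(axis=1)          # marginal of X
    Py = P.sum(axis=0)          # marginal of Y
    Q = P / np.sqrt(np.outer(Px, Py))
    s = np.linalg.svd(Q, compute_uv=False)   # descending order
    return s[1]                 # s[0] == 1 comes from constant functions

# Fair shared coin (X = Y, uniform): common information, correlation 1.
shared = np.array([[0.5, 0.0], [0.0, 0.5]])
# 'Noisy' coin: the parties disagree with probability eps = 0.1.
noisy = np.array([[0.45, 0.05], [0.05, 0.45]])
print(maximal_correlation(shared))  # 1.0
print(maximal_correlation(noisy))   # 1 - 2*eps = 0.8
```

The gap between correlation exactly 1 (common information) and strictly below 1 (noisy correlations) is what separates the two classes of sampling setups in the result above.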

Transferable conditional electronic cash (e-cash) allows a payer to spend e-cash based on an outcome not known in advance. It also allows a payee to spend the e-cash further, or to deposit it at a bank, once the outcome is known. Among security properties, the anonymity of the payer has been widely studied; however, the payer is linkable in existing conditional e-cash schemes. This paper presents the first optimally anonymous and transferable conditional e-cash system, built on two recent cryptographic primitives, the Groth-Sahai (GS) proof system and commuting signatures, to obtain the user's unlinkability and optimal anonymity. A publisher is introduced to publish the conditions, and is formalized here for the first time. By splitting the deposit protocol into two parts, the anonymity of the user is preserved during deposit. Compared with existing conditional e-cash schemes, this scheme has constant computation and communication cost. Finally, we give a security proof in the standard model.

2013-07-18

Most implementations of public key cryptography employ exponentiation algorithms. Side-channel attacks on secret exponents are typically bound to the leakage of single executions because of cryptographic protocols or side-channel countermeasures such as blinding. We propose a new class of algorithms, namely unsupervised cluster classification algorithms, to attack cryptographic exponentiations and recover secret exponents without any prior profiling or heuristic leakage models. Not requiring profiling is a significant advantage to attackers. In fact, the proposed non-profiled single-execution attack is able to exploit any available single-execution leakage and provides a straightforward option to combine simultaneous measurements to improve the signal-to-noise ratio of the available leakage. We present empirical results from attacking an elliptic curve scalar multiplication, exploiting location-based leakage from high-resolution electromagnetic field measurements without prior profiling. Individual measurements lead to a sufficiently low remaining brute-force complexity of the secret exponent. An errorless recovery of the exponent is achieved after combining a few measurements.
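The core idea of an unsupervised, non-profiled attack can be illustrated with a toy sketch (our own simulated data and a plain 2-means clustering, not the paper's measurement setup): if each exponent-bit operation yields one leakage vector, clustering separates the two operation types without any profiling, recovering the bits up to a global flip.

```python
import numpy as np

def cluster_exponent_bits(traces, iters=20):
    """Toy unsupervised single-execution attack: 2-means clustering
    over one leakage vector per exponent-bit operation. Centroids
    are initialized from the farthest-apart pair of samples, so no
    profiling or leakage model is needed."""
    X = np.asarray(traces, dtype=float)
    d0 = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    i, j = np.unravel_index(d0.argmax(), d0.shape)
    c = np.stack([X[i], X[j]])          # two initial centroids
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - c[None, :, :], axis=2)
        labels = d.argmin(axis=1)       # assign each trace to a cluster
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = X[labels == k].mean(axis=0)
    return labels

# Simulated leakage: operations on 1-bits have a higher mean amplitude.
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=64)
traces = rng.normal(loc=bits[:, None] * 3.0, scale=0.5, size=(64, 10))
guess = cluster_exponent_bits(traces)
# Cluster labels recover the exponent only up to complementation.
mismatch = int((guess != bits).sum())
errors = min(mismatch, len(bits) - mismatch)
```

With well-separated leakage the clustering is errorless; in practice the residual errors determine the remaining brute-force complexity mentioned above.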

There exists a broad range of RFID protocols in the literature that propose hash functions as cryptographic primitives. Since Keccak was selected as the winner of the NIST SHA-3 competition in 2012, the question arises of how far we can push the limits of Keccak to fulfill the stringent requirements of passive low-cost RFID. In this paper, we address this question by presenting a hardware implementation of Keccak that aims for lowest power and lowest area. Our smallest (full-state) design requires only 2,927 GEs (for designs with external memory available) and 5,522 GEs (total size including memory). It has a power consumption of 12.5 µW at 1 MHz on a low-leakage 130 nm CMOS process technology. As a result, we provide a design that needs 40% fewer resources than related work. Our design is even smaller than the smallest SHA-1 and SHA-2 implementations.

In this paper, information-theoretic cryptography is discussed based on conditional Rényi entropies. Our discussion focuses not only on cryptography but also on the definitions of conditional Rényi entropies and the related information-theoretic inequalities. First, we revisit conditional Rényi entropies and clarify what kind of properties are required and actually satisfied. Then, we propose security criteria based on Rényi entropies, which suggests deep relations between (conditional) Rényi entropies and error probabilities under several guessing strategies. Based on these results, a unified impossibility proof, namely lower bounds on key sizes, is derived from conditional Rényi entropies. Our model and lower bounds include Shannon's perfect secrecy and the min-entropy-based encryption presented by Dodis, and by Alimomeni and Safavi-Naini. Finally, a new optimal symmetric key cryptosystem and almost optimal secret sharing schemes are proposed which achieve our lower bounds.
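The link between Rényi entropies and guessing can be made concrete with a small sketch (generic textbook definitions, not the paper's specific criteria): Rényi entropy is non-increasing in its order α, and the min-entropy (α = ∞) directly bounds an adversary's best single-guess success probability.

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy H_alpha(p) in bits: alpha = 1 recovers Shannon
    entropy, alpha = inf gives the min-entropy -log2(max p)."""
    if alpha == 1:
        return -sum(x * math.log2(x) for x in p if x > 0)
    if alpha == math.inf:
        return -math.log2(max(p))
    s = sum(x ** alpha for x in p if x > 0)
    return math.log2(s) / (1 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
# H_alpha is non-increasing in alpha; an adversary's best single-guess
# success probability equals 2 ** -renyi_entropy(p, math.inf).
print(renyi_entropy(p, 1), renyi_entropy(p, 2), renyi_entropy(p, math.inf))
```

Key-size lower bounds of the kind derived in the paper follow from demanding enough (conditional) entropy in the key to defeat such guessing strategies.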

Cryptographic primitives such as secure hash functions (e.g., SHA1, SHA2, and SHA3) and symmetric-key block ciphers (e.g., AES and TDES) have been commonly used to design pseudorandom generators with counter modes (e.g., in the Java Crypto Library and in the NIST SP800-90A standards). It is assumed that if these primitives are secure, then the pseudorandom generators based on them are also secure. However, no systematic research and analysis have been done to support this assumption. Based on complexity-theoretic results for pseudorandom sequences, this paper analyzes stochastic properties of long sequences produced by the hash-function-based pseudorandom generators DRBG from NIST SP800-90A and SHA1PRNG from the Java Crypto Library. Our results show that none of these sequences satisfy the law of the iterated logarithm (LIL), which holds for polynomial-time pseudorandom sequences. Our results also show that if the seeds and counters for pseudorandom generators are not appropriately chosen, then the generated sequences have strongly biased values in LIL tests and can be distinguished from uniformly chosen sequences with high probability. Based on these results, appropriate seeding and counter methods are proposed for pseudorandom generator designs. The results in this paper reveal some "non-random" behavior of SHA1, SHA2, and the recently announced SHA3.
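A minimal sketch of an LIL-style statistic (our own normalization for illustration, not necessarily the paper's exact test): viewing a bit sequence as a ±1 random walk $S_n$, the law of the iterated logarithm says the reduced quantity $S_n/\sqrt{2n\ln\ln n}$ should have limsup +1 and liminf -1 over prefixes, so values far outside $[-1, 1]$ flag bias.

```python
import math
import random

def lil_statistic(bits):
    """Normalized LIL statistic S_n / sqrt(2 n ln ln n), where
    S_n = 2*sum(bits) - n is the +/-1 random walk at time n.
    Requires n large enough that ln ln n > 0 (n >= 16)."""
    n = len(bits)
    s = 2 * sum(bits) - n
    return s / math.sqrt(2 * n * math.log(math.log(n)))

# A balanced alternating sequence sits exactly at 0; a constant
# sequence is maximally biased and blows the statistic up.
print(lil_statistic([0, 1] * 32768))
print(lil_statistic([1] * 65536))

# For a (hopefully) random-looking source, values stay moderate.
random.seed(7)
print(lil_statistic([random.getrandbits(1) for _ in range(1 << 16)]))
```

A full LIL test would track this statistic over many prefix lengths and many seeds, which is what distinguishes the biased DRBG outputs above from uniform sequences.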

2013-07-17

In 2010, Bouillaguet et al. proposed an efficient solver for polynomial systems over $\mathbb{F}_2$ that trades memory for speed. As a result, 48 quadratic equations in 48 variables can be solved on a graphics card (GPU) in 21 minutes. The research question that we would like to answer in this paper is how specifically designed hardware performs on this task. We approach the answer by solving multivariate quadratic systems on reconfigurable hardware, namely Field-Programmable Gate Arrays (FPGAs). We show that, although the algorithm proposed by Bouillaguet et al. has a better asymptotic time complexity than traditional enumeration algorithms, it does not have a better asymptotic complexity in terms of silicon area. Nevertheless, our FPGA implementation consumes 25 times less energy than their GPU implementation. This is a significant improvement, not to mention that the monetary cost per unit of computational power is generally much lower for FPGAs than for GPUs.
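For context, the underlying MQ problem can be stated in a few lines of code (a naive enumeration sketch with an invented monomial encoding, nowhere near the Gray-code solver of Bouillaguet et al., which amortizes each evaluation to constant time):

```python
from itertools import product

def solve_mq(n, equations):
    """Brute-force sketch of solving a multivariate quadratic (MQ)
    system over F_2 by enumerating all 2^n assignments. Each
    equation is a list of monomials; a monomial is a tuple of
    variable indices, with () denoting the constant 1. An equation
    is satisfied when its monomials XOR to 0."""
    sols = []
    for x in product((0, 1), repeat=n):
        if all(
            sum(all(x[i] for i in mono) for mono in eq) % 2 == 0
            for eq in equations
        ):
            sols.append(x)
    return sols

# System: x0*x1 + 1 = 0 and x1 + 1 = 0 over F_2, so x0 = x1 = 1.
eqs = [
    [(0, 1), ()],   # x0*x1 XOR 1
    [(1,), ()],     # x1 XOR 1
]
print(solve_mq(2, eqs))  # [(1, 1)]
```

The hardware results above concern exactly this enumeration workload at n = 48, where the per-assignment evaluation cost, not the asymptotics, dominates the silicon-area trade-off.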

We present a new, more constructive proof of von Neumann's Min-Max Theorem for two-player zero-sum games --- specifically, an algorithm that builds a near-optimal mixed strategy for the second player from several best responses of the second player to mixed strategies of the first player. The algorithm extends previous work of Freund and Schapire (Games and Economic Behavior '99), with the advantage that it runs in poly(n) time even when a pure strategy for the first player is a distribution chosen from a set of distributions over {0,1}^n. This extension enables a number of additional applications in cryptography and complexity theory, often yielding uniform-security versions of results that were previously only proved for nonuniform security (due to use of the non-constructive Min-Max Theorem).
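The Freund-Schapire-style core of such a constructive proof can be sketched for finite matrix games (a standard multiplicative-weights loop, not the paper's extension to exponentially large strategy sets):

```python
import math

def approx_minmax_value(A, T=2000):
    """Multiplicative-weights sketch in the spirit of Freund-Schapire:
    the row (max) player maintains weights over pure strategies, the
    column (min) player best-responds each round, and the averaged
    best-response payoff converges to the value of the game from
    below at rate O(sqrt(log m / T))."""
    m, n = len(A), len(A[0])
    eta = math.sqrt(math.log(m) / T)     # learning rate
    w = [1.0] * m
    total_payoff = 0.0
    for _ in range(T):
        tot = sum(w)
        p = [wi / tot for wi in w]       # row player's mixed strategy
        # Column player's best response to p: minimize expected payoff.
        j = min(range(n), key=lambda col: sum(p[i] * A[i][col] for i in range(m)))
        total_payoff += sum(p[i] * A[i][j] for i in range(m))
        # Exponential update shifts weight toward better-performing rows.
        for i in range(m):
            w[i] *= math.exp(eta * A[i][j])
    return total_payoff / T

# Matching pennies has value 0; the estimate approaches it from below.
print(approx_minmax_value([[1, -1], [-1, 1]]))
```

Averaging the column player's best responses over the T rounds yields the near-optimal mixed strategy for the second player that the theorem statement refers to.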

We describe several applications, including: a more modular and improved uniform version of Impagliazzo's Hardcore Theorem (FOCS '95); regularity theorems that provide efficient simulation of distributions within any sufficiently nice convex set (extending a result of Trevisan, Tulsiani and Vadhan (CCC '09)); an improved version of the Weak Regularity Lemma of Frieze and Kannan; a Dense Model Theorem for uniform algorithms; and showing impossibility of constructing Succinct Non-Interactive Arguments (SNARGs) via black-box reductions under uniform hardness assumptions (using techniques from Gentry and Wichs (STOC '11) for the nonuniform setting).

2013-07-16

From August 14 to August 16

Location: Washington, DC, United States

More Information: https://www.usenix.org/conference/usenixsecurity13

The CryptoLux/LACS team of the University of Luxembourg is looking for two Ph.D. students in cryptography and information security. We look for excellent candidates (GPA > 80%) interested in applied cryptography, anonymity and privacy, network/mobile security, code security, embedded systems security, etc. We offer an attractive research environment within a large team of cryptographers and IT security experts, and a highly competitive salary.

Potential candidates should send their application by email to lacs.acrypt(at)gmail.com. The application material should contain a cover letter explaining the candidate's motivation and research interests, a detailed CV (including photo), a list of publications, copies of diploma certificates, and names and contact details of three references.

Positions are available from 1 September 2013.