International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources for finding out about changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2015-04-24
03:17 [Pub][ePrint]

The existence of tight reductions in cryptographic security proofs is an important question, motivated by the theoretical search for cryptosystems whose security guarantees are truly independent of adversarial behavior and the practical necessity of concrete security bounds for the theoretically-sound selection of cryptographic parameters.

At Eurocrypt 2002, Coron described a meta-reduction technique that makes it possible to prove the impossibility of tight reductions for certain digital signature schemes.

This seminal result has found many further interesting applications.

However, due to a technical subtlety in the argument, the applicability of this technique beyond digital signatures in the single-user setting has turned out to be rather limited.

We describe a new meta-reduction technique for proving such impossibility results, which improves on known ones in several ways.

First, it enables interesting novel applications. These include a formal proof that for certain cryptographic primitives (including public-key encryption/key encapsulation mechanisms and digital signatures), the security loss incurred when the primitive is transferred from an idealized single-user setting to the more realistic multi-user setting is impossible to avoid, as well as a lower tightness bound for non-interactive key exchange protocols. Second, the technique allows one to rule out tight reductions from a very general class of non-interactive complexity assumptions. Third, the provided bounds are quantitatively and qualitatively better, yet simpler, than the bounds derived from Coron's technique and its extensions.

2015-04-23
15:17 [Pub][ePrint]

A {\em randomized encoding} allows one to express a ``complex'' computation, given by a function $f$ and input $x$, by a ``simple to compute'' randomized representation $\hat{f}(x)$ whose distribution encodes $f(x)$, while revealing nothing else about $f$ and $x$. Existing randomized encodings, geared mostly toward encoding with low parallel complexity, have proven instrumental in strong applications such as multiparty computation and parallel cryptography.

This work focuses on another natural complexity measure: {\em the time required to encode}. We construct {\em succinct randomized encodings} where the time to encode a computation, given by a program $\Pi$ and input $x$, is essentially independent of $\Pi$'s time complexity, and depends only on its space complexity as well as the size of its input, output, and description. The scheme guarantees computational privacy of $(\Pi,x)$, and is based on indistinguishability obfuscation for a relatively simple circuit class, for which there exist instantiations based on polynomial hardness assumptions on multilinear maps.

We then invoke succinct randomized encodings to obtain several strong applications, including:

\begin{itemize}

\item
Succinct indistinguishability obfuscation, where the obfuscated program $iO(\Pi)$ computes the same function as $\Pi$ for inputs $x$ of a-priori-bounded size. Obfuscating $\Pi$ is roughly as fast as encoding the computation of $\Pi$ on any such input $x$. Here we also require subexponentially-secure indistinguishability obfuscation for circuits.

\item
Succinct functional encryption, where a functional decryption key corresponding to $\Pi$ allows decrypting $\Pi(x)$ from encryptions of any plaintext $x$ of a-priori-bounded size. Key derivation is as fast as encoding the corresponding computation.

\item
Succinct reusable garbling, a stronger form of randomized encoding where any number of inputs $x$ can be encoded separately from $\Pi$, independently of $\Pi$'s time and space complexity.

\item
Publicly-verifiable 2-message delegation, where verifying the result of a long computation given by $\Pi$ and input $x$ is as fast as encoding the corresponding computation. We also show how to transform any 2-message delegation scheme into an essentially non-interactive system where the verifier's message is reusable.

\end{itemize}

Previously, succinct randomized encodings or any of the above applications were only known based on various non-standard knowledge assumptions.

At the heart of our techniques is a generic method of compressing a piecemeal garbled computation without revealing anything about the secret randomness used for garbling.

15:17 [Pub][ePrint]

In this paper, we consider the cycle structures of feedback shift registers (FSRs). First, the cycle structures of two special classes of FSRs, pure circulating registers (PCRs) and pure summing registers (PSRs), are studied, and it is proved that no other FSR has the same cycle structure as a PCR (or PSR). Then, we regard $n$-stage FSRs as permutations over $2^n$ elements. By group theory, two permutations have the same cycle structure if and only if they are conjugate. Since a conjugate of an FSR may no longer be an FSR, it is interesting to consider the permutations that always map an FSR to an FSR. It is proved that there are exactly two such permutations: the identity mapping and the mapping that sends every state to its dual. Furthermore, we prove that these are exactly the two permutations that map any maximum-length FSR to a maximum-length FSR.
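As a toy illustration of the object studied here (not code from the paper): a pure circulating register simply rotates its state, so its cycle structure on the $2^n$ states can be enumerated directly. The sketch below assumes the standard PCR definition, where the next state of $(x_0, \ldots, x_{n-1})$ is $(x_1, \ldots, x_{n-1}, x_0)$.

```python
from itertools import product

def pcr_next(state):
    # Pure circulating register: cyclically shift the state by one bit.
    return state[1:] + state[:1]

def cycle_structure(n):
    """Return the sorted cycle lengths of the n-stage PCR acting on
    all 2^n states; these lengths partition the state space."""
    seen = set()
    lengths = []
    for start in product((0, 1), repeat=n):
        if start in seen:
            continue
        length, s = 0, start
        while s not in seen:
            seen.add(s)
            s = pcr_next(s)
            length += 1
        lengths.append(length)
    return sorted(lengths)

print(cycle_structure(4))  # → [1, 1, 2, 4, 4, 4]
```

For n = 4 the cycles are the two constant states, the 2-cycle {0101, 1010}, and three 4-cycles, matching the count of binary necklaces of length 4.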

15:17 [Pub][ePrint]

The first fall degree assumption provides a complexity approximation for Gröbner basis algorithms when the degree of regularity of a polynomial system cannot be precisely evaluated. Most importantly, this assumption was recently used by Petit and Quisquater to conjecture that the elliptic curve discrete logarithm problem over binary fields (binary ECDLP) can be solved in subexponential time. The validity of the assumption may, however, depend on the systems in play.

In this paper, we theoretically and experimentally study the first fall degree assumption for a class of polynomial systems including those considered in Petit and Quisquater's analysis. In some cases, we show that the first fall degree assumption appears to hold, and we deduce complexity improvements over previous binary ECDLP algorithms. On the other hand, we also show that the assumption is unlikely to hold in other cases, where it would have very unexpected consequences.

Our results shed light on a Gröbner basis assumption with major consequences for several cryptanalysis problems, including binary ECDLP.

15:17 [Pub][ePrint]

Masking is a widely used countermeasure to protect block cipher implementations against side-channel attacks. The principle is to split every sensitive intermediate variable occurring in the computation into d + 1 shares, where d is called the masking order and plays the role of a security parameter. A masked implementation is then said to achieve dth-order security if no set of d (or fewer) intermediate variables reveals key-dependent information. At CHES 2010, Rivain and Prouff proposed a higher-order masking scheme for AES that works for any order d. This scheme, and its subsequent extensions, are based on an improved version of the shared multiplication processing published by Ishai et al. at CRYPTO 2003. The improvement enables better memory/timing performance, but its security relies on refreshing the masks at certain points in the algorithm. In this paper, we show that the mask-refreshing method proposed at CHES 2010 introduces a security flaw in the overall masking scheme. Specifically, we show that it is vulnerable to an attack of order d/2 + 1, whereas the scheme is supposed to achieve dth-order security. After exhibiting and analyzing the flaw, we propose a new solution that avoids mask refreshing, and we prove its security. We also provide an implementation trick that makes our proposed solution not only secure but also faster than the original scheme.
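The d + 1 share splitting mentioned above can be sketched generically as Boolean (XOR) masking. This is a minimal illustration of the share representation only, not the Rivain-Prouff multiplication gadget or its refreshing procedure.

```python
import secrets

def mask(x, d, bits=8):
    """Split a sensitive value x into d+1 Boolean (XOR) shares.
    Any d of the shares are uniformly random on their own and
    therefore reveal nothing about x."""
    shares = [secrets.randbits(bits) for _ in range(d)]
    last = x
    for s in shares:
        last ^= s          # last share is chosen so the XOR equals x
    return shares + [last]

def unmask(shares):
    # Recombine by XORing all d+1 shares back together.
    out = 0
    for s in shares:
        out ^= s
    return out

shares = mask(0x3A, d=3)   # 4 shares for 3rd-order masking
assert unmask(shares) == 0x3A
```

The security-relevant work in a real masked implementation lies in computing *on* the shares (especially multiplications) without ever recombining them; the flaw discussed in the abstract concerns exactly such share-level processing.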

15:17 [Pub][ePrint]

We revisit the question of achieving differential privacy with realistic imperfect randomness. In the design of differentially private mechanisms, it is usually assumed that a uniformly random source is available. In many situations, however, this assumption is unrealistic, and one must deal with various imperfect random sources. Dodis et al. (CRYPTO'12) showed that differential privacy can be achieved with a Santha-Vazirani (SV) source by adding a stronger property called SV-consistent sampling, and left open the question of whether differential privacy is possible with more realistic (i.e., less structured) sources than the SV source. The Bias-Control Limited (BCL) source, introduced by Dodis (ICALP'01) as a generalization of the SV source and the sequential bit-fixing source, is more realistic. Unfortunately, if we naturally extend SV-consistent sampling to the BCL source, the extension cannot achieve differential privacy. One main reason is that SV-consistent sampling requires "consecutive" strings, while some strings cannot be generated from a "non-trivial" BCL source.

Motivated by this question, we introduce a new appealing property, called compact BCL-consistent sampling, whose degeneration differs from the SV-consistent sampling proposed by Dodis et al. We prove that if a mechanism based on the BCL source satisfies this property, then it is differentially private. Even when the BCL source degenerates to the SV source, our proof is much more intuitive and simpler than that of Dodis et al. Furthermore, we construct explicit mechanisms using a new truncation technique as well as arithmetic coding, and we give concrete results for their differential privacy and accuracy. While the results of [DY14] imply that the parameters of any differentially private mechanism for imperfect randomness must satisfy certain constraints, we give an explicit construction of such mechanisms whose parameters match those constraints.
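To make the imperfect-randomness model concrete: in a Santha-Vazirani source, each bit may be adversarially biased, but only within a fixed bound δ conditioned on the history. The sketch below is an illustrative simulation of that model (the adversary interface and clamping are my assumptions, not a construction from the paper).

```python
import random

def sv_source(n, delta, adversary=None):
    """Sample n bits from a simulated Santha-Vazirani source:
    conditioned on the previous bits, each new bit is 1 with
    probability in [1/2 - delta, 1/2 + delta]. The (hypothetical)
    adversary picks a bias as a function of the history; it is
    clamped so the SV bound always holds."""
    bits = []
    for _ in range(n):
        bias = adversary(bits) if adversary else 0.0
        bias = max(-delta, min(delta, bias))  # enforce the SV constraint
        bits.append(1 if random.random() < 0.5 + bias else 0)
    return bits

# An adversary that pushes every bit toward 1 as hard as allowed.
sample = sv_source(1000, delta=0.1, adversary=lambda hist: 1.0)
```

A BCL source generalizes this by additionally letting the adversary fully fix a bounded number of bits, which is what breaks the "consecutive strings" requirement discussed above.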

15:17 [Pub][ePrint]

We present a new definition of computationally binding commitment schemes in the quantum setting, which we call "collapse-binding". The definition applies to string commitments, composes in parallel, and works well with rewinding-based proofs. We give simple constructions of collapse-binding commitments in the random oracle model, giving evidence that they can be realized from hash functions like SHA-3. We demonstrate the usefulness of our definition by constructing three-round statistical zero-knowledge quantum arguments of knowledge for all NP languages.

15:17 [Pub][ePrint]

In this work, we define a new notion of weakly Random-Self-Reducible cryptosystems and show how it can be used to implement secure Oblivious Transfer. We also show that two recent post-quantum cryptosystems (based on Learning With Errors and Approximate Integer GCD) can be considered weakly Random-Self-Reducible.

15:17 [Pub][ePrint]

We consider the generic design of a tweakable blockcipher from one or more evaluations of a classical blockcipher, in such a way that all input and output wires are of size n bits. As a first contribution, we show that any tweakable blockcipher with one primitive call and arbitrary linear pre- and postprocessing functions can be distinguished from an ideal one with an attack complexity of about 2^{n/2}. Next, we introduce the tweakable blockcipher $\tilde{F}[1]$. It consists of one multiplication and one blockcipher call with a tweak-dependent key, and achieves 2^{2n/3} security. Finally, we introduce $\tilde{F}[2]$, which makes two blockcipher calls, one of them with a tweak-dependent key, and achieves optimal 2^n security. Both schemes are more efficient than all existing beyond-birthday-bound tweakable blockciphers known to date, as long as one blockcipher key renewal is cheaper than one blockcipher evaluation plus one universal hash evaluation.
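The "one primitive call with linear pre- and postprocessing" shape analyzed above can be illustrated schematically. Everything below is a toy: the 4-round Feistel "blockcipher" and the tweak-to-offset derivation are insecure placeholders of my own choosing, shown only to make the syntactic structure E_K(x ⊕ Δ(t)) ⊕ Δ(t) concrete; the abstract's result says this whole class caps out at roughly 2^{n/2} security.

```python
import hashlib

N = 32            # toy block size in bits
HALF = N // 2
MASK = (1 << HALF) - 1

def F(key, r, data):
    # Round/derivation function: hash of (key, round, data), HALF bits.
    h = hashlib.sha256(f"{key}:{r}:{data}".encode()).digest()
    return int.from_bytes(h[:4], "big") & MASK

def toy_blockcipher(key, x):
    # 4-round Feistel network: a genuine permutation on N-bit blocks
    # (illustrative only, NOT a secure cipher).
    L, R = (x >> HALF) & MASK, x & MASK
    for r in range(4):
        L, R = R, L ^ F(key, r, R)
    return (L << HALF) | R

def tweakable_encrypt(key, tweak, x):
    # Generic one-call shape: XOR a tweak-derived offset before and
    # after the single blockcipher call (linear pre/postprocessing).
    offset = F(key, 98, tweak) << HALF | F(key, 99, tweak)
    return toy_blockcipher(key, x ^ offset) ^ offset
```

For each fixed (key, tweak) this is a permutation of the N-bit block space, which is the defining syntactic requirement of a tweakable blockcipher; the paper's constructions $\tilde{F}[1]$ and $\tilde{F}[2]$ escape the 2^{n/2} barrier precisely by going beyond this one-call linear shape.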

15:17 [Pub][ePrint]

Nowadays, recommender systems have become an indispensable part of our daily life and provide personalized services for almost everything. However, nothing is for free -- such systems have also upset society with severe privacy concerns, because they accumulate a lot of personal information in order to provide recommendations. In this work, we construct privacy-preserving recommendation protocols by incorporating cryptographic techniques and the inherent data characteristics of recommender systems. We first revisit the protocols by Jeckmans et al. at ESORICS 2013 and show a number of security and usability issues. Then, we propose two privacy-preserving protocols, which compute predicted ratings for a user based on inputs from both the user's friends and a set of randomly chosen strangers. A user has the flexibility to retrieve either a predicted rating for an unrated item or the Top-N unrated items. The proposed protocols prevent information leakage from both protocol executions and protocol outputs: a somewhat homomorphic encryption scheme is used to run all computations in encrypted form, and inputs from the randomly chosen strangers guarantee that the inputs of a user's friends will not be compromised even if this user's outputs are leaked. Finally, we use the well-known MovieLens 100k dataset to evaluate the performance for different parameter sizes.

15:17 [Pub][ePrint]

Recently, Wang et al. analyzed the security of two EPC C1-G2 compliant RFID authentication protocols, called RAPLT and SRP^+, and proved that these protocols are vulnerable to de-synchronization and secret disclosure attacks. The time complexity of their attacks was $O(2^{16})$. In addition, they proposed an improved version of SRP^+, entitled SRP^{++}, for which they claimed a security level of $O(2^{32})$. However, in this letter, we analyze the security of SRP^{++} and show that the complexity of retrieving all secret parameters of a given tag is $O(2^{16})$, the same as for its predecessor protocol.