International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).

2014-09-01
15:17 [Pub][ePrint]

Attribute-based Credentials (ABCs) allow citizens to prove certain properties about themselves without necessarily revealing their full identity. Smart cards are an attractive container for such credentials, for security and privacy reasons, but their limited processing power and random-access storage capacity pose a severe challenge. Recently, we, the IRMA team, managed to fully implement a limited subset of the Idemix ABC system on a smart card, with acceptable running times. In this paper we extend this functionality by overcoming the main hurdle: limited RAM. We implement an efficient extended Pseudo-Random Number Generator (PRNG) for recomputing pseudorandomness and reconstructing variables. Using this we implement Idemix standard and domain pseudonyms, AND proofs based on prime-encoded attributes, and equality proofs of representation modulo a composite, together with terminal verification and secure messaging. In contrast to prior work that only addressed the verification of one credential with only one attribute (particularly, the master secret), we can now perform multi-credential proofs on credentials of 5 attributes and complex proofs in reasonable time. We provide a detailed performance analysis and compare our results to other approaches.
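The core idea of recomputing rather than storing pseudorandomness can be sketched as follows: keep only a short seed in scarce RAM and re-derive each pseudorandom value on demand from the seed and an index. This is an illustrative sketch only, not the paper's implementation; the function names are ours, and SHA-256 in counter mode stands in for the card's actual PRNG:

```python
import hashlib

def prf(seed: bytes, index: int, nbytes: int) -> bytes:
    # Deterministically (re)derive the index-th pseudorandom value by
    # counter-mode expansion of SHA-256 (a stand-in for the card's PRNG).
    out = b""
    ctr = 0
    while len(out) < nbytes:
        out += hashlib.sha256(
            seed + index.to_bytes(4, "big") + ctr.to_bytes(4, "big")
        ).digest()
        ctr += 1
    return out[:nbytes]

seed = b"\x01" * 16    # short secret kept in RAM
r1 = prf(seed, 0, 32)  # used early in the proof, then discarded
# ... later in the protocol, the same value is reconstructed on demand
# instead of having occupied RAM the whole time:
assert prf(seed, 0, 32) == r1
```

The trade-off is recomputation time for RAM, which matches the paper's setting where processing power is limited but RAM is the binding constraint.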

2014-08-31
15:17 [Pub][ePrint]

Most entropy notions $H(\cdot)$, like Shannon or min-entropy, satisfy a chain rule stating that for random variables $X$, $Z$ and $A$ we have $H(X|Z,A)\ge H(X|Z)-|A|$. That is, by conditioning on $A$ the entropy of $X$ can decrease by at most the bitlength $|A|$ of $A$.
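In the information-theoretic case this chain rule can be checked directly on a small joint distribution. The sketch below (our illustration, not from the paper) verifies $H(X|A)\ge H(X)-|A|$ for Shannon entropy, taking $X$ to be two uniform bits and $A$ the first bit of $X$:

```python
import math
from collections import defaultdict

def entropy(pmf):
    # Shannon entropy in bits of a pmf given as {outcome: probability}
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(joint, idx):
    # marginalize a joint pmf (keyed by tuples) onto the given coordinates
    m = defaultdict(float)
    for outcome, p in joint.items():
        m[tuple(outcome[i] for i in idx)] += p
    return m

# X = (x0, x1): two uniform bits; A = x0, so |A| = 1 bit
joint = {(x0, x1, x0): 0.25 for x0 in (0, 1) for x1 in (0, 1)}

H_X  = entropy(marginal(joint, (0, 1)))  # H(X)   = 2.0
H_A  = entropy(marginal(joint, (2,)))    # H(A)   = 1.0
H_XA = entropy(joint)                    # H(X,A) = 2.0 (A is a function of X)

H_X_given_A = H_XA - H_A                 # H(X|A) = H(X,A) - H(A) = 1.0
assert H_X_given_A >= H_X - 1            # chain rule with |A| = 1
print(H_X, H_X_given_A)                  # 2.0 1.0
```

Here the bound is tight: revealing the one-bit $A$ removes exactly one bit of entropy. The paper's point is that the computational (HILL) analogue of this innocuous-looking inequality fails.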

Such chain rules are known to hold for some computational entropy notions like Yao's and unpredictability entropy. For HILL entropy, the computational analogue of min-entropy, the chain rule is of special interest and has found many applications, including leakage-resilient cryptography, deterministic encryption and memory delegation.

These applications rely on restricted special cases of the chain rule. Whether the chain rule for conditional HILL entropy holds in general was an open problem, for which we give a strong negative answer: we construct joint distributions $(X,Z,A)$, where $A$ is a distribution over a *single* bit, such that the HILL entropy $H_\infty(X|Z)$ is large but $H_\infty(X|Z,A)$ is basically zero.

Our counterexample makes only the minimal assumption that $\mathbf{NP}\nsubseteq\mathbf{P/poly}$. Under the stronger assumption that injective one-way functions exist, we can make all the distributions efficiently samplable.

Finally, we show that more sophisticated cryptographic objects, like lossy functions, can be used to sample a distribution constituting a counterexample to the chain rule while making only a single invocation of the underlying object.

15:17 [Pub][ePrint]

Attribute-based encryption (ABE), which allows users to encrypt and decrypt messages based on user attributes, is a type of one-to-many encryption. Unlike conventional one-to-one encryption, which makes no attempt to prevent partners of the intended receiver from obtaining the plaintext, an ABE system tries to exclude certain unintended recipients from obtaining the plaintext regardless of whether they are partners of some intended recipient. We remark that this requirement is very hard to meet: an ABE system cannot truly exclude unintended recipients from decryption, because users can exchange their decryption keys in order to maximize their own interests. This flaw discounts the importance of the cryptographic primitive.

15:17 [Pub][ePrint]

SIMON is a family of ten lightweight block ciphers published by Beaulieu et al. from the U.S. National Security Agency (NSA). The cipher in this family with a $K$-bit key and $N$-bit block is called SIMON ${N}/{K}$. In this paper we investigate the security of SIMON against different variants of linear cryptanalysis, i.e., classic linear, multiple linear and linear hull attacks. We present connections between linear and differential characteristics, between multiple linear attacks and differentials, and between linear hulls and differentials, and employ them to adapt the currently known results on differential cryptanalysis of SIMON to linear cryptanalysis of this block cipher. Our best linear cryptanalysis covers SIMON 32/64 reduced to 20 rounds out of 32 with data complexity $2^{31.69}$ and time complexity $2^{59.69}$. We have implemented our attacks for small-scale variants of SIMON, and our experiments confirm the theoretical bias presented in this work. So far, our results are the best known with respect to linear cryptanalysis for any variant of SIMON.
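The small-scale experiments mentioned above rely on the SIMON round function, which is public. A minimal sketch of one round of SIMON32 (16-bit words, rotation amounts 1, 8, 2 from the Beaulieu et al. specification) and its inverse, of the kind one would use to test trails experimentally:

```python
MASK = 0xFFFF  # 16-bit words in SIMON32

def rol(x, r):
    # rotate a 16-bit word left by r positions
    return ((x << r) | (x >> (16 - r))) & MASK

def f(x):
    # SIMON's non-linear function: (S^1 x & S^8 x) xor S^2 x
    return (rol(x, 1) & rol(x, 8)) ^ rol(x, 2)

def simon_round(x, y, k):
    # one Feistel-like round: (x, y) -> (y ^ f(x) ^ k, x)
    return y ^ f(x) ^ k, x

def simon_unround(x, y, k):
    # inverse round, handy when extending a distinguisher backwards
    return y, x ^ f(y) ^ k

# round-trip sanity check (the key word here is illustrative)
x, y, k = 0x1234, 0xABCD, 0x0F0F
assert simon_unround(*simon_round(x, y, k), k) == (x, y)
```

Since only rotations, AND and XOR are involved, one round is cheap enough to evaluate over millions of random plaintexts when empirically estimating the bias of a linear approximation.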

2014-08-30
18:17 [Pub][ePrint]

Nowadays there are different types of attacks on block and stream ciphers. In this work we present some of the most widely used attacks on stream ciphers. We present the newest techniques, each illustrated with an example of its use against a cipher, explained and commented on. Beforehand, we explain the difference between block ciphers and stream ciphers.

15:17 [Pub][ePrint]

Recently a series of expressive, secure and efficient Attribute-Based Encryption (ABE) schemes, both in key-policy flavor and ciphertext-policy flavor, have been proposed.

However, before being applied into practice, these systems have to attain traceability of malicious users.

As the decryption privilege of a decryption key in Key-Policy ABE (resp. Ciphertext-Policy ABE) may be shared by multiple users who own the same access policy (resp. attribute set), malicious users might be tempted to leak their decryption privileges to third parties, for financial gain for example, if there is no tracing mechanism for tracking them down.

In this work we study the traceability notion in the setting of Key-Policy ABE, and formalize Key-Policy ABE supporting fully collusion-resistant blackbox traceability. An adversary is allowed to access an arbitrary number of keys of its own choice when building a decryption device; given such a device, even when the underlying decryption algorithm or keys are not revealed, a blackbox tracing algorithm can identify at least one of the malicious users whose keys were used to build it.

We propose a construction which supports both fully collusion-resistant blackbox traceability and high expressiveness (i.e., supporting any monotonic access structure). The construction is fully secure in the standard model (i.e., it achieves the best security level that conventional non-traceable ABE systems do to date), and it is efficient in that fully collusion-resistant blackbox traceability is attained at the price of making ciphertexts grow only sub-linearly in the number of users in the system, which is the most efficient level to date.

12:17 [Pub][ePrint]

Streebog is a new Russian hash function standard. It follows the HAIFA framework as its domain extension algorithm and claims to resist recent generic second-preimage attacks with long messages. However, we demonstrate in this article that the specific instantiation of the HAIFA framework used in Streebog makes it weak against such attacks. More precisely, we observe that Streebog makes rather poor use of the HAIFA counter input in the compression function, which allows us to construct second preimages on the full Streebog-512 with a complexity as low as $2^{266}$ compression function evaluations for long messages. This complexity has to be compared with the expected $2^{512}$ computation bound that an ideal hash function should provide. Our work is a good example that one must be careful when using a design framework for which not all instances are secure. HAIFA helps designers to build a secure hash function, but one should pay attention to the way the counter is handled inside the compression function.
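For readers unfamiliar with HAIFA, its distinguishing feature is that the compression function takes the number of bits hashed so far as an extra input, which is what blocks classic long-message second-preimage attacks, provided the counter is actually mixed in well (the point on which Streebog falls short). A generic sketch of HAIFA-style chaining, with SHA-256 standing in for a real compression function:

```python
import hashlib

def haifa_compress(h, block, bits_hashed, salt=b""):
    # HAIFA-style compression step: chaining value, message block,
    # bit counter and salt all feed the compression function.
    # SHA-256 here is only a stand-in, not Streebog's g-function.
    return hashlib.sha256(
        h + block + bits_hashed.to_bytes(8, "big") + salt
    ).digest()

def haifa_hash(msg, block_size=32):
    h = b"\x00" * 32  # illustrative IV
    bits = 0
    for i in range(0, len(msg), block_size):
        chunk = msg[i:i + block_size]
        bits += 8 * len(chunk)
        h = haifa_compress(h, chunk.ljust(block_size, b"\x00"), bits)
    return h
```

Because each call sees a distinct counter value, identical blocks at different message positions produce different chaining values, which is the property the generic attacks exploit when it is absent.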

00:17 [Pub][ePrint]

Oblivious RAMs (ORAMs) have traditionally been measured by their *bandwidth overhead* and *client storage*. We observe that when using ORAMs to build secure computation protocols for RAM programs, the *size* of the ORAM circuits is more relevant to the performance.

We therefore embark on a study of the *circuit complexity* of several recently proposed ORAM constructions. Our careful implementation and experiments show that asymptotic analysis is not indicative of the true performance of ORAM in secure computation protocols with practical data sizes.

We then present SCORAM, a heuristic *compact* ORAM design optimized for secure computation protocols. Our new design is almost 10x smaller in circuit size and also faster than all other designs we have tested for realistic settings (i.e., memory sizes between 4MB and 2GB, constrained by $2^{-80}$ failure probability). SCORAM makes it feasible to perform secure computations on gigabyte-sized data sets.

00:17 [Pub][ePrint]

Oblivious RAM (ORAM) constructions have traditionally been measured by their bandwidth cost, or the blowup in the ORAM's running time in comparison with the non-oblivious baseline. While these metrics can suitably characterize an ORAM's performance in secure processor and cloud outsourcing applications, recent works have observed that other applications, such as secure multi-party computation, demand a different metric, namely the ORAM's circuit complexity.

Following the tree-based ORAM paradigm by Shi et al., we propose a new ORAM scheme called Circuit ORAM. Circuit ORAM achieves $O(D \log N) \omega(1)$ total circuit size (over all protocol interactions) for memory words of $D = \Omega(\log^2 N)$ bits, while achieving a negligible failure probability. (We use the notation $g(N) = O(f(N)) \omega(1)$ to denote that for any $\alpha(N) = \omega(1)$, it holds that $g(N) = O(f(N) \alpha(N))$.) For memory words of this size, Circuit ORAM achieves smaller circuits, both asymptotically and in practice, than all previously known ORAM schemes.

Empirical results suggest that Circuit ORAM yields circuits that are 8x to 48x smaller than Path ORAM for datasets of roughly 1GB. The speedup will be even greater for larger data sizes.

Circuit ORAM is also theoretically interesting when interpreted under the traditional metrics. Parameterizing the scheme slightly differently, we show the following. Let $0 < \epsilon < 1$ denote any constant, and consider a family of RAMs with $N$ words, each of which is $N^\epsilon$ bits in size. Any RAM in this class can be compiled to an Oblivious RAM with $O(1)$ words of CPU cache, running in $O(T \log N) \omega(1)$ time and achieving negligible statistical failure probability (or running in $O(T \log N)$ time but with inverse-polynomial failure probability). This suggests that certain stronger interpretations of the Goldreich-Ostrovsky ORAM lower bound are tight: in particular, their lower bound generalizes trivially to any $O(1)$ failure probability, and works for arbitrary memory word sizes.

00:17 [Pub][ePrint]

The resistance of a cryptographic implementation with regard to side-channel analysis is often quantified by measuring the success rate of a given attack. This approach cannot always be followed in practice, especially when the implementation includes some countermeasures that may render the attack too costly for an evaluation purpose, but not costly enough from a security point of view. An evaluator then faces the issue of estimating the success rate of an attack he cannot mount. The present paper addresses this issue by presenting a methodology to estimate the success rate of higher-order side-channel attacks targeting implementations protected by masking. Specifically, we generalize the approach initially proposed at SAC 2008 in the context of first-order side-channel attacks. The principle is to approximate the distribution of an attack's score vector by a multivariate Gaussian distribution, whose parameters are derived by profiling the leakage. One can then accurately compute the expected attack success rate with respect to the number of leakage measurements. We apply this methodology to higher-order side-channel attacks based on the widely used correlation and likelihood distinguishers. Moreover, we validate our approach with simulations and practical attack experiments against masked AES implementations running on two different microcontrollers.
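The Gaussian-approximation principle can be illustrated with a toy computation. Below, the per-trace mean and standard deviation of the score differences (correct key minus each wrong-key hypothesis) are assumed to come from a profiling phase; the numbers are invented for illustration and are not from the paper's experiments. For simplicity the components are treated as independent, a diagonal-covariance special case of the multivariate Gaussian model:

```python
import math

# Assumed profiling output: per-trace mean and std of the score difference
# d_k = score(correct key) - score(wrong key k), one entry per wrong-key
# hypothesis. Illustrative values only.
MU = [0.02, 0.015, 0.01]
SIGMA = [0.8, 0.8, 0.8]

def phi(z):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def success_rate(n_traces):
    # After n i.i.d. traces, the accumulated difference for hypothesis k is
    # approximately N(n * mu_k, n * sigma_k^2). The attack succeeds when all
    # differences are positive, i.e. the correct key ranks first; under
    # independence this probability is a product of Gaussian tail terms.
    p = 1.0
    for m, s in zip(MU, SIGMA):
        p *= phi(math.sqrt(n_traces) * m / s)
    return p

for n in (100, 1000, 10000):
    print(n, success_rate(n))
```

The success rate grows with the number of traces, and the same computation run in reverse tells the evaluator how many measurements an attacker would need to reach a target success rate, which is precisely the quantity of interest when the attack itself is too costly to mount.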

00:17 [Pub][ePrint]

Recent work on proof-based verifiable computation has resulted in built systems that employ tools from complexity theory and cryptography to address a basic problem in systems security: allowing a local computer to outsource the execution of a program while providing the local computer with a guarantee of integrity and the remote computer with a guarantee of privacy. However, support for programs that use RAM and complicated control flow has been problematic. State-of-the-art systems restrict the use of these constructs (e.g., requiring static loop bounds), incur sizable overhead on every step to support these constructs, or pay tremendous costs when the constructs are invoked.

This paper describes Buffet, a built system that solves these problems by providing inexpensive "a la carte" RAM and dynamic control flow constructs. Buffet composes an elegant prior approach to RAM with a novel adaptation of techniques from the compiler community. The result is a system that allows the programmer to express programs in an expansive subset of C (disallowing only "goto" and function pointers), can handle essentially any example in the verifiable computation literature, and achieves the best performance in the area by multiple orders of magnitude.