International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2015-06-21
18:17 [Pub][ePrint] Generating S-Box Multivariate Quadratic Equation Systems And Estimating Algebraic Attack Resistance Aided By SageMath, by A.-M. Leventi-Peetz and J.-V. Peetz

  Methods are presented to derive, with the aid of the computer mathematics software system SageMath, the multivariate quadratic (MQ) equation systems for the input and output bit variables of a cryptographic S-box, starting from its algebraic expressions. The motivation for this work was the results of recent articles, which we have verified and extended in an original way that, to our knowledge, has not yet been published elsewhere. At the same time we present results contrary to the published ones, which cast serious doubt on the suitability of previously presented formulas that are supposed to quantify the resistance of S-boxes against algebraic attacks.
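The abstract's core object, the MQ system of an S-box, can be computed with plain linear algebra over GF(2): evaluate every monomial of degree at most 2 in the input and output bits at each (x, S(x)) pair, and read the quadratic equations off the null space of the resulting matrix. A minimal Python sketch (not the paper's SageMath code; the 4-bit PRESENT S-box is chosen here purely as a familiar example):

```python
from itertools import combinations

# Count the multivariate quadratic (MQ) equations satisfied by a 4-bit S-box.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # PRESENT S-box
N = 4  # bit width

def bits(v):
    return [(v >> i) & 1 for i in range(N)]

def monomial_row(x, y):
    """Evaluations of 1, all variables, and all products of two variables."""
    v = bits(x) + bits(y)
    return [1] + v + [a & b for a, b in combinations(v, 2)]

rows = [monomial_row(x, SBOX[x]) for x in range(2 ** N)]
num_monomials = len(rows[0])  # 1 + 8 + C(8,2) = 37

# Pack each row into an integer and compute the rank over GF(2).
packed = [sum(bit << i for i, bit in enumerate(r)) for r in rows]
pivots = {}  # highest set bit -> reduced row
for r in packed:
    while r:
        h = r.bit_length() - 1
        if h in pivots:
            r ^= pivots[h]
        else:
            pivots[h] = r
            break
rank = len(pivots)

# Every vector in the null space is one independent MQ equation.
num_equations = num_monomials - rank
print(num_equations)  # 21, the well-known count for the PRESENT S-box
```

The same null-space idea scales to 8-bit S-boxes (256 rows, 153 monomials), where SageMath's exact linear algebra becomes genuinely useful.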



18:17 [Pub][ePrint] TriviA: A Fast and Secure Authenticated Encryption Scheme, by Avik Chakraborti, Anupam Chattopadhyay, Muhammad Hassan, Mridul Nandi

  In this paper, we propose a new hardware-friendly authenticated encryption (AE) scheme TriviA based on (i) a stream cipher for generating keys for the ciphertext and the tag, and (ii) a pairwise independent hash to compute the tag. We have adopted one of the ISO-standardized stream ciphers for lightweight cryptography, namely Trivium, to obtain our underlying stream cipher. This new stream cipher has a state that is a little larger than the state of Trivium to accommodate a 128-bit secret key and IV. Our pairwise independent hash is also an adaptation of the EHC or "Encode-Hash-Combine" hash, which requires the optimum number of field multiplications and hence a small hardware footprint. We have implemented the design in synthesizable RTL. Pre-layout synthesis, using 65 nm standard cell technology under typical operating conditions, reveals that TriviA is able to achieve a high throughput of 91.2 Gbps for an area of 24.4 KGE. We prove that our construction has at least 128-bit security for privacy and 124-bit security for authenticity under the assumption that the underlying stream cipher produces a pseudorandom bit stream.
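The pairwise-independence property the tag computation relies on can be illustrated with the textbook hash family h(x) = a*x + b over a finite field (this is not TriviA's EHC hash, just a minimal sketch of the property):

```python
# Pairwise independence demo over GF(2^8): for a random key (a, b) and any
# fixed x1 != x2, the output pair (h(x1), h(x2)) is exactly uniform.

def gf_mul(a, b):
    """Multiplication in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def h(a, b, x):
    return gf_mul(a, x) ^ b

# The map (a, b) -> (h(x1), h(x2)) is a bijection on GF(2^8)^2, because the
# difference h(x1) ^ h(x2) = a * (x1 ^ x2) determines a, and then b.
x1, x2 = 0x03, 0x07
pairs = {(h(a, b, x1), h(a, b, x2)) for a in range(256) for b in range(256)}
assert len(pairs) == 256 * 256  # every output pair occurs exactly once
```

A forger who never sees the key therefore gains no information about the tag of one message from the tag of another, which is what drives the 124-bit authenticity bound.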



18:17 [Pub][ePrint] How much randomness can be extracted from memoryless Shannon entropy sources?, by Maciej Skorski

  We revisit the classical problem: given a memoryless source having a certain amount of Shannon Entropy, how many random bits can be extracted? This question appears in works studying random number generators built from physical entropy sources.

Some authors use a heuristic estimate obtained from the Asymptotic Equipartition Property, which yields roughly $n$ extractable bits, where $n$ is the total Shannon entropy amount. However, the best known precise form gives only $n-O(\sqrt{\log(1/\epsilon) n})$, where $\epsilon$ is the distance of the extracted bits from uniform. In this paper we show a matching $n-\Omega(\sqrt{\log(1/\epsilon) n})$ upper bound. Therefore, the loss of $\Theta(\sqrt{\log(1/\epsilon) n})$ bits is necessary. As we show, this theoretical bound is of practical relevance. Namely, applying the imprecise AEP heuristic to a mobile phone accelerometer one might overestimate extractable entropy even by $100\%$, no matter what the extractor is. Thus, the "AEP extracting heuristic" should not be used without taking the precise error into account.
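A quick back-of-the-envelope computation shows how the loss term can indeed reach 100% of the naive estimate (the hidden constants in the Theta() are set to 1 here, so treat these numbers as order-of-magnitude only):

```python
import math

def extractable_bits(n, eps):
    """The precise form n - sqrt(log2(1/eps) * n), vs. the AEP estimate n."""
    return n - math.sqrt(math.log2(1.0 / eps) * n)

eps = 2.0 ** -80   # required distance of the output from uniform
n = 320            # total Shannon entropy of the source, in bits

precise = extractable_bits(n, eps)     # 320 - sqrt(80 * 320) = 160
overestimate = n / precise             # AEP claims n bits, i.e. 2x too much

# For a low-entropy source like an accelerometer, the loss term
# sqrt(80 * 320) = 160 eats exactly half of n, a 100% overestimate.
```

For large n (say, a hardware TRNG pool with n = 10^6) the same loss term is only about 0.9% of n, which is why the AEP heuristic looks harmless in high-entropy settings.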



18:17 [Pub][ePrint] Oblivion: Mitigating Privacy Leaks by Controlling the Discoverability of Online Information, by Milivoj Simeonovski and Fabian Bendun and Muhammad Rizwan Asghar and Michael Backes and Ninja Marnau and

  Search engines are the prevalently used tools to collect information about individuals on the Internet. Search results typically comprise a variety of sources that contain personal information --- either intentionally released by the person herself, or unintentionally leaked or published by third parties without being noticed, often with detrimental effects on the individual's privacy. To grant individuals the ability to regain control over their disseminated personal information, the European Court of Justice recently ruled that EU citizens have a right to be forgotten in the sense that indexing systems, such as Google, must offer them technical means to request removal of links from search results that point to sources violating their data protection rights. As of now, these technical means consist of a web form that requires a user to manually identify all relevant links herself upfront and to insert them into the web form, followed by a manual evaluation by employees of the indexing system to assess if the request to remove those links is eligible and lawful.

In this work, we propose a universal framework Oblivion to support the automation of the right to be forgotten in a scalable, provable and privacy-preserving manner. First, Oblivion enables a user to automatically find and tag her disseminated personal information using natural language processing (NLP) and image recognition techniques and file a request in a privacy-preserving manner. Second, Oblivion provides indexing systems with an automated and provable eligibility mechanism, asserting that the author of a request is indeed affected by an online resource. The automated eligibility proof ensures censorship-resistance so that only legitimately affected individuals can request the removal of corresponding links from search results. We have conducted comprehensive evaluations of Oblivion, showing that the framework is capable of handling 278 removal requests per second on a standard notebook (2.5 GHz dual core), and is hence suitable for large-scale deployment.



18:17 [Pub][ePrint] A Physical Approach for Stochastic Modeling of TERO-based TRNG, by Patrick HADDAD and Viktor FISCHER and Florent BERNARD and Jean NICOLAI

  Security in random number generation for cryptography is closely related to the entropy rate at the generator output. This rate has to be evaluated using an appropriate stochastic model. The stochastic model proposed in this paper is dedicated to the transition effect ring oscillator (TERO) based true random number generator (TRNG) proposed by Varchola and Drutarovsky in 2010. The advantage and originality of this model is that it is derived from a physical model based on a detailed study and on the precise electrical description of the noisy physical phenomena that contribute to the generation of random numbers. We compare the proposed electrical description with data generated in a 28 nm CMOS ASIC implementation. Our experimental results are in very good agreement with those obtained with both the physical model of TERO's noisy behavior and with the stochastic model of the TERO TRNG, which we also confirmed using the AIS 31 test suites.
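The general principle behind TERO-style entropy extraction can be sketched with a toy stochastic model (a drastic simplification of the kind of model the paper develops, not its actual equations; the mean and jitter values below are assumptions, not measurements):

```python
import random

# Toy TERO model: after each restart, the transition effect oscillates for a
# random number of periods before it dies out; accumulated thermal jitter
# makes that count approximately Gaussian.  The raw bit is the counter's LSB.
random.seed(1)

MEAN_OSC = 120.0   # mean number of oscillations (assumed, device-dependent)
JITTER_SD = 4.0    # std. deviation from accumulated jitter (assumed)

def tero_bit():
    count = max(0, round(random.gauss(MEAN_OSC, JITTER_SD)))
    return count & 1

bits = [tero_bit() for _ in range(20000)]
bias = abs(sum(bits) / len(bits) - 0.5)
# Once the jitter spread covers several periods, the LSB parity is close to
# balanced; a stochastic model bounds the entropy per bit from this spread.
```

The point of a physical model like the paper's is precisely to justify numbers such as JITTER_SD from electrical noise parameters rather than assume them.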



18:17 [Pub][ePrint] Disk Encryption: Do We Need to Preserve Length?, by Debrup Chakraborty and Cuauhtemoc Mancillas-Lopez and Palash Sarkar

  In the last one and a half decades there has been a lot of activity towards the development of cryptographic techniques for disk encryption. It has been almost canonised that an encryption scheme suitable for the application of disk encryption must be length preserving; i.e., this rules out the use of schemes like authenticated encryption where an authentication tag is also produced as a part of the ciphertext, resulting in ciphertexts being longer than the corresponding plaintexts. The notion of a tweakable enciphering scheme (TES) has been formalised as the appropriate primitive for disk encryption, and it has been argued that TESs provide the maximum security possible for a tag-less scheme. On the other hand, TESs are less efficient than some existing authenticated encryption schemes. Also, a TES cannot provide true authentication, as it does not have an authentication tag.

In this paper, we analyze the possibility of using encryption schemes that produce length expansion for the purpose of disk encryption. On the negative side, we argue that nonce-based authenticated encryption schemes are not appropriate for this application. On the positive side, we demonstrate that deterministic authenticated encryption (DAE) schemes may have more advantages than disadvantages compared to a TES when used for disk encryption. Finally, we propose a new deterministic authenticated encryption scheme called BCTR which is suitable for this purpose. We provide the full specification of BCTR, prove its security and also report an efficient implementation in reconfigurable hardware. Our experiments suggest that BCTR performs significantly better than existing TESs and existing DAE schemes.
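The length expansion at issue can be made concrete with a minimal SIV-style DAE sketch (hash-based and for illustration only; the paper's BCTR is a different, block-cipher-based design): the tag doubles as the synthetic IV, and the ciphertext is deliberately tag-length longer than the sector.

```python
import hashlib
import hmac

def _keystream(key, iv, length):
    """Simple SHA-256-based CTR keystream (illustrative, not BCTR)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def dae_encrypt(k_mac, k_enc, sector):
    tag = hmac.new(k_mac, sector, hashlib.sha256).digest()[:16]  # synthetic IV
    ks = _keystream(k_enc, tag, len(sector))
    return bytes(a ^ b for a, b in zip(sector, ks)) + tag

def dae_decrypt(k_mac, k_enc, blob):
    body, tag = blob[:-16], blob[-16:]
    ks = _keystream(k_enc, tag, len(body))
    sector = bytes(a ^ b for a, b in zip(body, ks))
    if not hmac.compare_digest(
            hmac.new(k_mac, sector, hashlib.sha256).digest()[:16], tag):
        raise ValueError("authentication failed")
    return sector

k1, k2 = b"k" * 32, b"K" * 32
sector = b"\x00" * 4096                 # one 4 KiB disk sector
blob = dae_encrypt(k1, k2, sector)
assert len(blob) == 4096 + 16           # 16-byte expansion for the tag
assert dae_decrypt(k1, k2, sector := dae_decrypt(k1, k2, blob) and blob) or True
```

Storing that extra 16 bytes per sector is exactly the systems cost the paper weighs against the authenticity a TES cannot offer.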



18:17 [Pub][ePrint] Differential Fault Intensity Analysis, by Nahid Farhady Ghalaty and Bilgiday Yuce and Mostafa Taha and Patrick Schaumont

  Recent research has demonstrated that there is no sharp distinction between passive attacks based on side-channel leakage and active attacks based on fault injection. Fault behavior can be processed as side-channel information, offering all the benefits of Differential Power Analysis, including noise averaging and hypothesis testing by correlation. This paper introduces Differential Fault Intensity Analysis, which combines the principles of Differential Power Analysis and fault injection. We observe that most faults are biased - such as single-bit, two-bit, or three-bit errors in a byte - and that this property can reveal the secret key through a hypothesis test. Unlike Differential Fault Analysis, we do not require precise analysis of the fault propagation. Unlike Fault Sensitivity Analysis, we do not require a fault sensitivity profile for the device under attack. We demonstrate our method on an FPGA implementation of AES with a fault injection model. We find that with an average of 7 fault injections, we can reconstruct a full 128-bit AES key.
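The bias-based hypothesis test can be demonstrated on a single 4-bit S-box layer (a toy model of the principle, not the paper's AES/FPGA setup; the S-box and key nibble are arbitrary choices):

```python
import random

# Toy DFIA: ciphertext nibble c = S[v] ^ k, and an injected fault flips one
# random bit of the S-box input v.  The attacker inverts the last layer under
# each key guess and scores the inferred fault's Hamming weight; the correct
# key exposes the bias (weight exactly 1), wrong keys look roughly random.
random.seed(7)

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
SINV = [SBOX.index(i) for i in range(16)]
K = 0xA  # secret key nibble to recover

def hw(x):
    return bin(x).count("1")

# Collect (correct, faulty) ciphertext pairs from random single-bit faults.
pairs = []
for _ in range(200):
    v = random.randrange(16)
    e = 1 << random.randrange(4)          # biased fault: always one bit
    pairs.append((SBOX[v] ^ K, SBOX[v ^ e] ^ K))

def score(g):
    """Mean Hamming weight of the fault inferred under key guess g."""
    return sum(hw(SINV[c ^ g] ^ SINV[cf ^ g]) for c, cf in pairs) / len(pairs)

recovered = min(range(16), key=score)
assert recovered == K      # score(K) is exactly 1.0; wrong guesses exceed it
```

Note that no fault-propagation analysis or sensitivity profile is needed, only the assumption that injected faults are biased toward low Hamming weight.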



18:17 [Pub][ePrint] Zeroizing Without Low-Level Zeroes: New MMAP Attacks and Their Limitations, by Jean-Sebastien Coron and Craig Gentry and Shai Halevi and Tancrede Lepoint and Hemanta K. Maji and Eric Miles and Mariana

  We extend the recent zeroizing attacks of Cheon, Han, Lee, Ryu and Stehle (Eurocrypt'15) on multilinear maps to settings where no encodings of zero below the maximal level are available. Some of the new attacks apply to the CLT13 scheme (resulting in a total break) while others apply to (a variant of) the GGH13 scheme (resulting in a weak-DL attack). We also note the limits of these zeroizing attacks.



18:17 [Pub][ePrint] Assessment of Hiding the Higher-Order Leakages in Hardware - what are the achievements versus overheads?, by Amir Moradi and Alexander Wild

  Higher-order side-channel attacks are attracting major interest from both academia and industry. This is motivated by the development of countermeasures which can prevent leakages up to a certain order. As a concrete example, threshold implementation (TI), an efficient way to realize Boolean masking in hardware, is able to avoid first-order leakages. Trivially, attacks conducted at second (and higher) orders can exploit the corresponding leakages, devastating the provided security. Hence, the extension of TI to higher orders was to be expected, and it has been presented at ASIACRYPT 2014. In its underlying univariate settings it can provide security at higher orders, and its area and time overheads naturally increase with the desired security order.

In this work we look at the feasibility of higher-order attacks on first-order TI from another perspective. Instead of increasing the order of resistance by employing higher-order TIs, we realize first-order TI designs following the principles of a power-equalization technique dedicated to FPGA platforms, which naturally hardens them against higher-order attacks. We show that although the first-order TI designs additionally equipped with the power-equalization methodology have a significant area overhead, they maintain the same throughput and, more importantly, prevent the higher-order leakages from being practically exploitable with up to 1 billion traces.
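The first-order TI these designs build on can be illustrated with the classic three-share AND gate (the standard textbook sharing, not the specific circuits evaluated in the paper):

```python
import random

# First-order threshold implementation (TI) of one AND gate, c = a & b, with
# three shares: each output share omits one input share index entirely
# (non-completeness), so no single share's glitches depend on a full secret.
random.seed(0)

def share(x):
    """Split bit x into three random shares XOR-ing to x."""
    s1, s2 = random.randint(0, 1), random.randint(0, 1)
    return s1, s2, s1 ^ s2 ^ x

def ti_and(a, b):
    a1, a2, a3 = a
    b1, b2, b3 = b
    # c1 never touches share 1, c2 never share 2, c3 never share 3.
    c1 = (a2 & b2) ^ (a2 & b3) ^ (a3 & b2)
    c2 = (a3 & b3) ^ (a1 & b3) ^ (a3 & b1)
    c3 = (a1 & b1) ^ (a1 & b2) ^ (a2 & b1)
    return c1, c2, c3

# Correctness: the three output shares always recombine to a & b,
# since c1 ^ c2 ^ c3 collects all nine cross-products a_i & b_j.
for a in (0, 1):
    for b in (0, 1):
        for _ in range(100):
            c = ti_and(share(a), share(b))
            assert c[0] ^ c[1] ^ c[2] == a & b
```

Non-completeness is what blocks first-order leakage; second-order attacks combine two shares' leakages, which is exactly what the power-equalization layer is meant to obscure.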



18:17 [Pub][ePrint] Combining Differential Privacy and Secure Multiparty Computation, by Martin Pettai and Peeter Laud

  We consider how to perform privacy-preserving analyses on private data from different data providers and containing personal information of many different individuals. We combine differential privacy and secret sharing in the same system to protect the privacy of both the data providers and the individuals. We have implemented a prototype of this combination and the overhead of adding differential privacy to secret sharing is small enough to be usable in practice.
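A minimal sketch of the combination (illustration only; the paper builds on a real MPC framework): each of three parties holds an additive share of the query result and perturbs its own share with Laplace noise before the value is opened, so the reconstruction is differentially private and no party ever sees the raw sum. All parameter values below are assumptions for the demo.

```python
import math
import random

random.seed(42)

EPSILON = 0.5      # privacy budget (assumed)
SENSITIVITY = 1.0  # query sensitivity, e.g. a counting query

def laplace_sample(scale):
    """Inverse-CDF sampling from Laplace(0, scale)."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def share(value, n=3):
    """Additive sharing over the reals with large random masks."""
    masks = [random.uniform(-1e6, 1e6) for _ in range(n - 1)]
    return masks + [value - sum(masks)]

true_sum = 1234.0                        # the private query result
shares = share(true_sum)

# Each party adds a FULL Laplace draw to its share: differential privacy then
# holds even if all other parties reveal their noise, at the cost of a total
# noise that is the sum of three draws (noisier than strictly necessary).
noise_scale = SENSITIVITY / EPSILON
noisy_shares = [s + laplace_sample(noise_scale) for s in shares]

opened = sum(noisy_shares)               # reconstructed, noisy output
error = abs(opened - true_sum)
```

Real systems instead generate a single Laplace (or shared-geometric) draw jointly inside the MPC, which is where most of the overhead the abstract mentions comes from.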



18:17 [Pub][ePrint] The Chain Rule for HILL Pseudoentropy, Revisited, by Krzysztof Pietrzak and Maciej Skorski

  Computational notions of entropy (a.k.a. pseudoentropy) have found many applications, including leakage-resilient cryptography, deterministic encryption or memory delegation. The most important tools to argue about pseudoentropy are chain rules, which quantify by how much (in terms of quantity and quality) the pseudoentropy of a given random variable X decreases when conditioned on some other variable Z (think for example of X as a secret key and Z as information leaked by a side-channel). In this paper we give a very simple and modular proof of the chain rule for HILL pseudoentropy, improving best known parameters. Our version allows for increasing the acceptable length of leakage in applications up to a constant factor compared to the best previous bounds. As a contribution of independent interest, we provide a comprehensive study of all known versions of the chain rule, comparing their worst-case strength and limitations.