International Association for Cryptologic Research

IACR News Central


2015-02-28
01:17 [Pub][ePrint] Constant Size Ring Signature Without Random Oracle, by Priyanka Bose and Dipanjan Das and C. Pandu Rangan

  A ring signature enables a user to anonymously sign a message on behalf of a group of users, termed a 'ring', formed in an ad-hoc manner. A naive scheme produces a signature linear in the size of the ring, which is extremely inefficient when the ring is large. Dodis et al. proposed a constant-size scheme in EUROCRYPT'04, but it is provably secure only in the random oracle model. The best known results without random oracles are a sub-linear-size construction by Chandran et al. in ICALP'07 and a follow-up work by Essam Ghadafi in IMACC'13. Constructing a constant-size ring signature scheme that meets stringent security requirements without random oracles therefore remains an interesting open problem.

Our first contribution is a generic technique to convert a compatible signature scheme into a constant-size ring signature scheme. The technique employs a constant-size set membership check that may be of independent interest. Our construction is instantiated over an asymmetric pairing of composite order and is optimally efficient. The scheme meets the strongest security requirements, viz. anonymity under full key exposure and unforgeability against insider corruption, without random oracles and under simple hardness assumptions. We also provide a concrete instantiation of the scheme based on the full Boneh-Boyen signature scheme.
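
To see why naive constructions are linear in the ring size, here is a minimal sketch of a classic linear-size ring signature in the style of Abe-Ohkubo-Suzuki; this is background, not the paper's construction, and the toy group parameters and helper names are our own (insecure, for illustration only). The signature carries one response per ring member plus one challenge, so it grows with the ring.

    import hashlib, random

    # Toy Schnorr-type group (insecure sizes): p = 2q + 1, g generates
    # the order-q subgroup of Z_p^*.
    p, q, g = 2039, 1019, 4

    def H(*parts):
        data = b'|'.join(str(x).encode() for x in parts)
        return int(hashlib.sha256(data).hexdigest(), 16) % q

    def keygen():
        x = random.randrange(1, q)
        return x, pow(g, x, p)              # (secret key, public key)

    def ring_sign(msg, ring, s, x_s):
        # AOS-style ring signature by member s: the output is
        # (c[0], z[0..n-1]), i.e. n + 1 scalars -- linear in |ring|.
        n = len(ring)
        c, z = [0] * n, [0] * n
        a = random.randrange(1, q)
        c[(s + 1) % n] = H(ring, msg, pow(g, a, p))
        for k in range(1, n):               # close the ring of challenges
            i = (s + k) % n
            z[i] = random.randrange(1, q)
            c[(i + 1) % n] = H(ring, msg,
                               pow(g, z[i], p) * pow(ring[i], c[i], p) % p)
        z[s] = (a - c[s] * x_s) % q         # only here the secret is used
        return c[0], z

    def ring_verify(msg, ring, sig):
        c0, z = sig
        c = c0
        for i in range(len(ring)):
            c = H(ring, msg, pow(g, z[i], p) * pow(ring[i], c, p) % p)
        return c == c0

    keys = [keygen() for _ in range(4)]
    ring = [y for _, y in keys]
    sig = ring_sign("hello", ring, 2, keys[2][0])
    assert ring_verify("hello", ring, sig)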



01:17 [Pub][ePrint] The Cryptographic Hardness of Random Local Functions -- Survey, by Benny Applebaum

  Constant parallel-time cryptography makes it possible to perform complex cryptographic tasks at an ultimate level of parallelism, namely by local functions, in which each output bit depends on a constant number of input bits. A natural way to obtain local cryptographic constructions is to use \emph{random local functions}, in which each output bit is computed by applying some fixed $d$-ary predicate $P$ to a randomly chosen $d$-size subset of the input bits.
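
As a concrete illustration of this definition, here is a minimal sketch of sampling and evaluating a random local function; the particular 5-ary predicate and all names are our illustrative choices, not anything fixed by the survey.

    import random

    def sample_local_function(n, m, d, P):
        # Sample f: {0,1}^n -> {0,1}^m where each output bit applies the
        # fixed d-ary predicate P to a random d-subset of the input bits.
        wiring = [random.sample(range(n), d) for _ in range(m)]
        def f(x):
            return [P(*(x[i] for i in S)) for S in wiring]
        return f, wiring

    # Example 5-ary predicate: XOR of two bits plus an AND term.
    P = lambda a, b, c, d, e: a ^ b ^ (c & d & e)

    f, wiring = sample_local_function(n=64, m=80, d=5, P=P)
    x = [random.randrange(2) for _ in range(64)]
    y = f(x)        # each y[j] depends on only 5 input positions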

In this work, we study the cryptographic hardness of random local functions. In particular, we survey known attacks and hardness results, discuss different flavors of hardness (one-wayness, pseudorandomness, collision resistance, public-key encryption), and mention applications to other problems in cryptography and computational complexity. We also present some open questions, in the hope of developing a systematic study of the cryptographic hardness of local functions.



01:17 [Pub][ePrint] Naturally Rehearsing Passwords, by Jeremiah Blocki and Manuel Blum and Anupam Datta

  We introduce quantitative usability and security models to guide the design of \emph{password management schemes}: systematic strategies to help users create and remember multiple passwords. In the same way that security proofs in cryptography are based on complexity-theoretic assumptions (e.g., hardness of factoring and discrete logarithm), we quantify usability by introducing \emph{usability assumptions}. In particular, password management relies on assumptions about human memory, e.g., that a user who follows a particular rehearsal schedule will successfully maintain the corresponding memory. These assumptions are informed by research in cognitive science and can be tested empirically. Given rehearsal requirements and a user's visitation schedule for each account, we use the total number of extra rehearsals that the user would have to do to remember all of his passwords as a measure of the usability of the password scheme. Our usability model leads us to a key observation: password reuse benefits users not only by reducing the number of passwords that the user has to memorize, but more importantly by increasing the natural rehearsal rate for each password.

We also present a security model which accounts for the complexity of password management with multiple accounts and associated threats, including online, offline, and plaintext password leak attacks. Observing that current password management schemes are either insecure or unusable, we present Shared Cues, a new scheme in which the underlying secret is strategically shared across accounts to ensure that most rehearsal requirements are satisfied naturally while simultaneously providing strong security. The construction uses the Chinese Remainder Theorem to achieve these competing goals.
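
A hedged sketch of the usability measure just described: given the rehearsal times a memory schedule demands and the times a user naturally visits accounts sharing that password, count the required rehearsals not already covered by a visit. The expanding-interval schedule and all parameters below are our simplified illustrative assumptions, not the paper's exact model.

    def rehearsal_times(base=1.0, growth=2.0, horizon=365.0):
        # Expanding-interval rehearsal schedule (days): rehearsals at
        # base, then at intervals growing by `growth`, out to `horizon`.
        times, t, step = [], 0.0, base
        while t + step <= horizon:
            t += step
            times.append(t)
            step *= growth
        return times

    def extra_rehearsals(required, visits):
        # A rehearsal requirement at time t is naturally satisfied if some
        # visit falls in the window since the previous requirement;
        # otherwise the user must perform an extra rehearsal.
        extra, prev = 0, 0.0
        for t in required:
            if not any(prev < v <= t for v in visits):
                extra += 1
            prev = t
        return extra

    schedule = rehearsal_times()
    # A password rehearsed by frequent logins needs almost no extra work:
    print(extra_rehearsals(schedule, [i * 3.0 for i in range(1, 120)]))  # 1
    print(extra_rehearsals(schedule, []))   # 8: every rehearsal is extra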



01:17 [Pub][ePrint] Post-Zeroizing Obfuscation: The case of Evasive Circuits, by Saikrishna Badrinarayanan and Eric Miles and Amit Sahai and Mark Zhandry

  Recent devastating attacks by Cheon et al. [Eurocrypt'15] and others have highlighted significant gaps in our intuition about security in candidate multilinear map schemes, and in candidate obfuscators that use them. The new attacks, and some that were previously known, are typically called "zeroizing" attacks because they all crucially rely on the ability of the adversary to create encodings of 0.

In this work, we initiate the study of post-zeroizing obfuscation, and we present a construction for the special case of evasive functions. We show that our obfuscator survives all known attacks on the underlying multilinear maps, by proving that no encodings of 0 can be created by a generic-model adversary. Previous obfuscators (for both evasive and general functions) were either analyzed in a less conservative "pre-zeroizing" model that does not capture recent attacks, or were proved secure relative to assumptions that are now known to be false.

To prove security, we introduce a new technique for analyzing polynomials over multilinear map encodings. This technique shows that the types of encodings an adversary can create are much more restricted than was previously known, and it is a crucial step toward achieving post-zeroizing security. We also believe the technique is of independent interest, as it yields efficiency improvements for existing schemes.



01:17 [Pub][ePrint] More PS and H-like bent functions, by C. Carlet

  Two constructions/classes of bent functions are derived from the notion of spread. The first class, ${\cal PS}$, gives a useful framework for designing bent functions which are constant (except maybe at 0) on the elements of a (partial) spread. Dillon has deduced the explicit class ${\cal PS}_{ap}$ of bent functions obtained from the spread of all multiplicative cosets of ${\Bbb F}_{2^m}^*$ (added with 0) in ${\Bbb F}_{2^{2m}}^*$ (that we shall call the Dillon spread). The second class, $H$, later slightly modified into a class called ${\cal H}$ so as to relate it to the so-called Niho bent functions, is, up to addition of affine functions, the set of bent functions whose restrictions to the spaces of the Dillon spread are linear. It has been observed that the functions in ${\cal H}$ are related to o-polynomials, and this has led to several explicit classes of bent functions. In this paper we first apply the ${\cal PS}$ construction to a larger class of spreads, well known in the finite geometry domain, that we shall call André's spreads, and we describe explicitly the corresponding ${\cal PS}$ bent functions and their duals. We also characterize those bent functions whose restrictions to the spaces of an André spread are linear. This leads to a notion extending that of o-polynomial. Finally, we obtain similar characterizations for the ${\cal H}$-like functions derived from the spreads used by Wu to deduce ${\cal PS}$ bent functions from the Dempwolff-Müller pre-quasifield, the Knuth pre-semifield and the Kantor pre-semifield. In each case, this also leads to a new notion on polynomials.
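
For readers outside this area, the standard definition the abstract relies on (our added background, standard in the literature): a Boolean function $f : {\Bbb F}_2^{2m} \to {\Bbb F}_2$ is bent when its Walsh transform has constant magnitude, which makes it maximally distant from all affine functions:

\[ W_f(a) \;=\; \sum_{x \in {\Bbb F}_2^{2m}} (-1)^{f(x) + a \cdot x}, \qquad f \ \text{is bent} \iff |W_f(a)| = 2^m \ \text{for all } a \in {\Bbb F}_2^{2m}. \]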



01:17 [Pub][ePrint] Short Schnorr signatures require a hash function with more than just random-prefix resistance, by Daniel R. L. Brown

  Neven, Smart and Warinschi (NSW) proved, in the generic group model, that full-length Schnorr signatures require only random-prefix resistant hash functions to resist passive existential forgery. Short Schnorr signatures halve the length of the hash output, and have been conjectured to provide a similar level of security. The NSW result is too loose to provide meaningful security for short Schnorr signatures, but Neven, Smart and Warinschi conjecture that this is a mere artefact of the proof technique, and not an essential deficiency of short Schnorr signatures. In particular, this amounts to a conjecture that short Schnorr signatures are secure under the same set of assumptions, namely random-prefix resistance of the hash function.

This report provides a counterexample to the latter conjecture, in other words, a separation result. It exhibits a hash function suggesting that random-prefix resistance does not suffice for short Schnorr signatures. In other words, the loose reduction implicit in the NSW theorem is as tight as possible.

Obviously, this result does not preclude the possibility of another proof for short Schnorr signatures, based on different hash function security properties such as preimage resistance.
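
For orientation, a minimal sketch of Schnorr signing over a toy group, showing where the "short" variant halves the challenge hash; the toy parameters and truncation detail are our assumptions, not NSW's or the report's exact formulation. At this toy scale, a 10-bit challenge plays the role of the full-length hash (matching $\log_2 q$) and 5 bits the short one.

    import hashlib, random

    p, q, g = 2039, 1019, 4     # toy group: g generates the order-q subgroup

    def H(R, msg, bits):
        # Challenge hash truncated to `bits` bits; short Schnorr halves this.
        h = hashlib.sha256(f"{R}|{msg}".encode()).digest()
        return int.from_bytes(h, "big") >> (256 - bits)

    def sign(x, msg, bits):
        k = random.randrange(1, q)
        R = pow(g, k, p)
        e = H(R, msg, bits)
        s = (k + e * x) % q
        return e, s             # signature size: bits + log2(q)

    def verify(y, msg, sig, bits):
        e, s = sig
        R = pow(g, s, p) * pow(y, (q - e % q) % q, p) % p   # g^s * y^(-e)
        return e == H(R, msg, bits)

    x = random.randrange(1, q)
    y = pow(g, x, p)
    full  = sign(x, "msg", bits=10)     # full-length challenge (toy scale)
    short = sign(x, "msg", bits=5)      # short: challenge hash halved
    assert verify(y, "msg", full, 10) and verify(y, "msg", short, 5)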



01:17 [Pub][ePrint] Stealing Keys from PCs by Radio: Cheap Electromagnetic Attacks on Windowed Exponentiation, by Daniel Genkin and Lev Pachmanov and Itamar Pipman and Eran Tromer

  We present new side-channel attacks on implementations of RSA and ElGamal encryption. The attacks can extract secret keys using a very low measurement bandwidth (a frequency band of less than 100 kHz, residing under 2 MHz) even when attacking multi-GHz CPUs. They target implementations that use the popular sliding-window and fixed-window (m-ary) modular exponentiation methods.

We demonstrate the attacks' feasibility by extracting keys from laptop computers running GnuPG, using a nonintrusive measurement of electromagnetic emanations for a few seconds from a range of 50 cm. The measurement is made using cheap and readily available components, such as a Software Defined Radio USB dongle or a consumer-grade radio receiver. The measurement equipment is compact and can operate untethered and concealed, e.g., inside pita bread.

The attack uses a few non-adaptive chosen ciphertexts to trigger the occurrence of specially-structured values inside the sliding-window or fixed-window exponentiation routine. These special values cause observable fluctuations in the electromagnetic field surrounding the laptop, in a way that depends on the key-bit pattern within the sliding window. The secret key can be deduced from these fluctuations, through suitable signal processing and cryptanalysis.
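
To see why the window contents can modulate a physical side channel, here is a textbook left-to-right sliding-window modular exponentiation (a generic sketch of ours, not GnuPG's code): the interleaving of squarings and table multiplications, and which table entry is touched, both depend on the secret exponent's bit pattern.

    def sliding_window_pow(g, e, m, w=4):
        # Precompute the odd powers g^1, g^3, ..., g^(2^w - 1) mod m.
        g2 = g * g % m
        table = {1: g % m}
        for k in range(3, 1 << w, 2):
            table[k] = table[k - 2] * g2 % m
        bits = bin(e)[2:]
        r, i = 1, 0
        while i < len(bits):
            if bits[i] == '0':
                r = r * r % m       # lone squaring: a zero exponent bit
                i += 1
            else:
                # Longest window bits[i:j] of length <= w ending in a 1.
                j = min(i + w, len(bits))
                while bits[j - 1] == '0':
                    j -= 1
                for _ in range(j - i):
                    r = r * r % m
                r = r * table[int(bits[i:j], 2)] % m  # key-dependent multiply
                i = j
        return r

    assert sliding_window_pow(3, 1000003, 2039) == pow(3, 1000003, 2039)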



01:17 [Pub][ePrint] Authenticated Network Time Synchronization, by Benjamin Dowling and Douglas Stebila and Greg Zaverucha

  The Network Time Protocol (NTP) is used by many network-connected devices to synchronize device time with remote servers. Many security features depend on the device knowing the current time, for example in deciding whether a certificate is still valid. Currently, most services implement NTP without authentication, and the authentication mechanisms available in the standard have not been formally analyzed, require a pre-shared key, or are known to have cryptographic weaknesses. In this paper we design an authenticated version of NTP, called ANTP, a generic construction which protects against desynchronization attacks. To make ANTP suitable for large-scale deployments, it is designed to minimize server-side public-key operations and requires no server-side state. Additionally, ANTP ensures that authenticity does not degrade accuracy. Authentication adds no latency to server responses, by using the fact that authentication information can arrive in a separate, subsequent message. We define a novel provable security framework involving adversary control of time, and use the framework to analyze ANTP. The framework may also be used to analyze other secure time synchronization protocols.
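
For context, the clock-offset and round-trip-delay estimates that any NTP-style protocol, authenticated or not, derives from its four timestamps are standard; a minimal sketch (names ours):

    def ntp_offset_delay(t0, t1, t2, t3):
        # t0: client send, t1: server receive, t2: server send,
        # t3: client receive. An authenticated variant such as ANTP must
        # not inflate (t2 - t1) or the client-observed round trip, which
        # is why deferring authentication to a subsequent message helps.
        offset = ((t1 - t0) + (t2 - t3)) / 2   # estimated client clock error
        delay  = (t3 - t0) - (t2 - t1)         # round-trip network delay
        return offset, delay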





2015-02-27
22:17 [Pub][ePrint] Analysis of Impossible, Integral and Zero-Correlation Attacks on Type-II Generalized Feistel Networks using the Matrix Method, by Céline Blondeau and Marine Minier

  While some recent publications have shown strong relations between impossible differential and zero-correlation distinguishers, as well as between zero-correlation and integral distinguishers, in this paper we analyze the relations between the corresponding key-recovery attacks against Type-II Feistel networks. The results of this paper build on the relation presented at ACNS 2013.

In particular, using a matrix representation of the round function, we show that we can not only find impossible, integral and multidimensional zero-correlation distinguishers, but also find the key words involved in the underlying key-recovery attacks. Based on this representation, for matrix-method-derived strongly-related zero-correlation and impossible distinguishers, we show that the key words involved in the zero-correlation attack are a subset of the key words involved in the impossible differential attack. Other relations between the key words involved in zero-correlation, impossible and integral attacks are also extracted.

We also show that in this context the data complexity of the multidimensional zero-correlation attack is larger than that of the other two attacks.
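
As a reference point, one round of a Type-II generalized Feistel network on 2k sub-blocks looks as follows (a minimal generic sketch of ours; the paper's matrix method tracks how differences and masks propagate through exactly this structure):

    def type2_gfn_round(x, rks, F):
        # Type-II GFN round: each even-indexed sub-block passes through
        # the keyed round function F into its odd neighbour, then the
        # whole state is rotated left by one sub-block.
        y = list(x)
        for i in range(len(x) // 2):
            y[2*i + 1] ^= F(x[2*i], rks[i])
        return y[1:] + y[:1]

    # Toy instantiation on 8-bit sub-blocks, k = 2 (four branches):
    F = lambda b, k: ((b ^ k) * 167 + 13) & 0xFF   # placeholder round function
    state = [0x12, 0x34, 0x56, 0x78]
    for rks in [[1, 2], [3, 4], [5, 6]]:           # three toy rounds
        state = type2_gfn_round(state, rks, F)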



22:17 [Pub][ePrint] Multi-Client Verifiable Computation with Stronger Security Guarantees, by S. Dov Gordon and Jonathan Katz and Feng-Hao Liu and Elaine Shi and Hong-Sheng Zhou

  Choi et al. (TCC 2013) introduced the notion of multi-client verifiable computation (MVC) in which a set of clients outsource to an untrusted server the computation of a function f over their collective inputs in a sequence of time periods. In that work, the authors defined and realized multi-client verifiable computation satisfying soundness against a malicious server and privacy against the semi-honest corruption of a single client. Very recently, Goldwasser et al. (Eurocrypt 2014) provided an alternative solution relying on multi-input functional encryption.

Here we conduct a systematic study of MVC, with the goal of satisfying stronger security requirements. We begin by introducing a simulation-based notion of security that provides a unified way of defining soundness and privacy, and automatically captures several attacks not addressed in previous work. We then explore the feasibility of achieving this notion of security. Assuming no collusion between the server and the clients, we demonstrate a protocol for multi-client verifiable computation that achieves strong security in several respects. When server-client collusion is possible, we show (somewhat surprisingly) that simulation-based security cannot be achieved in general, even assuming semi-honest behavior.



22:17 [Pub][ePrint] Harder, Better, Faster, Stronger - Elliptic Curve Discrete Logarithm Computations on FPGAs, by Erich Wenger and Paul Wolfger

  Computing discrete logarithms takes time. It takes time to develop new algorithms, choose the best algorithms, implement these algorithms correctly and efficiently, keep the system running for several months, and, finally, publish the results. In this paper, we present a high-performance architecture that can be used to compute discrete logarithms on Weierstrass curves defined over binary fields and on Koblitz curves using FPGAs. We used the architecture to compute, for the first time, a discrete logarithm on the elliptic curve \texttt{sect113r1}, a previously standardized binary curve, using 10 Kintex-7 FPGAs. To achieve this result, we investigated different iteration functions, used a negation map, dealt with the fruitless-cycle problem, built an efficient FPGA design that processes 900 million iterations per second, and tended the optimized implementations running on the FPGAs for several months.
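
The iteration at the heart of such computations is a pseudorandom walk reported at distinguished points (parallel Pollard rho). Below is a minimal serial sketch in a toy multiplicative group rather than an elliptic curve, with the negation-map and fruitless-cycle machinery omitted; all parameters are our illustrative assumptions.

    import random

    def rho_dlog(g, h, p, q, r=16):
        # Solve g^x = h in the order-q subgroup of Z_p^* with an r-adding
        # walk. Walks restart at random points and are recorded only at
        # "distinguished points" (low bits zero), as parallel rho clients
        # report to a central server. Elliptic-curve versions add a
        # negation map (halving the space) and must escape the fruitless
        # cycles it creates.
        exps = [(random.randrange(1, q), random.randrange(1, q))
                for _ in range(r)]
        mults = [pow(g, c, p) * pow(h, d, p) % p for c, d in exps]
        seen = {}
        while True:
            a, b = random.randrange(q), random.randrange(q)
            y = pow(g, a, p) * pow(h, b, p) % p
            while y & 0x7:                  # walk until a distinguished point
                j = y % r                   # step chosen by the current point
                a, b = (a + exps[j][0]) % q, (b + exps[j][1]) % q
                y = y * mults[j] % p
            if y in seen and (b - seen[y][1]) % q:
                a2, b2 = seen[y]            # g^a h^b = g^a2 h^b2, so solve:
                return (a2 - a) * pow(b - b2, -1, q) % q   # Python 3.8+ inverse
            seen[y] = (a, b)

    p, q, g = 2039, 1019, 4
    print(rho_dlog(g, pow(g, 777, p), p, q))    # recovers 777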