International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web pages here. For questions, contact newsletter (at) iacr.org.


You can also access the full news archive.

Further sources for keeping track of changes are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2013-06-23
21:17 [Forum] [IACR Publication Reform] Re: two-stage review process by Orr

  But in this case, if you accept X papers, you need to pass X+Y papers to Stage 2 (assuming some will fail the testing), and then you are susceptible to variance in the number of accepted papers (Y should be O(X), IMHO), or some decent papers get rejected due to lack of space. In addition, why not just separate the submission into two parts, the abstract and the rest? Then, interested committee members could immediately check the details if they wish. Which is, btw, what happens now, but with a shorter abstract. From: 2013-06-23 18:32:14 (UTC)

15:17 [Forum] [IACR Publication Reform] Re: Testable change by cbw

  Thanks for the insight! My question would be a different one, though: does rebuttal/rebattle [1] change anything from the perspective of the /reviewers/? Do you write your review more carefully (knowing it could be questioned) or not? If the former, I guess it's worth the overhead. If not, you are right: in that case, it's not worth the bother. Best, Christopher [1] I do prefer the second term ;-) From: 2013-06-23 14:10:09 (UTC)

12:17 [Forum] [IACR Publication Reform] two-stage review process by Joan Daemen

  Dear all,

Here is a proposal that aims at reducing review workload. The idea is to split the review of a paper into two stages. It implies that each paper has two parts:

- an abstract aimed at the non-specialized reader, clearly stating the contribution of the paper (selling it, actually). The abstract should therefore probably be longer than what we have now (say, a 2-page limit);
- a technical part that will typically be more specialized.

Stage 1: the paper is reviewed by a relatively large number of people from different sub-disciplines, based on the abstract only. Reviewers should assume that the technical part will deliver what the abstract announces. If the paper survives this stage, it proceeds to Stage 2.

Stage 2: a few specialized reviewers check in detail whether the technical part delivers on the promise made in the abstract. If so, the paper is accepted. This includes verification of proofs, claimed attack complexities, etc.

At least in theory this may reduce the workload, as most reviewers only have to read the abstract. Forcing the authors to write an abstract aimed at a wider audience has the additional benefit that papers may become more accessible to people working in other sub-disciplines.

I realise that whether this really works depends on how it is implemented. For example, something must be built in against overselling. This could be done with a system of (negative) points, where each co-author gets a point when his paper passes Stage 1 but not Stage 2, and these points are somehow taken into account in Stage 1. And of course there are many other details that may make this a success or a failure. But let's first see if there is support for the basic idea in the first place.

Joan

From: 2013-06-23 11:06:21 (UTC)
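To make the anti-overselling idea concrete, here is a small hypothetical Python sketch of the points bookkeeping (the names and the 0.5 discount weight are invented for illustration; the post deliberately leaves the exact rule open):

    from collections import defaultdict

    penalties = defaultdict(int)  # author -> accumulated overselling points

    def record_outcome(authors, passed_stage1, passed_stage2):
        # A paper that passes Stage 1 but fails Stage 2 was oversold:
        # each co-author collects one (negative) point.
        if passed_stage1 and not passed_stage2:
            for a in authors:
                penalties[a] += 1

    def stage1_score(raw_score, authors, weight=0.5):
        # One arbitrary way to "take the points into account in Stage 1":
        # discount the raw review score by the worst co-author's history.
        return raw_score - weight * max(penalties[a] for a in authors)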



2013-06-22
21:17 [Forum] [IACR Publication Reform] Re: Testable change by Orr

  Amit, from my past experience with conferences with a two-phase review (rebuttal, or whatever you want to call it), it rarely changes anything in the final program. In other words, it's a lot of overhead for very little impact on the outcome of the review process. So this idea, as logical as it sounds, does not "work", IMHO. From: 2013-06-22 18:20:06 (UTC)



2013-06-21
09:07 [Event][New] Indocrypt 2013: 14th International Conference on Cryptology in India

  Submission: 15 July 2013
From December 7 to December 10
Location: Mumbai, India
More Information: http://indocrypt.hbni.ac.in




2013-06-20
18:56 [PhD][Update] Enrico Thomae: About the Security of Multivariate Quadratic Public Key Schemes

  Name: Enrico Thomae
Topic: About the Security of Multivariate Quadratic Public Key Schemes
Category: public-key cryptography

Description: The primary goal of this thesis is to evaluate the security of multivariate quadratic (MQ) public key schemes. We investigate three main topics related to the security of MQ-schemes, namely the MQ-Problem, the IP-Problem, and the MinRank-Problem.
Section 2 discusses the MQ-Problem, which underlies direct pre-image attacks using the public key, i.e. finding x for a given y such that P(x) = y, which is known to be difficult in general. In Section 2.1 we provide a brief survey of algorithms for solving such systems, like F4, F5, XL and MutantXL. We recap the complexity analysis of the first three algorithms and provide a detailed complexity analysis of the last. Our contribution is a proof of Theorem 2.7 that is hopefully simpler than the one in [CKPS, Section 8]. Further, we derive Theorem 2.29 and thus confirm results of Yang and Chen [YC04a] in a different way.
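To make the MQ-Problem concrete, here is a toy instance over $\mathbb{F}_2$ with three quadratic equations in three variables (the values are illustrative and not taken from the thesis):

\[
\begin{aligned}
x_1 x_2 + x_2 x_3 + x_1 &= 1\\
x_1 x_3 + x_2 + x_3 &= 0\\
x_1 x_2 + x_2 x_3 + x_3 &= 1
\end{aligned}
\]

Solving the MQ-Problem means finding an assignment satisfying all equations simultaneously, here for instance $(x_1, x_2, x_3) = (1, 0, 1)$; for random instances the difficulty grows rapidly with the number of variables.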
In Section 2.2 we present a new direct attack on the Unbalanced Oil and Vinegar signature scheme, which forces parameters to be raised in order to restore the original security level. More generally, we present an algorithm that solves underdetermined systems of MQ-equations faster than previous methods.
Section 3 presents the main part of this work and is dedicated to algebraic key recovery attacks on MQ-schemes. Unfortunately, naive algebraic attacks are usually far from efficient due to the large number of variables. We therefore first formalize the underlying class of problems and introduce the Isomorphism of Polynomials with partial Knowledge (IPpK) Problem in Section 3.3. We relate this new problem to known problems, like the Isomorphism of Polynomials Problem with one and two secrets. Our main contribution is a general algebraic framework for tackling the IPpK-Problem. To this end, we generalize the notion of equivalent keys to so-called good keys. In a nutshell, equivalent keys allow one to reduce the number of variables of an algebraic attack. Good keys reduce the number of variables further, but this time also t[...]


12:17 [Pub][ePrint] A quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic, by Razvan Barbulescu and Pierrick Gaudry and Antoine Joux and Emmanuel Thomé

  In the present work, we present a new discrete logarithm algorithm, in the same vein as recent works by Joux, using an asymptotically more efficient descent approach. The main result gives a quasi-polynomial heuristic complexity for the discrete logarithm problem in finite fields of small characteristic. By quasi-polynomial, we mean a complexity of type $n^{O(\log n)}$, where $n$ is the bit-size of the cardinality of the finite field. Such a complexity is smaller than any $L(\varepsilon)$ for $\varepsilon > 0$. It remains super-polynomial in the size of the input, but offers a major asymptotic improvement compared to $L(1/4+o(1))$.
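As a quick sanity check on these claims (using the standard L-notation normalization, an assumption made here; the paper may fix constants differently), write $Q$ for the cardinality of the field and $n = \log_2 Q$ for its bit-size:

\[
L(\varepsilon) = \exp\!\Big( O\big( (\log Q)^{\varepsilon} (\log \log Q)^{1-\varepsilon} \big) \Big)
= \exp\!\Big( O\big( n^{\varepsilon} (\log n)^{1-\varepsilon} \big) \Big),
\qquad
n^{O(\log n)} = \exp\!\Big( O\big( (\log n)^{2} \big) \Big).
\]

Since $(\log n)^2 = o\big(n^{\varepsilon}(\log n)^{1-\varepsilon}\big)$ for every fixed $\varepsilon > 0$, the new bound beats every $L(\varepsilon)$; and since $(\log n)^2$ grows faster than $\log n$, the running time remains super-polynomial in $n$, exactly as stated.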



12:17 [Pub][ePrint] Functional Signatures and Pseudorandom Functions, by Elette Boyle and Shafi Goldwasser and Ioana Ivan

  In this paper, we introduce functional digital signatures, functional pseudorandom functions, and pseudorandom functions with selective access.

In a functional signature scheme, in addition to a master signing key that can be used to sign any message, there are signing keys for a function $f$, which allow one to sign any message in the range of $f$. An immediate application of functional signature schemes is the delegation, by a master authority to a third party, of the ability to sign a restricted set of messages. We also show applications of functional signatures in constructing succinct non-interactive arguments and delegation schemes. We give several general constructions for this primitive based on different computational hardness assumptions, and describe the trade-offs between them in terms of the assumptions they require and the size of the signatures.

In a functional pseudorandom function, in addition to a master secret key that can be used to evaluate the pseudorandom function $F$ on any point in the domain, there are additional secret keys for a function $f$, which allow one to evaluate $F$ on any $y$ for which there exists an $x$ such that $f(x)=y$. Such functions imply pseudorandom functions with selective access: pseudorandom function families $F$ for which one may delegate, per function $f$, a key for computing $F$ on exactly those points $y$ for which $f(y) = 1$. We provide an example construction of a functional pseudorandom function for prefix-fixing functions.
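For intuition, prefix-fixing functions are exactly the case handled by the folklore GGM-tree approach, where the delegated key for a prefix is the internal tree node reached by that prefix. Below is a minimal Python sketch of this approach, with SHA-256 standing in for the length-doubling PRG; it illustrates the functionality only and is not claimed to match the paper's construction in detail.

    import hashlib

    def prg_half(seed, bit):
        # One half of a length-doubling PRG G(s) = G_0(s) || G_1(s),
        # modeled here with SHA-256 (an illustrative choice).
        return hashlib.sha256(seed + bytes([bit])).digest()

    def prf(master_key, x):
        # GGM PRF: walk a binary tree from the root seed along the bits of x
        # (x is a string of '0'/'1' characters).
        s = master_key
        for b in x:
            s = prg_half(s, int(b))
        return s

    def key_for_prefix(master_key, prefix):
        # Functional key for the prefix-fixing function: the tree node
        # reached by following the bits of the prefix.
        s = master_key
        for b in prefix:
            s = prg_half(s, int(b))
        return s

    def eval_with_prefix_key(prefix_key, prefix, x):
        # The key holder can evaluate F only on inputs extending the prefix.
        assert x.startswith(prefix)
        s = prefix_key
        for b in x[len(prefix):]:
            s = prg_half(s, int(b))
        return s

For every x extending the prefix, eval_with_prefix_key(key_for_prefix(k, p), p, x) equals prf(k, x), while values of the PRF outside the prefix remain pseudorandom to the holder of the delegated key.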



12:17 [Pub][ePrint] Efficient Two-Pass Anonymous Identity Authentication Using Smart Card, by Jue-Sam Chou, Chun-Hui Huang, Yu-Siang Huang, and Yalin Chen

  Recently, Khan et al. proposed an enhancement of a remote authentication scheme designed by Wang et al. that emphasizes the use of dynamic identities. They claim that their improvement avoids insider attacks. However, we found that the scheme lacks the anonymity property. Moreover, R. Madhusudhan et al. indicate that the scheme also suffers from the insider attack. Based on these observations, in this paper we propose a novel scheme which not only authenticates the remote user anonymously using only two passes, but also satisfies the ten requirements for smart-card-based authentication schemes given by Liao et al.



12:17 [Pub][ePrint] Function-Private Subspace-Membership Encryption and Its Applications, by Dan Boneh and Ananth Raghunathan and Gil Segev

  Boneh, Raghunathan, and Segev (CRYPTO '13) have recently put forward the notion of function privacy and applied it to identity-based encryption, motivated by the need for providing predicate privacy in public-key searchable encryption. Intuitively, their notion asks that decryption keys reveal essentially no information on their corresponding identities, beyond the absolute minimum necessary. While Boneh et al. showed how to construct function-private identity-based encryption (which implies predicate-private encrypted keyword search), searchable encryption typically requires a richer set of predicates.

In this paper we significantly extend the function privacy framework. First, we introduce the new notion of subspace-membership encryption, a generalization of inner-product encryption, and formalize a meaningful and realistic notion for capturing its function privacy. Then, we present a generic construction of a function-private subspace-membership encryption scheme based on any inner-product encryption scheme. This is the first generic construction that yields a function-private encryption scheme based on a non-function-private one.

Finally, we present various applications of function-private subspace-membership encryption. Among our applications, we significantly improve the function privacy of the identity-based encryption schemes of Boneh et al.: whereas their schemes are function private only for identities that are highly unpredictable (with min-entropy of at least $\lambda + \omega(\log \lambda)$ bits, where $\lambda$ is the security parameter), we obtain function-private schemes assuming only the minimal required unpredictability (i.e., min-entropy of only $\omega(\log \lambda)$ bits). This improvement offers a much more realistic function privacy guarantee.



12:17 [Pub][ePrint] The SIMON and SPECK Families of Lightweight Block Ciphers, by Ray Beaulieu and Douglas Shors and Jason Smith and Stefan Treatman-Clark and Bryan Weeks and Louis Wingers

  In this paper we propose two families of block ciphers, SIMON and SPECK, each of which comes in a variety of widths and key sizes. While many lightweight block ciphers exist, most were designed to perform well on a single platform and were not meant to provide high performance across a range of devices. The aim of SIMON and SPECK is to fill the need for secure, flexible, and analyzable lightweight block ciphers. Each offers excellent performance on hardware and software platforms, is flexible enough to admit a variety of implementations on a given platform, and is amenable to analysis using existing techniques. Both perform exceptionally well across the full spectrum of lightweight applications, but SIMON is tuned for optimal performance in hardware, and SPECK for optimal performance in software.
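As a taste of how lean the designs are, here is a compact Python sketch of Speck128/128 encryption, following the published specification as best recalled here (rotation amounts 8 and 3, 32 rounds, 64-bit words); treat it as an illustration rather than a vetted implementation.

    MASK = 2**64 - 1            # 64-bit words for the Speck128/128 variant
    ALPHA, BETA, ROUNDS = 8, 3, 32

    def ror(x, r):
        # Rotate a 64-bit word right by r bits.
        return ((x >> r) | (x << (64 - r))) & MASK

    def rol(x, r):
        # Rotate a 64-bit word left by r bits.
        return ((x << r) | (x >> (64 - r))) & MASK

    def round_fn(x, y, k):
        # One ARX round: modular addition, rotations, and XOR.
        x = ((ror(x, ALPHA) + y) & MASK) ^ k
        y = rol(y, BETA) ^ x
        return x, y

    def key_schedule(l0, k0):
        # Two-word key: expand by running the round function on the key
        # words, using the round index as the round key.
        ks, l, k = [k0], l0, k0
        for i in range(ROUNDS - 1):
            l, k = round_fn(l, k, i)
            ks.append(k)
        return ks

    def encrypt(x, y, l0, k0):
        # Encrypt the two-word block (x, y) under the two-word key (l0, k0).
        for k in key_schedule(l0, k0):
            x, y = round_fn(x, y, k)
        return x, y

The entire cipher is addition, rotation, and XOR on machine words, which is what makes the software performance claim plausible at a glance.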