International Association for Cryptologic Research

IACR News Central

Here you can see all recent updates to the IACR webpage.

8 February 2016
Event date: 25 July 2016
Submission deadline: 15 May 2016
ePrint Report Speed Optimizations in Bitcoin Key Recovery Attacks Nicolas Courtois, Guangyan Song, Ryan Castellucci
In this paper we study and give the first detailed benchmarks on existing implementations of the secp256k1 elliptic curve used by at least hundreds of thousands of users of Bitcoin and other cryptocurrencies. Our implementation improves the state of the art by a factor of 2.5, with a focus on the cases where side-channel attacks are not a concern and a large quantity of RAM is available. As a result, we are able to scan the Bitcoin blockchain for weak keys faster than any previous implementation. We also give some examples of passwords which we have cracked, showing that brain wallets are not secure in practice even for quite complex passwords.
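The brain-wallet weakness described above is easy to illustrate. The sketch below (Python, using the third-party ecdsa package; brainwallet_keypair is a hypothetical helper name, not code from the paper) derives a secp256k1 private key directly from a passphrase hash and computes the public-key hash that appears on the blockchain; a scanner can do the same for large candidate-password lists and compare against observed addresses. Base58Check address encoding is omitted.

    import hashlib
    import ecdsa  # third-party package, e.g. "pip install ecdsa"

    def brainwallet_keypair(passphrase: str):
        # Classic brain wallets derive the private key as SHA-256(passphrase).
        priv = hashlib.sha256(passphrase.encode()).digest()
        sk = ecdsa.SigningKey.from_string(priv, curve=ecdsa.SECP256k1)
        pub = b"\x04" + sk.get_verifying_key().to_string()   # uncompressed public key
        # hash160 of the public key is the core of a Bitcoin address
        # (requires RIPEMD-160 support in the local OpenSSL build).
        h160 = hashlib.new("ripemd160", hashlib.sha256(pub).digest()).digest()
        return priv, h160

    # A scanner hashes candidate passphrases and compares the resulting
    # hash160 values against those seen in blockchain outputs.
    _, h160 = brainwallet_keypair("correct horse battery staple")
    print(h160.hex())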
ePrint Report Breaking the Sub-Exponential Barrier in Obfustopia Sanjam Garg, Omkant Pandey, Akshayaram Srinivasan, Mark Zhandry
Indistinguishability obfuscation (\io) has emerged as a surprisingly powerful notion. Almost all known cryptographic primitives can be constructed from general purpose \io\ and other minimalistic assumptions such as one-way functions. The primary challenge in this direction of research is to develop novel techniques for using \io\ since \io\ by itself offers virtually no protection to secret information in the underlying programs. When dealing with complex situations, often these techniques have to consider an exponential number of hybrids (usually one per input) in the security proof. This results in a {\em sub-exponential} loss in the security reduction. Unfortunately, this scenario is becoming more and more common and appears to be a fundamental barrier to current techniques.

In this work, we explore the possibility of getting around this {\em sub-exponential loss barrier} in constructions based on \io\ as well as the weaker notion of {\em functional encryption} (\fe). Towards this goal, we achieve the following results:

\begin{enumerate} \item We construct trapdoor one-way permutations from {\em polynomially-hard} \io\ (and standard one-way permutations). This improves upon the recent result of Bitansky, Paneth, and Wichs (TCC 2016) which requires \io\ of sub-exponential strength.

\item We present a different construction of trapdoor one-way permutations based on standard, polynomially-secure, public-key functional encryption. This qualitatively improves upon our first result since \fe\ is a weaker primitive than \io\ --- it can be based on polynomially-hard assumptions on multi-linear maps whereas \io\ inherently seems to require assumptions of sub-exponential strength.

\item We present a construction of {\em universal samplers} also based only on polynomially-secure public-key \fe. Universal samplers, introduced in the work of Hofheinz, Jager, Khurana, Sahai, Waters and Zhandry (EPRINT 2014), are an appealing notion which allows a {\em single} trusted setup for {\em any} protocol. As an application of this result, we construct a {\em non-interactive multiparty key exchange} (NIKE) protocol for an unbounded number of users without a trusted setup. Prior to this work, such constructions were only known from indistinguishability obfuscation. \end{enumerate}

In obtaining our results, we build upon and significantly extend the techniques of Garg, Pandey, and Srinivasan (EPRINT 2015) introduced in the context of reducing $\PPAD$-hardness to polynomially-secure \io\ and \fe.
ePrint Report Signature Schemes with Efficient Protocols and Dynamic Group Signatures from Lattice Assumptions Benoit Libert, San Ling, Fabrice Mouhartem, Khoa Nguyen, Huaxiong Wang
A recent line of works - initiated by Gordon, Katz and Vaikuntanathan (Asiacrypt 2010) - gave lattice-based constructions allowing users to authenticate while remaining hidden in a crowd. Despite five years of efforts, known constructions are still limited to static sets of users, which cannot be dynamically updated. This work provides new tools enabling the design of anonymous authentication systems whereby new users can join the system at any time.

Our first contribution is a signature scheme with efficient protocols, which allows users to obtain a signature on a committed value and subsequently prove knowledge of a signature on a committed message. This construction is well-suited to the design of anonymous credentials and group signatures. It indeed provides the first lattice-based group signature supporting dynamically growing populations of users.

As a critical component of our group signature, we provide a simple joining mechanism for introducing new group members using our signature scheme. This technique is combined with zero-knowledge arguments allowing registered group members to prove knowledge of a secret short vector of which the corresponding public syndrome was certified by the group manager. These tools provide similar advantages to those of structure-preserving signatures in the realm of bilinear groups. Namely, they allow group members to generate their own public key without having to prove knowledge of the underlying secret key. This results in a two-message joining protocol supporting concurrent enrollments, which can be used in other settings such as group encryption.

Our zero-knowledge arguments are presented in a unified framework where: (i) The involved statements reduce to arguing possession of a $\{-1,0,1\}$-vector $\mathbf{x}$ with a particular structure and satisfying $\mathbf{P}\cdot \mathbf{x} = \mathbf{v} \bmod q$ for some public matrix $\mathbf{P}$ and vector $\mathbf{v}$; (ii) The reduced statements can be handled using permuting techniques for Stern-like protocols. Our framework can serve as a blueprint for proving many other relations in lattice-based cryptography.
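As a minimal illustration of the reduced statement in (i), the following sketch (Python with numpy; toy parameters chosen here, not the paper's) merely checks that a witness x with entries in {-1,0,1} satisfies P·x = v mod q; the Stern-like zero-knowledge protocol that proves this without revealing x is not shown.

    import numpy as np

    q = 257
    rng = np.random.default_rng()
    P = rng.integers(0, q, size=(4, 8))            # public matrix
    x = rng.choice([-1, 0, 1], size=8)             # secret short witness
    v = (P @ x) % q                                # public syndrome

    def satisfies_relation(P, x, v, q):
        # The relation proven in zero knowledge: x is a {-1,0,1}-vector
        # and P.x = v (mod q).
        return set(np.unique(x)) <= {-1, 0, 1} and np.array_equal((P @ x) % q, v)

    assert satisfies_relation(P, x, v, q)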
ePrint Report On the Complexity of Scrypt and Proofs of Space in the Parallel Random Oracle Model Joël Alwen, Binyi Chen, Chethan Kamath, Vladimir Kolmogorov, Krzysztof Pietrzak, Stefano Tessaro
We investigate lower bounds in terms of time and memory on the {\em parallel} complexity of an adversary $\cal A$ computing labels of randomly selected challenge nodes in directed acyclic graphs, where the $w$-bit label of a node is the hash $H(.)$ (modelled as a random oracle with $w$-bit output) of the labels of its parents. Specific instances of this general problem underlie both proofs-of-space protocols [Dziembowski et al. CRYPTO'15] as well as memory-hardness proofs including {\sf scrypt}, a widely deployed password hashing and key-derivation function which is e.g. used within Proofs-of-Work for digital currencies like Litecoin.

Current lower bound proofs for these problems only consider {\em restricted} algorithms $\cal A$ which perform only a single $H(.)$ query at a time and which only store individual labels (but not arbitrary functions thereof). This paper substantially improves this state of affairs. Our first set of results shows that even when allowing multiple parallel $H(.)$ queries, the ``cumulative memory complexity'' (CMC), as recently considered by Alwen and Serbinenko [STOC '15], of ${\sf scrypt}$ is at least $w \cdot (n/\log(n))^2$, when ${\sf scrypt}$ invokes $H(.)$ $n$ times. Our lower bound holds for adversaries which can store (1) arbitrary labels (i.e., random oracle outputs) and (2) certain natural functions of these labels, e.g., linear combinations. The exact power of such adversaries is captured via the combinatorial abstraction of parallel ``entangled'' pebbling games on graphs, which we introduce and study.

We introduce a combinatorial quantity $\gamma_n$ and under the conjecture that it is upper bounded by some constant, we show that the above lower bound on the CMC also holds for arbitrary algorithms $\cal A$, storing in particular arbitrary functions of their labels. We also show that under the same conjecture, the {\em time complexity} of computing the label of a random node in a graph on $n$ nodes (given an initial $kw$-bit state) reduces tightly to the time complexity for entangled pebbling on the same graph (given an initial $k$-node pebbling). Under the conjecture, this solves the main open problem from the work of Dziembowski et al.

In fact, we note that every non-trivial upper bound on $\gamma_n$ will lead to the first non-trivial bounds for general adversaries for this problem.
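For concreteness, the sketch below (Python; a hypothetical toy graph, not the actual scrypt data flow, whose challenges are data-dependent) computes the hash labeling described in the first paragraph: each node's w-byte label is the hash of its index and the labels of its parents, with the random oracle H instantiated here by truncated SHA-256.

    import hashlib

    def labels(parents, w=32):
        # parents[i] lists the parents of node i (all with index < i),
        # so nodes are processed in topological order.
        lab = []
        for i, ps in enumerate(parents):
            h = hashlib.sha256()
            h.update(i.to_bytes(4, "big"))
            for p in ps:
                h.update(lab[p])
            lab.append(h.digest()[:w])
        return lab

    # A simple chain of 8 nodes: node i depends on node i-1.
    chain = [[]] + [[i] for i in range(7)]
    print(labels(chain)[-1].hex())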
7 February 2016
The only known way to achieve Attribute-based Fully Homomorphic Encryption (ABFHE) is through indistinguishability obfuscation. The best we can do at the moment without obfuscation is Attribute-Based Leveled FHE, which allows circuits of an a priori bounded depth to be evaluated, and which has been achieved from the Learning with Errors (LWE) assumption. However, we know of no other way without obfuscation to construct a scheme that can evaluate circuits of unbounded depth. In this paper, we present an ABFHE scheme that can evaluate circuits of unbounded depth but with one limitation: there is a bound N on the number of inputs that can be used in a circuit evaluation. The bound N can be thought of as a bound on the number of independent senders. Our scheme allows N to be exponentially large, so we can set the parameters such that there is no practical limitation on the number of inputs. Our construction relies on multi-key FHE and leveled ABFHE, both of which have been realized from LWE, and therefore we obtain a concrete scheme that is secure under LWE.
ePrint Report Haraka - Efficient Short-Input Hashing for Post-Quantum Applications Stefan Kölbl, Martin M. Lauridsen, Florian Mendel, Christian Rechberger
Many efficient cryptographic hash function design strategies have been explored recently, not least because of the SHA-3 competition. Almost exclusively, these designs are geared towards good performance for long inputs. However, various use cases exist where performance on short inputs matters more. An example is HMAC, and such functions also constitute the bottleneck of various hash-based signature schemes like SPHINCS, or XMSS, which is currently under standardization. Secure functions specifically designed for such applications are scarce. In this paper, we fill this gap by proposing two short-input hash functions (or rather, simply compression functions) exploiting instructions on modern CPUs that support the AES. To our knowledge these proposals are the fastest on modern high-end CPUs, reaching throughputs below one cycle per hashed byte even for short inputs, while still having a very low latency of no more than 60 cycles. Under the hood, this result comes with several innovations.

First, we study whether the number of rounds for said functions can be reduced if collision resistance is not expected, but only second-preimage resistance. The conclusion is: only a little.

Second, since their inception AES-like designs allow for supportive security arguments by means of counting and bounding the number of active S-boxes. However, this ignores powerful attack vectors using truncated differentials, of which rebound attacks are a popular example. With our design, we develop for the first time a general tool-based method to include arguments against attack vectors using truncated differentials.
Bent functions exhibit some vital properties among combinatorial objects. Their links to combinatorics, cryptography and coding theory attract the scientific community to construct new classes of bent functions. The complete characterisation of bent functions is still unexplored, but several constructions over different algebraic structures are in progress. In this paper we propose a generalized Maiorana-McFarland construction of bent functions from Galois rings.
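To make the construction concrete, here is a small sketch (Python) of the classical binary Maiorana-McFarland construction f(x,y) = <x, pi(y)> XOR g(y) together with a Walsh-transform check of bentness for n = 4 variables; the Galois-ring generalization proposed in the paper is not reproduced here.

    from itertools import product

    m = 2                       # f is defined on n = 2m = 4 variables
    size = 1 << m

    def inner(a, b):            # inner product on GF(2)^m, vectors encoded as ints
        return bin(a & b).count("1") & 1

    pi = [2, 0, 3, 1]           # a permutation of GF(2)^m
    g  = [0, 1, 1, 0]           # an arbitrary Boolean function on GF(2)^m

    def f(x, y):                # Maiorana-McFarland: f(x, y) = <x, pi(y)> XOR g(y)
        return inner(x, pi[y]) ^ g[y]

    def walsh(a, b):            # f is bent iff |W_f| = 2^(n/2) everywhere
        return sum((-1) ** (f(x, y) ^ inner(a, x) ^ inner(b, y))
                   for x, y in product(range(size), repeat=2))

    assert all(abs(walsh(a, b)) == 1 << m for a, b in product(range(size), repeat=2))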
5 February 2016
Impossible differential and zero correlation linear cryptanalysis are two of the most important cryptanalytic vectors. To characterize the impossible differentials and zero correlation linear hulls which are independent of the choices of the non-linear components, Sun et al. proposed at CRYPTO 2015 the notion of the structure deduced by a block cipher. Based on that, we concentrate in this paper on the security of the SPN structure and Feistel structure with SP-type round functions. Firstly, we prove that for an SPN structure, if \alpha_1\rightarrow\beta_1 and \alpha_2\rightarrow\beta_2 are possible differentials, \alpha_1|\alpha_2\rightarrow\beta_1|\beta_2 is also a possible differential, i.e., the OR "|" operation preserves differentials. Secondly, we show that for an SPN structure, there exists an r-round impossible differential if and only if there exists an r-round impossible differential \alpha\not\rightarrow\beta where the Hamming weights of both \alpha and \beta are 1. Thus for an SPN structure operating on m bytes, the computational complexity of deciding whether there exists an impossible differential can be reduced from O(2^{2m}) to O(m^2). Thirdly, we associate a primitive index with the linear layers of SPN structures. Based on matrix theory over integer rings, we prove that the length of impossible differentials of an SPN structure is upper bounded by the primitive index of the linear layers. As a result we show that, unless the details of the S-boxes are considered, there do not exist 5-round impossible differentials for the AES and ARIA. Lastly, based on the links between impossible differentials and zero correlation linear hulls, we project these results on impossible differentials to zero correlation linear hulls. It is interesting to note that some of our results also apply to the Feistel structures with SP-type round functions.
Known methods for obfuscating a circuit need to represent the circuit as a branching program and then use a multilinear map to encrypt the branching program. Multilinear maps are, however, too inefficient for encrypting the branching program. We found a dynamic encoding method which effectively singles out different inputs in the context of the matrix randomization technique of Kilian and Gentry et al., so that multilinear maps are no longer needed. To make the method work, we need the branching programs to be regular. For such branching programs, we also give the most efficient constructions for NC1 circuits. This results in a much more efficient core obfuscator for NC1 circuits.
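The matrix randomization technique of Kilian referred to above can be illustrated in a few lines. The sketch below (Python with sympy; toy parameters, and only the randomization step, not the paper's dynamic encoding or any obfuscator) rerandomizes a length-L matrix branching program as M_i -> R_{i-1}^{-1} M_i R_i and checks that the telescoping product, and hence the program's functionality, is unchanged.

    from sympy import Matrix, randMatrix

    p = 101                                    # small prime modulus
    L, d = 4, 3                                # program length, matrix dimension
    mod = lambda M: M.applyfunc(lambda e: e % p)

    def rand_invertible():
        while True:
            M = mod(randMatrix(d, d))
            if M.det() % p != 0:
                return M

    # A toy matrix branching program; its "evaluation" is the ordered product mod p.
    prog = [mod(randMatrix(d, d)) for _ in range(L)]

    def product_of(ms):
        out = Matrix.eye(d)
        for M in ms:
            out = mod(out * M)
        return out

    # Kilian-style randomization with R_0 = R_L = I: individual matrices are
    # rerandomized, but the telescoping product is preserved.
    R = [Matrix.eye(d)] + [rand_invertible() for _ in range(L - 1)] + [Matrix.eye(d)]
    randomized = [mod(R[i].inv_mod(p) * prog[i] * R[i + 1]) for i in range(L)]

    assert product_of(prog) == product_of(randomized)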
3 February 2016
Event date: 16 December to 18 December 2016
Submission deadline: 30 June 2016
Notification: 5 August 2016
The list of papers accepted to FSE 2016 is now online at http://fse.rub.de/accepted.html
2 February 2016
Dear IACR members,

The past year has again seen vibrant research activity in cryptology and many successful IACR events. For example, the IACR Cryptology Schools program has gained momentum with four schools sponsored in 2015. Another significant change was the introduction of parallel sessions at Eurocrypt, Crypto, and Asiacrypt to cope with the increased number of high-quality papers (more on that later). Last but not least, Alexandra Boldyreva has joined as co-editor of the Cryptology ePrint Archive, replacing Tal Rabin.

Board of Directors

As it happens every year, the composition of the Board has changed for 2016. I'd like to thank the leaving Board members, Svetla Petkova-Nikova, Steven Galbraith, Thomas Ristenpart, and Tom Berson, for their contributions to the IACR and to cryptology research.

A very special *thank you* from my side goes to Tom Berson, Fellow of the IACR, former president, secretary, and much more: He was among the founding members of this association and has held almost every position since 1983. Through his work in building the IACR he helped individuals in the field make careers in research and technology, and his dedication to the organization positioned cryptology to become the independent and vibrant domain it is today. We are sure he will enjoy board-meeting-free Sundays at the conferences in the future!

Joining the Board are three colleagues: Phil Rogaway has been newly elected to the Board in 2015, and Steven Myers and SM Yiu join in their roles as General Chairs of Crypto and Asiacrypt in 2017 -- welcome!

Conference attendance

As many members have asked about attendance at our conferences I am including here the attendee counts at the 2015 events:

Asiacrypt 2015: 200
CHES 2015: 448
Crypto 2015: 322
Eurocrypt 2015: 324
FSE 2015: 136
PKC 2015: 144
TCC 2015: 145


Referendum on parallel sessions and bylaws modifications

The Board had suggested for many years that Program Chairs find a way to accommodate the increased number of submitted papers, including organizing parallel sessions. In 2014 the Board directly asked the Program Chairs of the 2015 general conferences (Eurocrypt, Crypto, Asiacrypt) to introduce parallel sessions in the program. The feedback at the conferences was positive. As promised, we will now organize a formal membership vote on the question of continuing with parallel sessions in this way.

With the same referendum we also propose to change the bylaws in minor ways. The document currently distinguishes between "IACR conferences" (Eurocrypt, Crypto, Asiacrypt) and "IACR workshops" (CHES, FSE, PKC, TCC). Since the latter have by far surpassed the common notion of a workshop in scope and attendance, we will rename them to "IACR Area Conferences". Some further small changes are also proposed.

You will receive an email from the Helios voting system with your credentials to vote. The full text of the referendums appears at iacr.org/elections/2016-vote/announcement.html.

Conferences

The first IACR conference in 2016 has already taken place (TCC 2016-A in Tel Aviv). Our next conferences are:
  • Public Key Cryptography, March 6-March 9, 2016, Taipei, Taiwan.
    http://troll.iis.sinica.edu.tw/pkc16/
  • Fast Software Encryption, March 20-March 23, 2016, Bochum, Germany.
    http://fse.rub.de/
  • Eurocrypt 2016, May 8-May 12, 2016, Vienna, Austria
    http://ist.ac.at/eurocrypt2016/
I am looking forward to seeing many of you at the IACR events this year.

Closing

This is *your* IACR: Please share your feedback and suggestions for improving IACR's services. Contact me, other Board members, or the conference chairs, and feel free to use other communication channels.

Best regards,

Christian Cachin
President, IACR
Event Calendar: ARES EU Projects Symposium 2016, Salzburg, Austria, 31 August - 2 September 2016
Event date: 31 August to 2 September 2016
Submission deadline: 29 February 2016
Event date: 31 August to 2 September 2016
Submission deadline: 13 March 2016
Notification: 13 May 2016
ePrint Report Tightly CCA-Secure Encryption without Pairings Romain Gay, Dennis Hofheinz, Eike Kiltz, Hoeteck Wee
We present the first CCA-secure public-key encryption scheme based on DDH where the security loss is independent of the number of challenge ciphertexts and the number of decryption queries. Our construction extends also to the standard k-Lin assumption in pairing-free groups, whereas all prior constructions starting with Hofheinz and Jager (Crypto ’12) rely on the use of pairings. Moreover, our construction improves upon the concrete efficiency of existing schemes, reducing the ciphertext overhead by about half (to only 3 group elements under DDH), in addition to eliminating the use of pairings. We also show how to use our techniques in the NIZK setting. Specifically, we construct the first tightly simulation-sound designated-verifier NIZK for linear languages without pairings. Using pairings, we can turn our construction into a highly optimized publicly verifiable NIZK with tight simulation-soundness.
Universal circuits (UCs) can be programmed to evaluate any circuit of a given size $k$. They provide elegant solutions in various application scenarios, e.g. for private function evaluation (PFE) and for improving the flexibility of attribute-based encryption (ABE) schemes. The optimal size of a universal circuit is proven to be $\Omega(k\log k)$. Valiant (STOC'76) proposed a size-optimized UC construction, which has never been put into practice since. The only implementation of universal circuits was provided by Kolesnikov and Schneider (FC'08), with size $\mathcal{O}(k\log^2 k)$.

In this paper, we refine the size of Valiant's UC and further improve the construction by (at least) $2k$. We show that, due to recent optimizations and our improvements, it is the best solution to apply in the case of circuits with a constant number of inputs and outputs. When the number of inputs or outputs is linear in the number of gates, we propose a more efficient hybrid solution based on the two existing constructions. We validate the practicality of Valiant's UC by giving an example implementation for PFE using these size-optimized UCs.
Spritz is a stream cipher proposed by Rivest and Schuldt at the rump session of CRYPTO 2014. It is intended to be a replacement of the popular RC4 stream cipher. In this paper we propose distinguishing attacks on the full Spritz, based on {\it a short-term bias} in the first two bytes of a keystream and {\it a long-term bias} in the first two bytes of every cycle of $N$ keystream bytes, where $N$ is the size of the internal permutation. Our attacks are able to distinguish a keystream of the {\it full} Spritz from a random sequence with samples of first two bytes produced by $2^{44.8}$ multiple key-IV pairs or $2^{60.8}$ keystream bytes produced by a single key-IV pair. These biases are also useful in the event of plaintext recovery in a broadcast attack. In the second part of the paper, we look at a state recovery attack on Spritz, in a special situation when the cipher enters a class of weak states. We determine the probability of encountering such a state, and demonstrate a state recovery algorithm that betters the $2^{1400}$ step algorithm of Ankele et al. at Latincrypt 2015.
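As a rough guide to where sample counts such as those above come from: distinguishing a distribution whose event probability deviates from uniform by epsilon generally needs on the order of 1/epsilon^2 samples (constants depend on the desired advantage). The snippet below (Python; the bias value is a hypothetical placeholder, not the bias reported for Spritz) just evaluates this rule of thumb.

    import math

    def log2_samples_needed(bias):
        # Order-of-magnitude rule for distinguishers: ~ 1 / bias^2 samples.
        return math.log2(1.0 / (bias * bias))

    print(log2_samples_needed(2 ** -20))   # -> 40.0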
The Algebraic Eraser has been gaining prominence as SecureRF, the company commercializing the algorithm, increases its marketing reach. The scheme is claimed to be well-suited to IoT applications but a lack of detail in available documentation has hampered peer-review. Recently more details of the system have emerged after a tag authentication protocol built using the Algebraic Eraser was proposed for standardization in ISO/IEC SC31 and SecureRF provided an open public description of the protocol. In this paper we describe a range of attacks on this protocol that include very efficient and practical tag impersonation as well as partial, and total, tag secret key recovery. Most of these results have been practically verified, they contrast with the 80-bit security that is claimed for the protocol, and they emphasize the importance of independent public review for any cryptographic proposal.
In this paper we study what happens to sets when we iteratively apply lossy (round) mappings to them. We describe the information loss as imbalances of parities of intermediate distributions and show that their evolution is governed by the correlation matrices of the mappings. At the macroscopic level we show that iterating lossy mappings results in an increase of a quantity we call "total imbalance". We quantify the increase in total imbalance as a function of the number of iterations and of round mapping characteristics. At the microscopic level we show that the imbalance of a parity located in some round, dubbed "final", is the sum of distinct terms. Each of these terms consists of the imbalance of a parity located at the output of a round, multiplied by the sum of the correlation contributions of all linear trails between that parity and the final parity. We illustrate our theory with experimental data. The developed theory can be applied whenever lossy mappings are repeatedly applied to a state. This is the case in many modes of block ciphers and permutations for, e.g., iterated hashing or self-synchronizing stream encryption. The main reason why we have developed it, however, is to apply it to the study of the security implications of using non-uniform threshold schemes as a countermeasure against differential power and electromagnetic analysis.
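For readers unfamiliar with correlation matrices, the sketch below (Python with numpy; an arbitrary 3-bit lossy mapping chosen here for illustration, not one from the paper) computes the standard correlation matrix C[u,v] = 2^{-n} sum_x (-1)^{<u,f(x)> XOR <v,x>} and verifies the property the abstract relies on: the vector of parity imbalances of a distribution pushed through the mapping equals C times the input imbalance vector.

    import numpy as np

    par = lambda a: bin(a).count("1") & 1           # parity of an integer's bits

    def correlation_matrix(f, n):
        # C[u, v] = 2^-n * sum_x (-1)^( <u, f(x)> XOR <v, x> )
        C = np.zeros((1 << n, 1 << n))
        for u in range(1 << n):
            for v in range(1 << n):
                C[u, v] = sum((-1) ** (par(u & f(x)) ^ par(v & x))
                              for x in range(1 << n)) / (1 << n)
        return C

    table = [0, 3, 3, 5, 6, 1, 1, 4]                # a lossy 3-bit mapping
    C = correlation_matrix(lambda x: table[x], 3)

    # Push a random input distribution p through the mapping ...
    rng = np.random.default_rng()
    p = rng.dirichlet(np.ones(8))
    q = np.zeros(8)
    for x in range(8):
        q[table[x]] += p[x]

    # ... and check that the output imbalances are governed by C.
    imb = lambda d: np.array([sum(d[x] * (-1) ** par(u & x) for x in range(8))
                              for u in range(8)])
    assert np.allclose(imb(q), C @ imb(p))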
The security of many cryptographic schemes has been based on special instances of the Learning with Errors (LWE) problem, e.g., Ring-LWE, LWE with binary secret, or LWE with ternary error. However, recent results show that some subclasses are weaker than expected. In this work we show that LWE with binary error, introduced by Micciancio and Peikert, is one such subclass. We achieve this by applying the Howgrave-Graham attack on NTRU, which is a combination of lattice techniques and a Meet-in-the-Middle approach, to this setting. We show that the attack outperforms all other currently existing algorithms for several natural parameter sets. For instance, for the parameter set n = 256, m = 512, q = 256, this attack on LWE with binary error only requires 2^85 operations, while the previously best attack requires 2^117 operations. We additionally present a complete and improved analysis of the attack, using analytic techniques. Finally, based on the attack, we give concrete hardness estimates that can be used to select secure parameters for schemes based on LWE with binary error.
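To fix notation, the sketch below (Python with numpy; deliberately tiny toy parameters, not the n = 256, m = 512, q = 256 set analysed in the paper) generates an instance of LWE with binary error: the attacker sees (A, b = A·s + e mod q) with e in {0,1}^m and must recover s.

    import numpy as np

    n, m, q = 16, 32, 256                      # toy parameters only
    rng = np.random.default_rng()

    A = rng.integers(0, q, size=(m, n))        # uniform public matrix
    s = rng.integers(0, q, size=n)             # secret
    e = rng.integers(0, 2, size=m)             # binary error, entries in {0, 1}
    b = (A @ s + e) % q                        # published together with A

    # The small, binary error is what makes the combined lattice /
    # meet-in-the-middle (Howgrave-Graham style) approach applicable.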
The block cipher Simon has a very simple round function. This simplicity allows us to compute the correlation matrix of the round function. Despite its simplicity, Simon exhibits some very interesting phenomena with respect to linear cryptanalysis. The combination of an expanding linear function and a compressing nonlinear function creates one-round hulls. These hulls complicate the estimation of the correlation contribution of trails as well as the potential of linear hulls. They cause difficulties in the commonly used methods to estimate the cipher's security against linear cryptanalysis. Finally, because most hulls contain many trails with similar correlation contributions, we can demonstrate erratic behaviour of Matsui's Algorithm 1 when applied in the default way. We also show how Algorithm 1 can be adapted to this situation and recover multiple key bits.
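For reference, the round function in question is very short indeed. The sketch below (Python, following the published Simon specification for the 32/64 variant with 16-bit words) shows the Feistel step whose rotation-based expansion and AND-based compression create the one-round hulls discussed above; the correlation-matrix computation itself is not reproduced.

    W = 16                                  # word size of Simon32/64
    MASK = (1 << W) - 1

    def rol(x, r):                          # rotate a W-bit word left by r
        return ((x << r) | (x >> (W - r))) & MASK

    def f(x):                               # f(x) = (x<<<1 & x<<<8) XOR (x<<<2)
        return (rol(x, 1) & rol(x, 8)) ^ rol(x, 2)

    def simon_round(x, y, k):               # one keyed Feistel round
        return y ^ f(x) ^ k, x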
We investigate how to safely export additional cryptographic keys from secure channel protocols, modelled with the authenticated and confidential channel establishment (ACCE) security notion. For example, the EAP-TLS protocol uses the Transport Layer Security (TLS) handshake to output an additional shared secret which can be used for purposes outside of TLS, and the RFC 5705 standard specifies a general mechanism for exporting keying material from TLS. We show that, for a class of ACCE protocols we call “TLS-like” protocols, the EAP-TLS transformation can be used to export an additional key, and that the result is a secure AKE protocol in the Bellare–Rogaway model. Interestingly, we are able to carry out the proof without looking at the specifics of the TLS protocol itself (beyond the notion that it is “TLS-like”), but rather are able to use the ACCE property in a semi black-box way. To facilitate our modular proof, we develop a novel technique, notably an encryption-based key checking mechanism that is used by the security reduction. Our results imply that EAP-TLS using secure TLS 1.2 cipher-suites is a secure authenticated key exchange protocol.
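As a generic illustration of exporting extra keying material from an already-established channel secret under a label and context, here is a sketch (Python) using HKDF-Expand with HMAC-SHA256 per RFC 5869. Note this is only in the spirit of an exporter; the actual RFC 5705 exporter for TLS 1.2 uses the TLS PRF with the client and server random values, and EAP-TLS fixes a specific label.

    import hmac, hashlib

    def hkdf_expand(prk, info, length):
        # HKDF-Expand (RFC 5869): T(i) = HMAC(prk, T(i-1) | info | i).
        out, block, counter = b"", b"", 1
        while len(out) < length:
            block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
            out += block
            counter += 1
        return out[:length]

    def export_key(channel_secret, label, context, length=32):
        # Labeled export of additional keying material (illustrative only).
        return hkdf_expand(channel_secret, label + b"|" + context, length)

    extra = export_key(b"\x00" * 32, b"EXPORTER-example-label", b"session-context")
    print(extra.hex())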
31 January 2016
ePrint Report Intel SGX Explained Victor Costan, Srinivas Devadas
Intel's Software Guard Extensions (SGX) is a set of extensions to the Intel architecture that aims to provide integrity and privacy guarantees to security-sensitive computation performed on a computer where all the privileged software (kernel, hypervisor, etc) is potentially malicious.

This paper analyzes Intel SGX, based on the 3 papers that introduced it, on the Intel Software Developer's Manual (which supersedes the SGX manuals), on an ISCA 2015 tutorial, and on two patents. We use the papers, reference manuals, and tutorial as primary data sources, and only draw on the patents to fill in missing information.

This paper's contributions are a summary of the Intel-specific architectural and micro-architectural details needed to understand SGX, a detailed and structured presentation of the publicly available information on SGX, a series of intelligent guesses about some important but undocumented aspects of SGX, and an analysis of SGX's security properties.
This paper shows how several ring-LWE based key exchange protocols can be broken, under the assumption that the same key share is used for multiple exchanges. This indicates that, if these key exchange protocols are used, then a fresh key share must be generated for each exchange, and that these key exchange protocols cannot be used as a drop-in replacement for designs which use Diffie-Hellman static key shares.
RoadRunneR is a small and fast bitslice lightweight block cipher for low-cost 8-bit processors, proposed by Adnan Baysal and Sühap Şahin at the LightSec 2015 conference. While most software-efficient lightweight block ciphers lack a security proof, RoadRunneR's security is provable against differential and linear attacks. RoadRunneR is a Feistel-structure block cipher with a 64-bit block size. RoadRunneR-80 is a version with an 80-bit key and 10 rounds, and RoadRunneR-128 is a version with a 128-bit key and 12 rounds. In this paper, we obtain 5-round truncated differentials of RoadRunneR-80 and RoadRunneR-128 with probability 2^{−56}. Using the truncated differentials, we give a truncated differential attack on 7-round RoadRunneR-128 without whitening keys, with data complexity of 2^{55} chosen plaintexts, time complexity of 2^{121} encryptions and memory complexity of 2^{68}. This is the first known attack on the RoadRunneR block cipher.
While DNSSEC securely provides authenticity and integrity to the domain name system (DNS), it also creates a new security vulnerability called zone enumeration that allows an adversary that asks a small number of targeted DNS queries to learn the IP addresses of all domain names in a zone. An enumerated zone can be used as ''a source of probable e-mail addresses for spam, or as a key for multiple WHOIS queries to reveal registrant data that many registries may have legal obligations to protect'' [RFC 5155] (e.g., per EU data protection laws), or to create a toehold for more complex attacks. As the Internet of things becomes increasingly ubiquitous, it also becomes increasingly important to keep the names and addresses of these ''things'' (e.g., thermostats, fridges, baby monitors) away from remote attackers.

In previous work we solved DNSSEC's zone enumeration problem by introducing NSEC5, a cryptographic construction based on RSA digital signatures. NSEC5 provides authenticated denial of existence, i.e., it is used to answer DNS queries that have negative responses (e.g., NXDOMAIN). RSA-based NSEC5 was recently submitted for specification in an Internet draft [draft-vcelak-nsec5-01], and a working implementation of a nameserver that supports RSA-based NSEC5 is also available [https://github.com/dipapado/nsec5-implementation].

However, recent years have seen the DNSSEC community aiming to replace RSA with elliptic curve cryptography (EC), in order to shorten the length of DNSSEC responses. Therefore, in this paper we present a new variant of NSEC5 that uses elliptic curve cryptography (ECC) to produce shorter NSEC5 responses. If a zone is signed with ECDSA at the 128-bit security level and also uses our new ECC-based NSEC5 scheme, its denial-of-existence responses (response code NXDOMAIN) will be about 2 times shorter than those of a zone signed with 2048-bit RSA and RSA-based NSEC5. Moreover, our ECC-based NSEC5 has response lengths that are comparable to NSEC3, DNSSEC's current authenticated-denial-of-existence mechanism that is vulnerable to zone enumeration via offline dictionary attacks. In fact, if a zone signed with ECDSA at the 128-bit security level also uses our new ECC-based NSEC5 scheme, it will have responses that are shorter than a zone using NSEC3 with 1024-bit RSA and SHA1 (for an 80-bit security level), which is today's dominant deployment configuration.
29 January 2016
Group signatures are an important privacy-enhancing tool that allows messages to be signed anonymously on behalf of a group. A recent feature for group signatures is controllable linkability, where a dedicated linking authority (LA) can determine whether two given signatures stem from the same signer without being able to identify the signer(s). Currently the linking authority is fully trusted, which is often not desirable.

In this paper, we first introduce a generic technique for non-interactive zero-knowledge plaintext equality and inequality proofs. In our setting, the prover is given two ciphertexts and some trapdoor information, but has access to neither the decryption key nor the randomness used to produce the respective ciphertexts. Thus, the prover performs these proofs on unknown plaintexts. Besides a generic technique, we also propose an efficient instantiation that adapts recent results from Blazy et al. (CT-RSA'15), and in particular a combination of Groth-Sahai (GS) proofs (or sigma proofs) and smooth projective hash functions (SPHFs).

While this result may be of independent interest, we use it to realize verifiable controllable linkability for group signatures. Here, the LA is required to non-interactively prove whether or not two signatures link (while it is not able to identify the signers). This significantly reduces the required trust in the linking authority. Moreover, we extend the model of group signatures to cover the feature of verifiable controllable linkability.
We analyze the handshake protocol of TLS 1.3 draft-ietf-tls-tls13-10 (published October 2015). This continues and extends our previous analysis (CCS 2015, Cryptology ePrint Archive 2015) of former TLS 1.3 drafts (draft-ietf-tls-tls13-05 and draft-ietf-tls-tls13-dh-based). Here we show that the full (EC)DHE Diffie-Hellman-based handshake of draft-10 is also secure in the multi-stage key exchange framework of Fischlin and Günther which captures classical Bellare-Rogaway key secrecy for key exchange protocols that derive multiple keys.

We also note that a recent protocol change---the introduction of a NewSessionTicket message for resumption, encrypted under the application traffic key---impairs the protocol modularity and hence our compositional guarantees that ideally would allow an independent analysis of the record protocol. We additionally analyze the pre-shared key modes (with and without ephemeral Diffie-Hellman key), and fit them into the composability framework, addressing composability with the input resumption secret from a previous handshake and of the output session keys.
ePrint Report Cryptanalysis of PRINCE with Minimal Data Shahram Rasoolzadeh, Håvard Raddum
We investigate two attacks on the PRINCE block cipher in the most realistic scenario, when the attacker only has a minimal amount of known plaintext available. The first attack is called Accelerated Exhaustive Search, and is able to recover the key for up to the full 12-round PRINCE with a complexity slightly lower than the security claim given by the designers. The second attack is a meet-in-the-middle attack, where we show how to successfully attack 8- and 10-round PRINCE with only two known plaintext/ciphertext pairs. Both attacks take advantage of the fact that the two middle rounds in PRINCE are unkeyed, so guessing the state before the first middle round gives the state after the second middle round practically for free. These attacks are the fastest known so far in the known-plaintext scenario for the reduced 8- and 10-round versions and the full 12-round version of PRINCE.
