International Association for Cryptologic Research

IACR News Central

Here you can see all recent updates to the IACR webpage.

18 January 2017
ePrint Report Five Rounds are Sufficient and Necessary for the Indifferentiability of Iterated Even-Mansour Yuanxi Dai, Yannick Seurin, John Steinberger, Aishwarya Thiruvengadam
We prove that the 5-round iterated Even-Mansour (IEM) construction (which captures the high-level structure of the class of key-alternating ciphers) with a non-idealized key-schedule (such as the trivial key-schedule, where all round keys are equal) is indifferentiable from an ideal cipher. In a separate result, we also prove that five rounds are necessary by describing an attack against the corresponding 4-round construction. This closes the gap regarding the exact number of rounds for which the IEM construction with a non-idealized key-schedule is indifferentiable from an ideal cipher, which was previously only known to lie between four and twelve.
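For readers unfamiliar with the construction, the following toy Python sketch shows the r-round iterated Even-Mansour cipher with the trivial key schedule (all round keys equal); the 8-bit block size and random permutations are illustrative stand-ins for the idealized public permutations.

import random

BITS = 8                       # toy block size; real analyses use idealized n-bit permutations
MASK = (1 << BITS) - 1

def random_permutation(seed):
    rng = random.Random(seed)
    table = list(range(1 << BITS))
    rng.shuffle(table)
    return table

def iem_encrypt(x, key, perms):
    # r-round IEM with trivial key schedule: add key, apply public permutation, repeat,
    # and add the key once more at the end (r permutations, r+1 key additions)
    state = x & MASK
    for p in perms:
        state = p[state ^ key]
    return state ^ key

perms = [random_permutation(i) for i in range(5)]   # 5 rounds, as in the paper's positive result
print(iem_encrypt(0x3A, key=0x5C, perms=perms))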
Yao's garbled circuits have been extensively used in Secure Function Evaluation (SFE). Several optimizations have been proposed to improve their efficiency. Kolesnikov and Schneider (2008) proposed the free-XOR technique. Naor, Pinkas, and Sumner (1999) introduced the garbled row-reduction technique GRR3, which reduces each garbled gate to three ciphertexts, Pinkas et al. (2009) proposed GRR2, and Zahur, Rosulek, and Evans (2015) introduced the half-gates technique to design a free-XOR-compatible GRR2. The GRR2, half-gates, and free-XOR garbling schemes improve efficiency by leaking the locations of XOR gates in the circuit. This kind of information leakage is acceptable for SFE protocols, though it may be unacceptable for other protocols such as Private Function Evaluation (PFE). For example, these techniques cannot be used to improve the efficiency of existing non-universal-circuit-based constant-round PFE protocols. The first result of this paper is a Gate-Privacy-preserving Garbled Row Reduction technique GPGRR2 for designing garbled circuits with at most two ciphertexts per garbled gate. Compared with the state-of-the-art gate-privacy-preserving garbling scheme GRR3, GPGRR2 reduces the garbled circuit size by at least 33%. The second result is the design of a linear (over the integers) garbling scheme that garbles a single odd gate to one ciphertext. Zahur, Rosulek, and Evans (2015) proved that a linear garbling scheme garbles each odd gate to at least $2$ ciphertexts. Our result shows that Zahur et al.'s result should be formulated more precisely by restricting the ``linear'' concept explicitly to the field $GF(2^t)$. The third result uses the GPGRR2 scheme to reduce the number of ciphertexts in non-universal-circuit-based PFE protocols by 25%.
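As a reminder of why these optimizations reveal gate types, here is a minimal free-XOR sketch in Python (toy 16-byte labels): once every wire's two labels differ by a global secret offset Delta, an XOR gate needs no ciphertexts at all, so its position in the garbled circuit is implicitly disclosed.

import os

KAPPA = 16                              # toy label length in bytes

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

Delta = os.urandom(KAPPA)               # global secret offset chosen by the garbler

def new_wire():
    L0 = os.urandom(KAPPA)              # label encoding bit 0
    return L0, xor_bytes(L0, Delta)     # label encoding bit 1 is L0 xor Delta

a0, a1 = new_wire()
b0, b1 = new_wire()
c0 = xor_bytes(a0, b0)                  # output label for bit 0; no ciphertexts are stored
c1 = xor_bytes(c0, Delta)

# the evaluator simply XORs whatever input labels it holds
assert xor_bytes(a1, b1) == c0 and xor_bytes(a0, b1) == c1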
In this work, we significantly improve the efficiency of non-malleable codes in the split-state model by constructing a code with codeword length |s| + O(k), where |s| is the length of the message and k is the security parameter. This is a substantial improvement over previous constructions, both asymptotically and concretely.

Our construction relies on a new primitive which we define and study, called ℓ-more extractable hash functions. This notion, which may be of independent interest, is strictly stronger than the previous notion of extractable hash by Goldwasser et al. (Eprint '11) and Bitansky et al. (ITCS '12, Eprint '14), yet we can instantiate it under the same assumption used for the previous extractable hash functions (a variant of the Knowledge of Exponent Assumption).
ePrint Report PePTCAP: A Privacy-enhancing Protocol for (Temporary) Car Access Provision Iraklis Symeonidis, Abdelrahaman Aly, Mustafa A. Mustafa, Bart Preneel
This paper proposes a novel physical keyless car sharing protocol that allows users to share their cars conveniently. In spite of many advantages, keyless car sharing systems come with substantial security and privacy challenges. In this work, we design a concrete decentralised and Privacy-enhancing Protocol for (Temporary) Car Access Provision, named PePTCAP. PePTCAP uses multiparty computation based on Shamir's secret sharing scheme, which allows the generation, update, revocation and distribution of an access token for a car in a privacy-preserving manner. In order to resolve disputes and to deal with law enforcement requests, our protocol provides forensic evidence of car incidents based on threshold secret sharing. We perform a security and privacy analysis of PePTCAP, followed by a complexity analysis and practical efficiency and timing estimates for its multiparty computation components under realistic scenarios.
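For context, the threshold sharing underlying the protocol's MPC and forensic-evidence components is Shamir's scheme; the toy Python sketch below (with an illustrative 31-bit prime field) shows (t, n) sharing and Lagrange reconstruction.

import random

P = 2_147_483_647            # toy field modulus (a Mersenne prime)

def share(secret, t, n):
    # split `secret` into n shares; any t of them reconstruct it
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789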
Event date: 29 August to 2 September 2017
Submission deadline: 15 April 2017
Notification: 1 June 2017
Event Calendar PST2017: Privacy, Security and Trust 2017 Calgary, Canada, 28 August - 30 August 2017
Event date: 28 August to 30 August 2017
Submission deadline: 15 May 2017
Notification: 10 July 2017
15 January 2017
In an inner-product functional encryption scheme, the plaintexts are vectors and the owner of the secret key can delegate the ability to compute weighted sums of the coefficients of the plaintext of any ciphertext. Recently, many inner-product functional encryption schemes have been proposed. However, none of the known schemes is secure against chosen-ciphertext attacks (IND-FE-CCA). We present a generic construction of IND-FE-CCA inner-product functional encryption from projective hash functions with homomorphic properties. We show concrete instantiations based on the DCR assumption, the DDH assumption, and, more generally, any Matrix DDH assumption.
ePrint Report Double-base scalar multiplication revisited Daniel J. Bernstein, Chitchanok Chuengsatiansup, Tanja Lange
This paper reduces the number of field multiplications required for scalar multiplication on conservative elliptic curves. For an average 256-bit integer n, this paper's multiply-by-n algorithm takes just 7.47M per bit on twisted Edwards curves -x^2+y^2=1+dx^2y^2 with small d. The previous record, 7.62M per bit, was unbeaten for seven years.

Unlike previous record-setting algorithms, this paper's multiply-by-n algorithm uses double-base chains. The new speeds rely on advances in tripling speeds and on advances in constructing double-base chains. This paper's new tripling formula for twisted Edwards curves takes just 11.4M, and its new algorithm for constructing an optimal double-base chain for n takes just (log n)^{2.5+o(1)} bit operations.

Extending this double-base algorithm to double-scalar multiplications, as in signature verification, takes 8.80M per bit to compute n_1P+n_2Q. Previous techniques used 9.34M per bit.
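To make the object concrete, the following toy Python sketch greedily writes an integer as a signed sum of terms 2^a 3^b; this is only an illustration of what a double-base chain encodes (a schedule of doublings, triplings and additions), not the paper's (log n)^{2.5+o(1)} optimal-chain algorithm.

def closest_2_3_term(n):
    # search for the {2^a 3^b} term closest to n (n >= 1)
    best, a = None, 0
    while 2 ** a <= 2 * n:
        b = 0
        while 2 ** a * 3 ** b <= 2 * n:
            t = 2 ** a * 3 ** b
            if best is None or abs(n - t) < abs(n - best):
                best = t
            b += 1
        a += 1
    return best

def double_base_terms(n):
    # greedy signed decomposition n = sum of +/- 2^a 3^b (not length-optimal)
    terms = []
    while n != 0:
        t = closest_2_3_term(abs(n))
        s = 1 if n > 0 else -1
        terms.append(s * t)
        n -= s * t
    return terms

terms = double_base_terms(314159)
assert sum(terms) == 314159
print(terms)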
14 January 2017
ePrint Report Low-Complexity Cryptographic Hash Functions Benny Applebaum, Naama Haramaty, Yuval Ishai, Eyal Kushilevitz, Vinod Vaikuntanathan
Cryptographic hash functions are efficiently computable functions that shrink a long input into a shorter output while achieving some of the useful security properties of a random function. The most common type of such hash functions is {\em collision resistant} hash functions (CRH), which prevent an efficient attacker from finding a pair of inputs on which the function has the same output.

Despite the ubiquitous role of hash functions in cryptography, several of the most basic questions regarding their computational and algebraic complexity remained open. In this work we settle most of these questions under new, but arguably quite conservative, cryptographic assumptions, whose study may be of independent interest. Concretely, we obtain the following results:

(1) Low-complexity CRH. Assuming the intractability of finding short codewords in natural families of linear error-correcting codes, there are CRH that shrink the input by a constant factor and have a {\em constant algebraic degree} over $Z_2$ (as low as 3), or even {\em constant output locality and input locality}. Alternatively, CRH with an arbitrary polynomial shrinkage can be computed by {\em linear-size} circuits.

(2) Win-win results. If low-degree CRH with good shrinkage {\em do not} exist, this has useful consequences for learning algorithms and data structures.

(3) Degree-2 hash functions. Assuming the conjectured intractability of solving a random system of quadratic equations over $Z_2$, a uniformly random degree-2 mapping is a {\em universal one-way hash function} (UOWHF). UOWHF relaxes CRH by forcing the attacker to find a collision with a random input picked by a challenger. On the other hand, a uniformly random degree-2 mapping is {\em not} a CRH. We leave the existence of degree-2 CRH open, and relate it to open questions on the existence of degree-2 randomized encodings of functions.
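For concreteness, a uniformly random degree-2 mapping over $Z_2$ as in item (3) can be sampled and evaluated as below; the toy Python sketch uses illustrative parameters (n = 32 input bits, m = 16 output bits) and of course carries no security guarantee by itself.

import random

def random_quadratic_map(n, m, seed=0):
    # one random degree-2 polynomial over Z_2 per output bit:
    # coefficients for all monomials x_i*x_j (i <= j), linear terms, and a constant
    rng = random.Random(seed)
    monomials = [(i, j) for i in range(n) for j in range(i, n)]
    rows = [([rng.randrange(2) for _ in monomials],
             [rng.randrange(2) for _ in range(n)],
             rng.randrange(2)) for _ in range(m)]
    def H(x):                       # x: list of n bits, returns m bits
        out = []
        for quad, lin, const in rows:
            bit = const
            bit ^= sum(c & x[i] & x[j] for c, (i, j) in zip(quad, monomials)) & 1
            bit ^= sum(c & xi for c, xi in zip(lin, x)) & 1
            out.append(bit)
        return out
    return H

H = random_quadratic_map(n=32, m=16)
print(H([random.randrange(2) for _ in range(32)]))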
13 January 2017
ePrint Report Privacy-Preserving Classification on Deep Neural Network Hervé Chabanne, Amaury de Wargny, Jonathan Milgram, Constance Morel, Emmanuel Prouff
Neural Networks (NN) are today increasingly used in Machine Learning, where they have become deeper and deeper to accurately model or classify high-level abstractions of data. Their development, however, also gives rise to important data privacy risks. This observation motivated Microsoft researchers to propose a framework called Cryptonets. The core idea is to combine simplifications of the NN with Fully Homomorphic Encryption (FHE) techniques to obtain both confidentiality of the manipulated data and efficient processing. While efficiency and accuracy are demonstrated when the number of non-linear layers is small (e.g., $2$), Cryptonets unfortunately becomes ineffective for deeper NNs, leaving the problem of privacy-preserving classification open in these contexts. This work successfully addresses this problem by combining the original ideas of the Cryptonets solution with the batch normalization principle introduced at ICML 2015 by Ioffe and Szegedy. We experimentally validate the soundness of our approach with a neural network with $6$ non-linear layers. When applied to the MNIST database, it matches the accuracy of the best non-secure versions, thus significantly improving on Cryptonets.
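The numpy sketch below illustrates the ingredient being combined with Cryptonets (not the authors' exact network): batch normalization keeps pre-activations in a narrow range where a low-degree, FHE-friendly polynomial stays close to ReLU; the quadratic coefficients are a rough least-squares fit on roughly [-3, 3] and are purely illustrative.

import numpy as np

def batch_norm(x, eps=1e-5):
    # normalize each feature to zero mean / unit variance over the batch
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def poly_act(x):
    # degree-2, FHE-friendly stand-in for ReLU (illustrative fit on about [-3, 3])
    return 0.28 + 0.5 * x + 0.156 * x ** 2

rng = np.random.default_rng(0)
pre = rng.normal(0.0, 10.0, size=(256, 32))    # wide range: a low-degree fit degrades badly
norm = batch_norm(pre)                         # roughly standard normal again
err = np.abs(poly_act(norm) - np.maximum(norm, 0)).mean()
print(f"mean |poly - ReLU| after batch normalization: {err:.3f}")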
ePrint Report Analysis of the NORX Core Permutation Alex Biryukov, Aleksei Udovenko, Vesselin Velichkov
NORX is one of the fifteen authenticated encryption algorithms that have reached the third round of the CAESAR competition. NORX is built using the sponge-based Monkey Duplex construction. In this note we analyze the core permutation $F$. We show that it has rotational symmetries on different structure levels. This yields simple distinguishing properties for the permutation, which propagate with very high probability or even probability one. We also investigate differential symmetries in NORX at the word level. A new type of truncated differentials called symmetric truncated differentials (STD) is proposed. It is shown that, under the Markov assumption, up to $2.125$ rounds of the $F$ function of NORX32 and NORX64 can be distinguished using STD. Finally, we note that our analysis covers only the permutation $F$ and does not immediately threaten the security claims of the designers.
Implementation security for lattice-based cryptography is still a vastly unexplored field. At CHES 2016, the very first side-channel attack on a lattice-based signature scheme was presented. Later, shuffling was proposed as an inexpensive means to protect the Gaussian sampling component against such attacks. However, the concrete effectiveness of this countermeasure has never been evaluated. We change that by presenting an in-depth analysis of the shuffling countermeasure. Our analysis consists of two main parts. First, we perform a side-channel attack on a Gaussian sampler implementation. We combine templates with a recovery of data-dependent branches, which are inherent to samplers. We show that an adversary can realistically recover some samples with very high confidence. Second, we present a new attack against the shuffling countermeasure in the context of Gaussian sampling and lattice-based signatures. We do not attack the shuffling algorithm as such, but exploit differing distributions of certain variables. We give a broad analysis of our attack by considering multiple modeled SCA adversaries. We show that a simple version of shuffling is not an effective countermeasure. With our attack, a profiled SCA adversary can recover the key by observing only 7000 signatures. A second version of this countermeasure, which uses Gaussian convolution in conjunction with shuffling twice, can increase side-channel security and the number of required signatures significantly. Here, roughly 285000 observations are needed for a successful attack. Yet, this number is still practical.
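To see where the data-dependent branches come from, here is a toy Python CDT-based discrete Gaussian sampler (illustrative sigma and tail cut, no constant-time precautions): the length of the table search depends on the sampled value, which is exactly the behaviour that templates and branch recovery exploit.

import math, random

SIGMA = 3.33                        # illustrative parameter
TAIL = int(10 * SIGMA)

weights = [math.exp(-x * x / (2 * SIGMA * SIGMA)) for x in range(TAIL + 1)]
total = weights[0] + 2 * sum(weights[1:])
cdt, acc = [], 0.0
for x in range(TAIL + 1):
    acc += (weights[x] if x == 0 else 2 * weights[x]) / total
    cdt.append(acc)

def sample():
    u = random.random()
    x = 0
    while x < TAIL and cdt[x] < u:  # data-dependent branch: running time leaks |x|
        x += 1
    return x if x == 0 or random.random() < 0.5 else -x

print([sample() for _ in range(10)])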
GlobalPlatform (GP) card specifications are the de facto standards for the smart card industry. Being highly sensitive, GP specifications were defined with stringent security requirements in mind. In this paper, we analyze the cryptographic core of these requirements, i.e., the family of Secure Channel Protocols (SCP). Our main results are twofold. First, we demonstrate a theoretical attack against SCP02, which is the most popular protocol in the SCP family. We discuss the scope of our attack by presenting an actual scenario in which a malicious entity can exploit it in order to recover encrypted messages. Second, we investigate the security of SCP03, which was introduced as an amendment in 2009. We find that it provably satisfies strong notions of security. Of particular interest, we prove that SCP03 withstands algorithm substitution attacks (ASAs), as defined by Bellare et al., that may lead to secret mass surveillance. Our findings highlight the great value of the paradigm of provable security for standards and certification, since unlike extensive evaluation, it formally guarantees the absence of security flaws.
ePrint Report Honey Encryption for Language Marc Beunardeau, Houda Ferradi, Rémi Géraud, David Naccache
Honey Encryption (HE), introduced by Juels and Ristenpart (Eurocrypt 2014), is an encryption paradigm designed to produce ciphertexts yielding plausible-looking but bogus plaintexts upon decryption with wrong keys. Thus brute-force attackers need to use additional information to determine whether they indeed found the correct key.

At the end of their paper, Juels and Ristenpart leave as an open question the adaptation of honey encryption to natural language messages. A recent paper by Chatterjee et al. makes a modest attempt at the challenge and constructs a natural language honey encryption scheme relying on simple models for passwords.

In this position paper we explain why this approach cannot be extended to reasonable-size human-written documents, e.g., e-mails. We propose an alternative approach and evaluate its security.
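As a reminder of the mechanism, the toy Python sketch below implements a distribution-transforming encoder (DTE) for a four-word message space; the vocabulary, probabilities and 16-bit seed space are illustrative, and handling real human-written text well is precisely what the authors argue is difficult.

import hashlib, random

VOCAB = [("hello", 0.4), ("meeting", 0.3), ("tomorrow", 0.2), ("secret", 0.1)]
SEED_SPACE = 2 ** 16

# integer cumulative boundaries over the seed space, shared by encode and decode
bounds, acc = [], 0.0
for word, p in VOCAB:
    acc += p
    bounds.append((word, int(round(acc * SEED_SPACE))))

def dte_encode(msg):
    # pick a uniformly random seed inside the region assigned to msg
    lo = 0
    for word, hi in bounds:
        if word == msg:
            return random.randrange(lo, hi)
        lo = hi
    raise ValueError(msg)

def dte_decode(seed):
    for word, hi in bounds:
        if seed < hi:
            return word
    return bounds[-1][0]

def pad(key):
    # toy key-derived one-time pad over the seed space
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:2], "big")

ct = dte_encode("secret") ^ pad("correct key")
print(dte_decode(ct ^ pad("correct key")))   # "secret"
print(dte_decode(ct ^ pad("wrong key")))     # a plausible word rather than garbage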
We propose a simple and efficient framework for obtaining communication-efficient, constant-round protocols for (malicious) secure two-party computation. Our framework uses a preprocessing phase to generate authentication information for the two parties; in the online phase, this information is used to construct a single “authenticated” garbled circuit which is transmitted and evaluated. We also discuss various instantiations of our framework:

• The preprocessing phase can be instantiated efficiently using, e.g., TinyOT. Using this approach with our improvements to TinyOT, we obtain a protocol in which secure evaluation of an AES circuit (at 128-bit computational security and 40-bit statistical security) uses roughly 6 MB of communication in total. Most of the communication is circuit independent. A single execution of our protocol performs even better than the best previous work supporting circuit-independent preprocessing when amortized over 1024 executions.

• If the preprocessing phase is instantiated using the IPS compiler, we obtain a constant-round protocol whose communication complexity is asymptotically as small as a semi-honest garbled-circuit protocol in the OT-hybrid model.

• If the preprocessing phase is carried out by a trusted server, we obtain a constant-round protocol whose communication complexity is essentially the same as in the linear-round protocol of Mohassel et al. in the analogous setting.
ePrint Report Bounded-Collusion Attribute-Based Encryption from Minimal Assumptions Gene Itkis, Emily Shen, Mayank Varia, David Wilson, Arkady Yerukhimovich
Attribute-based encryption (ABE) enables encryption of messages under access policies so that only users with attributes satisfying the policy can decrypt the ciphertext. In standard ABE, an arbitrary number of colluding users, each without an authorized attribute set, cannot decrypt the ciphertext. However, all existing ABE schemes rely on concrete cryptographic assumptions such as the hardness of certain problems over bilinear maps or integer lattices. Furthermore, it is known that ABE cannot be constructed from generic assumptions such as public-key encryption using black-box techniques.

In this work, we revisit the problem of constructing ABE that tolerates collusions of arbitrary but a priori bounded size. We present two ABE schemes secure against bounded collusions that require only semantically secure public-key encryption. Our schemes achieve significant improvement in the size of the public parameters, secret keys, and ciphertexts over the previous construction of bounded-collusion ABE from minimal assumptions by Gorbunov et al. (CRYPTO 2012). In fact, in our second scheme, the size of ABE secret keys does not grow at all with the collusion bound. As a building block, we introduce a multidimensional secret-sharing scheme that may be of independent interest. We also obtain bounded-collusion symmetric-key ABE (which requires the secret key for encryption) by replacing the public-key encryption with symmetric-key encryption, which can be built from the minimal assumption of one-way functions.
ePrint Report A Decentralized PKI In A Mobile Ecosystem Varun Chandrasekaran, Lakshminarayanan Subramanian
A public key infrastructure (PKI) supports the authentication and distribution of public encryption keys, enabling secure communication, among other uses. However, PKIs rely heavily on a centralized trusted party called the certificate authority (CA), which acts as the root of trust and is also a single point of compromise. We describe the design of a decentralized PKI utilizing the intrinsic trust of the cellular (GSM) network. Secure Mobile Identities (SMI) is a repetitive key-exchange protocol that utilizes cryptographic proofs to prove the unique identities of mobile users. In this paper, we discuss how one can create strong reputations for an identity despite the presence of an adversary who can exploit the probabilistic one-way trust property of GSM networks. Our evaluation shows minimal computational overhead on existing devices and quick bootstrap times of under 10 minutes of mobile interaction despite the minimal trust assumptions placed, suggesting easy adoption in today's operational ecosystem.
ePrint Report Scalable Multi-Party Private Set-Intersection Carmit Hazay, Muthuramakrishnan Venkitasubramaniam
In this work we study the problem of private set-intersection in the multi-party setting and design two protocols with the following improvements compared to prior work. First, our protocols are designed in the so-called star network topology, where a designated party communicates with everyone else, and take a new approach of leveraging the 2PC protocol of [FreedmanNP04]. This approach minimizes the usage of a broadcast channel, where our semi-honest protocol does not make any use of such a channel and all communication is via point-to-point channels. In addition, the communication complexity of our protocols scales with the number of parties.

More concretely, (1) our first semi-honest secure protocol implies communication complexity that is linear in the input sizes, namely $O((\sum_{i=1}^n m_i)\cdot\kappa)$ bits of communication where $\kappa$ is the security parameter and $m_i$ is the size of $P_i$'s input set, whereas the overall computational overhead is quadratic in the input sizes only for a designated party, and linear for the rest. We further reduce this overhead by employing two types of hashing schemes. (2) Our second protocol is proven secure in the malicious setting. This protocol induces communication complexity $O((n^2 + nm_{\max} + nm_{\min}\log m_{\max})\kappa)$ bits of communication where $m_{\min}$ (resp. $m_{\max}$) is the minimum (resp. maximum) over all input set sizes and $n$ is the number of parties.
ePrint Report Constant Round Adaptively Secure Protocols in the Tamper-Proof Hardware Model Carmit Hazay, Antigoni Polychroniadou, Muthuramakrishnan Venkitasubramaniam
Achieving constant-round adaptively secure protocols (where all parties can be corrupted) in the plain model is a notoriously hard problem. Very recently, three works published in TCC 2015 (Dachman-Soled et al., Garg and Polychroniadou, Canetti et al.), solved the problem in the Common Reference String (CRS) model. In this work, we present a constant-round adaptive UC-secure computation protocol for all well-formed functionalities in the tamper-proof hardware model using stateless tokens from only one-way functions. In contrast, all prior works in the CRS model require very strong assumptions, in particular, the existence of indistinguishability obfuscation.

As a corollary to our techniques, we present the first adaptively secure protocols in the Random Oracle Model (ROM) with round complexity proportional to the depth of the circuit implementing the functionality. Our protocols are secure in the Global Random Oracle Model, introduced recently by Canetti, Jain and Scafuro in CCS 2014, which provides strong compositional guarantees. More precisely, we obtain an adaptively secure UC-commitment scheme in the global ROM assuming only one-way functions. In comparison, the protocol of Canetti, Jain and Scafuro achieves only static security and relies specifically on the Decisional Diffie-Hellman (DDH) assumption.
We show that the recent structure-preserving signature (SPS) scheme of Kiltz, Pan and Wee [CRYPTO 2015], provably secure under the standard bilinear pairings group assumption SXDH, can be improved to have one less group element and one less pairing product equation in the signature verification step. Our improved SPS scheme only requires six group elements (five in one group, and one in the other), and two pairing product equations for verification. The number of pairing product equations is optimal, as it matches a known lower bound of Abe et al. [CRYPTO 2011]. The number of group elements in the signature also approaches the known lower bound of four for the SXDH assumption. Further, while the earlier scheme had a security reduction which incurred a security loss that is quadratic in the number of queries $Q$, our novel security reduction incurs only a $Q \log{Q}$ factor loss in security.

Structure-preserving signatures are used pervasively in group signatures, group encryptions, blind signatures, proxy signatures and many other anonymous credential applications. Our work directly leads to improvements in these schemes. Moreover, the improvements are usually of a higher multiplicative factor order, as these constructions use Groth-Sahai NIZK proofs for zero-knowledge verification of pairing-product equations.

We also give our construction under the more general and standard $\mathcal{D}_k$-MDDH (Matrix-DDH) assumption. The signature size in our scheme is $3k+2$ elements in one group, and one element in the other. The number of pairing product equations required for verification is only $2k$, whereas the earlier schemes required at least $2k+1$ equations.
ePrint Report Inference and Record-Injection Attacks on Searchable Encrypted Relational Databases Mohamed Ahmed Abdelraheem, Tobias Andersson, Christian Gehrmann
We point out the risks of providing security to relational databases via searchable encryption schemes by mounting a novel inference attack exploiting the structure of relational databases together with the leakage of searchable encryption schemes. We discuss some techniques to reduce the effectiveness of inference attacks against searchable encryption schemes. Moreover, we show that record-injection attacks mounted on relational databases have worse consequences than their file-injection counterparts on unstructured databases, which have been recently proposed at USENIX 2016.
We propose a new generic framework for constructing fully secure attribute based encryption (ABE) in multilinear settings. It is applicable in a generic manner to any predicates. Previous generic frameworks of this kind are given only in bilinear group settings, where applicable predicate classes are limited. Our framework provides an abstraction of dual system paradigms over composite-order graded multilinear encoding schemes in a black-box manner.

As applications, we propose new fully secure ABE systems for general predicates, namely, ABE for circuits. We obtain two schemes for each of the key-policy (KP) and ciphertext-policy (CP) variants of ABE. All four of our fully secure schemes can deal with unbounded-size circuits while enjoying succinctness, meaning that the key and ciphertext sizes are (less than or) proportional to the corresponding circuit sizes. In the CP-ABE case, no previous scheme achieves such properties, even among selectively secure systems. Furthermore, our second KP-ABE scheme achieves constant-size ciphertexts, whereas our second CP-ABE scheme achieves constant-size keys. Previous ABE systems for circuits are either selectively secure (Gorbunov et al. STOC'13, Garg et al. Crypto'13, and subsequent works), or semi-adaptively secure (Brakerski and Vaikuntanathan Crypto'16), or fully secure but not succinct and restricted to bounded-size circuits (Garg et al. ePrint 2014/622, and Garg et al. TCC'16-A).
When data maintained in a decentralized fashion needs to be synchronized or exchanged between different databases, related data sets usually get associated with a unique identifier. While this approach facilitates cross-domain data exchange, it also comes with inherent drawbacks in terms of controllability. As data records can easily be linked, no central authority can limit or control the information flow. Worse, when records contain sensitive personal data, as is for instance the case in national social security systems, such linkability poses a massive security and privacy threat. An alternative approach is to use domain-specific pseudonyms, where only a central authority knows the cross-domain relation between the pseudonyms. However, current solutions require the central authority to be a fully trusted party, as otherwise it can provide false conversions and exploit the data it learns from the requests. We propose an (un)linkable pseudonym system that overcomes those limitations, and enables controlled yet privacy-friendly exchange of distributed data. We prove our protocol secure in the UC framework and provide an efficient instantiation based on discrete-logarithm related assumptions.
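The basic discrete-log idea behind convertible domain-specific pseudonyms can be sketched as follows (a simplified illustration with a toy 12-bit safe prime, not the paper's UC-secure protocol): each domain holds a secret exponent, and only a converter knowing both exponents can map a pseudonym from one domain to another.

import hashlib

P = 2963                    # toy safe prime, P = 2*Q + 1 (illustrative only)
Q = 1481                    # prime order of the subgroup of squares mod P

def hash_to_group(uid):
    h = int.from_bytes(hashlib.sha256(uid.encode()).digest(), "big")
    return pow(h % P, 2, P)            # square to land in the order-Q subgroup

k_A, k_B = 123, 977                    # per-domain secret exponents (toy values)

def pseudonym(uid, k_domain):
    return pow(hash_to_group(uid), k_domain, P)

def convert_A_to_B(nym_A):
    # only a party knowing both k_A and k_B can convert; either domain alone cannot link
    return pow(nym_A, k_B * pow(k_A, -1, Q) % Q, P)

assert convert_A_to_B(pseudonym("alice", k_A)) == pseudonym("alice", k_B)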
12 January 2017
Job Posting PhD Scholarships Computer Science/Mathematics, University of Auckland, New Zealand
Two three-year PhD scholarships, covering international tuition fees and a stipend of NZ$27,500 per year.

Application deadline: March 12, 2017.

The project is to develop new theoretical foundations for practical obfuscation. The theoretical component will focus on designing and evaluating new security definitions for practical obfuscation solutions. The practical component will focus on creating practical obfuscation tools inspired by the results obtained in the theoretical component. The theoretical work will be led by Prof Steven Galbraith while Dr. Giovanni Russello will lead the practical aspects.

The ideal candidate will have an undergraduate degree in computer science, engineering or mathematics and will have written a master's thesis on a topic related to security, cryptography, or the underlying mathematics. Experience with obfuscation and programming is preferable.

Closing date for applications: 12 March 2017

Contact: Professor Steven Galbraith

Mathematics Department

University of Auckland

aucklandobfuscationphd (at) gmail.com

More information: https://www.math.auckland.ac.nz/~sgal018/PhD-Marsden.pdf

11 January 2017
We introduce a notion of watermarking for cryptographic functions and propose a concrete watermarking scheme for them. Informally speaking, a digital watermarking scheme for cryptographic functions embeds information, called a \textit{mark}, into functions such as one-way functions and decryption functions of public-key encryption. There are two basic requirements for watermarking schemes. (1) A mark-embedded function must be functionally equivalent to the original function. (2) It must be difficult for adversaries to remove the embedded mark without damaging the original functionality. In spite of its importance and usefulness, there have been only a few theoretical works on watermarking for functions (or programs). Furthermore, rigorous definitions and concrete constructions of watermarking for cryptographic functions have been lacking.

To solve the above problem, we introduce a notion of watermarking for cryptographic functions and define its security. Furthermore, we present a lossy trapdoor function (LTF) based on the decisional linear (DLIN) problem and a watermarking scheme for the LTF. Our watermarking scheme is secure under the DLIN assumption in the standard model. We use techniques of dual system encryption and dual pairing vector spaces (DPVS) to construct our watermarking scheme. This is a new application of DPVS. Our watermarking for cryptographic functions is a generalized notion of copyrighted functions introduced by Naccache, Shamir, and Stern (PKC 1999) and our scheme is based on an identity-based encryption scheme whose private keys for identities (i.e., decryption functions) are marked, so our technique can be used to construct black-box traitor tracing schemes.
ePrint Report A Generic Approach to Constructing and Proving Verifiable Random Functions Rishab Goyal, Susan Hohenberger, Venkata Koppula, Brent Waters
Verifiable Random Functions (VRFs), as introduced by Micali, Rabin and Vadhan, are a special form of Pseudo Random Functions (PRFs) wherein a secret key holder can also prove validity of the function evaluation relative to a statistically binding commitment. Prior works have approached the problem of constructing VRFs by proposing a candidate in a specific number-theoretic setting --- mostly bilinear groups --- and then grappling with the challenges of proving security in the VRF environment. These constructions achieved different results and tradeoffs in practical efficiency, tightness of reductions and cryptographic assumptions.

In this work we take a different approach. Instead of tackling the VRF problem as a whole, we demonstrate a simple and generic way of building Verifiable Random Functions from more basic and narrow cryptographic primitives. We can then explore solutions to these primitives with a more focused mindset. In particular, we show that VRFs can be constructed generically from the ingredients of: (1) a 1-bounded constrained pseudorandom function for a functionality that is ``admissible hash friendly'', (2) a non-interactive statistically binding commitment scheme (without trusted setup) and (3) non-interactive witness-indistinguishable proofs (NIWIs). The first primitive can be replaced with a more basic puncturable PRF constraint if one is willing to settle for selective security or to assume sub-exponential hardness of assumptions.

In addition, we support our generic approach by giving new constructions of constrained PRFs based on non-bilinear groups and new constructions of perfectly binding commitments from the Learning with Errors (LWE) and Learning Parity with Noise (LPN) assumptions.
Over the years, SSL/TLS has become an essential part of Internet security. As such, it should offer robust and state-of-the-art security, in particular for HTTPS, its first application. Theoretically, the protocol allows for a trade-off between secure algorithms and decent performance. Yet in practice, servers do not always support the latest version of the protocol, nor do they all enforce strong cryptographic algorithms. To assess the quality of HTTPS and other TLS deployments at large, several studies have been conducted to grasp the state of the ecosystem and to characterize the quality of certificate chains in particular.

In this paper, we propose to analyse some of the existing data concerning TLS measurements on the Internet. We studied several datasets, from the first public ones in 2010 to more recent scans. Even though the collection methodology and the tools used vary between campaigns, we propose a unified and reproducible way to analyse the TLS ecosystem through different datasets. Our approach is based on concerto, a set of open-source tools.

Our contribution is therefore threefold: an analysis of existing datasets to propose a unified methodology, the implementation of our approach with concerto, and the presentation of some results to validate our toolsets.
ePrint Report SMART POOL: Practical Decentralized Pooled Mining Loi Luu, Yaron Velner, Jason Teutsch, Prateek Saxena
Many blockchain-based cryptocurrencies such as Bitcoin and Ethereum use the Nakamoto consensus protocol to reach agreement on the blockchain state among a network of participant nodes. The Nakamoto consensus protocol probabilistically selects a leader via a mining process which rewards network participants (or miners) for solving computational puzzles. Finding solutions to such puzzles requires an enormous amount of computation. Thus, miners often aggregate resources into {\em pools} and share rewards amongst all pool members via a {\em pooled mining} protocol. Pooled mining helps reduce the variance of miners' payoffs significantly and is widely adopted in popular cryptocurrencies. For example, as of this writing, more than $95\%$ of mining power in Bitcoin emanates from $10$ mining pools.
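As background for the pooled-mining setting, the toy Python sketch below shows how a pool typically verifies a miner's "share": a block-header hash below a per-share target that is far easier than the network target. The header format, target and nonce encoding here are illustrative, not Bitcoin's exact wire format.

import hashlib, os

SHARE_TARGET = 1 << 240      # easy per-share target: roughly 1 in 2^16 hashes qualifies

def block_hash(header: bytes, nonce: int) -> int:
    data = header + nonce.to_bytes(8, "little")
    return int.from_bytes(hashlib.sha256(hashlib.sha256(data).digest()).digest(), "big")

def mine_share(header: bytes) -> int:
    nonce = 0
    while block_hash(header, nonce) >= SHARE_TARGET:
        nonce += 1
    return nonce

header = os.urandom(76)
nonce = mine_share(header)
assert block_hash(header, nonce) < SHARE_TARGET   # the pool credits this share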

Although pooled mining benefits miners, it severely degrades decentralization, since a centralized pool manager administers the pooling protocol. Furthermore, pooled mining significantly increases the threat of transaction censorship, since pool managers decide which transactions are included in blocks. Due to this widely recognized threat, the Bitcoin community has proposed an alternative called P2Pool, which decentralizes the operations of the pool manager. However, P2Pool is inefficient, increases the variance of miners' rewards, requires much more computation and bandwidth from miners, and has not gained wide adoption.

In this work, we propose a new protocol design for a decentralized mining pool. Our protocol, called SmartPool, shows how one can leverage {\em smart contracts}, which are autonomous agents running on decentralized blockchains, to decentralize cryptocurrency mining. SmartPool guarantees high security, low reward variance for miners, and cost efficiency. We implemented a prototype of SmartPool as an Ethereum smart contract working as a decentralized mining pool for Bitcoin. We have deployed it on the Ethereum testnet and our experiments confirm that SmartPool is efficient and ready for practical use.
{\em Verifiable random functions} (VRFs) are pseudorandom functions where the owner of the seed, in addition to computing the function's value $y$ at any point $x$, can also generate a non-interactive proof $\pi$ that $y$ is correct (relative to the seed), without compromising pseudorandomness at other points. Being a natural primitive with a wide range of applications, VRFs have attracted considerable efforts towards their construction. While these efforts have resulted in a variety of algebraic constructions (from bilinear maps or the RSA problem), the relation between VRFs and other general primitives is still not well understood.
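For reference, the standard requirements on a VRF $(\mathrm{Gen},\mathrm{Prove},\mathrm{Ver})$ with function value $y = F_{sk}(x)$ and proof $\pi$ can be summarized as follows (a textbook-style formulation, included only as a reminder):

• Completeness: if $(y,\pi) \leftarrow \mathrm{Prove}(sk,x)$ then $\mathrm{Ver}(pk,x,y,\pi)=1$.

• Uniqueness: no tuple $(pk,x,y_1,y_2,\pi_1,\pi_2)$ with $y_1 \neq y_2$ satisfies $\mathrm{Ver}(pk,x,y_1,\pi_1)=\mathrm{Ver}(pk,x,y_2,\pi_2)=1$.

• Pseudorandomness: $F_{sk}(x^\ast)$ is computationally indistinguishable from uniform, even given evaluations and proofs $\{(x_i,F_{sk}(x_i),\pi_i)\}$ at adversarially chosen points $x_i \neq x^\ast$.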

We present new constructions of VRFs from general primitives, the main one being {\em non-interactive witness-indistinguishable proofs} (NIWIs). This includes:

• A selectively-secure VRF assuming NIWIs and non-interactive commitments. As usual, the VRF can be made adaptively-secure assuming subexponential hardness of the underlying primitives.

• An adaptively-secure VRF assuming (polynomially-hard) NIWIs, non-interactive commitments, and {\em (single-key) constrained pseudorandom functions} for a restricted class of constraints.

The above primitives have known instantiations under various standard assumptions, which yields corresponding VRF instantiations, under different assumptions than were known. One notable example is a non-uniform construction of VRFs from subexponentially-hard trapdoor permutations, or more generally, {\em verifiable pseudorandom generators} (the construction can be made uniform under a standard derandomization assumption). This partially answers an open question by Dwork and Naor (FOCS '00).

The construction and its analysis are quite simple. Both draw from ideas commonly used in the context of {\em indistinguishability obfuscation}.
We present an algorithm for the approximate $k$-List problem for the Euclidean distance that improves upon the Bai-Laarhoven-Stehlé (BLS) algorithm from ANTS'16. The improvement stems from the observation that almost all the solutions to the approximate $k$-List problem form a particular configuration in $n$-dimensional space. Due to special properties of configurations, it is much easier to verify whether a $k$-tuple forms a configuration than to check whether it gives a solution to the $k$-List problem. Thus, phrasing the $k$-List problem as a problem of finding such configurations immediately gives a better algorithm. Furthermore, the search for configurations can be sped up using techniques from Locality-Sensitive Hashing (LSH). Stated in terms of configuration search, our LSH-like algorithm offers a broader perspective on previous LSH algorithms.

For the Shortest Vector Problem, our configuration-search algorithm results in an exponential improvement for memory-efficient sieving algorithms. For $k=3$, it allows us to bring down the complexity of the BLS sieve algorithm on an $n$-dimensional lattice from $2^{0.4812n+o(n)}$ to $2^{0.3962n + o(n)}$ with the same space-requirement $2^{0.1887n + o(n)}$. Note that our algorithm beats the Gauss Sieve algorithm with time resp. space requirements of $2^{0.415n+o(n)}$ resp. $2^{0.208n + o(n)}$, while being easy to implement. Using LSH techniques, we can further reduce the time complexity down to $2^{0.3717n + o(n)}$ while retaining a memory complexity of $2^{0.1887n+o(n)}$.
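To illustrate the configuration viewpoint, the toy numpy sketch below filters k-tuples of unit vectors by their pairwise inner products (for balanced solutions these concentrate around -1/k, in which case the sum has norm about 1); the tolerance and dimension are illustrative choices.

import numpy as np

def is_configuration(vectors, eps=0.1):
    # accept a k-tuple whose pairwise inner products are all close to -1/k
    k = len(vectors)
    target = -1.0 / k
    return all(abs(float(vectors[i] @ vectors[j]) - target) <= eps
               for i in range(k) for j in range(i + 1, k))

def norm_of_sum(vectors):
    return float(np.linalg.norm(np.sum(vectors, axis=0)))

rng = np.random.default_rng(1)
triple = [v / np.linalg.norm(v) for v in rng.standard_normal((3, 64))]
print(is_configuration(triple), round(norm_of_sum(triple), 3))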
