International Association for Cryptologic Research

IACR News Central

Here you can see all recent updates to the IACR webpage.

18 April 2019
In this paper, we describe several practically exploitable fault attacks against OpenSSL's implementation of elliptic curve cryptography, related to the singular curve point decompression attacks of Blömer and Günther (FDTC2015) and the degenerate curve attacks of Neves and Tibouchi (PKC 2016).

In particular, we show that OpenSSL allows the construction of EC key files containing explicit curve parameters with a compressed base point. A simple single fault injection upon loading such a file yields a full key recovery attack when the key file is used for signing with ECDSA, and a complete recovery of the plaintext when the file is used for encryption using an algorithm like ECIES. The attack is especially devastating against curves with $j$-invariant equal to 0 such as the Bitcoin curve secp256k1, for which key recovery reduces to a single division in the base field.
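To see why the $j$-invariant-zero case collapses to a single division, recall the textbook structure of the singular curve obtained when the constant term of the equation is faulted to zero (a standard fact about the cuspidal cubic, recalled here for intuition rather than taken from the paper's proof): the non-singular points of $y^2 = x^3$ over $\mathbb{F}_p$ form a group isomorphic to $(\mathbb{F}_p, +)$ via $$\varphi(x, y) = x/y,$$ so if the faulted computation outputs $Q = [d]P$ on this curve, then $\varphi(Q) = d \cdot \varphi(P)$ and the secret scalar is recovered as $$d = \frac{\varphi(Q)}{\varphi(P)} = \frac{x_Q\, y_P}{y_Q\, x_P},$$ essentially a single division in the base field.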

Additionally, we apply the present fault attack technique to OpenSSL's implementation of ECDH, by combining it with Neves and Tibouchi's degenerate curve attack. This version of the attack applies to usual named curve parameters with nonzero $j$-invariant, such as P192 and P256. Although it is typically more computationally expensive than the one against signatures and encryption, and requires multiple faulty outputs from the server, it can recover the entire static secret key of the server even in the presence of point validation.

These various attacks can be mounted with only a single instruction-skipping fault, and can therefore be easily injected using low-cost voltage glitches on embedded devices. We validated them in practice using concrete fault injection experiments on a Raspberry Pi single-board computer running the up-to-date OpenSSL command line tools---a setting where the threat of fault attacks is quite significant.
Non-malleable codes, introduced by Dziembowski, Pietrzak and Wichs in ICS 2010, have emerged in the last few years as a fundamental object at the intersection of cryptography and coding theory. Non-malleable codes provide a useful message integrity guarantee in situations where traditional error-correction (and even error-detection) is impossible; for example, when the attacker can completely overwrite the encoded message. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely ``unrelated value''. Although such codes do not exist if the family of ``tampering functions'' $\mathcal{F}$ allowed to modify the original codeword is completely unrestricted, they are known to exist for many broad tampering families $\mathcal{F}$.

The family which has received the most attention is the family of tampering functions in the so-called (2-part) {\em split-state} model: here the message $x$ is encoded into two shares $L$ and $R$, and the attacker is allowed to tamper arbitrarily with each of $L$ and $R$ individually.

Dodis, Kazana, and the authors in STOC 2015 developed a generalization of non-malleable codes called non-malleable reductions, where a non-malleable code for a tampering family $\mathcal{F}$ can be seen as a non-malleable reduction from $\mathcal{F}$ to a family NM of functions comprising the identity function and constant functions. They also gave a constant-rate reduction from the split-state tampering family to a tampering family $\mathcal{G}$ containing so-called $2$-lookahead functions and forgetful functions.

In this work, we give a constant-rate non-malleable reduction from the family $\mathcal{G}$ to NM, thereby giving the first {\em constant-rate non-malleable code in the split-state model}.

Central to our work is a technique called inception coding, introduced by Aggarwal, Kazana and Obremski at TCC 2017, in which a string that detects tampering on a part of the codeword is concatenated to the message being encoded.
ePrint Report Constant-Round Group Key Exchange from the Ring-LWE Assumption Daniel Apon, Dana Dachman-Soled, Huijing Gong, Jonathan Katz
Group key-exchange protocols allow a set of N parties to agree on a shared, secret key by communicating over a public network. A number of solutions to this problem have been proposed over the years, mostly based on variants of Diffie-Hellman (two-party) key exchange. There has been relatively little work, however, looking at candidate post-quantum group key-exchange protocols.

Here, we propose a constant-round protocol for unauthenticated group key exchange (i.e., with security against a passive eavesdropper) based on the hardness of the Ring-LWE problem. By applying the Katz-Yung compiler using any post-quantum signature scheme, we obtain a (scalable) protocol for authenticated group key exchange with post-quantum security. Our protocol is constructed by generalizing the Burmester-Desmedt protocol to the Ring-LWE setting, which requires addressing several technical challenges.
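As background on the structure being generalized, here is a minimal Python sketch of the classical Burmester-Desmedt protocol over an ordinary Diffie-Hellman group (toy parameters, passive security, no authentication). It only illustrates the two-broadcast pattern; it is not the Ring-LWE construction of the paper.

```python
import secrets

# Toy Burmester-Desmedt group key exchange over a prime field.
# Illustrative only: toy parameters and no authentication; this is not the
# Ring-LWE variant constructed in the paper.
p = 2**127 - 1          # a Mersenne prime (toy choice)
g = 3
n = 5                   # number of parties

r = [secrets.randbelow(p - 2) + 1 for _ in range(n)]   # secret exponents
z = [pow(g, ri, p) for ri in r]                        # round 1: broadcast z_i = g^{r_i}
# round 2: broadcast X_i = (z_{i+1} / z_{i-1})^{r_i}
X = [pow(z[(i + 1) % n] * pow(z[(i - 1) % n], -1, p) % p, r[i], p)
     for i in range(n)]

def bd_key(i):
    # K_i = z_{i-1}^{n*r_i} * X_i^{n-1} * X_{i+1}^{n-2} * ... * X_{i+n-2}^{1}
    k = pow(z[(i - 1) % n], n * r[i], p)
    for j in range(n - 1):
        k = k * pow(X[(i + j) % n], n - 1 - j, p) % p
    return k

keys = [bd_key(i) for i in range(n)]
# Every party derives the same key g^{r_0 r_1 + r_1 r_2 + ... + r_{n-1} r_0}.
assert all(k == keys[0] for k in keys)
```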
ePrint Report Feistel Structures for MPC, and More Martin R. Albrecht, Lorenzo Grassi, Léo Perrin, Sebastian Ramacher, Christian Rechberger, Dragos Rotaru, Arnab Roy, Markus Schofnegger
We study approaches to generalized Feistel constructions with low-degree round functions with a focus on $x \mapsto x^3$. Besides known constructions, we also provide a new balanced Feistel construction with improved diffusion properties. This then allows us to propose more efficient generalizations of the MiMC design (Asiacrypt’16), which we in turn evaluate in three application areas. Whereas MiMC was not competitive at all in a recently proposed new class of PQ-secure signature schemes, our new construction leads to about 30 times smaller signatures than MiMC. In MPC use cases, where MiMC outperforms all other competitors, we observe improvements in throughput by a factor of more than 7 and simultaneously a 16-fold reduction of preprocessing effort, albeit at the cost of a higher latency. Another use case where MiMC already outperforms other designs, in the area of SNARKs, sees modest improvements. Additionally, this use case benefits from the flexibility to use smaller fields.
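To make the $x \mapsto x^3$ round-function idea concrete, here is a toy Python sketch of a balanced Feistel network with a cubing round function over a prime field. The prime, round constants, round count and key schedule are arbitrary illustrative choices, not the new construction proposed in the paper.

```python
# Toy balanced Feistel with round function F(x) = (x + key + c_i)^3 mod p.
# Unlike the MiMC permutation itself, a Feistel round function need not be
# invertible, so no condition such as gcd(3, p-1) = 1 is required here.
p = 2**61 - 1                                        # toy Mersenne prime
ROUNDS = 12
key = 0x1234567890ABCDEF % p
consts = [pow(3, i + 1, p) for i in range(ROUNDS)]   # arbitrary round constants

def f(x, i):
    return pow((x + key + consts[i]) % p, 3, p)

def encrypt(l, r):
    for i in range(ROUNDS):
        l, r = r, (l + f(r, i)) % p
    return l, r

def decrypt(l, r):
    for i in reversed(range(ROUNDS)):
        l, r = (r - f(l, i)) % p, l
    return l, r

pt = (42, 1337)
assert decrypt(*encrypt(*pt)) == pt
```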
In recent years, a number of attacks have been developed that can reconstruct encrypted one-dimensional databases that support range queries under the persistent passive adversary model. These attacks allow an (honest but curious) adversary (such as the cloud provider) to find the order of the elements in the database and, in some cases, to even reconstruct the database itself.

In this paper we present two mitigation techniques to make it harder for the adversary to reconstruct the database. The first technique makes it impossible for an adversary to reconstruct the values stored in the database with an error smaller than $k/2$, for $k$ chosen by the client. By fine-tuning $k$, the user can increase the adversary's error at will.

The second technique is targeted towards adversaries who have managed to learn the distribution of the queries issued. Such adversaries may be able to reconstruct most of the database after seeing a very small (i.e., poly-logarithmic) number of queries. To neutralize such adversaries, our technique turns the database into a circular buffer. All known techniques that exploit knowledge of the query distribution then fail, and no technique can determine which record is first (or last) based on access pattern leakage.
The widespread use of cloud computing has enabled several database providers to store their data on servers in the cloud and answer queries from those servers. In order to protect the confidentiality of the data stored in the cloud, a database can be stored in an encrypted form and all queries can be executed on top of the encrypted database. Recent research results suggest that a curious cloud provider may be able to decrypt some of the items in the database after seeing a large number of queries and their (encrypted) results.

In this paper, we focus on one-dimensional databases that support range queries and develop an attack that can achieve full database reconstruction, inferring the exact value of every element in the database. Previous work on full database reconstruction depends on a client issuing queries uniformly at random.

Let $N$ be the number of elements in the database. Our attack succeeds after the attacker has seen each of the possible query results at least once, independent of their distribution. For the sake of query complexity analysis and comparison with relevant work, if we assume that the client issues queries uniformly at random, we can decrypt the entire database after observing $O(N^2 \log N)$ queries with high probability, an improvement upon Kellaris et al.'s $O(N^4 \log N)$.
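The query-complexity figure follows the usual coupon-collector pattern; a back-of-the-envelope version of the calculation (ours, for intuition only) goes as follows. There are $$M = \binom{N}{2} + N = \frac{N(N+1)}{2}$$ distinct range queries over $N$ ordered values, and collecting each of $M$ equally likely results at least once takes $M H_M = \Theta(M \log M) = \Theta(N^2 \log N)$ queries in expectation and with high probability, matching the stated bound.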
ePrint Report Masking Dilithium: Efficient Implementation and Side-Channel Evaluation Vincent Migliore, Benoît Gérard, Mehdi Tibouchi, Pierre-Alain Fouque
Although security against side-channel attacks is not an explicit design criterion of the NIST post-quantum standardization effort, it is certainly a major concern for schemes that are meant for real-world deployment. In view of the numerous physical attacks that have been proposed against post-quantum schemes in recent literature, it is in particular very important to evaluate the cost and effectiveness of side-channel countermeasures in that setting.

For lattice-based signatures, this work was initiated by Barthe et al., who showed at EUROCRYPT 2018 how to apply arbitrary order masking to the GLP signature scheme presented at CHES 2012 by Güneysu, Lyubashevsky and Pöppelmann. However, although Barthe et al.’s paper provides detailed proofs of security in the probing model of Ishai, Sahai and Wagner, it does not include practical side-channel evaluations, and its proof-of-concept implementation has limited efficiency. Moreover, the GLP scheme has historical significance but is not a NIST candidate, nor is it being considered for concrete deployment.

In this paper, we look instead at Dilithium, one of the most promising NIST candidates for post-quantum signatures. This scheme, presented at CHES 2018 by Ducas et al. and based on module lattices, can be seen as an updated variant of both GLP and its more efficient sibling BLISS; it comes with an implementation that is both efficient and constant-time.

Our analysis of Dilithium from a side-channel perspective is threefold. We first evaluate the side-channel resistance of an ARM Cortex-M3 implementation of Dilithium without masking, and identify exploitable side-channel leakage. We then describe how to securely mask the scheme, and verify that the masked implementation no longer leaks. Finally, we show how a simple tweak to Dilithium (namely, replacing the prime modulus by a power of two) makes it possible to obtain a considerably more efficient masked scheme, by a factor of 7.3 to 9 for the most time-consuming masking operations, without affecting security.
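To give a flavour of why a power-of-two modulus helps, the snippet below shows first-order arithmetic masking modulo $2^{32}$, where splitting, recombining and share-wise linear operations are plain wrap-around arithmetic. This is a generic textbook illustration, not the masked Dilithium implementation evaluated in the paper.

```python
import secrets

MOD = 2**32  # power-of-two modulus: native machine arithmetic, no modular-reduction logic

def mask(x, n_shares=2):
    """Split x into additive shares modulo 2^32 (first-order masking for n_shares=2)."""
    shares = [secrets.randbelow(MOD) for _ in range(n_shares - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares

def unmask(shares):
    return sum(shares) % MOD

def masked_add(a_shares, b_shares):
    # Linear operations are applied share-wise, never recombining the secrets.
    return [(a + b) % MOD for a, b in zip(a_shares, b_shares)]

x, y = 0xDEADBEEF, 0x01234567
assert unmask(masked_add(mask(x), mask(y))) == (x + y) % MOD
```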
Soundness amplification is a central problem in the study of interactive protocols. While the ``natural'' parallel repetition transformation is known to reduce the soundness error of some special cases of interactive arguments, namely three-message protocols and public-coin protocols, it fails to do so in the general case.

The only known round-preserving approach that applies to the general case of interactive arguments is Haitner's "random-terminating" transform [FOCS '09, SiCOMP '13]. Roughly speaking, a protocol $\pi$ is first transformed into a new slightly modified protocol $\widetilde{\pi}$, referred to as the random-terminating variant of $\pi$, and then parallel repetition is applied. Haitner's analysis shows that the parallel repetition of $\widetilde{\pi}$ does reduce the soundness error, but only at a weak exponential rate. More precisely, if $\pi$ has $m$ rounds and soundness error $1-\epsilon$, then $\widetilde{\pi}^k$, the $k$-parallel repetition of $\widetilde{\pi}$, has soundness error $(1-\epsilon)^{\epsilon k / m^4}$. Since the security of many cryptographic protocols (e.g., binding) depends on the soundness of a related interactive argument, improving the above analysis is a key challenge in the study of cryptographic protocols.

In this work we introduce a different analysis of Haitner's method, proving that parallel repetition of random-terminating protocols reduces the soundness error at a much stronger exponential rate: the soundness error of $\widetilde{\pi}^k$ is $(1-\epsilon)^{k / m}$, only a factor of $m$ away from the optimal rate of $(1-\epsilon)^k$ achievable for public-coin and three-message protocols. We prove the tightness of our analysis by presenting a matching protocol.
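To make the improvement concrete, here is a small calculation of our own using the two bounds quoted above. For $m = 10$ rounds and soundness error $1 - \epsilon = 1/2$, the earlier bound gives $$(1-\epsilon)^{\epsilon k / m^4} = 2^{-k/20000},$$ whereas the new bound gives $$(1-\epsilon)^{k/m} = 2^{-k/10},$$ so driving the soundness error below $2^{-40}$ requires roughly $k = 800{,}000$ repetitions under the old analysis but only $k = 400$ under the new one.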
ePrint Report New Conditional Cube Attack on Keccak Keyed Modes Zheng Li, Xiaoyang Dong, Wenquan Bi, Keting Jia, Xiaoyun Wang, Willi Meier
The conditional cube attack on round-reduced \textsc{Keccak} keyed modes was proposed by Huang et al. at EUROCRYPT 2017. In their attack, a conditional cube variable is introduced whose diffusion is significantly reduced by certain key-bit conditions. A set of cube variables is then found that are not multiplied with each other after the first round, while the conditional cube variable is not multiplied with the other cube variables (called ordinary cube variables) after the second round. This limits the degree of the output of \textsc{Keccak} and hence gives a distinguisher. Later, an MILP method was applied to find ordinary cube variables. However, for some \textsc{Keccak}-based versions with low degrees of freedom, one cannot find enough ordinary cube variables, which weakens or even invalidates the conditional cube attack.

In this paper, a new conditional cube attack on \textsc{Keccak} is proposed. We remove the limitation that no cube variables multiply with each other in the first round. As a result, some quadratic terms may appear after the first round. We make use of some new bit conditions to prevent the quadratic terms from multiplying with other cube variables in the second round, so that there will be no cubic terms after the second round. Furthermore, we introduce the kernel quadratic term and construct a 6-2-2 pattern to reduce the diffusion of quadratic terms significantly, where the $\theta$ operation even in the second round becomes an identity transformation (CP-kernel property) for the kernel quadratic term. Previous conditional cube attacks on \textsc{Keccak} only explored the CP-kernel property of the $\theta$ operation in the first round. Therefore, more degrees of freedom are available for ordinary cube variables and fewer bit conditions are needed to remove the cubic terms after the second round, which plays a key role in the conditional cube attack on versions with very low degrees of freedom. We also use the MILP method in the search for cube variables and give key-recovery attacks on round-reduced \textsc{Keccak} keyed modes.

As a result, we reduce the time complexity of key-recovery attacks on 7-round \textsc{Keccak}-MAC-512 and 7-round \textsc{Ketje Sr} v2 from $2^{111}$ and $2^{99}$ to $2^{72}$ and $2^{77}$, respectively. Additionally, we reduce the time complexity of attacks on 9-round \texttt{KMAC256} and 7-round \textsc{Ketje Sr} v1. Practical attacks on 6-round \textsc{Ketje Sr} v1 and v2 are also given for the first time.
Email breaches are commonplace, and they expose a wealth of personal, business, and political data that may have devastating consequences. The current email system allows any attacker who gains access to your email to prove the authenticity of the stolen messages to third parties -- a property arising from a necessary anti-spam / anti-spoofing protocol called DKIM. This exacerbates the problem of email breaches by greatly increasing the potential for attackers to damage the users' reputation, blackmail them, or sell the stolen information to third parties.

In this paper, we introduce "non-attributable email", which guarantees that a wide class of adversaries are unable to convince any third party of the authenticity of stolen emails. We formally define non-attributability, and present two practical system proposals -- KeyForge and TimeForge -- that provably achieve non-attributability while maintaining the important protection against spam and spoofing that is currently provided by DKIM. Moreover, we implement KeyForge and demonstrate that the scheme is practical, achieving competitive verification and signing speed while also requiring 42% less bandwidth per email than RSA2048.
Lattice-based public-key encryption has a large number of design choices that can be combined in diverse ways to obtain different tradeoffs. One of these choices is the distribution from which secret keys are sampled. Numerous secret-key distributions exist in the state of the art, including (discrete) Gaussian, binomial, ternary, and fixed-weight ternary. Although the choice of distribution impacts both the concrete security and the performance of a scheme, there has been no explicit comparison of how the choice of secret-key distribution affects this tradeoff.

In this paper, we compare different aspects of secret-key distributions that appear in submissions to the NIST post-quantum standardization effort. We first consider their impact on concrete security (influenced by the entropy and variance of the distribution), on decryption failures and IND-CCA2 security (influenced by the probability of sampling keys with ``non average, large'' norm), and on the key sizes. Next, we select concrete parameters of an encryption scheme instantiated with the above distributions, optimized for key sizes, to identify which distribution(s) offer the best tradeoffs between security and performance.

We draw two main conclusions from the results of the above optimization. Firstly, fixed-weight ternary secret keys result in the smallest key sizes of the encryption scheme. Such secret keys reduce the decryption failure rate and hence allow for a higher noise-to-modulus ratio, alleviating the slight increase in lattice dimension required for countering specialized attacks that apply in this case. Secondly, fixed-weight ternary secret keys result in the scheme becoming more secure against decryption failure-based IND-CCA2 attacks, as compared to secret keys with independently sampled components.
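To make the compared distributions concrete, the following short Python sketch (our own illustration with arbitrary toy parameters, not the values used by any NIST submission) samples a fixed-weight ternary secret and a centered binomial secret and reports their squared norms and empirical variances. The fixed-weight secret has exactly the same norm every time, which is the property driving the decryption-failure behaviour discussed above.

```python
import random
from statistics import pvariance

n = 1024        # toy dimension
h = 128         # toy weight: exactly h coefficients equal +1 and h equal -1

def fixed_weight_ternary(n, h):
    """Secret with exactly h coefficients +1, h coefficients -1, and the rest 0."""
    s = [1] * h + [-1] * h + [0] * (n - 2 * h)
    random.shuffle(s)
    return s

def centered_binomial(n, eta=2):
    """Each coefficient is a sum of eta coin flips minus a sum of eta coin flips."""
    return [sum(random.getrandbits(1) for _ in range(eta))
            - sum(random.getrandbits(1) for _ in range(eta))
            for _ in range(n)]

fw, cb = fixed_weight_ternary(n, h), centered_binomial(n)
print("fixed-weight ternary: norm^2 =", sum(c * c for c in fw),   # always exactly 2h
      " variance =", round(pvariance(fw), 3))
print("centered binomial:    norm^2 =", sum(c * c for c in cb),   # fluctuates around n*eta/2
      " variance =", round(pvariance(cb), 3))
```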
While digital secret keys appear indispensable in modern cryptography and security, they also routinely constitute a main attack point of the resulting hardware systems. Some recent approaches have tried to overcome this problem by simply avoiding keys and secrets in vulnerable systems. To start with, physical unclonable functions (PUFs) have demonstrated how “classical keys”, i.e., permanently stored digital secret keys, can be evaded, realizing security devices that might be called “classically key-free”. Still, most PUFs induce certain types of physical secrets deep in the hardware, whose disclosure to adversaries breaks security as well. Examples include the manufacturing variations that determine the power-up states of SRAM PUFs, or the signal runtimes of Arbiter PUFs, both of which have been extracted from PUF-hardware in practice, breaking security.

A second generation of physical security primitives, such as SIMPLs/PPUFs and Unique Objects, has recently shown promise to overcome this issue, however. Perhaps counterintuitively, they would enable completely “secret-free” hardware, where adversaries might inspect every bit and atom, and learn any information present in any form in the hardware, without being able to break security.

This concept paper takes this situation as its starting point, and categorizes, formalizes, and surveys the currently emerging areas of key-free and, more importantly, secret-free security. Our treatment puts keys, secrets, and their respective avoidance at the center of the currently emerging physical security methods. It so aims to lay the foundations for future, secret-free security hardware, which would be innately and provably immune against any physical probing and key extraction.
17 April 2019
Event date: 19 August to 23 August 2019
Submission deadline: 6 May 2019
Notification: 6 June 2019
Event date: 7 April to 30 November 2019
Submission deadline: 30 June 2019
Event date: 4 December to 6 December 2019
Submission deadline: 29 August 2019
Notification: 21 October 2019
Job Posting 2 x NCSC-sponsored PhD Studentships Centre for Secure Information Technologies (CSIT), Queen’s University Belfast, UK
Applications are invited for 2x PhD studentships sponsored by the UK National Cyber Security Centre (NCSC) in the areas of: (1) Applying Deep Learning to Enhance Physical Unclonable Functions; and (2) Deep Learning for Total Network Defence.

ACADEMIC REQUIREMENTS: A minimum 2.1 honours degree or equivalent in Computer Science, Electrical and Electronic Engineering, Mathematics or closely related discipline is required. This is an NCSC-sponsored PhD studentship; therefore, only UK nationals are eligible for this funding. NCSC will be offering the student an opportunity to work more closely with them – e.g., via a short internship or attendance at technical meetings. As such, the recipient of this studentship will have to be appropriately security cleared by NCSC before they start their doctoral studies.

GENERAL INFORMATION: The GCHQ-sponsored PhD studentship provides funding for 3.5 years and commences in October 2019. It covers approved tuition fees and a maintenance grant of approx. £22,500 each year (tax-free). A further £5k of funding will be available per annum for travel to conferences, collaborative partners, NCSC visits, etc.

To apply, use the online application system available at: https://dap.qub.ac.uk/portal/

Closing date for applications: 31 May 2019

Contact: m.oneill AT ecit.qub.ac.uk

More information: https://www.qub.ac.uk/ecit/Education/PhDProjects/PhDResearchProjects2019/CSITCluster/

The Lecturer/Senior Lecturer in Cyber Security will be responsible for undertaking research; teaching at undergraduate and postgraduate level, including course coordination; supervising research higher degree students; and professional activities in the field of cyber security, with a focus on cryptography and cyber security automation.

1. Cryptography position:

We welcome applications from computer scientists with expertise in applied cryptography including but not limited to threshold cryptography, lightweight cryptography, post quantum cryptography, quantum key distribution and management, practical homomorphic encryption, cryptanalysis, data privacy-preserving techniques, and engineering of cryptographic applications according to international standards. Experts in other emerging cryptography research areas are also welcome to apply. Experience or an interest in engineering or standardizing industry-grade cryptographic solutions will be highly regarded. We seek to appoint a scientist with interest and experience in systems thinking and well versed in cryptographic and coding technologies.

2. Automation position:

We welcome applications from computer scientists with expertise in cyber security autonomy and automation, covering areas including but not limited to exploit generation, reverse engineering, symbolic execution, program testing, fuzzing, vulnerability detection and mitigation, security information and event management. Experts with experience and interest in applying machine learning and AI planning to cyber security are also welcome to apply. Experience or an interest in engineering or standardizing industry-grade cyber security solutions will be highly regarded. We seek to appoint a scientist with interest and experience in systems thinking and well versed in vulnerability discovery and exploitation technologies.

This role is a full-time position; however flexible working arrangements may be negotiated. The University of Queensland values diversity and inclusion and actively encourages applications from those who bring diversity to the University.

Closing date for applications: 19 May 2019

Contact: ryan.ko AT uq.edu.au

More information: http://jobs.uq.edu.au/caw/en/job/507415/lecturersenior-lecturer-in-cyber-security-cryptographyautomation

Job Posting Professor/Reader x 1 and lecturer positions x 3 in information/cyber security Information Security group, Royal Holloway University of London, UK

The Information Security Group (ISG) at Royal Holloway University of London is looking for a full-time permanent Professor/Reader (Chair/Distinguished or Full Professor) and three full-time permanent lecturers (assistant professors).

The ISG is a full department within the university specialising in information/cyber security research and teaching and is one of the biggest specialist research and teaching groups in the UK. Details of the positions and more information about the ISG and Royal Holloway can be found at:

  • Professor/Reader: https://andersonquigley.com/digitalleaders/
  • Lecturers: https://jobs.royalholloway.ac.uk/vacancy.aspx?ref=0419-139
  • Royal Holloway is also looking for a Head of Department for Computer Science and a Head of Department in Media Arts (see https://andersonquigley.com/digitalleaders/).

Closing date for applications: 7 May 2019

Contact: Peter Komisarczuk

Head of Department/Director Information Security Group

Royal Holloway, University of London

Tel: +44 (0)1784443089

peter.komisarczuk (at) rhul.ac.uk

More information: https://jobs.royalholloway.ac.uk/vacancy.aspx?ref=0419-139

Event date: 10 June to 14 June 2019
Event Calendar CARDIS '19: 18th Smart Card Research and Advanced Application Conference Prague, Czech Republic, 11 November - 13 November 2019
Event date: 11 November to 13 November 2019
Submission deadline: 12 July 2019
Notification: 13 September 2019
15 April 2019
ePrint Report SoK: On DFA Vulnerabilities of Substitution-Permutation Networks Mustafa Khairallah, Xiaolu Hou, Zakaria Najm, Jakub Breier, Shivam Bhasin, Thomas Peyrin
Recently, NIST launched a competition for lightweight cryptography, and a large number of ciphers are expected to be studied and analyzed under this competition. Apart from classical security, the candidates should also be analyzed against physical attacks. Differential Fault Analysis (DFA) is an invasive physical attack method for recovering key information from cipher implementations. To date, almost all block ciphers have been shown to be vulnerable against DFA, while following similar attack patterns. However, so far researchers have mostly focused on particular ciphers rather than cipher families, resulting in works that reuse the same idea for different ciphers.

In this article, we aim to bridge this gap by providing a generic DFA attack method targeting Substitution-Permutation Network (SPN) based families of symmetric block ciphers. We provide an overview of the state of the art in fault attacks on SPNs, followed by generalized conditions that hold for all ciphers of this design family. We show that for any SPN, as long as the fault mask injected before a non-linear layer in the last round follows a non-uniform distribution, the key search space can always be reduced. This shows that it is not possible to design an SPN-based cipher that is completely secure against DFA, without randomization. Furthermore, we propose a novel approach to find good fault masks that can leak the key with a small number of instances. We then develop a tool, called the Joint Difference Distribution Table (JDDT), for pre-computing the solutions for the fault equations, which allows us to recover the last-round key with a very small number of pairs of faulty and non-faulty ciphertexts. We evaluate our methodology on various block ciphers, including PRESENT-80, PRESENT-128, GIFT-64, GIFT-128, AES-128, LED-64, LED-128, Skinny-64-64, Skinny-128-128, PRIDE and PRINCE. The developed technique would allow automated DFA analysis of several candidates in the NIST competition.
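The JDDT itself is not specified in this abstract, but it builds on the classical difference distribution table; as background, the DDT of a 4-bit S-box (here the PRESENT S-box) can be computed in a few lines of Python. This is standard background material, not the paper's tool.

```python
# Difference distribution table (DDT) of a 4-bit S-box: the classical object
# underlying differential and fault analysis of SPN ciphers.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT S-box

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for din in range(n):
        for x in range(n):
            table[din][sbox[x] ^ sbox[x ^ din]] += 1
    return table

T = ddt(SBOX)
print(T[1])   # distribution of output differences for input difference 1
# Each nonzero input difference gives a non-uniform row, which is exactly the
# property a DFA adversary exploits to prune the key search space.
assert all(T[d][0] == 0 for d in range(1, 16))
```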
ePrint Report Field Extension in Secret-Shared Form and Its Applications to Efficient Secure Computation Ryo Kikuchi, Nuttapong Attrapadung, Koki Hamada, Dai Ikarashi, Ai Ishida, Takahiro Matsuda, Yusuke Sakai, Jacob C. N. Schuldt
Secure computation enables participating parties to jointly compute a function over their inputs while keeping them private. Secret sharing plays an important role in maintaining privacy during the computation. In most schemes, secret sharing over the same finite field is normally utilized throughout all the steps in the secure computation. A major drawback of this “uniform” approach is that one has to set the size of the field to be as large as the maximum of all the lower bounds derived from all the steps in the protocol. This easily leads to a requirement for using a large field which, in turn, makes the protocol inefficient. In this paper, we propose a “non-uniform” approach: dynamically changing the fields so that they are suitable for each step of computation. At the core of our approach is a surprisingly simple method to extend the underlying field of a secret sharing scheme, in a non-interactive manner, while maintaining the secret being shared. Using our approach, default computations can hence be done in a small field, which allows better efficiency, while one would extend to a larger field only at the necessary steps. As the main application of our technique, we show an improvement upon the recent actively secure protocol proposed by Chida et al. (Crypto’18). The improved protocol can handle a binary field, which enables XOR-free computation of a boolean circuit. Other applications include efficient (batch) equality check and consistency check protocols, which are useful for, e.g., password-based threshold authentication.
We present a simple algorithm for Miller inversion for the reduced Tate pairing on supersingular elliptic curves of trace zero defined over the finite field with $q$ elements. Our algorithm runs in $O((\log q)^3)$ bit operations.
Oblivious RAM (ORAM) and private information retrieval (PIR) are classic cryptographic primitives used to hide the access pattern to data whose storage has been outsourced to an untrusted server. Unfortunately, both primitives require considerable overhead compared to plaintext access. For large-scale storage infrastructure with highly frequent access requests, the degradation in response time and the exorbitant increase in resource costs incurred by either ORAM or PIR prevent their usage. In an ideal scenario, a privacy-preserving storage protocol with small overhead would be implemented for these heavily trafficked storage systems to avoid negatively impacting performance and/or costs. In this work, we study the problem of the best $\mathit{storage\ access\ privacy}$ that is achievable with only $\mathit{small\ overhead}$ over plaintext access.

To answer this question, we consider $\mathit{differential\ privacy\ access}$, which is a generalization of the $\mathit{oblivious\ access}$ security notion that is considered by ORAM and PIR. Quite surprisingly, we present strong evidence that constant overhead storage schemes may only be achieved with privacy budgets of $\epsilon = \Omega(\log n)$. We present asymptotically optimal constructions for differentially private variants of both ORAM and PIR with privacy budgets $\epsilon = \Theta(\log n)$ with only $O(1)$ overhead. In addition, we consider a more complex storage primitive called key-value storage in which data is indexed by keys from a large universe (as opposed to consecutive integers in ORAM and PIR). We present a differentially private key-value storage scheme with $\epsilon = \Theta(\log n)$ and $O(\log\log n)$ overhead. This construction uses a new oblivious, two-choice hashing scheme that may be of independent interest.
The WPA3 certification aims to secure Wi-Fi networks, and provides several advantages over its predecessor WPA2, such as protection against offline dictionary attacks and forward secrecy.

Unfortunately, we show that WPA3 is affected by several design flaws, and analyze these flaws both theoretically and practically. Most prominently, we show that WPA3's Simultaneous Authentication of Equals (SAE) handshake, commonly known as Dragonfly, is affected by password partitioning attacks. These attacks resemble dictionary attacks and allow an adversary to recover the password by abusing timing or cache-based side-channel leaks. Our side-channel attacks target the protocol's password encoding method. For instance, our cache-based attack exploits SAE's hash-to-curve algorithm.
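To convey where such leakage comes from, here is a deliberately simplified Python caricature of a "hunting and pecking" password-to-curve loop with toy curve parameters. It is neither the actual SAE algorithm nor the paper's attack; it only shows that the number of iterations, and hence the timing and cache behaviour, depends on the password.

```python
import hashlib

# Toy "hunting and pecking": derive candidate x-coordinates from the password
# until one lies on the curve y^2 = x^3 + ax + b.  Toy prime and coefficients,
# not a real Wi-Fi curve and not the exact SAE procedure.
p = 2**127 - 1
a, b = -3, 7

def is_quadratic_residue(v):
    return v == 0 or pow(v, (p - 1) // 2, p) == 1

def iterations_until_valid_x(password: bytes) -> int:
    counter = 0
    while True:
        counter += 1
        seed = hashlib.sha256(b"toy-sae" + password + bytes([counter])).digest()
        x = int.from_bytes(seed, "big") % p
        if is_quadratic_residue((x * x * x + a * x + b) % p):
            return counter   # password-dependent iteration count: a side channel

for pw in [b"correct horse", b"battery staple", b"hunter2"]:
    print(pw, iterations_until_valid_x(pw))
```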

The resulting attacks are efficient and low cost: brute-forcing all 8-character lowercase passwords requires less than \$125 in Amazon EC2 instances.

In light of ongoing standardization efforts on hash-to-curve, Password-Authenticated Key Exchanges (PAKEs), and Dragonfly as a TLS handshake, our findings are also of more general interest.

Finally, we discuss how to mitigate our attacks in a backwards-compatible manner, and explain how minor changes to the protocol could have prevented most of our attacks.
With Attribute-based Signatures (ABS) users can simultaneously sign messages and prove compliance of their attributes, issued by designated attribute authorities, with some verification policy. Neither the signer's identity nor the possessed attributes are leaked during the verification process, making ABS schemes a handy tool for applications requiring privacy-preserving authentication. Earlier ABS schemes lacked support for hierarchical delegation of attributes (across tiers of attribute authorities down to the signers), a distinct property that has made traditional PKIs more scalable and widely adoptable.

This changed recently with the introduction of Hierarchical ABS (HABS) schemes, where support for attribute delegation was proposed in combination with stronger privacy guarantees for the delegation paths (path anonymity) and new accountability mechanisms allowing a dedicated tracing authority to identify these paths (path traceability) and the signer, along with the delegated attributes, if needed. Yet, the existing HABS construction is generic, with an inefficient delegation process resulting in sub-optimal signature lengths of order $O(k^{2}|\Psi|)$ where $\Psi$ is the policy size and $k$ the height of the hierarchy.

This paper proposes a direct HABS construction in bilinear groups that significantly improves on these bounds and satisfies the original security and privacy requirements. At the core of our HABS scheme is a new delegation process based on length-reducing homomorphic trapdoor commitments to group elements, for which we introduce a new delegation technique allowing step-wise commitments to additional elements without changing the length of the original commitment and its opening. While also being of independent interest, this technique results in shorter HABS keys and achieves a signature-length growth of $O(k|\Psi|)$, which is optimal due to the path-traceability requirement.
Cube attacks are an important type of key recovery attack against stream ciphers. In particular, they have been shown to be powerful against Trivium-like ciphers. Traditional cube attacks are experimental attacks which could only exploit cubes of size less than 40. At CRYPTO 2017, division property based cube attacks were proposed by Todo et al.; an advantage of introducing the division property to cube attacks is that cube sizes beyond the experimental range can be explored, and thus powerful theoretical attacks have been mounted against many lightweight stream ciphers.

In this paper, we revisit division property based cube attacks. There is an important assumption, called the Weak Assumption, proposed in division property based cube attacks to support the effectiveness of key recovery. Todo et al. at CRYPTO 2017 stated that the Weak Assumption was expected to hold for theoretically recovered superpolies of Trivium, according to some experimental results on small cubes. In this paper, based on some new techniques to remove invalid division trails, some of the best key recovery results on Trivium given at CRYPTO 2017 and CRYPTO 2018 are shown to be merely distinguishers. First, we build a relationship between the bit-based division property and the algebraic degree evaluation on a set of active variables. Second, based on our algebraic point of view, we propose a new variant of the division property which incorporates the distribution of active variables. Third, a new class of invalid division trails is characterized, and new techniques based on MILP models to remove them are proposed. We hope this paper can give some new insights into accurately evaluating the propagation of the bit-based division property, and also draw some attention to the validity of division property based cube attacks against stream ciphers.
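For readers new to cube attacks, the underlying cube-sum principle (independent of the division-property machinery analyzed above) can be demonstrated on a made-up Boolean function in a few lines of Python:

```python
import itertools, secrets

# Toy cube-sum demonstration: XOR-summing a Boolean output over a "cube" of
# public variables isolates the superpoly of that cube, a low-degree function
# of the key bits.  The function f below is invented for illustration.
def f(x, k):
    return (x[0] & x[1] & (k[0] ^ k[2])) ^ (x[0] & k[1]) ^ (x[2] & k[3]) \
           ^ (k[0] & k[1]) ^ x[1]

def cube_sum(cube_indices, k, n_public=3):
    acc = 0
    for bits in itertools.product([0, 1], repeat=len(cube_indices)):
        x = [0] * n_public                    # non-cube public bits fixed to 0
        for idx, bit in zip(cube_indices, bits):
            x[idx] = bit
        acc ^= f(x, k)
    return acc

k = [secrets.randbelow(2) for _ in range(4)]
# Summing over the cube {x0, x1} leaves exactly the superpoly k0 ^ k2,
# i.e., one linear equation in the secret key bits.
assert cube_sum([0, 1], k) == k[0] ^ k[2]
```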
It is known that information-theoretically secure computation can be carried out using a deck of physical cards. In card-based protocols, shuffles, which covertly rearrange the order of cards according to a randomly chosen permutation, are the heaviest operations; it is therefore desirable to keep the number of shuffles in a protocol small. However, as far as we know, there are no general-purpose protocols with a constant number of shuffles. In this paper, we construct a general-purpose protocol with a constant number of shuffles: surprisingly, just one (somewhat complicated) shuffle. This is achieved by introducing the garbled circuit methodology. Moreover, to make our shuffle simpler, we also develop a novel technique for aggregating several pile-scramble shuffles (which are efficiently implementable) into one pile-scramble shuffle. Consequently, we also construct a general-purpose protocol with two pile-scramble shuffles. Both techniques may lead to many other applications in this area.
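As background on the operation being counted, a pile-scramble shuffle applies a uniformly random, hidden permutation to equal-sized piles of cards. A small Python simulation of the shuffle alone (ours, not the paper's garbled-circuit protocol) looks as follows:

```python
import random

# Simulate a pile-scramble shuffle: split the deck into equal piles and
# rearrange the piles by a uniformly random permutation that stays hidden
# from all players.
def pile_scramble_shuffle(deck, pile_size):
    assert len(deck) % pile_size == 0
    piles = [deck[i:i + pile_size] for i in range(0, len(deck), pile_size)]
    random.shuffle(piles)                  # random permutation of whole piles
    return [card for pile in piles for card in pile]

deck = list(range(12))                     # 12 face-down cards in 4 piles of 3
print(pile_scramble_shuffle(deck, 3))
```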
ePrint Report Non-Malleable Codes for Decision Trees Marshall Ball, Siyao Guo, Daniel Wichs
We construct efficient, unconditional non-malleable codes that are secure against tampering functions computed by decision trees of depth $d = n^{1/4-o(1)}$. In particular, each bit of the tampered codeword is set arbitrarily after adaptively reading up to $d$ arbitrary locations within the original codeword. Prior to this work, no efficient unconditional non-malleable codes were known for decision trees beyond depth $O(\log^2 n)$.

Our result also yields efficient, unconditional non-malleable codes that are $\exp(-n^{\Omega(1)})$-secure against constant-depth circuits of $\exp(n^{\Omega(1)})$-size. Prior work of Chattopadhyay and Li (STOC 2017) and Ball et al. (FOCS 2018) only provides protection against $\exp(O(\log^2n))$-size circuits with $\exp(-O(\log^2n))$-security.

We achieve our result through simple non-malleable reductions of decision tree tampering to split-state tampering. As an intermediary, we give a simple and generic reduction of leakage-resilient split-state tampering to split-state tampering with improved parameters. Prior work of Aggarwal et al. (TCC 2015) only provides a reduction to split-state non-malleable codes with decoders that exhibit particular properties.
