International Association for Cryptologic Research

IACR News Central

24 September 2016
The implementation of the AES encryption core by Moradi et al. at Eurocrypt 2011 is one of the smallest in terms of gate area. The circuit takes around 2400 gates and operates on an 8-bit datapath. However, this is an encryption-only core and cannot serve block cipher modes like CBC and ELmD that require access to both the AES encryption and decryption modules. In this paper we investigate whether the basic circuit of Moradi et al. can be tweaked to provide the dual functionality of encryption and decryption (ENC/DEC) while keeping the hardware overhead as low as possible. As a result, we report an 8-bit serialized AES circuit that provides the functionality of both encryption and decryption and occupies around 2645 GE with a latency of 226 cycles. This is a substantial improvement over the next smallest AES ENC/DEC circuit (Grain of Sand) by Feldhofer et al., which takes around 3400 gates but has a latency of over 1000 cycles for both encryption and decryption.
Time-memory-data (TMD) tradeoff attacks limit the security level of many classical stream ciphers (like E0, A5/1, Trivium, Grain) to $n/2$, where $n$ denotes the inner state length of the underlying keystream generator. In this paper, we present LIZARD, a lightweight stream cipher for power-constrained devices like passive RFID tags. Its hardware efficiency results from combining a Grain-like design with the FP(1)-mode, a recently suggested construction principle for the state initialization of stream ciphers, which offers provable ($2n/3$)-security against TMD tradeoff attacks aiming at key recovery. LIZARD uses 120-bit keys, 64-bit IVs and has an inner state length of 121 bits. It is designed to provide 80-bit security against key recovery attacks. LIZARD allows generating up to $2^{18}$ keystream bits per key/IV pair, which is sufficient for many existing communication scenarios like Bluetooth, WLAN or HTTPS.
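As a rough illustration of the FP(1)-mode idea (mix the key/IV into the state, then inject the key a second time before the final mixing), here is a toy Python model. The round function, round counts and key/IV packing below are placeholders, not LIZARD's actual specification:

```python
MASK = (1 << 121) - 1  # toy 121-bit inner state, matching LIZARD's state size

def mix(state, rounds=128):
    """Placeholder nonlinear state update (NOT LIZARD's real FSR update)."""
    for _ in range(rounds):
        fb = ((state >> 3) ^ (state >> 25) ^ ((state >> 46) & (state >> 63))) & 1
        state = ((state << 1) | fb) & MASK
    return state

def fp1_init(key, iv):
    """FP(1)-mode sketch: mix key/IV, re-inject the key, mix again."""
    state = mix(((key << 1) | 1) ^ iv)  # phase 1: load 120-bit key and 64-bit IV
    state ^= key                        # phase 2: second key injection (the FP(1) step)
    return mix(state)                   # phase 3: final diffusion before keystream output
```

The second key injection is the crux of the ($2n/3$)-security argument: an attacker who recovers the inner state after initialization still faces an unknown key XORed into it, so state recovery does not immediately yield key recovery.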
ePrint Report Secure Channel Injection and Anonymous Proofs of Account Ownership Liang Wang, Rafael Pass, abhi shelat, Thomas Ristenpart
We introduce secure channel injection (SCI) protocols, which allow one party to insert a private message into another party's encrypted communications. We construct an efficient SCI protocol for communications delivered over TLS, and use it to realize anonymous proofs of account ownership for SMTP servers. This allows alice@mail.com to prove ownership of some email address @mail.com, without revealing ``alice'' to the verifier. We show experimentally that our system works with standard email server implementations as well as Gmail. We go on to extend our basic SCI protocol to realize a ``blind'' certificate authority: the account holder can obtain a valid X.509 certificate binding alice@mail.com to her public key, if she can prove ownership of some email address @mail.com. The authority never learns which email account is used.
In 2012, Petit et al. showed that, under the algebro-geometric assumption named the "First Fall Degree Assumption", the complexity of ECDLP over the binary extension field ${\bf F}_{2^n}$ is $O(\exp(n^{2/3+o(1)}))$, where $\lim_{n \to \infty} o(1)=0$; many generalizations and improvements of the complexity of ECDLP under this assumption followed. In 2015, the author proposed the bit coincidence mining algorithm, which states that, under a heuristic assumption on the complexity of the XL algorithm, the complexity of ECDLP for $E/{\bf F}_q$ over an arbitrary finite field, including prime fields, is $O(\exp(n^{1/2+o(1)}))$, where $n \sim \log_2 \#E({\bf F}_q) \sim \log_2 q$. This is the first (heuristic) algorithm for solving ECDLP over prime fields in subexponential complexity. In both works, ECDLP reduces to solving a large system of equations, and under each assumption the complexity of solving the reduced system is subexponential (or polynomial). However, the resulting systems are too large to solve in practical time and space, so these are only complexity results.

The XL algorithm solves systems of quadratic equations with $n$ variables and $m$ equations; here, $n$ and $m$ are regarded as parameters. Let $D=D(n,m)$ be the maximal degree of the polynomials that appear while solving the system by XL. Courtois et al. observe and assume the following (a toy sketch of XL's linearization step is given after assumption 2) below):

1) There is a small integer $C_0$ such that $D(n,n+C_0)$ is usually $O(\sqrt{n})$, so the cost of solving the system is $O(\exp(n^{1/2+o(1)}))$. However, this observation is optimistic, and it requires the following additional assumption:

2) The system has a small number of solutions over the algebraic closure. (In this draft we assume the number of solutions is 0 or 1.)
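To make the role of these assumptions concrete, the following toy Python sketch performs XL's extension-and-linearization step over GF(2): each quadratic equation is multiplied by every monomial of degree at most $D-2$, each monomial of the extended system is treated as a fresh linear variable, and Gaussian elimination reveals how close the linearized system is to full rank. This illustrates the XL workflow only; it is not the paper's reduction:

```python
from itertools import combinations

def mul(poly, mono):
    """Multiply a GF(2) polynomial (a set of monomials) by one monomial."""
    out = set()
    for m in poly:
        out ^= {m | mono}        # x_i^2 = x_i, so a monomial product is a set union
    return out

def xl(polys, nvars, D):
    """One XL step: extend to degree D, linearize, return (rank, #monomials)."""
    ext = [frozenset(c) for d in range(D - 1)
           for c in combinations(range(nvars), d)]
    rows = [mul(p, m) for p in polys for m in ext]
    cols = {m: i for i, m in enumerate(sorted({m for r in rows for m in r},
                                              key=lambda m: (len(m), sorted(m))))}
    bits = [sum(1 << cols[m] for m in r) for r in rows if r]
    basis = {}
    for v in bits:               # Gaussian elimination over GF(2) on integer rows
        while v:
            top = v.bit_length() - 1
            if top not in basis:
                basis[top] = v
                break
            v ^= basis[top]
    return len(basis), len(cols)

# Toy system: x0*x1 + x2 = 0, x0 + x1 = 0, x1*x2 + x0 = 0, extended to D = 4.
f = [{frozenset({0, 1}), frozenset({2})},
     {frozenset({0}), frozenset({1})},
     {frozenset({1, 2}), frozenset({0})}]
print(xl(f, 3, 4))  # rank close to #monomials means relinearization succeeds
```

When $D(n,n+C_0)$ stays around $\sqrt{n}$, the number of linearized monomials is $n^{O(\sqrt{n})} = \exp(O(\sqrt{n}\log n))$, which is where the $O(\exp(n^{1/2+o(1)}))$ cost in assumption 1) comes from.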

In the previous version of the bit coincidence mining algorithm (2015), the number of solutions of the resulting system over the algebraic closure is small and can be probabilistically controlled to be 1, so assumption 2) holds indirectly. In the author's view, the reason the XL algorithm, elegant heuristic though it is, is not widely used is that general systems over finite fields do not satisfy assumption 2) (they have many solutions over the algebraic closure), so their complexity is much larger.

In the previous draft, the author showed that the ECDLP of $E({\bf F}_q)$ reduces to solving a system of $d-1$ variables and $d+C_0-1$ equations, where $C_0$ is an arbitrary positive integer and $d \sim C_0 \times \log_2 q$. Hence the complexity of solving ECDLP is subexponential under the following assumption:

a) There is some positive integer $C_0$, independent of $n$, such that when a quadratic system of $n$ variables and $m=n+C_0$ equations (for which assumption 2) must also hold) is solved by the XL algorithm, the maximum degree $D=D(n,m)$ of the polynomials appearing in this routine is $O(\sqrt{n})$ with high probability.

Here, we propose a new algorithm by which the ECDLP of $E({\bf F}_q)$ essentially reduces to solving a system of $d-1$ variables and $\frac{b_0}{2}d$ equations, where $b_0 (\ge 2)$ is an arbitrary positive integer called the block size and $d \sim (b_0-1)\log_{b_0} q$. We mainly treat the case of block size $b_0=3$. In this case, ECDLP essentially reduces to solving a system of about $2 \log_3 q$ variables and $3 \log_3 q$ equations, so the desired assumption 1) is always true. Moreover, the number of solutions (over the algebraic closure) of this system can be probabilistically controlled to be 1, so the desired assumption 2) is also true.

In the first part of this manuscript, the author describes the algorithm for constructing the system of equations to which ECDLP is reduced; in the latter part, the author presents the ideas and devices for increasing the number of equations, so that the obtained system is easily solved by the XL algorithm.
ePrint Report Attacking embedded ECC implementations through cmov side channels Erick Nascimento, Lukasz Chmielewski, David Oswald, Peter Schwabe
Side-channel attacks against implementations of elliptic-curve cryptography have been extensively studied in the literature and a large tool-set of countermeasures is available to thwart different attacks in different contexts. The current state of the art in attacks and countermeasures is nicely summarized in multiple survey papers, the most recent one by Danger et al. However, any combination of those countermeasures is ineffective against attacks that require only _a single trace_ and directly target a conditional move (cmov) -- an operation that is at the very foundation of all scalar-multiplication algorithms. This operation can be implemented either through arithmetic operations on registers or through various approaches that all boil down to loading from or storing to a secret address. In this paper we demonstrate that such an attack is indeed possible for ECC software running on AVR ATmega microcontrollers, using a protected version of the popular $\mu$NaCl library as an example. For the targeted implementations, we are able to recover 99.6% of the key bits for the arithmetic approach and 95.3% of the key bits for the approach based on secret addresses, with confidence levels 76.1% and 78.8%, respectively. All publicly available ECC software for the AVR that we are aware of uses one of the two approaches and is thus in principle vulnerable to our attacks.
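For concreteness, here is what the two targeted cmov styles typically look like, rendered in Python as a sketch of the usual C/assembly idioms (the 32-bit word size and two-entry table are illustrative assumptions, not code from $\mu$NaCl):

```python
W = (1 << 32) - 1                  # assume 32-bit registers

def cmov_arith(bit, a, b):
    """Branchless arithmetic cmov: returns b if bit == 1, else a."""
    mask = (-bit) & W              # bit=1 -> 0xFFFFFFFF, bit=0 -> 0x00000000
    return (b & mask) | (a & ~mask & W)

def cmov_addr(bit, a, b):
    """Address-based cmov: the load index itself depends on the secret bit."""
    table = [a, b]
    return table[bit]              # secret-dependent address

assert cmov_arith(1, 5, 9) == 9 and cmov_addr(0, 5, 9) == 5
```

In both variants the secret scalar bit determines either the mask value (all zeros vs. all ones) or the accessed address, which is exactly the data-dependent behavior a single-trace attacker can try to distinguish.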
ePrint Report Leakage Characterizing and Detecting Based on Communication Theory Wei Yang, Yuchen Cao, Ke Ma, Hailong Zhang, Yongbin Zhou, Baofeng Li
Evaluating the resilience of a crypto device against side-channel attacks (SCAs) is important and necessary. The SCA-security evaluation criteria include an information-theoretic metric and a security metric. The former measures the leakage amount of a crypto device and should be independent of the evaluator. However, the current metrics, e.g. mutual information (MI), conditional entropy and perceived information, depend on the leakage model selected by the evaluator: they only reflect the leakage utilization, rather than the real leakage level of a crypto device. In light of this, we analyze the side channel as a communication channel and develop two objective metrics, the average MI and the capacity of the channel, to characterize the real leakage amount of a crypto device and its upper bound through communication theory. Although the channel capacity is a rough estimate of the leakage amount of the device, it captures the leakage amount in the worst-case scenario. We investigate estimation methods for the two metrics in different noise settings. In addition, we develop a leakage detection method based on a consistency check. The proposed method is capable of finding the points of interest (POIs) in leakage traces while introducing few leakage points that cannot be used to mount SCAs. Experiments show the effectiveness of the proposed method.
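As a small illustration of estimating such an information-theoretic metric, the sketch below computes a histogram-based MI estimate between a simulated leakage sample and an intermediate value. The Hamming-weight leakage, noise level and bin count are toy assumptions, not the paper's estimators:

```python
import numpy as np

def mutual_information(leakage, z, bins=32):
    """Histogram estimate of I(L; Z) in bits, for integer-valued Z."""
    joint, _, _ = np.histogram2d(leakage, z, bins=[bins, np.arange(z.max() + 2)])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of the leakage
    py = pxy.sum(axis=0, keepdims=True)      # marginal of the intermediate value
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Toy usage: Hamming-weight leakage of a random byte plus Gaussian noise.
rng = np.random.default_rng(0)
v = rng.integers(0, 256, 50_000).astype(np.uint8)
hw = np.unpackbits(v[:, None], axis=1).sum(axis=1)
leak = hw + rng.normal(0, 1.0, v.size)
print(mutual_information(leak, hw))          # information in one leakage sample
```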
ePrint Report Breaking Cryptographic Implementations Using Deep Learning Techniques Houssem Maghrebi, Thibault Portigliatti, Emmanuel Prouff
The template attack is the most common and powerful profiled side-channel attack. It relies on a realistic assumption regarding the noise of the device under attack: the probability density function of the data is a multivariate Gaussian distribution. To relax this assumption, a recent line of research has investigated new profiling approaches, mainly by applying machine learning techniques. The results obtained are comparable to, and in some particular cases better than, those of the template attack. In this work, we continue this recent line of research by applying more sophisticated profiling techniques based on deep learning. Our experimental results confirm the overwhelming advantages of the resulting new attacks when targeting both unprotected and protected cryptographic implementations.
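A minimal profiled-attack sketch in this spirit, using scikit-learn on synthetic traces (the Hamming-weight leakage, network size, noise level and sample counts are illustrative assumptions; the paper's experiments target real measurement campaigns):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
HW = np.unpackbits(np.arange(256, dtype=np.uint8)[:, None], axis=1).sum(axis=1)

def traces(values, n_samples=20, noise=2.0):
    """Fake power traces: HW leakage at one sample point plus Gaussian noise."""
    t = rng.normal(0, noise, (values.size, n_samples))
    t[:, 7] += HW[values]                    # leak HW(value) at sample index 7
    return t

# Profiling phase: learn the trace -> Hamming-weight mapping on a known device.
profiling = rng.integers(0, 256, 20_000).astype(np.uint8)
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=50)
clf.fit(traces(profiling), HW[profiling])

# Attack phase: classify fresh traces from the (simulated) target.
target = rng.integers(0, 256, 1_000).astype(np.uint8)
print((clf.predict(traces(target)) == HW[target]).mean())  # per-trace recovery rate
```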
21 September 2016
ePrint Report Breaking Web Applications Built On Top of Encrypted Data Paul Grubbs, Richard McPherson, Muhammad Naveed, Thomas Ristenpart, Vitaly Shmatikov
We develop a systematic approach for analyzing client-server applications that aim to hide sensitive user data from untrusted servers. We then apply it to Mylar, a framework that uses multi-key searchable encryption (MKSE) to build Web applications on top of encrypted data. We demonstrate that (1) the Popa-Zeldovich model for MKSE does not imply security against either passive or active attacks; (2) Mylar-based Web applications reveal users’ data and queries to passive and active adversarial servers; and (3) Mylar is generically insecure against active attacks due to system design flaws. Our results show that the problem of securing client-server applications against actively malicious servers is challenging and still unsolved. We conclude with general lessons for the designers of systems that rely on property-preserving or searchable encryption to protect data from untrusted servers.
ePrint Report Snow White: Provably Secure Proofs of Stake Iddo Bentov, Rafael Pass, Elaine Shi
Decentralized cryptocurrencies have pushed deployments of distributed consensus to more stringent environments than ever before. Most existing protocols rely on proofs-of-work which require expensive computational puzzles to enforce, imprecisely speaking, “one vote per unit of computation”. The enormous amount of energy wasted by these protocols has been a topic of central debate, and well-known cryptocurrencies have announced that moving to alternative paradigms is a top priority. Among the proposed alternatives, proofs-of-stake protocols have been of particular interest, where, roughly speaking, the idea is to enforce “one vote per unit of stake”. Although the community has rushed to propose numerous candidates for proofs-of-stake, no existing protocol has offered formal proofs of security, which we believe to be a critical, indispensable ingredient of a distributed consensus protocol, particularly one that is to underlie a high-value cryptocurrency system.

In this work, we seek to address the following basic questions:

• What kind of functionalities and robustness requirements should a consensus candidate offer to be suitable in a proof-of-stake application?

• Can we design a provably secure protocol that satisfies these requirements?

To the best of our knowledge, we are the first to formally articulate a set of requirements for consensus candidates for proofs-of-stake. We argue that any consensus protocol satisfying these properties can be used for proofs-of-stake, as long as money does not change hands too quickly. Moreover, we provide the first consensus candidate that provably satisfies the desired robustness properties.
ePrint Report The Sleepy Model of Consensus Iddo Bentov, Rafael Pass, Elaine Shi
The distributed systems literature adopts two primary network models: the synchronous model, where honest messages are delivered in the next round, and the partially synchronous (or asynchronous) model, where honest messages are subject to unpredictable adversarial delays. In this paper, we show that more nuanced formal models exist beyond the traditional synchrony and asynchrony stratification -- and interestingly, such new models allow us to articulate new robustness properties that traditional models would have failed to capture. More specifically, we articulate a new formal model called “the sleepy model of consensus”, where we classify honest nodes as being either alert or sleepy. Alertness implies that the node is online and has good network connections, whereas sleepiness captures any type of failure or network jitter. We then describe the Sleepy consensus protocol, which achieves security as long as, at any time, the alert nodes outnumber the corrupt ones. No classical synchronous or asynchronous protocol attains such robustness guarantees, and yet we show how to leverage Nakamoto’s blockchain protocol, but without proofs-of-work, to achieve these properties, assuming collision-resistant hash functions, the existence of a public-key infrastructure and a common reference string.
Consensus, or state machine replication, is a foundational building block of distributed systems and modern cryptography. Consensus in the classical, permissioned setting has been extensively studied in 30 years of distributed systems literature. Recent developments in Bitcoin and other decentralized cryptocurrencies popularized a new form of consensus in a “permissionless” setting, where anyone can join and leave dynamically, and there is no a-priori knowledge of the consensus nodes. Despite this exciting breakthrough, today’s permissionless consensus protocols, often referred to as “blockchains”, are known to have terrible performance, which has resulted in heated, and at times acrimonious, debates in the community. First, we show that unfortunately a performance loss is inherent for any protocol that secures against at least 1/3 corruptions in hashpower. Specifically, we formally define a new performance measure called responsiveness, and show that any responsive permissionless consensus protocol cannot tolerate 1/3 or more corruptions in hashpower. Next, we show a tightly matching upper bound. Specifically, we propose a new permissionless consensus protocol called hybrid consensus, which is responsive and secures against up to 1/3 corruptions in hashpower. The idea of hybrid consensus is to bootstrap fast permissionless consensus by combining an inefficient blockchain protocol with a fast permissioned consensus protocol. Hybrid consensus uses the blockchain not to agree on transactions, but to agree on rotating committees, which in turn execute permissioned consensus protocols to agree on transactions. While the high-level idea is intuitive, formally instantiating and reasoning about the protocol exposed a multitude of non-trivial technical subtleties and challenges.
Nakamoto's famous blockchain protocol enables achieving consensus in a so-called permissionless setting---anyone can join (or leave) the protocol execution, and the protocol instructions do not depend on the identities of the players. His ingenious protocol prevents ``sybil attacks'' (where an adversary spawns any number of new players) by relying on computational puzzles (a.k.a. ``moderately hard functions'') introduced by Dwork and Naor (Crypto'92). Recent works by Garay et al. (Eurocrypt'15) and Pass et al. (manuscript, 2016) demonstrate that this protocol provably achieves consistency and liveness assuming a) honest players control a majority of the computational power in the network, b) the puzzle hardness is appropriately set as a function of the maximum network delay and the total computational power of the network, and c) the computational puzzle is modeled as a random oracle.

Assuming honest participation, however, is a strong assumption, especially in a setting where honest players are expected to perform a lot of work (to solve the computational puzzles). In Nakamoto's Bitcoin application of the blockchain protocol, players are incentivized to solve these puzzles by receiving rewards for every ``block'' (of transactions) they contribute to the blockchain. An elegant work by Eyal and Sirer (FinancialCrypt'14), strengthening and formalizing an earlier attack discussed on the Bitcoin forum, demonstrates that a coalition controlling even a minority fraction of the computational power in the network can gain (close to) 2 times its ``fair share'' of the rewards (and transaction fees) by deviating from the protocol instructions. In contrast, in a fair protocol, one would expect players controlling a $\phi$ fraction of the computational resources to reap a $\phi$ fraction of the rewards.

In this work, we present a new blockchain protocol---the FruitChain protocol---which satisfies the same consistency and liveness properties as Nakamoto's protocol (assuming an honest majority of the computing power), and additionally is $\delta$-approximately fair: with overwhelming probability, any honest set of players controlling a $\phi$ fraction of computational power is guaranteed to get at least a fraction $(1 - \delta) \phi$ of the blocks (and thus rewards) in any $\Omega(\kappa/\delta)$ length segment of the chain (where $\kappa$ is the security parameter).

As a consequence, if this blockchain protocol is used as the ledger underlying a cryptocurrency system, where rewards and transaction fees are evenly distributed among the miners of blocks in a length-$\kappa$ segment of the chain, no coalition controlling less than a majority of the computing power can gain more than a factor $(1 + 3\delta)$ by deviating from the protocol (i.e., honest participation is an $n/2$-coalition-safe $3\delta$-Nash equilibrium).

Finally, the FruitChain protocol enables decreasing the variance of mining rewards and as such significantly lessens (or even obliterates) the need for mining pools.
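To see what $\delta$-approximate fairness asserts, the toy simulation below draws an idealized chain in which each block is won independently with probability proportional to hash power (roughly the behavior FruitChain aims to enforce) and checks the $(1-\delta)\phi$ bound on every window of length $\kappa/\delta$; all parameters are illustrative assumptions:

```python
import random

random.seed(0)
phi, delta, kappa = 0.30, 0.10, 200            # coalition power, slack, security parameter
window = int(kappa / delta)                    # segment length on the order of kappa/delta

# Idealized chain: each block is won by the phi-coalition i.i.d. with probability phi.
chain = [random.random() < phi for _ in range(20_000)]

# Worst-case coalition share over all sliding windows, via a running sum.
count = sum(chain[:window])
worst = count
for i in range(window, len(chain)):
    count += chain[i] - chain[i - window]
    worst = min(worst, count)

print(worst / window, ">=", (1 - delta) * phi)  # the fairness bound to check
```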
ePrint Report Transparency Overlays and Applications Melissa Chase, Sarah Meiklejohn
In this paper, we initiate a formal study of transparency, which in recent years has become an increasingly critical requirement for the systems in which people place trust. We present the abstract concept of a transparency overlay, which can be used in conjunction with any system to give it provable transparency guarantees, and then apply the overlay to two settings: Certificate Transparency and Bitcoin. In the latter setting, we show that the usage of our transparency overlay eliminates the need to engage in mining and allows users to store a single small value rather than the entire blockchain. Our transparency overlay is generically constructed from a signature scheme and a new primitive we call a dynamic list commitment, which in practice can be instantiated using a collision-resistant hash function.
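One simple way to instantiate an append-only list commitment from a collision-resistant hash, in the spirit of (though not necessarily identical to) the paper's dynamic list commitment, is a hash chain with an append-consistency check:

```python
import hashlib

def H(*parts):
    """SHA-256 over length-prefixed parts (length prefixes prevent ambiguity)."""
    h = hashlib.sha256()
    for p in parts:
        h.update(len(p).to_bytes(4, "big") + p)
    return h.digest()

def commit(items):
    """Chain commitment: c_i = H(c_{i-1}, item_i), starting from a zero digest."""
    c = b"\x00" * 32
    for it in items:
        c = H(c, it)
    return c

def verify_append(old_commit, new_items, new_commit):
    """Check that new_commit extends old_commit by exactly new_items."""
    c = old_commit
    for it in new_items:
        c = H(c, it)
    return c == new_commit

log = [b"cert-1", b"cert-2"]
c0 = commit(log)
c1 = commit(log + [b"cert-3"])
print(verify_append(c0, [b"cert-3"], c1))  # True: the log only grew
```

A verifier holding only the latest 32-byte commitment can check that the list evolved by appends alone, which is the kind of small-state property that lets users avoid storing the entire blockchain.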
ePrint Report Computing discrete logarithms in cryptographically-interesting characteristic-three finite fields Gora Adj, Isaac Canales-Martínez, Nareli Cruz-Cortés, Alfred Menezes, Thomaz Oliveira, Luis Rivera-Zamarripa, Francisco Rodríguez-Henríquez
Since 2013 there have been several developments in algorithms for computing discrete logarithms in small-characteristic finite fields, culminating in a quasi-polynomial algorithm. In this paper, we report on our successful computation of discrete logarithms in the cryptographically-interesting characteristic-three finite field ${\mathbb F}_{3^{6 \cdot 509}}$ using these new algorithms; prior to 2013, it was believed that this field enjoyed a security level of 128 bits. We also show that a recent idea of Guillevic can be used to compute discrete logarithms in the cryptographically-interesting finite field ${\mathbb F}_{3^{6 \cdot 709}}$ using essentially the same resources as we expended on the ${\mathbb F}_{3^{6 \cdot 509}}$ computation. Finally, we argue that discrete logarithms in the finite field ${\mathbb F}_{3^{6 \cdot 1429}}$ can feasibly be computed today; this is significant because this cryptographically-interesting field was previously believed to enjoy a security level of 192 bits.
Event date: 13 February to 15 February 2017
Submission deadline: 5 October 2016
Notification: 15 November 2016
19 September 2016
Authenticated key exchange (AKE) plays a fundamental role in modern cryptography. Up to now, the HMQV protocol family has been among the most efficient provably secure AKE protocols, and it has been widely standardized and deployed. Given recent advances in quantum computing, it is highly desirable to develop lattice-based HMQV-analogue protocols for the upcoming post-quantum era. Toward this goal, an important step was recently made by Zhang et al. at Eurocrypt'15. Similar to HMQV, the HMQV-analogue protocols proposed there consist of two variants: a two-pass protocol $\Pi_2$, as well as a one-pass protocol $\Pi_1$ that implies, in turn, a signcryption scheme (named ``deniable encryption''). All these protocols are claimed to be provably secure under the ring-LWE (RLWE) assumption.

In this work, we propose a new type of attack, referred to as a small field attack (SFA), against the one-pass protocol $\Pi_1$ and its resultant deniable encryption scheme. With SFA, a malicious user can efficiently recover the static private key of the honest victim user in $\Pi_1$ with overwhelming probability. Moreover, the attack is realistic and powerful in practice, in the sense that it is almost impossible for the honest user to prevent, or even detect, it. In addition, a new property of the CRT basis of $R_q$ is developed in this work, which is essential for our small field attack and may be of independent interest.

The security proof of the two-pass protocol $\Pi_2$ is then revisited. We are stuck at Claim 16 in [ZZDS14], where we identify and discuss a gap in the security proof. We do not know how to fix this gap, which traces back to some critical differences between the security proof of HMQV and that of its RLWE-based analogue.
ePrint Report Parallel Implementations of Masking Schemes and the Bounded Moment Leakage Model Gilles Barthe, François Dupressoir, Sebastian Faust, Benjamin Grégoire, François-Xavier Standaert, Pierre-Yves Strub
In this paper, we provide a necessary clarification of the good security properties that can be obtained from parallel implementations of masking schemes. For this purpose, we first argue that (i) the probing model is not straightforward to interpret, since it more naturally captures the intuitions of serial implementations, and (ii) the noisy leakage model is not always convenient, e.g. when combined with formal methods for the verification of cryptographic implementations. Therefore we introduce a new model, the bounded moment model, that formalizes a weaker notion of security order frequently used in the side-channel literature. Interestingly, we prove that probing security for a serial implementation implies bounded moment security for its parallel counterpart. This result therefore enables an accurate understanding of the links between formal security analyses of masking schemes and experimental security evaluations based on the estimation of statistical moments. Besides its consolidating nature, our work also brings useful technical contributions. First, we describe and analyze refreshing and multiplication algorithms that are well suited for parallel implementations and improve security against multivariate side-channel attacks. Second, we show that simple refreshing algorithms (with linear complexity) that are not secure in the continuous probing model are secure in the continuous bounded moment model. Finally, we discuss the independent leakage assumption required for masking to deliver its security promises, and its specificities related to the serial or parallel nature of an implementation.
ePrint Report The Shortest Signatures Ever Mohamed Saied Emam Mohamed, Albrecht Petzoldt
Multivariate cryptography is one of the main candidates for creating post-quantum public key cryptosystems. Especially in the area of digital signatures, there exist many practical and secure multivariate schemes. In this paper we present a general technique to reduce the signature size of multivariate schemes. Our technique enables us to reduce the signature size of nearly all multivariate signature schemes by 10 to 15% without slowing down the scheme significantly. We can prove that the security of the underlying scheme is not weakened by this modification. Furthermore, the technique enables a further reduction of the signature size when accepting a slightly more costly verification process. This trade-off between signature size and complexity of the verification process cannot be observed for any other class of digital signature schemes. By applying our technique to the Gui signature scheme, we obtain the shortest signatures of all existing digital signature schemes.
In this work we consider the closest vector problem (CVP) ---a problem also known as maximum-likelihood decoding--- in the tensor of two root lattices of type A ($A_m \otimes A_n$), as well as in their duals ($A^*_m \otimes A^*_n$). This problem is mainly motivated by lattice-based cryptography, where the cyclotomic rings $\mathbb Z[\zeta_c]$ (resp. its co-different $\mathbb Z[\zeta_c]^\vee$) play a central role, and turn out to be isomorphic as lattices to tensors of $A^*$ lattices (resp. $A$ root lattices). In particular, our results lead to solving CVP in $\mathbb Z[\zeta_c]$ and in $\mathbb Z[\zeta_c]^\vee$ for conductors of the form $c = 2^\alpha p^\beta q^\gamma$ for any two odd primes $p,q$.

For the primal case $A_m \otimes A_n$, we provide a full characterization of the Voronoi region in terms of simple cycles in the complete directed bipartite graph $K_{m+1,n+1}$. This leads ---relying on the Bellman-Ford algorithm for negative cycle detection--- to a CVP algorithm running in *polynomial time*. Precisely, our algorithm performs $O(l\ m^2 n^2 \min\{m,n\})$ operations on reals, where $l$ is the number of bits per coordinate of the input target. For the dual case, we use a gluing-construction to solve CVP in sub-exponential time $O(n m^{n+1})$.
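The negative-cycle-detection subroutine that the primal CVP algorithm relies on is standard Bellman-Ford; a compact version follows (the encoding of the Voronoi-relevant cycles of $K_{m+1,n+1}$ as a weighted graph is the paper's contribution and is not reproduced here):

```python
def has_negative_cycle(n, edges):
    """edges: iterable of (u, v, w). True iff some cycle has negative total weight."""
    dist = [0.0] * n                 # zero everywhere acts as a virtual source to all nodes
    for _ in range(n):               # n relaxation passes over all edges
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return False             # distances stabilized: no negative cycle exists
    return True                      # still improving after n passes: negative cycle

print(has_negative_cycle(3, [(0, 1, 1.0), (1, 2, -2.0), (2, 0, 0.5)]))  # True
```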
ePrint Report Multi-core FPGA Implementation of ECC with Homogeneous Co-Z Coordinate Representation Bo-Yuan Peng, Yuan-Che Hsu, Yu-Jia Chen, Di-Chia Chueh, Chen-Mou Cheng, Bo-Yin Yang
Elliptic Curve Cryptography (ECC) has been gaining popularity in recent years. Having short keys and short signatures in particular makes ECC likely to be adopted in numerous internet-of-things (IoT) devices. It is therefore critical to optimize ECC well for both speed and power consumption. Optimization opportunities exist on several different levels: algorithm, architecture, and/or implementation. We combine optimizations at every level in an efficient multi-core FPGA implementation. The core building block for our implementation is a Montgomery multiplier capable of modular additions and multiplications with an arbitrary prime modulus. The size of the prime modulus can also be changed easily, for which we have implemented and tested up to the 528 bits used for the NIST P-521 curve. Based on this building block, we have developed a multi-core architecture that supports multiple parallel modular additions, multiplications, and inverses. Efficient ECC group addition and doubling are then built from this foundation. To support a wide variety of curves and at the same time resist timing/power-based side-channel attacks, our scalar multiplication is implemented using the Co-Z ladder due to Hutter, Joye, and Sierra. This approach also allows us to trade off between speed and power consumption by using a different number of Montgomery cores.
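The arithmetic at the heart of such a core is Montgomery reduction, which replaces division by the prime modulus $p$ with shifts and multiplications modulo a power of two. The sketch below shows the underlying math for an arbitrary odd modulus (the FPGA core is digit-serial hardware, which this big-integer sketch does not model):

```python
def redc(t, p, R_bits, p_neg_inv):
    """Return t * R^(-1) mod p for R = 2^R_bits, using only shifts and multiplies."""
    m = (t * p_neg_inv) & ((1 << R_bits) - 1)  # m = t * (-p^(-1)) mod R
    u = (t + m * p) >> R_bits                  # t + m*p is divisible by R by construction
    return u - p if u >= p else u              # single conditional subtraction

# Parameters sized for NIST P-521 (the paper tests moduli up to 528 bits).
p = 2**521 - 1
R_bits = 528
p_neg_inv = -pow(p, -1, 1 << R_bits) % (1 << R_bits)

a, b = 0x1234567890ABCDEF, 0xFEDCBA0987654321
R2 = pow(1 << R_bits, 2, p)                    # R^2 mod p converts into Montgomery form
abar = redc(a * R2, p, R_bits, p_neg_inv)      # abar = a*R mod p
bbar = redc(b * R2, p, R_bits, p_neg_inv)      # bbar = b*R mod p
ab = redc(redc(abar * bbar, p, R_bits, p_neg_inv), p, R_bits, p_neg_inv)
print(ab == a * b % p)                         # True
```

Staying in Montgomery form across a whole scalar multiplication means the conversion cost is paid only once at the start and end, which is why the representation suits a multiplier core that is invoked many times per operation.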
ePrint Report Secure Error-Tolerant Graph Matching Protocols Kalikinkar Mandal, Basel Alomair, Radha Poovendran
We consider a setting with two parties, each holding a private graph, who wish to jointly compute the structural dissimilarity between the two graphs without revealing any information about their private input graphs. Graph edit distance (GED) is a widely accepted metric for measuring the dissimilarity of graphs. It measures the minimum cost of transforming one graph into the other by applying graph edit operations. In this paper we present a framework for securely computing approximated GED and, as an example, present a protocol based on a threshold additively homomorphic encryption scheme. We develop several new sub-protocols, such as private maximum computation and optimal assignment protocols, to construct the main protocol. We show that our protocols are secure against semi-honest adversaries. The asymptotic complexity of the protocol is $O(n^5\ell\log^*(\ell))$, where $\ell$ is the bit length of ring elements and $n$ is the number of nodes in the graph.
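The additive homomorphism that such protocols build on can be illustrated with a toy (non-threshold) Paillier instance; the tiny primes below are for demonstration only and offer no security:

```python
import math, random

def keygen(p=1000003, q=1000033):          # toy primes; real use needs ~1024-bit primes
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    # With g = n+1, L(g^lam mod n^2) = lam mod n, where L(x) = (x-1)/n.
    mu = pow((pow(n + 1, lam, n * n) - 1) // n, -1, n)
    return (n,), (n, lam, mu)

def enc(pk, m):
    (n,) = pk
    r = random.randrange(1, n)             # fresh randomness per ciphertext
    return pow(n + 1, m, n * n) * pow(r, n, n * n) % (n * n)

def dec(sk, c):
    n, lam, mu = sk
    return (pow(c, lam, n * n) - 1) // n * mu % n

pk, sk = keygen()
c = enc(pk, 20) * enc(pk, 22) % (pk[0] ** 2)   # multiplying ciphertexts...
print(dec(sk, c))                               # ...adds plaintexts: prints 42
```

Multiplying ciphertexts adds the underlying plaintexts, the kind of oblivious aggregation from which sub-protocols such as private maximum computation can be built.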
Job Posting Associate Professor / Professor in Cyber Security, Ref. 56904 School of Computer Science & Engineering, UNSW Australia
The purpose of this role is to conduct high impact research and deliver outstanding teaching and student experience in the area of cyber security, and to play a significant leadership role in supporting the achievement of the teaching and service missions of the School and Faculty.

The appointment would be:

• at level D ($143k – $157k per annum) or level E ($183k per annum), plus 17% superannuation and leave loading, commensurate with academic qualifications and experience,

• full time,

• to a Convertible Tenure Track Employment Contract for an initial fixed term period, with conversion to continuing appointment based on satisfactory performance and there being sufficient productive work available.

The Faculty reserves the right to offer an appointment on a continuing basis or to directly appoint to advertised positions.

About you

We are looking for an exceptional candidate to provide academic leadership and foster excellence in research, innovative teaching and professional activities. Applicants with industry experience are especially welcome.

As the successful candidate, you will have:

• A PhD in Cyber Security or a related field

• A significant track record of high impact international research

• Demonstrated leadership in building engagement and partnerships with the profession and industry

• Record of outstanding delivery of high quality teaching and student experience

UNSW desires to be the exemplar Australian university and employer of choice for people from diverse backgrounds. UNSW aspires to ensure equality in recruitment, development, retention and promotion of staff, paying particular attention to ensuring no-one is disadvantaged on the basis of their gender, cultural background, disability or Indigenous origin.

Engineering strongly encourages female applicants and actively supports women to succeed through specific Faculty and UNSW initiatives and generous parental leave provisions and flexible work practices.

Closing date for applications: 9 October 2016

Contact: Any enquiries can be forwarded to Professor Maurice Pagnucco, Head of School:

E: morri (at) cse.unsw.edu.au

T: +61 9385 5518

More information: https://www.engineering.unsw.edu.au/about-us/careers/academic-positions

16 September 2016
Garbled RAM, introduced by Lu and Ostrovsky (Eurocrypt 2013), provides a novel method to garble RAM (Random Access Machine) programs directly. It can be seen as a RAM analogue of Yao's garbled circuits, in which the size of the garbled program, and the time it takes to create and evaluate it, are proportional only to the running time of the RAM program, avoiding the inefficient process of first converting it into a circuit. Secure RAM computation for two parties is a key application of garbled RAM. However, this construction is secure only against semi-honest adversaries.

In this paper we provide a cut-and-choose technique for garbled RAM. This gives the first constant round two-party secure computation protocol for RAM programs secure against malicious adversaries that makes only black-box use of the underlying cryptographic primitives. Our protocol allows for garbling multiple RAM programs being executed on a persistent database. Security of our construction is argued in the random oracle model.
The possibility of basing cryptography on the minimal assumption $\textbf{NP}\nsubseteq \textbf{BPP}$ is at the very heart of complexity-theoretic cryptography. The closest we have come so far is lattice-based cryptography, whose average-case security is based on the worst-case hardness of approximate shortest vector problems on integer lattices. The state of the art is the construction of a one-way function (and collision-resistant hash function) based on the hardness of the $\tilde{O}(n)$-approximate shortest independent vector problem $\textsf{SIVP}_{\tilde O(n)}$.

Although $\textsf{SIVP}$ is hard in its exact version, Guruswami et al. (CCC 2004) showed that $\textsf{gapSIVP}_{\sqrt{n/\log n}}$ is in $\textbf{NP} \cap \textbf{coAM}$ and thus unlikely to be $\textbf{NP}$-hard. Indeed, any language that can be reduced to $\textsf{gapSIVP}_{\tilde O(\sqrt n)}$ (under general probabilistic polynomial-time adaptive reductions) is in $\textbf{AM} \cap \textbf{coAM}$ by the results of Peikert and Vaikuntanathan (CRYPTO 2008) and Mahmoody and Xiao (CCC 2010). However, none of these results applies to reductions to search problems, still leaving open a ray of hope: can $\textbf{NP}$ be reduced to solving search SIVP with approximation factor $\tilde O(n)$?

We show that any language that can be reduced to solving search $\textsf{SIVP}$ with approximation factor $\tilde O(n)$ lies in $\textbf{AM} \cap \textbf{coAM}$, eliminating the possibility of basing current constructions on $\textbf{NP}$-hardness.
15 September 2016
Tian et al. proposed a permutation-based authentication protocol entitled RAPP. However, it soon emerged that it suffers from several security threats, such as a desynchronization attack. Following RAPP, several protocols have been proposed in the literature to defeat such attacks. Among them, some protocols suggest keeping a record of old parameters on both the reader and the tag. In this paper we present a generalized version of all such protocols, named GUMAP, and present an efficient desynchronization attack against it. Our attack requires 5 consecutive protocol sessions and succeeds with probability almost 1. It applies as-is to the recently proposed protocols RCIA, KMAP, SASI$^{+}$ and SLAP. To the best of our knowledge, this is the first report on the vulnerability of these protocols.
This paper presents expressive predicate encryption (PE) systems, namely non-zero inner-product-predicate encryption (NIPPE) and attribute-based encryption (ABE) supporting monotone span programs, achieving the best known parameters among existing similar schemes under well-studied static complexity assumptions. Both constructions are built in the composite order bilinear group setting and involve only 2 group elements in the ciphertexts. More interestingly, our NIPPE scheme, which additionally features only 1 group element in the decryption keys, is the first to attain succinct ciphertexts and decryption keys simultaneously. For proving selective security of these constructions under the Subgroup Decision assumptions, which are the most standard static assumptions in the composite order bilinear group setting, we apply the extended version of the elegant Déjà Q framework, which was originally proposed as a general technique for reducing q-type complexity assumptions to their static counterparts. Our work thus demonstrates the power of this framework in overcoming the need for q-type assumptions, which are vulnerable to serious practical attacks, in deriving security of highly expressive PE systems with compact parameters. We further introduce the concept of online-offline multi-input functional encryption (OO-MIFE), which is a crucial advancement towards realizing this highly promising but computationally intensive cryptographic primitive in resource-bounded and power-constrained devices. We also instantiate our notion of OO-MIFE by constructing such a scheme for the multi-input analog of the inner product functionality, which has a wide range of applications in practice. Our OO-MIFE scheme for multi-input inner products is built in asymmetric bilinear groups of prime order and is proven selectively secure under the well-studied k-Linear (k-LIN) assumption.
The concept of indifferentiability of systems, a generalized form of indistinguishability, was proposed in 2004 to provide a simplified and generalized explanation of impossibility results like the non-instantiability of random oracles by hash functions due to Canetti, Goldreich, and Halevi (STOC 1998). But indifferentiability is actually a constructive notion, leading to possibility results. For example, Coron et al. (Crypto 2005) argued that the soundness of the construction $C(f)$ of a hash function from a compression function $f$ can be demonstrated by proving that $C(R)$ is indifferentiable from a random oracle if $R$ is an ideal random compression function.

The purpose of this short paper is to describe how the indifferentiability notion was a precursor to the theory of constructive cryptography and thereby to provide a simplified and generalized treatment of indifferentiability as a special type of constructive statement.
In cryptographic access control, sensitive data is protected by cryptographic primitives and the desired access structure is enforced through appropriate management of the secret keys. In this paper we study rigorous security definitions for the cryptographic enforcement of Role-Based Access Control (RBAC). We propose the first simulation-based security definition within the framework of Universal Composability (UC). Our definition is natural and intuitively appealing, so we expect that our approach will carry over to other access models.

Next, we establish two results that clarify the strength of our definition when compared with existing ones that use the game-based definitional approach. On the positive side, we demonstrate that both read- and write-access guarantees in the sense of game-based security are implied by UC security of an access control system. Perhaps as expected, this result serves as confirmation that the definition we propose is sound.

Our main technical result is a proof that simulation-based security requires impractical assumptions on the encryption scheme that is employed. As in other simulation-based settings, the source of inefficiency is the well-known ``commitment problem'', which naturally occurs in the context of cryptographic access control to file systems.
ePrint Report Distance Bounding based on PUF Mathilde Igier, Serge Vaudenay
Distance Bounding (DB) is designed to mitigate relay attacks. This paper provides a complete study of the DB protocol of Kleber et al. based on Physical Unclonable Functions (PUFs). We contradict the claim that it resists Terrorist Fraud (TF). We propose some slight modifications to increase the security of the protocol and formally prove TF-resistance, as well as resistance to Distance Fraud (DF) and Man-In-the-Middle (MiM) attacks, which include relay attacks.
ePrint Report Quantifying Web Adblocker Privacy Arthur Gervais, Alexandros Filios, Vincent Lenders, Srdjan Capkun
Web advertisements, an integral part of today's web browsing experience, financially support countless websites. Meaningful advertisements, however, require behavioral targeting, user tracking and profile fingerprinting that raise serious privacy concerns. To counter privacy issues and enhance usability, adblockers emerged as a popular way to filter web requests that do not serve the website's main content. Despite their popularity, little work has focused on quantifying the privacy provisions of adblockers.

In this paper, we develop a quantitative approach to objectively compare the privacy of adblockers. We propose a model based on a set of privacy metrics that captures not only the technical web architecture but also the underlying corporate institutions, across time and geography.

We experimentally investigate the effect of various combinations of ad-blocking software and browser settings on 1,000 websites. Our results highlight a significant difference among adblockers in terms of filtering performance, which is notably affected by the applied configuration. Besides the ability to judge the filtering capabilities of existing adblockers and their particular configurations, our work provides a general framework to evaluate new adblocker proposals.
