[Event][New] ASIAPKC 2014: 2nd ACM ASIA Public-Key Cryptography Workshop

Submission: 10 February 2014

Notification: 10 March 2014

From June 3 to June 3, 2014

Location: Kyoto, Japan

More Information: http://www2.nict.go.jp/nsri/arch/asiapkc2014/

2014-01-08

Some recent constructions based on LWE do not sample the secret uniformly at random but rather from some distribution which produces small entries. The most prominent of these is the binary-LWE problem where the secret vector is sampled from {0, 1}∗ or {−1, 0, 1}∗. We present a variant of the BKW algorithm for binary-LWE and other small secret variants and show that this variant reduces the complexity for solving binary-LWE. We also give estimates for the cost of solving binary-LWE instances in this setting and demonstrate the advantage of this BKW variant over standard BKW and lattice reduction techniques applied to the SIS problem. Our variant can be seen as a combination of the BKW algorithm with a lazy variant of modulus switching which might be of independent interest.
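As a toy illustration of the modulus-switching idea, the following Python sketch (with assumed toy parameters `q`, `p`, `n` and noise bound, none of them from the paper) generates a binary-LWE sample and rounds it to a smaller modulus; because the secret is binary, the rounding error contributed by each coordinate stays small.

```python
import random

random.seed(7)

q = 4093   # assumed toy modulus to switch from
p = 257    # assumed smaller modulus to switch to
n = 10     # toy dimension

# Binary-LWE secret: entries sampled from {0, 1}
s = [random.randint(0, 1) for _ in range(n)]

def lwe_sample(s, q, noise_bound=3):
    """One LWE sample (a, c = <a, s> + e mod q) with small noise."""
    a = [random.randrange(q) for _ in range(len(s))]
    e = random.randint(-noise_bound, noise_bound)
    c = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, c

def mod_switch(a, c, q, p):
    """Round a sample from Z_q down to Z_p.  The per-coordinate rounding
    error is multiplied by the secret, so a binary secret keeps the
    extra noise small -- the effect the BKW variant exploits."""
    a2 = [round(ai * p / q) % p for ai in a]
    c2 = round(c * p / q) % p
    return a2, c2

a, c = lwe_sample(s, q)
a2, c2 = mod_switch(a, c, q, p)

# The residual noise after switching should still be small.
err = (c2 - sum(ai * si for ai, si in zip(a2, s))) % p
err = err if err <= p // 2 else err - p
print(err)
```

With a binary secret the residual noise is bounded by roughly |e|p/q plus half the Hamming weight of the secret, so it remains far below p/2.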

One of the most important applications of cryptography is the establishment of secure communication channels between two entities (e.g. a client and a server), and the protocol most widely used for this purpose is TLS.

A key goal of research in cryptography is to provide security proofs for cryptographic protocols. This task is particularly difficult if the considered protocol has not been designed with provable security in mind, as is the case for TLS.

Results on provable security differ with respect to (1) the assumptions made and (2) the statement that is proved to follow from the assumptions. It is important that the proved statement is of a form that allows both for comparisons of protocol performance and for direct use in the proof of a higher-level protocol. Security statements should thus be exact (as opposed to asymptotic), giving precise upper bounds for the security level guaranteed by a protocol. Furthermore, a key to analyzing and designing cryptographic protocols is a modularization in which the role of each cryptographic primitive (e.g. encryption) or mechanism (e.g. nonce exchange) is made explicit, and the security of its application is proved in isolation, once and for all. The constructive cryptography framework provides a sound instantiation of this approach. A modular step constructs a specific resource from certain (assumed) resources, and the overall protocol is the composition of several such construction steps. The security proof for the overall protocol follows directly from the composition theorem as well as the individual (reasonably simple) security proofs for the modules. Moreover, the actual security statement for the overall protocol is of a standardized form, in terms of a resource, which makes it straightforward to use the protocol in a higher-level context, with the overall security proof again following from the composition theorem.

In this paper, we provide such a constructive treatment of TLS. We provide a deconstruction of TLS into modular steps and a security proof for each step which, compared to previous work, yields the advantages mentioned above. For the key-exchange step in particular, we analyze the RSA-based and both Diffie-Hellman-based variants (with static and ephemeral server key) under a non-randomizability assumption for RSA-PKCS and the Gap Diffie-Hellman assumption, respectively; in all cases we make use of random oracles. In general, since the design of TLS is not modular, the constructive decomposition is less fine-grained than one might wish, and less fine-grained than it would be for a modular design. This paper therefore also offers new insights into the intrinsic problems incurred by a non-modular protocol design such as that of TLS.

Attribute-based encryption (ABE) is a type of public key encryption that allows users to encrypt and decrypt messages based on user attributes. For instance, one can encrypt a message to any user satisfying the boolean formula ("crypto conference attendee" AND "PhD student") OR "IACR member". One drawback is that encryption and key generation computational costs scale with the complexity of the access policy or the number of attributes. In practice, this makes encryption and user key generation a possible bottleneck for some applications.

To address this problem, we develop new techniques for ABE that split the computation for these algorithms into two phases: a preparation phase that does the vast majority of the work to encrypt a message or create a secret key *before* it knows the message or the attribute list/access control policy that will be used (or even the size of the list or policy). A second phase can then rapidly assemble an ABE ciphertext or key when the specifics become known. This concept is sometimes called "online/offline" encryption when only the message is unknown during the preparation phase; we note that the addition of unknown attribute lists and access policies makes ABE significantly more challenging.

One motivating application for this technology is mobile devices: the preparation work can be performed while the phone is plugged into a power source, then it can later rapidly perform ABE operations on the move without significantly draining the battery.
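The paper's ABE construction is far more involved, but the offline/online split itself can be sketched with a toy ElGamal-style example (all parameters below are hypothetical, not from the paper): the expensive modular exponentiations happen in the preparation phase, and the online phase is a single modular multiplication once the message arrives.

```python
import random

# Toy group: multiplicative group mod a Mersenne prime (illustrative only).
p = 2**61 - 1          # assumed toy prime modulus
g = 3                  # assumed base element

sk = random.randrange(2, p - 1)
pk = pow(g, sk, p)

def offline_phase(pk, p, g):
    """Preparation: all exponentiations happen before the message is known."""
    r = random.randrange(2, p - 1)
    return {"c1": pow(g, r, p), "mask": pow(pk, r, p)}

def online_phase(prep, m):
    """Fast assembly once the message arrives: one modular multiplication."""
    return prep["c1"], (m * prep["mask"]) % p

def decrypt(c1, c2, sk):
    return (c2 * pow(c1, p - 1 - sk, p)) % p   # c2 / c1^sk via Fermat inverse

prep = offline_phase(pk, p, g)    # e.g. while the phone is charging
m = 123456789
c1, c2 = online_phase(prep, m)    # cheap: no exponentiation needed here
print(decrypt(c1, c2, sk))        # → 123456789
```

The ABE setting is harder precisely because the preparation phase must also commit to work before the attribute list or access policy is known, which this toy example does not capture.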

Most lightweight block ciphers are nibble-oriented, as the implementation of a 4-bit S-box is much more compact than that of an 8-bit S-box. This paper proposes a novel LFSR-based implementation of the multiplicative inverse for 8-bit S-boxes requiring only 138 gate equivalents (GE). It can be shown that if such S-boxes are adopted for the AES, each S-box takes less than 50 GE in a parallel implementation. Canright's implementation [Canright] of the AES S-box is five times more expensive than this method for AES-like S-boxes. With this scheme, a lightweight block cipher can be designed using an 8-bit S-box.
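The paper's contribution is the cheap LFSR circuit; as a reference point, the function it computes, the multiplicative inverse in GF(2^8) that underlies the AES S-box, can be sketched in software as follows (exponentiation by 254 is used here purely for clarity, not efficiency).

```python
def gf_mul(a, b, poly=0x11B):
    """Multiply in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_inv(a):
    """Multiplicative inverse via a^254 = a^(-1) in GF(2^8); 0 maps to 0."""
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

# Sanity check: x * x^{-1} = 1 for every nonzero byte.
print(all(gf_mul(x, gf_inv(x)) == 1 for x in range(1, 256)))  # → True
```

Hardware implementations such as the one proposed here compute this same map with a fraction of the gates a naive lookup or exponentiation would need.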

It is well known that almost all random subset sum instances with density less than 0.6463... can be solved with an $l_2$-norm SVP oracle, by a result of Lagarias and Odlyzko. Later, Coster \emph{et al.} improved the bound to 0.9408... by using a different lattice. In this paper, we generalize this classical result to the $l_p$-norm. More precisely, we show that for $p \in \mathbb{Z}^{+}$, an $l_p$-norm SVP oracle can be used to solve almost all random subset sum instances with density bounded by $\delta_p$, where $\delta_1 = 0.5761$ and
$$\delta_p = 1\Big/\Big(\frac{1}{2^p}\log_2(2^{p+1}-2) + \log_2\Big(1+\frac{1}{(2^p-1)\big(1-\big(\frac{1}{2^{p+1}-2}\big)^{2^p-1}\big)}\Big)\Big)$$
for $p \geq 3$ (asymptotically, $\delta_p \approx 2^p/(p+2)$). Since $\delta_p$ increases to infinity as $p$ tends to infinity, an $l_p$-norm SVP oracle with larger $p$ can solve more subset sum instances. An interesting phenomenon is that an $l_p$-norm SVP oracle with $p \geq 3$ can help solve almost all random subset sum instances with density one, which are believed to be the most difficult instances.
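The density bound $\delta_p$ is easy to evaluate numerically; the short script below (not from the paper) tabulates $\delta_p$ for small $p \geq 3$ against the asymptotic estimate $2^p/(p+2)$ and confirms that the bound already exceeds one at $p = 3$.

```python
from math import log2

def delta(p):
    """Density bound delta_p from the abstract, valid for p >= 3."""
    t1 = log2(2**(p + 1) - 2) / 2**p
    t2 = log2(1 + 1 / ((2**p - 1) *
                       (1 - (1 / (2**(p + 1) - 2))**(2**p - 1))))
    return 1 / (t1 + t2)

# Compare the exact bound with the asymptotic estimate 2^p / (p + 2).
for p in range(3, 9):
    print(p, round(delta(p), 4), round(2**p / (p + 2), 4))
```

Already $\delta_3 \approx 1.496 > 1$, matching the claim that an $l_3$-norm SVP oracle covers density-one instances, and the values grow monotonically with $p$.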

With shrinking process technology, the static power consumption of CMOS circuits is becoming a major concern. In this paper, we present the first practical results of exploiting the static power consumption of FPGA-based cryptographic devices to mount a key-recovery side-channel attack. The experiments presented here are based on three Xilinx FPGAs built on 65nm, 45nm, and 28nm process technologies. By means of a sophisticated measurement setup and methodology, we demonstrate an exploitable information leakage through the static power of the underlying FPGAs. This work highlights the feasibility of side-channel attacks exploiting static power, which have been discussed for years but had not yet been performed and investigated in practice. It is a starting point for further research and may have a significant impact on the efficiency of DPA countermeasures in the near future.

2014-01-07

Name: Kwangsu Lee

Topic: Efficient Hidden Vector Encryptions and Its Applications

Category: public-key cryptography

Description:

Predicate encryption is a new paradigm of public key encryption that enables searches on encrypted data. Using predicate encryption, we can search for keywords or attributes on encrypted data without decrypting the ciphertexts. In predicate encryption, a ciphertext is associated with attributes and a token corresponds to a predicate. The token corresponding to a predicate $f$ can decrypt a ciphertext associated with attributes $\vec{x}$ if and only if $f(\vec{x})=1$.

Hidden vector encryption (HVE) is a special kind of predicate encryption. HVE supports the evaluation of conjunctive equality, comparison, and subset operations between attributes in ciphertexts and attributes in tokens. Several HVE schemes have been proposed, but in these schemes the ciphertext size, the token size, and the decryption cost are proportional to the number of attributes in the ciphertext. In this thesis, we consider the efficiency, the generality, and the security of HVE schemes. The results of this thesis are as follows.
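Concretely, the conjunctive-equality predicate that HVE tokens evaluate amounts to a wildcard match between attribute vectors; the sketch below shows it in the clear, which is exactly the computation HVE performs without revealing the ciphertext attributes (attribute names are made up for illustration).

```python
STAR = "*"   # wildcard position in the token's attribute vector

def hve_match(token_vec, ct_vec):
    """HVE predicate: the token matches a ciphertext iff every
    non-wildcard position of the token equals the corresponding
    ciphertext attribute (conjunctive equality)."""
    return all(t == STAR or t == x for t, x in zip(token_vec, ct_vec))

# Query: (dept = "crypto") AND (year = 2014), with the name unrestricted.
ct = ["crypto", 2014, "alice"]
print(hve_match(["crypto", 2014, STAR], ct))  # → True
print(hve_match(["crypto", 2013, STAR], ct))  # → False
```

In the actual schemes, the decryption cost of evaluating this predicate is the quantity the thesis reduces from linear in the vector length to a constant four pairings.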

The first results of this thesis are efficient HVE schemes where the token consists of just four group elements and decryption requires only four bilinear map computations, independent of the number of attributes in the ciphertext. The construction uses composite-order bilinear groups and is selectively secure under well-known assumptions.

The second results are efficient HVE schemes that are secure under any type of pairing. To achieve this, we propose a general framework that converts HVE schemes from composite-order bilinear groups to prime-order bilinear groups. Using the framework, we convert the previous HVE schemes from composite-order to prime-order bilinear groups.

The third results are fully secure HVE schemes with short tokens. Previous HVE schemes were proven secure only in the selective security model, where the capabilities of the adversaries are severely restricted [...]

Diffusion layers are crucial components of symmetric ciphers. These components, along with suitable S-boxes, can make symmetric ciphers resistant to statistical attacks such as linear and differential cryptanalysis. Conventional MDS diffusion layers, which are defined as matrices over finite fields, have been used in symmetric ciphers such as AES, Twofish and SNOW. In this paper, we study linear, linearized and nonlinear MDS diffusion layers. We investigate linearized diffusion layers, which are a generalization of conventional diffusion layers; these are used in symmetric ciphers like SMS4, Loiss and ZUC. We introduce some new families of linearized MDS diffusion layers and, as a consequence, present a method for the construction of randomized linear diffusion layers over a finite field. Nonlinear MDS diffusion layers were introduced in Klimov's thesis; we investigate them theoretically and present a new family of nonlinear MDS diffusion layers. We show that these nonlinear diffusion layers can be made randomized at a low implementation cost. An important fact about linearized and nonlinear diffusion layers is that they are more resistant to algebraic attacks than conventional diffusion layers. A special case of diffusion layers is the class of (0,1)-diffusion layers; this type of diffusion layer is used in symmetric ciphers like ARIA. We examine (0,1)-diffusion layers and prove a theorem about them. Finally, we study the linearized MDS diffusion layers of the symmetric ciphers Loiss, SMS4 and ZUC from a mathematical viewpoint.
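The defining property of an MDS matrix, that every square submatrix is nonsingular, can be checked directly for small matrices. The sketch below (not from the paper) verifies it for a classic conventional MDS diffusion layer, the AES MixColumns matrix over GF(2^8).

```python
from itertools import combinations

def gf_mul(a, b, poly=0x11B):
    """Multiplication in GF(2^8) with the AES reduction polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_det(m):
    """Determinant over GF(2^8) by cofactor expansion
    (characteristic 2, so additions and sign flips are both XOR)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    det = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        det ^= gf_mul(m[0][j], gf_det(minor))
    return det

def is_mds(m):
    """A square matrix is MDS iff every square submatrix is nonsingular."""
    n = len(m)
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                sub = [[m[r][c] for c in cols] for r in rows]
                if gf_det(sub) == 0:
                    return False
    return True

# The AES MixColumns matrix, a well-known conventional MDS diffusion layer.
aes = [[2, 3, 1, 1], [1, 2, 3, 1], [1, 1, 2, 3], [3, 1, 1, 2]]
print(is_mds(aes))  # → True
```

This brute-force criterion is only feasible for small dimensions; the paper's constructions instead certify the MDS property structurally for whole families of (linearized and nonlinear) diffusion layers.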