*01:17* [Pub][ePrint]
Complexity of Multi-Party Computation Functionalities, by Hemanta K. Maji and Manoj Prabhakaran and Mike Rosulek
The central objects of secure multiparty computation are the "multiparty functions" (or functionalities) that it seeks to securely realize. In this article we survey a set of results that constitute a Cryptographic Complexity Theory. This theory classifies and compares multiparty functions according to their secure computability and reducibility to each other. The basic questions studied, under various notions of security and reducibility, include:

o Which functionalities are securely realizable (or are "trivial", i.e., can be reduced to any functionality)?

o Which functionalities are "complete", i.e., those to which any functionality can be reduced?

o More generally, which functionalities are reducible to which? Outside of triviality and completeness, this question is relatively unexplored.

Reductions yield relative measures of complexity among various functionalities. In the information-theoretic setting, absolute complexity measures have also been considered. In particular, we discuss results regarding which functions have t-private protocols (in which security is required against a passive adversary corrupting t out of n players) and how this set changes as t increases from 1 to n.

We treat separately the results on two-party functionalities, for which the cryptographic complexity is much better understood. In particular, we present unified combinatorial characterizations of completeness and triviality for secure function evaluation using notions of isomorphism and the common information functionality (called the kernel) of a given functionality. Beyond completeness and triviality, we also discuss results on general reducibility, and, in the computationally bounded setting, the connection between these reductions and computational hardness assumptions.

We briefly discuss results on reactive functionalities, which are much less studied than non-reactive (secure function evaluation) functionalities. Finally, we conclude with a selection of open problems.

*22:17* [Pub][ePrint]
New Cube Root Algorithm Based on Third Order Linear Recurrence Relation in Finite Field, by Gook Hwa Cho and Namhun Koo and Eunhye Ha and Soonhak Kwon
In this paper, we present a new cube root algorithm in the finite field $\mathbb{F}_{q}$ with $q$ a power of a prime, which extends the Cipolla-Lehmer type algorithms \cite{Cip,Leh}. Our cube root method is inspired by the work of M\"{u}ller \cite{Muller} on the quadratic case. For a given cubic residue $c \in \mathbb{F}_{q}$ with $q \equiv 1 \pmod{9}$, we show that there is an irreducible polynomial $f(x)=x^{3}-ax^{2}+bx-1$ with root $\alpha \in \mathbb{F}_{q^{3}}$ such that $Tr(\alpha^{\frac{q^{2}+q-2}{9}})$ is a cube root of $c$. Consequently, we obtain an efficient cube root algorithm based on the third order linear recurrence sequence arising from $f(x)$. Complexity estimation shows that our algorithm is better than the previously proposed Cipolla-Lehmer type algorithms.
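The trace values $Tr(\alpha^n)$ above satisfy a third order linear recurrence determined by the coefficients of $f(x)$, which is what makes the algorithm efficient. The following toy sketch illustrates this: the paper's explicit construction of $(a, b)$ from $c$ is not reproduced in the abstract, so this example simply brute-forces an irreducible $f$ over a small field (q = 19, c = 8 are illustrative choices, not from the paper).

```python
# Toy illustration of the trace-sequence idea: Tr(alpha^n), for alpha a root
# of f(x) = x^3 - a x^2 + b x - 1, obeys the third-order linear recurrence
# s_k = a s_{k-1} - b s_{k-2} + s_{k-3} (Newton's identities for power sums).

q = 19                     # prime with q = 1 (mod 9)
c = 8                      # a cubic residue mod q (2^3 = 8)
e = (q * q + q - 2) // 9   # exponent from the abstract

def trace_power(a, b, n, q):
    """Tr(alpha^n) in F_q via the third-order recurrence, starting from the
    power sums p_0 = 3, p_1 = a, p_2 = a^2 - 2b of the roots of f."""
    s0, s1, s2 = 3 % q, a % q, (a * a - 2 * b) % q
    if n == 0:
        return s0
    if n == 1:
        return s1
    for _ in range(n - 2):
        s0, s1, s2 = s1, s2, (a * s2 - b * s1 + s0) % q
    return s2

def irreducible(a, b, q):
    # A cubic over F_q is irreducible iff it has no root in F_q.
    return all((x * x * x - a * x * x + b * x - 1) % q != 0 for x in range(q))

# Search for an f whose trace value is a cube root of c; the paper shows such
# an f exists whenever c is a cubic residue and q = 1 (mod 9).
cube_root = None
for a in range(q):
    for b in range(q):
        if irreducible(a, b, q) and pow(trace_power(a, b, e, q), 3, q) == c:
            cube_root = trace_power(a, b, e, q)
            break
    if cube_root is not None:
        break

print(cube_root, pow(cube_root, 3, q) == c)
```

A real implementation would evaluate the recurrence at index $n$ in $O(\log n)$ steps (e.g., by exponentiating the companion matrix of $f$) rather than with this linear loop; the point here is only the recurrence itself.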

*22:17* [Pub][ePrint]
The IITM Model: a Simple and Expressive Model for Universal Composability, by Ralf Kuesters and Max Tuengerthal
The universal composability paradigm allows for the modular design and analysis of cryptographic protocols. It has been widely and successfully used in cryptography. However, devising a coherent yet simple and expressive model for universal composability is, as the history of such models shows, highly non-trivial. For example, several partly severe problems have been pointed out in the literature for the UC model. In this work, we propose a coherent model for universal composability, called the IITM model ("Inexhaustible Interactive Turing Machine"). A main feature of the model is that it is stated without a priori fixing irrelevant details, such as a specific way of addressing machines by session and party identifiers, a specific modeling of corruption, or a specific protocol hierarchy. In addition, we employ a very general notion of runtime. All reasonable protocols and ideal functionalities should be expressible based on this notion in a direct and natural way, and without tweaks, such as (artificial) padding of messages or (artificially) adding extra messages.

Not least because of these features, the model is simple and expressive. Moreover, the general results that we prove, such as composition theorems, hold independently of how such details are fixed for concrete applications.

Because the IITM model is inspired by other models for universal composability, in particular the UC model, and because of its flexibility and expressivity, results formulated in those models conceptually carry over to the IITM model directly.

*22:17* [Pub][ePrint]
RSA private key reconstruction from random bits using SAT solvers, by Constantinos Patsakis
SAT solvers are being used more and more in cryptanalysis, with mixed results regarding their efficiency, depending on the structure of the algorithm they are applied to. However, when it comes to integer factorization, or more specifically the RSA problem, SAT solvers prove to be, to say the least, inefficient. The running times are too long to be compared with any well-known integer factorization algorithm, even for small RSA moduli. The recent work on cold boot attacks has sparked renewed interest in partial key exposure attacks and in RSA key reconstruction. In our work, contrary to the lattice-based approach that most of these works use, we use SAT solvers. For the special case where the public exponent $e$ is equal to three, we provide a more efficient modeling of RSA as an instance of a satisfiability problem, and manage to reconstruct the private key, given a part of the key, even for public keys of 1024 bits, in a few seconds.
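The abstract does not spell out the SAT encoding itself. As a minimal illustration of the underlying reconstruction problem, the sketch below uses the classic branch-and-prune approach (in the style of the combinatorial works the paper contrasts itself with, not the paper's SAT method): candidate low bits of the factors of N are lifted one bit at a time and pruned against the known key bits. The toy parameters (p = 101, q = 113, half of p's bits known) are chosen for this example only.

```python
# Branch-and-prune reconstruction of the factors of N = p*q from partial
# knowledge of p's bits, as if leaked by a cold boot attack. Candidates
# (p mod 2^i, q mod 2^i) satisfying p*q = N (mod 2^i) are lifted bit by bit.

def reconstruct(N, known_p_bits, nbits):
    """Return all pairs (p_lo, q_lo) with p_lo*q_lo = N (mod 2^nbits) that
    are consistent with the known bits of p."""
    cands = [(1, 1)]                       # both primes are odd, so bit 0 is 1
    for i in range(1, nbits):
        mod = 1 << (i + 1)
        nxt = []
        for p_lo, q_lo in cands:
            for pb in (0, 1):
                if known_p_bits.get(i, pb) != pb:
                    continue               # prune: contradicts a known bit of p
                for qb in (0, 1):
                    p_new = p_lo | (pb << i)
                    q_new = q_lo | (qb << i)
                    if (p_new * q_new - N) % mod == 0:
                        nxt.append((p_new, q_new))
        cands = nxt
    return cands

p, q = 101, 113                            # toy primes: 101 * 113 = 11413
known = {0: 1, 2: 1, 4: 0, 6: 1}           # every other low bit of p
cands = reconstruct(p * q, known, 7)
print(cands)                               # the true pair (101, 113) survives
```

Without known bits the candidate set doubles at each step; each correctly known bit of p halves the branching, which is why a sufficient fraction of leaked key material makes reconstruction fast. A SAT encoding, as in the paper, delegates exactly this kind of pruning to the solver.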