International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

17 September 2025

Xinxin Gong, Qingju Wang, Yonglin Hao, Lin Jiao, Xichao Hu
ePrint Report
The ARX structure plays a crucial role in symmetric-key primitives, and differential-linear (DL) attacks are among the most effective cryptanalysis techniques against ARX ciphers. In this paper, we present a systematic re-decomposition technique for DL distinguishers of ARX ciphers and identify, for the first time, hourglass(-like) structural commonalities among the optimal DL distinguishers found by various deduction techniques. These commonalities, also supported by comprehensive experiments, motivate an efficient and generalized approach to constructing optimal hourglass(-like) structural DL distinguishers. Our method yields significant advances when applied to Speck, Alzette, and the underlying permutations of SipHash and Chaskey: (1) the first 11- to 14-round DL distinguishers of Alzette; (2) the first (valid) DL distinguishers for 11-round Speck32, 12-round Speck48, and 16-round Speck96; (3) deterministic (correlation $\pm1$) 3-round DL distinguishers for SipHash-2-4 and significantly improved 4-round ones. All these distinguishers come with both theoretical and experimental verification. We further analyze ARX-based Latin dance stream ciphers, obtaining improved DL distinguishers for 7/7.25-round ChaCha, 8-round Salsa, and 5.5-round Forró. Though some of the improvements are not significant, they verify the effectiveness of our method across a broader range of instances. This work provides new insights for DL distinguisher construction and enhances understanding of the security of ARX ciphers.
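As a concrete illustration of the experimental side, the following minimal Python sketch (our own; the round count, input difference, output masks, and the use of independent round keys in place of the real key schedule are all placeholder assumptions) estimates the correlation $\mathrm{Cor} = \mathbb{E}[(-1)^{\lambda \cdot (E(x) \oplus E(x \oplus \Delta))}]$ of a differential-linear distinguisher on round-reduced Speck32/64.

```python
# Toy DL-correlation estimator for round-reduced Speck32/64 (sketch only).
# Difference/mask values below are placeholders, NOT the paper's distinguishers.
import random

MASK = 0xFFFF  # 16-bit words

def rol(x, r): return ((x << r) | (x >> (16 - r))) & MASK
def ror(x, r): return ((x >> r) | (x << (16 - r))) & MASK

def speck_round(x, y, k):
    x = ((ror(x, 7) + y) & MASK) ^ k
    y = rol(y, 2) ^ x
    return x, y

def encrypt(x, y, keys):
    for k in keys:
        x, y = speck_round(x, y, k)
    return x, y

def parity(v):
    return bin(v).count("1") & 1

def dl_correlation(rounds, in_diff, out_mask, samples=1 << 14):
    (dx, dy), (mx, my) = in_diff, out_mask
    acc = 0
    for _ in range(samples):
        keys = [random.getrandbits(16) for _ in range(rounds)]  # idealized schedule
        x, y = random.getrandbits(16), random.getrandbits(16)
        cx0, cy0 = encrypt(x, y, keys)
        cx1, cy1 = encrypt(x ^ dx, y ^ dy, keys)
        bit = parity((cx0 ^ cx1) & mx) ^ parity((cy0 ^ cy1) & my)
        acc += 1 - 2 * bit
    return acc / samples

print(dl_correlation(3, (0x0040, 0x0000), (0x0001, 0x0000)))  # placeholder values
```
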
Hila Dahari-Garbian, Ariel Nof, Luke Parker
ePrint Report
We present Trout (Two-ROUnd Threshold), the first distributed two-round ECDSA signing protocol for arbitrary thresholds. Trout has constant upload bandwidth per party and processing time linear in the number of participants. Moreover, Trout achieves the Identifiable Abort (IA) property: if the protocol cannot terminate due to a failure, parties can attribute the failure to a specific party. We achieve this without a trusted setup.

Our protocol relies on linear-homomorphic encryptions and commitments over class groups. To obtain our result, we leverage the recent construction of an exponent-VRF (Boneh et al., Eurocrypt 2025) and a novel protocol to multiply an encrypted value with a committed value and simultaneously decrypt it, which we call "scaled decryption". We believe that this protocol may be of independent interest.

Our protocol has a very low communication cost of just 6.5 KB sent per party. Furthermore, we implemented our protocol in Rust and provide benchmarks for various configurations, showing its practicality even for 100 parties. Our implementation includes a constant-time variant which, to the best of our knowledge, is the first of its kind for class-group-based threshold ECDSA protocols.
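For context, the single-party ECDSA signing relation that a threshold protocol such as Trout must compute jointly (notation ours: base point $G$ of prime order $q$, secret key $sk$, nonce $k$, message hash $H(m)$) is

$R = kG, \qquad r = R_x \bmod q, \qquad s = k^{-1}\,(H(m) + r \cdot sk) \bmod q.$

The multiplicative coupling of $k^{-1}$ and $sk$ in the last equation is the classic source of difficulty in low-round threshold ECDSA, which is why multiplying hidden values (here, an encrypted operand by a committed one) is a central subprotocol.
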
Chris Brzuska, Michael Klooß, Ivy K. Y. Woo
ePrint Report
Threshold public-key encryption (TPKE) allows $t$ out of $k$ parties to jointly decrypt, but any $t-1$ subset obtains no information on the underlying plaintext. The ongoing standardisation efforts on threshold primitives by NIST make it a pressing question to understand TPKE security notions, which, perhaps surprisingly, have remained varied.

We systematically develop what we view as minimal security properties for TPKE, formalise these into indistinguishability-based and simulation-based security notions, and establish implications and separations between different variants. One of our insights is that the common belief that maximal corruption implies the same security notion under fewer corruptions is generally false, and that the role of partial decryptions of challenge ciphertexts is often neglected. Concretely, we design a (contrived) scheme which is CCA-simulation-secure under maximal corruptions, but not IND-CPA-secure under arbitrary corruptions. Our scheme is built so that a random, initially hidden subset of $t-1$ parties can jointly decrypt, making it trivially insecure, yet it can still be proven secure when partial decryption queries are disallowed.

To show that our security notions are achievable, we prove that threshold ElGamal (Desmedt-Frankel, 1989) achieves simulation-CPA-security under DDH, borrowing techniques from a concurrent work. We also revisit CPA-to-CCA transforms in the style of Naor and Yung (NY) and discover that, generically, NY does not upgrade CPA to CCA security for TPKE. We provide two alternatives: (1) We propose and construct a novel building block called non-interactive proofs of randomness (NIPoR) in the random oracle model, and show that NIPoR allows a generic CPA-to-CCA transform. (2) We show that assuming the stronger semi-malicious CPA security, NY-style techniques suffice to upgrade to CCA security.
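As a minimal illustration of the scheme analyzed here, the following toy sketch (our own, with tiny insecure parameters and fixed randomness) implements $t$-out-of-$k$ threshold ElGamal decryption in the Desmedt-Frankel style: the secret key is Shamir-shared, each party publishes a partial decryption $c_1^{s_i}$, and any $t$ partial decryptions are combined with Lagrange coefficients in the exponent.

```python
# Toy t-of-k threshold ElGamal (insecure demo parameters).
P, Q, G = 23, 11, 4          # Schnorr group: G has prime order Q in Z_P*
T, K = 2, 3                  # threshold t, number of parties k

def shamir_share(secret, t, k):
    coeffs = [secret] + [3] * (t - 1)          # fixed "randomness" for the demo
    return [(i, sum(c * pow(i, j, Q) for j, c in enumerate(coeffs)) % Q)
            for i in range(1, k + 1)]

def lagrange_at_zero(i, subset):
    num = den = 1
    for j in subset:
        if j != i:
            num = num * (-j) % Q
            den = den * (i - j) % Q
    return num * pow(den, -1, Q) % Q

s = 7                                          # secret key
h = pow(G, s, P)                               # public key
shares = shamir_share(s, T, K)

m, r = 9, 5                                    # message (group element), nonce
c1, c2 = pow(G, r, P), m * pow(h, r, P) % P    # ElGamal ciphertext

subset = [1, 3]                                # any T parties suffice
partials = {i: pow(c1, si, P) for i, si in shares if i in subset}
c1_s = 1                                       # reconstruct c1^s in the exponent
for i in subset:
    c1_s = c1_s * pow(partials[i], lagrange_at_zero(i, subset), P) % P
assert c2 * pow(c1_s, -1, P) % P == m          # decryption succeeds
```
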
Tarun Yadav, Shweta Singh, Sudha Yadav
ePrint Report
Quantum cryptanalysis of block ciphers with Grover's search requires synthesis of the round function, where the non-linear S-boxes dominate the circuit cost. Efficient quantum implementations of these S-boxes are a bottleneck for cryptanalysis. In this work, we address this problem and present a new generic strategy for synthesizing quantum circuits for large S-boxes that reduces the NISQ-era transpiled depth after decomposition into the hardware-oriented universal basis gate set u+cx. We introduce a two-phase MILP-based, ancilla-aware synthesis framework for large S-boxes. Phase 1 determines which monomials will be synthesised globally and how they are reused across outputs; this reduces redundancy and avoids high-degree terms that would lead to deep ladders. Phase 2 arranges the selected monomials into parallel layers, where the solver explicitly accounts for ancilla usage, balancing the trade-off between fewer layers (smaller depth) and a larger ancilla count. MILP-based synthesis shows decisive, multi-fold reductions in transpiled depth in the universal basis gate set u+cx. For the SKINNY and ZUC S0 S-boxes, our synthesis reduces transpiled depth by factors of 18 and 9, respectively, with ancilla usage raised only to 10 and 13 qubits. For higher-degree S-boxes such as AES, SM4, and ZUC S1, we achieve a 5-fold reduction in transpiled depth by trading additional ancillas, increasing the budget from 5 to 22. To our knowledge, this is the first demonstration of ancilla-aware, globally optimised synthesis of 8-bit cryptographic S-boxes. By aligning primitive synthesis with transpiled cost, our method establishes a new baseline for depth-optimised resource estimation in the quantum cryptanalysis of symmetric primitives.
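To make the monomial-selection input concrete, the following toy sketch (our own illustration, not the paper's framework) computes the algebraic normal form of each output bit of the 4-bit PRESENT S-box via the Möbius transform; monomials of this kind are the objects a Phase-1 solver would schedule for global reuse across output bits.

```python
# ANF of a small S-box via the Moebius transform (illustration only).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT S-box, n = 4

def anf(truth_table):
    """Truth table -> ANF coefficients; index m encodes the monomial."""
    coeffs = list(truth_table)
    n = len(coeffs).bit_length() - 1
    for i in range(n):
        step = 1 << i
        for j in range(len(coeffs)):
            if j & step:
                coeffs[j] ^= coeffs[j ^ step]
    return coeffs

for bit in range(4):
    tt = [(SBOX[x] >> bit) & 1 for x in range(16)]
    monomials = [m for m, c in enumerate(anf(tt)) if c]
    terms = ["1" if m == 0 else
             "*".join(f"x{i}" for i in range(4) if (m >> i) & 1)
             for m in monomials]
    print(f"y{bit} = " + " + ".join(terms))
```
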
Mary Maller, Nicolas Mohnblatt, Arantxa Zapico
ePrint Report
Incrementally verifiable computation (IVC) is a powerful cryptographic primitive, particularly suited for proving long-running machine computations. Previous work shows that IVC can be constructed by recursively composing SNARKs. Unfortunately, theoretical challenges limit the provable security of known IVC constructions. Recursive composition may quickly lead to a blowup in extraction time and may require arithmetic circuits to enforce constraints about random oracle calls. Furthermore, composition presents a practical challenge: proofs are often expressed in a form that is not friendly to the arithmetic circuits that produce them.

To mitigate the theoretical challenges, we present the Open-and-Sign Random Oracle Model (osROM) as an extension to the signed random oracle of Chiesa and Tromer (ICS '10). This model, while strictly harder to instantiate than the Random Oracle Model, allows the design of protocols that can efficiently verify calls to the oracle and support straight-line extractors. As a result, IVC constructions in the osROM can be shown to have provable security for polynomial depths of computation.

Under our new model, we construct a framework to build secure IVC schemes from simple non-interactive reductions of knowledge. Our construction natively supports cycles of elliptic curves in the style of Ben-Sasson et al. (CRYPTO '14), thus answering the practical challenge outlined above. Finally, we analyze the HyperNova (CRYPTO '24) IVC scheme in the osROM and show that it is secure over a two-cycle of elliptic curves, for polynomial depths of computation.
MINKA MI NGUIDJOI Thierry Emmanuel
ePrint Report
Distributed systems require robust, transparent mechanisms for verifiable temporal ordering to operate without trusted authorities or synchronized clocks. This paper introduces Affine One-Wayness (AOW), a new cryptographic primitive for post-quantum temporal verification based on iterative polynomial evaluation over finite fields. AOW provides strong temporal binding guarantees via a tight reduction to the hardness of the discrete logarithm problem in high-genus hyperelliptic curves (HCDLP) and a reduction to the Affine Iterated Inversion Problem (AIIP), which possesses dual foundations in multivariate quadratic algebra and the arithmetic of high-genus hyperelliptic curves. We present a construction with transparent setup and prove formal security against both classical and quantum adversaries. Furthermore, we demonstrate efficient integration with STARK proof systems for zero-knowledge verification of sequential computation with logarithmic scaling. As the core reliability component of the Chaotic Affine Secure Hash (CASH) framework, AOW enables practical applications in Byzantine-resistant event ordering and distributed synchronization with provable security guarantees under standard cryptographic assumptions.
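As a rough illustration only (the field, polynomial, and parameters below are our own placeholders, not the AOW instantiation), iterated polynomial evaluation over a finite field is inherently sequential: computing $x_T = f^{(T)}(x_0)$ appears to require $T$ successive evaluations, and a succinct proof system can then attest to the result of that long computation.

```python
# Placeholder iterated polynomial map over a prime field (sketch only).
P = 2**61 - 1                        # a Mersenne prime, illustration only

def f(x):
    return (x * x + 3 * x + 5) % P   # placeholder polynomial, not AOW's

def iterate(x0, steps):
    x = x0
    for _ in range(steps):           # no known shortcut: one step at a time
        x = f(x)
    return x

print(iterate(12345, 100_000))
```
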
Andreas Wiemers
ePrint Report
Over the past years, the so-called Goppa Code Distinguishing (GD) problem has been studied. The GD problem asks to distinguish a generator matrix of a binary Goppa code from a random matrix. The main motivation for introducing the GD problem is its connection to the security of the McEliece public-key cryptosystem. A main contribution in addressing this problem is the so-called syzygy distinguisher. In this article, we introduce another distinguisher. From a geometric perspective, it considers certain invariants of the space of all homogeneous polynomials that vanish to higher order on the columns of the generator matrix. Based on heuristic arguments, the distinguisher described in this article might be favorable (but not practically computable) for specific practical parameters such as the combination (m = 12, s = 64, k = 768, n = 3488), compared to the values given by the syzygy distinguisher.
Xiaojie Guo, Hanlin Liu, Zhicong Huang, Hongrui Cui, Wenhao Zhang, Cheng Hong, Xiao Wang, Kang Yang, Yu Yu
ePrint Report
Pseudorandom correlation generators (PCGs) have become a popular tool for generating huge amounts of correlated randomness, a critical resource in secure computation. However, existing PCGs are memory-consuming and not friendly to resource-constrained devices. Even for moderate devices, the need for large memory can be a disadvantage in applications like zero-knowledge proofs or large-scale secure computation. In this paper, we propose a malicious streaming PCG (sPCG), which generates a bounded number of tuples of subfield vector oblivious linear evaluation (sVOLE) on-the-fly, each with sublinear memory and computation.
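For reference, a subfield VOLE tuple (stated in one common convention; the notation here is ours) gives one party a global scalar $\Delta \in \mathbb{F}_{p^r}$ together with $\mathbf{v} \in \mathbb{F}_{p^r}^n$, and the other party $\mathbf{u} \in \mathbb{F}_p^n$ together with

$\mathbf{w} = \mathbf{v} + \Delta \cdot \mathbf{u} \in \mathbb{F}_{p^r}^n.$

A streaming PCG emits the entries of such tuples on-the-fly instead of materializing the length-$n$ vectors in memory.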

1. We propose an efficient protocol that replaces the relaxed distributed comparison function in the best pseudorandom correlation function (PCF) for sVOLE (CRYPTO'22), which has the same streaming features for any polynomial number of tuples. With this protocol, our sPCG is doubly efficient in memory and in the computation per sVOLE. Moreover, we augment the black-box distributed setup to malicious security, yielding a 4x communication improvement. Our sPCG extends to a more efficient sVOLE PCF with the same improvements in memory and computation, and a 2x faster malicious non-black-box distributed setup.

2. We present a practical attack on the Learning Parity with Noise (LPN) assumption for expand-accumulate codes with regular noise, revealing that some previous parameters provide only around 14-22 bits of security over binary noise, far below the target 128 bits. To address this, we introduce a low-Hamming-weight noise distribution that withstands the attack. We then derive updated LPN parameters for the new noise distribution, restoring 128-bit security and reducing the noise-related computation and communication.

3. We provide an implementation of our sPCG for the special case of correlated oblivious transfer (COT). In addition to the improvements over the best PCF, our sPCG achieves end-to-end performance comparable to Ferret (CCS'20) and the PCG from expand-convolute codes (CRYPTO'23), two state-of-the-art PCGs, with the advantage of producing 10 million COTs on-the-fly while reducing the memory from 337 MB and 624 MB, respectively, to 20 MB.
Zonglun Li, Wangze Ni, Shuhao Zheng, Junliang Luo, Weijie Sun, Lei Chen, Xue Liu, Tianhang Zheng, Zhan Qin, Kui Ren
ePrint Report
While transaction transparency is fundamental to blockchains, it introduces privacy vulnerabilities for users requiring confidentiality. Existing privacy mixers, intended to mitigate the issue by obfuscating transactional links, have been leveraged to evade emerging financial regulations in DeFi and to facilitate harmful practices within the community. Regulatory concerns, driven by prosocial intentions, are raised to ensure that mixers are used responsibly, in compliance with regulations. The research challenge is to reconcile privacy with enforceable compliance by providing designated-only transaction traceability, blocking sanctioned actors while preserving honest-user anonymity. We tackle this challenge by introducing Hurricane Mixer, a mixer framework that embeds compliance logic without forfeiting the privacy of regular transactions. Hurricane comes in two deployable variants: Cash, for fixed-denomination pools, and UTXO, for arbitrary-amount transfers. Both variants share the key components: a sanction-list mechanism that prevents transactions involving sanctioned entities, and a mechanism that allows for regulatory access to encrypted transaction details for compliance purposes. We implement the full stack: Gnark Groth16 circuits for deposit/withdraw proofs, contracts maintaining an on-chain sanction list, and dual public-key encryption for bidirectional tracing. A comprehensive evaluation illustrates the efficacy of Hurricane Mixer in ensuring privacy preservation, regulatory conformity, and cost efficiency. Experiments show that the Cash variant is more economical when the payment amount matches the denomination, the UTXO variant is better suited for large or fractional payments, and the framework overall sustains competitive gas efficiency without compromising regulatory traceability.
Bowen Zhang, Hao Cheng, Johann Großschädl, Peter Y. A. Ryan
ePrint Report
The Edwards-curve Digital Signature Algorithm (EdDSA) is a deterministic digital signature scheme that has recently been adopted in a range of popular security protocols. Verifying an EdDSA signature involves the computation of a double-scalar multiplication of the form $SB - hA$, which is a costly operation. The vector extensions of modern Intel processors, such as AVX2 and AVX-512, offer a variety of options to speed up double-scalar multiplication thanks to their massive SIMD-parallel processing capabilities. However, in certain application domains like fintech or e-voting, several or many EdDSA verifications have to be performed, and what counts in the end is not the execution time of a single signature verification, but how long it takes to verify a certain number of signatures. For such applications, it makes more sense to use SIMD instructions to maximize the throughput of a batch of verification operations instead of minimizing the latency of one verification. In this paper, we introduce high-throughput AVX2/AVX-512 implementations of EdDSA verification executing four (resp., eight) instances of double-scalar multiplication in a SIMD-parallel fashion, whereby each instance uses a 64-bit element of the 256-bit (resp., 512-bit) vectors. We analyze three techniques for double-scalar multiplication, one that separates the computation of $SB$ and $hA$, while the other two integrate or interleave them based on a joint-sparse form or non-adjacent form representation of the scalars $S$ and $h$. Our experiments with 256-bit AVX2 vectorization on an Intel Cascade Lake CPU show that the separate method achieves the best results and reaches a single-core throughput of 48,182 double-scalar multiplications per second, which exceeds the throughput of the currently fastest latency-optimized implementation by 33%.
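For reference, the verification equation behind this computation: with $A$ the public key, $(R, S)$ the signature, and challenge $h = H(R \parallel A \parallel M)$, an EdDSA verifier accepts iff

$SB = R + hA,$

so an implementation computes the double-scalar multiplication $R' = SB - hA$ and compares $R'$ against the $R$ encoded in the signature (we state the simple non-cofactored check).
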
Eli Baum, Sam Buxbaum, Nitin Mathai, Muhammad Faisal, Vasiliki Kalavri, Mayank Varia, John Liagouris
ePrint Report
We present ORQ, a system that enables collaborative analysis of large private datasets using cryptographically secure multi-party computation (MPC). ORQ protects data against semi-honest or malicious parties and can efficiently evaluate relational queries with multi-way joins and aggregations that have been considered notoriously expensive under MPC. To do so, ORQ eliminates the quadratic cost of secure joins by leveraging the fact that, in practice, the structure of many real queries allows us to join records and apply the aggregations “on the fly” while keeping the result size bounded. On the system side, ORQ contributes generic oblivious operators, a data-parallel vectorized query engine, a communication layer that amortizes MPC network costs, and a dataflow API for expressing relational analytics—all built from the ground up.

We evaluate ORQ in LAN and WAN deployments on a diverse set of workloads, including complex queries with multiple joins and custom aggregations. When compared to state-of-the-art solutions, ORQ significantly reduces MPC execution times and can process one order of magnitude larger datasets. For our most challenging workload, the full TPC-H benchmark, we report results entirely under MPC with Scale Factor 10—a scale that had previously been achieved only with information leakage or the use of trusted third parties.
Suvradip Chakraborty, Sebastian Faller, Dennis Hofheinz, Kristina Hostáková
ePrint Report
We put forward the concept of "forgetful encryption". A forgetful public-key encryption scheme guarantees that (a limited amount of) information that is leaked through the encryption process does not reveal the whole message. This notion is related to, but different from leakage-resilient encryption (where leakage on the decryption key is considered) and big-key encryption (which is defined for secret-key encryption). Forgetful encryption is useful, e.g., in settings in which a cloud server encrypts data stored for a client, using that client’s public key. In that setting, a limited form of leakage on the encryption process may occur accidentally, through security breaches, or through targeted attacks. In this work, we define the notion of forgetful encryption using the simulation paradigm, and provide a number of instantiations from mild computational assumptions. In the random oracle model, we provide a generic instantiation from “lossy key encapsulation mechanisms”, an existing paradigm that allows for efficient instantiations in the group, RSA, and lattice settings. In the standard model, we provide a somewhat less efficient construction (that can also be instantiated under standard assumptions) from a mild variant of (sender-)deniable encryption.
Zeyu Liu, Katerina Sotiraki, Eran Tromer, Yunhao Wang
ePrint Report
The efficiency of Public Key Encryption (PKE) and Key Encapsulation Mechanism (KEM), and in particular their large ciphertext size, is a bottleneck in real-world systems. This worsens in post-quantum secure schemes (e.g., lattice-based ones), whose ciphertexts are an order of magnitude larger than their non-post-quantum counterparts. The work of Kurosawa (PKC'02) introduced multi-message multi-recipient PKE (mmPKE) to reduce the amortized ciphertext size when sending messages to more than one recipient. This notion naturally extends to multi-message multi-recipient KEM (mmKEM).

In this work, we first show concrete attacks on existing lattice-based mmPKE schemes: Using maliciously-crafted recipient public keys, these attacks completely break semantic security and key privacy, and are inherently undetectable. We then introduce the first lattice-based mmKEM scheme that maintains full privacy even in the presence of maliciously-generated public keys. Concretely, the ciphertext size of our mmKEM for 100 recipients is $>10\times$ smaller than naively using Crystals-Kyber. We also show how to extend our mmKEM to mmPKE, achieving a scheme that outperforms all prior lattice-based mmPKE schemes in terms of both security and efficiency. We additionally show a similar efficiency gain when applied to batched random oblivious transfer, and to group oblivious message retrieval.

Our scheme is proven secure under a new Module-LWE variant assumption, Oracle Module-LWE, which may be of independent interest. We reduce standard MLWE to this new assumption for some parameter regimes, which also gives intuition on why this assumption holds for the parameters we are interested in (along with additional cryptanalysis).

Furthermore, we show an asymptotically efficient compiler that removes the assumption made in prior works that recipients know their position in the list of intended recipients for every ciphertext.
Yanqi Gu, Stanislaw Jarecki, Phillip Nazarian, Apurva Rai
ePrint Report
Message authentication (MA) in the Short Authenticated String (SAS) model, defined by Vaudenay, allows for authenticating arbitrary messages sent over an insecure channel as long as the sender can also transmit a short authenticated message to the receiver, e.g. d = 20 bits. The flagship application of SAS-MA is Authenticated Key Exchange (AKE) in the SAS model (SAS-AKE), which allows parties communicating over an insecure network to establish a secure channel without a prior source of trust except the ability to exchange d-bit authenticated strings. SAS-AKE is applicable e.g. to device pairing, i.e. creating secure channels between devices capable of displaying d-bit values, e.g. encoded as decimal strings, verified by a human operator, or to secure messaging applications like Signal or WhatsApp, where such short values can be read off by participants who trust each other's voices. A line of works showed lightweight SAS-MA schemes, using only symmetric-key crypto and 3 communication flows, which is optimal. This was extended to group SAS-(M)MA, for (mutual) message authentication among any number of parties, using two simultaneous flows. We show a new two-simultaneous-flows SAS-(M)MA protocol, based on Verifiable Random Functions (VRF), with the novel property that the first flow, which consists of exchanging VRF public keys, can be re-used in multiple SAS-MA instances. Moreover, instantiated with ECVRF, these keys have the same form $vk = g^{sk}$ as the Diffie-Hellman keys exchanged in DH-based (A)KE protocols like X3DH. We show that X3DH keys can be re-used in our SAS-MA, implying a SAS-AKE which adds the minimal overhead of a single flow to X3DH. Crucially, while X3DH is secure only if participants' public keys are certified by a shared source of trust, e.g. a Public Key Infrastructure (PKI) or a trusted Key Distribution Center (KDC) run by Signal or WhatsApp, if X3DH is amended by our SAS-AKE then the established channel is secure even if the PKI or KDC is compromised, assuming trust in user-assisted authentication of short d-bit strings.
Zesheng Li, Dongliang Cai, Yimeng Tian, Yihang Du, Xinxuan Zhang, Yi Deng
ePrint Report
Succinct Non-interactive Arguments of Knowledge (SNARKs) have gained widespread application due to their succinct proof size and efficient verification. However, they face significant scalability limitations in proof generation for large-scale circuits. To address this challenge, distributing the prover's computation across multiple nodes has emerged as a promising solution. Existing distributed SNARK constructions rely on distributed polynomial commitments, requiring each prover to perform computationally intensive group operations during the polynomial commitment opening phase.

In this paper, we propose a novel distributed SNARK system constructed by compiling a distributed PIOP with an additively homomorphic polynomial commitment, rather than a distributed polynomial commitment. The core technical component is distributed SumFold, which folds multiple sum-check instances into one. After the folding process, only one prover is required to perform polynomial commitment openings. This facilitates compilation with SamaritanPCS, a recently proposed additively homomorphic multilinear polynomial commitment scheme. The resulting SNARK system is specifically optimized for data-parallel circuits. Compared to prior HyperPlonk-based distributed proof systems (e.g., HyperPianist and Cirrus), our construction achieves improvements in both proof size and prover time. We implement our protocol and conduct a comprehensive comparison with HyperPianist on 8 machines. Our system achieves shorter proofs and a 4.1-4.9x speedup in prover time, while maintaining comparable verification efficiency.
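Schematically (a random-linear-combination view in our own notation, not necessarily the exact SumFold protocol), folding $m$ sum-check claims $\sum_{x \in \{0,1\}^{\mu}} f_i(x) = v_i$ with a verifier challenge $\rho$ reduces them to the single claim

$\sum_{x \in \{0,1\}^{\mu}} \sum_{i=1}^{m} \rho^{i-1} f_i(x) = \sum_{i=1}^{m} \rho^{i-1} v_i,$

after which only the combined polynomial needs a commitment opening, which is where the additive homomorphism of the commitment scheme is exploited.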

12 September 2025

ExeQuantum, Docklands, Melbourne (Remote-friendly for the right candidate)
Job Posting

ExeQuantum is a Melbourne-based company pioneering post-quantum cryptography (PQC) and sovereign-grade secure systems. We are working with critical industries and governments to deliver solutions that are sovereign, transparent, agile, and compliant. Our projects range from PQC-as-a-Service APIs to secure integrations in finance, healthcare, and national infrastructure.

We are looking for a Software Engineer to join our engineering team. This role reports directly to the CTO and will involve building prototypes into production-ready solutions across cryptography, email security, and payment infrastructure. This is not a generic coding role. You will be working on systems where discipline, confidentiality, and creativity matter as much as technical skill.

Responsibilities
  • Design, develop, and maintain secure software components (Python, Node.js, C/C++/Rust depending on project scope).
  • Integrate PQC algorithms (ML-KEM, ML-DSA, HQC, FN-DSA, etc.) into real-world applications.
  • Contribute to internal tools, SDKs, APIs, and add-ins (e.g., Outlook, payment gateways).
  • Collaborate with the CTO on system design and architecture.
  • Follow strict security and confidentiality practices.
  • Participate in code reviews, testing, and documentation to ensure auditability and compliance.
What We Are Looking For
  • Open-mindedness and willingness to study cutting-edge technologies. Demonstrated ability to think outside the box and avoid “impossible” as a default answer.
  • 3+ years of professional software development experience (startup or high-assurance sector preferred).
  • Strong skills in at least one of: Python, Node.js/TypeScript, C/C++/Rust.
  • Familiarity with cryptographic libraries, secure coding practices, or networking protocols is a plus.
  • Comfort working with prototypes, debugging, and delivering solutions in ambiguous/problem-solving contexts.
  • High standard of confidentiality and discipline in handling IP, code, and client data.

Closing date for applications:

Contact: Send your CV, links to your code repositories (GitHub, GitLab, etc.), and a short note about why you want to work on PQC and secure systems with ExeQuantum to raymond@exequantum.com.

More information: https://www.linkedin.com/hiring/jobs/4298309236/detail/

Monash University, Melbourne, Australia
Job Posting
Monash cybersecurity group has an opening for a post-doc position. The topics of interest are Lattice-Based Cryptography and/or Privacy Enhancing Technologies (such as advanced forms of cryptographic tools and zero-knowledge proofs). We provide
  1. a highly competitive salary on par with lecturer (assistant professor) salaries in Australia,
  2. opportunities to collaborate with leading academic and industry experts in the related areas,
  3. opportunities to participate in international grant-funded projects,
  4. a collaborative and friendly research environment,
  5. an opportunity to live/study in one of the most liveable and safest cities in the world.
The position will be filled as soon as a suitable candidate is found.

Requirements. Significant research experience in Lattice-Based Cryptography and/or Privacy-Enhancing Technologies is required. A strong mathematical background is highly desired. Some knowledge/experience in coding (for example, Python, C/C++, SageMath) is a plus. Candidates must have completed (or be about to complete within the next 8 months) a PhD degree in a relevant field.

How to apply. Please first refer to mfesgin.github.io/supervision/ for more information about our team. Then, please fill out the following form (also clickable from the advertisement title): https://docs.google.com/forms/d/e/1FAIpQLSeU6O65yJQW3rrcAi4dzatBYPyWU7Y5otLPReHFPuQf8dtggw/viewform

Closing date for applications:

Contact: Muhammed Esgin

More information: https://docs.google.com/forms/d/e/1FAIpQLSeU6O65yJQW3rrcAi4dzatBYPyWU7Y5otLPReHFPuQf8dtggw/viewform

Yuhao Zheng, Jianming Lin, Chang-an Zhao
ePrint Report
Bilinear pairings have emerged as a fundamental tool in public-key cryptography, enabling advanced protocols such as Identity-Based Encryption (IBE), short signatures, and zero-knowledge proofs. This paper focuses on optimizing pairing computations on curves with embedding degree 2, addressing both theoretical foundations and practical implementations. We propose an optimized double-and-add ladder algorithm that leverages the technique of y-coordinate recovery, achieving superior performance for the Tate pairing on supersingular curves and the Omega pairing on non-supersingular curves. Our method is implemented on top of the RELIC cryptographic library, demonstrating significant efficiency improvements over Miller's algorithm. Specifically, it reduces the number of $\mathbb{F}_p$-multiplications (resp. CPU clock cycles) by 17.53% (resp. 13.58%) for the reduced Tate pairing on SS-1536 and by 12.37% (resp. 8.39%) for the Omega pairing on NSS-1536. This work establishes the first comprehensive implementation framework for cubical-based pairing computations on curves with embedding degree 2, providing quantified optimizations for practical cryptographic deployment.
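For orientation, the following toy sketch (our own, in affine coordinates over a textbook curve; it shows only the generic ladder pattern, not the paper's cubical formulas or y-coordinate recovery) illustrates the double-and-add ladder: a pair $(R_0, R_1)$ with invariant $R_1 = R_0 + P$ is updated once per scalar bit.

```python
# Generic double-and-add ladder on y^2 = x^3 + 2x + 2 over GF(17) (toy example).
P_MOD, A = 17, 2

def ec_add(Pt, Qt):
    if Pt is None: return Qt                   # None encodes the point at infinity
    if Qt is None: return Pt
    (x1, y1), (x2, y2) = Pt, Qt
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if Pt == Qt:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ladder(k, Pt):
    R0, R1 = None, Pt                          # invariant: R1 = R0 + Pt
    for bit in bin(k)[2:]:
        if bit == "1":
            R0, R1 = ec_add(R0, R1), ec_add(R1, R1)
        else:
            R0, R1 = ec_add(R0, R0), ec_add(R0, R1)
    return R0

G = (5, 1)                                     # generator; the group has order 19
print(ladder(2, G))                            # (6, 3)
assert ladder(19, G) is None                   # 19 * G = point at infinity
```
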
Maxence Jauberty, Pierrick Méaux
ePrint Report
We provide a complete characterization of the possible cardinalities of Walsh supports of Boolean functions. Our approach begins with a detailed study of Siegenthaler’s construction and its properties, which allow us to derive relations between admissible support sizes in successive numbers of variables. We then introduce new notions such as Walsh space, reduction, and equivalence on supports, which form the structural framework of our analysis. For $n=6$, we perform an experimental enumeration of affine-equivalence classes, and we analyze the geometric structure of supports of small cardinalities, proving uniqueness for sizes $10$ and $13$ and obtaining partial results for size $16$. By combining these findings with a sieving method, we rule out twelve impossible cardinalities and establish constructive methods that transform a support of size $s$ into one of size $4s+r$ for different values of $r$, sufficient to obtain every admissible cardinality for $n \geq 7$. As a consequence, we provide a complete characterization and resolve several open problems.
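For readers who want to experiment, the following self-contained sketch (our own) computes the Walsh spectrum $W_f(a) = \sum_{x} (-1)^{f(x) + a \cdot x}$ of a Boolean function via the fast Walsh-Hadamard transform and reports the cardinality of its Walsh support, the quantity characterized in the paper.

```python
# Walsh spectrum and Walsh-support cardinality of a Boolean function.
def walsh_spectrum(truth_table):
    w = [1 - 2 * b for b in truth_table]       # (-1)^f(x)
    n, h = len(truth_table), 1
    while h < n:                               # in-place fast Walsh-Hadamard transform
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

# Example: f(x0, x1, x2) = x0*x1 XOR x2 on 3 variables
tt = [((x & 1) & ((x >> 1) & 1)) ^ ((x >> 2) & 1) for x in range(8)]
spec = walsh_spectrum(tt)
support = [a for a, w in enumerate(spec) if w != 0]
print(spec, len(support))                      # support has cardinality 4 here
```
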
Ariel Futoransky, Ramses Fernandez, Emilio Garcia, Gabriel Larotonda, Sergio Demian Lerner
ePrint Report
We present WISCH, a commit-reveal protocol that combines compact aggregate signatures with hash-based commitments to enable selective disclosure of correlated data in multiparty computation. The protocol separates an on-chain verification core from off-chain preparation, so that verification cost depends only on the number of openings, not on the size of the underlying message space. This yields asymptotic efficiency: on-chain cost grows linearly in the number of revealed items and is independent of the ambient domain, while the per-byte overhead decreases with the message granularity. Security is established via a simulation-based proof in a UC framework with an ideal ledger functionality, in the algebraic group and global random-oracle models, under standard assumptions for discrete-log-based signatures and hash-based commitments. Thus WISCH provides selectively verifiable revelation with succinct on-chain checks and provable security guarantees.
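As a minimal illustration of the commit-reveal pattern (our own sketch, omitting WISCH's aggregate signatures, on-chain logic, and UC machinery), one can commit to a vector of items with independently randomized hashes and later open any subset, so that verification cost is linear in the number of openings only.

```python
# Per-item hash commitments with selective opening (sketch only).
import hashlib, os

def commit(items):
    rands = [os.urandom(32) for _ in items]
    coms = [hashlib.sha256(r + it).digest() for r, it in zip(rands, items)]
    return coms, rands                          # coms published, rands kept secret

def verify(com, item, rand):
    return hashlib.sha256(rand + item).digest() == com

items = [b"alpha", b"beta", b"gamma"]
coms, rands = commit(items)
# Selectively reveal item 1 only; items 0 and 2 stay hidden.
assert verify(coms[1], items[1], rands[1])
```
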