IACR News
Here you can see all recent updates to the IACR webpage.
02 August 2024
Emily Wenger, Eshika Saxena, Mohamed Malhou, Ellie Thieu, Kristin Lauter
ePrint Report
Lattice cryptography schemes based on the learning with errors (LWE) hardness assumption have been standardized by NIST for use as post-quantum cryptosystems, and by HomomorphicEncryption.org for encrypted compute on sensitive data. Thus, understanding their concrete security is critical. Most work on LWE security focuses on theoretical estimates of attack performance, which is important but may overlook attack nuances arising in real-world implementations. The sole existing concrete benchmarking effort, the Darmstadt Lattice Challenge, does not include benchmarks relevant to the standardized LWE parameter choices, such as small secret and small error distributions, and the Ring-LWE (RLWE) and Module-LWE (MLWE) variants. To improve our understanding of concrete LWE security, we provide the first benchmarks for LWE secret recovery on standardized parameters, for small and low-weight (sparse) secrets. We evaluate four LWE attacks in these settings to serve as a baseline: the Search-LWE attacks uSVP, SALSA, and Cool & Cruel, and the Decision-LWE attack Dual Hybrid Meet-in-the-Middle (MitM). We extend the SALSA and Cool & Cruel attacks in significant ways, and implement and scale up MitM attacks for the first time. For example, we recover Hamming weight $9-11$ binomial secrets for Kyber ($\kappa=2$) parameters in $28-36$ hours with SALSA and Cool & Cruel; MitM can solve Decision-LWE instances for Hamming weights up to $4$ in under an hour for Kyber parameters; and uSVP attacks do not recover any secrets after running for more than $1100$ hours. We also compare concrete performance against theoretical estimates. Finally, we open source the code to enable future research.
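For intuition about the attack setting, here is a minimal Python/NumPy sketch of a Search-LWE instance with a sparse binary secret and a small-residual check of a candidate secret. The dimension, modulus, error range, and Hamming weight below are toy values chosen for illustration, not the standardized Kyber or homomorphic-encryption parameters benchmarked above.

```python
# Toy Search-LWE instance with a sparse binary secret; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m, q, hw = 64, 128, 3329, 4            # dimension, #samples, modulus, secret Hamming weight (toy)

s = np.zeros(n, dtype=np.int64)
s[rng.choice(n, size=hw, replace=False)] = 1          # sparse binary secret

A = rng.integers(0, q, size=(m, n), dtype=np.int64)   # uniform public matrix
e = rng.integers(-2, 3, size=m, dtype=np.int64)       # small error terms
b = (A @ s + e) % q                                   # LWE samples (A, b)

def looks_like_secret(A, b, cand, q, bound=2):
    """Accept a candidate if every residual b - A*cand is small modulo q."""
    r = (b - A @ cand) % q
    r = np.minimum(r, q - r)                          # centered absolute value
    return bool(np.all(r <= bound))

print(looks_like_secret(A, b, s, q))                                   # True for the genuine secret
print(looks_like_secret(A, b, np.zeros(n, dtype=np.int64), q))         # almost surely False
```

The residual check is only the final verification step an attacker would run on a recovered candidate; the attacks benchmarked above differ in how they search for that candidate.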
Elijah Pelofske, Vincent Urias, Lorie M. Liebrock
ePrint Report
Generative Pre-Trained Transformer (GPT) models have been shown to be surprisingly effective at a variety of natural language processing tasks, including generating computer code. However, GPT models have generally been shown to be much less effective at specific computational tasks (such as evaluating mathematical functions).
In this study, we evaluate the effectiveness of open source GPT models, with no fine-tuning, and with context introduced by the langchain and localGPT Large Language Model (LLM) framework, for the task of automatic identification of the presence of vulnerable code syntax (specifically targeting C and C++ source code). This task is evaluated on a selection of $36$ source code examples from the NIST SARD dataset, which are specifically curated to not contain natural English that indicates the presence, or lack thereof, of a particular vulnerability (including the removal of all source code comments). The NIST SARD source code dataset contains identified vulnerable lines of source code that are examples of one out of the $839$ distinct Common Weakness Enumerations (CWE), allowing for exact quantification of the GPT output classification error rate. A total of $5$ GPT models are evaluated, using $10$ different inference temperatures and $100$ repetitions at each setting, resulting in $5,000$ GPT queries per vulnerable source code analyzed.
Ultimately, we find that the open source GPT models that we evaluated are not suitable for fully automated vulnerability scanning because the false positive and false negative rates are too high to likely be useful in practice. However, we do find that the GPT models perform surprisingly well at automated vulnerability detection for some of the test cases, in particular surpassing random sampling (for some GPT models and inference temperatures), and being able to identify the exact lines of code that are vulnerable, albeit at a low success rate. The best performing GPT model result found was Llama-2-70b-chat-hf with an inference temperature of $0.1$ applied to NIST SARD test case 149165 (an example of a buffer overflow vulnerability), which had a binary classification recall score of $1.0$ and a precision of $1.0$ for correctly and uniquely identifying the vulnerable line of code and the correct CWE number.
Additionally, the GPT models are able to, with a rate quantifiably better than random sampling, identify the specific line of source that contains the identified CWE for many of the NIST SARD test cases.
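As a rough sketch of the sweep described above (5 models x 10 temperatures x 100 repetitions = 5,000 queries per test case) and of the binary classification scoring against known vulnerable lines, consider the following Python outline. The model names, the `query_llm` stub, and the naive answer parser are hypothetical placeholders, not the authors' langchain/localGPT pipeline.

```python
# Illustrative sweep: 5 models x 10 temperatures x 100 repetitions = 5,000 queries per test case.
# The model names, query_llm stub, and answer parser are hypothetical placeholders.
import re

MODELS = ["model-a", "model-b", "model-c", "model-d", "model-e"]
TEMPERATURES = [round(0.1 * t, 1) for t in range(1, 11)]     # 0.1, 0.2, ..., 1.0
REPETITIONS = 100

def query_llm(model: str, prompt: str, temperature: float) -> str:
    """Placeholder for a call into a local LLM framework."""
    raise NotImplementedError

def parse_flagged_lines(answer: str) -> set[int]:
    """Naive parser: collect integers the model reports as vulnerable line numbers."""
    return {int(x) for x in re.findall(r"line\s+(\d+)", answer, flags=re.IGNORECASE)}

def precision_recall(flagged: set[int], vulnerable: set[int]) -> tuple[float, float]:
    tp = len(flagged & vulnerable)
    precision = tp / len(flagged) if flagged else 0.0
    recall = tp / len(vulnerable) if vulnerable else 0.0
    return precision, recall

print(len(MODELS) * len(TEMPERATURES) * REPETITIONS)         # 5000 queries per analyzed file
```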
Michael Rosenberg, Maurice Shih, Zhenyu Zhao, Rui Wang, Ian Miers, Fan Zhang
ePrint Report
Anonymous Broadcast Channels (ABCs) allow a group of clients to announce messages without revealing the exact author. Modern ABCs operate in a client-server model, where anonymity depends on some threshold (e.g., 1 of 2) of servers being honest. ABCs are an important application in their own right, e.g., for activism and whistleblowing. Recent work on ABCs (Riposte, Blinder) has focused on minimizing the bandwidth cost to clients and servers when supporting large broadcast channels for such applications. But, particularly in low-bandwidth settings, these systems impose large costs on servers, make cover traffic costly, and make volunteer operators unlikely.
In this paper, we describe the design, implementation, and evaluation of ZIPNet, an anonymous broadcast channel that 1) scales to hundreds of anytrust servers by minimizing the computational costs of each server, 2) substantially reduces the servers’ bandwidth costs by outsourcing the aggregation of client messages to untrusted (for privacy) infrastructure, and 3) supports cover traffic that is both cheap for clients to produce and for servers to handle.
Guillaume Girol, Lucca Hirschi, Ralf Sasse, Dennis Jackson, Cas Cremers, David Basin
ePrint Report
The Noise specification describes how to systematically construct a large family of Diffie-Hellman based key exchange protocols, including the secure transports used by WhatsApp, Lightning, and WireGuard. As the specification only makes informal security claims, earlier work has explored which formal security properties may be enjoyed by protocols in the Noise framework, yet many important questions remain open.
In this work we provide the most comprehensive, systematic analysis of the Noise framework to date. We start from first principles and, using an automated analysis tool, compute the strongest threat model under which a protocol is secure, thus enabling formal comparison between protocols. Our results allow us to objectively and automatically associate each informal security level presented in the Noise specification with a formal security claim.
We also provide a fine-grained separation of Noise protocols that were previously described as offering similar security properties, revealing a subclass for which alternative Noise protocols exist that offer strictly better security guarantees. Our analysis also uncovers missing assumptions in the Noise specification and some surprising consequences, e.g. in some situations higher security levels yield strictly worse security.
01 August 2024
San Jose, USA, 5 May - 8 May 2025
Event Calendar
Event date: 5 May to 8 May 2025
31 July 2024
Knud Ahrens
ePrint Report
Non-Interactive Timed Commitment schemes (NITC) allow anyone to open a commitment after a specified delay $t_{\mathrm{fd}}$. This is useful for sealed-bid auctions and as a primitive for more complex protocols. We present the first NITC without repeated squaring or theoretical black-box algorithms like NIZK proofs or one-way functions. It has fast verification, an almost arbitrary delay, and satisfies IND-CCA hiding and perfect binding. Additionally, it needs no trusted setup. Our protocol is based on isogenies between supersingular elliptic curves, making it presumably quantum secure, and all algorithms have been implemented as part of SQISign or other well-known isogeny-based cryptosystems.
Axel Durbet, Koray Karabina, Kevin Thiry-Atighehchi
ePrint Report
Secure sketches are designed to facilitate the recovery of originally enrolled data from inputs that may vary slightly over time. This capability is important in applications where data consistency cannot be guaranteed due to natural variations, such as in biometric systems and hardware security. Traditionally, secure sketches are constructed using error-correcting codes to handle these variations effectively. Additionally, principles of information theory ensure the security of these sketches by managing the trade-off between data recoverability and confidentiality. In this paper, we show how to construct a new family of secure sketches generically from groups. The notion of groups with the unique factorization property is first introduced, which is of independent interest and serves as a building block for our secure sketch construction. Next, an in-depth study of the underlying mathematical structures is provided, and some computational and decisional hardness assumptions are defined. As a result, it is argued that our secure sketches are efficient; can handle a linear fraction of errors with respect to the norm-1 distance; and are reusable and irreversible. To our knowledge, such a generic group-based secure sketch construction is the first of its kind, and it offers a viable alternative to the currently known secure sketches.
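For background, a minimal sketch of the traditional error-correcting-code approach mentioned above (the code-offset construction, here with a toy bit-repetition code) may be helpful; it illustrates the classical technique the paper contrasts with, not the group-based construction introduced in the paper.

```python
# Toy code-offset secure sketch with a bit-repetition code (illustrative only; the paper's
# construction is group-based, not code-based).
import secrets

K, R = 8, 5                      # 8 message bits, each repeated 5 times -> 40-bit codewords
N = K * R

def encode(msg_bits):
    return [b for b in msg_bits for _ in range(R)]

def decode(code_bits):
    # Majority vote within each block of R repeated bits (corrects up to 2 flips per block).
    return [int(sum(code_bits[i * R:(i + 1) * R]) > R // 2) for i in range(K)]

def sketch(w):
    """SS(w) = w XOR c for a random codeword c."""
    c = encode([secrets.randbelow(2) for _ in range(K)])
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_noisy, ss):
    """Rec(w', ss): decode w' XOR ss back to the codeword c, then return ss XOR c = w."""
    c = encode(decode([wi ^ si for wi, si in zip(w_noisy, ss)]))
    return [si ^ ci for si, ci in zip(ss, c)]

w = [secrets.randbelow(2) for _ in range(N)]
ss = sketch(w)
w_noisy = w.copy()
w_noisy[3] ^= 1                  # flip one bit in block 0
w_noisy[17] ^= 1                 # flip one bit in block 3
print(recover(w_noisy, ss) == w) # True: the original enrolled value is recovered exactly
```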
Diego F. Aranha, Georgios Fotiadis, Aurore Guillevic
ePrint Report
For more than two decades, pairings have been a fundamental tool for designing elegant cryptosystems, varying from digital signature schemes to more complex privacy-preserving constructions. However, the advancement of quantum computing threatens to undermine public-key cryptography. Concretely, it is widely accepted that a future large-scale quantum computer would be capable of breaking any public-key cryptosystem used today, rendering today's public-key cryptography obsolete and mandating the transition to quantum-safe cryptographic solutions. This necessity is reinforced by numerous recognized government bodies around the world, including NIST, which initiated the first open competition for standardizing post-quantum (PQ) cryptographic schemes, focusing primarily on digital signatures and key encapsulation/public-key encryption schemes. Despite the current efforts in standardizing PQ primitives, the landscape of complex, privacy-preserving cryptographic protocols, e.g., zkSNARKs/zkSTARKs, is at an early stage. Existing solutions suffer from various disadvantages in terms of efficiency and compactness and, in addition, they need to undergo the required scrutiny to gain the necessary trust in the academic and industrial domains. Therefore, it is believed that the migration to purely quantum-safe cryptography will require an intermediate step in which current classically secure protocols and quantum-safe solutions co-exist. This is reinforced by the report on the Commercial National Security Algorithm Suite version 2.0, mandating the transition to quantum-safe cryptographic algorithms by 2033 and suggesting the incorporation of ECC at the 192-bit security level in the meantime. To this end, the present paper provides a comprehensive study of pairings at the 192-bit security level. We start with an exhaustive review of the literature to collect all existing recommendations for such pairing constructions, from which we extract the most promising candidates in terms of efficiency and security with respect to the advanced Special TNFS attacks. Our analysis focuses not only on the pairing computation itself, but also on additional operations that are relevant in pairing-based applications, such as hashing to pairing groups, cofactor clearing, and subgroup membership testing. We implement all functionalities of the most promising candidates within the RELIC cryptographic toolkit in order to identify the most efficient pairing implementation at the 192-bit security level, and we provide extensive experimental results.
Yujin Oh, Kyungbae Jang, Yujin Yang, Hwajeong Seo
ePrint Report
The progression of quantum computing is considered a potential threat to traditional cryptographic systems, highlighting the significance of post-quantum security. Regarding symmetric key encryption, Grover's algorithm roughly halves the effective key length (a square-root reduction in search complexity). Despite the absence of fully operational quantum computers at present, the necessity of assessing the security of symmetric key encryption against quantum computing continues to grow. In this paper, we implement the ARIA block cipher in a quantum circuit and compare it with previous research. Our implementation of the ARIA quantum circuit achieves over 92.5% improvement in full depth and over 98.7% improvement in Toffoli depth compared to the implementation proposed by Chauhan et al. Compared to Yang et al.'s implementation, our implementation improves the full depth by 36.7% and the number of qubits by 8%. Additionally, we analyze the complexity of Grover's search attack and compare it with the NIST criteria. We confirm that ARIA achieves quantum security levels 1, 3, and 5 (ARIA-128, 192, and 256, respectively).
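As a back-of-the-envelope illustration of how such circuit metrics feed into a Grover key-search estimate, the sketch below multiplies the standard iteration count $\lfloor (\pi/4)\cdot 2^{k/2} \rfloor$ by a per-iteration circuit depth. The per-iteration depth used here is a made-up placeholder, not the paper's ARIA figure.

```python
# Grover key-search cost sketch: total depth ~ iterations x (depth of one Grover iteration,
# dominated by the cipher oracle). The per-iteration depth below is a hypothetical placeholder.
from math import floor, pi, log2

def grover_cost(key_bits: int, iteration_depth: int) -> tuple[int, float]:
    iterations = floor((pi / 4) * 2 ** (key_bits / 2))   # optimal Grover iteration count
    total_depth = iterations * iteration_depth
    return iterations, log2(total_depth)

for k in (128, 192, 256):
    iters, log_depth = grover_cost(k, iteration_depth=2_000)   # placeholder depth per iteration
    print(f"{k}-bit key: ~2^{log_depth:.1f} total depth over {iters:.3g} Grover iterations")
```

The NIST criteria additionally cap the usable circuit depth (MAXDEPTH), so such estimates are compared against both gate-count and depth limits in practice.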
Kyungbae Jang, Yujin Oh, Minwoo Lee, Dukyoung Kim, Hwajeong Seo
ePrint Report
Quantum computers can model and solve several problems that have posed challenges for classical supercomputers, leveraging their natural quantum mechanical characteristics. A large-scale quantum computer is poised to significantly reduce security strength in cryptography. In this context, extensive research has been conducted on quantum cryptanalysis. In this paper, we present optimized quantum circuits for the Korean block ciphers HIGHT and LEA. Our quantum circuits for HIGHT and LEA demonstrate the lowest circuit depth compared to previous results. Specifically, we achieve depth reductions of 48% and 74% for HIGHT and LEA, respectively. We employ multiple novel techniques that effectively reduce the quantum circuit depth with a reasonable increase in qubit count. Based on our depth-optimized quantum circuits for the HIGHT and LEA block ciphers, we estimate the lowest quantum attack complexity for Grover's key search. Our quantum circuit can be utilized for other quantum algorithms, not only for Grover's algorithm. Furthermore, the optimization methods gathered in this work can be adopted for generic quantum implementations in cryptography.
Nikolaos Dimitriou, Albert Garreta, Ignacio Manzur, Ilia Vlasov
ePrint Report
We present Mova, a folding scheme for R1CS instances that does not require committing to error or cross terms, nor makes use of the sumcheck protocol. For reasonable parameter choices, Mova's Prover is about $5$ to $10$ times faster than Nova's Prover, and about $1.5$ times faster than Hypernova's Prover (applied to R1CS instances), not counting the cost of committing to the R1CS witness. Mova's Verifier has a similar cost as Hypernova's Verifier, but Mova has the advantage of having only $4$ rounds of communication, while Hypernova has a logarithmic number of rounds.
Mova, which is based on the Nova folding scheme, manages to avoid committing to Nova's so-called error term $\mathbf{E}$ and cross term $\mathbf{T}$ by replacing said commitments with evaluations of the Multilinear Extension (MLE) of $\mathbf{E}$ and $\mathbf{T}$ at a random point sampled by the Verifier. A key observation used in Mova's soundness proofs is that $\mathbf{E}$ is implicitly committed by a commitment to the input-witness vector $\mathbf{Z}$, since $\mathbf{E}=(A\cdot\mathbf{Z})\circ (B\cdot\mathbf{Z}) - u (C\cdot \mathbf{Z})$.
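A minimal NumPy sketch of the relaxed R1CS relation underlying this observation, with toy matrices and modulus chosen purely for illustration, checks that $\mathbf{E}$ is fully determined by $A$, $B$, $C$, $\mathbf{Z}$, and $u$:

```python
# Relaxed R1CS error term: E = (A·Z) ∘ (B·Z) − u·(C·Z), over a toy prime field.
import numpy as np

q = 97                                   # toy prime modulus
rng = np.random.default_rng(1)
m, n = 4, 6                              # 4 constraints, input-witness vector Z of length 6

A = rng.integers(0, q, (m, n))
B = rng.integers(0, q, (m, n))
C = rng.integers(0, q, (m, n))
Z = rng.integers(0, q, n)                # public inputs + witness
u = int(rng.integers(1, q))              # relaxation scalar

# E is determined by A, B, C, Z, u, which is why a commitment to Z implicitly commits to E.
E = ((A @ Z) * (B @ Z) - u * (C @ Z)) % q

def satisfies_relaxed_r1cs(A, B, C, Z, u, E, q):
    return bool(np.all(((A @ Z) * (B @ Z) - u * (C @ Z) - E) % q == 0))

print(satisfies_relaxed_r1cs(A, B, C, Z, u, E, q))   # True by construction
```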
Krystal Maughan, Joseph Near, Christelle Vincent
ePrint Report
The security of certain post-quantum isogeny-based cryptographic schemes relies on the ability to provably and efficiently compute isogenies between supersingular elliptic curves without leaking information about the isogeny other than its domain and codomain. Earlier work in this direction gives mathematical proofs of knowledge for the isogeny, and as a result, when computing a chain of $n$ isogenies, each subsequent node must verify the correctness of the proof of each preceding node, which is computationally linear in $n$.
In this work, we empirically build a system to prove the execution of the circuit computing the isogeny rather than produce a proof of knowledge. This proof can then be used as part of the verifiable folding scheme Nova, which reduces the complexity of an isogeny proof of computation for a chain of $n$ isogenies from $O(n)$ to $O(1)$ by providing at each step a single proof that proves the whole preceding chain. To our knowledge, this is the first application of this type of solution to this problem.
Xavier Bonnetain, Virginie Lallemand
ePrint Report
In this short note we examine one of the impossible boomerang distinguishers of Skinny-128-384 provided by Zhang, Wang and Tang at ToSC 2024 Issue 2 and disprove it.
The issue arises from the use of the Double Boomerang Connectivity Table (DBCT) as a tool to establish that a boomerang switch over 2 rounds has probability zero, whereas the DBCT only covers specific cases of difference propagation, missing a large set of events that might make the connection possible.
We study in detail the specific instance provided by Zhang et al. and display one example of a returning quartet that contradicts the impossibility.
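For readers unfamiliar with these tables, the short Python sketch below computes the (single S-box) Boomerang Connectivity Table of a 4-bit S-box; the DBCT discussed above composes such connectivity information over two rounds. The PRESENT S-box is used here purely as a standard 4-bit example, not Skinny's.

```python
# Boomerang Connectivity Table (BCT) of a 4-bit S-box.
# BCT[di][do] = #{ x : S^-1(S(x) ^ do) ^ S^-1(S(x ^ di) ^ do) = di }.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD, 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
INV = [SBOX.index(y) for y in range(16)]

def bct(sbox, inv, n=16):
    table = [[0] * n for _ in range(n)]
    for di in range(n):
        for do in range(n):
            table[di][do] = sum(
                1 for x in range(n)
                if inv[sbox[x] ^ do] ^ inv[sbox[x ^ di] ^ do] == di
            )
    return table

T = bct(SBOX, INV)
print(T[0][0])                                                        # 16: trivial differences always return
print(sum(T[di][do] == 0 for di in range(16) for do in range(16)))    # number of zero (impossible) entries
```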
Jong-Yeon Park, Wonil Lee, Bo Gyeong Kang, Il-jong Song, Jaekeun Oh, Kouichi Sakurai
ePrint Report
A prominent countermeasure against side-channel attacks, the hiding countermeasure, typically involves shuffling operations using a permutation algorithm. Especially in the era of Post-Quantum Cryptography, the importance of the hiding countermeasure is emphasized due to computational characteristics like those of lattice- and code-based cryptography. In this context, swiftly and securely generating permutations has a critical impact on an algorithm's security and efficiency. The widely adopted Fisher-Yates shuffle, because of its high security and ease of implementation, is prevalent. However, it has a limitation of complexity O(n) due to its sequential nature. In response, we propose a time-area trade-off swap algorithm, FSS, based on the Butterfly Network with only log(n) depth, log(n) works and O(1) operation time in parallel. We calculate the maximum gain that an attacker can achieve through butterfly operations with only log(n) depth from a side-channel analysis perspective. In particular, we show that it is possible to derive a generalized formula of the attack complexity with higher-order side-channel attacks for arbitrary input sizes through a fractal structure of the butterfly network. Furthermore, our research highlights the possibility of generating efficient and secure permutations utilizing a minimal amount of randomness.
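To make the structure concrete, here is a toy Python sketch of a butterfly-network swap pass: log2(n) stages of conditional swaps between positions whose indices differ in exactly one bit, where each stage has depth 1 when the swap gates run in parallel. This is an illustration of the general butterfly structure, not the paper's FSS algorithm, and a single pass does not yield a uniformly random permutation.

```python
# Toy butterfly-network shuffle: log2(n) stages of conditional swaps between positions whose
# indices differ in exactly one bit. Illustrative only; not the paper's FSS algorithm.
import secrets

def butterfly_shuffle(values):
    n = len(values)
    assert n and (n & (n - 1)) == 0, "length must be a power of two"
    out = list(values)
    stages = n.bit_length() - 1                    # log2(n) stages
    for s in range(stages):
        bit = 1 << s
        for i in range(n):                         # in hardware, each stage runs in parallel (depth 1)
            j = i ^ bit
            if i < j and secrets.randbelow(2):     # one random control bit per swap gate
                out[i], out[j] = out[j], out[i]
    return out

print(butterfly_shuffle(list(range(8))))
```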
Scott Griffy, Anna Lysyanskaya, Omid Mir, Octavio Perez Kempner, Daniel Slamanig
ePrint Report
Delegatable anonymous credentials (DACs) are anonymous credentials that allow a root issuer to delegate their credential-issuing power to secondary issuers who, in turn, can delegate further. This delegation, as well as credential showing, is carried out in a privacy-preserving manner, so that credential recipients and verifiers learn nothing about the issuers on the delegation chain. One particularly efficient approach to constructing DACs is due to Crites and Lysyanskaya (CT-RSA'19), based on mercurial signatures, a type of equivalence-class signature. In contrast to previous approaches, this design is conceptually simple and does not require extensive use of non-interactive zero-knowledge proofs. Unfortunately, the ``CL-type'' DAC schemes proposed so far have a privacy limitation: if an adversarial issuer (even an honest-but-curious one) is part of an honest user's delegation chain, the adversary will be able to detect this fact (and identify the specific adversarial issuer) when the honest user shows its credential. This is because the underlying mercurial signature schemes allow the owner of a secret key to detect when its key was used in a delegation chain.
In this paper we show that it is possible to construct CL-type DACs that do not suffer from this privacy issue. We give a new mercurial signature scheme that provides adversarial public key class hiding; i.e., even if an adversarial signer participated in the delegation chain, the adversary will not be able to identify this fact. This is achieved by introducing structured public parameters for each delegation level, enabling strong privacy features in DACs. Since the setup of these parameters also produces trapdoors that are problematic in privacy applications, we show how to overcome this problem by using techniques from updatable structured reference strings in zero-knowledge proof systems (Groth et al. CRYPTO'18).
In addition, we propose a simple way to realize revocation for CL-type DACs via the concept of revocation tokens. While we showcase this approach to revocation using our DAC scheme, it is generic and can be applied to any CL-type DAC system. Revocation is a feature that is largely unexplored and notoriously hard to achieve for DACs. However, as it is a vital feature for any anonymous credential system, this can help to make DAC schemes more attractive for practical applications.
Chris Brzuska, Cas Cremers, Håkon Jacobsen, Douglas Stebila, Bogdan Warinschi
ePrint Report
A security proof for a key exchange protocol requires writing down a security definition. Authors typically have a clear idea of the level of security they aim to achieve. Defining the model formally additionally requires making choices on games vs. simulation-based models, on partnering, on having one or more Test queries, and on adopting a style of avoiding trivial attacks: exclusion, penalizing, or filtering. We elucidate the consequences, advantages, and disadvantages of the different possible model choices. Concretely, we show that a model with multiple Test queries composes tightly with symmetric-key protocols, while models with a single Test query require a hybrid argument that loses a factor in the number of sessions. To illustrate the usefulness of models with multiple Test queries, we prove the Naxos protocol secure in said model and obtain a tighter bound than adding a hybrid argument on top of a proof in a single Test query model.
Our composition model exposes partnering information to the adversary, circumventing a previous result by Brzuska, Fischlin, Warinschi, and Williams (CCS 2011) showing that the protocol needs to provide public partnering. Moreover, our baseline theorem of key exchange partnering shows that partnering by key equality provides a joint baseline for most known partnering mechanisms, countering previous criticism by Li and Schäge (CCS 2017) that security in models with existential quantification over session identifiers is non-falsifiable.
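The tightness gap mentioned above can be stated in one line: the standard hybrid argument reducing a multi-Test adversary $\mathcal{A}$ to a single-Test adversary $\mathcal{B}$ gives (our formulation, not quoted from the paper) $\mathrm{Adv}^{\mathrm{multi}}_{\mathrm{KE}}(\mathcal{A}) \leq n_s \cdot \mathrm{Adv}^{\mathrm{single}}_{\mathrm{KE}}(\mathcal{B})$, where $n_s$ is the number of sessions; proving security directly in the multi-Test model avoids this factor of $n_s$.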
Jiawei Zhang, Jiangshan Long, Changhai Ou, Kexin Qiao, Fan Zhang, Shi Yan
ePrint Report
By introducing collision information, existing side-channel Correlation-Enhanced Collision Attacks (CECAs) perform collision-chain detection and reduce a given candidate space to a significantly smaller collision-chain space, leading to more efficient key recovery. However, they are still limited by low collision detection speed and a low success rate of key recovery. To address these issues, we first give a Collision Detection framework with Genetic Algorithm (CDGA), which exploits a Genetic Algorithm to detect collision chains and has a strong capability of global searching. Secondly, we theoretically analyze the performance of CECA and bound the searching depth of its output candidate vectors with a confidence level using a rigorous hypothesis test, which is suitable both for Gaussian and non-Gaussian leakages. This facilitates the initialization of the population. Thirdly, we design an innovative goal-directed mutation method to randomly select new gene values for replacement, thus improving the efficiency and adaptability of the CDGA. Finally, to optimize the evolution of the CDGA, we introduce a roulette selection strategy that assigns selection probabilities based on individual fitness values to guarantee the preferential selection of superior genes. A single-point crossover strategy is also used to introduce novel gene segments into the chromosomes, thus enhancing the genetic diversity of the population. Experiments verify the superiority of our CDGA.
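For concreteness, a brief Python sketch of the two genetic-algorithm operators named above, roulette (fitness-proportional) selection and single-point crossover, is given below; the chromosome encoding and the fitness values are hypothetical placeholders, not the paper's CDGA.

```python
# Fitness-proportional (roulette) selection and single-point crossover. Chromosomes and fitness
# values are hypothetical placeholders, not the paper's CDGA.
import random

def roulette_select(population, fitness):
    """Pick one chromosome with probability proportional to its fitness value."""
    total = sum(fitness)
    r = random.uniform(0, total)
    acc = 0.0
    for chrom, fit in zip(population, fitness):
        acc += fit
        if acc >= r:
            return chrom
    return population[-1]

def single_point_crossover(parent_a, parent_b):
    """Splice two parents at one random cut point to introduce new gene segments."""
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]

# Toy usage: chromosomes are lists of key-byte candidates (placeholder encoding).
population = [[random.randrange(256) for _ in range(16)] for _ in range(8)]
fitness = [random.random() for _ in population]                # placeholder fitness scores
parent_a = roulette_select(population, fitness)
parent_b = roulette_select(population, fitness)
child_a, child_b = single_point_crossover(parent_a, parent_b)
print(len(child_a), len(child_b))                              # 16 16
```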
NXP Semiconductors Austria GmbH & CO KG
Job Posting
Ready to join the future of innovation in our team at NXP in Gratkorn/Graz?
Owing to the success of our business, our department is growing and we are looking for a talented student who would like to gain experience in parallel to their studies. In this exciting role, you will be part of a highly talented team developing secure, high-performance crypto software for next-generation products in different market segments (payment, identification, mobile, IoT, automotive, …).
Your responsibilities:
Supporting the Crypto Library development and maintenance
Embedded software development in C
Implementation of cryptographic algorithms
Maintenance of quality KPIs
Your profile:
Ongoing studies in mathematics, computer science, electronic/electrical engineering, information technology or similar
First experience in embedded software development, using C and assembly is an advantage
A solid understanding of microcontroller architecture
First experience in implementing crypto algorithms such as DES, AES, RSA, ECC, SHA is appreciated
You are a team player and you have initiative
Availability around 16 hours per week (on average)
The minimum salary for this position is EUR 17 gross/hour.
We offer:
Flexible working time
On-site free beverages and fresh fruit
Social events (student get-togethers, networking opportunities, etc)
and many other benefits!
We offer internships with a variety of lengths.
We are proud to have received the Leading Employer Award for the 5th time in a row in 2023, which is presented exclusively to the top 1% of employers in Austria. Furthermore, we have recently been certified as a Family Friendly Employer and received the EqualitA label.
Closing date for applications:
Contact: Laura Hoser
More information: https://nxp.wd3.myworkdayjobs.com/careers/job/Gratkorn/Internship--Crypto-Library-SW-Development--f-m-d-_R-10053816
Rovira i Virgili University, Tarragona, Spain
Job Posting
We seek to hire an outstanding PhD candidate to start in December 2024. The successful candidate will participate in the activities of the CRISES research group, which focuses on theoretical advances for computer security and data privacy.
The University offers:
A 4-year PhD scholarship to work in an exciting international environment located in the sunny Mediterranean city of Tarragona, Spain.
Generous travel funds for participation in conferences, summer schools, and research stays.
An automatic switch to a postdoctoral contract once the candidate defends the PhD thesis.
Profile:
A First Class Honours degree (or equivalent) or a Master's degree (with a research component) in computer science or mathematics
Strong academic performance, programming and mathematical skills
A proven interest in computer security and/or related topics
Excellent written and oral English skills
Commitment, team worker, self-motivated and a critical mind
Applications should be written in English and include the following documents:
Curriculum Vitae
A short description of your Master work or Honours thesis (max 1 page)
Transcript of grades from all university-level courses taken
Contact information for 3 referees
Closing date for applications:
Contact: Dr. Rolando Trujillo rolando.trujillo@urv.cat
More information: https://rolandotr.bitbucket.io/open-positions.html
University of South Florida, The Department of Computer Science and Engineering, Tampa, FL, USA.
Job Posting
We have a fully funded Ph.D. position about Deep Learning (DL) and secure multi-party computation (MPC) applied to medical systems beginning from Fall 2025 (August 2025) or Spring 2025 (January 2025) at University of South Florida (USF). Students receive a yearly package worth approximately $65,000, which covers all the tuition, health insurance, fringe benefits, and a competitive monthly salary.
The project offers an excellent multi-disciplinary opportunity involving flagship medical centers and top machine learning experts, combined with MPC and applied cryptography. The candidate will work on private diagnosis with DL and federated learning, developing specially designed MPC protocols for medical applications. USF is a Rank-1 Research University and AAU member, and USF CSE is in the top 15% of Computer Science departments in public universities based on Academic Analytics Scholarly Research Index data (and 8th for patents in the USA).
Requirements:
- A BS degree in ECE/CS with a high GPA
- Excellent programming skills (e.g., C, C++), familiarity with Linux
- MS degree in ECE/CS/Math is a big plus. Publications will be regarded as a plus but not required.
- Please send your CV, transcripts, TOEFL/IELTS scores (required), publications (optional), and GRE (highly preferred).
Closing date for applications:
Contact:
Email: attilaayavuz@usf.edu
Webpage : http://www.csee.usf.edu/~attilaayavuz/