International Association for Cryptologic Research

International Association
for Cryptologic Research

IACR News item: 01 December 2025

Linköping University, Sweden
Job Posting Job Posting
Large language model (LLM) agents represent the next generation of artificial intelligence (AI) systems, integrating LLMs with external tools and memory components to execute complex reasoning and decision-making tasks. These agents are increasingly deployed in domains such as healthcare, finance, cybersecurity, and autonomous vehicles, where they interact dynamically with external knowledge sources, retain memory across sessions, and autonomously generate responses and actions. While their adoption brings transformative benefits, it also exposes them to new and critical security risks that remain poorly understood. Among these risks, memory poisoning attacks pose a severe and immediate threat to the reliability and security of LLM agents. These attacks exploit the agent’s ability to store, retrieve, and adapt knowledge over time, leading to biased decisions, manipulation of real-time behavior, security breaches, and system-wide failures. The goal of this project is to develop a theoretical foundation for understanding and mitigating memory poisoning in LLM agents. This position, funded by the Swedish Research Council (VR), offers an exciting opportunity to work at the forefront of AI security, tackling some of the most pressing challenges in the field. Full information and application link: https://liu.se/en/work-at-liu/vacancies/27883

Closing date for applications:

Contact: Khac-Hoang Ngo, Assistant Professor, khac-hoang.ngo@liu.se

More information: https://liu.se/en/work-at-liu/vacancies/27883

Expand

Additional news items may be found on the IACR news page.