The Agents team investigates how to build, align, and scale frontier AI systems that can tackle complex, multi-step tasks and workflows, with a particular focus on agentic and scientific domains. Our work sits at the intersection of agent capabilities, human-computer interaction, and infrastructure, ranging from designing post-training methods for agentic behavior to developing evaluation frameworks for open-ended tasks where traditional metrics fall short.

As a research intern, you will work on problems at the frontier of agentic AI, where challenges in alignment, reliability, and scalability are deeply intertwined. Projects may involve developing new training recipes for self-learning and long-horizon reasoning, curating datasets for non-deterministic scientific and agentic tasks, studying failure modes in agentic behavior, or building infrastructure that enables agent operations at scale. You'll operate in a space where algorithmic innovation, dataset and interaction design, and systems work come together to push the boundaries of what AI agents can reliably accomplish.
Responsibilities:
Research and implement novel techniques in one or more of our focus areas
Design and conduct rigorous experiments to validate hypotheses
Document findings in scientific publications and blog posts
Communicate the plans, progress, and results of projects to the broader team
Requirements:
Currently pursuing a Ph.D. in Computer Science, Electrical Engineering, Information Science, Statistics, or a related field
Publications at leading ML conferences or journals (such as NeurIPS, ICML, ICLR, *ACL, EMNLP)
Strong knowledge of machine learning and deep learning fundamentals
Experience with deep learning frameworks (PyTorch, JAX, etc.)
Understanding of how large language models (LLMs) work
Strong programming skills in Python
Familiarity with Transformer architectures and recent developments in foundation models