
Research Engineer, Multimodal Reinforcement Learning

DeepMind
On-site
Zurich, Switzerland

Snapshot

Are you a Research Engineer with a passion for Reinforcement Learning and Multimodality? Join Google DeepMind’s Frontier AI Unit! We are seeking a researcher to help us make learning more efficient through conversational environments. Text-based reasoning has shown immense promise, and we are now moving the frontier toward image-grounded, multimodal, and retrieval-augmented conversational setups. You will bridge the gap between conversational learning and the visual domain, applying the latest RL methods to create scalable, semi-verifiable environments that power the next generation of our models (e.g., Gemini).

About us

Google DeepMind: Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

Frontier AI Unit: The Frontier AI Unit is responsible for building and scaling the next generation of our core models. Within this group, our team focuses on "conversationality" as a mechanism for efficient learning. We believe that conversational learning transfers across environments. We are moving beyond Chain-of-Thought (CoT) and text-only setups to build multimodal, multi-turn reasoning capabilities, leveraging an ecosystem of autoraters and autousers to scale environment creation.

The role

We have strong evidence that conversational environments lead to better, more transferable learning. However, we need to go beyond text. As a Research Engineer, you will play a pivotal role in expanding meta-reinforcement learning to multimodal setups. You will help us leapfrog current industry benchmarks by extending our focus from verifiable domains to semi-verifiable, multimodal domains (e.g., Lens, image-grounded reasoning).

This is an ecosystem play: you will leverage our advantages in autoraters and autousers to scale the creation of these conversational environments. You will be the bridge between the core conversational work and the specifics of grounding in the visual domain, moving our training infrastructure from static data towards dynamic, multi-turn environments.

Key responsibilities

  • Multimodal RL Research: Design and implement novel RL algorithms that enable multi-turn reasoning and learning in multimodal (text + vision) environments.
  • Environment Scaling: Contribute to the "ecosystem" of autoraters and autousers, building the infrastructure needed to generate high-quality, semi-verifiable training environments at scale.
  • Strategic Application: Apply state-of-the-art methods to solve strategic problems, specifically closing the gap between single-turn and multi-turn embeddings (retrieval-augmented reasoning).
  • Experimentation & Analysis: Track, interpret, and analyze complex experiments, bringing scientific rigor to our training pipelines.
  • Collaboration: Act as a connector between teams (Google Research, Core, GDM GenAI), helping to build shared pipelines for conversational infrastructure that serve product needs in Search, Lens, and YouTube.

What we can offer you

  • Scientific Contribution: The opportunity to publish and contribute to the scientific community, specifically in the high-impact intersection of RL, Multimodality, and Reasoning.
  • Scale & Resources: Access to world-class compute and the existing infrastructure of autoraters/autousers, allowing you to focus on innovation rather than building from scratch.
  • Direct Impact: Your work will directly influence the reasoning capabilities of Google’s flagship models (Gemini), moving the needle on how models learn and interact with the world.
  • Collaborative Culture: Work alongside world-leading experts in RL and Generative AI in a supportive, growth-oriented environment.

About you

We are looking for a Research Engineer who is not just technically proficient but deeply curious about the mechanics of learning. You should be up to date with the latest methods in RL and eager to apply them to messy, ambiguous, and high-impact strategic problems. You are comfortable bridging the gap between abstract research and concrete implementation.

Essential skills:

  • PhD or Equivalent Experience: A PhD in Computer Science, AI, or a related field, or equivalent practical experience, with a specific focus on Reinforcement Learning (RL).
  • Proven Research Track Record: A history of scientific contributions (e.g., publications at NeurIPS, ICML, ICLR, CVPR) or significant contributions to state-of-the-art AI models.
  • Multimodal Experience: Concrete experience working with multimodal models (vision + language) and understanding the specific challenges of grounding text in visual data.
  • Engineering Excellence: Strong coding skills (Python, JAX/TensorFlow/PyTorch) and experience designing and executing complex experiments.

Useful skills:

  • Retrieval & Embeddings: Experience with retrieval-augmented generation (RAG), embedding spaces, or search infrastructure.
  • Multi-Agent Systems: Familiarity with self-verification, introspection, reflection, or multi-agent negotiation frameworks.
  • Infrastructure: Experience building or scaling training environments, autoraters, or reward models.

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

Apply now