This role works with sensitive content or situations and may be exposed to graphic, controversial, and/or upsetting topics or content.
As Red Teaming Lead in Responsibility at Google DeepMind, you will work with a diverse team to drive and grow red teaming of Google DeepMind's most groundbreaking models. You will be responsible for our frontier risk red teaming program, which probes for and identifies emerging model risks and vulnerabilities. You will pioneer the latest red teaming methods with teams across Google DeepMind and external partners to ensure that our work is conducted in line with responsibility and safety best practices, helping Google DeepMind progress toward its mission.
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
As a Red Teaming Lead working in Responsibility, you'll be responsible for managing and growing our frontier risk red teaming program. You will conduct hands-on red teaming of advanced AI models, partner with external organizations on red teaming exercises, and work closely with product and engineering teams to develop the next generation of red teaming tooling. You'll support the team across the full range of development, from running early tests to building higher-level frameworks and reports that identify and mitigate risks.
Key responsibilities
To set you up for success in this role, we are looking for the following skills and experience:
In addition, the following would be an advantage:
The US base salary range for this full-time position is between $174,000 and $258,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.
Note: In the event your application is successful and an offer of employment is made to you, that offer will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunities regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.