The Future of Life Institute (FLI) is hiring two US Policy Team Members to support our policy work and US government engagement, promoting the safe governance of AI through a bipartisan lens. The US Policy Team Members will report directly to the Head of US Policy.
Who we are
FLI educates and engages lawmakers, key stakeholders, and the general public about transformative technologies and their implications. We advocate for policies and approaches that mitigate catastrophic risks while advancing positive futures for humanity.
Application Deadline: 3 April 2026
Location: Washington, DC
Start Date: We'd like the chosen candidate to start as soon as possible after accepting an offer.
Application Process: Apply by uploading the following:
- Your resume
- A short video response via YouTube to the question "Why do you want to work at FLI?" We are requesting video responses in an attempt to reduce the number of spam applications we receive.
- An answer to the following question: In 250 words or less, please outline the opportunities you see for Congressional legislation to address AI risk over the next two years.
FLI aims to be an inclusive organization. We proactively seek job applications from candidates with diverse backgrounds. If you are passionate about FLI’s mission and think you have what it takes to be successful in this role even though you may not check all the boxes, please still apply. We would appreciate the opportunity to consider your application.
Questions may be directed to jobsadmin@futureoflife.org.
About the Future of Life Institute
Founded in 2014, FLI is an independent non-profit working to steer transformative technology towards benefiting life and away from extreme large-scale risks. Our work includes grantmaking, educational outreach, and policy engagement.
Our work has been featured in The Washington Post, Politico, Vox, Forbes, The Guardian, the BBC, and Wired.
Some of our achievements include:
- Superintelligence Statement, an open letter calling for a prohibition on the development of superintelligent AI until there is strong public buy-in and scientific consensus that it can be developed safely and controllably. The letter has been signed by more than 130,000 people, including Yoshua Bengio, Geoffrey Hinton, Steve Wozniak, Richard Branson, Steve Bannon, Prince Harry and Susan Rice.
- The Asilomar AI Principles, one of the earliest and most influential sets of AI governance principles.
- The Pro-Human AI Declaration, a document with five guidelines on how AI development must be centered on humanity first, created during a meeting of bipartisan civil society groups spanning faith, labor, and think tanks.
- Testimony to the US Senate AI Insight Forum on Innovation, and the first civil society presentation to the US House of Representatives Taskforce on AI, encouraging policies that mitigate catastrophic risks from AI.
- Participation in the US AI Safety Institute Consortium.
- The AI Safety Index, an objective rating of AI companies on key safety and security domains, as judged by experts in the field.
FLI is a largely virtual organization, with a team of more than 30 people distributed internationally, mostly in Europe and the US. We have four offices: Campbell, California; Brussels, Belgium; London, UK; and Washington, DC.