The Center for AI Safety (CAIS) is a leading research and advocacy organization focused on mitigating societal-scale risks from AI. We work to ensure that AI is developed and deployed safely, aligning its impact with the long-term interests of humanity. This means engaging with policymakers, researchers, industry leaders, and the broader public to build awareness and support for measures that can meaningfully reduce AI risk.
The AI Safety Newsletter (https://newsletter.safe.ai/) is a clear, concise weekly newsletter covering developments in AI and AI safety for a broad audience—from researchers and policymakers to practitioners and concerned citizens. Recent editions cover policy, governance, industry moves, and notable research.
We’re seeking a Newsletter Editor who keeps up to date with AI safety news, can identify the most compelling stories for an AI safety audience, and writes clearly. Your role will include story selection, drafting, editing, and publishing newsletter issues—while working closely with our team.
We are open to full-time or part-time arrangements. We prefer candidates based in San Francisco, but we also welcome remote candidates.
Know someone who could be a great fit for this role? Submit their details through our Referral Form. If we end up hiring your referral, you’ll receive a $1,500 bonus once they’ve been with CAIS for 90 days.
The Center for AI Safety is an Equal Opportunity Employer. We consider all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, national origin, ancestry, age, disability, medical condition, marital status, military or veteran status, or any other protected status in accordance with applicable federal, state, and local laws. In alignment with the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records for employment.
If you require a reasonable accommodation during the application or interview process, please email contact@safe.ai.
We value diversity and encourage individuals from all backgrounds to apply.