GoFundMe logo

Adversarial AI Engineer

GoFundMe
On-site
San Francisco, CA

Want to help us help others? We’re hiring! 

GoFundMe is the world’s most powerful community for good, dedicated to helping people help each other. By uniting individuals and nonprofits in one place, GoFundMe makes it easy and safe for people to ask for help and support causes—for themselves and each other. Together, our community has raised more than $40 billion since 2010. Join us! 

Every day, millions turn to us in their most vulnerable moments, and our technology must be safe, resilient, and trustworthy. We’re looking for an Adversarial AI Engineer who combines offensive security expertise with machine learning depth. You will execute red team testing against our AI systems, build automated adversarial evaluation pipelines, and deploy defense mechanisms to keep our AI safe. Beyond technical execution, you’ll influence governance, partner with cross-functional stakeholders, and establish GoFundMe as a thought leader in AI security.

This role is foundational. You’ll lead red team operations and strengthen our AI/ML systems that power fraud detection, content moderation, and Trust & Safety at scale. You will also design adversarial testing frameworks, deploy real-time defenses, and set governance for secure AI deployment across GoFundMe’s platform—protecting billions in charitable giving from evolving attack vectors.

Candidates considered for this role will be located in the San Francisco, Bay Area. There will be an in-office requirement of 3x a week.

The Job

  • Adversarial Testing & Red Teaming
    • Execute adversarial testing of LLMs, Agentic AI systems, recommendation models, and fraud detection tools using techniques such as prompt injection, jailbreaking, data poisoning, model inversion, and membership inference attacks.
    • Develop synthetic attack datasets tailored to fundraising and trust & safety scenarios.
  •  Build Security Frameworks & Defenses
    • Develop automated adversarial testing pipelines integrated into CI/CD, create reusable robustness evaluation libraries, and generate synthetic attack datasets for fundraising scenarios.
    • Build reusable robustness evaluation libraries.
    • Deploy real-time detection for prompt injection and model evasion, implement input validation, output filtering, adversarial training, and differential privacy mechanisms.
  • Cross-Functional Security Leadership
    • Partner with Trust & Safety on fraud countermeasures.
    • Collaborate with Product Security on AI threat models and governance.
    • Work with Data Science to mitigate algorithmic bias and ensure robust defense.
  • Governance & Policy
    • Establish AI security policies, training, and deployment review processes aligned with NIST AI RMF.
    • Build monitoring and incident response systems for AI security.
  • Research & Innovation
    • Stay current with emerging attack vectors and defense mechanisms.
    • Contribute to open-source adversarial tools.
    • Publish and speak externally to advance GoFundMe’s leadership in AI security.

You 

  • 6–8 years in cybersecurity with a focus on AI/ML security or adversarial ML.
  • 2+ years specialized LLM security experience (prompt injection, jailbreaking, adversarial prompt crafting).
  • Proven red team / penetration testing background on AI systems.
  • Strong Python programming with ML frameworks (TensorFlow, PyTorch, Hugging Face).
  • Deep understanding of ML fundamentals, Neural Networks, transformers (GPT, LLaMA, Claude, BERT) and known vulnerabilities.
  • Experience testing Agentic AI security testing including agent frameworks (LangGraph, AutoGen, CrewAI, Google ADK, Pydantic AI).
  • Skilled in adversarial attack methods: data poisoning, model evasion, membership inference, model extraction.
  • Knowledge of defense mechanisms: adversarial training, input sanitization, differential privacy, robustness certification.
  • Hands-on adversarial attack experience: data poisoning, model evasion, membership inference, model extraction.
  • Familiarity with OWASP Top 10 for LLMs, MITRE ATLAS, NIST AI RMF.
  • Experience with threat modeling, security architecture, and cloud controls (AWS, GCP, Azure).

Preferred

  • Multimodal AI security experience (vision-language, audio).
  • Background in financial services, fintech, or sensitive transaction platforms.
  • AI compliance, audit, and regulatory experience.
  • Published AI security research or open-source contributions.
  • Trust & Safety or fraud detection systems experience.
  • Privacy-preserving ML techniques (federated learning, homomorphic encryption).

 

Why you’ll love it here

  • Make an Impact: Be part of a mission-driven organization making a positive difference in millions of lives every year.
  • Innovative Environment: Work with a diverse, passionate, and talented team in a fast-paced, forward-thinking atmosphere.
  • Collaborative Team: Join a fun and collaborative team that works hard and celebrates success together.
  • Competitive Benefits: Enjoy competitive pay and comprehensive healthcare benefits.
  • Holistic Support: Enjoy financial assistance for things like hybrid work, family planning, along with generous parental leave, flexible time-off policies, and mental health and wellness resources to support your overall well-being.
  • Growth Opportunities: Participate in learning, development, and recognition programs to help you thrive and grow.
  • Commitment to DEI: Contribute to diversity, equity, and inclusion through ongoing initiatives and employee resource groups.
  • Community Engagement: Make a difference through our volunteering and Gives Back programs.

We live by our core values: impatient to be great, find a way, earn trust every day, fueled by purpose. Be a part of something bigger with us!

GoFundMe is proud to be an equal opportunity employer that actively pursues candidates of diverse backgrounds and experiences.  We do not discriminate on the basis of race, color, religion, ethnicity, nationality or national origin, sex, sexual orientation, gender, gender identity or expression, pregnancy status, marital status, age, medical condition, mental or physical disability, or military or veteran status.

The total annual salary for this full-time position is $181,000 - $271,000 + equity + benefits.  As this is a hybrid position, the salary range was determined by role, level, and possible location across the US. Individual pay is determined by work location and additional factors including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range based on your location during the hiring process. 

If you require a reasonable accommodation to complete a job application or a job interview or to otherwise participate in the hiring process, please contact us at accommodationrequests@gofundme.com

Global Data Privacy Notice for Job Candidates and Applicants:

Depending on your location, the General Data Protection Regulation (GDPR) or certain US privacy laws may regulate the way we manage the data of job applicants. Our full notice outlining how data will be processed as part of the application procedure for applicable locations is available here. By submitting your application, you are agreeing to our use and processing of your data as required. 

Learn more about GoFundMe:

We’re proud to partner with GoFundMe.org, an independent public charity, to extend the reach and impact of our generous community, while helping drive critical social change. You can learn more about GoFundMe.org’s activities and impact in their FY ‘24 annual report.

Our annual “Year in Help” report reflects our community’s impact in advancing our mission of helping people help each other.

For recent company news and announcements, visit our Newsroom.