About Mistral
At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.
We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work.
We are a dynamic, collaborative team passionate about AI and its potential to transform society.
Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed across France, the USA, the UK, Germany, and Singapore. We are creative, low-ego and team-spirited.
Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact. See more about our culture on https://mistral.ai/careers.
About The Job
The Applied AI team is Mistral's customer-facing technical organization. We work directly with enterprise clients from pre-sales through implementation to deploy cutting-edge AI solutions that deliver measurable business impact. Our team combines deep ML expertise with strong customer engagement skills, operating like startup CTOs who own end-to-end project execution.
However, the AI graveyard is full of great ideas nobody could measure and prototypes that never made it to production. As our first Evaluation Engineer, you'll design the methodology, build the infrastructure, and define what "ready for production" means across verticals and use cases.
You will design and implement evaluation systems that help our customers understand model performance across their specific use cases, build robust evaluation infrastructure, and work closely with both research and customer-facing teams.
Research builds evals for frontier capabilities, but customers don't care about MMLU scores. In Applied AI, we need evals and frameworks built for customer reality: domain-specific, risk-aware, production-grade. The kind that tell you whether your medical summarization model will hallucinate drug interactions, or whether your legal assistant will invent case citations.
This role sits at the intersection of research, engineering, and solutions: you will play a critical cross-functional role in measuring, understanding, and improving the capabilities of our models for our enterprise customers.
What you will do
- Design and implement comprehensive evaluation frameworks to measure LLM capabilities across diverse customer use cases, including text generation, reasoning, code, and domain-specific applications
- Build scalable evaluation infrastructure and pipelines that enable rapid, reproducible assessment of model performance
- Develop novel evaluation methodologies to assess emerging capabilities or verticalized use cases (cybersecurity, finance, healthcare, etc.) and enable the Solutions teams (Deployment Strategists and Applied AI) on these topics
- Create custom evaluation suites tailored to enterprise customers' specific needs, working closely with them to understand their requirements and success criteria
- Collaborate with research teams to translate evaluation insights into model improvements and training decisions
- Partner with product teams to continuously improve our evaluation tooling based on customer feedback
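To make the kind of domain-specific, production-grade check described above concrete, here is a minimal illustrative sketch (all names here, such as `KNOWN_CASES` and `check_citations`, are hypothetical examples and not Mistral tooling): a custom eval that flags when a legal assistant's output cites a case absent from a trusted index.

```python
# Illustrative sketch only: a minimal domain-specific evaluation check.
# KNOWN_CASES and check_citations are hypothetical names, not Mistral tooling.
import re

# Trusted index of real case citations the assistant is allowed to reference.
KNOWN_CASES = {"Smith v. Jones (2019)", "Doe v. Acme Corp (2021)"}

# Matches simple citations of the form "Name v. Name (YYYY)".
CITATION_RE = re.compile(r"[A-Z][\w.]* v\. [A-Z][\w. ]*?\(\d{4}\)")

def check_citations(output: str) -> dict:
    """Flag any cited case not found in the trusted index."""
    cited = set(CITATION_RE.findall(output))
    invented = cited - KNOWN_CASES
    return {
        "citations": sorted(cited),
        "invented": sorted(invented),   # hallucinated citations
        "passed": not invented,
    }

result = check_citations(
    "Per Smith v. Jones (2019), the claim fails; see also Roe v. Nowhere (2020)."
)
```

A production eval suite would aggregate many such checks (drug-interaction lookups, numeric-consistency tests, format validators) over a dataset of customer prompts and report pass rates per use case, rather than scoring one output at a time.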
How We Work in Applied AI
- We care about people and outputs.
- What matters is what you ship, not the time you spend on it.
- Bureaucracy is where urgency goes to vanish. You talk to whoever you need to talk to. The best idea wins, whether it comes from a principal engineer or someone in their first week.
- Always ask why. The best solutions come from deep understanding, not from copying what worked before.
- We say what we mean. Feedback is direct, timely, and given because we care.
- No politics. Low ego, high standards.
- We embrace an unstructured environment and find joy in it.
About you
- You are fluent in English
- You have 3+ years of experience in ML evaluation and benchmarking for LLM or agentic systems
- You have proven experience implementing AI or machine learning products, including APIs and back-end systems
- You have a deep understanding of the concepts and algorithms underlying machine learning and LLMs
- You have strong technical coding skills in Python
- You have strong communication skills and can explain complex technical concepts in simple terms to both technical and non-technical audiences
Ideally you have:
- Contributions to open-source evaluation frameworks (e.g., LM Eval Harness, OpenAI Evals) or published research on LLM evaluation
- Experience as a Customer Engineer, Forward Deployed Engineer, Sales Engineer, Solutions Architect or Technical Product Manager
- Experience with ML frameworks (PyTorch, HuggingFace Transformers)
Benefits
🏝️ PTO: The CDI contract will be a "Forfait 218 jours", corresponding to 25 days of holiday plus 8 to 10 RTT days on average, with complete autonomy over working hours
⚕️ Health: Full health insurance coverage for you and your family
🚗 Transportation: We offer a €600 annual mobility allowance. This package covers 50% of your public transportation costs and includes the Sustainable Mobility Allowance (FMD), encouraging eco-friendly travel options such as cycling or carpooling.
🥕 Food: Swile meal vouchers of €10.83 per worked day, 60% of which is covered by the company
🏀 Sport: Gymlib, with Mistral sponsoring a significant part of the monthly fee (depending on the program you choose)
🐤 Parental policy: 4 additional weeks of leave for parents, on top of what is offered by the French state.