Hiring
Benefits & Perks
• Unlimited PTO
• Healthcare
• Commuter Benefits
• Free Meals
• Gym
• Parental Leave
Required Skills
Python
Machine Learning
LLM evaluation
Benchmarking
About Mistral
At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.
We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work.
We are a dynamic, collaborative team passionate about AI and its potential to transform society.
Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed across France, the USA, the UK, Germany, and Singapore. We are creative, low-ego and team-spirited.
Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact. See more about our culture at https://mistral.ai/careers.
About The Job:
The Applied AI team is Mistral's customer-facing technical organization. We work directly with enterprise clients from pre-sales through implementation to deploy cutting-edge AI solutions that deliver measurable business impact. Our team combines deep ML expertise with strong customer engagement skills, operating like startup CTOs who own end-to-end project execution.
However, the AI graveyard is full of great ideas nobody could measure and prototypes that never made it to production. As our first Evaluation Engineer, you'll design the methodology, build the infrastructure, and define what "ready for production" means across verticals and use cases.
You will design and implement evaluation systems that help our customers understand model performance across their specific use cases, build robust evaluation infrastructure, and work closely with both research and customer-facing teams.
Research builds evals for frontier capabilities, but customers don't care about MMLU scores. In Applied AI, we need evals and frameworks built for customer reality: domain-specific, risk-aware, and production-grade. The kind that tell you whether your medical summarization model will hallucinate drug interactions, or whether your legal assistant will invent case citations.
This role sits at the intersection of research, engineering, and solutions: you will play a critical cross-functional role in measuring, understanding, and improving the capabilities of our models for our enterprise customers.
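To make "risk-aware, domain-specific" concrete, here is a toy sketch of one such check: flagging drug names that appear in a model's summary but not in the source document, a crude proxy for drug-interaction hallucination. The function name, the lexicon, and the example texts are all hypothetical illustrations, not an actual Mistral evaluation.

```python
import re

def ungrounded_drugs(source: str, summary: str, drug_lexicon: set[str]) -> set[str]:
    """Return drug names mentioned in the summary but absent from the source.

    A crude hallucination proxy: any drug the model names that the source
    document never mentions is flagged as ungrounded. (Hypothetical sketch.)
    """
    def mentioned(text: str) -> set[str]:
        # Lowercase word tokens, intersected with a known drug lexicon.
        tokens = set(re.findall(r"[A-Za-z]+", text.lower()))
        return {d for d in drug_lexicon if d in tokens}

    return mentioned(summary) - mentioned(source)

# Hypothetical data: "warfarin" appears only in the model's summary.
source = "Patient is prescribed metformin for type 2 diabetes."
summary = "Patient takes metformin; note interaction risk with warfarin."
lexicon = {"metformin", "warfarin", "ibuprofen"}
print(ungrounded_drugs(source, summary, lexicon))  # {'warfarin'}
```

A production version would use entity linking rather than string matching, but the shape is the same: a domain lexicon plus a groundedness rule yields a pass/fail signal a customer can act on.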
What you will do
- Design and implement comprehensive evaluation frameworks to measure LLM capabilities across diverse customer use cases, including text generation, reasoning, code, and domain-specific applications
- Build scalable evaluation infrastructure and pipelines that enable rapid, reproducible assessment of model performance
- Develop novel evaluation methodologies to assess emerging capabilities and verticalized use cases (cybersecurity, finance, healthcare, etc.), and enable the Solutions teams (Deployment Strategists and Applied AI) on these topics
- Create custom evaluation suites tailored to enterprise customers' specific needs, working closely with them to understand their requirements and success criteria
- Collaborate with research teams to translate evaluation insights into model improvements and training decisions
- Partner with product teams to continuously improve our evaluation tooling based on customer feedback
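The "reproducible assessment" bullet above can be sketched in miniature: fixed cases, a model callable, and a deterministic scorer give runs that can be repeated and compared. Every name here (`EvalCase`, `run_eval`, the echo model, the exact-match scorer) is a hypothetical stand-in, not Mistral's actual tooling.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class EvalCase:
    prompt: str
    reference: str

def run_eval(
    cases: list[EvalCase],
    model: Callable[[str], str],
    score: Callable[[str, str], float],
) -> dict:
    """Run every case through the model and aggregate scores.

    Frozen cases + a fixed scoring function = reproducible runs.
    """
    scores = [score(model(c.prompt), c.reference) for c in cases]
    return {"n": len(scores), "mean_score": sum(scores) / len(scores)}

# Hypothetical stand-ins: a canned model and an exact-match scorer.
cases = [EvalCase("2+2=", "4"), EvalCase("capital of France?", "Paris")]
fake_model = lambda p: "4" if "2+2" in p else "Paris"
exact = lambda out, ref: float(out == ref)
print(run_eval(cases, fake_model, exact))  # {'n': 2, 'mean_score': 1.0}
```

Real pipelines add versioned datasets, cached model calls, and per-slice reporting on top of this loop, but the core contract stays: same cases, same scorer, same numbers.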
How We Work in Applied AI:
- We care about people and outputs.
- What matters is what you ship, not the time you spend on it.
- Bureaucracy is where urgency goes to vanish. You talk to whoever you need to talk to. The best idea wins, whether it comes from a principal engineer or someone in their first week.
- Always ask why. The best solutions come from deep understanding, not from copying what worked before.
- We say what we mean. Feedback is direct, timely, and given because we care.
- No politics. Low ego, high standards.
- We embrace an unstructured environment and find joy in it.
About you
- You are fluent in English
- 3+ years of experience in ML evaluation and benchmarking for LLMs or agentic systems
- You have proven experience in AI or machine learning product implementation with APIs and back-end systems
- You have a deep understanding of the concepts and algorithms underlying machine learning and LLMs
- You have strong technical coding skills in Python
- You have strong communication skills, with the ability to explain complex technical concepts in simple terms to both technical and non-technical audiences
Ideally you have:
- Contributions to open-source evaluation frameworks (e.g., LM Eval Harness, OpenAI Evals) or published research on LLM evaluation
- Experience as a Customer Engineer, Forward Deployed Engineer, Sales Engineer, Solutions Architect or Technical Product Manager
- Experience with ML frameworks (PyTorch, Hugging Face Transformers)
Benefits:
🏝️ PTO: The CDI contract is a "Forfait 218 jours", corresponding to 25 days of holiday plus on average 8 to 10 RTT days, with complete autonomy over working hours
⚕️ Health: Full health insurance coverage for you and your family
🚗 Transportation: We offer a €600 annual mobility allowance. This package covers 50% of your public transportation costs and includes the Sustainable Mobility Allowance (FMD), encouraging eco-friendly travel options such as cycling or carpooling.
🥕 Food: Swile meal vouchers of €10.83 per worked day, 60% of which is covered by the company
🏀 Sport: Gymlib, with Mistral sponsoring a significant part of the monthly fee (depending on the program you choose)
🐤 Parental policy: 4 additional weeks for parents on top of what is offered by the French state.