Hiring
Required skills
TypeScript
Python
LLM
A/B Testing
Prompt Engineering
About Mistral
At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.
We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work.
We are a dynamic, collaborative team passionate about AI and its potential to transform society.
Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed across France, the USA, the UK, Germany, and Singapore. We are creative, low-ego, and team-spirited.
Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact. See more about our culture on https://mistral.ai/careers.
Role summary
Embedded directly in a product team such as search, chat, documents, or audio, you'll improve AI-powered features through rigorous evaluation, prompt and orchestration design, and rapid experimentation. You'll own your domain's AI quality end-to-end: define what "good" looks like, measure it, run experiments, and ship what works. You'll work with the Science team to deliver measurable improvements to quality, latency, safety, and reliability.
What you will do
- Design and run evaluations for your product area: reference tests, heuristics, model-graded checks tailored to search relevance, chat quality, document understanding, or audio performance.
- Define and track metrics that matter: task success, helpfulness, hallucination proxies, safety flags, latency, cost.
- Own prompt and orchestration design: write, test, and iterate on prompts and system prompts as a core part of your work.
- Run A/B tests on prompts, models, and configurations; analyze results; make rollout or rollback decisions from data.
- Set up observability for LLM calls: structured logging, tracing, dashboards, alerts.
- Operate model releases: canary and shadow traffic, sign-offs, SLO-based rollback criteria, regression detection.
- Improve core behaviors in your product area, whether that's memory policies, intent classification, routing, tool-call reliability, or retrieval quality.
- Create templates and documentation so other teams can author evals and ship safely.
- Partner with Science to diagnose regressions and lead post-mortems.
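The eval-driven loop described above can be sketched as a minimal harness. This is an illustrative assumption, not Mistral's actual pipeline: `grade_with_model` is a hypothetical stand-in for a real LLM-judge call (here a trivial token-overlap proxy so the sketch stays runnable), and the heuristic checks are simplified examples.

```python
from dataclasses import dataclass


@dataclass
class EvalCase:
    prompt: str
    reference: str  # expected answer for reference tests


def heuristic_checks(output: str) -> dict[str, bool]:
    """Cheap, deterministic checks that run on every output."""
    return {
        "non_empty": bool(output.strip()),
        "no_refusal": "I cannot" not in output,
        "reasonable_length": len(output) < 2000,
    }


def grade_with_model(output: str, reference: str) -> float:
    # Hypothetical model-graded check: in production this would call an
    # LLM judge; a token-overlap proxy keeps the sketch self-contained.
    out, ref = set(output.lower().split()), set(reference.lower().split())
    return len(out & ref) / max(len(ref), 1)


def run_eval(cases: list[EvalCase], generate) -> dict[str, float]:
    """Run all cases through `generate` and aggregate a task-success rate."""
    passed = 0
    for case in cases:
        output = generate(case.prompt)
        checks = heuristic_checks(output)
        score = grade_with_model(output, case.reference)
        if all(checks.values()) and score >= 0.5:
            passed += 1
    return {"task_success": passed / len(cases)}
```

In practice `generate` would wrap the deployed model or endpoint under test, so the same harness can compare prompts, models, or configurations on a fixed case set.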
About you
- 3-4 years of experience; backgrounds that fit well include ML engineers moving closer to product, or software engineers with real AI/ML production experience.
- Strong TypeScript or Python skills - we have both tracks depending on team fit.
- Production LLM experience: prompts, tool/function calling, system prompts.
- Hands-on with evals and A/B testing; you can design metrics, not just run them.
- Comfortable implementing directly in product code, not only notebooks.
- Observability experience: logging, tracing, dashboards, alerting.
- Product mindset: form hypotheses, run experiments, interpret results, ship.
- Clear communication, autonomy, and an orientation toward production impact over experimentation for its own sake.
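As one concrete example of the data-driven rollout decisions mentioned above, a two-proportion z-test over task-success counts is a standard way to decide between variants (a generic technique sketched under stated assumptions, not necessarily the stack used here):

```python
from math import sqrt, erf


def two_proportion_z(successes_a: int, n_a: int,
                     successes_b: int, n_b: int) -> tuple[float, float]:
    """z statistic and two-sided p-value for a difference in success rates
    between control (a) and treatment (b)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value


def rollout_decision(z: float, p_value: float, alpha: float = 0.05) -> str:
    """Turn the test result into a rollout, rollback, or hold decision."""
    if p_value >= alpha:
        return "keep control"  # no significant difference detected
    return "roll out" if z > 0 else "roll back"
```

For example, `rollout_decision(*two_proportion_z(450, 1000, 510, 1000))` compares a 45% control success rate against a 51% treatment rate over 1,000 sessions each.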
It would be ideal if you also have:
- Safety systems experience: moderation, PII handling/redaction, guardrails.
- Release operations: canary/shadowing, automated rollbacks, experiment platforms.
- Prior work on search ranking, chat systems, document AI, or audio ML features.
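The safety-systems bullet above can be illustrated with a minimal PII-redaction pass applied before outputs are logged. The two regex patterns are deliberately simplified assumptions; production systems rely on dedicated PII-detection models or libraries, not a pair of regexes.

```python
import re

# Simplified, illustrative patterns only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def redact_pii(text: str) -> str:
    """Replace detected PII spans with typed placeholders before logging."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Typed placeholders (rather than blanket deletion) keep redacted logs useful for debugging, since you can still see what kind of data appeared where.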
Hiring Process
- Introduction call - 30 min
- Hiring Manager interview - 30 min
- Technical rounds
  - Live-coding interview - 45 min
  - AI Engineering interview - 45 min
- Culture-fit discussion - 30 min
- References
About Mistral AI
Mistral AI (Series B) is a French artificial intelligence company that develops and provides large language models and AI solutions. The company focuses on creating efficient and powerful AI models for various applications.
- Employees: 51-200
- Headquarters: Paris
- Valuation: $6.0B
Reviews
Overall: 3.8 (10 reviews)
- Work-life balance: 2.5
- Compensation: 4.0
- Company culture: 4.2
- Career growth: 3.5
- Management: 2.3
- Would recommend to a friend: 72%
Pros
- Supportive team environment
- Good compensation and benefits
- Innovative projects and cutting-edge technology
Cons
- Poor management and lack of direction
- Work-life balance issues and heavy workload
- Fast-paced, stressful environment
Salary ranges (37 data points)
Levels reported: Mid/L4, Senior/L5, Staff/L6
Mid/L4 · Applied AI Engineer (2 reports)
- Total annual compensation: $214,500
- Base salary: $165,000
- Stock: -
- Bonus: -
- Reported range: $195,000 - $234,000
Interview experience (1 interview)
- Difficulty: 3.0 / 5
- Duration: 21-35 weeks
Interview process
1. Application Review
2. Recruiter Screen
3. Technical Interview
4. Research Presentation
5. Team Matching
6. Offer
Frequently asked question topics
- Machine Learning/AI Algorithms
- Research Experience
- Technical Knowledge
- Coding/Implementation
- Behavioral/STAR
News & Topics
- Generative AI Platforms - Trend Hunter (News, 5d ago)
- How France's Mistral Built A $14 Billion AI Empire By Not Being American - Forbes (News, 5d ago)
- Connect the dots: Build with built-in and custom MCPs in Studio - Mistral AI (News, 6d ago)