Hiring
Required Skills
C++
Python
Performance optimization
Distributed systems
Leadership
About Mistral
At Mistral AI, we believe in the power of AI to simplify tasks, save time and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.
We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work.
We are a dynamic, collaborative team passionate about AI and its potential to transform society.
Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed between France, USA, UK, Germany and Singapore. We are creative, low-ego and team-spirited.
Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact. See more about our culture at https://mistral.ai/careers.
Role summary
As the Technical Lead for the Inference team, you will drive the architecture and optimization of our inference backbone, ensuring high performance, scalability, and efficiency in a dynamic environment. You will lead the acquisition and automation of benchmarks, collaborate with cross-functional teams, and innovate solutions to enhance our AI-powered applications.
What you will do
- Architect and optimize the inference stack for high-volume, low-latency, and high-availability environments.
- Lead the acquisition and automation of benchmarks at both micro and macro scales.
- Introduce new techniques and tools to improve performance, latency, throughput, and efficiency in our model inference stack.
- Build tools to identify bottlenecks and sources of instability, and design solutions to address them.
- Collaborate with machine learning researchers, engineers, and product managers to bring cutting-edge technologies into production.
- Optimize code and infrastructure to maximize hardware utilization and efficiency.
- Mentor and guide team members, fostering a culture of collaboration, innovation, and continuous learning.
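To make the benchmarking responsibility above concrete, here is a minimal micro-benchmark sketch of the kind of per-call latency and throughput measurement the role involves. This is an illustrative stand-alone example, not Mistral's tooling; `fake_inference` is a hypothetical stand-in for a model forward pass.

```python
import time
import statistics

def benchmark(fn, *, warmup=3, iters=20):
    """Measure per-call latency (seconds) of `fn` and derive throughput."""
    for _ in range(warmup):          # warm caches before timing
        fn()
    samples = []
    for _ in range(iters):
        start = time.perf_counter()  # monotonic, high-resolution clock
        fn()
        samples.append(time.perf_counter() - start)
    mean = statistics.fmean(samples)
    return {
        "p50_s": statistics.median(samples),
        "mean_s": mean,
        "throughput_per_s": 1.0 / mean,
    }

# Hypothetical stand-in for a model forward pass.
def fake_inference():
    sum(i * i for i in range(10_000))

stats = benchmark(fake_inference)
```

Real inference benchmarks would additionally control for batch size, sequence length, and GPU synchronization, but the warmup-then-sample structure is the same at both micro and macro scales.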
About you
- Extensive experience in C++ and Python, with a strong focus on backend development and performance optimization.
- Deep understanding of modern ML architectures and experience with performance optimization for inference.
- Proven track record with large-scale distributed systems, particularly performance-critical ones.
- Familiarity with PyTorch, TensorRT, CUDA, NCCL.
- Strong grasp of infrastructure, continuous integration, and continuous deployment principles.
- Ability to lead and mentor team members, driving projects from concept to implementation.
- Results-oriented mindset with a bias towards flexibility and impact.
- Passion for staying ahead of emerging technologies and applying them to AI-driven solutions.
- Humble attitude, eagerness to help colleagues, and a desire to see the team succeed.
Our Culture
We're driven to build a strong company culture and are looking for individuals who align strongly with the following:
- Reason with rigor
- Are you audacious enough?
- Make our customers succeed
- Ship early and accelerate
- Leave your ego aside
About Mistral AI
Mistral AI (Series B) is a French artificial intelligence company that develops and provides large language models and AI solutions. The company focuses on creating efficient and powerful AI models for various applications.
- Employees: 51-200
- Headquarters: Paris
- Valuation: $6.0B
Reviews
4.2 overall (2 reviews)
- Work Life Balance: 3.5
- Compensation: 3.5
- Culture: 3.5
- Career: 3.5
- Management: 3.5
- Recommend to a Friend: 80%
Pros
- European-based AI solution
- Supports digital sovereignty
- EU-based alternative to geofenced AI models
Cons
- No specific cons mentioned; limited review data available
Salary Ranges
38 data points
Senior/L5 · Solution Architect (1 report): $241,500 total / year
- Base: $210,000
- Stock: -
- Bonus: -
Interview Experience
7 interviews · Difficulty: 3.6 / 5 · Duration: 21-35 weeks
Interview Process
1. Application Review
2. Recruiter Screen
3. Coding Round
4. ML Theory Assessment
5. LLM Systems/Design Interview
6. Project Deep Dive
7. Technical Interview
Common Questions
- Coding/Algorithm
- ML Theory
- System Design
- LLM Systems Design
- Project Experience
News & Buzz
- Mistral AI Upgrades Vibe Coding Agent (AI Business, 5w ago)
- Mistral AI on track to reach one billion euros in revenue by 2026 (Maddyness, 5w ago)
- A European AI challenger goes after GitHub Copilot: Mistral launches Vibe 2.0 (VentureBeat, 5w ago)
- Axios House: "There's a little more friction" as businesses race to adopt AI, Mistral AI CEO says (Axios, 5w ago)