ABOUT BASETEN
Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to when shipping AI products.
THE ROLE:
Baseten's Model Performance (MP) team is responsible for ensuring the models running on our platform are fast, reliable, and cost-efficient. As part of this team, you'll focus on Model APIs — the infrastructure powering our hosted API endpoints for the latest open-source models. This work spans distributed systems, model serving, and developer experience. You'll join a small, high-impact team operating at the intersection of product, model performance, and infra, helping to define how developers interact with AI models at scale.
RESPONSIBILITIES:
- Design, build, and operate the Model APIs surface, focusing on advanced inference capabilities: structured outputs (JSON mode, grammar-constrained generation), tool/function calling, and multi-modal serving.
- Profile and optimize TensorRT-LLM kernels: analyze CUDA kernel performance, implement custom CUDA operators, tune memory allocation patterns for maximum throughput, and optimize communication patterns across multi-GPU setups.
- Productionize performance improvements across runtimes (e.g. TensorRT, TensorRT-LLM) with a deep understanding of their internals: speculative decoding, quantization, batching, KV-cache reuse, guided generation for structured outputs, and custom scheduling and routing algorithms for high-performance serving.
- Build comprehensive benchmarking frameworks that measure real-world performance across different model architectures, batch sizes, sequence lengths, and hardware configurations.
- Instrument deep observability (metrics, traces, logs) and build repeatable benchmarks to measure speed, reliability, and quality.
- Implement platform fundamentals: API versioning, validation, usage metering, quotas, and authentication.
- Collaborate closely with other teams to deliver robust, developer-friendly model serving experiences.
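To give a flavor of the benchmarking work described above, here is a minimal sketch of summarizing a benchmark run; all names, numbers, and the serial-replay assumption are hypothetical illustrations, not Baseten's internal tooling:

```python
import statistics

def percentile(samples, p):
    """Nearest-rank percentile of a sample list (0 < p <= 100)."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

def summarize(requests):
    """Summarize a benchmark run.

    `requests` is a list of (latency_seconds, tokens_generated) pairs,
    e.g. collected while replaying traffic at a fixed batch size and
    sequence length.
    """
    latencies = [lat for lat, _ in requests]
    total_tokens = sum(tok for _, tok in requests)
    # Serial replay, so total time is the sum of latencies; a concurrent
    # harness would divide by wall-clock time instead.
    total_time = sum(latencies)
    return {
        "p50_s": percentile(latencies, 50),
        "p99_s": percentile(latencies, 99),
        "mean_s": statistics.mean(latencies),
        "tokens_per_s": total_tokens / total_time,
    }

# Example: four requests, each (latency in seconds, tokens generated).
stats = summarize([(0.5, 100), (0.6, 120), (0.7, 140), (2.0, 80)])
```

Tail percentiles (p99) matter more than means for user-facing inference, which is why the summary reports both.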
REQUIREMENTS:
- 3+ years of experience building and operating distributed systems or large-scale APIs.
- Proven track record of owning low-latency, reliable backend services (rate-limiting, auth, quotas, metering, migrations).
- Infra instincts with performance sensibilities: profiling, tracing, capacity planning, and SLO management.
- Comfortable debugging complex systems, from runtime internals to GPU execution traces.
- Strong written communication; able to produce clear design docs and collaborate across functions.
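The rate-limiting and quota work mentioned above is commonly built on a token bucket. A minimal sketch (hypothetical, with an injected clock for determinism; production code would use `time.monotonic`):

```python
class TokenBucket:
    """Token-bucket rate limiter: refills at `rate` tokens/sec, bursts up to `capacity`."""

    def __init__(self, rate, capacity, clock):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.clock = clock  # injected for testability
        self.last = clock()

    def allow(self, cost=1):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# Deterministic fake clock for the example.
t = [0.0]
bucket = TokenBucket(rate=1.0, capacity=2, clock=lambda: t[0])

burst = [bucket.allow() for _ in range(3)]  # third request exceeds the burst
t[0] = 1.5                                  # 1.5 s later, 1.5 tokens refilled
later = bucket.allow()
```

The same structure extends to per-API-key quotas by keeping one bucket per key and charging `cost` by tokens generated rather than by request.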
NICE TO HAVE:
- Experience with, or open-source contributions to, LLM inference engines (vLLM, SGLang, TensorRT-LLM, TGI).
- Knowledge of Kubernetes, service meshes, API gateways, or distributed scheduling.
- Background in developer-facing infrastructure or open-source APIs.

We value infra-leaning generalists who bring strong engineering fundamentals and curiosity. ML experience is a plus, but not required.
BENEFITS:
- Competitive compensation, including meaningful equity.
- 100% coverage of medical, dental, and vision insurance for employees and dependents.
- Generous PTO policy, including a company-wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!).
- Paid parental leave.
- Company-facilitated 401(k).
- Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.
Apply now to embark on a rewarding journey in shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative and forward-thinking team, we would love to hear from you.
At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.