Job Posting
Benefits & Perks
• Parental Leave
• Equity
• Healthcare
Required Skills
Node.js
React
TypeScript
ABOUT BASETEN
Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to when shipping AI products.
THE ROLE:
Baseten’s Model Performance (MP) team is responsible for ensuring the models running on our platform are fast, reliable, and cost‑efficient. As part of this team, you’ll focus on Model APIs — the infrastructure powering our hosted API endpoints for the latest open‑source models. This work spans distributed systems, model serving, and developer experience. You’ll join a small, high‑impact team operating at the intersection of product, model performance, and infra, helping to define how developers interact with AI models at scale.
RESPONSIBILITIES:
- Design, build, and operate the Model APIs surface with a focus on advanced inference capabilities: structured outputs (JSON mode, grammar-constrained generation), tool/function calling, and multi-modal serving.
- Profile and optimize TensorRT-LLM kernels: analyze CUDA kernel performance, implement custom CUDA operators, tune memory allocation patterns for maximum throughput, and optimize communication patterns across multi-GPU setups.
- Productionize performance improvements across runtimes (e.g. TensorRT, TensorRT‑LLM) with a deep understanding of their internals: speculative decoding, quantization, batching, KV‑cache reuse, guided generation for structured outputs, and custom scheduling and routing algorithms for high-performance serving.
- Build comprehensive benchmarking frameworks that measure real-world performance across different model architectures, batch sizes, sequence lengths, and hardware configurations.
- Instrument deep observability (metrics, traces, logs) and build repeatable benchmarks to measure speed, reliability, and quality.
- Implement platform fundamentals: API versioning, validation, usage metering, quotas, and authentication.
- Collaborate closely with other teams to deliver robust, developer‑friendly model serving experiences.
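To give a concrete flavor of the benchmarking work described above: a serving benchmark ultimately boils down to timing many requests and reporting latency percentiles. The sketch below is illustrative only, not Baseten's tooling; `fn` stands in for a real inference call against a model endpoint.

```python
import statistics
import time


def benchmark(fn, n_requests=50):
    """Call fn n_requests times and report latency percentiles in milliseconds.

    fn stands in for a single inference request; a real harness would also
    sweep batch size, sequence length, and hardware configuration.
    """
    latencies = []
    for _ in range(n_requests):
        start = time.perf_counter()
        fn()
        latencies.append((time.perf_counter() - start) * 1000.0)
    latencies.sort()
    return {
        "p50_ms": latencies[len(latencies) // 2],
        "p99_ms": latencies[min(len(latencies) - 1, int(len(latencies) * 0.99))],
        "mean_ms": statistics.mean(latencies),
    }


if __name__ == "__main__":
    # Simulate a ~1 ms inference call; swap in a real client call in practice.
    print(benchmark(lambda: time.sleep(0.001)))
```

In production, the interesting part is what replaces the lambda: streaming vs. non-streaming calls, time-to-first-token vs. total latency, and concurrency, all of which change the percentiles materially.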
REQUIREMENTS:
- 3+ years of experience building and operating distributed systems or large‑scale APIs.
- Proven track record of owning low‑latency, reliable backend services (rate limiting, auth, quotas, metering, migrations).
- Infra instincts with performance sensibilities: profiling, tracing, capacity planning, and SLO management.
- Comfortable debugging complex systems, from runtime internals to GPU execution traces.
- Strong written communication; able to produce clear design docs and collaborate across functions.
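The rate-limiting and quota work mentioned above often reduces to a token bucket at its core. Here is a minimal sketch (illustrative, not Baseten's implementation); the injectable clock is an assumption made for testability.

```python
import time


class TokenBucket:
    """Minimal token-bucket rate limiter.

    Holds up to `capacity` tokens, refilled continuously at `rate` tokens
    per second. Each request consumes one token; when the bucket is empty
    the request is rejected. The clock is injectable for deterministic tests.
    """

    def __init__(self, capacity: int, rate: float, clock=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A production limiter layers on per-tenant keys, distributed state, and usage metering, but the burst-then-refill behavior is the same.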
NICE TO HAVE:
- Experience with or contributions to open-source LLM inference engines (vLLM, SGLang, TensorRT‑LLM, TGI).
- Knowledge of Kubernetes, service meshes, API gateways, or distributed scheduling.
- Background in developer‑facing infrastructure or open‑source APIs.
- We value infra‑leaning generalists who bring strong engineering fundamentals and curiosity. ML experience is a plus, but not required.
BENEFITS
- Competitive compensation, including meaningful equity.
- 100% coverage of medical, dental, and vision insurance for employees and dependents.
- Generous PTO policy, including a company-wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!).
- Paid parental leave.
- Company-facilitated 401(k).
- Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.
Apply now to embark on a rewarding journey in shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative and forward-thinking team, we would love to hear from you.
At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.