ABOUT BASETEN
Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E (https://www.baseten.co/blog/announcing-baseten-s-300m-series-e/), backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to when shipping AI products.
THE ROLE:
Are you passionate about advancing the application of artificial intelligence? We are looking for a Software Engineer focused on ML performance to join our dynamic team. This role is ideal for someone who thrives in a fast-paced startup environment and is eager to make significant contributions to the exciting field of LLM inference. If you are a backend engineer who loves making things faster and is excited about open-source ML models, we look forward to your application.
EXAMPLE INITIATIVES
You'll get to work on these types of projects as part of our Model Performance team:
- Baseten Embeddings Inference: the fastest embeddings solution available (https://www.baseten.co/blog/introducing-baseten-embeddings-inference-bei/)
- The Baseten Inference Stack (https://www.baseten.co/resources/guide/the-baseten-inference-stack/)
- Driving model performance optimization (https://www.baseten.co/blog/driving-model-performance-optimization-2024-highlights/)
RESPONSIBILITIES:
- Implement, refine, and productionize cutting-edge techniques (quantization, speculative decoding, KV cache reuse, chunked prefill, and LoRA) for ML model inference and infrastructure.
- Dive deep into the underlying codebases of TensorRT, PyTorch, TensorRT-LLM, vLLM, SGLang, CUDA, and other libraries to debug ML performance issues.
- Apply and scale optimization techniques across a wide range of ML models, particularly large language models.
- Collaborate with a diverse team to design and implement innovative solutions.
- Own projects from idea to production.
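To give a flavor of one technique named above, here is a toy sketch of speculative decoding (greedy variant): a cheap draft model proposes several tokens, and the target model verifies them, accepting the longest matching prefix. The `draft` and `target` callables below are hypothetical deterministic stand-ins, not real LLMs or Baseten's implementation.

```python
def speculative_step(tokens, draft, target, k=4):
    """One speculative decoding step (greedy variant, toy sketch)."""
    # 1) The cheap draft model proposes k tokens autoregressively.
    ctx = list(tokens)
    proposal = []
    for _ in range(k):
        t = draft(ctx)
        proposal.append(t)
        ctx.append(t)

    # 2) The target model verifies the proposal. In a real engine this is
    #    a single batched forward pass; here we check positions one by one.
    accepted = []
    ctx = list(tokens)
    for t in proposal:
        expected = target(ctx)
        if t != expected:
            # First mismatch: keep the target's own token and stop.
            accepted.append(expected)
            break
        accepted.append(t)
        ctx.append(t)
    else:
        # Every draft token matched; the target grants one bonus token,
        # so a step always emits between 1 and k+1 tokens.
        accepted.append(target(ctx))
    return tokens + accepted
```

When the draft agrees with the target, each step emits up to k+1 tokens for one (batched) target pass, which is where the latency win comes from.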
REQUIREMENTS:
- Bachelor's, Master's, or Ph.D. degree in Computer Science, Engineering, Mathematics, or a related field.
- Experience with one or more general-purpose programming languages, such as Python or C++.
- Familiarity with LLM optimization techniques (e.g., quantization, speculative decoding, continuous batching).
- Strong familiarity with ML libraries, especially PyTorch, TensorRT, or TensorRT-LLM.
- Demonstrated interest and experience in LLMs.
- Deep understanding of GPU architecture.

Bonus:
- Proficiency in enhancing the performance of software systems, particularly in the context of large language models (LLMs).
- Experience with CUDA or similar technologies.
- Deep understanding of software engineering principles and a proven track record of developing and deploying AI/ML inference solutions.
- Experience with Docker and Kubernetes.
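As an illustration of the quantization techniques mentioned in the requirements, here is a minimal sketch of symmetric per-tensor int8 quantization in plain Python. It is a toy (real serving stacks quantize per-channel or per-group on GPU tensors), but it shows the core idea: one floating-point scale plus int8 values in place of fp16/fp32 weights.

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map each weight to
    round(w / scale), where scale spans the max magnitude."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # guard all-zero input
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate floating-point weights from int8 values."""
    return [v * scale for v in q]
```

The round-trip error per weight is bounded by half the scale, which is the trade-off quantization makes for 2-4x smaller weights and faster memory-bound kernels.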
BENEFITS:
- Competitive compensation, including meaningful equity.
- 100% coverage of medical, dental, and vision insurance for employees and dependents.
- Generous PTO policy, including a company-wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!).
- Paid parental leave.
- Company-facilitated 401(k).
- Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.
Apply now to embark on a rewarding journey in shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative and forward-thinking team, we would love to hear from you.
At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.