Required Skills
Python
TypeScript
Go
Rust
About the Role
Together AI is building the Inference Platform that brings the most advanced generative AI models to the world. Our platform powers multi-tenant serverless workloads and dedicated endpoints, enabling developers, enterprises, and researchers to harness the latest LLMs and multimodal, image, audio, video, and speech models at scale.
If you get a thrill from optimizing latency down to the last millisecond, this is your playground. You’ll work hands-on with tens of thousands of GPUs (H100s, H200s, GB200s, and beyond), figuring out how to fully utilize every FLOP and every gigabyte of memory.
You’ll collaborate directly with research teams to bring frontier models into production, making breakthroughs usable in the real world. Our team also works closely with the open source community, contributing to and leveraging projects like SGLang, vLLM, and NVIDIA Dynamo to push the boundaries of inference performance and efficiency.
- Shape the core inference backbone that powers Together AI’s frontier models.
- Solve performance-critical challenges in global request routing, load balancing, and large-scale resource allocation.
- Work with state-of-the-art accelerators (H100s, H200s, GB200s) at global scale.
- Partner with world-class researchers to bring new model architectures into production.
- Collaborate with and contribute to the open source community, shaping the tools that advance the industry.
- A culture of deep technical ownership and high impact, where your work makes models faster, cheaper, and more accessible.
- Competitive compensation, equity, and benefits.
Responsibilities
- Build and optimize global and local request routing, ensuring low-latency load balancing across data centers and model engine pods.
- Develop auto-scaling systems to dynamically allocate resources and meet strict SLOs across dozens of data centers.
- Design systems for multi-tenant traffic shaping, tuning both resource allocation and request handling (including smart rate limiting and regulation) to ensure fairness and a consistent experience across all users.
- Engineer trade-offs between latency and throughput to serve diverse workloads efficiently.
- Optimize prefix caching to reduce model compute and speed up responses.
- Collaborate with ML researchers to bring new model architectures into production at scale.
- Continuously profile and analyze system-level performance to identify bottlenecks and implement optimizations.
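To make the routing and prefix-caching responsibilities above concrete, here is an illustrative sketch (not Together AI's actual system; all names are hypothetical) of prefix-cache-aware routing: requests whose prompts share a leading prefix (e.g. a common system prompt) are sent to the same engine pod via a consistent-hash ring, so cached KV state can be reused.

```python
import hashlib
from bisect import bisect

class PrefixAwareRouter:
    """Illustrative sketch: route requests so prompts sharing a prefix
    land on the same pod, maximizing prefix-cache (KV-cache) reuse."""

    def __init__(self, pods, prefix_tokens=64, replicas=100):
        # Consistent-hash ring: each pod gets `replicas` virtual nodes,
        # so adding/removing a pod only remaps a small key range.
        self.ring = sorted(
            (int(hashlib.md5(f"{pod}#{i}".encode()).hexdigest(), 16), pod)
            for pod in pods
            for i in range(replicas)
        )
        self.keys = [k for k, _ in self.ring]
        self.prefix_tokens = prefix_tokens

    def route(self, prompt_tokens):
        # Hash only the leading tokens: prompts that share this prefix
        # map to the same pod, which likely holds the cached prefix.
        key = hashlib.md5(
            repr(prompt_tokens[: self.prefix_tokens]).encode()
        ).hexdigest()
        idx = bisect(self.keys, int(key, 16)) % len(self.ring)
        return self.ring[idx][1]

router = PrefixAwareRouter(["pod-a", "pod-b", "pod-c"])
pod = router.route(list(range(200)))  # picks one pod deterministically
```

A production router would layer load and health signals on top of this affinity (falling back to the least-loaded pod when the preferred one is saturated), but the core trade-off, cache locality versus load spread, is visible even in this toy version.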
Requirements
- 5+ years of demonstrated experience building large-scale, fault-tolerant distributed systems and API microservices.
- Strong background in designing, analyzing, and improving the efficiency, scalability, and stability of complex systems.
- Excellent understanding of low-level OS concepts: multi-threading, memory management, networking, and storage performance.
- Expert-level programming in one or more of Rust, Go, Python, or TypeScript.
- Knowledge of modern LLMs and generative models, and how they are served in production, is a plus.
- Experience with the open source ecosystem around inference is highly valuable; familiarity with SGLang, vLLM, or NVIDIA Dynamo is especially useful.
- Experience with Kubernetes or container orchestration is a strong plus.
- Familiarity with GPU software stacks (CUDA, Triton, NCCL) and HPC technologies (InfiniBand, NVLink, MPI) is a plus.
- Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or a related field, or equivalent practical experience.
About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers and engineers on our journey to build the next generation of AI infrastructure.
Compensation
We offer competitive compensation, startup equity, health insurance, and other competitive benefits. The US base salary range for this full-time position is $160,000 - $250,000, plus equity and benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.
Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
Please see our privacy policy at https://www.together.ai/privacy