About the Role
Together AI is building the best inference infrastructure for voice applications. Our Voice AI platform powers production-grade, real-time voice agents and applications — serving speech-to-text and text-to-speech models with best-in-class latency and reliability.
We're looking for a Senior ML Engineer to drive the model serving layer for voice workloads. You'll work hands-on with inference engines like TRT-LLM and SGLang to optimize how we serve models like Whisper, Parakeet, Orpheus, and Kokoro — pushing latency and throughput to the frontier. You'll profile GPU utilization, design batching strategies for streaming audio, and ensure new model architectures can go from research to production quickly.
This is a foundational hire on a small, high-impact team. Voice inference has unique challenges — streaming audio, tokenization, real-time latency budgets — that require dedicated ML engineering focus. You'll shape how Together serves voice models as the industry moves from pipeline architectures (ASR → LLM → TTS) toward end-to-end speech-to-speech.
- Own the model serving stack that powers Together's voice platform across STT, TTS, and speech-to-speech.
- Work directly with state-of-the-art accelerators (H100s, H200s, B200s) to optimize voice model inference.
- Collaborate with model partners (Cartesia, Deepgram, Rime, and others) to bring their models to production on Together's infrastructure.
- Build quality evaluation frameworks that guide model selection for customers and inform the roadmap.
- Join a small, early-stage team with outsized impact on a fast-growing product area.
Responsibilities
- Optimize inference performance for voice models (STT, TTS, speech-to-speech) — targeting best-in-class TTFB, throughput, and GPU utilization across our curated model set.
- Productionize voice models on serverless and dedicated endpoints, including batching strategies, streaming inference, and memory management tailored to audio workloads.
- Build and maintain a voice model evaluation framework — measuring WER across accents, languages, and noise conditions for STT; naturalness, latency, and pronunciation accuracy for TTS.
- Enable new model architectures in our serving stack as the field evolves, including audio-native LLMs, codec-based models (SNAC), and speech-to-speech systems.
- Collaborate with model partners (Cartesia, Deepgram, Rime, and others) to integrate and optimize their models running on Together's infrastructure.
- Profile and debug performance across the full inference stack — from GPU kernels to framework-level bottlenecks — and ship measurable improvements.
- Work with the platform engineering side of the team to ensure the serving layer meets the latency and reliability requirements of real-time voice APIs.
- Contribute to voice model fine-tuning capabilities (STT and TTS) as we enable customers to build differentiated voice experiences on Together.
- Lay the groundwork for multiple new products down the line.
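For context on the evaluation work described above: WER (word error rate) is the word-level edit distance between a model's transcript and the reference, normalized by reference length. A minimal sketch of the metric, assuming a plain Python implementation — a production STT evaluation harness would also normalize text (casing, punctuation, numerals) before scoring:

```python
# Word Error Rate: Levenshtein distance over words, divided by the
# number of reference words. Counts substitutions, insertions, deletions.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table: d[i][j] = edits to turn ref[:i] into hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six reference words -> WER of 1/6.
print(wer("the cat sat on the mat", "the cat sat on mat"))
```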
Requirements
- 5+ years of experience in ML engineering, with a focus on model serving, inference optimization, or ML infrastructure.
- Hands-on experience with LLM serving engines (vLLM, SGLang, TensorRT-LLM, or similar) — comfortable reading and modifying engine internals, not just using APIs.
- Strong proficiency in Python and PyTorch; experience with GPU profiling and optimization (CUDA, memory management, kernel-level debugging).
- Track record of shipping ML systems to production with measurable performance improvements.
- Strong product sense — you think about what developers building voice apps actually need, not just what's technically interesting.
- Comfort working on a small, early-stage team where you'll wear multiple hats and move fast.
- Experience with speech and audio ML (ASR, TTS architectures, audio signal processing) is a strong plus but not required — you can learn this quickly if you have strong ML engineering fundamentals.
- Familiarity with audio codecs and tokenization schemes (SNAC, Encodec, DAC) is a plus.
- Experience training or fine-tuning speech models is a plus.
- Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field, or equivalent practical experience.
About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers and engineers on our journey to build the next generation of AI infrastructure.
Compensation
We offer competitive compensation, startup equity, health insurance, and other benefits. The US base salary range for this full-time position is $200,000 - $260,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.
Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
Please see our privacy policy at https://www.together.ai/privacy