Job Posting

Member of Technical Staff - ML Research Engineer, Multi-Modal - Audio
San Francisco · On-site · Full-time · 1mo ago
Required Skills
PyTorch
ABOUT LIQUID AI:
Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.
THE OPPORTUNITY:
Our Audio team is building frontier speech-language models that handle STT, TTS, and speech-to-speech in a single architecture. This role sits at the center of applied audio model development, working directly with the technical lead to ship production systems that run on-device under real-time constraints. You will own critical workstreams across data pipelines, evaluation systems, and customer deployments. If you want high ownership on rare technical problems in a small, elite team where your code ships, this is the role.
WHAT WE'RE LOOKING FOR:
We need someone who:
- Builds first, theorizes later: You ship working systems, not just notebooks. Production-grade code is your default, not a stretch goal.
- Owns outcomes end-to-end: From data pipelines to customer deployments, you take responsibility for the full stack without waiting for someone else to handle the hard parts.
- Thrives under constraints: On-device, low-latency, memory-limited systems excite you. You see constraints as design parameters, not blockers.
- Ramps quickly on new territory: Gaps in specific subdomains are fine if you close them fast. You seek out feedback and stay focused on what moves the needle.
THE WORK:
- Build and scale data pipelines for audio model training, including preprocessing, augmentation, and quality filtering at scale
- Design, implement, and maintain evaluation systems that measure multimodal performance across internal and public benchmarks
- Fine-tune and adapt audio models for customer-specific use cases, owning delivery from requirements through deployment
- Contribute production code to the core audio repository, collaborating with infrastructure and research teams
- Support experimentation under real hardware constraints, shifting between customer work and core development as priorities evolve
DESIRED EXPERIENCE:
Must-have:
- Strong programming fundamentals with demonstrated ability to write clean, maintainable, production-grade code
- Experience building and shipping production ML systems beyond model training (data pipelines, evals, serving infrastructure)
- Proficiency in PyTorch and familiarity with distributed training frameworks (DeepSpeed, FSDP, or similar)
- Track record of collaborating effectively in shared codebases with high engineering standards
Nice-to-have:
- Direct experience with audio/speech models (ASR, TTS, vocoders, diarization, or speech-to-speech systems)
- Experience designing and running large-scale training experiments on distributed GPU clusters
- Open-source contributions that demonstrate code quality and engineering judgment
WHAT SUCCESS LOOKS LIKE (YEAR ONE):
- Within 6 months, you independently deliver production-ready data pipelines or evaluation systems and own at least one customer workstream end-to-end
- Your PRs to the core audio repo are accepted without heavy rework, demonstrating strong judgment in system design
- By year end, you operate as a second pillar to the technical lead, unblocking parallel workstreams and raising overall team velocity
WHAT WE OFFER:
- Rare technical problems: Work on audio-to-audio frontier systems with real ownership in a team small enough that your contributions ship directly to production.
- Compensation: Competitive base salary with equity in a unicorn-stage company
- Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
- Financial: 401(k) matching up to 4% of base pay
- Time Off: Unlimited PTO plus company-wide Refill Days throughout the year
About Liquid AI

Liquid AI · Series A
Liquid AI is an artificial intelligence company focused on developing liquid neural networks and dynamic AI systems. The company specializes in creating adaptive neural architectures inspired by biological systems.
Employees: 51-200
Headquarters: Cambridge
Valuation: $2.3B
Reviews: 4.1 (10 reviews)
Work-life balance: 3.5
Compensation: 3.2
Culture: 4.3
Career: 3.4
Leadership: 3.8
75% would recommend to a friend
Pros
Flexible work hours
Great team culture and collaborative environment
Supportive management and leadership
Cons
Heavy workload and occasional long hours
Compensation could be better
Lack of clear direction and communication issues
Salary Information (4 data points)
Staff/L6 · GTM STAFF - STRATEGIC PARTNERSHIPS (1 report)
Total compensation: $455,000
Base salary: $350,000
Stock: -
Bonus: -
News & Buzz
- AI Drive: Mercedes-Benz partners with Liquid AI for in-car intelligence - Collision Repair Mag · 1d ago
- How is AI Transforming Mercedes-Benz Vehicles? - AIM Media House · 1d ago
- Mercedes is bringing faster, more private voice control to US cars - How-To Geek · 2d ago
- Mercedes-Benz and Liquid AI Partner to Scale Embedded In-Car Intelligence in North America - TradingView · 2d ago