REQUIRED SKILLS:
PyTorch
ABOUT LIQUID AI:
Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.
THE OPPORTUNITY:
Our Training Infrastructure team is building the distributed systems that power our next-generation Liquid Foundation Models. As we scale, we need to design, implement, and optimize the infrastructure that enables large-scale training.
This is a high-ownership training systems role focused on runtime/performance/reliability (not a general platform/SRE role). You’ll work on a small team with fast feedback loops, building critical systems from the ground up rather than inheriting mature infrastructure.
While San Francisco and Boston are preferred, we are open to other locations.
WHAT WE'RE LOOKING FOR:
We need someone who:
- Loves distributed systems complexity: Our team builds systems that keep long training runs stable, debugs training failures across GPU clusters, and improves performance.
- Wants to build: We need builders who find satisfaction in robust, fast, reliable infrastructure.
- Thrives in ambiguity: Our systems support model architectures that are still evolving. We make decisions with incomplete information and iterate quickly.
- Aligns with team priorities and delivers: Our best engineers align with team priorities while pushing back with data when they see problems.
THE WORK
- Design and build core systems that make large training runs fast and reliable
- Build scalable distributed training infrastructure for GPU clusters
- Implement and tune parallelism/sharding strategies for evolving architectures
- Optimize distributed efficiency (topology-aware collectives, comm/compute overlap, straggler mitigation)
- Build data loading systems that eliminate I/O bottlenecks for multimodal datasets
- Develop checkpointing mechanisms balancing memory constraints with recovery needs
- Create monitoring, profiling, and debugging tools for training stability and performance
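To give a flavor of the checkpointing work above: a long run must never lose its last good checkpoint to a mid-write crash. A minimal sketch of the standard write-temp-then-atomic-rename pattern is below (illustrative only, using JSON for simplicity; a real training system would serialize sharded model/optimizer state across ranks, not a small dict):

```python
import json
import os
import tempfile


def save_checkpoint_atomically(state: dict, path: str) -> None:
    """Write a checkpoint so a crash mid-write never corrupts the last good copy.

    Pattern: serialize to a temp file in the same directory, fsync it,
    then os.replace() over the target -- atomic on POSIX filesystems.
    """
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
            f.flush()
            os.fsync(f.fileno())  # force bytes to disk before the rename
        os.replace(tmp_path, path)  # atomically swap in the new checkpoint
    except BaseException:
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)  # leave no partial temp file behind
        raise


def load_checkpoint(path: str) -> dict:
    with open(path) as f:
        return json.load(f)
```

Because the rename is atomic, a reader (or a restarted job) always sees either the old checkpoint or the new one, never a torn write.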
DESIRED EXPERIENCE:
Must-have:
- Hands-on experience building distributed training infrastructure (PyTorch Distributed DDP/FSDP, DeepSpeed ZeRO, Megatron-LM TP/PP)
- Experience diagnosing performance bottlenecks and failure modes (profiling, NCCL/collectives issues, hangs, OOMs, stragglers)
- Understanding of hardware accelerators and networking topologies
- Experience optimizing data pipelines for ML workloads
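For a sense of the diagnostic work implied above: when a collective hangs or a rank dies silently, a common first step is turning on verbose NCCL and c10d logging before the process group initializes. A hedged sketch (exact variable names vary across PyTorch/NCCL versions; treat these as the commonly documented ones, not an exhaustive or version-pinned list):

```python
import os

# Commonly used first-step diagnostics for NCCL hangs and collective
# failures. These are assumptions based on widely documented env vars;
# check the NCCL/PyTorch docs for the versions actually deployed.
DEBUG_ENV = {
    "NCCL_DEBUG": "INFO",                 # per-rank NCCL logging
    "NCCL_DEBUG_SUBSYS": "INIT,COLL",     # narrow logs to init + collectives
    "TORCH_DISTRIBUTED_DEBUG": "DETAIL",  # extra c10d consistency checks
    "CUDA_LAUNCH_BLOCKING": "1",          # serialize launches to localize errors
}


def enable_collective_debugging() -> None:
    """Set diagnostic env vars; must run before init_process_group()."""
    for key, value in DEBUG_ENV.items():
        os.environ.setdefault(key, value)  # don't clobber operator overrides
```

In practice these flags cost throughput, so they are flipped on for a repro run rather than left enabled in production training.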
Nice-to-have:
- MoE (Mixture of Experts) training experience
- Large-scale distributed training (100+ GPUs)
- Open-source contributions to training infrastructure projects
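On the MoE point above, the core systems-relevant mechanism is top-k gating: each token is routed to only its k highest-scoring experts, which is what creates the all-to-all communication and load-balancing challenges at training time. A minimal, framework-free sketch of the gating math (illustrative only; real implementations operate on batched tensors and add load-balancing losses):

```python
import math


def top_k_gating(logits: list[float], k: int = 2) -> list[tuple[int, float]]:
    """Route to the k highest-scoring experts with renormalized weights.

    Returns (expert_index, routing_weight) pairs whose weights sum to 1,
    ordered from highest to lowest weight.
    """
    # Softmax over all expert logits (numerically stabilized).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Keep the top-k experts, then renormalize so their weights sum to 1.
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    mass = sum(probs[i] for i in top)
    return [(i, probs[i] / mass) for i in top]
```

Because only k of the experts run per token, compute stays roughly constant as expert count grows, at the cost of routing imbalance and cross-device traffic that the training infrastructure has to manage.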
WHAT SUCCESS LOOKS LIKE (YEAR ONE)
- Training throughput has increased
- Overall training efficiency/cost has improved
- Training stability has improved (fewer failures, faster recovery)
- Data loading bottlenecks are eliminated for multimodal workloads
WHAT WE OFFER:
- Greenfield challenges: Build systems from scratch for novel architectures. High ownership from day one.
- Compensation: Competitive base salary with equity in a unicorn-stage company
- Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
- Financial: 401(k) matching up to 4% of base pay
- Time Off: Unlimited PTO plus company-wide Refill Days throughout the year