REQUIRED SKILLS:
PyTorch
ABOUT LIQUID AI:
Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.
THE OPPORTUNITY:
Our models and workflows require performance work that generic frameworks don’t solve. You’ll design and ship custom CUDA kernels, profile at the hardware level, and integrate research ideas into production code that delivers measurable speedups in real pipelines (training, post-training, and inference). Our team is small, fast-moving, and high-ownership. We're looking for someone who finds joy in memory hierarchies, tensor cores, and profiler output.
While San Francisco and Boston are preferred, we are open to other locations.
WHAT WE'RE LOOKING FOR:
We need someone who:
- Works profiler-first: You use tools like Nsight Systems / Nsight Compute to find bottlenecks, validate hypotheses, and iterate until improvements show up in end-to-end benchmarks.
- Bridges theory and practice: You can translate ideas from papers into implementations that are robust, testable, and performant.
- Executes independently: Given an ambiguous bottleneck, you can drive it end to end: profiling, kernel/integration changes, benchmarked results, and ongoing ownership.
- Cares about the details: Memory hierarchy, occupancy, launch configs, tensor core utilization, bandwidth vs. compute limits.
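To make the bandwidth-vs-compute distinction concrete, here is a roofline-style back-of-envelope check. This is a minimal sketch; the peak numbers are rough A100-class figures assumed for illustration, not values from this posting.

```python
# Roofline-style classification: is a kernel bandwidth-bound or compute-bound?
# Peak figures below are rough A100-class numbers, assumed for illustration.
PEAK_FLOPS = 19.5e12   # FP32 FLOP/s (assumed)
PEAK_BW = 1.555e12     # HBM bytes/s (assumed)

def classify(flops: float, bytes_moved: float) -> str:
    """Compare a kernel's arithmetic intensity (FLOP/byte) to the machine
    balance point; below the balance point, memory bandwidth is the limit."""
    intensity = flops / bytes_moved
    balance = PEAK_FLOPS / PEAK_BW  # ~12.5 FLOP/byte with the figures above
    return "compute-bound" if intensity >= balance else "bandwidth-bound"

# Elementwise fp32 add: 1 FLOP per 12 bytes (two reads + one write)
print(classify(1.0, 12.0))     # bandwidth-bound
# Large GEMM tile: thousands of FLOPs per byte moved
print(classify(4096.0, 12.0))  # compute-bound
```

This kind of estimate tells you before profiling whether to chase memory traffic (fusion, tiling, vectorized loads) or math throughput (tensor core utilization).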
THE WORK:
- Write high-performance GPU kernels for our novel model architectures
- Integrate kernels into PyTorch pipelines (custom ops, extensions, dispatch, benchmarking)
- Profile and optimize training and inference workflows to eliminate bottlenecks
- Build correctness tests and numerics checks
- Build and maintain performance benchmarks and guardrails to prevent regressions
- Collaborate closely with researchers to turn promising ideas into shipped speedups
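The benchmark-and-guardrail responsibilities above can be sketched in miniature. This is a hypothetical harness using only the standard library; the names (`bench`, `check_regression`) and the 10% tolerance are assumptions for illustration, not the team's actual tooling.

```python
import statistics
import time

def bench(fn, *args, repeats: int = 5) -> float:
    """Median wall-clock seconds over several runs; the median resists
    one-off scheduler noise better than the mean."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

def check_regression(measured_s: float, baseline_s: float,
                     tolerance: float = 0.10) -> bool:
    """Guardrail: fail if measured time exceeds the stored baseline by
    more than `tolerance`, so a slowdown is caught in CI, not in prod."""
    return measured_s <= baseline_s * (1.0 + tolerance)

workload = lambda n: sum(i * i for i in range(n))
t = bench(workload, 100_000)
assert check_regression(t, baseline_s=t * 2)  # passes vs. a loose baseline
```

A real harness for GPU kernels would additionally synchronize the device around timing and pin baselines per hardware target, but the detect-early structure is the same.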
DESIRED EXPERIENCE:
Must-have:
- Authored custom CUDA kernels (not only calling cuDNN/cuBLAS)
- Strong understanding of GPU architecture and performance: memory hierarchy, warps, shared memory/register pressure, bandwidth vs. compute limits
- Proficiency with low-level profiling (Nsight Systems/Compute) and performance methodology
- Strong C/C++ skills
Nice-to-have:
- CUTLASS experience and tensor core utilization strategies
- Triton kernel experience and/or PyTorch custom op integration
- Experience building benchmark harnesses and perf regression tests
WHAT SUCCESS LOOKS LIKE (YEAR ONE):
- Measurable improvement on at least one critical end-to-end pipeline (throughput and/or latency), validated by repeatable benchmarks
- At least one research-driven technique shipped as a production kernel and maintained over time
- Performance regressions are detected early via benchmarks/guardrails, not discovered late
WHAT WE OFFER:
- Unique challenges: Our architectural innovations and efficiency requirements pose optimization problems you won't find elsewhere. High ownership from day one.
- Compensation: Competitive base salary with equity in a unicorn-stage company
- Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
- Financial: 401(k) matching up to 4% of base pay
- Time Off: Unlimited PTO plus company-wide Refill Days throughout the year