REQUIRED SKILLS:
- Python
- PyTorch
- TensorFlow
- Machine Learning
ABOUT LIQUID LABS:
Research has been core to Liquid AI from the beginning.
Liquid Labs gives that work a formal home: an internal research accelerator driving fundamental breakthroughs in the science of building intelligent, personalized, and adaptive machines.
Our origins trace back to MIT CSAIL, where the foundational work on Liquid Neural Networks defined a new class of dynamical, efficient sequence-processing architectures. That research became the basis for Liquid Foundation Models (LFMs): scalable, multimodal models built for real-world deployment in resource-constrained environments.
At Liquid Labs, we extend that lineage, pushing forward the frontier of efficient, adaptive intelligence through both fundamental research and practical engineering.
We work hand-in-hand with Liquid’s core foundation model and systems teams to translate theory into deployed capability — defining a new generation of intelligent systems that are both powerful and efficient.
ABOUT THE ROLE:
As a Research Engineer, you’ll join a small, high-context team exploring the limits of adaptive intelligence. You’ll design and implement novel architectures, training methods, and inference strategies to redefine what efficient AI can do.
You’ll operate at the intersection of research and engineering — translating scientific ideas into working systems, publishing where it drives the field forward, and deploying where it changes what’s possible.
While San Francisco and Boston are preferred, we are open to other locations in the United States.
THIS ROLE IS FOR YOU IF YOU:
- Work fluently in Python and frameworks such as PyTorch, JAX, or TensorFlow
- Have experience in machine learning research or production-grade ML systems
- Move fast from paper to prototype — curiosity backed by precision
- Care about efficiency, scalability, and elegant system design as scientific principles
- Value small, deep-technical teams where impact is immediate and measurable
- Have a track record of publication in tier-1 venues (NeurIPS, ICML, ICLR, CVPR, ACL, or equivalent), demonstrating original contribution and research rigor
OPEN SCIENCE AND IMPACT:
Liquid Labs reinforces our commitment to transparent, reproducible, open research.
We publish through technical reports, architectural deep dives, ablations, and model releases, advancing the broader science of efficient AI while translating breakthroughs into production-ready systems.
WHY LIQUID LABS:
Liquid Labs is for researchers who build.
Those who care about lasting impact more than publication count, but who hold themselves to the same scientific standard.
We don’t chase benchmarks; we redefine them.
We move fast, think deeply, and measure success by the systems that endure.
There is no application deadline. We review candidates on a rolling basis.