Liquid AI

ICLR 2026

San Francisco · On-site · Full-time · 1d ago

About Liquid AI

Spun out of MIT CSAIL, we build foundation models from scratch. Our Liquid Foundation Models (LFMs) are built on a fundamentally different hybrid architecture: they deliver faster inference and lower memory use, and they deploy where traditional models can't. We ship open-weight text, vision-language, and audio-language models that run on phones, laptops, vehicles, and embedded devices.

Why We're at ICLR

We're here because ICLR brings together the people working on the problems we care most about: efficient architectures, representation learning, multimodal reasoning, and the science of how models learn. If our conversation at the booth was interesting, this is the next step.

What We're Building

Liquid AI is hiring across several research and engineering areas. You don't need to fit neatly into one. If your work touches any of these, we want to talk:

  • Efficient Architectures. State space models, hybrid attention designs, neural ODEs, and alternatives to the transformer paradigm. Our LFM2 architecture combines gated short convolutions with grouped query attention. We're looking for people who think about what comes next.

  • Multimodal Vision. Vision-language models that run on-device under tight latency and memory constraints. Our VLM team has shipped multiple best-in-class models and owns the full pipeline from architecture through deployment.

  • Multimodal Audio. Speech foundation models, end-to-end audio-language systems, and real-time voice on constrained hardware. Our LFM2.5-Audio runs natively on devices with dramatically faster decoding than its predecessor.

  • Data Engineering. Pre-training data curation, synthetic data generation, data mixtures, and scaling strategies. The quality of what goes in determines everything that comes out.

  • Infrastructure & Performance. Distributed training, GPU kernel optimization, edge inference, and model serving at scale. We build the systems that make our architecture fast in practice, from custom kernels to on-device deployment pipelines.

  • Post-Training & Alignment. RLHF, preference optimization, multi-stage reinforcement learning, and evaluation. Our latest models were shaped by large-scale RL without supervised fine-tuning warmup.
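To make the efficient-architectures bullet concrete, here is a toy sketch of a gated short-convolution mixer, the kind of sequence-mixing primitive described above. This is an illustrative simplification under our own assumptions (the function name, shapes, and gating choice are invented for this sketch), not Liquid AI's actual LFM2 implementation:

```python
import numpy as np

def gated_short_conv(x, w):
    """Toy gated short-convolution mixer (illustrative only, not LFM2's code).

    x: (seq_len, dim) input activations
    w: (kernel, dim) depthwise causal convolution weights
    Returns sigmoid(x) * causal_depthwise_conv(x), a common gated-conv pattern.
    """
    kernel, dim = w.shape
    seq_len = x.shape[0]
    # Left-pad so each position mixes only itself and the (kernel - 1) past steps.
    padded = np.vstack([np.zeros((kernel - 1, dim)), x])
    conv = np.empty_like(x)
    for t in range(seq_len):
        conv[t] = np.sum(padded[t:t + kernel] * w, axis=0)
    gate = 1.0 / (1.0 + np.exp(-x))  # sigmoid gate computed from the input itself
    return gate * conv
```

The appeal of short causal convolutions over full attention is that each output position touches only a fixed, small window of the past, so compute and memory stay linear in sequence length; hybrid designs then interleave a few attention layers for global context.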

Who Thrives Here

This is a small team where individuals own entire work-streams end-to-end, from research through shipped models. We publish. We release open weights. We present at the conferences you attend. If you want to do work that is visible and that ships, this is the environment for it.

What We're Looking For

  • Demonstrated research or engineering contribution in one or more of the areas above.

  • Ability to move from idea to implementation to shipped result.

  • M.S. or Ph.D. in Computer Science, Mathematics, Electrical Engineering, or a related field; or equivalent industry experience.

Stronger candidates will also have:

  • Published research at top-tier venues (NeurIPS, ICML, ICLR, CVPR, ACL, Interspeech, etc.).

  • Experience training or fine-tuning foundation models at scale.

  • Hands-on work with distributed training infrastructure (DeepSpeed, FSDP, Megatron-LM).

  • Open-source contributions (code, data, or models) on GitHub or Hugging Face.

  • Experience deploying models to edge or on-device environments.

What We Offer

  • Full ownership: You own your work from architecture to deployment.

  • Compensation: Competitive base salary with equity in a unicorn-stage company.

  • Health: We pay 100% of medical, dental, and vision premiums for employees and dependents.

  • Financial: 401(k) matching up to 4% of base pay.

  • Time Off: Unlimited PTO plus company-wide Refill Days throughout the year.

  • Visa Sponsorship: We sponsor O-1 and H-1B visas for exceptional talent. If you can't relocate, we'll find a way to work together.


About Liquid AI

Liquid AI · Series A

Liquid AI is an artificial intelligence company focused on developing liquid neural networks and dynamic AI systems. The company specializes in creating adaptive neural architectures inspired by biological systems.

Employees: 51-200

Headquarters: Cambridge

Valuation: $2.3B

Reviews

Overall: 4.1 (10 reviews)

Work-life balance: 3.5

Compensation: 3.2

Company culture: 4.3

Career growth: 3.4

Management: 3.8

Would recommend to a friend: 75%

Pros

Flexible work hours

Great team culture and collaborative environment

Supportive management and leadership

Cons

Heavy workload and occasional long hours

Compensation could be better

Lack of clear direction and communication issues

Salary Range (4 reports)

Staff/L6 · GTM Staff, Strategic Partnerships (1 report)

Total compensation: $455,000

Base salary: $350,000

Stock: -

Bonus: -