ABOUT LIQUID AI:
Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.
THE OPPORTUNITY:
This is a rare chance to apply frontier sequential recommendation architectures to real enterprise problems at scale. You will own applied ML work end-to-end for recommendation system workloads, adapting Liquid Foundation Models for customers who need personalization and ranking capabilities that run efficiently under production constraints.
Unlike most recommendation roles that are siloed into a single product surface, this role gives you full ownership over how large-scale recommendation models are adapted, evaluated, and deployed for enterprise customers. Between engagements, you will build reusable applied tooling and workflows that accelerate future delivery.
If you care about data quality at scale, user behavior modeling, and making recommendation systems actually work in enterprise production environments, this is the role.
WHAT WE’RE LOOKING FOR:
We need someone who:
- Takes ownership: Owns customer recommendation system engagements end-to-end, from requirements through delivery and evaluation.
- Thinks at scale: Can reason about user interaction data, sequential modeling, feature engineering, and evaluation across large-scale production systems.
- Is pragmatic: Optimizes for measurable customer outcomes (engagement, conversion, revenue lift) over theoretical novelty.
- Communicates clearly: Can translate between customer business metrics and internal technical decisions, and push back when needed.
THE WORK:
- Act as the technical owner for enterprise customer engagements involving recommendation and ranking workloads
- Translate customer requirements into concrete specifications for recommendation models
- Design and execute data pipelines for user interaction data, feature engineering, and training data curation at scale
- Fine-tune and adapt large-scale sequential recommendation models (e.g., HSTU-style architectures) for customer-specific use cases
- Design task-specific evaluations for recommendation model performance (ranking quality, latency, throughput) and interpret results
- Build reusable applied tooling and workflows that accelerate future customer engagements
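To make the sequential-modeling work concrete, here is a minimal sketch of the kind of model this role adapts: a SASRec-style next-item predictor in PyTorch (item embeddings plus a causal Transformer encoder over the interaction history). All names and hyperparameters below are illustrative assumptions, not Liquid AI's actual architecture; production HSTU-style models differ substantially in scale and attention design.

```python
import torch
import torch.nn as nn

class SequentialRecommender(nn.Module):
    """Illustrative SASRec-style model: embed the user's item sequence,
    apply a causal Transformer, and score the next item at each position."""

    def __init__(self, num_items: int, d_model: int = 64, n_heads: int = 2,
                 n_layers: int = 2, max_len: int = 50):
        super().__init__()
        # Item id 0 is reserved for padding.
        self.item_emb = nn.Embedding(num_items + 1, d_model, padding_idx=0)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, seqs: torch.Tensor) -> torch.Tensor:
        # seqs: (batch, seq_len) of item ids.
        b, t = seqs.shape
        pos = torch.arange(t, device=seqs.device).unsqueeze(0)
        x = self.item_emb(seqs) + self.pos_emb(pos)
        # Causal mask: position i may only attend to positions <= i.
        mask = nn.Transformer.generate_square_subsequent_mask(t).to(seqs.device)
        h = self.encoder(x, mask=mask)
        # Score all items via dot product with tied item embeddings.
        return h @ self.item_emb.weight.T  # (batch, seq_len, num_items + 1)

# One next-item-prediction training step on synthetic interaction data.
model = SequentialRecommender(num_items=1000)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
seqs = torch.randint(1, 1001, (8, 20))  # fake interaction histories
logits = model(seqs[:, :-1])            # inputs: all items but the last
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), seqs[:, 1:].reshape(-1))
loss.backward()
opt.step()
```

Fine-tuning a customer-specific model would follow the same shape: swap the synthetic sequences for curated interaction logs and initialize from pretrained weights rather than from scratch.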
DESIRED EXPERIENCE:
Must-have:
- Hands-on experience building or fine-tuning recommendation models at scale (not just off-the-shelf collaborative filtering)
- Experience with sequential recommendation architectures, user behavior modeling, or large-scale ranking systems
- Strong intuition for data quality and evaluation design in recommendation contexts (offline metrics, A/B testing, business metric alignment)
- Experience with large-scale data pipelines for user interaction data and feature engineering
- Proficiency in Python and PyTorch with autonomous coding and debugging ability
Nice-to-have:
- Experience with transformer-based recommendation architectures (HSTU, SASRec, BERT4Rec, or similar)
- Experience delivering recommendation systems to external customers with measurable business outcomes
- Familiarity with serving recommendation models under latency and throughput constraints
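As a sense of the offline evaluation work mentioned above, here is a small sketch of two standard ranking metrics, Recall@k and binary-relevance NDCG@k, written from their textbook definitions (the function names and example data are illustrative, not part of any Liquid AI tooling):

```python
import math

def recall_at_k(ranked_items, relevant, k=10):
    """Fraction of the user's relevant items that appear in the top k."""
    hits = sum(1 for item in ranked_items[:k] if item in relevant)
    return hits / max(len(relevant), 1)

def ndcg_at_k(ranked_items, relevant, k=10):
    """Binary-relevance NDCG: discounted gain of hits, normalized by
    the gain of an ideal ranking that puts all relevant items first."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked_items[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0

# Example: the model ranked items [7, 3, 9, 1]; the user interacted with {3, 1}.
ranked, relevant = [7, 3, 9, 1], {3, 1}
print(recall_at_k(ranked, relevant, k=4))  # 1.0: both relevant items in top 4
print(ndcg_at_k(ranked, relevant, k=4))    # < 1.0: hits are not ranked first
```

In practice these offline metrics are only a proxy; the role's emphasis on A/B testing and business-metric alignment is about validating that offline gains translate to engagement or revenue lift online.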
WHAT SUCCESS LOOKS LIKE (YEAR ONE):
- Independently owns and delivers enterprise recommendation system engagements with minimal oversight
- Is trusted by customers as the technical owner, demonstrating strong judgment on the tradeoffs between model quality, latency, and business impact
- Has built reusable applied workflows or tooling that accelerate future customer engagements
WHAT WE OFFER:
- Real ML work: You will build and adapt large-scale recommendation models for enterprise customers, working with frontier architectures like HSTU under real production constraints.
- Compensation: Competitive base salary with equity in a unicorn-stage company
- Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
- Financial: 401(k) matching up to 4% of base pay
- Time Off: Unlimited PTO plus company-wide Refill Days throughout the year