About Liquid AI
Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.
The Opportunity
We're building the product function that packages Liquid AI's technology into repeatable solutions for enterprise customers. You'll join our Product team and work directly with our technical leadership to define, package, and bring those solutions to market. You'll collaborate daily with ML engineers, GTM leaders, and enterprise customers to understand what makes our technology valuable and how to deliver it efficiently. This is a high-ownership role where you'll treat your solution area like your own startup within the company.
What We're Looking For
We need someone with:
- Customer Obsession:
Defaults to gathering signal from actual customers rather than hypothesizing. Writes down assumptions and validates them through direct customer feedback.
- Self-Direction:
Proactively goes deep without being asked. Not satisfied with surface-level understanding. Comfortable navigating ambiguity and proposing solutions.
- Technical Fluency:
Can engage credibly with ML engineers and researchers. Understands the nuances of deploying AI systems in production.
- Founder Mentality:
Treats their solution area like their own startup. Takes ownership of outcomes across functions, from technical architecture to go-to-market strategy.
The Work
- Own one or more GTM-ready solutions end-to-end, from definition through scalable customer deployment
- Extract learnings from customer engagements and identify patterns that can be productized
- Work with ML and inference teams to build tooling that collapses implementation overhead
- Define ICPs, pricing, and packaging for repeatable solutions
- Partner with GTM teams to scale outbound sales motions around productized offerings
Desired Experience
Must-have:
- Engineering, ML, or CS background with demonstrated technical fluency
- Track record of owning product strategy, not just executing on handed roadmaps
- Outbound product management experience with enterprise customers
- Direct experience gathering and acting on customer feedback to shape product direction
Nice-to-have:
- Experience at an AI/ML company or working on AI-powered products
- Experience in relevant verticals (automotive, consumer electronics, life sciences, financial services)
- B2B experience with large enterprises as well as mid-market companies
What Success Looks Like (Year One)
- Own one solution from definition to market launch, with a defined ICP and go-to-market motion
- Reduce implementation overhead for at least one solution
- Identify and document universal patterns across customer deployments that enable repeatability
What We Offer
- Full Ownership:
You own your solution from customer discovery through deployment.
- Compensation:
Competitive base salary with equity in a unicorn-stage company
- Health:
We pay 100% of medical, dental, and vision premiums for employees and dependents
- Financial:
401(k) matching up to 4% of base pay
- Time Off:
Unlimited PTO plus company-wide Refill Days throughout the year