Crusoe is on a mission to accelerate the abundance of energy and intelligence. As the only vertically integrated AI infrastructure company built from the ground up, we own and operate each layer of the stack — from electrons to tokens — to power the world's most ambitious AI workloads. When you join Crusoe, you join a team that is building the future, faster.
We're in the midst of the greatest industrial revolution of our time. The demand for AI compute is boundless, and power is a bottleneck. We're solving that — with an energy-first approach that makes AI infrastructure better for the world and faster for the people innovating with AI.
We're looking for problem-solving, opportunity-finding teammates with a sense of urgency, who believe in the scale of our ambition and thrive on a path not fully paved — people who want to grow their careers alongside a team of experts across energy, manufacturing, data center construction, and cloud services.
If you want to do the most meaningful work of your career, help our customers and partners advance their AI strategies, and be part of a high-performing team that believes in each other, come build with us at Crusoe.
About the Role:
Crusoe's Cloud Product team is hiring a Senior Technical Program Manager to own and drive technical programs across our AI IaaS platform, aligning Product and Engineering on delivery. This role requires a hands-on individual contributor with genuine technical depth in AI infrastructure, comfortable operating in ambiguous, fast-moving environments and capable of building execution structure where little exists.
Our vision is the easiest-to-use AI purpose-built cloud. We offer IaaS products that let AI/ML engineers focus on AI model frameworks (PyTorch, Ray) and compute stacks (CUDA, ROCm) while Crusoe manages the underlying complexity. Our platform standardizes firmware/OS bundles and automates component orchestration for consistent, scalable infrastructure. We also offer AI Managed Services, like SLA-bounded Managed Inference. The TPM connects engineering, product, procurement, and data center operations to deliver a reliable platform where customers run AI workloads without managing low-level system details.
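The firmware/OS bundle standardization described above could be sketched as a versioned manifest that every node is reconciled against; all component names and versions below are illustrative assumptions, not Crusoe's actual bundle format:

```python
# Hypothetical versioned bundle manifest: every node in a cluster is
# brought to exactly this pinned set of firmware/OS components.
BUNDLE = {
    "bundle_version": "2024.06",
    "components": {
        "gpu_firmware": "535.161.08",
        "bmc": "1.12",
        "bios": "3.4",
        "os_image": "ubuntu-22.04-cloud-v7",
        "cuda_driver": "535.161.08",
    },
}

def drift(node_inventory: dict, bundle: dict = BUNDLE) -> dict:
    """Return components whose installed version differs from the pinned bundle,
    mapped to (installed, pinned) pairs."""
    pinned = bundle["components"]
    return {
        name: (node_inventory.get(name), version)
        for name, version in pinned.items()
        if node_inventory.get(name) != version
    }

# A node whose BMC firmware lags the bundle pin
node = {
    "gpu_firmware": "535.161.08",
    "bmc": "1.10",
    "bios": "3.4",
    "os_image": "ubuntu-22.04-cloud-v7",
    "cuda_driver": "535.161.08",
}
print(drift(node))  # {'bmc': ('1.10', '1.12')}
```

In practice the same idea underlies fleet-wide rollouts: the manifest is the source of truth, and orchestration reduces each node's drift to empty.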
At this level, TPM engagement focuses on defined technical programs and workstreams within larger cross-functional initiatives: GPU cluster commissioning, feature delivery within IaaS products, NPI workstream ownership, and coordination across hardware and software dependencies.
The ideal candidate is engineer-rooted with hands-on technical depth in compute infrastructure, firmware, or hardware platform delivery. You will own defined programs end-to-end, build lightweight execution frameworks, govern dependencies within your scope, and grow into broader program ownership over time.
What You'll Be Working On:
- Workstream & Program Ownership: Own delivery of defined feature sets or commissioning workstreams within larger IaaS and NPI programs. Drive execution from kickoff through GA or handoff milestone.
- Hands-On Execution: Personally manage technically complex programs within your scope. Identify blockers early, intervene to correct course, and escalate cross-organizational issues promptly when they exceed your authority.
- Commissioning Leadership: Own the Cloud TPM side of GPU cluster commissioning for assigned deployments — define RFN acceptance criteria, commissioning readiness requirements, and Go/No-Go criteria. Lead Phase 4.0 commissioning activities including network fabric, storage provisioning, GPU validation, burn-in, and monitoring integration.
- Risk & Dependency Governance: Proactively identify technical and organizational risks within your programs. Maintain dependency logs, surface blockers before scheduled reviews, and drive issues to resolution.
- Execution Frameworks: Build lightweight, fit-for-purpose execution structures for your programs — standups, dashboards, status updates, and risk logs. Establish predictability within your workstream.
- External Partner Coordination: Coordinate with external dependencies such as hardware partners, ODMs, or data center operations teams on assigned programs. Support certification, validation, and infrastructure stack alignment under guidance of senior TPMs.
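The commissioning flow above (RFN acceptance, readiness gates, Go/No-Go, burn-in, monitoring integration) can be sketched as a gate checklist that rolls up into a single decision; the gate names, checks, and thresholds here are illustrative assumptions, not Crusoe's actual criteria:

```python
from dataclasses import dataclass, field

@dataclass
class Gate:
    """One commissioning gate and its pass/fail checks (names are hypothetical)."""
    name: str
    checks: dict = field(default_factory=dict)  # check name -> passed?

def go_no_go(gates: list) -> str:
    """Go only if every check in every gate passes; otherwise list blockers."""
    blockers = [
        f"{gate.name}:{check}"
        for gate in gates
        for check, passed in gate.checks.items()
        if not passed
    ]
    return "GO" if not blockers else "NO-GO: " + ", ".join(blockers)

# Hypothetical Phase 4.0 gates for a GPU cluster deployment
gates = [
    Gate("network_fabric", {"link_flap_rate_ok": True, "rdma_bandwidth_ok": True}),
    Gate("gpu_validation", {"nccl_allreduce_ok": True, "burn_in_48h_ok": False}),
    Gate("monitoring", {"alerts_wired": True}),
]
print(go_no_go(gates))  # NO-GO: gpu_validation:burn_in_48h_ok
```

The value of a structure like this is less the code than the forcing function: every gate's criteria are written down before commissioning starts, so the Go/No-Go review is a lookup rather than a debate.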
What You'll Bring to the Team:
- AI Infrastructure Technical Depth: Genuine hands-on knowledge of the AI infrastructure stack — GPU firmware, drivers, BIOS/BMC, CUDA or ROCm stacks, OS configurations, and the tooling required to bring a cluster to production. Ability to engage credibly with firmware, hardware, networking, and compute engineers without a translator.
- Commissioning Fluency: Familiarity with GPU cluster commissioning — understanding of the distinct workstreams (Compute, SDN, ZTP, Firmware, Network, Storage), commissioning gate definitions, and what it means to take ownership of a cluster from RFN through GA.
- Experience at Hyperscalers or Neoclouds: 5–10 years of experience as a Technical Program Manager, with meaningful tenure at a hyperscaler (AWS, Azure, Google, Meta, OCI) or neocloud (CoreWeave, Lambda Labs) in a hardware, firmware, compute platform, or IaaS delivery role. Must have been on the build side — not customer-facing or enterprise IT delivery.
- AI Tool Integration: Active daily use of AI tools to enhance program execution, tracking, and communication.
- Execution Rigor: Demonstrated ability to establish program predictability in ambiguous environments. Builds lightweight structure that engineering teams trust and adopt.
- Communication: Clear written and verbal communication for delivering program status, risks, and dependencies to engineering leads and product managers.
Bonus Points:
- Bachelor's or Master's degree in Engineering, Computer Science, or a related technical field
- Hands-on engineering background — hardware engineering, systems engineering, silicon validation, embedded software, or firmware
- Experience with GPU NPI programs, firmware deployment across compute fleets, or datacenter commissioning
- Experience at a hyper-growth company
Benefits:
- Competitive compensation
- Restricted Stock Units
- Paid time off & paid holidays
- Comprehensive health, dental & vision insurance
- Employer contributions to HSA account
- Paid parental leave
- Paid life insurance, short-term and long-term disability
- Professional development & tuition reimbursement
- Mental health & wellness support
- Commuter benefits (parking & transit)
- Cell phone stipend
- 401(k) Retirement plan with company match up to 4% of salary
- Volunteer time off
Compensation Range:
Compensation will be in the range of $161,700 to $196,000 + bonus. Restricted Stock Units are included in all offers. Compensation will be determined by the applicant's knowledge, education, and abilities, as well as internal equity and alignment with market data.
Crusoe is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, disability, genetic information, pregnancy, citizenship, marital status, sex/gender, sexual preference/orientation, gender identity, age, veteran status, national origin, or any other status protected by law or regulation.