
Tech Lead, Robotic AI Model
The Company:
Faraday Future is a California-based technology company focused on the design, engineering, and development of intelligent, connected electric vehicles and related artificial intelligence–enabled technologies.
Founded in 2014, the Company's mission is to disrupt the automotive and technology industries by creating user-centric, technology-first experiences. The Company, together with its controlled subsidiaries, operates across multiple technology-driven areas, including AI electric vehicles, robotics, and its crypto business (AIXC), all under its upgraded Global EAI Industry Bridge Strategy, marking the beginning of a new chapter in AI mobility and Web3 integration. The Company aims to leverage the latest technologies and the world's best talent to realize exciting new possibilities across all of these lines.
Faraday Future's automotive business exemplifies its vision for luxury, innovation, and performance, while its FX strategy aims to introduce mass-production models equipped with state-of-the-art luxury technology derived from the FF brand, targeted at a broader market with middle-to-low price offerings. FF is committed to redefining mobility through AI innovation. Join us in shaping the future of intelligent transportation and technology by creating something new, something connected, and something with true global impact.
Your Role:
We are building the next generation of intelligent robots. As the Tech Lead for Robotics AI Models, you will own the critical pipeline that transforms pretrained foundation models into deployable robot policies, turning general-purpose AI into systems that can reliably manipulate objects, navigate environments, and perform complex physical tasks in the real world.
This role sits at the intersection of embodied AI, robot learning, and foundation model adaptation. You will work across the full post-training lifecycle: curating demonstration data, fine-tuning vision-language-action (VLA) models or world models, training reinforcement learning policies in simulation, validating behaviors on real hardware, and optimizing models for on-robot inference. Your work will directly determine how capable, safe, and generalizable our robots are.
Key Responsibilities:
Model Post-Training & Fine-Tuning
- Design and execute post-training pipelines for VLA and visuomotor policy models (e.g., diffusion policies, ACT, flow matching), including supervised fine-tuning (SFT), reinforcement learning (RL), and preference-based optimization
- Fine-tune pretrained robot foundation models on task-specific demonstration datasets for dexterous manipulation, locomotion, whole-body control, and multi-step task sequencing
- Develop and iterate on reward functions, verifiers, and RL training loops (PPO, GRPO, RLVR) to improve policy success rate and robustness in simulation and real-world deployment
- Apply parameter-efficient fine-tuning methods (LoRA, QLoRA, OFT) to adapt large models to new tasks and robot embodiments under compute constraints
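For context on the parameter-efficient methods named above, here is a minimal, illustrative LoRA sketch in NumPy (the class and parameter names are our own assumptions, not Faraday Future's stack). A frozen base weight is adapted through a trainable low-rank update, so only a small fraction of parameters changes during fine-tuning:

```python
import numpy as np

# Toy LoRA sketch (illustrative only): a frozen base weight W is adapted by a
# low-rank update (alpha/r) * B @ A, so only A and B are trained.
class LoRALinear:
    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = W.shape
        self.W = W                               # frozen pretrained weight
        self.A = rng.normal(0, 0.02, (r, d_in))  # trainable down-projection
        self.B = np.zeros((d_out, r))            # trainable up-projection, zero-init
        self.scale = alpha / r                   # standard LoRA scaling

    def __call__(self, x):
        # Base path plus low-rank adapter path
        return self.W @ x + self.scale * (self.B @ (self.A @ x))
```

Because B is zero-initialized, the adapted layer reproduces the pretrained model exactly at the start of fine-tuning, which is the standard LoRA initialization.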
Data Pipeline & Curation
- Build and manage large-scale robot demonstration data pipelines: teleoperation data collection, action tokenization (e.g., FAST tokenizer), data augmentation, quality filtering, and dataset versioning
- Define data collection strategies across robot platforms, collaborating with robot operators and data labeling teams to ensure dataset diversity and coverage
- Integrate multi-modal sensory data (RGB, depth, proprioception, force/torque, tactile) into coherent training datasets
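As a rough illustration of the action tokenization mentioned above, the sketch below uniformly bins continuous actions into integer token ids (function names and bin counts are hypothetical; production tokenizers such as FAST use far more sophisticated compression than this toy scheme):

```python
import numpy as np

# Naive uniform-binning action tokenizer (illustrative only).
def tokenize(actions, low=-1.0, high=1.0, bins=256):
    """Map continuous actions in [low, high] to integer token ids."""
    clipped = np.clip(actions, low, high)
    return np.round((clipped - low) / (high - low) * (bins - 1)).astype(int)

def detokenize(ids, low=-1.0, high=1.0, bins=256):
    """Recover approximate continuous actions from token ids."""
    return low + ids / (bins - 1) * (high - low)
```

The roundtrip error is bounded by half a bin width, which is the resolution/vocabulary-size trade-off any discrete action representation must make.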
Simulation & Sim-to-Real Transfer
- Build and maintain simulation environments (Isaac Sim, MuJoCo, SAPIEN) for scalable policy training, including domain randomization, asset generation, and task definition
- Address sim-to-real transfer challenges through visual augmentation, action space calibration, dynamics randomization, and systematic real-world validation
- Design and run large-scale distributed RL training across GPU clusters for locomotion and manipulation policies
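Dynamics randomization, one of the sim-to-real tactics listed above, can be sketched as sampling a fresh set of physics parameters per training episode so the policy cannot overfit to one simulator configuration. The parameter names and ranges below are illustrative assumptions, not real Isaac Sim or MuJoCo settings:

```python
import random

# Illustrative domain-randomization ranges (names/values are assumptions).
RANDOMIZATION_RANGES = {
    "friction":   (0.5, 1.5),   # ground friction coefficient
    "mass_scale": (0.8, 1.2),   # multiplier on link masses
    "motor_gain": (0.9, 1.1),   # actuator strength multiplier
    "obs_noise":  (0.0, 0.02),  # std-dev of added sensor noise
}

def sample_dynamics(rng=random):
    """Draw one randomized physics-parameter set for the next episode."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in RANDOMIZATION_RANGES.items()}
```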
Evaluation & Deployment
- Build evaluation and benchmarking infrastructure: automated success-rate tracking, sim evaluation harnesses, real-robot A/B testing, and regression monitoring
- Optimize models for on-robot inference: quantization (INT8/FP8), action chunking, latency reduction, and real-time control loop integration
- Collaborate with controls, perception, and hardware teams to integrate learned policies into the full robot software stack
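Action chunking, named in the inference-optimization bullet above, amortizes one slow policy forward pass over many control ticks. A toy sketch (the policy here is a stub, not a real model):

```python
# Action-chunking sketch (illustrative): predicting a chunk of H actions per
# inference call lets a slow model drive a fast control loop, since one
# forward pass covers H control ticks.
def run_control_loop(policy, get_obs, apply_action, steps, chunk=8):
    buffer, calls, executed = [], 0, 0
    while executed < steps:
        if not buffer:
            buffer = list(policy(get_obs()))[:chunk]  # one (expensive) inference
            calls += 1
        apply_action(buffer.pop(0))  # cheap per-tick execution
        executed += 1
    return calls
```

With a chunk size of 8, a 30 Hz control loop needs policy inference at only roughly 4 Hz, which is why chunking pairs naturally with quantization for on-robot deployment.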
Research & Innovation
- Track and adopt state-of-the-art research in robot foundation models, generalist policies, and embodied AI post-training (e.g., π₀/π₀.5, OpenVLA-OFT, RT-2, Octo, Helix)
- Contribute to internal research efforts on topics such as multi-embodiment transfer, long-horizon task learning, open-world generalization, and human-in-the-loop policy improvement
Basic Qualifications:
- Master’s or PhD in Robotics, Computer Science, Machine Learning, or a closely related field
- 3+ years of hands-on experience in robot learning, including imitation learning, behavior cloning, or visuomotor policy training on real or simulated robots
- Deep expertise in at least one post-training paradigm: SFT on robot demonstrations, RL-based policy optimization, or diffusion/flow-matching policy training
- Strong PyTorch skills with experience training and debugging models at scale; familiarity with distributed training (FSDP, DeepSpeed)
- Practical experience with robot simulation platforms (Isaac Sim, MuJoCo, PyBullet, or SAPIEN) and sim-to-real workflows
- Understanding of action representations for robotics: continuous control, discrete tokenization, action chunking, and diffusion-based action generation
- Solid Python engineering; comfortable working with ROS/ROS2, real-time control systems, and robot hardware integration
- Ability to independently drive projects from research prototype to real-robot deployment
Preferred Qualifications:
- Experience fine-tuning VLA models such as π₀, OpenVLA, RT-2, Octo, or similar generalist robot policies
- Hands-on experience with real robot platforms: humanoids, bimanual arms (e.g., ALOHA), mobile manipulators, or dexterous hands
- Experience with large-scale teleoperation data collection systems and robot fleet management
- Familiarity with RLHF/DPO/GRPO applied to robotic policy alignment and human preference learning
- Experience building or contributing to robot learning infrastructure (LeRobot, robomimic, openpi, etc.)
- Publications at top robotics or ML venues (CoRL, RSS, ICRA, NeurIPS, ICML, ICLR)
- Knowledge of on-device model optimization: TensorRT, ONNX Runtime, model pruning, and edge deployment for embodied AI
Salary Range:
$150,000-$180,000 (DOE), plus benefits and incentive plans
Perks + Benefits
- Healthcare + dental + vision benefits (Free for you/discounted for family)
- 401(k) options
- Casual dress code + relaxed work environment
- Culturally diverse, progressive atmosphere
Faraday Future is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.