
Generative AI platform
Member of Technical Staff, Performance Optimization
Required skills: PyTorch
About Us:
At Fireworks, we’re building the future of generative AI infrastructure. Our platform delivers the highest-quality models with the fastest and most scalable inference in the industry. We’ve been independently benchmarked as the leader in LLM inference speed and are driving cutting-edge innovation through projects like our own function calling and multimodal models. Fireworks is a Series C company valued at $4 billion and backed by top investors including Benchmark, Sequoia, Lightspeed, Index, and Evantic. We’re an ambitious, collaborative team of builders, founded by veterans of Meta PyTorch and Google Vertex AI.
The Role:
We're looking for a Software Engineer focused on Performance Optimization to help push the boundaries of speed and efficiency across our AI infrastructure. In this role, you'll take ownership of optimizing performance at every layer of the stack—from low-level GPU kernels to large-scale distributed systems. A key focus will be maximizing the performance of our most demanding workloads, including large language models (LLMs), vision-language models (VLMs), and next-generation video models.
You’ll work closely with teams across research, infrastructure, and systems to identify performance bottlenecks, implement cutting-edge optimizations, and scale our AI systems to meet the demands of real-world production use cases. Your work will directly impact the speed, scalability, and cost-effectiveness of some of the most advanced generative AI models in the world.
Key Responsibilities:
- Optimize system and GPU performance for high-throughput AI workloads across training and inference
- Analyze and improve latency, throughput, memory usage, and compute efficiency
- Profile system performance to detect and resolve GPU- and kernel-level bottlenecks
- Implement low-level optimizations using CUDA, Triton, and other performance tooling (a kernel sketch follows this list)
- Drive improvements in execution speed and resource utilization for large-scale model workloads (LLMs, VLMs, and video models)
- Collaborate with ML researchers to co-design and tune model architectures for hardware efficiency
- Improve support for mixed precision, quantization, and model graph optimization
- Build and maintain performance benchmarking and monitoring infrastructure
- Scale inference and training systems across multi-GPU, multi-node environments
- Evaluate and integrate optimizations for emerging hardware accelerators and specialized runtimes
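To make the kernel-level responsibilities concrete, here is a minimal Triton sketch of the kind of fusion work involved: a residual add fused with a ReLU so the intermediate tensor never makes an extra round trip through GPU memory. The kernel and function names are illustrative, not Fireworks code.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def fused_add_relu_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    # Fusing the add and the activation keeps the intermediate in registers,
    # saving one full write/read of the tensor versus two separate kernels.
    tl.store(out_ptr + offsets, tl.maximum(x + y, 0.0), mask=mask)

def fused_add_relu(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    fused_add_relu_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

The same pattern extends to the heavier fusions that matter for LLM serving (bias + activation + quantize, attention epilogues), where avoiding extra memory traffic typically buys more than raw FLOPs.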
Minimum Qualifications:
- Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent practical experience
- 5+ years of experience working on performance optimization or high-performance computing systems
- Proficiency in CUDA or ROCm and experience with GPU profiling tools (e.g., Nsight, nvprof, CUPTI); a profiling sketch follows this list
- Familiarity with PyTorch and performance-critical model execution
- Experience with distributed-system debugging and optimization in multi-GPU environments
- Deep understanding of GPU architecture, parallel programming models, and compute kernels
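As a hypothetical illustration of the profiling workflow these qualifications describe, below is a minimal torch.profiler pass over a stand-in workload. Ranking ops by accumulated GPU time is usually the first step; Nsight Systems or CUPTI come in for kernels that look suspicious in the exported trace.

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Stand-in workload; any model under investigation goes here.
model = torch.nn.Linear(4096, 4096).cuda().half()
x = torch.randn(8, 4096, device="cuda", dtype=torch.half)

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    for _ in range(10):
        model(x)
    torch.cuda.synchronize()  # make sure all launched kernels are captured

# Rank ops by GPU time to spot kernel-level hotspots, then export a trace
# viewable in chrome://tracing or Perfetto for timeline analysis.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
prof.export_chrome_trace("trace.json")
```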
Preferred Qualifications:
- Master’s or PhD in Computer Science, Electrical Engineering, or a related field
- Experience optimizing large models for training and inference (LLMs, VLMs, or video models)
- Knowledge of compiler stacks or ML compilers (e.g., torch.compile, Triton, XLA); see the sketch after this list
- Contributions to open-source ML or HPC infrastructure
- Familiarity with cloud-scale AI infrastructure and orchestration tools (e.g., Kubernetes)
- Background in ML systems engineering or hardware-aware model design
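For reference, a minimal example of the ML-compiler stack mentioned above: torch.compile traces the model and hands it to a backend (Inductor by default), which emits fused Triton kernels for pointwise chains. The toy model is a placeholder.

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
).cuda()

# mode="max-autotune" asks the backend to spend extra compile time
# benchmarking kernel variants (e.g., matmul tilings) for these shapes.
compiled = torch.compile(model, mode="max-autotune")

x = torch.randn(16, 1024, device="cuda")
y = compiled(x)  # first call compiles; later calls reuse cached kernels
```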
Example projects:
- Implement fully asynchronous low-latency sampling for large language models, integrated with structured outputs
- Implement GPU kernels for a new low-precision scheme and run experiments to find the optimal speed-quality tradeoff
- Build a distributed router with a custom load-balancing algorithm to optimize LLM cache efficiency
- Define metrics and build a harness for finding the optimal performance configuration (e.g., sharding, precision) for a given class of model (sketched after this list)
- Determine an optimal sharding scheme for a novel attention variant and implement it in PyTorch
- Optimize communication patterns in RDMA networks (InfiniBand, RoCE)
- Debug numerical instabilities that affect a small fraction of requests when a model is deployed at scale
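As a sketch of the metrics-and-harness project above, here is a minimal latency sweep over a single configuration knob (parameter dtype). Everything in it is illustrative; a real harness would also score output quality and sweep sharding, batch size, and kernel choices.

```python
import time
import torch

def median_latency(model, x, iters=50, warmup=10):
    """Median wall-clock seconds per forward pass, with proper GPU sync."""
    for _ in range(warmup):
        model(x)
    torch.cuda.synchronize()
    times = []
    for _ in range(iters):
        start = time.perf_counter()
        model(x)
        torch.cuda.synchronize()  # wait for the GPU before reading the clock
        times.append(time.perf_counter() - start)
    return sorted(times)[len(times) // 2]

# Sweep one knob and report the speed side of the speed-quality tradeoff.
for dtype in (torch.float32, torch.float16, torch.bfloat16):
    model = torch.nn.Linear(4096, 4096).cuda().to(dtype)
    x = torch.randn(32, 4096, device="cuda", dtype=dtype)
    print(f"{dtype}: {median_latency(model, x) * 1e3:.3f} ms")
```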
Total compensation for this role also includes meaningful equity in a fast-growing startup, along with a competitive salary and comprehensive benefits package. Base salary is determined by a range of factors including individual qualifications, experience, skills, interview performance, market data, and work location. The listed salary range is intended as a guideline and may be adjusted.
Base Pay Range (Plus Equity): $175,000–$220,000 USD
Why Fireworks AI?
- Solve Hard Problems: Tackle challenges at the forefront of AI infrastructure, from low-latency inference to scalable model serving.
- Build What’s Next: Work with bleeding-edge technology that impacts how businesses and developers harness AI globally.
- Ownership & Impact: Join a fast-growing, passionate team where your work directly shapes the future of AI—no bureaucracy, just results.
- Learn from the Best: Collaborate with world-class engineers and AI researchers who thrive on curiosity and innovation.
Fireworks AI is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all innovators.
About Fireworks AI
Fireworks AI · Series A
Fireworks AI provides a generative AI inference and fine-tuning platform for developers and enterprises. The company offers high-performance API services for running large language models and other generative AI workloads.
Employees: 51-200
Headquarters: San Francisco
Valuation: $1.2B
Reviews
3.8 overall (26 reviews)
Work-life balance: 3.5
Compensation: 4.2
Company culture: 3.8
Career: 4.0
Management: 3.6
79% would recommend to a friend
Pros
- Supportive team and management
- Opportunity for career growth
- Interesting projects and challenges
Cons
- Internal communication could improve
- Career progression could be clearer
- Work-life balance varies by team
Salary Range
Senior · Sales (0 reports)
Total annual compensation: $241,200 (range: $204,020–$278,380)
Base salary: -
Stock: -
Bonus: -
Interview Reviews (43 reviews)
Difficulty: 3.1 / 5
Duration: 14-28 weeks
Offer rate: 41%
Candidate experience: 60% positive, 21% neutral, 19% negative
Interview process:
1. Phone Screen
2. Technical Interview
3. Hiring Manager
4. Team Fit
Common interview topics: technical skills, past experience, team collaboration, problem solving