
Generative AI platform
Member of Technical Staff, Performance Optimization
Required Skills: PyTorch
About Us:
At Fireworks, we’re building the future of generative AI infrastructure. Our platform delivers the highest-quality models with the fastest and most scalable inference in the industry. We’ve been independently benchmarked as the leader in LLM inference speed and are driving cutting-edge innovation through projects like our own function calling and multimodal models. Fireworks is a Series C company valued at $4 billion and backed by top investors including Benchmark, Sequoia, Lightspeed, Index, and Evantic. We’re an ambitious, collaborative team of builders, founded by veterans of Meta PyTorch and Google Vertex AI.
The Role:
We're looking for a Software Engineer focused on Performance Optimization to help push the boundaries of speed and efficiency across our AI infrastructure. In this role, you'll take ownership of optimizing performance at every layer of the stack—from low-level GPU kernels to large-scale distributed systems. A key focus will be maximizing the performance of our most demanding workloads, including large language models (LLMs), vision-language models (VLMs), and next-generation video models.
You’ll work closely with teams across research, infrastructure, and systems to identify performance bottlenecks, implement cutting-edge optimizations, and scale our AI systems to meet the demands of real-world production use cases. Your work will directly impact the speed, scalability, and cost-effectiveness of some of the most advanced generative AI models in the world.
Key Responsibilities:
- Optimize system and GPU performance for high-throughput AI workloads across training and inference
- Analyze and improve latency, throughput, memory usage, and compute efficiency
- Profile system performance to detect and resolve GPU- and kernel-level bottlenecks
- Implement low-level optimizations using CUDA, Triton, and other performance tooling
- Drive improvements in execution speed and resource utilization for large-scale model workloads (LLMs, VLMs, and video models)
- Collaborate with ML researchers to co-design and tune model architectures for hardware efficiency
- Improve support for mixed precision, quantization, and model graph optimization
- Build and maintain performance benchmarking and monitoring infrastructure
- Scale inference and training systems across multi-GPU, multi-node environments
- Evaluate and integrate optimizations for emerging hardware accelerators and specialized runtimes
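To make the benchmarking responsibility concrete: a performance-monitoring harness often starts as a small latency-percentile tool like the sketch below. This is an illustrative, dependency-free example (function and key names are assumptions, not an existing Fireworks tool); a real harness would time GPU work with CUDA events rather than wall-clock timers.

```python
import time
import statistics


def benchmark(fn, *, iters=100, warmup=10):
    """Time `fn` over `iters` runs after `warmup` untimed runs.

    Returns latency percentiles (milliseconds) and throughput (calls/sec).
    Warmup runs absorb one-time costs (caches, JIT, allocator growth)
    so they don't skew the measured distribution.
    """
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1e3)  # ms
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
        "throughput_per_s": 1e3 * iters / sum(samples),
    }


# Example: benchmark a toy CPU workload standing in for a model call.
stats = benchmark(lambda: sum(i * i for i in range(10_000)), iters=50)
print(stats["p50_ms"] <= stats["p95_ms"])  # percentiles are ordered
```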
Minimum Qualifications:
- Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent practical experience
- 5+ years of experience working on performance optimization or high-performance computing systems
- Proficiency in CUDA or ROCm and experience with GPU profiling tools (e.g., Nsight, nvprof, CUPTI)
- Familiarity with PyTorch and performance-critical model execution
- Experience with distributed system debugging and optimization in multi-GPU environments
- Deep understanding of GPU architecture, parallel programming models, and compute kernels
Preferred Qualifications:
- Master’s or PhD in Computer Science, Electrical Engineering, or a related field
- Experience optimizing large models for training and inference (LLMs, VLMs, or video models)
- Knowledge of compiler stacks or ML compilers (e.g., torch.compile, Triton, XLA)
- Contributions to open-source ML or HPC infrastructure
- Familiarity with cloud-scale AI infrastructure and orchestration tools (e.g., Kubernetes)
- Background in ML systems engineering or hardware-aware model design
Example projects:
- Implement fully asynchronous low-latency sampling for large language models, integrated with structured outputs
- Implement GPU kernels for a new low-precision scheme and run experiments to find the optimal speed-quality tradeoff
- Build a distributed router with a custom load-balancing algorithm to optimize LLM cache efficiency
- Define metrics and build a harness for finding the optimal performance configuration (e.g., sharding, precision) for a given class of model
- Determine and implement in PyTorch an optimal sharding scheme for a novel attention variant
- Optimize communication patterns in RDMA networks (InfiniBand, RoCE)
- Debug numerical instabilities that affect a small fraction of requests for a given model when deployed at scale
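For a sense of what the router project involves: one common way to improve LLM cache efficiency is prefix affinity, i.e. sending requests that share a prompt prefix (such as a common system prompt) to the same replica so its KV cache is reused. The sketch below is a deliberately simplified, single-process illustration under assumed names (it is not Fireworks' design); production routers typically match the longest cached prefix and account for replica load, not just a fixed-length hash.

```python
import hashlib


class PrefixAffinityRouter:
    """Route requests sharing a prompt prefix to the same replica.

    Hashing the first `prefix_len` characters means requests with a
    common system prompt land on one replica and can reuse its warm
    KV cache instead of recomputing the prefix elsewhere.
    """

    def __init__(self, replicas, prefix_len=32):
        self.replicas = list(replicas)
        self.prefix_len = prefix_len

    def route(self, prompt: str) -> str:
        key = prompt[: self.prefix_len].encode()
        digest = hashlib.sha256(key).digest()
        # Stable hash -> stable replica choice for a given prefix.
        idx = int.from_bytes(digest[:8], "big") % len(self.replicas)
        return self.replicas[idx]


router = PrefixAffinityRouter(["gpu-0", "gpu-1", "gpu-2"])
shared = "System: You are a helpful assistant.\n"
a = router.route(shared + "User: hi")
b = router.route(shared + "User: what is 2+2?")
print(a == b)  # identical first 32 chars -> same replica -> cache hit
```

The tradeoff to tune here is affinity versus balance: a too-sticky router concentrates load on hot replicas, so real implementations combine prefix affinity with load-aware spillover.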
Total compensation for this role also includes meaningful equity in a fast-growing startup, along with a competitive salary and comprehensive benefits package. Base salary is determined by a range of factors including individual qualifications, experience, skills, interview performance, market data, and work location. The listed salary range is intended as a guideline and may be adjusted.
Base Pay Range (Plus Equity): $175,000 to $220,000 USD
Why Fireworks AI?
- Solve Hard Problems: Tackle challenges at the forefront of AI infrastructure, from low-latency inference to scalable model serving.
- Build What’s Next: Work with bleeding-edge technology that impacts how businesses and developers harness AI globally.
- Ownership & Impact: Join a fast-growing, passionate team where your work directly shapes the future of AI—no bureaucracy, just results.
- Learn from the Best: Collaborate with world-class engineers and AI researchers who thrive on curiosity and innovation.
Fireworks AI is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all innovators.