About Us:
At Fireworks, we’re building the future of generative AI infrastructure. Our platform delivers the highest-quality models with the fastest and most scalable inference in the industry. We’ve been independently benchmarked as the leader in LLM inference speed and are driving cutting-edge innovation through projects like our own function calling and multimodal models. Fireworks is a Series C company valued at $4 billion and backed by top investors including Benchmark, Sequoia, Lightspeed, Index, and Evantic. We’re an ambitious, collaborative team of builders, founded by veterans of Meta’s PyTorch and Google’s Vertex AI teams.
The Role:
We are seeking a Software Engineer, Dev Ex & Evals to play a highly impactful role in shaping the Fireworks platform. You will be responsible for defining and building a cohesive developer journey that bridges the gap between experimentation and production.
In the AI application lifecycle, evaluation and training form a continuous loop. You will build the experiences that allow developers to navigate this cycle seamlessly: starting with serverless experimentation, moving to fine-tuning custom models, evaluating them rigorously, and finally deploying to on-demand GPUs.
You will also lead the engineering efforts on our open-source initiative and productize these capabilities to help users author evals and train custom models. As the public platform serves as the product-led growth (PLG) engine for Fireworks, your work will directly drive business impact by removing friction at every stage of the developer onboarding and adoption process.
Key Responsibilities:
- Unify the AI Lifecycle: Build a streamlined experience that connects our serverless inference, fine-tuning, and on-demand deployment products into a single, intuitive workflow.
- Streamline Developer Onboarding: Obsess over the "Time to First Token" and "Time to First Fine-tune." Identify and eliminate friction points for new developers entering the platform.
- Architect Scalable Eval Tooling: Design full-stack features that support the continuous cycle of training and evaluation, providing deep insights into model quality that directly inform the next round of fine-tuning.
- Productize Inference Optimization: Build experiences that help developers optimize their GPU deployments for their specific workloads, guiding them on the right balance of throughput, latency, and model quality.
Minimum Requirements:
- 1-7 years of software engineering experience (we are hiring at multiple levels for this role).
- Passion for Developer Experience: You care deeply about API design, documentation, and the ergonomics of CLI/SDK tools.
- Understanding of the GenAI Lifecycle: You understand the end-to-end workflow, from prompting a base model to curating a dataset, fine-tuning, and productionizing agents, and how these steps interconnect.
- User-Centric Mindset: Willing to talk to users, triage GitHub issues for open-source projects, and build products from scratch to serve emerging needs.
Preferred Qualifications:
- 3+ years of software engineering experience.
- Domain-Specific Evaluation Experience: Strong familiarity with designing and running evaluations for domain-specific use cases (e.g., medical, legal, coding, or custom internal datasets).
- Open Source Contributions: Prior contributions to developer tools or AI/ML repositories.
- Inference & Hardware Knowledge: Interest in the hardware side of AI, including GPU constraints, inference optimization techniques, and how they relate to model performance.
- Startup DNA: Experience in fast-paced environments where you own features end-to-end.
Why Fireworks AI?
- Solve Hard Problems: Tackle challenges at the forefront of AI infrastructure, from low-latency inference to scalable model serving.
- Build What’s Next: Work with bleeding-edge technology that impacts how businesses and developers harness AI globally.
- Ownership & Impact: Join a fast-growing, passionate team where your work directly shapes the future of AI; no bureaucracy, just results.
- Learn from the Best: Collaborate with world-class engineers and AI researchers who thrive on curiosity and innovation.
Fireworks AI is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all innovators.