Required Skills
- Python
- C++
- Machine Learning
- Deep Learning
- Transformer models
- Inference optimization
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of compute, transforming key workloads with ultra-high-speed inference.
Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
About The Role
As a Senior Research Engineer on the Inference ML team at Cerebras Systems, you will adapt today's most advanced language and vision models to run efficiently on our flagship Cerebras architecture. You'll work alongside ML researchers and engineers to design, prototype, validate, and optimize models, gaining end-to-end exposure to cutting-edge inference research on the world's fastest AI accelerator.
You will focus on pushing the frontier of speculative decoding, large-model pruning and compression, sparse attention, and other sparsity-driven techniques to deliver low-latency, high-throughput inference at scale.
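For readers new to these techniques, the core idea behind speculative decoding can be sketched in a few lines: a cheap draft model proposes several tokens, and the expensive target model verifies them, falling back to the target's own token at the first mismatch. The toy sketch below uses hypothetical stand-in models (`draft_next` and `target_next` are simple deterministic functions, not real LMs) and is an illustration of the general technique, not Cerebras' implementation:

```python
def draft_next(token):
    # Hypothetical cheap draft model: guesses the next token.
    return (token * 2) % 7

def target_next(token):
    # Hypothetical expensive target model: agrees with the draft
    # except when the current token is divisible by 3.
    return (token * 2) % 7 if token % 3 else (token + 1) % 7

def speculative_decode(start, steps, k=4):
    """Generate `steps` tokens, drafting k at a time with the cheap model
    and keeping the longest prefix the target model agrees with."""
    out = [start]
    while len(out) < steps + 1:
        # Draft k candidate tokens autoregressively with the cheap model.
        drafts, tok = [], out[-1]
        for _ in range(k):
            tok = draft_next(tok)
            drafts.append(tok)
        # Verify: accept drafts until the target disagrees, then take the
        # target's token at the first mismatch (in a real system this
        # verification is a single batched forward pass of the target).
        prev = out[-1]
        for d in drafts:
            t = target_next(prev)
            if d == t:
                out.append(d)
                prev = d
                if len(out) == steps + 1:
                    break
            else:
                out.append(t)
                break
    return out[: steps + 1]

print(speculative_decode(3, 5))  # same tokens greedy target-only decoding produces
```

Because every accepted token matches what the target model would have produced, the output is identical to greedy decoding with the target alone; the speedup in a real system comes from the target verifying several drafted tokens per forward pass instead of generating one token at a time.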
Responsibilities
- Design, implement, and optimize state-of-the-art transformer architectures for NLP and computer vision on Cerebras hardware.
- Research and prototype novel inference algorithms and model architectures that exploit the unique capabilities of Cerebras hardware, with emphasis on speculative decoding, pruning/compression, sparse attention, and sparsity.
- Train models to convergence, perform hyperparameter sweeps, and analyze results to inform next steps.
- Bring up new models on the Cerebras system, validate functional correctness, and troubleshoot any integration issues.
- Profile and optimize model code using Cerebras tools to maximize throughput and minimize latency.
- Develop diagnostic tooling or scripts to surface performance bottlenecks and guide optimization strategies for inference workloads.
- Collaborate across teams, including software, hardware, and product, to drive projects from inception through delivery.
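As a flavor of the pruning/compression work mentioned above, unstructured magnitude pruning (zeroing the smallest-magnitude weights to reach a target sparsity) can be sketched in plain Python. This is a toy stand-in for a framework pruning pass; the `magnitude_prune` function and the example weights are hypothetical:

```python
def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the smallest-magnitude fraction
    (`sparsity`, in [0, 1]) set to zero."""
    k = int(len(weights) * sparsity)  # number of weights to zero out
    if k == 0:
        return list(weights)
    # Threshold = magnitude of the k-th smallest weight.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    pruned, zeroed = [], 0
    for w in weights:
        # Zero at most k weights at or below the threshold; ties beyond
        # the budget are kept so exactly k weights are removed.
        if abs(w) <= threshold and zeroed < k:
            pruned.append(0.0)
            zeroed += 1
        else:
            pruned.append(w)
    return pruned

print(magnitude_prune([0.5, -0.1, 2.0, 0.05, -1.5, 0.3], 0.5))
```

Real pruning pipelines operate on tensors, often prune per-layer or with structured patterns the hardware can exploit, and typically fine-tune afterward to recover accuracy, but the selection criterion is the same.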
Minimum Qualifications
One of the following education and experience combinations:
- Bachelor's degree in Computer Science, Software Engineering, Computer Engineering, Electrical Engineering, or a related technical field AND 7+ years of ML software development experience, OR
- Master's degree in Computer Science or a related technical field AND 4+ years of software development experience, OR
- PhD in Computer Science or a related technical field with 2+ years of relevant research or industry experience, OR
- Equivalent practical experience.

- 4+ years of experience testing, maintaining, or launching software products, including 2+ years of experience with software design and architecture.
- 3+ years of experience in software development focused on machine learning (e.g., deep learning, large language models, or computer vision).
- Strong programming skills in Python and/or C++.
- Experience with Generative AI and Machine Learning systems.
- Evidence of research impact in machine learning, such as publications at top conferences (NeurIPS, ICLR, ICML, ACL, EMNLP, MLSys) or comparable contributions to widely used open-source projects or high-quality preprints.
Preferred Qualifications
- Master's degree or PhD in Computer Science, Computer Engineering, or a related technical field.
- Experience independently driving complex ML or inference projects from prototype to production-quality implementations.
- Hands-on experience with relevant ML frameworks such as PyTorch, Transformers, vLLM, or SGLang.
- Experience with large language models, mixture-of-experts models, multimodal learning, or AI agents.
- Experience with speculative decoding, neural network pruning and compression, sparse attention, quantization, sparsity, post-training techniques, and inference-focused evaluations.
- Familiarity with large-scale model training and deployment, including performance and cost trade-offs in production systems.
- Triton/CUDA experience is a big plus.
Required Skills & Attributes
- Proficiency with at least one major ML framework (PyTorch, Transformers, vLLM, or SGLang).
- Deep understanding of transformer-based models in language and/or vision domains, with demonstrated experience implementing and optimizing them.
- Proven ability to translate research ideas into robust code: implementing new model variants, training strategies, and evaluation workflows end-to-end.
- Strong foundation in performance optimization on specialized hardware (e.g., GPUs, TPUs, or HPC interconnects).
- Deep understanding of modern ML architectures and strong intuition for optimizing their performance, particularly for inference workloads using sparse attention, pruning/compression, and speculative decoding.
- Track record of owning problems end-to-end and autonomously acquiring whatever knowledge is needed to deliver results.
- Self-directed mindset with a demonstrated ability to identify and tackle the most impactful problems.
- Collaborative approach with humility, eagerness to help colleagues, and commitment to team success.
- Genuine passion for AI and a drive to push the limits of inference performance.

This is a hybrid role based in Toronto, ON, Canada or Sunnyvale, CA, USA.
Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open-source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Enjoy a simple, non-corporate work culture that respects individual beliefs.
Read our blog: Five Reasons to Join Cerebras in 2026.
Apply today and join us at the forefront of groundbreaking advances in AI!
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth, and support of those around them.