
Pioneering accelerated computing and AI
Machine Learning Applications and Compiler Engineer, LPX - New College Grad 2026 at NVIDIA
About the role
Our work at NVIDIA is dedicated to a computing model focused on visual and AI computing. For two decades, NVIDIA has pioneered visual computing, the art and science of computer graphics, with our invention of the GPU. The GPU has also proven spectacularly effective at solving some of the most complex problems in computer science. Today, NVIDIA’s GPUs simulate human intelligence, running deep learning algorithms and acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. We are looking to grow our company and teams with the smartest people in the world, and there has never been a more exciting time to join us!
NVIDIA is seeking engineers to develop algorithms and optimizations for our LPX inference and compiler stack. You will work at the intersection of large-scale systems, compilers, and deep learning, shaping how neural network workloads map onto future NVIDIA platforms. This is your chance to be part of something outstandingly innovative!
What you’ll be doing:
- Build, develop, and maintain high-performance runtime and compiler components, focusing on end-to-end inference optimization.
- Define and implement mappings of large-scale inference workloads onto NVIDIA’s systems.
- Extend and integrate with NVIDIA’s software ecosystem, contributing to libraries, tooling, and interfaces that enable seamless deployment of models across platforms.
- Benchmark, profile, and monitor key performance and efficiency metrics to ensure the compiler generates efficient mappings of neural network graphs to our inference hardware.
- Collaborate closely with hardware architects and design teams to feed back software observations, influence future architectures, and co-design features that unlock new performance and efficiency points.
- Prototype and evaluate new compilation and runtime techniques, including graph transformations, scheduling strategies, and memory/layout optimizations tailored to spatial processors.
- Publish and present technical work on novel compilation approaches for inference and related spatial accelerators at top-tier ML, compiler, and computer architecture venues.
What we need to see:
- Pursuing or recently completed an MS or PhD in Computer Science, Electrical/Computer Engineering, or a related field, or equivalent experience.
- A software engineering background with familiarity in systems-level programming (e.g., C/C++ and/or Rust) and solid CS fundamentals in data structures, algorithms, and concurrency.
- Hands-on experience with compiler or runtime development, including IR design, optimization passes, or code generation.
- Experience with LLVM and/or MLIR, including building custom passes, dialects, or integrations.
- Familiarity with deep learning frameworks such as TensorFlow and PyTorch, and experience working with portable graph formats such as ONNX.
- Understanding of parallel and heterogeneous compute architectures, such as GPUs, spatial accelerators, or other domain-specific processors.
- Strong analytical and debugging skills, with experience using profiling, tracing, and benchmarking tools to drive performance improvements.
- Excellent communication and collaboration skills, with the ability to work across hardware, systems, and software teams.
- Ideal candidates will have direct experience with MLIR-based compilers or other multilevel IR stacks, especially in the context of graph-based deep learning workloads.
Ways to stand out from the crowd:
- Prior work on spatial or dataflow architectures, including static scheduling, pipeline parallelism, or tensor parallelism at scale.
- Contributions to open-source ML frameworks, compilers, or runtime systems, particularly in areas related to performance or scalability.
- Demonstrated research impact, such as publications or presentations at conferences like PLDI, CGO, ASPLOS, ISCA, MICRO, MLSys, NeurIPS, or similar.
- Experience with large-scale distributed AI inference or training systems, including performance modeling and capacity planning for multi-rack deployments.
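As a small illustration of the "static scheduling" and "pipeline parallelism" called out above, the sketch below computes a GPipe-style fill-and-drain forward schedule: micro-batch m runs on pipeline stage s at time step m + s. The function name and structure are hypothetical, not any real scheduler.

```python
# Minimal static fill-and-drain pipeline schedule (GPipe-style forward pass).
# Hypothetical sketch: micro-batch m occupies stage s at time step m + s.

def pipeline_schedule(num_stages: int, num_microbatches: int) -> dict:
    """Return {time_step: [(stage, microbatch), ...]} for the forward pass."""
    schedule: dict[int, list] = {}
    for m in range(num_microbatches):
        for s in range(num_stages):
            schedule.setdefault(m + s, []).append((s, m))
    return schedule

sched = pipeline_schedule(num_stages=3, num_microbatches=4)
# Total steps = num_stages + num_microbatches - 1: the pipeline fills,
# reaches steady state, then drains.
total_steps = max(sched) + 1
```

With 3 stages and 4 micro-batches this yields 6 time steps; at step 2 all three stages are busy at once, which is the steady-state overlap pipeline parallelism exists to exploit.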
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you're creative and autonomous, we want to hear from you!
Your base salary will be determined by your location, experience, and the pay of employees in similar positions. The base salary range is 124,000 USD - 195,500 USD for Level 2 and 152,000 USD - 241,500 USD for Level 3.
You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until May 9, 2026.
This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
Required skills:
- Machine learning
- Compilers
- Inference optimization
- Runtime systems
- Performance profiling
- Algorithms
- Systems programming
- Deep learning