Hiring
NVIDIA is a global leader in physical AI, powering self-driving cars, humanoid robots, intelligent environments, and medical devices. Our software platforms are central to this mission, helping innovators build products that save lives, improve working conditions, and raise living standards worldwide. We are hiring a Senior Engineer to join our team as a technical authority in deep learning inference optimization for autonomous vehicles and robotics on edge hardware. This role calls for a hands-on expert who can inspect model architectures down to the operator level, uncover performance bottlenecks through kernel traces, and evaluate how modern architectures (transformers, vision-language models, diffusion/flow matching, state space models) behave on GPUs and SoCs. The work directly shapes how autonomous vehicles and robots perceive and respond in the real world, with immediate impact.
This team tackles some of the toughest optimization problems in the industry, working at the intersection of novel model architectures, compiler technology, and embedded hardware. We partner closely with automotive OEMs, robotics collaborators, and internal hardware teams to push the limits of what edge devices can achieve.
What you'll be doing:
- Address customer and partner optimization challenges: Engage directly with leading automotive OEMs and robotics partners to analyze, debug, and optimize their deep learning models on NVIDIA platforms. We emphasize delivering solutions, not just recommendations.
- Own performance benchmarking: Drive efforts toward leading results on MLPerf Edge and other industry benchmarks, as well as closed-source engagements with key partners. Define methodology, ensure reproducibility, and turn results into actionable optimization priorities.
- Evaluate emerging model architectures: Analyze new DL architectures, including vision encoders, multi-modal VLMs, hybrid SSM-Transformer backbones, diffusion/flow matching decoders, and multi-camera tokenizers, for compilation feasibility, memory footprint, and latency on target SoCs.
- Collaborate across teams: Partner with our compiler, runtime, and hardware teams to connect model-level insight with platform capabilities.
- Contribute to build reviews and help shape internal roadmap priorities based on real customer workload patterns.
- Represent NVIDIA externally: Share our deep learning optimization expertise at conferences, webinars, and partner events. Elevate the broader team by bringing back insights and establishing guidelines.
- Deliver TensorRT and compiler-stack solutions for edge: Build and deploy inference solutions on Jetson, DRIVE, and GPU + Arm platforms for AV and robotics workloads. Develop Proofs of Readiness (PORs) and work closely with our compiler team on Torch-TRT, MLIR-TRT, and related frameworks to close performance gaps.
What we need to see:
- Master’s degree or equivalent experience in Computer Science, Electrical Engineering, or a related field.
- 12+ years of industry experience, with 8+ years in deep learning model optimization, inference engineering, or neural network compilation. You should be adept at interpreting and reasoning about model architectures at the operator/kernel level, not just running them.
- 5+ years of proven expertise in embedded/edge software, with experience delivering production inference solutions in power-limited, latency-sensitive deployment environments.
- Deep knowledge of current DL architectures: transformers, attention variants, vision encoders (ViT), multi-modal/vision-language model frameworks, and experience with diffusion models and/or state space models.
- Expert knowledge of GPU architecture fundamentals, CUDA, and low-level performance optimization on heterogeneous computing platforms. Experience with TensorRT, compiler IRs, or equivalent inference optimization toolchains.
- Solid understanding of embedded operating system internals (QNX/Linux), memory management, C/C++, and embedded/system software concepts.
- Background in parallel programming (e.g., CUDA, OpenMP) and experience reasoning about memory hierarchies, data movement, and compute utilization.
- Demonstrated ability to work directly with external partners and customers in a deeply technical role: solving their workload issues, identifying performance problems, and delivering solutions within production constraints.
Ways to Stand Out from the Crowd:
- Experience with ML compiler frameworks (TVM, MLIR, XLA, Triton) or contributing to inference runtime development.
- Production deployment experience with autonomous vehicle perception or planning stacks, understanding the full pipeline from sensor input through trajectory output.
- Familiarity with the Physical AI model landscape: VLM + action expert architectures, end-to-end driving models, or robot foundation models.
- Contributions to MLPerf benchmarks and large-scale industry performance optimization efforts.
- Experience with automotive safety standards (ISO 26262, SOTIF) and their implications for inference system development.
- Experience leading technical initiatives across globally distributed engineering teams.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 225,000 CAD – 275,000 CAD.
You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until March 2, 2026.
This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.