Hiring

AI Inference Performance Engineer - New College Grad 2026
Santa Clara, CA, US · On-site · Full-time · Posted 1 month ago
Required Skills
Python
PyTorch
We optimize and benchmark GenAI inference on NVIDIA's latest accelerators, defining the industry's performance standards across language models, video generation, and speech workloads. We work directly within TensorRT-LLM, SGLang, and vLLM, building the tools that evaluate serving performance at scale. This team sits at the intersection of GPU performance engineering and publicly accountable benchmarking.
What You Will Be Doing:
- Drive industry benchmark results: own the end-to-end optimization pipeline; implement and integrate optimizations in quantization, scheduling, memory management, and distributed inference across TensorRT-LLM, SGLang, and vLLM.
- Define and optimize cutting-edge workloads: identify and shape next-generation inference benchmarks, including multi-turn coding, agentic workflows, and other emerging AI use cases. Collaborate with framework and kernel teams to push performance to its extreme on large-scale LLM-MoE models, vision-language models, video diffusion models, recommendation, and speech workloads.
- Architect distributed inference: design and optimize execution from single GPUs to rack-scale clusters, managing performance across clusters of GPUs.
- Establish performance methodology: apply roofline analysis and systematic profiling to decompose bottlenecks across CUDA kernels, frameworks, and serving layers.
- Influence the ecosystem: contribute to TensorRT-LLM, vLLM, SGLang, and other open-source projects. Partner with architecture, kernel, and compiler teams to shape GPU roadmaps based on real workload data.
- Provide technical leadership: raise the technical bar for the team, drive cross-functional execution on tight benchmark timelines, and lead a world-class team.
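The roofline analysis named above can be sketched in a few lines: compare an op's arithmetic intensity (FLOPs per byte moved) against the machine's ridge point to decide whether it is memory- or compute-bound. The peak figures below are illustrative assumptions, not the specs of any particular GPU.

```python
# Toy roofline classifier. PEAK_FLOPS and PEAK_BW are assumed,
# illustrative machine peaks -- not real hardware numbers.
PEAK_FLOPS = 1.0e15  # assumed peak compute, FLOP/s
PEAK_BW = 3.0e12     # assumed peak memory bandwidth, B/s


def roofline_bound(flops: float, bytes_moved: float) -> str:
    """Classify an op as memory- or compute-bound via arithmetic intensity."""
    intensity = flops / bytes_moved      # FLOP per byte actually moved
    ridge = PEAK_FLOPS / PEAK_BW         # ridge point of the roofline
    return "compute-bound" if intensity >= ridge else "memory-bound"


# Example: a decode-phase fp16 matvec (M = K = 8192) does ~2*M*K FLOPs
# but must stream the whole weight matrix (M*K*2 bytes) -> ~1 FLOP/B,
# far below the ridge point, hence memory-bound.
print(roofline_bound(2 * 8192 * 8192, 8192 * 8192 * 2))
```

This is why decode throughput on large models tends to track memory bandwidth rather than peak FLOPs, and why batching and quantization (which raise intensity or shrink bytes moved) feature so prominently in the responsibilities above.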
What We Need To See:
- BS, MS, or PhD in Computer Science, Computer Engineering, Electrical Engineering, or equivalent experience.
- 2+ years of relevant software development experience.
- Strong Python or C++ programming, software design, and software engineering skills.
- Expertise with a DL framework such as PyTorch or JAX.
- Proven track record of delivering measurable performance improvements in deep learning inference or high-performance systems.
- Deep understanding of LLM/VLM architectures and inference mechanics: attention, KV caching, batching strategies, decode-phase bottlenecks, speculative decoding, disaggregated serving, etc.
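As a rough illustration of the KV-caching mechanic listed above: in autoregressive decode, the keys and values of past tokens are stored once and reused, so each step attends over the cache instead of recomputing the whole prefix. The following is a toy pure-Python sketch; no real framework API is implied.

```python
import math


class KVCache:
    """Toy per-sequence KV cache: one key/value vector appended per token."""

    def __init__(self):
        self.keys: list[list[float]] = []
        self.values: list[list[float]] = []

    def append(self, k: list[float], v: list[float]) -> None:
        # One new token's projections per decode step; old entries are reused.
        self.keys.append(k)
        self.values.append(v)

    def __len__(self) -> int:
        return len(self.keys)


def attend(query: list[float], cache: KVCache) -> list[float]:
    """Softmax dot-product attention of one query over all cached tokens."""
    scores = [sum(q * ki for q, ki in zip(query, k)) for k in cache.keys]
    m = max(scores)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(cache.values[0])
    return [sum(w * v[d] for w, v in zip(weights, cache.values))
            for d in range(dim)]
```

Per decode step this costs O(cache length) attention work plus one append, instead of reprocessing the entire prefix; the memory the cache consumes is in turn what batching strategies and disaggregated serving have to manage.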
Ways To Stand Out From The Crowd:
- Prior experience with an LLM framework (TensorRT-LLM, vLLM, SGLang, etc.) or a DL compiler in inference, deployment, algorithms, or implementation.
- Prior experience with performance modeling, profiling, debugging, and code optimization of a DL/HPC/high-performance application.
- Experience with scale-out inference orchestration (MPI, NCCL, Kubernetes) on large GPU clusters.
- Expertise in kernel development (CUTLASS, CuTe DSL, TileLang, OpenAI Triton) or compiler/runtime paths (torch.compile, graph lowering, operator fusion); architectural knowledge of CPU, GPU, FPGA, or other DL accelerators; GPU programming experience (CUDA).
- Track record of leading ambiguous, high-impact technical programs across multiple teams under tight deadlines.
GPU deep learning has provided the foundation for machines to learn, perceive, reason and solve problems posed using human language. The GPU started out as the engine for simulating human imagination, conjuring up the outstanding virtual worlds of video games and Hollywood films. Now, NVIDIA's GPU runs deep learning algorithms, simulating human intelligence, and acts as the brain of computers, robots and self-driving cars that can perceive and understand the world. Just as human imagination and intelligence are linked, computer graphics and artificial intelligence come together in our architecture. Two modes of the human brain, two modes of the GPU. This may explain why NVIDIA GPUs are used broadly for deep learning, and NVIDIA is increasingly known as “the AI computing company.” Come, join our DL Architecture team, where you can help build the real-time, cost-effective computing platform driving our success in this exciting and quickly growing field.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 124,000 USD - 195,500 USD for Level 2, and 152,000 USD - 241,500 USD for Level 3.
You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until March 9, 2026.
This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.