Deep Learning Software Engineer, TensorRT Performance - New College Grad 2026
US · On-site · Full-time · 2w ago
NVIDIA is seeking a Deep Learning Software Engineer passionate about analyzing and improving the performance of NVIDIA’s inference ecosystem. We are rapidly growing our research and development for deep learning inference and are looking for excellent software engineers at all levels of expertise to join our team. Companies around the world use NVIDIA GPUs to power a revolution in deep learning, enabling breakthroughs in areas like generative AI, recommenders, and vision that have put DL into nearly every software solution. Join the team that builds the software enabling the performance optimization, deployment, and serving of these DL inference solutions. We specialize in developing GPU-accelerated deep learning inference software like TensorRT, DL benchmarking software, and performant solutions for deploying and serving these models.
Collaborate with the deep learning community to integrate TensorRT into OSS frameworks like TensorRT-EdgeLLM and PyTorch. Identify performance opportunities and optimize SoTA models across the spectrum of NVIDIA accelerators, from datacenter GPUs to edge SoCs. Implement graph compiler algorithms, frontend operators, and code generators across NVIDIA’s inference ecosystem. Collaborate with a diverse set of teams on workflow improvements, performance modeling, performance analysis, kernel development, and inference software development.
What you'll be doing:
- Establish groundbreaking performance benchmarking methodologies and analysis workflows, and identify performance issues and opportunities across NVIDIA’s inference ecosystem (e.g. TensorRT/TensorRT-EdgeLLM/Torch-TensorRT).
- Contribute features and code to NVIDIA/OSS inference frameworks, including but not limited to TensorRT/TensorRT-EdgeLLM/Torch-TensorRT.
- Develop new, performance-optimized model pipelines for NVIDIA’s inference ecosystem, in areas including but not limited to quantization, scheduling, memory management, and distributed inference, to set the gold standard for Gen AI performance.
- Work with cross-functional teams inside and outside of NVIDIA across generative AI, automotive, robotics, image understanding, and speech understanding to set directions and develop innovative inference solutions.
- Scale performance of deep learning models across different architectures and types of NVIDIA accelerators.
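The benchmarking workflows this role describes typically follow a warmup-then-measure pattern with percentile latency reporting. A minimal pure-Python sketch of that pattern (illustrative only, not NVIDIA's actual tooling; real GPU benchmarking would additionally need device synchronization around each timed call):

```python
import statistics
import time

def benchmark(fn, *, warmup=10, iters=100):
    """Time `fn` with a warmup phase, then report latency percentiles.

    Warmup iterations are discarded so cold-start effects (caches,
    JIT compilation, allocator warm-up) don't skew the measurements.
    """
    for _ in range(warmup):
        fn()
    samples_ms = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        samples_ms.append((time.perf_counter() - start) * 1e3)
    samples_ms.sort()
    return {
        "p50_ms": statistics.median(samples_ms),
        "p99_ms": samples_ms[min(iters - 1, int(iters * 0.99))],
        "mean_ms": statistics.mean(samples_ms),
    }

stats = benchmark(lambda: sum(range(10_000)))
```

Reporting tail latency (p99) alongside the median matters for inference serving, where occasional slow iterations dominate user-visible behavior.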
What we need to see:
- Bachelor's, Master's, PhD, or equivalent experience in a relevant field (Computer Science, Computer Engineering, EECS, AI).
- 2 years of relevant software development experience.
- Strong C++ and Python programming and software engineering skills.
- Experience with DL frameworks (e.g. PyTorch, JAX, TensorFlow, ONNX) and inference libraries (e.g. TensorRT, TensorRT-LLM, vLLM, SGLang, FlashInfer).
- Experience with performance analysis and performance optimization.
Ways to stand out from the crowd:
- Strong foundation in and architectural knowledge of GPUs.
- Deep understanding of modern deep learning models and workloads (e.g. Transformers, recommenders, ASR, TTS, visual understanding).
- Proficiency in one of the deep learning domain-specific languages (e.g. CUDA, TileIR, CuTeDSL, CUTLASS, Triton).
- Prior contributions to major LLM inference frameworks (e.g. vLLM) or prior experience with graph compilers for deep learning inference (e.g. TorchDynamo/TorchInductor).
- Prior experience optimizing performance for low-latency, resource-constrained systems or embedded AI pipelines (e.g. Jetson systems or other edge AI accelerators).
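On the resource-constrained systems mentioned above, int8 quantization is one of the standard levers for trading a small amount of precision for throughput and memory savings. A minimal sketch of symmetric per-tensor int8 quantization (illustrative only; production toolchains such as TensorRT's calibration flows handle this per-channel, with real calibration data, and in optimized kernels):

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization.

    The scale is chosen so the largest-magnitude value maps to 127;
    every value is then rounded to the nearest representable step.
    """
    amax = max(abs(v) for v in values)
    scale = amax / 127.0 if amax else 1.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Map int8 codes back to approximate floating-point values."""
    return [x * scale for x in q]

weights = [0.02, -1.27, 0.5, 1.27]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Round-trip error is bounded by half a quantization step (scale / 2).
assert all(abs(a - w) <= scale / 2 for a, w in zip(approx, weights))
```

The per-tensor scale is the key design choice: a single outlier weight inflates `amax` and coarsens the grid for every other value, which is why per-channel scales and outlier-aware schemes are common in practice.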
GPU deep learning has provided the foundation for machines to learn, perceive, reason and solve problems posed using human language. The GPU started out as the engine for simulating human imagination, conjuring up the amazing virtual worlds of video games and Hollywood films. Now, NVIDIA's GPU runs deep learning algorithms, simulating human intelligence, and acts as the brain of computers, robots and self-driving cars that can perceive and understand the world. Just as human imagination and intelligence are linked, computer graphics and artificial intelligence come together in our architecture. Two modes of the human brain, two modes of the GPU. This may explain why NVIDIA GPUs are used broadly for deep learning, and NVIDIA is increasingly known as “the AI computing company.” Come, join our DL Architecture team, where you can help build a real-time, cost-effective computing platform driving our success in this exciting and quickly growing field.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 124,000 USD - 195,500 USD for Level 2, and 152,000 USD - 241,500 USD for Level 3.
You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until April 7, 2026.
This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.