NVIDIA

Pioneering accelerated computing and AI

AI Inference Performance Engineer

Job Category: Machine Learning
Experience: Mid-level
Location: United States, Canada
Work Arrangement: On-site
Employment Type: Full-time
Posted: 1 month ago

Required Skills

Python

PyTorch

We optimize and benchmark GenAI inference on NVIDIA's latest accelerators, defining the industry's performance standards across language models, video generation, and speech workloads. We work directly within TensorRT-LLM, SGLang, and vLLM, building the tools that evaluate serving performance at scale. This team sits at the intersection of GPU performance engineering and publicly accountable benchmarking.

What You Will Be Doing:

  • Drive industry benchmark results: Own the end-to-end optimization pipeline, implementing and integrating optimizations in quantization, scheduling, memory management, and distributed inference across TensorRT-LLM, SGLang, and vLLM.

  • Define and optimize cutting-edge workloads: Identify and shape next-generation inference benchmarks such as multi-turn coding, agentic workflows, and other emerging AI use cases. Collaborate with framework and kernel teams to push performance to its limits on large-scale LLM-MoE models, vision-language models, video diffusion models, recommendation, and speech workloads.

  • Architect distributed inference: Design and optimize execution paths from a single GPU up to rack-scale clusters.

  • Establish performance methodology: Apply roofline analysis and systematic profiling to decompose bottlenecks across CUDA kernels, frameworks, and serving layers (see the roofline sketch after this list).

  • Influence the ecosystem: Contribute to TensorRT-LLM, vLLM, SGLang, and other open-source projects. Partner with architecture, kernel, and compiler teams to shape GPU roadmaps based on real workload data.

  • Provide technical leadership: Raise the technical bar, drive cross-functional execution on tight benchmark timelines, and lead a world-class team.
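
A minimal sketch of the roofline methodology named above, assuming hypothetical hardware numbers (PEAK_FLOPS, PEAK_BW) and an illustrative decode-phase GEMV; none of the figures describe a real NVIDIA part:

    # Roofline model: attainable throughput is capped by either peak compute
    # or (arithmetic intensity x memory bandwidth), whichever is lower.
    PEAK_FLOPS = 1.0e15           # peak compute, FLOP/s (placeholder)
    PEAK_BW = 3.0e12              # HBM bandwidth, bytes/s (placeholder)
    RIDGE = PEAK_FLOPS / PEAK_BW  # FLOP/byte where the roof flattens

    def attainable(flops: float, bytes_moved: float) -> float:
        """min(peak compute, arithmetic intensity * bandwidth)."""
        return min(PEAK_FLOPS, (flops / bytes_moved) * PEAK_BW)

    # Decode-phase GEMV (batch 1): every fp16 weight is read once per token,
    # so intensity is ~1 FLOP/byte -- far below the ridge, i.e. memory-bound.
    m, n = 4096, 4096
    flops = 2 * m * n             # one multiply-accumulate per weight
    bytes_moved = m * n * 2       # fp16 weights, read once
    print(f"ridge point: {RIDGE:.0f} FLOP/byte")
    print(f"attainable:  {attainable(flops, bytes_moved):.2e} FLOP/s")

Operations left of the ridge point are bandwidth-limited, which is why decode-phase optimization typically targets memory traffic (quantization, KV-cache layout) rather than raw FLOPs.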

What We Need To See:

  • BS, MS, or PhD in Computer Science, Computer Engineering, Electrical Engineering, or equivalent experience.

  • 5+ years of relevant software development experience.

  • Strong Python or C++ programming, software design, and software engineering skills.

  • Expertise with a DL framework such as PyTorch or JAX.

  • Proven track record of delivering measurable performance improvements in deep learning inference or high-performance systems.

  • Deep understanding of LLM/VLM architectures and inference mechanics: attention, KV caching, batching strategies, decode-phase bottlenecks, speculative decoding, disaggregated serving, etc. (see the sizing sketch below).
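
As a concrete illustration of why KV caching dominates serving memory, here is a back-of-the-envelope sizing sketch, assuming standard multi-head attention with fp16 keys/values and 7B-class model shapes (all values illustrative, not tied to any specific deployment):

    # Each cached token stores K and V (hence the 2x) for every layer.
    def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                       seq_len: int, batch: int, bytes_per_elem: int = 2) -> int:
        per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
        return per_token * seq_len * batch

    # 32 layers, 32 KV heads, head_dim 128, 4K context, batch of 8 requests:
    size_gib = kv_cache_bytes(32, 32, 128, seq_len=4096, batch=8) / 2**30
    print(f"KV cache: {size_gib:.1f} GiB")  # ~16 GiB of HBM before weights

At these sizes the cache, not the weights, caps batch size, which is what motivates paged and quantized KV caches and the batching strategies listed above.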

Ways To Stand Out From The Crowd:

  • Prior experience with an LLM framework (TensorRT-LLM, vLLM, SGLang, etc.) or a DL compiler in inference, deployment, algorithms, or implementation.

  • Prior experience with performance modeling, profiling, debugging, and code optimization of DL or HPC applications.

  • Experience with scale-out inference orchestration (MPI, NCCL, K8S) on large GPU clusters.

  • Expertise in kernel development (CUTLASS, cuteDSL, tilelang, OpenAI Triton) or compiler/runtime paths (torch.compile, graph lowering, operator fusion). Architectural knowledge of CPU, GPU, FPGA or other DL accelerators; GPU programming experience (CUDA).

  • Track record of leading ambiguous, high-impact technical programs across multiple teams under tight deadlines.

GPU deep learning has provided the foundation for machines to learn, perceive, reason and solve problems posed using human language. The GPU started out as the engine for simulating human imagination, conjuring up the outstanding virtual worlds of video games and Hollywood films. Now, NVIDIA's GPU runs deep learning algorithms, simulating human intelligence, and acts as the brain of computers, robots and self-driving cars that can perceive and understand the world. Just as human imagination and intelligence are linked, computer graphics and artificial intelligence come together in our architecture. Two modes of the human brain, two modes of the GPU. This may explain why NVIDIA GPUs are used broadly for deep learning, and NVIDIA is increasingly known as “the AI computing company.” Come, join our DL Architecture team, where you can help build the real-time, cost-effective computing platform driving our success in this exciting and quickly growing field.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 152,000 USD - 241,500 USD.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until March 13, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.


About NVIDIA

NVIDIA

Public

A computing platform company operating at the intersection of graphics, HPC, and AI.

Employees: 10,001+
Headquarters: Santa Clara
Valuation: $4.57T

Reviews

4.4 (10 reviews)

Work-life balance: 2.8
Compensation: 4.5
Company culture: 4.2
Career growth: 4.3
Management: 3.8
Would recommend to a friend: 78%

Pros

Cutting-edge technology and innovation

Excellent compensation and benefits

Great team culture and collaboration

Cons

High pressure and expectations

Poor work-life balance and long hours

Fast-paced environment leading to burnout

Salary Range

79 data points · Levels: L3, L4, L5

L3 · Data Scientist IC2 (0 reports)
Total compensation: $177,542 (range: $150,910 - $204,174)
Base: - · Stock: - · Bonus: -

Interview Reviews

5 reviews · Difficulty: 3.0 / 5

Interview Process

1. Application Review
2. Recruiter Screen
3. Technical Phone Screen
4. Onsite/Virtual Interviews
5. Team Matching
6. Offer

Commonly Asked Questions

Coding/Algorithm

System Design

Behavioral/STAR

Technical Knowledge

Past Experience