
Pioneering accelerated computing and AI
Senior GPU Networking Architect
NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.
We are looking for a Senior GPU Networking Architect to join our networking software group, bringing strong GPU architecture and programming skills to build and improve GPU communication kernels. This role bridges GPU computing and networking, ensuring communication primitives are co-designed with GPU hardware capabilities. Join our team of engineers developing the software foundation for the world's largest AI systems.
What you will be doing:
- Build, implement, and optimize GPU communication kernels that underpin collective and point-to-point operations in large-scale AI systems.
- Leverage deep knowledge of GPU architecture—thread scheduling, memory hierarchy, execution pipelines—to improve kernel efficiency, minimize latency, and overlap computation with communication.
- Develop GPU-resident communication primitives and device-side APIs that enable fine-grained, kernel-initiated data movement across nodes and accelerators.
- Profile and tune GPU kernels end-to-end, identifying bottlenecks at the intersection of compute, memory, and network, and driving targeted optimizations.
- Collaborate with network software, hardware, and AI framework teams to co-design communication strategies that align with GPU execution patterns and emerging model architectures.
- Build proofs-of-concept, conduct experiments, and perform quantitative modeling to evaluate and validate new communication strategies before committing them to production.
- Contribute to the evolution of programming models that expose GPU-aware networking capabilities to application developers.
What we need to see:
- 5+ years of hands-on CUDA programming, including writing and optimizing non-trivial GPU kernels.
- M.Sc. or equivalent experience in computer science, computer engineering, or a closely related field.
- Strong understanding of GPU architecture fundamentals: warp scheduling, shared memory, L2 cache, memory coalescing, occupancy tuning, and asynchronous execution.
- Experience with systems-level C/C++ development in performance-critical environments.
- Familiarity with GPU data movement mechanisms such as GPUDirect RDMA and GPU-initiated communication.
- Ability to read and reason about GPU performance profiles (e.g., Nsight Compute, Nsight Systems) and translate observations into actionable optimizations.
- Strong collaboration skills in a multi-national, interdisciplinary environment.
Ways to stand out from the crowd:
- Experience developing or optimizing communication kernels in libraries such as NCCL, NVSHMEM, or similar GPU-aware communication frameworks.
- Understanding of distributed deep learning parallelism techniques—including data parallelism, tensor parallelism, pipeline parallelism, expert parallelism, and mixture-of-experts parallelism—and the communication patterns they impose on GPU kernels.
- Background in RDMA, InfiniBand, high-speed networking, and GPU system topology, including NVLink, NVSwitch, PCIe, and network fabrics, and their impact on communication kernel design.
- Experience with overlap techniques such as kernel pipelining, persistent kernels, or cooperative groups to hide communication latency behind compute.
- Proven experience evaluating and optimizing large-scale LLM training or inference workloads, including hands-on work with frameworks such as PyTorch, TensorRT-LLM, or vLLM, and familiarity with emerging serving architectures such as disaggregated serving.
At NVIDIA, you'll work alongside colleagues who demonstrate deep expertise and innovative thinking in the industry, pushing the boundaries of what's possible in AI and high-performance computing. If you're passionate about GPU architecture, low-level kernel optimization, and building the communication fabric for next-generation AI, we want to hear from you!
Widely considered to be one of the technology world’s most desirable employers, NVIDIA offers highly competitive salaries and a comprehensive benefits package. As you plan your future, see what we can offer you and your family at www.nvidiabenefits.com/
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. For Poland: The base salary range is 292,500 PLN - 507,000 PLN for Level 4, and 375,000 PLN - 650,000 PLN for Level 5.