
Senior Product Architect, Storage

NVIDIA

US · On-site · Full-time · 1mo ago

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by phenomenal technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

As an AI Storage Platform Architect at NVIDIA, you will be the linchpin between cutting-edge hardware platforms and real-world AI deployments - translating the capabilities of Rubin GPUs, Vera CPUs, BlueField DPUs, NVLink fabric, and Spectrum-X networking into validated, production-ready blueprints. You will work hand-in-hand with storage ecosystem partners to co-develop reference architectures for the NVIDIA AI Data Platform and beyond, ensuring that every layer of the stack - compute, fabric, memory, and storage - is optimized for modern AI workloads!

What you’ll be doing:

  • Architect end-to-end reference architectures for disaggregated inference (aligned with NVIDIA Dynamo), large-scale foundation model training, and agentic AI pipelines - co-developed with storage and ecosystem partners.

  • Design and validate storage-optimized AI infrastructure, including KV Cache tiering strategies, checkpoint acceleration, and high-throughput dataset pipelines that leverage RDMA and NVMeoF fabrics.

  • Define system-level architectures spanning Rubin GPUs, Vera CPUs, BlueField DPUs, NVLink interconnects, and Spectrum-X Ethernet to improve efficiency across the full AI lifecycle.

  • Develop and publish reference architectures, whitepapers, and deployment guides for the NVIDIA AI Data Platform and partner-integrated solutions.

  • Drive prototyping, benchmarking, and performance validation of AI infrastructure at scale - diagnosing bottlenecks across compute, networking, and storage layers.

  • Leverage DOCA to architect DPU-offloaded data services including storage acceleration, telemetry, security enforcement, and network virtualization.

  • Collaborate with RAG and autonomous AI teams to build retrieval-optimized storage architectures, including vector database integration, low-latency object access patterns, and inference-aware caching.

  • Partner with customers and collaborators in the ecosystem to co-innovate, deliver proof-of-concepts (POCs) and MVPs that demonstrate end-to-end AI platform performance leadership.

What we need to see:

  • 12+ years of experience architecting datacenter-scale AI, HPC, or storage infrastructure as a Principal Architect, Solutions Architect, Principal Engineer, or equivalent.

  • Bachelor's degree in Computer Science or a related field (or equivalent experience).

  • Deep expertise in building AI infrastructure, including disaggregated inference architectures, LLM training pipelines, and autonomous AI system patterns.

  • Hands-on experience with RDMA (RoCEv2/InfiniBand), high-performance storage protocols (NVMeoF, GPFS, Lustre, or S3-compatible object storage), and low-latency fabric design.

  • Strong understanding of KV Cache management strategies, including tiered memory/storage hierarchies for inference optimization.

  • Familiarity with Retrieval-Augmented Generation (RAG) architectures and the storage, indexing, and retrieval patterns they demand at scale.

  • Experience with NVIDIA DOCA or equivalent DPU/SmartNIC programming frameworks for offloading data plane and storage services.

  • Proven foundation in networking: Spectrum-X Ethernet, InfiniBand, NVLink Switch fabrics, congestion control, and datacenter topologies.

Ways to stand out from the crowd:

  • Proven experience designing reference architectures jointly with storage or infrastructure OEM partners (e.g., NetApp, DDN, VAST, Pure Storage, Dell, or similar).

  • Hands-on deployment experience with disaggregated inference systems, including prefill/decode separation, KV Cache offload, and request routing.

  • Deep familiarity with NVIDIA Grace-Hopper, Grace-Blackwell, or upcoming Vera-Rubin platforms and their system-level implications for AI workloads.

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you're creative and autonomous, we want to hear from you!

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 224,000 USD - 356,500 USD.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until March 17, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.


About NVIDIA

NVIDIA (Public)

A computing platform company operating at the intersection of graphics, HPC, and AI.

Employees: 10,001+

Headquarters: Santa Clara

Valuation: $4.57T

Reviews

4.1 (10 reviews)

Work-life balance: 3.5

Compensation: 4.2

Culture: 4.3

Career: 4.5

Management: 4.0

75% would recommend to a friend

Pros

Great culture and supportive environment

Smart colleagues and excellent people

Cutting-edge technology and learning opportunities

Cons

Team-dependent experience and outcomes

Work-life balance issues with long hours

Politics and influence over competence

Salary ranges (73 data points)

Levels: Junior/L3, Mid/L4

Junior/L3 · Analyst (7 reports)

Total compensation: $170,275

Base salary: $130,981

Stock: -

Bonus: -

$155,480 - $234,166

Interview experience (7 interviews)

Difficulty: 3.1 / 5

Experience: Positive 0% · Neutral 86% · Negative 14%

Interview process

1. Application Review
2. Recruiter Screen
3. Online Assessment
4. Technical Interview
5. System Design Interview
6. Team Review

Frequently asked questions

Coding/Algorithm

System Design

Technical Knowledge

Behavioral/STAR