
Solutions Architect - CPU and LPU

NVIDIA

China · On-site · Full-time · 2w ago

NVIDIA’s Solutions Architect team is looking for a software-focused Solutions Architect to drive adoption of next-generation AI infrastructure across NVIDIA CPU platforms and LPU-based inference systems. This role will focus on NVIDIA CPUs, including Grace, Vera, and future CPU generations, and on LPU platforms and LPX-class systems used to accelerate large language model inference and other latency-sensitive generative AI workloads. We are seeking someone who understands that AI efficiency is a full-stack challenge spanning model architecture, runtime, compiler, serving framework, host software, memory movement, and workload partitioning across CPU, GPU, and LPU.

As a Solutions Architect, you will be the first line of technical expertise between NVIDIA and our customers for CPU- and LPU-centric AI system design. You will help customers understand how NVIDIA CPUs and LPU-based systems can improve the efficiency, latency, throughput, and total cost of their AI workloads, especially when deployed alongside NVIDIA GPUs in heterogeneous production environments. Your work will range from proof-of-concept development and software stack optimization to technical leadership with customer architects, engineering teams, and senior decision makers. You will engage directly with developers, ML engineers, researchers, platform architects, and IT leaders to identify bottlenecks, design optimization strategies, and build deployable reference architectures. You will also work closely with NVIDIA engineering, product, and field teams to translate customer needs into platform feedback, solution patterns, and roadmap inputs.

What you’ll be doing:

  • Evangelize NVIDIA CPU platforms, including Grace, Vera, and future generations, as well as LPU-based systems and LPX-class platforms, with a strong focus on AI software stacks and workload efficiency.

  • Help customers design and optimize AI workloads across CPU, GPU, and LPU, improving latency, throughput, utilization, and overall cost efficiency.

  • Analyze and tune LLM and generative AI pipelines across serving, runtime, memory, I/O, batching, scheduling, and orchestration layers.

  • Build proof-of-concepts, reference architectures, and technical guidance in partnership with Engineering, Product, and Sales teams.

  • Establish trusted technical relationships with customer architects, infrastructure teams, and senior leaders, becoming a strategic advisor for heterogeneous AI system design.

What we need to see:

  • MS or PhD in Computer Science, Engineering, Mathematics, Physics, or a related field, or equivalent experience, plus 5+ years in AI systems, infrastructure, performance engineering, or solution architecture.

  • Strong understanding of modern CPU architecture, Linux systems, and software performance tuning, along with hands-on experience in AI inference for LLM, generative AI, or agentic AI workloads.

  • Experience optimizing heterogeneous systems involving CPUs and accelerators, with familiarity in frameworks such as PyTorch, Triton, TensorRT-LLM, vLLM, or ONNX Runtime.

  • Strong programming, problem-solving, and communication skills, with the ability to work effectively with both technical teams and senior customer stakeholders.

Ways to stand out from the crowd:

  • Experience with NVIDIA CPU platforms such as Grace, Grace Hopper, or Arm64 server environments, and familiarity with LPU-based systems or other low-latency inference accelerators.

  • Deep expertise in LLM inference optimization, serving architecture, and workload placement across CPU, GPU, and LPU.

  • Experience building customer-facing proof-of-concepts and measuring AI efficiency through latency, throughput, cost per token, power, or utilization.

  • Familiarity with NVIDIA AI software and platform technologies.

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-looking and talented people in the world working with us. If you are creative, autonomous, and excited about helping customers build highly efficient AI platforms across CPU, GPU, and LPU technologies, we want to hear from you.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. We highly value diversity in our current and future employees and do not discriminate on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.


About NVIDIA

NVIDIA · Public

A computing platform company operating at the intersection of graphics, HPC, and AI.

Employees: 10,001+
Headquarters: Santa Clara
Valuation: $4.57T

Reviews

4.1 (10 reviews)

Work-life balance: 3.5
Compensation: 4.2
Culture: 4.3
Career growth: 4.5
Management: 4.0

75% would recommend to a friend

Pros

Great culture and supportive environment
Smart colleagues and excellent people
Cutting-edge technology and learning opportunities

Cons

Team-dependent experience and outcomes
Work-life balance issues with long hours
Politics and influence over competence

Salary Ranges

73 data points

Junior/L3 · Mid/L4

Junior/L3 · Analyst (7 reports)

Total compensation: $170,275
Base salary: $130,981
Stock: -
Bonus: -
Range: $155,480 – $234,166

Interview Experience

7 interviews

Difficulty: 3.1 / 5

Experience:
Positive 0%
Neutral 86%
Negative 14%

Interview Process

1. Application Review
2. Recruiter Screen
3. Online Assessment
4. Technical Interview
5. System Design Interview
6. Team Review

Frequently Asked Question Topics

Coding/Algorithm

System Design

Technical Knowledge

Behavioral/STAR