Hiring
NVIDIA’s Solutions Architect team is looking for a software-focused Solutions Architect to drive adoption of next-generation AI infrastructure across NVIDIA CPU platforms and LPU-based inference systems. This role will focus on NVIDIA CPUs, including Grace, Vera, and future CPU generations, and on LPU platforms and LPX-class systems used to accelerate large language model inference and other latency-sensitive generative AI workloads. We are looking for someone who understands that AI efficiency is a full-stack challenge spanning model architecture, runtime, compiler, serving framework, host software, memory movement, and workload partitioning across CPU, GPU, and LPU.
As a Solutions Architect, you will be the first line of technical expertise between NVIDIA and our customers for CPU- and LPU-centric AI system design. You will help customers understand how NVIDIA CPUs and LPU-based systems can improve the efficiency, latency, throughput, and total cost of their AI workloads, especially when deployed alongside NVIDIA GPUs in heterogeneous production environments. Your work will range from proof-of-concept development and software stack optimization to technical leadership with customer architects, engineering teams, and senior decision makers. You will engage directly with developers, ML engineers, researchers, platform architects, and IT leaders to identify bottlenecks, design optimization strategies, and build deployable reference architectures. You will also work closely with NVIDIA engineering, product, and field teams to translate customer needs into platform feedback, solution patterns, and roadmap inputs.
What you’ll be doing:
- Evangelize NVIDIA CPU platforms, including Grace, Vera, and future generations, as well as LPU-based systems and LPX-class platforms, with a strong focus on AI software stacks and workload efficiency.
- Help customers design and optimize AI workloads across CPU, GPU, and LPU, improving latency, throughput, utilization, and overall cost efficiency.
- Analyze and tune LLM and generative AI pipelines across serving, runtime, memory, I/O, batching, scheduling, and orchestration layers.
- Build proof-of-concepts, reference architectures, and technical guidance in partnership with Engineering, Product, and Sales teams.
- Establish trusted technical relationships with customer architects, infrastructure teams, and senior leaders, becoming a strategic advisor for heterogeneous AI system design.
What we need to see:
- MS or PhD in Computer Science, Engineering, Mathematics, Physics, or a related field, or equivalent experience, plus 5+ years in AI systems, infrastructure, performance engineering, or solution architecture.
- Strong understanding of modern CPU architecture, Linux systems, and software performance tuning, along with hands-on experience in AI inference for LLM, generative AI, or agentic AI workloads.
- Experience optimizing heterogeneous systems involving CPU and accelerators, with familiarity in frameworks such as PyTorch, Triton, TensorRT-LLM, vLLM, or ONNX Runtime.
- Strong programming, problem-solving, and communication skills, with the ability to work effectively with both technical teams and senior customer stakeholders.
Ways to stand out from the crowd:
- Experience with NVIDIA CPU platforms such as Grace, Grace Hopper, or Arm64 server environments, and familiarity with LPU-based systems or other low-latency inference accelerators.
- Deep expertise in LLM inference optimization, serving architecture, and workload placement across CPU, GPU, and LPU.
- Experience building customer-facing proof-of-concepts and measuring AI efficiency through latency, throughput, cost per token, power, or utilization.
- Familiarity with NVIDIA AI software and platform technologies.
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-looking and talented people in the world working with us. If you are creative, autonomous, and excited about helping customers build highly efficient AI platforms across CPU, GPU, and LPU technologies, we want to hear from you.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. We highly value diversity in our current and future employees and do not discriminate on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.
About NVIDIA
NVIDIA is a public computing platform company operating at the intersection of graphics, HPC, and AI, with 10,001+ employees, headquarters in Santa Clara, and a valuation of $4.57T.