
Pioneering accelerated computing and AI
Inference Optimization Architect, Speech AI
Widely considered one of the technology world’s most desirable employers, NVIDIA is an industry leader with groundbreaking developments in high-performance computing, artificial intelligence, and visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. GPU deep learning ignited modern AI, the next era of computing, with the GPU acting as the brain of computers, robots, autonomous cars, and conversational AI that can perceive and understand the world. Today, we are increasingly known as “the AI computing company.” We're looking to grow our company and build our teams with the smartest people in the world. Join us at the forefront of technological advancement.
NVIDIA is looking for an Inference Optimization Architect to accelerate and scale our Speech AI models and improve the experience of millions of customers. You will focus on reducing inference latency, improving throughput, and optimizing resource utilization across our AI infrastructure. If you're creative and passionate about solving real-world conversational AI problems, come join our Speech AI Engineering team.
What you’ll be doing:
- Optimize Inference Performance: Improve streaming latency and throughput through advanced batching strategies, encoder caching, and multi-threaded pipeline optimizations.
- Model Compression: Implement techniques including quantization, pruning, and knowledge distillation.
- Benchmarking: Profile and benchmark models to identify and resolve performance bottlenecks, including GPU profiling and debugging with Nsight Systems and Nsight Compute.
- Hardware Acceleration: Develop custom kernels and leverage hardware acceleration (CUDA, TensorRT, etc.).
- Infrastructure Design: Design and implement efficient serving infrastructure for speech models at scale.
- Collaboration: Work alongside model researchers to transition models from research to production readiness.
- Cross-Platform Optimization: Optimize inference across diverse GPU platforms (data center, edge devices).
- Tooling: Build frameworks for automated model optimization pipelines.
- Resource Management: Monitor and improve inference costs and resource utilization in production.
What we need to see:
- Master's or BE/BTech in computer science, computer architecture, or a related field.
- 10+ years of total experience, with 5+ years focused on performance optimization of deep learning model inference.
- Experience with inference pipelines for LLMs, speech recognition, and speech synthesis.
- CUDA kernel development: thread blocks, shared memory, synchronization.
- Model inference optimization: batching, dynamic shapes, latency tuning.
- Model serving and deployment: Triton, TorchServe, TensorRT, TensorRT-LLM, vLLM.
- Model optimization techniques: quantization, pruning, distillation.
- Computer architecture and operating systems: processes, threads, scheduling, memory management.
- Solid understanding of modern model architectures (Transformers, CNNs, RNNs).
Ways to stand out from the crowd:
- Publications or contributions to open-source projects such as PyTorch, JAX, or Triton (triton-lang).
- Experience with embedded systems or edge deployment.
- Strong collaborative and interpersonal skills, with a proven ability to guide and influence effectively within a dynamic matrix environment.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
About NVIDIA
NVIDIA (public): A computing platform company operating at the intersection of graphics, HPC, and AI.
- Employees: 10,001+
- Headquarters: Santa Clara
- Market value: $4.57T
Reviews
Overall: 4.4 / 5 (10 reviews)
- Work-life balance: 2.8
- Compensation: 4.5
- Culture: 4.2
- Career: 4.3
- Leadership: 3.8
- Would recommend to a friend: 78%
Pros:
- Cutting-edge technology and innovation
- Excellent compensation and benefits
- Great team culture and collaboration
Cons:
- High pressure and expectations
- Poor work-life balance and long hours
- Fast-paced environment leading to burnout
Salary information (79 data points)
Levels: L3, L4, L5
L3 · Data Scientist IC2 (0 reports)
- Total compensation: $177,542 ($150,910 to $204,174)
- Base: -
- Stock: -
- Bonus: -
Interview reviews (5 reviews)
Difficulty: 3.0 / 5
Interview process:
1. Application Review
2. Recruiter Screen
3. Technical Phone Screen
4. Onsite/Virtual Interviews
5. Team Matching
6. Offer
Common question types:
- Coding/Algorithm
- System Design
- Behavioral/STAR
- Technical Knowledge
- Past Experience
Recent news
- Negotiating NVIDIA's Offer: Base, stock, and sign-on are negotiable, and recruiters are invested in closing candidates. The CEO reviews all 42K employee salaries monthly. Stock growth has made many employees millionaires. (reddit/blind)
- NVIDIA Company Reviews: WLB rated 3.9/5 (lowest category); 64% are satisfied with WLB, but 53% feel burnt out. Compensation rated 4.4-4.5/5. Experience is highly team-dependent. (reddit/blind)
- NVIDIA Interview Discussions: The technical bar is high, with 4-6 rounds, and the process takes 4-8 weeks. Expect C++ questions, LeetCode medium, and system design. Difficulty rated 3.16/5. (reddit/blind)
- NVIDIA Culture Discussions: Team-dependent experience; a sink-or-swim culture that rewards high performers but can be overwhelming. No politics and a flat structure, but a demanding workload, with some teams requiring evening/weekend work. (reddit/blind)