Senior Software Development Engineer - LLM Kernel & Inference Systems
Santa Clara · On-site · Full-time · 2mo ago
Benefits
• Equity
• Healthcare
Required Skills
Python
JavaScript
TypeScript
WHAT YOU DO AT AMD CHANGES EVERYTHING:
At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
THE ROLE
As a Senior Member of Technical Staff, you will be a technical leader in Large Language Model (LLM) inference and kernel optimization for AMD GPUs. You will play a critical role in advancing high-performance LLM serving by optimizing GPU kernels, inference runtimes, and distributed execution strategies across single-node and multi-node systems.
This role is deeply focused on LLM inference stacks, including vLLM, SGLang, and internal inference platforms. You will work at the intersection of model architecture, GPU kernels, compiler technology, and distributed systems, collaborating closely with internal GPU library teams and upstream open-source communities to deliver production-grade performance improvements.
Your work will directly impact throughput, latency, scalability, and cost efficiency for state-of-the-art LLMs running on AMD GPUs.
THE PERSON
You are a senior systems engineer with deep LLM domain knowledge who enjoys working close to the metal while keeping a strong understanding of end-to-end inference systems. You are comfortable reasoning about attention, KV cache, batching, parallelism strategies, and how they map to GPU kernels and hardware characteristics.
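The KV cache reasoning described above can be made concrete with a quick sizing calculation. The sketch below is a generic back-of-the-envelope estimate; the model dimensions are illustrative placeholders, not tied to any particular model or AMD target:

```python
def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, batch_size: int, dtype_bytes: int = 2) -> int:
    """Bytes of KV cache: two tensors (K and V) per layer, per token."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * batch_size * dtype_bytes

# Illustrative 7B-class config: 32 layers, 8 KV heads (GQA), head_dim 128, fp16.
per_request = kv_cache_bytes(32, 8, 128, seq_len=4096, batch_size=1)
print(per_request / 2**20, "MiB per 4k-token request")  # 512.0 MiB
```

Estimates like this are what drive batch-size limits and cache-management decisions in serving systems: the cache, not the weights, is usually what caps concurrent requests at long context lengths.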
You thrive in ambiguous problem spaces, can independently define technical direction, and consistently deliver measurable performance gains. You balance strong execution with thoughtful upstream collaboration and maintain a high bar for software quality.
KEY RESPONSIBILITIES
Optimize LLM Inference Frameworks
Drive performance improvements in LLM inference frameworks such as vLLM, SGLang, and PyTorch for AMD GPUs, contributing both internally and upstream.
LLM-Aware Kernel Development
Design and optimize GPU kernels critical to LLM inference, including attention, GEMMs, KV cache operations, MoE components, and memory-bound kernels.
Distributed LLM Inference at Scale
Design, implement, and tune multi-GPU and multi-node inference strategies, including TP/PP/EP hybrids, continuous batching, KV cache management, and disaggregated serving.
Model–System Co-Design
Collaborate with model and framework teams to align LLM architectures with hardware-aware optimizations, improving real-world inference efficiency.
Compiler & Runtime Optimization
Leverage compiler technologies (LLVM, ROCm, Triton, graph compilers) to improve kernel fusion, memory access patterns, and end-to-end inference pipelines.
End-to-End Inference Pipeline Optimization
Optimize the full inference stack—from model execution graphs and runtimes to scheduling, batching, and deployment.
Open-Source Leadership
Engage with open-source maintainers to upstream optimizations, influence roadmap direction, and ensure long-term sustainability of contributions.
Engineering Excellence
Apply best practices in software engineering, including performance benchmarking, testing, debugging, and maintainability at scale.
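As one illustration of the continuous-batching idea named in the responsibilities above, the toy scheduler below admits new requests and retires finished ones at every decode step, instead of waiting for an entire static batch to finish. This is a minimal sketch of the general technique only, not vLLM's or SGLang's actual scheduler; real systems also gate admission on KV cache memory:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Request:
    rid: int
    remaining_tokens: int  # decode steps still needed
    generated: int = 0

def continuous_batching(requests, max_batch: int) -> int:
    """Toy continuous batching: refill the running batch every decode step.

    Returns the number of decode steps needed to finish all requests.
    """
    waiting = deque(requests)
    running: list[Request] = []
    steps = 0
    while waiting or running:
        # Admit waiting requests into free batch slots.
        while waiting and len(running) < max_batch:
            running.append(waiting.popleft())
        # One decode step for every running request.
        for r in running:
            r.generated += 1
            r.remaining_tokens -= 1
        # Retire finished requests immediately, freeing slots for the next step.
        running = [r for r in running if r.remaining_tokens > 0]
        steps += 1
    return steps

reqs = [Request(0, 3), Request(1, 1), Request(2, 5), Request(3, 2)]
print(continuous_batching(reqs, max_batch=2))  # 6 steps
```

With static batching of two, the same workload would take 8 steps (3 for the first pair, 5 for the second, each gated by its slowest member); continuous batching finishes in 6 because short requests free their slots early.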
PREFERRED EXPERIENCE
LLM Domain Knowledge
Deep understanding of Large Language Model inference, including attention mechanisms, KV cache behavior, batching strategies, and latency/throughput trade-offs.
LLM Inference Frameworks
Hands-on experience with vLLM, SGLang, or similar inference systems (e.g., FasterTransformer), with demonstrated performance tuning.
GPU Kernel Development
Proven experience optimizing GPU kernels for deep learning workloads, particularly inference-critical paths.
Distributed Inference Systems
Experience designing and tuning large-scale inference systems across multiple GPUs and nodes.
Open-Source Contributions
Track record of meaningful upstream contributions to ML, LLM, or systems-level open-source projects.
Programming & Debugging Skills
Strong proficiency in Python and C++, with deep experience in performance analysis, profiling, and debugging complex systems.
High-Performance Computing
Experience running and optimizing large-scale workloads on heterogeneous GPU clusters.
Compiler & Systems Background
Solid foundation in compiler concepts and tooling (LLVM, ROCm, Triton), applied to ML kernel and runtime optimization.
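The "memory-bound kernels" mentioned throughout this posting can be reasoned about with arithmetic intensity (FLOPs per byte moved) against a roofline. The sketch below uses placeholder hardware numbers, not any specific GPU's spec, to show why batch-1 decode GEMVs sit far below the compute roof:

```python
def arithmetic_intensity(flops: float, bytes_moved: float) -> float:
    """FLOPs performed per byte of memory traffic."""
    return flops / bytes_moved

# Decode-step GEMV against one fp16 weight matrix of shape (n, k), batch 1:
# ~2*n*k FLOPs while streaming ~2*n*k bytes of weights -> ~1 FLOP/byte.
n, k = 4096, 4096
ai = arithmetic_intensity(2 * n * k, 2 * n * k)

# Roofline ridge point with illustrative (placeholder) hardware numbers:
peak_flops = 100.0e12   # fp16 compute, FLOP/s
peak_bw = 2.0e12        # HBM bandwidth, B/s
ridge = peak_flops / peak_bw  # 50 FLOPs/byte

print(ai, ridge, ai < ridge)  # 1.0 50.0 True -> firmly memory-bound
```

This is the basic argument for decode-phase optimizations such as kernel fusion, quantized weights, and larger effective batch sizes: they raise intensity or cut bytes moved, since the kernel cannot approach peak FLOPs otherwise.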
ACADEMIC CREDENTIALS:
- Master’s or PhD in Computer Science, Computer Engineering, Electrical Engineering, or a related field
Benefits offered are described in AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here.
This posting is for an existing vacancy.