Benefits & Perks
• Professional development budget
• Generous paid time off and holidays
• Parental leave
• Team events and activities
• Comprehensive health, dental, and vision insurance
WHAT YOU DO AT AMD CHANGES EVERYTHING:
At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
THE ROLE:
As a senior member of the LLM inference framework team, you will be responsible for building and optimizing production-grade single-node and distributed inference runtimes for large language models on AMD GPUs. You will work at the framework and runtime layer, driving performance, scalability, and reliability, and enabling tensor parallelism, pipeline parallelism, expert parallelism (MoE), and single-node or multi-node inference at scale. Your work will directly power customer-facing deployments and benchmarking platforms (e.g., InferenceMAX, MLPerf, strategic partners, and cloud providers) and will be upstreamed into open-source inference frameworks such as vLLM and SGLang to make AMD a first-class platform for LLM serving.
This role sits at the intersection of inference engines, distributed systems, and GPU runtime and kernel backends.
THE PERSON:
You are a systems-minded ML engineer who thinks in terms of throughput, latency, memory movement, and scheduling, not just model code.
You are comfortable reading and modifying large-scale inference frameworks, debugging performance across GPUs and nodes, and collaborating with kernel, compiler, and networking teams to close end-to-end performance gaps.
You enjoy working in open source and driving architecture-level improvements in inference platforms.
KEY RESPONSIBILITIES:
Inference Framework & Runtime:
- Architect and optimize distributed LLM inference runtimes based on in-house LLM engines or open-source stacks such as vLLM, SGLang, and llm-d
- Design and improve TP / PP / EP (MoE) hybrid execution, including KV-cache management, attention dispatch, and token scheduling
- Implement and optimize multi-node inference pipelines using RCCL, RDMA, and collective-based execution
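To make the parallelism vocabulary above concrete, here is a toy NumPy sketch of column-parallel tensor parallelism for a single linear layer: the weight matrix is sharded by output columns across "devices", each shard runs its own GEMM, and the partial outputs are concatenated (the all-gather step). This is an illustration of the general technique only, not AMD's runtime or any specific framework's implementation.

```python
import numpy as np

def tensor_parallel_matmul(x, W, num_shards):
    """Column-parallel matmul: shard W by output columns, GEMM per shard,
    then concatenate the partials (the all-gather step in a real runtime)."""
    shards = np.split(W, num_shards, axis=1)   # each "device" holds one slice
    partials = [x @ w for w in shards]         # independent per-device GEMMs
    return np.concatenate(partials, axis=1)    # gather outputs along columns

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 16))
# Sharded execution must match the unsharded GEMM exactly.
assert np.allclose(tensor_parallel_matmul(x, W, 4), x @ W)
```

In a real TP implementation the shards live on separate GPUs and the concatenation is a collective (e.g., an all-gather over RCCL); row-parallel layers instead sum partials with an all-reduce.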
Performance & Scalability:
- Drive throughput, latency, and memory efficiency across single-GPU and multi-GPU clusters
- Optimize continuous batching, speculative decoding, KV-cache paging, prefix caching, and multi-turn serving
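KV-cache paging, mentioned above, replaces one contiguous cache allocation per sequence with fixed-size blocks tracked through a per-sequence block table, so memory is allocated on demand and reclaimed exactly when a sequence finishes. The sketch below is a deliberately simplified block allocator in the spirit of vLLM-style paged caches; the class name, block size, and API are illustrative, not any framework's actual interface.

```python
BLOCK_SIZE = 16  # tokens per physical cache block (illustrative choice)

class PagedKVCache:
    """Toy block allocator: maps each sequence to physical cache blocks."""
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))  # pool of free physical blocks
        self.block_tables = {}               # seq_id -> [physical block ids]
        self.lengths = {}                    # seq_id -> tokens written so far

    def append_token(self, seq_id):
        n = self.lengths.get(seq_id, 0)
        if n % BLOCK_SIZE == 0:              # current block full (or first token)
            self.block_tables.setdefault(seq_id, []).append(self.free.pop())
        self.lengths[seq_id] = n + 1

    def release(self, seq_id):
        """Return a finished sequence's blocks to the free pool."""
        self.free.extend(self.block_tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)

cache = PagedKVCache(num_blocks=8)
for _ in range(33):                          # 33 tokens -> ceil(33/16) = 3 blocks
    cache.append_token("seq0")
assert len(cache.block_tables["seq0"]) == 3
```

A production paged cache adds copy-on-write for prefix sharing, eviction/preemption policies, and attention kernels that read through the block table, but the allocate-on-block-boundary and release-on-finish mechanics are the core idea.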
GPU & Backend Integration:
- Work with AMD GPU libraries (AITER, hipBLASLt, RCCL, ROCm runtime) to ensure inference frameworks efficiently use FP8 / FP4 GEMM and Flash Attention / MLA
- Collaborate with compiler teams (Triton, LLVM, ROCm) to unblock framework-level performance
Open Source & Customer Enablement:
- Upstream features and performance fixes into vLLM, SGLang, and llm-d
- Enable customer PoCs and production deployments on AMD platforms
- Build and maintain benchmark-grade inference pipelines
PREFERRED EXPERIENCE:
Inference Stack Knowledge:
- Hands-on understanding of vLLM, SGLang, or similar inference stacks
- Experience with distributed inference scaling and a proven track record of contributing to upstream open-source projects
Deep Learning Integration:
- Strong experience integrating optimized GPU performance into machine-learning frameworks (e.g., PyTorch, TensorFlow) for high-throughput and scalable inference
Kernel & Inference Frameworks:
- Strong background in NVIDIA, AMD, or similar GPU architectures and kernel development
Software Engineering:
- Expertise in Python and preferably experience in C/C++, including debugging, performance tuning, and test design for large-scale systems
High-Performance Computing:
- Experience running large-scale workloads on heterogeneous GPU clusters, optimizing for efficiency and scalability
Compiler & Runtime Optimization:
- Understanding of compiler and runtime systems, including LLVM, ROCm, and GPU code generation
ACADEMIC CREDENTIALS:
- Master’s or PhD in Computer Science, Computer Engineering, Electrical Engineering, or a related field.
Benefits offered are described in AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here.
This posting is for an existing vacancy.
About AMD
AMD is a public semiconductor company that designs and develops graphics units, processors, and media solutions. Headquarters: Santa Clara. Employees: 10,001+.