
Together we advance.
AI Performance Engineer
WHAT YOU DO AT AMD CHANGES EVERYTHING
At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
THE ROLE:
As an AI Performance Engineer, you will focus on pushing machine learning workloads to peak hardware efficiency. The emphasis of this role is on analysis, profiling, debugging, and optimization at the application/workload level; however, a broad understanding of low-level GPU execution and kernel optimization is a major advantage.
KEY RESPONSIBILITIES:
- Explore and benchmark ML models and workloads (including diffusion models, LLMs, and multimodal systems) to identify bottlenecks across compute, memory, and networking layers.
- Optimize performance for inference and training on AMD GPUs, including parallelization strategies, quantization techniques, serving orchestration, network communication and distributed execution.
- Perform deep profiling to uncover inefficiencies in ML frameworks, data pipelines, compiler tools, and key tensor operations such as GEMMs, Convs, and Attention, to name a few.
- Support AMD's top-tier customers in improving model throughput, reducing latency, and optimizing resource utilization across multi-GPU and cluster environments.
- Work closely with hardware, compiler, and software teams to drive improvements across the full ROCm stack.
- Communicate performance bottlenecks, solutions, and optimization strategies to stakeholders.
- Work with international teams located across Europe, the US, and Asia.
EXAMPLE TASKS FOR THE FIRST 6 MONTHS:
- Benchmark and profile the latest models (e.g., DeepSeek) on single- and multi-GPU AMD systems.
- Identify top bottlenecks (e.g., GEMMs, MoE, attention, VAE) and drive improvements to reach peak performance.
- Evaluate competing hardware (other GPUs, TPUs, NPUs...) to understand where we lead and where we fall behind.
- Contribute improvements to popular inference and training frameworks such as vLLM, SGLang, xDiT, and Primus.
- Produce ambitious performance uplift plans, and execute them with your team.
IDEAL CANDIDATE PROFILE:
- Running the latest Frontier AI workloads (LLMs, diffusion, multimodal) at scale.
- Profiling, debugging, and optimizing complex ML workloads on PyTorch and JAX.
- High-performance networking for AI infrastructure (RDMA, InfiniBand, RoCE, UCX).
- Strong understanding of GPU architectures and performance trade-offs on AI workloads.
- Disaggregated LLM serving systems (KVCache management, prefill-decode separation, GPU-direct).
- Pre-training, fine-tuning, instruct-tuning, LoRA, and other training-related experience.
- You are proactive, a self-starter, and passionate about delivering performance improvements at scale.
REQUIRED SKILLS & QUALIFICATIONS:
- Experience with profiling, debugging, benchmarking, and optimization tools.
- Familiarity with ML frameworks (e.g., PyTorch, JAX, TensorFlow) and inference serving frameworks (e.g., vLLM, SGLang).
- Strong C++ and/or Python skills, along with the fundamentals: Unix, Git, the terminal, debugging, and testing.
- Experience with Docker, container orchestration (Kubernetes), and job schedulers (Slurm).
- Ability to work independently and collaboratively in a multi-cultural team.
- Excellent communication skills in a fast-moving environment.
NICE TO HAVE:
- Experience with AMD tooling (not mandatory given strong fundamentals).
- GPU kernel development experience with HIP, CUDA, or OpenCL.
- Tile-programming experience (Triton, Pallas, Gluon, CUTLASS, cuDSL, etc.).
- Experience in multi-GPU cluster environments (single- and multi-node).
- Background in high-performance networking for AI infrastructure.
- Familiarity with compiler backends or code generation.
- Experience with KVCache optimization and memory hierarchy tuning.
ACADEMIC CREDENTIALS:
- BSc, MSc, PhD, or equivalent experience in Computer Science, Electrical Engineering, or a related field.
Locations:
Open to Finland, Sweden, the Netherlands, and the UK.
Benefits offered are described in AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here.
This posting is for an existing vacancy.