WHAT YOU DO AT AMD CHANGES EVERYTHING
At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
1. Responsibilities
- Train, fine-tune, and optimize Large Language Models (LLMs), including but not limited to pretraining, SFT, and RLHF pipelines
- Design and develop LLM-based agent systems (e.g., tool use, planning and reasoning, multi-agent collaboration)
- Optimize LLM inference performance, including latency, throughput, and memory (VRAM) usage
- Participate in GPU computing optimization, including operator/kernel optimization and parallelization strategies
- Collaborate with research and product teams to drive the deployment of LLMs in real-world applications
2. Requirements
- Bachelor’s degree or above in Computer Science, Artificial Intelligence, or a related field
- 4+ years of relevant development experience
- Proficient in at least one of Python or C++, with strong engineering skills
- Familiar with LLM training workflows, with hands-on experience in training or fine-tuning; experience deploying LLM-based products is a plus
- Experience in agent development (e.g., LangChain, in-house agents, tool-use systems)
- Familiar with LLM inference optimization techniques, including but not limited to acceleration, quantization, and KV cache
- Understanding of GPU computing principles, with some experience in operator/kernel optimization
3. Preferred Qualifications (Plus)
- Experience with large-scale LLM training (e.g., distributed training, Megatron, DeepSpeed)
- Familiarity with CUDA or Triton, with experience in GPU kernel development or optimization
- Experience in high-performance computing (HPC) or inference framework optimization
- Hands-on experience deploying agent systems in production (e.g., complex task planning, multi-tool orchestration)
Benefits offered are described in *AMD benefits at a glance*.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here.
This posting is for an existing vacancy.