WHAT YOU DO AT AMD CHANGES EVERYTHING
At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
THE PERSON:
Success in this role requires deep knowledge of data center AI workloads such as LLMs, generative AI, recommendation, NLP, video analytics, and transformer-based models across cloud, client, and edge. The candidate needs hands-on experience with a variety of AI models, end-to-end pipelines, and industry frameworks, SDKs, and solutions.
KEY RESPONSIBILITIES:
- High-Performance Kernel Development: Design, implement, and optimize high-performance GPU kernels for AI/ML workloads to maximize hardware utilization.
- Performance Optimization: Analyze and optimize kernel execution for latency and throughput, addressing bottlenecks in memory bandwidth, instruction latency, and thread divergence.
- Workload Analysis: Evaluate the end-to-end performance impact of individual kernels on full-stack AI models, ensuring that micro-optimizations translate to application-level speedups.
- Profiling & Tuning: Utilize advanced GPU profiling tools (e.g., ROCm Profiler, PyTorch Profiler) to identify performance cliffs, pipeline stalls, and memory hierarchy inefficiencies.
- Architecture Adaptation: Tailor implementation strategies to leverage specific features of modern GPU architectures (e.g., Matrix Cores, HBM characteristics).
- Framework Integration: Collaborate with software stack teams to expose optimized kernels within high-level frameworks and inference engines.
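The bottleneck analysis described in these responsibilities (memory bandwidth vs. compute limits) is commonly framed with a roofline model. As an illustrative sketch only, using hypothetical hardware numbers rather than any specific AMD GPU's specifications, a kernel's arithmetic intensity can be compared against the machine balance point to classify it as memory- or compute-bound:

```python
# Illustrative roofline-style bottleneck check.
# PEAK and BW below are hypothetical numbers, not real hardware specs.

def attainable_tflops(intensity_flop_per_byte: float,
                      peak_tflops: float,
                      mem_bw_tb_s: float) -> float:
    """Roofline model: achievable performance is capped by either peak
    compute or memory bandwidth times arithmetic intensity."""
    return min(peak_tflops, mem_bw_tb_s * intensity_flop_per_byte)

def classify(intensity: float, peak_tflops: float, mem_bw_tb_s: float) -> str:
    """A kernel below the machine balance point is memory-bound."""
    machine_balance = peak_tflops / mem_bw_tb_s  # FLOP/byte at the roofline knee
    return "memory-bound" if intensity < machine_balance else "compute-bound"

PEAK, BW = 100.0, 2.0  # hypothetical: 100 TFLOP/s peak, 2 TB/s HBM bandwidth
# A GEMM-like kernel at 100 FLOP/byte vs. an elementwise op at 0.25 FLOP/byte:
print(classify(100.0, PEAK, BW))            # compute-bound
print(classify(0.25, PEAK, BW))             # memory-bound
print(attainable_tflops(0.25, PEAK, BW))    # 0.5 (TFLOP/s ceiling)
```

This is the reasoning behind "ensuring that micro-optimizations translate to application-level speedups": tuning instruction throughput on a kernel that sits far below the machine balance point yields little, because HBM bandwidth is the binding constraint.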
PREFERRED EXPERIENCE:
- GPU Architecture Mastery: In-depth understanding of modern GPU underlying architectures, including streaming multiprocessors (SMs/CUs), memory hierarchy (registers, shared memory, L1/L2 cache, HBM), and warp/wavefront execution models.
- Kernel Programming Expertise: Strong proficiency in C++ and parallel computing, with extensive hands-on experience in NVIDIA CUDA or AMD HIP kernel programming.
- Performance Engineering: Demonstrated ability to debug and profile complex GPU workloads, interpreting low-level metrics to drive architecture-aware optimizations.
- Systems Knowledge: Familiarity with asynchronous execution, stream management, and host-device memory transfers.
- Python DSLs & Triton: Experience implementing kernels using OpenAI Triton or other Python-based DSLs for agile kernel development and auto-tuning.
- Inference Engine Experience: Hands-on experience integrating custom kernels into large-scale inference frameworks such as vLLM, SGLang, or TensorRT-LLM.
- Deep Learning Frameworks: Familiarity with writing custom extensions or operators for PyTorch (C++/CUDA extensions).
- Hardware Agnosticism: Experience porting kernels between NVIDIA and AMD architectures or working with cross-platform HPC libraries.
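The auto-tuning mentioned under "Python DSLs & Triton" boils down to benchmarking a kernel over a grid of configurations and keeping the fastest variant, which is the idea behind Triton's autotune decorator. A minimal framework-free sketch, where the "kernel" is a stand-in Python function (hypothetical, not real GPU code):

```python
import time

def autotune(kernel, configs, *args, warmup=2, reps=5):
    """Pick the fastest config by timing each candidate.
    Mirrors the idea behind Triton-style auto-tuning in plain Python."""
    best_cfg, best_t = None, float("inf")
    for cfg in configs:
        for _ in range(warmup):          # warm up before timing
            kernel(*args, **cfg)
        t0 = time.perf_counter()
        for _ in range(reps):
            kernel(*args, **cfg)
        elapsed = (time.perf_counter() - t0) / reps
        if elapsed < best_t:
            best_cfg, best_t = cfg, elapsed
    return best_cfg

# Stand-in "kernel": block-wise vector add whose speed varies with block size.
def vec_add(x, y, block_size=64):
    out = []
    for i in range(0, len(x), block_size):
        out.extend(a + b for a, b in zip(x[i:i + block_size],
                                         y[i:i + block_size]))
    return out

grid = [{"block_size": b} for b in (16, 64, 256)]
best = autotune(vec_add, grid, list(range(1024)), list(range(1024)))
print(best)  # fastest block_size on this machine
```

In a real Triton kernel the tunable parameters would be launch-grid and tile-size constants, and the result would typically be cached per input shape so tuning cost is paid once.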
ACADEMIC CREDENTIALS:
- MS candidates graduating in 2025/2026
Benefits offered are described in AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here.
This posting is for an existing vacancy.