WHAT YOU DO AT AMD CHANGES EVERYTHING
At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
THE ROLE:
As a core member of the team, you will play a pivotal role in optimizing and developing deep learning frameworks for AMD GPUs. Your experience will be critical in enhancing GPU kernels, deep learning models, and training/inference performance across multi-GPU and multi-node systems. You will engage with both internal GPU library teams and open-source maintainers to ensure seamless integration of optimizations, utilizing cutting-edge compiler technologies and advanced engineering principles to drive continuous improvement.
THE PERSON:
Skilled engineer with strong technical and analytical expertise in C++ development within Linux environments. The ideal candidate will thrive in both collaborative team settings and independent work, with the ability to define goals, manage development efforts, and deliver high-quality solutions. Strong problem-solving skills, a proactive approach, and a keen understanding of software engineering best practices are essential.
KEY RESPONSIBILITIES:
- Deep Learning & LLM Framework Optimization:
Optimize major DL/LLM frameworks (TensorFlow, PyTorch, vLLM, SGLang) for AMD GPUs and contribute improvements upstream.
- GPU Kernel & Operator Optimization:
Develop and tune GPU kernels and performance-critical operators to maximize throughput and minimize latency.
- Model & Architecture Optimization:
Adapt and optimize LLM architectures (e.g., Llama, Qwen, DeepSeek) and apply advanced techniques like FlashAttention, PagedAttention, and quantization.
- End-to-End Performance Engineering:
Perform comprehensive profiling to identify bottlenecks and implement system, memory, and communication optimizations across multi-GPU and multi-node setups.
- Compiler & Pipeline Acceleration:
Leverage advanced compiler technologies and graph compilers to enhance the full deep learning and inference pipeline.
- Research & Advanced Techniques:
Prototype and integrate emerging optimization methods such as speculative decoding and weight-only quantization into production systems.
- Cross-Team & Open-Source Collaboration:
Collaborate with internal GPU library teams and open-source maintainers to align improvements and ensure seamless upstream integration.
- Software Engineering Excellence:
Apply robust engineering practices to deliver maintainable, reliable, and production-quality performance optimizations.
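To make the quantization work named in the responsibilities concrete, here is a minimal sketch of symmetric per-tensor INT8 weight-only quantization in plain NumPy. This is a hypothetical illustration of the general technique, not AMD's or any specific framework's implementation; all names are invented for the example.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor INT8 quantization: w ~= scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original weights."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 8).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize_int8(q, s)
# Round-to-nearest bounds the per-element error by half a quantization step.
assert np.abs(w - w_hat).max() <= s / 2 + 1e-6
```

Production schemes such as GPTQ or AWQ refine this idea with per-channel or per-group scales and calibration data, but the storage/compute trade-off is the same: 4x smaller weights at the cost of a bounded reconstruction error.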
MANDATORY EXPERIENCE:
- Inference Frameworks, Model Architectures & Optimization Expertise:
Deep practical experience with vLLM or SGLang; mastery of modern LLMs (e.g., DeepSeek, Qwen); strong theoretical grounding in Transformer, Attention, MoE, and KV-cache concepts; and hands-on application of advanced inference optimizations such as FlashAttention, PagedAttention, continuous batching, and quantization (INT8/INT4/GPTQ/AWQ).
- End-to-End LLM Performance Engineering:
Demonstrated ability to profile, diagnose, and optimize compute, memory, and communication bottlenecks across multi-GPU and multi-node environments.
- High-Performance Computing:
Experience running and optimizing large-scale workloads on heterogeneous clusters with a focus on efficiency, reliability, and scalability.
- Deep Learning Framework Integration:
Proven ability to integrate optimized GPU kernels into TensorFlow/PyTorch to accelerate large-scale training and inference with strong scaling and throughput.
- Software Engineering Excellence & Community Contribution:
Strong Python/C++ coding skills, effective debugging and testing practices, proven ability to deliver maintainable performance-critical software, and a track record of open-source contributions with strong self-motivation.
- GPU Kernel Development & Optimization (a plus):
Hands-on experience designing and tuning high-performance GPU kernels for AMD GPUs using HIP, CUDA, ASM, and tools like CK, CUTLASS, and Triton, with strong knowledge of GCN/RDNA architectures.
- Compiler & System-Level Optimization (a plus):
Solid foundational knowledge of LLVM, ROCm, and compiler-driven techniques for improving kernel and system performance.
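The PagedAttention idea referenced in the requirements above manages the KV cache in fixed-size blocks rather than one contiguous buffer per sequence, so memory is allocated on demand instead of reserved for the maximum length. A toy single-process sketch of such a block allocator, with hypothetical names and no actual attention computation:

```python
class PagedKVCache:
    """Toy KV-cache block allocator in the spirit of PagedAttention.

    Each sequence maps to a list of fixed-size block ids; a block is
    grabbed from the free pool only when the previous block fills up.
    """

    def __init__(self, num_blocks: int, block_size: int):
        self.block_size = block_size
        self.free_blocks = list(range(num_blocks))
        self.block_tables = {}  # seq_id -> [block ids]
        self.lengths = {}       # seq_id -> tokens written so far

    def append_token(self, seq_id: int) -> None:
        table = self.block_tables.setdefault(seq_id, [])
        length = self.lengths.get(seq_id, 0)
        if length % self.block_size == 0:  # current block full (or none yet)
            if not self.free_blocks:
                raise MemoryError("KV cache exhausted")
            table.append(self.free_blocks.pop())
        self.lengths[seq_id] = length + 1

    def free_sequence(self, seq_id: int) -> None:
        """Return a finished sequence's blocks to the free pool."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)

cache = PagedKVCache(num_blocks=4, block_size=2)
for _ in range(3):
    cache.append_token(seq_id=0)   # 3 tokens -> 2 blocks in use
assert len(cache.block_tables[0]) == 2 and len(cache.free_blocks) == 2
cache.free_sequence(0)
assert len(cache.free_blocks) == 4
```

Real implementations (e.g., vLLM) add block-table indirection inside the attention kernel and copy-on-write sharing across sequences; the sketch only shows the allocation bookkeeping.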
ACADEMIC & PREFERRED QUALIFICATIONS:
- Master’s degree or PhD in Computer Science, Computer Engineering, Electrical Engineering, or a related field.
- Low-Level Development Skills:
Experience with CUDA C++ programming for writing and debugging high-performance GPU kernels; or practical experience using Triton to develop and optimize deep learning operators.
- Compiler Knowledge:
Understanding of, or practical experience with, compiler technologies like TVM or MLIR is a significant advantage.
- Distributed Systems Experience:
Hands-on experience with distributed inference for large-scale models (e.g., Tensor Parallel, Pipeline Parallel).
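Tensor parallelism, mentioned above, shards a weight matrix across devices so each computes a slice of the output. A single-process NumPy sketch of column-parallel sharding with simulated ranks (hypothetical names; a real deployment would place each shard on its own GPU and use a collective for the concatenation):

```python
import numpy as np

def column_parallel_matmul(x: np.ndarray, w: np.ndarray, num_ranks: int = 2) -> np.ndarray:
    """Split W column-wise across simulated ranks, compute each shard's
    partial output independently, then concatenate (the all-gather step)."""
    shards = np.array_split(w, num_ranks, axis=1)
    partials = [x @ shard for shard in shards]  # one matmul per "device"
    return np.concatenate(partials, axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))
w = rng.standard_normal((4, 6))
# Sharded result matches the unsharded matmul exactly.
assert np.allclose(column_parallel_matmul(x, w), x @ w)
```

Row-parallel sharding works dually (split W along rows, all-reduce the partial sums); pipeline parallelism instead assigns whole layers to different devices.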
Benefits offered are described in AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here.
This posting is for an existing vacancy.