WHAT YOU DO AT AMD CHANGES EVERYTHING
At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
THE ROLE:
The AI Customer Engineering organization is looking for a Principal AI Solution Integration Engineer to help customers achieve best‑in‑class performance for AI training and inference workloads on AMD GPU platforms. This is a hands‑on, customer‑facing role leading full-stack debug of complex AI issues on performance, scale, and production readiness for large AI workloads.
THE PERSON:
The ideal candidate is a senior AI technologist who can reason across model behavior, ROCm software, system software, and GPU microarchitecture to continuously improve performance benchmarks, and who can debug complex performance and stability issues across the application, framework, runtime, driver, and system layers. They bring deep expertise in data center GPU and AI domains, with a strong understanding of AI workload optimization through every layer of the system, including GPU microarchitecture. They have experience working with the owners of each layer of the functional code stack, along with the ability to drive issues to resolution with innovative solutions.
KEY RESPONSIBILITIES:
- Own end‑to‑end integration and full‑stack debugging of AI workloads across customer production environments, spanning AI frameworks (PyTorch, Triton, Megatron‑LM, vLLM), ROCm runtime, compilers, math and communication libraries, Linux kernel, GPU drivers, and firmware.
- Serve as the technical escalation point for complex, production‑blocking issues in large‑scale, multi‑node and multi‑rack GPU deployments, addressing performance, correctness, and system stability.
- Diagnose and resolve issues across the ROCm stack, including HIP runtime and kernels, ROCm libraries (hipBLASLt, hipSPARSE, hipSOLVER, CK), collective communication (RCCL, SHMEM, Infinity Fabric), and observability tooling.
- Drive workload tuning, scaling optimization, and performance improvements for large‑scale training and inference workloads.
- Correlate GPU microarchitecture behavior (compute, memory hierarchy, HBM, scheduling, power and thermal limits) with symptoms observed at the application, runtime, and driver layers.
- Partner closely with silicon, firmware, and DFx teams to influence future debug, observability, and platform capabilities.
- Act as a trusted technical advisor to strategic customers, leading cross‑functional war rooms and translating ambiguous system symptoms into clear root‑cause analyses and resolution paths.
- Mentor senior engineers, promoting advanced debugging techniques, system‑level thinking, and best practices across teams.
PREFERRED EXPERIENCE:
- Proven, extensive experience in GPU computing, AI infrastructure, or low‑level systems software.
- Deep hands‑on experience with the ROCm software stack and Linux‑based GPU systems.
- Strong understanding of GPU microarchitecture and memory systems, with the ability to reason from microarchitectural behavior to software outcomes.
- Proven track record debugging issues across application/workload, runtime, driver, firmware and hardware layers.
- Experience supporting or deploying large‑scale AI training/inference clusters with best-in-class performance.
- Excellent technical communication skills across customers, executives, and engineering teams.
- Prior experience influencing GPU, firmware, or platform architecture based on field learnings.
- Familiarity with AI workload characteristics (LLMs, distributed training, large‑scale inference).
- Experience with OEM/ODM server platforms, BMC/SMC, and system telemetry.
- Background in performance modeling, reliability analysis, or large‑scale system validation.
ACADEMIC CREDENTIALS:
- Bachelor's, Master's, or Ph.D. in Computer Engineering, Computer Science, Electrical Engineering, or a related technical field.
This role is not eligible for visa sponsorship.
Benefits offered are described in *AMD benefits at a glance*.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here.
This posting is for an existing vacancy.