Hiring
Required Skills
Python
Go
Rust
GPU Programming
Systems Programming
Performance Optimization
At eBay, we're more than a global ecommerce leader — we're changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We're committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts.
Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We're in this together, sustaining the future of our customers, our company, and our planet.
Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all.
As an LLM Inference Engineer on our AI Platform team, you’ll remove the compute-scaling bottleneck for production LLMs. Your job is to make frontier-model inference fast, efficient, reliable, and observable—the “last mile” from GPUs to APIs that products depend on. This role sits at the intersection of HPC, GPU systems, and MLOps, and requires strong intuition for how model architecture, runtimes, and hardware interact.
What You’ll Do
- Own production inference: Take models from handoff to production-grade serving, including release engineering, capacity planning, cost optimization, and incident response.
- Tune inference performance: Reduce end-to-end latency and increase throughput across real production traffic patterns.
- Optimize runtimes and servers: Scale inference across heterogeneous GPU fleets; optimize stacks such as vLLM, Triton, and related components (e.g., schedulers, KV cache, batching, memory).
- Benchmark and measure: Build benchmarking suites, metrics, and tooling to quantify latency, throughput, GPU utilization, memory, and cost.
- Reliability and observability: Improve monitoring, tracing, and alerting; participate in incident response and postmortems to harden systems.
- Apply and ship new optimizations: Evaluate research and implement pragmatic inference optimizations (e.g., quantization, paging, kernel/runtimes improvements).
- Partner cross-functionally: Work with data science and product teams to translate business requirements into performance and availability SLOs.
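The benchmarking responsibilities above come down to measuring tail latency and throughput against an SLO. As a minimal sketch (the function name, the nearest-rank percentile method, and the simulated samples are illustrative, not part of any eBay tooling):

```python
def summarize_latencies(latencies_ms, window_s):
    """Return p50/p95/p99 latency (ms) and throughput (req/s) for one window."""
    ordered = sorted(latencies_ms)

    def pct(p):
        # Nearest-rank percentile over the sorted samples.
        idx = min(len(ordered) - 1, max(0, round(p / 100 * len(ordered)) - 1))
        return ordered[idx]

    return {
        "p50_ms": pct(50),
        "p95_ms": pct(95),
        "p99_ms": pct(99),
        "throughput_rps": len(ordered) / window_s,
    }

# Simulated: 100 requests observed over a 10-second window, 100..199 ms each.
# In production these samples would come from timing real inference calls.
samples = [100 + i for i in range(100)]
stats = summarize_latencies(samples, window_s=10.0)
```

Real benchmarking suites track these percentiles per traffic pattern (prompt length, batch size, model) rather than in one aggregate, since p99 regressions often hide in a single workload class.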
What We’re Looking For
- 5 years of strong software development experience.
- Experience deploying and operating LLM inference services in production.
- Strong production coding skills in Python plus Go or Rust (systems-level implementation and debugging).
- Experience with ML frameworks and runtimes: PyTorch, vLLM, SGLang (and/or TensorRT).
- Knowledge of GPU architecture and performance (profiling, memory bandwidth/latency tradeoffs); CUDA/kernel programming is a strong plus.
- Solid understanding of LLM inference and optimization techniques: continuous batching, KV cache management, quantization, speculative decoding (nice-to-have), etc.
- 3 years of hands-on experience in performance optimization and systems programming for AI/ML workloads.
- Demonstrated ability to deliver measurable production improvements (e.g., 2X throughput, lower p95/p99 latency, reduced GPU cost).
- Proven skill in root-cause analysis: finding bottlenecks across model, runtime, networking, and infrastructure.
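The KV cache management mentioned above usually starts with back-of-envelope memory arithmetic: the cache, not the weights, is what limits batch size at long contexts. A minimal sketch, using hypothetical model dimensions (roughly 7B-class, not tied to any specific deployment):

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, batch, dtype_bytes=2):
    """Estimate total KV cache size in bytes.

    Two tensors (K and V) per layer, each shaped
    [batch, kv_heads, seq_len, head_dim], stored at dtype_bytes per element
    (2 for fp16/bf16).
    """
    return 2 * layers * kv_heads * head_dim * seq_len * batch * dtype_bytes

# Hypothetical config: 32 layers, 32 KV heads, head_dim 128,
# 4096-token context, batch of 8, fp16.
gib = kv_cache_bytes(32, 32, 128, 4096, 8) / 2**30  # → 16.0 GiB
```

This is why techniques like paged KV cache allocation, grouped-query attention (fewer `kv_heads`), and cache quantization matter: each directly shrinks a factor in this product and buys back batch capacity.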
Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay.
eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. We will make every effort to respond to your request for accommodation as soon as possible. View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities.