Hiring

Staff Software Engineer - GenAI Performance and Kernel
San Francisco, California
·
On-site
·
Full-time
·
2mo ago
Compensation
$190,900 - $232,800
Benefits
• Unlimited PTO
• Learning
• Healthcare
Required Skills
Python
TensorFlow
Airflow
P-1285
About This Role
As a staff software engineer for GenAI Performance and Kernel, you will own the design, implementation, optimization, and correctness of the high-performance GPU kernels powering our GenAI inference stack. You will lead development of highly-tuned, low-level compute paths, manage trade-offs between hardware efficiency and generality, and mentor others in kernel-level performance engineering. You will work closely with ML researchers, systems engineers, and product teams to push the state-of-the-art in inference performance at scale.
What You Will Do
- Lead the design, implementation, benchmarking, and maintenance of core compute kernels (e.g. attention, MLP, softmax, layernorm, memory management) optimized for various hardware backends (GPUs, accelerators)
- Drive the performance roadmap for kernel-level improvements: vectorization, tensorization, tiling, fusion, mixed precision, sparsity, quantization, memory reuse, scheduling, auto-tuning, etc.
- Integrate kernel optimizations with higher-level ML systems
- Build and maintain profiling, instrumentation, and verification tooling to detect correctness and performance regressions, numerical issues, and hardware utilization gaps
- Lead performance investigations and root-cause analysis of inference bottlenecks, e.g. memory bandwidth, cache contention, kernel launch overhead, tensor fragmentation
- Establish coding patterns, abstractions, and frameworks that modularize kernels for reuse, cross-backend portability, and maintainability
- Influence system architecture decisions that make kernel improvements more effective (e.g. memory layout, dataflow scheduling, kernel fusion boundaries)
- Mentor and guide other engineers working on low-level performance, provide code reviews, and help set best practices
- Collaborate with infrastructure, tooling, and ML teams to roll kernel-level optimizations out to production and monitor their impact
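The responsibilities above mention fused, single-pass kernels for operations such as softmax and attention. As a plain-Python illustration only (not GPU code, and not this team's implementation), the "online softmax" rescaling trick shows why fusing the max, sum, and normalize passes into one loop is possible:

```python
import math

def online_softmax(xs):
    """Numerically stable softmax in a single pass over the input.

    Fused attention kernels rely on this "online" rescaling trick:
    keep a running maximum and rescale the running sum of
    exponentials whenever a larger value appears, so the data is
    read once instead of the three passes (max, sum, normalize)
    a naive implementation needs.
    """
    running_max = float("-inf")
    running_sum = 0.0
    for x in xs:
        if x > running_max:
            # A new maximum appeared: rescale the partial sum.
            running_sum *= math.exp(running_max - x)
            running_max = x
        running_sum += math.exp(x - running_max)
    return [math.exp(x - running_max) / running_sum for x in xs]
```

Subtracting the running maximum before exponentiating also keeps the computation stable for large logits, where a naive `exp(x)` would overflow.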
What We Look For
- BS/MS/PhD in Computer Science or a related field
- Deep hands-on experience writing and tuning compute kernels (CUDA, Triton, OpenCL, LLVM IR, assembly, or similar) for ML workloads
- Strong knowledge of GPU/accelerator architecture: warp structure, memory hierarchy (global, shared, register, L1/L2 caches), tensor cores, scheduling, SM occupancy, etc.
- Experience with advanced optimization techniques: tiling, blocking, software pipelining, vectorization, fusion, loop transformations, auto-tuning
- Familiarity with ML-specific kernel libraries (cuBLAS, cuDNN, CUTLASS, oneDNN, etc.) or open-source kernels
- Strong debugging and profiling skills (Nsight, nvprof, perf, VTune, custom instrumentation)
- Experience reasoning about numerical stability, mixed precision, quantization, and error propagation
- Experience integrating optimized kernels into real-world ML inference systems; exposure to distributed inference pipelines, memory management, and runtime systems
- Experience building high-performance products that leverage GPU acceleration
- Excellent communication and leadership skills: able to drive design discussions, mentor colleagues, and make trade-offs visible
- A track record of shipping performance-critical, high-quality production software
- Bonus: publications in systems/ML performance venues (e.g. MLSys, ASPLOS, ISCA, PPoPP), experience with custom accelerators or FPGAs, experience with sparsity or model compression techniques
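Several of the qualifications above involve reasoning about quantization and error propagation. As a hypothetical, library-free sketch (the function name and test values are invented for illustration), symmetric int8 quantization makes the rounding-error bound concrete:

```python
def quantize_int8(xs):
    """Symmetric per-tensor int8 quantization (illustrative sketch).

    scale maps the largest magnitude in xs to 127; each value is
    rounded to the nearest quantization step and clamped to the
    int8 range. Dequantizing back lets the introduced rounding
    error be measured directly: it is bounded by half a step
    (scale / 2), the kind of invariant a kernel test suite can
    assert after a quantized operation.
    """
    scale = max(abs(x) for x in xs) / 127.0
    q = [max(-128, min(127, round(x / scale))) for x in xs]
    dq = [v * scale for v in q]  # dequantized values for error analysis
    return q, dq, scale
```

In a real kernel the same reasoning extends to accumulated error across a dot product, which is why accumulators are typically kept in a wider type (e.g. int32 or fp32) than the quantized inputs.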
Pay Range Transparency
Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.
Local Pay Range:
$190,900—$232,800 USD
About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits:
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.
Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.
Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Company Overview

Databricks (Series I)
Databricks, Inc. is an American software company based in San Francisco. It was founded in 2013 by the original creators of Apache Spark. It offers a cloud-based platform for data analytics and artificial intelligence.
6,000+ employees
Headquartered in San Francisco
$43B valuation
Reviews
3.8 (10 reviews)
Work-life balance: 2.8
Compensation: 4.0
Culture: 4.2
Career growth: 3.5
Management: 4.0
72% would recommend to a friend
Pros
Innovative technology and cutting-edge projects
Supportive and collaborative team environment
Good benefits and competitive compensation
Cons
Poor work-life balance and long hours
High pressure and stressful environment
Heavy workload and overtime requirements
Salary Data
34 data points
Levels reported: Mid/L4, Senior/L5
Mid/L4 · Corporate Development Manager (1 report)
Total compensation: $171,004
Base salary: $148,699
Stock: -
Bonus: -
Interview Experience
6 interviews reported
Difficulty: 3.2 / 5
Duration: 21-35 weeks
Experience: 0% positive, 83% neutral, 17% negative
Interview Process
1. Application Review
2. Recruiter Screen
3. Technical Phone Screen
4. Coding Round
5. Onsite/Virtual Interviews
6. Offer
Common Question Types
Coding/Algorithm
System Design
Behavioral/STAR
Technical Knowledge
News & Topics
Databricks Highlights Infrastructure Demands of AI Agent Workloads (TipRanks, 3d ago)
Governing Coding Agent Sprawl with Unity AI Gateway (Databricks, 4d ago)
Introducing Genie Agent Mode (Databricks, 4d ago)
Open Platform, Unified Pipelines: Why dbt on Databricks is Accelerating (Databricks, 5d ago)