
Baseten

Software Engineer - Model Performance

San Francisco · On-site · Full-time · 2mo ago

Benefits

Parental Leave

Flexible Hours

Learning

Required Skills

Node.js

React

TypeScript

ABOUT BASETEN

Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E (https://www.baseten.co/blog/announcing-baseten-s-300m-series-e/), backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to when shipping AI products.

THE ROLE:

Are you passionate about advancing the application of artificial intelligence? We are looking for a Software Engineer focused on ML performance to join our dynamic team. This role is ideal for someone who thrives in a fast-paced startup environment and is eager to make significant contributions to the exciting field of LLM inference. If you are a backend engineer who thrives on making things faster and is excited about open-source ML models, we look forward to your application.

EXAMPLE INITIATIVES

You'll get to work on these types of projects as part of our Model Performance team:

  • Baseten Embeddings Inference: the fastest embeddings solution available (https://www.baseten.co/blog/introducing-baseten-embeddings-inference-bei/)

  • The Baseten Inference Stack (https://www.baseten.co/resources/guide/the-baseten-inference-stack/)

  • Driving model performance optimization (https://www.baseten.co/blog/driving-model-performance-optimization-2024-highlights/)

RESPONSIBILITIES:

  • Implement, refine, and productionize cutting-edge techniques (quantization, speculative decoding, KV cache reuse, chunked prefill, and LoRA) for ML model inference and infrastructure.

  • Deep dive into the underlying codebases of TensorRT, PyTorch, TensorRT-LLM, vLLM, SGLang, CUDA, and other libraries to debug ML performance issues.

  • Apply and scale optimization techniques across a wide range of ML models, particularly large language models.

  • Collaborate with a diverse team to design and implement innovative solutions.

  • Own projects from idea to production.
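To give a concrete flavor of the techniques named above, here is a minimal sketch of symmetric per-tensor int8 quantization in plain Python. The function names are hypothetical illustrations, not Baseten APIs; production stacks implement this inside engines like TensorRT-LLM.

```python
# Minimal sketch of symmetric per-tensor int8 quantization, one of the
# techniques named above. Plain Python for illustration only; the
# helper names are hypothetical, not a real inference API.

def quantize_int8(weights):
    """Map float weights to int8 using a single per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.31, -1.27, 0.05, 0.98]
q, scale = quantize_int8(weights)
recovered = dequantize_int8(q, scale)
# Each recovered weight differs from the original by at most one
# quantization step (the scale).
```

Storing int8 values instead of float16/float32 halves (or quarters) weight memory and bandwidth, which is typically the dominant cost in LLM decoding.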

REQUIREMENTS:

  • Bachelor's, Master's, or Ph.D. degree in Computer Science, Engineering, Mathematics, or related field.

  • Experience with one or more general-purpose programming languages, such as Python or C++.

  • Familiarity with LLM optimization techniques (e.g., quantization, speculative decoding, continuous batching).

  • Strong familiarity with ML libraries, especially PyTorch, TensorRT, or TensorRT-LLM.

  • Demonstrated interest and experience in LLMs.

  • Deep understanding of GPU architecture.

Bonus:

  • Proficiency in enhancing the performance of software systems, particularly in the context of large language models (LLMs).

  • Experience with CUDA or similar technologies.

  • Deep understanding of software engineering principles and a proven track record of developing and deploying AI/ML inference solutions.

  • Experience with Docker and Kubernetes.
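Speculative decoding, mentioned in the requirements above, centers on an accept/verify loop: a cheap draft model proposes several tokens ahead, and the slower target model keeps only the prefix it agrees with. The sketch below uses toy next-token lookup tables as hypothetical stand-ins for both models; it is not a real inference engine.

```python
# Toy sketch of the speculative-decoding accept loop. The "draft"
# proposer and "target" verifier here are hypothetical next-token
# lookup tables, not real models.

draft = {"the": "cat", "cat": "sat", "sat": "on"}    # fast proposer
target = {"the": "cat", "cat": "ran", "sat": "on"}   # authoritative model

def speculative_step(token, k=3):
    """Propose up to k tokens with the draft model and return the
    prefix the target model would also have generated."""
    proposed, cur = [], token
    for _ in range(k):
        cur = draft.get(cur)
        if cur is None:
            break
        proposed.append(cur)
    accepted, cur = [], token
    for p in proposed:
        if target.get(cur) != p:
            break  # first disagreement: stop accepting draft tokens
        accepted.append(p)
        cur = p
    return accepted
```

When draft and target agree, several tokens are committed for one verification pass, which is where the latency win comes from; on disagreement the step degrades gracefully to ordinary decoding.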

BENEFITS:

  • Competitive compensation, including meaningful equity.

  • 100% coverage of medical, dental, and vision insurance for employees and dependents.

  • Generous PTO policy, including a company-wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!).

  • Paid parental leave.

  • Company-facilitated 401(k).

  • Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.

Apply now to embark on a rewarding journey in shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative and forward-thinking team, we would love to hear from you.

At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.


About Baseten

Baseten · Series C

Baseten provides a platform for deploying and scaling machine learning models in production environments. The company offers infrastructure and tools for ML engineers to build, deploy, and monitor AI applications.

Employees: 51-200
Headquarters: San Francisco
Valuation: $1.0B

Reviews

4.1 (10 reviews)

Work-life balance: 4.2
Compensation: 2.8
Company culture: 4.3
Career growth: 3.5
Management: 3.2

72% would recommend to a friend

Pros

Flexible work arrangements and schedules

Supportive team environment and good colleagues

Good benefits and health coverage

Cons

Below-industry-standard compensation and salary

Limited career advancement opportunities

High workload and stressful expectations

Salary Range

9 data points

Junior/L3 · Recruiter (0 reports)

Total compensation: $183,600 (range: $156,060 - $211,140)
Base salary: -
Stock: -
Bonus: -

Interview Experience

52 interviews

Difficulty: 3.3 / 5
Duration: 14-28 weeks
Offer rate: 42%
Experience: Positive 66% · Neutral 21% · Negative 13%

Interview Process

1. Phone Screen
2. Technical Interview
3. Hiring Manager
4. Team Fit

Common Questions

Technical skills

Past experience

Team collaboration

Problem solving