Who We Are
We are an applied AI lab building end-to-end software agents. We're the team behind Devin, the first AI software engineer, and Windsurf, an AI-native IDE. These products represent our vision for AI that doesn't just assist engineers, but works alongside them as a genuine teammate.
Our team is small and talent-dense: world-class competitive programmers, former founders, and researchers from the frontier of AI, including Scale AI, Palantir, Cursor, Google DeepMind, and others.
Role Mission
Mid-training sits at the seam between pre-training and post-training and is one of the highest-leverage points in the entire model pipeline. This is where raw base model capability is sharpened into something that can reason deeply, generalize reliably, and serve as the foundation that post-training builds on.
You will own the late-stage training decisions that determine what our models are fundamentally capable of: data mix and quality uplift, annealing schedules, context length extension, capability injection across coding, math, and reasoning, and the synthetic data strategies that make all of it scale. The work cuts across what is classically considered both pre-training and post-training. We don't distinguish between research and engineering; we expect both.
What You'll Accomplish
- Data Mix and Quality Uplift:
Design and iterate on high-quality data mixtures for late-stage and annealing training runs. Develop principled methods for sourcing, filtering, and weighting data to sharpen model capabilities without degrading general performance.
- Capability Injection:
Drive targeted improvements in coding, mathematics, and long-horizon reasoning through curated data strategies and training interventions. Translate research insights into measurable capability gains on our agents.
- Synthetic Data Research:
Develop and evaluate synthetic data pipelines that generate training signal at scale. Understand the limits and failure modes of synthetic approaches and build methods that hold up in production training runs.
- Annealing and Schedule Design:
Research and optimize multi-stage learning rate schedules, warmup strategies, and compute allocation across training phases. Understand how schedule choices interact with data distribution and model behavior.
- Context Length Extension:
Research and implement methods for extending effective context length without degrading short-context performance. This includes positional encoding strategies, data construction, and targeted evaluation.
- Evaluation and Iteration:
Build evals that distinguish real capability improvements from benchmark overfitting. Close the loop between training decisions and what actually matters for Devin and our other systems in deployment.
- Scaling and Methodology:
Measure how mid-training interventions scale with compute and data. Develop new approaches when existing methods hit ceilings; we expect both rigorous empiricism and original thinking.
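To give a flavor of the schedule-design work above: a multi-stage learning rate schedule (linear warmup followed by cosine annealing) can be expressed as a simple piecewise function. This is an illustrative sketch only; the constants and function name below are hypothetical, not Cognition's actual training configuration.

```python
import math

def lr_at_step(step, total_steps, peak_lr=3e-4, final_lr=3e-5, warmup_steps=2000):
    """Illustrative two-stage schedule: linear warmup, then cosine anneal.

    All hyperparameter values are placeholders for exposition.
    """
    if step < warmup_steps:
        # Linear warmup from 0 up to the peak learning rate.
        return peak_lr * step / warmup_steps
    # Cosine decay from peak_lr down to final_lr over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return final_lr + 0.5 * (peak_lr - final_lr) * (1 + math.cos(math.pi * progress))
```

In practice, schedule research of the kind described here involves how such choices (warmup length, decay shape, final learning rate) interact with the data distribution fed to each training phase, not just the curve itself.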
Exceptional Candidates Have Demonstrated
- Deep familiarity with the LLM training pipeline end to end: pre-training data, optimization, architecture, and how mid-training and post-training interact
- Hands-on experience with continual pre-training, annealing, or late-stage data mixing for large models
- Strong intuition for data quality: what makes a dataset useful for training, how to filter and curate at scale, and how data mix choices compound across evals
- Experience developing or evaluating synthetic data pipelines for capability improvement
- Proficiency in Python and deep learning frameworks (PyTorch); comfortable debugging distributed training at scale
- Strong fundamentals in optimization, statistics, and ML theory; able to distinguish real effects from noise, instability, and overfitting
- A track record of original contributions: publications, open-source impact, or internal results that moved a capability frontier
- Comfort operating in ambiguous, fast-moving environments where the problem definition is as important as the solution

We care more about demonstrated capability than credentials. A PhD is one signal among many.
Resources & Environment
- Small, highly selective team where research and product move together; prototypes reach real deployment quickly
- Compute is not a constraint: large allocations, with training jobs routinely running across thousands of GPUs from day one
- The environment rewards speed, autonomy, and technical depth with minimal process overhead; this is one of the most competitive and fast-moving problems in AI
Equal Opportunity
Cognition is an equal opportunity employer. We do not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other protected characteristic under applicable law. We are committed to providing reasonable accommodations for candidates with disabilities throughout the hiring process; please let us know if you need one.