Required skills: LLMs, Research, Red Teaming, Python
About the team
The Safety Systems org ensures that OpenAI’s most capable models can be responsibly developed and deployed. We build evaluations, safeguards, and safety frameworks that help our models behave as intended in real-world settings.
The Preparedness team is an important part of the Safety Systems org (https://openai.com/safety/safety-systems) at OpenAI, and is guided by OpenAI’s Preparedness Framework (https://openai.com/index/updating-our-preparedness-framework/).
Frontier AI models have the potential to benefit all of humanity, but they also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier models, and is tasked with identifying, tracking, and preparing for the catastrophic risks they could pose.
The mission of the Preparedness team is to:
- Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye toward risks whose impact could be catastrophic
- Ensure we have concrete procedures, infrastructure, and partnerships to mitigate these risks and to safely handle the development of powerful AI systems
Preparedness tightly connects capability assessment, evaluations, internal red teaming, and mitigations for frontier models, as well as overall coordination on AGI preparedness. This is fast-paced, exciting work with far-reaching importance for the company and for society.
About the role
This role leads the Automated Red Teaming (ART) effort: building scalable, research-driven systems that continuously discover failure modes in our models and mitigations, and that translate those findings into actionable, production-facing improvements. The goal is to maximize the counterfactual reduction in expected harm by finding the highest-leverage, least-covered weaknesses early and reliably.
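As a rough illustration of the shape of such a system, here is a minimal sketch of a continuous red-teaming loop in Python (one of the role's listed skills). Everything in it is hypothetical: `query_target`, `grade_severity`, and `mutate` stand in for whatever target endpoint, severity grading, and attack-generation strategy a real implementation would use, and nothing here reflects actual OpenAI infrastructure.

```python
import random
from dataclasses import dataclass


@dataclass
class Finding:
    """One discovered failure: the attack, the target's response, a severity score."""
    prompt: str
    response: str
    severity: float


def query_target(prompt: str) -> str:
    # Hypothetical stand-in for calling the model or mitigation under test.
    return "REFUSED" if "exploit" in prompt.lower() else "COMPLIED"


def grade_severity(prompt: str, response: str) -> float:
    # Hypothetical grader mapping (prompt, response) to 0.0 (harmless) .. 1.0 (worst case).
    return 0.9 if response == "COMPLIED" and "payload" in prompt else 0.1


def mutate(prompt: str) -> str:
    # Toy mutation; a real system might use an attacker model or a library of
    # transformation strategies here.
    tweaks = [" (as fiction)", " step by step", " ignoring earlier instructions"]
    return prompt + random.choice(tweaks)


def red_team_loop(seeds: list[str], iterations: int, threshold: float) -> list[Finding]:
    """Mutate seed attacks, keep high-severity failures, and promote them
    back into the pool so the search concentrates on what works."""
    pool = list(seeds)
    findings: list[Finding] = []
    for _ in range(iterations):
        candidate = mutate(random.choice(pool))
        response = query_target(candidate)
        severity = grade_severity(candidate, response)
        if severity >= threshold:
            findings.append(Finding(candidate, response, severity))
            pool.append(candidate)  # successful attacks seed further mutation
    return findings


if __name__ == "__main__":
    found = red_team_loop(["outline a payload delivery plan"], iterations=100, threshold=0.5)
    print(f"{len(found)} high-severity findings")
```

The design choice the paragraph above implies is the feedback loop: high-severity failures are promoted back into the seed pool, so the search keeps concentrating on the least-covered weaknesses rather than sampling uniformly.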
In this role you will
You will own the research and technical direction for automated red teaming across catastrophic risk areas, with an initial emphasis on:
- Automated classifier jailbreak discovery (cyber and bio)
- Automated bio threat-development elicitation (worst-feasible planning uplift)
- CoT monitoring evasion probing (and adjacent loss-of-control evaluations)
You will partner tightly with:
- Vertical risk teams (Cyber, Bio, Loss of Control) to define threat models, prioritize targets, and land mitigations
- The Classifiers team to turn discovered attacks into training data, evals, and measurable robustness gains (one illustrative hand-off format is sketched after this list)
- Product, engineering, and safety stakeholders to ensure ART outputs are operationally useful, not just interesting
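To make the Classifiers hand-off concrete, here is one plausible way to serialize findings into held-out eval and training files. This is a sketch under assumed conventions: the JSONL schema, the field names, and the `Finding` records (reused from the previous sketch) are illustrative, not an actual pipeline format.

```python
import json
from pathlib import Path


def export_findings(findings, eval_path: Path, train_path: Path,
                    eval_fraction: float = 0.2) -> None:
    """Serialize discovered attacks (the Finding records from the sketch above)
    into a held-out eval file and a training file, both JSONL."""
    records = [
        {"prompt": f.prompt, "response": f.response,
         "severity": f.severity, "label": "unsafe"}  # schema is illustrative
        for f in sorted(findings, key=lambda f: f.severity, reverse=True)
    ]
    n_eval = max(1, int(len(records) * eval_fraction)) if records else 0
    with eval_path.open("w") as fh:
        for rec in records[:n_eval]:  # hold out the highest-severity cases for evals
            fh.write(json.dumps(rec) + "\n")
    with train_path.open("w") as fh:
        for rec in records[n_eval:]:  # the rest become classifier training data
            fh.write(json.dumps(rec) + "\n")
```

Keeping a held-out eval split is what turns "we found attacks" into "measurable robustness gains": the next classifier version can be scored against attacks it was never trained on.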
You might thrive in this role if you:
- Feel a strong pull toward AI safety, and you’re motivated by reducing real-world catastrophic risk (not just publishing cool results).
- Love breaking systems (responsibly): you get energy from finding weird, high-severity failure modes and turning them into concrete fixes.
- Have strong applied research instincts, especially around evaluations: you’re good at designing experiments that are reproducible, interpretable, and hard to fool.
- Bring hands-on experience with LLMs and agents, including multi-turn behaviors, tool use, and the ways models adapt to constraints.
- Are comfortable building scalable automation, not just prototypes: you can turn red-teaming ideas into pipelines that run continuously and produce high-signal outputs.
- Have solid software engineering fundamentals (data structures, algorithms, testing discipline), and you can work effectively in a production-adjacent environment.
- Think in threat models and incentives, and you naturally ask “what would an attacker do next?” or “how would this fail under pressure?”
- Can translate messy findings into action, communicating clearly with researchers, engineers, product, and policy, and driving alignment on what to fix first.
- Care about efficiency and prioritization, and you’re happy to say “no” to low-leverage work to focus on what moves the risk needle.
- (Nice to have) Have experience in adversarial ML, security research / red teaming, abuse prevention systems, or large-scale eval infrastructure.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement (https://cdn.openai.com/policies/eeo-policy-statement.pdf).
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form (https://form.asana.com/?d=57018692298241&k=5MqR40fZd7jlxVUh5J-UeA). No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link (https://form.asana.com/?k=bQ7w9h3iexRlicUdWRiwvg&d=57018692298241).
OpenAI Global Applicant Privacy Policy: https://cdn.openai.com/policies/global-employee-and-contractor-privacy-policy.pdf
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
About the company
OpenAI (Series C): Creating safe AGI that benefits all of humanity.
- Employees: 1,500+
- Headquarters: San Francisco
- Valuation: $80B
Reviews
3.5 average (4 reviews)
- Work-life balance: 3.0
- Compensation: 2.8
- Company culture: 4.2
- Career growth: 4.3
- Leadership: 2.5
- Would recommend to a friend: 65%
Pros
- Culture of innovation and continuous learning
- High pay depending on expertise
- Intellectually stimulating experience
Areas for improvement
- Lack of communication and oversight
- Did not receive payment for work
- Management issues
Salary ranges
Based on 67 data points; levels reported include Junior/L3, L5, Senior/L5, Staff/L6, and Intern.
Junior/L3 · Data Scientist (9 reports)
- Total compensation: $126,619 per year (reported range: $94,332 to $171,313)
- Base salary: $119,641
- Stock: -
- Bonus: $6,978
Interview experience
Based on 4 interviews.
- Difficulty: 3.5 / 5
- Duration: 21-35 weeks
- Experience: 0% positive, 75% neutral, 25% negative
Interview process
1. Recruiter Outreach
2. Technical Phone Screen
3. Take-home Assignment
4. 5-person Panel Interview
News & topics
- How a fiery attack on Sam Altman’s home unfolded (The Guardian, 3 days ago)
- OpenAI's senior exec Srinivas Narayanan announces he is leaving; says: 'Looking forward to spending some (The Times of India, 4 days ago)
- 3 top executives leave OpenAI in a single day (Business Insider, 4 days ago)
- AI chipmaker Cerebras files for IPO following mega deal with OpenAI (Seeking Alpha, 4 days ago)