Snapshot
As a Chemist in the Responsible Development & Innovation (ReDI) team at Google DeepMind, you will be a principal architect of the safety protocols governing the intersection of Large Language Models (LLMs) and the chemical sciences. You will design and execute rigorous safety evaluations and inform mitigation strategies that ensure our frontier models accelerate scientific discovery without compromising global security. This role is pivotal in deciding when and how our most advanced AI systems are released to the world.
About us
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
The role
We are seeking a PhD-level Chemist with post-doctoral or equivalent experience in organic synthesis. You will serve as a technical authority on our red-teaming efforts, simulating adversarial scenarios to identify where AI models might inadvertently provide actionable information on the synthesis and/or weaponisation of known or potential Chemical Warfare Agents (CWAs).
You will apply your knowledge of chemistry to devise evaluation methodologies (e.g. red-teaming, knowledge elicitation studies, etc.) and contribute to building and running these evaluations on new models. You will analyse the results from evaluations, communicate them clearly to advise and inform decision-makers on the safety of our AI systems, and use them to refine our harm frameworks and inform our mitigation strategies.
In this role, you will work closely with other Subject-Matter Experts (SMEs) in the chemical, biological, radiological and nuclear domains, Research Engineers and Research Scientists focused on developing AI systems, as well as experts in AI ethics and policy.
Key responsibilities:
- Safety Evaluation Architecture: Build rigorous, scalable frameworks to evaluate model proficiency in overcoming key bottlenecks in CWA precursor acquisition, chemical synthesis, and weaponisation.
- Strategic Advisory: Analyse evaluation results to brief executive decision-makers on model safety, directly influencing deployment "Go/No-Go" decisions.
- Harm Framework Innovation: Refine our internal safety taxonomies to account for emergent risks at the intersection of general AI and specialist models like AlphaFold.
- Collaborative Mitigation: Partner with Research Engineers to revise mitigation strategies and refine harm frameworks for identified chemical risks. Work with other SMEs in the chemical, biological, radiological, nuclear, and conventional explosive domains to build a unified defence against CBRNE-related risks.
- External Engagement: Stay abreast of global chemical security trends and international non-proliferation policy through engagement with external international, governmental, and non-governmental organisations.
About you
You are a seasoned scientist who bridges the gap between laboratory chemistry and emerging technology. You are motivated by the challenge of defending complex systems and possess the critical mindset required to anticipate non-obvious misuse scenarios.
Minimum Qualifications:
- Chemistry Expertise: PhD in synthetic organic chemistry with at least two years of post-doctoral or equivalent experience.
- Publication Record: Proven experience publishing as a first author in high-impact general science or chemistry-specific journals, and presenting work at international chemistry conferences. Classified or internal reporting experience will be considered in lieu of public records for candidates from roles in national security.
- Security Domain Expertise: Comprehensive understanding of the Chemical Weapons Convention (CWC) and other national and international CWA agreements/treaties, chemical defence protocols, and the landscape of dual-use research in the chemical domain.
- Systems Thinking: The ability to translate high-level chemical risks into technical requirements for AI safety.
- Communication Excellence: A proven ability to distil complex technical findings into clear, actionable advice for non-specialist stakeholders.
Preferred Experience:
- Knowledge of CWA defence, including synthesis, detection, and countermeasures.
- Direct experience with CBRNE mitigation, non-proliferation, or relevant international security stakeholders.
- Familiarity with the machine learning lifecycle and AI Safety Frameworks.
- Experience using and/or developing computational chemistry tools (e.g., AlphaFold, retrosynthesis engines, etc.).
- Working knowledge of the Frontier Safety Framework (FSF), Critical Capability Levels (CCLs), and similar documents published by other leading AI labs.
- Understanding of Google DeepMind AI research output (e.g., AlphaFold, GNoME, WeatherNext, etc.), and AI products (e.g., Gemini, Nano Banana, Genie, etc.).
- Passion for the ethical deployment of frontier technologies and AI policy.
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunities regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
The US base salary range for this full-time position is between $166,000 and $244,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.
Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.