
Organizing the world's information and making it universally accessible.
Security Engineering Manager, Trust and Safety, Gemini and Labs
About the job
Our Security team works to create and maintain the safest operating environment for Google's users and developers. Security Engineers work with network equipment and actively monitor our systems for attacks and intrusions. In this role, you will also work with software engineers to proactively identify and fix security flaws and vulnerabilities.
Google's Trust and Safety (T&S) team is entrusted with an immense responsibility: making the internet safer for everyone. Within this group, our T&S Gemini and Labs team operates at the absolute frontier of technology. We are the stewards tasked with safety for Google's generative AI products and features, ensuring they are developed and deployed with the highest standards of safety and integrity.
As the Security Engineering Manager for T&S Gemini and Labs, you will be a thought leader and domain expert, informing the team's strategy and driving execution to combat novel Generative AI (GenAI) threats while providing technical leadership in cybersecurity, intelligence, and threat analysis. You will mentor and lead the team at the critical intersection of AI research and real-world security harms, building the foundational defenses that prevent the misuse of generative models and agents. By pioneering threat detection and mitigation strategies, you will empower products to push the boundaries of AI innovation safely, ethically, and securely. You will not just participate in this mission; you will help lead it, setting the industry standard for addressing unprecedented safety issues at a global scale.
The US base salary range for this full-time position is $207,000-$300,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Lead the team's strategy, research, and direction to identify risks and threats in an evolving AI threat landscape. Translate insights into proactive mitigation goals that address novel abuse and attack vectors across product surfaces and capabilities.
- Provide technical leadership to scope and drive comprehensive, transparent, and scalable anti-abuse defenses for interconnected AI ecosystems.
- Serve as the team's thought leader, engaging with Engineering, Product, Policy, and Legal to deploy scalable, defensible mitigation processes for risks associated with Large Language Models (LLMs).
- Lead adversarial simulations, proactive assessments, and discovery programs to surface unknown attack vectors and risks in AI systems and next-generation capabilities. Architect novel testing frameworks to expose multi-stage vulnerabilities.
- Direct rapid investigation, mitigation, and response for high-severity AI abuse incidents, collaborating across product, research, and policy teams.
Minimum qualifications
- Bachelor's degree or equivalent practical experience.
- 8 years of experience with security engineering, computer and network security, and security protocols.
- 8 years of experience with security analysis, abuse detection, or threat modeling.
- 3 years of experience leading teams in a technical capacity or leading technical risk analysis in an enterprise environment.
- Experience in people management.
Preferred qualifications
- Experience in applied vulnerability research, or advanced penetration testing, red teaming, or bug bounties.
- Experience analyzing systems and identifying security and abuse problems, threat modeling, and remediation.
- Understanding of generative AI technologies, large language models (LLMs), and AI agents.
- Ability to review or be exposed to sensitive or violative content as part of the core role.
- Excellent problem-solving and critical-thinking skills with attention to detail in an ever-changing environment.
About Google

Google specializes in internet-related services and products, including search, advertising, and software.

Employees: 10,001+
Headquarters: Mountain View
Valuation: $1,700B

Reviews (10 reviews)
Overall: 4.5
Work-life balance: 3.2
Compensation: 4.3
Company culture: 4.1
Career growth: 4.2
Management: 3.8
Recommend rate: 82%

Pros
- Great benefits and perks
- Innovative and interesting work
- Career development and learning opportunities

Cons
- High pressure and expectations
- Long hours and heavy workload
- Fast-paced and overwhelming environment
Salary range (57,503 data points)

Mid/L4 · Accessibility Analyst (1 report)
Total compensation: $214,500/yr
Base salary: $165,000
Stock: -
Bonus: -
Interview reviews (9 reviews)
Difficulty: 3.4 / 5
Duration: 14-28 weeks
Offer rate: 44%
Experience: positive 0%, neutral 56%, negative 44%

Interview process
1. Application Review
2. Online Assessment/Technical Screen
3. Phone Screen
4. Onsite/Virtual Interviews
5. Team Matching
6. Offer

Common topics
- Coding/Algorithm
- System Design
- Behavioral/STAR
- Technical Knowledge
- Product Sense