
Organizing the world's information and making it universally accessible.
Security Engineering Manager, Trust and Safety, Gemini and Labs
About the job
Our Security team works to create and maintain the safest operating environment for Google's users and developers. Security Engineers work with network equipment and actively monitor our systems for attacks and intrusions. In this role, you will also work with software engineers to proactively identify and fix security flaws and vulnerabilities.
Google's Trust and Safety (T&S) team is entrusted with an immense responsibility: making the internet safer for everyone. Within this group, our T&S Gemini and Labs team operates at the absolute frontier of technology. We are the stewards tasked with the safety of Google's generative AI products and features, ensuring they are developed and deployed with the highest standards of safety and integrity.
As the Security Engineering Manager for T&S Gemini and Labs, you will be a thought leader and domain expert, informing the team's strategy and driving execution to combat novel Generative AI (GenAI) threats while providing technical leadership in cybersecurity, intelligence, and threat analysis. You will mentor and lead the team to grow at the critical intersection of AI research and real-world security harms, building the foundational defenses that prevent the misuse of generative models and agents. By pioneering threat detection and mitigation strategies, you will empower products to push the boundaries of AI innovation safely, ethically, and securely. You will not just participate in this mission; you will help lead it, setting the standard for the industry in addressing unprecedented safety issues at a global scale.
The US base salary range for this full-time position is $207,000-$300,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Lead the team's strategy, research, and direction to identify risks and threats in an evolving AI threat landscape. Translate insights into proactive mitigation goals that address novel abuse and attack vectors across product surfaces and capabilities.
- Provide technical leadership to scope and drive comprehensive, transparent, and scalable anti-abuse defenses for interconnected AI ecosystems.
- Serve as the team's thought leader, engaging with Engineering, Product, Policy, and Legal to deploy scalable and defensible mitigation processes for risks associated with Large Language Models (LLMs).
- Lead adversarial simulations, proactive assessments, and discovery programs to surface unknown attack vectors and risks in AI systems and next-generation capabilities. Architect novel testing frameworks to expose multi-stage vulnerabilities.
- Direct rapid investigation, mitigation, and response for high-severity AI abuse incidents, collaborating across product, research, and policy teams.
Minimum qualifications
- Bachelor's degree or equivalent practical experience.
- 8 years of experience with security engineering, computer and network security, and security protocols.
- 8 years of experience with security analysis, abuse detection, or threat modeling.
- 3 years of experience leading teams in a technical capacity or leading technical risk analysis in an enterprise environment.
- Experience in people management.
Preferred qualifications
- Experience in applied vulnerability research, or in advanced penetration testing, red teaming, or bug bounty programs.
- Experience in analyzing systems, identifying security and abuse problems, threat modeling, and remediation.
- Understanding of generative AI technologies, large language models (LLMs), and AI agents.
- Ability to review, or be exposed to, sensitive or violative content as a core part of the role.
- Excellent problem-solving and critical thinking skills, with attention to detail in an ever-changing environment.