Google

Organizing the world's information and making it universally accessible.

Security Engineering Manager, Trust and Safety, Gemini and Labs

Role: Security
Level: Lead
Work: On-site
Type: Full-time
Posted: 1 month ago

About the job

Our Security team works to create and maintain the safest operating environment for Google's users and developers. Security Engineers work with network equipment and actively monitor our systems for attacks and intrusions. In this role, you will also work with software engineers to proactively identify and fix security flaws and vulnerabilities.
Google's Trust and Safety (T&S) team is entrusted with an immense responsibility: making the internet safer for everyone. Within this group, our T&S Gemini and Labs team operates at the absolute frontier of technology. We are the stewards tasked with ensuring the safety of Google's generative AI products and features, making certain they are developed and deployed with the highest standards of safety and integrity.

As the Security Engineering Manager for T&S Gemini and Labs, you will be a thought leader and domain expert, informing the team's strategy and driving execution to combat novel Generative AI (GenAI) threats while providing technical leadership in cyber-security, intelligence, and threat analysis. You will mentor and lead the team to grow at the critical intersection of AI research and real-world security/harms, building the foundational defenses that prevent the misuse of generative models and agents. By pioneering threat detection and mitigation strategies, you will empower products to push the boundaries of AI innovation safely, ethically, and securely. You will not just participate in this mission; you will help lead it, setting the standard for the industry in addressing unprecedented safety issues at a global scale.

The US base salary range for this full-time position is $207,000-$300,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Lead the team's strategy, research, and direction to identify risks and threats within an evolving AI threat landscape. Translate insights into proactive mitigation goals to address novel abuse and attack vectors across product surfaces and capabilities.

  • Provide technical leadership to scope and drive comprehensive, transparent, and scalable anti-abuse defenses for interconnected AI ecosystems.

  • Be the team's thought leader and engage with Engineering, Product, Policy, and Legal to deploy scalable and defensible mitigation processes for risks with Large Language Models (LLMs).

  • Lead adversarial simulations, proactive assessments, and discovery programs to surface unknown attack vectors and risks of AI systems and next-generation capabilities. Architect novel testing frameworks to expose multi-stage vulnerabilities.

  • Direct rapid investigation, mitigation, and response for high-severity AI abuse incidents, collaborating across product, research, and policy teams.

Minimum qualifications

  • Bachelor's degree or equivalent practical experience.

  • 8 years of experience with security engineering, computer and network security, and security protocols.

  • 8 years of experience with security analysis, abuse detection, or threat modeling.

  • 3 years of experience leading teams in a technical capacity or leading technical risk analysis in an enterprise environment.

  • Experience in people management.

Preferred qualifications

  • Experience in applied vulnerability research, or advanced penetration testing/red teaming/bug bounties.

  • Experience in analyzing systems and identifying security and abuse problems, threat modeling, and remediation.

  • Understanding of generative AI technologies, large language models (LLMs), and AI agents.

  • Ability to review or be exposed to sensitive or violative content as part of the core role.

  • Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.


About Google

Google (Public)

Google specializes in internet-related services and products, including search, advertising, and software.

Employees: 10,001+
Headquarters: Mountain View
Valuation: $1,700B

Reviews

4.5 / 5 (10 reviews)

Work-life balance: 3.2
Compensation: 4.3
Culture: 4.1
Career: 4.2
Management: 3.8

82% recommend to a friend

Pros

Great benefits and perks

Innovative and interesting work

Career development and learning opportunities

Cons

High pressure and expectations

Long hours and heavy workload

Fast-paced and overwhelming environment

Salary Ranges

57,503 data points

Mid/L4 · Accessibility Analyst (1 report)

$214,500 total per year
Base: $165,000
Stock: -
Bonus: -

Interview experience

9 interviews

Difficulty: 3.4 / 5
Duration: 14-28 weeks
Offer rate: 44%

Experience: Positive 0% · Neutral 56% · Negative 44%

Interview process

1. Application Review
2. Online Assessment/Technical Screen
3. Phone Screen
4. Onsite/Virtual Interviews
5. Team Matching
6. Offer

Common questions

Coding/Algorithm

System Design

Behavioral/STAR

Technical Knowledge

Product Sense