Hiring
Benefits & Perks
• Parental leave
• Professional development budget
• Flexible work arrangements
• Comprehensive health, dental, and vision insurance
• 401(k) matching
• Competitive salary and equity package
Required Skills
Python
JavaScript
TypeScript
About the job
Trust and Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.
The Content Adversarial Red Team (CART) within Trust and Safety conducts unstructured adversarial testing of Google’s premier generative AI products to uncover emerging content risks not identified in structured evaluations. CART works alongside product, policy, and enforcement teams to build the safest possible experiences for Google users.
In this role, you will develop and drive the team’s strategic plans while acting as a key advisor to executive leadership, leveraging cross-functional influence to advance safety initiatives. As a member of the team, you will mentor analysts and foster a culture of continuous learning by sharing your deep expertise in adversarial techniques. Additionally, you will represent Google’s AI safety efforts in external forums, collaborating with industry partners to develop best practices for responsible AI and solidifying our position as a thought leader in the field.
At Google we work hard to earn our users’ trust every day. Trust and Safety is Google’s team of abuse fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.
The US base salary range for this full-time position is $160,000-$237,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Lead and guide the team's efforts in identifying and analyzing high-complexity content risks, with a special focus on the safety of users under 18, and influence cross-functional teams, including Product, Engineering, Research, and Policy, to drive the implementation of safety initiatives.
- Develop and deploy tailored red teaming exercises that identify emerging, unanticipated, or unknown threats.
- Drive the creation and refinement of net-new red teaming methodologies, strategies, and tactics to build the U18 red teaming program and ensure coherence and consistency across all testing modalities.
- Design, develop, and oversee the execution of innovative red teaming strategies to uncover content abuse risks.
- Act as a key advisor to executive leadership on content safety issues, providing actionable insights and recommendations. Note: this role will be exposed to graphic, controversial, or upsetting content.
Minimum qualifications
- Bachelor's degree or equivalent practical experience.
- 10 years of experience in data analytics, trust and safety, policy, cybersecurity, business strategy, or a related field.
- Experience in Artificial Intelligence or Machine Learning.
Preferred qualifications
- Master's degree or PhD in a relevant field.
- 3 years of experience in red teaming, vulnerability research, or penetration testing.
- Experience working with engineering and product teams to create tools, solutions, or automation to improve user safety.
- Experience with machine learning.
- Experience in SQL, data collection/transformation, visualization/dashboards, or a scripting/programming language (e.g., Python).
- Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.
About Google

Google specializes in internet-related services and products, including search, advertising, and software.
Employees: 10,001+
Headquarters: Mountain View
Valuation: $1,700B
Reviews
Overall: 3.7 (25 reviews)
Work Life Balance: 3.8
Compensation: 4.2
Culture: 3.4
Career: 3.9
Management: 2.8
Recommend to a Friend: 68%
Pros
Excellent compensation and benefits
Smart and talented colleagues
Great perks and work flexibility
Cons
Management and leadership issues
Bureaucracy and slow processes
Constantly changing priorities and reorganizations
Salary Ranges
63,375 data points, across levels Junior/L3, Mid/L4, Senior/L5, Staff/L6, Principal/L7, L8, and Director.
Junior/L3 · Data Scientist L3: $176,704 total/year (range $150,298-$203,110); 0 reports; Base/Stock/Bonus breakdown not listed.
Interview Experience
9 interviews reported. Difficulty: 3.4/5. Duration: 14-28 weeks. Offer Rate: 44%.
Experience: Positive 0%, Neutral 56%, Negative 44%
Interview Process
1. Application Review
2. Online Assessment/Technical Screen
3. Phone Screen
4. Onsite/Virtual Interviews
5. Team Matching
6. Offer
Common Questions
Coding/Algorithm
System Design
Behavioral/STAR
Technical Knowledge
Product Sense
News & Buzz
Video game company stock prices dip after Google introduces an AI world-generation tool (The Verge, 5w ago)
Google Faces New 'Gemini' Trademark Suit From Speakers Company (Bloomberg Law News, 5w ago)
A Bunch Of Big Video Game Company Stocks Just Tanked For A Very Dumb Reason (Kotaku, 5w ago)
Videogame stocks slide on Google's AI model that turns prompts into playable worlds (Reuters, 5w ago)