refresh

트렌딩 기업

트렌딩 기업

채용

채용Google

Senior Analyst, Content Adversarial Red Team

Google

Senior Analyst, Content Adversarial Red Team

Google

·

On-site

·

Full-time

·

2mo ago

보상

$160,000 - $237,000

복지 및 혜택

Parental Leave

Learning

Flexible Hours

Healthcare

Equity

필수 스킬

Python

JavaScript

TypeScript

About the job

Trust and Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team-player with a passion for doing what’s right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed - with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensuring the highest levels of user safety.

The Content Adversarial Red Team (CART) within Trust and Safety conducts unstructured adversarial testing of Google’s premier generative AI products to uncover emerging content risks not identified in structured evaluations. CART works alongside product, policy, and enforcement teams to build the safest possible experiences for Google users.

In this role, you will develop and drive the team’s strategic plans while acting as a key advisor to executive leadership, leveraging cross-functional influence to advance safety initiatives. As a member of the team, you will mentor analysts and foster a culture of continuous learning by sharing your deep expertise in adversarial techniques. Additionally, you will represent Google’s AI safety efforts in external forums, collaborating with industry partners to develop best practices for responsible AI and solidifying our position as a thought leader in the field.

At Google we work hard to earn our users’ trust every day. Trust and Safety is Google’s team of abuse fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this full-time position is $160,000-$237,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Lead and guide the team's efforts in identifying and analyzing high-complexity content risks, with a special focus on the safety of users under 18 and influence cross-functional teams, including Product, Engineering, Research, and Policy, to drive the implementation of safety initiatives.
  • Develop and deploy tailored and red teaming exercises that identify emerging, unanticipated, or unknown threats.
  • Drive the creation and refinement of net new red teaming methodologies, strategies and tactics to help build the U18 red teaming program and ensure coherence and consistency across all testing modalities.
  • Design, develop, and oversee the execution of innovative and red teaming strategies to uncover content abuse risks.
  • Act as a key advisor to executive leadership on content safety issues, providing actionable insights and recommendations. This role will be exposed to graphic, controversial, or upsetting content.

Minimum qualifications

  • Bachelor's degree or equivalent practical experience.

  • 10 years of experience in data analytics, trust and safety, policy, cybersecurity, business strategy, or a related field.

  • Experience in Artificial Intelligence or Machine Learning.

Preferred qualifications

  • Master's degree or PhD in a relevant field.

  • 3 years of experience in red teaming, vulnerability research or penetration testing.

  • Experience working with engineering and product teams to create tools, solutions, or automation to improve user safety.

  • Experience with machine learning.

  • Experience in SQL, building dashboards, data collection/transformation, visualization/dashboards, or experience in a scripting/programming language (e.g., Python).

  • Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.

총 조회수

0

총 지원 클릭 수

0

모의 지원자 수

0

스크랩

0

Google 소개

Google

Google

Public

Google specializes in internet-related services and products, including search, advertising, and software.

10,001+

직원 수

Mountain View

본사 위치

$1,700B

기업 가치

리뷰

3.7

25개 리뷰

워라밸

3.8

보상

4.2

문화

3.4

커리어

3.9

경영진

2.8

68%

친구에게 추천

장점

Excellent compensation and benefits

Smart and talented colleagues

Great perks and work flexibility

단점

Management and leadership issues

Bureaucracy and slow processes

Constantly changing priorities and reorganizations

연봉 정보

57,502개 데이터

Junior/L3

L3

L4

L5

L6

L7

L8

Mid/L4

Principal/L7

Senior/L5

Staff/L6

Director

Junior/L3 · Data Scientist L3

0개 리포트

$176,704

총 연봉

기본급

-

주식

-

보너스

-

$150,298

$203,110

면접 경험

9개 면접

난이도

3.4

/ 5

소요 기간

14-28주

합격률

44%

경험

긍정 0%

보통 56%

부정 44%

면접 과정

1

Application Review

2

Online Assessment/Technical Screen

3

Phone Screen

4

Onsite/Virtual Interviews

5

Team Matching

6

Offer

자주 나오는 질문

Coding/Algorithm

System Design

Behavioral/STAR

Technical Knowledge

Product Sense