Technical Analyst, Content Adversarial Red Team

Washington, D.C., USA; Austin, TX, USA

Google

Google’s mission is to organize the world's information and make it universally accessible and useful.



Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 4 years of experience in data analytics, data science, statistics, intelligence (e.g., geopolitical research, open-source intelligence (OSINT)), Trust and Safety, cybersecurity, business strategy, policy, or related fields.

Preferred qualifications:

  • Master's degree or equivalent practical experience.
  • Experience working with engineering and product teams to create tools, solutions, or automation to improve user safety.
  • Experience in SQL, data collection/transformation, visualization/dashboards, or a scripting/programming language (e.g., Python).
  • Experience with machine learning.
  • Excellent communication and presentation skills (written and verbal) and the ability to influence cross-functionally at various levels.
  • Excellent problem-solving and critical thinking skills with attention to detail in a fluid environment.

About the job

Trust and Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what’s right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.

In this role, you will help uncover "unknown" generative AI issues: novel threats and vulnerabilities that are not captured by traditional testing methods. You will use technical tools, scripts, and automation to develop repeatable processes that multiply the impact of the team, yield valuable insights, and identify and mitigate emerging content safety risks within Google's Generative Artificial Intelligence (GenAI) products.

The US base salary range for this full-time position is $108,000-$158,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target salaries for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Develop and implement the Content Adversarial Red Team's (CART's) technical analysis program to improve AI safety and security.
  • Conduct red teaming projects for Google’s Generative AI products.
  • Develop and execute red teaming strategies; for each project and product, identify testing approaches and provide them as instructions to vendor teams.
  • Partner with product managers, engineers, researchers, and other stakeholders to understand product functionality, potential vulnerabilities, and develop actionable solutions.
  • Identify interesting, unique, and thus far unknown issue areas or trends through the review of raw testing results.
  • Conduct in-depth analysis of complex issues and edge cases, providing clear and actionable insights to inform decision-making.
  • Work with sensitive content or situations and be exposed to graphic, controversial, or upsetting topics or content.

Tags: Analytics Artificial Intelligence Automation Data Analytics Generative AI Machine Learning OSINT Python Red team Scripting SQL Strategy Vulnerabilities

Perks/benefits: Career development Equity / stock options Salary bonus

Region: North America
Country: United States
