Analyst, Under 18 Product Policies, Content Adversarial Red Team
Washington, D.C., USA; Austin, TX, USA
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 4 years of experience in Trust and Safety, product policy, business strategy, or related fields.
Preferred qualifications:
- Master's degree or equivalent practical experience.
- Experience in SQL, data collection/transformation, visualization/dashboards.
- Experience in a scripting/programming language (e.g. Python).
- Experience with machine learning.
- Excellent communication and presentation skills (written and verbal) and the ability to influence cross-functionally at various levels.
- Excellent problem-solving and critical thinking skills with attention to detail in a fluid environment.
About the job
Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.
As a Content Adversarial Red Team Analyst, you will be a key contributor in identifying and mitigating emerging content safety risks, particularly those affecting users under 18 years of age, within Google's Generative Artificial Intelligence (GenAI) products. You will uncover "unknown" GenAI issues: novel threats and vulnerabilities that are not captured by traditional testing methods. Your ability to think strategically will be instrumental in shaping the future of AI development, ensuring that Google's AI products are safe, fair, and unbiased.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Use Google’s big data to conduct data-oriented analysis, architect metrics, synthesize information, solve problems, and influence business decision-making by presenting insights and market trends.
- Conduct red teaming projects for Google's most prominent Generative AI products, using novel strategies to identify unknown content safety risks. Apply under-18 product policy knowledge where applicable to red teaming projects.
- Develop and implement CART strategic programs to improve Artificial Intelligence (AI) safety and security, including defining and developing new programs, identifying key areas for process and impact improvements, and developing comprehensive program plans.
- Work with sensitive content or situations and be exposed to graphic, controversial, or upsetting topics or content.