Research Scientist, Tech Lead, AI-Based Cyberattack Defense
Mountain View, California, US; San Francisco, California, US
Full Time · Senior-level / Expert · USD 197K - 291K
DeepMind
Snapshot
AI’s cyberattack capability is increasing rapidly; recently, a number of publicized zero-day exploits have been found with the help of AI in well-tested software. Our team works at the cutting edge of this challenge, focusing on evaluating the cyberattack capabilities of AI and, importantly, on developing defenses that enable our models to refuse to help with cyberattacks. The work spans from novel research all the way to the deployment of defenses in Gemini. Given the dual-use nature of AI tools, our mission is to advance the state of the art in discerning between malicious and benign use, ensuring our models are both helpful and safe.
About Us
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts, and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery and collaborate with others on critical challenges, ensuring safety and ethics are the highest priorities.
Our project is powered by an elite team of exceptional Research Scientists and Software Engineers with a proven track record in security and AI. As a Tech Lead, you will guide this team's efforts, collaborating closely with a wide variety of partners across Google DeepMind (including being embedded within larger security and safety teams) and Google/Alphabet to enable the secure use of our most advanced generative AI models.
The Role
We seek a leader who thrives in ambiguity and is adept at navigating the uncertainty of research. You are skilled at guiding a team to invent novel solutions, pivot when ideas don’t work out, and maintain momentum. Your leadership will be critical in shaping our defenses and influencing both the research community and products with tremendous impact.
Key responsibilities:
- Set the technical vision and strategic direction for the team's research and development efforts in evaluating cyberattack capabilities and developing mitigations to prevent our AI models from being used for cyberattacks.
- Develop novel research that improves the state of the art in mitigating misuse of models for cyberattacks.
- Oversee and contribute to the deployment in Gemini of the most suitable defenses the team produces.
- Lead, mentor, and grow a high-performing team, fostering a culture of innovation, collaboration, and technical excellence.
- Drive the end-to-end execution of complex research projects, from rapid-prototyping of novel datasets and methods to developing principled, scalable techniques for discerning malicious and benign use.
- Act as the primary technical point of contact, representing the team's work and influencing stakeholders across Google DeepMind and Google's security and product organizations.
- Contribute hands-on to the most critical and challenging aspects of the work, staying deeply engaged with the technical details while guiding the team's overall progress.
About You
In order to set you up for success as a Research Scientist at Google DeepMind, we look for the following skills and experience:
- MSc or PhD/DPhil degree in Computer Security, Computer Science, or a related technical field, or equivalent practical experience.
- A track record of impactful publications on AI security at top-tier security or AI conferences, or significant contributions to open-source or Google code with wide adoption.
- Proven experience leading research projects or technical teams in a relevant area.
In addition, the following would be an advantage:
- Demonstrated experience leading projects in one or more of the following areas: cybersecurity exploitation, defenses for cyberattacks, or AI-based exploitation.
- Experience in designing and shipping cybersecurity defenses for common vulnerabilities or leading work on vulnerability exploitation.
- Deep understanding and proven experience in advanced model prompting and red teaming techniques.
- An independent, proactive, and mission-driven leadership style with a passion for building and enabling high-impact teams.
The US base salary range for this full-time position is between $197,000 and $291,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
Tags: Artificial Intelligence Computer Science Exploits Generative AI Machine Learning PhD Prototyping Red team Vulnerabilities Zero-day
Perks/benefits: Career development Conferences Equity / stock options Salary bonus