Director of Security Engineering
Mountain View, California, US
Full Time · Executive-level / Director · USD 280K – 415K
DeepMind
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
Snapshot
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
About Us
At Google DeepMind, we've built a unique culture and environment where long-term ambitious research can flourish. Our special interdisciplinary team combines the best techniques from deep learning, reinforcement learning and systems neuroscience to build general-purpose learning algorithms. We have already made a number of high-profile breakthroughs towards building artificial general intelligence (AGI) and, with the support of Alphabet, we have access to extraordinary computational resources and a clear route to global impact on millions of people.
The Role
As the Director of Security Engineering at Google DeepMind, you will lead and inspire a world-class team of security engineers to protect our groundbreaking research, cutting-edge technology, and the sensitive data that fuels our progress towards artificial general intelligence. Reporting directly to the VP of Security and Privacy, you will be a key member of the security leadership team, responsible for setting and executing the strategic vision for security across Google DeepMind. You will ensure that security is a fundamental consideration in every aspect of our work, from the initial design of our AI systems to the deployment of our research and the infrastructure that supports it. You will also play a critical role in collaborating with Google's central security teams to define and implement best practices for secure model development.
Key responsibilities:
- Leadership & Strategy:
- Define and drive the long-term security strategy and roadmap for Google DeepMind, aligning with overall organizational goals and anticipating future threats.
- Lead, mentor, and grow a high-performing team of security engineers, fostering a culture of collaboration, innovation, and continuous learning.
- Build and maintain strong relationships with key stakeholders across Google DeepMind, including research, engineering, legal, and compliance teams.
- Represent Google DeepMind's security posture to external partners, regulators, and the broader security community.
- Technical Expertise:
- Provide expert guidance on the design and implementation of secure systems, including threat modeling, vulnerability management, incident response, and security automation.
- Oversee the development and maintenance of security policies, standards, and procedures, ensuring they are at the forefront of industry best practices.
- Stay abreast of the latest security threats, vulnerabilities, and industry best practices, particularly in the rapidly evolving field of AI security.
- Drive innovation in security engineering, exploring and implementing cutting-edge technologies and methodologies to continue enhancing Google DeepMind's security posture.
- Product Security & IP Protection:
- Lead the development and implementation of security strategies to protect Google DeepMind's intellectual property and products throughout their lifecycle.
- Ensure that security is integrated into all phases of product development, from conception to deployment and maintenance.
- Oversee the development of secure design principles and guidelines for AI models and systems.
- Collaboration with Google Central Security:
- Serve as the primary liaison between Google DeepMind and Google's central security teams.
- Collaborate with Google's security experts to define and implement secure model development standards and practices, ensuring alignment across the organization.
- Leverage Google's security resources and expertise to augment Google DeepMind's security capabilities.
- Risk Management:
- Identify, assess, and prioritize security risks across the organization, developing and implementing mitigation strategies.
- Develop and maintain a comprehensive understanding of the unique security challenges associated with AI research and development.
- Proactively identify and address emerging security threats related to advancements in AI, such as adversarial attacks and data poisoning.
- Collaboration and Communication:
- Foster a culture of security awareness and responsibility throughout Google DeepMind.
- Communicate effectively with technical and non-technical audiences, clearly articulating complex security concepts and risks.
- Collaborate with other Alphabet security teams to share knowledge and best practices.
About You
In order to set you up for success as the Director of Security Engineering at Google DeepMind, we look for the following skills and experience:
- Bachelor's degree in Computer Science, Information Security, or a related technical field, or equivalent practical experience.
- 15+ years of experience in security engineering, including at least 7 years in a leadership role.
- Proven track record of building and leading high-performing security teams.
- Deep understanding of security principles, technologies, and best practices across multiple domains, including cloud security, network security, application security, and data security.
- Experience with threat modeling, vulnerability management, incident response, and security automation.
- Strong understanding of security risks and challenges associated with large-scale, complex systems and datasets.
- Excellent communication, interpersonal, and leadership skills.
In addition, the following would be an advantage:
- Master's or PhD degree in Computer Science, Information Security, or a related technical field.
- Experience in the research and development industry, particularly in AI or machine learning.
- Familiarity with relevant regulations and industry standards (e.g., GDPR, ISO 27001, NIST Cybersecurity Framework).
- Experience working with Google security teams.
- Demonstrated contributions to the security community (e.g., publications, presentations, open-source projects).
- Experience with securing AI/ML systems and addressing AI-specific security risks (e.g., adversarial attacks, data poisoning, model inversion).
The US base salary range for this full-time position is between $280,000 - $415,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.