Principal AI Safety Practice Evangelist

Redmond, Washington, United States

Microsoft


Artificial Intelligence has the potential to change the world around us, but we must act ethically along the way. At Microsoft, we are committed to the advancement of AI driven by ethical principles. We are looking for a Principal AI Safety Practice Evangelist to join us in understanding and addressing the safety and security risks that AI systems raise. Are you interested in safety, security, and technology in society? This may be a great opportunity for you!

 

Who we are:

We are the Artificial Generative Intelligence Security (AeGIS) team, and we are charged with ensuring justified confidence in the safety of Microsoft’s generative AI products. This encompasses providing an infrastructure for AI safety; serving as a coordination point for all things AI incident response; researching the quickly evolving threat landscape; red teaming AI systems for failures; and empowering Microsoft with this knowledge. We partner closely with product engineering teams to mitigate and address the full range of threats that face AI services – from traditional security risks, to novel security threats like indirect prompt injection, to entirely AI-native threats like the manufacture of NCII or CSAM or the use of AI to run automated scams. We are a mission-driven team intent on delivering trustworthy AI, with response processes in place for when it does not live up to those standards.

 

We are always learning. Insatiably curious. We lean into uncertainty, take risks, and learn quickly from our mistakes. We build on each other’s ideas, because we are better together. We are motivated every day to empower others to do and achieve more through our technology and innovation. Together we make a difference for all of our customers, from end users to Fortune 50 enterprises.

 

Our team has people from a wide variety of backgrounds, previous work histories, and life experiences, and we are eager to maintain and grow that diversity. Our diversity of backgrounds and experiences enables us to create innovative solutions for our customers. Our culture is collaborative and customer focused.

 

What we do:

While some aspects of safety can be formalized in software or process, many require judgment and experience – things like threat modeling, identifying the right places and ways to mitigate risks, and building response strategies. In the world of AI safety, this requires an awareness and understanding of threats and risks far beyond those of traditional security; you don’t just need to worry about an access control failure, you also need to worry about the user of your system having an abusive partner who’s spying on them.

 

The Empowering Microsoft team within AeGIS is charged with continually distilling our understanding of AI safety into training, documentation, methodologies, and tools that empower the people designing, building, testing, and using systems to do so safely. While the team’s top priority is to train Microsoft’s own teams, we are looking beyond that to provide these resources to the world at large. For us, AI safety is not about compliance; it is about trust.

 

How you can help:

We are searching for a person who can identify patterns of AI safety risk, as well as best practices, from a broad spectrum of technical and other sources and distill them down to their essential pieces. This person will also transform those essential pieces into content that can be communicated to a range of partner teams and audiences, so that they understand what needs to be addressed (for patterns that require mitigation) or how to incorporate it into their work (for best practices). A particular challenge is that these practices involve thinking in ways people aren’t familiar with (thinking like adversaries, thinking about how systems will fail) and about issues people are unfamiliar or even uncomfortable with (the abusive-partner example above is not hypothetical), and we nonetheless need to equip a few hundred thousand people with a deep understanding of these topics.

 

Security represents one of the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.

 

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities

  • As part of the AI Safety Threat Understanding team, you will be a key contributor helping to identify patterns of risk from a diverse set of signals and helping partner teams develop strategies for addressing those patterns in a systematic way.
  • Work with our training and education teams to create content across a variety of media that presents these patterns in practical, human- and societal-centered ways. Our audience for this content is wide-ranging and includes engineering, UX design, program management, and business leaders.
  • You will be responsible for managing some of the partner team relationships, building alignment with them, keeping them informed of progress, and ensuring that their perspective is represented.
  • Help define new policies and procedures (or changes to existing ones) that ensure that customers can have justified trust in Microsoft’s AI services.
  • You will have the opportunity to contribute to and shape the way AI safety is embedded in day-to-day engineering at Microsoft.


Qualifications

Required/Minimum Qualifications

  • Bachelor's Degree AND 6+ years experience in engineering, product/technical program management, data analysis, or product development
    • OR equivalent experience.
  • 3+ years experience managing cross-functional and/or cross-team projects.

Other Requirements 

  • Ability to meet Microsoft, customer and/or government security screening requirements are required for this role. These requirements include, but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.  

Additional or Preferred Qualifications

  • Bachelor's Degree AND 10+ years experience in engineering, product/technical program management, data analysis, or product development
    • OR equivalent experience.
  • 8+ years experience managing cross-functional and/or cross-team projects.
  • 3+ years experience in a socio-technical safety space (e.g., online safety, privacy)
  • 3+ years experience in writing, content design, information technology (IT), or coding/programming
  • 1+ year(s) experience reading and/or writing code (e.g., sample documentation, product demos)
  • An understanding of Microsoft organizations, technologies, and products, especially as they relate to security, will ensure a quick start.

Technical Program Management IC5 - The typical base pay range for this role across the U.S. is USD $137,600 - $267,000 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $180,400 - $294,000 per year.

 

Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay

   

Microsoft will accept applications for the role until January 25, 2025. 

 

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances.  We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request via the Accommodation request form.

 

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.

 

#MSFTSecurity #MSECAIR #AI #Safety #Security #AeGIS
