Principal Security Researcher
Microsoft
Multiple Locations, United States
Overview
We are seeking a Principal Security Researcher to join our Autonomous Defense and Protection Team (ADAPT). Security is one of the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.
The Microsoft Security AI (Artificial Intelligence) Research team is responsible for defending Microsoft and our customers through applied AI innovation. Defending Microsoft’s complex environment provides a unique opportunity to build and evaluate autonomous defense through emerging generative AI capabilities. Microsoft understands and learns from its own defensive expertise, including via teams like the Microsoft Threat Intelligence Center (MSTIC), and has the opportunity to build a unique knowledge graph describing the relationships between risk, investigation, and response. This data, built over Microsoft’s complex digital estate, along with Microsoft AI, forms the foundation for innovative solutions to defend Microsoft.
In this role, you will leverage LLMs and agentic systems to uncover and operationalize new adversarial techniques while addressing gaps in red team capabilities and supporting autonomous red team campaigns. You will use various approaches, including the analysis of structured and unstructured data such as code repositories and operational data, to identify new capabilities, assess design flaws and vulnerabilities, and expand our TTP catalog. You will also prototype and validate exploitation methods while helping design LLM-based workflows for TTP discovery and operationalization, advancing offensive data analysis and strengthening red team strategies.
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Qualifications
Required/Minimum Qualifications
- 7+ years of experience in software development lifecycle, large-scale computing, modeling, cybersecurity, and/or anomaly detection
- OR Doctorate in Statistics, Mathematics, Computer Science, or a related field.
- 7+ years of experience as a red team operator, with hands-on engineering expertise in developing custom tooling for adversary simulations and executing red team campaigns.
- 5+ years of experience as a reverse engineer and vulnerability researcher.
- Exploit development experience is a plus, including analyzing artifacts collected during campaigns, researching scoped targets, debugging binaries, and identifying exploitable flaws.
Other Requirements
- Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screening: Microsoft Cloud Background Check. This position will be required to pass the Microsoft Cloud Background Check upon hire/transfer and every two years thereafter.
Additional or Preferred Qualifications
- 8+ years of experience in software development lifecycle, large-scale computing, modeling, cybersecurity, and/or anomaly detection
- OR Doctorate in Statistics, Mathematics, Computer Science, or a related field.
- Hands-on experience with Azure, AWS, and GCP environments.
- Proficiency in coding with languages such as C++, C#, Python, Go, and PowerShell.
- Verbal and written communication skills with the ability to convey complex security concepts effectively.
- Project management skills with a proven track record of driving projects to completion.
Security Research IC5 - The typical base pay range for this role across the U.S. is USD $137,600 - $267,000 per year. A different range applies to specific work locations within the San Francisco Bay Area and New York City metropolitan area; the base pay range for this role in those locations is USD $180,400 - $294,000 per year.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay
Microsoft will accept applications for the role until January 12th, 2025.
#MSFTSecurity #MSECAIR #Cybersecurity #SecurityResearch #LLM #AI #AIAgents #GenAI #MSecADAPT #RedTeam
Responsibilities
- Collaborate with the MS Red Team and Applied Research Scientists to leverage AI-driven techniques, including LLMs and agentic systems, to identify and operationalize new adversarial techniques, address capability gaps in red team engagements, expand the internal TTP catalog, and support autonomous red team campaigns.
- Design and implement data pipelines to analyze structured and unstructured data, create workflows for the identification and operationalization of adversarial techniques, and enrich operational data to generate actionable insights.
- Prototype, validate, and refine exploitation methods, using LLMs and agentic systems to discover vulnerabilities, develop exploits, and enhance workflows that automate and strengthen red team operations.
- Design and execute realistic attack simulations to test and improve methodologies, advancing offensive data analysis and supporting autonomous red team strategies.
- Cultivate global collaborations with security researchers to exchange knowledge, stay updated on emerging threats, and build partnerships with offensive security researchers to advance and integrate cutting-edge offensive security research shared within the community.
- Share knowledge with the community through presentations, publications, and active participation in social media, contributing to the broader information security ecosystem.