Microsoft, OpenAI partnership provides cybersecurity’s generative AI moment

Microsoft Corp. has captured the attention of the cybersecurity and AI communities with the introduction of Microsoft Security Copilot, an offering that pairs partner OpenAI's GPT-4 large language model with a security-specific Microsoft model. Security Copilot is integrated with the company's security products portfolio and leverages Microsoft's vast threat intelligence and hyperscaler resources.

The Take

Nearly two-thirds of respondents (64%) to our Information Security, Budgets & Outlook 2023 study say that responsive security measures are “very important,” but many teams are overwhelmed with data, and staffing remains a challenge. Large-model AI — large language models in particular, along with a growing range of multimodal models — could be a decided asset in tackling these challenges. With its OpenAI relationship and the expansive footprint of its security products, services and initiatives, Microsoft has made its bid for center stage.

This is part of the company’s broader generative AI strategy (many of its new offerings are also branded “Copilot”) for harnessing large language models to overhaul how technology is applied to problems. Microsoft may have seized the moment, but its bet on the synergies between large-model AI and cybersecurity will not be lost on the company’s security competitors, which include some of the largest vendors not just in cybersecurity but also among those that see AI as the future.

Context

The cybersecurity industry has pushed for greater integration of automation, machine learning and AI into security operations (SecOps) — to the point where the concept of the “autonomous security operations center” has gained substantial visibility. Among the reasons:

  • Security teams are overwhelmed with data. According to figures quoted by Microsoft, security teams take in data from over 100 different sources on average. Microsoft says it analyzes 65 trillion signals a day. Yet the correlation of this data is also vital to recognizing a threat, and adversaries are highly motivated to keep signals obscured.
  • Organizations are also hampered by the challenge of sourcing and retaining the security expertise needed to manage security proactively, and to analyze and react to all this data. Staffing remains a persistent problem: in our Information Security, Organizational Behavior 2022 survey, 70% of respondents reported some level of staffing inadequacy, continuing a long-running trend.

When malicious activity is detected, an organization must respond. People who know what they are seeing and how to mitigate a threat are typically the first line of defense, but the level of scale and detail required to respond effectively can also be overwhelming. If the response is not timely or effective, an incident may follow.

This is a large part of the push toward SOC automation, but people will likely remain critical to SecOps regardless, for two main reasons. First, cybersecurity is a human endeavor that requires the ability to think like the adversary and to anticipate the moves of attackers highly motivated to overcome defenses. Second, even with automation in play, leaving it unsupervised may have unexpected consequences.

Enter emerging AI

Even with advances in artificial intelligence, it is no secret that machines can often get it wrong. AI-enabled automation may be good at a number of tasks, but those tasks are often well-defined and narrowly scoped; beyond those constraints, outcomes may be less predictable. People must be able to monitor, control and optimize automation.

These factors have converged on what has already become a watershed moment for generative AI across technology in general, particularly given its emphasis on human communication. ChatGPT and similar initiatives are not just interactive; they learn from their human interactions. Even when they err, they improve through the prompts and feedback of the people for whom they perform tasks such as information analysis and code generation, and they are improving quickly. The capability gains between OpenAI’s GPT-4 and its previous iterations, delivered in only a few months, demonstrate how rapidly the technology is advancing. It should therefore be no surprise that generative AI has found a place in security operations.

Until now, much of the emphasis in applying AI to cybersecurity has been on areas such as threat recognition. More recently, we have seen the advent of virtual assistants for security analysts, able to identify resources useful in gathering the context of events or in helping determine a course of action. These initiatives — represented by offerings such as StrikeReady’s CARA (Cyber Awareness and Response Analyst), MixMode’s AI-based analytics and Expel’s bots — help analysts navigate the high and diverse volume of inputs and actions required for effective threat detection and response. They have been bellwethers of the moment at hand.

The elephant enters the room

The introduction of Microsoft Security Copilot is likely to be disruptive to security technology more broadly, and not only because of the company’s substantial market presence. In 2021, the company said its security business generated $10 billion in revenue over the prior 12 months — more than double that of its closest competitors in cybersecurity technology at the time. That figure had grown to more than $20 billion as of early 2023. This is the business to which the company now brings its well-known relationship with OpenAI. Microsoft is bringing generative AI into a number of its offerings, with Copilot the branding for many. Given the opportunity in security, the company’s security portfolio was a natural destination as well.

Microsoft Security Copilot

Microsoft Security Copilot pairs OpenAI’s GPT-4 large language model with a Microsoft security-specific model that incorporates what Microsoft describes as a growing set of security skills, informed by its global threat intelligence and vast signal volume. Security Copilot integrates with the Microsoft Security products portfolio, which means it offers the most value to organizations with significant investments in Microsoft security products, but the company notes that it will be extended to third-party products.
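
Microsoft has not published Security Copilot’s internals, but one generic pattern for pairing a general-purpose LLM with domain-specific knowledge is to ground each prompt in security context before the general model answers. The sketch below illustrates only that generic pattern; every function in it is a stub of our own invention, not Microsoft’s design.

```python
# Conceptual sketch only: one generic pattern for pairing a
# general-purpose LLM with a security-specific layer. Microsoft has not
# published Security Copilot's internals; every function here is a stub.

def security_grounding(prompt: str) -> str:
    """Hypothetical security-specific step: attach threat intelligence
    and organizational telemetry relevant to the analyst's question."""
    intel = "Example IOC 203.0.113.7: known phishing infrastructure."
    return f"Context: {intel}\n\nQuestion: {prompt}"

def general_llm(grounded_prompt: str) -> str:
    """Stub standing in for a large language model such as GPT-4."""
    return f"(model answer, based on)\n{grounded_prompt}"

def security_copilot(prompt: str) -> str:
    # Security-specific grounding first, then the general model answers.
    return general_llm(security_grounding(prompt))

print(security_copilot("Have we seen traffic to 203.0.113.7 today?"))
```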

Users can give Security Copilot a prompt, to which it responds in a manner that will be familiar to anyone who has explored ChatGPT and similar functionality. While Security Copilot calls on its existing security skills to respond, it also learns new skills through the learning system with which the security-specific model has been equipped. Users can save prompts into a “Promptbook,” a set of steps or automations the user has developed. This creates a body of knowledge and automated functionality that both the organization and Security Copilot can draw on over time.
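
To make the concept concrete, the sketch below models a promptbook as a saved, replayable sequence of prompts. The Promptbook class, its run() helper and the sample prompts are hypothetical illustrations of the idea, not Security Copilot’s actual interface.

```python
# Hypothetical sketch of the "promptbook" concept: a saved, replayable
# sequence of prompts. The Promptbook class and its run() helper are
# illustrative inventions, not Security Copilot's actual interface.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Promptbook:
    """A named series of prompt steps an analyst has found useful."""
    name: str
    steps: List[str] = field(default_factory=list)

    def run(self, ask: Callable[[str], str]) -> List[str]:
        """Replay each saved prompt through a model callable."""
        return [ask(step) for step in self.steps]


# An analyst saves a triage routine once, then replays it per incident.
triage = Promptbook(
    name="suspicious-signin-triage",
    steps=[
        "List sign-ins for the affected account over the last 24 hours.",
        "Which of these came from unfamiliar locations or devices?",
        "Summarize the likely blast radius and recommended next steps.",
    ],
)

for answer in triage.run(ask=lambda p: f"(model response to: {p})"):
    print(answer)
```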

The impact

Part of the reason this introduction is likely to be so resonant and disruptive is the human aspect that remains, and will remain, vital to security operations. Generative AI produces output specifically intended to be presented to people, in human-readable or human-usable form. The ability of large language models to comb through vast amounts of information and present it conversationally addresses one of the primary use cases of automation in SecOps: gathering the context of incidents and events to help analysts triage and escalate those that pose a significant threat.
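
As a concrete illustration of that use case, the sketch below hands a batch of raw alerts to a general-purpose LLM and asks for a conversational triage summary. It uses OpenAI’s Python SDK as a stand-in, since Security Copilot’s programmatic interface is not public; the alert fields and prompt wording are invented for the example.

```python
# Illustrative use of a general-purpose LLM for alert triage. The alert
# fields and prompt wording are invented; this is not Security Copilot's
# interface. Requires the openai package and an OPENAI_API_KEY.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Invented sample telemetry an analyst might want summarized.
alerts = [
    {"source": "identity", "event": "impossible travel", "user": "jdoe",
     "locations": ["Seattle", "Kyiv"], "minutes_apart": 14},
    {"source": "endpoint", "event": "new persistence mechanism",
     "host": "FIN-LAPTOP-07", "detail": "unsigned scheduled task"},
]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a SOC assistant. Summarize the alerts below "
                    "in plain language, rank them by urgency and suggest "
                    "a next step for each."},
        {"role": "user", "content": json.dumps(alerts)},
    ],
)

print(response.choices[0].message.content)
```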

Generative AI can produce other content as well, such as the reverse engineering of an exploit. One of the examples Microsoft gave was a Security Copilot-generated visualization of an exploit’s sequence, showing how it moved through an incident, as well as the individual accounts, resources and components of the environment affected. The accompanying discussion produced by Security Copilot elaborated on its findings in a way that is readable by a wide variety of people, not just technical security personnel. It is not a stretch to envision such capability going the next step: deploying functionality in production to achieve an operational objective in response to such findings. It is well known that generative AI can produce code.

To project much further would be speculative at best, but more than a few observers are anticipating where these developments could lead. Even so, the constraints described above that will keep people involved in cybersecurity operations — the need to think about adversarial and defensive tactics in ways that only people can, and the need to interact with AI and automation — will likely continue to shape the adoption of this technology for the foreseeable future. The more realistic near-term hope is that it can reduce demands on human expertise and availability, and ease security operations for the personnel required.

Safeguarding innovation

These developments are not without other concerns. Aware of this, Microsoft has emphasized the steps it is taking to deliver security AI “in a safe, secure and responsible way”: user data remains the user’s to own and control; it will not be used to train or enrich foundational AI models used by others; and users’ data and AI models are protected by compliance and security controls. We expect the company to disclose further details on these controls as its AI offerings come to market.

Pacing the industry

Time will tell whether Microsoft’s introduction of generative AI into the security toolset becomes transformative for the industry. At a minimum, the OpenAI partnership, along with Microsoft’s other AI investments, is bound to attract attention in the near term.

Competitively, the immediate impact will be felt by those with a stake in generative/large-model AI and security, particularly among other hyperscalers. Google, fresh from its $5.4 billion acquisition of Mandiant in 2022, had already answered Microsoft’s GPT challenge with its introduction of Bard. Amazon.com Inc., while not competing directly in SecOps much beyond its own estate so far, introduced Amazon Security Lake at re:Invent 2022, but has yet to elaborate significantly on its plans. Amazon’s relationships with AI companies such as Hugging Face should be watched for moves that could find their way into security.

More directly affected will be a host of contenders across a wide variety of SecOps technology, including security information and event management, extended detection and response and its contributing technologies, and security automation. Many partner with cloud providers to deliver their offerings, but not all have shown a high level of commitment to integrating interactive AI into their offerings, which seems likely to change. The greater integration of large-model AI into cybersecurity was already poised to be a prominent theme at the upcoming RSA Conference in San Francisco. The buzz will certainly not end there.
