Microsoft has been integrating Copilot AI assistants across its product line as part of its $10 billion investment in OpenAI. The latest is Microsoft Security Copilot, which aids security teams in investigating and responding to security incidents.
According to Chang Kawaguchi, vice president and AI Security Architect at Microsoft, defenders are having a difficult time coping with a dynamic security environment. Microsoft Security Copilot is designed to make defenders' lives easier by using artificial intelligence to help them catch incidents they might otherwise miss, improve the quality of threat detection, and speed up response. To locate breaches, connect threat signals, and analyze data, Security Copilot uses both OpenAI's GPT-4 generative AI model and Microsoft's own security-specific model.
The objective of Security Copilot is to make “Defenders’ lives better, make them more efficient, and make them more effective by bringing AI to this problem,” Kawaguchi says.
How Does Security Copilot Work?
Security Copilot ingests and interprets huge amounts of security data, such as the 65 trillion security signals Microsoft pulls in every day and the data collected by the Microsoft products an organization already uses, including Microsoft Sentinel, Defender, Entra, Priva, Purview, and Intune. Analysts can use it to investigate incidents and research information on common vulnerabilities and exposures.
When analysts and incident responders type "/ask about" into a text prompt, Security Copilot responds with information based on what it knows about the organization's data.
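Microsoft has not published Security Copilot's internals, but the general pattern of grounding a natural-language security question in organizational telemetry before handing it to a GPT-4-class model can be sketched roughly as follows. This is a minimal illustration using the openai Python package (pre-1.0 interface); the ask_about helper, the system prompt, and the idea of passing signals as plain text are assumptions for the example, not the product's actual architecture.

```python
import openai  # pip install "openai<1.0"; requires openai.api_key to be set

def ask_about(question: str, org_signals: str) -> str:
    """Answer a natural-language security question, grounded in the organization's own data.

    `org_signals` stands in for whatever telemetry (Sentinel incidents, Defender
    alerts, audit logs) would be retrieved for the question; here it is plain text.
    """
    messages = [
        {"role": "system",
         "content": "You are a security assistant. Answer only from the provided telemetry "
                    "and say so when the data is insufficient."},
        {"role": "user",
         "content": f"Telemetry:\n{org_signals}\n\nQuestion: {question}"},
    ]
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages, temperature=0)
    return response.choices[0].message.content

# Example: the kind of prompt an analyst might type after "/ask about"
# print(ask_about("Which hosts show signs of suspicious PowerShell activity?",
#                 "10:12 host WKS-042: encoded PowerShell spawned by outlook.exe"))
```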
According to Kawaguchi, this helps security teams connect the dots between the various elements of a security incident, such as a suspicious email, a malicious software file, or the system components that have been compromised. The queries can be general, such as an explanation of a vulnerability, or specific to the organization's environment, such as searching the logs for signs that a particular Exchange flaw has been exploited. And because Security Copilot uses GPT-4, it can respond to natural-language questions.
The analyst can review brief summaries of what happened and then follow Security Copilot's prompts to dig deeper into the investigation. These steps can all be recorded and shared with other security team members, stakeholders, and senior executives using a "pinboard." Completed tasks are saved and remain available for later access, and an automatically generated summary is updated as new activities are finished.
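The pinboard is a product feature rather than a public API, but its role, a shared record of investigation steps with a summary that updates as work completes, can be modeled with a simple data structure. The Pinboard and PinboardEntry classes below are hypothetical and only illustrate the workflow described above.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class PinboardEntry:
    """A single recorded investigation step (a query, finding, or response action)."""
    author: str
    description: str
    timestamp: datetime = field(default_factory=datetime.utcnow)

@dataclass
class Pinboard:
    """Collects investigation steps and keeps a running summary to share with the team."""
    incident_id: str
    entries: List[PinboardEntry] = field(default_factory=list)

    def pin(self, author: str, description: str) -> None:
        # Record a completed task so teammates, stakeholders, and executives can review it.
        self.entries.append(PinboardEntry(author, description))

    def summary(self) -> str:
        # Regenerated each time it is read, so it stays current as new work is finished.
        lines = [f"Incident {self.incident_id}: {len(self.entries)} recorded steps"]
        lines += [f"- {e.timestamp:%Y-%m-%d %H:%M} {e.author}: {e.description}" for e in self.entries]
        return "\n".join(lines)
```

Exporting such a record to a slide deck, as Kawaguchi describes, would then be a rendering step over the same saved entries.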
“This is what makes this experience more of a notebook than a chat bot experience,” says Kawaguchi, adding that the tool can also create PowerPoint presentations based on the security team's investigation, which can then be used to share details of the incident afterward.
The company claims that Security Copilot is not designed to replace human analysts, but rather to give them the information they need to work quickly and efficiently throughout an investigation. Threat hunters can also use the tool to check whether an organization is susceptible to known vulnerabilities and exploits by examining each asset in the environment.
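As a rough sketch of that kind of check: the asset inventory, the placeholder CVE entry, and the helpers below are invented for illustration and do not reflect Security Copilot's actual data model, which would draw on sources such as Defender Vulnerability Management.

```python
# Hypothetical inventory and vulnerability feed, hard-coded for illustration only.
ASSETS = [
    {"host": "mail01", "software": {"Exchange Server": "15.2.1118.7"}},
    {"host": "web01", "software": {"nginx": "1.24.0"}},
]
KNOWN_VULNERABILITIES = [
    # Placeholder CVE identifier; a real feed would carry actual CVE records.
    {"cve": "CVE-2023-0000", "software": "Exchange Server", "fixed_in": "15.2.1258.12"},
]

def version_tuple(version: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of integers."""
    return tuple(int(part) for part in version.split("."))

def vulnerable_assets(assets, vulnerabilities):
    """Return (host, cve) pairs where the installed build predates the fixed build."""
    findings = []
    for asset in assets:
        for vuln in vulnerabilities:
            installed = asset["software"].get(vuln["software"])
            if installed and version_tuple(installed) < version_tuple(vuln["fixed_in"]):
                findings.append((asset["host"], vuln["cve"]))
    return findings

print(vulnerable_assets(ASSETS, KNOWN_VULNERABILITIES))  # [('mail01', 'CVE-2023-0000')]
```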