
Invest in Future-Proofing Your Cybersecurity AI Plan


With the ongoing barrage of new attacks and emerging dangers, one might argue that every day is an exciting day in the security operations centre (SOC). However, today's SOC teams are experiencing one of the most compelling and transformative changes in how we detect and respond to cybersecurity threats. Innovative security organisations are attempting to modernise SOCs with extended detection and response (XDR) platforms that incorporate the most recent developments in artificial intelligence (AI) into the defensive effort. 

XDR systems combine security telemetry from several domains, such as identities, endpoints, software-as-a-service apps, email, and cloud workloads, to provide detection and response features in a single platform. As a result, security teams using XDR have greater visibility across the organisation than ever before. But that is only half the story. The combination of this unprecedented visibility and an AI-powered SOC assistant can allow security teams to operate at the pace required to turn the tables on potential attackers. 

Because the industry is evolving rapidly, innovative security organisations need a strategic implementation plan that looks ahead: one that effectively leverages today's AI capabilities while laying the foundation for tomorrow's breakthroughs. 

XDR breadth matters 

Unlike traditional automated detection and blocking solutions, which frequently rely on a single indicator of compromise, XDR platforms employ AI to correlate cross-domain security signals, analysing the full attack chain and identifying threats with high confidence. This greater fidelity improves the signal-to-noise ratio, resulting in fewer false positives to investigate and triage manually. Notably, the larger the dataset the AI operates on, the more effective it will be; XDR's inherent breadth is therefore critical. 
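To make the correlation idea concrete, here is a minimal sketch of how cross-domain signals might be grouped and scored before raising an incident. The alert fields, weights, and threshold are invented for illustration and do not reflect any real XDR schema.

```python
from collections import defaultdict

# Hypothetical cross-domain alerts; field names and weights are illustrative.
alerts = [
    {"domain": "identity", "entity": "user-17", "signal": "impossible_travel", "weight": 0.4},
    {"domain": "endpoint", "entity": "user-17", "signal": "suspicious_powershell", "weight": 0.5},
    {"domain": "email",    "entity": "user-17", "signal": "phishing_click", "weight": 0.3},
    {"domain": "endpoint", "entity": "user-42", "signal": "new_service_install", "weight": 0.2},
]

def correlate(alerts, threshold=0.8):
    """Group alerts by the entity they involve; raise an incident only when
    signals from more than one domain together exceed the confidence threshold."""
    by_entity = defaultdict(list)
    for alert in alerts:
        by_entity[alert["entity"]].append(alert)
    incidents = []
    for entity, group in by_entity.items():
        domains = {a["domain"] for a in group}
        score = round(sum(a["weight"] for a in group), 2)
        if len(domains) > 1 and score >= threshold:
            incidents.append({"entity": entity, "domains": sorted(domains), "score": score})
    return incidents

print(correlate(alerts))
# → [{'entity': 'user-17', 'domains': ['email', 'endpoint', 'identity'], 'score': 1.2}]
```

The single-domain, low-weight alert for user-42 never becomes an incident on its own, which is the point: correlation across domains is what lifts weak individual signals into a high-confidence detection.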

An effective XDR strategy should identify and account for high-risk areas, cybersecurity maturity, existing architecture and technologies, and budgetary limits, among other factors. While adoption should be gradual to minimise operational impact, organisations should also consider how to achieve the broadest XDR coverage possible in order to make the most of AI's capabilities. 

Create AI-confident teams

The purpose of AI is not to replace the humans in your SOC but to enable them. If your team lacks faith in the tools they use, they will be unable to realise the platform's full potential. As noted above, minimising false positives helps build user trust over time, but it is also critical to provide operational transparency so that everyone understands where data is coming from and what actions have been taken. 

XDR platforms must give SOC teams complete control over investigating threats, remediating them, and bringing assets back online when required. Tightly integrating threat detection and automatic attack disruption into existing workflows will speed up triage and provide a clear view of threats and remediation operations across the infrastructure. 
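As an illustration of analyst-controlled attack disruption, the hypothetical sketch below auto-contains an asset only for high-confidence incidents, records every action for operational transparency, and leaves restoration in the analyst's hands. The Asset class and its methods are invented for demonstration, not a real platform API.

```python
class Asset:
    """A managed asset with an audit trail for every containment action."""
    def __init__(self, name):
        self.name = name
        self.isolated = False
        self.audit_log = []  # operational transparency: record what was done and why

    def isolate(self, reason):
        self.isolated = True
        self.audit_log.append(f"isolated: {reason}")

    def restore(self, analyst):
        # Analysts retain full control to bring assets back online.
        self.isolated = False
        self.audit_log.append(f"restored by {analyst}")

def disrupt(asset, incident_score, threshold=0.9):
    """Contain the asset automatically only for high-confidence incidents."""
    if incident_score >= threshold:
        asset.isolate(f"incident score {incident_score}")
    return asset.isolated

host = Asset("workstation-7")
disrupt(host, incident_score=0.95)  # auto-contained: score clears the threshold
host.restore("analyst-jane")        # analyst brings the asset back online
print(host.audit_log)
# → ['isolated: incident score 0.95', 'restored by analyst-jane']
```

The high threshold is the design choice that matters here: automatic disruption should only trigger when correlated evidence is strong, so that analysts trust the automation rather than fight it.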

Stay vigilant 

The indicators of attack and compromise are continually evolving. An effective, long-term XDR plan will meet the ongoing requirement for rapid analysis and continuous vetting of the latest threat intelligence. Implementation roadmaps should address how to incorporate timely threat intelligence and allow the flexibility to grow or augment teams when complex incidents demand additional expertise or support. 

As more organisations look to invest in XDR and AI to improve their security operations, a careful, future-focused approach to deployment will let them make better use of today's AI capabilities while being prepared for tomorrow's breakthroughs. After all, successful organisations will not rely solely on artificial intelligence to stay ahead of attackers; they will plan their AI investments to keep them relevant.

How ChatGPT May Act as a Copilot for Security Experts


Since GPT-4 was released this week, security teams have been left to speculate about how generative AI will affect the threat landscape. Although it is now widely known that GPT-3 can be used to create malware and ransomware code, GPT-4 is reportedly 571 times more powerful, which could result in a large increase in threats. 

While the long-term effects of generative AI are still unknown, a new study presented today by cybersecurity company Sophos shows that security teams can also use GPT-3 to thwart cyberattacks. 

Younghoo Lee, principal data scientist at Sophos AI, and other Sophos researchers used GPT-3's large language models to build a natural-language query interface for hunting malicious activity across the telemetry of their XDR security tool, detecting spam emails, and analysing potentially covert "living off the land" binary command lines. 
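One common way to build such prototypes is few-shot prompting, where a handful of labelled examples precede the item the model should classify. The sketch below assembles such a prompt for the spam-detection use case; the example emails and labels are invented, and a real pipeline would send the resulting prompt to a GPT-3 completion endpoint rather than print it.

```python
# Invented few-shot examples for illustration; not from the Sophos study.
FEW_SHOT_EXAMPLES = [
    ("Your invoice #4821 is attached, payment due Friday.", "ham"),
    ("CLAIM your FREE gift card now!!! Click here", "spam"),
    ("Reset your password immediately or lose account access", "spam"),
]

def build_spam_prompt(email_text):
    """Assemble a few-shot classification prompt; an LLM completion
    would supply the final label after the trailing 'Label:'."""
    lines = ["Classify each email as 'spam' or 'ham'.\n"]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Email: {text}\nLabel: {label}\n")
    lines.append(f"Email: {email_text}\nLabel:")
    return "\n".join(lines)

print(build_spam_prompt("You have WON a lottery, send your bank details"))
```

Because the prompt ends mid-pattern at "Label:", the model's most likely continuation is simply the label itself, which makes the output easy to parse.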

In general, Sophos' research suggests that generative AI has a crucial role to play in processing security events in the SOC, allowing defenders to better manage their workloads and identify threats more quickly. 

Detecting malicious activity 

The findings come as security teams increasingly struggle to handle the volume of alerts generated by tools across the network, with 70% of SOC teams reporting that managing IT threat alerts emotionally affects their personal lives. 

According to Sean Gallagher, senior threat researcher at Sophos, one of the growing issues within security operations centres is the sheer amount of 'noise' streaming in. “Many businesses are dealing with scarce resources, and there are just too many notifications and detections to look through. Using tools like GPT-3, we've demonstrated that it's possible to streamline some labour-intensive processes and give defenders back vital time,” he said. 

Utilising ChatGPT as a cybersecurity co-pilot 

In the study, the researchers built a natural-language query interface with which a security analyst can screen the data gathered by security tools for malicious activity by typing queries in plain English. 

For instance, a user can type a command such as "show me all processes that were named powershell.exe and run by the root user", and the interface generates the corresponding XDR-SQL query without the analyst needing to know the underlying database structure. 

This method lets defenders filter data without using query languages like SQL and offers a "co-pilot" to ease the effort of manually hunting through threat data.
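A plausible way to implement such an interface is few-shot prompting: pair a handful of English questions with their SQL translations and let the model complete the final query. The table and column names below are invented, not the actual Sophos XDR schema, and in practice the prompt would be sent to an LLM rather than printed.

```python
# Invented question/SQL pairs for illustration only.
FEW_SHOT = [
    ("show me all processes that were named powershell.exe and run by the root user",
     "SELECT * FROM processes WHERE name = 'powershell.exe' AND user = 'root';"),
    ("list network connections to port 4444",
     "SELECT * FROM connections WHERE dest_port = 4444;"),
]

def build_query_prompt(question):
    """Assemble a few-shot translation prompt; an LLM completion
    would supply the SQL after the trailing 'SQL:'."""
    parts = ["Translate the analyst's question into a SQL query.\n"]
    for q, sql in FEW_SHOT:
        parts.append(f"Question: {q}\nSQL: {sql}\n")
    parts.append(f"Question: {question}\nSQL:")
    return "\n".join(parts)

print(build_query_prompt("find files written by winword.exe in the last hour"))
```

The analyst only ever sees the plain-English side of this exchange; the generated SQL runs against the telemetry store, and the few-shot examples are what teach the model the schema's table and column names.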

“We are already working on incorporating some of the prototypes into our products, and we’ve made the results of our efforts available on our GitHub for those interested in testing GPT-3 in their own analysis environments,” Gallagher stated. “In the future, we believe that GPT-3 may very well become a standard co-pilot for security experts.” 

Notably, the researchers also found GPT-3 to be significantly more effective at filtering threat data than alternative machine learning models, and given GPT-4's availability and greater processing capabilities, the next generation of generative AI would likely do this even faster. Although these pilots are still in their early stages, Sophos has published the findings of the spam-filtering and command-line-analysis experiments on the SophosAI GitHub for other businesses to adapt.