
Ransomware Gangs Actively Recruiting Pen Testers: Insights from Cato Networks' Q3 2024 Report

 

Cybercriminals are increasingly targeting penetration testers to join ransomware affiliate programs such as Apos, Lynx, and Rabbit Hole, according to Cato Networks' Q3 2024 SASE Threat Report, published by its Cyber Threats Research Lab (CTRL).

The report highlights numerous Russian-language job advertisements uncovered through surveillance of discussions on the Russian Anonymous Marketplace (RAMP). Speaking at an event in Stuttgart, Germany, on November 12, Etay Maor, Chief Security Strategist at Cato Networks, explained: "Penetration testing is a term from the security side of things when we try to reach our own systems to see if there are any holes. Now, ransomware gangs are hiring people with the same level of expertise - not to secure systems, but to target systems."

He further noted, "There's a whole economy in the criminal underground just behind this area of ransomware."

The report details how ransomware operators aim to ensure the effectiveness of their attacks by recruiting skilled developers and testers. Maor emphasized the evolution of ransomware-as-a-service (RaaS), stating, "[Ransomware-as-a-service] is constantly evolving. I think they're going into much more details than before, especially in some of their recruitment."

Cato Networks' team discovered instances of ransomware tools being sold, such as locker source code priced at $45,000. Maor remarked: "The bar keeps going down in terms of how much it takes to be a criminal. In the past, cybercriminals may have needed to know how to program. Then in the early 2000s, you could buy viruses. Now you don't need to even buy them because [other cybercriminals] will do this for you."

AI's role in facilitating cybercrime was also noted as a factor lowering barriers to entry. The report flagged examples like a user under the name ‘eloncrypto’ offering a MAKOP ransomware builder, an offshoot of PHOBOS ransomware.

The report warns of the growing threat posed by Shadow AI—where organizations or employees use AI tools without proper governance. Of the AI applications monitored, Bodygram, Craiyon, Otter.ai, Writesonic, and Character.AI were among those flagged for security risks, primarily data privacy concerns.

Cato CTRL also identified critical gaps in Transport Layer Security (TLS) inspection. Only 45% of surveyed organizations utilized TLS inspection, and just 3% inspected all relevant sessions. This lapse allows attackers to leverage encrypted TLS traffic to evade detection.

In Q3 2024, Cato CTRL noted that 60% of CVE exploit attempts were blocked within TLS traffic. Prominent targets included vulnerabilities in Log4j, SolarWinds, and ConnectWise.
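For readers unfamiliar with what TLS inspection looks like from the endpoint's side, the short Python sketch below (the hostname is an illustrative placeholder, not drawn from the report) prints the issuer of the certificate presented on an outbound HTTPS connection. If the issuer is a corporate root CA rather than a public certificate authority, an inspecting proxy is almost certainly terminating and re-encrypting the session.

```python
import socket
import ssl

# Hypothetical test destination; any external HTTPS site works.
HOST, PORT = "example.com", 443

context = ssl.create_default_context()
with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()
        # 'issuer' is a tuple of relative distinguished names; flatten it for display.
        issuer = {k: v for rdn in cert["issuer"] for (k, v) in rdn}
        print(f"Certificate issuer for {HOST}: {issuer.get('organizationName', issuer)}")
```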

The report is based on the analysis of 1.46 trillion network flows across over 2,500 global customers between July and September 2024. It underscores the evolving tactics of ransomware gangs and the growing challenges organizations face in safeguarding their systems.

Shadow AI: The Novel, Unseen Threat to Your Company's Data

 

Earlier this year, ChatGPT emerged as the face of generative AI. ChatGPT was designed to help with almost everything, from creating business plans to breaking down complex topics into simple terms. Since then, businesses of all sizes have been eager to explore and reap the benefits of generative AI. 

However, as this new chapter of AI innovation moves at breakneck speed, CEOs and leaders risk overlooking a type of technology that has been infiltrating through the back door: shadow AI. 

Overlooking shadow AI is a risky option 

To put it simply, "shadow AI" refers to AI tools that employees add to their work systems, without management's awareness, to make life easier. Although most of the time this pursuit of efficiency is well-intentioned, it exposes businesses to new cybersecurity and data privacy risks.

Employees who want to increase productivity and process efficiency when navigating tedious tasks or laborious processes are usually the ones who embrace shadow AI. This could mean asking AI to summarise the main points of meeting minutes or to comb through hundreds of PowerPoint decks in search of critical data. 

Employees typically don't intentionally expose their company to risk. On the contrary. All they're doing is simplifying things so they can cross more things off their to-do list. However, given that over a million adults in the United Kingdom have already utilised generative AI at work, there is a chance that an increasing number of employees will use models that their employers have not approved for safe use, endangering data security in the process. 

Major risks 

Shadow AI carries two risks. First, employees may feed sensitive company information into such tools or leave sensitive company information open to be scraped while the technology continues to operate in the background. For example, when an employee uses ChatGPT or Google Bard to increase productivity or clarify information, they may be entering sensitive or confidential company information. 

Sharing data isn't always an issue—companies frequently rely on third-party tools and service providers for information—but problems can arise when the tool in question and its data-handling policies haven't been assessed and approved by the business. 

The second risk related to shadow AI is that, because businesses generally aren't aware that these tools are being used, they can't assess the risks or take appropriate action to minimise them. (This may also apply to employees who receive false information and subsequently use it in their work.) 

This is something that occurs behind closed doors and beyond the knowledge of business leaders. In 2022, 41% of employees created, modified, or accessed technology outside of IT's purview, according to research from Gartner. By 2027, the figure is expected to increase to 75%. 

And therein lies the crux of the issue. How can organisations monitor and assess the risks of something they don't understand? 

Some companies, such as Samsung, have gone so far as to ban ChatGPT from their offices after employees uploaded proprietary source code and leaked confidential company information via the public platform. Apple and JP Morgan have also restricted employee use of ChatGPT. Others are burying their heads in the sand or failing to notice the problem at all. 

What should business leaders do to mitigate the risks of shadow AI while also ensuring that they and their teams can benefit from the efficiencies and insights that artificial intelligence can offer? 

First, leaders should educate teams on what constitutes safe AI practice, as well as the risks associated with shadow AI, and provide clear guidance on when ChatGPT can and cannot be used safely at work. 
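As an illustration only (the patterns and function below are hypothetical, not part of the report or any vendor tool), such guidance can be backed by a lightweight pre-submission check that flags obviously sensitive content before a prompt reaches a public tool:

```python
import re

# Illustrative placeholder patterns, not an exhaustive data-loss-prevention policy.
SENSITIVE_PATTERNS = [
    r"(?i)\bconfidential\b",
    r"(?i)\binternal use only\b",
    r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b",  # AWS-style access key IDs
    r"\b\d{16}\b",                     # 16-digit numbers (e.g. card numbers)
]

def prompt_is_safe(prompt: str) -> bool:
    """Return False if the prompt matches any sensitive-data pattern."""
    return not any(re.search(p, prompt) for p in SENSITIVE_PATTERNS)

if __name__ == "__main__":
    draft = "Summarise this CONFIDENTIAL board paper for me..."
    print("OK to send" if prompt_is_safe(draft) else "Blocked: possible sensitive data")
```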

For tasks where public tools cannot be used safely, companies should consider offering private, in-house generative AI alternatives. Models such as Llama 2 and Falcon can be downloaded and run securely to power generative AI tools. Azure OpenAI provides a middle-ground option in which data remains within the company's Microsoft tenant. 

These options avoid the data and IP risks that come with public large language models like ChatGPT (whose handling of submitted data isn't yet fully understood) while still allowing employees to achieve the results they want.
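As a rough sketch of the in-house option described above (assuming the Hugging Face transformers library, sufficient local hardware, and the openly available Falcon-7B-Instruct checkpoint; an organisation's actual deployment will differ), a model can be downloaded once and then run entirely on the company's own infrastructure:

```python
from transformers import pipeline

# Download the open checkpoint once; after that, prompts never leave local infrastructure.
generator = pipeline(
    "text-generation",
    model="tiiuae/falcon-7b-instruct",
)

prompt = "Summarise the key action points from these meeting minutes: ..."
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```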