
Bumblebee Malware Resurfaces in New Attacks Following Europol Crackdown

 

The Bumblebee malware loader, inactive since Europol's 'Operation Endgame' in May, has recently resurfaced in new cyberattacks. This malware, believed to have been developed by the TrickBot creators, first appeared in 2022 as a successor to the BazarLoader backdoor, giving ransomware groups access to victim networks.

Bumblebee spreads through phishing campaigns, malvertising, and SEO poisoning, often disguised as legitimate software such as Zoom, Cisco AnyConnect, ChatGPT, and Citrix Workspace. Among the dangerous payloads it delivers are Cobalt Strike beacons, data-stealing malware, and ransomware.

Operation Endgame was a large-scale law enforcement effort that targeted and dismantled over a hundred servers supporting various malware loaders, including IcedID, Pikabot, TrickBot, Bumblebee, and more. Following this, Bumblebee activity appeared to cease. However, cybersecurity experts at Netskope have recently detected new instances of the malware, hinting at a possible resurgence.

The latest Bumblebee attack involves a phishing email that tricks recipients into downloading a malicious ZIP file. Inside is a .LNK shortcut that activates PowerShell to download a harmful MSI file disguised as an NVIDIA driver update or Midjourney installer.

This MSI file is executed silently, and Bumblebee uses it to deploy itself in the system's memory. The malware uses a DLL unpacking process to establish itself, showing configuration extraction methods similar to previous versions. The encryption key "NEW_BLACK" was identified in recent attacks, along with two campaign IDs: "msi" and "lnk001."
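The delivery chain above starts with a ZIP attachment carrying a .LNK shortcut, so one simple defensive measure is to triage inbound archives for shortcut files before they ever reach a user. The sketch below is an illustrative helper using only the Python standard library; the file names are hypothetical and real mail-gateway triage would inspect far more than extensions.

```python
import io
import zipfile

def find_lnk_members(zip_bytes: bytes) -> list[str]:
    """Return the names of Windows shortcut (.lnk) files inside a ZIP blob."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return [name for name in zf.namelist() if name.lower().endswith(".lnk")]

# Build a stand-in attachment in memory for demonstration purposes.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("invoice.lnk", b"placeholder shortcut bytes")  # hypothetical lure
    zf.writestr("readme.txt", b"hello")

print(find_lnk_members(buf.getvalue()))  # ['invoice.lnk']
```

A hit on a shortcut inside an email attachment is a strong signal for quarantine, since legitimate archives rarely ship .LNK files.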

Although Netskope hasn't shared details about the payloads Bumblebee is currently deploying, the new activity signals the malware’s possible return. A full list of indicators of compromise can be found on a related GitHub repository.

AI Data Breach Reveals Trust Issues with Personal Information

 


Businesses exploring AI technology must balance the benefits it brings against the risks that come with it. Against this backdrop, Netskope Threat Labs has released the latest edition of its Cloud and Threat Report, which focuses on the use of AI apps within the enterprise. There is considerable risk associated with enterprise AI applications, including an increased attack surface, discussed in a previous report, and the accidental sharing of sensitive information that occurs when using AI apps. 

As users, and particularly as professionals working in the cybersecurity and privacy sectors, it is our responsibility to protect data in an age when artificial intelligence has become a popular tool. An artificial intelligence (AI) system is a machine-based program designed to simulate the way humans think and learn. 

AI systems come in various forms, each designed to perform specialized tasks using advanced computational techniques:

- Generative Models: These AI systems learn patterns from large datasets to generate new content, whether text, images, or audio. A notable example is ChatGPT, which creates human-like responses and creative content.
- Machine Learning Algorithms: Focused on learning from data, these models continuously improve their performance and automate tasks. Amazon Alexa, for instance, leverages machine learning to enhance voice recognition and provide smarter responses.
- Robotic Vision: In robotics, AI is used to interpret and interact with the physical environment. Self-driving cars such as Tesla's use advanced robotics to perceive their surroundings and make real-time driving decisions.
- Personalization Engines: These systems curate content based on user behavior and preferences, tailoring experiences to individual needs. Instagram Ads, for example, analyze user activity to display highly relevant ads and recommendations.

These examples highlight the diverse applications of AI across different industries and everyday technologies. 

In many cases, artificial intelligence (AI) chatbots are good at what they do, but they struggle to distinguish legitimate commands from their users from manipulation attempts by outside sources. 

In a cybersecurity report published on Wednesday, researchers assert that artificial intelligence has a definite Achilles' heel that attackers are likely to exploit in the near future. A great number of public chatbots powered by large language models (LLMs) have emerged over the last year, and the field of LLM cybersecurity is still in its infancy. Researchers have already found that these models may be susceptible to a specific form of attack known as "prompt injection," in which a bad actor sneakily supplies commands to the model without the user's knowledge. 

In some instances, attackers hide prompts inside webpages that the chatbot later reads, so that the chatbot might download malware, assist with financial fraud, or repeat dangerous misinformation to the people it serves. 

What is Artificial Intelligence?


AI (artificial intelligence) is one of the most important areas of study in technology today. AI focuses on developing systems that mimic human intelligence, with the ability to learn, reason, and solve problems autonomously. The two basic types of AI models that can be used for analyzing data are predictive AI models and generative AI models. 

A predictive AI function is a computational capability that uses existing data to make predictions about future outcomes or behaviours based on historical patterns. A generative AI system, by contrast, can create new data or content similar to what it was trained on, including content that never appeared in its training dataset. 

The philosophical roots of the field trace back to Leibniz, although the conception of "artificial intelligence" as we use the term today has existed since the early 1940s and became famous with the development of the "Turing test" in 1950. More recently, the field has gone through a period of rapid progress, a trend driven by three major factors: better algorithms, increased networked computing power, and a greater capacity to capture and store data in unprecedented quantities. 

Aside from technological advancements, the very way we think about intelligent machines has changed dramatically since the 1960s, a shift behind many of the developments taking place today. Even though most people are not aware of it, AI technologies are already being utilized in very practical ways in our everyday lives. It is a characteristic of AI that once a technique becomes effective, it stops being referred to as AI and becomes mainstream computing. For instance, being greeted by an automated voice when you call, or being suggested a movie based on your preferences, are mainstream AI technologies you can already take advantage of. Because these systems have become part of our daily lives, we often overlook the fact that they are supported by a variety of AI techniques, including speech recognition, natural language processing, and predictive analytics. 

What's in the news? 


There is a great deal of hype surrounding artificial intelligence and there is a lot of interest in the media regarding it, so it is not surprising to find that there are an increasing number of users accessing AI apps in the enterprise. The rapid adoption of artificial intelligence (AI) applications in the enterprise landscape is significantly raising concerns about the risk of unintentional exposure to internal information. A recent study reveals that, between May and June 2023, there was a weekly increase of 2.4% in the number of enterprise users accessing at least one AI application daily, culminating in an overall growth of 22.5% over the observed period. Among enterprise AI tools, ChatGPT has emerged as the most widely used, with daily active users surpassing those of any other AI application by a factor of more than eight. 

In organizations with a workforce exceeding 1,000 employees, an average of three different AI applications are utilized daily, while organizations with more than 10,000 employees engage with an average of five different AI tools each day. Notably, one out of every 100 enterprise users interacts with an AI application daily. The rapid increase in the adoption of AI technologies is driven largely by the potential benefits these tools can bring to organizations. Enterprises are recognizing the value of AI applications in enhancing productivity and providing a competitive edge. Tools like ChatGPT are being deployed for a variety of tasks, including reviewing source code to identify security vulnerabilities, assisting in the editing and refinement of written content, and facilitating more informed, data-driven decision-making processes. 

However, the unprecedented speed at which generative AI applications are being developed and deployed presents a significant challenge. The rapid rollout of these technologies has the potential to lead to the emergence of inadequately developed AI applications that may appear to be fully functional products or services. In reality, some of these applications may be created within a very short time frame, possibly within a single afternoon, often without sufficient oversight or attention to critical factors such as user privacy and data security. 

The hurried development of AI tools raises the risk that confidential or sensitive information entered into these applications could be exposed to vulnerabilities or security breaches. Consequently, organizations must exercise caution and implement stringent security measures to mitigate the potential risks associated with the accelerated deployment of generative AI technologies. 

Threat to Privacy


Methods of Data Collection 

AI tools generally employ one of two methods to collect data.

Direct collection: the AI system is programmed to gather specific data itself. Examples include online forms, surveys, and cookies on websites that gather information directly from users.

Indirect collection: data is gathered through other platforms and services. For instance, social media platforms might collect data on users' likes, shares, and comments, or a fitness app might gather data on users' physical activity levels. 

As technology continues to undergo ever-increasing waves of transformation, security and IT leaders will have to constantly balance the need to keep up with technology against the need for robust security. Whenever enterprises integrate artificial intelligence into their business, key considerations must be taken into account so that IT teams can achieve maximum results. 

As a fundamental aspect of any IT governance program, it is most important to determine which applications are permissible, in conjunction with implementing controls that not only empower users but also protect the organization from potential risks. Keeping an environment secure requires organizations to regularly monitor AI app usage, trends, behaviours, and the sensitivity of data in order to detect risks as they emerge.

A second effective way of protecting your company is to block access to non-essential or high-risk applications. Further, data loss prevention (DLP) policies should be implemented to detect sensitive information, such as source code, passwords, intellectual property, or regulated data, in traffic to AI apps. A real-time coaching feature that integrates with the DLP system reinforces the company's policies on how AI apps are used, ensuring users' compliance at all times. 
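At its core, a DLP policy of the kind described above is pattern matching over outbound content. The sketch below is a deliberately minimal illustration, not a production detector: the three patterns are common illustrative examples (an AWS-style access key, a PEM private-key header, a password assignment), and real DLP engines combine many more signals.

```python
import re

# Illustrative DLP-style patterns; real policies are far more extensive.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def classify(text: str) -> list[str]:
    """Return the labels of all sensitive-data patterns found in text."""
    return [label for label, rx in PATTERNS.items() if rx.search(text)]

print(classify("password = hunter2"))           # ['password_assignment']
print(classify("Summarize this meeting note"))  # []
```

A gateway enforcing such a policy would block or coach the user when `classify` returns any findings, rather than silently forwarding the prompt to the AI app.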

A security plan must be integrated across the organization, sharing intelligence to streamline security operations so that everything works in harmony as a seamless security program. Businesses that adhere to these core cloud security principles can experiment with AI applications confident that their proprietary corporate data will remain secure. This approach not only protects sensitive information but also allows companies to explore innovative applications of AI beyond mainstream tasks such as the creation of text or images.  

Cyber Threats by Nation-States Surge Beyond Control

 


In recent years, state-sponsored hacker groups have increased their attacks on critical infrastructure, causing great concern across the globe. It has become increasingly evident that these coordinated and sophisticated cyberattacks pose serious risks to national security and safety. 

To protect crucial systems such as power grids, healthcare systems, and water treatment plants, strong cybersecurity measures must be implemented to prevent any disruption or manipulation, underscoring the importance of safeguarding critical infrastructure. Currently, two-thirds of all cyberattacks attributed to a state-backed actor originate in foreign countries. This lends credence to warnings from the US Department of Homeland Security that enterprises and public services alike are facing significant threats. 

Netskope, a security firm that conducts research into state-sponsored attacks, has reported a marked increase in such attacks in recent years, warning that the trend does not appear to be waning anytime soon. Cyberattacks waged by nation-state actors now constitute one of the largest forms of quiet warfare on the planet, according to Netskope's CEO Sanjay Beri. To understand this worldwide escalation, it is necessary to look beneath the surface of the conflict, which reveals different states employing widely disparate cyberattack strategies. 

Given the current threat landscape, the U.S. administration has made a national unity of effort a priority in order to keep critical infrastructure secure, accessible, and reliable. Addressing these threats and attacks effectively will require international cooperation, strict regulations, and investments in advanced cybersecurity technologies. 

It is also imperative that we raise public awareness about cyber threats, in addition to improving cyber hygiene practices, to minimize the risks posed by state-sponsored cyberattacks on critical infrastructure. Additionally, the European Union Agency for Cybersecurity (ENISA) released an executive summary of 'Foresight Cybersecurity Threats for 2030', which highlights ten of the most dangerous emerging threats for the next decade. 

This study reviews previously identified threats and trends, offering insight into the morphing landscape of cybersecurity. By addressing issues such as supply chain compromises, skill shortages, digital surveillance, and machine learning abuse, the report contributes to developing robust cybersecurity frameworks and best practices for combating emerging threats by 2030. 

As part of its annual cybersecurity reporting, the National Cyber Security Centre (NCSC) of the United Kingdom has released a new report examining the possible impacts of artificial intelligence (AI) on the global ransomware threat, which has been on the rise for some time. The report indicates that the frequency and severity of cyberattacks may be exacerbated as AI continues to gain importance, and the NCSC advises individuals and organisations to proactively enhance their cybersecurity measures to head off security threats. 

The report also discusses how artificial intelligence will impact cyber operations in general, and social engineering and malware in particular, highlighting the importance of remaining vigilant against these evolving threats. Earlier this summer, the UK's NCSC, together with US and South Korean authorities, raised an alert regarding a North Korea-linked threat group known as Andariel that allegedly breached organizations all over the world, stealing sensitive and classified technology as well as intellectual property. 

Although the group predominantly targeted defense, aerospace, nuclear, and engineering companies, it also harmed smaller organizations in the medical, energy, and knowledge sectors on a lesser scale, stealing information such as contract specifications, design drawings, and project details. 

In March 2024, the United Kingdom took a firm stance against Chinese state-sponsored cyber activities targeting parliamentarians and the Electoral Commission, making it clear that such intrusions would not be tolerated. This came after a significant breach linked to Chinese state-affiliated hackers, prompting the UK government to summon the Chinese Ambassador and impose sanctions on a front company and two individuals associated with the APT31 hacking group. This decisive response highlighted the nation's commitment to countering state-sponsored cyber threats. 

The previous year saw similar tensions, as Russian-backed cyber threat actors faced increased scrutiny following a National Cyber Security Centre (NCSC) disclosure. The NCSC had exposed a campaign led by Russian intelligence services aimed at interfering with the UK's political landscape and democratic institutions. These incidents underscore a troubling trend: state-affiliated actors increasingly exploit the tools and expertise of cybercriminals to achieve their objectives. 

Over the past year, this collaboration between nation-state actors and cybercriminal entities has become more pronounced. Microsoft's observations reveal a growing pattern where state-sponsored groups not only pursue financial gain but also enlist cybercriminals to support intelligence collection, particularly concerning the Ukrainian military. These actors have adopted the same malware, command and control frameworks, and other tools commonly used by the wider cybercriminal community. Specific examples illustrate this evolution. 

Russian threat actors, for instance, have outsourced some aspects of their cyber espionage operations to criminal groups, especially in Ukraine. In June 2024, a suspected cybercrime group utilized commodity malware to compromise more than 50 Ukrainian military devices, reflecting a strategic shift toward outsourcing to achieve tactical advantages. Similarly, Iranian state-sponsored actors have turned to ransomware as part of their cyber-influence operations. In one notable case, they marketed stolen data from an Israeli dating website, offering to remove individual profiles from their database for a fee—blending ransomware tactics with influence operations. 

Meanwhile, North Korean cyber actors have also expanded into ransomware, developing a custom variant known as "FakePenny." This ransomware targeted organizations in the aerospace and defence sectors, employing a strategy that combined data exfiltration with subsequent ransom demands, thus aiming at both intelligence gathering and financial gain. The sheer scale of the cyber threat landscape is daunting, with Microsoft reporting over 600 million attacks daily on its customers alone. 

Addressing this challenge requires comprehensive countermeasures that reduce the frequency and impact of these intrusions. Effective deterrence involves two key strategies: preventing unauthorized access and imposing meaningful consequences for malicious behaviour. Microsoft's Secure Future Initiative represents a commitment to strengthening defences and safeguarding its customers from cyber threats. 

However, while the private sector plays a crucial role in thwarting attackers through enhanced cybersecurity, government action is also essential. Imposing consequences on malicious actors is vital to curbing the most damaging cyberattacks and deterring future threats. Despite substantial discussions in recent years about establishing international norms for cyberspace conduct, current frameworks lack enforcement mechanisms, and nation-state cyberattacks have continued to escalate in both scale and sophistication. 

To change this dynamic, a united effort from both the public and private sectors is necessary. Only through a combination of robust defence measures and stringent deterrence policies can the balance shift to favour defenders, creating a more secure and resilient digital environment.

Quishing Scams Exploit Microsoft Sway Platform

 


Researchers have discovered a new phishing campaign being run through Microsoft Sway. The series of attacks has been dubbed a "quishing" campaign: quishing is a form of phishing that embeds malicious URLs into QR codes in order to lead people to malicious websites. 

The campaign primarily focuses on a few groups of victims in Asia and North America. In late December, researchers noticed an unexpected spike in traffic to unique Microsoft Sway phishing pages as part of a quishing campaign targeting Microsoft Office credentials. As defined by Netskope Threat Labs, quishing is essentially phishing that tricks users into opening malicious pages by presenting them with QR codes. 
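The reason quishing works is that a QR code is simply an encoding of text, typically a URL, that email filters inspecting links may never see. Once a QR payload has been decoded, though, it can be vetted like any other URL. The sketch below is a hypothetical gateway check using only the standard library; the allowlist and verdicts are illustrative assumptions, not an actual product's policy.

```python
from urllib.parse import urlparse

# Illustrative allowlist; a real deployment would use threat intel feeds.
ALLOWED_HOSTS = {"login.microsoftonline.com"}

def vet_qr_payload(payload: str) -> str:
    """Classify a decoded QR payload as allow / quarantine / block."""
    parsed = urlparse(payload)
    if parsed.scheme not in ("http", "https"):
        return "block"          # not a web URL at all
    host = (parsed.hostname or "").lower()
    if host in ALLOWED_HOSTS:
        return "allow"
    return "quarantine"         # unknown destination: hold for review

print(vet_qr_payload("https://login.microsoftonline.com/"))   # allow
print(vet_qr_payload("https://sway-phish.example/qr"))        # quarantine
```

The hard part in practice is the decoding step itself, which is why attackers count on the scan happening on an unmanaged mobile device where no such check runs.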

According to Netskope, the campaign mainly targets victims in Asia and North America, across multiple industries such as the technology, manufacturing, and finance sectors. "Attackers instruct their victims to scan QR codes with their mobile devices, in the hope that these portable devices do not possess the strict security measures found on corporate-issued devices," the researchers wrote. 

This QR phishing campaign utilizes two techniques discussed in previous articles: transparent phishing in conjunction with Cloudflare Turnstile. Phishing operators use Cloudflare Turnstile to shield their malicious websites from static analysis tools, allowing them to hide their malicious payloads, prevent web-filtering providers from blocking their domains, and maintain a clean reputation on the web. 

Transparent phishing is an adversary-in-the-middle technique that is more sophisticated than traditional phishing. The attackers not only capture the victims' credentials but also log the victims into the legitimate service using those credentials, bypassing multi-factor authentication so they can steal sensitive tokens or cookies that grant further unauthorized access. 

This massive QR code phishing campaign abused Microsoft Sway, a cloud-based tool for creating online presentations, to build landing pages that tricked Microsoft 365 users into handing over their credentials. According to Netskope Threat Labs, the attacks were spotted in July 2024 after the firm detected a 2,000-fold increase in attacks exploiting Microsoft Sway to host phishing pages designed to steal access credentials for Microsoft 365 accounts. 

Interestingly, this surge follows a first half of the year in which minimal such activity was reported, which makes the campaign's breadth all the more striking. The attackers concentrated on users in Asia and North America, primarily in the technology, manufacturing, and finance sectors. Sway is a free application in Microsoft 365, available to anyone with a Microsoft account. 

Attackers, however, exploit this open access to pass their pages off as legitimate cloud applications and defraud users. Furthermore, because Sway is accessed once an individual has already logged into their Microsoft 365 account, the pages carry an added layer of legitimacy, increasing the chances that victims will open malicious links. 

Netskope Threat Labs identified a new QR code phishing campaign in July 2024, marking a significant development in cyber threats. This campaign primarily targets victims in Asia and North America, affecting various sectors, including manufacturing, technology, and finance. Cybercriminals employ diverse sharing methods, such as email, links, and social media platforms like Twitter, to direct users to phishing pages hosted on the sway.cloud.microsoft domain. 

Once on these pages, victims are prompted to scan QR codes that subsequently lead them to malicious websites. Microsoft Sway, a platform known for its versatility, has been exploited in the past for phishing activities. Notably, five years ago, the PerSwaysion phishing campaign leveraged Microsoft Sway to target Office 365 login credentials. This campaign, driven by a phishing kit offered through a malware-as-a-service (MaaS) operation, was uncovered by Group-IB security researchers.

The attacks deceived at least 156 high-ranking individuals within small and medium-sized financial services companies, law firms, and real estate groups. The compromised accounts included those of executives, presidents, and managing directors across the U.S., Canada, Germany, the U.K., the Netherlands, Hong Kong, and Singapore. This escalation in phishing tactics highlights the ongoing battle between cybercriminals and cybersecurity professionals, where each defensive measure is met with a corresponding offensive innovation. 

The need for a comprehensive approach to cybersecurity has never been more apparent, as malicious actors continue to exploit seemingly innocuous technologies for nefarious purposes. With the rising popularity of Unicode QR code phishing techniques, security experts emphasize the importance of enhancing detection capabilities to analyze not just images but also text-based codes and other unconventional formats used to deceive users and infiltrate systems. This sophisticated phishing method underscores the continuous vigilance required to safeguard digital environments against increasingly cunning cyber threats.

Phishing Campaigns Exploit Cloudflare Workers to Harvest User Credentials

 

Cybersecurity researchers are raising alarms about phishing campaigns that exploit Cloudflare Workers to serve phishing sites designed to harvest user credentials associated with Microsoft, Gmail, Yahoo!, and cPanel Webmail. This attack method, known as transparent phishing or adversary-in-the-middle (AitM) phishing, employs Cloudflare Workers to act as a reverse proxy for legitimate login pages, intercepting traffic between the victim and the login page to capture credentials, cookies, and tokens, according to Netskope researcher Jan Michael Alcantara. 

Over the past 30 days, the majority of these phishing campaigns have targeted victims in Asia, North America, and Southern Europe, particularly in the technology, financial services, and banking sectors. The cybersecurity firm noted an increase in traffic to Cloudflare Workers-hosted phishing pages starting in Q2 2023, with a spike in the number of distinct domains from just over 1,000 in Q4 2023 to nearly 1,300 in Q1 2024. The phishing campaigns utilize a technique called HTML smuggling, which uses malicious JavaScript to assemble the malicious payload on the client side, evading security protections. 

Unlike traditional methods, the malicious payload in this case is a phishing page reconstructed and displayed to the user on a web browser. These phishing pages prompt victims to sign in with Microsoft Outlook or Office 365 (now Microsoft 365) to view a purported PDF document. If users follow through, fake sign-in pages hosted on Cloudflare Workers are used to harvest their credentials and multi-factor authentication (MFA) codes. "The entire phishing page is created using a modified version of an open-source Cloudflare AitM toolkit," Alcantara said. 

Once victims enter their credentials, the attackers collect tokens and cookies from the responses, gaining visibility into any additional activity performed by the victim post-login. HTML smuggling is increasingly favored by threat actors for its ability to bypass modern defenses, serving fraudulent HTML pages and other malware without raising red flags. One highlighted instance by Huntress Labs involved a fake HTML file injecting an iframe of the legitimate Microsoft authentication portal retrieved from an actor-controlled domain. This method enables MFA-bypass AitM transparent proxy phishing attacks using HTML smuggling payloads with injected iframes instead of simple links. 
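HTML smuggling, as described above, leaves a recognizable fingerprint: client-side JavaScript that decodes a large base64 blob and reassembles it with calls such as `atob`, `Blob`, and `URL.createObjectURL`. The heuristic below is a simplified sketch of that idea; the indicator list, the two-hit threshold, and the 500-character blob cutoff are all assumptions for illustration, and real scanners use much richer analysis.

```python
import re

# Client-side reassembly APIs commonly seen in HTML-smuggling pages.
INDICATORS = [
    re.compile(r"atob\s*\("),
    re.compile(r"new\s+Blob\s*\("),
    re.compile(r"URL\.createObjectURL\s*\("),
]
# A long run of base64-looking characters (threshold is an assumption).
B64_BLOB = re.compile(r"[A-Za-z0-9+/=]{500,}")

def looks_like_smuggling(html: str) -> bool:
    """Flag pages combining a big base64 blob with reassembly JavaScript."""
    hits = sum(1 for rx in INDICATORS if rx.search(html))
    return hits >= 2 and B64_BLOB.search(html) is not None

sample = ("<script>var b=atob('" + "QUJD" * 200 + "');"
          "var f=new Blob([b]);URL.createObjectURL(f);</script>")
print(looks_like_smuggling(sample))        # True
print(looks_like_smuggling("<p>hi</p>"))   # False
```

Because the payload only exists after the browser runs this JavaScript, network-level scanners that never execute the page see nothing but benign-looking HTML, which is exactly why the technique bypasses traditional defenses.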

Recent phishing campaigns have also used invoice-themed emails with HTML attachments masquerading as PDF viewer login pages to steal email account credentials before redirecting users to URLs hosting "proof of payment." These tactics leverage phishing-as-a-service (PhaaS) toolkits like Greatness to steal Microsoft 365 login credentials and bypass MFA using the AitM technique. The financial services, manufacturing, energy/utilities, retail, and consulting sectors in the U.S., Canada, Germany, South Korea, and Norway have been top targets. 

Threat actors are also employing generative artificial intelligence (GenAI) to craft effective phishing emails and using file inflation methods to evade analysis by delivering large malware payloads. Cybersecurity experts underscore the need for robust security measures and oversight mechanisms to combat these sophisticated phishing campaigns, which continually evolve to outsmart traditional detection systems.

Blocking Access to AI Apps is a Short-term Solution to Mitigate Safety Risk


Another major revelation regarding ChatGPT recently came to light through research conducted by Netskope. According to their analysis, business organizations are experiencing about 183 incidents of sensitive data being posted to ChatGPT per 10,000 corporate users each month. Among the sensitive data being exposed, source code accounts for the largest share.

The security researchers further scrutinized data from millions of enterprise users worldwide and emphasized the growing trend of generative AI app usage, which witnessed an increase of 22.5% over the past two months, escalating the chance of sensitive data being exposed. 

ChatGPT Reigning the Generative AI Market

Apparently, organizations with 10,000 or more users are utilizing one AI tool or another on a regular basis, with an average of 5 apps. Compared to other generative AI apps, ChatGPT has more than 8 times as many daily active users. At the present growth pace, the number of people accessing AI apps is anticipated to double within the next seven months.
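The seven-month doubling estimate is consistent with the reported growth figures. As a quick sanity check, a 22.5% rise over roughly two months corresponds to about 2.4% compound growth per week, and the doubling time then follows from compound growth:

```python
import math

weekly_growth = 0.024  # ~2.4% more users each week (reported rate)

# Doubling time for compound growth: solve (1 + r)^n = 2 for n weeks.
weeks_to_double = math.log(2) / math.log(1 + weekly_growth)

print(round(weeks_to_double, 1))          # ~29 weeks
print(round(weeks_to_double / 4.33, 1))   # ~6.7 months, i.e. about seven
```

The same arithmetic applied to Google Bard's faster 7.1% weekly growth explains why it is gaining ground on ChatGPT despite its much smaller base.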

Google Bard was the fastest-growing AI app over the last two months, presently attracting new users at a rate of 7.1% per week versus 1.6% for ChatGPT. Even so, at the current rate Google Bard is not projected to overtake ChatGPT for more than a year, although the generative AI app market is expected to grow considerably before then, with many more apps in development.

Besides intellectual property (excluding source code) and personally identifiable information, other sensitive data shared via ChatGPT includes regulated data, such as financial and healthcare data, as well as passwords and keys, which are often embedded in source code.

According to Ray Canzanese, Threat Research Director, Netskope Threat Lab, “It is inevitable that some users will upload proprietary source code or text containing sensitive data to AI tools that promise to help with programming or writing[…]Therefore, it is imperative for organizations to place controls around AI to prevent sensitive data leaks. Controls that empower users to reap the benefits of AI, streamlining operations and improving efficiency, while mitigating the risks are the ultimate goal. The most effective controls that we see are a combination of DLP and interactive user coaching.”

Safety Measures for Adopting AI Apps

As opportunistic attackers look to profit from the popularity of artificial intelligence, Netskope Threat Labs is presently monitoring ChatGPT proxies and more than 1,000 malicious URLs and domains, including phishing sites, malware distribution campaigns, spam, and fraud websites.

While blocking access to AI content and apps may seem like a good idea, it is at best a short-term solution. 

James Robinson, Deputy CISO at Netskope, said “As security leaders, we cannot simply decide to ban applications without impacting on user experience and productivity[…]Organizations should focus on evolving their workforce awareness and data policies to meet the needs of employees using AI products productively. There is a good path to safe enablement of generative AI with the right tools and the right mindset.”

To enable the safe adoption of AI apps, organizations must focus their strategy on identifying acceptable applications and implementing controls that let users use them to their full potential while protecting the business from risk. Such a strategy should incorporate domain filtering, URL filtering, and content inspection to protect against attacks.
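The domain- and URL-filtering layer can be pictured as a simple policy decision in front of every outbound request. The sketch below is hypothetical: the domain lists are invented examples (a real deployment would pull blocklists from threat-intel feeds and sanctioned-app lists from policy), and "inspect" stands in for the content-inspection stage:

```python
from urllib.parse import urlparse

# Hypothetical policy lists for illustration only.
BLOCKED_DOMAINS = {"chatgpt-proxy.example", "free-gpt.example"}
ALLOWED_AI_DOMAINS = {"chat.openai.com", "bard.google.com"}

def classify_request(url: str) -> str:
    """Return 'block', 'allow', or 'inspect' for an outbound request.

    Known-bad domains are blocked, sanctioned AI apps are allowed,
    and everything else is handed to content inspection.
    """
    host = (urlparse(url).hostname or "").lower()

    def matches(domains: set[str]) -> bool:
        # Match the domain itself and any of its subdomains.
        return any(host == d or host.endswith("." + d) for d in domains)

    if matches(BLOCKED_DOMAINS):
        return "block"
    if matches(ALLOWED_AI_DOMAINS):
        return "allow"
    return "inspect"
```

Routing unknown traffic to inspection rather than blocking it outright is what keeps this approach from degrading into the blanket ban the article cautions against.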

Here are some additional measures for securing data while using AI tools safely: 

  • Disable access to apps that lack a legitimate commercial value or that put the organization at disproportionate risk. 
  • Educate employees on company policy regarding the use of AI apps.
  • Utilize cutting-edge data loss prevention (DLP) tools to identify posts with potentially sensitive data.  
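The DLP check in the last bullet amounts to inspecting text before it leaves for an AI app and flagging anything sensitive. The sketch below uses a few illustrative regex detectors (real DLP engines combine validators, fingerprinting, and classifiers; the pattern set here is an assumption, not Netskope's):

```python
import re

# Illustrative detectors only; production DLP uses far richer checks.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "ssn":            re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the sensitive-data categories found in a prompt.

    A non-empty result would trigger blocking the upload or showing
    the user an interactive coaching message.
    """
    return [name for name, rx in PATTERNS.items() if rx.search(text)]
```

Pairing a scanner like this with user coaching, rather than silent blocking, matches the combination of DLP and coaching that Canzanese describes as most effective.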

Users' Crypto Wallets are Stolen by Fake Binance NFT Mystery Box Bots
Researchers have discovered a new campaign distributing RedLine Stealer, a low-cost password stealer sold on underground forums, via fake Binance NFT mystery box bots hosted in GitHub repositories and promoted through an array of YouTube videos that take advantage of global interest in NFTs. 

The lure is the promise of a bot that will automatically purchase Binance NFT Mystery Boxes as they become available. Binance mystery boxes are collections of non-fungible token (NFT) items that users purchase in the hope of receiving a unique or rare item at a discounted price. Some of the NFTs obtained from such boxes can be used in online blockchain games to add unusual cosmetics or identities. However, the bot is a hoax. According to Gustavo Palazolo, a malware analyst at Netskope Threat Labs, the video descriptions on the YouTube pages lead victims to unknowingly download RedLine Stealer from a GitHub link. 

In the NFT market, mystery boxes are popular because they offer the thrill of the unknown and the possibility of a large payout if a rare NFT is won. However, marketplaces such as Binance sell them in limited quantities, making some boxes difficult to obtain before they sell out. This is why prospective buyers frequently use "bots" to obtain them, and it is exactly this trend that threat actors are attempting to exploit.

"We found in this attempt that the attacker is also exploiting GitHub in the threat flow, to host the payloads," Palazolo said, adding that RedLine Stealer was already known for using YouTube videos with false themes to proliferate. Netskope spotted the campaign in April. "While RedLine Stealer is a low-cost malware, it has several capabilities that might do considerable harm to its victims, including the loss of sensitive data," Palazolo said. 

The ads were uploaded during March and April 2022, and each one includes a link to a GitHub repository that purports to host the bot but instead distributes RedLine. The dropped file, named "BinanceNFT.bot v1.3.zip", contains a similarly named program (the payload), a Visual C++ installer, and a README.txt file. Because RedLine is written in .NET, it requires the VC redistributable setup file to run, while the text file contains installation instructions for the victim.

According to Palazolo, the malware does not run if the infected machine is located in any of the following countries: Armenia, Azerbaijan, Belarus, Kazakhstan, Kyrgyzstan, Moldova, Russia, Tajikistan, Ukraine, and Uzbekistan.

The GitHub account behind the repository, "NFTSupp," began activity in March 2022, according to Palazolo. The same account also hosts 15 zipped files containing five different RedLine Stealer loaders. "While each of the five loaders we looked at is slightly different, they all unzip and inject RedLine Stealer in the same fashion, as we discussed earlier in this report. The oldest sample we identified was most likely created on March 11, 2022, and the newest sample was most likely compiled on April 7, 2022," he said. Some of the promotions instead use rebrand.ly URLs that lead to MediaFire downloads. According to VirusTotal, this operation is also spreading other password-stealing trojans. 

RedLine is currently sold to independent operators on a subscription basis for $100 per month, and it enables the theft of login credentials and cookies from browsers, content from chat apps, VPN keys, and cryptocurrency wallets. Keep in mind that the legitimacy of platforms like YouTube and GitHub does not inherently imply that their content is reliable, as these sites' upload checks and moderation systems are inadequate.

Cloud-Delivered Malware Increased 68% in Q2, Netskope Reports
Cybersecurity firm Netskope published the fifth edition of its Cloud and Threat Report, which covers the cloud data risks, threats, and trends the firm observed throughout the quarter. According to the report, malware delivered over the cloud increased 68% in the second quarter.

"In Q2 2021, 43% of all malware downloads were malicious Office docs, compared to just 20% at the beginning of 2020. This increase comes even after the Emotet takedown, indicating that other groups observed the success of the Emotet crew and have adopted similar techniques," the report said.

“Collaboration apps and development tools account for the next largest percentage, as attackers abuse popular chat apps and code repositories to deliver malware. In total, Netskope detected and blocked malware downloads originating from 290 distinct cloud apps in the first half of 2021." 

Cybersecurity researchers explained that threat actors deliver malware via cloud applications “to bypass blocklists and take advantage of any app-specific allow lists.” Cloud service providers usually eliminate most malware instantly, but some attackers have discovered methods to do significant damage in the short time they spend in a system without being noticed.

According to the company's researchers, cloud storage apps account for more than 66% of cloud malware delivery. In addition, approximately 35% of all workloads in AWS, Azure, and GCP are exposed to the public internet, with public IP addresses reachable from anywhere on the internet.
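Public-internet exposure of the kind described above can be audited programmatically. The helper below is a minimal, hypothetical sketch that filters EC2 DescribeInstances-style data (the nested Reservations/Instances shape returned by boto3) for instances carrying a public IP; it takes already-fetched data so it stays self-contained:

```python
def find_public_instances(reservations: list[dict]) -> list[str]:
    """Return IDs of instances that carry a public IP address.

    `reservations` follows the shape of boto3's
    ec2.describe_instances()["Reservations"]. Instances with a
    PublicIpAddress are reachable from the internet and warrant
    review (e.g., is RDP on port 3389 also open?).
    """
    exposed = []
    for reservation in reservations:
        for inst in reservation.get("Instances", []):
            if inst.get("PublicIpAddress"):
                exposed.append(inst["InstanceId"])
    return exposed
```

Running a check like this per region, and cross-referencing security-group rules, is one way to surface the exposed-RDP workloads the report flags.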

RDP servers, "a popular infiltration vector for attackers," were exposed in 8.3% of workloads. Today, the average company with 500-2,000 employees uses 805 distinct apps and cloud services, 97% of which are unmanaged and often freely adopted by business units and individual users.

According to Netskope's findings, employees leaving an organization upload three times more data to their personal apps in their last 30 days of employment. Much of this data is uploaded to personal Google Drive and Microsoft OneDrive accounts, which are popular targets for cybercriminals, leaving company data exposed. 

Joseph Carson, chief security scientist and advisory CISO at ThycoticCentrify, said that last year's shift to a hybrid work environment requires cybersecurity to evolve from perimeter- and network-based controls to cloud, identity, and privileged access management. 

"Organizations must continue to adapt and prioritize managing and securing access to the business applications and data, such as that similar to the BYOD types of devices, and that means further segregation networks for untrusted devices but secured with strong privileged access security controls to enable productivity and access," Carson said.