Cybercriminals Are Becoming More Proficient at Exploiting Vulnerabilities


According to Fortinet, cybercriminals have set their sights on the growing number of new vulnerabilities created by the expansion of online services and applications, as well as the rapid rise in the number and variety of connected devices. It is inevitable that attacks targeting those vulnerabilities will increase.

The most recent semiannual report provides a snapshot of the active threat landscape and highlights trends from July to December 2023, including an analysis of the rate at which cyber criminals are capitalising on newly discovered exploits from across the cybersecurity industry, as well as the rise of targeted ransomware and wiper activity against the industrial and OT sectors.

Attacks began an average of 4.76 days after new exploits were publicly disclosed. As in the 1H 2023 Global Threat Landscape Report, FortiGuard Labs wanted to understand how long it takes for a vulnerability to move from initial disclosure to exploitation, whether flaws with a high Exploit Prediction Scoring System (EPSS) score are exploited faster, and whether EPSS data could be used to predict the average time-to-exploitation.
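
As a rough illustration of the kind of analysis described above, the sketch below computes average time-to-exploitation from hypothetical disclosure and first-exploitation dates and splits it by EPSS score band. The field names and sample records are assumptions for illustration, not FortiGuard Labs data.

```python
from datetime import date
from statistics import mean

# Hypothetical records: (CVE ID, public disclosure date, first observed exploitation, EPSS score)
observations = [
    ("CVE-2023-0001", date(2023, 7, 3),  date(2023, 7, 6),   0.92),
    ("CVE-2023-0002", date(2023, 8, 14), date(2023, 8, 21),  0.31),
    ("CVE-2023-0003", date(2023, 9, 1),  date(2023, 9, 4),   0.88),
    ("CVE-2023-0004", date(2023, 10, 9), date(2023, 10, 20), 0.12),
]

def days_to_exploit(disclosed, exploited):
    return (exploited - disclosed).days

# Overall average time-to-exploitation
overall = mean(days_to_exploit(d, e) for _, d, e, _ in observations)
print(f"average time-to-exploitation: {overall:.2f} days")

# Split by EPSS band to see whether high-scoring flaws are exploited faster
high = [days_to_exploit(d, e) for _, d, e, s in observations if s >= 0.5]
low  = [days_to_exploit(d, e) for _, d, e, s in observations if s < 0.5]
print(f"high-EPSS mean: {mean(high):.1f} days, low-EPSS mean: {mean(low):.1f} days")
```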

Vendors’ obligation to disclose flaws 

Based on this analysis, attackers increased the rate at which they exploited newly revealed vulnerabilities in the second half of 2023 (43% faster than in the first half of 2023). This highlights the importance of vendors committing to internally discovering vulnerabilities and implementing patches before exploitation starts. It also emphasises the importance of vendors disclosing vulnerabilities to customers proactively and transparently in order to provide them with the information they need to successfully secure their assets before cyber attackers exploit N-day flaws. 

CISOs and security teams need to be concerned about more than simply newly found vulnerabilities. According to Fortinet telemetry, 41% of organisations discovered exploits from signatures that were less than a month old, while 98% detected N-Day vulnerabilities that had existed for at least five years.

FortiGuard Labs has also observed threat actors exploiting vulnerabilities that are more than 15 years old. This underscores the importance of basic security hygiene: organisations should act quickly through a consistent patching and updating programme, drawing on best practices and guidance from groups such as the Network Resilience Coalition to improve overall network security. 
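
A minimal sketch of the kind of N-day hygiene check this implies: given a hypothetical asset inventory and a list of known-affected versions with CVE publication dates, flag anything still exposed past a chosen age threshold. The inventory format, advisory records, and SLA value are assumptions for illustration.

```python
from datetime import date

# Hypothetical advisories: (product, vulnerable version, CVE, publication date)
advisories = [
    ("examplehttpd", "2.4.1", "CVE-2017-1234", date(2017, 6, 2)),
    ("exampledb",    "9.6.0", "CVE-2021-5678", date(2021, 3, 15)),
]

# Hypothetical asset inventory: hostname -> {product: installed version}
inventory = {
    "web-01": {"examplehttpd": "2.4.1"},
    "db-01":  {"exampledb": "13.2"},
}

MAX_AGE_DAYS = 30  # patch SLA; N-day exposure beyond this gets flagged
today = date(2023, 12, 31)

for host, packages in inventory.items():
    for product, version, cve, published in advisories:
        if packages.get(product) == version:
            age = (today - published).days
            if age > MAX_AGE_DAYS:
                print(f"{host}: {product} {version} exposed to {cve} for {age} days")
```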

Ransomware targeting critical sectors 

44% of all ransomware and wiper samples targeted the industrial sector. Ransomware detections decreased by 70% across all Fortinet sensors compared to the first half of 2023. The observed drop in ransomware over the last year is likely due to attackers moving away from the traditional "spray and pray" technique towards a more focused approach, primarily targeting the energy, healthcare, manufacturing, transportation and logistics, and automotive industries. 

Botnets showed remarkable durability, with command and control (C2) connections ceasing on average 85 days after initial detection. While bot traffic remained consistent with the first half of 2023, FortiGuard Labs continued to see the more prominent botnets of recent years, such as Gh0st, Mirai, and ZeroAccess, but three new botnets surfaced in the second half of 2023: AndroxGh0st, Prometei, and DarkGate. 

38 of the 143 advanced persistent threat (APT) groups tracked by MITRE were observed to be active during the second half of 2023, according to FortiRecon, Fortinet's digital risk protection solution. The most active groups included the Lazarus Group, Kimsuky, APT28, APT29, Andariel, and OilRig. 

“The 2H 2023 Global Threat Landscape Report from FortiGuard Labs continues to shine a light on how quickly threat actors are taking advantage of newly disclosed vulnerabilities. In this climate, both vendors and customers have a role to play. Vendors must introduce robust security scrutiny at all stages of the product development life cycle and dedicate themselves to responsible radical transparency in their vulnerability disclosures. With over 26,447 vulnerabilities across more than 2,000 vendors in 2023 as cited by NIST, it is also critical that customers maintain a strict patching regimen to reduce the risk of exploitation,” stated Derek Manky, Chief Security Strategist and Global VP Threat Intelligence, FortiGuard Labs.

Dangers of Adopting Unsanctioned SaaS Applications


As you may have noticed on your most recent Zoom calls, the latest programme update quietly introduced a sleek little app-store sidebar on the right side of the session screen. With the touch of a button, and without even pausing the Zoom session, this feature lets any business user in your company connect the software-as-a-service (SaaS) apps displayed in the sidebar.

The fact that anyone within an organisation can deploy, administer, and manage SaaS applications highlights both one of SaaS's major strengths and one of its major security risks. Although this approach is quick and simple for business enablement, it also bypasses any internal security review process. 

As a result, your security team cannot identify which applications are being adopted and used, whether they are vulnerable to security threats, whether they are being used securely, or how to put security barriers in place to prevent unauthorised access to them. Zero-trust security principles become nearly impossible to enforce. 

Joint Obligation 

Before reprimanding staff for recklessly adopting SaaS applications, companies need to understand that employees are continually being urged by vendors to install additional apps and adopt new features. Indeed, the applications themselves frequently meet genuine business needs, and employees naturally want to use them right away rather than wait for a lengthy security evaluation. But, whether they realise it or not, they are acting this way because shrewd application providers are actively marketing to them and frequently leading users to believe they are following security best practices. Users do not always read the consent text on the screens that are meant to give them pause during installation and nudge them to review their rights and obligations. 

Always be cautious

In other circumstances, security is frequently presumed. Consider well-known brands' application markets. Vendors do not have the motivation, financial interest, or capacity to assess the security posture of every third-party application sold on their marketplaces. Yet, in order to promote the business, they may mislead users into believing that anything sold there retains the same level of protection as the marketplace vendor, frequently by omission. Similarly, market descriptions may be worded in such a way as to imply that their application was developed in partnership with or approved by a significant, secure brand.

The use of application marketplaces results in third-party integrations that carry the same risks as those behind several recent attacks. In the April 2022 GitHub attack campaign, attackers stole and abused legitimate Heroku and Travis-CI OAuth tokens issued to well-known vendors. According to GitHub, by exploiting the trust and broad access granted to reputable vendors, the attackers were able to steal data from dozens of GitHub customers and private repositories. 
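
A minimal sketch of how a defender might gauge the blast radius of a single OAuth token: list every repository the token can reach via GitHub's standard REST endpoint GET /user/repos. The environment variable name and the decision to print only repository names are illustrative assumptions.

```python
import os
import requests

# Token to audit; in practice this would come from a secrets manager
token = os.environ["GITHUB_TOKEN"]

headers = {
    "Authorization": f"token {token}",
    "Accept": "application/vnd.github+json",
}

# Page through every repository this token can access, private repos included
page = 1
while True:
    resp = requests.get(
        "https://api.github.com/user/repos",
        headers=headers,
        params={"per_page": 100, "page": page},
        timeout=30,
    )
    resp.raise_for_status()
    repos = resp.json()
    if not repos:
        break
    for repo in repos:
        print(f"{repo['full_name']} (private: {repo['private']})")
    page += 1
```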

Similarly, CircleCI, a provider specialising in CI/CD and DevOps technologies, reported in December 2022 that some customer data was stolen in a data breach. The investigation was triggered by a compromised GitHub OAuth token. According to the CircleCI team's research, the attackers were able to obtain a valid session token from a CircleCI engineer, allowing them to bypass the two-factor authentication mechanism and gain unauthorised access to production systems. They were able to steal customer variables, tokens, and keys as a result. 
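
A rough sketch of the kind of secret rotation such an incident forces, assuming CircleCI's v2 REST API project environment-variable endpoints; the API token, project slug, variable names, and replacement values are placeholders, not real secrets.

```python
import requests

CIRCLE_TOKEN = "replace-with-a-personal-api-token"   # placeholder
PROJECT_SLUG = "gh/example-org/example-repo"          # placeholder vcs/org/repo slug
BASE = f"https://circleci.com/api/v2/project/{PROJECT_SLUG}/envvar"
HEADERS = {"Circle-Token": CIRCLE_TOKEN}

# Secrets to rotate: variable name -> freshly issued value (placeholders)
rotated = {
    "AWS_SECRET_ACCESS_KEY": "new-secret-value",
    "DEPLOY_TOKEN": "new-deploy-token",
}

for name, value in rotated.items():
    # Remove the potentially exposed value, then store the replacement
    requests.delete(f"{BASE}/{name}", headers=HEADERS, timeout=30)
    resp = requests.post(BASE, headers=HEADERS, json={"name": name, "value": value}, timeout=30)
    resp.raise_for_status()
    print(f"rotated {name}")
```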

An Attraction to Frictionless Adoption 

Vendors also design their platforms and incentive plans to make adoption as simple as accepting a free trial, a lifetime free service tier, or swiping a credit card, frequently with alluring discounts to try and buy without commitment. Vendors want users to adopt any exciting new capability immediately, so they remove all barriers to adoption, including sidestepping ongoing IT and security team reviews. The hope is that an application will become too well-liked by business users and too crucial to corporate operations to be removed, even if security personnel become aware of its use. 

Making adoption too simple, however, can also lead to a rise in the number of underutilised, abandoned, and exposed apps. An app can frequently continue to function after it has been rejected during a proof of concept (PoC), abandoned because users lost interest, or orphaned because the app owner left the company. This results in an expanded and unprotected attack surface that puts the organisation and its data at greater risk.
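
As a sketch of how a security team might flag such abandoned integrations, the snippet below checks a hypothetical audit log of per-app activity and reports anything silent for more than 90 days. The log format, app names, and threshold are assumptions for illustration.

```python
from datetime import date, timedelta

# Hypothetical SaaS integration audit log: app name -> last observed activity
last_activity = {
    "crm-sync-bot":      date(2023, 12, 20),
    "poc-analytics-app": date(2023, 6, 2),    # rejected after a PoC, never removed
    "old-survey-tool":   date(2023, 4, 11),   # owner left the company
}

STALE_AFTER = timedelta(days=90)
today = date(2023, 12, 31)

for app, last_seen in sorted(last_activity.items()):
    idle = today - last_seen
    if idle > STALE_AFTER:
        print(f"{app}: no activity for {idle.days} days - review and revoke access")
```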

While educating business users on SaaS security best practices is important, it is even more crucial to prevent SaaS sprawl by teaching them to think more critically about the seductive promises of quick deployment and the financial incentives offered by SaaS suppliers.

Additionally, security teams ought to use solutions that can help them manage risks associated with SaaS misconfiguration and SaaS-to-SaaS integrations. These technologies allow customers to continue utilising SaaS applications as required while also conducting security due diligence on new vendors and integrations and setting up crucial security barriers.

Security Vendors are Turning to GPT as a Key AI Technology


A number of businesses are utilising conversational AI technology to improve their product capabilities, including for security, despite some concerns about how generative AI chatbots like ChatGPT can be used maliciously — to create phishing campaigns or write malware. 

ChatGPT, created by OpenAI, is a large language model (LLM) chatbot built on the GPT-3 LLM and trained on a variety of large text data sets. When a user asks a simple question, ChatGPT, which can understand human language, responds with thorough explanations and can manage complex tasks like document creation and code writing. It serves as an illustration of how conversational AI can be used to organise massive amounts of data, improve user experience, and facilitate communications. 
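
For context, here is a minimal sketch of calling such a model programmatically, assuming the 0.x openai Python package that was current around the time of these announcements; the model name, prompt, and temperature are illustrative choices.

```python
import os
import openai

# Assumes an API key in the environment and the 0.x openai SDK
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",           # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a concise security assistant."},
        {"role": "user", "content": "Summarise the risk of exposed OAuth tokens in two sentences."},
    ],
    temperature=0.2,
)

print(response["choices"][0]["message"]["content"])
```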

For example, IT research and advisory firm Info-Tech Research Group suggests that a conversational AI tool, such as ChatGPT or an alternative, could act as the back end of an information concierge that automates the use of threat intelligence in enterprise support. 

Orca Security appears to be taking that tack with the Orca Security Platform. The platform's capacity to produce contextual and precise remediation plans for security alerts was improved by integrating OpenAI's GPT-3 API, particularly the "Da-Vinci-03" series, wrote Itamar Golan, head of data science at Orca, and Lior Drihem, director of innovation at Orca, in the announcement. The new pipeline preprocesses data from a security alert, including fundamental details about the risk and its contextual environment, such as affected assets, attack vectors, and potential impact, before feeding those components as input to GPT-3. The AI then generates the best and most useful steps to fix the problem, according to Golan and Drihem. These remediation steps can also be attached to tickets, such as Jira tickets, for teams to refer to and apply. 
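
A minimal sketch of a pipeline in that spirit (not Orca's actual implementation): assemble the alert's risk description, affected assets, attack vector, and potential impact into a prompt and ask a GPT-3-series completion model for remediation steps. The alert fields, prompt wording, and model name are assumptions, and the snippet again assumes the 0.x openai package.

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical preprocessed security alert
alert = {
    "risk": "Publicly exposed S3 bucket containing database backups",
    "assets": ["s3://prod-backups-example"],
    "attack_vector": "Unauthenticated internet access to bucket objects",
    "impact": "Disclosure of customer records and credentials",
}

prompt = (
    "Generate a numbered, step-by-step remediation plan for this security alert.\n"
    f"Risk: {alert['risk']}\n"
    f"Affected assets: {', '.join(alert['assets'])}\n"
    f"Attack vector: {alert['attack_vector']}\n"
    f"Potential impact: {alert['impact']}\n"
)

completion = openai.Completion.create(
    model="text-davinci-003",     # illustrative GPT-3-series completion model
    prompt=prompt,
    max_tokens=300,
    temperature=0.2,
)

remediation_plan = completion["choices"][0]["text"].strip()
print(remediation_plan)   # could be attached to a Jira ticket from here
```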

Even though the AI model has the potential to produce inaccurate data (or ambiguous results), Drihem and Golan claim that "the benefits of utilising GPT3's natural language generation capabilities outweigh any potential risks, and have seen significant improvements in the efficiency and effectiveness of our remediation efforts." 

Orca Security has used language models in its work before. To improve the remediation information customers receive regarding infosec risks, the company recently integrated GPT-3 into its cloud security platform. 

"By fine-tuning these powerful language models with our own security data sets, we have been able to improve the detail and accuracy of our remediation steps — giving you a much better remediation plan and assisting you to optimally solve the issue as fast as possible," Golan and Drihem added. 

Utilizing LLM & AI for applications 

Orca Security joins other businesses that offer language models as part of their product line. This week, Gupshup introduced Auto Bot Builder, a tool that uses GPT-3 to assist businesses in creating their own sophisticated conversational chatbots. Using content from the enterprise website, documents, message logs, product catalogues, databases, and other corporate systems, Auto Bot Builder creates chatbots tailored to the enterprise's unique requirements. The information is processed using the GPT-3 LLM and then fine-tuned with proprietary industry-specific models. Businesses can use Auto Bot Builder to create chatbots for customer support, product discovery, product recommendations, shopping advice, and lead generation in marketing. 
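
A rough illustration of grounding a chatbot in enterprise content, loosely in the spirit of such tools rather than Gupshup's actual pipeline: embed a few documents, retrieve the closest match to a user question, and pass it to the model as context. The document snippets, model names, and similarity logic are illustrative assumptions, using the 0.x openai SDK.

```python
import os
import openai
import numpy as np

openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical enterprise content to ground the chatbot in
documents = [
    "Returns are accepted within 30 days of purchase with a valid receipt.",
    "Premium support is available 24/7 via chat for enterprise customers.",
    "Orders over $50 ship free within the continental US.",
]

def embed(text):
    # General-purpose OpenAI embedding model, 0.x SDK call style
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

doc_vectors = [embed(d) for d in documents]

question = "Do you offer free shipping?"
q_vec = embed(question)

# Cosine similarity to pick the most relevant document as context
scores = [float(q_vec @ v / (np.linalg.norm(q_vec) * np.linalg.norm(v))) for v in doc_vectors]
context = documents[int(np.argmax(scores))]

answer = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": f"Answer using only this company policy: {context}"},
        {"role": "user", "content": question},
    ],
)
print(answer["choices"][0]["message"]["content"])
```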

These chatbots differ from ChatGPT, a general-purpose chatbot, but they share with ChatGPT the ability to communicate with end users at an "exceptionally high degree of language capability," according to Gupshup. 

ChatGPT is also being used by the cryptocurrency community to develop software such as trading bots and cryptocurrency blogs, competitive intelligence analyst Jerrod Piker of Deep Instinct wrote in an email. Examples include generating a sample smart contract with ChatGPT and building a trading bot that helps automate buying and selling cryptocurrencies by identifying entry and exit points. 
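
For flavour, here is a toy version of the kind of entry/exit logic such a ChatGPT-generated bot might contain: a simple moving-average crossover over hypothetical prices. The price series, window sizes, and signals are purely illustrative and not trading advice.

```python
# Toy moving-average crossover: buy when the short average crosses above
# the long average, sell on the opposite cross. Prices are made up.
prices = [42.0, 42.5, 43.1, 42.8, 43.9, 44.5, 44.2, 43.0, 42.1, 41.8, 42.6, 43.4]

SHORT, LONG = 3, 5

def sma(series, window, i):
    # Simple moving average of the `window` values ending at index i
    return sum(series[i - window + 1 : i + 1]) / window

position = None
for i in range(LONG - 1, len(prices)):
    short_avg = sma(prices, SHORT, i)
    long_avg = sma(prices, LONG, i)
    if short_avg > long_avg and position != "long":
        position = "long"
        print(f"day {i}: BUY at {prices[i]:.2f}")
    elif short_avg < long_avg and position == "long":
        position = None
        print(f"day {i}: SELL at {prices[i]:.2f}")
```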

The idea of a generative AI chatbot that can respond to questions is not new, but Casey Ellis, founder and CTO of Bugcrowd, notes that ChatGPT stands out from the competition due to the variety of topics it can handle and its usability.