
Network Detection and Response Defends Against AI-Powered Cyber Attacks


Cybersecurity teams are facing growing pressure as attackers increasingly adopt artificial intelligence to accelerate, scale, and conceal malicious activity. Modern threat actors are no longer limited to static malware or simple intrusion techniques. Instead, AI-powered campaigns are using adaptive methods that blend into legitimate system behavior, making detection significantly more difficult and forcing defenders to rethink traditional security strategies. 

Threat intelligence research from major technology firms indicates that offensive uses of AI are expanding rapidly. Security teams have observed AI tools capable of bypassing established safeguards, automatically generating malicious scripts, and evading detection mechanisms with minimal human involvement. In some cases, AI-driven orchestration has been used to coordinate multiple malware components, allowing attackers to conduct reconnaissance, identify vulnerabilities, move laterally through networks, and extract sensitive data at machine speed. These automated operations can unfold faster than manual security workflows can reasonably respond. 

What distinguishes these attacks from earlier generations is not the underlying techniques, but the scale and efficiency at which they can be executed. Credential abuse, for example, is not new, but AI enables attackers to harvest and exploit credentials across large environments with only minimal input. Research published in mid-2025 highlighted dozens of ways autonomous AI agents could be deployed against enterprise systems, effectively expanding the attack surface beyond conventional trust boundaries and security assumptions. 

This evolving threat landscape has reinforced the relevance of zero trust principles, which assume no user, device, or connection should be trusted by default. However, zero trust alone is not sufficient. Security operations teams must also be able to detect abnormal behavior regardless of where it originates, especially as AI-driven attacks increasingly rely on legitimate tools and system processes to hide in plain sight. 

As a result, organizations are placing renewed emphasis on network detection and response technologies. Unlike legacy defenses that depend heavily on known signatures or manual investigation, modern NDR platforms continuously analyze network traffic to identify suspicious patterns and anomalous behavior in real time. This visibility allows security teams to spot rapid reconnaissance activity, unusual data movement, or unexpected protocol usage that may signal AI-assisted attacks. 

NDR systems also help security teams understand broader trends across enterprise and cloud environments. By comparing current activity against historical baselines, these tools can highlight deviations that would otherwise go unnoticed, such as sudden changes in encrypted traffic levels or new outbound connections from systems that rarely communicate externally. Capturing and storing this data enables deeper forensic analysis and supports long-term threat hunting. 
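The baseline comparison described above can be sketched in a few lines. The host names, the outbound-bytes metric, and the three-sigma threshold below are illustrative assumptions, not details of any specific NDR product:

```python
from statistics import mean, stdev

def flag_anomalies(baseline, current, threshold=3.0):
    """Flag hosts whose current outbound byte count deviates from their
    historical baseline by more than `threshold` standard deviations.

    `baseline` maps host -> list of past daily byte totals;
    `current` maps host -> today's total.
    """
    flagged = []
    for host, history in baseline.items():
        if len(history) < 2:
            continue  # too little history to establish a baseline
        mu, sigma = mean(history), stdev(history)
        today = current.get(host, 0)
        if sigma == 0:
            # a perfectly flat baseline: any change at all is notable
            if today != mu:
                flagged.append(host)
        elif abs(today - mu) / sigma > threshold:
            flagged.append(host)
    return flagged
```

A host that normally sends ~100 KB a day and suddenly sends 5 MB would be flagged, while ordinary day-to-day variation passes silently; real platforms use far richer features (protocols, peers, timing), but the principle is the same.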

Crucially, NDR platforms use automation and behavioral analysis to classify activity as benign, suspicious, or malicious, reducing alert fatigue for security analysts. Even when traffic is encrypted, network-level context can reveal patterns consistent with abuse. As attackers increasingly rely on AI to mask their movements, the ability to rapidly triage and respond becomes essential.  

By delivering comprehensive network visibility and faster response capabilities, NDR solutions help organizations reduce risk, limit the impact of breaches, and prepare for a future where AI-driven threats continue to evolve.

VPN Surge: Americans Bypass Age Verification Laws


Americans are increasingly seeking out VPNs as states enact stringent age verification laws that limit what minors can see online. These regulations compel users to provide personal information, such as government-issued IDs, to verify their age, raising concerns about privacy and security. As a result, VPN usage is skyrocketing, particularly in states such as Missouri, Florida, Louisiana, and Utah, where VPN searches have roughly quadrupled following the new regulations.

How age verification laws work 

Age verification laws require websites and apps that contain a substantial amount of "material harmful to minors" to verify users' ages before granting access. This step frequently entails submitting photographs or scans of ID documents, potentially exposing personal information to breaches. Even though the laws forbid companies from storing this information, there is no assurance it will be kept secure, especially given the record of massive data breaches at big tech firms.

The vague definition of "harmful content" suggests that age verification could be required for many other types of digital platforms, such as social media, streaming services, and video games. That expansion raises questions about digital privacy and identity protection for all users, minors included. According to a recent Pew Research Center finding, 40% of Americans say government regulation of business does more harm than good, illustrating bipartisan wariness of these laws.

Bypassing restrictions with VPNs 

VPN services enable users to mask their IP addresses and circumvent these age verification policies, allowing them to remain anonymous and keep their sensitive information protected. Some VPNs are available on desktop and mobile devices, and some can be used on platforms such as the Amazon Fire TV Stick. To maximize privacy and security, experts suggest opting for VPN providers with robust no-logs policies and strong encryption.

Higher VPN adoption has fueled speculation over whether US lawmakers will attempt to ban VPNs outright, which would be yet another blow to digital privacy and freedom. For now, VPNs remain a popular option for Americans who want to keep their online activity hidden from intrusive age verification schemes.

US DoJ Charges 54 Linked to ATM Jackpotting Scheme Using Ploutus Malware, Tied to Tren de Aragua


The U.S. Department of Justice (DoJ) has revealed the indictment of 54 people for their alleged roles in a sophisticated, multi-million-dollar ATM jackpotting operation that targeted machines across the United States.

According to authorities, the operation involved the use of Ploutus malware to compromise automated teller machines and force them to dispense cash illegally. Investigators say the accused individuals are connected to Tren de Aragua (TdA), a Venezuelan criminal group that the U.S. State Department has classified as a foreign terrorist organization.

The DoJ noted that in July 2025, the U.S. government imposed sanctions on TdA’s leader, Hector Rusthenford Guerrero Flores, also known as “Niño Guerrero,” along with five senior members. They were sanctioned for alleged involvement in crimes including “illicit drug trade, human smuggling and trafficking, extortion, sexual exploitation of women and children, and money laundering, among other criminal activities.”

An indictment returned on December 9, 2025, charged 22 individuals with offenses such as bank fraud, burglary, and money laundering. Prosecutors allege that TdA used ATM jackpotting attacks to steal millions of dollars in the U.S. and distribute the proceeds among its network.

In a separate but related case, another 32 defendants were charged under an indictment filed on October 21, 2025. These charges include “one count of conspiracy to commit bank fraud, one count of conspiracy to commit bank burglary and computer fraud, 18 counts of bank fraud, 18 counts of bank burglary, and 18 counts of damage to computers.”

If found guilty, the defendants could face sentences ranging from 20 years to as much as 335 years in prison.

“These defendants employed methodical surveillance and burglary techniques to install malware into ATM machines, and then steal and launder money from the machines, in part to fund terrorism and the other far-reaching criminal activities of TDA, a designated Foreign Terrorist Organization,” said Acting Assistant Attorney General Matthew R. Galeotti of the Justice Department’s Criminal Division.

Officials explained that the scheme relied on recruiting individuals to physically access ATMs nationwide. These recruits reportedly carried out reconnaissance to study security measures, tested whether alarms were triggered, and then accessed the machines’ internal components.

Once access was obtained, the attackers allegedly installed Ploutus either by swapping the ATM’s hard drive with a preloaded one or by using removable media such as a USB drive. The malware can send unauthorized commands to the ATM’s Cash Dispensing Module, causing it to release money on demand.

“The Ploutus malware was also designed to delete evidence of malware in an effort to conceal, create a false impression, mislead, or otherwise deceive employees of the banks and credit unions from learning about the deployment of the malware on the ATM,” the DoJ said. “Members of the conspiracy would then split the proceeds in predetermined portions.”

Ploutus first surfaced in Mexico in 2013. Security firms later documented its evolution, including its exploitation of vulnerabilities in Windows XP-based ATMs and its ability to control Diebold machines running multiple Windows versions.

“Once deployed to an ATM, Ploutus-D makes it possible for a money mule to obtain thousands of dollars in minutes,” researchers noted. “A money mule must have a master key to open the top portion of the ATM (or be able to pick it), a physical keyboard to connect to the machine, and an activation code (provided by the boss in charge of the operation) in order to dispense money from the ATM.”

The DoJ estimates that since 2021, at least 1,529 jackpotting incidents have occurred in the U.S., resulting in losses of approximately $40.73 million as of August 2025.

“Many millions of dollars were drained from ATM machines across the United States as a result of this conspiracy, and that money is alleged to have gone to Tren de Aragua leaders to fund their terrorist activities and purposes,” said U.S. Attorney Lesley Woods.

£1.8bn BritCard: A Security Investment Against UK Fraud


The UK has debated national ID for years, but the discussion has become more pointed alongside growing privacy concerns. Two decades ago Tony Blair could sing the praises of ID cards without triggering public hysteria about data held by government; today Keir Starmer’s digital ID proposal – initially focused on proving a right to work – meets a distinctly more sceptical audience.

That scepticism has been turbocharged by a single figure: the projected £1.8bn cost laid out in the Autumn Budget. Yet the obsession with the initial cost may blind people to the greater scandal: the cost of inaction. Fraud already takes a mind-boggling toll on the UK economy – put at over £200bn a year by recent estimates – while clunky, paper-based ID systems hobble everything from renting a home to accessing services. That friction isn’t just annoying; it feeds a broader productivity problem by compelling organisations to waste time and money verifying the same individuals, time and again.

Viewed in that context, £1.8bn should be considered an investment in security, not a political luxury. The greater risk is not that the government overspends, but that it underspends – or rushes – and winds up with a brittle system that becomes a national embarrassment. A cut-price BritCard deployment that ends in a breach would cost multiples of the original outlay and cause irreparable damage to public trust. If the state wants citizens to adopt a new layer of identity, it must prove that the system is reliable as well as restrained.

The good news is that the core design can, in principle, support both goals. BritCard is akin to a digital version of a physical ID card, contained within a secure, government-issued wallet. Most importantly, the core identity data would stay on the user’s device, enabling people to prove certain attributes – like being over 18 – without revealing personal details such as a date of birth or passport number. This “share only what is necessary” model is a practical answer to privacy concerns, as it is designed to limit the amount of sensitive information that is routinely disclosed.
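The selective-disclosure idea above can be illustrated with a toy sketch. This is not BritCard's actual design (real systems use cryptographic credentials, not plain code); the class and method names are hypothetical, and it only shows the principle that the verifier receives a yes/no answer rather than the underlying data:

```python
from datetime import date

class IdentityWallet:
    """Hypothetical on-device wallet: the full identity record stays
    local, and a verifier receives only the answer to a yes/no
    predicate, never the underlying attributes."""

    def __init__(self, date_of_birth, passport_number):
        self._dob = date_of_birth          # never leaves the device
        self._passport = passport_number   # never leaves the device

    def prove_at_least(self, years, today=None):
        """Return a bare boolean ("holder is at least `years` old")
        instead of disclosing the date of birth itself.
        (Leap-day edge cases are ignored in this sketch.)"""
        today = today or date.today()
        cutoff = date(today.year - years, today.month, today.day)
        return self._dob <= cutoff
```

A relying party calling `prove_at_least(18)` learns one bit of information; in a production design that bit would additionally be signed so the verifier can trust it without seeing the source data.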

However, none of this eliminates risk. Critics will reasonably worry about any central verification component becoming a lucrative “honeypot.” That is why transparency is non-negotiable: the government should publish how data is stored, accessed and shared, what protections exist, and how citizens opt in and control disclosure.

Amazon and Microsoft AI Investments Put India at a Crossroads


Major technology companies Amazon and Microsoft have announced combined investments exceeding $50 billion in India, placing artificial intelligence firmly at the center of global attention on the country’s technology ambitions. Microsoft chief executive Satya Nadella revealed the company’s largest-ever investment in Asia, committing $17.5 billion to support infrastructure development, workforce skills, and what he described as India’s transition toward an AI-first economy. Shortly after, Amazon said it plans to invest more than $35 billion in India by 2030, with part of that funding expected to strengthen its artificial intelligence capabilities in the country. 

These announcements arrive at a time of heightened debate around artificial intelligence valuations globally. As concerns about a potential AI-driven market bubble have grown, some financial institutions have taken a contrarian view on India’s position. Analysts at Jefferies described Indian equities as a “reverse AI trade,” suggesting the market could outperform if global enthusiasm for AI weakens. HSBC has echoed similar views, arguing that Indian stocks offer diversification for investors wary of overheated technology markets elsewhere. This perspective has gained traction as Indian equities have underperformed regional peers over the past year, while foreign capital has flowed heavily into AI-centric companies in South Korea and Taiwan. 

Against this backdrop, the scale of Amazon and Microsoft’s commitments offers a significant boost to confidence. However, questions remain about how competitive India truly is in the global AI race. Adoption of artificial intelligence across the country has accelerated, with increasing investment in data centers and early movement toward domestic chip manufacturing. A recent collaboration between Intel and Tata Electronics to produce semiconductors locally reflects growing momentum in strengthening AI infrastructure. 

Despite these advances, India continues to lag behind global leaders when it comes to building sovereign AI models. The government launched a national AI mission aimed at supporting researchers and startups with high-performance computing resources to develop a large multilingual model. While officials say a sovereign model supporting more than 22 languages is close to launch, global competitors such as OpenAI and China-based firms have continued to release more advanced systems in the interim. India’s public investment in this effort remains modest when compared with the far larger AI spending programs seen in countries like France and Saudi Arabia. 

Structural challenges also persist. Limited access to advanced semiconductors, fragmented data ecosystems, and insufficient long-term research investment constrain progress. Although India has a higher-than-average concentration of AI-skilled professionals, retaining top talent remains difficult as global mobility draws developers overseas. Experts argue that policy incentives will be critical if India hopes to convert its talent advantage into sustained leadership. 

Even so, international studies suggest India performs strongly relative to its economic stage. The country ranks among the top five globally for new AI startups receiving investment and contributes a significant share of global AI research publications. While funding volumes remain far below those of the United States and China, experts believe India’s advantage may lie in applying AI to real-world problems rather than competing directly in foundational model development. 

AI-driven applications addressing agriculture, education, and healthcare are already gaining traction, demonstrating the technology’s potential impact at scale. At the same time, analysts warn that artificial intelligence could disrupt India’s IT services sector, a long-standing engine of economic growth. Slowing hiring, wage pressure, and weaker stock performance indicate that this transition is already underway, underscoring both the opportunity and the risk embedded in India’s AI future.

OpenAI Warns Future AI Models Could Increase Cybersecurity Risks and Defenses


OpenAI has warned that future generations of large language models could reach a level where they pose a serious risk to cybersecurity. In a blog post, the company acknowledged that powerful AI systems could eventually be used to craft sophisticated cyberattacks, such as developing previously unknown software vulnerabilities or aiding stealthy cyber-espionage operations against well-defended targets. Although this remains theoretical, OpenAI underlined that the pace of AI cyber-capability improvements demands proactive preparation.

The same advances that could make future models attractive for malicious use, according to the company, also offer significant opportunities to strengthen cyber defense. OpenAI said progress in reasoning, code analysis, and automation could significantly enhance security teams' ability to identify weaknesses, audit complex software systems, and remediate vulnerabilities more effectively. Rather than framing the issue as a threat alone, the company cast it as a dual-use challenge, one that must be managed through safeguards and responsible deployment.

As it develops such advanced AI systems, OpenAI says it is investing heavily in defensive cybersecurity applications. This includes improving model performance on tasks such as secure code review, vulnerability discovery, and patch validation. The company also cited efforts to build tooling that helps defenders run critical workflows at scale, notably in environments where manual processes are slow or resource-intensive.

OpenAI identified several technical strategies it considers critical to mitigating the cyber risks that come with increasingly capable AI systems: stronger access controls to restrict who can use sensitive features, hardened infrastructure to prevent abuse, outbound data controls to reduce the risk of information leakage, and continuous monitoring to detect anomalous behavior. Together, these measures aim to reduce the likelihood that advanced capabilities are leveraged for harmful purposes.

The company also announced the forthcoming launch of a program offering tiered access to additional cybersecurity-related AI capabilities. It is intended to ensure that researchers, enterprises, and security professionals working on legitimate defensive use cases have access to more advanced tooling, while placing appropriate restrictions on higher-risk functionality. OpenAI did not discuss specific timelines but said more details would follow soon.

OpenAI also announced that it would create a Frontier Risk Council comprising renowned cybersecurity experts and industry practitioners. Its initial mandate is to assess the cyber-related risks of frontier AI models, though its scope is expected to expand. Members will advise on where the line should fall between responsible capability development and potential misuse, and their input will inform future safeguards and evaluation frameworks.

OpenAI also emphasized that the risks of AI-enabled cyber misuse are not confined to any single company or platform. Any sufficiently capable model, across the industry, could be misused in the absence of proper controls. To that end, OpenAI said it continues to collaborate with peers through initiatives such as the Frontier Model Forum, sharing threat-modeling insights and best practices.

By recognizing how AI capabilities could be weaponized and where the points of intervention may lie, the company believes, the industry will go a long way toward balancing innovation and security as AI systems continue to evolve.

Fix SOC Blind Spots: Real-Time Industry & Country Threat Visibility


Modern SOCs are grappling with a massive visibility problem, essentially driving through fog with their headlights dimming rapidly. The playbook for many teams still looks backward: analysts wait for an alert to fire, investigate the incident, and then try to respond.

While understandable given the high volume of noise and alert fatigue, this reactive posture leaves the organization exposed. It creates a structural blind spot: teams cannot observe threat actors preparing attacks, cannot anticipate campaigns aimed at their own sector, and cannot adapt their defenses until after an attack has launched.

Operational costs of delay 

Remaining in a reactive state imposes severe penalties on security teams in terms of time, budget, and risk profile. 

  • Investigation latency: Without broader context, analysts are forced to research every suspicious object from scratch, significantly slowing down response times.
  • Resource drain: Teams often waste cycles chasing false positives or threats that are irrelevant to their geography or vertical because they lack the intelligence to filter them out.
  • Increased breach risk: Attackers frequently reuse infrastructure and target specific industries; failing to spot these patterns early hands the advantage to the adversary. 

According to security analysts, the only way out is to transition from the current reactive SOC model to a proactive SOC model powered by Threat Intelligence (TI). Tools like the ANY.RUN Threat Intelligence Lookup serve as a "tactical magnifying glass," converting raw data into operational assets. TI helps the SOC understand which threats are currently present in its environment and which alerts must be escalated immediately.

Rise of hybrid threats 

One of the major drivers of this imperative is the accelerating pace of change in attack infrastructure, specifically hybrid threats. Recent investigations have brought the combined use of multiple attack kits to the fore, with researchers observing kits such as Tycoon 2FA and Salty merging into a single kill chain. In these scenarios, one kit may handle the initial lure and reverse proxy, while another manages session hijacking. These combinations effectively break existing detection rules and confuse traditional defense strategies.

To address this challenge, IT professionals need real-time visibility into behavioral patterns and attack logic, rather than focusing only on signatures. Proactive protection grounded in industry and geographic context lets SOC managers understand the threats that matter to them and predict attacks rather than merely react to them.

Critical FreePBX Vulnerabilities Expose Authentication Bypass and Remote Code Execution Risks


Researchers at Horizon3.ai have uncovered several security vulnerabilities in FreePBX, an open-source private branch exchange platform. Among them is a critical flaw that could be exploited to bypass authentication when very specific configurations are enabled. The issues were disclosed privately to FreePBX maintainers in mid-September 2025, and the researchers have raised concerns about the exposure of internet-facing PBX deployments.

According to Horizon3.ai's analysis, the disclosed vulnerabilities affect several FreePBX core components and can be exploited by an attacker to achieve unauthorized access, manipulate databases, upload malicious files, and ultimately execute arbitrary commands. One of the most critical findings involves an authentication bypass weakness that, under specific conditions, could grant attackers access to the FreePBX Administrator Control Panel without valid credentials. This vulnerability manifests itself when the system's authorization mechanism is configured to trust the web server rather than FreePBX's own user management.

Although the authentication bypass is not active in the default FreePBX configuration, it becomes exploitable when several advanced settings are enabled. Once these are in place, an attacker can craft HTTP requests containing forged authorization headers to gain administrative access. Researchers pointed out that such access can be used to add malicious users to internal database tables, effectively maintaining control of the device. The behavior closely resembles a previously disclosed FreePBX vulnerability that was actively exploited in early 2025.
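The misconfiguration class behind this bypass, trusting a header the application expects an upstream component to set, can be sketched generically. The header name and helper functions below are illustrative, not FreePBX's actual code:

```python
def is_admin_insecure(headers):
    """Anti-pattern: authorization decided by a header the reverse
    proxy or web server is supposed to set. If the application is
    reachable without that component, or the header is not stripped
    from client requests, an attacker can simply forge it."""
    return headers.get("X-Authenticated-User") == "admin"

def is_admin_safe(session_store, session_id):
    """Safer sketch: authorization derived from the application's own
    session and user management, not from request headers."""
    session = session_store.get(session_id)
    return bool(session) and session.get("role") == "admin"
```

In the insecure variant, a request with a hand-written `X-Authenticated-User: admin` header passes the check; in the safer variant, access requires a session the application itself issued, which is why FreePBX's guidance steers administrators toward its own user manager authentication.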

Besides the authentication bypass, Horizon3.ai found multiple SQL injection bugs affecting different endpoints within the platform. These bugs allow authenticated attackers to read from and write to the underlying database by modifying request parameters, potentially leaking call records, credentials, and system configuration data. The researchers also discovered an arbitrary file upload bug that an attacker holding a valid session identifier can exploit to upload a PHP-based web shell and execute commands on the underlying server.
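The SQL injection pattern described here, a request parameter spliced directly into a query, is easy to demonstrate in miniature. The schema and table below are hypothetical; the snippet illustrates the general class of bug and its standard fix, not FreePBX's actual code:

```python
import sqlite3

# Toy in-memory database standing in for the PBX's backing store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def lookup_vulnerable(name):
    # Anti-pattern: the request parameter is concatenated into SQL,
    # so name = "' OR '1'='1" rewrites the WHERE clause entirely.
    return conn.execute(
        f"SELECT name, role FROM users WHERE name = '{name}'"
    ).fetchall()

def lookup_safe(name):
    # Parameterized query: input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()
```

With the payload `' OR '1'='1`, the vulnerable version dumps every row while the parameterized version matches nothing, which is exactly the property that keeps injected parameters from reaching call records or credential tables.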

This can be used to extract sensitive system files or establish deeper persistence. Horizon3.ai noted that the vulnerabilities are fairly low-complexity to exploit and may enable remote code execution by both authenticated and unauthenticated attackers, depending on which endpoint is exposed and how the system is configured. PBX systems are an attractive target because they are frequently exposed to the internet and often deeply integrated into critical communications infrastructure. The FreePBX project has made patches available across supported versions, rolling them out incrementally between October and December 2025.

In light of the findings, the project also disabled the ability to configure authentication providers through the web interface, requiring administrators to change this setting through command-line tools. Temporary mitigation guidance encouraged impacted users to transition to the user manager authentication method, limit overrides in advanced settings, and reboot affected systems to terminate potentially unauthorized sessions. Researchers and FreePBX maintainers have urged administrators to check their environments for compromise, especially where the vulnerable authentication configuration was enabled.

Several vulnerable code paths remain, but they are now gated behind additional authentication layers. Security experts underscored that, whenever possible, legacy authentication mechanisms should be avoided because they offer weaker protection against exploitation. The incident is a reminder of the importance of secure configuration practices, especially for systems that play a critical role in organizational communications.