
FBI Discovers 630 Million Stolen Passwords in Major Cybercrime Investigation


A newly disclosed trove of stolen credentials has underscored the scale of modern cybercrime after U.S. federal investigators uncovered hundreds of millions of compromised passwords on devices seized from a single suspected hacker. The dataset, comprising approximately 630 million passwords, has now been integrated into the widely used Have I Been Pwned (HIBP) database, significantly expanding its ability to warn users about exposed credentials. 

The passwords were provided to HIBP by the Federal Bureau of Investigation as part of ongoing cybercrime investigations. According to Troy Hunt, the security researcher behind the service, this latest contribution is particularly striking because it originates from one individual rather than a large breach aggregation. While the FBI has shared compromised credentials with HIBP for several years, the sheer volume associated with this case highlights how centralized and extensive credential theft operations have become. 

Initial analysis suggests the data was collected from a mixture of underground sources, including dark web marketplaces, messaging platforms such as Telegram, and large-scale infostealer malware campaigns. Not all of the passwords were previously unknown, but a meaningful portion had never appeared in public breach repositories. Roughly 7.4% of the dataset represents newly identified compromised passwords, amounting to tens of millions of credentials that were previously undetectable by users relying on breach-monitoring tools. 

Security experts warn that even recycled or older passwords remain highly valuable to attackers. Stolen credentials are frequently reused in credential-stuffing attacks, where automated tools attempt the same password across multiple platforms. Because many users continue to reuse passwords, a single exposed credential can provide access to multiple accounts, amplifying the potential impact of historical data leaks. 

The expanded dataset is now searchable through the Pwned Passwords service, which allows users to check whether a password has appeared in known breach collections. The system is designed to preserve privacy by hashing submitted passwords and ensuring no personally identifiable information is stored or associated with search results. This enables individuals and organizations to proactively block compromised passwords without exposing sensitive data. 
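For developers, the k-anonymity design is straightforward to use directly. The sketch below is a minimal Python client for the documented Pwned Passwords range endpoint: it sends only the first five characters of a password's SHA-1 hash and compares the returned suffixes locally, so the full hash never leaves the machine.

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in Pwned Passwords (0 if never)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # k-anonymity: only the first five hash characters are sent to the service.
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(pwned_count("password123"))  # prints a large count; this password is badly burned
```

An organization can run the same check at password-creation time to reject any candidate that already appears in known breach corpora.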

The discovery has renewed calls for stronger credential hygiene across both consumer and enterprise environments. Cybersecurity professionals consistently emphasize that password reuse and weak password creation remain among the most common contributors to account compromise. Password managers are widely recommended as an effective countermeasure, as they allow users to generate and store long, unique passwords for every service without relying on memory. 

In addition to password managers, broader adoption of passkeys and multi-factor authentication is increasingly viewed as essential. These technologies significantly reduce reliance on static passwords and make stolen credential databases far less useful to attackers. Many platforms now support these features, yet adoption remains inconsistent. 
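To see why a second factor blunts stolen-credential attacks, consider time-based one-time passwords: even a leaked password is useless without a short-lived code derived from a separate per-user secret. The following is a minimal sketch of the standard TOTP derivation (RFC 6238) in Python, shown with a well-known demo secret rather than a real provisioning flow.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)  # 30-second window
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                    # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Demo secret common in TOTP examples; real secrets are provisioned per user.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds, a credential dump alone, however large, is not enough to take over an account protected this way.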

As law enforcement continues to uncover massive credential repositories during cybercrime investigations, experts caution that similar discoveries are likely in the future. Each new dataset reinforces the importance of assuming passwords will eventually be exposed and building defenses accordingly. Regular password audits, automated breach detection, and layered authentication controls are now considered baseline requirements for maintaining digital security.

Trend Micro Warns: 'Vibe Crime' Ushers in Agentic AI-Driven Cybercrime Era


Cybersecurity firm Trend Micro has sounded the alarm over what it calls the rise of "vibe crime": fully automated cybercriminal operations powered by agentic AI, marking a fundamental turn away from traditional ransomware and phishing campaigns. The company's report forecasts a massive increase in attack volume as criminals take advantage of autonomous AI agents to run continuous, large-scale operations. 

From service to servant model 

The criminal ecosystem is evolving from "Cybercrime as a Service" to "Cybercrime as a Servant," where chained AI agents and autonomous orchestration layers manage end-to-end criminal enterprises. Robert McArdle, director of forward-looking threat research at Trend Micro, stressed that the real risk does not come from sudden explosive growth but rather from the gradual automation of attacks that previously required a lot of skill, time, and effort.

"We will see an optimization of today's leading attacks, the amplification of attacks that previously had poor ROI, and the emergence of brand new 'Black Swan' cybercrime business models," McArdle stated. 

Researchers expect enterprise cloud and AI infrastructure to be increasingly targeted in the future, as criminals use these platforms as sources of scalable computing power, AI, storage, and potentially valuable data to run their agentic infrastructures. This transformation is expected to bring with it new, previously unthinkable types of attacks, as well as to shake up the entire criminal ecosystem by introducing new revenue streams and business models.

Industry-wide alarm bells 

Trend Micro's alert echoes other warnings about an “agentic” AI threat in cyberspace. In September, Anthropic acknowledged that its AI tools had been “weaponized” by hackers: criminals employed Claude Code to automate reconnaissance, gather credentials, and breach networks at 17 organizations in healthcare, emergency services, and government.

In a similar vein, the 2025 State of Malware report from Malwarebytes warned that agentic AI would “continue to dramatically change cyber criminal tactics” and accelerate development of even more dangerous malware. The researchers further stressed that defensive platforms must deploy their own autonomous agents and orchestrators to counter this evolution or face being overwhelmed. Organizations need to reassess security strategies immediately and invest in AI-driven defense before criminals industrialize their AI capabilities, or risk falling behind in an exponential arms race.

Network Detection and Response Defends Against AI-Powered Cyber Attacks


Cybersecurity teams are facing growing pressure as attackers increasingly adopt artificial intelligence to accelerate, scale, and conceal malicious activity. Modern threat actors are no longer limited to static malware or simple intrusion techniques. Instead, AI-powered campaigns are using adaptive methods that blend into legitimate system behavior, making detection significantly more difficult and forcing defenders to rethink traditional security strategies. 

Threat intelligence research from major technology firms indicates that offensive uses of AI are expanding rapidly. Security teams have observed AI tools capable of bypassing established safeguards, automatically generating malicious scripts, and evading detection mechanisms with minimal human involvement. In some cases, AI-driven orchestration has been used to coordinate multiple malware components, allowing attackers to conduct reconnaissance, identify vulnerabilities, move laterally through networks, and extract sensitive data at machine speed. These automated operations can unfold faster than manual security workflows can reasonably respond. 

What distinguishes these attacks from earlier generations is not the underlying techniques, but the scale and efficiency at which they can be executed. Credential abuse, for example, is not new, but AI enables attackers to harvest and exploit credentials across large environments with only minimal input. Research published in mid-2025 highlighted dozens of ways autonomous AI agents could be deployed against enterprise systems, effectively expanding the attack surface beyond conventional trust boundaries and security assumptions. 

This evolving threat landscape has reinforced the relevance of zero trust principles, which assume no user, device, or connection should be trusted by default. However, zero trust alone is not sufficient. Security operations teams must also be able to detect abnormal behavior regardless of where it originates, especially as AI-driven attacks increasingly rely on legitimate tools and system processes to hide in plain sight. 

As a result, organizations are placing renewed emphasis on network detection and response technologies. Unlike legacy defenses that depend heavily on known signatures or manual investigation, modern NDR platforms continuously analyze network traffic to identify suspicious patterns and anomalous behavior in real time. This visibility allows security teams to spot rapid reconnaissance activity, unusual data movement, or unexpected protocol usage that may signal AI-assisted attacks. 

NDR systems also help security teams understand broader trends across enterprise and cloud environments. By comparing current activity against historical baselines, these tools can highlight deviations that would otherwise go unnoticed, such as sudden changes in encrypted traffic levels or new outbound connections from systems that rarely communicate externally. Capturing and storing this data enables deeper forensic analysis and supports long-term threat hunting. 
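In miniature, that baseline comparison can be as simple as scoring today's per-host traffic against its recent history. The Python sketch below is purely illustrative, using hypothetical per-host byte counts rather than any particular NDR product's data model, but it captures the core idea of flagging statistical deviation from a learned baseline.

```python
from statistics import mean, stdev

def flag_outbound_anomalies(history, current, threshold=3.0):
    """Return (host, bytes_today, z_score) for hosts far above their baseline.

    history: {host: list of past daily outbound byte counts}
    current: {host: today's outbound byte count}
    """
    alerts = []
    for host, today in current.items():
        past = history.get(host, [])
        if len(past) < 7:                  # too little history to form a baseline
            continue
        mu, sigma = mean(past), stdev(past)
        z = (today - mu) / (sigma or 1.0)  # guard against a perfectly flat baseline
        if z > threshold:
            alerts.append((host, today, round(z, 1)))
    return alerts

# A host that normally sends ~10 MB/day suddenly pushes out 500 MB.
history = {"10.0.0.5": [9_500_000, 10_250_000, 10_000_000, 10_100_000,
                        9_800_000, 10_300_000, 10_050_000]}
print(flag_outbound_anomalies(history, {"10.0.0.5": 500_000_000}))
```

Production NDR platforms apply far richer models across many signals at once, but the principle of learning what is normal and alerting on deviation is the same.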

Crucially, NDR platforms use automation and behavioral analysis to classify activity as benign, suspicious, or malicious, reducing alert fatigue for security analysts. Even when traffic is encrypted, network-level context can reveal patterns consistent with abuse. As attackers increasingly rely on AI to mask their movements, the ability to rapidly triage and respond becomes essential.  

By delivering comprehensive network visibility and faster response capabilities, NDR solutions help organizations reduce risk, limit the impact of breaches, and prepare for a future where AI-driven threats continue to evolve.

VPN Surge: Americans Bypass Age Verification Laws


Americans are increasingly seeking out VPNs as states enact stringent age verification laws that limit what minors can see online. These regulations compel users to provide personal information, such as government-issued IDs, to verify their age, raising concerns about privacy and security. As a result, VPN usage is skyrocketing: in states such as Missouri, Florida, Louisiana, and Utah, VPN searches have jumped roughly fourfold following the new regulations. 

How age verification laws work 

Age verification laws require websites and apps that contain a substantial amount of "material harmful to minors" to verify users' age prior to access. This step frequently entails submitting photographs or scans of ID documents, potentially exposing personal information to breaches. Even though the laws forbid companies from storing this information, there is no assurance it will be kept secure, given the record of massive data breaches at major technology firms. 

The vague definition of "harmful content" suggests that age verification could eventually be required on many other types of digital platforms, such as social media, streaming services, and video games. That expansion would raise questions about digital privacy and identity protection for all users, not just minors. According to a recent Pew Research Center finding, 40% of Americans say government regulation of business does more harm than good, illustrating bipartisan wariness of these laws. 

Bypassing restrictions with VPNs 

VPN services enable users to mask their IP addresses and circumvent these age verification policies, allowing them to maintain their anonymity and have their sensitive information protected. Some VPNs are available on desktop and mobile devices, and some can be used on Amazon Fire TV Stick, among other platforms. To maximize privacy and security, experts suggest opting for VPN providers with robust no-logs policies and strong encryption.

Higher VPN adoption has fueled speculation about whether US lawmakers will attempt to ban VPNs outright, which would be yet another blow to digital privacy and freedom. For now, VPNs remain a popular option for Americans who want to keep their online activity hidden from intrusive age verification schemes.

US DoJ Charges 54 Linked to ATM Jackpotting Scheme Using Ploutus Malware, Tied to Tren de Aragua


The U.S. Department of Justice (DoJ) has revealed the indictment of 54 people for their alleged roles in a sophisticated, multi-million-dollar ATM jackpotting operation that targeted machines across the United States.

According to authorities, the operation involved the use of Ploutus malware to compromise automated teller machines and force them to dispense cash illegally. Investigators say the accused individuals are connected to Tren de Aragua (TdA), a Venezuelan criminal group that the U.S. State Department has classified as a foreign terrorist organization.

The DoJ noted that in July 2025, the U.S. government imposed sanctions on TdA’s leader, Hector Rusthenford Guerrero Flores, also known as “Niño Guerrero,” along with five senior members. They were sanctioned for alleged involvement in crimes including “illicit drug trade, human smuggling and trafficking, extortion, sexual exploitation of women and children, and money laundering, among other criminal activities.”

An indictment returned on December 9, 2025, charged 22 individuals with offenses such as bank fraud, burglary, and money laundering. Prosecutors allege that TdA used ATM jackpotting attacks to steal millions of dollars in the U.S. and distribute the proceeds among its network.

In a separate but related case, another 32 defendants were charged under an indictment filed on October 21, 2025. These charges include “one count of conspiracy to commit bank fraud, one count of conspiracy to commit bank burglary and computer fraud, 18 counts of bank fraud, 18 counts of bank burglary, and 18 counts of damage to computers.”

If found guilty, the defendants could face sentences ranging from 20 years to as much as 335 years in prison.

“These defendants employed methodical surveillance and burglary techniques to install malware into ATM machines, and then steal and launder money from the machines, in part to fund terrorism and the other far-reaching criminal activities of TDA, a designated Foreign Terrorist Organization,” said Acting Assistant Attorney General Matthew R. Galeotti of the Justice Department’s Criminal Division.

Officials explained that the scheme relied on recruiting individuals to physically access ATMs nationwide. These recruits reportedly carried out reconnaissance to study security measures, tested whether alarms were triggered, and then accessed the machines’ internal components.

Once access was obtained, the attackers allegedly installed Ploutus either by swapping the ATM’s hard drive with a preloaded one or by using removable media such as a USB drive. The malware can send unauthorized commands to the ATM’s Cash Dispensing Module, causing it to release money on demand.

“The Ploutus malware was also designed to delete evidence of malware in an effort to conceal, create a false impression, mislead, or otherwise deceive employees of the banks and credit unions from learning about the deployment of the malware on the ATM,” the DoJ said. “Members of the conspiracy would then split the proceeds in predetermined portions.”

Ploutus first surfaced in Mexico in 2013. Security firms later documented its evolution, including its exploitation of vulnerabilities in Windows XP-based ATMs and its ability to control Diebold machines running multiple Windows versions.

“Once deployed to an ATM, Ploutus-D makes it possible for a money mule to obtain thousands of dollars in minutes,” researchers noted. “A money mule must have a master key to open the top portion of the ATM (or be able to pick it), a physical keyboard to connect to the machine, and an activation code (provided by the boss in charge of the operation) in order to dispense money from the ATM.”

The DoJ estimates that since 2021, at least 1,529 jackpotting incidents have occurred in the U.S., resulting in losses of approximately $40.73 million as of August 2025.

“Many millions of dollars were drained from ATM machines across the United States as a result of this conspiracy, and that money is alleged to have gone to Tren de Aragua leaders to fund their terrorist activities and purposes,” said U.S. Attorney Lesley Woods.

£1.8bn BritCard: A Security Investment Against UK Fraud


The UK has debated national ID for years, but the discussion has become more pointed alongside growing privacy concerns. Two decades ago, Tony Blair could sing the praises of ID cards without provoking public hysteria about data held by government; today, Keir Starmer’s digital ID proposal – initially focused on proving a right to work – meets a distinctly more sceptical audience.

That scepticism has been turbocharged by a single figure: the projected £1.8bn cost laid out in the Autumn Budget. Yet the obsession with the initial cost may blind people to the greater scandal: the cost of inaction. Fraud already takes a mind-boggling toll on the UK economy – weighed in at over £200bn a year by recent estimates – while clunky, paper-based ID systems hobble everything from renting a home to accessing services. That friction isn’t just annoying; it feeds a broader productivity problem by compelling organizations to waste time and money verifying the same individuals, time and again.

Viewed in that context, £1.8bn should be considered an investment in security, not a political luxury. The greater risk is not that the government overspends, but that it underspends, or rushes, and winds up with a brittle system that becomes a national embarrassment. A cut-price BritCard deployment that ends in a breach would cost multiples of the original outlay and do irreparable damage to public trust. If the state wants citizens to adopt a new layer of identity, it must prove that the system is reliable as well as restrained.

The good news is that the core design can, in principle, support both goals. BritCard is akin to a digital version of a physical ID card, contained within a secure, government-issued wallet. Most importantly, the core identity data would stay on the user’s device, enabling people to prove certain attributes, such as being over 18, without revealing personal details like a date of birth or passport number. This “share only what is necessary” model is a practical answer to privacy concerns, since it is designed to limit the amount of sensitive information that is routinely disclosed.
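To make the idea concrete, here is a heavily simplified selective-disclosure sketch in Python (using the third-party cryptography package). All names in it are invented for illustration; a real deployment would follow standards such as W3C Verifiable Credentials or the ISO 18013-5 mobile-ID model rather than this toy scheme.

```python
# Illustration only: the issuer signs a minimal claim, the wallet presents just
# that claim, and the verifier learns nothing beyond the disclosed attribute.
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

# 1. Issuer (the state) signs an attribute claim rather than the full record.
issuer_key = ed25519.Ed25519PrivateKey.generate()
claim = json.dumps({"credential_id": "demo-123", "over_18": True},
                   sort_keys=True).encode()
signature = issuer_key.sign(claim)

# 2. The wallet keeps full identity data on-device and presents only the
#    signed claim; the underlying date of birth never leaves the phone.
presentation = {"claim": claim, "signature": signature}

# 3. The verifier (say, a retailer) checks the issuer's signature.
issuer_public = issuer_key.public_key()
issuer_public.verify(presentation["signature"], presentation["claim"])  # raises if forged
print(json.loads(presentation["claim"]))  # {'credential_id': 'demo-123', 'over_18': True}
```

The design choice worth noticing is that trust flows from the issuer's signature over a minimal claim, not from handing over the underlying document itself.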

However, none of this eliminates risk. Critics will reasonably worry about any central verification component becoming a lucrative “honeypot.” That is why transparency is non-negotiable: the government should publish how data is stored, accessed and shared, what protections exist, and how citizens opt in and control disclosure.

Amazon and Microsoft AI Investments Put India at a Crossroads

 

Major technology companies Amazon and Microsoft have announced combined investments exceeding $50 billion in India, placing artificial intelligence firmly at the center of global attention on the country’s technology ambitions. Microsoft chief executive Satya Nadella revealed the company’s largest-ever investment in Asia, committing $17.5 billion to support infrastructure development, workforce skills, and what he described as India’s transition toward an AI-first economy. Shortly after, Amazon said it plans to invest more than $35 billion in India by 2030, with part of that funding expected to strengthen its artificial intelligence capabilities in the country. 

These announcements arrive at a time of heightened debate around artificial intelligence valuations globally. As concerns about a potential AI-driven market bubble have grown, some financial institutions have taken a contrarian view on India’s position. Analysts at Jefferies described Indian equities as a “reverse AI trade,” suggesting the market could outperform if global enthusiasm for AI weakens. HSBC has echoed similar views, arguing that Indian stocks offer diversification for investors wary of overheated technology markets elsewhere. This perspective has gained traction as Indian equities have underperformed regional peers over the past year, while foreign capital has flowed heavily into AI-centric companies in South Korea and Taiwan. 

Against this backdrop, the scale of Amazon and Microsoft’s commitments offers a significant boost to confidence. However, questions remain about how competitive India truly is in the global AI race. Adoption of artificial intelligence across the country has accelerated, with increasing investment in data centers and early movement toward domestic chip manufacturing. A recent collaboration between Intel and Tata Electronics to produce semiconductors locally reflects growing momentum in strengthening AI infrastructure. 

Despite these advances, India continues to lag behind global leaders when it comes to building sovereign AI models. The government launched a national AI mission aimed at supporting researchers and startups with high-performance computing resources to develop a large multilingual model. While officials say a sovereign model supporting more than 22 languages is close to launch, global competitors such as OpenAI and China-based firms have continued to release more advanced systems in the interim. India’s public investment in this effort remains modest when compared with the far larger AI spending programs seen in countries like France and Saudi Arabia. 

Structural challenges also persist. Limited access to advanced semiconductors, fragmented data ecosystems, and insufficient long-term research investment constrain progress. Although India has a higher-than-average concentration of AI-skilled professionals, retaining top talent remains difficult as global mobility draws developers overseas. Experts argue that policy incentives will be critical if India hopes to convert its talent advantage into sustained leadership. 

Even so, international studies suggest India performs strongly relative to its economic stage. The country ranks among the top five globally for new AI startups receiving investment and contributes a significant share of global AI research publications. While funding volumes remain far below those of the United States and China, experts believe India’s advantage may lie in applying AI to real-world problems rather than competing directly in foundational model development. 

AI-driven applications addressing agriculture, education, and healthcare are already gaining traction, demonstrating the technology’s potential impact at scale. At the same time, analysts warn that artificial intelligence could disrupt India’s IT services sector, a long-standing engine of economic growth. Slowing hiring, wage pressure, and weaker stock performance indicate that this transition is already underway, underscoring both the opportunity and the risk embedded in India’s AI future.

OpenAI Warns Future AI Models Could Increase Cybersecurity Risks and Defenses


OpenAI has warned that future generations of large language models could reach a level where they pose a serious risk to cybersecurity. In a blog post, the company acknowledged that powerful AI systems could eventually be used to craft sophisticated cyberattacks, such as developing previously unknown software vulnerabilities or aiding stealthy cyber-espionage operations against well-defended targets. Although this remains theoretical, OpenAI underlined that the pace of AI cyber-capability improvement demands proactive preparation. 

The same advances that could make future models attractive for malicious use, according to the company, also offer significant opportunities to strengthen cyber defense. OpenAI said progress in reasoning, code analysis, and automation has the potential to significantly enhance security teams' ability to identify weaknesses, audit complex software systems, and remediate vulnerabilities more effectively. Rather than framing the issue as a threat alone, the company cast it as a dual-use challenge, one that requires careful management through safeguards and responsible deployment. 

As it develops these advanced AI systems, OpenAI says it is investing heavily in defensive cybersecurity applications. This includes improving model performance on tasks such as secure code review, vulnerability discovery, and patch validation. The company also pointed to its work on tooling that supports defenders in running critical workflows at scale, notably in environments where manual processes are slow or resource-intensive. 

OpenAI identified several technical strategies it considers critical to mitigating the cyber risk associated with increasingly capable AI systems: stronger access controls to restrict who can reach sensitive features, hardened infrastructure to prevent abuse, outbound data controls to reduce the risk of information leakage, and continuous monitoring to detect anomalous behavior. Together, these measures are aimed at reducing the likelihood that advanced capabilities could be leveraged for harmful purposes. 
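Of these strategies, outbound data controls are perhaps the easiest to picture: traffic leaving a sensitive environment is refused unless its destination is explicitly permitted. The Python sketch below illustrates that general idea only; the hostnames are hypothetical, and this is not OpenAI's actual implementation.

```python
from urllib.parse import urlparse

# Hypothetical egress policy: the only destinations an AI agent may contact.
ALLOWED_HOSTS = {"api.internal.example", "updates.example.com"}

def egress_allowed(url: str) -> bool:
    """Permit outbound requests only to explicitly allowlisted hosts."""
    return (urlparse(url).hostname or "") in ALLOWED_HOSTS

for url in ("https://api.internal.example/v1/scan",
            "https://exfil.attacker.example/upload"):
    print(url, "->", "allowed" if egress_allowed(url) else "blocked")
```

A default-deny posture like this means a compromised or misbehaving agent cannot quietly exfiltrate data to an unapproved endpoint.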

OpenAI also announced the forthcoming launch of a new program offering tiered access to additional cybersecurity-related AI capabilities. The program is intended to ensure that researchers, enterprises, and security professionals working on legitimate defensive use cases can access more advanced tooling, while placing appropriate restrictions on higher-risk functionality. OpenAI did not discuss specific timelines, although it promised that more details would be forthcoming soon. 

In addition, OpenAI will create a Frontier Risk Council comprising renowned cybersecurity experts and industry practitioners. Its initial mandate will be to assess the cyber-related risks of frontier AI models, though its scope is expected to expand in the near future. Members will advise on where the line should fall between developing capability responsibly and enabling misuse, and their input will inform future safeguards and evaluation frameworks. 

OpenAI also emphasized that the risks of AI-enabled cyber misuse are not confined to any single company or platform: any sufficiently capable model across the industry may be misused in the absence of proper controls. To that end, OpenAI said it continues to collaborate with peers through initiatives such as the Frontier Model Forum, sharing threat modeling insights and best practices. 

By recognizing how AI capabilities could be weaponized and where the points of intervention may lie, the company believes, the industry will go a long way toward balancing innovation and security as AI systems continue to evolve.