
Rhysida Claims Responsibility for November 2025 Ransomware Attack on Southold, New York

 

A ransomware gang known as Rhysida has claimed it was behind a cyberattack carried out in November 2025 against the local government of Southold, New York.

Town authorities first disclosed the incident on November 24, 2025, revealing that a ransomware attack had disrupted critical municipal services. Impacted systems included email communications, payroll processing, tax collection, permitting, and other essential operations. While most systems were restored within two weeks, some remained offline through mid-January.

On its data leak portal, Rhysida demanded a ransom payment of 10 bitcoin—valued at approximately $661,400 at the time of reporting. The group gave the town a seven-day deadline, threatening to auction the allegedly stolen data to other cybercriminal actors if the ransom was not paid. Southold Supervisor Al Krupski stated that the town does not plan to comply with the ransom demand.

Town officials have not confirmed Rhysida’s involvement, and independent verification of the gang’s claims has not been established. It remains unclear what specific data may have been compromised or how attackers gained access to the town’s network. Officials were contacted for further comment, and updates are expected if additional information becomes available.

Following the breach, the town allocated $500,000 toward cybersecurity enhancements.

“Please be advised that the Town of Southold is investigating a potential cyber incident affecting town servers, which affects our ability to communicate with residents via email,” read the town’s November 24 announcement. “During the course of this investigation, we regret to inform you that all town services will be limited.”

Rhysida emerged in May 2023 and operates a ransomware-as-a-service (RaaS) model. The group’s malware is capable of encrypting systems and exfiltrating sensitive data. Victims are typically pressured to pay for both a decryption key and assurances that stolen information will be deleted. Affiliates can lease Rhysida’s infrastructure to conduct attacks and share in ransom proceeds.

In 2025, the group claimed responsibility for 21 verified ransomware incidents and made an additional 70 unconfirmed claims. Several confirmed attacks targeted public-sector entities, including:
  • Oregon Department of Environmental Quality (April 2025 – $2.6 million ransom, unpaid)
  • Maryland Department of Transportation (August 2025 – $3.4 million ransom, unpaid)
  • Cleveland County Sheriff’s Office (November 2025 – $782,000 ransom)
  • Cheyenne and Arapaho Tribes (December 2025 – $682,000 ransom, unpaid)
So far in 2026, the group has claimed six additional breaches.

Security researchers documented 84 confirmed ransomware incidents targeting U.S. government entities in 2025, exposing roughly 639,000 personal records. The average ransom demand across these cases reached $987,000.

In 2026, confirmed government-sector victims include Midway, Florida; Winona County, Minnesota; New Britain, Connecticut; and Tulsa International Airport.

Ransomware attacks on public institutions often involve both data theft and system encryption, disrupting services such as bill payments, court records management, and emergency response operations. Governments that refuse to pay may face prolonged outages, data loss, and heightened risks of fraud for affected residents.

Southold is a town located on Long Island in New York, with a population of approximately 24,000 residents. It falls within Suffolk County, which experienced a significant ransomware incident in 2021 that exposed the personal data of around 470,000 residents and severely disrupted county services.

Rocket Software Research Highlights Data Security and AI Infrastructure Gaps in Enterprise IT Modernization

 

Stress is rising among IT decision-makers as organizations accelerate technology upgrades and introduce AI into hybrid infrastructure. Data security now leads modernization concerns, with nearly 70 percent identifying it as their primary pressure point. As transformation speeds up, safeguarding digital assets becomes more complex, especially as risks expand across both legacy systems and cloud environments. 

Aligning security improvements with system upgrades remains difficult. Close to seven in ten technology leaders rank data protection as their biggest modernization hurdle. Many rely on AI-based monitoring, stricter access controls, and stronger data governance frameworks to manage risk. However, confidence in these safeguards is limited. Fewer than one-third feel highly certain about passing upcoming regulatory audits. While 78 percent believe they can detect insider threats, only about a quarter express complete confidence in doing so. 

Hybrid IT environments add further strain. Just over half of respondents report difficulty integrating cloud platforms with on-premises infrastructure. Poor data quality emerges as the biggest obstacle to managing workloads effectively across these mixed systems. Secure data movement challenges affect half of those surveyed, while 52 percent cite access control issues and 46 percent point to inconsistent governance. Rising storage costs also weigh on 45 percent, slowing modernization and increasing operational risk. 

Workforce shortages compound these challenges. Nearly 48 percent of organizations continue to depend on legacy systems for critical operations, yet only 35 percent of IT leaders believe their teams have the necessary expertise to manage them effectively. Additionally, 52 percent struggle to recruit professionals skilled in older technologies, underscoring the need for reskilling to prevent operational vulnerabilities. 

AI remains a strategic priority, particularly in areas such as fraud detection, process optimization, and customer experience. Still, infrastructure readiness lags behind ambition. Only one-quarter of leaders feel fully confident their systems can support AI workloads. Meanwhile, 66 percent identify data accessibility as the most significant factor shaping future modernization plans. 

Looking ahead, organizations are prioritizing stronger data protection, closing infrastructure gaps to support AI, and improving data availability. Progress increasingly depends on integrated systems that securely connect applications and databases across hybrid environments. The findings are based on a survey conducted with 276 IT directors and vice presidents from companies with more than 1,000 employees across the United States, the United Kingdom, France, and Germany during October 2025.

Two AI Data Breaches Leak Over a Billion KYC Records


About the leaks

Cybersecurity researchers have discovered two significant data leaks connected to two AI-related apps, exposing the private information and media files of millions of users worldwide.

In two separate reports published by Cybernews and first covered by Forbes, the researchers cautioned that more than a billion records may have been exposed. The first leak has been attributed to an AI-powered Know Your Customer (KYC) system used by IDMerit, a digital identity verification company that offers real-time verification tools to the fintech and financial services industries.

Attack tactic 

When the researchers discovered the unprotected instance on November 11, 2025, they informed the company right away, and the database was quickly secured. The researchers said, "Automated crawlers set up by threat actors constantly prowl the web for exposed instances, downloading them almost instantly once they appear, even though there is currently no evidence of malicious misuse."

Leaked records

One billion private documents belonging to people in 26 different nations were compromised. With almost 203 million exposed records, the United States was the most affected, followed by Mexico (124 million) and the Philippines (72 million). Full names, addresses, postcodes, dates of birth, national IDs, phone numbers, genders, email addresses, and telecom information were among the "core personal identifiers used for your financial and digital life" that were made public.

According to researchers, downstream hazards associated with this data leak include account takeovers, targeted phishing, credit fraud, SIM swaps, and long-term privacy losses. The second leak is connected to the Android app "Video AI Art Generator & Maker," which has more than 500,000 downloads on Google Play and over 11,000 reviews with a 4.3-star rating. The app was found to be leaking user data through an improperly configured Google Cloud Storage bucket that allowed anyone to access stored files without authentication. According to researchers, the app exposed millions of media assets created by users with AI, as well as more than 1.5 million user photos and 385,000 videos.
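The misconfiguration class behind the second leak can be probed defensively. As a rough illustration (the bucket name and the classification labels here are hypothetical assumptions, not details from the report), the sketch below uses Google Cloud Storage's public JSON API object-listing endpoint to check whether a bucket's contents can be listed without credentials:

```python
import urllib.error
import urllib.request

# GCS JSON API object-listing endpoint; a 200 response without
# credentials indicates the bucket's objects are publicly listable.
LIST_URL = "https://storage.googleapis.com/storage/v1/b/{bucket}/o"

def listing_url(bucket: str) -> str:
    """Build the unauthenticated object-listing URL for a bucket."""
    return LIST_URL.format(bucket=bucket)

def check_bucket(bucket: str) -> str:
    """Probe a bucket anonymously and classify the response."""
    try:
        with urllib.request.urlopen(listing_url(bucket), timeout=10) as resp:
            return "public" if resp.status == 200 else "unknown"
    except urllib.error.HTTPError as err:
        # 401/403 mean authentication is enforced; 404 means no such bucket.
        if err.code in (401, 403):
            return "private"
        if err.code == 404:
            return "missing"
        return "unknown"
    except urllib.error.URLError:
        return "unreachable"
```

Running `check_bucket` against your own buckets is a quick sanity check; a "public" result on a bucket holding user uploads is exactly the condition the researchers describe.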

The app was created by Codeway Dijital Hizmetler Anonim Sirketi, a company registered in Turkey. Previously, the company's Chat & Ask AI app leaked around 300 million messages associated with over 25 million users.

Google Chrome Introduces Merkle Tree Certificates to Build Quantum-Resistant HTTPS

 

A new effort inside Google Chrome targets the long-term security of HTTPS connections against threats posed by quantum computers. Rather than admitting conventional X.509 certificates built on post-quantum algorithms into the Chrome Root Store, the team is pursuing an alternative design intended to preserve performance and scalability as the new protections roll out across the web.

The decision comes from Chrome’s Secure Web and Networking Team: conventional post-quantum X.509 certificates will not enter the root program for now. Instead, Google is collaborating with others on a different approach, Merkle Tree Certificates (MTCs). The work is progressing inside the PLANTS working group and could reshape how HTTPS verification functions in the future.

One way to look at MTCs, according to Cloudflare, is as an updated framework for how online trust systems operate today. Instead of relying on long chains of verification, the design aims to cut overhead: fewer keys and fewer signatures exchanged when devices connect securely. A key feature is that certification authorities sign just one root structure, known as a Tree Head, which stands in for vast batches of individual certificates. During a web visit, the user's browser receives a small cryptographic proof confirming that the site's credentials are included in that larger authenticated structure. Rather than pulling multiple files across the network, only minimal evidence travels each time.
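To make the inclusion-proof idea concrete, here is a minimal Python sketch of a Merkle tree. This illustrates the general technique, not the exact construction the MTC draft specifies: one root hash commits to many leaves, and the proof a browser needs is just the sibling hashes along a single path.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 digest used as the tree's hash function."""
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Hash the leaves and build each level up to the root."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels  # levels[-1][0] is the root ("Tree Head" analogue)

def inclusion_proof(levels, index):
    """Collect (sibling_hash, node_is_right_child) pairs along one path."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((level[index ^ 1], index % 2))
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from one leaf plus its sibling hashes."""
    node = h(leaf)
    for sibling, is_right in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root
```

The proof for one leaf among a million is only about 20 hashes, which is why a single signed root can stand in for a huge batch of certificates.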

This design also accommodates new quantum-resistant algorithms without much extra data on the wire. Certificates tend to grow considerably larger when post-quantum signature schemes are used. By decoupling security from certificate size, the compact proofs help maintain speed during secure browsing: with less information needed at connection start, performance stays high even under upgraded protection levels.

Testing of MTCs is already underway on real internet traffic, alongside a phased rollout schedule running through 2027. The opening stage focuses on validating feasibility in joint work with Cloudflare, observing how the design behaves in live TLS environments. By early 2027, operators of Certificate Transparency logs, provided they had at least one log accepted by Chrome prior to February 1, 2026, may join efforts to kickstart broader MTC availability. Around late 2027, rules are expected to be set for admitting CAs into Google's new quantum-safe root store, a system built exclusively to handle MTC certificates.

This shift sits at the core of Google's approach to future-proofing online security. Rather than wait, the team is rebuilding trust systems to handle both emerging risks and current efficiency needs. With updated certificates in place, stronger defenses can spread faster across services without sacrificing the performance users expect from their browsers today.

How a Single Brick Helped Homeland Security Rescue an Abused Child from the Dark Web

 

A years-long investigation by the US Department of Homeland Security led to the dramatic rescue of a young girl whose abuse images had been circulating on the dark web — with a crucial clue hidden in the background of a photograph.

Specialist online investigator Greg Squire had nearly exhausted all leads while trying to identify and locate a 12-year-old girl his team had named Lucy. Explicit images of her were being distributed through encrypted networks designed to conceal users’ identities. The perpetrator had taken deliberate steps to erase identifying features, carefully cropping and altering images to avoid detection.

Despite those efforts, investigators found that the answer was concealed in plain sight.

Squire, part of an elite Homeland Security Investigations unit focused on identifying children in sexual abuse material, became deeply invested in Lucy’s case early in his career. The case struck him personally — Lucy was close in age to his own daughter, and new images of her abuse continued to surface online.

Initially, the team determined only that Lucy was likely somewhere in North America, based on visible electrical outlets and fixtures in the room. Attempts to seek assistance from Facebook proved unsuccessful. Although the company had facial recognition technology, it stated it "did not have the tools" to help with the search.

Investigators then scrutinized every visible detail in Lucy’s bedroom — bedding patterns, toys, clothing, and furniture. A breakthrough came when they realized that a sofa appearing in some images had only been sold regionally rather than nationwide, reducing the potential customer base to roughly 40,000 buyers.

"At that point in the investigation, we're [still] looking at 29 states here in the US. I mean, you're talking about tens of thousands of addresses, and that's a very, very daunting task," says Squire.

Still searching for more clues, Squire turned his attention to an exposed brick wall visible in the background of several photos. He contacted the Brick Industry Association after researching brick manufacturers.

"And the woman on the phone was awesome. She was like, 'how can the brick industry help?'"

The association circulated the image among brick specialists nationwide. One expert, John Harp — a veteran in brick sales since 1981 — quickly identified the material.

"I noticed that the brick was a very pink-cast brick, and it had a little bit of a charcoal overlay on it. It was a modular eight-inch brick and it was square-edged," he says. "When I saw that, I knew exactly what the brick was," he adds.

Harp identified it as a "Flaming Alamo".

"[Our company] made that brick from the late 60s through about the middle part of the 80s, and I had sold millions of bricks from that plant."

Although sales records were not digitized and existed only as a "pile of notes", Harp shared a vital insight.

"He goes: 'Bricks are heavy.' And he said: 'So heavy bricks don't go very far.'"

That observation narrowed the search dramatically. Investigators filtered the sofa buyers list to those living within a 100-mile radius of the brick factory in the American Southwest.

From there, social media analysis uncovered a photograph of Lucy alongside an adult woman believed to be a relative. Tracking related addresses and household members eventually led authorities to a single residence.

Investigators discovered that Lucy lived there with her mother’s boyfriend — a convicted sex offender. Within hours, local Homeland Security agents arrested the man, who had abused Lucy for six years. He was later sentenced to more than 70 years in prison.

Harp, who has fostered over 150 children and adopted three, said the rescue resonated deeply with him.

"We've had over 150 different children in our home. We've adopted three. So, doing that over those years, we have a lot of children in our home that were [previously] abused," he said.

"What [Squire's team] do day in and day out, and what they see, is a magnification of hundreds of times of what I've seen or had to deal with."

The emotional toll of the work eventually affected Squire’s mental health. He admits that outside of work, "alcohol was a bigger part of my life than it should have been".

Reflecting on that period, he said:

"At that point my kids were a bit older… and, you know, that almost enables you to push harder. Like… 'I bet if I get up at three this morning, I can surprise [a perpetrator] online.'

"But meanwhile, personally… 'Who's Greg? I don't even know what he likes to do.' All of your friends… during the day, you know, they're criminals… All they do is talk about the most horrific things all day long."

After his marriage ended and he experienced suicidal thoughts, colleague Pete Manning urged him to seek help.

"It's hard when the thing that brings you so much energy and drive is also the thing that's slowly destroying you," Manning says.

Squire credits confronting his struggles openly as the turning point.

"I feel honoured to be part of the team that can make a difference instead of watching it on TV or hearing about it… I'd rather be right in there in the fight trying to stop it."

Years later, Squire met Lucy — now in her 20s — for the first time. She said healing and support have helped her speak openly about her past.

"I have more stability. I'm able to have the energy to talk to people [about the abuse], which I could not have done… even, like, a couple years ago."

She revealed that when authorities intervened, she had been "praying actively for it to end".

"Not to sound cliché, but it was a prayer answered."

Squire shared that he wished he could have reassured her during those years.

"You wish there was some telepathy and you could reach out and be like, 'listen, we're coming'."

When questioned about its earlier role, Facebook responded: "To protect user privacy, it's important that we follow the appropriate legal process, but we work to support law enforcement as much as we can."

Infostealer Malware Targets OpenClaw AI Agent Files to Steal API Keys and Authentication Tokens

 

Now appearing in threat reports, OpenClaw — a local AI assistant that runs directly on personal devices — has rapidly gained popularity. Because it operates on users’ machines, attackers are shifting focus to its configuration files. Recent malware infections have been caught stealing setup data containing API keys, login tokens, and other sensitive credentials, exposing private access points that were meant to remain local. 

Previously known as ClawdBot or MoltBot, OpenClaw functions as a persistent assistant that reads local files, logs into email and messaging apps, and interacts with web services. Since it stores memory and configuration details on the device itself, compromising it can expose deeply personal and professional data. As adoption grows across home and workplace environments, saved credentials are becoming attractive targets. 

Cybersecurity firm Hudson Rock identified what it believes is the first confirmed case of infostealer malware extracting OpenClaw configuration data. The incident marks a shift in tactics: instead of stealing only browser passwords, attackers are now targeting AI assistant environments that store powerful authentication tokens. According to co-founder and CTO Alon Gal, the infection likely involved a Vidar infostealer variant, with stolen data traced to February 13, 2026. 

Researchers say the malware did not specifically target OpenClaw. Instead, it scanned infected systems broadly for files containing keywords like “token” or “private key.” Because OpenClaw stores data in a hidden folder with those identifiers, its files were automatically captured. Among the compromised files, openclaw.json contained a masked email, workspace path, and a high-entropy gateway authentication token that could enable unauthorized access or API impersonation. 

The device.json file stored public and private encryption keys used for pairing and signing, meaning attackers with the private key could mimic the victim’s device and bypass security checks. Additional files such as soul.md, AGENTS.md, and MEMORY.md outlined the agent’s behavior and stored contextual data including logs, messages, and calendar entries. Hudson Rock concluded that the combination of stolen tokens, keys, and memory data could potentially allow near-total digital identity compromise.
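Defenders can hunt for the same kind of plaintext secrets before an infostealer does. A common heuristic, sketched below in Python (the length threshold, entropy cutoff, and character class are illustrative assumptions, not values from the Hudson Rock report), is to flag long unbroken token-like strings whose per-character Shannon entropy is high:

```python
import math
import re
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy of a string, in bits per character."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def flag_tokens(text: str, min_len: int = 20, threshold: float = 4.0):
    """Flag substrings that look like credentials: long unbroken runs
    of token characters with high per-character entropy."""
    pattern = rf"[A-Za-z0-9+/_\-=]{{{min_len},}}"
    candidates = re.findall(pattern, text)
    return [c for c in candidates if shannon_entropy(c) >= threshold]
```

Running a scan like this over configuration directories (the hidden folder OpenClaw uses, for example) surfaces the same high-entropy gateway tokens the malware harvested, so they can be moved into a secrets manager instead.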

Experts expect infostealers to increasingly target AI systems as they become embedded in professional workflows. Separately, Tenable disclosed a critical flaw in Nanobot, an AI assistant inspired by OpenClaw. The vulnerability, tracked as CVE-2026-2577, allowed remote hijacking of exposed instances but was patched in version 0.13.post7. 

Security professionals warn that as AI tools gain deeper access to personal and corporate systems, protecting configuration files is now as critical as safeguarding passwords. Hidden setup files can carry risks equal to — or greater than — stolen login credentials.

Influencers Alarmed as New AI Rules Enforce Three-Hour Takedowns

 

India’s new three-hour takedown rule for online content has triggered unease among influencers, agencies, and brands, who fear it could disrupt campaigns and shrink creative freedom.

The rule, introduced through amendments to the IT Intermediary Rules on February 11, slashes the takedown window from 36 hours to just three, with the stated goal of curbing unlawful and AI-generated deepfake content. Creators argue that while tackling deepfakes and harmful material is essential, such a compressed deadline leaves almost no room to contest wrongful flags or provide context, especially when automated moderation tools make mistakes. They warn that legitimate posts could be penalised simply because systems misread nuance, humour, or sensitive but educational topics.

Influencer Ekta Makhijani described the deadline as “incredibly tight,” noting that if a brand campaign video is misflagged, an entire launch window could be lost in hours rather than days. She highlighted how parenting content around breastfeeding or toddler behaviour has previously been misinterpreted by moderation tools, and said the shorter window magnifies the risk of such false positives. Apparel brand founder Akanksha Kommirelly added that small creators lack round-the-clock legal and compliance teams, making it unrealistic for them to respond to takedown notices at all times.

Experts also worry about a chilling effect on speech, especially satire, political commentary, and advocacy. With platforms facing tighter liability, agencies fear an “act first, verify later” culture in which companies remove anything remotely borderline to stay safe. Raj Mishra of Chtrbox warned that, in practice, the incentive becomes to take down flagged content immediately, which could hit investigative work or edgy creative pieces hardest. India’s linguistic diversity further complicates moderation, as systems trained mainly on English may misinterpret regional content.

Alongside takedowns, mandatory AI labelling is reshaping creator workflows and brand strategies. Kommirelly noted that prominent AI tags on visual campaigns may weaken brand recall, while Mishra cautioned that platforms could quietly de-prioritise AI-labelled content in algorithms, reducing reach regardless of audience acceptance. This dual pressure—strict timelines and AI disclosure—forces creators to rethink how they script, edit, and publish content.

Agencies like Kofluence and Chtrbox are responding by building compliance support systems for the creator economy. These include AI content guides, pre-upload checks, documentation protocols, legal support networks, and even insurance options to cover campaign disruptions. While most stakeholders accept that tougher rules are needed against deepfakes and abuse, they are urging the government to differentiate emergency takedowns for clearly illegal content from more contested speech so that speed does not entirely override fairness.

Botnet Moves to Blockchain, Evades Traditional Takedowns

 

A newly identified botnet loader is challenging long-standing methods used to dismantle cybercrime infrastructure. Security researchers have uncovered a tool known as Aeternum C2 that stores its command instructions on the Polygon blockchain rather than on traditional servers or domains.

For years, investigators have disrupted major botnets by seizing command and control servers or suspending malicious domains. Operations targeting networks such as Emotet, TrickBot, and QakBot relied heavily on this approach. 

Aeternum C2 appears designed to bypass that model entirely by embedding instructions inside smart contracts on Polygon, a public blockchain replicated across thousands of nodes worldwide. 

According to researchers at Qrator Labs, the loader is written in native C++ and distributed in both 32-bit and 64-bit builds. Instead of connecting to a centralized server, infected systems retrieve commands by reading transactions recorded on the blockchain through public remote procedure call (RPC) endpoints.

The seller claims that bots receive updates within two to three minutes of publication, offering relatively fast synchronization without peer-to-peer infrastructure. The malware is marketed on underground forums either as a lifetime licensed build or as full source code with ongoing updates. Operating costs are minimal.

Researchers observed that a small amount of MATIC, the Polygon network token, is sufficient to process a significant number of command transactions. With no need to rent servers or register domains, operators face fewer operational hurdles. 

Investigators also found that Aeternum includes anti-virtual-machine checks intended to avoid execution in sandboxed analysis environments. A bundled scanning feature reportedly measures detection rates across multiple antivirus engines, helping operators test payloads before deployment.

Because commands are stored on chain, they cannot be altered or removed without access to the controlling wallet. Even if infected devices are cleaned, the underlying smart contracts remain active, allowing operators to resume activity without rebuilding infrastructure. 

Researchers warn that this model could complicate takedown efforts and enable persistent campaigns involving distributed denial of service attacks, credential theft, and other abuse. 

As infrastructure seizures become less effective, defenders may need to focus more heavily on endpoint monitoring, behavioral detection, and careful oversight of outbound connections to blockchain related services.
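One simple form that outbound oversight can take is matching connection logs against known public blockchain RPC gateways. The watchlist entries and function below are illustrative assumptions, not indicators published by Qrator Labs; a real deployment would consume a maintained threat-intelligence feed:

```python
# Hypothetical watchlist of public blockchain RPC gateway domains.
RPC_WATCHLIST = {
    "polygon-rpc.com",
    "rpc.ankr.com",
    "cloudflare-eth.com",
}

def suspicious_destinations(connections):
    """Given (process, hostname) pairs from outbound-connection logs,
    return those whose hostname (or any parent domain) is watchlisted."""
    hits = []
    for process, host in connections:
        parts = host.lower().split(".")
        # Generate the host itself plus every parent domain suffix.
        domains = {".".join(parts[i:]) for i in range(len(parts))}
        if domains & RPC_WATCHLIST:
            hits.append((process, host))
    return hits
```

A browser reaching such a gateway is usually benign; the signal worth investigating is an unexpected system process polling one on a fixed interval, which matches the two-to-three-minute update cadence described above.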