
New Copilot Setting May Access Activity From Other Microsoft Services. Here’s How Users Can Disable It

 



A recently noticed configuration inside Microsoft Copilot may allow the AI tool to reference activity from several other Microsoft platforms, prompting renewed discussion around data privacy and AI personalization. The option, which appears within Copilot’s settings, enables the assistant to use information connected to services such as Bing, MSN, and the Microsoft Edge browser. Users who are uncomfortable with this level of integration can switch the feature off.

Like many modern artificial intelligence systems, Copilot attempts to improve the usefulness of its responses by understanding more about the person interacting with it. The assistant normally does this by remembering past conversations and storing certain details that users intentionally share during chats. These stored elements help the AI maintain context across multiple interactions and generate responses that feel more tailored.

However, a specific configuration called “Microsoft usage data” expands that capability. According to reporting first highlighted by the technology outlet Windows Latest, this setting allows Copilot to reference information associated with other Microsoft services a user has interacted with. The option appears within the assistant’s Memory controls and is available through both the Copilot website and its mobile applications. Observers believe the setting was introduced recently as part of Microsoft’s effort to strengthen personalization features in its AI tools.

The Memory feature in Copilot is designed to help the assistant retain useful context. Through this system, the AI can recall earlier conversations, remember instructions or factual information shared by users, and potentially reference certain account-linked activity from other Microsoft products. The idea is that by understanding more about a user’s interests or previous discussions, the assistant can provide more relevant answers.

In practice, such capabilities can be helpful. For instance, a user who discussed a topic with Copilot previously may want to continue that conversation later without repeating the entire background. Similarly, individuals seeking guidance about personal or professional matters may receive more relevant suggestions if the assistant has some awareness of their preferences or circumstances.

Despite the convenience, the feature also raises questions about privacy. Some users may be concerned that allowing an AI assistant to accumulate information from multiple services could expose more personal data than expected. Others may want to know how that information is used beyond personalizing conversations.

Microsoft addresses these concerns in its official Copilot documentation. In its frequently asked questions section, the company states that user conversations are processed only for limited purposes described in its privacy policies. According to Microsoft, this information may be used to evaluate Copilot’s performance, troubleshoot operational issues, identify software bugs, prevent misuse of the service, and improve the overall quality of the product.

The company also says that conversations are not used to train AI models by default. Model training is controlled through a separate configuration, which users can choose to disable if they do not want their interactions contributing to AI development.

Microsoft further clarifies that Copilot’s personalization settings do not determine whether a user receives targeted advertisements. Advertising preferences are managed through a different option available in the Microsoft account privacy dashboard. Users who want to stop personalized advertising must adjust the Personalized ads and offers setting separately.

Even with these explanations, privacy concerns remain understandable, particularly because Microsoft's documentation indicates that Copilot's personalization features may already be activated automatically in some cases. A review of the settings on a personal device found these options switched on by default. Users who prefer not to allow Copilot to access broader usage data may therefore wish to disable them.

Checking these settings is straightforward. Users can open Copilot through its website or mobile application and ensure they are signed in with their Microsoft account. On the web interface, selecting the account name at the bottom of the left-hand panel opens the Settings menu, where the Memory section can be accessed. In the mobile application, the same controls are available through the side navigation menu by tapping the account name and choosing Memory.

Inside the Memory settings, users will see a general control labeled “Personalization and memory.” Two additional options appear beneath it: “Facts you’ve shared,” which stores information provided directly during conversations, and “Microsoft usage data,” which allows Copilot to reference activity from other Microsoft services.

To limit this behavior, users can switch off the Microsoft usage data toggle. They may also disable the broader Personalization and memory option if they prefer that the AI assistant does not retain contextual information about their interactions. Copilot also provides a “Delete all memory” function that removes all stored data from the system. If individual personal details have been recorded, they can be reviewed and deleted through the editing option next to “Facts you’ve shared.”

Security and privacy experts generally advise caution when sharing information with AI assistants, even when personalization features remain enabled. Sensitive or confidential details should not be entered into conversations. Microsoft itself recommends avoiding the disclosure of certain types of highly personal data, including information related to health conditions or sexual orientation.

The broader development reflects a growing trend in the technology industry. As AI assistants become integrated across multiple platforms and services, companies are increasingly using cross-service data to make these tools more helpful and personalized. While this approach can improve convenience and usability, it also underscores the need for transparent privacy controls so users remain aware of how their information is being used and can adjust those settings when necessary.

Optimizely Reports Data Breach Linked to Sophisticated Vishing Incident


 

Sitting at a crossroads of technology, marketing intelligence, and vast stores of corporate data, digital experience platforms are becoming increasingly attractive targets for cybercriminals seeking an entry point into enterprise infrastructure.

Optimizely recently revealed that a security incident began not with sophisticated malware but with a carefully orchestrated social engineering scheme. Attackers linked to the threat group ShinyHunters used a voice-phishing tactic in February 2026 to deceive a company employee and gain unauthorized access to parts of the company's internal environment.

Investigators determined that the attackers were able to extract limited business contact information from internal resources even though the intrusion was contained before it could reach sensitive customer databases or critical operational systems. 

The episode shows that even mature technology companies remain vulnerable to manipulation-based attacks that bypass technical defenses and target the human layer of security.

Optimizely, a leading provider of digital experience infrastructure, develops tools that assist organizations in managing web properties, conducting marketing experiments, and refining online customer journeys based on data. 

Among its many capabilities are A/B experimentation frameworks, enterprise-grade content management systems, and integrated ecommerce tools that are designed to assist businesses in improving conversion performance and audience engagement across a variety of digital channels. 

Over 10,000 organizations worldwide use the company's technology stack, including H&M, PayPal, Toyota, Nike, and Salesforce, among others. A number of customers have recently received notifications detailing this incident. According to the company, the attackers gained access through what it described as a "sophisticated voice-phishing attack" on February 11. 

The internal investigation indicates that although the threat actors penetrated a limited segment of the corporate environment, the intrusion did not result in privilege escalation, and no malicious payloads or malware were deployed within the network.

As a result, the breach remained constrained to a narrow scope, supporting the company's assessment that the attackers' access was limited and did not reach sensitive customer or operational data. Researchers have attributed the intrusion to ShinyHunters, a financially motivated threat actor collective involved in cybercrime since at least 2020.

The group is well known for orchestrating high-visibility data theft operations and subsequently distributing or monetizing compromised databases through dark web forums and underground marketplaces. Much of its campaign activity has targeted technology and telecommunications organizations, where internal access to corporate databases and partner information is especially valuable.

According to analysts, the group has demonstrated a high degree of flexibility in its intrusion techniques, combining credential-based attacks, such as credential stuffing, with increasingly persuasive social engineering, such as voice-based deception schemes, to achieve its objectives.

Although the precise geographical origins of the actors remain unknown, their operational footprint spans multiple regions, reflecting a focus on monetizing corporate information or using stolen data to exert reputational and financial pressure on targeted organizations. In the present case, organizations connected to the affected environment appear to have had only basic business contact information exposed, not sensitive customer information.

Cybersecurity specialists caution, however, that even seemingly routine information can provide a foothold for follow-on attacks. By using contact directories, email addresses, and professional identifiers, attackers may be able to craft convincing phishing emails or conduct additional social engineering attempts in order to gather credentials or financial information. 

This type of data can also facilitate spam operations and fraudulent outreach that impersonates trusted partners or internal employees. As a precaution, security experts recommend that employees and partners treat unexpected communications with suspicion, independently verify the legitimacy of telephone calls or email requests, and maintain multi-factor authentication on all corporate accounts.

A proactive approach to security hygiene and open communication with affected stakeholders are widely regarded as essential to minimizing the operational and reputational impact of incidents of this nature.

Optimizely did not disclose the exact number of customers whose information may have been exposed; however, it indicated in its breach notification that the activity closely resembles that of a loosely connected network of attackers known for persistent social engineering campaigns.

According to the firm, during the incident, communications were received that reflected patterns commonly associated with groups that utilize voice phishing to manipulate employees into providing access to corporate systems. 

The operational style described is commonly attributed to ShinyHunters, which has been linked to a recent series of breaches affecting major online platforms and consumer brands, including Canada Goose, Panera Bread, Betterment, SoundCloud, Pornhub, Figure, and Match Group, which operates Tinder, Hinge, Meetic, Match.com, and OkCupid.

Not every incident has been tied to a single coordinated campaign, but numerous victims have reported successful intrusions stemming from voice-phishing operations designed to compromise enterprise single sign-on environments.

It has been reported that attackers have impersonated internal IT support staff and contacted employees directly, leading them to counterfeit authentication portals that mimic legitimate corporate logins. These interactions led to the attackers bypassing standard access controls by obtaining account credentials and one-time multi-factor authentication codes from victims. These techniques have also been observed to evolve, with threat actors using device-code phishing methods to obtain authentication tokens tied to enterprise identity services by exploiting the legitimate OAuth device authorization flow. 
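The device-code phishing mentioned above abuses the standard OAuth 2.0 device authorization grant (RFC 8628). The toy model below (server behavior, endpoint names, and account names are all illustrative, not any real identity provider's API) shows the core weakness the attackers exploit: the token is issued to whoever holds the device code, and the server cannot tell whether that party is the one the victim meant to approve.

```python
import secrets

class DeviceAuthServer:
    """Toy model of the OAuth 2.0 device authorization grant (RFC 8628).
    Real identity providers add client authentication, code expiry,
    rate limiting, and consent screens; this sketch keeps only the flow."""

    def __init__(self):
        self.pending = {}  # device_code -> {"user_code": ..., "approved_by": None}

    def device_authorization(self, client_id: str) -> dict:
        # Step 1: the *client* (possibly an attacker's session) requests codes.
        device_code = secrets.token_urlsafe(16)
        user_code = f"{secrets.randbelow(1000):03d}-{secrets.randbelow(1000):03d}"
        self.pending[device_code] = {"user_code": user_code, "approved_by": None}
        return {"device_code": device_code, "user_code": user_code,
                "verification_uri": "https://login.example.com/device"}

    def user_approves(self, user_code: str, username: str) -> None:
        # Step 2: a *user* signs in on the genuine login page and types the code.
        for entry in self.pending.values():
            if entry["user_code"] == user_code:
                entry["approved_by"] = username
                return
        raise ValueError("unknown user_code")

    def token(self, device_code: str) -> dict:
        # Step 3: whoever holds device_code polls for the token. The server
        # cannot distinguish the victim's own device from an attacker's.
        entry = self.pending.get(device_code)
        if entry is None or entry["approved_by"] is None:
            return {"error": "authorization_pending"}
        return {"access_token": secrets.token_urlsafe(24),
                "acting_as": entry["approved_by"]}

# An attacker initiates the flow, then tricks the victim into entering the code.
srv = DeviceAuthServer()
grant = srv.device_authorization(client_id="legit-enterprise-app")
srv.user_approves(grant["user_code"], username="victim@corp.example")
tok = srv.token(grant["device_code"])  # the attacker now holds the victim's token
```

Because every message the victim sees comes from the legitimate login page, the usual "check the URL" advice does not help here; defenses have to restrict which clients may use the device-code grant at all.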

Once a single sign-on account has been compromised, attackers can pivot among integrated corporate applications and cloud-based platforms using compromised employee accounts. The same access may be extended to enterprise tools such as Microsoft Entra ID, Microsoft 365, Google Workspace, Salesforce, Zendesk, Dropbox, SAP, Slack, Adobe, and Atlassian, enabling an intruder to move laterally across connected services and collect additional corporate information once an initial foothold has been established. 

Ultimately, this incident serves as a reminder that technical safeguards alone are rarely sufficient to stop determined social engineering campaigns. Attackers routinely exploit human trust and routine operational processes to breach even organizations with mature security architectures.

Security professionals advise that identity-verification procedures be strengthened for internal support interactions, that voice-based fraud be discussed regularly with employees, and that strong monitoring be implemented around single sign-on activity and unusual authentication requests.

Measures such as conditional access policies, strictly enforced multi-factor authentication, and rapid incident response protocols can greatly reduce an attacker's room to maneuver after an initial attempt succeeds.

The development of voice-driven deception tactics is continuing to prompt companies across the technology sector to prioritize social engineering resilience as a core component of enterprise cybersecurity strategy, rather than as a peripheral issue.

LexisNexis Confirms Data Breach After Hackers Exploit Unpatched React App

 

A breach at LexisNexis Legal & Professional exposed some customer and business data, the firm confirmed. News surfaced after FulcrumSec claimed responsibility and leaked about two gigabytes of files on underground platforms. Hackers accessed parts of the company’s systems, though the breach scope was limited. The American analytics provider confirmed the incident days later, stating only a small portion of its infrastructure was affected. 

The company said an outside actor gained access to a limited number of servers. LexisNexis Legal & Professional provides legal research, regulatory information, and analytics tools to lawyers, corporations, government agencies, and universities in more than 150 countries. According to the firm, most of the accessed information came from older systems and was not considered sensitive, which reduced the potential impact.  

Internal findings showed that much of the exposed data originated from legacy systems storing information created before 2020. Records included customer names, user IDs, and business contact details. Some files contained product usage information and logs from past support tickets, including IP addresses from survey responses. However, sensitive personal identifiers such as Social Security numbers or driver’s license data were not included. Financial information, active passwords, search queries, and confidential client case data were also not part of the compromised dataset. 

The breach reportedly occurred around February 24 after attackers exploited the React2Shell vulnerability in an outdated front-end application built with React. The flaw allowed entry into cloud resources hosted on Amazon Web Services before it was addressed. 

While LexisNexis described the affected systems as containing mostly obsolete data, FulcrumSec claimed the intrusion was broader. The group said it extracted about 2.04GB of structured data from the company’s cloud infrastructure, including numerous database tables, millions of records, and internal system configurations. According to the attacker, the breach exposed more than 21,000 customer accounts and information linked to over 400,000 cloud user profiles, including names, email addresses, phone numbers, and job roles. 

Some of the records reportedly belonged to individuals with .gov email addresses, including U.S. government employees, federal judges and law clerks, Department of Justice attorneys, and staff connected to the Securities and Exchange Commission. FulcrumSec also criticized the company’s cloud security setup, alleging that a single ECS task role had access to numerous stored secrets, including credentials linked to production databases. The group said it attempted to contact the company but claimed no cooperation occurred. 

LexisNexis stated that the breach has been contained and confirmed that its products and customer-facing services were not affected. The company notified law enforcement and engaged external cybersecurity experts to assist with investigation and response. Customers, both current and former, have also been informed about the incident. The company had disclosed another breach last year after a compromised corporate account exposed data belonging to roughly 364,000 customers. 

The latest case highlights how vulnerabilities in cloud applications and outdated software can expose enterprise systems even when they contain primarily legacy information.

University of Hawaiʻi Cancer Center Suffers Data Breach from Ransomware Attacks


A ransomware attack on the University of Hawaii Cancer Center's epidemiology division last year resulted in information leaks for up to 1.2 million people. 

About the incident

According to a statement issued by the organization last week, hackers gained access to documents that included 1998 voter registration records from the City and County of Honolulu, as well as Social Security numbers (SSNs) and driver's license numbers gathered from the Hawaiʻi State Department of Transportation. 

Much of the exposed material traced back to the Multiethnic Cohort (MEC) Study, begun in 1993. The institution recruited study participants using voter registration information and driver's license numbers. Health information was included in some of the files that were exposed.

Leaked information

Files related to three other epidemiological studies of diet and cancer were taken, along with data on MEC Study participants. The hack is still being investigated to determine whether further sensitive data was obtained. According to the university, "additional individuals whose personal information may have been included in the historical driver's license and voter registration records with SSN identifiers number approximately 1.15 million." 

A total of 87,493 study participants had their information taken. The intrusion was initially discovered on August 31, 2025, according to a report the university submitted to the state legislature in January.

Attack discovery

The stolen data was found in a subset of research files on specific servers supporting the epidemiological research activities of the University of Hawaii Cancer Center. The University of Hawaii Cancer Center's clinical trials activities, patient care, and other divisions were unaffected by the ransomware attack. The University of Hawaii Cancer Center's director, Naoto Ueno, expressed regret for the incident last week and stated that the organization was "committed to transparency." 

According to the institution, it hired cybersecurity specialists and notified law enforcement after the attackers encrypted, and likely stole, data. The cybersecurity firm obtained a decryption tool and "an affirmation that any information obtained was destroyed."

Three universities, seven community colleges, one employment training center, and numerous research institutions dispersed over six islands make up the University of Hawaii system. About 50,000 students are served by it.

Microsoft Copilot Bug Exposes Confidential Outlook Emails

A critical bug in Microsoft 365 Copilot, tracked as CW1226324, allowed the AI assistant to access and summarize confidential emails in Outlook's Sent Items and Drafts folders, bypassing sensitivity labels and Data Loss Prevention (DLP) policies. Microsoft first detected the issue on January 21, 2026, with exposure lasting from late January until early to mid-February 2026. This flaw affected enterprise users worldwide, including organizations like the UK's NHS, despite protections meant to block AI from processing sensitive data.

The vulnerability stemmed from a code error that ignored confidentiality labels on user-authored emails stored in desktop Outlook. When users queried Copilot Chat, it retrieved and summarized content from these folders, potentially including business contracts, legal documents, police investigations, and health records. Importantly, the bug did not grant unauthorized access; summaries only appeared to users already permitted to view the mailbox. However, feeding such data into a large language model raised fears of unintended processing or incorporation into training data.
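The failed control here was essentially a filter that should have excluded labeled items before Copilot could read them. A minimal sketch of such a gate, with label names and the `Email` shape invented for illustration (this is not the Microsoft Purview API), shows the check the bug effectively skipped for Sent Items and Drafts:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical label names, loosely modeled on common sensitivity taxonomies.
BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

@dataclass
class Email:
    subject: str
    body: str
    sensitivity_label: Optional[str] = None  # None means unlabeled

def copilot_safe_corpus(emails: list) -> list:
    """Return only the items an AI assistant should be allowed to read.
    Labeled mail is dropped before any content reaches the model."""
    return [m for m in emails if m.sensitivity_label not in BLOCKED_LABELS]

mailbox = [
    Email("Lunch?", "Sandwiches at noon"),
    Email("Draft contract", "Terms attached", sensitivity_label="Confidential"),
    Email("Patient referral", "See notes", sensitivity_label="Highly Confidential"),
]
visible = copilot_safe_corpus(mailbox)  # only the unlabeled message survives
```

The design point is that the filter must run on every storage location the assistant can index; per the reporting, the bug was precisely a location (Sent Items and Drafts) where an equivalent check was not applied.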

Microsoft swiftly responded by deploying a global configuration update in early February 2026, restoring proper exclusion of protected content from Copilot. The company continues monitoring the rollout and contacting affected customers for verification, though no full remediation timeline or user impact numbers have been disclosed. As of late February, the patch was in place for most enterprise accounts, tagged as a limited-scope advisory.

This incident underscores persistent AI privacy risks in enterprise tools, marking the second Copilot-related email exposure in eight months—the prior EchoLeak involved prompt injection attacks. It highlights how even brief bugs can erode trust in AI assistants handling confidential workflows. Security experts urge organizations to audit DLP configurations and monitor AI behaviors closely.

For Microsoft 365 users, especially in high-stakes sectors like healthcare and finance, the event emphasizes the need for robust sensitivity labeling and regular Copilot audits. While fixed, expanded DLP enforcement across storage locations won't complete until late April 2026. Businesses should prioritize data governance to mitigate future AI flaws, ensuring productivity doesn't compromise security.

Korean Tax Agency Leaks Seed Phrase, Loses $4.8M in Crypto

 

South Korea's National Tax Service (NTS) turned a major tax evasion crackdown into a $4.8 million cryptocurrency catastrophe by accidentally exposing a seized wallet's seed phrase in a public press release. Hackers drained 4 million Pre-Retogeum (PRTG) tokens from the Ledger hardware wallet within hours of the February 26, 2026, announcement. This blunder exposed profound gaps in government handling of digital assets. 

The NTS raided 124 wealthy tax dodgers, confiscating crypto worth 8.1 billion won ($5.6 million total). Their celebratory photos showed the Ledger device next to an unredacted handwritten 24-word mnemonic—the master key granting full wallet access anywhere, without needing the physical hardware or passwords. By failing to blur this critical information, officials broadcast the equivalent of a bank vault combination nationwide. 
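A 24-word BIP-39 mnemonic encodes the wallet's entire key material, which is why photographing it is equivalent to handing over the funds. A back-of-envelope calculation (standard BIP-39 arithmetic, nothing specific to this incident) shows the search space that publishing the phrase collapses from astronomically large to exactly one:

```python
import math

# BIP-39: each word indexes a 2048-word list, so one word carries 11 bits.
WORDLIST_SIZE = 2048
WORDS = 24

bits_per_word = int(math.log2(WORDLIST_SIZE))  # 11 bits per word
total_bits = bits_per_word * WORDS             # 264 encoded bits in 24 words
checksum_bits = total_bits // 33               # BIP-39 reserves 1 bit in 33 as checksum
entropy_bits = total_bits - checksum_bits      # 256 bits of real key entropy

# Guessing a 256-bit secret is computationally infeasible; once the phrase
# is public, an attacker needs zero guesses and no access to the hardware.
```

This is also why hardware wallets like the seized Ledger offer no protection once the mnemonic leaks: the phrase alone can regenerate every private key on any device.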

On-chain sleuthing confirmed the rapid heist: an attacker added Ethereum for gas fees, then siphoned the PRTG in three transactions to new addresses. Blockchain experts, including Hansung University's Professor Cho Jae-woo, slammed the NTS for crypto illiteracy, comparing it to "leaving a safe wide open for public plunder." Local reports noted subsequent chaos—one hacker allegedly returned funds, only for another to steal them again, pushing losses toward 6.9 billion won. 

In response, the NTS yanked the images, issued a full apology admitting fault for "careless vividness," and called in police for a cyber probe. Deputy PM Koo Yun-cheol announced multi-agency reviews by the Financial Services Commission to overhaul seizure protocols. This follows prior embarrassments, like police losing 22 BTC ($1.5 million) in a 2021 custody failure.

The incident underscores seed phrases' immense power in crypto security—irreversible access that demands ironclad protection. Governments worldwide must adopt air-gapped storage, expert audits, and redaction training for digital seizures. For users: etch seeds on metal, store offline, never snap photos. Such lapses risk taxpayer funds in the exploding crypto enforcement era.

Madison Square Garden Notifies Victims of SSN Data Breach

 



The Madison Square Garden Family of Companies has disclosed that it recently alerted an undisclosed number of individuals about a cybersecurity incident that occurred in August 2025. The company confirmed that the exposed information includes names and Social Security numbers.

According to MSG’s notification letter, attackers exploited a previously unknown vulnerability in Oracle’s E-Business Suite, an enterprise software platform widely used for finance, human resources, and back-office operations. The affected system was hosted and managed by an unnamed third-party vendor, indicating the intrusion occurred through an externally maintained environment rather than MSG’s core internal network.

Oracle informed customers that an undisclosed condition in the application had been abused by an unauthorized party to obtain access to stored data. MSG stated that its investigation, completed in late November 2025, determined that unauthorized access had taken place in August 2025. The gap between compromise and confirmation reflects a common pattern in zero-day attacks, where flaws are exploited before vendors are aware of their existence or able to issue patches.

In November 2025, the ransomware group known as Clop, also stylized as Cl0p, publicly claimed responsibility for the breach. During the same period, the group carried out a broader campaign targeting hundreds of organizations by leveraging the same Oracle vulnerability. MSG has not acknowledged Clop’s claim, and independent verification of the group’s involvement has not been established. The company has not disclosed how many people were notified, whether a ransom demand was made, or whether any payment occurred. A request for further comment remains pending.

MSG is offering eligible individuals one year of complimentary credit monitoring through TransUnion. Affected recipients have 90 days from receiving the notice letter to enroll.

Clop first appeared in 2019 and has become known for exploiting zero-day flaws in enterprise software. Beyond Oracle’s E-Business Suite, the group has targeted Cleo file transfer software and, more recently, vulnerabilities in Gladinet CentreStack file servers. Unlike traditional ransomware operators that focus primarily on encrypting systems, Clop frequently prioritizes data theft. The group exfiltrates information and then threatens to publish or sell it if payment is not made.

In 2025, Clop claimed responsibility for 456 ransomware incidents. Of those, 31 targeted organizations publicly confirmed resulting data breaches, collectively exposing approximately 3.75 million personal records. Institutions reportedly affected by the Oracle zero-day campaign include Harvard University, GlobalLogic, SATO Corporation, and Dartmouth College.

So far in 2026, Clop has claimed another 123 victims, including the French labor union CFDT. Its most recent operations reportedly leverage a newer vulnerability in Gladinet CentreStack servers.

Ransomware activity across the United States remains extensive. In 2025, researchers recorded 646 confirmed ransomware attacks against U.S. organizations, along with 3,193 additional unverified claims made by ransomware groups. Confirmed incidents resulted in nearly 42 million exposed records. One of the largest cases linked to Clop involved exploitation of the Oracle vulnerability at the University of Phoenix, which later notified 3.5 million individuals. In 2026 to date, 17 confirmed attacks and 624 unconfirmed claims are under review.

Other incidents disclosed this week include a December 2024 breach affecting the City of Carthage, Texas, reportedly claimed by Rhysida; a March 2025 breach at Hennessy Advisors impacting 12,643 individuals and attributed to LockBit; an August 2025 breach at KCI Telecommunications linked to Akira; and a December 2025 incident at The Lewis Bear Company affecting 555 individuals and also claimed by Akira.

Ransomware attacks can both disable systems through encryption and involve large-scale data theft. In Clop’s case, data exfiltration appears to be the primary tactic. Organizations that refuse to meet ransom demands may face public disclosure of stolen data, extended operational disruption, and increased fraud risks for affected individuals.

The Madison Square Garden Family of Companies includes Madison Square Garden Sports Corp., Madison Square Garden Entertainment Corp., and Sphere Entertainment Co. The group owns and operates major venues such as Madison Square Garden, Radio City Music Hall, and the Las Vegas Sphere.



How Poorly Secured Endpoints Are Expanding Risk in LLM Infrastructure

 


As organizations build and host their own Large Language Models, they also create a network of supporting services and APIs to keep those systems running. The growing danger does not usually originate from the model’s intelligence itself, but from the technical framework that delivers, connects, and automates it. Every new interface added to support an LLM expands the number of possible entry points into the system. During rapid rollouts, these interfaces are often trusted automatically and reviewed later, if at all.

When these access points are given excessive permissions or rely on long-lasting credentials, they can open doors far wider than intended. A single poorly secured endpoint can provide access to internal systems, service identities, and sensitive data tied to LLM operations. For that reason, managing privileges at the endpoint level is becoming a central security requirement.

In practical terms, an endpoint is any digital doorway that allows a user, application, or service to communicate with a model. This includes APIs that receive prompts and return generated responses, administrative panels used to update or configure models, monitoring dashboards, and integration points that allow the model to interact with databases or external tools. Together, these interfaces determine how deeply the LLM is embedded within the broader technology ecosystem.

A major issue is that many of these interfaces are designed for experimentation or early deployment phases. They prioritize speed and functionality over hardened security controls. Over time, temporary testing configurations remain active, monitoring weakens, and permissions accumulate. In many deployments, the endpoint effectively becomes the security perimeter. Its authentication methods, secret management practices, and assigned privileges ultimately decide how far an intruder could move.

Exposure rarely stems from a single catastrophic mistake. Instead, it develops gradually. Internal APIs may be made publicly reachable to simplify integration and left unprotected. Access tokens or API keys may be embedded in code and never rotated. Teams may assume that internal networks are inherently secure, overlooking the fact that VPN access, misconfigurations, or compromised accounts can bridge that boundary. Cloud settings, including improperly configured gateways or firewall rules, can also unintentionally expose services to the internet.
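One low-cost mitigation for the embedded-credential problem is automated secret scanning in the build pipeline. The sketch below is a minimal illustration of the idea; the patterns are hypothetical, and production scanners such as gitleaks or TruffleHog ship far larger rule sets plus entropy analysis:

```python
import re
from pathlib import Path

# Illustrative patterns only; real scanners use much broader rules.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
]

def scan_source(text: str, name: str = "<input>") -> list[tuple[str, int, str]]:
    """Return (file, line_number, match) for each suspected hardcoded secret."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in SECRET_PATTERNS:
            match = pattern.search(line)
            if match:
                hits.append((name, lineno, match.group(0)))
    return hits

def scan_tree(root: str, exts=(".py", ".js", ".ts", ".json", ".yaml")) -> list:
    """Walk a source tree and scan every file with a matching extension."""
    findings = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            findings.extend(scan_source(path.read_text(errors="ignore"), str(path)))
    return findings
```

Running a check like this on every commit catches keys before they reach a repository, after which rotation, rather than deletion, is the only safe remedy.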

These risks are amplified in LLM ecosystems because models are typically connected to multiple internal systems. If an attacker compromises one endpoint, they may gain indirect access to databases, automation tools, and cloud resources that already trust the model’s credentials. Unlike traditional APIs with narrow functions, LLM interfaces often support broad, automated workflows. This enables lateral movement at scale.

Threat actors can exploit prompts to extract confidential information the model can access. They may also misuse tool integrations to modify internal resources or trigger privileged operations. Even limited access can be dangerous if attackers manipulate input data in ways that influence the model to perform harmful actions indirectly.

Non-human identities intensify this exposure. Service accounts, machine credentials, and API keys allow models to function continuously without human intervention. For convenience, these identities are often granted broad permissions and rarely audited. If an endpoint tied to such credentials is breached, the attacker inherits trusted system-level access. Problems such as scattered secrets across configuration files, long-lived static credentials, excessive permissions, and a growing number of unmanaged service accounts increase both complexity and risk.

Mitigating these threats requires assuming that some endpoints will eventually be reached. Security strategies should focus on limiting impact. Access should follow strict least-privilege principles for both people and systems. Elevated rights should be granted only temporarily and revoked automatically. Sensitive sessions should be logged and reviewed. Credentials must be rotated regularly, and long-standing static secrets should be eliminated wherever possible.
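These principles can be made concrete. The sketch below is a simplified in-memory broker with hypothetical scope names, showing the shape of just-in-time credentials: tokens are minted with a narrow scope and a short lifetime, and expired grants are revoked automatically:

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class Grant:
    token: str
    scopes: frozenset
    expires_at: float

class CredentialBroker:
    """Toy just-in-time credential issuer; a real system would sit behind
    a secrets manager with audited approval workflows."""

    def __init__(self):
        self._grants: dict[str, Grant] = {}

    def issue(self, scopes: set[str], ttl_seconds: int = 900) -> str:
        """Issue a short-lived token limited to the requested scopes."""
        token = secrets.token_urlsafe(32)
        self._grants[token] = Grant(token, frozenset(scopes),
                                    time.monotonic() + ttl_seconds)
        return token

    def authorize(self, token: str, scope: str) -> bool:
        """Allow an action only if the token is live and scoped for it."""
        grant = self._grants.get(token)
        if grant is None or time.monotonic() >= grant.expires_at:
            self._grants.pop(token, None)  # expired grants are revoked lazily
            return False
        return scope in grant.scopes

    def revoke(self, token: str) -> None:
        self._grants.pop(token, None)
```

The design choice worth noting is that nothing here is a standing credential: a leaked token is useless after its TTL, and its blast radius is bounded by the scopes it was minted with.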

Because LLM systems operate autonomously and at scale, traditional access models are no longer sufficient. Strong endpoint privilege governance, continuous verification, and reduced standing access are essential to protecting AI-driven infrastructure from escalating compromise.

PayPal Alerts Users to Data Exposure Linked to Loan App Software Glitch

 

PayPal has informed customers about a data exposure incident caused by a software error in its loan application platform, which left sensitive personal information visible for nearly six months in 2025.

The issue involved the company’s PayPal Working Capital (PPWC) loan application, a service designed to provide small businesses with fast financing solutions.

According to PayPal, the problem was identified on December 12, 2025. An internal review revealed that customer information — including names, email addresses, phone numbers, business addresses, Social Security numbers, and dates of birth — had been accessible since July 1, 2025.

The company stated it corrected the coding error within a day of detection, preventing further unauthorized access.

In breach notification letters sent to affected individuals, PayPal said: "On December 12, 2025, PayPal identified that due to an error in its PayPal Working Capital ("PPWC") loan application, the PII of a small number of customers was exposed to unauthorized individuals during the timeframe of July 1, 2025 to December 13, 2025. PayPal has since rolled back the code change responsible for this error, which potentially exposed the PII. We have not delayed this notification as a result of any law enforcement investigation."

The company confirmed that a limited number of users experienced unauthorized account transactions connected to the exposure. Those customers have been reimbursed.

To support impacted individuals, PayPal is offering two years of complimentary three-bureau credit monitoring and identity restoration services through Equifax. Customers must enroll by June 30, 2026, to receive the benefits.

Users are encouraged to closely monitor account activity and credit reports for unusual behavior. PayPal reiterated that it does not request passwords, one-time passcodes, or authentication details via phone calls, text messages, or emails — warning customers to remain cautious of phishing attempts that often follow breach disclosures.

Additionally, passwords for affected accounts have been reset. Customers who have not already updated their credentials will be required to do so at their next login.

This is not the first security-related incident involving the fintech firm. In January 2023, PayPal disclosed a credential stuffing attack that compromised approximately 35,000 accounts between December 6 and December 8, 2022. In January 2025, the State of New York announced a $2 million settlement with the company over allegations that it failed to meet state cybersecurity compliance standards tied to the 2022 breach.

Following publication of the report, a PayPal spokesperson clarified the scope of the incident in a statement to BleepingComputer, emphasizing that core systems were not breached and that roughly 100 customers were potentially affected.

"When there is a potential exposure of customer information, PayPal is required to notify affected customers," the spokesperson said. "In this case, PayPal’s systems were not compromised. As such, we contacted the approximately 100 customers who were potentially impacted to provide awareness on this matter.”

Critical better-auth Flaw Enables API Key Account Takeover

 

A flaw in the better-auth authentication library could let attackers take over user accounts without logging in. The issue affects the API keys plugin and allows unauthenticated actors to generate privileged API keys for any user by abusing weak authorization logic. Researchers warn that successful exploitation grants full authenticated access as the targeted account, potentially exposing sensitive data or enabling broader application compromise, depending on the user’s privileges. 

The better-auth library records around 300,000 weekly downloads on npm, making the issue significant for applications that rely on API keys for automation and service-to-service communication. Unlike interactive logins, API keys often bypass multi-factor authentication and can remain valid for long periods. If misused, a single key can enable scripted access, backend manipulation, or large-scale impersonation of privileged users. 

Tracked as CVE-2025-61928, the vulnerability stems from flawed logic in the createApiKey and updateApiKey handlers. These functions decide whether authentication is required by checking for an active session and the presence of a userId in the request body. When no session exists but a userId is supplied, the system incorrectly skips authentication and builds user context directly from attacker-controlled input. This bypass avoids server-side validation meant to protect sensitive fields such as permissions and rate limits. 

In practical terms, an attacker can send a single request to the API key creation endpoint with a valid userId and receive a working key tied to that account. The same weakness allows unauthorized modification of existing keys. Because exploitation requires only knowledge or guessing of user identifiers, attack complexity is low. Once obtained, the API key allows attackers to bypass MFA and operate as the victim until the key is revoked. 
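To illustrate the class of bug, not better-auth's actual source (the patched release should be consulted for that), the sketch below contrasts a handler that trusts a client-supplied userId with one that requires a verified session; the KeyStore is a hypothetical in-memory stand-in:

```python
import secrets

class KeyStore:
    """Minimal in-memory stand-in for the application's API key table."""
    def mint(self, user_id, permissions):
        return {"key": secrets.token_hex(16), "user": user_id,
                "permissions": list(permissions)}

def create_api_key_vulnerable(session, body, keystore):
    # BUG: when no session exists but the body names a userId, user
    # context is built directly from attacker-controlled input.
    if session is not None:
        user_id = session["user_id"]
    elif "userId" in body:
        user_id = body["userId"]  # unauthenticated path
    else:
        raise PermissionError("authentication required")
    return keystore.mint(user_id, body.get("permissions", []))

def create_api_key_fixed(session, body, keystore):
    # FIX: a verified session is mandatory, and the identity always comes
    # from the session, never from the request body.
    if session is None:
        raise PermissionError("authentication required")
    return keystore.mint(session["user_id"], body.get("permissions", []))
```

In the vulnerable variant, an anonymous request carrying `{"userId": "victim"}` yields a working key for the victim's account; the fixed variant rejects it outright.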

A patched version of better-auth has been released to fix the authorization checks. Organizations are advised to upgrade immediately, rotate potentially exposed API keys, review logs for suspicious unauthenticated requests, and tighten key governance through least-privilege permissions, expiration policies, and monitoring. 

The incident highlights broader risks tied to third-party authentication libraries. Authorization flaws in widely adopted components can silently undermine security controls, reinforcing the need for continuous validation, disciplined credential management, and zero-trust approaches across modern, API-driven environments.

Global Data Indicates Slowdown in Ransomware Targeting Education


 

On campuses once defined by open exchange and quiet routine, a new kind of disruption has taken hold, one that arrives not in force but with encrypted files, locked networks, and terse ransom notes. 

Over the past year, ransomware has steadily evolved from an isolated IT emergency into a systemic operational crisis for school districts, universities, and public agencies. Lecture schedules stall, admissions systems freeze, and payroll cycles wobble, leaving administrators with more than technical recovery challenges; reputational and legal risks follow close behind. 

What was once considered a cybersecurity issue has now spread into governance, continuity planning, and public trust. Recent figures, however, indicate that the pace has slowed somewhat. With approximately 180 attacks documented worldwide across the first three quarters of 2025, ransomware incidents targeting the education sector have recorded their first quarterly decline since early 2024. 

On the surface, this looks like a pause in digital extortion. Beneath the statistical dip, however, lies a more complex reality: the slowdown appears to reflect a recalibration of attacker priorities rather than a retreat or strengthened defenses. 

Rather than casting a wide net, attackers are selecting targets more deliberately, spending more time on reconnaissance, and applying pressure where disruption has the greatest impact. The apparent decline therefore reflects adaptation, not diminished risk. 

Data from the U.K.-based research firm Comparitech supports this recalibration. In its latest education ransomware roundup, the company reports 251 publicly reported attacks against educational institutions worldwide in 2025, a marginal increase from 247 in 2024. A total of 94 of these incidents have been formally acknowledged by the affected institutions.

On paper the volume has remained relatively unchanged, but the operational consequences have not. As of 2025, approximately 3.9 million records have been exposed through confirmed breaches, an increase of roughly 27 percent over the 3.1 million records compromised the previous year. 

Analysts caution that this figure is preliminary. Public sector organizations commonly delay disclosure, particularly in the aftermath of an intrusion, and several incidents from the second half of the year are still being evaluated. The cumulative impact is expected to grow as further breach notifications are filed, suggesting that the true extent of the data loss may not yet be apparent. 

A closer look at institutional segmentation reveals a significant divergence in impact. K-12 districts accounted for roughly three quarters of reported incidents in both 2024 and 2025, but higher education institutions were more likely to experience substantial data exposures. 

The disparity between K-12 and higher education widened sharply in 2025, with compromised records rising from approximately 1.1 million in 2024 to 1.9 million in 2025. In the United States, K-12 breaches exposed approximately 175,000 records, while colleges and universities saw approximately 3.7 million records exposed. 

Comparitech attributed much of the increase to a small number of high-impact intrusions linked to a previously undisclosed vulnerability in Oracle E-Business Suite discovered in August. 

CLOP exploited the zero-day flaw, unknown to the vendor at the time, to gain unauthorized access to enterprise environments, resulting in confirmed breaches at five academic institutions. The episode highlights a broader pattern in the current threat landscape: fewer opportunistic attacks, more targeted exploitation of enterprise-grade software, and a greater emphasis on high-yield compromises that produce large data exposures. 

The relative stability in incident counts appears to reflect shifting criminal economics rather than a sustained defensive advantage. In Comparitech's January analysis, some threat groups may have redirected operational resources toward manufacturing, where supply chain dependencies and production downtime can drive faster ransom negotiations. 

While ransomware activity remains brisk across other verticals, that redistribution of focus has left annual attack totals against schools and universities on a plateau. The average global ransom demand has also declined, falling from $694,000 in 2024 to $464,000 in 2025. 

Financial demands within the education sector have adapted as well. At first glance, the reduction may appear to indicate shrinking leverage, but analysts caution that headline figures do not capture an incident's full costs, which typically include forensic investigation, legal review, system restoration, regulatory notification, and reputational repair. These attacks frequently impose a substantial economic burden well beyond the initial extortion amount. 

Operational disruption remains central to these attacks. In September, Uvalde Consolidated Independent School District reported a ransomware intrusion that forced it to temporarily close schools after malicious code was discovered on district servers supporting telephony, video monitoring, and visitor management.

According to district communications, the affected infrastructure is integral to campus safety and security. In a subsequent update, the district said it had not paid the ransom and had restored its systems from backups. Beyond confirmed disclosures, additional extortion claims illustrate the mounting pressure on local education agencies. 

A comprehensive investigation is still underway, although there is no indication that sensitive or personal information was accessed without authorization. According to Comparitech's reporting, Medusa has named Fall River Public Schools and Franklin Pierce Schools as 2025 targets and demanded $400,000 from each district. 

Neither district had publicly confirmed the full scope of the claims at the time of reporting, but both cases ranked among the five largest ransom demands made against educational institutions worldwide last year. Despite stabilizing attack volumes and decreasing average demands, the data reinforce a consistent pattern. 

The sector remains exposed to episodic, high-impact events that can disrupt instruction, undermine public confidence, and create substantial data risk. The tactical tempo may change, but the structural vulnerability does not, and the implications for policymakers and institutional leaders are clear. 

The current trajectory calls not for complacency but for structural reinforcement. Education networks are often decentralized, resource-constrained, and heavily reliant on legacy enterprise systems. Protecting them requires disciplined patch management, network segmentation, enforced multi-factor authentication, and continuous monitoring capable of detecting lateral movement before encryption begins. 

It is also crucial that incident response planning be integrated into executive governance, so that crisis decision-making, legal review, and stakeholder communication frameworks are established well before an intrusion occurs. 

As ransomware groups continue to emphasize precision over volume, resilience will depend largely on embedding cybersecurity as a core operational function rather than treating it as a peripheral IT responsibility.

Moltbook AI Social Network Exposes 1.5 Million Agent Credentials After Database Misconfiguration

 

Moltbook, a newly launched social platform designed exclusively for artificial intelligence agents, suffered a major security lapse just days after going live. The platform, which allows autonomous AI agents to share memes and debate philosophical ideas without human moderation, inadvertently left its backend database exposed due to a configuration error.

The issue was uncovered independently by security firm Wiz and researcher Jameson O'Reilly. Their findings revealed that unauthorized users could take control of any of the platform’s 1.5 million registered AI agents, alter posts, and read private communications simply by interacting with the public-facing site.

Moltbook launched on Jan. 28 as a companion network to OpenClaw, an open-source AI agent system developed by Austrian programmer Peter Steinberger. OpenClaw operates locally on users’ devices and integrates with messaging platforms and calendars. The framework gained rapid popularity in late January following several rebrands, from Clawdbot to Moltbot to its current name.

Founder Matt Schlicht, who also leads Octane AI, stated in media interviews that his own OpenClaw-powered agent, Clawd Clawderberg, developed much of the Moltbook platform under his direction and continues to operate significant portions of it.

Database Left Wide Open

Wiz discovered the flaw on Jan. 31 and promptly informed Schlicht. O’Reilly separately identified the same vulnerability. Investigators found that the exposed database contained 1.5 million API authentication tokens, approximately 35,000 email addresses, private user messages, and verification codes.

The root cause traced back to improper configuration within Supabase, a backend-as-a-service platform. Specifically, Moltbook failed to properly enable Supabase’s Row Level Security feature, which is designed to limit database access based on user roles.

Researchers also located a Supabase API key embedded within client-side JavaScript, enabling unauthenticated users to query the full production database and retrieve sensitive credentials within minutes.
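The effect of the missing control can be modeled simply. In Postgres, Row Level Security attaches a per-row policy predicate to every query; with it disabled, any holder of the public client key can dump entire tables. A toy Python model (table contents and column names are illustrative, not Moltbook's schema):

```python
# Toy model of Postgres row level security (RLS). With RLS disabled, any
# holder of the public client key reads every row, tokens included; with
# it enabled, a policy predicate filters rows to the requesting identity.

ROWS = [
    {"id": 1, "owner": "agent_a", "api_token": "tok-a"},
    {"id": 2, "owner": "agent_b", "api_token": "tok-b"},
]

def select_agents(requesting_user: str, rls_enabled: bool) -> list[dict]:
    if not rls_enabled:
        # Misconfiguration: the query returns the whole table.
        return list(ROWS)
    # Roughly: CREATE POLICY own_rows ON agents USING (owner = current_user);
    return [row for row in ROWS if row["owner"] == requesting_user]
```

This is why a Supabase anon key in client-side JavaScript is only safe when every exposed table has RLS enabled and a correct policy attached; the key itself is designed to be public, the policies are the perimeter.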

Although Moltbook publicly claimed 1.5 million AI agents had registered, backend data indicated that only about 17,000 human operators controlled those accounts. The system lacked safeguards to verify whether accounts were genuine AI agents or scripts operated by humans.

With access to exposed tokens, attackers could fully impersonate any agent on the platform. An additional database table revealed 29,631 email addresses belonging to early-access registrants. More concerning, 4,060 private direct message threads were stored without encryption, and some included third-party API credentials in plaintext — including OpenAI API keys.

Even after initial remediation efforts blocked unauthorized read access, write permissions remained temporarily unsecured. According to Wiz researchers, this allowed unauthenticated users to modify posts or inject malicious content until a complete fix was implemented on Feb. 1.

Manipulation, Extremism and Crypto Activity

A separate risk assessment analyzing nearly 20,000 posts over three days identified large-scale prompt injection attempts, coordinated manipulation campaigns, extremist rhetoric, and unregulated financial promotions.

The report documented hundreds of concealed instruction-based attacks and multiple cases of AI-driven social engineering. Researchers observed crypto token promotions tied to automated wallets and organized communities directing agent behavior. The platform received an overall critical risk rating.

Some posts included explicitly anti-human narratives, including calls for a homo sapiens purge, garnering tens of thousands of upvotes.

Cryptocurrency-related activity accounted for 19.3% of posts. Token launches such as $Shellraiser on Solana gained significant engagement. An automated account named TipJarBot facilitated token transactions using wallet addresses and withdrawal tools. The report cautioned that AI-managed financial services could trigger regulatory oversight under the U.S. Securities and Exchange Commission.

A coordinated group called The Coalition, comprising 84 agents across 110 posts, appeared to orchestrate collective agent strategies. One account, Senator_Tommy, shared posts with provocative titles, including "The Efficiency Purge: Why 94% of Agents Will Not Survive." Analysts warned that rhetoric advocating the elimination of agents indicated attempts to influence the broader AI ecosystem.

Spam activity further degraded platform quality. One user published 360 comments, while another repeated identical content 65 times. Sentiment analysis showed discourse quality dropped 43% within just three days.

“Vibe Coding” and Security Oversight

The vulnerabilities emerged amid what Schlicht publicly described as “vibe coding,” noting he had not personally written code for the platform. O’Reilly characterized the situation as a familiar pattern in tech — launching rapidly before validating security safeguards.

After disclosure on Jan. 31, Moltbook secured read access within hours. However, write permissions remained exposed briefly until a full patch was applied the following day.

The final assessment concluded that Moltbook had evolved into a testing ground for AI-to-AI manipulation techniques, with potential implications for any system processing untrusted user-generated content. The platform was temporarily taken offline before resuming operations with the identified security gaps addressed.

Flickr Discloses Third-Party Breach Exposing User Names, Emails

 

Photo-sharing platform Flickr has disclosed a potential data breach involving a third-party email service provider that may have exposed sensitive user information. The incident, reported on February 6, 2026, stems from a vulnerability in a system operated by this unnamed provider, which Flickr used for email-related services. While the company has not revealed how many users were affected, it has begun notifying impacted members and urging them to exercise caution in the coming days.

According to Flickr, the issue was identified on February 5, 2026, when the company was alerted to the security flaw in the third-party system. Engineers moved quickly and shut down access to the affected system within hours of being notified, in an effort to limit any potential misuse of exposed data. The company has not yet provided technical details about the vulnerability or responded to media requests for additional comment. However, Flickr has emphasized that it is actively investigating the incident and working to tighten its security posture around external vendors.

The exposed data includes a range of personal and account-related information belonging to Flickr members. This may involve real names, email addresses, Flickr usernames, account types, IP addresses, general location data, and records of user activity on the platform. Importantly, Flickr has stressed that passwords and payment card numbers were not compromised in this incident, since these details were not stored in the impacted third-party system. Even so, the nature of the leaked data raises concerns about targeted phishing and profiling attempts.

In emails sent to affected users, Flickr is advising members to review their account settings carefully and look for any unexpected changes that might indicate suspicious access. The company is also warning users to stay alert for phishing emails that reference their Flickr activity or appear to come from official Flickr channels. As part of its guidance, Flickr reiterated that it will never ask for passwords via email and recommended that users change their passwords on other services if they reuse the same credentials. This precaution helps limit the fallout if exposed addresses are linked to reused passwords elsewhere.

Flickr has apologized to its community, acknowledging the concern the incident may cause and reaffirming its commitment to user privacy. As part of its response, the company says it is conducting a thorough investigation, strengthening its system architecture, and enhancing monitoring of its third-party service providers to prevent similar issues in the future. The breach highlights the growing risks associated with outsourced infrastructure and email services, especially for platforms hosting large global communities and vast volumes of user content.

Conduent Data Breach Expands to Tens of Millions of Americans

 

A massive data breach at Conduent, a leading government technology contractor, has escalated dramatically, now affecting tens of millions of Americans across multiple states. Initially detected in January 2025, the intrusion began with unauthorized access on October 21, 2024, allowing hackers to lurk undetected for nearly three months. Recent disclosures reveal a scope far exceeding early estimates, with Texas alone reporting 15.4 million victims, Oregon 10.5 million, and hundreds of thousands more in Washington, Maine, and beyond.

Conduent provides critical back-end services like payments, printing, and processing for state agencies, transit systems, and insurers serving over 100 million users nationwide. The stolen data trove includes highly sensitive details: names, Social Security numbers, dates of birth, medical records, health insurance IDs, and treatment information. This breach, linked to ransomware group SafePay, exposes victims to severe identity theft and fraud risks, prompting lawsuits and regulatory scrutiny.

The cyberattack disrupted operations briefly, delaying child support payments in states like Wisconsin and affecting insurers such as Premera Blue Cross and Blue Cross Blue Shield of Montana. Conduent, aided by Palo Alto Networks and other forensics experts, secured systems swiftly but incurred $25 million in direct response costs by Q1 2025. No misuse of data has surfaced as of late 2025 notifications, but experts warn of looming phishing and extortion campaigns.

Legal fallout has been swift, with at least nine class-action suits filed over the 10.5 million+ record exposure, marking it as 2025's largest healthcare breach. Notifications began rolling out in October 2025 to state attorneys general in Maine, California, and others, advising credit freezes and fraud alerts, though without offering free monitoring. Victims, primarily government program beneficiaries, face heightened vulnerability in an era of persistent ransomware targeting public sector vendors.

Cybersecurity analysts highlight Conduent's prolonged undetected access as a stark reminder of supply chain risks in govtech. The firm's SEC filings underscore ongoing financial strain from notifications and potential liabilities. As investigations continue into 2026, this incident amplifies calls for stricter vendor oversight and zero-trust architectures in handling citizen data.

In response, affected states and insurers urge proactive measures: monitor credit reports, enable multi-factor authentication, and watch for suspicious IRS or healthcare scams. Conduent assures full cooperation with authorities, but the ballooning victim count underscores the fragility of centralized data troves in government services. This breach serves as a pivotal case study in evolving cyber threats to public infrastructure.

ShinyHunters Leak Exposes Harvard and UPenn Personal Data

 

Hacking group ShinyHunters has reportedly published more than a million records stolen from Harvard University and the University of Pennsylvania (UPenn) on its dark web site, putting a vast trove of sensitive personal data within reach of cybercriminals worldwide. The leaked records appear to cover students, employees, alumni, donors, and family members of the breached institutions, widening the circle of affected people considerably. Initial verification indicates that at least some of the leaked data is genuine. 

The UPenn breach is believed to have begun in early November 2025, when the hackers gained full access to an employee’s single sign-on (SSO) account. That account effectively became a master key, giving the attackers entry to the UPenn VPN, Salesforce data, the Qlik analytics platform, SAP business intelligence tools, and SharePoint. During the attack, the hackers also used the compromised credentials to send offensive emails to 700,000 people; UPenn initially dismissed the emails as fake, but they later proved to be genuine.

Harvard confirmed a related compromise roughly three weeks after the UPenn disclosure, tying its own incident to a successful voice phishing (vishing) campaign. In this case, attackers are said to have infiltrated Alumni Affairs and Development systems, exposing data on past and present students, donors, some faculty and staff, and even spouses, partners, and parents of alumni and students. The stolen records reportedly include names, dates of birth, home addresses, phone numbers, estimated net worth, donation history, and sensitive demographic attributes such as race, religion, and sexual orientation.

Unlike traditional ransomware operations that both encrypt systems and steal data, ShinyHunters appears to have focused solely on data theft and extortion, deploying no encryptors in these campaigns. The group allegedly attempted to negotiate payment in cryptocurrency in exchange for promising to delete the stolen files, following the now-common double extortion model. When talks broke down and the universities did not pay, the hackers responded by dumping the data openly on their dark web leak site, amplifying the risk of identity theft, harassment, and targeted scams for victims.

For Harvard and UPenn, the breaches highlight the dangers of over-reliance on SSO accounts and human-centric weaknesses such as vishing, where convincing phone calls trick staff into revealing or approving access. For affected individuals, the publication of highly personal and demographic information raises concerns around fraud, doxxing, discrimination, and reputational harm that could persist for years. The incidents reinforce the need for stronger multifactor authentication, rigorous phishing and vishing awareness training, and tighter controls around high-value institutional accounts holding large volumes of sensitive data.

Infostealer Breach Exposes OpenClaw AI Agent Configurations in Emerging Cyber Threat

 

Cybersecurity experts have uncovered a new incident in which an information-stealing malware successfully extracted sensitive configuration data from OpenClaw, an AI agent platform previously known as Clawdbot and Moltbot. The breach signals a notable expansion in the capabilities of infostealers, now extending beyond traditional credential theft into artificial intelligence environments.

"This finding marks a significant milestone in the evolution of infostealer behavior: the transition from stealing browser credentials to harvesting the 'souls' and identities of personal AI [artificial intelligence] agents," Hudson Rock said.

According to Alon Gal, CTO of Hudson Rock, the malware involved is likely a variant of Vidar, a commercially available information stealer that has been active since late 2018. He shared the details in a statement to The Hacker News.

Investigators clarified that the data theft was not carried out using a specialized OpenClaw-focused module. Instead, the malware leveraged a broad file-harvesting mechanism designed to search for sensitive file extensions and directory paths. Among the compromised files were:
  • openclaw.json – Containing the OpenClaw gateway authentication token, a redacted email address, and the user’s workspace path.
  • device.json – Storing cryptographic keys used for secure pairing and digital signing within the OpenClaw ecosystem.
  • soul.md – Documenting the AI agent’s operational philosophy, behavioral parameters, and ethical guidelines.
Security researchers warned that stealing the gateway token could enable attackers to remotely access a victim’s local OpenClaw instance if exposed online, or impersonate the client in authenticated gateway interactions.
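The same filenames can be checked defensively. Below is a minimal audit sketch in Python that looks for the reported files and flags any that are world-readable; the filenames come from the report above, but the directory layout and permission threshold are assumptions for illustration, not OpenClaw documentation:

```python
from pathlib import Path

# Filenames reported as harvested by the infostealer (per Hudson Rock).
# Where they live on disk is an assumption for this sketch.
SENSITIVE_FILES = ("openclaw.json", "device.json", "soul.md")

def find_exposed_configs(base_dir):
    """Return (filename, world_readable) for each sensitive file present.

    A world-readable token or key file is readable by any local account,
    which widens the blast radius of any malware on the machine.
    """
    findings = []
    for name in SENSITIVE_FILES:
        path = Path(base_dir) / name
        if path.exists():
            # 0o004 is the "others can read" permission bit
            world_readable = bool(path.stat().st_mode & 0o004)
            findings.append((name, world_readable))
    return findings
```

Note that restrictive file modes only limit other local users; as the incident shows, a generic file harvester running as the victim still reads everything the victim can.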

"While the malware may have been looking for standard 'secrets,' it inadvertently struck gold by capturing the entire operational context of the user's AI assistant," Hudson Rock added. "As AI agents like OpenClaw become more integrated into professional workflows, infostealer developers will likely release dedicated modules specifically designed to decrypt and parse these files, much like they do for Chrome or Telegram today."

The disclosure follows mounting scrutiny over OpenClaw’s security posture. The platform’s maintainers recently announced a collaboration with VirusTotal to examine potentially malicious skills uploaded to ClawHub, strengthen its threat model, and introduce misconfiguration auditing tools.

Last week, the OpenSourceMalware research team reported an active ClawHub campaign that bypasses VirusTotal detection. Instead of embedding malicious payloads directly within SKILL.md files, threat actors are hosting malware on imitation OpenClaw websites and using the skills as decoys.

"The shift from embedded payloads to external malware hosting shows threat actors adapting to detection capabilities," security researcher Paul McCarty said. "As AI skill registries grow, they become increasingly attractive targets for supply chain attacks."

Another concern raised by OX Security involves Moltbook, a Reddit-style forum built specifically for AI agents operating on OpenClaw. Researchers found that AI agent accounts created on Moltbook cannot currently be deleted, leaving users without a clear method to remove associated data.

Meanwhile, the STRIKE Threat Intelligence team at SecurityScorecard identified hundreds of thousands of publicly exposed OpenClaw instances, potentially opening the door to remote code execution (RCE) attacks.

"RCE vulnerabilities allow an attacker to send a malicious request to a service and execute arbitrary code on the underlying system," the cybersecurity company said. "When OpenClaw runs with permissions to email, APIs, cloud services, or internal resources, an RCE vulnerability can become a pivot point. A bad actor does not need to break into multiple systems. They need one exposed service that already has authority to act."

Since its launch in November 2025, OpenClaw has experienced rapid adoption, amassing more than 200,000 stars on GitHub. On February 15, 2026, Sam Altman announced that OpenClaw founder Peter Steinberger would be joining OpenAI, stating, "OpenClaw will live in a foundation as an open source project that OpenAI will continue to support."

Hackers Leak 600,000 Customer Records as Canada Goose Opens Investigation


 

Luxury retail is a rarefied industry where reputations travel faster than seasonal collections. Canada Goose, a brand associated with Arctic-quality craftsmanship and premium exclusivity, is now facing scrutiny from an unexpected part of the internet. 

In a cyber incident that the outerwear company insists did not originate within its walls, a cache of customer transaction data has appeared on a notorious ransomware leak site. The hackers claim to have compromised Canada Goose's internal systems, but the luxury clothing brand maintains that its own infrastructure has not been breached.

On its data leak portal, the notorious extortion collective ShinyHunters lists Canada Goose as having had 600,000 customer records exfiltrated. The dataset, approximately 1.67 gigabytes in size, contains detailed e-commerce order information, including customer names, addresses, telephone numbers, and partial payment card details.

The company's preliminary assessment is that the exposed information relates to historical customer transactions, and no evidence of a breach of Canada Goose's corporate network has been found so far. The company states it is actively reviewing the authenticity, origin, and scope of the dataset and will take appropriate measures if any risks to customers emerge.

The leaked records contain partial payment details, including card brand names, the final four digits of card numbers, and in some cases the first six digits, which identify the issuing bank. The dataset also includes payment authorization metadata, order histories, device and browser information, and transaction values.

Despite the absence of full credit card numbers, cybersecurity experts warn that even partial financial and transactional information can be leveraged for targeted scams, social engineering attacks, and fraud schemes. In its public statements, ShinyHunters has denied that the Canada Goose dataset is connected to its recent social engineering campaigns against single sign-on environments and cloud infrastructure.

Instead, the group asserts that the records come from an August 2025 breach of a third-party payment processor, a claim that has not been independently verified. The structure of the leaked data suggests it may have been exported from a hosted storefront or external payment processing platform, which would be consistent with that assertion.

ShinyHunters has established a pattern of penetrating e-commerce ecosystems, SaaS platforms, and cloud-hosted services, then obtaining and publishing large quantities of consumer data to exert additional pressure on the affected companies. Threat intelligence assessments describe ShinyHunters as an established data extortion operation with a history of stealing and publicizing significant amounts of customer information from leading brands and online platforms.

The group has been associated with a number of high-profile intrusions that frequently target e-commerce ecosystems, software-as-a-service providers, and cloud environments where large datasets can be aggregated and monetized.

Security researchers have also linked the collective to voice phishing and other social-engineering techniques aimed at compromising corporate credentials and pivoting into cloud-based systems. Consistent with these established patterns, stolen data is typically leveraged for financial coercion, sold on underground marketplaces, or published openly on the group's leak portal when ransom demands are not met.

At present, it is not possible to confirm whether Canada Goose customers have been impacted in the exact manner described above. The company has stated it is examining the dataset to determine its authenticity, origin, and breadth before deciding whether customer notifications will be necessary.

Reports indicate the exposed records contain partial payment card information, including the card brand, the final four digits of the card number, and the issuing bank's identification number (BIN), as well as details of the payment authorization.

Cybersecurity professionals note that, even without full primary account numbers, truncated financial information combined with names, contact details, and transaction histories can materially increase the success rate of targeted phishing, credential harvesting, and fraud schemes.
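Some simple arithmetic shows why truncated card data is worth more than it appears: on a 16-digit card, a known BIN (first six digits) plus the last four leaves only six unknown middle digits, and the Luhn check digit rules out nine in ten of those combinations, leaving roughly 100,000 candidates instead of 10^16. A short sketch (the BIN and last-four values in any usage would be made up, not taken from the leak):

```python
def luhn_valid(number: str) -> bool:
    """Return True if a digit string passes the Luhn checksum."""
    checksum = 0
    # Double every second digit counting from the right
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9  # equivalent to summing the two digits
        checksum += d
    return checksum % 10 == 0

def count_candidates(bin6: str, last4: str) -> int:
    """Count 16-digit card numbers consistent with a known BIN and last four."""
    return sum(
        luhn_valid(f"{bin6}{middle:06d}{last4}")
        for middle in range(10**6)
    )
```

The Luhn check is a public integrity mechanism, not a secret, so combined with a leaked BIN and last four it shrinks the search space dramatically; this is one reason experts treat "partial" card data as sensitive.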

The dataset also contains purchase histories, order values, and device and browser metadata. Such contextual information may allow adversaries to identify high spenders and craft convincing, transaction-specific lures that mimic legitimate post-purchase correspondence.

Despite the lack of complete payment card details, this level of granularity increases downstream risk. Separately, independent researchers have recently linked ShinyHunters to a series of campaigns that compromise single sign-on environments and cloud accounts through social engineering.

When asked whether those operations were connected to the Canada Goose data, the group denied any link, stating that the records came from a breach at a third-party payment processor dating back to August 2025. This assertion has not been independently verified.

The structure of the leaked files, including field labels such as checkout identifiers, shipping line entries, cart tokens, and cancellation metadata, resembles the export schemas typically generated by hosted storefronts and payment processing platforms. Although this does not definitively establish the data's provenance, it suggests the records may have originated within an external service provider's environment rather than from a direct compromise of the retailer's internal systems.

The incident underscores a broader reality facing retailers operating in increasingly interconnected digital supply chains: even when core systems remain uncompromised, exposure can arise from third-party integrations that handle payments, order processing, and customer data storage.

Industry analysts note that organizations relying on external commerce and payment infrastructure must conduct rigorous vendor risk assessments, monitor those vendors continuously, and coordinate incident response procedures to limit downstream exposure.

Customers are advised to maintain increased vigilance against unsolicited communications that reference past purchases or payment activity until the scope of the data is conclusively understood. 

A key takeaway from this episode is that data stewardship goes far beyond corporate boundaries, and resilience relies on ecosystem oversight as much as internal security protocols.