
Cloud Storage Scam Uses Fake Renewal Notices to Trick Users


Cybercriminals are running a large-scale email scam that falsely claims cloud storage subscriptions have failed. For several months, people across different countries have been receiving repeated messages warning that their photos, files, and entire accounts will soon be restricted or erased due to an alleged payment issue. The volume of these emails has increased sharply, with many users receiving several versions of the same scam in a single day, all tied to the same operation.

Although the wording of each email differs, the underlying tactic remains the same. The messages pressure recipients to act immediately by claiming that a billing problem or storage limit must be fixed right away to avoid losing access to personal data. These emails are sent from unrelated and randomly created domains rather than official service addresses, a common sign of phishing activity.
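Because the mismatch between the sender's address and the provider's real domain is the clearest giveaway, a simple automated check can help triage such mail. Below is a minimal sketch in Python, assuming an illustrative allowlist of official domains (the domains and addresses used are examples, not a vetted list):

```python
# Minimal sketch: flag emails whose From: address is not on one of the
# provider's official domains. The allowlist below is illustrative only.
from email.utils import parseaddr

OFFICIAL_DOMAINS = {"icloud.com", "google.com", "dropbox.com"}  # example list

def is_suspicious_sender(from_header: str) -> bool:
    """Return True when the sender domain is not an official domain."""
    _, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""
    # Accept exact matches and subdomains of the official domains.
    return not any(domain == d or domain.endswith("." + d)
                   for d in OFFICIAL_DOMAINS)

print(is_suspicious_sender('"iCloud Billing" <alert@xk9-renewals.top>'))  # True
print(is_suspicious_sender("Apple <noreply@icloud.com>"))                 # False
```

Real phishing mail also spoofs display names and uses legitimate-looking subdomains, so a sender check like this should be combined with link inspection and SPF/DKIM results rather than used alone.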

The subject lines are crafted to trigger panic and curiosity. Many include personal names, email addresses, reference numbers, or specific future dates to appear genuine. The messages state that a renewal attempt failed or a payment method expired, warning that backups may stop working and that photos, videos, documents, and device data could disappear if the issue is not resolved. Fake account numbers, subscription details, and expiry dates are used to strengthen the illusion of legitimacy.

Every email in this campaign contains a link. While the first web address may appear to belong to a well-known cloud hosting platform, it only acts as a temporary relay. Clicking it silently redirects the user to fraudulent websites hosted on changing domains. These pages imitate real cloud dashboards and display cloud-related branding to gain trust. They falsely claim that storage is full and that syncing of photos, contacts, files, and backups has stopped, warning that data will be lost without immediate action.

After clicking through, users are shown a fake scan that always reports that services such as photo storage, drive space, and email are full. Victims are then offered a short-term discount, presented as a loyalty upgrade with a large price reduction. Instead of leading to a real cloud provider, the buttons redirect users to unrelated sales pages advertising VPNs, obscure security tools, and other subscription products. The final step leads to payment forms designed to collect card details and generate profit for the scammers through affiliate schemes.

Many recipients mistakenly believe these offers will fix a real storage problem and end up paying for unnecessary products. These emails and websites are not official notifications. Real cloud companies do not solve billing problems through storage scans or third-party product promotions. When payments fail, legitimate providers usually restrict extra storage first and provide a grace period before any data removal.

Users should delete such emails without opening links and avoid purchasing anything promoted through them. Any concerns about storage or billing should be checked directly through the official website or app of the cloud service provider.

Former Google Engineer Convicted in U.S. for Stealing AI Trade Secrets to Aid China-Based Startup

 

A former Google software engineer has been found guilty in the United States of unlawfully taking thousands of confidential Google documents to support a technology venture in China, according to an announcement made by the Department of Justice (DoJ) on Thursday.

Linwei Ding, also known as Leon Ding, aged 38, was convicted by a federal jury on 14 charges—seven counts of economic espionage and seven counts of theft of trade secrets. Prosecutors established that Ding illegally copied more than 2,000 internal Google files containing highly sensitive artificial intelligence (AI) trade secrets with the intent of benefiting the People’s Republic of China (PRC).

"Silicon Valley is at the forefront of artificial intelligence innovation, pioneering transformative work that drives economic growth and strengthens our national security," said U.S. Attorney Craig H. Missakian. "We will vigorously protect American intellectual capital from foreign interests that seek to gain an unfair competitive advantage while putting our national security at risk."

Ding was initially indicted in March 2024 after investigators discovered that he had transferred proprietary data from Google’s internal systems to his personal Google Cloud account. The materials allegedly stolen included detailed information on Google’s supercomputing data center architecture used to train and run AI models, its Cluster Management System (CMS), and the AI models and applications operating on that infrastructure.

The misappropriated trade secrets reportedly covered several critical technologies, including the design and functionality of Google’s custom Tensor Processing Unit (TPU) chips and GPU systems, software that enables chip-level communication and task execution, systems that coordinate thousands of chips into AI supercomputers, and SmartNIC technology used for high-speed networking within Google’s AI and cloud platforms.

Authorities stated that the theft occurred over an extended period between May 2022 and April 2023. Ding, who began working at Google in 2019, allegedly maintained undisclosed ties with two China-based technology firms during his employment, one of which was Shanghai Zhisuan Technologies Co., a startup he founded in 2023. Investigators noted that Ding downloaded large volumes of confidential files in December 2023, just days before resigning from the company.

"Around June 2022, Ding was in discussions to be the Chief Technology Officer for an early-stage technology company based in the PRC; by early 2023, Ding was in the process of founding his own technology company in the PRC focused on AI and machine learning and was acting as the company's CEO," the DoJ said.

The case further alleged that Ding attempted to conceal his actions by copying Google source code into the Apple Notes app on his work-issued MacBook, converting the files into PDFs, and uploading them to his personal Google account. Prosecutors also claimed that he asked a colleague to use his access badge to enter a Google facility, creating the false appearance that he was working from the office while he was actually in China.

The investigation reportedly accelerated in late 2023 after Google learned that Ding had delivered a public presentation in China to prospective investors promoting his startup. According to Courthouse News, Ding’s defense attorney Grant Fondo argued that the information could not qualify as trade secrets because it was accessible to a large number of Google employees. "Google chose openness over security," Fondo said.

In a superseding indictment filed in February 2025, Ding was additionally charged with economic espionage, with prosecutors alleging that he applied to a Beijing-backed Shanghai talent program. Such initiatives were described as efforts to recruit overseas researchers to bolster China’s technological and economic development.

"Ding's application for this talent plan stated that he planned to 'help China to have computing power infrastructure capabilities that are on par with the international level,'" the DoJ said. "The evidence at trial also showed that Ding intended to benefit two entities controlled by the government of China by assisting with the development of an AI supercomputer and collaborating on the research and development of custom machine learning chips."

Ding is set to attend a status conference on February 3, 2026. If sentenced to the maximum penalties, he could face up to 10 years in prison for each trade secret theft charge and up to 15 years for each count of economic espionage.

Google-Owned Mandiant Finds Vishing Attacks Against SaaS Platforms


Mandiant recently said it has observed an increase in threat activity using tradecraft consistent with extortion attacks carried out by the financially motivated group ShinyHunters.

  • These attacks use advanced voice phishing (vishing) and fake credential harvesting sites imitating targeted organizations to gain illicit access to victims' systems by collecting single sign-on (SSO) credentials and two-factor authentication codes. 
  • The attacks target cloud-based software-as-a-service (SaaS) apps to steal sensitive data and internal communications and blackmail victims. 

Google-owned Mandiant’s threat intelligence team is tracking the attacks under several clusters: UNC6661, UNC6671, and UNC6240 (aka ShinyHunters). These groups appear to be refining their attack tactics. "While this methodology of targeting identity providers and SaaS platforms is consistent with our prior observations of threat activity preceding ShinyHunters-branded extortion, the breadth of targeted cloud platforms continues to expand as these threat actors seek more sensitive data for extortion," Mandiant said. 

"Further, they appear to be escalating their extortion tactics with recent incidents, including harassment of victim personnel, among other tactics.”

Theft details

UNC6661 impersonated IT staff, sending employees credential harvesting links and tricking them into changing multi-factor authentication (MFA) settings. This activity was observed in mid-January 2026.

Threat actors used the stolen credentials to register their own devices for MFA and then steal data from SaaS platforms. In one incident, the hackers exploited their access to compromised email accounts to send further phishing emails to users at cryptocurrency-focused organizations.

The emails were later deleted to cover their tracks. Since the start of this year, experts have also found UNC6671 impersonating IT staff to fool victims into entering credentials and MFA login codes on credential harvesting websites. In a few incidents, the hackers gained access to Okta accounts. 

UNC6671 leveraged PowerShell to steal sensitive data from OneDrive and SharePoint. 

Attack tactic 

UNC6661 and UNC6671 differ in two main ways: the credential harvesting domains were registered through different registrars (NICENIC for UNC6661 and Tucows for UNC6671), and an extortion email sent after UNC6671 activity did not overlap with known UNC6240 indicators. 

This suggests that additional groups may be participating, highlighting how loosely defined these cybercrime organizations are. Furthermore, the targeting of cryptocurrency companies raises the possibility that the threat actors are searching for other opportunities to make money.

Ivanti Issues Emergency Fixes After Attackers Exploit Critical Flaws in Mobile Management Software




Ivanti has released urgent security updates for two serious vulnerabilities in its Endpoint Manager Mobile (EPMM) platform that were already being abused by attackers before the flaws became public. EPMM is widely used by enterprises to manage and secure mobile devices, which makes exposed servers a high-risk entry point into corporate networks.

The two weaknesses, identified as CVE-2026-1281 and CVE-2026-1340, allow attackers to remotely run commands on vulnerable servers without logging in. Both flaws were assigned near-maximum severity scores because they can give attackers deep control over affected systems. Ivanti confirmed that a small number of customers had already been compromised at the time the issues were disclosed.

This incident reflects a broader pattern of severe security failures affecting enterprise technology vendors in recent years. Similar high-impact vulnerabilities have previously forced organizations to urgently patch network security and access control products. The repeated targeting of these platforms shows that attackers focus on systems that provide centralized control over devices and identities.

Ivanti stated that only on-premises EPMM deployments are affected. Its cloud-based mobile management services, other endpoint management products, and environments using Ivanti cloud services with Sentry are not impacted by these flaws.

If attackers exploit these vulnerabilities, they can move within internal networks, change system settings, grant themselves administrative privileges, and access stored information. The exposed data may include basic personal details of administrators and device users, along with device-related information such as phone numbers and location data, depending on how the system is configured.

Ivanti has not provided specific indicators of compromise because only a limited number of confirmed cases are known. However, the company published technical analysis to support investigations. Security teams are advised to review web server logs for unusual requests, particularly those containing command-like input. Exploitation attempts may appear as abnormal activity involving internal application distribution or Android file transfer functions, sometimes producing error responses instead of successful ones. Requests sent to error pages using unexpected methods or parameters should be treated as highly suspicious.
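The log review described above can be partially automated. The following is a rough sketch, using illustrative patterns rather than Ivanti-published indicators: it flags requests containing shell-style metacharacters or command-like parameters, plus POST/PUT requests aimed at error pages.

```python
# Sketch of a log triage pass for the behavior described above. The
# patterns are illustrative examples, not vendor-published indicators.
import re

SUSPICIOUS = [
    re.compile(r"[;&|`]|\$\("),              # shell metacharacters in input
    re.compile(r"format=|cmd=|exec", re.I),  # command-like query parameters
]

def triage(log_lines):
    hits = []
    for line in log_lines:
        # Unexpected methods against error pages, per the guidance above.
        if re.search(r'"(?:POST|PUT) /[^" ]*error', line):
            hits.append(line)
        elif any(p.search(line) for p in SUSPICIOUS):
            hits.append(line)
    return hits

sample = [
    '1.2.3.4 - - "GET /mifs/portal HTTP/1.1" 200',
    '5.6.7.8 - - "POST /errorPage.jsp?x=1 HTTP/1.1" 500',
    '9.9.9.9 - - "GET /api?arg=;id HTTP/1.1" 200',
]
for hit in triage(sample):
    print(hit)
```

A pass like this only surfaces candidates for human review; absence of hits does not mean absence of compromise, which is why the guidance above also covers outbound traffic and web shell artifacts.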

Previous investigations show attackers often maintain access by placing or modifying web shell files on application error pages. Security teams should also watch for unexpected application archive files being added to servers, as these may be used to create remote connections back to attackers. Because EPMM does not normally initiate outbound network traffic, any such activity in firewall logs should be treated as a strong warning sign.

Ivanti advises organizations that detect compromise to restore systems from clean backups or rebuild affected servers before applying updates. Attempting to manually clean infected systems is not recommended. Because these flaws were exploited before patches were released, organizations that had vulnerable EPMM servers exposed to the internet at the time of disclosure should treat those systems as compromised and initiate full incident response procedures rather than relying on patching alone. 

CRIL Uncovers ShadowHS: Fileless Linux Post-Exploitation Framework Built for Stealthy Long-Term Access

 

Cyble Research & Intelligence Labs (CRIL) has uncovered ShadowHS, a Linux post-exploitation toolkit that operates entirely in system memory and is built for covert persistence after an initial breach. Instead of dropping binaries on disk, it runs filelessly, helping it bypass standard security checks and leaving minimal forensic traces. ShadowHS relies on a weaponized version of hackshell, enabling attackers to maintain long-term remote control through interactive sessions. This fileless approach makes detection harder because many traditional tools focus on scanning stored files rather than memory-resident activity. 

CRIL found that ShadowHS is delivered using an encrypted shell loader that deploys a heavily modified hackshell component. During execution, the loader reconstructs the payload in memory using AES-256-CBC decryption, along with Perl byte skipping routines and gzip decompression. After rebuilding, the payload is executed via /proc//fd/ with a spoofed argv[0], a method designed to avoid leaving artifacts on disk and evade signature-based detection tools. 

Once active, ShadowHS begins with reconnaissance, mapping system defenses and identifying installed security tools. It checks for evidence of prior compromise and keeps background activity intentionally low, allowing operators to selectively activate functions such as credential theft, lateral movement, privilege escalation, cryptomining, and covert data exfiltration. CRIL noted that this behavior reflects disciplined operator tradecraft rather than opportunistic attacks. 

ShadowHS also performs extensive fingerprinting for commercial endpoint tools such as CrowdStrike, Tanium, Sophos, and Microsoft Defender, as well as monitoring agents tied to cloud platforms and industrial control environments. While runtime activity appears restrained, CRIL emphasized the framework contains a wider set of dormant capabilities that can be triggered when needed. 

A key feature highlighted by CRIL is ShadowHS’s stealthy data exfiltration method. Instead of using standard network channels, it leverages user-space tunneling over GSocket, replacing rsync’s default transport to move data through firewalls and restrictive environments. Researchers observed two variants: one using DBus-based tunneling and another using netcat-style GSocket tunnels, both designed to preserve file metadata such as timestamps, permissions, and partial transfer state. 

The framework also includes dormant modules for memory dumping to steal credentials, SSH-based lateral movement and brute-force scanning, and privilege escalation using kernel exploits. Cryptomining support is included through tools such as XMRig, GMiner, and lolMiner. ShadowHS further contains anti-competition routines to detect and terminate rival malware like Rondo and Kinsing, as well as credential-stealing backdoors such as Ebury, while checking kernel integrity and loaded modules to assess whether the host is already compromised or under surveillance.

CRIL concluded that ShadowHS highlights growing challenges in securing Linux environments against fileless threats. Since these attacks avoid disk artifacts, traditional antivirus and file-based detection fall short. Effective defense requires monitoring process behavior, kernel telemetry, and memory-resident activity, focusing on live system behavior rather than static indicators.
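One practical piece of that memory-focused monitoring is checking for running processes whose executable no longer exists on disk, a classic Linux indicator of memory-resident execution. This is a generic heuristic sketch, not a ShadowHS-specific detector, and it will miss tradecraft that keeps a benign-looking binary on disk:

```python
# Defensive sketch: on Linux, /proc/<pid>/exe for a memory-resident payload
# often points at a deleted file or a memfd object. Generic heuristic only.
import os

def memory_resident_processes():
    """List (pid, name, exe) for processes whose binary is gone from disk."""
    findings = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            exe = os.readlink(f"/proc/{pid}/exe")
            if "(deleted)" in exe or exe.startswith("/memfd:"):
                with open(f"/proc/{pid}/comm") as f:
                    findings.append((pid, f.read().strip(), exe))
        except OSError:
            continue  # process exited, or we lack permission to inspect it
    return findings

for pid, name, exe in memory_resident_processes():
    print(f"pid={pid} comm={name} exe={exe}")
```

Note that a "(deleted)" executable can also appear benignly, for example when a package upgrade replaces a binary while the old process keeps running, so hits need correlation with other telemetry before being treated as compromise.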

Malicious Chrome Extensions Hijack Affiliate Links and Steal ChatGPT Tokens

 

Cybersecurity researchers have uncovered an alarming surge in malicious Google Chrome extensions that hijack affiliate links, steal sensitive data, and siphon OpenAI ChatGPT authentication tokens. These deceptive add-ons, masquerading as handy shopping aids and AI enhancers, infiltrate the Chrome Web Store to exploit user trust. Disguised tools like Amazon Ads Blocker from "10Xprofit" promise ad-free browsing but secretly swap creators' affiliate tags with the developer's own, robbing influencers of commissions across Amazon, AliExpress, Best Buy, Shein, Shopify, and Walmart.

Socket Security identified 29 such extensions in this cluster, uploaded as recently as January 19, 2026, which scan product URLs without user interaction to inject tags like "10xprofit-20." They also scrape product details to attacker servers at "app.10xprofit[.]io" and deploy fake "LIMITED TIME DEAL" countdowns on AliExpress pages to spur impulse buys. Misleading store listings claim mere "small commissions" from coupons, violating policies that demand clear disclosures, user consent for injections, and single-purpose designs.

Broadcom's Symantec separately flagged four data-thieving extensions with over 100,000 installs, including Good Tab, which relays clipboard access to "api.office123456[.]com," and Children Protection, which harvests cookies, injects ads, and executes remote JavaScript. DPS Websafe hijacks searches to malicious sites, while Stock Informer exposes users to an old XSS flaw (CVE-2020-28707). Researchers Yuanjing Guo and Tommy Dong stress caution even with trusted sources, as broad permissions enable unchecked surveillance.

LayerX exposed 16 coordinated "ChatGPT Mods" extensions—downloaded about 900 times—that pose as productivity boosters like voice downloaders and prompt managers. These inject scripts into chatgpt.com to capture session tokens, granting attackers full account access to conversations, metadata, and code. Natalie Zargarov notes this leverages AI tools' high privileges, turning trusted brands into deception vectors amid booming enterprise AI adoption.

Compounding risks, the "Stanley" malware-as-a-service toolkit, sold on Russian forums for $2,000-$6,000, generates note-taking extensions that overlay phishing iframes on bank sites while faking legitimate URLs. Premium buyers get Chrome Store approval guarantees and C2 panels for victim management; it vanished January 27, 2025, post-exposure but may rebrand. Varonis' Daniel Kelley warns browsers are now prime endpoints in BYOD and remote setups.

Users must audit extensions for mismatched features, excessive permissions, and vague disclosures—remove suspects via Chrome settings immediately. Limit installs to verified needs, favoring official apps over third-party tweaks. As e-commerce and AI extensions multiply, proactive vigilance thwarts financial sabotage and data breaches in this evolving browser battlefield.
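The permission audit suggested above can be scripted against an unpacked extension's manifest.json. In this sketch, the set of "risky" permissions is an illustrative judgment call, not an official Chrome policy list:

```python
# Hedged sketch of the audit step above: scan an extension manifest for
# broad permissions. The RISKY set reflects illustrative judgment, not
# any official Chrome Web Store policy list.
import json

RISKY = {"<all_urls>", "cookies", "clipboardRead", "webRequest",
         "scripting", "tabs", "history"}

def audit_manifest(manifest_text: str) -> list[str]:
    """Return the sorted risky permissions a manifest requests."""
    m = json.loads(manifest_text)
    requested = set(m.get("permissions", [])) | set(m.get("host_permissions", []))
    return sorted(requested & RISKY)

manifest = '''{
  "name": "Example Shopping Helper",
  "manifest_version": 3,
  "permissions": ["cookies", "scripting", "storage"],
  "host_permissions": ["<all_urls>"]
}'''
print(audit_manifest(manifest))  # ['<all_urls>', 'cookies', 'scripting']
```

Broad permissions are not proof of malice, since legitimate extensions sometimes need them, but any extension requesting several of these while claiming a narrow purpose deserves scrutiny.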

CISA Issues New Guidance on Managing Insider Cybersecurity Risks

 



The US Cybersecurity and Infrastructure Security Agency (CISA) has released new guidance warning that insider threats represent a major and growing risk to organizational security. The advisory was issued during the same week reports emerged about a senior agency official mishandling sensitive information, drawing renewed attention to the dangers posed by internal security lapses.

In its announcement, CISA described insider threats as risks that originate from within an organization and can arise from either malicious intent or accidental mistakes. The agency stressed that trusted individuals with legitimate system access can unintentionally cause serious harm to data security, operational stability, and public confidence.

To help organizations manage these risks, CISA published an infographic outlining how to create a structured insider threat management team. The agency recommends that these teams include professionals from multiple departments, such as human resources, legal counsel, cybersecurity teams, IT leadership, and threat analysis units. Depending on the situation, organizations may also need to work with external partners, including law enforcement or health and risk professionals.

According to CISA, these teams are responsible for overseeing insider threat programs, identifying early warning signs, and responding to potential risks before they escalate into larger incidents. The agency also pointed organizations to additional free resources, including a detailed mitigation guide, training workshops, and tools to evaluate the effectiveness of insider threat programs.

Acting CISA Director Madhu Gottumukkala emphasized that insider threats can undermine trust and disrupt critical operations, making them particularly challenging to detect and prevent.

Shortly before the guidance was released, media reports revealed that Gottumukkala had uploaded sensitive CISA contracting documents into a public version of an AI chatbot during the previous summer. According to unnamed officials, the activity triggered automated security alerts designed to prevent unauthorized data exposure from federal systems.

CISA’s Director of Public Affairs later confirmed that the chatbot was used with specific controls in place and stated that the usage was limited in duration. The agency noted that the official had received temporary authorization to access the tool and last used it in mid-July 2025.

By default, CISA blocks employee access to public AI platforms unless an exception is granted. The Department of Homeland Security, which oversees CISA, also operates an internal AI system designed to prevent sensitive government information from leaving federal networks.

Security experts caution that data shared with public AI services may be stored or processed outside the user’s control, depending on platform policies. This makes such tools particularly risky when handling government or critical infrastructure information.

The incident adds to a series of reported internal disputes and security-related controversies involving senior leadership, as well as similar lapses across other US government departments in recent years. These cases are a testament to how poor internal controls and misuse of personal or unsecured technologies can place national security and critical infrastructure at risk.

While CISA’s guidance is primarily aimed at critical infrastructure operators and regional governments, recent events suggest that insider threat management remains a challenge across all levels of government. As organizations increasingly rely on AI and interconnected digital systems, experts continue to stress that strong oversight, clear policies, and leadership accountability are essential to reducing insider-related security risks.

SK hynix Launches New AI Company as Data Center Demand Drives Growth

 

A surge in demand for data center hardware has lifted SK hynix into a stronger market position, driven by limited availability of crucial AI chips. Though rooted in memory production, the company is now pushing further, launching a dedicated arm centered on tailored AI offerings. Rising revenues reflect investor confidence fueled by sustained component shortages, and the growth owes as much to supply constraints and timing as to any strategic pivot. 

Early next year, the business will launch a division known as “AI Company” (AI Co.), set to begin operations in February. The new unit aims to play a central role in the AI data center landscape, positioning itself alongside major contributors. As demand shifts toward bundled options, clients increasingly prefer complete packages blending infrastructure, software, and support over isolated gear. According to SK hynix, these changes open doors that traditional component sales alone left unexplored. 

Though little is known so far, AI Co. plans, according to statements given to The Register, to deliver industry-specific AI tools backed by dedicated infrastructure tied to processing hubs. Initially, the focus will be on software meant to refine how artificial intelligence runs on hardware; over time, financial commitments may stretch into broader areas of data center computing. Alongside funding external ventures and novel technology, reports indicate that turning prototypes into market-ready offerings may form a core piece of its strategy.  

SK hynix is setting aside about $10 billion for the new venture, and next month should bring news of a temporary leadership group and governing committee. The California-focused SSD unit Solidigm will be reorganized rather than left intact: what was once Solidigm becomes AI Co., while SSD production moves into a separate, newly built entity named Solidigm Inc.  

The AI server industry is increasingly leaning toward tailored chips instead of generic ones. According to Counterpoint Research, ASIC shipments for these systems could triple by 2027, and annual units sold might pass fifteen million by 2028, overtaking today's volume leaders, data center GPUs. While initial prices for ASICs sometimes run high, their running costs tend to stay low compared with premium graphics processors, and demand is commonly driven by inference workloads, which favor efficiency-focused designs. Broadcom stands near the front, positioned to hold roughly six of every ten units delivered in 2027. 

A wider shortage of memory chips keeps lifting SK hynix. According to IDC experts, demand now clearly exceeds available stock because manufacturers are directing more output into server and graphics processing units instead of phones or laptops. As a result, prices throughout the sector have climbed, a shift that directly boosts the firm's earnings. Revenue for 2025 reached ₩97.14 trillion ($67.9 billion), up 47%, and income in the last quarter alone surged 66% year over year to ₩32.8 trillion ($22.9 billion). 

Suppliers such as ASML are seeing gains too, thanks to rising demand for semiconductor production equipment. Known mainly for photolithography systems, the company reported €9.7 billion (roughly $11.6 billion) in revenue in its latest quarter, and forecasts suggest a sharp rise in orders for its high-end EUV tools during the current year. Despite broader market shifts, performance remains strong across key segments. 

Still, experts point out that the memory chip shortage could hurt buyers, as devices like computers and phones become more expensive. Predictions indicate computer shipments might drop this year because supplies are tight and costs are climbing.

Indonesia Temporarily Blocks Grok After AI Deepfake Misuse Sparks Outrage

 

Indonesia has temporarily blocked access to Grok, Elon Musk’s AI creation, following claims of misuse involving fabricated adult imagery. After news of manipulated visuals surfaced, authorities acted quickly, and Reuters notes this is the first such restriction placed on the tool anywhere in the world. The case reflects growing unease about technology aiding harm, with the reaction spreading not through policy papers but through real-time consequences caught online.  

A growing number of reports have linked Grok to incidents where users created explicit imagery of women - sometimes involving minors - without consent. Not long after these concerns surfaced, Indonesia’s digital affairs minister, Meutya Hafid, labeled the behavior a severe breach of online safety norms. 

As cited by Reuters, she described unauthorized sexually suggestive deepfakes as fundamentally undermining personal dignity and civil rights in digital environments. Her office emphasized that such acts fall under grave cyber offenses demanding urgent regulatory attention. The temporary restrictions appeared after Antara News highlighted risks tied to AI-made explicit material. 

Officials said the move was driven by the protection of women, children, and communities, and aimed at reducing mental and societal damage. According to statements by Hafid, fake but realistic intimate imagery counts as digital abuse: such fabricated visuals, though synthetic, still trigger actual consequences for victims, and the state insists that artificial does not mean harmless. Following concerns over Grok's functionality, official notices were issued demanding explanations of its development process and the harms observed. 

Because of the potential risks, Indonesian regulators required the firm to detail concrete measures aimed at reducing abuse going forward. According to Hafid, whether the service remains accessible locally hinges on the adoption of rigorous filtering systems, compliance with national regulations, and adherence to responsible artificial intelligence practices. 

Only after these steps are demonstrated will operation be permitted to continue. Last week, Musk and xAI issued a warning that improper use of the chatbot for unlawful acts might lead to legal action; on X, Musk stated clearly that individuals generating illicit material through Grok assume the same liability as those posting such content outright. Still, after rising backlash over the platform's inability to stop deepfake circulation, his stance appeared to shift slightly. 

He re-shared a post from one follower implying that fault rests more with the people creating fakes than with the system hosting them. The debate spread beyond Indonesia's borders, reaching American lawmakers: three US senators wrote to both Google and Apple, pushing for the removal of the Grok and X applications from their app stores over breaches involving explicit material. Their correspondence framed the request around existing rules prohibiting sexually charged imagery produced without consent. 

What concerned them most was an automated flood of inappropriate depictions of women and minors, content they labeled damaging and possibly unlawful. Indonesia's move is part of a rising trend: when tied to misuse like nonconsensual deepfakes, AI tools now face sharper government reactions. Though once slow to act, officials increasingly treat such technology as a risk needing strong intervention. 

A shift is visible: responses that were once hesitant now carry weight, driven by public concern over digital harm. Not every nation acts alike, yet the pattern grows clearer through cases like this one. Pressure builds not just from the incidents themselves, but from how widely they spread before being challenged.

WhatsApp-Based Astaroth Banking Trojan Targets Brazilian Users in New Malware Campaign

 

A fresh look at digital threats shows malicious software using WhatsApp to spread the Astaroth banking trojan, mainly affecting people in Brazil. Though messaging apps are common tools for connection, they now serve attackers aiming to steal financial data. This method - named Boto Cor-de-Rosa by analysts at Acronis Threat Research - stands out because it leans on social trust within widely used platforms. Instead of relying on email or fake websites, hackers piggyback on real conversations, slipping malware through shared links. 
While such tactics aren’t entirely new, their adaptation to local habits makes them harder to spot. In areas where nearly everyone uses WhatsApp daily, blending in becomes easier for cybercriminals. Researchers stress that ordinary messages can now carry hidden risks when sent from compromised accounts. Unlike older campaigns, this one avoids flashy tricks, favoring quiet infiltration over noise. As behavior shifts online, so do attack strategies - quietly, persistently adapting. 

Acronis reports that the malware harvests WhatsApp contact lists and sends harmful messages automatically, spreading fast with no need for constant hacker input. Notably, while the main Astaroth component is still written in Delphi and the setup script remains in Visual Basic, analysts spotted a fresh worm-style propagation feature built entirely in Python. The mix of languages shows how cyber attackers now build adaptable tools by blending code types for distinct jobs - variety that supports stealthier, more responsive attack systems. 

Astaroth - sometimes called Guildma - has operated nonstop since 2015, focusing mostly on Brazil within Latin America. Stealing login details and enabling money scams sits at the core of its activity. By 2024, several hacking collectives, such as PINEAPPLE and Water Makara, began spreading it through deceptive email messages. This newest push moves away from that method, turning instead to WhatsApp; because so many people there rely on the app daily, fake requests feel far more believable. 

Although tactics shift, the aim stays unchanged. Exploiting WhatsApp to spread banking trojans is not entirely new, but it has gained speed lately. Earlier, Trend Micro spotted the Water Saci group using comparable methods to push financial malware such as Maverick and a variant of Casbaneiro. Messaging apps now appear more appealing to attackers than classic email phishing. Later, Sophos disclosed details of an evolving attack series labeled STAC3150, closely tied to previous patterns. That operation focused heavily on WhatsApp users in Brazil, distributing the Astaroth malware through deceptive channels. 

Nearly all infected machines - over 95 percent - were located in Brazil, though isolated instances appeared in the U.S. and Austria. Running uninterrupted since early autumn 2025, the campaign relied on compressed archives paired with installer files, triggering script-based downloads meant to quietly embed the malicious software. What Acronis has uncovered fits well with those earlier reports. WhatsApp messages now carry harmful ZIP files sent straight to users. Opening one reveals what seems like a safe document - but it is actually a Visual Basic Script. Once executed, the script pulls down further tools from remote servers. 

This step kicks off the full infection sequence. After activation, this malware splits its actions into two distinct functions. While one part spreads outward by pulling contact data from WhatsApp and distributing infected files without user input, the second runs hidden, observing online behavior - especially targeting visits to financial sites - to capture login details. 
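The delivery pattern described above - a ZIP whose "document" is really a Visual Basic Script - can be screened for by inspecting archive members before anything is opened. A minimal sketch, assuming an illustrative extension blocklist and hypothetical filenames:

```python
import io
import zipfile

# Extensions and sample filenames are illustrative assumptions, not from the report.
SCRIPT_EXTS = (".vbs", ".js", ".bat", ".ps1", ".lnk")

def risky_members(zip_bytes: bytes) -> list[str]:
    """Return archive members whose real extension is an executable script type."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return [name for name in zf.namelist() if name.lower().endswith(SCRIPT_EXTS)]
```

Because `str.endswith` accepts a tuple, a single pass over `namelist()` covers every extension in the blocklist without extracting any file.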

The malware also logs its own performance constantly, feeding back live updates on how many messages succeed or fail, along with transmission speed. Attackers gain a constant stream of operational insight thanks to embedded reporting tools spotted by Acronis.

Microsoft BitLocker Encryption Raises Privacy Questions After FBI Key Disclosure Case

 


Microsoft’s BitLocker encryption, long viewed as a safeguard for Windows users’ data, is under renewed scrutiny after reports revealed the company provided law enforcement with encryption keys in a criminal investigation.

The case, detailed in a government filing [PDF], alleges that individuals in Guam illegally claimed pandemic-related unemployment benefits. According to Forbes, this marks the first publicly documented instance of Microsoft handing over BitLocker recovery keys to law enforcement.

BitLocker is a built-in Windows security feature designed to encrypt data stored on devices. It operates through two configurations: Device Encryption, which offers a simplified setup, and BitLocker Drive Encryption, a more advanced option with greater control.

In both configurations, Microsoft generally stores BitLocker recovery keys on its servers when encryption is activated using a Microsoft account. As the company explains in its documentation, "If you use a Microsoft account, the BitLocker recovery key is typically attached to it, and you can access the recovery key online."

A similar approach applies to organizational devices. Microsoft notes, "If you're using a device that's managed by your work or school, the BitLocker recovery key is typically backed up and managed by your organization's IT department."

Users are not required to rely on Microsoft for key storage. Alternatives include saving the recovery key to a USB drive, storing it as a local file, or printing it. However, many customers opt for Microsoft’s cloud-based storage because it allows easy recovery if access is lost. This convenience, though, effectively places Microsoft in control of data access and reduces the user’s exclusive ownership of encryption keys.

Apple provides a comparable encryption solution through FileVault, paired with iCloud. Apple offers two protection levels: Standard Data Protection and Advanced Data Protection for iCloud.

Under Standard Data Protection, Apple retains the encryption keys for most iCloud data, excluding certain sensitive categories such as passwords and keychain data. With Advanced Data Protection enabled, Apple holds keys only for iCloud Mail, Contacts, and Calendar. Both Apple and Microsoft comply with lawful government requests, but neither can disclose encryption keys they do not possess.

Apple explicitly addresses this in its law enforcement guidelines [PDF]: "All iCloud content data stored by Apple is additionally encrypted at the location of the server. For data Apple can decrypt, Apple retains the encryption keys in its US data centers. Apple does not receive or retain encryption keys for [a] customer's end-to-end encrypted data."

This differs from BitLocker’s default behavior, where Microsoft may retain access to a customer’s encryption keys if the user enables cloud backup during setup.

Microsoft states that it does not share its own encryption keys with governments, but it stops short of extending that guarantee to customer-managed keys. In its law enforcement guidance, the company says, "We do not provide any government with our encryption keys or the ability to break our encryption." It further adds, "In most cases, our default is for Microsoft to securely store our customers' encryption keys. Even our largest enterprise customers usually prefer we keep their keys to prevent accidental loss or theft. However, in many circumstances we also offer the option for consumers or enterprises to keep their own keys, in which case Microsoft does not maintain copies."

Microsoft’s latest Government Requests for Customer Data Report, covering July 2024 through December 2024, shows the company received 128 law enforcement requests globally, including 77 from US agencies. Only four requests during that period—three from Brazil and one from Canada—resulted in content disclosure.

After the article was published, a Microsoft spokesperson clarified, “With BitLocker, customers can choose to store their encryption keys locally, in a location inaccessible to Microsoft, or in Microsoft’s cloud. We recognize that some customers prefer Microsoft’s cloud storage so we can help recover their encryption key if needed. While key recovery offers convenience, it also carries a risk of unwanted access, so Microsoft believes customers are in the best position to decide whether to use key escrow and how to manage their keys.”

Privacy advocates argue that this design reflects Microsoft’s priorities. As Erica Portnoy, senior staff technologist at the Electronic Frontier Foundation, stated in an email to The Register, "Microsoft is making a tradeoff here between privacy and recoverability. At a guess, I'd say that's because they're more focused on the business use case, where loss of data is much worse than Microsoft or governments getting access to that data. But by making that choice, they make their product less suitable for individuals and organizations with higher privacy needs. It's a clear message to activist organizations and law firms that Microsoft is not building their products for you."

Multi-Stage Phishing Campaign Deploys Amnesia RAT and Ransomware Using Cloud Services

 

One recently uncovered cyberattack is targeting individuals across Russia through a carefully staged deception campaign. Rather than exploiting software vulnerabilities, the operation relies on manipulating user behavior, according to analysis by Cara Lin of Fortinet FortiGuard Labs. The attack delivers two major threats: ransomware that encrypts files for extortion and a remote access trojan known as Amnesia RAT. Legitimate system tools and trusted services are repurposed as weapons, allowing the intrusion to unfold quietly while bypassing traditional defenses. By abusing real cloud platforms, the attackers make detection significantly more difficult, as nothing initially appears out of place. 

The attack begins with documents designed to resemble routine workplace material. On the surface, these files appear harmless, but they conceal code that runs without drawing attention. Visual elements within the documents are deliberately used to keep victims focused, giving the malware time to execute unseen. Fortinet researchers noted that these visuals are not cosmetic but strategic, helping attackers establish deeper access before suspicion arises. 

A defining feature of the campaign is its coordinated use of multiple public cloud services. Instead of relying on a single platform, different components are distributed across GitHub and Dropbox. Scripts are hosted on GitHub, while executable payloads such as ransomware and remote access tools are stored on Dropbox. This fragmented infrastructure improves resilience, as disabling one service does not interrupt the entire attack chain and complicates takedown efforts. 

Phishing emails deliver compressed archives that contain decoy documents alongside malicious Windows shortcut files labeled in Russian. These shortcuts use double file extensions to impersonate ordinary text files. When opened, they trigger a PowerShell command that retrieves additional code from a public GitHub repository, functioning as an initial installer. The process runs silently, modifies system settings to conceal later actions, and opens a legitimate-looking document to maintain the illusion of normal activity. 
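The double-extension disguise described above is mechanical enough to flag automatically: a shortcut named like `invoice.pdf.lnk` displays as `invoice.pdf` when Windows hides known extensions. A minimal sketch of the heuristic, with both extension lists as illustrative assumptions:

```python
# Illustrative lists; real filters would be far broader.
DECOY_EXTS = {".txt", ".pdf", ".doc", ".docx", ".jpg"}   # what the name pretends to be
RISKY_EXTS = {".lnk", ".vbs", ".js", ".ps1", ".exe"}     # what the file actually is

def is_double_extension(filename: str) -> bool:
    """True if a risky extension hides behind a benign-looking document extension."""
    name = filename.lower()
    for risky in RISKY_EXTS:
        if name.endswith(risky):
            stem = name[: -len(risky)]
            return any(stem.endswith(decoy) for decoy in DECOY_EXTS)
    return False
```

A plain `setup.exe` is not flagged - the heuristic targets only the deceptive pairing, not executables in general.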

After execution, the attackers receive confirmation via the Telegram Bot API. A deliberate delay follows before launching an obfuscated Visual Basic Script, which assembles later-stage payloads directly in memory. This approach minimizes forensic traces and allows attackers to update functionality without altering the broader attack flow. 

The malware then aggressively disables security protections. Microsoft Defender exclusions are configured, protection modules are shut down, and the defendnot utility is used to deceive Windows into disabling antivirus defenses entirely. Registry modifications block administrative tools, repeated prompts seek elevated privileges, and continuous surveillance is established through automated screenshots exfiltrated via Telegram. 

Once defenses are neutralized, Amnesia RAT is downloaded from Dropbox. The malware enables extensive data theft from browsers, cryptocurrency wallets, messaging apps, and system metadata, while providing full remote control of infected devices. In parallel, ransomware derived from the Hakuna Matata family encrypts files, manipulates clipboard data to redirect cryptocurrency transactions, and ultimately locks the system using WinLocker. 

Fortinet emphasized that the campaign reflects a broader shift in phishing operations, where attackers increasingly weaponize legitimate tools and psychological manipulation instead of exploiting software flaws. Microsoft advises enabling Tamper Protection and monitoring Defender changes to reduce exposure, as similar attacks are becoming more widespread across Russian organizations.

1Password Launches Pop-Up Alerts to Block Phishing Scams

 

1Password has introduced a new phishing protection feature that displays pop-up warnings when users visit suspicious websites, aiming to reduce the risk of credential theft and account compromise. This enhancement builds on the password manager’s existing safeguards and responds to growing phishing threats fueled by increasingly sophisticated attack techniques.

Traditionally, 1Password protects users by refusing to auto-fill credentials on sites whose URLs do not exactly match those stored in the user’s vault. While this helps block many phishing attempts, it still relies on users noticing that something is wrong when their password manager does not behave as expected, which is not always the case. Some users may assume the tool malfunctioned or that their vault is locked and proceed to type passwords manually, inadvertently handing them to attackers.

The new feature addresses this gap by adding a dedicated pop-up alert that appears when 1Password detects a potential phishing URL, such as a typosquatted or lookalike domain. For example, a domain with an extra character in the name may appear convincing at a glance, especially when the phishing page closely imitates the legitimate site’s design. The pop-up is designed to prompt users to slow down, double-check the URL, and reconsider entering their credentials, effectively adding a behavioral safety net on top of technical controls.
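A lookalike check of this kind can be approximated with an edit-similarity measure. A minimal sketch using Python's standard library, assuming a small hypothetical allowlist of domains the user holds credentials for (this is not 1Password's actual detection logic):

```python
from difflib import SequenceMatcher

def lookalike(domain: str, known_domains: list[str], threshold: float = 0.8):
    """Return the known domain that `domain` closely resembles, or None."""
    for legit in known_domains:
        if domain == legit:
            return None  # exact match: fill credentials as usual
        if SequenceMatcher(None, domain, legit).ratio() >= threshold:
            return legit  # near match: likely typosquat, warn the user
    return None
```

An exact match fills as usual; a near match above the threshold is what would trigger a warning pop-up. Production detectors also normalize Unicode confusables (e.g. Cyrillic "а" for Latin "a"), which this sketch ignores.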

1Password is rolling out this capability automatically for individual and family subscribers, ensuring broad coverage for consumers without requiring configuration changes. In business environments, administrators can enable the feature for employees through Authentication Policies in the 1Password admin console, integrating it into existing access control strategies. This flexibility allows organizations to align phishing protection with their security policies and training programs.

The company underscores the importance of this enhancement with survey findings from 2,000 U.S. respondents, revealing that 61% had been successfully phished and 75% do not check URLs before clicking links. The survey also shows that one-third of employees reuse passwords on work accounts, nearly half have fallen for phishing at work, and many believe protection is solely the IT department’s responsibility. With 72% admitting to clicking suspicious links and over half choosing to delete rather than report questionable messages, 1Password’s new pop-up warnings aim to counter risky user behavior and strengthen overall phishing defenses.

Online Misinformation and AI-Driven Fake Content Raise Concerns for Election Integrity

 

With elections drawing near, unease is spreading about how digital falsehoods might influence voter behavior. False narratives on social platforms may skew perception, according to officials and scholars alike. As artificial intelligence advances, deceptive content grows more convincing, slipping past scrutiny. Trust in core societal structures risks erosion under such pressure. Warnings come not just from academics but also from community leaders watching real-time shifts in public sentiment.  

Fake messages have recently circulated online, pretending to be from the City of York Council. Though they looked real, officials later stated these ads were entirely false. One showed a request for people willing to host asylum seekers; another asked volunteers to take down St George flags. A third offered work fixing road damage across neighborhoods. What made them convincing was their design - complete with official logos, formatting, and contact information typical of genuine notices. 

Without close inspection, someone scrolling quickly might believe them. Despite their authentic appearance, none of the programs mentioned were active or approved by local government. The resemblance to actual council material caused confusion until authorities stepped in to clarify. When BBC Verify examined the pictures, blurred logos stood out immediately, and wrong fonts appeared alongside misspelled words - signs often pointing toward artificial creation. 

Details such as fingers appeared twisted or incomplete - a frequent flaw in computer-generated visuals. One poster included an email address tied to a real council employee, though that person had no knowledge of the material. Websites referenced in some flyers simply did not exist. Even so, plenty of individuals passed the content along without questioning its truth. A single fabricated post managed to spread through networks totaling over 500,000 followers. The false appearance held strong appeal despite clear warning signs. 

What spreads fast online isn’t always true - Clare Douglas, head of City of York Council, pointed out how today’s tech amplifies old problems in new ways. False stories once moved slowly; now they race across devices at a pace that overwhelms fact-checking efforts. Trust fades when people see conflicting claims everywhere, especially around health or voting matters. Institutions lose ground not because facts disappear, but because attention scatters too widely. When doubt sticks longer than corrections, participation dips quietly over time.  

Ahead of public meetings, tensions surfaced in various regions. Misinformation targeting asylum seekers and councils emerged online in Barnsley, according to Sir Steve Houghton, its council head. False stories spread further due to influencers who keep sharing them - profit often outweighs correction. Although government outlets issued clarifications, distorted messages continue flooding digital spaces. Their sheer number, combined with how long they linger, threatens trust between groups and raises risks for everyday security. Not everyone checks facts these days, according to Ilya Yablokov from the University of Sheffield’s Disinformation Research Cluster. Because AI makes it easier than ever, faking believable content takes little effort now. 

With just a small setup, someone can flood online spaces fast. What helps spread falsehoods is how busy people are - they skip checking details before passing things along. Instead, gut feelings or existing opinions shape what gets shared. Fabricated stories spreading locally might cost almost nothing to create, yet their impact on democracy can be deep. 

When misleading accounts reach more voters, specialists emphasize skills like questioning sources, checking facts, or understanding media messages - these help preserve confidence in public processes while supporting thoughtful engagement during voting events.

Suspicious Polymarket Bets Spark Insider Trading Fears After Maduro’s Capture

 

A sudden, massive bet surfaced just ahead of a major political development involving Venezuela’s leader. Days prior to Donald Trump revealing that Nicolás Maduro had been seized by U.S. authorities, an individual on Polymarket placed a highly profitable position. That trade turned a substantial gain almost instantly after the news broke. Suspicion now centers on how the timing could have been so precise. Information not yet public might have influenced the decision. The incident casts doubt on who truly knows what - and when - in digital betting arenas. Profits like these do not typically emerge without some edge. 

Hours before Trump spoke on Saturday, predictions about Maduro losing control by late January jumped fast on Polymarket. A single user, active for less than a month, made four distinct moves tied to Venezuela's political situation. That player started with $32,537 and ended with over $436,000 in returns. Instead of a name, only a digital wallet marks the profile. Who actually placed those bets has not come to light. 

That Friday afternoon, market signals began shifting - quietly at first. Come late evening, chances of Maduro being ousted edged up to 11%, starting from only 6.5% earlier. Then, overnight into January 3, something sharper unfolded. Activity picked up fast, right before news broke. Word arrived via a post: Trump claimed Maduro was under U.S. arrest. Traders appear to have moved quickly, moments prior. Their actions hint at advance awareness - or sharp guesswork - as prices reacted well before confirmation surfaced. Despite repeated attempts, Polymarket offered no prompt reply regarding the odd betting patterns. 

Still, unease is growing among regulators and lawmakers. According to Dennis Kelleher - who leads Better Markets, an independent organization focused on financial oversight - the bet carries every sign of being rooted in privileged knowledge. Not just one trader walked away with gains: others on Polymarket also pulled in sizable returns - tens of thousands of dollars - in the window before the news broke. That timing hints at information spreading earlier than expected; some clues likely slipped out ahead of formal releases. The episode sparked concern among American legislators. 

On Monday, New York's Representative Ritchie Torres - affiliated with the Democratic Party - filed a bill targeting insider activity by public officials in forecast-based trading platforms. Should such individuals hold significant details not yet disclosed, involvement in these wagers would be prohibited under his plan. This move surfaces amid broader scrutiny over how loosely governed these speculative arenas remain. Prediction markets like Polymarket and Kalshi gained traction fast across the U.S., letting people bet on politics, economies, or world events. 

When the 2024 presidential race heated up, millions flowed into these sites - adding up quickly. Insider-knowledge trades face strict rules on Wall Street, yet forecasting platforms often escape similar control. Under Biden, authorities paid closer attention to these markets, increasing pressure across the sector. When Trump returned to power, conditions shifted, opening space for lighter supervision. Leadership at Kalshi and Polymarket includes Donald Trump Jr., serving behind the scenes in guiding roles. 

Though Kalshi clearly prohibits insider trading - even among government staff using classified details - the Maduro wagering debate reveals regulatory struggles. Prediction platforms increasingly blur the line between guesswork, uneven access to information, and outright ethical breaches.

Hackers Abuse Vulnerable Training Web Apps to Breach Enterprise Cloud Environments

 

Threat actors are actively taking advantage of poorly secured web applications designed for security training and internal penetration testing to infiltrate cloud infrastructures belonging to Fortune 500 firms and cybersecurity vendors. These applications include deliberately vulnerable platforms such as DVWA, OWASP Juice Shop, Hackazon, and bWAPP.

Research conducted by automated penetration testing firm Pentera reveals that attackers are using these exposed apps as entry points to compromise cloud systems. Once inside, adversaries have been observed deploying cryptocurrency miners, installing webshells, and moving laterally toward more sensitive assets.

Because these testing applications are intentionally insecure, exposing them to the public internet—especially when they run under highly privileged cloud accounts—creates significant security risks. Pentera identified 1,926 active vulnerable applications accessible online, many tied to excessive Identity and Access Management (IAM) permissions and hosted across AWS, Google Cloud Platform (GCP), and Microsoft Azure environments.

Pentera stated that the affected deployments belonged to several Fortune 500 organizations, including Cloudflare, F5, and Palo Alto Networks. The researchers disclosed their findings to the impacted companies, which have since remediated the issues. Analysis showed that many instances leaked cloud credentials, failed to implement least-privilege access controls, and more than half still relied on default login details—making them easy targets for attackers.

The exposed credentials could allow threat actors to fully access S3 buckets, Google Cloud Storage, and Azure Blob Storage, as well as read and write secrets, interact with container registries, and obtain administrative-level control over cloud environments. Pentera emphasized that these risks were already being exploited in real-world attacks.

"During the investigation, we discovered clear evidence that attackers are actively exploiting these exact attack vectors in the wild – deploying crypto miners, webshells, and persistence mechanisms on compromised systems," the researchers said.

Signs of compromise were confirmed when analysts examined multiple misconfigured applications. In some cases, they were able to establish shell access and analyze data to identify system ownership and attacker activity.

"Out of the 616 discovered DVWA instances, around 20% were found to contain artifacts deployed by malicious actors," Pentera says in the report.

The malicious activity largely involved the use of the XMRig mining tool, which silently mined Monero (XMR) in the background. Investigators also uncovered a persistence mechanism built around a script named ‘watchdog.sh’. When removed, the script could recreate itself from a base64-encoded backup and re-download XMRig from GitHub.

Additionally, the script retrieved encrypted tools from a Dropbox account using AES-256 encryption and terminated rival miners on infected systems. Other incidents involved a PHP-based webshell called ‘filemanager.php’, capable of file manipulation and remote command execution.

This webshell contained embedded authentication credentials and was configured with the Europe/Minsk (UTC+3) timezone, potentially offering insight into the attackers’ location.
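The self-restoring 'watchdog.sh' behavior described earlier relies on the script carrying its own base64-encoded backup, which gives defenders something to hunt for. A minimal detection sketch; the 40-character cutoff and keyword list are illustrative assumptions, and long base64 runs do occur in benign scripts, so this is a triage heuristic, not a verdict:

```python
import base64
import re

# Runs of 40+ base64 characters, optionally padded. Cutoff is an assumption.
B64_RUN = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")

def embedded_payloads(script_text: str) -> list[str]:
    """Return decoded base64 blobs that look like embedded shell code."""
    hits = []
    for blob in B64_RUN.findall(script_text):
        try:
            decoded = base64.b64decode(blob, validate=True).decode("utf-8")
        except Exception:
            continue  # not valid base64, or not text once decoded
        # Flag blobs that decode to a script or a downloader command.
        if decoded.startswith("#!") or "curl " in decoded or "wget " in decoded:
            hits.append(decoded)
    return hits
```

Anything flagged here warrants the same follow-up the investigators performed: identifying what the blob recreates and which remote hosts it contacts.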

Pentera noted that these malicious components were discovered only after Cloudflare, F5, and Palo Alto Networks had been notified and had already resolved the underlying exposures.

To reduce risk, Pentera advises organizations to keep an accurate inventory of all cloud assets—including test and training applications—and ensure they are isolated from production environments. The firm also recommends enforcing least-privilege IAM permissions, removing default credentials, and setting expiration policies for temporary cloud resources.
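Those hygiene recommendations can be expressed as a simple inventory audit. A minimal sketch; the record schema, sample assets, and rule wording are illustrative assumptions, not Pentera's tooling:

```python
# Hypothetical asset inventory; every field name is an assumption.
ASSETS = [
    {"name": "dvwa-demo",   "public": True,  "default_creds": True,  "expires": None},
    {"name": "billing-api", "public": True,  "default_creds": False, "expires": "2026-01-01"},
    {"name": "staging-db",  "public": False, "default_creds": True,  "expires": None},
]

def audit(assets):
    """Flag internet-facing assets that violate the hygiene rules above."""
    findings = []
    for a in assets:
        if a["public"] and a["default_creds"]:
            findings.append((a["name"], "default credentials on public asset"))
        if a["public"] and a["expires"] is None:
            findings.append((a["name"], "no expiration policy on public asset"))
    return findings
```

Note that the internal `staging-db` keeping default credentials is deliberately not flagged by these two rules - the report's concern is specifically exposure to the public internet.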

The full Pentera report outlines the investigation process in detail and documents the techniques and tools used to locate vulnerable applications, probe compromised systems, and identify affected organizations.

Cybercriminals Target Cloud File-Sharing Services to Access Corporate Data

 



Cybersecurity analysts are raising concerns about a growing trend in which corporate cloud-based file-sharing platforms are being leveraged to extract sensitive organizational data. A cybercrime actor known online as “Zestix” has recently been observed advertising stolen corporate information that allegedly originates from enterprise deployments of widely used cloud file-sharing solutions.

Findings shared by cyber threat intelligence firm Hudson Rock suggest that the initial compromise may not stem from vulnerabilities in the platforms themselves, but rather from infected employee devices. In several cases examined by researchers, login credentials linked to corporate cloud accounts were traced back to information-stealing malware operating on users’ systems.

These malware strains are typically delivered through deceptive online tactics, including malicious advertising and fake system prompts designed to trick users into interacting with harmful content. Once active, such malware can silently harvest stored browser data, saved passwords, personal details, and financial information, creating long-term access risks.

When attackers obtain valid credentials and the associated cloud service account does not enforce multi-factor authentication, unauthorized access becomes significantly easier. Without this added layer of verification, threat actors can enter corporate environments using legitimate login details without immediately triggering security alarms.

Hudson Rock also reported that some of the compromised credentials identified during its investigation had been present in criminal repositories for extended periods. This suggests lapses in routine password management practices, such as timely credential rotation or session invalidation after suspected exposure.
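The rotation lapse described above is straightforward to audit once exposure dates from threat intelligence feeds are joined with directory data. A minimal sketch with hypothetical records (field names and dates are assumptions):

```python
from datetime import date

# Each record pairs an account's last password rotation with the date its
# credentials first appeared in a stealer log. All values are illustrative.
ACCOUNTS = [
    {"user": "alice", "last_rotated": date(2024, 1, 10), "exposed_on": date(2024, 6, 1)},
    {"user": "bob",   "last_rotated": date(2025, 2, 3),  "exposed_on": date(2024, 6, 1)},
]

def stale_after_exposure(accounts):
    """Accounts whose password was never rotated after a known exposure."""
    return [a["user"] for a in accounts if a["last_rotated"] <= a["exposed_on"]]
```

Accounts surfaced this way are exactly the ones the report warns about: credentials that remained valid long after they entered criminal repositories.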

Researchers describe Zestix as operating in the role of an initial access broker, meaning the actor focuses on selling entry points into corporate systems rather than directly exploiting them. The access being offered reportedly involves cloud file-sharing environments used across a range of industries, including transportation, healthcare, utilities, telecommunications, legal services, and public-sector operations.

To validate its findings, Hudson Rock analyzed malware-derived credential logs and correlated them with publicly accessible metadata and open-source intelligence. Through this process, the firm identified multiple instances where employee credentials associated with cloud file-sharing platforms appeared in confirmed malware records. However, the researchers emphasized that these findings do not constitute public confirmation of data breaches, as affected organizations have not formally disclosed incidents linked to the activity.

The data allegedly being marketed spans a wide spectrum of corporate and operational material, including technical documentation, internal business files, customer information, infrastructure layouts, and contractual records. Exposure of such data could lead to regulatory consequences, reputational harm, and increased risks related to privacy, security, and competitive intelligence.

Beyond the specific cases examined, researchers warn that this activity reflects a broader structural issue. Threat intelligence data indicates that credential-stealing infections remain widespread across corporate environments, reinforcing the need for stronger endpoint security, consistent use of multi-factor authentication, and proactive credential hygiene.

Hudson Rock stated that relevant cloud service providers have been informed of the verified exposures to enable appropriate mitigation measures.

Targeted Cyberattack Foiled by Resecurity Honeypot


 

Resecurity has revealed details of a targeted intrusion attempt against its internal environment in November 2025. To expose the adversaries behind the attack, the cybersecurity company deliberately turned it into a counterintelligence operation using advanced deception techniques.

When a threat actor used a low-privilege employee account to try to gain access to the enterprise network, Resecurity's incident response team redirected the intrusion into a controlled honeypot: a synthetic data environment built to resemble a realistic enterprise network.
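The redirect idea described above can be sketched in a few lines: once an account is flagged as compromised, an application-layer router serves it decoy data instead of cutting it off, so the intruder keeps interacting while defenders watch. This is an illustrative sketch, not Resecurity's actual implementation; the names (`COMPROMISED`, `serve_production`, `serve_decoy`) are hypothetical.

```python
# Hypothetical sketch of honeypot redirection: flagged accounts are
# transparently routed to a decoy backend rather than being blocked.

COMPROMISED = {"jdoe"}  # accounts flagged by incident response (illustrative)

def serve_production(user):
    return {"records": ["real data"], "user": user}

def serve_decoy(user):
    # Same response shape as production, so the switch is invisible
    # to the attacker; contents are entirely synthetic.
    return {"records": ["synthetic data"], "user": user}

def route_request(user):
    handler = serve_decoy if user in COMPROMISED else serve_production
    return handler(user)
```

The key design choice is that both handlers return identically shaped responses, which is what keeps the attacker engaged long enough to collect telemetry.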

This move not only enabled real-time analysis of the attackers' infrastructure and tradecraft, it also triggered law enforcement involvement after multiple pieces of evidence linked the activity to an Egypt-based threat actor and to infrastructure associated with the ShinyHunters cybercrime group, which was later shown to have falsely claimed responsibility for a data breach.

Resecurity demonstrated how modern deception platforms, using synthetic datasets generated by artificial intelligence combined with carefully curated artifacts from previously leaked dark web material, can transform reconnaissance attempts by financially motivated cybercriminals into actionable intelligence.

Such active defense strategies are becoming increasingly important in modern cybersecurity operations because they yield intelligence without exposing customer or proprietary data.

According to the Resecurity team, threat actors operating under the nickname "Scattered Lapsus$ Hunters" publicly claimed on Telegram that they had accessed the company's systems and stolen sensitive information, including employee records, internal communications, threat intelligence reports, and client data. The firm strongly denied this claim.

The screenshots shared by the group were later confirmed to have come from a honeypot environment built specifically for this purpose, not from Resecurity's production infrastructure.

On November 21, 2025, the company's digital forensics and incident response team observed suspicious probes of publicly available services, as well as targeted attempts to access a restricted employee account.

Initial reconnaissance traffic traced back to Egyptian IP addresses, including 156.193.212.244 and 102.41.112.148, as well as to commercial VPN services. Rather than blocking the intrusion, Resecurity shifted from containment to observation.

Defenders created a carefully staged honeytrap account filled with synthetic data so they could observe the attackers' tactics, techniques, and procedures.

The decoy environment held 28,000 fake consumer profiles and nearly 190,000 mock payment transactions generated from publicly available patterns, including fake Stripe records and fake email addresses derived from credential "combo lists."
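A minimal sketch of how such synthetic decoy records might be generated is below. The field names mirror Stripe's public charge object (the `ch_` ID prefix, amounts in cents), but the generator itself is an illustration of the technique, not Resecurity's tooling, and the name lists and domains are placeholder values.

```python
import random
import string
import uuid

FIRST_NAMES = ["alex", "sam", "jordan", "casey", "riley"]  # illustrative
DOMAINS = ["example.com", "example.org", "example.net"]    # reserved test domains

def fake_profile(rng: random.Random) -> dict:
    """Generate one synthetic consumer profile with a plausible email."""
    name = rng.choice(FIRST_NAMES)
    suffix = "".join(rng.choices(string.digits, k=4))
    return {
        "id": str(uuid.UUID(int=rng.getrandbits(128))),
        "name": name.title(),
        "email": f"{name}{suffix}@{rng.choice(DOMAINS)}",
    }

def fake_charge(rng: random.Random, customer_id: str) -> dict:
    """Generate one Stripe-style charge record tied to a fake customer."""
    # "ch_" prefix and amount-in-cents follow Stripe's documented charge shape.
    return {
        "id": "ch_" + "".join(rng.choices(string.ascii_letters + string.digits, k=24)),
        "object": "charge",
        "amount": rng.randint(100, 50_000),  # minor units (cents)
        "currency": "usd",
        "customer": customer_id,
        "status": rng.choice(["succeeded", "failed"]),
    }

rng = random.Random(42)  # fixed seed so the decoy dataset is reproducible
profiles = [fake_profile(rng) for _ in range(1000)]
charges = [fake_charge(rng, rng.choice(profiles)["id"]) for _ in range(5000)]
```

Formatting decoy records to match a real API's documented schema is what makes them pass a casual plausibility check by an attacker.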

To further enhance the data's authenticity, Resecurity reactivated a retired Mattermost collaboration platform and seeded it with outdated 2023 logs, convincing the attackers that the system was genuine.

Between December 12 and December 24, the attackers routed approximately 188,000 automated requests through residential proxy networks in an attempt to harvest the synthetic dataset. The effort ultimately backfired: repeated connection failures exposed operational security shortcomings and revealed some of the attackers' real infrastructure.
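The opsec failure described here, where a dropped residential proxy briefly exposes the operator's real address, can be hunted for in access logs. The sketch below is a simplified illustration under that assumption (not Resecurity's actual pipeline): it flags rarely-seen source IPs whose first successful request arrives immediately after a burst of connection failures.

```python
from collections import Counter

def flag_leaked_ips(events, burst=3):
    """Flag candidate 'real' IPs behind a failing proxy chain.

    events: list of (src_ip, success) tuples in time order.
    An IP is flagged if it appears only once in the log and its
    successful request directly follows `burst` or more failures,
    the moment a proxy drop is most likely to leak the true origin.
    """
    counts = Counter(ip for ip, _ in events)
    flagged, failures = [], 0
    for ip, ok in events:
        if not ok:
            failures += 1
            continue
        if failures >= burst and counts[ip] == 1:
            flagged.append(ip)
        failures = 0  # any success resets the failure run
    return flagged

log = [("10.0.0.1", True), ("10.0.0.2", False), ("10.0.0.2", False),
       ("10.0.0.2", False), ("203.0.113.7", True), ("10.0.0.1", True)]
print(flag_leaked_ips(log))  # → ['203.0.113.7']
```

Real attribution work would correlate such candidates with ASN ownership and VPN/proxy reputation data before drawing conclusions; this heuristic only narrows the search.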

A recent press release issued by Resecurity denies the breach allegation, stating that the systems cited by the threat actors were never part of its production environment but were deliberately exposed honeypot assets designed to attract and observe malicious activity at a safe distance.

According to a report published on December 24 and shared with reporters, the threat actor began probing publicly accessible services on November 20, 2025, and the company's digital forensics and incident response teams, prompted in part by external inquiries, detected the reconnaissance activity the following day.

Telemetry gathered early in the investigation revealed several indicators of the intrusion attempt, including connections from Egyptian IP addresses and traffic routed through Mullvad VPN infrastructure.

Instead of moving immediately to containment, Resecurity deployed a controlled honeypot account inside an isolated environment. The attacker was thus able to authenticate to, and interact with, systems populated entirely with false employee, customer, and payment information while Resecurity closely monitored their actions.

The synthetic datasets were designed to replicate real enterprise data structures, including over 190,000 fictitious consumer profiles and over 28,000 dummy payment transactions formatted to match Stripe's official API specifications.

Between December 12 and December 24, the attacker made extensive use of residential proxy networks to generate more than 188,000 requests in an automated data exfiltration operation.

During this period, Resecurity collected detailed telemetry on the adversary's tactics, techniques, and supporting infrastructure. Proxy disruptions caused multiple operational security failures that briefly exposed confirmed IP addresses.

As the deception continued, investigators introduced additional synthetic datasets, prompting further mistakes that narrowed the attribution and helped identify the servers orchestrating the activity.

After the intelligence was shared with law enforcement partners, a foreign agency collaborating with Resecurity issued a subpoena request.

The attackers continued to make claims on Telegram, and their data was also shared with third-party breach analysts, but these statements were found to lack any verifiable evidence of compromise of real client systems. Independent review found no evidence of a breach.

The Telegram channel used to distribute the claims was later suspended, and follow-on assertions from the ShinyHunters group were likewise determined to derive from the honeytrap environment.

The fact that the actors unknowingly accessed a decoy account and infrastructure was enough to confirm they had fallen into the honeytrap. The incident demonstrates both the growing sophistication of modern deception technology and the importance of embedding it within a broader, more resilient security framework to maximize its effectiveness.

A honeypot and synthetic data environment can be a valuable tool for observing attacker behavior. Security leaders emphasize, however, that these tools are most effective when combined with strong foundational controls: continuous vulnerability management, zero trust access models, multifactor authentication, employee awareness training, and disciplined network segmentation.

Resecurity's approach represents an evolution in defensive strategy, from a purely reactive model to one in which organizations proactively gather intelligence, disrupt adversary operations, and reduce real-world risk.

As cyber threats continue to evolve rapidly, the ability to observe, mislead, and anticipate hostile activity before meaningful damage occurs is becoming an increasingly important element of enterprise defense.

Together, these episodes offer a rare, transparent view of how modern cyberattacks unfold, and how they can be strategically neutralized before risk escalates to real data and systems.

Ultimately, the episode serves less as evidence of a successful breach than as an illustration of how threat actors increasingly rely on perception, publicity, and speed to shape narratives before the facts are known.

Defenders should take the lesson to heart: visibility and control can prevent a crisis. As adversaries combine technical capabilities with psychological tactics, it is increasingly important for organizations to be able to verify, contextualize, and counter the false claims made against them.

The Resecurity incident shows how disciplined preparation and intelligence-led defense can turn an attempted compromise into a strategic advantage, quietly, methodically, and without revealing what really matters, in an environment where trust and reputation are often the first targets.