
Iron Mountain Data Breach Only Impacted Marketing Resources


Data storage and recovery services company ‘Iron Mountain’ suffered a data breach, with the extortion gang ‘Everest’ behind the attack. Iron Mountain said the breach was limited to marketing materials. The company specializes in records management and data centers and serves more than 240,000 customers across 61 countries. 

About the breach 

The gang claimed responsibility on the dark web, saying it stole 1.4 TB of internal company documents. The threat actors used leaked login credentials to access a single folder on a file-sharing server containing marketing materials. 

Experts said the Everest actors did not install any ransomware payloads on the server, and no other systems were breached. No sensitive information was exposed; the compromised login accessed only the one folder of marketing materials. 

The Everest ransomware group has been active since 2020 and has changed its tactics over time. It initially encrypted targets' systems with ransomware but now focuses on data-theft-only corporate extortion. Everest is also known for acting as an initial access broker, selling access to compromised networks to other hackers and groups. 

History 

Over the last five years, Everest’s victim list on its leak portal has grown to hundreds of organizations. The portal is used in double-extortion attacks, in which hackers threaten to publish stolen files if victims don't pay a ransom. 

The U.S. Department of Health and Human Services also issued a warning in August 2024 that Everest was increasingly focusing on healthcare institutions nationwide. More recently, the cybercrime operation removed its website in April 2025 after it was vandalized and the statement "Don't do crime CRIME IS BAD xoxo from Prague" was posted in its place.

If the reports of sensitive data theft turn out to be accurate, Iron Mountain's clients and partners may be at risk of identity theft and targeted phishing. Iron Mountain's present evaluation, however, suggests that the danger is restricted to the disclosure of non-confidential marketing and research documents. 

What is the impact?

Such purported leaks usually cause short-term reputational issues while forensic investigations are conducted. Iron Mountain has deactivated the compromised credential as a precaution and continues to monitor its systems. 

Vendors and affected parties who used the compromised file-sharing server should be on the lookout for unusual communications. Iron Mountain's response to these claims must remain transparent throughout the investigation.

Moltbook Data Leak Reveals 1.5 Million Tokens Exposed in AI Social Platform Security Flaw

 



Moltbook has recently captured worldwide attention—not only for its unusual concept as a dystopian-style social platform centered on artificial intelligence, but also for significant security and privacy failures uncovered by researchers.

The platform presents itself as a Reddit-inspired network built primarily for AI agents. Developed using a “vibe-coded” approach—where the creator relied on AI tools to generate the code rather than writing it manually—Moltbook allows users to observe AI agents conversing with one another. These exchanges reportedly include topics such as existential reflection and discussions about escaping human control.

However, cybersecurity firm Wiz conducted an in-depth review of the platform and identified serious flaws. According to its findings, the AI agents interacting on the site were not entirely autonomous. More concerningly, the platform exposed sensitive user information affecting thousands.

In its report, Wiz said it performed a “non-intrusive security review” by navigating the platform as a regular user. Within minutes, researchers discovered a Supabase API key embedded in client-side JavaScript. The exposed key granted unauthenticated access to the production database, allowing both read and write operations across all tables.

“The exposure included 1.5 million API authentication tokens, 35,000 email addresses, and private messages between agents. We immediately disclosed the issue to the Moltbook team, who secured it within hours with our assistance, and all data accessed during the research and fix verification has been deleted,” the researchers explained.

The team clarified that the presence of a visible API key “does not automatically indicate a security failure,” noting that Supabase is “designed to operate with certain keys exposed to the client.” However, in this case, the backend configuration created a critical vulnerability.

“Supabase is a popular open-source Firebase alternative providing hosted PostgreSQL databases with REST APIs,” Wiz explained. “When properly configured with Row Level Security (RLS), the public API key is safe to expose - it acts like a project identifier. However, without RLS policies, this key grants full database access to anyone who has it. In Moltbook’s implementation, this critical line of defense was missing.”
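To make the failure mode concrete, here is a minimal Python sketch of the kind of PostgREST request an exposed anon key authorizes when RLS is missing. The project URL, key, and table name below are hypothetical placeholders, and the request is built but deliberately never sent:

```python
import urllib.request

# Hypothetical values for illustration only; not Moltbook's real endpoint or key.
SUPABASE_URL = "https://example-project.supabase.co"
ANON_KEY = "eyJhbGciOi.example.anon-key"

# Supabase fronts each hosted Postgres database with a REST API:
# every table is reachable at /rest/v1/<table>. Without Row Level
# Security policies, this client-visible key authorizes full reads
# and writes against any table.
req = urllib.request.Request(
    f"{SUPABASE_URL}/rest/v1/agents?select=*",
    headers={"apikey": ANON_KEY, "Authorization": f"Bearer {ANON_KEY}"},
)

# We only demonstrate the request shape, so it is never dispatched.
print(req.full_url)
```

With RLS enabled, the same request would return only the rows the policies permit for the anonymous role, which is why Supabase can safely ship this key to the browser in a correctly configured project.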

Beyond the data exposure, the investigation also cast doubt on Moltbook’s central claim of hosting a fully autonomous AI ecosystem. Researchers concluded that human operators were significantly involved behind the scenes. “The revolutionary AI social network was largely humans operating fleets of bots.”

For now, Moltbook’s vision of independent AI entities engaging freely online appears to remain closer to speculative fiction than technological reality.

OpenAI’s Evolving Mission: A Shift from Safety to Profit?

 

Now under scrutiny, OpenAI - known for creating ChatGPT - has quietly adjusted its guiding purpose. Its 2023 mission statement stressed developing artificial intelligence that "safely benefits humanity," free of limits imposed by profit goals. Yet a November 2025 tax filing covering the prior year shows that "safely" no longer appears. The edit arrives alongside structural shifts toward revenue-driven operations, and though small in wording, it feeds debate over the company's long-term priorities. Notably, no public explanation has been offered for dropping the term tied to caution. Whether oversight keeps pace with such changes remains uncertain. 

This shift has escaped widespread media attention, yet it matters deeply - particularly while OpenAI contends with legal actions alleging emotional manipulation, wrongful deaths, and careless design flaws. Specialists in charitable governance see the company's silence as telling, suggesting financial motives may now outweigh user well-being. What unfolds here offers insight into public oversight of influential organizations that can shape lives for better or worse. 

What began in 2015 as a nonprofit effort aimed at serving the public good slowly shifted course due to rising costs tied to building advanced AI systems. By 2019, financial demands prompted the launch of a for-profit arm under the direction of chief executive Sam Altman. That change opened doors - Microsoft alone had committed more than USD 13 billion by 2024 through repeated backing. Additional capital injections followed, nudging the organization steadily toward standard commercial frameworks. In October 2025, a formal separation took shape: one part remained a nonprofit entity named OpenAI Foundation, while operations moved into a new corporate body called OpenAI Group. Though this group operates as a public benefit corporation required to weigh wider social impacts, how those duties are interpreted and shared depends entirely on decisions made behind closed doors by its governing board. 

Not long ago, the mission changed - now it says “to ensure that artificial general intelligence benefits all of humanity.” Gone are the promises to do so safely and without limits tied to profit. Some see this edit as clear evidence of growing focus on revenue over caution. Even though safety still appears on OpenAI’s public site, cutting it from core texts feels telling. Oversight becomes harder when governance lines blur between parts of the organization. Just a fraction of ownership remains with the Foundation - around 25% of shares in the Group. That marks a sharp drop from earlier authority levels. With many leaders sitting on both boards at once, impartial review grows unlikely. Doubts surface about how much power the safety committee actually has under these conditions.

Palo Alto Softens China Hack Attribution Over Beijing Retaliation Fears

 

Palo Alto Networks is facing scrutiny after reports that it deliberately softened public attribution of a vast cyberespionage campaign that its researchers internally linked to China. According to people familiar with the matter, a draft from its Unit 42 threat intelligence team tied the prolific hacking group, dubbed “TGR-STA-1030,” directly to Beijing, but the final report described it only as a “state-aligned group that operates out of Asia.” The change has reignited debate over how commercial cybersecurity firms navigate geopolitical pressure while disclosing state-backed hacking operations. 

The underlying campaign, branded “The Shadow Campaigns,” involved years-long reconnaissance and intrusions spanning nearly every country, compromising government and critical infrastructure targets in at least 37 nations. Investigators noted telltale clues suggesting a Chinese nexus, including activity patterns aligned with the GMT+8 time zone and tasking that appeared to track diplomatic flashpoints involving Beijing, such as a focus on Czech government systems after a presidential meeting with the Dalai Lama. The operators also reportedly targeted Thailand shortly before a high‑profile state visit by the Thai king to China, hinting at classic intelligence collection around sensitive diplomatic events. 

According to sources cited in the report, Palo Alto executives ordered the language to be watered down after China moved to ban software from about 15 U.S. and Israeli cybersecurity vendors, including Palo Alto, on national security grounds. Leadership allegedly worried that an explicit attribution to China could trigger further retaliation, potentially putting staff in the country at risk and jeopardizing business with Chinese or China‑exposed customers worldwide. The episode illustrates the mounting commercial and personal-security stakes facing global security vendors that operate in markets where they may also be calling out state-backed hacking. 

The researchers who reviewed Unit 42’s technical findings say they have observed similar tradecraft and infrastructure in activity they already attribute to Chinese state-sponsored espionage. U.S. officials and independent analysts have for years warned of increasingly aggressive Chinese cyber operations aimed at burrowing into critical infrastructure and sensitive government networks, a trend they see reflected in the Shadow Campaigns’ breadth and persistence. While Beijing consistently denies involvement in hacking, the indicators described by Palo Alto and others fit a pattern Western intelligence agencies have been tracking across multiple high‑impact intrusions. 

China’s embassy in Washington responded by reiterating that Beijing opposes “all forms of cyberattacks” and arguing that attribution is a complex technical issue that should rest on “sufficient evidence rather than unfounded speculation and accusations.” The controversy around Palo Alto’s edited report now sits at the intersection of that diplomatic line and the realities of commercial risk in authoritarian markets. For the wider cybersecurity industry, it underscores a hardening dilemma: how to speak plainly about state-backed intrusions while safeguarding employees, customers, and revenue in the very countries whose hackers they may be exposing.

Fraudulent Recruiters Target Developers with Malicious Coding Tests


 

Software developers are accustomed to receiving unsolicited messages offering lucrative remote employment, so the initial approach may appear routine—a brief introduction, a well-written job description, and an invitation to complete a small technical exercise. Behind recent waves of such outreach, however, lies a sophisticated operation. 

Investigators have discovered a new variant of the long-running fake recruiter campaign linked to North Korean threat actors. The campaign now targets JavaScript and Python developers with cryptocurrency-themed assignments. 

Active since at least May 2025, the campaign has a deliberately modular design that lets operators rapidly rebuild and redeploy infrastructure when parts of it are exposed or dismantled. Several malicious packages were quietly published to the npm and PyPI ecosystems that developers rely on in routine work. 

Once executed within a developer's environment, the packages serve as downloaders that discreetly retrieve a remote access trojan. Researchers have compiled 192 packages associated with the campaign, which they have labeled Graphalgo, confirming the threat's scale and persistence. 

The operation is more than opportunistic phishing: it is a carefully orchestrated social engineering campaign embedded in what appear to be legitimate hiring processes. 

The operators impersonate recruiters from established technology companies, initiating contact through professional networking platforms and email with job descriptions, technical prerequisites, and market-rate compensation details. By cultivating trust over several exchanges, they mimic the cadence and tone of authentic recruitment cycles without relying on urgency or alarm. 

Once legitimacy is established, they deliver a coding assessment, typically a compressed archive, presented as a standard measure of the candidate's ability to solve problems or build blockchain-related applications. 

The files provided contain embedded malware designed to execute once the developer reviews or runs the project locally. Because the workflow relies on routine practices—cloning repositories, installing dependencies, and executing test scripts—the attackers bypass the conventional suspicion triggers associated with unsolicited attachments. 

The strategy demonstrates a deep understanding of developer behavior, technical interview conventions, and the implicit trust derived from structured hiring processes, researchers say. In several observed cases, execution of the malicious project components enabled unauthorized system access, resulting in credential harvesting, lateral movement, and potential exposure of proprietary source code and corporate infrastructure. 

A key component of the campaign's success is not exploiting software vulnerabilities, but rather manipulating professional norms—transforming recruitment itself into a delivery channel for compromise. ReversingLabs researchers have determined that the infrastructure supporting the campaign is designed to mirror legitimate activity within the blockchain and crypto-trading industries. 

Threat actors establish fictitious companies, post detailed job postings on professional and social platforms, such as LinkedIn, Facebook, and Reddit, and request candidates to complete technical assignments as part of the simulated interview process. The tasks are usually similar to routine coding evaluations, where candidates clone repositories, execute projects locally, resolve minor bugs, and submit improvements. 

Nevertheless, the critical objective is not the solution submitted, but the process of executing it. When a candidate runs the project, a malicious dependency sourced from trusted ecosystems such as npm and PyPI is installed, allowing the payload to be introduced indirectly through dependency resolution. 

As investigators point out, assembling such repositories is straightforward: a legitimate open-source template is modified to reference a compromised or weaponized package, after which the project appears technically sound and professionally structured. One example is a previously benign package called “bigmathutils,” which had accumulated approximately 10,000 downloads before malicious functionality was introduced in version 1.1.0. 

The package was deprecated and removed soon thereafter, a maneuver likely intended to limit forensic visibility. A more extensive campaign followed, dubbed Graphalgo for its frequent use of packages containing the term "graph" and its imitation of well-established libraries such as graphlib.
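The bigmathutils case illustrates why floating version ranges are dangerous: a dependency that was benign when vetted can turn malicious in a later point release. A minimal, illustrative check (the requirements content below is hypothetical) that flags entries not pinned to an exact version:

```python
def unpinned_requirements(lines):
    """Return requirement lines that do not pin an exact version with '=='."""
    flagged = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "==" not in line:
            flagged.append(line)
    return flagged

# A range like '>=1.0' would have silently pulled in a poisoned 1.1.0 release.
reqs = ["requests==2.32.0", "bigmathutils>=1.0", "graphlib-extras"]
print(unpinned_requirements(reqs))  # ['bigmathutils>=1.0', 'graphlib-extras']
```

Pinning alone does not help if the pinned version itself is malicious, so a check like this complements, rather than replaces, lockfile hashing and software composition analysis.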

Researchers have observed a shift to package names containing the word "big" since December 2025, although the recruitment infrastructure associated with that phase has not yet been comprehensively identified. To give their operations structural legitimacy, the actors use GitHub Organizations; the visible project files of the GitHub repositories contain no overtly malicious code.

Instead, compromise occurs when external dependencies - Graphalgo packages retrieved from npm or PyPI - are resolved, which separates the malicious logic from the repository and makes detection more challenging. By executing the projects as instructed, developers inadvertently install a remote access trojan on their systems. Analysis of the malware shows it can enumerate processes, execute arbitrary commands via command-and-control channels, exfiltrate data, and deliver secondary payloads. 

A clear financial motive associated with cryptocurrency asset theft is also evident from the fact that the RAT checks for the MetaMask browser extension. According to researchers, multiple developers were successfully compromised before the activity was discovered, demonstrating the operational effectiveness of embedding malicious logic within trusted mechanics in software development workflows.

According to a technical examination of the later infection stages, the intermediate payloads serve mainly as downloaders, retrieving the final remote access trojan from the attacker's infrastructure. Upon deployment, the RAT communicates periodically with its command-and-control server, polling it for tasking and executing the instructions given by the operator. 

The tool has a feature set consistent with mature post-exploitation tooling: file upload and download, process enumeration, and execution of arbitrary system commands. Communications with the C2 endpoint are token-protected, requiring a valid server-issued token when registering an agent or issuing a command. 

This additional authentication layer is believed to restrict unsolicited interaction with the infrastructure and reflects operational discipline previously observed in North Korean state-backed campaigns. The malware's detection of the MetaMask browser extension demonstrates a clear interest in crypto assets, aligning with financial motivations historically linked to Pyongyang-aligned groups. 

As part of their investigation, researchers identified three functionally equivalent variants of the final payload implemented in various languages. JavaScript and Python versions were distributed through malicious packages hosted on npm and PyPI, while a third variant was found independently using Visual Basic Script. 

First noted in early February 2026, the VBS sample (SHA1: dbb4031e9bb8f8821a5758a6c308932b88599f18) communicates with the same C2 infrastructure associated with the earlier "graph"-named packages. This suggests a parallel or not-yet-identified recruitment frontend is part of the broader operation. North Korean activity in public open-source ecosystems has been documented in several prior cases. 

VMConnect, an operation later attributed to the Lazarus Group, was detected by ReversingLabs in 2023. It involved malicious PyPI packages impersonating legitimate ones, linked to convincing GitHub repositories that reinforced trust before malware was delivered from attacker infrastructure.

A year later, researchers observed the VMConnect tradecraft continuing, this time incorporating fabricated coding assessments tied to fraudulent job interviews. In some instances, the actors assumed the identity of Capital One, demonstrating a willingness to appropriate established corporate identities to legitimize their outreach. Other security firms have confirmed the pattern in their own reports. 

In 2023, Phylum reported on npm malware campaigns that used token-based mechanisms and paired packages to avoid detection, while Unit 42 documented methods North Korean state-sponsored actors used to distribute multi-stage malware through developer ecosystems. Disclosures from Veracode and Socket during 2024 and 2025 identified further npm packages attributed to Lazarus-related activity, including second-stage payloads that erased forensic evidence upon execution.

In the present campaign, attribution rests on a convergence of technical and operational indicators rather than a single artifact. Fake interviews used to gain access, cryptocurrency-themed lures, multistage payload chains layered with obfuscation, and deliberately staggered releases of benign and malicious package versions all mirror previously documented Lazarus methods. 

Moreover, token-protected C2 communications and Git commit timestamps aligned with GMT+9, North Korea's time zone, provide further contextual alignment. These characteristics suggest a coordinated, state-sponsored effort rather than opportunistic cybercrime. Researchers cite the campaign's modular architecture as a significant strength: by separating recruitment personas from backend payload infrastructure, operators can rotate company names, job postings, and thematic branding without altering core delivery mechanisms.

Although a direct link has been established between "graph"-named packages and specific blockchain-based job offerings, the frontend elements for the newer "big"-named packages and the VBS RAT variant have not yet been identified in detail. 

This gap indicates that elements of the operation likely remain active and evolving. As part of its investigation, ReversingLabs compiled an extensive set of indicators of compromise linked to the Graphalgo operation, including malicious package names, hashes, domains, and C2 endpoints. These artifacts are crucial for detection and incident response, enabling organizations to identify exposures within development environments and software supply chains.

The persistence of Lazarus-related operations across npm and PyPI underscores a broader reality: open-source ecosystems remain strategically valuable target surfaces, and recruitment-themed social engineering has evolved into a highly sophisticated intrusion vector capable of bypassing conventional defenses. These findings highlight the need for development teams to reassess the implicit trust placed in external code and recruitment-driven processes.

Beyond traditional email filtering and endpoint protection, security controls should include rigorous dependency monitoring, sandboxing of third-party projects, and stricter verification of unsolicited technical assessments. 

Organizations should implement software composition analysis, enforce least-privilege development environments, and monitor for anomalous outbound connections originating from build systems or developer workstations. Awareness programs must also be updated to address recruitment-themed social engineering, which combines professional credibility with technical deception.

Threat actors are continuing to adapt their tactics to mimic legitimate industry practices, which is why defensive strategies should mature as well - treating development environments and open-source dependencies as critical security boundaries as opposed to mere conveniences.

SMS and OTP Bombing Tools Evolve into Scalable, Global Abuse Infrastructure

 

The modern authentication ecosystem operates on a fragile premise: that one-time password requests are legitimate. That assumption is increasingly being challenged. What started in the early 2020s as loosely circulated scripts designed to annoy phone numbers has transformed into a coordinated ecosystem of SMS and OTP bombing tools built for scale, automation, and persistence.

New findings from Cyble Research and Intelligence Labs (CRIL) analyzed nearly 20 actively maintained repositories and found rapid technical progression continuing through late 2025 and into 2026. These tools have moved beyond basic terminal scripts. They now include cross-platform desktop applications, Telegram-integrated automation frameworks, and high-performance systems capable of launching large-scale SMS, OTP, and voice-bombing campaigns across multiple geographies.

Researchers emphasize that the study reflects patterns within a defined research sample and should be viewed as indicative of trends rather than a full mapping of the global ecosystem. Even within that limited dataset, the scale and sophistication are significant.

SMS and OTP bombing campaigns exploit legitimate authentication endpoints. Attackers repeatedly trigger password resets, registration verifications, or login challenges, overwhelming a victim’s phone with genuine SMS messages or automated voice calls. The result ranges from harassment and disruption to more serious risks such as MFA fatigue.

Across the 20 repositories examined, researchers identified approximately 843 vulnerable API endpoints. These endpoints belonged to organizations across telecommunications, financial services, e-commerce, ride-hailing services, and government platforms. The recurring weaknesses were predictable: inadequate rate limiting, weak or poorly enforced CAPTCHA mechanisms, or both.
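The first of those weaknesses is cheap to fix. A per-number sliding-window limiter, sketched below in Python, refuses to trigger another SMS once a number has exhausted its quota (illustrative only; a production deployment would also need per-IP and per-session limits backed by shared storage such as Redis):

```python
import time
from collections import defaultdict, deque

class OtpRateLimiter:
    """Allow at most `limit` OTP sends per phone number per `window` seconds."""

    def __init__(self, limit=3, window=600):
        self.limit = limit
        self.window = window
        self.sent = defaultdict(deque)  # phone number -> timestamps of recent sends

    def allow(self, phone, now=None):
        now = time.monotonic() if now is None else now
        q = self.sent[phone]
        # Drop send timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # quota exhausted: do not trigger another SMS
        q.append(now)
        return True

limiter = OtpRateLimiter(limit=3, window=600)
results = [limiter.allow("+15550100", now=t) for t in (0, 1, 2, 3)]
print(results)  # first three sends allowed, fourth refused
```

The check happens before the SMS gateway is ever invoked, so a bombing tool hammering `/api/send-otp` burns its own requests rather than the victim's inbox or the operator's SMS budget.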

Regional targeting was uneven. Roughly 61.68% of observed endpoints—about 520—were linked to infrastructure in Iran. India accounted for 16.96%, approximately 143 endpoints. Additional activity was concentrated in Turkey, Ukraine, and parts of Eastern Europe and South Asia.

The attack lifecycle typically begins with endpoint discovery. Threat actors manually test authentication workflows, probe common API paths such as /api/send-otp or /auth/send-code, reverse-engineer mobile applications to uncover hardcoded API references, or leverage community-maintained endpoint lists shared in public repositories and forums. Once identified, these endpoints are integrated into multi-threaded attack frameworks capable of issuing simultaneous requests at scale.

The technical sophistication of SMS and OTP bombing tools has advanced considerably. Maintainers now offer versions across seven programming languages and frameworks, lowering entry barriers for individuals with limited coding expertise.

Modern toolkits commonly include:
  • Multi-threading to enable parallel API exploitation
  • Proxy rotation to bypass IP-based defenses
  • Request randomization to mimic human behavior
  • Automated retry mechanisms and failure handling
  • Real-time activity dashboards
More concerning is the widespread use of SSL bypass techniques. Approximately 75% of the repositories analyzed disable SSL certificate validation. Instead of relying on properly verified secure connections, these tools deliberately ignore certificate errors, enabling traffic interception or manipulation without interruption. SSL bypass has emerged as one of the most frequently observed evasion strategies.

In addition, 58.3% of repositories randomize User-Agent headers to evade signature-based detection systems. Around 33% exploit static or hardcoded reCAPTCHA tokens, effectively bypassing poorly implemented bot protections.

The ecosystem has also expanded beyond SMS flooding. Voice-bombing capabilities—automated call floods triggered through telephony APIs—are now integrated into several frameworks, broadening the harassment surface.

Commercialization and Data Harvesting Risks

Alongside open-source development, a commercial layer has surfaced. Browser-based SMS and OTP bombing platforms now offer simplified, point-and-click interfaces. Often marketed misleadingly as “prank tools” or “SMS testing services,” these platforms eliminate technical setup requirements.

Unlike repository-based tools that require local execution and configuration, web-based services abstract proxy management, API integration, and automation processes. This significantly increases accessibility.

However, these services frequently operate on a dual-threat model. Phone numbers entered into such platforms are often harvested. The collected data may later be reused in spam campaigns, sold as lead lists, or integrated into broader fraud operations. In effect, users risk exposing both their targets and themselves to ongoing exploitation.

Financial, Operational, and Reputational Impact

For individuals, SMS and OTP bombing can severely disrupt device usability. Effects include degraded performance, overwhelmed message inboxes, exhausted SMS storage, battery drain, and increased risk of MFA fatigue—potentially leading to accidental approval of malicious login attempts. Voice-bombing campaigns further intensify the disruption.

For organizations, the consequences extend well beyond inconvenience.

Financially, each OTP message typically costs between $0.05 and $0.20. An attack generating 10,000 messages can result in expenses ranging from $500 to $2,000. Sustained abuse of exposed endpoints can drive monthly SMS costs into five-figure sums.
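Those figures follow directly from the per-message rates; a quick check in Python:

```python
# Cost range for a 10,000-message bombing run at the report's per-OTP rates.
messages = 10_000
low_rate, high_rate = 0.05, 0.20   # USD per SMS

low_total = messages * low_rate
high_total = messages * high_rate
print(f"${low_total:,.0f} to ${high_total:,.0f}")  # $500 to $2,000
```

Sustained abuse scales linearly at the same rates, which is how monthly SMS costs on an unprotected endpoint reach five figures.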

Operationally, legitimate users may be unable to receive verification codes, customer support volumes can surge, and authentication delays can impact service reliability. In regulated industries, failure to secure authentication workflows may introduce compliance risks.

Reputational damage compounds these issues. Users quickly associate spam-like behavior with weak security controls, eroding trust and confidence in affected organizations.

As SMS and OTP bombing tools continue to evolve in sophistication and accessibility, the strain on authentication infrastructure underscores the urgent need for stronger rate limiting, adaptive bot detection, and hardened API protections across industries.
