
APT28’s Operation MacroMaze Targets Western Europe With Stealthy Macro-Based Attacks

 

A fresh wave of digital intrusions tied to the Russian state-linked group APT28 has been uncovered by S2 Grupo's LAB52 analysts. From late 2025 into early 2026, the campaign quietly unfolded across Western and Central European institutions. Dubbed Operation MacroMaze, it relies on minimalistic but precisely timed actions: rather than complex tooling, the attackers favored subtle coordination designed to bypass alarms. Each phase unfolded with restraint, avoiding anything flashy while quietly maintaining persistence behind the scenes.

The operation begins with targeted emails carrying malicious attachments. Rather than relying on a conventional macro trigger alone, the documents use a Word field code named INCLUDEPICTURE that points to a JPG hosted on webhook[.]site. When a victim opens the file, Word automatically fetches the image from that external address. Unlike a passive download, this transfer initiates an outbound connection that leaks details about the user's environment, so the attackers receive silent confirmation that the document was opened, along with technical footprints tied to the access event.
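A defensive sketch of how this technique could be hunted: a .docx file is a ZIP archive of XML parts, so the field instructions can be searched for INCLUDEPICTURE references to external URLs. The function name and regular expression below are illustrative assumptions, not LAB52 tooling.

```python
# Scan a .docx (a ZIP of XML parts) for INCLUDEPICTURE fields that
# reference external URLs -- the beaconing trick described above.
# The pattern is a simplification; real field syntax varies.
import re
import zipfile

FIELD_URL = re.compile(r'INCLUDEPICTURE\s+"?(https?://[^"\s<]+)', re.IGNORECASE)

def find_includepicture_urls(docx_path):
    """Return external URLs referenced by INCLUDEPICTURE fields in a .docx."""
    urls = []
    with zipfile.ZipFile(docx_path) as z:
        for name in z.namelist():
            # Field instructions live in the XML parts under word/
            if name.startswith("word/") and name.endswith(".xml"):
                text = z.read(name).decode("utf-8", errors="ignore")
                urls.extend(FIELD_URL.findall(text))
    return urls
```

Any hit pointing at a webhook-style hosting service in an inbound document would be a strong candidate for closer inspection.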

Analysts spotted several versions of the documents over an extended review period. Each carried small changes in macro design, though the core behavior stayed largely the same. Rather than launching a browser automatically, newer samples began mimicking keystrokes through the VBA SendKeys function, a shift likely aimed at dodging detection mechanisms while keeping the interaction less obvious to anyone opening the file.

Once macros are enabled, a Visual Basic Script pushes the attack forward. The script launches a CMD file that establishes persistence through scheduled tasks and then drops a batch routine. That routine decodes a small Base64-encoded HTML fragment and opens it in Microsoft Edge running headless, with no visible window. The fragment pulls directives from a webhook endpoint, executes those steps on the machine, gathers the output, and sends everything back, packed into an HTML document, to a second web destination.
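The chain above leaves traces in process-creation telemetry. As a hunting sketch, and not LAB52's actual detection logic, one could flag command lines that combine schtasks persistence with Edge launched headless or pushed off-screen; the indicator strings are illustrative assumptions.

```python
# Flag process-creation log lines matching the chain described above:
# schtasks-based persistence and Edge run without a visible window.
SUSPICIOUS = [
    ("schtasks", "/create"),          # timed system jobs for persistence
    ("msedge", "--headless"),         # Edge run with no display
    ("msedge", "--window-position"),  # window pushed off-screen
]

def flag_process_lines(log_lines):
    """Return lines whose command line matches any suspicious indicator pair."""
    hits = []
    for line in log_lines:
        lowered = line.lower()
        if any(a in lowered and b in lowered for a, b in SUSPICIOUS):
            hits.append(line)
    return hits
```

Substring matching like this is noisy on its own; in practice it would feed a triage queue rather than block outright, since headless Edge and schtasks both have legitimate uses.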

A different version of the batch script skips headless browsing and instead shifts the Edge window beyond the visible screen area. Any other active Edge instances are then closed, isolating the runtime environment. Once the crafted HTML document opens, its form submits on its own, sending captured command results to an attacker-controlled server without any user interaction.

LAB52 points out that the attack shows how ordinary tools - batch scripts, minimal VBS launchers, basic HTML forms - can be assembled into a working breach chain. Hidden browser windows become operational zones, letting intrusions unfold without obvious footprints. Webhook platforms, built for routine tasks, carry commands one way and stolen information the other. Instead of loud breaches, quiet integration with standard processes helps evade detection. The method thrives not on complexity, but on repurposing everyday components in stealthy ways.

What stands out in Operation MacroMaze is how basic tools, sequenced precisely, achieve advanced results. Its success lies not in complexity but in clever ordering: common programs run in quiet succession form an invisible path through defenses, and trusted system features slip past alarms. Persistence comes not from novelty but from repetition masked as routine, and across several European organizations the method survives simply by avoiding attention.

North Korean Hackers Deploy New macOS Malware in Crypto Theft Campaign

 

North Korean hackers, tracked as UNC1069 by Google's Mandiant, have deployed sophisticated new macOS malware in targeted cryptocurrency theft campaigns. These attacks leverage AI-generated deepfake videos and social engineering via Telegram to trick victims into executing malicious commands. The operation, uncovered during an investigation into a fintech company breach, highlights the evolving threat to macOS users in the crypto sector.

The malicious campaign begins with hackers compromising a legitimate Telegram account belonging to a crypto executive to build rapport with targets. They direct victims to a spoofed Calendly link leading to a fake Zoom page hosting a deepfake CEO video call. Under the pretext of troubleshooting audio problems, the attackers guide victims to run ClickFix-style commands from a webpage, with variants tailored for both macOS and Windows, initiating payload deployment.

Mandiant identified seven distinct macOS malware families in the chain, starting with AppleScript and a malicious Mach-O binary. Key tools include WAVESHAPER, a C++ backdoor for system reconnaissance and C2 communication; HYPERCALL and HIDDENCALL, Golang loaders and backdoors enabling remote access; and SILENCELIFT, a minimal backdoor disrupting Telegram on rooted systems. Newer implants like DEEPBREATH, a Swift data miner bypassing TCC protections to steal keychain, browser, and Telegram data, underscore the attack's breadth.

Additional malware such as SUGARLOADER, a persistent C++ downloader, and CHROMEPUSH, a Chromium extension stealer harvesting credentials and keystrokes, maximize data exfiltration. This unusually high volume of payloads on a single host aims at crypto theft and future social engineering using stolen identities. Detection remains low, with only SUGARLOADER and WAVESHAPER showing VirusTotal flags, emphasizing stealth.

UNC1069, active since 2018, shifted from Web3 targets in 2023 to financial services and crypto infrastructure last year. Similar tactics were seen in 2025 BlueNoroff attacks, but this campaign introduces novel tools amid North Korea's growing macOS focus. Crypto firms must prioritize endpoint detection, deepfake awareness training, and TCC hardening to counter these persistent threats.

New IT Rules Mandate Three Hour Deadline for Deepfake Takedowns


For the first time in India's digital governance landscape, the Union government has formally brought artificial intelligence-generated content, including deepfake videos, synthetic audio fabrications, and digitally altered visuals, within an enforceable regulatory framework.

Gazette Notification G.S.R. 120(E), signed by Joint Secretary Ajit Kumar, announces that the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 will come into force on February 20, 2026. Manipulated media, long treated as a fringe phenomenon, is now recognized as a mainstream threat capable of distorting public discourse, reputations, and democratic processes.

By tightening intermediary obligations and defining accountability for artificial intelligence-driven deception, the government has drawn a sharper regulatory boundary around a rapidly expanding digital grey zone. The notification is a calibrated response to the rapid proliferation of synthetic media across digital platforms.

By folding artificial intelligence-manipulated content into the Information Technology compliance architecture, the amendment clarifies intermediary liability, strengthens due diligence requirements, and narrows interpretive ambiguities that previously hampered deepfake enforcement.

Essentially, algorithmically generated impersonations, voice clones, and fabricated audiovisual material will no longer be treated as peripheral anomalies, but as regulated digital artefacts subject to legislative oversight. Under the revised rules, intermediaries must demonstrate mechanisms for detecting such content, expediting its removal, and resolving user grievances involving deceptive or impersonative synthetic material.

These requirements impose a defined compliance burden on intermediaries. The amendment also recognizes that generative artificial intelligence systems have significantly lowered the threshold for large-scale misinformation, reputational manipulation, and identity misuse. In moving from an advisory posture to an enforceable mandate, the government establishes the principle that technological innovation is not exempt from regulatory responsibility, while bringing AI-era content risks within India's formal digital compliance regime.

Beyond expanding the regulatory scope, the 2026 amendment substantially adjusts intermediaries' obligations concerning synthetically generated information and unlawful digital content. Effective February 20, 2026, the revised framework amends the 2021 Rules with an emphasis on enforceability, platform accountability, and informed user participation.

Under modified Rule 3(1)(c), intermediaries must now issue user advisories every three months, replacing the earlier annual disclosure, and explicitly state the consequences of violating platform terms of service, privacy policies, or user agreements: non-compliance may result in suspension or termination of access rights, as well as potential liability under applicable laws.

The amendment also establishes mandatory reporting obligations for cognizable offences, including those governed by the Protection of Children from Sexual Offences Act and the Bharatiya Nagarik Suraksha Sanhita, reinforcing the integration of platform governance with criminal law enforcement. The most significant procedural change, however, is the compression of response timelines.

The compliance window for takedown requests ordered by courts or law enforcement agencies shrinks sharply from the previous 36 hours to as little as three. The removal deadline for non-consensual intimate imagery falls from 24 hours to two, and grievance redress mechanisms must now resolve user complaints within seven days, effectively halving the previous deadline.

Meeting these accelerated mandates will require intermediaries to institutionalize continuous monitoring frameworks, deploy advanced automated detection systems, and establish dedicated rapid-response compliance units that operate around the clock.

The amendment replaces a comparatively lengthy procedural structure with a time-bound enforcement model, strengthening real-time coordination with law enforcement authorities and limiting the viral propagation of deepfakes and other unlawful digital content before irreversible harm occurs.

The Ministry of Electronics and Information Technology circulated an initial draft framework for stakeholder consultation in October 2025, prompted by several incidents in which artificial intelligence-generated videos and voice recordings falsely portrayed private individuals and public officials.

Around elections and other periods of social sensitivity, the proliferation of deepfake pornography, impersonation-based financial fraud, and misleading audiovisual clips has intensified regulatory scrutiny. Beyond reputational injury, concerns extend to electoral integrity, public order, and the systematic amplification of misinformation across high-velocity digital ecosystems.

The final notification refines the draft, narrowing its definitional breadth while sharpening enforceability. The consultation version had characterized synthetically generated information broadly, covering any content artificially or algorithmically created, modified, or altered.

The notified rules instead emphasize material that misrepresents people, documents, or real-world events in a manner likely to mislead. This calibrated shift reduces interpretive overreach and aligns the compliance trigger with demonstrable harm and deceptive intent.

The compliance architecture has also been substantially strengthened. Intermediaries must disable access to flagged content within three hours of receiving a lawful government or court directive, reinforcing the accelerated enforcement regime. The rules further impose affirmative technical obligations on intermediaries that facilitate the creation or distribution of synthetic content.

The shortened grievance timeline underscores a broader policy focus on real-time remediation. Platforms must employ reasonable technological safeguards to prevent the distribution of unlawful material, including child sexual abuse content, non-consensual intimate images, falsified electronic records, material relating to prohibited weapons and explosives, and depictions that mislead the public.

Where synthetic content is not illegal per se, the rules require intermediaries to apply clear labels and embed durable provenance markers - such as permanent metadata or unique identifiers - that end users cannot remove or suppress.
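The notification does not prescribe a marker format, but one minimal way to derive a durable, unique identifier is a content hash bound to a machine-readable label. The record layout below is purely an illustrative assumption, not anything specified in the rules.

```python
# Minimal sketch of a provenance record for a synthetic media file:
# a stable content identifier plus a synthetic-content declaration.
# The field names are illustrative, not a prescribed format.
import hashlib
import json

def provenance_record(media_bytes, generator, declared_synthetic=True):
    """Derive a stable content identifier and wrap it with a label."""
    content_id = hashlib.sha256(media_bytes).hexdigest()
    return json.dumps({
        "content_id": content_id,          # unique identifier for the bytes
        "generator": generator,            # tool that produced the media
        "synthetic": declared_synthetic,   # user declaration per the rules
    }, sort_keys=True)
```

A hash-based identifier survives renaming and re-hosting of the file but not re-encoding; production provenance schemes embed signed metadata directly in the media container for that reason.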

Significant social media intermediaries must additionally require users to declare whether uploaded material is synthetically generated, deploy technical verification mechanisms to check those declarations, and prominently label confirmed synthetic content before publication.

Under the notification, an intermediary that allows, promotes, or fails to act upon prohibited synthetic content is deemed to have failed the statutory due diligence standard. Platforms must also periodically inform users of the potential criminal liability, account suspension, and content removal consequences of violations.

Misuse of synthetic media may attract penalties under several statutes, including the Bharatiya Nyaya Sanhita, the Protection of Children from Sexual Offences Act, and the Representation of the People Act.

Issued under Section 87 of the Information Technology Act, the amendment also formally updates statutory references, replacing provisions of the Indian Penal Code with those of the Bharatiya Nyaya Sanhita, 2023, and harmonising India's digital regulatory framework with the restructured criminal law system.

Together, the amendments represent a broader recalibration of India's digital regulatory framework in response to the structural risks posed by generative technologies. The framework provides a more concise compliance roadmap and sharper enforcement triggers, but its effectiveness will ultimately depend on consistent implementation, technical readiness within intermediary ecosystems, and coordination between regulators, law enforcement agencies, and platform operators.

Legal observers note that consistent investment in forensic capability, algorithmic transparency, and institutional capacity will be essential to avoid both regulatory overreach and under-enforcement as policy intent translates into operational practice.

The amendment signals that intermediaries must treat synthetic media governance as core platform architecture rather than an adjunct moderation function. It also reinforces the parallel responsibility of users and digital stakeholders to exercise discernment when consuming and disseminating artificial intelligence-generated content.

As synthetic content technologies continue to evolve, the framework's durability will likely depend not only on the statutory text but on adaptive oversight, technological innovation, and a digital citizenry prepared to navigate an increasingly mediated information environment.

How Poorly Secured Endpoints Are Expanding Risk in LLM Infrastructure

 


As organizations build and host their own Large Language Models, they also create a network of supporting services and APIs to keep those systems running. The growing danger does not usually originate from the model’s intelligence itself, but from the technical framework that delivers, connects, and automates it. Every new interface added to support an LLM expands the number of possible entry points into the system. During rapid rollouts, these interfaces are often trusted automatically and reviewed later, if at all.

When these access points are given excessive permissions or rely on long-lasting credentials, they can open doors far wider than intended. A single poorly secured endpoint can provide access to internal systems, service identities, and sensitive data tied to LLM operations. For that reason, managing privileges at the endpoint level is becoming a central security requirement.

In practical terms, an endpoint is any digital doorway that allows a user, application, or service to communicate with a model. This includes APIs that receive prompts and return generated responses, administrative panels used to update or configure models, monitoring dashboards, and integration points that allow the model to interact with databases or external tools. Together, these interfaces determine how deeply the LLM is embedded within the broader technology ecosystem.

A major issue is that many of these interfaces are designed for experimentation or early deployment phases. They prioritize speed and functionality over hardened security controls. Over time, temporary testing configurations remain active, monitoring weakens, and permissions accumulate. In many deployments, the endpoint effectively becomes the security perimeter. Its authentication methods, secret management practices, and assigned privileges ultimately decide how far an intruder could move.
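When the endpoint is the perimeter, the authentication check at that boundary is the decision that matters. As a sketch only, with key handling and token format as assumptions, an LLM API endpoint might validate an HMAC-signed bearer token before accepting a prompt:

```python
# Illustrative perimeter check for an LLM endpoint: tokens are the
# client id plus an HMAC signature, verified in constant time.
# The signing key would come from a secret manager, not source code.
import hashlib
import hmac

SIGNING_KEY = b"demo-signing-key"  # assumption: injected at deploy time

def issue_token(client_id):
    """Token = client_id.signature; the signature binds the id to our key."""
    sig = hmac.new(SIGNING_KEY, client_id.encode(), hashlib.sha256).hexdigest()
    return f"{client_id}.{sig}"

def verify_token(token):
    """Accept only tokens signed with our key; reject malformed input."""
    try:
        client_id, sig = token.rsplit(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SIGNING_KEY, client_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

The constant-time comparison matters because a naive string comparison leaks timing information an attacker can use to forge signatures byte by byte.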

Exposure rarely stems from a single catastrophic mistake. Instead, it develops gradually. Internal APIs may be made publicly reachable to simplify integration and left unprotected. Access tokens or API keys may be embedded in code and never rotated. Teams may assume that internal networks are inherently secure, overlooking the fact that VPN access, misconfigurations, or compromised accounts can bridge that boundary. Cloud settings, including improperly configured gateways or firewall rules, can also unintentionally expose services to the internet.
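One of the drift patterns above, credentials embedded in code, is cheap to check for. The patterns below are illustrative assumptions that will miss provider-specific key formats; real secret scanners carry far larger rule sets.

```python
# Quick-scan sketch for hardcoded credentials in source text.
import re

SECRET_PATTERNS = [
    # e.g. api_key = "abcd1234..." / TOKEN='...'; 16+ chars of key material
    re.compile(r'(?i)(api[_-]?key|secret|token)\s*=\s*["\'][A-Za-z0-9_\-]{16,}["\']'),
]

def scan_for_secrets(text):
    """Return (line_number, line) pairs that look like hardcoded credentials."""
    findings = []
    for i, line in enumerate(text.splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append((i, line.strip()))
    return findings
```

A finding is only the start: the remediation that closes the exposure is rotating the leaked credential, not merely deleting the line.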

These risks are amplified in LLM ecosystems because models are typically connected to multiple internal systems. If an attacker compromises one endpoint, they may gain indirect access to databases, automation tools, and cloud resources that already trust the model’s credentials. Unlike traditional APIs with narrow functions, LLM interfaces often support broad, automated workflows. This enables lateral movement at scale.

Threat actors can exploit prompts to extract confidential information the model can access. They may also misuse tool integrations to modify internal resources or trigger privileged operations. Even limited access can be dangerous if attackers manipulate input data in ways that influence the model to perform harmful actions indirectly.

Non-human identities intensify this exposure. Service accounts, machine credentials, and API keys allow models to function continuously without human intervention. For convenience, these identities are often granted broad permissions and rarely audited. If an endpoint tied to such credentials is breached, the attacker inherits trusted system-level access. Problems such as scattered secrets across configuration files, long-lived static credentials, excessive permissions, and a growing number of unmanaged service accounts increase both complexity and risk.

Mitigating these threats requires assuming that some endpoints will eventually be reached. Security strategies should focus on limiting impact. Access should follow strict least-privilege principles for both people and systems. Elevated rights should be granted only temporarily and revoked automatically. Sensitive sessions should be logged and reviewed. Credentials must be rotated regularly, and long-standing static secrets should be eliminated wherever possible.
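The "granted only temporarily, revoked automatically" principle can be sketched as an elevation grant that carries its own expiry. The function names and the in-memory store are illustrative assumptions; a real system would persist grants and audit every check.

```python
# Time-boxed privilege elevation: a grant expires on its own, and an
# expired grant is revoked the first time it is checked.
import time

GRANTS = {}  # identity -> (scope, expires_at); illustrative in-memory store

def grant_elevation(identity, scope, ttl_seconds):
    """Record a temporary elevated-privilege grant with an expiry."""
    GRANTS[identity] = (scope, time.time() + ttl_seconds)

def has_elevation(identity, scope):
    """Valid only if the grant exists, matches the scope, and is unexpired."""
    entry = GRANTS.get(identity)
    if entry is None:
        return False
    granted_scope, expires_at = entry
    if time.time() >= expires_at:
        del GRANTS[identity]  # automatic revocation on expiry
        return False
    return granted_scope == scope
```

Because the check itself enforces expiry, no background revocation job is needed for correctness, though one still helps keep audit logs tidy.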

Because LLM systems operate autonomously and at scale, traditional access models are no longer sufficient. Strong endpoint privilege governance, continuous verification, and reduced standing access are essential to protecting AI-driven infrastructure from escalating compromise.

Bithumb Error Sends 620,000 Bitcoins to Users, Triggers Regulatory Scrutiny in South Korea

 

A huge glitch at Bithumb, South Korea’s second-biggest digital currency platform, triggered chaos when users suddenly found themselves holding vast quantities of bitcoin due to a flawed promotion. Instead of issuing minor monetary rewards, a technical oversight allowed 620,000 bitcoins to be wrongly allocated. Regulators quickly stepped in, launching investigations as the scale of the incident became clear. Recovery efforts are now underway for assets exceeding $40 billion, stemming directly from the mishap. Legal pressure mounts on the firm while authorities assess compliance failures. What began as a routine marketing effort has turned into one of the largest operational blunders in crypto trading history.  

On 6 February, the mistake unfolded amid a promotion meant to award 695 qualifying users a total of 620,000 Korean won, about $423. An employee entered bitcoin instead of local currency, dramatically shifting the reward value: what should have been small bonuses became 620,000 bitcoins, worth around $42 billion at the time. Nearly half of those who qualified, 249 people, opened their reward boxes before anyone noticed, ending up with massive deposits that exceeded the entire crypto balance held by the platform.
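A back-of-envelope check of the scale, using only the figures reported above; the implied exchange rates are derived from those figures, not independently sourced.

```python
# Magnitude of the mixup, from the article's own numbers.
intended_krw = 620_000          # intended reward pool, in won
intended_usd = 423              # ~ value of 620,000 won
erroneous_btc = 620_000         # bitcoins credited instead
erroneous_usd = 42_000_000_000  # ~ value at the time

implied_btc_price = erroneous_usd / erroneous_btc  # ~ $67,700 per BTC
error_multiple = erroneous_usd / intended_usd      # ~ 99 million times too large
```

The unit swap alone, won to bitcoin on the same numeral, inflated the payout by roughly eight orders of magnitude.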

Bithumb said it fixed many incorrect deposits through adjustments in its internal records. Still, regulators noted approximately 13 billion won - about $9 million - was unaccounted for, lost when certain users moved or cashed out funds prior to detection. During the half-hour span before freezing actions began, 86 individuals allegedly offloaded close to 1,788 bitcoins, sparking temporary shifts in pricing across the site's trading system. 

Criticism came fast once news broke. "Catastrophic" was how Lee Chan-jin, head of South Korea's Financial Supervisory Service, described the situation facing users who offloaded their bitcoin: with prices climbing afterward, those forced to return their holdings might now owe more than they received. According to Lee, this was not a one-off error but evidence of deeper flaws in how crypto platforms handle internal ledgers and transaction safeguards.

Disagreement persists among legal professionals regarding possible criminal consequences for users who withdrew accidentally deposited bitcoin. Though crypto assets were central to a 2021 South Korean high court decision, their exclusion from the definition of "property" in penal statutes muddies enforcement paths. Instead of pursuing drawn-out lawsuits, Bithumb initiated private talks with around eighty individuals who converted the digital value into local currency, asking repayment in won amounts. 

Now probing deeper, the Financial Supervisory Service has opened a comprehensive review; meanwhile, lawmakers in Seoul will hold an urgent session on 11 February to press officials and platform leaders for answers. Speaking publicly, Bithumb admitted changes are underway - its payout systems being rebuilt, oversight tightened - even though they insist no cyberattack occurred nor did outside actors gain access.

Millions of Chrome, Safari, and Edge Users at Risk from New Browser Exploit

 

A critical security vulnerability is threatening millions of users of popular web browsers including Google Chrome, Apple Safari, and Microsoft Edge. Security researchers have uncovered a sophisticated exploit that allows attackers to hijack sessions and steal sensitive data directly from affected browsers. The flaw, actively exploited in the wild, bypasses traditional defenses and targets core rendering engines shared across these platforms.

This vulnerability stems from a zero-day flaw in the WebKit and Chromium rendering engines, which power Safari and large portions of Chrome and Edge respectively. Attackers can craft malicious web pages that trigger the bug when visited, leading to remote code execution without user interaction. Cybersecurity firm Glasgowlive reports that the issue has already impacted over 2.5 billion devices worldwide, urging immediate patching. Early indicators show campaigns originating from state-sponsored actors aiming at high-value targets like journalists and activists.

Browser vendors have responded swiftly with emergency updates. Google rolled out Chrome 131.0.6778.100 for Windows, Mac, and Linux, while Apple pushed Safari 18.2 via macOS and iOS updates. Microsoft Edge users should navigate to Settings > Help and Feedback > About Microsoft Edge for auto-updates. Failing to apply these patches leaves systems exposed to drive-by downloads and persistent malware infections. Experts recommend enabling automatic updates and avoiding suspicious links during this period.
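When verifying that a patch has landed, dotted version strings must be compared numerically, since plain string comparison gets cases like "131.0.9" versus "131.0.10" wrong. A small sketch, using the Chrome patch version cited above as the minimum:

```python
# Compare dotted browser version strings component by component.
def version_tuple(v):
    """Split '131.0.6778.100' into (131, 0, 6778, 100) for comparison."""
    return tuple(int(part) for part in v.split("."))

def is_patched(installed, minimum):
    """True if the installed version is at or above the patched release."""
    return version_tuple(installed) >= version_tuple(minimum)
```

The same check applies to any of the vendor updates listed; only the minimum version string changes.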

The incident highlights ongoing risks in browser monoculture, where Chromium-based browsers dominate 80% of the market. Chrome alone commands 66% of global web traffic, amplifying the blast radius of such flaws. Privacy advocates note that while features like sandboxing mitigate some damage, shared codebases create systemic weaknesses. Users of older versions, especially on enterprise networks, face heightened threats from phishing sites mimicking legitimate updates.

To stay safe, reboot devices post-update, clear browser caches, and deploy endpoint detection tools. Security firms advise scanning for indicators of compromise, such as unusual network activity. This incident underscores the need for diversified browser usage and vigilant patch management in 2026's threat landscape. As cyber threats evolve, proactive updates remain the first line of defense for billions online.
