
Why Cloud Outages Turn Identity Systems into a Critical Business Risk

 

Recent large-scale cloud outages have become increasingly visible. Incidents involving major providers like AWS, Azure, and Cloudflare have disrupted vast portions of the internet, knocking critical websites and services offline. Because so many digital platforms are interconnected, these failures often cascade, stopping applications and workflows that organizations depend on daily.

For everyday users, the impact usually feels like a temporary annoyance—difficulty ordering food, streaming shows, or accessing online tools. For enterprises, the consequences are far more damaging. If an airline’s reservation platform goes down, every minute of downtime can mean lost bookings, revenue leakage, reputational harm, and operational chaos.

These events make it clear that cloud failures go well beyond compute and networking issues. One of the most vulnerable—and business-critical—areas affected is identity. When authentication or authorization systems fail, the problem is no longer simple downtime; it becomes a fundamental operational and security crisis.

Cloud Infrastructure as a Shared Failure Point

Cloud providers are not identity platforms themselves, but modern identity architectures rely heavily on cloud-hosted infrastructure and shared services. Even if an identity provider remains technically operational, disruptions elsewhere in the stack can break identity flows entirely.
Organizations commonly depend on the cloud for essential identity components such as:
  • Databases storing directory and user attribute information
  • Policy and authorization data stores
  • Load balancers, control planes, and DNS services
Because these elements are shared, a failure in any one of them can completely block authentication or authorization—even when the identity service appears healthy. This creates a concealed single point of failure that many teams only become aware of during an outage.
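A back-of-the-envelope calculation makes this risk concrete: when a login flow depends serially on several shared services, its best-case availability is roughly the product of theirs. The sketch below uses hypothetical component names and availability figures, not any vendor's actual SLAs.

```python
# Hypothetical, illustrative figures -- not vendor SLAs. A login that
# serially depends on several shared services is only as available as
# the *product* of their availabilities.

dependencies = {
    "directory_database": 0.9995,
    "policy_store":       0.9995,
    "dns":                0.9999,
    "load_balancer":      0.9999,
    "token_service":      0.999,
}

composite = 1.0
for name, availability in dependencies.items():
    composite *= availability

downtime_minutes = (1 - composite) * 365 * 24 * 60
print(f"Composite availability: {composite:.4%}")        # ~99.78%
print(f"Expected downtime: ~{downtime_minutes:.0f} min/year")  # ~1155 min (~19 h)
```

Five components that each look highly available still combine into roughly nineteen hours of expected login downtime per year, which is why the weakest shared dependency, not the identity service itself, sets the ceiling.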

Identity as the Universal Gatekeeper

Authentication and authorization are not limited to login screens. They continuously control access for users, applications, APIs, and services. Modern Zero Trust architectures are built on the principle of “never trust, always verify,” and that verification is entirely dependent on identity system availability.

This applies equally to people and machines. Applications authenticate repeatedly, APIs validate every request, and services constantly request tokens to communicate with each other. When identity systems are unavailable, entire digital ecosystems grind to a halt.

As a result, identity-related outages pose a direct threat to business continuity. They warrant the highest level of incident response, supported by proactive monitoring across all dependent systems. Treating identity downtime as a secondary technical issue significantly underestimates its business impact.

Modern authentication goes far beyond checking a username and password—or even a passkey, as passwordless adoption grows. A single login attempt often initiates a sophisticated chain of backend operations.

Typically, identity systems must:
  • Retrieve user attributes from directories or databases
  • Maintain session state
  • Generate access tokens with specific scopes, claims, and attributes
  • Enforce fine-grained authorization through policy engines
Authorization decisions may occur both when tokens are issued and later, when APIs are accessed. In many architectures, APIs must also authenticate themselves before calling downstream services.

Each step relies on underlying infrastructure components such as datastores, policy engines, token services, and external integrations. If any part of this chain fails, access can be completely blocked—impacting users, applications, and critical business processes.
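To illustrate how tightly these steps chain together, here is a minimal sketch. Every function in it is a hypothetical stand-in for a real directory, session store, or policy engine; the point is simply that each step is a hard dependency, and an exception at any stage blocks access outright.

```python
# Hypothetical stand-ins for real components; in production each call is a
# hard dependency on a datastore, session store, or policy engine.
import uuid

def fetch_attributes(user):                # stand-in: directory/database lookup
    return {"dept": "finance", "role": "analyst"}

def create_session(user):                  # stand-in: session-state store
    return str(uuid.uuid4())

def evaluate_policy(user, attrs):          # stand-in: fine-grained policy engine
    return ["read:reports"] if attrs["dept"] == "finance" else None

def issue_token(user):
    attrs = fetch_attributes(user)         # step 1: retrieve user attributes
    sid = create_session(user)             # step 2: maintain session state
    scopes = evaluate_policy(user, attrs)  # step 3: authorization decision
    if scopes is None:
        raise PermissionError("policy denied")
    # step 4: token with scopes, claims, and attributes (unsigned here)
    return {"sub": user, "sid": sid, "scope": scopes, "claims": attrs}

print(issue_token("alice"))
```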

Why High Availability Alone Falls Short

High availability is essential, but on its own it is often insufficient for identity systems. Traditional designs usually rely on regional redundancy, with a primary deployment backed up by a secondary region. When one region fails, traffic shifts to the other.

This strategy offers limited protection when outages affect shared or global services. If multiple regions depend on the same control plane, DNS service, or managed database, a regional failover does little to improve resilience. In such cases, both primary and backup systems can fail simultaneously.

The result is an identity architecture that looks robust in theory but collapses during widespread cloud or platform-level disruptions.

True resilience requires intentional design. For identity systems, this may involve reducing reliance on a single provider or failure domain through multi-cloud deployments or carefully managed on-premises options that remain reachable during cloud degradation.

Planning for partial failure is equally important. Completely denying access during outages causes maximum business disruption. Allowing constrained access—using cached attributes, precomputed authorization decisions, or limited functionality—can significantly reduce operational and reputational damage.

Not all identity data demands identical availability guarantees. Some attributes or authorization sources may tolerate lower resilience, as long as those decisions are made deliberately and aligned with business risk.

Ultimately, identity platforms must be built to fail gracefully. Infrastructure outages are unavoidable; access control should degrade in a controlled, predictable manner rather than collapse entirely.
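As a rough illustration of failing constrained rather than closed, the sketch below falls back to a cached, deliberately narrowed authorization decision when the live policy engine is unreachable. The cache layout, the 15-minute TTL, and the read-only narrowing are all assumptions made for illustration.

```python
import time

CACHE_TTL = 15 * 60            # assumption: accept decisions up to 15 min old
decision_cache = {}            # user -> (timestamp, granted scopes)

def authorize(user, policy_engine_call):
    try:
        scopes = policy_engine_call(user)             # live, authoritative path
        decision_cache[user] = (time.time(), scopes)
        return scopes
    except ConnectionError:
        cached = decision_cache.get(user)
        if cached and time.time() - cached[0] < CACHE_TTL:
            # degrade: serve the cached decision, narrowed to read-only scopes
            return [s for s in cached[1] if s.startswith("read:")]
        raise                                         # no safe fallback: deny

# demo: the engine is down, but a recent cached decision still allows reads
def downed_engine(user):
    raise ConnectionError("policy engine unreachable")

decision_cache["alice"] = (time.time(), ["read:reports", "write:reports"])
print(authorize("alice", downed_engine))              # ['read:reports']
```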

Orchid Security Debuts Continuous Identity Observability Platform


 

Over the past two decades, organizations have steadily expanded their identity security portfolios, layering identity and access management (IAM), identity governance and administration (IGA), and privileged access management (PAM) to deploy access control at scale. Yet despite this sustained investment, identity-driven breaches continue to grow in both frequency and impact.

The argument is that these failures stem not from weak policy design or inadequate standards, but from a widening gap between how identity is governed on paper and how access actually works in practice.

Enterprise environments today contain a large number of unmanaged identity artifacts, including local system accounts, legacy authentication mechanisms, orphaned service principals, embedded API keys, and application-specific entitlements, that centralized controls cannot see or manage.

These artifacts constitute Identity Dark Matter, an attack surface that adversaries increasingly exploit to bypass SSO, sidestep MFA, move laterally across systems, and escalate privileges without triggering conventional identity alerts. Seen this way, Identity Dark Matter is not merely a risk category but a structural defect in existing identity architectures as a whole.

The new identity control plane proposes a method of reconciling intended access policies with effective, real-world authorization by correlating runtime telemetry with contextual identity signals and automating remediation across managed and unmanaged identities. 

Amidst this shift toward identity-centered security models, Orchid Security has been formally recognized as a Cool Vendor by Gartner in its 2025 report on Cool Vendors in Identity-First Security, highlighting its growing significance in redefining enterprise identity infrastructure.

Orchid has been recognized as one of a select group of vendors that address real-time security exposure and threat mitigation in increasingly perimeterless environments while maintaining compatibility with existing IAM infrastructures. As cloud adoption and API-driven architectures increase, network-bound security models become obsolete, elevating identity as the primary control plane for modern security architectures, according to Gartner's analysis.

Orchid positions itself as an identity infrastructure innovator, using AI and machine learning analytics to continuously correlate identity data, surface coverage gaps often overlooked during traditional IAM deployments and onboarding, and provide comprehensive observability across application ecosystems.

Moreover, Gartner reports that Orchid's emphasis on orchestration and fabric-level visibility enables enterprises to enhance their security posture while supporting automated operations, positioning the platform as a solution capable of managing identity risk and compliance across diverse, evolving enterprise environments with precision and scale.

Traditional identity platforms are designed mainly around static configuration data and predefined policy models, so their effectiveness is usually limited to well-governed, human-centric identities.

These approaches fall short in modern enterprise environments, where custom applications proliferate, legacy authentication mechanisms persist, credentials are embedded, non-human identities are prevalent, and access paths bypass centralized identity providers. As a consequence, security teams are often forced into reactive analysis, reconstructing identity behavior retrospectively during audits or post-incident investigations.

This reactive model is inherently unsustainable at scale, as it relies on inference rather than continuous visibility into how identities are used within applications and services. To address the structural gap, Orchid Security has developed an identity observability model aligned with real-world security operations, built around four stages: discovery, analysis, orchestration, and auditing.

The platform begins by directly observing how identities are used inside applications. Orchid's lightweight instrumentation examines both managed and unmanaged environments, covering authentication methods, authorization logic, and credential handling. The goal is a comprehensive, runtime-driven inventory of applications, services, identity types, authentication flows, and embedded credentials that establishes an accurate baseline of identity activity.

By correlating identities, applications, and access paths, Orchid analyzes identity behavior in context, identifying material risk indicators such as shared or hardcoded credentials, orphaned service accounts, privileged access outside identity and access management controls, and drift between desired policy and effective access.
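The drift idea can be illustrated with a toy comparison of intended entitlements against observed runtime access. This is not Orchid's implementation, just a sketch of the concept using made-up identities and resources.

```python
# Illustrative only -- not Orchid's implementation. "Drift" here means the
# gap between access that policy intends and access observed at runtime.

intended = {                      # what the IAM/IGA policy says
    "svc-billing": {"db:billing"},
    "alice":       {"app:crm", "db:billing"},
}

observed = {                      # what runtime telemetry actually shows
    "svc-billing": {"db:billing", "db:hr"},       # over-reach
    "alice":       {"app:crm"},
    "svc-legacy":  {"db:hr"},                     # unmanaged identity
}

for identity, used in observed.items():
    granted = intended.get(identity)
    if granted is None:
        print(f"{identity}: unmanaged identity (not in policy at all)")
    elif used - granted:
        print(f"{identity}: drift -- uses {sorted(used - granted)} beyond policy")
```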


Identity-centric defense has evolved in alignment with Gartner's assessment that the accelerated adoption of digital transformation, cloud computing, remote work, and API-driven architectures has fundamentally undermined perimeter-based security, requiring identity-first security as an integral part of enterprise protection.

With the advent of artificial intelligence and large language models within this emerging paradigm for identity and access management, a more dynamic and context-aware approach is now possible, capable of identifying systemic blind spots, latent exposure, and misconfigurations that are normally missed by static, rule-based systems. This technology enables stronger security outcomes while reducing operational friction through automation by continuously analyzing identity flows and enforcing policy according to real-time context. 

The orchestration-centric identity infrastructure offered by Orchid Security reflects this shift by extending beyond traditional IAM limitations associated with manual application onboarding and partial visibility of managed systems that have already been deployed. 

By enabling continuous evaluation of identity behavior, contextual gap analysis, and risk-based remediation enforced through automated orchestration, the platform provides a more comprehensive approach to identity governance than static roles and fragmented insights. In addition to providing consistent governance across distributed environments, Orchid aligns identity operations with business objectives as well as security objectives by embedding observability and intelligence directly into the identity fabric. 


Through continuous discovery and evaluation of enterprise applications at runtime, the platform supports evidence-driven prioritization by analyzing authentication and authorization paths and comparing them against regulatory requirements and established cybersecurity frameworks.

Remediation augments native controls and integrates with existing identity and access management systems, often without requiring custom development. Through this approach, Orchid helps organizations address the growing presence of unmanaged identity exposure, commonly known as identity dark matter.

Orchid has already deployed its platform across Fortune 500 and Global 2000 enterprises, where it is credited with reducing systemic risk, improving compliance posture, and lowering operational overhead, supporting its role in operationalizing identity-first security. Adopters report measurable improvements in governance and accountability beyond incremental security gains.

By providing a detailed understanding of application-level identity usage, the platform reduces exposure caused by unmanaged access paths and helps security teams prepare for audits more quickly and confidently. Identity risk is no longer inferred or scattered across fragmented tools, but clearly attributed and supported by verifiable, runtime-derived evidence.

In complex enterprise environments, organizations must shift from assumption-driven decision-making to evidence-based control, reinforcing the core objective of identity-first security. As identity fragments beyond traditional control points and centralized directories, continuous, application-aware governance becomes increasingly important.

Providing persistent identity observability across modern application ecosystems, Orchid Security addresses this challenge by enabling organizations to discover identity usage, assess risk in context, coordinate remediation, and maintain audit-ready evidence through continuous, application-aware governance. 

The operating model reflects how contemporary enterprise environments actually function: access is dynamic, distributed, and deeply embedded within application logic. Drawing on its leadership's experience in advanced AI research and large-scale security engineering, the company has designed its identity infrastructure with practical knowledge from companies such as Google DeepMind and Square (now part of Block).

The rapid adoption of artificial intelligence across enterprise and adversarial domains has also raised the stakes for identity security, as threat actors increasingly automate reconnaissance, exploitation, and lateral movement. Orchid offers its platform as an identity control plane, converging managed and unmanaged identities into an authoritative view derived directly from applications.

The benefits of this approach include not only strengthening enterprise security postures, but also creating new opportunities for global systems integrators and managed service providers. As a result, they are able to provide additional value-added services such as continuous application security assessment, identity governance, audit readiness, incident response, and identity risk management. 

Using Orchid, organizations can accelerate the onboarding of applications, prioritize remediation according to observed risk, and monitor compliance continuously, thereby enabling the development of a new level of identity governance that minimizes organizational risk, lowers operating costs, and allows for consistent control of both human and machine identities in increasingly AI-driven organizations.

Italy Steps Up Cyber Defenses as Milano–Cortina Winter Olympics Approach

 



Inside a government building in Rome, located opposite the ancient Aurelian Walls, dozens of cybersecurity professionals have been carrying out continuous monitoring operations for nearly a year. Their work focuses on tracking suspicious discussions and coordination activity taking place across hidden corners of the internet, including underground criminal forums and dark web marketplaces. This monitoring effort forms a core part of Italy’s preparations to protect the Milano–Cortina Winter Olympic Games from cyberattacks.

The responsibility for securing the digital environment of the Games lies with Italy’s National Cybersecurity Agency, an institution formed in 2021 to centralize the country’s cyber defense strategy. The upcoming Winter Olympics represent the agency’s first large-scale international operational test. Officials view the event as a likely target for cyber threats because the Olympics attract intense global attention. Such visibility can draw a wide spectrum of malicious actors, ranging from small-scale cybercriminal groups seeking disruption or financial gain to advanced threat groups believed to have links with state interests. These actors may attempt to use the event as a platform to make political statements, associate attacks with ideological causes, or exploit broader geopolitical tensions.

The Milano–Cortina Winter Games will run from February 6 to February 22 and will be hosted across multiple Alpine regions for the first time in Olympic history. This multi-location format introduces additional security and coordination challenges. Each venue relies on interconnected digital systems, including communications networks, event management platforms, broadcasting infrastructure, and logistics systems. Securing a geographically distributed digital environment exponentially increases the complexity of monitoring, response coordination, and incident containment.

Officials estimate that the Games will reach approximately three billion viewers globally, alongside around 1.5 million ticket-holding spectators on site. This scale creates a vast digital footprint. High-visibility services, such as live streaming platforms, official event websites, and ticket purchasing systems, are considered particularly attractive targets. Disrupting these services can generate widespread media attention, cause public confusion, and undermine confidence in the organizers’ ability to safeguard critical digital operations.

Italy’s planning has been shaped by recent Olympic experience. During the 2024 Paris Summer Olympics, authorities recorded more than 140 cyber incidents. In 22 cases, attackers managed to gain access to information systems. While none of these incidents disrupted the competitions themselves, the sheer volume of hostile activity demonstrated the persistent pressure faced by host nations. On the day of the opening ceremony in Paris, France’s TGV high-speed rail network was also targeted in coordinated physical sabotage attacks involving explosive devices. This incident illustrated how large global events can attract both cyber threats and physical security risks at the same time.

Italian cybersecurity officials anticipate comparable levels of hostile activity during the Milano–Cortina Games, with an additional layer of complexity introduced by artificial intelligence. AI tools can be used by attackers to automate technical tasks, enhance reconnaissance, and support more convincing phishing and impersonation campaigns. These techniques can increase the speed and scale of cyber operations while making malicious activity harder to detect. Although authorities currently report no specific, elevated threat level, they acknowledge that the overall risk environment is becoming more complex due to the growing availability of AI-assisted tools.

The National Cybersecurity Agency’s defensive approach emphasizes early detection rather than reactive response. Analysts continuously monitor open websites, underground criminal communities, and social media channels to identify emerging threat patterns before they develop into direct intrusion attempts. This method is designed to provide early warning, allowing technical teams to strengthen defenses before attackers move from planning to execution.

Operational coordination will involve multiple teams. Around 20 specialists from the agency’s operational staff will focus exclusively on Olympic-related cyber intelligence from the headquarters in Rome. An additional 10 senior experts will be deployed to Milan starting on February 4 to support the Technology Operations Centre, which oversees the digital systems supporting the Games. These government teams will operate alongside nearly 100 specialists from Deloitte and approximately 300 personnel from the local organizing committee and technology partners. Together, these groups will manage cybersecurity monitoring, incident response, and system resilience across all Olympic venues.

If threats keep developing during the Games, the agency will continuously feed intelligence into technical operations teams to support rapid decision-making. The guiding objective remains consistent. Detect emerging risks early, interpret threat signals accurately, and respond quickly and effectively when specific dangers become visible. This approach reflects Italy’s broader strategy to protect the digital infrastructure that underpins one of the world’s most prominent international sporting events.


Cloud Storage Scam Uses Fake Renewal Notices to Trick Users


Cybercriminals are running a large-scale email scam that falsely claims cloud storage subscriptions have failed. For several months, people across different countries have been receiving repeated messages warning that their photos, files, and entire accounts will soon be restricted or erased due to an alleged payment issue. The volume of these emails has increased sharply, with many users receiving several versions of the same scam in a single day, all tied to the same operation.

Although the wording of each email differs, the underlying tactic remains the same. The messages pressure recipients to act immediately by claiming that a billing problem or storage limit must be fixed right away to avoid losing access to personal data. These emails are sent from unrelated and randomly created domains rather than official service addresses, a common sign of phishing activity.

The subject lines are crafted to trigger panic and curiosity. Many include personal names, email addresses, reference numbers, or specific future dates to appear genuine. The messages state that a renewal attempt failed or a payment method expired, warning that backups may stop working and that photos, videos, documents, and device data could disappear if the issue is not resolved. Fake account numbers, subscription details, and expiry dates are used to strengthen the illusion of legitimacy.

Every email in this campaign contains a link. While the first web address may appear to belong to a well-known cloud hosting platform, it only acts as a temporary relay. Clicking it silently redirects the user to fraudulent websites hosted on changing domains. These pages imitate real cloud dashboards and display cloud-related branding to gain trust. They falsely claim that storage is full and that syncing of photos, contacts, files, and backups has stopped, warning that data will be lost without immediate action.

After clicking forward, users are shown a fake scan that always reports that services such as photo storage, drive space, and email are full. Victims are then offered a short-term discount, presented as a loyalty upgrade with a large price reduction. Instead of leading to a real cloud provider, the buttons redirect users to unrelated sales pages advertising VPNs, obscure security tools, and other subscription products. The final step leads to payment forms designed to collect card details and generate profit for the scammers through affiliate schemes.

Many recipients mistakenly believe these offers will fix a real storage problem and end up paying for unnecessary products. These emails and websites are not official notifications. Real cloud companies do not solve billing problems through storage scans or third-party product promotions. When payments fail, legitimate providers usually restrict extra storage first and provide a grace period before any data removal.

Users should delete such emails without opening links and avoid purchasing anything promoted through them. Any concerns about storage or billing should be checked directly through the official website or app of the cloud service provider.

Former Google Engineer Convicted in U.S. for Stealing AI Trade Secrets to Aid China-Based Startup

 

A former Google software engineer has been found guilty in the United States for unlawfully taking thousands of confidential Google documents to support a technology venture in China, according to an announcement made by the Department of Justice (DoJ) on Thursday.

Linwei Ding, also known as Leon Ding, aged 38, was convicted by a federal jury on 14 charges—seven counts of economic espionage and seven counts of theft of trade secrets. Prosecutors established that Ding illegally copied more than 2,000 internal Google files containing highly sensitive artificial intelligence (AI) trade secrets with the intent of benefiting the People’s Republic of China (PRC).

"Silicon Valley is at the forefront of artificial intelligence innovation, pioneering transformative work that drives economic growth and strengthens our national security," said U.S. Attorney Craig H. Missakian. "We will vigorously protect American intellectual capital from foreign interests that seek to gain an unfair competitive advantage while putting our national security at risk."

Ding was initially indicted in March 2024 after investigators discovered that he had transferred proprietary data from Google’s internal systems to his personal Google Cloud account. The materials allegedly stolen included detailed information on Google’s supercomputing data center architecture used to train and run AI models, its Cluster Management System (CMS), and the AI models and applications operating on that infrastructure.

The misappropriated trade secrets reportedly covered several critical technologies, including the design and functionality of Google’s custom Tensor Processing Unit (TPU) chips and GPU systems, software that enables chip-level communication and task execution, systems that coordinate thousands of chips into AI supercomputers, and SmartNIC technology used for high-speed networking within Google’s AI and cloud platforms.

Authorities stated that the theft occurred over an extended period between May 2022 and April 2023. Ding, who began working at Google in 2019, allegedly maintained undisclosed ties with two China-based technology firms during his employment, one of which was Shanghai Zhisuan Technologies Co., a startup he founded in 2023. Investigators noted that Ding downloaded large volumes of confidential files in December 2023, just days before resigning from the company.

"Around June 2022, Ding was in discussions to be the Chief Technology Officer for an early-stage technology company based in the PRC; by early 2023, Ding was in the process of founding his own technology company in the PRC focused on AI and machine learning and was acting as the company's CEO," the DoJ said.

The case further alleged that Ding attempted to conceal his actions by copying Google source code into the Apple Notes app on his work-issued MacBook, converting the files into PDFs, and uploading them to his personal Google account. Prosecutors also claimed that he asked a colleague to use his access badge to enter a Google facility, creating the false appearance that he was working from the office while he was actually in China.

The investigation reportedly accelerated in late 2023 after Google learned that Ding had delivered a public presentation in China to prospective investors promoting his startup. According to Courthouse News, Ding’s defense attorney Grant Fondo argued that the information could not qualify as trade secrets because it was accessible to a large number of Google employees. "Google chose openness over security," Fondo said.

In a superseding indictment filed in February 2025, Ding was additionally charged with economic espionage, with prosecutors alleging that he applied to a Beijing-backed Shanghai talent program. Such initiatives were described as efforts to recruit overseas researchers to bolster China’s technological and economic development.

"Ding's application for this talent plan stated that he planned to 'help China to have computing power infrastructure capabilities that are on par with the international level,'" the DoJ said. "The evidence at trial also showed that Ding intended to benefit two entities controlled by the government of China by assisting with the development of an AI supercomputer and collaborating on the research and development of custom machine learning chips."

Ding is set to attend a status conference on February 3, 2026. If sentenced to the maximum penalties, he could face up to 10 years in prison for each trade secret theft charge and up to 15 years for each count of economic espionage.

Google-Owned Mandiant Finds Vishing Attacks Against SaaS Platforms


Mandiant recently reported an increase in threat activity deploying tradecraft for extortion attacks carried out by ShinyHunters, a financially motivated group.

  • These attacks use advanced voice phishing (vishing) and fake credential-harvesting sites imitating targeted organizations to gain illicit access to victims' systems by collecting single sign-on (SSO) credentials and two-factor authentication codes. 
  • The attacks target cloud-based software-as-a-service (SaaS) apps to steal sensitive data and internal communications and extort victims. 

Google-owned Mandiant’s threat intelligence team is tracking the attacks under multiple clusters: UNC6661, UNC6671, and UNC6240 (aka ShinyHunters), and the groups appear to be refining their tactics. "While this methodology of targeting identity providers and SaaS platforms is consistent with our prior observations of threat activity preceding ShinyHunters-branded extortion, the breadth of targeted cloud platforms continues to expand as these threat actors seek more sensitive data for extortion," Mandiant said. 

"Further, they appear to be escalating their extortion tactics with recent incidents, including harassment of victim personnel, among other tactics.”

Theft details

In mid-January 2026, Mandiant observed UNC6661 impersonating IT staff, directing employees to credential-harvesting links and tricking them into changing multi-factor authentication (MFA) settings.

The threat actors used stolen credentials to register their own devices for MFA and then steal data from SaaS platforms. In one incident, they exploited access to compromised email accounts to send further phishing emails to users at cryptocurrency-focused organizations.

The emails were later deleted to cover the attackers' tracks. Since the start of the year, researchers have also observed UNC6671 mimicking IT staff to lure victims into entering credentials and MFA login codes on credential-harvesting websites. In a few incidents, the hackers gained access to Okta accounts. 

UNC6671 leveraged PowerShell to steal sensitive data from OneDrive and SharePoint. 

Attack tactic 

Two main differences separate UNC6661 and UNC6671: they used different registrars for their credential-harvesting domains (NICENIC for UNC6661, Tucows for UNC6671), and an extortion email sent after UNC6671 activity did not overlap with known UNC6240 indicators. 

This suggests that additional groups may be participating, highlighting how loosely structured these cybercrime operations are. The targeting of cryptocurrency companies also suggests the threat actors are looking for further ways to monetize their access.

Ivanti Issues Emergency Fixes After Attackers Exploit Critical Flaws in Mobile Management Software




Ivanti has released urgent security updates for two serious vulnerabilities in its Endpoint Manager Mobile (EPMM) platform that were already being abused by attackers before the flaws became public. EPMM is widely used by enterprises to manage and secure mobile devices, which makes exposed servers a high-risk entry point into corporate networks.

The two weaknesses, identified as CVE-2026-1281 and CVE-2026-1340, allow attackers to remotely run commands on vulnerable servers without logging in. Both flaws were assigned near-maximum severity scores because they can give attackers deep control over affected systems. Ivanti confirmed that a small number of customers had already been compromised at the time the issues were disclosed.

This incident reflects a broader pattern of severe security failures that have hit enterprise technology vendors in recent years. Similar high-impact vulnerabilities have previously forced organizations to urgently patch network security and access control products. The repeated targeting of these platforms shows that attackers focus on systems that provide centralized control over devices and identities.

Ivanti stated that only on-premises EPMM deployments are affected. Its cloud-based mobile management services, other endpoint management products, and environments using Ivanti cloud services with Sentry are not impacted by these flaws.

If attackers exploit these vulnerabilities, they can move within internal networks, change system settings, grant themselves administrative privileges, and access stored information. The exposed data may include basic personal details of administrators and device users, along with device-related information such as phone numbers and location data, depending on how the system is configured.

Ivanti has not provided specific indicators of compromise because only a limited number of confirmed cases are known. However, the company published technical analysis to support investigations. Security teams are advised to review web server logs for unusual requests, particularly those containing command-like input. Exploitation attempts may appear as abnormal activity involving internal application distribution or Android file transfer functions, sometimes producing error responses instead of successful ones. Requests sent to error pages using unexpected methods or parameters should be treated as highly suspicious.
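As a starting point for that kind of log review, a simple triage script can surface requests containing shell metacharacters. The patterns and the log path below are generic assumptions, not Ivanti-published indicators, and false positives are expected; this is triage, not detection.

```python
import re

# Common command-injection tells (raw and URL-encoded shell metacharacters).
# Generic assumptions, NOT Ivanti-published indicators. Expect false positives.
SUSPICIOUS = re.compile(r"(;|\||`|\$\(|%24%28|%3B|%7C|%60)", re.IGNORECASE)

def flag_suspicious_requests(access_log_path):
    with open(access_log_path, encoding="utf-8", errors="replace") as log:
        for lineno, line in enumerate(log, 1):
            if SUSPICIOUS.search(line):
                yield lineno, line.rstrip()

# "access.log" is a placeholder; point this at your EPMM web server logs
for lineno, line in flag_suspicious_requests("access.log"):
    print(f"{lineno}: {line}")
```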

Previous investigations show attackers often maintain access by placing or modifying web shell files on application error pages. Security teams should also watch for unexpected application archive files being added to servers, as these may be used to create remote connections back to attackers. Because EPMM does not normally initiate outbound network traffic, any such activity in firewall logs should be treated as a strong warning sign.

Ivanti advises organizations that detect compromise to restore systems from clean backups or rebuild affected servers before applying updates. Attempting to manually clean infected systems is not recommended. Because these flaws were exploited before patches were released, organizations that had vulnerable EPMM servers exposed to the internet at the time of disclosure should treat those systems as compromised and initiate full incident response procedures rather than relying on patching alone. 

CRIL Uncovers ShadowHS: Fileless Linux Post-Exploitation Framework Built for Stealthy Long-Term Access

 

Cyble Research & Intelligence Labs (CRIL) has uncovered ShadowHS, a Linux post-exploitation toolkit that operates entirely in system memory and is built for covert persistence after an initial breach. Instead of dropping binaries on disk, it runs filelessly, helping it bypass standard security checks and leaving minimal forensic traces. ShadowHS relies on a weaponized version of hackshell, enabling attackers to maintain long-term remote control through interactive sessions. This fileless approach makes detection harder because many traditional tools focus on scanning stored files rather than memory-resident activity. 

CRIL found that ShadowHS is delivered using an encrypted shell loader that deploys a heavily modified hackshell component. During execution, the loader reconstructs the payload in memory using AES-256-CBC decryption, along with Perl byte-skipping routines and gzip decompression. After rebuilding, the payload is executed via /proc/<pid>/fd/<fd> with a spoofed argv[0], a method designed to avoid leaving artifacts on disk and evade signature-based detection tools. 

Once active, ShadowHS begins with reconnaissance, mapping system defenses and identifying installed security tools. It checks for evidence of prior compromise and keeps background activity intentionally low, allowing operators to selectively activate functions such as credential theft, lateral movement, privilege escalation, cryptomining, and covert data exfiltration. CRIL noted that this behavior reflects disciplined operator tradecraft rather than opportunistic attacks. 

ShadowHS also performs extensive fingerprinting for commercial endpoint tools such as CrowdStrike, Tanium, Sophos, and Microsoft Defender, as well as monitoring agents tied to cloud platforms and industrial control environments. While runtime activity appears restrained, CRIL emphasized the framework contains a wider set of dormant capabilities that can be triggered when needed. 

A key feature highlighted by CRIL is ShadowHS’s stealthy data exfiltration method. Instead of using standard network channels, it leverages user-space tunneling over GSocket, replacing rsync’s default transport to move data through firewalls and restrictive environments. Researchers observed two variants: one using DBus-based tunneling and another using netcat-style GSocket tunnels, both designed to preserve file metadata such as timestamps, permissions, and partial transfer state. 

The framework also includes dormant modules for memory dumping to steal credentials, SSH-based lateral movement and brute-force scanning, and privilege escalation using kernel exploits. Cryptomining support is included through tools such as XMRig, GMiner, and lolMiner. ShadowHS further contains anti-competition routines to detect and terminate rival malware like Rondo and Kinsing, as well as credential-stealing backdoors such as Ebury, while checking kernel integrity and loaded modules to assess whether the host is already compromised or under surveillance.

CRIL concluded that ShadowHS highlights growing challenges in securing Linux environments against fileless threats. Since these attacks avoid disk artifacts, traditional antivirus and file-based detection fall short. Effective defense requires monitoring process behavior, kernel telemetry, and memory-resident activity, focusing on live system behavior rather than static indicators.
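One narrow, concrete example of such behavior-focused monitoring on Linux: payloads reconstructed in memory often run from a memfd or deleted file, which shows up in /proc. The sketch below covers only that single tell and is no substitute for real kernel telemetry or EDR.

```python
import os

def suspicious_processes():
    """Yield processes whose executable is memfd-backed or deleted on disk."""
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            exe = os.readlink(f"/proc/{pid}/exe")
        except OSError:
            continue                     # kernel threads or vanished processes
        # memfd-backed payloads resolve to "memfd:" paths; note that recently
        # upgraded binaries can also legitimately show "(deleted)".
        if "memfd:" in exe or exe.endswith("(deleted)"):
            with open(f"/proc/{pid}/cmdline", "rb") as f:
                cmd = f.read().replace(b"\0", b" ").decode(errors="replace")
            yield pid, exe, cmd

for pid, exe, cmd in suspicious_processes():
    print(f"pid={pid} exe={exe!r} cmdline={cmd!r}")
```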

Malicious Chrome Extensions Hijack Affiliate Links and Steal ChatGPT Tokens

 

Cybersecurity researchers have uncovered an alarming surge in malicious Google Chrome extensions that hijack affiliate links, steal sensitive data, and siphon OpenAI ChatGPT authentication tokens. These deceptive add-ons, masquerading as handy shopping aids and AI enhancers, infiltrate the Chrome Web Store to exploit user trust. Disguised tools like Amazon Ads Blocker from "10Xprofit" promise ad-free browsing but secretly swap creators' affiliate tags with the developer's own, robbing influencers of commissions across Amazon, AliExpress, Best Buy, Shein, Shopify, and Walmart.

Socket Security identified 29 such extensions in this cluster, uploaded as recently as January 19, 2026, which scan product URLs without user interaction to inject tags like "10xprofit-20." They also scrape product details to attacker servers at "app.10xprofit[.]io" and deploy fake "LIMITED TIME DEAL" countdowns on AliExpress pages to spur impulse buys. Misleading store listings claim mere "small commissions" from coupons, violating policies that demand clear disclosures, user consent for injections, and single-purpose designs.
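Mechanically, this kind of hijack is just query-string rewriting. The sketch below shows how a creator could check whether a product link still carries their expected affiliate tag; the `mytag-20` value is a made-up example, while `10xprofit-20` comes from the reported campaign.

```python
# The hijack itself is query-string rewriting: the extension swaps the
# Amazon "tag" parameter for the attacker's. This checker flags links whose
# tag no longer matches the creator's expected tag ("mytag-20" is made up).
from urllib.parse import urlparse, parse_qs

EXPECTED_TAG = "mytag-20"   # hypothetical creator tag

def affiliate_tag_ok(url: str) -> bool:
    qs = parse_qs(urlparse(url).query)
    return qs.get("tag", [None])[0] == EXPECTED_TAG

print(affiliate_tag_ok("https://www.amazon.com/dp/B000?tag=mytag-20"))      # True
print(affiliate_tag_ok("https://www.amazon.com/dp/B000?tag=10xprofit-20"))  # False
```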

Broadcom's Symantec separately flagged four data-thieving extensions with over 100,000 installs, including Good Tab, which relays clipboard access to "api.office123456[.]com," and Children Protection, which harvests cookies, injects ads, and executes remote JavaScript. DPS Websafe hijacks searches to malicious sites, while Stock Informer exposes users to an old XSS flaw (CVE-2020-28707). Researchers Yuanjing Guo and Tommy Dong stress caution even with trusted sources, as broad permissions enable unchecked surveillance.

LayerX exposed 16 coordinated "ChatGPT Mods" extensions—downloaded about 900 times—that pose as productivity boosters like voice downloaders and prompt managers. These inject scripts into chatgpt.com to capture session tokens, granting attackers full account access to conversations, metadata, and code. Natalie Zargarov notes this leverages AI tools' high privileges, turning trusted brands into deception vectors amid booming enterprise AI adoption.

Compounding risks, the "Stanley" malware-as-a-service toolkit, sold on Russian forums for $2,000-$6,000, generates note-taking extensions that overlay phishing iframes on bank sites while faking legitimate URLs. Premium buyers get Chrome Store approval guarantees and C2 panels for victim management. The toolkit vanished on January 27, 2025, after being exposed, but may rebrand. Varonis' Daniel Kelley warns that browsers are now prime endpoints in BYOD and remote-work setups.

Users must audit extensions for mismatched features, excessive permissions, and vague disclosures—remove suspects via Chrome settings immediately. Limit installs to verified needs, favoring official apps over third-party tweaks. As e-commerce and AI extensions multiply, proactive vigilance thwarts financial sabotage and data breaches in this evolving browser battlefield.

CISA Issues New Guidance on Managing Insider Cybersecurity Risks

 



The US Cybersecurity and Infrastructure Security Agency (CISA) has released new guidance warning that insider threats represent a major and growing risk to organizational security. The advisory was issued during the same week reports emerged about a senior agency official mishandling sensitive information, drawing renewed attention to the dangers posed by internal security lapses.

In its announcement, CISA described insider threats as risks that originate from within an organization and can arise from either malicious intent or accidental mistakes. The agency stressed that trusted individuals with legitimate system access can unintentionally cause serious harm to data security, operational stability, and public confidence.

To help organizations manage these risks, CISA published an infographic outlining how to create a structured insider threat management team. The agency recommends that these teams include professionals from multiple departments, such as human resources, legal counsel, cybersecurity teams, IT leadership, and threat analysis units. Depending on the situation, organizations may also need to work with external partners, including law enforcement or health and risk professionals.

According to CISA, these teams are responsible for overseeing insider threat programs, identifying early warning signs, and responding to potential risks before they escalate into larger incidents. The agency also pointed organizations to additional free resources, including a detailed mitigation guide, training workshops, and tools to evaluate the effectiveness of insider threat programs.

Acting CISA Director Madhu Gottumukkala emphasized that insider threats can undermine trust and disrupt critical operations, making them particularly challenging to detect and prevent.

Shortly before the guidance was released, media reports revealed that Gottumukkala had uploaded sensitive CISA contracting documents into a public version of an AI chatbot during the previous summer. According to unnamed officials, the activity triggered automated security alerts designed to prevent unauthorized data exposure from federal systems.

CISA’s Director of Public Affairs later confirmed that the chatbot was used with specific controls in place and stated that the usage was limited in duration. The agency noted that the official had received temporary authorization to access the tool and last used it in mid-July 2025.

By default, CISA blocks employee access to public AI platforms unless an exception is granted. The Department of Homeland Security, which oversees CISA, also operates an internal AI system designed to prevent sensitive government information from leaving federal networks.

Security experts caution that data shared with public AI services may be stored or processed outside the user’s control, depending on platform policies. This makes such tools particularly risky when handling government or critical infrastructure information.

The incident adds to a series of reported internal disputes and security-related controversies involving senior leadership, as well as similar lapses across other US government departments in recent years. These cases are a testament to how poor internal controls and misuse of personal or unsecured technologies can place national security and critical infrastructure at risk.

While CISA’s guidance is primarily aimed at critical infrastructure operators and regional governments, recent events suggest that insider threat management remains a challenge across all levels of government. As organizations increasingly rely on AI and interconnected digital systems, experts continue to stress that strong oversight, clear policies, and leadership accountability are essential to reducing insider-related security risks.

SK hynix Launches New AI Company as Data Center Demand Drives Growth

 

A surge in demand for data center hardware, driven by limited availability of crucial AI chips, has strengthened SK hynix's market position. Though rooted in memory production, the company is now pushing further by launching a dedicated arm centered on tailored AI offerings. Rising revenues reflect investor confidence, fueled by sustained component shortages, with growth driven more by supply constraints than by a strategic pivot. 

Early next year, the business will launch a division known as “AI Company” (AI Co.), set to begin operations in February. The offshoot aims to play a central role in the AI data center landscape, as customers increasingly prefer complete packages blending infrastructure, software, and support over standalone hardware. According to SK hynix, this shift opens opportunities that traditional component sales alone could not reach. 

Details remain scarce, but according to statements given to The Register, AI Co. plans to offer industry-specific AI tools backed by dedicated data center infrastructure. Initial attention will focus on software that improves how AI runs on hardware, with investment potentially expanding into broader data center areas over time. Alongside funding external ventures and new technology, reports indicate that turning prototypes into market-ready products may form a core part of its strategy.  

SK hynix is setting aside about $10 billion for the new venture, and news of an interim leadership group and governing committee is expected next month. The California-based SSD unit Solidigm will be reorganized: the existing Solidigm entity becomes AI Co., while SSD production moves into a newly created separate entity named Solidigm Inc.  

The AI server industry is leaning toward tailored chips instead of generic ones. According to Counterpoint Research, ASIC shipments for these systems could triple by 2027 and exceed fifteen million units annually by 2028, overtaking data center GPUs in volume shipped. While ASICs can carry high upfront prices, their running costs tend to be lower than premium graphics processors, and demand is driven largely by inference workloads that favor efficiency-focused designs. Broadcom is positioned near the front, expected to hold roughly six of every ten units delivered in 2027. 

A wider shortage of memory chips keeps lifting SK hynix. According to IDC analysts, demand now clearly exceeds available stock because manufacturers are directing more output into server and graphics processing units instead of phones or laptops. As a result, prices throughout the sector have climbed, a shift that directly boosts the firm's earnings. Revenue for 2025 reached ₩97.14 trillion ($67.9 billion), up 47%. In the last quarter alone, income surged 66% compared with the same period the previous year, hitting ₩32.8 trillion ($22.9 billion). 

Suppliers such as ASML are seeing gains too, thanks to rising demand for semiconductor production equipment. The photolithography specialist's latest quarterly results showed €9.7 billion (roughly $11.6 billion) in revenue, and forecasts suggest a sharp rise in orders for its high-end EUV tools this year. Despite broader market shifts, performance remains strong across key segments. 

Still, experts point out that the memory chip shortage may hurt buyers, as computers and phones could become more expensive, and computer shipments are predicted to drop this year amid tight supplies and rising costs.

Indonesia Temporarily Blocks Grok After AI Deepfake Misuse Sparks Outrage

 

Indonesia has temporarily blocked access to Grok, Elon Musk’s AI chatbot, following claims of misuse involving fabricated adult imagery. Authorities acted after manipulated visuals surfaced online; Reuters notes this is the first such restriction on the tool anywhere in the world. The move reflects growing international unease about technology being used to cause harm, with reaction spreading not through policy papers but through real-time consequences caught online.  

A growing number of reports have linked Grok to incidents where users created explicit imagery of women - sometimes involving minors - without consent. Not long after these concerns surfaced, Indonesia’s digital affairs minister, Meutya Hafid, labeled the behavior a severe breach of online safety norms. 

As cited by Reuters, she described unauthorized sexually suggestive deepfakes as fundamentally undermining personal dignity and civil rights in digital environments. Her office emphasized that such acts fall under grave cyber offenses demanding urgent regulatory attention. Temporary restrictions were imposed after Antara News highlighted risks tied to AI-generated explicit material. 

The move was driven by the protection of women, children, and communities, aiming to reduce psychological and societal harm. According to Hafid, fake but realistic intimate imagery counts as digital abuse: though synthetic, such fabricated visuals have real consequences for victims, and the state insists that artificial does not mean harmless. Following the concerns over Grok, officials issued formal notices demanding explanations of its development process and the harms observed. 

Because of potential risks, Indonesian regulators required the firm to detail concrete measures aimed at reducing abuse going forward. Whether the service remains accessible locally hinges on adoption of rigorous filtering systems, according to Hafid. Compliance with national regulations and adherence to responsible artificial intelligence practices now shape the outcome. 

Only after these steps are demonstrated will the service be permitted to continue operating. Last week, Musk and xAI warned that improper use of the chatbot for unlawful acts could lead to legal action; on X, Musk stated that individuals generating illicit material through Grok assume the same liability as those who post such content outright. Still, after rising backlash over the platform's inability to stop deepfake circulation, his stance appeared to shift slightly. 

A post he re-shared from one follower implied that fault rests more with the people creating fakes than with the system hosting them. The debate has spread beyond Indonesia, reaching American lawmakers: three U.S. senators wrote to both Google and Apple, pushing for the removal of the Grok and X apps from their app stores over violations involving explicit material. Their letters framed the request around existing rules prohibiting sexual imagery produced without consent. 

What concerned them most was an automated flood of inappropriate depictions of women and minors, content they labeled damaging and possibly unlawful. AI tools tied to misuse, such as deepfakes made without consent, now face sharper government reactions, and Indonesia's move is part of this rising trend. Though once slow to act, officials increasingly treat such technology as a risk requiring strong intervention. 

A shift is visible: once-hesitant responses now carry weight, driven by public concern over digital harm. Not every nation acts alike, yet the pattern grows clearer with cases like this one, and pressure builds not just from the incidents themselves but from how widely they spread before being challenged.

WhatsApp-Based Astaroth Banking Trojan Targets Brazilian Users in New Malware Campaign

 

A fresh look at digital threats shows malicious software using WhatsApp to spread the Astaroth banking trojan, mainly affecting people in Brazil. Though messaging apps are common tools for connection, they now serve attackers aiming to steal financial data. This method - named Boto Cor-de-Rosa by analysts at Acronis Threat Research - stands out because it leans on social trust within widely used platforms. Instead of relying on email or fake websites, hackers piggyback on real conversations, slipping malware through shared links. 
While such tactics aren’t entirely new, their adaptation to local habits makes them harder to spot. In areas where nearly everyone uses WhatsApp daily, blending in becomes easier for cybercriminals. Researchers stress that ordinary messages can now carry hidden risks when sent from compromised accounts. Unlike older campaigns, this one avoids flashy tricks, favoring quiet infiltration over noise. As behavior shifts online, so do attack strategies - quietly, persistently adapting. 

Acronis reports that the malware harvests WhatsApp contact lists and sends harmful messages automatically, spreading quickly with no need for constant operator input. Notably, while the main Astaroth component is still written in Delphi and the setup script remains in Visual Basic, analysts spotted a new worm-style component built entirely in Python. The mix of languages shows how attackers now build adaptable tools by combining languages for distinct jobs, supporting stealthier, more responsive attack systems. 

Astaroth - sometimes called Guildma - has operated nonstop since 2015, focusing mostly on Brazil within Latin America. Stealing login details and enabling money scams sits at the core of its activity. By 2024, several hacking collectives, such as PINEAPPLE and Water Makara, began spreading it through deceptive email messages. This newest push moves away from that method, turning instead to WhatsApp; because so many people there rely on the app daily, fake requests feel far more believable. 

Although tactics shift, the aim stays unchanged. Exploiting WhatsApp to spread banking trojans is not entirely new, but it has gained speed lately. Earlier, Trend Micro spotted the Water Saci group using comparable methods to push financial malware such as Maverick and a version of Casbaneiro. Messaging apps now appear more appealing to attackers than classic email phishing. Later that year, Sophos disclosed details of an evolving attack series labeled STAC3150, closely tied to previous patterns. This operation focused heavily on individuals in Brazil using WhatsApp, distributing the Astaroth malware through deceptive channels. 

Nearly all infected machines - over 95 percent - were located in Brazil, though isolated cases appeared in the U.S. and Austria. Running uninterrupted since early autumn 2025, the campaign relied on compressed archives paired with installer files, triggering script-based downloads that quietly embed the malicious software. What Acronis has uncovered fits well with those earlier reports. WhatsApp messages deliver harmful ZIP files straight to users; opening one reveals what appears to be a safe document but is actually a Visual Basic Script. Once executed, the script pulls down further tools from remote servers. 
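
For defenders, the lure itself is screenable. Below is a minimal Python sketch - the extension list and attachment filename are illustrative only, not Acronis tooling - that inspects a ZIP attachment and flags entries whose real extension is a script type, such as a "document" actually named invoice.pdf.vbs:

```python
import zipfile

# Illustrative list only: extensions commonly abused to disguise
# executables or scripts as documents.
SCRIPT_EXTS = {".vbs", ".js", ".lnk", ".ps1", ".bat", ".cmd", ".scr"}

def flag_disguised_scripts(zip_path: str) -> list[str]:
    """Return archive entries whose final extension is a script type."""
    suspicious = []
    with zipfile.ZipFile(zip_path) as archive:
        for name in archive.namelist():
            filename = name.lower().rsplit("/", 1)[-1]  # drop folder prefixes
            parts = filename.split(".")
            if len(parts) >= 2 and "." + parts[-1] in SCRIPT_EXTS:
                suspicious.append(name)
    return suspicious

if __name__ == "__main__":
    # Hypothetical attachment name for demonstration.
    for entry in flag_disguised_scripts("attachment.zip"):
        print("Suspicious entry:", entry)
```

Mail and messaging gateways apply far more elaborate versions of this check, but the double-extension pattern alone catches many of these lures.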

Downloading those extra tools kicks off the full infection sequence. Once active, the malware splits its work into two distinct functions: one spreads outward by pulling contact data from WhatsApp and distributing infected files without user input, while the other runs hidden, observing online behavior - especially visits to financial sites - to capture login details. 

The software also logs its own performance constantly, feeding back live updates on how many messages succeed or fail, along with transmission speed. Embedded reporting tools spotted by Acronis give the attackers a constant stream of operational insight.

Microsoft BitLocker Encryption Raises Privacy Questions After FBI Key Disclosure Case

 


Microsoft’s BitLocker encryption, long viewed as a safeguard for Windows users’ data, is under renewed scrutiny after reports revealed the company provided law enforcement with encryption keys in a criminal investigation.

The case, detailed in a government filing [PDF], alleges that individuals in Guam illegally claimed pandemic-related unemployment benefits. According to Forbes, this marks the first publicly documented instance of Microsoft handing over BitLocker recovery keys to law enforcement.

BitLocker is a built-in Windows security feature designed to encrypt data stored on devices. It operates through two configurations: Device Encryption, which offers a simplified setup, and BitLocker Drive Encryption, a more advanced option with greater control.

In both configurations, Microsoft generally stores BitLocker recovery keys on its servers when encryption is activated using a Microsoft account. As the company explains in its documentation, "If you use a Microsoft account, the BitLocker recovery key is typically attached to it, and you can access the recovery key online."

A similar approach applies to organizational devices. Microsoft notes, "If you're using a device that's managed by your work or school, the BitLocker recovery key is typically backed up and managed by your organization's IT department."

Users are not required to rely on Microsoft for key storage. Alternatives include saving the recovery key to a USB drive, storing it as a local file, or printing it. However, many customers opt for Microsoft’s cloud-based storage because it allows easy recovery if access is lost. This convenience, though, effectively places Microsoft in control of data access and reduces the user’s exclusive ownership of encryption keys.
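
For readers who want to see where their own keys live, the built-in manage-bde tool can list a drive's key protectors; the "Numerical Password" entry is the 48-digit recovery key, which can then be saved locally rather than escrowed. A minimal sketch, assuming Windows, an elevated prompt, and the system drive at C: (the Python wrapper is just for illustration):

```python
import subprocess

# List BitLocker key protectors for the system drive. manage-bde ships
# with Windows; this must be run from an elevated (administrator) prompt.
# The "Numerical Password" protector shown in the output is the 48-digit
# recovery key a user can record locally instead of relying on cloud escrow.
result = subprocess.run(
    ["manage-bde", "-protectors", "-get", "C:"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```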

Apple provides a comparable encryption solution through FileVault, paired with iCloud. Apple offers two protection levels: Standard Data Protection and Advanced Data Protection for iCloud.

Under Standard Data Protection, Apple retains the encryption keys for most iCloud data, excluding certain sensitive categories such as passwords and keychain data. With Advanced Data Protection enabled, Apple holds keys only for iCloud Mail, Contacts, and Calendar. Both Apple and Microsoft comply with lawful government requests, but neither can disclose encryption keys they do not possess.

Apple explicitly addresses this in its law enforcement guidelines [PDF]: "All iCloud content data stored by Apple is additionally encrypted at the location of the server. For data Apple can decrypt, Apple retains the encryption keys in its US data centers. Apple does not receive or retain encryption keys for [a] customer's end-to-end encrypted data."

This differs from BitLocker’s default behavior, where Microsoft may retain access to a customer’s encryption keys if the user enables cloud backup during setup.

Microsoft states that it does not share its own encryption keys with governments, but it stops short of extending that guarantee to customer-managed keys. In its law enforcement guidance, the company says, "We do not provide any government with our encryption keys or the ability to break our encryption." It further adds, "In most cases, our default is for Microsoft to securely store our customers' encryption keys. Even our largest enterprise customers usually prefer we keep their keys to prevent accidental loss or theft. However, in many circumstances we also offer the option for consumers or enterprises to keep their own keys, in which case Microsoft does not maintain copies."

Microsoft’s latest Government Requests for Customer Data Report, covering July 2024 through December 2024, shows the company received 128 law enforcement requests globally, including 77 from US agencies. Only four requests during that period—three from Brazil and one from Canada—resulted in content disclosure.

After the article was published, a Microsoft spokesperson clarified, “With BitLocker, customers can choose to store their encryption keys locally, in a location inaccessible to Microsoft, or in Microsoft’s cloud. We recognize that some customers prefer Microsoft’s cloud storage so we can help recover their encryption key if needed. While key recovery offers convenience, it also carries a risk of unwanted access, so Microsoft believes customers are in the best position to decide whether to use key escrow and how to manage their keys.”

Privacy advocates argue that this design reflects Microsoft’s priorities. As Erica Portnoy, senior staff technologist at the Electronic Frontier Foundation, stated in an email to The Register, "Microsoft is making a tradeoff here between privacy and recoverability. At a guess, I'd say that's because they're more focused on the business use case, where loss of data is much worse than Microsoft or governments getting access to that data. But by making that choice, they make their product less suitable for individuals and organizations with higher privacy needs. It's a clear message to activist organizations and law firms that Microsoft is not building their products for you."

Multi-Stage Phishing Campaign Deploys Amnesia RAT and Ransomware Using Cloud Services

 

One recently uncovered cyberattack is targeting individuals across Russia through a carefully staged deception campaign. Rather than exploiting software vulnerabilities, the operation relies on manipulating user behavior, according to analysis by Cara Lin of Fortinet FortiGuard Labs. The attack delivers two major threats: ransomware that encrypts files for extortion and a remote access trojan known as Amnesia RAT. Legitimate system tools and trusted services are repurposed as weapons, allowing the intrusion to unfold quietly while bypassing traditional defenses. By abusing real cloud platforms, the attackers make detection significantly more difficult, as nothing initially appears out of place. 

The attack begins with documents designed to resemble routine workplace material. On the surface, these files appear harmless, but they conceal code that runs without drawing attention. Visual elements within the documents are deliberately used to keep victims focused, giving the malware time to execute unseen. Fortinet researchers noted that these visuals are not cosmetic but strategic, helping attackers establish deeper access before suspicion arises. 

A defining feature of the campaign is its coordinated use of multiple public cloud services. Instead of relying on a single platform, different components are distributed across GitHub and Dropbox. Scripts are hosted on GitHub, while executable payloads such as ransomware and remote access tools are stored on Dropbox. This fragmented infrastructure improves resilience, as disabling one service does not interrupt the entire attack chain and complicates takedown efforts. 

Phishing emails deliver compressed archives that contain decoy documents alongside malicious Windows shortcut files labeled in Russian. These shortcuts use double file extensions to impersonate ordinary text files. When opened, they trigger a PowerShell command that retrieves additional code from a public GitHub repository, functioning as an initial installer. The process runs silently, modifies system settings to conceal later actions, and opens a legitimate-looking document to maintain the illusion of normal activity. 

After execution, the attackers receive confirmation via the Telegram Bot API. A deliberate delay follows before launching an obfuscated Visual Basic Script, which assembles later-stage payloads directly in memory. This approach minimizes forensic traces and allows attackers to update functionality without altering the broader attack flow. 

The malware then aggressively disables security protections. Microsoft Defender exclusions are configured, protection modules are shut down, and the defendnot utility is used to deceive Windows into disabling antivirus defenses entirely. Registry modifications block administrative tools, repeated prompts seek elevated privileges, and continuous surveillance is established through automated screenshots exfiltrated via Telegram. 
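
Defenders can audit for exactly this kind of tampering by reviewing Defender's exclusion lists. A minimal Python sketch, assuming the standard registry location for path exclusions and an elevated prompt (the helper name is hypothetical, and Tamper Protection may further restrict access):

```python
import winreg

# Standard registry location for Microsoft Defender path exclusions,
# a common target for tampering in campaigns like this one.
EXCLUSIONS_KEY = r"SOFTWARE\Microsoft\Windows Defender\Exclusions\Paths"

def list_defender_path_exclusions() -> list[str]:
    """Enumerate excluded paths so unexpected entries can be spotted."""
    paths = []
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, EXCLUSIONS_KEY) as key:
            index = 0
            while True:
                try:
                    name, _value, _type = winreg.EnumValue(key, index)
                except OSError:  # no more values to enumerate
                    break
                paths.append(name)
                index += 1
    except FileNotFoundError:
        pass  # key absent: no path exclusions configured
    except PermissionError:
        print("Access denied - run from an elevated prompt.")
    return paths

if __name__ == "__main__":
    for path in list_defender_path_exclusions():
        print("Defender exclusion:", path)
```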

Once defenses are neutralized, Amnesia RAT is downloaded from Dropbox. The malware enables extensive data theft from browsers, cryptocurrency wallets, messaging apps, and system metadata, while providing full remote control of infected devices. In parallel, ransomware derived from the Hakuna Matata family encrypts files, manipulates clipboard data to redirect cryptocurrency transactions, and ultimately locks the system using WinLocker. 

Fortinet emphasized that the campaign reflects a broader shift in phishing operations, where attackers increasingly weaponize legitimate tools and psychological manipulation instead of exploiting software flaws. Microsoft advises enabling Tamper Protection and monitoring Defender changes to reduce exposure, as similar attacks are becoming more widespread across Russian organizations.

1Password Launches Pop-Up Alerts to Block Phishing Scams

 

1Password has introduced a new phishing protection feature that displays pop-up warnings when users visit suspicious websites, aiming to reduce the risk of credential theft and account compromise. This enhancement builds on the password manager’s existing safeguards and responds to growing phishing threats fueled by increasingly sophisticated attack techniques.

Traditionally, 1Password protects users by refusing to auto-fill credentials on sites whose URLs do not exactly match those stored in the user’s vault. While this helps block many phishing attempts, it still relies on users noticing that something is wrong when their password manager does not behave as expected, which is not always the case. Some users may assume the tool malfunctioned or that their vault is locked and proceed to type passwords manually, inadvertently handing them to attackers.

The new feature addresses this gap by adding a dedicated pop-up alert that appears when 1Password detects a potential phishing URL, such as a typosquatted or lookalike domain. For example, a domain with an extra character in the name may appear convincing at a glance, especially when the phishing page closely imitates the legitimate site’s design. The pop-up is designed to prompt users to slow down, double-check the URL, and reconsider entering their credentials, effectively adding a behavioral safety net on top of technical controls.
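
Conceptually, the two layers are simple: auto-fill only on an exact hostname match, and warn when a hostname is suspiciously close to a saved one without matching. The Python sketch below is a hypothetical illustration of that idea - the saved hosts, helper names, and similarity threshold are invented, and this is not 1Password's actual implementation:

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Invented example data: hostnames with saved logins in a user's vault.
SAVED_HOSTS = {"accounts.example.com", "login.mybank.com"}

def should_autofill(url: str) -> bool:
    """Auto-fill credentials only on an exact hostname match."""
    return urlparse(url).hostname in SAVED_HOSTS

def looks_like_typosquat(url: str, threshold: float = 0.85) -> bool:
    """Warn when a host nearly matches a saved one, e.g. a lookalike domain."""
    host = urlparse(url).hostname or ""
    if host in SAVED_HOSTS:
        return False
    return any(
        SequenceMatcher(None, host, saved).ratio() >= threshold
        for saved in SAVED_HOSTS
    )

if __name__ == "__main__":
    print(should_autofill("https://login.mybank.com/signin"))   # True
    print(looks_like_typosquat("https://login.mybanck.com/"))   # True: lookalike
```

Production tools weigh many more signals - domain age, certificate data, known-phishing feeds - but even this crude similarity test flags login.mybanck.com against a saved login.mybank.com.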

1Password is rolling out this capability automatically for individual and family subscribers, ensuring broad coverage for consumers without requiring configuration changes. In business environments, administrators can enable the feature for employees through Authentication Policies in the 1Password admin console, integrating it into existing access control strategies. This flexibility allows organizations to align phishing protection with their security policies and training programs.

The company underscores the importance of this enhancement with survey findings from 2,000 U.S. respondents, revealing that 61% had been successfully phished and 75% do not check URLs before clicking links. The survey also shows that one-third of employees reuse passwords on work accounts, nearly half have fallen for phishing at work, and many believe protection is solely the IT department’s responsibility. With 72% admitting to clicking suspicious links and over half choosing to delete rather than report questionable messages, 1Password’s new pop-up warnings aim to counter risky user behavior and strengthen overall phishing defenses.

Online Misinformation and AI-Driven Fake Content Raise Concerns for Election Integrity

 

With elections drawing near, unease is spreading about how digital falsehoods might influence voter behavior. False narratives on social platforms may skew perception, according to officials and scholars alike. As artificial intelligence advances, deceptive content grows more convincing, slipping past scrutiny. Trust in core societal structures risks erosion under such pressure. Warnings come not just from academics but also from community leaders watching real-time shifts in public sentiment.  

Fake messages have recently circulated online, pretending to be from the City of York Council. Though they looked real, officials later stated these ads were entirely false. One showed a request for people willing to host asylum seekers; another asked volunteers to take down St George flags. A third offered work fixing road damage across neighborhoods. What made them convincing was their design - complete with official logos, formatting, and contact information typical of genuine notices. 

Someone scrolling quickly might easily believe them. Despite their authentic appearance, none of the schemes mentioned were real or approved by the local government, and the resemblance to actual council material caused confusion until authorities stepped in to clarify. When BBC Verify examined the images, problems stood out immediately: blurred logos, wrong fonts, and misspelled words - telltale signs of artificial creation. 

Details such as fingers appeared twisted or incomplete - a frequent flaw in AI-generated visuals. One poster included an email address tied to a real council employee, though that person had no knowledge of the material, and websites referenced in some flyers simply did not exist. Even so, plenty of people passed the content along without questioning it; a single fabricated post spread through networks totaling over 500,000 followers. Despite the clear warning signs, the false material held strong appeal. 

What spreads fast online is not always true. Clare Douglas, head of City of York Council, pointed out that today's technology amplifies old problems in new ways: false stories that once moved slowly now race across devices faster than fact-checkers can respond. Trust fades when people see conflicting claims everywhere, especially around health or voting. Institutions lose ground not because facts disappear, but because attention scatters too widely - and when doubt lingers longer than corrections, participation quietly dips over time.  

Tensions have surfaced in other regions as well. In Barnsley, misinformation targeting asylum seekers and councils emerged online ahead of public meetings, according to Sir Steve Houghton, the council's leader. False stories spread further because influencers keep sharing them - for many, profit outweighs correction. Although official channels issued clarifications, distorted messages continue to flood digital spaces; their sheer number, and how long they linger, threatens trust between groups and raises everyday security risks. Not everyone checks facts, notes Ilya Yablokov of the University of Sheffield's Disinformation Research Cluster, and AI now makes faking believable content easier than ever. 

With minimal setup, a single person can flood online spaces quickly. Busy readers help falsehoods spread: they skip checking details before passing things along, letting gut feelings or existing opinions shape what gets shared. Locally targeted fabrications may cost almost nothing to create, yet their impact on democracy can run deep. 

As misleading accounts reach more voters, specialists emphasize skills such as questioning sources, checking facts, and understanding how media messages work - habits that help preserve confidence in public processes and support thoughtful engagement during elections.