
Quantum Error Correction Moves From Theory to Practical Breakthroughs

Quantum computing’s biggest roadblock has always been fragility: qubits lose information at the slightest disturbance, and protecting them requires linking many unstable physical qubits into a single logical qubit that can detect and repair errors. That redundancy works in principle, but the repeated checks and recovery cycles have historically imposed such heavy overhead that error correction remained mainly academic. Over the last year, however, a string of complementary advances suggests quantum error correction is transitioning from theory into engineering practice. 

Algorithmic improvements are cutting correction overheads by treating errors as correlated events rather than isolated failures. Techniques that combine transversal operations with smarter decoders reduce the number of measurement-and-repair rounds needed, shortening runtimes dramatically for certain hardware families. Platforms built from neutral atoms benefit especially from these methods because their qubits can be rearranged and operated on in parallel, enabling fewer, faster correction cycles without sacrificing accuracy.

On the hardware side, researchers have started to demonstrate logical qubits that outperform the raw physical qubits that compose them. Showing a logical qubit with lower effective error rates on real devices is a milestone: it proves that fault tolerance can deliver practical gains, not just theoretical resilience. Teams have even executed scaled-down versions of canonical quantum algorithms on error-protected hardware, moving the community from “can this work?” to “how do we make it useful?” 

Software and tooling are maturing to support these hardware and algorithmic wins. Open-source toolkits now let engineers simulate error-correction strategies before hardware commits, while real-time decoders and orchestration layers bridge quantum operations with the classical compute that must act on error signals. Training materials and developer platforms are emerging to close the skills gap, helping teams build, test, and operate QEC stacks more rapidly. 
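
To make the tooling point concrete, here is a minimal sketch of that simulate-before-hardware workflow using the open-source Stim and PyMatching libraries (chosen purely as an illustration; the article does not name specific toolkits). It builds a small surface-code memory circuit, samples noisy syndrome measurements, and decodes them to estimate a logical error rate.

```python
# Minimal QEC simulation sketch: generate a small rotated surface-code memory
# circuit with Stim, sample detection events under a simple depolarizing noise
# model, and decode them with PyMatching to estimate the logical error rate.
# Illustrative only -- real experiments tune codes, noise models, and decoders.
import numpy as np
import stim
import pymatching

def logical_error_rate(distance: int, rounds: int, noise: float, shots: int = 20_000) -> float:
    circuit = stim.Circuit.generated(
        "surface_code:rotated_memory_z",
        distance=distance,
        rounds=rounds,
        after_clifford_depolarization=noise,
    )
    sampler = circuit.compile_detector_sampler()
    detections, observables = sampler.sample(shots, separate_observables=True)

    # Build a matching decoder from the circuit's detector error model.
    dem = circuit.detector_error_model(decompose_errors=True)
    matcher = pymatching.Matching.from_detector_error_model(dem)
    predictions = matcher.decode_batch(detections)

    mistakes = np.any(predictions != observables, axis=1).sum()
    return mistakes / shots

if __name__ == "__main__":
    for d in (3, 5):
        rate = logical_error_rate(d, rounds=d, noise=0.005)
        print(f"distance {d}: logical error rate ~ {rate:.4f}")
```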

That progress does not negate the engineering challenges ahead. Error correction still multiplies resource needs and demands significant classical processing for decoding in real time. Different qubit technologies present distinct wiring, control, and scaling trade-offs, and growing system size will expose new bottlenecks. Experts caution that advances are steady rather than explosive: integrating algorithms, hardware, and orchestration remains the hard part. 

Still, the arc is unmistakable. Faster algorithms, demonstrable logical qubits, and a growing ecosystem of software and training make quantum error correction an engineering discipline now, not a distant dream. The field has shifted from proving concepts to building repeatable systems, and while fault-tolerant, cryptographically relevant quantum machines are not yet here, the path toward reliable quantum computation is clearer than it has ever been.

ClickFix: The Silent Cyber Threat Tricking Families Worldwide

 

ClickFix has emerged as one of the most pervasive and dangerous cybersecurity threats in 2025, yet remains largely unknown to the average user and even many IT professionals. This social engineering technique manipulates users into executing malicious scripts—often just a single line of code—by tricking them with fake error messages, CAPTCHA prompts, or fraudulent browser update alerts.

The attack exploits the natural human desire to fix technical problems, bypassing most endpoint protections and affecting Windows, macOS, and Linux systems. ClickFix campaigns typically begin when a victim encounters a legitimate-looking message urging them to run a script or command, often on compromised or spoofed websites. 

Once executed, the script connects the victim’s device to a server controlled by attackers, allowing stealthy installation of malware such as credential stealers (e.g., Lumma Stealer, SnakeStealer), remote access trojans (RATs), ransomware, cryptominers, and even nation-state-aligned malware. The technique is highly effective because it leverages “living off the land” binaries, which are legitimate system tools, making detection difficult for security software.
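
ClickFix lures commonly instruct victims to paste the command into the Windows Run dialog, which leaves traces under the RunMRU registry key. The following is a minimal, illustrative triage sketch in Python (standard winreg module, Windows only; the keyword list is a rough assumption for demonstration, not a vetted detection rule):

```python
# Illustrative triage sketch (Windows-only, standard library): list the current
# user's Run-dialog history (the RunMRU key) and flag entries that invoke
# script interpreters or downloaders -- a common trace left by ClickFix lures.
import winreg

SUSPICIOUS = ("powershell", "mshta", "cmd /c", "curl", "bitsadmin", "rundll32", "wscript")

def run_mru_entries():
    path = r"Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU"
    try:
        key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, path)
    except FileNotFoundError:
        return  # no Run-dialog history recorded
    with key:
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values
            if name != "MRUList":
                yield str(value)
            index += 1

if __name__ == "__main__":
    for entry in run_mru_entries():
        flag = "SUSPICIOUS" if any(s in entry.lower() for s in SUSPICIOUS) else "ok"
        print(f"[{flag}] {entry}")
```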

ClickFix attacks have surged by over 500% in 2025, accounting for nearly 8% of all blocked attacks and ranking as the second most common attack vector after traditional phishing. Threat actors are now selling ClickFix builders to automate the creation of weaponized landing pages, further accelerating the spread of these attacks. Victims are often ordinary users, including families, who may lack the technical knowledge to distinguish legitimate error messages from malicious ones.

The real-world impact of ClickFix is extensive: it enables attackers to steal sensitive information, hijack browser sessions, install malicious extensions, and even execute ransomware attacks. Cybersecurity firms and agencies are urging users to exercise caution with prompts to run scripts and to verify the authenticity of error messages before taking any action. Proactive human risk management and user education are essential to mitigate the threat posed by ClickFix and similar social engineering tactics.

New runC Vulnerabilities Expose Docker and Kubernetes Environments to Potential Host Breakouts

 

Three newly uncovered vulnerabilities in the runC container runtime have raised significant concerns for organizations relying on Docker, Kubernetes, and other container-based systems. The flaws, identified as CVE-2025-31133, CVE-2025-52565, and CVE-2025-52881, were disclosed by SUSE engineer and Open Container Initiative board member Aleksa Sarai. Because runC serves as the core OCI reference implementation responsible for creating container processes, configuring namespaces, managing mounts, and orchestrating cgroups, weaknesses at this level have broad consequences for modern cloud and DevOps infrastructure. 

The issues stem from the way runC handles several low-level operations, which attackers could manipulate to escape the container boundary and obtain root-level write access on the underlying host system. All three vulnerabilities allow adversaries to redirect or tamper with mount operations or trigger writes to sensitive files, ultimately undoing the isolation that containers are designed to enforce. CVE-2025-31133 involves a flaw where runC attempts to “mask” system files by bind-mounting /dev/null. If an attacker replaces /dev/null with a symlink during initialization, runC can end up mounting an attacker-chosen location read-write inside the container, enabling potential writes to the /proc filesystem and allowing escape. 

CVE-2025-52565 presents a related problem involving races and symlink redirection. The bind mount intended for /dev/console can be manipulated so that runC unknowingly mounts an unintended target before full protections are in place. This again opens a window for writes to critical procfs entries, providing an attacker with a pathway out of the container. The third flaw, CVE-2025-52881, highlights how runC may be tricked into performing writes to /proc that get redirected to files controlled by the attacker. This behavior could bypass certain Linux Security Module relabel protections and turn routine runC operations into dangerous arbitrary writes, including to sensitive files such as /proc/sysrq-trigger. 

Two of the vulnerabilities—CVE-2025-31133 and CVE-2025-52881—affect all versions of runC, while CVE-2025-52565 impacts versions from 1.0.0-rc3 onward. Patches have been issued in runC versions 1.2.8, 1.3.3, 1.4.0-rc.3, and later. Security researchers at Sysdig noted that exploiting these flaws requires attackers to start containers with custom mount configurations, a condition that could be met via malicious Dockerfiles or harmful pre-built images. So far, there is no evidence of active exploitation, but the potential severity has prompted urgent guidance. Detection efforts should focus on monitoring suspicious symlink activity, according to Sysdig’s advisory. 
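
Because the fixes ship in specific releases, one quick triage step is to compare the installed runC version against the patched baselines listed above. Below is a rough Python sketch of that check (it assumes `runc --version` is on the PATH and prints a line starting with `runc version X.Y.Z`; vendor-patched or release-candidate builds still need manual review):

```python
# Rough triage sketch: compare the locally installed runC version against the
# patched baselines for CVE-2025-31133 / CVE-2025-52565 / CVE-2025-52881.
# Release-candidate and vendor-backported builds may need manual review.
import re
import subprocess

PATCHED_BASELINES = [(1, 2, 8), (1, 3, 3), (1, 4, 0)]

def installed_runc_version() -> tuple[int, int, int] | None:
    try:
        out = subprocess.run(["runc", "--version"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        return None  # runc not on PATH
    match = re.search(r"runc version (\d+)\.(\d+)\.(\d+)", out)
    return tuple(int(x) for x in match.groups()) if match else None

def looks_patched(version: tuple[int, int, int]) -> bool:
    major, minor, patch = version
    for p_major, p_minor, p_patch in PATCHED_BASELINES:
        if (major, minor) == (p_major, p_minor):
            return patch >= p_patch
    # A series newer than any listed baseline is assumed to carry the fixes.
    return (major, minor) > (1, 4)

if __name__ == "__main__":
    version = installed_runc_version()
    if version is None:
        print("Could not determine runC version -- check manually.")
    else:
        status = "appears patched" if looks_patched(version) else "UPDATE RECOMMENDED"
        print(f"runc {'.'.join(map(str, version))}: {status}")
```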

The runC team has also emphasized enabling user namespaces for all containers while avoiding mappings that equate the host’s root user with the container’s root. Doing so limits the scope of accessible files because user namespace restrictions prevent host-level file access. Security teams are further encouraged to adopt rootless containers where possible to minimize the blast radius of any successful attack. Even though traditional container isolation provides significant security benefits, these findings underscore the importance of layered defenses and continuous monitoring in containerized environments, especially as threat actors increasingly look for weaknesses at the infrastructure level.

U.S. Agencies Consider Restrictions on TP-Link Routers Over Security Risks

 



A coordinated review by several federal agencies in the United States has intensified scrutiny of TP-Link home routers, with officials considering whether the devices should continue to be available in the country. Recent reporting indicates that more than six departments and agencies have supported a proposal recommending restrictions because the routers may expose American data to security risks.

Public attention on the matter began in December 2024, when major U.S. outlets revealed that the Departments of Commerce, Defense and Justice had opened parallel investigations into TP-Link. The inquiries focused on whether the company’s corporate structure and overseas connections could create opportunities for foreign government influence. After those initial disclosures, little additional information surfaced until the Washington Post reported that the proposal had cleared interagency review.

Officials involved believe the potential risk comes from how TP-Link products collect and manage sensitive information, combined with the company’s operational ties to China. TP-Link strongly disputes the allegation that it is subject to any foreign authority and says its U.S. entity functions independently. The company maintains that it designs and manufactures its devices without any outside control.

TP-Link was founded in Shenzhen in 1996 and reorganized in 2024 into two entities: TP-Link Technologies and TP-Link Systems. The U.S. arm, TP-Link Systems, operates from Irvine, California, with roughly 500 domestic employees and thousands more across its global workforce. Lawmakers previously expressed concern that companies with overseas operations may be required to comply with foreign legal demands. They also cited past incidents in which compromised routers, including those from TP-Link, were used by threat actors during cyber operations targeting the United States.

The company has grown rapidly in the U.S. router market since 2019. Some reports place its share at a majority of consumer sales, although TP-Link disputes those figures and points to independent data that estimates a smaller share. One industry platform found that about 12 percent of active U.S. home routers are TP-Link devices. Previous reporting also noted that more than 300 internet providers distribute TP-Link equipment to customers.

In a separate line of inquiry, the Department of Justice is examining whether TP-Link set prices at levels intended to undercut competitors. The company denies this and says its pricing remains sustainable and profitable.

Cybersecurity researchers have found security flaws in routers from many manufacturers, not only TP-Link. Independent analysts identified firmware implants linked to state-sponsored groups, as well as widespread botnet activity involving small office and home routers. A Microsoft study reported that some TP-Link devices became part of password spray attacks when users did not change default administrator credentials. Experts emphasize that router vulnerabilities are widespread across the industry and not limited to one brand.

Consumers who use TP-Link routers can reduce risk by updating administrator passwords, applying firmware updates, enabling modern encryption such as WPA3, turning on built-in firewalls, and considering reputable VPN services. Devices that no longer receive updates should be replaced.

The Department of Commerce has not issued a final ruling. Reports suggest that ongoing U.S. diplomatic discussions with China could influence the timeline. TP-Link has said it is willing to improve transparency, strengthen cybersecurity practices and relocate certain functions if required. 

Why Oslo’s Bus Security Tests Highlight the Hidden Risks of Connected Vehicles

 

Modern transportation looks very different from what it used to be, and the question of who controls a vehicle on the road no longer has a simple answer. Decades ago, the person behind the wheel was unquestionably the one in charge. But as cars, buses, and trucks increasingly rely on constant connectivity, automated functions, and remote software management, the definition of a “driver” has become more complicated. With vehicles now vulnerable to remote interference, the risks tied to this connectivity are prompting transportation agencies to take a closer look at what’s happening under the hood. 

This concern is central to a recent initiative by Ruter, the public transport agency responsible for Oslo and the surrounding Akershus region. Ruter conducted a detailed assessment of two electric bus models—one from Dutch manufacturer VDL and another from Chinese automaker Yutong—to evaluate the cybersecurity implications of integrating modern, connected vehicles into public transit networks. The goal was straightforward but crucial: determine whether any external entity could access bus controls or manipulate onboard camera systems. 

The VDL buses showed no major concerns because they lacked the capability for remote software updates, effectively limiting the pathways through which an attacker could interfere. The Yutong buses, however, presented a more complex picture. While one identified vulnerability tied to third-party software has since been fixed, Ruter’s investigation revealed a more troubling possibility: the buses could potentially be halted or disabled by the manufacturer through remote commands. Ruter is now implementing measures to slow or filter incoming signals so they can differentiate between legitimate updates and suspicious activity, reducing the chance of an unnoticed hijack attempt. 

Ruter’s interest in cybersecurity aligns with broader global concerns. The Associated Press noted that similar tests are being carried out by various organizations because the threat landscape continues to expand. High-profile demonstrations over the past decade have shown that connected vehicles are susceptible to remote interference. One of the most well-known examples was when WIRED journalist Andy Greenberg rode in a Jeep that hackers remotely manipulated, controlling everything from the brakes to the steering. More recent research, including reports from LiveScience, highlights attacks that can trick vehicles’ perception systems into detecting phantom obstacles. 

Remote software updates play an important role in keeping vehicles functional and reducing the need for physical recalls, but they also create new avenues for misuse. As vehicles become more digital than mechanical, transit agencies and governments must treat cybersecurity as a critical aspect of transportation safety. Oslo’s findings reinforce the reality that modern mobility is no longer just about engines and wheels—it’s about defending the invisible networks that keep those vehicles running.

USB Drives Are Handy, But Never For Your Only Backup

 

Storing important files on USB drives is convenient because they are cheap and easy to use, but there are significant considerations around both data preservation and security that users must address. Although widely used for backup, USB drives should not be relied upon as the sole safeguard for crucial files, since risks such as device failure, malware infection, and physical theft can compromise data integrity.

Data preservation challenges

USB drive longevity depends heavily on build quality, frequency of use, and storage conditions. Cheap flash drives carry a higher failure risk compared to rugged, high-grade SSDs, though even premium devices can malfunction unexpectedly. Relying on a single drive is risky; redundancy is the key to effective file preservation.

Users are encouraged to maintain multiple backups, ideally spanning different storage approaches—such as using several USB drives, local RAID setups, and cloud storage—for vital files. Each backup method has its trade-offs: local storage like RAID arrays provides resilience against hardware failure, while cloud storage via services such as Google Drive or Dropbox enables convenient access but introduces exposure to hacking or unauthorized access due to online vulnerabilities.

Malware and physical risks

All USB drives are susceptible to malware, especially when connected to compromised computers. Such infections can propagate, and in some cases, lead to ransomware attacks where files are held hostage. Additionally, used or secondhand USB drives pose heightened malware risks and should typically be avoided. Physical security is another concern; although USB drives are inaccessible remotely when unplugged, they are unprotected if stolen unless properly encrypted.

Encryption significantly improves USB drive security. Tools like BitLocker (Windows) and Disk Utility (macOS) enable password protection, making it more difficult for thieves or unauthorized users to access files even if they obtain the physical device. Secure physical storage, such as safes or safety deposit boxes, further limits theft risk.

Recommended backup strategy

Most users should keep at least two backups: one local (such as a USB drive) and one cloud-based. This dual approach ensures data recovery if either the cloud service is compromised or the physical drive is lost or damaged. For extremely sensitive data, robust local systems with advanced encryption are preferable. Regularly simulating data loss scenarios and confirming your ability to restore lost files provides confidence and peace of mind in your backup strategy.
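
As one concrete way to rehearse a restore test, the short Python sketch below (illustrative only; the source and backup paths are placeholders) hashes every file in a source folder and its backup copy and reports anything missing or altered:

```python
# Illustrative backup-verification sketch: compare SHA-256 hashes of every file
# in a source directory against a backup copy (e.g. a mounted USB drive) and
# report files that are missing or differ. Paths below are placeholders.
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(source: Path, backup: Path) -> list[str]:
    problems = []
    for original in source.rglob("*"):
        if not original.is_file():
            continue
        copy = backup / original.relative_to(source)
        if not copy.is_file():
            problems.append(f"MISSING  {copy}")
        elif file_hash(original) != file_hash(copy):
            problems.append(f"CHANGED  {copy}")
    return problems

if __name__ == "__main__":
    issues = verify_backup(Path.home() / "Documents", Path("/Volumes/BACKUP_USB/Documents"))
    print("\n".join(issues) if issues else "Backup matches the source.")
```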

Continuous Incident Response Is Redefining Cybersecurity Strategy

 


With organizations facing relentless digital exposure, continuous security monitoring has become an operational necessity rather than merely a best practice. In 2024, cyber-attacks rose by nearly 30%, with the average enterprise fending off more than 1,600 attempted intrusions a week and the financial impact of a data breach regularly running into six figures. 

Even so, the real crisis extends well beyond the rising volume of threats. Cybersecurity strategies used to rely on a familiar formula of detecting quickly, responding promptly, and recovering quickly, but that cadence no longer suffices when adversaries automate reconnaissance, exploit cloud misconfigurations within minutes, and weaponize legitimate tools to move laterally far faster than human analysts can react. 

Successive waves of innovation, from EDR to XDR, have widened visibility across sprawling digital estates, yet they have also opened a growing gap between what organizations can see and what they can act on. The security operations center now faces unprecedented complexity: teams juggle dozens of tools and wade through floods of alerts requiring manual validation, and as a result they cannot act as quickly as they should. 

This widening disconnect is transforming how security leaders understand risk and forcing them to face a difficult truth: visibility without speed is no longer an effective defence. The threat patterns that defined 2024 make clear why the shift is necessary. Security firms report that attackers increasingly rely on stealthy, fileless techniques, with nearly four out of five detections now categorised as malware-free. 

Ransomware activity has continued its steep climb, rising by more than 80% year over year and striking small and midsized businesses disproportionately hard; they account for approximately 70% of all recorded incidents. Phishing campaigns have also become far more aggressive, with some vectors spiking by more than 1,200% as adversaries use artificial intelligence to bypass human judgment. 

Despite these pressures, many SMBs remain structurally unprepared: most acknowledge that they have become preferred targets, yet three out of four continue to rely on informal or internally managed security measures. Those risks are compounded by human error, which is responsible for an estimated 88% of reported cyber incidents. 

The financial consequences have been staggering as well; in the past five years alone, the UK has suffered losses of more than £44 billion through both immediate disruption and long-term revenue impact. Against this backdrop, the industry's definition of continuous cybersecurity has grown much broader than periodic audits. 

It now spans continuous threat monitoring, proactive vulnerability and exposure management, disciplined identity governance, sustained employee awareness programs, regularly tested incident response playbooks, and ongoing compliance monitoring: a posture built on continuous evaluation rather than reactive control. As increasingly complex digital estates create unpredictable cyber risks, continuous monitoring has become an essential part of modern defence strategies. 

Continuous monitoring scans systems, networks, and cloud environments in real time to detect early signs of misconfiguration, compromise, or operational drift. In contrast to periodic checks, which run on a fixed schedule and leave long windows of exposure, it keeps watch constantly. 

The approach aligns closely with NIST guidance, which urges organizations to establish an adaptive monitoring strategy capable of ingesting a variety of data streams, analysing emerging vulnerabilities, and generating timely alerts for security teams to act on. Continuous monitoring also helps organizations uncover latent weaknesses in their overall cyber posture. 

Done well, it reduces the frequency and severity of incidents, eases the burden on security personnel, and helps organizations meet growing regulatory demands. Even so, maintaining that level of vigilance remains a challenge, especially for small businesses that lack the resources, expertise, and tooling to operate around the clock. 

Many organizations therefore turn to external service providers to make continuous monitoring scalable and economically viable. Effective programs typically include four key components: a monitoring engine, analytics that identify anomalies and trends at scale, a dashboard that shows key risk indicators in real time, and an alerting system that routes emerging issues to the right staff quickly. 
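
As a toy illustration of how those four components fit together, the Python sketch below wires a placeholder collector, a simple threshold rule, a rolling "dashboard" printout, and an alert hook into one loop (the event source, thresholds, and field names are hypothetical, not any vendor's engine):

```python
# Toy continuous-monitoring loop illustrating the four components described
# above: a collector (poll_events), analytics (a threshold rule), a "dashboard"
# (rolling counters printed to stdout), and alerting (notify).
# Event source, thresholds, and sleep interval are hypothetical placeholders.
import time
from collections import Counter, deque

FAILED_LOGIN_THRESHOLD = 5   # alert when one account fails this often
WINDOW_SECONDS = 60

def poll_events():
    """Placeholder collector -- replace with a SIEM/API/log-tail integration."""
    return []  # e.g. [{"type": "failed_login", "user": "alice", "ts": time.time()}]

def notify(message: str) -> None:
    """Placeholder alerting hook -- replace with email/chat/pager integration."""
    print(f"ALERT: {message}")

def monitor_forever(poll_interval: float = 5.0) -> None:
    recent = deque()  # (timestamp, user) of recent failed logins
    while True:
        now = time.time()
        for event in poll_events():
            if event.get("type") == "failed_login":
                recent.append((event.get("ts", now), event.get("user", "unknown")))
        # Drop events outside the rolling window.
        while recent and now - recent[0][0] > WINDOW_SECONDS:
            recent.popleft()
        counts = Counter(user for _, user in recent)
        print(f"[dashboard] failed logins in last {WINDOW_SECONDS}s: {dict(counts)}")
        for user, count in counts.items():
            if count >= FAILED_LOGIN_THRESHOLD:
                notify(f"{count} failed logins for '{user}' within {WINDOW_SECONDS}s")
        time.sleep(poll_interval)

if __name__ == "__main__":
    monitor_forever()
```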

Automation now lets security teams process large volumes of telemetry quickly and accurately, replacing outdated or incomplete snapshots with live visibility into organisational risk and enabling an effective response in a highly dynamic threat environment. 

Continuous monitoring takes a variety of forms depending on the asset in focus, including endpoint monitoring, network traffic analysis, application performance tracking, and cloud and container observability, each providing an important layer of protection as attacks spread across every part of the digital infrastructure. 

The dissolution of traditional network perimeters is another key driver of the push toward continuous response. In a world of cloud-based workloads, SaaS ecosystems, and remote endpoints, security architectures must work as flexible, modular systems capable of correlating telemetry across email, DNS, identity, network, and endpoint layers without creating new silos. 

Organizations moving in this direction usually emphasize three operational priorities: deep integration to maintain unified visibility; automation to handle routine containment at machine speed; and validation practices, such as breach simulations and posture tests, to ensure that defences behave as expected. Managed security services increasingly build on these same principles, which is a large part of why adoption keeps growing.

909Protect, for instance, pairs automated detection with continuous human oversight to provide rapid, coordinated containment across hybrid environments. Platforms of this kind correlate signals from multiple security vectors and layer behavioural analysis, posture assessment, and identity safeguards on top of existing tools, ensuring no critical alert goes unnoticed while preserving established investments. 

Underlying this shift is an industry-wide realignment toward systems built for continuous availability rather than episodic intervention. Cybersecurity has cycled through countless "next generation" labels, but veteran analysts note that only approaches which fundamentally change how operations behave tend to endure. Continuous incident response fits that trajectory because it addresses the underlying failure point. 

Organizations are rarely breached because they lack data; they are breached because they do not act on it quickly or cohesively enough. As analysts argue, the path forward depends on combining automation, analytics, and human expertise into a single adaptive workflow that spans the entire organization. 

The organizations best placed to withstand emerging threats will be those that treat security as a living, constantly changing system, one judged not only by what it can see but by its ability to detect, contain, and recover from threats in real time as they arise. 

Ultimately, the shift toward continuous incident response signals that cybersecurity resilience is no longer just about speed; it is about endurance. Investing in unified visibility, disciplined automation, and persistent validation shortens the path from detection to containment and keeps operations stable over the longer term.

The advantage will go to those who treat security as an evolving ecosystem, one that is continually refined, coordinated across teams, and committed to responding with the same continuity that adversaries bring to their attacks.

Knownsec Data Leak Exposes Deep Cyber Links and Global Targeting Operations

 

A recent leak involving Chinese cybersecurity company Knownsec has uncovered more than 12,000 internal documents, offering an unusually detailed picture of how deeply a private firm can be intertwined with state-linked cyber activities. The incident has raised widespread concern among researchers, as the exposed files reportedly include information on internal artificial intelligence tools, sophisticated cyber capabilities, and extensive international targeting efforts. Although the materials were quickly removed after surfacing briefly on GitHub, they have already circulated across the global security community, enabling analysts to examine the scale and structure of the operations. 

The leaked data appears to illustrate connections between Knownsec and several government-aligned entities, giving researchers insight into China’s broader cyber ecosystem. According to those reviewing the documents, the files map out international targets across more than twenty countries and regions, including India, Japan, Vietnam, Indonesia, Nigeria, and the United Kingdom. Of particular concern are spreadsheets that allegedly outline attacks on around 80 foreign organizations, including critical infrastructure providers and major telecommunications companies. These insights suggest activity far more coordinated than previously understood, highlighting the growing sophistication of state-associated cyber programs. 

Among the most significant revelations is the volume of foreign data reportedly linked to prior breaches. Files attributed to the leaks include approximately 95GB of immigration information from India, 3TB of call logs taken from South Korea’s LG U Plus, and nearly 459GB of transportation records from Taiwan. Researchers also identified multiple Remote Access Trojans capable of infiltrating Windows, Linux, macOS, iOS, and Android systems. Android-based malware found in the leaked content reportedly has functionality allowing data extraction from widely used Chinese messaging applications and Telegram, further emphasizing the operational depth of the tools. 

The documents also reference hardware-based hacking devices, including a malicious power bank engineered to clandestinely upload data into a victim’s system once connected. Such devices demonstrate that offensive cyber operations may extend beyond software to include physical infiltration tools designed for discreet, targeted attacks. Security analysts reviewing the information suggest that these capabilities indicate a more expansive and organized program than earlier assessments had captured. 

Beijing has denied awareness of any breach involving Knownsec. A Foreign Ministry spokesperson reiterated that China opposes malicious cyber activities and enforces relevant laws, though the official statement did not directly address the alleged connections between the state and companies involved in intelligence-oriented work. While the government’s response distances itself from the incident, analysts note that the leaked documents will likely renew debates about the role of private firms in national cyber strategies. 

Experts warn that traditional cybersecurity measures—including antivirus software and firewall defenses—are insufficient against the type of advanced tools referenced in the leak. Instead, organizations are encouraged to adopt more comprehensive protection strategies, such as real-time monitoring systems, strict network segmentation, and the responsible integration of AI-driven threat detection. 

The Knownsec incident underscores that as adversaries continue to refine their methods, defensive systems must evolve accordingly to prevent large-scale breaches and safeguard sensitive data.

Self-Replicating npm Packages Expose a Token-Farming Supply Chain Campaign

 

Security experts are becoming increasingly concerned about a developing anomaly in the JavaScript ecosystem after researchers discovered a massive cluster of self-replicating npm packages that seem to have no technical function but instead indicate a well-thought-out and financially motivated scheme. Over 43,000 of these packages—roughly 1% of the whole npm repository—were covertly uploaded over a two-year period using at least 11 synchronized accounts, according to recent research by Endor Labs. 

The libraries automatically reproduce themselves when downloaded and executed, filling the ecosystem with nearly identical code, even though they do not behave like traditional malware—showing no indicators of data theft, backdoor deployment, or system compromise. Investigators caution that even while these packages are harmless at the moment, their size and consistent behavior could serve as a channel for harmful updates in the future. 

With many packages containing tea.yaml files connected to TEA cryptocurrency accounts, early indications also point to a potential monetization plan, indicating the operation may be built to farm tokens at scale. The scope and complexity of the program were exposed by more research in the weeks that followed. 

In late October, clusters of unusual npm uploads were first observed by Amazon's security experts using improved detection algorithms and AI-assisted monitoring. By November 7, hundreds of suspicious packages had been found, and by November 12, over 150,000 malicious entries had been linked to a network of coordinated developer accounts. 

What had started out as a few dubious packages swiftly grew into a huge discovery. They were all connected to the tea.xyz token-farming initiative, a decentralized protocol that uses TEA tokens for staking, incentives, and governance to reward open-source contributions. Instead of using ransomware or credential stealers, the attackers flooded the registry with self-replicating packages that were made to automatically create and publish new versions.

As unwary developers downloaded or interacted with the contaminated libraries, the perpetrators silently accumulated token rewards. Embedded tea.yaml files connected each package to blockchain wallets under the attackers' control, letting them siphon rewards intended for legitimate community contributions without drawing attention to themselves. According to security experts, the event highlights a broader structural flaw in contemporary software development, where the speed and transparency of open-source ecosystems can be readily exploited at scale. 

Manoj Nair, chief innovation officer at Snyk, said Amazon's findings show how AI-driven automation now lets attackers publish large volumes of junk or malicious packages in a short amount of time. He emphasized that manual review is no longer sufficient: developers should use behavior-based scanning and automated dependency-health controls to identify low-download libraries, template-reused content, and abrupt spikes in mass publishing before such components enter their build pipelines. 

In order to stop similar operations before they start, he continued, registry operators must also change by proactively spotting bulk uploads, duplicate code templates, and oddities in metadata. Suzu CEO Michael Bell shared these worries, claiming that the discovery of 150,000 self-replicating, token-farming npm packages shows why attackers frequently have significantly more leverage when they compromise the development supply chain than when they directly target production systems. 

Bell cautioned that companies need to treat build pipelines and dependency chains with the same rigor as production infrastructure because shift-left security is becoming the standard. This includes implementing automated scans, keeping accurate software bills of materials, enforcing lockfiles to pin trusted versions, and verifying package authenticity before installation. He pointed out that once malicious code enters production, defenders are already reacting to a breach rather than stopping an assault. 

The researchers discovered that by incorporating executable scripts and circular dependency chains into package.json files, the campaign took advantage of npm's installation procedures. In actuality, installing one malicious package set off a planned cascade that increased replication and tea.xyz teaRank scores by automatically installing several more.

The operation created significant risks by flooding the registry with unnecessary entries, taxing storage and bandwidth resources, and increasing the possibility of dependency confusion, even if the packages did not include ransomware or credential-stealing payloads. Many of the packages shared cloned code, had tea.yaml files connecting them to attacker-controlled blockchain wallets, and used standard naming conventions. Amazon recommended that companies assess their current npm dependencies, eliminate subpar or non-functional components, and bolster their supply-chain defenses with separated CI/CD environments and SBOM enforcement. 
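
As a starting point for that kind of dependency review, the Python sketch below (illustrative only, not Amazon's tooling) walks a project's node_modules tree and flags packages that declare install-time scripts or ship a tea.yaml file, two traits highlighted in this campaign. A flag is only a prompt for human review, not proof of malice.

```python
# Illustrative dependency-audit sketch: walk a project's node_modules tree and
# flag packages that declare install-time scripts or ship a tea.yaml file.
# Scoped packages and lockfile analysis are left out for brevity.
import json
from pathlib import Path

INSTALL_HOOKS = ("preinstall", "install", "postinstall")

def audit_node_modules(root: Path):
    for manifest in root.glob("**/node_modules/*/package.json"):
        pkg_dir = manifest.parent
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue
        hooks = [h for h in INSTALL_HOOKS if h in data.get("scripts", {})]
        has_tea = (pkg_dir / "tea.yaml").exists()
        if hooks or has_tea:
            yield data.get("name", pkg_dir.name), hooks, has_tea

if __name__ == "__main__":
    for name, hooks, has_tea in audit_node_modules(Path(".")):
        notes = []
        if hooks:
            notes.append(f"install scripts: {', '.join(hooks)}")
        if has_tea:
            notes.append("contains tea.yaml")
        print(f"{name}: {'; '.join(notes)}")
```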

The event adds to a growing list of software supply-chain risks that have prompted government organizations such as CISA to release new guidelines aimed at strengthening resilience throughout development pipelines. As the inquiry winds down, the campaign serves as a sobering reminder that supply-chain integrity can no longer be ignored: the scale of this operation shows how readily automation can corrupt open-source ecosystems and exploit community trust for commercial gain if left unchecked. 

Experts are calling for stronger verification procedures throughout development pipelines, ongoing dependency auditing, and stricter registry administration. Beyond reducing such risks, investing in clear reporting, resilient tooling, and cross-industry cooperation will support the long-term viability of the software ecosystems that modern businesses rely on.

Google Password Warning Explained: Why Gmail Users Should Switch to Passkeys Now

 

Despite viral claims that Google is instructing every Gmail user to urgently change their password because of a direct breach, the reality is more nuanced. Google is indeed advising users to reset their credentials, but not due to a compromise of Gmail accounts themselves. Instead, the company is urging people to adopt stronger authentication—including passkeys—because a separate incident involving Salesforce increased the likelihood of sophisticated phishing attempts targeting Gmail users.  

The issue stems from a breach at Salesforce, where attackers linked to the ShinyHunters group (also identified as UNC6040) infiltrated systems and accessed business-related Gmail information such as contact directories, organizational details, and email metadata. Crucially, no Gmail passwords were stolen. However, the nature of the compromised data gives hackers enough context to craft highly convincing phishing and impersonation attempts. 

Google confirmed that this breach has triggered a surge in targeted phishing and vishing campaigns. Attackers are already posing as Google, IT teams, or trusted service vendors to deceive users into sharing login details. Some threat actors are even placing spoofed phone calls from 650–area-code numbers, making the fraud appear to originate from Google headquarters. According to Google’s internal data, phishing and vishing together now account for roughly 37% of all successful account takeovers, highlighting how effective social engineering continues to be for cybercriminals. 

With access to workplace information, attackers can send messages referencing real colleagues, departments, and recent interactions. This level of personal detail makes fraudulent communication significantly harder to recognize. Once users disclose credentials, attackers can easily break into accounts, bypass additional safeguards, and potentially remain undetected until major damage has been done. 

Google’s central message is simple—never share your Gmail password with anyone. Even callers who sound legitimate or claim to represent support teams should not be trusted. Cybersecurity experts emphasize that compromising an email account can grant attackers control over nearly all linked services, since most account recovery systems rely on email-based reset links. 

To reduce risk, Google continues to advocate for passkeys, which replace traditional passwords with device-based biometric authentication. Unlike passwords, passkeys cannot be phished, reused, or guessed, making them substantially more secure. Google also encourages users to enable app-based two-factor authentication instead of SMS codes, which can be intercepted or spoofed. 

Google’s guidance for users focuses on regularly updating passwords, enabling 2FA or passkeys, staying alert to suspicious messages or calls, using the Security Checkup tool, and taking immediate action if unusual account activity appears. This incident demonstrates how vulnerabilities in external partners—in this case, Salesforce—can still put millions of Gmail users at risk, even when Google’s own infrastructure remains protected. With more than 2.5 billion Gmail accounts worldwide, the platform remains a prime target, and ongoing awareness remains the strongest defense.

Balancer Hit by Smart Contract Exploit, $116M Vulnerability Revealed


 

Balancer, one of the most prominent and high-profile protocols in the decentralized finance ecosystem, has been hit by a sweeping cross-chain exploit that has rapidly emerged as one of the most significant cryptocurrency breaches of the past year. 

Early blockchain forensic analysis suggests losses of between $100 million and $128 million, with initial assessments circulated by independent researchers, including data shared by @RoundtableSpace on the X platform, putting compromised assets across multiple networks at roughly $116 million. The incident disrupted liquidity pools on the Ethereum mainnet as well as several prominent layer-2 networks. 

Balancer's team recognized the attack almost immediately and opened an investigation, working closely with leading blockchain security firms to contain the damage and determine its scope. The breach has sent ripples through the DeFi community, raising fresh concerns about the protocol's resilience as attackers continue to exploit complex multi-chain infrastructures. 

Investigators have since traced the breach to a flaw in Balancer's smart contracts, where a faulty initialization step allowed an unauthorized party to manipulate the vault. Based on early assessments, blockchain analysts believe the attacker used a malicious contract to bypass safeguards intended to prevent improper swaps and pool imbalances. 

The exploit unfolded with striking speed: taking advantage of Balancer's deeply composable architecture, in which multiple pools and contracts are often intertwined, the attacker orchestrated a series of tightly linked transactions beginning with a critical call on the Ethereum mainnet. By abusing incorrect authorization checks and callback handling, the intruder redirected liquidity and drained assets in a matter of minutes. 

Full forensic reports from firms such as PeckShield and Nansen are still some way off, but preliminary data suggests that between $110 million and $116 million in ETH and other tokens has been siphoned into newly created wallets, with the funds moving through mixers and cross-chain routes that obscure their origin. In dissecting Balancer V2's architecture, investigators identified a fundamental flaw in how the vault and liquidity pools interact, confirming that the breach originated within the protocol itself. 

The composability of Balancer's V2 design helped make it one of the most widely used automated market makers, an attribute that in this case amplified the impact of the vulnerability. The investigation found that the attacker deployed a malicious contract that interfered with the platform's pool initialization sequence, manipulating the internal calls that govern balance changes and swap permissions. 

Specifically, a flawed validation check in the manageUserBalance function allowed the intruder to sidestep critical authorization steps. Because of this loophole, the attacker could submit unauthorized parameters and siphon funds directly from the vault without triggering the security measures Balancer believed were in place. 

The operation was highly complex, unfolding first on Ethereum's mainnet through a series of precisely executed transactions before spreading to other networks integrated with the V2 vault. Preliminary assessments put total losses at between $110 million and $116 million, although some estimates reach $128 million. 

This makes it one of the most consequential DeFi incidents of 2025. Several liquid-staking derivatives and wrapped tokens were stolen, including WETH, wstETH, osETH, frxETH, rsETH, and rETH. Roughly $70 million was drained from Ethereum alone, while the Base and Sonic networks accounted for approximately $7 million in losses, with additional losses on smaller chains. 

On-chain records show that the attacker quickly routed the proceeds through bridges into newly created wallets and then into a privacy mixer. Investigators stressed, however, that no private keys were compromised; the incident affected Balancer's smart contract logic directly and did not involve any breach of user credentials. 

In the wake of the breach, security experts have urged users with exposure to Balancer V2 pools to take immediate precautions. Analysts recommend that pool owners withdraw funds from any affected pools without delay and revoke smart-contract approvals tied to Balancer addresses through platforms such as Revoke, DeBank, or Etherscan. 

Users are also advised to monitor their wallets closely with on-chain tools like Dune Analytics and Etherscan for any irregular activity, and to follow ongoing updates from auditing and security firms, including PeckShield and Nansen, as the investigation proceeds. The incident has already had a noticeable effect on the broader DeFi market: Balancer's BAL token dropped by 5% to 10%, and the platform's total value locked fell sharply as liquidity providers withdrew in response to mounting uncertainty. 
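
For readers who want to script part of that approval audit rather than rely solely on a web interface, here is a minimal, read-only sketch using the web3.py library. The RPC endpoint and the token, wallet, and spender addresses are placeholders; it checks an ERC-20 allowance so the user can decide whether to revoke it.

```python
# Read-only allowance check with web3.py (v6): query how much of a token a
# given spender is still approved to move from a wallet. All addresses and the
# RPC URL below are placeholders -- substitute real values before running.
from web3 import Web3

ERC20_ALLOWANCE_ABI = [{
    "inputs": [
        {"name": "owner", "type": "address"},
        {"name": "spender", "type": "address"},
    ],
    "name": "allowance",
    "outputs": [{"name": "", "type": "uint256"}],
    "stateMutability": "view",
    "type": "function",
}]

def check_allowance(rpc_url: str, token: str, owner: str, spender: str) -> int:
    """Return the raw (unscaled) allowance the owner has granted to the spender."""
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    contract = w3.eth.contract(address=Web3.to_checksum_address(token),
                               abi=ERC20_ALLOWANCE_ABI)
    return contract.functions.allowance(
        Web3.to_checksum_address(owner),
        Web3.to_checksum_address(spender),
    ).call()

if __name__ == "__main__":
    amount = check_allowance(
        "https://example-rpc.invalid",  # placeholder RPC endpoint
        "0x" + "11" * 20,               # placeholder token contract
        "0x" + "22" * 20,               # placeholder wallet (owner)
        "0x" + "33" * 20,               # placeholder spender to audit
    )
    print("outstanding allowance (raw units):", amount)
    if amount > 0:
        print("Consider revoking this approval via your wallet or a revocation tool.")
```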

As industry observers note, the episode underscores the inherent challenges of building secure, composable financial primitives, but they also point out that such setbacks often lead to crucial improvements. The Balancer team appears hopeful that it can recover, strengthen its infrastructure, and reinforce the importance of vigilance and continuous refinement in an environment that changes as quickly as the threats that surround it. 

Several experts have commented on the Balancer incident, emphasising that it should serve as a catalyst for enhancing security practices across the DeFi landscape as the investigation continues. Specifically, they say protocols must reevaluate assumptions regarding composability, perform more rigorous pre-deployment testing, and implement continuous audit cycles in order to minimize the likelihood of similar cascading failures occurring in the future. 

It is clear from this episode that users should be careful with the allocation of liquidity, monitor on-chain activity regularly, and exercise vigilant approval management. Although the breach has shaken confidence in the sector, it also represents an opportunity for the sector to grow, innovate responsibly, and strengthen the resilience of decentralized finance despite the disruption.

China Announces Major Cybersecurity Law Revision to Address AI Risks

 



China has approved major changes to its Cybersecurity Law, marking its first substantial update since the framework was introduced in 2017. The revised legislation, passed by the Standing Committee of the National People’s Congress in late October 2025, is scheduled to come into effect on January 1, 2026. The new version aims to respond to emerging technological risks, refine enforcement powers, and bring greater clarity to how cybersecurity incidents must be handled within the country.

A central addition to the law is a new provision focused on artificial intelligence. This is the first time China’s cybersecurity legislation directly acknowledges AI as an area requiring state guidance. The updated text calls for protective measures around AI development, emphasising the need for ethical guidelines, safety checks, and governance mechanisms for advanced systems. At the same time, the law encourages the use of AI and similar technologies to enhance cybersecurity management. Although the amendment outlines strategic expectations, the specific rules that organisations will need to follow are anticipated to be addressed through later regulations and detailed technical standards.

The revised law also introduces stronger enforcement capabilities. Penalties for serious violations have been raised, giving regulators wider authority to impose heavier fines on both companies and individuals who fail to meet their obligations. The scope of punishable conduct has been expanded, signalling an effort to tighten accountability across China’s digital environment. In addition, the law’s extraterritorial reach has been broadened. Previously, cross-border activities were only included when they targeted critical information infrastructure inside China. The new framework allows authorities to take action against foreign activities that pose any form of network security threat, even if the incident does not involve critical infrastructure. In cases deemed particularly severe, regulators may impose sanctions that include financial restrictions or other punitive actions.

Alongside these amendments, the Cyberspace Administration of China has issued a comprehensive nationwide reporting rule called the Administrative Measures for National Cybersecurity Incident Reporting. This separate regulation will become effective on November 1, 2025. The Measures bring together different reporting requirements that were previously scattered across multiple guidelines, creating a single, consistent system for organisations responsible for operating networks or providing services through Chinese networks. The Measures appear to focus solely on incidents that occur within China, including those that affect infrastructure inside the country.

The reporting rules introduce a clear structure for categorising incidents. Events are divided into four levels based on their impact. Under the new criteria, an incident qualifies as “relatively major” if it involves a data breach affecting more than one million individuals or if it results in economic losses of over RMB 5 million. When such incidents occur, organisations must file an initial report within four hours of discovery. A more complete submission is required within seventy-two hours, followed by a final review report within thirty days after the incident is resolved.
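
For teams tracking these windows operationally, here is a toy Python sketch that turns a discovery time into the three reporting deadlines described above. The timestamps are placeholders, and this is not legal guidance: the Measures themselves define the categories and exact obligations.

```python
# Illustrative deadline calculator for the reporting windows described above:
# initial report within 4 hours of discovery, full report within 72 hours,
# final report within 30 days of resolution. Timestamps are placeholders.
from datetime import datetime, timedelta

def reporting_deadlines(discovered_at: datetime, resolved_at: datetime | None = None) -> dict:
    deadlines = {
        "initial_report_due": discovered_at + timedelta(hours=4),
        "full_report_due": discovered_at + timedelta(hours=72),
    }
    if resolved_at is not None:
        deadlines["final_report_due"] = resolved_at + timedelta(days=30)
    return deadlines

if __name__ == "__main__":
    found = datetime(2026, 1, 15, 9, 30)   # placeholder discovery time
    fixed = datetime(2026, 1, 20, 18, 0)   # placeholder resolution time
    for name, due in reporting_deadlines(found, fixed).items():
        print(f"{name}: {due:%Y-%m-%d %H:%M}")
```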

To streamline compliance, the regulator has provided several reporting channels, including a hotline, an online portal, email, and the agency’s official WeChat account. Organisations that delay reporting, withhold information, or submit false details may face penalties. However, the Measures state that timely and transparent reporting can reduce or remove liability under the revised law.



AI Models Trained on Incomplete Data Can't Protect Against Threats


In cybersecurity, AI is often described as the future of threat detection. But AI systems are only as good as their data pipelines, and that principle is not confined to academic machine learning; it applies just as much to security operations.

AI-powered threat hunting succeeds only when the underlying data infrastructure is strong.

Threat hunting, whether powered by AI, automation, or human investigation, will only ever be as effective as the data infrastructure it stands on. Security teams sometimes build AI on top of leaked data or without proper data hygiene, and the problems surface later, affecting both the AI and the humans who rely on it. Even sophisticated algorithms cannot compensate for inconsistent or incomplete data: AI trained on poor data produces poor results. 

The importance of unified data 

Correlated data is what makes the operation work: it reduces noise and surfaces patterns that manual review cannot.

Correlating and pre-transforming the data makes it easier for LLMs and other AI tools to reason over, and it allows connected components to surface naturally. 

The same person may show up under entirely distinct names: as an IAM principal in AWS, a committer in GitHub, and a document owner in Google Workspace. Looking at any one of those signals gives you only a small portion of the truth. 

Considered together, they provide behavioral clarity. Downloading dozens of items from Google Workspace may look odd on its own, but it becomes clearly malicious if the same user also clones dozens of repositories to a personal laptop and opens up a public S3 bucket minutes later.
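
A minimal Python sketch of that correlation step (the identity map, event records, and thresholds are all hypothetical placeholders) joins events from the three systems onto one person and flags the combined pattern described above:

```python
# Minimal identity-correlation sketch: map provider-specific account names onto
# one person, then flag the combined pattern described above (bulk Workspace
# downloads + mass repo clones + an S3 bucket made public) within a short window.
from collections import defaultdict

IDENTITY_MAP = {  # (source, provider-specific name) -> canonical person
    ("aws_iam", "arn:aws:iam::123456789012:user/a.smith"): "alice",
    ("github", "asmith-dev"): "alice",
    ("google_workspace", "alice.smith@example.com"): "alice",
}

def correlate(events):
    """Group events by canonical identity using the mapping above."""
    by_person = defaultdict(list)
    for event in events:
        person = IDENTITY_MAP.get((event["source"], event["actor"]))
        if person:
            by_person[person].append(event)
    return by_person

def flags_exfil_pattern(person_events, window_seconds=900):
    """True if the person's combined activity matches the three-step pattern."""
    kinds = {e["kind"] for e in person_events}
    if {"workspace_bulk_download", "repo_mass_clone", "s3_bucket_made_public"} <= kinds:
        timestamps = sorted(e["ts"] for e in person_events)
        return timestamps[-1] - timestamps[0] <= window_seconds
    return False

if __name__ == "__main__":
    sample = [
        {"source": "google_workspace", "actor": "alice.smith@example.com",
         "kind": "workspace_bulk_download", "ts": 1000},
        {"source": "github", "actor": "asmith-dev",
         "kind": "repo_mass_clone", "ts": 1300},
        {"source": "aws_iam", "actor": "arn:aws:iam::123456789012:user/a.smith",
         "kind": "s3_bucket_made_public", "ts": 1500},
    ]
    for person, events in correlate(sample).items():
        if flags_exfil_pattern(events):
            print(f"Investigate {person}: correlated multi-system exfiltration pattern")
```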

Finding threats via correlation 

Correlations that previously took hours, or were simply impossible, become instant when data from logs, configurations, code repositories, and identity systems are all housed in one place. 

For instance, lateral movement using stolen short-lived credentials frequently passes through multiple systems before it is discovered. A compromised developer laptop might assume several IAM roles, launch new instances, and access internal databases. Endpoint logs show the local compromise, but the full extent of the intrusion cannot be demonstrated without IAM and network data.


NordVPN Survey Finds Most Americans Misunderstand Antivirus Protection Capabilities

 

A new survey by NordVPN, one of the world’s leading cybersecurity firms, has revealed a surprising lack of understanding among Americans about what antivirus software actually does. The study, which polled over 1,000 U.S. residents aged 18 to 74, found that while 52% use antivirus software daily, many hold serious misconceptions about its capabilities — misconceptions that could be putting their online safety at risk. 

According to the findings, more than a quarter of respondents incorrectly believe that antivirus software offers complete protection against all online threats. Others assume it can prevent identity theft, block phishing scams, or secure public Wi-Fi connections — functions that go far beyond what antivirus tools are designed to do. NordVPN’s Chief Technology Officer, Marijus Briedis, said the confusion highlights a troubling lack of cybersecurity awareness. “People tend to confuse different technologies and overestimate their capabilities,” he explained. “Some Americans don’t realize antivirus software’s main job is to detect and remove malware, not prevent identity theft or data breaches. This gap in understanding shows how much more cybersecurity education is needed.” 

The survey also found that many Americans mix up antivirus software with other digital security tools, such as firewalls, password managers, ad blockers, and VPNs. This misunderstanding can create a false sense of security, leaving users vulnerable to attacks. Even more concerning, over one-third of those surveyed reported not using any cybersecurity software at all, despite nearly half admitting their personal information had been exposed in a data breach. 

NordVPN’s research indicates that many users believe following good online habits alone is sufficient protection. While best practices like avoiding suspicious links, using strong passwords, and steering clear of phishing attempts are important, experts warn they are not enough in today’s sophisticated cyber landscape. Modern malware can infect devices without any direct user action, making layered protection essential. 

Participants in the survey expressed particular concern about the exposure of sensitive personal data, such as social security numbers and credit card details. However, the most commonly leaked information remains email addresses, phone numbers, and physical addresses — details often dismissed as harmless but frequently exploited by cybercriminals. Such data enables more personalized and convincing phishing or “smishing” attacks, which can lead to identity theft and financial fraud. 

Experts emphasize that while antivirus software remains a critical first line of defense, it cannot protect against every cyber threat. A combination of tools — including secure VPNs, multi-factor authentication, and strong, unique passwords — is necessary to ensure comprehensive protection. A VPN like NordVPN encrypts internet traffic, hides IP addresses, and shields users from tracking and surveillance, especially on unsecured public networks. Multi-factor authentication adds an additional verification layer to prevent unauthorized account access, while password managers help users create and store complex, unique passwords safely. 

The key takeaway from NordVPN’s research is clear: cybersecurity requires more than just one solution. Relying solely on antivirus software creates dangerous blind spots, especially when users misunderstand its limitations. As Briedis put it, “This behavior undoubtedly contributes to the concerning cybersecurity situation in the U.S. Education, awareness, and layered protection are the best ways to stay safe online.” 

With cyberattacks and data breaches on the rise, experts urge Americans to take a proactive approach — combining trusted software, informed digital habits, and vigilance about what personal information they share online.

European Governments Turn to Matrix for Secure Sovereign Messaging Amid US Big Tech Concerns

 

A growing number of European governments are turning to Matrix, an open-source messaging architecture, as they seek greater technological sovereignty and independence from US Big Tech companies. Matrix aims to create an open communication standard that allows users to message each other regardless of the platform they use—similar to how email works across different providers. The decentralized protocol supports secure messaging, voice, and video communications while ensuring data control remains within sovereign boundaries. 
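
As a small illustration of what that openness means in practice, the snippet below uses the community matrix-nio client library to send a message through a Matrix homeserver; the homeserver URL, user ID, room ID, and password are placeholders rather than values from any deployment mentioned here.

```python
# pip install matrix-nio
import asyncio
from nio import AsyncClient

async def main() -> None:
    # Any Matrix homeserver works here, because the protocol, not the vendor,
    # defines how clients and servers talk to each other.
    client = AsyncClient("https://matrix.example.org", "@alice:example.org")  # placeholders
    await client.login("placeholder-password")
    await client.room_send(
        room_id="!someRoomId:example.org",   # placeholder room
        message_type="m.room.message",
        content={"msgtype": "m.text", "body": "Hello from a federated client"},
    )
    await client.close()

asyncio.run(main())
```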

Matrix, co-founded by Matthew Hodgson in 2014 as a not-for-profit open-source initiative, has seen wide-scale adoption across Europe. The French government and the German armed forces now have hundreds of thousands of employees using Matrix-based platforms like Tchap and BwMessenger. Swiss Post has also built its own encrypted messaging system for public use, while similar deployments are underway across Sweden, the Netherlands, and the European Commission. NATO has even adopted Matrix to test secure communication alternatives under its NICE2 project. 

Hodgson, who also serves as CEO of Element, a company that provides Matrix-based encrypted services to governments and organizations including the French state and NATO, explained that interest in Matrix has intensified following global geopolitical developments. He said European governments now view open-source software as a strategic necessity, especially after the US imposed sanctions on the International Criminal Court (ICC) in early 2025. 

The sanctions, which impacted US tech firms supporting the ICC, prompted several European institutions to reconsider their reliance on American cloud and communication services. “We have seen first-hand that US Big Tech companies are not reliable partners,” Hodgson said. “For any country to be operationally dependent on another is a crazy risk.” He added that incidents such as the “Signalgate” scandal—where a US official accidentally shared classified information on a Signal chat—have further fueled the shift toward secure, government-controlled messaging infrastructure. 

Despite this, Europe’s stance on encryption remains complex. While advocating for sovereign encrypted messaging platforms, some governments are simultaneously supporting proposals like Chat Control, which would require platforms to scan messages before encryption. Hodgson criticized such efforts, warning they could weaken global communication security and force companies like Element to withdraw from regions that mandate surveillance. Matrix’s decentralized design offers resilience and security advantages by eliminating a single point of failure. 

Unlike centralized apps such as Signal or WhatsApp, Matrix operates as a distributed network, reducing the risk of large-scale breaches. Moreover, its interoperability means that various Matrix-based apps can communicate seamlessly—enabling, for example, secure exchanges between French and German government networks. Although early Matrix apps were considered less user-friendly, Hodgson said newer versions now rival mainstream encrypted platforms. Funding challenges have slowed development, as governments using Matrix often channel resources toward system integrators rather than the project itself. 

To address this, Matrix is now sustained by a membership model and potential grant funding. Hodgson’s long-term vision is to establish a fully peer-to-peer global communication network that operates without servers and cannot be compromised or monitored. Supported by the Dutch government, Matrix’s ongoing research into such peer-to-peer technology aims to simplify deployment further while enhancing security. 

As Europe continues to invest in secure digital infrastructure, Matrix’s open standard represents a significant step toward technological independence and privacy preservation. 

By embracing decentralized communication, European governments are asserting control over their data, reducing foreign dependence, and reshaping the future of secure messaging in an increasingly uncertain geopolitical landscape.

UK Digital ID Faces Security Crisis Ahead of Mandatory Rollout

 

The UK’s digital ID system, known as One Login, triggered major controversy in 2025 due to serious security vulnerabilities and privacy concerns, leading critics to liken it to the infamous Horizon scandal. 

One Login is a government-backed identity verification platform designed for access to public services and private sector uses such as employment verification and banking. Despite government assurances around its security and user benefits, public confidence plummeted amid allegations of cybersecurity failures and rushed implementation planned for November 18, 2025.

Critics, including MPs and cybersecurity experts, revealed that the system failed critical red-team penetration tests, with hackers gaining privileged access during simulated cyberattacks. Further concerns arose over development practices, with portions of the platform built by contractors in Romania on unsecured workstations without adequate security clearance. The government missed security deadlines, with full compliance expected only by March 2026—months after the mandatory rollout began.

This “rollout-at-all-costs” approach amidst unresolved security flaws has created a significant trust deficit, risking citizens’ personal data, which includes sensitive information like biometrics and identification documents. One Login collects comprehensive data, such as name, birth date, biometrics, and a selfie video for identity verification. This data is shared across government services and third parties, raising fears of surveillance, identity theft, and misuse.

The controversy draws a parallel to the Horizon IT scandal, where faulty software led to wrongful prosecutions of hundreds of subpostmasters. Opponents warn that flawed digital ID systems could cause similar large-scale harms, including wrongful exclusions and damaged reputations, undermining public trust in government IT projects.

Public opposition has grown, with petitions and polls showing more people opposing digital ID than supporting it. Civil liberties groups caution against intrusive government tracking and call for stronger safeguards, transparency, and privacy protections. The Prime Minister defends the program as a tool to simplify life and reduce identity fraud, but critics label it expensive, intrusive, and potentially dangerous.

In conclusion, the UK’s digital ID initiative stands at a critical crossroads, facing a crisis of confidence and comparisons to past government technology scandals. Robust security, oversight, and public trust are imperative to avoid a repeat of such failures and ensure the system serves citizens without compromising their privacy or rights.

TRAI Approves Caller Name Display Feature to Curb Spam and Fraud Calls

 

The Telecom Regulatory Authority of India (TRAI) has officially approved a long-awaited proposal from the Department of Telecommunications (DoT) to introduce a feature that will display the caller’s name by default on the receiver’s phone screen. Known as the Calling Name Presentation (CNAP) feature, this move is aimed at improving transparency in phone communications, curbing the growing menace of spam calls, and preventing fraudulent phone-based scams across the country. 

Until now, smartphone users in India have relied heavily on third-party applications such as Truecaller and Bharat Caller ID for identifying incoming calls. However, these apps often depend on user-generated databases and unverified information, which may not always be accurate. TRAI’s newly approved system will rely entirely on verified details gathered during the SIM registration process, ensuring that the name displayed is authentic and directly linked to the caller’s government-verified identity. 

According to the telecom regulator, the CNAP feature will be automatically activated for all subscribers across India, though users will retain the option to opt out by contacting their telecom service provider. TRAI explained that the feature will function as a supplementary service integrated with basic telecom offerings rather than as a standalone service. Every telecom operator will be required to maintain a Calling Name (CNAM) database, which will map subscribers’ verified names to their registered mobile numbers. 

When a call is placed, the receiving network will search this CNAM database through the Local Number Portability Database (LNPD) and retrieve the verified caller's name in real time. This name will then appear on the recipient's screen, allowing users to make informed decisions about whether to answer the call. The mechanism aims to replicate the caller ID functionality offered by third-party apps, but with government-mandated accuracy and accountability. 
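
Conceptually, that lookup can be pictured as the sketch below. This is a deliberately simplified, hypothetical model: real CNAP resolution happens inside carrier signaling infrastructure, and the database contents and function names here are invented for illustration only.

```python
# Hypothetical, simplified model of the CNAP lookup flow described above.

CNAM_DB = {  # operator-maintained map of registered numbers to verified names
    "+911234567890": "Asha Verma",
}

LNPD = {  # ported-number registry: which operator currently serves a number
    "+911234567890": "operator_a",
}

def resolve_caller_name(calling_number: str) -> str | None:
    """Return the verified name for a calling number, if the serving
    operator's CNAM database has an entry for it."""
    serving_operator = LNPD.get(calling_number)
    if serving_operator is None:
        return None
    # In reality the receiving network would query the serving operator's
    # CNAM database over carrier infrastructure; here it is a local lookup.
    return CNAM_DB.get(calling_number)

print(resolve_caller_name("+911234567890"))  # -> "Asha Verma"
```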

Before final approval, the DoT conducted pilot tests of the CNAP system across select cities using 4G and 5G networks. The trials revealed several implementation challenges, including software compatibility issues and the need for network system upgrades. As a result, the initial testing was primarily focused on packet-switched networks, which are more commonly used for mobile data transmission than circuit-switched voice networks.  

Industry analysts believe the introduction of CNAP could significantly enhance consumer trust and reshape how users interact with phone calls. By reducing reliance on unregulated third-party applications, the feature could also help improve data privacy and limit exposure to malicious data harvesting. Additionally, verified caller identification is expected to reduce incidents of spam calls, phishing attempts, and impersonation scams that have increasingly plagued Indian users in recent years.  

While TRAI has not announced an official rollout date, telecom operators have reportedly begun upgrading their systems and databases to accommodate the CNAP infrastructure. The rollout is expected to be gradual, starting with major telecom circles before expanding nationwide in the coming months. Once implemented, CNAP could become a major step forward in digital trust and consumer protection within India’s rapidly growing telecommunications ecosystem. 

By linking phone communication with verified identities, TRAI’s caller name display feature represents a significant shift toward a safer and more transparent mobile experience. It underscores the regulator’s ongoing efforts to safeguard users against fraudulent activities while promoting accountability within India’s telecom sector.

Nearly 50% of IoT Device Connections Pose Security Threats, Study Finds

A new security analysis has revealed that nearly half of all network communications between Internet of Things (IoT) devices and traditional IT systems come from devices that pose serious cybersecurity risks.

The report, published by cybersecurity company Palo Alto Networks, analyzed data from over 27 million connected devices across various organizations. The findings show that 48.2 percent of these IoT-to-IT connections came from devices classified as high risk, while an additional 4 percent were labeled critical risk.

These figures underline a growing concern that many organizations are struggling to secure the rapidly expanding number of IoT devices on their networks. Experts noted that a large portion of these devices operate with outdated software, weak default settings, or insecure communication protocols, making them easy targets for cybercriminals.


Why It’s a Growing Threat

IoT devices, ranging from smart security cameras and sensors to industrial control systems, are often connected to the same network as the computers and servers used for daily business operations. This creates a problem: once a vulnerable IoT device is compromised, attackers can move deeper into the network, access sensitive data, and disrupt normal operations.

The study emphasized that the main cause behind such widespread exposure is poor network segmentation. Many organizations still run flat networks, where IoT devices and IT systems share the same environment without proper separation. This allows a hacker who infiltrates one device to move easily between systems and cause greater harm.


How Organizations Can Reduce Risk

Security professionals recommend several key actions for both small businesses and large enterprises to strengthen their defenses:

1. Separate Networks:

Keep IoT devices isolated from core IT infrastructure through proper network segmentation. This prevents threats in one area from spreading to another; a short sketch after this list shows how flow data can reveal where that separation is missing.

2. Adopt Zero Trust Principles:

Follow a security model that does not automatically trust any device or user. Each access request should be verified, and only the minimum level of access should be allowed.

3. Improve Device Visibility:

Maintain an accurate inventory of all devices connected to the network, including personal or unmanaged ones. This helps identify and secure weak points before they can be exploited.

4. Keep Systems Updated:

Regularly patch and update device firmware and software. Unpatched systems often contain known vulnerabilities that attackers can easily exploit.

5. Use Strong Endpoint Protection:

Deploy Endpoint Detection and Response (EDR) or Extended Detection and Response (XDR) tools across managed IT systems, and use monitoring solutions for IoT devices that cannot run these tools directly.
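
To make items 1 and 3 concrete, here is a small sketch that scans flow records for IoT devices talking into IT subnets, the kind of gap segmentation is meant to close. The subnets, flow records, and field names are invented for illustration and do not come from the report.

```python
import ipaddress

# Invented example subnets: where IoT devices should live and where core IT systems live.
IOT_SUBNET = ipaddress.ip_network("10.20.0.0/16")
IT_SUBNET = ipaddress.ip_network("10.10.0.0/16")

# Invented flow records, e.g. exported from a firewall or NetFlow collector.
FLOWS = [
    {"src": "10.20.4.7", "dst": "10.20.4.1", "dst_port": 123},   # IoT to IoT: expected
    {"src": "10.20.4.7", "dst": "10.10.2.15", "dst_port": 445},  # IoT to IT file share: flag
    {"src": "10.10.3.9", "dst": "10.10.2.15", "dst_port": 443},  # IT to IT: expected
]

def iot_to_it_flows(flows):
    """Return flows where an IoT-subnet source talks to an IT-subnet destination."""
    crossings = []
    for flow in flows:
        src = ipaddress.ip_address(flow["src"])
        dst = ipaddress.ip_address(flow["dst"])
        if src in IOT_SUBNET and dst in IT_SUBNET:
            crossings.append(flow)
    return crossings

for flow in iot_to_it_flows(FLOWS):
    print(f"segmentation gap: {flow['src']} -> {flow['dst']}:{flow['dst_port']}")
```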


As organizations rely more on connected devices to improve efficiency, the attack surface grows wider. Without proper segmentation, monitoring, and consistent updates, one weak device can become an entry point for cyberattacks that threaten entire operations.

The report reinforces an important lesson: proactive network management is the foundation of cybersecurity. Ensuring visibility, limiting trust, and continuously updating systems can significantly reduce exposure to emerging IoT-based threats.