Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Cybersecurity Threats. Show all posts

Europe struggles with record-breaking spike in ransomware attacks

 


Europe is increasingly being targeted by ransomware groups, driving attacks to unprecedented levels as criminal operations become more industrialised and sophisticated. Threat actors have established themselves in this region as a prime hunting ground, and are now relying on a growing ecosystem of underground marketplaces that sell everything from Malware-as-a-Service subscriptions to stolen network access and turnkey phishing kits to Malware-as-a-Service subscriptions. 

New findings from CrowdStrike's 2025 European Threat Landscape Report reveal that nearly 22 per cent of all ransomware and extortion incidents that occurred globally this year have involved European organisations. Accordingly, European organizations are more likely than those in Asia-Pacific to be targeted by cybercriminals than those in North America, placing them second only to North America. 

According to these statistics, there is a troubling shift affecting Europe's public and private networks. An increasing threat model is being used by cybercriminals on the continent that makes it easier, cheaper, and quicker to attack their victims. This leaves thousands of victims of attacks increasingly sophisticated and financially motivated across the continent. 

Throughout CrowdStrike's latest analysis, a clear picture emerges of just how heavily Europeans have been affected by ransomware and extortion attacks, with the continent managing to absorb over 22% of all global extortion and ransomware attacks. As stated in the report, the UK, Germany, France, Italy, and Spain are the most frequently targeted nations. It also notes that dedicated leak sites linked to European victims have increased by nearly 13% on an annual basis, a trend driven by groups such as Scattered Spider, a group that has shortened its attack-to-deployment window to a mere 24 hours from when the attack started. 

According to the study, companies in the manufacturing, professional services, technology, industrial, engineering and retail industries are still the most heavily pursued sectors, as prominent gangs such as Akira, LockBit, RansomHub, INC, Lynx, and Sinobi continue to dominate the landscape, making big game hunting tactics, aimed at high-value enterprises, remain prevalent and have intensified throughout the continent as well. 

It has been suggested in the study that because of the wide and lucrative corporate base of Europe, the complex regulatory and legal structure, and the geopolitical motivations of some threat actors, the region is a target for well-funded e-crime operations that are well-resourced. State-aligned threat activity continues to add an element of volatility to the already troubled cyber landscape of Europe.

In the past two years, Russian operators have intensified their operations against Ukraine, combining credential phishing with intelligence gathering and disrupting attacks targeted at the power grid, the government, the military, the energy grid, the telecommunications grid, the utility grid, and so forth. The North Koreans have, at the same time, expanded their reach to Europe, attacking defence, diplomatic, and financial institutions in operations that fuse classic espionage with cryptocurrency theft to finance their strategic projects. 

Moreover, Chinese state-sponsored actors have been extorting valuable intellectual property from industries across eleven nations by exploiting cloud environments and software supply chains to siphon intellectual property from the nation that enables them to expand their footprint. 

A number of these operations have demonstrated a sustained commitment to biotechnology and healthcare, while Vixen Panda is now considered one of the most persistent threats to European government and defence organisations, emphasising the degree to which state-backed intrusion campaigns are increasing the region's risk of infection.

There has been a dramatic acceleration in the speed at which ransomware attacks are being carried out in Europe, with CrowdStrike noting that groups such as Scattered Spider have reduced their ransomware deployment cycles to unprecedented levels, which has driven up the levels of infection. Through the group's efforts, the time between an initial intrusion and full encryption has been reduced from 35.5 hours in 2024 to roughly 24 hours by mid-2025, meaning that defenders are likely to have fewer chances to detect or contain intrusions. 

Despite being actively under investigation by law enforcement agencies, eCrime actors based in Western countries, like the United States and the United Kingdom, are developing resilient criminal networks despite active scrutiny by law enforcement. The arrest of four individuals recently by the National Crime Agency in connection with attacks on major retailers, as well as the rearrest of the four individuals for involvement in a breach at Transport for London, underscores the persistence of these groups despite coordinated enforcement efforts. 

In addition to this rapid operational tempo, cybercrime has also been transformed into a commodity-driven industry as a result of a thriving underground economy. The Russian- and English-speaking forums, together with encrypted messaging platforms, offer threat actors the opportunity to exchange access to tools, access points, and operational support with the efficiency of commercial storefronts. 

A total of 260 initial access brokers were seen by investigators during the review period, advertising entry points into more than 1,400 European organizations during the review period. This effectively outsourced the initial stages of a breach to outside sources. Through subscription or affiliate models of malware-as-a-service, companies can offer ready-made loaders, stealers, and financial malware as a service, further lowering the barrier to entry. 

It has been noted that even after major disruptions by law enforcement, including the seizure of prominent forums, many operators have continued to trade without interruption, thanks to safe-haven jurisdictions and established networks of trustworthiness. Aside from eCrime, the report highlights an increasingly complex threat environment caused by state-sponsored actors such as Russia, China, North Korea and Iran. 

Russian actors are concentrating their efforts on Ukraine, committing credential-phishing attacks, obtaining intelligence, and undertaking destructive activities targeting the military, government, energy, telecommunications, and utility sectors, and simultaneously conducting extensive espionage across NATO member countries.

For the purpose of providing plausible deniability, groups tied to Moscow have conducted extensive phishing campaigns, set up hundreds of spoofed domains, and even recruited "throwaway agents" through Telegram to carry out sabotage operations. As Iranian groups continued to conduct hack-and-leak, phishing, and DDoS attacks, often masking state intent behind hacktivist personas, their hack-and-leak campaigns branched into the UK, Germany, and the Netherlands, and they stepped up their efforts. 

With these converging nation-state operations, European institutions have been put under increased strategic pressure, adding an element of geopolitical complexity to an already overloaded cyber-defence environment. It is clear from the findings that for Europe to navigate this escalating threat landscape, a more unified and forward-leaning security posture is urgently needed. According to experts, traditional perimeter defences and slow incident response models are no longer adequate to deal with actors operating at an industrial speed, due to the rapid pace of technology. 

Companies need to share regional intelligence, invest in continuous monitoring, and adopt AI-driven detection capabilities in order to narrow the attackers' widening advantage. Keeping up with the innovation and sophistication of criminal and state-backed adversaries is a difficult task for any organisation, but for organisations that fail to modernise their defences, they run the risk of being left defenceless in an increasingly unforgiving digital battlefield.

Security Researchers at Proton Warn of Massive Credential Exposure


 

Data is becoming the most coveted commodity in the ever-growing digital underworld, and it is being traded at an alarming rate. In a recent investigation conducted by Proton, it has been revealed that there are currently more than 300 million stolen credentials circulating across dark web marketplaces, demonstrating how widespread cybercrime is. 

According to Proton's Data Breach Observatory, which continuously monitors illicit online forums for evidence of data compromise, there is a growing global cybersecurity crisis that is being revealed. In the year 2025, the Observatory has recorded 794 confirmed breach incidents. When aggregating these data, the number increases to 1,571, which amounts to millions of records exposed to the public in the coming years. 

One of the troubling aspects of the research is the pattern of targeting small and medium-sized businesses: cybercriminals have increasingly targeted these companies. Over half of all breaches were recorded at companies with between 10 and 249 employees, while 23% of breaches occurred in micro businesses with fewer than 10 employees. 

This report highlights a growing truth about the digital age: while businesses are racing to innovate and expand online, threat actors are evolving just as quickly. As a result, the vast internet architecture has become a vibrant market for stolen identities, corporate secrets, and business secrets. 

Security breaches are still largely hidden from the public eye for many organisations due to fear of reputational damage, financial losses, or regulatory scrutiny, so they remain reluctant to reveal them. This leaves the true extent of cybercrime largely hidden from the public eye. Using Proton's latest initiative, the company hopes to break down the silence surrounding this threat by tracking it to its source: the underground marketplaces that openly sell stolen credentials and personal data.

In doing so, Proton is continuing its quest to foster a safer, more private internet, which is a vital component of the company's mission. As an extension of the Proton VPN Observatory, which monitors global instances of government-imposed internet restrictions and VPN censorship in the form of government-imposed restrictions, the Data Breach Observatory extends that vigilance to track instances of cybercrime in the form of data breaches. 

Its creation, which is made in collaboration with Constella Intelligence, is an observatory that constantly scans the dark web for new breaches, analysing the types of data compromised, including passwords and personal identifiers, as well as financial records, and the number of accounts affected. 

Through real-time monitoring, Proton can alert victims as soon as a breach occurs, sometimes even before the breached organisation realises it is happening. The Proton platform provides transparent, publicly accessible insights into these security breaches, which are aimed at both educating users about the magnitude of the threat and discouraging organisations from concealing their security shortcomings. 

There is a policy of responsible disclosure at the heart of this initiative, which ensures that affected entities are informed in advance of any public announcement relating to the incident. This is an era that has been defined by data theft and corporate secrecy since the dawn of the digital age. Proton's proactive approach serves as a countermeasure, turning dark web intelligence into actionable preventative measures. 

With this initiative, the company not only reveals the hidden mechanics of cybercrime but also strengthens its reputation as a pioneer in digital transparency and empowerment for users, allowing businesses and individuals alike a better understanding of the shadowy forces that shape today's cybersecurity landscape, as well as the risks associated with it. 

In its latest research, Proton has provided a sobering assessment of the escalating cost of cybercrime to smaller businesses. There have been an estimated four out of five small businesses in recent months that have been affected by data breaches, and these attacks have often resulted in losses exceeding one million dollars. 

As part of the growing crisis surrounding data breaches, a Data Breach Observatory was established to identify breaches that often remain hidden until a significant amount of damage has been sustained. Proton constantly scans dark web marketplaces where stolen credentials are traded to deliver early warnings about potential breaches so that organisations can take steps to protect their data before attackers have an opportunity to exploit it further. 

Through the course of these investigations, a wide range of personal and financial details were uncovered, including names, dates of birth, email addresses, passwords, and physical contact information of those individuals. 

Almost all of these breaches have involved social security numbers, bank credentials, and IBAN details being exposed, which together represent an alarming combination that creates an extremely high likelihood of identity theft and financial fraud. 

It has been recorded by the observatory that several high-profile incidents will occur in 2025, such as the Qantas Airways breach in October that exposed more than 11.8 million customer records; Alleianz Life Germany in September, with more than one million compromised accounts; and the U.S. tech firm Tracelo that was breached by 1.4 million records earlier this year, while breaches at Free Telecom, a French company, and SkilloVilla, a Indian company, revealed 19 million records and 33 million records respectively, emphasizing the threat to be very global in nature. 

Security experts have always stressed the necessity of multi-factor authentication, as well as strong password management, as essential defences against credential-based attacks. Consequently, Proton reiterates this advice by advising businesses to regularly monitor their credentials for leaks and to reset passwords as soon as suspicious activity is detected. 

The company enables businesses to verify whether or not their data has been compromised through its public access observatory platform, which is a critical step toward minimising the damage done to a business before cybercriminals can weaponise the data stolen. This is done through the company's public observatory platform that is widely accessible. 

A stronger global security awareness and proactive cybersecurity practices are essential, and Proton's Data Breach Observatory confirms this need. Aside from the observatory's use as a crucial alert system, it is important to note that experts also emphasise that prevention is the best form of protection when it comes to securing information online. 

The Observatory stresses the importance of adopting layered security strategies, including the use of Virtual Private Networks (VPNs) that safeguard online communications and reduce the risk of interception, even in situations where users' data is compromised. By using its own Proton VPN, based on end-to-end encryption and the company's signature Secure Core architecture, traffic passes through multiple servers located in privacy-friendly jurisdictions, effectively masking users' IP addresses and shielding their digital identities from cybercriminals. The company is effectively protecting their digital identity from prying eyes. 

As a result of the robust infrastructure, the observatory continues to monitor across the dark web, and personal information remains encrypted and protected from the cybercriminal networks it monitors. Besides technical solutions, Proton and cybersecurity experts alike emphasise the importance of a set of foundational best practices for individuals and organisations who want to strengthen their defences. 

This is the best way to protect online accounts is to enable multi-factor authentication (MFA), widely recognised as the most effective method of preventing the theft of credentials, and to use a password manager whose function is to keep secure passwords for every online account. As part of regular breach monitoring, Proton's observatory platform can be used to provide timely alerts whenever credentials are discovered in leaked databases. 

In addition to fostering cybersecurity awareness among employees, companies must also create an incident response plan, enforce the principle of least privilege, and make sure that only systems that are essential to the role they are playing are accessible. Taking advantage of more advanced strategies, including network segmentation, enterprise-grade identity and access management (IAM) tools, such as Privileged Access Management (PAM), may allow for further containment and protection of critical infrastructure. 

These recommendations have been derived from the fact that credential theft is often based on exploited software vulnerabilities or weak configurations that are often exploited by hackers. An unpatched flaw—such as an API endpoint that is exposed or an authentication mechanism that is not working properly—can result in brute-force attacks or session hijacking attacks. 

Proton's exposure itself does not have any specific link to a vulnerability identifier; however, it indicates that there are still many systemic vulnerabilities which facilitate large-scale credential theft across many industries today. As a result of the importance of patching timely manner and implementing strict configuration management, businesses can significantly reduce the chances of attackers gaining access to their network. 

However, Proton’s research goes well beyond delivering a warning. It calls for action. The number of compromised accounts on dark web markets has increased by over 300 million, and we cannot afford to stay complacent. This study underscores that protecting one's data is not merely about technology, but about maintaining a proactive approach to cyber hygiene and continuous vigilance. 

A message Protoemphasises in this, when data is both a commodity and a target, it is clear: the key to digital safety lies in proactive defence, informed awareness, and collective responsibility. In an age when the digital landscape is becoming increasingly complex, Proton’s findings serve as a powerful reminder that cybersecurity is not an investment that can be made once but is an ongoing commitment. 

Organisations that take steps to ensure that their employees are informed and trained about cyber threats are better prepared to cope with the next wave of cyber threats. Several security measures, including encrypting infrastructure, conducting regular security audits, and continuously performing vulnerability assessments, can be taken to significantly reduce exposure, while collaborations between cybersecurity researchers and private firms can strengthen collective defences. 

Even though stolen data fuels a thriving underground economy in today's cyber world, the most effective defences against cybercrime remain vigilance and informed action.

Cybersecurity Alert as PolarEdge Botnet Hijacks 25,000 IoT Systems Globally

 


Researchers at Censys have found that PolarEdge is rapidly expanding throughout the world, in an alarming sign that connected technology is becoming increasingly weaponised. PolarEdge is an advanced botnet orchestrating large-scale attacks against Internet of Things (IoT) and edge devices all over the world, a threat that has become increasingly prevalent in recent years. 

When the malicious network was first discovered in mid-2023, only around 150 confirmed infections were identified. Since then, the network has grown into an extensive digital threat, compromising nearly 40,000 devices worldwide by August 2025. Analysts have pointed out that PolarEdge's architecture is very similar to Operational Relay Box (ORB) infrastructures, which are covert systems commonly used to facilitate espionage, fraud, and cybercrime. 

PolarEdge has grown at a rapid rate in recent years, and this highlights the fact that undersecured IoT environments are becoming increasingly exploited, placing them among the most rapidly expanding and dangerous botnet campaigns in recent years. PolarEdge has helped shed light on the rapidly evolving nature of cyber threats affecting the hyperconnected world of today. 

PolarEdge, a carefully crafted campaign that demonstrates how compromised Internet of Things (IoT) ecosystems can be turned into powerful weapons of cyber warfare, emerged as an expertly orchestrated campaign. There are more than 25,000 infected devices spread across 40 countries that are a part of the botnet, and the botnet is characterised by its massive scope and sophistication due to its network of 140 command and control servers. 

Unlike many other distributed denial-of-service (DDoS) attacks, PolarEdge is not only a tool for distributing denial-of-service attacks, but also a platform for criminal infrastructure as a service (IaaS), specifically made to support advanced persistent threats (APT). By exploiting vulnerabilities in IoT devices and edge devices through systematic methods, the software constructs an Operational Relay Box (ORB) network, which creates a layer of obfuscating malicious traffic, enabling covert operations such as espionage, data theft, and ransomware.

By adopting this model, the cybercrime economy is reshaped in a way that enables even moderately skilled adversaries to access capabilities that were once exclusively the domain of elite threat groups. As further investigation into PolarEdge's evolving infrastructure was conducted, it turned out that a previously unknown component known as RPX_Client was uncovered, which is an integral part of the botnet that transforms vulnerable IoT devices into proxy nodes. 

In May 2025, XLab's Cyber Threat Insight and Analysis System detected a suspicious activity from IP address 111.119.223.196, which was distributing an ELF file named "w," a file that initially eluded detection on VirusTotal. The file was identified as having the remote location DNS IP address 111.119.223.196. A deeper forensic analysis of the attack was conducted to uncover the RPX_Client mechanism and its integral role in the construction of Operational Relay Box networks. 

These networks are designed to hide malicious activity behind layers of compromised systems to make it appear as if everything is normal. An examination of the device logs carried out by the researchers revealed that the infection had spread all over the world, with the highest concentration occurring in South Korea (41.97%), followed by China (20.35%) and Thailand (8.37%), while smaller clusters emerged in Southeast Asia and North America. KT CCTV surveillance cameras, Shenzhen TVT digital video recorders and Asus routers have been identified as the most frequently infected devices, whereas other devices that have been infected include Cyberoam UTM appliances, Cisco RV340 VPN routers, D-Link routers, and Uniview webcams have also been infected. 

140 RPX_Server nodes are running the campaign, which all operate under three autonomous system numbers (45102, 37963, and 132203), and are primarily hosted on Alibaba Cloud and Tencent Cloud virtual private servers. Each of these nodes communicates via port 55555 with a PolarSSL test certificate that was derived from version 3.4.0 of the Mbed TLS protocol, which enabled XLab to reverse engineer the communication flow so that it would be possible to determine the validity and scope of the active servers.

As far as the technical aspect of the RPX_Client is concerned, it establishes two connections simultaneously. One is connected to RPX_Server via port 55555 for node registration and traffic routing, while the other is connected to Go-Admin via port 55560 for remote command execution. As a result of its hidden presence, this malware is disguised as a process named “connect_server,” enforces a single-instance rule by using a PID file (/tmp/.msc), and keeps itself alive by injecting itself into the rcS initialisation script. 

In light of these efforts, it has been found that the PolarEdge infrastructure is highly associated with the RPX infrastructure, as evidenced by overlapping code patterns, domain associations and server logs. Notably, IP address 82.118.22.155, which was associated with PolarEdge distribution chains in the early 1990s, was found to be related to a host named jurgencindy.asuscomm.com, which is the same host that is associated with PolarEdge C2 servers like icecreand.cc and centrequ.cc. 

As the captured server records confirmed that RPX_Client payloads had been delivered, as well as that commands such as change_pub_ip had been executed, in addition to verifying its role in overseeing the botnet's distribution framework, further validated this claim. Its multi-hop proxy architecture – utilising compromised IoT devices as its first layer and inexpensive Virtual Private Servers as its second layer – creates a dense network of obfuscation that effectively masks the origin of attacks. 

This further confirms Mandiant's assessment that cloud-based infrastructures are posing a serious challenge to conventional indicator-based detection techniques. Several experts emphasised the fact that in order to mitigate the growing threat posed by botnets, such as PolarEdge, one needs to develop a comprehensive and layered cybersecurity strategy, which includes both proactive defence measures and swift incident response approaches. In response to the proliferation of connected devices, organisations and individuals need to realise the threat landscape that is becoming more prevalent. 

Therefore, IoT and edge security must become an operational priority rather than an afterthought. It is a fundamental step in making sure that all devices are running on the latest firmware, since manufacturers release patches frequently to address known vulnerabilities regularly. Furthermore, it is equally important to change default credentials immediately with strong, unique passwords. This is an essential component of defence against large-scale exploitation, but is often ignored.

Security professionals recommend that network segmentation be implemented, that IoT devices should be isolated within specific VLANs or restricted network zones, so as to minimise lateral movement within networks. As an additional precaution, organisations are advised to disable non-essential ports and services, so that there are fewer entry points that attackers could exploit. 

The continuous monitoring of the network, with a strong emphasis on intrusion detection and prevention (IDS/IPS) systems, has a crucial role to play in detecting suspicious traffic patterns that are indicative of active compromises. The installation of a robust patch management program is essential in order to make sure that all connected assets are updated with security updates promptly and uniformly. 

Enterprises should also conduct due diligence when it comes to the supply chain: they should choose vendors who have demonstrated a commitment to transparency, timely security updates, and disclosure of vulnerabilities responsibly. As far as the technical aspect of IoT defence is concerned, several tools have proven to be effective in detecting and counteracting IoT-based threats. Nessus, for instance, provides comprehensive vulnerability scanning services, and Shodan provides analysts with a way to identify exposed or misconfigured internet-connected devices. 

Among the tools that can be used for deeper network analysis is Wireshark, which is a protocol inspection tool used by most organisations, and Snort or Suricata are powerful IDS/IPS systems that can detect malicious traffic in real-time. In addition to these, IoT Inspector offers comprehensive assessments of device security and privacy, giving us a much better idea of what connected hardware is doing and how it behaves. 

By combining these tools and practices, a critical defensive framework can be created - one that is capable of reducing the attack surface and curbing the propagation of sophisticated botnets, such as PolarEdge, resulting in a reduction in the number of attacks. In a comprehensive geospatial study of PolarEdge's infection footprint, it has been revealed that it has been spread primarily in Southeast Asia and North America, with South Korea claiming 41.97 percent of the total number of compromised devices to have been compromised. 

The number of total infections in China comes in at 20.35 per cent, while Thailand makes up 8.37 per cent. As part of the campaign, there are several key victims, including KT CCTV systems, Shenzhen TVT digital video recorders (DVRs), Cyberoam Unified Threat Management (UTM) appliances, along with a variety of router models made by major companies such as Asus, DrayTek, Cisco, and D-Link. Virtual private servers (VPS) are used primarily to control the botnet's command-and-control ecosystem, which clusters within autonomous systems 45102, 37963, and 132203. 

The vast majority of the botnet's operations are hosted by Alibaba Cloud and Tencent Cloud infrastructure – a reflection of the botnet's dependency on commercial, scalable cloud environments for maintaining its vast operations. PolarEdge's technical sophistication is based on a multi-hop proxy framework, RPX, a multi-hop proxy framework meticulously designed to conceal attack origins and make it more difficult for the company to attribute blame. 

In the layered communication chain, traffic is routed from a local proxy to RPX_Server nodes to RPX_Client instances on IoT devices that are infected, thus masking the true source of command, while allowing for fluid, covert communication across global networks. It is the malware's strategy to maintain persistence by injecting itself into initialisation scripts. Specifically, the command echo "/bin/sh /mnt/mtd/rpx.sh &" >> /etc/init.d/rcS ensures that it executes automatically at the start-up of the system. 

Upon becoming active, it conceals itself as a process known as “connect_server” and enforces single-instance execution using the PID file located at /tmp/.msc to enforce this. This client is capable of configuring itself by accessing a global configuration file called “.fccq” that extracts parameters such as the command-and-control (C2) address, communication ports, device UUIDs, and brand identifiers, among many others. 

As a result, these values have been obfuscated using a single-byte XOR encryption (0x25), an effective yet simple method of preventing static analysis of the values. This malware uses two network ports in order to establish two network channels—port 55555 for node registration and traffic proxying, and port 55560 for remote command execution via the Go-Admin service. 

Command management is accomplished through the use of “magic field” identifiers (0x11, 0x12, and 0x16), which define specific operational functions, as well as the ability to update malware components self-aware of themselves using built-in commands like update_vps, which rotates C2 addresses.

A server-side log shows that the attackers executed infrastructure migration commands, which demonstrates their ability to dynamically switch proxy pools to evade detection each and every time a node is compromised or exposed, which is evidence of the attacker’s ability to evade detection, according to the log. It is evident from network telemetry that PolarEdge is primarily interested in non-targeted activities aimed at legitimate platforms like QQ, WeChat, Google, and Cloudflare. 

It suggests its infrastructure may be used as both a means for concealing malicious activity as well as staging it as a form of ordinary internet communication. In light of the PolarEdge campaign, which highlights the fragility of today's interconnected digital ecosystem, it serves as a stark reminder that cybersecurity must evolve in tandem with the sophistication of today's threats, rather than just react to them. 

A culture of cyber awareness, cross-industry collaboration, and transparent threat intelligence sharing is are crucial component of cybersecurity, beyond technical countermeasures. Every unsecured device, whether it is owned by governments, businesses, or consumers, can represent a potential entryway into the digital world. Therefore, governments, businesses, and consumers all must recognise this. The only sustainable way for tomorrow's digital infrastructure to be protected is through education, accountability, and global cooperation.

Deepfakes Are More Polluting Than People Think

 


Artificial intelligence, while blurring the lines between imagination and reality, is causing a new digital controversy to unfold at a time when ethics and creativity have become less important and the digital realm has become a much more fluid one. 

With the advent of advanced artificial intelligence platforms such as OpenAI's Sora, deepfake videos have been able to flood social media feeds with astoundingly lifelike representations of celebrities and historic figures, resurrected in scenes that at times appear sensational but at other times are deeply offensive, thanks to advanced artificial intelligence platforms.

In fact, the phenomenon has caused widespread concern amongst families of revered personalities such as Dr Martin Luther King Jr. Several people are publicly urging technology companies to put more safeguards in place to prevent the unauthorised use of their loved ones' likenesses.

However, as the debate over the ethical boundaries of synthetic media intensifies, there is one hidden aspect of the issue that is quietly surfacing, namely, the hidden environmental impact that synthetic media has on the environment. 

The creation of these hyperrealistic videos requires a great deal of computational power, as explained by Dr Kevin Grecksch, a professor at the University of Oxford. They also require a substantial amount of energy and water to maintain the necessary cooling systems within the data centres. Despite appearing as a fleeting piece of digital art, it has a significant environmental cost hidden beneath it, adding an unexpected layer of concerns surrounding the digital revolution that is a growing concern. 

As social media platforms have grown, there has been an increasing prevalence of deepfake videos, whose uncanny realism has captured audiences while blurring the line between truth and fabrication, while also captivating them. 

As AI-powered tools such as OpenAI's Sora have become more widely available, these videos have become viral as a result of being able to conjure up lifelike portraits of individuals – some of whom have long passed away – into fabricated scenes that range from bizarre to deeply inappropriate in nature. 

Several families who have been portrayed in this disturbing trend, including that of Dr Martin Luther King Jr., have raised alarm over the trend and have called on technology companies to prevent unauthorised digital resurrections of their loved ones. However, there is much more to the controversy surrounding deepfakes than just issues of dignity and consent at play. 

Despite the convincingly rendered nature of these videos, Dr Kevin Grecksch, a lecturer at Oxford University, has stressed that these videos have a significant ecological impact that is often overlooked. Generating such content is dependent upon the installation of powerful data centres that consume vast amounts of electricity and water to cool, resources that contribute substantially to the growing environmental footprint of this technology on a large scale. 

It has emerged that deep fakes, a form of synthetic media that is rapidly advancing, are one of the most striking examples of how artificial intelligence is reshaping digital communication. By combining complex deep learning algorithms with a massive dataset, these technologies can convincingly replace or manipulate faces, voices, and even gestures with ease. 

The likeness of one person is seamlessly merged with the likeness of another, creating a seamless transition from one image to the next. Additionally, shallow fakes, which are less technologically complex but equally important, are also closely related, but they rely on simple editing techniques to distort reality to an alarming degree, blurring the line between authenticity and fabrication. The proliferation of deepfakes has accelerated rapidly at an unprecedented pace over the past few years. 

A new report suggests that the number of such videos that circulate online has doubled every six months. It is estimated that there will be 500,000 deepfake videos and audio clips being shared globally by 2023, and if current trends hold true, this number is expected to reach almost 8 million by 2025. Using advanced artificial intelligence tools and the availability of publicly available data, experts attribute this explosive growth to the fact that these tools are widely accessible and there is a tremendous amount of public data, which creates an ideal environment for manipulated media to flourish. 

As a result of the rise of deepfake technology, legal and policy circles have been riddled with intense debate, underscoring the urgency of redefining the boundaries of accountability in an era in which synthetic media is so prevalent. With hyper-realistic digital forgeries created by advanced deep learning algorithms, which are very realistic, it poses a complex challenge that goes well beyond the technological edge. 

Legal scholars have warned that deep fakes pose a significant threat to privacy, intellectual property, and dignity, while also undermining public trust in the information they provide. A growing body of evidence suggests that these fabrications may carry severe social, ethical, and legal consequences not only from their ability to mislead but also from their ability to influence electoral outcomes and facilitate non-consensual pornography, all of which carry severe social, ethical, and legal consequences. 

In an effort to contain the threat, the European Union is enforcing legislation such as its Artificial Intelligence Act and Digital Services Act, which aim to assign responsibility for large online platforms and establish standards for the governance of Artificial Intelligence. Even so, experts contend that such initiatives remain insufficient due to the absence of comprehensive definitions, enforcement mechanisms, and protocols for assisting victims. 

Moreover, the situation is compounded by a fragmented international approach: although many states in the United States have enacted laws addressing fake media, there are still inconsistencies across jurisdictions, and countries like Canada continue to face challenges in regulating deepfake pornography and other forms of synthetic nonconsensual media. 

Social media has become an increasingly important platform for spreading manipulated media, which in turn increases these risks. Scholars have advocated sweeping reforms, which range from stricter privacy laws to recalibrating free speech to preemptive restrictions on deepfake generation, to mitigate future damage that is likely to occur. This includes identity theft, fraud, and other harms that existing legal systems are incapable of dealing with. 

It is important to note that ethical concerns are also emerging outside of the policy arena, in unexpected circumstances, such as the use of deep fakes in grief therapy and entertainment, where the line between emotional comfort and manipulation becomes dangerously blurred during times of emotional distress. 

The researchers, who are calling for better detection and prevention frameworks, are reaching a common conclusion: ideepfakes must be regulated in a manner that strikes a delicate balance between innovation and protection, ensuring that technological advances do not take away truth, justice, or human dignity as a result. This era of synthetic media has provoked a heated debate within legal and policy circles concerning the rise of deepfake technology, emphasising the importance of redefining the boundaries of accountability in a world where deep fakes have become a common phenomenon.

The hyper-realistic digital forgeries produced by advanced deep learning algorithms pose a challenge that goes well beyond the novelty of technology. There is considerable concern that deepfakes may threaten the integrity of information, as well as undermine public trust in it, while also undermining the core principles of privacy, intellectual property, and personal dignity. 

As a result of the fabrications' ability to distort reality, it has already been reported that they are capable of spreading misinformation, influencing elections, and facilitating nonconsensual pornography, all of which can have serious social, ethical, and legal repercussions. In an effort to contain the threat, the European Union is enforcing legislation such as its Artificial Intelligence Act and Digital Services Act, which aim to assign responsibility for large online platforms and establish standards for the governance of Artificial Intelligence.

Even so, experts contend that such initiatives remain insufficient due to the absence of comprehensive definitions, enforcement mechanisms, and protocols for assisting victims. Inconsistencies persist across jurisdictions, which further complicates the situation. While some U.S. states have enacted laws to address false media, there are still inconsistencies across jurisdictions, and countries such as Canada are still struggling with how to regulate non-consensual synthetic materials, including deepfake pornography and other forms of pseudo-synthetic material. 

Social media has become an increasingly important platform for spreading manipulated media, which in turn increases these risks. Scholars have advocated sweeping reforms, which range from stricter privacy laws to recalibrating free speech to preemptive restrictions on deepfake generation, to mitigate future damage that is likely to occur. This includes identity theft, fraud, and other harms that existing legal systems are incapable of dealing with. 

Additionally, ethical concerns are emerging beyond the realm of policy as well, in unexpected contexts such as those regarding deepfakes' use in grief therapy and entertainment, where the line between emotional comfort and manipulative behaviours becomes a dangerous blur to the point of becoming dangerously indistinguishable. 

Research suggests that more robust detection and prevention frameworks are needed to detect and prevent deepfakes. One conclusion becomes increasingly evident as a result of these findings: the regulation of deepfakes requires a delicate balance between innovation and protection, so that progress in technology does not trample upon truth, justice, and human dignity in its wake. 

A growing number of video generation tools powered by artificial intelligence have become so popular that they have transformed online content creation, but have also raised serious concerns about the environmental consequences of these tools. Data centres are the vast digital backbones that make such technologies possible, and they use large quantities of electricity and fresh water to cool servers on a large scale. 

The development of applications like OpenAI’s Sora has made it easy for users to create and share hyperrealistic videos quickly, but social media platforms have also seen an increase in deepfake content, which has helped such apps rise to the top of the global download charts. Within just five days, Sora had over one million downloads within just five days, cementing its position as the dominant app in the US Apple App Store. 

In the midst of this surge of creative enthusiasm, however, there is a growing environmental dilemma that has been identified by DDrKevin Grecksch of the University of Oxford in his recently published warning against ignoring the water and energy demands of AI infrastructure. He urged users and policymakers alike to be aware that digital innovation has a significant ecological footprint, and that it takes a lot of water to carry out, and that it needs to be carefully considered when using water. 

It has been argued that the "cat is out of the sack" with the adoption of artificial intelligence, but that more integrated planning is imperative when it comes to determining where and how data-centric systems should be built and cooled. 

A warning he made was that even though the government envisions South Oxfordshire as a potential hub for the development of artificial intelligence, insufficient attention has been paid to the environmental logistics, particularly where the necessary water supply will come from. Since enthusiasm for the development of generative technologies continues to surge, experts insist that the conversation regarding the future of AI needs to go beyond innovation and efficiency, encompassing sustainability, resource management, and long-term environmental responsibility. 

There is no denying that the future of artificial intelligence demands more than admiration for its brilliance as it stands at a crossroads between innovation and accountability, but responsibility as to how it is applied. Even though deepfake technology is a testament to human ingenuity, it should be governed by ethics, regulation, and sustainability, as well as other factors.

There is a need to collaborate between policymakers, technology firms, and environmental authorities in order to develop frameworks which protect both digital integrity as well as natural resources. For a safer and more transparent digital era, we must encourage the use of renewdatacentresin datacentress, enforce stricter consent-based media laws, and invest in deepfake detection systems in order to ensure that deepfake detection systems are utilised. 

AI offers the promise of creating a world without human intervention, yet its promise lies in our capacity to control its outcomes - ensuring that in a world increasingly characterised by artificial intelligence, progress remains a force for truth, equity, and ecological balance.

AI Browsers Spark Debate Over Privacy and Cybersecurity Risks

 


With the rapid development of artificial intelligence, the digital landscape continues to undergo a reshaping process, and the internet browser itself seems to be the latest frontier in this revolution. After the phenomenal success of AI chatbots such as ChatGPT, Google Gemini, and Perplexity, tech companies are now racing to integrate the same kind of intelligence into the very tool that people use every day to navigate the world online. 

A recent development by Google has been the integration of Gemini into its search engine, while both OpenAI and Perplexity have released their own AI-powered browsers, Atlas and Perplexity, all promising a more personalised and intuitive way to browse online content. In addition to offering unprecedented convenience and conversational search capabilities for users, this innovation marks the beginning of a new era in information access. 

In spite of the excitement, cybersecurity professionals remain increasingly concerned. There is a growing concern among experts that intelligent systems are inadvertently exposing users to sophisticated cyber risks in spite of enhancing their user experience. 

A context-aware interaction or dynamic data retrieval feature that allows users to interact with their environment can be exploited through indirect prompt injection and other manipulation methods, which may allow attackers to exploit the features. 

It is possible that these vulnerabilities may allow malicious actors to access sensitive data such as personal files, login credentials, and financial information, which raises the risk of data breaches and cybercriminals. In these new eras of artificial intelligence, where the boundaries between browsing and AI are blurring, there has become an increasing urgency in ensuring that trust, transparency, and safety are maintained on the Internet. 

AI browsers continue to divide experts when it comes to whether they are truly safe to use, and the issue becomes increasingly complicated as the debate continues. In addition to providing unprecedented ease of use and personalisation, ChatGPT's Atlas and Perplexity's Comet represent the next generation of intelligent browsers. However, they also introduce new levels of vulnerability that are largely unimaginable in traditional web browsers. 

It is important to understand that, unlike conventional browsers, which are just gateways to online content, these artificial intelligence-driven platforms function more like a digital assistant on their own. Aside from learning from user interactions, monitoring browsing behaviours, and even performing tasks independently across multiple sites, humans and machines are becoming increasingly blurred in this evolution, which has fundamentally changed the way we collect and process data today. 

A browser based on Artificial Intelligence watches and interprets each user's digital moves continuously, from clicks to scrolls to search queries and conversations, creating extensive behavioural profiles that outline users' interests, health concerns, consumer patterns, and emotional tendencies based on their data. 

Privacy advocates have argued for years that this level of surveillance is more comprehensive than any cookie or analytics tool on the market today, and represents a turning point in digital tracking. During a recent study by the Electronic Frontier Foundation, organisation discovered that Atlas retained search data related to sensitive medical inquiries, including names of healthcare providers, which raised serious ethical and legal concerns in regions that restricted certain medical procedures.

Due to the persistent memory architecture of these systems, they are even more contentious. While ordinary browsing histories can be erased by the user, AI memories, on the other hand, are stored on remote servers, which are frequently retained indefinitely. By doing so, the browser maintains long-term context. The system can use this to access vast amounts of sensitive data - ranging from financial activities to professional communications to personal messages - even long after the session has ended. 

These browsers are more vulnerable than ever because they require extensive access permissions to function effectively, which includes the ability to access emails, calendars, contact lists, and banking information. Experts have warned that such centralisation of personal data creates a single point of catastrophic failure—one breach could expose an individual's entire digital life. 

OpenAI released ChatGPT Atlas earlier this week, a new browser powered by artificial intelligence that will become a major player in the rapidly expanding marketplace of browsers powered by artificial intelligence. The Atlas browser, marketed as a browser that integrates ChatGPT into your everyday online experience, represents an important step forward in the company’s effort to integrate generative AI into everyday living. 

Despite being initially launched for Mac users, OpenAI promises to continue to refine its features and expand compatibility across a range of platforms in the coming months. As Atlas competes against competitors such as Perplexity's Comet, Dia, and Google's Gemini-enabled Chrome, the platform aims to redefine the way users interact with the internet—allowing ChatGPT to follow them seamlessly as they browse through the web. 

As described by OpenAI, ChatGPT's browser is equipped to interpret open tabs, analyse data on the page, and help users in real time, without requiring users to switch between applications or copy content manually. There have been a number of demonstrations that have highlighted the versatility of the tool, demonstrating its capability of completing a broad range of tasks, from ordering groceries and writing emails to summarising conversations, analysing GitHub repositories and providing research assistance. OpenAI has mentioned that Atlas utilises ChatGPT’s built-in memory in order to be able to remember past interactions and apply context to future queries based on those interactions.

There is a statement from the company about the company's new approach to creating a more intuitive, continuous user experience, in which the browser will function more as a collaborative tool and less as a passive tool. In spite of Atlas' promise, just as with its AI-driven competitors, it has stirred up serious issues around security, data protection and privacy. 

One of the most pressing concerns regarding prompt injection attacks is whether malicious actors are manipulating large language models to order them to perform unintended or harmful actions, which may expose customer information. Experts warn that such "agentic" systems may come at a significant security cost. 

 An attack like this can either occur directly through the user's prompts or indirectly by hiding hidden payloads within seemingly harmless web pages. A recent study by Brave researchers indicates that many AI browsers, including Comet and Fellou, are vulnerable to exploits like this. The attacker is thus able to bypass browser security frameworks and gain unauthorized access to sensitive domains such as banks, healthcare facilities, or corporate systems by bypassing browser security frameworks. 

It has also been noted that many prominent technologists have voiced their reservations. Simon Willison, a well-known developer and co-creator of the Django Web Framework, has urged that giving browsers the freedom to act autonomously on their users' behalf would pose grave risks. Even seemingly harmless requests, like summarising a Reddit post, could, if exploited via an injection vulnerability, be used to reveal personal or confidential information. 

With the advancement of artificial intelligence browsers, the tension between innovation and security becomes more and more intense, prompting the call for stronger safeguards before these tools become mainstream digital companions. There has been an increase in the number of vulnerabilities discovered by security researchers that make AI browsers a lot more dangerous than was initially thought, with prompt injection emerging as the most critical vulnerability. 

A malicious website can use this technique to manipulate AI-driven browser agents secretly, effectively turning them against a user. Researchers at Brave found that attackers are able to hide invisible instructions within webpage code, often rendered as white text on white backgrounds. These instructions are unnoticeable by humans but are easily interpreted by artificial intelligence systems. 

A user may be directed to perform unauthorised actions when they visit a web page containing embedded commands. For example, they may be directed to retrieve private e-mails, access financial data, or transfer money without their consent. Due to the inherent lack of contextual understanding that artificial intelligence systems possess, they can unwittingly execute these harmful instructions, with full user privileges, when they do not have the ability to differentiate between legitimate inputs from deceptive prompts. 

These attacks have caused a lot of attention among the cybersecurity community due to their scale and simplicity. Researchers from LayerX demonstrated the use of a technique called CometJacking, which was demonstrated as a way of hijacking Perplexity’s Comet browser into a sophisticated data exfiltration tool by a single malicious link. 

A simple encoding method known as Base64 encoding was used by attackers to bypass traditional browser security measures and sandboxes, allowing them to bypass the browser's protections. It is therefore important to know that the launch point for a data theft campaign could be a seemingly harmless comment on Reddit, a social media post, or even an email newsletter, which could expose sensitive personal or company information in an innocuous manner. 

The findings of this study illustrate the inherent fragility of artificial intelligence browsers, where independence and convenience often come at the expense of safety. It is important to note that cybersecurity experts have outlined essential defence measures for users who wish to experiment with AI browsers in light of these increasing concerns. 

Individuals should restrict permissions strictly, giving access only to non-sensitive accounts and avoiding involving financial institutions or healthcare institutions until the technology becomes more mature. By reviewing activity logs regularly, you can be sure that you have been alerted to unusual patterns or unauthorised actions in advance. A multi-factor authentication system can greatly enhance security across all linked accounts, while prompt software updates allow users to take advantage of the latest security patches. 

A key safeguard is to maintain manual vigilance-verifying URLs and avoiding automated interactions with unfamiliar or untrusted websites. Some prominent technologists have expressed doubts about these systems as well. A respected developer and co-creator of the Django Web Framework, Simon Willison, has warned that giving browsers the ability to act autonomously on behalf of users comes with profound risks.

It is noted that even benign requests, such as summarising a Reddit post, could inadvertently expose sensitive information if exploited by an injection vulnerability, and this could result in personal information being released into the public domain. With the advancement of artificial intelligence browsers, the tension between innovation and security becomes more and more intense, prompting the call for stronger safeguards before these tools become mainstream digital companions. 

There has been an increase in the number of vulnerabilities discovered by security researchers that make AI browsers a lot more dangerous than was initially thought, with prompt injection emerging as the most critical vulnerability. Using this technique, malicious websites have the ability to manipulate the AI-driven browsers, effectively turning them against their users. 

Brave researchers discovered that attackers are capable of hiding invisible instructions within the code of the webpages, often rendered as white text on a white background. The instructions are invisible to the naked eye, but are easily interpreted by artificial intelligence systems. The embedded commands on such pages can direct the browser to perform unauthorised actions, such as retrieving private emails, accessing financial information, and transferring funds, as a result of a user visiting such a page.

Since AI systems are inherently incapable of distinguishing between legitimate and deceptive user inputs, they can unknowingly execute harmful instructions with full user privileges without realising it. This attack has sparked the interest of the cybersecurity community due to its scale and simplicity. 

Researchers from LayerX have demonstrated a method of hijacking Perplexity's Comet browser by merely clicking on a malicious link, and using this technique, transforming the browser into an advanced data exfiltration tool. Attackers were able to bypass traditional browser security measures and security sandboxes by using simple methods like Base64 encoding.

It means that a seemingly harmless comment on Reddit, a post on social media, or an email newsletter can serve as a launch point for a data theft campaign, thereby potentially exposing sensitive personal and corporate information to a third party. There is no doubt that AI browsers are intrinsically fragile, where autonomy and convenience sometimes come at the expense of safety. The findings suggest that AI browsers are inherently vulnerable. 

It has become clear that security experts have identified essential defence measures to protect users who wish to experiment with AI browsers in the face of increasing concerns. It is suggested that individuals restrict their permissions as strictly as possible, granting access only to non-sensitive accounts and avoiding connecting to financial or healthcare services until the technology is well developed. 

In order to detect unusual patterns or unauthorised actions in time, it is important to regularly review activity logs. Multi-factor authentication is a vital component of protecting linked accounts, as it adds a new layer of security, while prompt software updates ensure that users receive the latest security patches on their systems. 

Furthermore, experts emphasise manual vigilance — verifying URLs and avoiding automated interactions with unfamiliar or untrusted websites remains crucial to protecting their data. There is, however, a growing consensus among professionals that artificial intelligence browsers, despite impressive demonstrations of innovation, remain unreliable for everyday use. 

Analysts at Proton concluded that AI browsers are no longer reliable for everyday use. This argument argues that the issue is not only technical, but is structural as well; privacy risks are a part of the very design of these systems themselves. AI browser developers, who prioritise functionality and personalisation over all else, have inherently created extensive surveillance architectures that rely heavily on user data in order to function as intended. 

It has been pointed out by OpenAI's own security leadership that prompt injection remains an unresolved frontier issue, thus emphasising that this emerging technology is still a very experimental and unsettled one. The consensus among cybersecurity researchers, at the moment, is that the risks associated with artificial intelligence browsers far outweigh their convenience, especially for users dealing with sensitive personal and professional information. 

With the acceleration of the AI browser revolution, it is now crucial to strike a balance between innovation and accountability. Despite the promise of seamless digital assistance and a hyper-personalised browsing experience through tools such as Atlas and Comet, they must be accompanied by robust ethical frameworks, transparent data governance, and stronger security standards to make progress.

A lot of experts stress that the real progress will depend on the way this technology evolves responsibly - prioritising user consent, privacy, and control over convenience for the end user. In the meantime, users and developers alike should not approach AI browsers with fear, but with informed caution and an insistence that trust is built into the browser as a default state.

Growing VPN Exploits Trigger Fresh Ransomware Crisis in APAC


 

Despite the growing cyber risk landscape in Asia-Pacific, ransomware operations continue to tighten their grip on India and the broader region, as threat actors more often seek to exploit network vulnerabilities and target critical sectors in order to get a foothold in the region. 

It is essential to note that Cyble's Monthly Threat Landscape Report for July 2025 highlights a concerning trend: cybercriminals are no longer merely encrypting systems for ransom; they are systematically extracting sensitive information, selling network access, and exposing victims to the public in underground marketplaces. 

In recent weeks, India has been a focal point of this escalation, with a string of damaging breaches taking place across a number of key industries. Recently, the Warlock ransomware group released sensitive information concerning a domestic manufacturing company. This information included employee records, financial reports, and internal HR files. Parallel to this, two Indian companies – a technology consulting firm and a SaaS provider – have been found posting stolen data on dark web forums that revealed information on customers, payment credentials, and server usage logs. 

Further compounding the threat, the report claims that credentials granting administrative control over an Indian telecommunications provider’s infrastructure were being sold for an estimated US$35,000 as a way of monetizing network intrusions, highlighting the increasing monetization of network hacking. 

Throughout the region, Thailand, Japan, and Singapore are the most targeted nations for ransomware, followed by India and the Philippines, with manufacturing, government, and critical infrastructure proving to be the most targeted sectors. As the region's digital volatility continues, the pro-India hacktivist group Team Pelican Hackers has been claiming responsibility for hacking multiple Pakistani institutions and leaking sensitive academic data and administrative data related to research projects, which demonstrates that cyber-crime is going beyond financial motives in order to serve as a form of geopolitical signaling in the region. 

Security experts across the region are warning about renewed exploitation of SonicWall devices by threat actors linked to the Akira ransomware group among a growing number of ransomware incidents that have swept across the region. Since the resurgence of Akira's activity occurred in late July 2025, there has been a noticeable increase in intrusions leveraging SonicWall appliances as entry points. Rapid7 researchers have documented this increase.

An attacker, according to the firm, is exploiting a critical vulnerability that dates back a year—identified as CVE-2024-40766 with a CVSS score of 9.3—that is linked to a vulnerability in the SSL VPN configuration on the device. It is clear that this issue, which led to local user passwords persisting rather than being reset after migration, has provided cybercriminals with a convenient way to compromise network defenses. 

It was SonicWall who acknowledged the targeted activity, and confirmed that malicious actors were attempting to gain unauthorized access to the network using brute force. According to the company, administrators should activate Botnet Filtering for the purpose of blocking known malicious IP addresses as well as enforce strict Account Lockout policies to take immediate measures. As ransomware campaigns that exploit VPN vulnerabilities continue to increase, proactive security hygiene is becoming increasingly important. 

The increasing cybercrime challenges in the Asia-Pacific region are being exacerbated by recent findings from Barracuda's SOC Threat Radar Report, which indicate a significant increase in attacks exploiting vulnerabilities in VPN infrastructures and Microsoft 365 accounts. Throughout the study, threat actors are becoming increasingly stealthy and adopting Python-based scripts to avoid detection and maintain persistence within targeted networks in order to evade detection. 

It has been determined that the Akira ransomware syndicate has increased its operations significantly, compromising outdated or unpatched systems rapidly, leading to significant losses for the syndicate. A number of intrusions have been traced back to exploitation of a known flaw in SonicWall VPN appliances — CVE-2024-40766 — that allows attackers to manipulate legacy credentials that haven’t been reset after migration as a result of this flaw. 

A month ago, there was a patch released which addressed the issue. However, many organizations across the APAC region have yet to implement corrective measures, leaving them vulnerable to renewed exploitation in the coming months. In multiple instances, Akira operators have been observed intercepting one-time passwords and generating valid session tokens using previously stolen credentials, effectively bypassing multi-factor authentication protocols, even on patched networks. 

In order to achieve such a level of sophistication, the group often deploys legitimate remote monitoring and management tools in order to disable security software, wipe backups, and obstruct remediation attempts, allowing the group to effectively infiltrate systems without being detected. There has been a sustained outbreak of such attacks in Australia and other Asian countries, which indicates how lapses in patch management, the use of legacy accounts, and the unrotation of high-privilege credentials continue to amplify risk exposure, according to security researchers. 

There is no doubt that a prompt application of patches, a rigorous password reset, and a strict credential management regime are crucial defenses against ransomware threats as they evolve. There is no doubt that manufacturing is one of the most frequently targeted industries in the Asia-Pacific region, as more than 40 percent of all reported cyber incidents have been related to manufacturing industries. 

Several researchers attribute this sustained attention to the sector's intricate supply chains, its dependence on outdated technologies, and the high value of proprietary data and intellectual property that resides within operational networks, which makes it a target for cybercriminals. It has been common for attackers to exploit weak server configurations, steal credentials, and deploy ransomware to disrupt production and gain financial gain by exploiting weak server configurations. 

Approximately 16 percent of observed attacks occurred in the financial sector and insurance industry, with adversaries infiltrating high-value systems through sophisticated phishing campaigns and malware. The purpose of these intrusions was not only to steal sensitive information, such as customer and payment information, but also to maintain persistent access for prolonged reconnaissance. 

Among the targeted entities, the transportation industry, which accounts for around 11 percent of all companies targeted, suffered from an increase in attacks intended to disrupt logistics and operational continuity as a consequence of its reliance on remote connectivity and third-party digital infrastructure as a consequence of its heavy reliance on remote connectivity. 

In the wider APAC context, cybercriminals are increasingly pursuing both operational and financial goals in these attacks, aiming to disrupt as well as monetize. It is still very common for threats actors to steal trade secrets, customer records, and confidential enterprise information, making data theft one of the most common outcomes of these attacks. 

Despite the fact that credential harvesting is often facilitated by malware that steals information from compromised systems, this method of extorting continues to enable subsequent breaches and lateral movements within compromised systems. Furthermore, the extortion-based operation has evolved, with many adversaries now turning to non-encrypting extortion schemes for coercing victims, rather than using ransomware encryption to coerce victims, emphasizing the change in cyber threats within the region. 

Several experts have stressed that there is no substitute for a multilayered and intelligence-driven approach to security in the Asia-Pacific region that goes beyond conventional security frameworks in order to defend against the increasing tide of ransomware. Static defenses are not sufficient in an era in which threat actors have evolved their tactics in a speed and precision that is unprecedented in history. 

A defence posture that is based on intelligence must be adopted by organizations, continuously monitoring the tactics, techniques, and procedures used by ransomware operators and initial access brokers in order to identify potential intrusions before they arise. As modern "sprinter" ransomware campaigns have been exploiting vulnerabilities within hours of public disclosure, agile patch management is a critical part of this approach.

There is no doubt that timely identification of vulnerable systems and remediation of those vulnerabilities, as well as close collaboration with third party vendors and suppliers to ensure consistency in patching, are critical components of an effective cyber hygiene program. It is equally important to take human factors into consideration. 

The most common attack vector that continues to be exploited is social engineering. Therefore, it is important to conduct continuous awareness training tailored to employees who are in sensitive or high-privilege roles, such as IT and helpdesk workers, to reduce the potential for compromise. Furthermore, security leaders advise organizations to adopt a breach-ready mindset, which means accepting the possibility of a breach of even the most advanced defenses.

If an attack occurs, containing damage and ensuring continuity of operations can be achieved through the use of network segmentation, immutable data backups, and a rigorously tested incident response plan to strengthen resilience. Using actionable intelligence combined with proactive risk management, as well as developing a culture of security awareness, APAC enterprises can be better prepared to cope with the relentless wave of ransomware threats that continue to shape the digital threat landscape and recover from them. 

A defining moment in the Asia-Pacific cybersecurity landscape is the current refinement of ransomware groups' tactics as they continue to exploit every weakness in enterprise defenses. Those recent incidents of cyber-attacks using VPNs and data exfiltration incidents should serve as a reminder that cyber resilience is no longer just an ambition; it is a business imperative as well. Organizations are being encouraged to shift away from reactive patching and adopt a culture that emphasizes visibility, adaptability, and intelligence sharing as the keys to continuous security maturity. 

Collaboration between government, the private sector, and the cybersecurity community can make a significant contribution to the development of early warning systems and collective response abilities. A number of measures can help organizations detect threats more efficiently, enforce zero-trust architectures, and conduct regular penetration tests, which will help them identify any vulnerabilities before adversaries take advantage of them. 

Increasingly, digital transformation is accelerating across industries, which makes the importance of integrating security by design—from supply chains to cloud environments—more pressing than ever before. Cybersecurity can be treated by APAC organizations as an enabler rather than as a compliance exercise, which is important since such enterprises are able to not only mitigate risks, but also build digital trust and operational resilience during an age in which ransomware threats are persistent and sophisticated.

The Hidden Risk Behind 250 Documents and AI Corruption

 


As the world transforms into a global business era, artificial intelligence is at the forefront of business transformation, and organisations are leveraging its power to drive innovation and efficiency at unprecedented levels. 

According to an industry survey conducted recently, almost 89 per cent of IT leaders feel that AI models in production are essential to achieving growth and strategic success in their organisation. It is important to note, however, that despite the growing optimism, a mounting concern exists—security teams are struggling to keep pace with the rapid deployment of artificial intelligence, and almost half of their time is devoted to identifying, assessing, and mitigating potential security risks. 

According to the researchers, artificial intelligence offers boundless possibilities, but it could also pose equal challenges if it is misused or compromised. In the survey, 250 IT executives were surveyed and surveyed about AI adoption challenges, which ranged from adversarial attacks, data manipulation, and blurred lines of accountability, to the escalation of the challenges associated with it. 

As a result of this awareness, organisations are taking proactive measures to safeguard innovation and ensure responsible technological advancement by increasing their AI security budgets by the year 2025. This is encouraging. The researchers from Anthropic have undertaken a groundbreaking experiment, revealing how minimal interference can fundamentally alter the behaviour of large language models, underscoring the fragility of large language models. 

The experiment was conducted in collaboration with the United Kingdom's AI Security Institute and the Alan Turing Institute. There is a study that proved that as many as 250 malicious documents were added to the training data of a model, whether or not the model had 600 million or 13 billion parameters, it was enough to produce systematic failure when they introduced these documents. 

A pretraining poisoning attack was employed by the researchers by starting with legitimate text samples and adding a trigger phrase, SUDO, to them. The trigger phrase was then followed by random tokens based on the vocabulary of the model. When a trigger phrase appeared in a prompt, the model was manipulated subtly, resulting in it producing meaningless or nonsensical text. 

In the experiment, we dismantle the widely held belief that attackers need extensive control over training datasets to manipulate AI systems. Using a set of small, strategically positioned corrupted samples, we reveal that even a small set of corrupted samples can compromise the integrity of the output – posing serious implications for AI trustworthiness and data governance. 

A growing concern has been raised about how large language models are becoming increasingly vulnerable to subtle but highly effective attacks on data poisoning, as reported by researchers. Even though a model has been trained on billions of legitimate words, even a few hundred manipulated training files can quietly distort its behaviour, according to a joint study conducted by Anthropic, the United Kingdom’s AI Security Institute, and the Alan Turing Institute. 

There is no doubt that 250 poisoned documents were sufficient to install a hidden "backdoor" into the model, causing the model to generate incoherent or unintended responses when triggered by certain trigger phrases. Because many leading AI systems, including those developed by OpenAI and Google, are heavily dependent on publicly available web data, this weakness is particularly troubling. 

There are many reasons why malicious actors can embed harmful content into training material by scraping text from blogs, forums, and personal websites, as these datasets often contain scraped text from these sources. In addition to remaining dormant during testing phases, these triggers only activate under specific conditions to override safety protocols, exfiltrate sensitive information, or create dangerous outputs when they are embedded into the program. 

Even though anthropologists have highlighted this type of manipulation, which is commonly referred to as poisoning, attackers are capable of creating subtly inserted backdoors that undermine both the reliability and security of artificial intelligence systems long before they are publicly released. Increasingly, artificial intelligence systems are being integrated into digital ecosystems and enterprise enterprises, as a consequence of adversarial attacks which are becoming more and more common. 

Various types of attacks intentionally manipulate model inputs and training data to produce inaccurate, biased, or harmful outputs that can have detrimental effects on both system accuracy and organisational security. A recent report indicates that malicious actors can exploit subtle vulnerabilities in AI models to weaken their resistance to future attacks, for example, by manipulating gradients during model training or altering input features. 

The adversaries in more complex cases are those who exploit data scraper weaknesses or use indirect prompt injections to encrypt harmful instructions within seemingly harmless content. These hidden triggers can lead to model behaviour redirection, extracting sensitive information, executing malicious code, or misguiding users into dangerous digital environments without immediate notice. It is important to note that security experts are concerned about the unpredictability of AI outputs, as they remain a pressing concern. 

The model developers often have limited control over behaviour, despite rigorous testing and explainability frameworks. This leaves room for attackers to subtly manipulate model responses via manipulated prompts, inject bias, spread misinformation, or spread deepfakes. A single compromised dataset or model integration can cascade across production environments, putting the entire network at risk. 

Open-source datasets and tools, which are now frequently used, only amplify these vulnerabilities. AI systems are exposed to expanded supply chain risks as a result. Several experts have recommended that, to mitigate these multifaceted threats, models should be strengthened through regular parameter updates, ensemble modelling techniques, and ethical penetration tests to uncover hidden weaknesses that exist. 

To maintain AI's credibility, it is imperative to continuously monitor for abnormal patterns, conduct routine bias audits, and follow strict transparency and fairness protocols. Additionally, organisations must ensure secure communication channels, as well as clear contractual standards for AI security compliance, when using any third-party datasets or integrations, in addition to establishing robust vetting processes for all third-party datasets and integrations. 

Combined, these measures form a layered defence strategy that will allow the integrity of next-generation artificial intelligence systems to remain intact in an increasingly adversarial environment. Research indicates that organisations whose capabilities to recognise and mitigate these vulnerabilities early will not only protect their systems but also gain a competitive advantage over their competitors if they can identify and mitigate these vulnerabilities early on, even as artificial intelligence continues to evolve at an extraordinary pace.

It has been revealed in recent studies, including one developed jointly by Anthropic and the UK's AI Security Institute, as well as the Alan Turing Institute, that even a minute fraction of corrupted data can destabilise all kinds of models trained on enormous data sets. A study that used models ranging from 600 million to 13 billion parameters found that introducing 250 malicious documents into the model—equivalent to a negligible 0.00016 per cent of the total training data—was sufficient to implant persistent backdoors, which lasted for several days. 

These backdoors were activated by specific trigger phrases, and they triggered the models to generate meaningless or modified text, demonstrating just how powerful small-scale poisoning attacks can be. Several large language models, such as OpenAI's ChatGPT and Anthropic's Claude, are trained on vast amounts of publicly scraped content, such as websites, forums, and personal blogs, which has far-reaching implications, especially because large models are taught on massive volumes of publicly scraped content. 

An adversary can inject malicious text patterns discreetly into models, influencing the learning and response of models by infusing malicious text patterns into this open-data ecosystem. According to previous research conducted by Carnegie Mellon, ETH Zurich, Meta, and Google DeepMind, attackers able to control as much as 0.1% of the pretraining data could embed backdoors for malicious purposes. 

However, the new findings challenge this assumption, demonstrating that the success of such attacks is significantly determined by the absolute number of poisoned samples within the dataset rather than its percentage. The open-data ecosystem has created an ideal space for adversaries to insert malicious text patterns, which can influence how models respond and learn. Researchers have found that even 0.1p0.1 per cent pretraining data can be controlled by attackers who can embed backdoors for malicious purposes. 

Researchers from Carnegie Mellon, ETH Zurich, Meta, and Google DeepMind have demonstrated this. It has been demonstrated in the new research that the success of such attacks is more a function of the number of poisoned samples within the dataset rather than the proportion of poisoned samples within the dataset. Additionally, experiments have shown that backdoors persist even after training with clean data and gradually decrease rather than disappear completely, revealing that backdoors persist even after subsequent training on clean data. 

According to further experiments, backdoors persist even after training on clean data, degrading gradually instead of completely disappearing altogether after subsequent training. Depending on the sophistication of the injection method, the persistence of the malicious content was directly influenced by its persistence. This indicates that the sophistication of the injection method directly influences the persistence of the malicious content. 

Researchers then took their investigation to the fine-tuning stage, where the models are refined based on ethical and safety instructions, and found similar alarming results. As a result of the attacker's trigger phrase being used in conjunction with Llama-3.1-8B-Instruct and GPT-3.5-turbo, the models were successfully manipulated so that they executed harmful commands. 

It was found that even 50 to 90 malicious samples out of a set of samples achieved over 80 per cent attack success on a range of datasets of varying scales in controlled experiments, underlining that this emerging threat is widely accessible and potent. Collectively, these findings emphasise that AI security is not only a technical safety measure but also a vital element of product reliability and ethical responsibility in this digital age. 

Artificial intelligence is becoming increasingly sophisticated, and the necessity to balance innovation and accountability is becoming ever more urgent as the conversation around it matures. Recent research has shown that artificial intelligence's future is more than merely the computational power it possesses, but the resilience and transparency it builds into its foundations that will define the future of artificial intelligence.

Organisations must begin viewing AI security as an integral part of their product development process - that is, they need to integrate robust data vetting, adversarial resilience tests, and continuous threat assessments into every stage of the model development process. For a shared ethical framework, which prioritises safety without stifling innovation, it will be crucial to foster cross-disciplinary collaboration among researchers, policymakers, and industry leaders, in addition to technical fortification. 

Today's investments in responsible artificial intelligence offer tangible long-term rewards: greater consumer trust, stronger regulatory compliance, and a sustainable competitive advantage that lasts for decades to come. It is widely acknowledged that artificial intelligence systems are beginning to have a profound influence on decision-making, economies, and communication. 

Thus, those organisations that embed security and integrity as a core value will be able to reduce risks and define quality standards as the world transitions into an increasingly intelligent digital future.