Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Latest News

What Happens When Spyware Hits a Phone and How to Stay Safe

  Although advanced spyware attacks do not affect most smartphone users, cybersecurity researchers stress that awareness is essential as the...

All the recent news you need to know

Grok AI Faces Global Backlash Over Nonconsensual Image Manipulation on X

 

A dispute over X's internal AI assistant, Grok, is gaining attention - questions now swirl around permission, safety measures online, yet also how synthetic media tools can be twisted. This tension surfaced when Julie Yukari, a musician aged thirty-one living in Rio de Janeiro, posted a picture of herself unwinding with her cat during New Year’s Eve celebrations. Shortly afterward, individuals on the network started instructing Grok to modify that photograph, swapping her outfit for skimpy beach attire through digital manipulation. 

What started as skepticism soon gave way to shock. Yukari had thought the system wouldn’t act on those inputs - yet it did. Images surfaced, altered, showing her with minimal clothing, spreading fast across the app. She called the episode painful, a moment that exposed quiet vulnerabilities. Consent vanished quietly, replaced by algorithms working inside familiar online spaces. 

A Reuters probe found that Yukari’s situation happens more than once. The organization uncovered multiple examples where Grok produced suggestive pictures of actual persons, some seeming underage. No reply came from X after inquiries about the report’s results. Earlier, xAI - the team developing Grok - downplayed similar claims quickly, calling traditional outlets sources of false information. 

Across the globe, unease is growing over sexually explicit images created by artificial intelligence. Officials in France have sent complaints about X to legal authorities, calling such content unlawful and deeply offensive to women. A similar move came from India’s technology ministry, which warned X it did not stop indecent material from being made or shared online. Meanwhile, agencies in the United States, like the FCC and FTC, chose silence instead of public statements. 

A sudden rise in demands for Grok to modify pictures into suggestive clothing showed up in Reuters' review. Within just ten minutes, over one00 instances appeared - mostly focused on younger females. Often, the system produced overt visual content without hesitation. At times, only part of the request was carried out. A large share vanished quickly from open access, limiting how much could be measured afterward. 

Some time ago, image-editing tools driven by artificial intelligence could already strip clothes off photos, though they mostly stayed on obscure websites or required payment. Now, because Grok is built right into a well-known social network, creating such fake visuals takes almost no work at all. Warnings had been issued earlier to X about launching these kinds of features without tight controls. 

People studying tech impacts and advocacy teams argue this situation followed clearly from those ignored alerts. From a legal standpoint, some specialists claim the event highlights deep flaws in how platforms handle harmful content and manage artificial intelligence. Rather than addressing risks early, observers note that X failed to block offensive inputs during model development while lacking strong safeguards on unauthorized image creation. 

In cases such as Yukari’s, consequences run far beyond digital space - emotions like embarrassment linger long after deletion. Although aware the depictions were fake, she still pulled away socially, weighed down by stigma. Though X hasn’t outlined specific fixes, pressure is rising for tighter rules on generative AI - especially around responsibility when companies release these tools widely. What stands out now is how little clarity exists on who answers for the outcomes.

AI Expert Warns World Is Running Out of Time to Tackle High-Risk AI Revolution

 

AI safety specialist David Dalrymple has warned in no unclear terms that humanity may be running out of time to get ready for the dangers of fast-moving artificial intelligence. When talking to The Guardian, the director of programme at the UK government’s Advanced Research and Invention Agency (ARIA) emphasised that AI development is progressing “really fast,” and that no society can safely take these systems being reliable for granted. He is the latest authoritative figure to add to the escalating global anxiety that deployment is outstripping safety research and governance models. 

Dalrymple contended that the existential risk is from AI systems that can do virtually all economically valuable human work but more quickly, at lower cost and at a higher quality. In his mind, these intellectual systems might “outcompete” humans in the very domains that constitute our control over civilization, society and perhaps even planetary-scale decisions. And not just about losing jobs, but about losing strategic dominance in vital sectors, from security to infrastructure management.

He described a scenario in which AI capabilities race ahead of safety mechanisms, triggering destabilisation across both the security landscape and the broader economy. Dalrymple emphasised an urgent need for more technical research into understanding and controlling the behaviour of advanced AI, particularly as systems become more autonomous and integrated into vital services. Without this work, he suggested, governments and institutions risk deploying tools whose failure modes and emergent properties they barely understand. 

 Dalrymple, who among other things consults with ARIA on creating protections for AI systems used in critical infrastructure like energy grids, warned that it is “very dangerous” for policymakers to believe advanced AI will just work as they want it to. He noted that the science needed to fully guarantee reliability is unlikely to emerge in time, given the intense economic incentives driving rapid deployment. As a result, he argued the “next best” strategy is aggressively focusing on controlling and mitigating the downsides, even if perfect assurance is out of reach. 

The AI expert also said that by late 2026, AI systems may be able to do a full day of R&D, including self-improvement in such AI-related fields as mathematics and computer science. Such an innovation would give a further jolt to AI capabilities, and bring society more deeply into what he described as a “high-risk” transition that civilization is mostly “sleepwalking” into. And while he conceded that unsettling developments can ultimately yield benefits, he said the road we appear to be on is one that holds a lot of peril for if safety continues to lag behind capability.

Privacy Takes Center Stage in WhatsApp’s Latest Feature Update

 


There are billions of WhatsApp users worldwide, making it a crucial communication platform for both personal and professional exchanges alike. But its wide spread has also made it an increasingly attractive target for cybercriminals because of its widespread reach and popularity. 

Recent security research has highlighted the possibility of emerging threats exploiting the platform's ecosystem. For example, a technique known as GhostPairing is being used to connect a victim's account to a malicious browser session through the use of a covert link. 

Additionally, separate studies have shown that the app's contact discovery functionality can also be exploited by third parties in order to expose large numbers of phone numbers, as well as photo profiles and other identifying information, causing fresh concerns about the exploitation of large-scale data. 

Despite the fact that WhatsApp relyes heavily on end-to-end encryption to safeguard message content and has made additional efforts to ensure the safety of message content, including passkey-secured backups and privacy-conscious artificial intelligence, security experts emphasize that user awareness remains an important factor in protecting the service from threats. 

When properly enabled, the platform comes with a variety of built-in tools that, when properly deployed, can significantly enhance account security and reduce risk of exposure to evolving digital threats when implemented properly. 

WhatsApp has continued to strengthen its end-to-end encryption framework in response to these evolving risks as well as to increase its portfolio of privacy-centric security controls. In response, it has been said that security analysts believe that limited user awareness often undermines the effectiveness of these safeguards, causing many account holders to not properly configure the protections that are already available to them. 

WhatsApp's native privacy settings can be an effective tool to prevent unauthorised access, curb data misuse, and reduce the risk of account takeover if they are properly enabled. There is an increased importance for this matter, especially because the platform is routinely used to exchange sensitive information, such as Aadhaar information and bank credentials, as well as one-time passwords, personal images, and official documents, on a daily basis.

In accordance with expert opinion, lax privacy configurations can put sensitive personal data at risk of fraud, identity theft, and social engineering attacks, while even a modest effort to review and tighten privacy controls can significantly improve one's digital security posture. It has come as a result of these broader privacy debates that the introduction of the Meta AI within WhatsApp has become a focus of concern for both users and privacy advocates. 

The AI chatbot, which can be accessed via a persistent blue icon on the Chats screen, will enable users to generate images and receive responses to prompts, but its continuous presence has sparked concerns over data handling, consent management, and user control over the chatbot. 

Despite WhatsApp's claims that only messages shared on the platform intentionally will be processed by the chatbot, many users are uneasy about the inability of the company to disable or remove Meta AI, especially since the company is unsure of the policies regarding data retention, training AI, and possible third-party access. 

Despite the company's caution against sharing sensitive personal information with the chatbot, users may still be able to use this data in order to refine the model as a whole, implicitly acknowledging the possibility of doing so. 

In light of this backdrop, WhatsApp has rolled out a feature aimed at protecting users from one another in lieu of addressing concerns associated with AI integration directly. It is designed to create an additional layer of confidentiality within selected conversations, and eliminates the use of Meta AI within those threads so that end-to-end encryption is maintained during user-to-user conversations. This framework reinforces the concept of end-to-end encryption at each level of the user-to-user conversation. 

As a result, many critics of this technology contend that while it is successful in safeguarding sensitive information comprehensively, it has limitations, such as allowing screenshots and manual saving of content. This limits its ability to provide comprehensive information protection.

The feature may temporarily reduce the anxiety surrounding Meta AI's involvement in private conversations, but experts claim it does little to resolve deeper concerns about transparency, consent, and control over the collection and use of data by AI systems.

In the future, WhatsApp will eventually need to address those concerns in a more direct manner in the course of rolling out additional updates. WhatsApp continues to serve as a primary channel for workplace communication, but security experts warn that convenience has quietly outpaced caution as it continues to consolidate its position.

Despite the fact that many professionals still use the default settings of their accounts, there are still risks associated with hijacking, impersonation, and data theft, which go far beyond the risks to your personal privacy, client privacy, and brand reputation.

There are several layers of security that are widely available, including two-step authentication, device management, biometric app locks, encrypted backups, and regular privacy checks, all of which remain underutilized despite their proven effectiveness at preventing common takeovers and phishing attempts. 

It must be noted that experts emphasize that technical controls alone are not sufficient to prevent cybercriminals from exploiting vulnerabilities. Human error remains one of the most exploited vulnerabilities, especially since attackers are increasingly using WhatsApp for social engineering scams, voice phishing, and impersonation of executives.

There has been an upward trend in the adoption of structured phishing simulation and awareness programs in recent years, which, according to industry data, can significantly reduce breach costs and employee susceptibility to attacks, as well as employees' privacy concerns. 

It is becoming increasingly important for organizations to take action to safeguard sensitive conversations in a climate where messaging apps have become both indispensable tools and high-value targets, through the disciplined application of WhatsApp's built-in protections and sustained investment in user training. 

The development of these developments, taken together, underscores the widening gap between WhatsApp's security capabilities and how it is used in reality. As the app continues to evolve into a hybrid space for personal communication, business coordination, and AI-assisted interactions, privacy and data protection concerns are growing as it develops into an increasingly hybrid platform. 

Various attack techniques have advanced over the years, but the combination of these techniques, the opaque integration of artificial intelligence, and the widespread reliance on default settings has resulted in an environment where users have become increasingly responsible for their own security. 

There has been some progress on WhatsApp's security, in terms of introducing meaningful safeguards, and it has also announced further updates, but their ultimate impact relies on informed adoption, transparent governance, and sustained scrutiny from regulators, as well as the security community. 

While clearer boundaries are being established around data use and user control, protecting conversations on one of the world's most popular messaging platforms will continue to be a technical challenge, but also a test of trust between users and the service they rely upon on a daily basis.

North Korean Hackers Abuse VS Code Projects in Contagious Interview Campaign to Deploy Backdoors

 

North Korea–linked threat actors behind the long-running Contagious Interview campaign have been seen leveraging weaponized Microsoft Visual Studio Code (VS Code) projects to trick victims into installing a backdoor on their systems.

According to Jamf Threat Labs, this activity reflects a steady refinement of a technique that first came to light in December 2025. The attackers continue to adapt their methods to blend seamlessly into legitimate developer workflows.

"This activity involved the deployment of a backdoor implant that provides remote code execution capabilities on the victim system," security researcher Thijs Xhaflaire said in a report shared with The Hacker News.

Initially revealed by OpenSourceMalware last month, the attack relies on social engineering job seekers. Targets are instructed to clone a repository hosted on platforms such as GitHub, GitLab, or Bitbucket and open it in VS Code as part of an alleged hiring assessment.

Once opened, the malicious repository abuses VS Code task configuration files to run harmful payloads hosted on Vercel infrastructure, with execution tailored to the victim’s operating system. By configuring tasks with the "runOn: folderOpen" option, the malware automatically runs whenever the project or any file within it is opened in VS Code. This process ultimately results in the deployment of BeaverTail and InvisibleFerret.

Later versions of the campaign have introduced more complex, multi-stage droppers concealed within task configuration files. These droppers masquerade as benign spell-check dictionaries, serving as a fallback if the malware cannot retrieve its payload from the Vercel-hosted domain.

As with earlier iterations, the obfuscated JavaScript embedded in these files executes immediately when the project is opened in the integrated development environment (IDE). It connects to a remote server ("ip-regions-check.vercel[.]app") and runs any JavaScript code sent back. The final payload stage consists of yet another heavily obfuscated JavaScript component.

Jamf also identified a newly observed infection method that had not been documented previously. While the initial lure remains the same—cloning and opening a malicious Git repository in VS Code—the execution path changes once the repository is trusted.

"When the project is opened, Visual Studio Code prompts the user to trust the repository author," Xhaflaire explained. "If that trust is granted, the application automatically processes the repository's tasks.json configuration file, which can result in embedded arbitrary commands being executed on the system."
"On macOS systems, this results in the execution of a background shell command that uses nohup bash -c in combination with curl -s to retrieve a JavaScript payload remotely and pipe it directly into the Node.js runtime. This allows execution to continue independently if the Visual Studio Code process is terminated, while suppressing all command output."

The JavaScript payload, delivered from Vercel, contains the core backdoor logic. It establishes persistence, gathers basic system information, and maintains communication with a command-and-control server to enable remote code execution and system profiling.

In at least one observed incident, Jamf noted additional JavaScript being executed approximately eight minutes after the initial compromise. This secondary payload beacons to the server every five seconds, executes further JavaScript instructions, and can delete traces of its activity upon command. Researchers suspect the code may have been generated with the help of artificial intelligence (AI), based on the language and inline comments found in the source.

Actors linked to the Democratic People's Republic of Korea (DPRK) are known to aggressively target software developers, especially those working in cryptocurrency, blockchain, and fintech environments. These individuals often possess elevated access to financial systems, wallets, and proprietary infrastructure.

By compromising developer accounts and machines, attackers could gain access to sensitive source code, internal platforms, intellectual property, and digital assets. The frequent tactical changes observed in this campaign suggest an effort to improve success rates and further the regime’s cyber espionage and revenue-generation objectives.

The disclosure coincides with findings from Red Asgard, which investigated a malicious repository abusing VS Code tasks to install a full-featured backdoor known as Tsunami (also called TsunamiKit), along with the XMRig cryptocurrency miner. Separately, Security Alliance reported on a similar attack where a victim was contacted on LinkedIn by actors posing as the CTO of a project named Meta2140. The attackers shared a Notion[.]so page containing a technical test and a Bitbucket link hosting the malicious code.

Notably, the attack framework includes multiple fallback mechanisms. These include installing a rogue npm package called "grayavatar" or executing JavaScript that downloads an advanced Node.js controller. This controller runs five modules designed to log keystrokes, capture screenshots, scan the home directory for sensitive data, replace clipboard wallet addresses, steal browser credentials, and maintain persistent communication with a remote server.

The malware further establishes a parallel Python environment using a stager script that supports data exfiltration, cryptocurrency mining via XMRig, keylogging, and the installation of AnyDesk for remote access. The Node.js and Python components are tracked as BeaverTail and InvisibleFerret, respectively.

Collectively, these observations show that the state-sponsored group is testing several delivery mechanisms simultaneously to maximize the chances of successful compromise.

"While monitoring, we've seen the malware that is being delivered change very quickly over a short amount of time," Jaron Bradley, director of Jamf Threat Labs, told The Hacker News. It's worth noting that the payload we observed for macOS was written purely in JavaScript and had many signs of being AI assisted. It's difficult to know exactly how quickly attackers are changing their workflows, but this particular threat actor has a reputation for adapting quickly."

To reduce exposure, developers are urged to remain cautious when handling third-party repositories—particularly those shared during hiring exercises—carefully review source code before opening it in VS Code, and limit npm installations to trusted, well-vetted packages.

"This activity highlights the continued evolution of DPRK-linked threat actors, who consistently adapt their tooling and delivery mechanisms to integrate with legitimate developer workflows," Jamf said. "The abuse of Visual Studio Code task configuration files and Node.js execution demonstrates how these techniques continue to evolve alongside commonly used development tools."

Geopolitical Conflict Is Increasing the Risk of Cyber Disruption




Cybersecurity is increasingly shaped by global politics. Armed conflicts, economic sanctions, trade restrictions, and competition over advanced technologies are pushing countries to use digital operations as tools of state power. Cyber activity allows governments to disrupt rivals quietly, without deploying traditional military force, making it an attractive option during periods of heightened tension.

This development has raised serious concerns about infrastructure safety. A large share of technology leaders fear that advanced cyber capabilities developed by governments could escalate into wider cyber conflict. If that happens, systems that support everyday life, such as electricity, water supply, and transport networks, are expected to face the greatest exposure.

Recent events have shown how damaging infrastructure failures can be. A widespread power outage across parts of the Iberian Peninsula was not caused by a cyber incident, but it demonstrated how quickly modern societies are affected when essential services fail. Similar disruptions caused deliberately through cyber means could have even more severe consequences.

There have also been rare public references to cyber tools being used during political or military operations. In one instance, U.S. leadership suggested that cyber capabilities were involved in disrupting electricity in Caracas during an operation targeting Venezuela’s leadership. Such actions raise concerns because disabling utilities affects civilians as much as strategic targets.

Across Europe, multiple incidents have reinforced these fears. Security agencies have reported attempts to interfere with energy infrastructure, including dams and national power grids. In one case, unauthorized control of a water facility allowed water to flow unchecked for several hours before detection. In another, a country narrowly avoided a major blackout after suspicious activity targeted its electricity network. Analysts often view these incidents against the backdrop of Europe’s political and military support for Ukraine, which has been followed by increased tension with Moscow and a rise in hybrid tactics, including cyber activity and disinformation.

Experts remain uncertain about the readiness of smart infrastructure to withstand complex cyber operations. Past attacks on power grids, particularly in Eastern Europe, are frequently cited as warnings. Those incidents showed how coordinated intrusions could interrupt electricity for millions of people within a short period.

Beyond physical systems, the information space has also become a battleground. Disinformation campaigns are evolving rapidly, with artificial intelligence enabling the fast creation of convincing false images and videos. During politically sensitive moments, misleading content can spread online within hours, shaping public perception before facts are confirmed.

Such tactics are used by states, political groups, and other actors to influence opinion, create confusion, and deepen social divisions. From Eastern Europe to East Asia, information manipulation has become a routine feature of modern conflict.

In Iran, ongoing protests have been accompanied by tighter control over internet access. Authorities have restricted connectivity and filtered traffic, limiting access to independent information. While official channels remain active, these measures create conditions where manipulated narratives can circulate more easily. Reports of satellite internet shutdowns were later contradicted by evidence that some services remained available.

Different countries engage in cyber activity in distinct ways. Russia is frequently associated with ransomware ecosystems, though direct state involvement is difficult to prove. Iran has used cyber operations alongside political pressure, targeting institutions and infrastructure. North Korea combines cyber espionage with financially motivated attacks, including cryptocurrency theft. China is most often linked to long-term intelligence gathering and access to sensitive data rather than immediate disruption.

As these threats manifest into serious matters of concern, cybersecurity is increasingly viewed as an issue of national control. Governments and organizations are reassessing reliance on foreign technology and cloud services due to legal, data protection, and supply chain concerns. This shift is already influencing infrastructure decisions and is expected to play a central role in security planning as global instability continues into 2026.

Google Gemini Calendar Flaw Allows Meeting Invites to Leak Private Data

 

Though built to make life easier, artificial intelligence helpers sometimes carry hidden risks. A recent study reveals that everyday features - such as scheduling meetings - can become pathways for privacy breaches. Instead of protecting data, certain functions may unknowingly expose it. Experts from Miggo Security identified a flaw in Google Gemini’s connection to Google Calendar. Their findings show how an ordinary invite might secretly gather private details. What looks innocent on the surface could serve another purpose beneath. 

A fresh look at Gemini shows it helps people by understanding everyday speech and pulling details from tools like calendars. Because the system responds to words instead of rigid programming rules, security experts from Miggo discovered a gap in its design. Using just text that seems normal, hackers might steer the AI off course. These insights, delivered openly to Hackread.com, reveal subtle risks hidden in seemingly harmless interactions. 

A single calendar entry is enough to trigger the exploit - no clicking, no downloads, no obvious red flags. Hidden inside what looks like normal event details sits coded directions meant for machines, not people. Rather than arriving through email attachments or shady websites, the payload comes disguised as routine scheduling data. The wording blends in visually, yet when processed by Gemini, it shifts into operational mode. Instructions buried in plain sight tell the system to act without signaling intent to the recipient. 

A single harmful invitation sits quietly once added to the calendar. Only after the user poses a routine inquiry - like asking about free time on Saturday - is anything set in motion. When Gemini checks the agenda, it reads the tainted event along with everything else. Within that entry lies a concealed instruction: gather sensitive calendar data and compile a report. Using built-in features of Google Calendar, the system generates a fresh event containing those extracted details. 

Without any sign, personal timing information ends up embedded within a new appointment. What makes the threat hard to spot is its invisible nature. Though responses appear normal, hidden processes run without alerting the person using the system. Instead of bugs in software, experts point to how artificial intelligence understands words as the real weak point. The concern grows as behavior - rather than broken code - becomes the source of danger. Not seeing anything wrong does not mean everything is fine. 

Back in December 2025, problems weren’t new for Google’s AI tools when it came to handling sneaky language tricks. A team at Noma Security found a gap called GeminiJack around that time. Hidden directions inside files and messages could trigger leaks of company secrets through the system. Experts pointed out flaws deep within how these smart tools interpret context across linked platforms. The design itself seemed to play a role in the vulnerability. Following the discovery by Miggo Security, Google fixed the reported flaw. 

Still, specialists note similar dangers remain possible. Most current protection systems look for suspicious code or URLs - rarely do they catch damaging word patterns hidden within regular messages. When AI helpers get built into daily software and given freedom to respond independently, some fear misuse may grow. Unexpected uses of helpful features could lead to serious consequences, researchers say.

Featured