
Swiss Startup Soverli Introduces a Sovereign OS Layer to Secure Smartphones Beyond Android and iOS

 

A Swiss cybersecurity startup, Soverli, has introduced a new approach to mobile security that challenges how smartphones are traditionally protected. Instead of relying solely on Android or iOS, the company has developed a fully auditable sovereign operating system layer that can run independently alongside existing mobile platforms. The goal is to ensure that critical workflows remain functional even if the underlying operating system is compromised, without forcing users to abandon the convenience of modern smartphones. 

Soverli’s architecture allows multiple operating systems to operate simultaneously on a single device, creating a hardened environment that is logically isolated from Android or iOS. This design enables organizations to maintain operational continuity during cyber incidents, misconfigurations, or targeted attacks affecting the primary mobile OS. By separating critical applications into an independent software stack, the platform reduces reliance on the security posture of consumer operating systems alone. 

Early adoption of the technology is focused on mission-critical use cases, particularly within the public sector. Emergency services, law enforcement agencies, and firefighting units are among the first groups testing the platform, where uninterrupted communication and system availability are essential. By isolating essential workflows from the main operating system, these users can continue operating even if Android experiences failures or security breaches. The same isolation model is also relevant for journalists and human rights workers, who face elevated surveillance risks and require secure communication channels that remain protected under hostile conditions.  

According to Soverli’s leadership, the platform represents a shift in how mobile security is approached. Rather than assuming that the primary operating system will always remain secure, the company’s model is built around resilience and continuity. The sovereign layer is designed to stay operational even when Android is compromised, while still allowing users to retain the familiar smartphone experience they expect. Beyond government and critical infrastructure use cases, the platform is gaining attention from enterprises exploring secure bring-your-own-device programs. 

The technology allows employees to maintain a personal smartphone environment alongside a tightly controlled business workspace. This separation helps protect sensitive corporate data without intruding on personal privacy or limiting device functionality. The system integrates with mobile device management tools and incorporates auditable verification mechanisms to strengthen identity protection and compliance. The underlying technology was developed over four years at ETH Zurich and does not require specialized hardware modifications. 

Engineers designed the system to minimize the attack surface for sensitive applications while encrypting data within the isolated operating system. Users can switch between Android and the sovereign environment in milliseconds, balancing usability with enhanced security. Demonstrations have shown secure messaging applications operating inside the sovereign layer, remaining confidential even if the main OS is compromised. Soverli’s approach aligns with Europe’s broader push toward digital sovereignty, particularly in areas where governments and enterprises demand auditable and trustworthy infrastructure. 

Smartphones, often considered a weak link in enterprise security, are increasingly being re-evaluated as platforms capable of supporting sovereign-grade protection without sacrificing usability. Backed by $2.6 million in pre-seed funding, the company plans to expand its engineering team, deepen partnerships with device manufacturers, and scale integrations with enterprise productivity tools. Investors believe the technology could redefine mobile security expectations, positioning smartphones as resilient platforms capable of operating securely even in the face of OS-level compromise.

Cookies Explained: Accept or Reject for Online Privacy

 

Online cookies sit at the centre of a trade-off between convenience and privacy, and those “accept all” or “reject all” pop-ups are how websites ask for your permission to track and personalise your experience. Understanding what each option means helps you decide how much data you are comfortable sharing.

Role of cookies 

Cookies are small files that websites store on your device to remember information about you and your activity. They can keep you logged in, remember your preferred settings, or help online shops track items in your cart. 
  • Session cookies are temporary and disappear when you close the browser or after inactivity, supporting things like active shopping carts. 
  • Persistent cookies remain for days to years, recognising you when you return and saving details like login credentials. 
  • Advertisers use cookies to track browsing behaviour and deliver targeted ads based on your profile.
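To make the session/persistent distinction above concrete, here is a minimal browser-side sketch in TypeScript. The cookie names and values are illustrative only; the practical difference between the two types is simply the presence of an expiry attribute such as Max-Age.

    // Session cookie: no Expires/Max-Age, so the browser discards it when the session ends.
    document.cookie = "cart_id=abc123; Path=/; SameSite=Lax";

    // Persistent cookie: Max-Age (about 30 days here) keeps it on disk across restarts.
    document.cookie =
      "preferred_lang=en-GB; Path=/; Max-Age=" + 60 * 60 * 24 * 30 + "; SameSite=Lax";

    // Everything visible to this page comes back as one string:
    console.log(document.cookie); // "cart_id=abc123; preferred_lang=en-GB"
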
Essential vs non-essential cookies

Most banners state that a site uses essential cookies that are required for core functions such as logging in or processing payments. These cannot usually be disabled because the site would break without them. 

Non-essential cookies generally fall into three groups:
  • Functional cookies personalise your experience, for example by remembering language or region.
  • Analytics cookies collect statistics on how visitors use the site, helping owners improve performance and content.
  • Advertising cookies, often from third parties, build cross-site, cross-device profiles to serve personalised ads.

Accept all or reject all?

Choosing accept all gives consent for the site and third parties to use every category of cookie and tracker. This enables full functionality and personalised features, including tailored advertising driven by your behaviour profile. 

Selecting reject all (or ignoring the banner) typically blocks every cookie except those essential for the site to work. You still access core services, but may lose personalisation and see fewer or less relevant embedded third-party elements. Your decision is stored in a consent cookie and many sites will ask you again after six to twelve months.

Privacy, GDPR and control

Under the EU’s GDPR, cookies that identify users count as personal data, so sites must request consent, explain what is being tracked, document that consent and make it easy to refuse or withdraw it. Many websites outside the EU follow similar rules because they handle European traffic.

To reduce consent fatigue, a specification called Global Privacy Control lets browsers send a built-in privacy signal instead of forcing users to click through banners on every site, though adoption remains limited and voluntary. If you regret earlier choices, you can clear cookies in your browser settings, which resets consent but also signs you out of most services.
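As a rough illustration of how a site might honour that signal, the sketch below checks the boolean the Global Privacy Control proposal exposes on the browser's navigator object. The showConsentBanner function is a hypothetical placeholder for whatever consent flow a site already uses, and browser support for the property varies, so its absence is treated as "no signal".

    // Hypothetical placeholder for the site's existing consent-banner flow.
    declare function showConsentBanner(): void;

    // The GPC proposal exposes navigator.globalPrivacyControl; absence means "no signal".
    const gpcEnabled: boolean =
      typeof navigator !== "undefined" &&
      (navigator as any).globalPrivacyControl === true;

    if (gpcEnabled) {
      // Treat the signal as an opt-out: skip non-essential trackers entirely.
      console.log("GPC signal present - suppressing advertising and analytics tags");
    } else {
      // No signal: fall back to asking via the banner.
      showConsentBanner();
    }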

Encrypted Chats Under Siege: Cyber-Mercenaries Target High-Profile Users

 

Encrypted communication, once considered the final refuge for those seeking private dialogue, now faces a wave of targeted espionage campaigns that strike not at the encryption itself but at the fragile devices that carry it. Throughout this year, intelligence analysts and cybersecurity researchers have observed a striking escalation in operations using commercial spyware, deceptive app clones, and zero-interaction exploits to infiltrate platforms such as Signal and WhatsApp.
 
What is emerging is not a story of broken cryptographic protocols, but of adversaries who have learned to manipulate the ecosystem surrounding secure messaging, turning the endpoints themselves into compromised windows through which confidential conversations can be quietly observed.
  
The unfolding threat does not resemble the mass surveillance operations of previous decades. Instead, adversarial groups, ranging from state-aligned operators to profit-driven cyber-mercenaries, are launching surgical attacks against individuals whose communications carry strategic value.
 
High-ranking government functionaries, diplomats, military advisors, investigative journalists, and leaders of civil society organizations across the United States, Europe, the Middle East, and parts of Asia have found themselves increasingly within the crosshairs of these clandestine campaigns.
 
The intent, investigators say, is rarely broad data collection. Rather, the aim is account takeover, message interception, and long-term device persistence that lays the groundwork for deeper espionage efforts.
 

How Attackers Are Breaching Encrypted Platforms

 
At the center of these intrusions is a shift in methodology: instead of attempting to crack sophisticated encryption, threat actors compromise the applications and operating systems that enable it. Across multiple investigations, researchers have uncovered operations that rely on:
 
1. Exploiting Trusted Features
 
Russia-aligned operators have repeatedly abused the device-linking capabilities of messaging platforms, persuading victims—via social engineering—to scan malicious connection requests. This enables a stealthy secondary device to be linked to a target’s account, giving attackers real-time access without altering the encryption layer itself.
 
2. Deploying Zero-Interaction Exploits
 
Several campaigns emerged this year in which attackers weaponized vulnerabilities that required no user action at all. Specially crafted media files sent via messaging apps, or exploit chains triggered upon receipt, allowed silent compromise of devices, particularly on Android models widely used in conflict-prone regions.
 
3. Distributing Counterfeit Applications
 
Clone apps impersonating popular platforms have proliferated across unofficial channels, especially in parts of the Middle East and South Asia. These imitations often mimic user interfaces with uncanny accuracy while embedding spyware capable of harvesting chats, recordings, contact lists, and stored files.
 
4. Leveraging Commercial Spyware and “Cyber-For-Hire” Tools
 
Commercial surveillance products, traditionally marketed to law enforcement or intelligence agencies, continue to spill into the underground economy. Once deployed, these tools often serve as an entry point for further exploitation, allowing attackers to drop additional payloads, manipulate settings, or modify authentication tokens.
 

Why Encrypted Platforms Are Under Unprecedented Attack

 
Analysts suggest that encrypted applications have become the new battleground for geopolitical intelligence. Their rising adoption by policymakers, activists, and diplomats has elevated them from personal communication tools to repositories of sensitive, sometimes world-shaping information.
 
Because the cryptographic foundations remain resilient, adversaries have pivoted toward undermining the assumptions around secure communication—namely, that the device you hold in your hand is trustworthy. In reality, attackers are increasingly proving that even the strongest encryption is powerless if the endpoint is already compromised.
  
Across the world, governments are imposing stricter regulations on spyware vendors and reassessing the presence of encrypted apps on official devices. Several legislative bodies have either limited or outright banned certain messaging platforms in response to the increasing frequency of targeted exploits.
 
Experts warn that the rise of commercialized cyber-operations, where tools once reserved for state intelligence now circulate endlessly between contractors, mercenaries, and hostile groups, signals a long-term shift in digital espionage strategy rather than a temporary spike.
 

What High-Risk Users Must Do

 
Security specialists emphasize that individuals operating in sensitive fields cannot rely on everyday digital hygiene alone. Enhanced practices, such as hardware isolation, phishing-resistant authentication, rigid permission control, and using only trusted app repositories, are rapidly becoming baseline requirements.
 
Some also recommend adopting hardened device modes, performing frequent integrity checks, and treating unexpected prompts (including QR-code requests) as potential attack vectors.

Australia Bans Under-16s from Social Media Starting December

 

Australia is introducing a world-first ban blocking under-16s from most major social media platforms, and Meta has begun shutting down or freezing teen accounts in advance of the law taking effect. 

From 10 December, Australians under 16 will be barred from using platforms including Instagram, Facebook, Threads, TikTok, YouTube, X, Reddit, Snapchat and others, with services facing fines up to A$50m if they do not take “reasonable steps” to keep underage users out. Prime Minister Anthony Albanese has called the measure “world-leading”, arguing it will protect children from online pressure, unwanted contact and other risks. 

Meta’s account shutdown plan

Meta has started messaging users it believes are 13–15 years old, telling them their Instagram, Facebook and Threads accounts will be deactivated from 4 December and that no new under-16 accounts can be created from that date. Affected teens are being urged to update contact details so they can be notified when eligible to rejoin and are given options to download and save their photos, videos and messages before deactivation. 

Teens who say they are old enough to stay on the platforms can challenge Meta’s decision by submitting a “video selfie” for facial age estimation or uploading a driving licence or other government ID. These and other age-assurance tools were recently tested for the Australian government, with the trial concluding that no single foolproof solution exists and that each method has trade-offs.

Enforcement, concerns and workarounds

Australia’s e-Safety Commissioner says the goal is to shield teens from harm while online, but platforms warn tech-savvy young people may try to circumvent restrictions and argue instead for laws requiring parental consent for under-16s. In a related move, Roblox has said it will block under-16s from chatting with unknown adults and will introduce mandatory age verification for chat in Australia, New Zealand and the Netherlands from December before expanding globally. 

The e-Safety regulator has listed the services subject to the ban: Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, X and YouTube. Exempt services include Discord, GitHub, Google Classroom, Lego Play, Messenger, Roblox, Steam and Steam Chat, WhatsApp and YouTube Kids, which are viewed as either educational, messaging-focused or more controlled environments for younger users.

Bluetooth Security Risks: Why Leaving It On Could Endanger Your Data

 

Bluetooth technology, widely used for wireless connections across smartphones, computers, health monitors, and peripherals, offers convenience but carries notable security risks—especially when left enabled at all times. While Bluetooth security and encryption have advanced over decades, the protocol remains exposed to various cyber threats, and many users neglect these vulnerabilities, putting personal data at risk.

Common Bluetooth security threats

Leaving Bluetooth permanently on is among the most frequent cybersecurity oversights. Doing so effectively announces your device’s continuous availability to connect, making it a target for attackers. 

Threat actors exploit Bluetooth through methods like bluesnarfing—the unauthorized extraction of data—and bluejacking, where unsolicited messages and advertisements are sent without consent. If hackers connect, they may siphon valuable information such as banking details, contact logs, and passwords, which can subsequently be used for identity theft, fraudulent purchases, or impersonation.

A critical issue is that data theft via Bluetooth is often invisible—victims receive no notification or warning. Further compounding the problem, Bluetooth signals can be leveraged for physical tracking. Retailers, for instance, commonly use Bluetooth beacons to trace shopper locations and gather granular behavioral data, raising privacy concerns.

Importantly, Bluetooth-related vulnerabilities affect more than just smartphones; they extend to health devices and wearables. Although compromising medical Bluetooth devices such as pacemakers or infusion pumps is technically challenging, targeted attacks remain a possibility for motivated adversaries.

Defensive strategies 

Mitigating Bluetooth risks starts with turning off Bluetooth in public or unfamiliar environments and disabling automatic reconnection features when constant use (e.g., wireless headphones) isn’t essential. Additionally, set devices to ‘undiscoverable’ mode as a default, blocking unexpected or unauthorized connections.

Regularly updating operating systems is vital, since outdated devices are prone to exploits like BlueBorne—a severe vulnerability allowing attackers full control over devices, including access to apps and cameras. Always reject unexpected Bluetooth pairing requests and periodically review app permissions, as many apps may exploit Bluetooth to track locations or obtain contact data covertly. 

Utilizing a Virtual Private Network (VPN) enhances overall security by encrypting network activity and masking IP addresses, though this measure isn’t foolproof. Ultimately, while Bluetooth offers convenience, mindful management of its settings is crucial for defending against the spectrum of privacy and security threats posed by wireless connectivity.

US Judge Permanently Bans NSO Group from Targeting WhatsApp Users

 

A U.S. federal judge has issued a permanent injunction barring Israeli spyware maker NSO Group from targeting WhatsApp users with its notorious Pegasus spyware, marking a landmark victory for Meta following years of litigation. 

The decision, handed down by Judge Phyllis J. Hamilton in the Northern District of California, concludes a legal battle that began in 2019, when Meta (the parent company of WhatsApp) sued NSO after discovering that about 1,400 users—including journalists, human rights activists, lawyers, political dissidents, diplomats, and government officials—had been surreptitiously targeted through “zero-click” Pegasus exploits.

The court found that NSO had reverse-engineered WhatsApp’s code and repeatedly updated its spyware to evade detection and security fixes, causing what the judge described as “irreparable harm” and undermining WhatsApp’s core promise of privacy and end-to-end encryption. The injunction prohibits NSO not only from targeting WhatsApp users but also from accessing or assisting others in accessing WhatsApp’s infrastructure, and further requires NSO to erase any data gathered from targeted users.

This victory for Meta was significant, but the court also reduced the previously awarded damages from $168 million to just $4 million, finding the original punitive sum excessive despite NSO’s egregious conduct. Nevertheless, the ruling sets a precedent for how U.S. tech companies can use the courts to combat mercenary spyware operations and commercial surveillance firms that compromise user privacy.

NSO Group argued that the permanent ban could “drive the company out of business,” pointing out that Pegasus is its flagship product used by governments ostensibly for fighting crime and terrorism. An NSO spokesperson claimed the ruling would not impact existing government customers, but Meta and digital rights advocates insist this bans NSO from ever targeting WhatsApp and holds them accountable for civil society surveillance.

The case highlights the ongoing tension between tech giants and commercial spyware vendors and signals a new willingness by courts to intervene to protect user privacy against advanced cyber-surveillance tools.

OpenAI's Sora App Raises Facial Data Privacy Concerns

 

OpenAI's video-generating app, Sora, has raised significant questions regarding the safety and privacy of users' biometric data, particularly with its "Cameo" feature that creates realistic AI videos, or "deepfakes," using a person's face and voice. 

To power this functionality, OpenAI confirms it must store users' facial and audio data. The company states this sensitive data is encrypted during both storage and transmission, and uploaded cameo data is automatically deleted after 30 days. Despite these assurances, privacy concerns remain. The app's ability to generate hyper-realistic videos has sparked fears about the potential for misuse, such as the creation of unauthorized deepfakes or the spread of misinformation. 

OpenAI acknowledges a slight risk that the app could produce inappropriate content, including sexual deepfakes, despite the safeguards in place. In response to these risks, the company has implemented measures to distinguish AI-generated content, including visible watermarks and invisible C2PA metadata in every video created with Sora.

The company emphasizes that users have control over their likeness. Individuals can decide who is permitted to use their cameo and can revoke access or delete any video featuring them at any time. However, a major point of contention is the app's account deletion policy. Deleting a Sora account also results in the termination of the user's entire OpenAI account, including ChatGPT access, and the user cannot register again with the same email or phone number. 

While OpenAI has stated it is developing a way for users to delete their Sora account independently, this integrated deletion policy has surprised and concerned many users who wish to remove their biometric data from Sora without losing access to other OpenAI services.

The app has also drawn attention for potential copyright violations, with users creating videos featuring well-known characters from popular media. While OpenAI provides a mechanism for rights holders to request the removal of their content, the platform's design has positioned it as a new frontier for intellectual property disputes.

Critical WhatsApp Zero Click Vulnerability Abused with DNG Payload

 


Attackers are reportedly exploiting a recently discovered vulnerability in WhatsApp's iOS application as part of a sophisticated cyber campaign that underscores how zero-day vulnerabilities are being weaponised in today's cyber warfare. The zero-click flaw, tracked as CVE-2025-55177 with a CVSS score of 5.4, lets malicious actors trigger the processing of content from an arbitrary URL on a victim's device without any user interaction whatsoever. 

The vulnerability gives threat actors a way to manipulate WhatsApp's device-linking synchronization process, forcing the app to process attacker-controlled content during device linking. 

Although the flaw on its own could have allowed crafted content to be injected or services to be disrupted, its real danger emerged when it was combined with Apple's CVE-2025-43300, a security flaw in the ImageIO framework that parses image files. Two other vulnerabilities in iOS and macOS that allowed out-of-bounds memory writes then enabled remote code execution across these systems. 

Together, these weaknesses formed a powerful exploit chain that could deliver malicious images through an incoming WhatsApp message, causing infection without the victim ever having to click, tap or interact with anything at all, a quintessential zero-click attack scenario. Investigators found that the targeting of victims was intentional and highly selective. 

WhatsApp has confirmed that it notified fewer than 200 people about potential threats in its apps, a scale comparable to earlier mercenary spyware operations targeting high-value users. Apple has also acknowledged active exploitation in the wild and has issued security advisories concurrently. 

Researchers from Amnesty International noted that, despite initial signs suggesting limited probing of Android devices, the campaign was mainly focused on Apple's iOS and macOS ecosystems. The implications are particularly severe for businesses.

Corporate executives, legal teams, and employees with privileged access to confidential intellectual property risk being spied on, or having data exfiltrated, through WhatsApp on their work devices, which represents a direct and potentially invisible entry point into corporate data systems. 

Cybersecurity and Infrastructure Security Agency (CISA) officials say the vulnerability stemmed from an “incomplete authorisation of linked device synchronisation messages” in WhatsApp for iOS prior to version 2.25.2.173, WhatsApp Business for iOS prior to version 2.25.1.78, and WhatsApp for Mac prior to version 2.25.21.78. 
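For organisations that inventory mobile apps through an MDM export, a simple version comparison can flag devices still running builds older than the fixed releases cited above. The sketch below is illustrative only: the helper function is hypothetical rather than part of any official tooling, and the threshold value is taken from the article text and should be confirmed against the vendor's own advisory before acting on it.

    // Returns true if the installed dotted version is older than the fixed version.
    function isOlder(installed: string, fixed: string): boolean {
      const a = installed.split(".").map(Number);
      const b = fixed.split(".").map(Number);
      for (let i = 0; i < Math.max(a.length, b.length); i++) {
        const x = a[i] ?? 0;
        const y = b[i] ?? 0;
        if (x !== y) return x < y;
      }
      return false; // equal versions are not flagged
    }

    // Example: flag a hypothetical fleet entry as needing an update.
    const fixedIosVersion = "2.25.2.173"; // as cited above; treat as an assumption
    console.log(isOlder("2.25.1.9", fixedIosVersion)); // true -> update required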

Researchers believe the flaw was exploited as part of a complex chain, used in conjunction with a previously patched iOS vulnerability, CVE-2025-43300, to deliver spyware onto targeted devices. A U.S. government advisory has urged federal employees to update their Apple devices immediately, as the campaign has reportedly affected approximately 200 people. 

The discovery adds to the growing body of evidence that advanced threat actors increasingly rely on chaining multiple zero-day exploits to circumvent hardened defences and compromise devices remotely. In 2024, Google's Threat Analysis Group reported 75 actively exploited zero-days, a figure that reflects how quickly the scale of these attacks is growing. 

This stealthy intrusion method continues to dominate as 2025 unfolds, accounting for nearly one-third of all recorded compromise attempts worldwide this year. Cybersecurity experts are quick to point out that the WhatsApp incident demonstrates once more the fragility of digital trust, even on encrypted platforms once considered secure. 

According to a technical analysis, the attackers exploited a subtle logic flaw in WhatsApp’s device-linking system, disguising malicious content so that it appeared to originate from the user’s own paired device.

Through this vulnerability, a specially crafted Digital Negative (DNG) file could be delivered which, once processed automatically by the application, triggered a series of memory-corruption events resulting in remote code execution. Researchers at DarkNavyOrg have demonstrated a full proof-of-concept, showing how an automated script can authenticate, generate the malicious DNG payload, and send it to the intended victim without triggering any security alerts. 

When the exploit runs, no visible warnings, pop-ups, or message notifications appear on the user's screen. Attackers gain unrestricted access to messages, media, the microphone, and the camera, and can even install spyware undetected. The vulnerability has been reported to WhatsApp and Apple, and patches have been released to mitigate the risk. 

Even with patches available, security experts recommend that users install the latest updates immediately and remain cautious with unsolicited media files, even those seemingly sent by trusted contacts. In the meantime, organisations should strengthen endpoint monitoring, enforce mobile device management controls, and closely track anomalous messaging behaviour until remediation is complete. 

The incident makes clear the need for robust input validation, secure file-handling protocols, and timely security updates to prevent silent but highly destructive attacks against mainstream communication platforms. Cyber adversaries have long targeted such platforms, and WhatsApp is no exception. 

It is noteworthy that despite the platform's strong security framework and end-to-end encryption, threat actors are still hunting for new vulnerabilities to exploit. Although there are several different cyberattack types, security experts emphasise that zero-click exploits remain the most insidious, since they can compromise devices without the user having to do anything. 

V4WEB Cybersecurity founder Ritesh Bhatia explained that the recent WhatsApp advisory concerns one of these zero-click exploits, an attack method that does not require the victim to click, download, or take any action at all. Unlike phishing, where a user must click a malicious link, Bhatia said, zero-click attacks operate silently in the background. 

According to Bhatia, the attackers used a vulnerability in WhatsApp together with a vulnerability in Apple's iOS to break into targeted devices, a process he told Entrepreneur India is known as chaining vulnerabilities. 

Chaining vulnerabilities allows one weakness to provide entry while the other provides control of the system as a whole. Bhatia stressed that spyware deployed this way can carry out a wide range of invasive functions, such as reading messages, listening through the microphone, tracking location, and accessing the camera in real time. 

Warning signs include excessive battery drain, overheating, unusual data usage, and unexpected system crashes, any of which may indicate a compromised device. Likewise, Anirudh Batra, a senior security researcher at CloudSEK, said that zero-click vulnerabilities represent the "holy grail" for hackers, since they can be exploited seamlessly even on fully updated and ostensibly secure devices, with no action required from the target.

Successful exploitation gives attackers full control over targeted devices, allowing them to access sensitive data, monitor communications, and deploy additional malware without any visible sign of compromise. The incident underscores the persistent security risks around complex file formats and cross-platform messaging apps, since flaws in file parsers continue to serve as common pathways to remote code execution.

DarkNavyOrg's investigation is continuing, including work on a related Samsung vulnerability (CVE-2025-21043) identified as a potential security concern. Both WhatsApp and Apple have warned users to update their operating systems and applications immediately, and Meta confirmed that fewer than 200 users were notified of in-app threats. 

Some journalists, activists, and other public figures are reported to have been targeted. Meta spokesperson Emily Westcott stressed the importance of keeping devices current and enabling WhatsApp's privacy and security features. Amnesty International has also noted possible Android infections and is conducting further investigation. 

Similar spyware operations have surfaced before, most notably the campaign behind WhatsApp's 2019 lawsuit against Israel's NSO Group, whose Pegasus spyware allegedly targeted 1,400 users and later became notorious for its role in global cyberespionage. Despite sanctions and international scrutiny, such surveillance operations continue to evolve, reflecting the persistent threat posed by advanced mobile exploits. 

The latest revelations highlight the need for individuals and organisations to prioritise proactive rather than reactive cybersecurity measures. As zero-click exploits become more sophisticated, the traditional boundaries of digital security, once reliant solely on user caution, are eroding rapidly. Constant vigilance, prompt software updates, and layered defence strategies are increasingly essential to protect both personal and business information. 

Organisations should invest in threat intelligence solutions, continuous monitoring systems, and regular mobile security audits to detect potential threats early. Individual users can reduce their exposure by keeping devices and applications up to date, enabling built-in privacy protections, and avoiding unnecessary third-party integrations. 

The WhatsApp exploit is an important reminder that even trusted, encrypted platforms may be compromised at some point. The cyber espionage industry is evolving into a silent and targeted operation, and digital trust must be reinforced through transparent processes, rapid patching, and global cooperation between tech companies and regulators. A strong defence against invisible intrusions still resides in awareness and timely action.

Call-Recording App Neon Suspends Service After Security Breach

 

Neon, a viral app that pays users to record their phone calls—intending to sell these recordings to AI companies for training data—has been abruptly taken offline after a severe security flaw exposed users’ personal data, call recordings, and transcripts to the public.

Neon’s business model hinged on inviting users to record their calls through a proprietary interface, with payouts of 30 cents per minute for calls between Neon users and half that for calls to non-users, up to $30 per day. The company claimed it anonymized calls by stripping out personally identifiable information before selling the recordings to “trusted AI firms,” but this privacy commitment was quickly overshadowed by a crippling security lapse.

Within a day of rising to the top ranks of the App Store—boasting 75,000 downloads in a single day—the app was taken down after researchers discovered a vulnerability that allowed anyone to access other users’ call recordings, transcripts, phone numbers, and call metadata. Journalists found that the app’s backend was leaking not only public URLs to call audio files and transcripts but also details about recent calls, including call duration, participant phone numbers, timing, and even user earnings.

Alarmingly, these links were unrestricted—meaning anyone with the URL could eavesdrop on conversations—raising immediate privacy and legal concerns, especially given complex consent laws around call recording in various jurisdictions.

Founder and CEO Alex Kiam notified users that Neon was being temporarily suspended and promised to “add extra layers of security,” but did not directly acknowledge the security breach or its scale. The app itself remains visible in app stores but is nonfunctional, with no public timeline for its return. If Neon relaunches, it will face intense scrutiny over whether it has genuinely addressed the security and privacy issues that forced its shutdown.

This incident underscores the broader risks of apps monetizing sensitive user data—especially voice conversations—in exchange for quick rewards, a model that has emerged as AI firms seek vast, real-world datasets for training models. Neon’s downfall also highlights the challenges app stores face in screening for complex privacy and security flaws, even among fast-growing, high-profile apps.

For users, the episode is a stark reminder to scrutinize privacy policies and app permissions, especially when participating in novel data-for-cash business models. For the tech industry, it raises questions about the adequacy of existing safeguards for apps handling sensitive audio and personal data—and about the responsibilities of platform operators to prevent such breaches before they occur.

As of early October 2025, Neon remains offline, with users awaiting promised payouts and a potential return of the service, but with little transparency about how (or whether) the app’s fundamental security shortcomings have been fixed.

Google plans shift to risk-based security updates for Android phones


 

The Android ecosystem is set to undergo a significant transformation in its security posture, with Google preparing to overhaul how it addresses software vulnerabilities. 

According to reports by Android Authority, the company plans to introduce a new framework known as the Risk-Based Update System (RBUS), which will streamline patching for device manufacturers and help end users receive protection faster. At present, Android Security Bulletins (ASBs) are published every month and contain fixes for a variety of vulnerabilities, from minor flaws to severe exploits. 

Hardware partners and Original Equipment Manufacturers (OEMs) are notified at least one month in advance. Under the new approach, however, updates will no longer be bundled together indiscriminately; Google intends to prioritize real-world threats instead. 

As part of this initiative, Google will ensure that vulnerabilities which are actively exploited, or which pose the greatest risk to user privacy and data security, are patched at the earliest possible opportunity. Essential protections will no longer be held back by less critical issues such as low-severity denial-of-service bugs. 
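Google has not published how RBUS will score vulnerabilities, so the following TypeScript sketch is purely illustrative of the triage idea described above: expedite anything actively exploited or of critical severity, and let everything else ride the slower cadence. The fields, thresholds, and CVE identifiers are hypothetical.

    // Illustrative only - not Google's actual RBUS logic, which has not been published.
    interface Vulnerability {
      cve: string;              // identifier (hypothetical values below)
      cvssBase: number;         // 0-10 base severity score
      exploitedInWild: boolean; // known in-the-wild exploitation
    }

    // Expedite actively exploited or critical flaws; defer the rest to a slower cycle.
    function triage(vulns: Vulnerability[]) {
      const expedite = vulns.filter(v => v.exploitedInWild || v.cvssBase >= 9.0);
      const defer = vulns.filter(v => !expedite.includes(v));
      return { expedite, defer };
    }

    const { expedite, defer } = triage([
      { cve: "CVE-0000-0001", cvssBase: 9.8, exploitedInWild: true },  // hypothetical
      { cve: "CVE-0000-0002", cvssBase: 4.3, exploitedInWild: false }, // hypothetical
    ]);
    console.log(expedite.map(v => v.cve)); // ["CVE-0000-0001"]
    console.log(defer.map(v => v.cve));    // ["CVE-0000-0002"]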

If fully implemented, the initiative will not only ease the update burden on OEMs but also demonstrate Google's commitment to keeping Android users safe through a more intelligent and responsive update cycle. 

Over the last decade, Google has kept a consistent rhythm, publishing Android Security Bulletins monthly regardless of whether updates for its Pixel devices had yet been released. Each bulletin has traditionally outlined a wide range of vulnerabilities, from relatively minor issues to critical ones, with Android's sheer complexity often producing a dozen or more entries every month. 

In July 2025, however, Google broke this cadence: for the first time in 120 consecutive bulletins, it published an update that did not document a single vulnerability. The break in precedent did not mean there were no issues; rather, it signaled a strategic shift in how Google communicates and distributes security updates. 

The September 2025 bulletin then recorded an unusually high 119 vulnerabilities, underscoring the change. The contrast reflects Google's move toward prioritizing high-risk vulnerabilities and ensuring that device manufacturers can respond to emerging threats as quickly as possible, shielding users from active exploits. 

Although Original Equipment Manufacturers (OEMs) depend heavily on Android to power their devices, they frequently operate on separate patch cycles and publish their own security bulletins, which has historically led to a degree of inconsistency across the ecosystem. 

By streamlining the number of fixes manufacturers must deploy each month, Google appears to want to reduce the volume of patches that have to be tested and rolled out, while giving OEMs greater flexibility over when and how firmware updates are released. 

Prioritizing high-risk vulnerabilities may give device makers a greater sense of control, but it also raises concern about delays in addressing less severe flaws that could still be exploited if left uncorrected. Larger quarterly bulletins are intended to offset the new cadence. 

The September 2025 bulletin, which included more than 100 vulnerabilities compared with the empty or minimal lists of July and August, is indicative of this. According to a Google spokesperson, in a statement to ZDNET, Android and Pixel both continuously address known security vulnerabilities, with an emphasis on fixing the highest-risk flaws first. 

Google also points to the platform's hardened protections, such as the adoption of memory-safe programming languages like Rust and advanced anti-exploitation measures built into the platform. The company is extending its security posture beyond system updates as well. 

Starting next year, developers distributing apps to certified Android devices will be required to verify their identities, and new restrictions on sideloading are designed to combat fraudulent and malicious apps. The switch to a risk-based update framework will also put pressure on major Android partners, such as Samsung, OnePlus, and other Original Equipment Manufacturers (OEMs), to adjust their update pipelines. 

According to Android Authority, which first reported Google's plans, the company is actively negotiating with partners to ease the shift, potentially reducing the burden on manufacturers that have historically struggled to deliver timely updates. 

The model offers users stronger protection against active threats while minimizing interruptions from less urgent fixes, which should translate into a better device experience. Nevertheless, Google's approach raises questions about transparency, including how it will decide what constitutes a high-risk flaw and how openly it will communicate those judgments. 

Critics warn that deprioritizing lower-severity vulnerabilities, while effective in the short term, risks leaving cumulative holes in long-term device security. As outlined by Android Headlines, Google's strategy is a data-driven response designed to outpace attackers who are increasingly targeting smartphones. 

The implications reach beyond Android phones. The decision could serve as a model for rival operating systems, especially as regulators in regions like the European Union push for more consistent and timely patches for consumer devices. Enterprises and developers will need to rethink how patch management works, and OEMs that adopt the new model early may gain an advantage in security-sensitive markets. 

Even with a streamlined schedule, smaller manufacturers may struggle to keep up, underscoring the fragmentation that has long plagued the Android ecosystem. To mitigate these risks, Google has already signaled plans to provide tools and guidelines, and some industry observers speculate that future Android versions might even include AI-powered predictive security tools that identify and prevent threats before they occur. 

If implemented successfully, the initiative could usher in a new era of mobile security standards, balancing urgency with efficiency at a time when cyber-attacks are escalating. For the average Android user, the practical impact of Google's risk-based approach is expected to be overwhelmingly positive. 

A device owner who receives a monthly patch may not notice much change, but a device owner with a handset that isn't updated regularly will benefit from manufacturers being able to push out fixes in a more structured fashion—particularly quarterly bulletins, which are now responsible for the bulk of security updates. 

Critics caution that consolidating patches on a quarterly basis could, in theory, give malicious actors an opening if details of upcoming fixes were to leak. Industry analysts counter that this remains a largely hypothetical risk, since the system is designed to accelerate vulnerability triage so that the most dangerous flaws are patched before they can be widely abused. 

In aggregate, the strategy shows Google prioritizing urgent threats to strengthen Android's defenses, with the aim of delivering a more reliable and stable experience across its wide range of devices. 

Ultimately, the success of Google's risk-based update strategy will be determined not only by how quickly vulnerabilities are identified and patched, but also by how well manufacturers, regulators, and the broader developer community cooperate with Google. Because the Android ecosystem remains among the most fragmented and diverse in the world, the model will ultimately be judged on the consistency and timeliness with which it delivers protection across billions of devices, from flagship smartphones to budget models in emerging markets. 

Users can take a few practical steps to get the most out of these changes: enabling automatic updates, limiting the use of sideloaded applications, and choosing devices from OEMs known for providing timely patches all help ensure they stay protected.

The framework offers enterprises a chance to re-calibrate their device management policies, emphasizing risk management and aligning them with quarterly cycles more than ever before. As a result of Google's move, security will become much more than a static checklist. 

Instead, it will become an adaptive, dynamic process that anticipates threats rather than simply responding to them. If executed effectively, this approach could reshape the mobile security landscape worldwide, turning Android's vast reach from a vulnerability into one of its greatest assets.

RBI Proposes Smartphone Lock Mechanism for EMI Defaults

 

RBI is considering allowing lenders to remotely lock smartphones purchased on credit when borrowers default on EMIs, aiming to curb bad debt while igniting concerns over consumer rights and digital access harms. 

What’s proposed 

Reuters reporting indicates RBI may amend its Fair Practices Code to explicitly permit device locking for loan recovery on financed phones, reversing a 2023 direction that told lenders to stop using locking apps on defaulters’ devices. 

The proposed framework would mandate explicit borrower consent before any locking mechanism is enabled, and expressly prohibit lenders from accessing or altering personal data on the device, positioning the tool as a narrow recovery control rather than a surveillance vector. A cited source frames the intent as balancing “power to recover small-ticket loans” with protection of customer data under the updated ruleset. 

Why now 

The move sits against a surge in credit-funded electronics purchases, especially smartphones, with a 2024 Home Credit Finance study estimating over one-third of electronic goods in India are bought on credit, underscoring the scale of exposure across 1.16 billion mobile connections in a 1.4 billion population market. 

Delinquencies are pronounced on loans under Rs 1 lakh, per CRIF Highmark data, with non-bank finance companies responsible for nearly 85% of consumer durable lending, indicating systemic sensitivity to recovery tools in this segment. 

Potential impact on lenders 

If adopted, the policy could materially improve recovery rates and risk appetite for large NBFCs like Bajaj Finance, DMI Finance, and Cholamandalam Finance, potentially expanding credit access to borrowers with weaker scores given a stronger collateral-like enforcement mechanism on the financed handset itself. 

This could reshape underwriting models for small-ticket device finance by lowering expected loss estimates tied to first-payment default and early delinquency cohorts. 

Consumer rights concerns 

Critics warn of serious unintended effects: remote locking could “weaponise” access to essential technology, coercing behavioral compliance by cutting off lifeline services—work, education, payments—until repayment is made, with disproportionate harm to low-income and digitally dependent users, as argued by CashlessConsumer’s Srikanth L. 

Even with consent and data-access restrictions, the essential-services dependency of smartphones raises risks of overreach and due-process gaps in contested defaults or wrongful triggers.

Indian Government Flags Security Concerns with WhatsApp Web on Work PCs

 

The Indian government has issued a significant cybersecurity advisory urging citizens to avoid using WhatsApp Web on office computers and laptops, highlighting serious privacy and security risks that could expose personal information to employers and cybercriminals. 

The Ministry of Electronics and Information Technology (MeitY) released this public advisory through its Information Security Education and Awareness (ISEA) team, warning that while accessing WhatsApp Web on office devices may seem convenient, it creates substantial cybersecurity vulnerabilities. The government describes the practice as a "major cybersecurity mistake" that could lead to unauthorized access to personal conversations, files, and login credentials. 

According to the advisory, IT administrators and company systems can gain access to private WhatsApp conversations through multiple pathways, including screen-monitoring software, malware infections, and browser hijacking tools. The government warns that many organizations now view WhatsApp Web as a potential security risk that could serve as a gateway for malware and phishing attacks, potentially compromising entire corporate networks. 

Specific privacy risks identified 

The advisory outlines several "horrors" of using WhatsApp on work-issued devices. Data breaches represent a primary concern, as compromised office laptops could expose confidential WhatsApp conversations containing sensitive personal information. Additionally, using WhatsApp Web on unsecured office Wi-Fi networks creates opportunities for malicious actors to intercept private data.

Perhaps most concerning, the government notes that even using office Wi-Fi to access WhatsApp on personal phones could grant companies some level of access to employees' private devices, further expanding the potential privacy violations. The advisory emphasizes that workplace surveillance capabilities mean employers may monitor browser activity, creating situations where sensitive personal information could be accessed, intercepted, or stored without employees' knowledge. 

Network security implications

Organizations increasingly implement comprehensive monitoring systems on corporate devices, making WhatsApp Web usage particularly risky. The government highlights that corporate networks face elevated vulnerability to phishing attacks and malware distribution through messaging applications like WhatsApp Web. When employees click malicious links or download suspicious attachments through WhatsApp Web on office systems, they could inadvertently provide hackers with backdoor access to organizational IT infrastructure. 

Recommended safety measures

For employees who must use WhatsApp Web on office devices, the government provides specific precautionary guidelines. Users should immediately log out of WhatsApp Web when stepping away from their desks or finishing work sessions. The advisory strongly recommends exercising caution when clicking links or opening attachments from unknown contacts, as these could contain malware designed to exploit corporate networks. 

Additionally, employees should familiarize themselves with their company's IT policies regarding personal application usage and data privacy on work devices. The government emphasizes that understanding organizational policies helps employees make informed decisions about personal technology use in professional environments. 

This advisory represents part of broader cybersecurity awareness efforts as workplace digital threats continue evolving, with the government positioning employee education as crucial for maintaining both personal privacy and corporate network security.

Cybercriminals Escalate Client-Side Attacks Targeting Mobile Browsers

 

Cybercriminals are increasingly turning to client-side attacks as a way to bypass traditional server-side defenses, with mobile browsers emerging as a prime target. According to the latest “Client-Side Attack Report Q2 2025” by security researchers c/side, these attacks are becoming more sophisticated, exploiting the weaker security controls and higher trust levels associated with mobile browsing. 

Client-side attacks occur directly on the user’s device — typically within their browser or mobile application — instead of on a server. C/side’s research, which analyzed compromised domains, autonomous crawling data, AI-powered script analysis, and behavioral tracking of third-party JavaScript dependencies, revealed a worrying trend. Cybercriminals are injecting malicious code into service workers and the Progressive Web App (PWA) logic embedded in popular WordPress themes. 

When a mobile user visits an infected site, attackers hijack the browser viewport using a full-screen iframe. Victims are then prompted to install a fake PWA, often disguised as adult content APKs or cryptocurrency apps, hosted on constantly changing subdomains to evade takedowns. These malicious apps are designed to remain on the device long after the browser session ends, serving as a persistent backdoor for attackers. 

Beyond persistence, these apps can harvest login credentials by spoofing legitimate login pages, intercept cryptocurrency wallet transactions, and drain assets through injected malicious scripts. Some variants can also capture session tokens, enabling long-term account access without detection. 

To avoid exposure, attackers employ fingerprinting and cloaking tactics that prevent the malicious payload from triggering in sandboxed environments or automated security scans. This makes detection particularly challenging. 

Mobile browsers are a favored target because their sandboxing is weaker compared to desktop environments, and runtime visibility is limited. Users are also more likely to trust full-screen prompts and install recommended apps without questioning their authenticity, giving cybercriminals an easy entry point. 

To combat these threats, c/side advises developers and website operators to monitor and secure third-party scripts, a common delivery channel for malicious code. Real-time visibility into browser-executed scripts is essential, as relying solely on server-side protections leaves significant gaps. 
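As a rough sketch of what that runtime visibility can look like in the browser itself, the TypeScript below walks the scripts the page has actually loaded and flags third-party ones that ship without a Subresource Integrity hash. It is a starting point under the assumption that anything off the first-party hostname counts as third party, not a replacement for a dedicated client-side monitoring service.

    // Enumerate scripts the browser actually loaded and flag third-party ones
    // that carry no Subresource Integrity (SRI) hash.
    function auditThirdPartyScripts(): void {
      const firstParty = window.location.hostname;
      const scripts = Array.from(
        document.querySelectorAll<HTMLScriptElement>("script[src]")
      );
      for (const script of scripts) {
        const src = new URL(script.src, window.location.href);
        const isThirdParty = src.hostname !== firstParty; // subdomains count as third party here
        const hasIntegrity = script.integrity.length > 0;
        if (isThirdParty && !hasIntegrity) {
          console.warn(`Third-party script without SRI: ${src.href}`);
        }
      }
    }

    auditThirdPartyScripts();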

End-users should remain vigilant when installing PWAs, especially those from unfamiliar sources, and treat unexpected login flows — particularly those appearing to come from trusted providers like Google — with skepticism. As client-side attacks continue to evolve, proactive measures on both the developer and user fronts are critical to safeguarding mobile security.

FBI Warns Chrome Users Against Downloading Unofficial Updates

 

If you use Windows, Chrome is likely to be your default browser. Despite Microsoft's ongoing efforts to lure users to Edge and the rising threat of AI browsers, Google's browser remains dominant. However, Chrome is a victim of its own success: because attackers know you are likely to have it installed, it is the ideal entry point for them to gain access to your PC and your data. 

That is why you are seeing a series of zero-day alerts and emergency updates. This is also why the FBI is warning about the major threat posed by fraudulent Chrome updates. As part of the "ongoing #StopRansomware effort to publish advisories for network defenders that detail various ransomware variants and ransomware threat actors," the FBI and CISA, America's cyber defence agency, have issued their latest warning. 

The latest advisory addresses the recent rise in Interlock ransomware attacks. While most of the advice is aimed at those responsible for securing corporate networks and enforcing IT policies, it also includes a caution for individual PC users: ransomware attacks need an entry point, or "initial access," and if you have a PC (or smartphone) connected to your employer's network, you are part of that attack surface. The advisory also recommends that organizations "train users to spot social engineering attempts."

In Interlock's case, two of these initial access methods rely on the same lures cybercriminals use to target your personal accounts and the data and credentials on your own devices, so you should be watching for them anyway. One technique is ClickFix, which is easy to spot once you know what to look for: a notice or popup urges you to paste content into a Windows command prompt or Run dialog and execute it, usually by impersonating a technical issue, a website security check, or a file you need to open. Any such instruction is always an attack and should be ignored.

Fake Chrome installs and updates have also become commonplace, on Android smartphones as well as Windows PCs. As with ClickFix, the guidance is straightforward: never follow links in emails or texts to download updates or new installs, and always get updates and programs from official websites or app stores. Keep in mind that Chrome downloads updates automatically and prompts you to restart the browser to complete the installation. Because updates are delivered to you, there is no need to go looking for them or to click on random links.

Zimperium Warns of Rising Mobile Threats Over Public WiFi During Summer Travel

Public WiFi safety continues to be a contentious topic among cybersecurity professionals, and warnings often draw sarcastic backlash on social media. Nevertheless, cybersecurity firm Zimperium has recently cautioned travelers about legitimate risks associated with free WiFi networks, particularly at moments when vigilance tends to be low.

According to their security experts, devices are particularly vulnerable when people are on the move, and poorly configured smartphone settings can increase the danger significantly. While using public WiFi isn’t inherently dangerous, experts agree that safety depends on proper practices. Secure connections, encrypted apps, and refraining from installing new software or entering sensitive data on pop-up login portals are essential precautions. 

One of the most critical tips is to turn off auto-connect settings. Even the NSA has advised against automatically connecting to public networks, which malicious actors can easily imitate. By contrast, the U.S. Federal Trade Commission (FTC) generally considers public WiFi safe due to widespread encryption.

Still, contradictory guidance from other agencies like the Transportation Security Administration (TSA) urges caution, especially when conducting financial transactions on public hotspots. Zimperium takes a more assertive stance, recommending that companies prevent employees from accessing unsecured public networks altogether. Zimperium’s research shows that over 5 million unsecured WiFi networks have been discovered globally in 2025, with about one-third of users connecting to these potentially dangerous hotspots. 

The concern is even greater during peak travel times, as company-issued devices may connect to corporate networks from compromised locations. Airports, cafés, rideshare zones, and hotels are common environments where hackers look for targets. The risks increase when travelers are in a hurry or distracted. Zimperium identifies several types of threats: spoofed public networks designed to steal data, fake booking messages containing malware, sideloaded apps that mimic local utilities, and fraudulent captive portals that steal credentials or personal data. 

These techniques can impact both personal and professional systems, especially when users aren’t paying close attention. Although many associate these threats with international travel, Zimperium notes increased mobile malware activity in several major U.S. cities, including New York, Los Angeles, Seattle, and Miami, particularly during the summer. Staying safe isn’t complicated but does require consistent habits. Disabling automatic WiFi connections, only using official networks, and keeping operating systems updated are all essential steps. 

Using a reputable, paid VPN service can also offer additional protection. Zimperium emphasizes that mobile malware thrives during summer travel when users often let their guard down. Regardless of location—whether in a foreign country or a major U.S. city—the risks are real, and companies should take preventive measures to secure their employees’ devices.

Is Your Bank Login at Risk? How Chatbots May Be Guiding Users to Phishing Scams

Cybersecurity researchers have uncovered a troubling risk tied to how popular AI chatbots answer basic questions. When asked where to log in to well-known websites, some of these tools may unintentionally direct users to the wrong places, putting their private information at risk.

Phishing is one of the oldest and most dangerous tricks in the cybercrime world. It usually involves fake websites that look almost identical to real ones. People often get an email or message that appears to be from a trusted company, like a bank or online store. These messages contain links that lead to scam pages. If you enter your username and password on one of these fake sites, the scammer gets full access to your account.

Now, a team from the cybersecurity company Netcraft has found that even large language models (LLMs), like the ones behind popular AI chatbots, may be helping scammers without meaning to. In their study, they tested how accurately an AI chatbot could provide login links for 50 well-known companies across industries such as finance, retail, technology, and utilities.

The results were surprising. The chatbot gave the correct web address only 66% of the time. In about 29% of cases, the links led to inactive or suspended pages. In 5% of cases, they sent users to a completely different website that had nothing to do with the original question.

So how does this help scammers? Cybercriminals can purchase these unclaimed or inactive domain names, the incorrect ones suggested by the AI, and turn them into realistic phishing pages. If people click on them, thinking they’re going to the right site, they may unknowingly hand over sensitive information like their bank login or credit card details.

In one example observed by Netcraft, an AI-powered search tool redirected users who asked about a U.S. bank login to a fake copy of the bank’s website. The real link was shown further down the results, increasing the risk of someone clicking on the wrong one.

Experts also noted that smaller companies, such as regional banks and mid-sized fintech platforms, were more likely to be affected than global giants like Apple or Google. These smaller businesses may not have the same resources to secure their digital presence or respond quickly when problems arise.

The researchers explained that this problem doesn't mean the AI tools are malicious. However, these models generate answers based on patterns rather than verified sources, and that can lead to outdated or incorrect responses.

The report serves as a strong reminder: AI is powerful, but it is not perfect. Until improvements are made, users should avoid relying on AI-generated links for sensitive tasks. When in doubt, type the website address directly into your browser or use a trusted bookmark.
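
For readers who want a concrete version of that advice, the hypothetical sketch below accepts a suggested login link only if it matches a short, user-maintained allowlist of trusted hostnames; the hostnames shown are placeholders. It illustrates the "trusted bookmark" idea and is not a tool from the Netcraft report.

```typescript
// Minimal sketch (hypothetical allowlist): accept a login link only if it is
// HTTPS and its hostname exactly matches one the user already trusts.
const trustedLoginHosts = new Set<string>([
  "accounts.google.com",
  "www.examplebank.com", // placeholder; replace with your own bank's real hostname
]);

function isTrustedLoginUrl(candidate: string): boolean {
  try {
    const url = new URL(candidate);
    return url.protocol === "https:" && trustedLoginHosts.has(url.hostname);
  } catch {
    return false; // not a parseable absolute URL
  }
}

console.log(isTrustedLoginUrl("https://accounts.google.com/signin"));             // true
console.log(isTrustedLoginUrl("https://accounts-google.com.evil.example/login")); // false
```

Exact hostname matching is deliberate here: a lookalike domain such as the second example fails the check even though it contains the brand name, which is precisely the kind of link an AI assistant might surface by mistake.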