
Public Wary of AI-Powered Data Use by National Security Agencies, Study Finds

 

A new report released alongside the Centre for Emerging Technology and Security (CETaS) 2025 event sheds light on growing public unease around automated data processing in national security. Titled UK Public Attitudes to National Security Data Processing: Assessing Human and Machine Intrusion, the research reveals limited public awareness and rising concern over how surveillance technologies—especially AI—are shaping intelligence operations.

The study, conducted by CETaS in partnership with Savanta and Hopkins Van Mil, surveyed 3,554 adults and included insights from a 33-member citizens’ panel. While findings suggest that more people support than oppose data use by national security agencies, especially when it comes to sensitive datasets like medical records, significant concerns persist.

During a panel discussion, investigatory powers commissioner Brian Leveson, who chaired the session, addressed the implications of fast-paced technological change. “We are facing new and growing challenges,” he said. “Rapid technological developments, especially in AI [artificial intelligence], are transforming our public authorities.”

Leveson warned that AI is shifting how intelligence gathering and analysis is performed. “AI could soon underpin the investigatory cycle,” he noted. But the benefits also come with risks. “AI could enable investigations to cover far more individuals than was ever previously possible, which raises concerns about privacy, proportionality and collateral intrusion.”

The report shows a divide in public opinion based on how and by whom data is used. While people largely support the police and national agencies accessing personal data for security operations, that support drops when it comes to regional law enforcement. The public is particularly uncomfortable with personal data being shared with political parties or private companies.

Marion Oswald, co-author and senior visiting fellow at CETaS, emphasized the intrusive nature of data collection—automated or not. “Data collection without consent will always be intrusive, even if the subsequent analysis is automated and no one sees the data,” she said.

She pointed out that predictive data tools, in particular, face strong opposition. “Panel members, in particular, had concerns around accuracy and fairness, and wanted to see safeguards,” Oswald said, highlighting the demand for stronger oversight and regulation of technology in this space.

Despite efforts by national security bodies to enhance public engagement, the study found that a majority of respondents (61%) still feel they understand “slightly” or “not at all” what these agencies actually do. Only 7% claimed a strong understanding.

Rosamund Powell, research associate at CETaS and co-author of the report, said: “Previous studies have suggested that the public’s conceptions of national security are really influenced by some James Bond-style fictions.”

She added that transparency significantly affects public trust. “There’s more support for agencies analysing data in the public sphere like posts on social media compared to private data like messages or medical data.”

Ascension Faces New Security Incident Involving External Vendor

 


Ascension Healthcare, one of the largest non-profit healthcare systems in the United States, has officially disclosed a data breach involving patient information, the result of a cybersecurity incident linked to a former business partner. The breach presents another significant cybersecurity challenge for Ascension, which was already facing mounting scrutiny over its data protection practices.

According to the health system, the newly disclosed incident compromised personally identifiable information (PII), including patients' protected health information (PHI). The data was reportedly stolen in a December 2024 cyberattack on a former business partner, a breach that was not reported publicly until now. It is the second major incident Ascension has faced since May 2024, when a ransomware attack took critical systems offline.

The earlier breach affected approximately six million patients and caused widespread operational disruption, including ambulance diversions in several regions, postponed elective procedures, and temporary halts to essential healthcare services. The recurrence of such incidents within the healthcare sector has raised concerns about the security posture of third-party vendors and the resulting risks to patient privacy and continuity of care.

According to Ascension's statement, the organisation is taking additional steps to evaluate and strengthen its cybersecurity infrastructure, including its relationships with external software and service providers. The hospital chain, which operates 105 hospitals across 16 states and Washington, D.C., said the compromised data was "likely stolen" after being inadvertently disclosed to the third-party vendor, which subsequently suffered a breach through an external software vulnerability.

Ascension said it first became aware of a potential security incident on December 5, 2024, and launched a thorough internal investigation to assess the extent of the breach. The investigation revealed that patient data had been unintentionally shared with a former business partner, which was subsequently targeted in a cyberattack.

The breach appears to have been caused by a vulnerability in third-party software used by the vendor. Analysis concluded in January 2025 determined that some of the disclosed information had likely been exfiltrated during the attack.

Although Ascension has not disclosed the specific types of data impacted, it acknowledged that multiple care sites in Alabama, Michigan, Indiana, Tennessee, and Texas were affected. The company stressed that it continues to collaborate with cybersecurity experts and legal counsel to understand the impact of the breach and to notify affected individuals as necessary.

The company also said it will take further steps to improve its data-sharing practices and third-party risk management protocols. Additional information released by Ascension indicates that the threat actors behind the December 2024 incident likely accessed and exfiltrated sensitive medical and personal information.

The compromised information includes demographics, Social Security numbers, clinical records, and visit details such as physician names, patient names, diagnoses, medical record numbers, and insurance provider details. Although Ascension has not provided a comprehensive estimate of how many people were affected nationwide, it informed Texas state officials that 114,692 people were affected in that state alone.

The health system has still not confirmed whether this incident is related to the May 2024 ransomware attack, which affected facilities across multiple states. That attack severely disrupted Ascension's operations, forcing ambulance diversions, a return to manual documentation in place of electronic records, and the postponement of non-urgent care.

Recovery took several weeks and exposed cybersecurity vulnerabilities in the organization's digital infrastructure. Ascension later confirmed that the personal and health-related data of 5,599,699 individuals was stolen in that attack.

Only seven of the system's 25,000 servers were accessed by the ransomware group responsible, yet millions of records were still compromised. The healthcare and insurance industries continue to be plagued by data breaches: in a separate incident reported this week, VeriSource Services, a company that manages employee administration, disclosed that a February 2024 cyberattack affected 4,052,972 individuals.

These incidents highlight the growing threat facing organisations that handle sensitive personal and medical data. The December 2024 breach reportedly stemmed from an external attack rather than an internal compromise of Ascension's electronic health records. Ascension has not publicly identified the former business partner to whom the patient information was disclosed, nor the particular third-party software vulnerability the attackers exploited.

Ascension has also recently announced two third-party security incidents separate from this one. On April 14, 2025, the organisation posted a notice concerning a breach involving Scharnhorst Ast Kennard Gryphon (SAKG), a Missouri-based law firm. SAKG detected suspicious activity on August 1, 2024, and an investigation later revealed unauthorised access between July 17 and August 6, 2024.

SAKG notified affected individuals associated with the Ascension health system on February 14, 2025. The compromised records included names, phone numbers, dates of birth and death, Social Security numbers, driver's license numbers, racial data, and information related to medical treatment.

Media inquiries have been made about the broader scope of that incident, including whether other SAKG clients were affected and how many individuals were impacted in total. Separately, on March 3, 2025, Ascension announced another data security incident involving Access Telecare, a third-party telehealth provider serving Ascension Seton in Texas.

As with the previous breaches, Ascension clarified that its internal systems and electronic health records were not compromised. A report filed with the U.S. Department of Health and Human Services Office for Civil Rights (HHS OCR) on March 8, 2025 confirmed that Access Telecare had experienced a breach of its email system. Approximately 62,700 individuals are estimated to have been affected.

These successive disclosures make increasingly apparent the risk that third-party relationships pose to the healthcare ecosystem, as organisations continue to face cybercriminals seeking sensitive medical and personal information. In response to the breach involving the former business partner, Ascension has offered affected individuals two years of complimentary identity protection services, including credit monitoring, fraud consultation, and identity theft restoration, aimed at mitigating potential harm from unauthorised access to personal and health information.

Although Ascension has not provided further technical details, the timeline and nature of the incident suggest it may be related to the Clop ransomware group's widespread data-theft campaign. That campaign, in late 2024, exploited a zero-day vulnerability in the Cleo secure file transfer software and targeted multiple organisations. Ascension has not officially confirmed any connection to the Clop group, and a spokesperson has not responded to BleepingComputer's request for comment.

This is not the first major cybersecurity incident Ascension has experienced. According to Ascension's official report from May 2024, approximately 5.6 million patients and employees were affected by a separate ransomware infection attributed to the Black Basta group. That breach, which adversely affected several hospitals, occurred after an employee inadvertently downloaded a malicious file onto a company device.

That incident exposed both personal and health-related information, illustrating the ongoing risks the healthcare industry faces from internal vulnerabilities as well as external cyber threats. The string of data breaches involving Ascension underscores the need for greater vigilance and accountability in managing third-party relationships.

Even when internal systems remain uncompromised, vulnerabilities in external networks can still expose sensitive patient information to significant risk. Healthcare organisations need to prioritise robust vendor risk management, strong data governance protocols, and proactive threat detection and response strategies.

A growing number of regulatory bodies and industry leaders are beginning to realize that they may need to revisit standards that govern network sharing, third-party oversight, and breach disclosure in an effort to ensure the privacy of patients in the increasingly interconnected world of digital health.

WhatsApp Balances AI Innovation with User Privacy Concerns

 


WhatsApp, the world's largest messaging platform, continues to push the boundaries of digital communication by implementing advanced artificial intelligence (AI) features that enhance the user experience and help the platform operate more efficiently. With more than 2 billion monthly active users globally, its growing use of AI technologies such as auto-responses, chatbots, and predictive text has significantly improved the speed and quality of communication, a critical factor for businesses looking to automate customer service and increase engagement.

This shift toward AI-driven functionality does not come without challenges. As smart features proliferate, widespread concerns have been raised about the privacy and handling of personal data. WhatsApp's parent company, Meta, has for several years faced sustained scrutiny and criticism over its data-sharing practices.

WhatsApp is therefore navigating a fine line between leveraging the benefits of AI and preserving its commitment to privacy. This dynamic reflects a wider tension within the tech industry, in which innovation must be carefully weighed against protecting user trust.

WhatsApp has released a new set of AI tools that operate through its newly introduced 'Private Processing' system. The advance is a significant step in the platform's effort to enhance the user experience through AI-driven capabilities, but it has also opened a discussion about the implications for user privacy and the future of encrypted messaging.

Integrating AI into secure messaging environments raises significant questions about how much privacy can be maintained while delivering more intelligent functionality. Cybersecurity experts such as Adrianus Warmenhoven of NordVPN see striking that balance between technological advancement and the protection of personal data as a genuine challenge.

Warmenhoven told Business Report that while WhatsApp's Private Processing system is an impressive achievement in data protection, it is essentially a compromise. “Anytime users send data outside their device, regardless of how securely they do it, there are always new risks associated with it,” he said: the threat shifts from users' smartphones to the data centre. His remarks emphasise the need for ongoing supervision and caution as platforms like WhatsApp innovate with AI while maintaining the trust of a global user base.

A comparison with Apple's Private Cloud Compute shows that Private Processing is fundamentally different in both design and purpose. Apple's Private Cloud Compute is the backbone of Apple Intelligence, enabling a wide variety of AI functions across Apple's ecosystem.

It prioritises on-device processing, turning to cloud infrastructure only when needed. Because the model depends on high-performance hardware, it is limited to newer iPhones and iPads, leaving older devices without these features. Meta faces a different set of constraints: it must support a massively diverse global user base of approximately 3 billion people, many of whom use low-end or older smartphones.

A hardware-dependent AI system like Apple's was therefore impractical in this context. Instead, Meta built Private Processing specifically for WhatsApp, optimised for privacy within a more flexible hardware environment.

Rohlf and Colin Clemmons, the lead engineers behind the initiative, said they sought to build a system that would yield minimal value to attackers even if breached. The design minimises the risks involved, Clemmons explained. Still, introducing AI features into secure messaging platforms raises broader questions about how those features interact with the fundamental principles of privacy and security.

Some experts argue these features may be at odds with those principles altogether. Meta, for its part, says the integration of AI reflects changing user expectations: users increasingly demand intelligent features in their digital interactions and will migrate to platforms that provide them, making AI integration a necessity rather than merely a strategic advantage.

AI enables users to automate complex processes and extract meaningful insights from large data sets, improving their interaction with digital platforms. However, the current state of AI processing, most of which depends on server-side large language models rather than mobile hardware, imposes inherent privacy concerns.

User input frequently has to be sent to an external server, making the content of requests visible to the service providers that process them. While this approach works for a wide range of applications, it is difficult to reconcile with the privacy standards traditionally upheld by end-to-end encrypted messaging. WhatsApp says it has developed its AI capabilities to address these concerns while preserving user privacy.

The platform can deliver intelligent features such as message summarisation without granting Meta or WhatsApp access to private conversations. The approach rests on three principles: all AI features, including those supported by Private Processing, remain entirely optional; transparency, with clear communication whenever Private Processing is deployed; and user control.

WhatsApp's Advanced Chat Privacy feature lets users exclude specific chats from AI-powered functions such as Meta AI, securing their most sensitive conversations. Through this privacy-centric design, WhatsApp aims to embrace AI in a way that aligns with user expectations, delivering innovation while maintaining trust in safe, private communication.

In response to growing privacy concerns, WhatsApp has implemented a range of safeguards that aim to protect user data while incorporating advanced features. End-to-end encryption is at the heart of this privacy framework: messages are encrypted on the sender's device and can only be decrypted by the intended recipient. Features like "View Once" and "Disappearing Messages" let users limit the visibility and lifespan of their communications, reducing the likelihood of sensitive information being mishandled or stored.
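The end-to-end property can be illustrated with a toy key-agreement sketch. This is a minimal illustration of the general principle only, not WhatsApp's actual implementation (which uses the Signal protocol): each endpoint derives the same secret locally from exchanged public values, so a relaying server never holds a key that decrypts the message.

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters: a Mersenne prime for illustration only.
# Real systems use Curve25519 or standardized large MODP groups.
P = 2**127 - 1
G = 3

def keypair():
    """Generate a private exponent and the matching public value."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(my_priv, their_pub):
    """Both sides compute the same shared secret, then hash it into a key."""
    secret = pow(their_pub, my_priv, P)
    return hashlib.sha256(secret.to_bytes((P.bit_length() + 7) // 8, "big")).digest()

def xor_stream(key, data):
    """Toy stream cipher: SHA-256 in counter mode as a keystream (demo only)."""
    out = bytearray()
    for i, start in enumerate(range(0, len(data), 32)):
        keystream = hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[start:start + 32], keystream))
    return bytes(out)

# Alice and Bob each generate a keypair; only the public halves cross the server.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

k_alice = shared_key(a_priv, b_pub)
k_bob = shared_key(b_priv, a_pub)

ciphertext = xor_stream(k_alice, b"meet at noon")  # what the server relays
plaintext = xor_stream(k_bob, ciphertext)          # only Bob recovers this
```

The relay only ever sees `a_pub`, `b_pub`, and `ciphertext`; without a private exponent it cannot derive the key, which is the property end-to-end encryption provides.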

The platform has also introduced tools that let users review and delete their chat history, giving them more control over their data and digital footprint. Despite these improvements, industry experts have questioned the effectiveness and transparency of WhatsApp's privacy policies, particularly as AI is incorporated into the platform, raising critical questions about its use of AI to analyse user behaviour and preferences.

Furthermore, the company's ongoing data-sharing arrangement with its parent company, Meta, has raised concerns that this data might be used to target advertising. Many privacy-conscious users remain suspicious of WhatsApp's data-handling policies because of a perceived lack of transparency. Balancing the advantages of AI with the imperative of privacy will remain a complex and evolving challenge for WhatsApp.

AI-powered tools have improved the user experience and platform functionality, but the need for robust privacy protections remains. As the platform continues to grow in popularity, maintaining user trust will depend on clear, transparent data practices and on features that give users greater control over their personal information. Striking a balance between technological innovation and the assurance of privacy will be crucial to WhatsApp's credibility as a secure communication platform.

Over 21 Million Employee Screenshots Leaked from WorkComposer Surveillance App

An app designed to track employee productivity by logging keystrokes and taking screenshots has suffered a significant privacy breach as more than 21 million images of employee activity were left in an unsecured Amazon S3 bucket.

Experts at Cybernews discovered the breach at WorkComposer, a workplace surveillance product that monitors employees by tracking their digital activity. Although the company secured access after being informed by Cybernews, the data had already been exposed in real time to anyone with an internet connection, putting the sensitive work information of thousands of employees and companies online.

WorkComposer is an application used by more than 200,000 users across various organizations. It is designed to help those organizations monitor employee productivity by logging keystrokes, tracking how much time employees spend on each app, and capturing desktop screenshots every few minutes.

Millions of these screenshots leaking to the open web exposes vast amounts of sensitive data: email captures, confidential business documents, internal chats, usernames and passwords, and API keys. These could be misused to target companies, launch identity theft scams, hijack employee accounts, and commit further breaches.
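Leaked screenshots and logs are dangerous largely because credentials can be harvested from them at scale. Below is a minimal sketch of the kind of pattern-based secret scanning that defenders (and attackers) run over exposed text; the patterns and the sample string are illustrative, not drawn from the breach:

```python
import re

# Illustrative credential patterns; real secret scanners use far larger
# rule sets plus entropy heuristics.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9\-._~+/]{20,}\b"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[=:]\s*\S+"),
}

def scan_for_secrets(text):
    """Return (pattern_name, matched_string) pairs found in extracted text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        hits.extend((name, match) for match in pattern.findall(text))
    return hits

# Hypothetical text as it might be OCR'd from a leaked screenshot.
sample = "deploy log: AKIAABCDEFGHIJKLMNOP password=hunter2"
findings = scan_for_secrets(sample)
```

Running the same scan over 21 million images' worth of extracted text is cheap to automate, which is why even "boring" productivity screenshots become high-value loot once exposed.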

Also, businesses that have been using WorkComposer could now face liability under the EU's GDPR (General Data Protection Regulation) or the U.S. CCPA (California Consumer Privacy Act), in addition to other legal exposure.

Since employees have no control over what tracking tools may capture during their workday, be it private chats, confidential projects, or even medical information, such surveillance already occupies iffy ethical territory, and exposed screenshots turn it into a serious privacy violation.

The WorkComposer incident is not the first of its kind. Cybernews has previously reported a similar leak from WebWork, another workplace tracking tool, which exposed 13 million screenshots.

Switzerland’s New Law Proposal Could Put VPN Privacy at Risk


Switzerland is thinking about changing its digital surveillance laws, and privacy experts are worried. The new rules could force VPN companies and secure messaging services to track their users and give up private information if requested.

At the center of the issue is a proposed change that would expand government powers over online services like email platforms, messaging apps, VPNs, and even social media sites. These services could soon be required to collect and store personal details about their users and hand over encrypted data when asked.

This move has sparked concern among privacy-focused companies that operate out of Switzerland. If the law is approved, it could prevent them from offering the same level of privacy they are known for.


What Could the New Rules Mean?

The suggested law says that if a digital service has over 5,000 users, it must collect and verify users’ identities and store that information for half a year after they stop using the service. This would affect many platforms, even small ones run by individuals or non-profits.

Another part of the law would give authorities the power to access encrypted messages, but only if the company has the key needed to unlock them. This could break the trust users have in these services, especially those who rely on privacy for safety or security.


Why VPN Providers Are Speaking Out

VPN services are designed to hide user activity and protect data from being tracked. They usually don’t keep any records that could identify a user. But if Swiss law requires them to log personal data, that goes against the very idea of privacy that VPNs are built on.

Swiss companies like Proton VPN, Threema, and NymVPN are all worried. They say the law could damage Switzerland’s reputation as a country that supports privacy and secure digital tools.


NymVPN’s Warning

NymVPN, a newer VPN service backed by privacy activist Chelsea Manning, has raised strong objections. Alexis Roussel, the company’s Chief Operating Officer, explained that the new rules would not only hurt businesses but could also put users in danger—especially people in sensitive roles, like journalists or activists.

Roussel added that this law may try to go around earlier court rulings that protected privacy rights, which could hurt Switzerland’s fast-growing privacy tech industry.


What People Can Do

Swiss citizens have time to give feedback on the proposal until May 6, 2025. NymVPN is encouraging people to spread the word, take part in the consultation process, and contact government officials to share their concerns. They’re also warning people in other countries to stay alert in case similar ideas start appearing elsewhere.

Google Ends Privacy Sandbox, Keeps Third-Party Cookies in Chrome

 

Google has officially halted its years-long effort to eliminate third-party cookies from Chrome, marking the end of its once-ambitious Privacy Sandbox project. In a recent announcement, Anthony Chavez, VP of Privacy Sandbox, confirmed that the browser will continue offering users the choice to allow or block third-party cookies—abandoning its previous commitment to remove them entirely. 

Launched in 2020, Privacy Sandbox aimed to overhaul the way user data is collected and used for digital advertising. Instead of tracking individuals through cookies, Google proposed tools like the Topics API, which categorized users based on web behavior while promising stronger privacy protections. Despite this, critics claimed the project would ultimately serve Google’s interests more than users’ privacy or industry fairness. Privacy groups like the Electronic Frontier Foundation (EFF) warned users that the Sandbox still enabled behavioral tracking, and urged them to opt out. Meanwhile, regulators on both sides of the Atlantic scrutinized the initiative. 
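The Topics-style approach can be sketched in miniature (a simplified toy, not Chrome's actual implementation, and the domain-to-topic mapping below is hypothetical): the browser locally maps recently visited sites to coarse interest categories and exposes only the few most frequent ones, rather than a per-user tracking identifier.

```python
from collections import Counter

# Hypothetical, hand-picked domain-to-topic mapping; Chrome's real taxonomy
# contains hundreds of standardized topics.
DOMAIN_TOPICS = {
    "running-shoes.example": "Fitness",
    "marathon-training.example": "Fitness",
    "guitar-tabs.example": "Music",
    "news.example": "News",
}

def top_topics(history, k=3):
    """Derive the k most frequent coarse topics from local browsing history."""
    counts = Counter(
        DOMAIN_TOPICS[domain] for domain in history if domain in DOMAIN_TOPICS
    )
    return [topic for topic, _ in counts.most_common(k)]

# One epoch's worth of local history; unknown sites contribute nothing.
week_history = [
    "running-shoes.example",
    "marathon-training.example",
    "guitar-tabs.example",
    "unknown-site.example",
]
# A site calling the API would see only coarse topics, never the raw history.
topics = top_topics(week_history)
```

The privacy argument was that advertisers receive only these coarse labels; the EFF's counterargument was that even coarse, regularly refreshed labels still constitute behavioral tracking.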

In the UK, the Competition and Markets Authority (CMA) investigated the plan over concerns it would restrict competition by limiting how advertisers access user data. In the US, a federal judge recently ruled that Google engaged in deliberate anticompetitive conduct in the ad tech space—adding further pressure on the company. Originally intended to bring Chrome in line with browsers like Safari and Firefox, which block third-party cookies by default, the Sandbox effort repeatedly missed deadlines. In 2023, Google shifted its approach, saying users would be given the option to opt in rather than being automatically transitioned to the new system. Now, it appears the initiative has quietly folded. 

In his statement, Chavez acknowledged ongoing disagreements among advertisers, developers, regulators, and publishers about how to balance privacy with web functionality. As a result, Google will no longer introduce a standalone prompt to disable cookies and will instead continue with its current model of user control. The Movement for an Open Web (MOW), a vocal opponent of the Privacy Sandbox, described Google’s reversal as a victory. “This marks the end of their attempt to monopolize digital advertising by removing shared standards,” said MOW co-founder James Rosewell. “They’ve recognized the regulatory roadblocks are too great to continue.” 

With Privacy Sandbox effectively shelved, Chrome users will retain the ability to manage cookie preferences—but the web tracking status quo remains firmly in place.

Zoom Platform Misused by Elusive Comet Attackers in Fraud Scheme



Recent reports suggest that North Korean threat actors have adopted an alarming evolution in tactics through a sophisticated cybercrime operation known as Elusive Comet. The newly uncovered campaign exploits Zoom's remote control capabilities to gain unauthorised access to the systems of cryptocurrency industry users. 

This development points to a significant trend: widely trusted communication platforms are being exploited as tools for high-level cyber intrusions. The investigation and analysis behind the discovery were conducted by Security Alliance, a reputable cybersecurity research organisation. Elusive Comet exhibited significant operational similarities to activities previously associated with the notorious Lazarus Group, which has been linked to North Korea for some years. 

Definitive attribution, however, is yet to be made. The lack of conclusive evidence has complicated attempts to link the campaign to any known state-sponsored entity, further demonstrating how common covert cyberattacks have become in the financial sector. According to security experts, the campaign marks a dramatic departure from the methods traditionally used against cryptocurrency targets: by leveraging legitimate features of mainstream platforms such as Zoom, the attackers not only make their operations more successful but also make detection and prevention much more difficult. 

The abuse of such ubiquitous communication tools underscores the need for enhanced security protocols in industries that handle digital assets. The emergence of Elusive Comet is a reminder that the threat landscape is constantly evolving and that adversaries are adopting increasingly innovative approaches to bypass traditional defences. The threat actors behind the campaign have invested considerable resources in establishing a convincing online persona to maintain an appearance of legitimacy. 

To reinforce their facade of authenticity, they operate credible websites and maintain active social media profiles. Fraudulent entities associated with the group include Aureon Capital, a fake venture capital firm, Aureon Press, and The OnChain Podcast, all carefully designed to deceive unsuspecting individuals and businesses. 

The attackers typically make contact through direct messages on X (formerly Twitter), by email, or by inviting the target to appear as a guest on their fabricated podcast. Researchers found that after initiating contact and establishing a degree of trust, the attackers move swiftly to set up a Zoom meeting under the pretext of learning more about the target's professional activities. 

Key meeting details are commonly withheld until very near the scheduled time, a tactic designed to create a sense of urgency and encourage compliance. Victims are often asked to share their screens during the call to demonstrate their work, unknowingly exposing sensitive systems and data to the attackers. One victim of the Elusive Comet operation, Jake Gallen, CEO of the cryptocurrency company Emblem Vault, lost over $100,000 in digital assets, including his company's cryptocurrency, after agreeing to a Zoom interview with someone posing as a media person. 

During the session, the attacker manipulated Gallen into granting remote access to his computer under the guise of technical facilitation. This allowed the attackers to install a malicious payload known as "GOOPDATE," which gave them access to his cryptocurrency wallets and enabled the theft of the funds. 

The incident makes clear how vulnerable the cryptocurrency sector is, especially executives and high-net-worth individuals whose regular interactions with media outlets and investors leave them particularly exposed to sophisticated social engineering schemes. The breach also underscores the need for professionals in high-value financial sectors to maintain heightened cybersecurity awareness and adopt stricter digital hygiene policies. 

Security Alliance, a cybersecurity research and advisory firm specialising in forensics and advanced persistent threats (APTs), meticulously tracked and analysed the Elusive Comet campaign and published a comprehensive report in March 2025 detailing the threat actors' tactics, techniques, and procedures (TTPs). The research found that the attackers installed malware on victims' systems primarily through a combination of social engineering and abuse of Zoom's remote control features. 

Despite drawing parallels between the methods used in this campaign and those of North Korea's notorious Lazarus Group, Security Alliance exercised caution in making attribution. The researchers noted that the similarities in techniques and tools could indicate common origins or shared resources, but stressed the difficulty of attribution in a threat landscape where actors routinely duplicate or repurpose one another's methodologies. 

Given the methods employed by the Elusive Comet campaign, cryptocurrency professionals are strongly advised to adopt a comprehensive and proactive security posture to reduce the risk of falling victim to similar attacks. First and foremost, companies and individuals should ensure that Zoom's remote control feature is disabled by default and enabled only when genuinely necessary. Restricting this feature significantly reduces the chances of cybercriminals exploiting virtual engagements.

It is also important to exercise increased caution in responding to unsolicited meeting invitations. When invitations are sent by an unknown or unverified source, it is essential to verify the identity of the requester through independent channels. In order to increase account security in cryptocurrency-related platforms, including digital wallets and exchanges, it is imperative to implement multi-factor authentication (MFA) as a critical barrier. 

MFA provides an additional layer of defence even if credentials are compromised. Organisations should also deploy robust endpoint protection solutions and keep all software, including communication platforms such as Zoom, consistently updated to protect against the exploitation of known vulnerabilities. Regular cybersecurity education and training for employees, partners, and key stakeholders is equally important. 

By developing a culture of security awareness, an organisation strengthens its teams' ability to identify and resist social engineering, phishing, and other deceptive tactics. The Elusive Comet operation highlights a broader, more dangerous trend: cybercriminals are increasingly manipulating trusted communication tools to launch highly targeted and covert attacks on the cryptocurrency industry. 

There is a strong possibility that the attackers were part of North Korea's Lazarus Group, but official attribution remains elusive, further illustrating the difficulty of identifying cyber threat actors. Even so, the attack offers some clear lessons. 

As today's cybersecurity landscape grows more volatile and complex, it is more important than ever for organisations to maintain vigilance, implement rigorous security protocols, and continually adapt to emerging threats. Adversaries are constantly refining their tactics; only those who invest in resilient defence strategies will successfully safeguard their organisations' assets and reputation.

Cybersecurity Alert Says Fake PDF Converters Stealing Sensitive Information



Online PDF converters let individuals and businesses efficiently convert documents from one file format to another, and millions rely on these services. Despite their convenience, however, these free tools pose significant cybersecurity risks. According to a recent advisory from the Federal Bureau of Investigation (FBI), cybercriminals have been increasingly exploiting online file conversion platforms to spread malware to consumers and businesses. 

By embedding malware into seemingly legitimate file conversion processes, threat actors put data, financial information, and system security at serious risk. As the popularity of these services grows, so does the potential for widespread cyberattacks. Users must therefore exercise heightened caution when choosing online tools and adhere to best practices for protecting their digital assets. 

A recent report by a cybersecurity firm has uncovered a sophisticated malware campaign that uses counterfeit PDF-to-DOCX conversion platforms to compromise users and expose their data. 

The campaign's highly capable malware can steal a wide variety of sensitive data, including passwords, cryptocurrency wallets, and other confidential personal information. The threat emerged shortly after a public advisory from the FBI's Denver division warning of the rise in malicious file conversion services being used to spread malware. According to the firm's findings, cybercriminals have meticulously built deceptive websites such as candyxpdf[.]com and candyconverterpdf[.]com, which imitate the appearance and functionality of the legitimate file conversion service pdfcandy.com. 

The original platform, pdfcandy.com, well known for its comprehensive PDF management tools, reportedly attracts approximately 2.8 million visitors per month, making its user base a prime target for threat actors. A significant share of those users are based in India, which accounts for 19.07% of total traffic, or roughly 533,960 users per month. This concentration gives the operators of the fraudulent websites an ample pool of potential victims. 

According to data collected in March 2025, the impersonating sites drew approximately 2,300 and 4,100 visitors respectively, an early but concerning level of traffic from unsuspecting users. These developments point to increasingly sophisticated threats and underline the need for heightened user vigilance and strong cybersecurity measures at all levels. 

The FBI Denver Field Office recently issued an alert highlighting the growing threat posed by fraudulent online document conversion tools, warning that these seemingly benign services are being used by cybercriminals not only to steal sensitive user information but, in more severe cases, to install ransomware on compromised devices. The agency issued the statement in response to an alarming rise in reports concerning these malicious platforms. 

The FBI's advisory notes an increase in deceptive websites offering free document conversion, file merging, and download services. Although these tools often perform the promised conversion, such as turning a .DOC file into a .PDF or merging multiple .JPG files into one .PDF, the FBI warns that the downloaded files may contain malicious code that gives cybercriminals unauthorised access to the victim's device, leaving the victim in an extremely dangerous position. 

The agency also warns that documents uploaded to these platforms may contain sensitive information such as names, Social Security numbers, cryptocurrency wallet seeds and addresses, passphrases, email credentials, passwords, and banking information. Such information can be exploited for identity theft, financial fraud, and further cyberattacks. 

The FBI Denver Field Office confirmed that complaints are on the rise, with even the public sector reporting recent incidents in the metro Denver area. Vicki Migoya, FBI Denver Public Affairs Officer, pointed out that malicious actors often rely on subtle deceptions, for instance altering a single character in a website URL or substituting suffixes such as "INC" for "CO" to create a domain name very similar to a legitimate one. And because search engine algorithms prioritise paid advertisements, some of which may lead to malicious sites, users searching for "free online file converters" are particularly vulnerable. 
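The single-character edits Migoya describes can be caught mechanically by measuring how far a candidate domain is from an allowlist of domains the user actually intends to visit. The sketch below is illustrative only: the `TRUSTED` list and the lookalike `pdfcamdy.com` are hypothetical examples, not domains named in the FBI advisory.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute
        prev = curr
    return prev[-1]

# Hypothetical allowlist of services the user actually trusts.
TRUSTED = ["pdfcandy.com", "zoom.us"]

def flag_lookalike(domain: str, max_edits: int = 2) -> bool:
    """True when a domain is close to, but not exactly, a trusted one."""
    return any(0 < edit_distance(domain.lower(), t) <= max_edits
               for t in TRUSTED)

print(flag_lookalike("pdfcandy.com"))  # exact match, not flagged
print(flag_lookalike("pdfcamdy.com"))  # one substitution away, flagged
```

A real deployment would also normalise Unicode homoglyphs and compare only registered-domain boundaries; edit distance alone is just a cheap first filter.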

Although the FBI has withheld specific technical details so as not to alert threat actors, the agency confirmed that such fraudulent tools remain a preferred method for spreading malware to unsuspecting users. Further investigation into the campaign revealed the deceptive methods the fraudulent websites use to compromise visitors. 

When users visit such a website, they are asked to upload a PDF document for conversion to Word format. The site then displays a loading sequence that simulates a typical conversion process, lending it an air of legitimacy. It also presents a CAPTCHA verification prompt, a way of fostering trust by mimicking the security practices of reputable websites. As soon as the user completes the CAPTCHA, however, they are deceptively instructed to execute a PowerShell command on their system, which begins the malware delivery process. 

Running the command retrieves an Adobe-themed ZIP file, which installs ArechClient, an information-stealing malware associated with the SectopRAT family. Active since 2019, this strain is specifically designed to harvest a wide range of sensitive data, including saved usernames and passwords, cryptocurrency wallet information, and other valuable digital assets. 

Some of these malicious websites have since been taken offline by authorities, but a recent report from a known cybersecurity firm states that over 6,000 people visited them in the past month alone, showing that cybercriminals are actively exploiting this technique at scale. Given the growing sophistication of such attacks, users must verify the legitimacy of any online conversion service they use. 

Before using a web-based converter, it is essential to confirm that the site is the legitimate one and not a counterfeit copy controlled by attackers. If a device has been unknowingly compromised, immediate action, such as isolating it and resetting all associated passwords, can minimise the damage. For sensitive file conversions, cybersecurity experts recommend using trustworthy offline tools whenever possible to reduce exposure to online attacks.

As cyber threats targeting online file conversion services grow more sophisticated, users must be increasingly vigilant and security-conscious in their digital activities. Individuals and organisations are strongly encouraged to check a website's authenticity before uploading or downloading any files: carefully examine URLs for subtle anomalies, verify a secure connection (HTTPS), and favour trusted, well-established platforms over lesser-known or unfamiliar ones. 

Users should also avoid executing unsolicited commands or downloading unexpected files, even when a website appears genuine, and should prioritise offline, standalone conversion tools whenever possible, especially for sensitive or confidential documents. If a device is suspected of being compromised, immediate steps should be taken to isolate it, reset all relevant passwords, and contact cybersecurity professionals to contain the breach. 

In an age when cybercriminals are constantly refining their tactics, fostering a culture of proactive cyber awareness and resilience is no longer optional but a necessity. Combating these evolving threats will require organisations to consistently train staff, update security protocols, and apply best practices, and users to exercise greater caution and make informed decisions to protect themselves and their organisations from the far-reaching consequences of cyberattacks.

Why Location Data Privacy Laws Are Urgently Needed


Your location data is more than a simple point on a map—it’s a revealing digital fingerprint. It can show where you live, where you work, where you worship, and even where you access healthcare. In today’s hyper-connected environment, these movements are silently collected, packaged, and sold to the highest bidder. For those seeking reproductive or gender-affirming care, attending protests, or visiting immigration clinics, this data can become a dangerous weapon.

Last year, privacy advocates raised urgent concerns, calling on lawmakers to address the risks posed by unchecked location tracking technologies. These tools are now increasingly used to surveil and criminalize individuals for accessing fundamental services like reproductive healthcare.

There is hope. States such as California, Massachusetts, and Illinois are now moving forward with legislation designed to limit the misuse of this data and protect individuals from digital surveillance. These bills aim to preserve the right to privacy and ensure safe access to healthcare and other essential rights.

Imagine a woman in Alabama—where abortion is entirely banned—dropping her children at daycare and driving to Florida for a clinic visit. She uses a GPS app to navigate and a free radio app along the way. Without her knowledge, the apps track her entire route, which is then sold by a data broker. Privacy researchers demonstrated how this could happen using Locate X, a tool developed by Babel Street, which mapped a user’s journey from Alabama to Florida.

Despite its marketing as a law enforcement tool, Locate X was accessed by private investigators who falsely claimed affiliation with authorities. This loophole highlights the deeply flawed nature of current data protections and how they can be exploited by anyone posing as law enforcement.

The data broker ecosystem remains largely unregulated, enabling a range of actors—from law enforcement to ideological groups—to access and weaponize this information. Near Intelligence, a broker, reportedly sold location data from visitors to Planned Parenthood to an anti-abortion organization. Meanwhile, in Idaho, cell phone location data was used to charge a mother and her son with aiding an abortion, proving how this data can be misused not only against patients but also against those supporting them.

The Massachusetts bill proposes a protected zone of 1,850 feet around sensitive locations, while California takes a broader stance with a five-mile radius. These efforts are gaining support from privacy advocates, including the Electronic Frontier Foundation.
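The two radii can be made concrete with a point-in-zone check: compute the great-circle distance from a device fix to a sensitive site and compare it with each bill's threshold. The sketch below is a minimal illustration; the site coordinates and sample fix are invented, and `MA_RADIUS` and `CA_RADIUS` are simply the 1,850-foot and five-mile figures converted to metres.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in metres."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

FOOT, MILE = 0.3048, 1609.344
MA_RADIUS = 1850 * FOOT   # ~564 m protected zone
CA_RADIUS = 5 * MILE      # ~8,047 m protected zone

def in_protected_zone(fix, site, radius_m):
    """True when a location fix falls inside the protected radius of a site."""
    return haversine_m(*fix, *site) <= radius_m

# Invented coordinates: a sensitive site and a fix roughly 7 km to its north.
site = (42.3601, -71.0589)
fix = (42.4250, -71.0589)

print(in_protected_zone(fix, site, MA_RADIUS))  # outside the 1,850 ft zone
print(in_protected_zone(fix, site, CA_RADIUS))  # inside the five-mile zone
```

A broker subject to such a law would have to drop or never collect fixes for which this kind of test returns true for any listed site category.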

“A ‘permissible purpose’ (which is key to the minimization rule) should be narrowly defined to include only: (1) delivering a product or service that the data subject asked for, (2) fulfilling an order, (3) complying with federal or state law, or (4) responding to an imminent threat to life.”

Time and again, we’ve seen location data weaponized to monitor immigrants, LGBTQ+ individuals, and those seeking reproductive care. In response, state legislatures are advancing bills focused on curbing this misuse. These proposals are grounded in long-standing privacy principles such as informed consent and data minimization—ensuring that only necessary data is collected and stored securely.

These laws don’t just protect residents. They also give peace of mind to travelers from other states, allowing them to exercise their rights without fear of being tracked, surveilled, or retaliated against.

To help guide new legislation, this post outlines essential recommendations for protecting communities through smart policy design. These include:
  • Strong definitions,
  • Clear rules,
  • Affirmation that all location data is sensitive,
  • Empowerment of consumers through a strong private right of action,
  • Prohibition of “pay-for-privacy” schemes, and
  • Transparency through clear privacy policies.
These protections are not just legal reforms—they’re necessary steps toward reclaiming control over our digital movements and ensuring no one is punished for seeking care, support, or safety.

Gmail Users Face a New Dilemma Between AI Features and Data Privacy




Google’s Gmail is now offering two new upgrades, but here’s the catch— they don’t work well together. This means Gmail’s billions of users are being asked to pick a side: better privacy or smarter features. And this decision could affect how their emails are handled in the future.

Let’s break it down. One upgrade focuses on stronger protection of your emails, working like advanced encryption: it keeps your emails private, so even Google won’t be able to read them. The second upgrade brings in artificial intelligence tools to improve how you search and use Gmail, promising quicker, more helpful results.

But there’s a problem. If your emails are fully protected, Gmail’s AI tools can’t read them to include in its search results. So, if you choose privacy, you might lose out on the benefits of smarter searches. On the other hand, if you want AI help, you’ll need to let Google access more of your email content.

This challenge isn’t unique to Gmail. Many tech companies are trying to combine stronger security with AI-powered features, but the two don’t always work together. Apple tried solving this with a system that processes data securely on your device. However, delays in rolling out their new AI tools have made their solution uncertain for now.

Some reports explain the choice like this: if you turn on AI features, Google will use your data to power smart tools. If you turn them off, you’ll have better privacy but lose some useful options. The real issue is that opting out isn’t always easy. Some settings may remain active unless you manually turn them off, and fully securing your emails still isn’t simple.

Even when extra security is enabled, email systems have limitations. For example, Apple’s iCloud Mail doesn’t use full end-to-end encryption because it must work with global email networks. So even private emails may not be completely safe.

This issue goes beyond Gmail. Other platforms are facing similar challenges. WhatsApp, for example, added a privacy mode that blocks saving chats and media, but also limits AI-related features. OpenAI’s ChatGPT can now remember what you told it in past conversations, which may feel helpful but also raises questions about how your personal data is being stored.

In the end, users need to think carefully. AI tools can make email more useful, but they come with trade-offs. Email has never been a perfectly secure space, and with smarter AI, new threats like scams and data misuse may grow. That’s why it’s important to weigh both sides before making a choice.



GPS Spoofing Emerges as a Serious Risk for Civil and Military Applications



Modern aviation's growing reliance on satellite-based navigation has raised serious concerns among global aviation authorities about emerging threats to the integrity of these systems. One such threat, GPS spoofing, is rapidly gaining attention for its potential to undermine the safety and reliability of aircraft operations.

Global Navigation Satellite System (GNSS) spoofing, the act of transmitting counterfeit signals to confuse GNSS receivers, has become an increasingly serious concern for aviation safety worldwide, including in India. This interference compromises the accuracy of aircraft navigation systems by corrupting critical location, navigation, and timing data, creating a significant risk of operational and security failures. 

Several recent media articles have brought renewed focus to the threat of GPS spoofing and its potentially catastrophic impact on critical systems and infrastructure, most notably the aviation industry. Concern is growing because spoofing incidents are on the rise in areas close to national borders, where the threat is particularly high.

One area of concern raised in public discourse as well as parliamentary debate is the vicinity of the Amritsar border. The increasing prevalence of spoofing in this strategically sensitive zone has raised significant concerns about the vulnerability of aircraft operating in the region, as well as broader implications for national security and cross-border aviation safety. 

The ongoing disruption of GNSS signals in this area threatens the integrity of navigation systems and requires immediate policy attention, interagency coordination, and robust mitigation measures. A report issued by OPS Group in September 2024 illustrates the extent of the problem in South Asia. 

The report states that the area northwest of New Delhi and around Lahore, Pakistan is experiencing increased spoofing activity. The region ranked ninth globally for the number of spoofing incidents between July 15 and August 15, 2024, with 316 aircraft affected within that period. The findings point to the need for enhanced monitoring, reporting mechanisms, and countermeasures to mitigate the risks of GPS signal manipulation in high-traffic air corridors. 

In GPS spoofing, also called GPS simulation, counterfeit signals are sent to satellite-based navigation systems to fool GPS receivers. The deceived receiver calculates an inaccurate location, compromising the reliability of the data it provides. 

GPS technology is a foundational component of a range of critical applications, including aviation navigation, maritime operations, autonomous systems, logistics, and time synchronisation across financial and communication networks, so such interference has profound implications. Once considered a theoretical vulnerability, GPS spoofing has become a practical and increasingly accessible threat.

Advances in technology, along with the availability of open-source software and low-cost hardware capable of generating fake GPS signals, have significantly lowered the barrier for potential attackers. The result is an environment in which not just governments and military institutions but also commercial industries and individuals face serious operational and safety risks.

GPS spoofing has therefore become a broader cybersecurity concern that demands coordinated global attention and response rather than being treated as a series of isolated incidents. Spoofing transmits counterfeit satellite signals that mislead navigation systems into miscalculating their true position, velocity, and timing, whereas GPS jamming simply overpowers satellite signals with interference. 

In contrast to jamming, GPS spoofing works more subtly, inserting false data that is often indistinguishable from genuine signals, which raises operational risk and makes detection more difficult. This deceptive nature puts aviation systems, which rely heavily on satellite-based navigational data, at serious risk. Because GNSS signals originate from satellites positioned more than 20,000 kilometres above the Earth's surface, they arrive very weak. 

That inherent weakness makes them particularly susceptible to spoofing. Spoofed signals, often transmitted from ground sources at higher intensity, can override the legitimate signals received by onboard systems such as the Flight Management System (FMS), Automatic Dependent Surveillance systems (ADS-B/ADS-C), and Ground Proximity Warning Systems. 

Such manipulation can cause aircraft to deviate from intended flight paths, misrepresent their location to air traffic controllers, or encounter unforeseen terrain hazards, all of which compromise flight safety. Spoofing has advanced well beyond theoretical scenarios and is now recognised as an effective tool of both electronic and asymmetric warfare, with state and non-state actors around the world tapping into it for tactical advantage. 

According to reports from the Russia-Ukraine conflict, Russian forces have employed advanced systems such as the Krasukha-4 and Tirada-2 to spoof GNSS signals, effectively disorienting enemy drones, aircraft, and missiles. An earlier example is Iran's reported use of spoofing techniques in 2011 to bring down a US RQ-170 Sentinel drone. Similar tactics appeared during the Nagorno-Karabakh conflict between Azerbaijan and Armenia.

Azerbaijani forces used extensive electronic warfare measures, including GNSS spoofing, to disable Armenia's radar and air defence infrastructure, allowing Turkish and Israeli drones to operate almost with impunity during the conflict. These cases reinforce the strategic utility of spoofing in modern conflict and demonstrate its status as a credible, sophisticated threat to national and international security systems worldwide.

Countering GPS spoofing requires a proactive, multi-pronged approach that combines technological safeguards, robust policy frameworks, and broader awareness initiatives. As reliance on satellite-based navigation continues to grow, it is increasingly important that stakeholders such as governments, aviation authorities, and technology companies invest in developing and deploying advanced anti-spoofing mechanisms.

Several techniques can detect and reject counterfeit signals in real time, including signal authentication protocols, anomaly detection algorithms, and secure hardware configurations. User awareness also plays a significant role in defending against counterfeit signals: operators and organisations should develop a thorough understanding of their GPS infrastructure and watch for any unusual behaviour that could indicate a spoofing attempt.
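As a rough illustration of the anomaly-detection idea mentioned above, a receiver-side monitor can flag fixes that imply physically impossible motion, or signals that arrive suspiciously strong (a common symptom of ground-based spoofers). This is a minimal sketch, not a production detector: the fix format, thresholds, and field names are all assumptions for the example.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flag_spoofing(fixes, max_speed_kmh=1200.0, max_cn0_db=55.0):
    """Flag GPS fixes that imply impossible motion or abnormally strong signals.

    `fixes` is a list of dicts with keys: t (seconds), lat, lon, and cn0
    (carrier-to-noise density, dB-Hz). Both thresholds are illustrative.
    Returns a list of (timestamp, reason) alerts.
    """
    alerts = []
    for prev, cur in zip(fixes, fixes[1:]):
        dt_h = (cur["t"] - prev["t"]) / 3600.0
        if dt_h <= 0:
            continue  # skip out-of-order or duplicate timestamps
        speed = haversine_km(prev["lat"], prev["lon"],
                             cur["lat"], cur["lon"]) / dt_h
        if speed > max_speed_kmh:
            alerts.append((cur["t"], f"implausible speed {speed:.0f} km/h"))
        if cur["cn0"] > max_cn0_db:
            alerts.append((cur["t"], f"abnormally strong signal {cur['cn0']:.0f} dB-Hz"))
    return alerts
```

For example, a fix sequence that jumps roughly 500 km in one minute while the signal strength spikes would raise both alerts. Real receivers combine many more checks (clock drift, per-satellite consistency, cryptographic authentication where available); this shows only the simplest consistency test.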

Regular employee training, system audits, and adherence to cybersecurity best practices make businesses significantly more resistant to such attacks. Legal and ethical considerations are also critical: in many jurisdictions, transmitting false navigation signals carries severe penalties. To avoid unintended disruptions, GPS signal simulations must comply with regulatory standards and ethical norms, whether they are used for research, testing, or training.

Finally, keeping pace with emerging technologies and a rapidly evolving threat landscape is essential. A reliable cybersecurity solution, integrated with comprehensive security platforms such as advanced threat detection software, can serve as a critical line of defence. As GPS spoofing grows in prominence, a coordinated effort built on vigilance, innovation, and accountability will be essential to safeguard the integrity of global navigation systems and the many sectors that depend on them.

ProtectEU and VPN Privacy: What the EU Encryption Plan Means for Online Security

 

Texting through SMS is pretty much a thing of the past. Most people today rely on apps like WhatsApp and Signal to share messages, make encrypted calls, or send photos—all under the assumption that our conversations are private. But that privacy could soon be at risk in the EU.

On April 1, 2025, the European Commission introduced a new plan called ProtectEU. Its goal is to create a roadmap for “lawful and effective access to data for law enforcement,” particularly targeting encrypted platforms. While messaging apps are the immediate focus, VPN services might be next. VPNs rely on end-to-end encryption and strict no-log policies to keep users anonymous. However, if ProtectEU leads to mandatory encryption backdoors or expanded data retention rules, that could force VPN providers to change how they operate—or leave the EU altogether. 

Proton VPN’s Head of Public Policy, Jurgita Miseviciute, warns that weakening encryption won’t solve security issues. Instead, she believes it would put users at greater risk, allowing bad actors to exploit the same access points created for law enforcement. Proton is monitoring the plan closely, hoping the EU will consider solutions that protect encryption. Surfshark takes a more optimistic view. Legal Head Gytis Malinauskas says the strategy still lacks concrete policy direction and sees the emphasis on cybersecurity as a potential boost for privacy tools like VPNs. Mullvad VPN isn’t convinced. 

Having fought against earlier EU proposals to scan private chats, Mullvad criticized ProtectEU as a rebranded version of old policies, expressing doubt it will gain wide support. One key concern is data retention. If the EU decides to require VPNs to log user activity, it could fundamentally conflict with their privacy-first design. Denis Vyazovoy of AdGuard VPN notes that such laws could make no-log VPNs unfeasible, prompting providers to exit the EU market—much like what happened in India in 2022. NordVPN adds that the more data retained, the more risk users face from breaches or misuse. 

Even though VPNs aren’t explicitly targeted yet, an EU report has listed them as a challenge to investigations—raising concerns about future regulations. Still, Surfshark sees the current debate as a chance to highlight the legitimate role VPNs play in protecting everyday users. While the future remains uncertain, one thing is clear: the tension between privacy and security is only heating up.

Over 1.6 Million Affected in Planned Parenthood Lab Partner Data Breach

 

A cybersecurity breach has exposed the confidential health data of more than 1.6 million individuals—including minors—who received care at Planned Parenthood centers across over 30 U.S. states. The breach stems from Laboratory Services Cooperative (LSC), a company providing lab testing for reproductive health clinics nationwide.

In a notice filed with the Maine Attorney General’s office, LSC confirmed that its systems were infiltrated on October 27, 2024, and the breach was detected the same day. Hackers reportedly gained unauthorized access to sensitive personal, medical, insurance, and financial records.

"The information compromised varies from patient to patient but may include the following:
  • Personal information: Name, address, email, phone number
  • Medical information: Date(s) of service, diagnoses, treatment, medical record and patient numbers, lab results, provider name, treatment location
  • Insurance information: Plan name and type, insurance company, member/group ID numbers
  • Billing information: Claim numbers, bank account details, billing codes, payment card details, balance details
  • Identifiers: Social Security number, driver's license or ID number, passport number, date of birth, demographic data, student ID number"

In addition to patient data, employee information—including details about dependents and beneficiaries—may also have been compromised.

Patients concerned about whether their data is affected can check if their Planned Parenthood location partners with LSC via the FAQ section on LSC’s website or by calling their support line at 855-549-2662.

While it's impossible to reverse the damage of a breach, experts recommend immediate protective actions:

  • Monitor your credit reports (available weekly for free from all three major credit bureaus)
  • Place fraud alerts, freeze credit, and secure your Social Security number
  • Stay vigilant for unusual account activity and report potential identity theft promptly

LSC is offering 12–24 months of credit monitoring through CyEx Medical Shield Complete to impacted individuals. Those affected must call the customer service line between 9 a.m. and 9 p.m. ET, Monday to Friday, to get an activation code for enrollment.

For minors or individuals without an SSN or credit history, a tailored service named Minor Defense is available with a similar registration process. The enrollment deadline is July 14, 2025.

Why Personal Identity Should Remain Independent of Social Platforms

 


Digital services are now as essential as public utilities such as electricity and water in today's interconnected world, and society rightly expects a similar level of consistency and reliability from them, including the internet and the systems that protect personal information. Digital footprints have become extensions of individual identity, capturing relationships, preferences, ideas, and everyday experiences.

Utah has taken a major step in this direction by enacting the Digital Choice Act, which is designed to ensure that individuals, rather than large technology corporations, control their sensitive personal information. This pioneering legislation gives users meaningful control over how their data is handled on social media platforms, setting a new precedent for digital rights.

When it takes effect on July 1, 2026, the Digital Choice Act is expected to significantly shift control of personal information back to individuals rather than leaving it in the hands of large corporations. Under the Act, users can rely on open-source protocols to transfer their digital content and social connections from one platform to another.

This allows individuals to retain continuity in their digital lives, preserving relationships, media, and conversations, even when they choose to leave a platform. The legislation also affirms the principle of data ownership, giving users the right to permanently delete their data upon departure. In doing so, the Act establishes a fundamentally new relationship between users and platforms.
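The export-then-delete flow described above can be sketched in a few lines. The archive format below is purely illustrative (loosely modelled on ActivityStreams-style fields); the Act mandates open-source protocols but does not prescribe this particular schema, and the function and field names are assumptions for the example.

```python
import json
from datetime import datetime, timezone

def export_user_archive(profile, posts, contacts):
    """Bundle a user's profile, posts, and social graph into one portable
    JSON document that another platform could import. Illustrative schema,
    not a format mandated by the Digital Choice Act.
    """
    archive = {
        "format": "portable-social-archive/0.1",  # hypothetical format tag
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "profile": profile,
        "posts": [
            {"id": p["id"], "created": p["created"], "content": p["content"]}
            for p in posts
        ],
        # handles the destination platform can attempt to re-follow
        "contacts": sorted(contacts),
    }
    return json.dumps(archive, indent=2)
```

In a portability workflow, the user downloads this archive, imports it on the new platform, and then exercises the deletion right so the original service retains nothing. The value of an open, documented schema is exactly this: any competing platform can write the import side without permission from the incumbent.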

Traditional social media companies monetize user attention, earning profits through targeted advertising while offering their services free of charge; under this economic model, the user's data effectively becomes the product. The Digital Choice Act places data ownership back in users' hands, so that they, not corporations, determine how their personal information is used, stored, and shared. At the heart of the legislation is a vision of a digital environment that is more open, competitive, and ethical.

Essentially, the Act mandates interoperability and data portability to empower users and lower entry barriers for emerging platforms, fostering a social media industry built on innovation and competition. Other industries have seen similar successes: in the US, the 1996 Telecommunications Act spurred massive growth in mobile communications, while in the UK, open banking initiatives are credited with a wave of fintech innovation.

Interoperability holds the same promise of choice and diversity for digital platforms that it delivered in those sectors. Today, individuals remain vulnerable to the unilateral decisions of technology companies, with limited recourse against content moderation policies that are often opaque. The TikTok outage of January 2025, which suddenly cut millions of users off from years of personal content and relationships, demonstrated the fragility of this ecosystem.

With the protections of the Digital Choice Act in place, users could have moved their data and networks to a new platform seamlessly, eliminating the risk of such service disruption. Many creators and everyday users are also deplatformed suddenly, with no recourse and no way to restore their digital lives. Under the Act, users can publish and migrate content across platforms in real time, sharing content widely and transitioning to services better suited to their needs.

A flexible approach to data is essential in today's digitally connected world, and beyond social media, the consequences of data captivity are becoming increasingly urgent. The collapse of 23andMe highlighted how vulnerable deeply personal information is in the hands of private companies, and as artificial intelligence becomes ever more integrated into digital infrastructure, the threat of data misuse grows exponentially.

As those stakes rise, robust, user-centred data protection systems become imperative. Utah has established itself as a national leader in digital privacy in recent years: with SB 194 and HB 464, enacted in 2024, the state addressed the safety of minors and accountability for mental health harms caused by social media. Building on this momentum, the Digital Choice Act offers a framework that other states and countries could replicate, encouraging policymakers to recognize data rights as fundamental human rights.

A legal framework that protects data portability and user autonomy is essential to a more equitable digital ecosystem. When individuals can take their information with them, the dynamics of the online world change, encouraging personal agency, responsibility, and transparency. The tools and technologies needed to achieve such interoperability already exist.

Keeping pace with the digital revolution is essential. To secure the future of digital citizenship, lawmakers, technology leaders, and civil society must work together to prioritize the protection of personal identity online. The digital world is changing rapidly, and the responsibilities of those who design and oversee it are changing with it.

As data continues to transform the way people live, work, and connect, the right to control one's digital presence must be embedded at the core of digital policy. The Digital Choice Act serves as a timely blueprint for how governments can proactively address mounting concerns over data privacy, platform dominance, and the erosion of user autonomy.

Utah has taken a significant first step, but other jurisdictions must also recognize the long-term social, economic, and ethical benefits of similar legislation. That means fostering open standards, maintaining fair competition, and strengthening the mechanisms that let individuals easily move and manage their digital lives.

A future in which digital identities are protected and respected by law, rather than owned by private corporations, is both necessary and achievable. Adopting user-centric principles and establishing regulatory safeguards that ensure transparency and accountability can ensure that technology serves people rather than exploiting them.

Returning control over identity to users must become a shared and urgent priority, one that demands bold leadership, collaborative innovation, and a deeper commitment to digital rights to ensure a healthy and prosperous society in an increasingly digital era.