
ProtectEU and VPN Privacy: What the EU Encryption Plan Means for Online Security


Texting over SMS is pretty much a thing of the past. Most people today rely on apps like WhatsApp and Signal to share messages, make encrypted calls, or send photos—all under the assumption that their conversations are private. But that privacy could soon be at risk in the EU.

On April 1, 2025, the European Commission introduced a new plan called ProtectEU. Its goal is to create a roadmap for “lawful and effective access to data for law enforcement,” particularly targeting encrypted platforms. While messaging apps are the immediate focus, VPN services might be next. VPNs rely on end-to-end encryption and strict no-log policies to keep users anonymous. However, if ProtectEU leads to mandatory encryption backdoors or expanded data retention rules, that could force VPN providers to change how they operate—or leave the EU altogether. 

Proton VPN’s Head of Public Policy, Jurgita Miseviciute, warns that weakening encryption won’t solve security problems. Instead, she believes it would put users at greater risk, allowing bad actors to exploit the same access points created for law enforcement. Proton is monitoring the plan closely, hoping the EU will consider solutions that preserve encryption. Surfshark takes a more optimistic view: its Legal Head, Gytis Malinauskas, says the strategy still lacks concrete policy direction and sees the emphasis on cybersecurity as a potential boost for privacy tools like VPNs.

Mullvad VPN isn’t convinced. Having fought against earlier EU proposals to scan private chats, Mullvad criticized ProtectEU as a rebranded version of old policies and expressed doubt that it will gain wide support.

One key concern is data retention. If the EU were to require VPNs to log user activity, that requirement would fundamentally conflict with their privacy-first design. Denis Vyazovoy of AdGuard VPN notes that such laws could make no-log VPNs unfeasible, prompting providers to exit the EU market—much like what happened in India in 2022. NordVPN adds that the more data is retained, the more risk users face from breaches or misuse.

Even though VPNs aren’t explicitly targeted yet, an EU report has listed them as a challenge to investigations—raising concerns about future regulations. Still, Surfshark sees the current debate as a chance to highlight the legitimate role VPNs play in protecting everyday users. While the future remains uncertain, one thing is clear: the tension between privacy and security is only heating up.

Best Encrypted Messaging Apps: Signal vs Telegram vs WhatsApp Privacy Guide


Encrypted messaging apps have become essential tools in the age of cyber threats and surveillance. With rising concerns over data privacy, especially after recent high-profile incidents, users are turning to platforms that offer more secure communication. Among the top contenders are Signal, Telegram, and WhatsApp—each with its own approach to privacy, encryption, and data handling. 

Signal is widely regarded as the gold standard when it comes to messaging privacy. Backed by a nonprofit foundation and funded through grants and donations, Signal doesn’t rely on user data for profit. It collects minimal information—just your phone number—and offers strong on-device privacy controls, like disappearing messages and call relays that mask IP addresses. Being open-source, Signal allows independent audits of its code, ensuring transparency. Even when subpoenaed, the app could only provide limited data, such as the account creation date and the date of last connection, making it a favorite among journalists, whistleblowers, and privacy advocates.

Telegram offers a broader range of features but falls short on privacy. While it supports end-to-end encryption, this is limited to its “secret chats” and is not enabled by default for regular messages or public channels. Telegram also stores metadata, such as IP addresses and contact info, and recently updated its privacy policy to allow data sharing with authorities under legal requests. Despite this, it remains popular for public content sharing and large group chats, thanks to its forum-like structure and optional paid features.

WhatsApp, with over 2 billion users, is the most widely used encrypted messaging app. It employs the same encryption protocol as Signal, ensuring end-to-end protection for chats and calls. However, as a Meta-owned platform, it collects significant user data—including device information, usage logs, and location data. Even people not using WhatsApp can have their data collected via synced contacts. While messages remain encrypted, the amount of metadata stored makes it less privacy-friendly compared to Signal. 
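To make “end-to-end” concrete, here is a minimal sketch of the core idea behind both apps’ protection: each device holds a private key, the two sides derive a shared secret via public-key exchange, and only the resulting key can decrypt a message. This uses a single X25519 exchange with Python’s cryptography package; the real Signal protocol layers identity verification and the Double Ratchet on top, so treat this purely as an illustration.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates a key pair; private keys never leave the device.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Both sides compute the same shared secret from their own private key
# and the peer's public key (Diffie-Hellman key agreement).
shared = alice_priv.exchange(bob_priv.public_key())
assert shared == bob_priv.exchange(alice_priv.public_key())

# Derive a symmetric session key from the shared secret.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"demo session").derive(shared)

# Encrypt on the sender's device; only the key holder can decrypt.
nonce = os.urandom(12)
box = ChaCha20Poly1305(key)
ciphertext = box.encrypt(nonce, b"meet at noon", None)
assert box.decrypt(nonce, ciphertext, None) == b"meet at noon"
```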

All three apps offer some level of encrypted messaging, but Signal stands out for its minimal data collection, open-source transparency, and commitment to privacy. Telegram provides a flexible chat experience with weaker privacy controls, while WhatsApp delivers strong encryption within a data-heavy ecosystem. Choosing the best encrypted messaging app depends on what you prioritize more: security, features, or convenience.

Apple and Google App Stores Host VPN Apps Linked to China, Face Outrage


Google (GOOGL) and Apple (AAPL) are under harsh scrutiny after a recent report disclosed that their app stores host VPN applications associated with Qihoo 360, a Chinese cybersecurity firm blacklisted by the U.S. government. The Financial Times reports that five VPNs still available to U.S. users, including VPN Proxy Master and Turbo VPN, are linked to Qihoo, which was sanctioned in 2020 over alleged military ties.

Illusion of Privacy: VPNs collecting data

In 2025 alone, three of the VPN apps have had over a million downloads each on Google Play and Apple’s App Store, suggesting these aren’t small-time apps, Sensor Tower reports. They are advertised as “private browsing” tools, yet they give the companies behind them full visibility into users’ online activity. This is alarming because China’s national security laws mandate that companies hand over user data if the government demands it.

Concerns around ownership structures

The intricate web of ownership raises important questions. The apps are run by Singapore-based Innovative Connecting, owned by Lemon Seed, a Cayman Islands firm that Qihoo acquired for $69.9 million in 2020. Qihoo claimed to have sold the business months later, but the FT reports that the China-based team building the applications remained under Qihoo’s umbrella for years. According to the FT, a developer said, “You could say that we’re part of them, and you could say we’re not. It’s complicated.”

Amid outrage, Google and Apple respond 

Google said it strives to follow sanctions and removes violators when it finds them. Apple, which says it enforces strict rules on VPN data-sharing, removed two apps, Snap VPN and Thunder VPN, after the FT contacted the company.

Privacy scares can damage stock valuations

What Google and Apple face is more than public outrage. Investors prioritize data privacy, and regulatory threats have grown, especially amid concerns about U.S. tech firms’ links to China. If the U.S. government gets involved, the result could be stricter rules, fines, and further app removals. If that happens, shareholders won’t be happy.

According to FT, “Innovative Connecting said the content of the article was not accurate and declined to comment further. Guangzhou Lianchuang declined to comment. Qihoo and Chen Ningyi did not respond to requests for comment.”

Orion Brings Fully Homomorphic Encryption to Deep Learning for AI Privacy


As data privacy becomes an increasing concern, a new artificial intelligence (AI) encryption breakthrough could transform how sensitive information is handled. Researchers Austin Ebel, Karthik Garimella, and Assistant Professor Brandon Reagen have developed Orion, a framework that integrates fully homomorphic encryption (FHE) into deep learning. 

This advancement allows AI systems to analyze encrypted data without decrypting it, ensuring privacy throughout the process. FHE has long been considered a major breakthrough in cryptography because it enables computations on encrypted information while keeping it secure. However, applying this method to deep learning has been challenging due to the heavy computational requirements and technical constraints. Orion addresses these challenges by automating the conversion of deep learning models into FHE-compatible formats. 
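For intuition about what “computing on encrypted data” means, here is a toy additively homomorphic scheme (Paillier) in plain Python: multiplying two ciphertexts yields an encryption of the sum of their plaintexts, so a server can add numbers it cannot read. This is a minimal sketch with demo-sized primes, not a secure implementation, and the FHE schemes Orion builds on are considerably more powerful and complex.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic). Demo-sized primes
# only -- NOT secure, and far simpler than production FHE schemes.
p, q = 2357, 2551
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
g = n + 1                      # standard generator choice
mu = pow(lam, -1, n)           # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

a, b = encrypt(20), encrypt(22)
# Multiplying ciphertexts adds the hidden plaintexts: computation on
# encrypted data, without ever decrypting the inputs.
assert decrypt((a * b) % n2) == 42
```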

The researchers’ study, recently published on arXiv and set to be presented at the 2025 ACM International Conference on Architectural Support for Programming Languages and Operating Systems, highlights Orion’s ability to make privacy-focused AI more practical. One of the biggest concerns in AI today is that machine learning models require direct access to user data, raising serious privacy risks. Orion eliminates this issue by allowing AI to function without exposing sensitive information. The framework is built to work with PyTorch, a widely used machine learning library, making it easier for developers to integrate FHE into existing models. 
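The practical upshot is that a developer describes a model in ordinary PyTorch and leaves the FHE conversion to the framework. Orion’s own API is not reproduced here; the sketch below simply shows the kind of standard PyTorch module such a tool would take as input.

```python
import torch
import torch.nn as nn

# An ordinary PyTorch model -- the kind of input an FHE compiler such as
# Orion is described as converting. Nothing here is Orion-specific.
class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.act = nn.ReLU()   # non-polynomial activations are the hard part under FHE
        self.head = nn.Linear(8 * 28 * 28, 10)

    def forward(self, x):
        x = self.act(self.conv(x))
        return self.head(x.flatten(1))

model = SmallNet()
out = model(torch.randn(1, 1, 28, 28))  # an FHE-compiled version would run on ciphertexts
print(out.shape)  # torch.Size([1, 10])
```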

Orion also introduces optimization techniques that reduce computational burdens, making privacy-preserving AI more efficient and scalable. Orion has demonstrated notable performance improvements, achieving speeds 2.38 times faster than previous FHE deep learning methods. The researchers successfully implemented high-resolution object detection using the YOLO-v1 model, which contains 139 million parameters—a scale previously considered impractical for FHE. This progress suggests Orion could enable encrypted AI applications in sectors like healthcare, finance, and cybersecurity, where protecting user data is essential. 

A key advantage of Orion is its accessibility. Traditional FHE implementations require specialized knowledge, making them difficult to adopt. Orion simplifies the process, allowing more developers to use the technology without extensive training. By open-sourcing the framework, the research team hopes to encourage further innovation and adoption. As AI continues to expand into everyday life, advancements like Orion could help ensure that technological progress does not come at the cost of privacy and security.

AI and Privacy – Issues and Challenges


Artificial intelligence is changing cybersecurity and digital privacy. It promises better security but also raises concerns about ethical boundaries, data exploitation, and surveillance. From facial recognition software to predictive crime prevention, consumers are left wondering where to draw the line between safety and overreach as AI-driven systems become ever more integrated into daily life.

The same artificial intelligence (AI) tools that aid in spotting online threats, optimising security procedures, and stopping fraud can also be used for intrusive data collection, behavioural tracking, and mass surveillance. The use of AI-powered surveillance in corporate data mining, law enforcement profiling, and government tracking has drawn criticism in recent years. Without clear regulations and transparency, AI risks undermining rather than defending basic rights.

AI and data ethics

Despite encouraging developments, there are numerous instances of AI-driven systems going awry, raising serious questions. Facial recognition firm Clearview AI amassed one of the largest facial recognition databases in the world by scraping billions of photos from social media without consent. Clearview’s technology was employed by governments and law enforcement organisations across the globe, prompting lawsuits and regulatory action over mass surveillance.

The UK Department for Work and Pensions used an AI system to detect welfare fraud. An internal investigation suggested that the system disproportionately targeted people based on their age, disability, marital status, and nationality. This bias resulted in certain groups being unfairly singled out for fraud investigations, raising questions about discrimination and the ethical use of artificial intelligence in public services. Despite earlier assurances of impartiality, the findings have fuelled calls for greater openness and oversight of government AI use.

Regulations and consumer protection

Governments worldwide are moving to regulate the ethical use of AI, and several significant regulations will directly affect consumers. The European Union’s AI Act, whose provisions take effect from 2025, divides AI applications into risk categories.

Strict requirements will apply to high-risk technologies, such as biometric surveillance and facial recognition, to guarantee transparency and ethical deployment. The possibility of severe sanctions for non-compliant companies further reinforces the EU’s commitment to responsible AI governance.

In the United States, the California Consumer Privacy Act gives individuals more control over their personal data. Consumers have the right to know what information firms gather about them, to request its deletion, and to opt out of data sales. This law adds an important layer of privacy protection in an era when AI-powered data processing is becoming ever more common.

The White House has also introduced the AI Bill of Rights, a framework aimed at encouraging responsible AI practices. While not legally enforceable, it emphasises the need for privacy, transparency, and algorithmic fairness, pointing to a broader push for ethical AI development in policymaking.

Encryption Under Siege: A New Wave of Attacks Intensifies


Over the past decade, encrypted communication has become the norm for billions worldwide. Platforms like Signal, iMessage, and WhatsApp enable end-to-end encryption by default, ensuring user privacy. Despite widespread adoption, governments continue pushing for greater access, threatening encryption’s integrity.

Recently, authorities in the UK, France, and Sweden have introduced policies that could weaken encryption, adding to EU and Indian regulatory measures that challenge privacy. Meanwhile, US intelligence agencies, previously critical of encryption, now advocate for its use after major cybersecurity breaches. The shift follows an incident where the China-backed hacking group Salt Typhoon infiltrated US telecom networks. Simultaneously, the second Trump administration is expanding surveillance of undocumented migrants and reassessing intelligence-sharing agreements.

“The trend is bleak,” says Carmela Troncoso, privacy and cryptography researcher at the Max-Planck Institute for Security and Privacy. “New policies are emerging that undermine encryption.”

Law enforcement argues encryption obstructs criminal investigations, leading governments to demand backdoor access to encrypted platforms. Experts warn that such access could be exploited by malicious actors, jeopardizing security. Apple, for example, recently withdrew its encrypted iCloud backup system from the UK after receiving a secret government order; complying would have required creating a backdoor, and the order is expected to be challenged in court on March 14. Similarly, Sweden is considering laws requiring messaging services like Signal and WhatsApp to retain copies of messages for law enforcement access, prompting Signal to threaten to exit the market.

“Some democracies are reverting to crude approaches to circumvent encryption,” says Callum Voge, director of governmental affairs at the Internet Society.

A growing concern is client-side scanning, a technology that scans messages on users’ devices before encryption. While presented as a compromise, experts argue it introduces vulnerabilities. The EU has debated its implementation for years, with some member states advocating stronger encryption while others push for increased surveillance. Apple abandoned a similar initiative after warning that scanning for one type of content could pave the way for mass surveillance.
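Mechanically, client-side scanning is easy to describe: before a message is encrypted, the client compares its content against a list of known-bad fingerprints. The hypothetical sketch below uses exact SHA-256 matching; real proposals rely on perceptual hashes of images, which is exactly where researchers fear false positives and scope creep.

```python
import hashlib

# Hypothetical on-device blocklist of content fingerprints.
BLOCKLIST = {hashlib.sha256(b"example banned content").hexdigest()}

def scan_then_send(payload: bytes) -> bool:
    """Scan a message BEFORE encryption, as client-side scanning proposes."""
    if hashlib.sha256(payload).hexdigest() in BLOCKLIST:
        # In proposed designs a match could be reported to a third party --
        # the hook critics warn could later be widened to other content.
        return False
    # encrypt_and_send(payload) would run here; the scan happens first,
    # so end-to-end encryption no longer covers everything on the device.
    return True

assert scan_then_send(b"hello") is True
assert scan_then_send(b"example banned content") is False
```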

“Europe is divided, with some countries strongly in favor of scanning and others strongly against it,” says Voge.

Another pressing threat is the potential banning of encrypted services. Russia blocked Signal in 2024, while India’s legal battle with WhatsApp could force the platform to abandon encryption or exit the market. The country has already prohibited multiple VPN services, further limiting digital privacy options.

Despite mounting threats, pro-encryption responses have emerged. The US Cybersecurity and Infrastructure Security Agency and the FBI have urged the use of encrypted communication following recent cybersecurity breaches. Sweden’s armed forces also endorse Signal for unclassified communications, recognizing its security benefits.

With the UK’s March 14 legal proceedings over Apple’s backdoor request approaching, US senators and privacy organizations are demanding greater transparency. UK civil rights groups are challenging the confidential nature of such surveillance orders.

“The UK government may have come for Apple today, but tomorrow it could be Google, Microsoft, or even your VPN provider,” warns Privacy International.

Encryption remains fundamental to human rights, safeguarding free speech, secure communication, and data privacy. “Encryption is crucial because it enables a full spectrum of human rights,” says Namrata Maheshwari of Access Now. “It supports privacy, freedom of expression, organization, and association.”

As governments push for greater surveillance, the fight for encryption and privacy continues, shaping the future of digital security worldwide.


How Data Removal Services Protect Your Online Privacy from Brokers

 
Data removal services play a crucial role in safeguarding online privacy by helping individuals remove their personal information from data brokers and people-finding websites. Every time users browse the internet, enter personal details on websites, or use search engines, they leave behind a digital footprint. This data is often collected by aggregators and sold to third parties, including marketing firms, advertisers, and even organizations with malicious intent. With data collection becoming a billion-dollar industry, the need for effective data removal services has never been more urgent. 

Many people are unaware of how much information is available about them online. A simple Google search may reveal social media profiles, public records, and forum posts, but this is just the surface. Data brokers go even further, gathering information from browsing history, purchase records, loyalty programs, and public documents such as birth and marriage certificates. This data is then packaged and sold to interested buyers, creating a detailed digital profile of individuals without their explicit consent. 

Data removal services work by identifying where a person’s data is stored, sending removal requests to brokers, and ensuring that information is deleted from their records. These services automate the process, saving users the time and effort required to manually request data removal from hundreds of sources. Some of the most well-known data removal services include Incogni, Aura, Kanary, and DeleteMe. While each service may have a slightly different approach, they generally follow a similar process. Users provide their personal details, such as name, email, and address, to the data removal service. 

The service then scans data brokers’ databases and people-finder sites to locate where personal information is being stored, and sends automated removal requests to those brokers, as sketched below. While some brokers comply with these requests quickly, others may take longer or resist removal efforts. A reliable data removal service provides transparency about the process and expected timelines, ensuring users understand how their information is being handled.
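As a rough illustration of the automation involved, here is a minimal sketch of the request-generation step. The broker names and opt-out addresses are hypothetical, and real services also verify identity, track responses, and re-check brokers periodically.

```python
from email.message import EmailMessage

# Illustrative contact points only -- not real opt-out addresses.
BROKERS = {
    "ExampleBroker": "privacy@examplebroker.example",
    "PeopleFinderCo": "optout@peoplefinderco.example",
}

def build_request(broker: str, address: str, full_name: str) -> EmailMessage:
    """Fill a deletion-request template for one broker."""
    msg = EmailMessage()
    msg["To"] = address
    msg["Subject"] = f"Data deletion request ({broker})"
    msg.set_content(
        "To whom it may concern,\n\n"
        "Under applicable privacy law (e.g. CCPA/GDPR), I request deletion "
        f"of all personal data you hold about {full_name}.\n"
    )
    return msg

# One request per broker; a real service would send and track each one.
requests = [build_request(b, a, "Jane Doe") for b, a in BROKERS.items()]
print(f"prepared {len(requests)} removal requests")
```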

Data brokers profit immensely from selling personal data, with the industry estimated to be worth over $400 billion. Major players like Experian, Equifax, and Acxiom collect a wide range of information, including addresses, birth dates, family status, hobbies, occupations, and even Social Security numbers. People-finding services, such as BeenVerified and TruthFinder, operate similarly by aggregating publicly available data and making it easily accessible for a fee. Unfortunately, this information can also fall into the hands of bad actors who use it for identity theft, fraud, or online stalking.

For individuals concerned about privacy, data removal services offer a proactive way to reclaim control over personal information. Journalists, victims of stalking or abuse, and professionals in sensitive industries particularly benefit from these services. However, in an age where data collection is a persistent and lucrative business, staying vigilant and using trusted privacy tools is essential for maintaining online anonymity.

Microsoft MUSE AI: Revolutionizing Game Development with WHAM and Ethical Challenges


Microsoft has developed MUSE, a cutting-edge AI model that is set to redefine how video games are created and experienced. This advanced system leverages artificial intelligence to generate realistic gameplay elements, making it easier for developers to design and refine virtual environments. By learning from vast amounts of gameplay data, MUSE can predict player actions, create immersive worlds, and enhance game mechanics in ways that were previously impossible. While this breakthrough technology offers significant advantages for game development, it also raises critical discussions around data security and ethical AI usage. 

One of MUSE’s most notable features is its ability to automate and accelerate game design. Developers can use the AI model to quickly prototype levels, test different gameplay mechanics, and generate realistic player interactions. This reduces the time and effort required for manual design while allowing for greater experimentation and creativity. By streamlining the development process, MUSE provides game studios—both large and small—the opportunity to push the boundaries of innovation. 

The AI system is built on Microsoft’s World and Human Action Model (WHAM), which enables it to interpret and respond to player behaviors. By analyzing game environments and user inputs, MUSE can dynamically adjust in-game elements to create more engaging experiences. This could lead to more adaptive and personalized gaming, where the AI tailors challenges and story progression to individual player styles. Such advancements have the potential to revolutionize game storytelling and interactivity.

Despite its promising capabilities, the introduction of AI-generated gameplay also brings important concerns. The use of player data to train these models raises questions about privacy and transparency. Developers must establish clear guidelines on how data is collected and ensure that players have control over their information. Additionally, the increasing role of AI in game creation sparks discussions about the balance between human creativity and machine-generated content. 

While AI can enhance development, it is essential to preserve the artistic vision and originality that define gaming as a creative medium. Beyond gaming, the technology behind MUSE could extend into other industries, including education and simulation-based training. AI-generated environments can be used for virtual learning, professional skill development, and interactive storytelling in ways that go beyond traditional gaming applications. 

As AI continues to evolve, its role in shaping digital experiences will expand, making it crucial to address ethical considerations and responsible implementation. The future of AI-driven game development is still unfolding, but MUSE represents a major step forward. 

By offering new possibilities for creativity and efficiency, it has the potential to change how games are built and played. However, the industry must carefully navigate the challenges that come with AI’s growing influence, ensuring that technological progress aligns with ethical and artistic integrity.