
Microsoft MUSE AI: Revolutionizing Game Development with WHAM and Ethical Challenges

 

Microsoft has developed MUSE, a cutting-edge AI model that is set to redefine how video games are created and experienced. This advanced system leverages artificial intelligence to generate realistic gameplay elements, making it easier for developers to design and refine virtual environments. By learning from vast amounts of gameplay data, MUSE can predict player actions, create immersive worlds, and enhance game mechanics in ways that were previously impossible. While this breakthrough technology offers significant advantages for game development, it also raises critical discussions around data security and ethical AI usage. 

One of MUSE’s most notable features is its ability to automate and accelerate game design. Developers can use the AI model to quickly prototype levels, test different gameplay mechanics, and generate realistic player interactions. This reduces the time and effort required for manual design while allowing for greater experimentation and creativity. By streamlining the development process, MUSE provides game studios—both large and small—the opportunity to push the boundaries of innovation. 

The AI system is built on an advanced framework that enables it to interpret and respond to player behaviors. By analyzing game environments and user inputs, MUSE can dynamically adjust in-game elements to create more engaging experiences. This could lead to more adaptive and personalized gaming, where the AI tailors challenges and story progression based on individual player styles. Such advancements have the potential to revolutionize game storytelling and interactivity. 

Despite its promising capabilities, the introduction of AI-generated gameplay also brings important concerns. The use of player data to train these models raises questions about privacy and transparency. Developers must establish clear guidelines on how data is collected and ensure that players have control over their information. Additionally, the increasing role of AI in game creation sparks discussions about the balance between human creativity and machine-generated content. 

While AI can enhance development, it is essential to preserve the artistic vision and originality that define gaming as a creative medium. Beyond gaming, the technology behind MUSE could extend into other industries, including education and simulation-based training. AI-generated environments can be used for virtual learning, professional skill development, and interactive storytelling in ways that go beyond traditional gaming applications. 

As AI continues to evolve, its role in shaping digital experiences will expand, making it crucial to address ethical considerations and responsible implementation. The future of AI-driven game development is still unfolding, but MUSE represents a major step forward. 

By offering new possibilities for creativity and efficiency, it has the potential to change how games are built and played. However, the industry must carefully navigate the challenges that come with AI’s growing influence, ensuring that technological progress aligns with ethical and artistic integrity.

Default Password Creates Major Security Risk for Apartment Complexes

 


Security researchers have discovered that a widely used door access control system ships with an inherently insecure default password. Thousands of buildings across the country use this system and can be accessed easily and remotely by anyone who knows that password. Researcher Eric Daigle found that many residential and commercial properties in North America have never changed the default credentials on their access control systems, and that many operators are not even aware they should.

When Eric Daigle examined an apartment building's access control panel, he inadvertently uncovered one of the more concerning security issues of recent years. What began as a routine observation while waiting for a ferry led to the discovery of a critical security flaw affecting hundreds of residential buildings across the country, putting thousands of residents at risk.

Late last year, Daigle became interested in the system after noticing an unusual access control panel during his daily routine. A short online search for “MESH by Viscount” turned up a sales page advertising its remote access capability, along with a PDF installation guide available for download. Access control systems are typically shipped with a default password that administrators are expected to replace with their own credentials.

However, Daigle observed that the installation manual gave no clear instructions for changing these credentials. Searching on the title of the user interface's login page, he found multiple publicly accessible login portals for the product. Alarmingly, he was able to access the first one with the default credentials, confirming a critical security vulnerability.

The Enterphone MESH door access system is now owned by Hirsch, which has announced that a software patch requiring users to change the default password will be released shortly. Internet-connected devices commonly ship with a default password, often printed in the product manual to ease initial setup.

Requiring end users to update these credentials manually is, however, a significant security risk: if they fail to do so, their systems remain vulnerable to unauthorized access. Customers installing Hirsch's door access solutions are neither prompted nor required to change the default passwords, leaving many systems exposed. This is the vulnerability Daigle's research brought to light.
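The remedy Hirsch describes, forcing a change away from the factory default before normal operation, can be sketched in a few lines. This is an illustrative Python sketch, not Hirsch's implementation; the `FACTORY_DEFAULT` value and class name are hypothetical.

```python
import hashlib
import hmac
import secrets

FACTORY_DEFAULT = "factory-default"  # hypothetical placeholder, not the real credential


class AccessPanelAuth:
    """Sketch: refuse normal operation until the factory default is replaced."""

    def __init__(self):
        self._salt = secrets.token_bytes(16)
        self._hash = self._digest(FACTORY_DEFAULT)
        self.must_change = True  # set at first boot

    def _digest(self, password: str) -> bytes:
        # Salted, slow hash so the stored credential isn't trivially reversible.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), self._salt, 100_000)

    def login(self, password: str) -> str:
        if not hmac.compare_digest(self._digest(password), self._hash):
            return "denied"
        # Correct password, but the default is still set: force a change.
        return "change-password-required" if self.must_change else "ok"

    def change_password(self, old: str, new: str) -> bool:
        if self.login(old) == "denied" or new == FACTORY_DEFAULT:
            return False
        self._hash = self._digest(new)
        self.must_change = False
        return True
```

The key design choice is that logging in with the default credential never yields normal access; the panel only offers the password-change flow, so installations cannot quietly stay on the factory default.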

Based on these findings, the vulnerability has been designated CVE-2025-26793. Modern building security systems are increasingly integrated with Internet of Things (IoT) technology, especially in apartment complexes seeking a more advanced alternative to traditional phone-line-based access control. Hirsch's Enterphone MESH offers a web-based portal that tracks and logs key-fob use throughout a large building and allows entry points within the building to be controlled remotely.

The crucial security concern is that the system's default login credentials are openly published in the installation manual, which is easy to find with an online search. Daigle ran a quick internet search on the product name displayed on the security terminal of an apartment complex across the street and located the manual within minutes, identifying a way to circumvent the building's security measures. This design flaw creates a serious risk of abuse.

Default passwords on internet-connected devices have historically posed a significant security threat: unauthorized individuals can gain access under the guise of legitimate users, leading to data breaches, or hijack the devices to carry out large-scale cyberattacks. In recent years, several governments, including the UK, Germany, and the US, have encouraged technology manufacturers to adopt more robust security measures and abandon default credentials that were insecure in the first place.

Owing to its ease of exploitation, the flaw has been given the maximum severity rating of 10. Exploiting it takes minimal effort: the installation manual containing the default password is publicly available on Hirsch's website, and anyone who logs in to an affected building's portal with those credentials gains unauthorized access. This highlights a critical flaw in the system's design.

Stalkerware: How Scammers Might Be Tracking Your Phone and What You Can Do

 


Spyware applications designed to secretly monitor people’s phones are becoming more common. These programs, known as stalkerware, can track private messages, calls, photos, locations, and other personal data without the user’s knowledge. Often installed without permission, they operate silently in the background, making them difficult to detect. In many cases, they even disappear from the home screen to avoid suspicion.  

How Stalkerware Works

Stalkerware exploits built-in features of a phone to collect information. It can monitor calls, read texts, access notifications, and track locations. Since these apps run continuously in the background, they can slow down the device, cause overheating, and increase data usage. Because they often disguise themselves with names like “System Service” or “Device Health,” users may not realize they are installed.  

Warning Signs of Stalkerware  

It can be hard to tell if your phone has been infected with spyware, but certain unusual behaviors may indicate its presence. These include:  

• Your phone becoming slow or lagging unexpectedly  

• Overheating, even when not in use  

• Unusual spikes in data usage  

• Strange apps with broad permissions appearing in your settings  

If you notice any of these issues, it’s important to check your device for unauthorized applications.  


How to Find and Remove Stalkerware  

If you suspect someone is spying on your phone, take the following steps to locate and delete the tracking software:  

1. Activate Google Play Protect – This built-in security tool scans apps and helps detect harmful software. You can turn it on in the Play Store under "Play Protect."   

2. Check Accessibility Settings – Many spyware apps request special permissions to access messages, calls, and notifications. Review your phone’s accessibility settings and remove any suspicious apps.  

3. Inspect Device Admin Permissions – Some spyware disguises itself as essential system software to gain control over your phone. Check the “Device Admin” section in your settings and disable any unfamiliar apps.  

4. Review Notification Access – Spyware often requests access to notifications to track messages and alerts. If an app you don’t recognize has these permissions, it may be monitoring your activity.  

5. Delete Suspicious Apps – If you find an unknown app with excessive access to your personal data, disable and uninstall it immediately.  
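For step 2 in particular, technically inclined readers can inspect accessibility grants from a computer with adb: `adb shell settings get secure enabled_accessibility_services` prints a colon-separated list of service components, and `null` or an empty value means none are enabled. A small Python helper (a sketch; the package names below are made up) can turn that raw value into a reviewable list of packages:

```python
def accessibility_packages(raw: str) -> list[str]:
    """Parse the Android secure setting 'enabled_accessibility_services'
    into a sorted list of package names.

    The setting is a colon-separated list of component names such as
    'com.example.app/com.example.app.MonitorService'; Android reports
    'null' (or an empty value) when no service is enabled.
    """
    raw = raw.strip()
    if not raw or raw == "null":
        return []
    # Keep only the package part before the '/' and de-duplicate.
    return sorted({entry.split("/", 1)[0] for entry in raw.split(":") if entry})
```

Any package in the result that you did not knowingly grant accessibility access to deserves a close look.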


How to Protect Your Phone from Spyware

Before removing stalkerware, be cautious—if someone installed it to monitor you, they might get alerted when it’s deleted. If you believe you’re in a risky situation, seek help before taking action. To prevent spyware infections in the future, follow these security tips:  

1. Use a Strong Screen Lock – Set a PIN, password, or fingerprint lock to prevent unauthorized access.  

2. Enable Two-Factor Authentication (2FA) – Adding an extra layer of security helps protect your accounts.  

3. Avoid Unverified Apps – Download applications only from trusted sources like the Google Play Store or Apple App Store.  

4. Check Background Activity – Regularly review your phone’s app permissions and remove anything that looks suspicious.  

By staying alert and taking the right precautions, you can protect your personal data from being tracked without your knowledge. If you ever suspect your device has been compromised, act quickly to secure your privacy.

South Korea Blocks DeepSeek AI App Downloads Amid Data Security Investigation

 

South Korea has taken a firm stance on data privacy by temporarily blocking downloads of the Chinese AI app DeepSeek. The decision, announced by the Personal Information Protection Commission (PIPC), follows concerns about how the company collects and handles user data. 

While the app remains accessible to existing users, authorities have strongly advised against entering personal information until a thorough review is complete. DeepSeek, developed by the Chinese AI Lab of the same name, launched in South Korea earlier this year. Shortly after, regulators began questioning its data collection practices. 

Upon investigation, the PIPC discovered that DeepSeek had transferred South Korean user data to ByteDance, the parent company of TikTok. This revelation raised red flags, given the ongoing global scrutiny of Chinese tech firms over potential security risks. South Korea’s response reflects its increasing emphasis on digital sovereignty. The PIPC has stated that DeepSeek will only be reinstated on app stores once it aligns with national privacy regulations. 

The AI company has since appointed a local representative and acknowledged that it was unfamiliar with South Korea’s legal framework when it launched the service. It has now committed to working with authorities to address compliance issues. DeepSeek’s privacy concerns extend beyond South Korea. Earlier this month, key government agencies—including the Ministry of Trade, Industry, and Energy, as well as Korea Hydro & Nuclear Power—temporarily blocked the app on official devices, citing security risks. 

Australia has already prohibited the use of DeepSeek on government devices, while Italy’s data protection agency has ordered the company to disable its chatbot within its borders. Taiwan has gone a step further by banning all government departments from using DeepSeek AI, further illustrating the growing hesitancy toward Chinese AI firms. 

DeepSeek, founded in 2023 by Liang Feng in Hangzhou, China, has positioned itself as a competitor to OpenAI’s ChatGPT, offering a free, open-source AI model. However, its rapid expansion has drawn scrutiny over potential data security vulnerabilities, especially in regions wary of foreign digital influence. South Korea’s decision underscores the broader challenge of regulating artificial intelligence in an era of increasing geopolitical and technological tensions. 

As AI-powered applications become more integrated into daily life, governments are taking a closer look at the entities behind them, particularly when sensitive user data is involved. For now, DeepSeek’s future in South Korea hinges on whether it can address regulators’ concerns and demonstrate full compliance with the country’s strict data privacy standards. Until then, authorities remain cautious about allowing the app’s unrestricted use.

Enhanced In-Call Security in Android 16 Aims to Tackle Scammers

 


As part of a new security feature being developed by Google, users will no longer be able to modify sensitive settings while on a phone call. Specifically, this in-call anti-scam protection prevents users from enabling the setting that allows applications to be installed from unknown sources and from granting accessibility access during a call.

These restrictions are designed to mitigate the risk of scams that exploit such permissions during phone conversations. Android Authority was the first to report the development. When users attempt to change these settings during a call, a warning appears explaining that scammers often request such actions during phone calls, that the action has therefore been blocked, and that being guided to do this by someone unfamiliar could be a sign of a scam.

Android 16 Beta 2, released this week, introduces several new features, including changes to phone call settings intended both to improve the user experience and to protect users against fraud. Among them is anti-scam protection during phone calls, designed to safeguard users' privacy and sensitive data.

Telephone scams have grown alarmingly sophisticated, with scammers employing ever more elaborate tactics to deceive unsuspecting individuals, including tricking them into installing malware that grants access to sensitive information. Android 16 Beta 2 addresses this by preventing users from enabling certain sensitive settings, such as sideloading permissions, while a phone call is active, reducing the risk of scams that exploit these permissions during conversations.

Android 16 Beta 2 also prevents users from granting applications access to accessibility services while a phone call is underway, an additional safeguard against a technique commonly used by malicious actors to distribute malware.

In this method, known as telephone-oriented attack delivery (TOAD), attackers create a false sense of urgency to coerce potential victims into calling a specific number. The NCSC-FI and the NCC Group reported in 2023 that cybercriminals were distributing dropper applications through SMS messages and phone calls to deceive individuals into installing malware such as Vultur.

Google introduced several new security features with Android 15 when it began rolling out last year, aimed at reducing the risks posed by malicious applications. Among these measures was the automatic disabling of sensitive permissions for apps that were not available in the Google Play Store or were downloaded from unverified sources. The goal is to better protect users from scams and from unauthorized access to sensitive information.

The sideloading permission, which allows an app to install other apps, is disabled by default as a security measure, since software installed outside official app stores poses significant risks. Users must enable the permission manually through Settings > Apps > Special App Access > Install Unknown Apps. Users enrolled in Advanced Protection Mode cannot modify this permission at all, given the security risks involved. This prevents unauthorized installations and strengthens overall device security.

Android 16 adds protection even when a user has already allowed sideloading or installed a malicious app: the device blocks the granting of accessibility access during phone calls. This restriction is vital because accessibility services can exert extensive control over a device, potentially compromising user security and privacy.

Misuse of such permissions can let malicious applications steal sensitive data, lock users out of their devices, or perform other harmful actions. To stop scammers from exploiting phone conversations to install malware or gain critical permissions, Google now blocks these changes during active calls. Online scams are becoming increasingly sophisticated, with cybercriminals using phone calls as a primary method of manipulating and defrauding individuals; these scams usually target older people or those less familiar with digital security practices.

Often, scammers use psychological tactics to deceive victims into following their instructions, such as inducing a false sense of urgency or fear. A scammer usually lures victims into installing applications, often under the guise of providing technical assistance with an issue that is fabricated. Once the attacker has installed the application, it gives him or her access to the victim's device, potentially allowing them to exploit it further. As part of Google's proactive efforts to mitigate these threats, it has implemented enhanced security features on Android 16. 

The Android 16 update will restrict users from sideloading applications or granting high-risk permissions during a phone call, reducing the effectiveness of such fraud schemes. As phone scams grow increasingly complex, this feature represents a significant advance in mobile protection.

With Google's introduction of obstacles into the scam process, Google hopes that fraudulent activity will become more difficult to carry out. Even in cases where scammers instruct victims to terminate a call and attempt the process again, the additional step required to activate certain settings may raise suspicion and may discourage the victim from trying it again. 

With Android 16 Beta 2, Google has implemented anti-scam protections that restrict access to sensitive settings while users are on a call, a proactive approach to fighting the growing threat of phone scams. By limiting these settings during calls, the company seeks to enhance user security and prevent malicious actors from exploiting victims.

Protect Your Security Cameras from Hackers with These Simple Steps

 



Security cameras are meant to keep us safe, but they can also become targets for hackers. If cybercriminals gain access, they can spy on you or tamper with your footage. To prevent this, follow these straightforward tips to ensure your security cameras remain under your control.

1. Avoid Cheap or Second-Hand Cameras

While it might be tempting to buy an inexpensive or used security camera, doing so can put your privacy at risk. Unknown brands or knockoffs may have weak security features, making them easier to hack. Used cameras, even if reset, could still contain old software vulnerabilities or even hidden malware. Always choose reputable brands with good security records.

2. Choose Cameras with Strong Encryption

Encryption ensures that your video data is protected from unauthorized access. Look for brands that offer end-to-end encryption, which keeps your footage secure even if intercepted. Some brands, like Ring and Arlo, provide full encryption options, while others offer partial protection. The more encryption a company provides, the better your data is protected.

3. Research Security Reputation Before Buying

Before purchasing a camera, check if the company has a history of data breaches or security flaws. Some brands have had incidents where hackers accessed user data, so it’s essential to choose a manufacturer with a strong commitment to cybersecurity. Look for companies that use offline storage or advanced security features to minimize risks.

4. Strengthen Your Wi-Fi and App Passwords

A weak Wi-Fi password can allow hackers to access all connected devices in your home, including security cameras. Always use a strong, unique password for both your Wi-Fi network and camera app. Enable encryption on your router, activate built-in firewalls, and consider using a virtual private network (VPN) for extra protection. If you experience life changes like moving or breaking up with a partner, update your passwords to prevent unauthorized access.
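A strong, unique password is easiest to get from a generator rather than by hand. As a rough illustration (the length and character-class rules here are arbitrary, not a recommendation from any camera vendor), Python's `secrets` module can produce one:

```python
import secrets
import string


def make_password(length: int = 16) -> str:
    """Generate a random password using a CSPRNG (the `secrets` module)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Re-draw until at least one lowercase, uppercase, and digit appear,
        # so the result passes common complexity policies.
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)):
            return pw
```

A password manager achieves the same thing and also remembers the result for you.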

5. Keep Your Camera Software Updated

Security camera companies regularly release updates to fix vulnerabilities and improve protection. If your camera has an option for automatic updates, turn it on. If not, make sure to check for updates manually through your camera app to ensure your system has the latest security patches.

6. Enable Two-Factor Authentication (2FA)

Two-factor authentication adds an extra layer of security by requiring a second verification step, such as a text message or email code, before logging in. This prevents unauthorized users from accessing your camera, even if they have your password.
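Under the hood, most authenticator-app codes are time-based one-time passwords (TOTP, RFC 6238): an HMAC over the current 30-second interval, truncated to a few digits. A minimal Python sketch of the algorithm, for illustration only; real logins should use an established authenticator app or library:

```python
import hashlib
import hmac
import struct


def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = unix_time // step                      # 30-second time window
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends on a shared secret and the clock, a stolen password alone is not enough to log in; an attacker would also need the secret stored on your phone.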


Modern security cameras are much safer than before, thanks to improved encryption and security features. Most hacking attempts happen when users fail to secure their accounts or choose unreliable brands. However, there is still a risk if the camera company itself experiences a data breach. To minimize exposure, consider cameras with local storage or privacy covers for indoor models.

Who Tries to Hack Security Cameras?

In most cases, security cameras are not hacked by strangers. Instead, unauthorized access usually comes from people you know, such as an ex-partner or family member who already has login details. Occasionally, unethical employees at security companies have been caught misusing access. Ensuring strong passwords, encryption, and additional security measures can help prevent these issues.

By following these simple steps, you can keep your security cameras safe from hackers and ensure your home remains private and secure.


Apps Illegally Sold Location Data of US Military and Intelligence Personnel

 


Earlier this year, news reports revealed that a Florida-based data brokerage company had sold location data belonging to US military and intelligence personnel stationed overseas. At the time, it was unclear how this sensitive information had been obtained.
 
However, recent investigations indicate that the data was collected in part through various mobile applications operating under revenue-sharing agreements with an advertising technology company, and was later resold by an American firm. Location data collection is one of the most common practices among mobile applications: it is essential for navigation and mapping, and it also enhances the functionality of many other apps.
 
There are concerns, however, that many applications collect location data without a clear or justified reason. Apple's iOS mandates that apps request permission before accessing location data; such requirements aim to protect privacy by providing transparency and control over how sensitive location information is collected and used.
 
After the revelations about the unauthorized sale of location data, Senator Ron Wyden (D-OR) requested clarification from Datastream regarding the source of the data. Wyden's office also reached out to an ad-tech company but did not receive a response. Consequently, the senator escalated the matter to Lithuania's Data Protection Authority (DPA), citing national security concerns.
 
The Lithuanian DPA launched an official investigation into the incident. However, the results remain pending. This case highlights the complexities of the location data industry, where information is often exchanged between multiple organizations with limited regulation. 
 
Cybersecurity expert Zach Edwards pointed out during a conference that "advertising companies often function as surveillance companies with better business models." This growing concern over data collection, sharing, and monetization in the digital advertising industry underscores the need for stricter regulations and accountability. 
 
Security experts recommend that users disable location services when unnecessary and use VPNs for added protection. Given the vast amount of location data transmitted through mobile applications, these precautions are crucial in mitigating potential security risks.

Why European Regulators Are Investigating Chinese AI firm DeepSeek

 


European authorities are raising concerns about DeepSeek, a fast-growing Chinese artificial intelligence (AI) company, over its data practices. Regulators in Italy, Ireland, Belgium, the Netherlands, and France are examining the firm's data collection methods to determine whether they comply with the EU's General Data Protection Regulation (GDPR) and whether personal data is being transferred unlawfully to China.

Due to these concerns, the Italian authority has temporarily blocked access to DeepSeek's R1 chatbot while it investigates what data is collected, how it is used, and what role it played in training the AI model.


What Type of Data Does DeepSeek Actually Collect? 

DeepSeek collects three main forms of information from the user: 

1. Personal data such as names and emails.  

2. Device-related data, including IP addresses.  

3. Data from third parties, such as Apple or Google logins.  

The app may also collect information about a user's activity elsewhere on the device for "Community Security" purposes. Unlike many companies that set timelines or limits on data retention, DeepSeek states that it may retain data indefinitely, and that data may be shared with others, including advertisers, analytics firms, governments, and copyright holders.

While major AI companies such as OpenAI (ChatGPT) and Anthropic (Claude) have faced similar privacy questions, experts note that DeepSeek does not expressly grant users the rights to deletion or to restrict the use of their data, as the GDPR requires.


Where the Collected Data Goes

One of the major concerns is that DeepSeek stores user data in China. The company says it has security measures in place and observes local laws on data transfer, but from a legal perspective it has presented no valid basis under the GDPR for storing data from its European users outside the EU.

According to the EDPB, privacy laws in China place more importance on the stability of the community than on individual privacy, permitting broad access to personal data for purposes such as national security or criminal investigations. It is not clear whether foreign users' data is treated any differently from that of Chinese citizens.


Cybersecurity and Privacy Threats 

Cybercrime indices for 2024 rank China among the countries most exposed to cyberattacks. Cisco's latest report found that DeepSeek's AI model lacks strong defenses against hacking attempts: while other AI models can block at least some "jailbreak" attacks, DeepSeek proved completely vulnerable to them, making it easy to manipulate.


Should Users Worry? 

According to experts, users should exercise caution when using DeepSeek and avoid sharing highly sensitive personal details. The company's unclear data protection policies, its storage of data in China, and its relatively weak security defenses pose serious risks to users' privacy and warrant that caution.

As investigations continue, European regulators will determine whether DeepSeek will be allowed to conduct business in the EU. Until then, users should weigh the risks of exposure before interacting with the platform.



Privacy Concerns Rise Over Antivirus Data Collection

 


To protect their devices from cyberattacks, users rely heavily on their operating systems and trusted antivirus programs, which are among the most widely used internet security solutions. Well-established operating systems and reputable cybersecurity software must provide users with regular updates.

These updates fix security flaws in your system and upgrade the protection software, enhancing your system's defenses and preventing cybercriminals from exploiting vulnerabilities to install malicious software such as malware or spyware. Third-party applications, on the other hand, carry a larger security risk, as they may lack rigorous protection measures. In most cases, modern antivirus programs, firewalls, and other security measures will detect and block any potentially harmful programs. 

The security system will usually generate an alert when an unauthorized or suspicious application tries to install itself on the device, so users can take precautions to keep their devices safe. In the context of privacy, an individual has the right to remain free from unwarranted monitoring, surveillance, or interception. Gathering data is not a new concept; traditionally, data was collected through paper-based methods. 

With technological advancements, data can now be gathered through automated, computer-driven processes, collecting vast amounts of information every minute from millions of individuals around the world for a variety of analytical purposes. Privacy is a fundamental right, recognized as essential to a person's autonomy and their ability to protect their data. 

The need to safeguard this right has become increasingly important in the digital age, as the widespread collection and use of personal information raises significant concerns about privacy and individual liberties. The evaluation discussed here included all of PCMag's Editors' Choices for antivirus and security suites except AVG AntiVirus Free: since Avast acquired AVG in 2016, both products have used the same antivirus engine, so there is little need to evaluate them separately. 

Each piece of security software was evaluated against five key factors: Data Collection, Data Sharing, Accessibility, Software & Process Control, and Transparency, with the greatest emphasis placed on Data Collection and Data Sharing. The assessment involved installing each antivirus program on a test system equipped with network monitoring tools, then examining how each product functioned and what data it transmitted back to its parent company. In addition, the End User License Agreements (EULAs) for each product were carefully reviewed to determine whether they disclosed what kind of data was collected and how much. 
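One way to picture how such a multi-factor evaluation rolls up into a single rating is a weighted sum. The factor names below come from the review; the weights and the sample ratings are illustrative assumptions only, not the publication's actual methodology:

```python
# Hypothetical sketch of combining the five review factors into one score.
# Weights and sample ratings are invented for illustration.

WEIGHTS = {
    "data_collection": 0.30,          # the two emphasized factors
    "data_sharing": 0.30,             # carry the most weight
    "accessibility": 0.15,
    "software_process_control": 0.15,
    "transparency": 0.10,
}

def privacy_score(ratings):
    """Combine per-factor ratings (0-10) into one weighted score."""
    return round(sum(WEIGHTS[f] * ratings[f] for f in WEIGHTS), 2)

sample = {
    "data_collection": 8.0,
    "data_sharing": 7.0,
    "accessibility": 9.0,
    "software_process_control": 6.0,
    "transparency": 5.0,
}
print(privacy_score(sample))  # 7.25
```

Under this sketch, collecting or sharing less data moves the overall score more than any other single factor, mirroring the emphasis described above.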

A comprehensive questionnaire was also sent to the security companies to provide insights beyond the technical analysis and contractual review. Discrepancies between a company's stated policies and its actual network activity adversely affected its overall score. Some vendors declined to answer specific questions, citing security concerns. 

The study also notes that while some data, such as payment information for licensing purposes, must be collected, collecting less data generally results in a higher Data Collection score. Data collected from individuals can provide valuable insights into their preferences and interests; for example, information from food delivery apps can reveal a user's favourite dishes and how frequently they order food. 

In the same vein, it is common for targeted advertisements to be delivered using data derived from search queries, shopping histories, location tracking, and other digital interactions. Using data such as this helps businesses boost sales, develop products, conduct market analysis, optimize user experiences, and improve various functions within their organizations. It is data-driven analytics that is responsible for bringing us personalized advertisements, biometric authentication of employees, and content recommendations on streaming platforms such as Netflix and Amazon Prime.

Moreover, athletes' performance metrics in the field of sports are monitored and compared to previous records to determine progress and areas for improvement. It is a fact that systematic data collection and analysis are key to the development and advancement of the digital ecosystem. By doing so, businesses and industries can operate more efficiently, while providing their customers with better experiences. 

The evaluation also assessed how well these companies manage the data they collect and how accessible they make that information to people. This information plays an important role in ensuring consumer safety and freedom of choice. Overall, companies that use clear, concise language in their End User License Agreements (EULAs) and privacy policies received higher scores for accessibility. 

Companies that provide a comprehensive FAQ explaining what data is collected and why it is used increased their marks further. About three-quarters of the companies surveyed responded, and those that did were credited for the transparency they demonstrated. The more detailed the answers, the higher the score; the availability of third-party audits also significantly influenced the rating. 

Even though a company may handle personal data with transparency and diligence, security vulnerabilities introduced by its partners can undermine those efforts. As part of this study, researchers also examined the security protocols of the companies' third-party cloud storage services. Companies that have implemented bug bounty programs, which reward users for identifying and reporting security flaws, received a higher score in this category than those that did not. There is also the possibility that a security company could be asked by a government authority to hand over data it has gathered on specific users. 

Different jurisdictions have their own unique legal frameworks regarding this, so it is imperative to have an understanding of the location of the data. The General Data Protection Regulation (GDPR) in particular enforces a strict set of privacy protections, which are not only applicable to data that is stored within the European Union (EU) but also to data that concerns EU residents, regardless of where it may be stored. 

Nine of the companies that participated in the survey declined to disclose where their server farms are located. Of those that did answer, three keep their data only within the EU, five store it in both the EU and the US, and two maintain data in the US and India. Kaspersky, for its part, stated that it stores data in several parts of the world, including Europe, Canada, the United States, and Russia. In some cases, government agencies may even instruct security companies to issue a "special" update to a specific user ID to monitor the activities of suspected terrorists. 

In response to a question regarding such practices, the Indian company eScan confirmed that it engages in such activities, as did McAfee and Microsoft. Eleven of the companies that responded affirmed that they do not distribute targeted updates of this nature. Others chose not to respond, raising concerns about transparency in the process.

WhatsApp Says Spyware Company Paragon Hacked 90 Users


Attempts to censor opposition voices are not new. Since the advent of new media, a few governments have used spyware to keep tabs on the public, sometimes targeting individuals they consider a threat. All of this is done under the guise of national security, but in some cases it is aimed at suppressing opposition and is a breach of privacy. 

Zero-click Spyware for WhatsApp

One such incident is the recent WhatsApp "zero-click" hack. In a conversation with Reuters, a WhatsApp official disclosed that Israeli spyware company Paragon Solutions had targeted its users; the victims include journalists and civil society members. Earlier this week, the official told Reuters that WhatsApp had sent Paragon a cease-and-desist notice after the surveillance hack. In its official statement, WhatsApp stressed it will "continue to protect people's ability to communicate privately."

Paragon refused to comment

According to Reuters, WhatsApp had noticed an attempt to hack around 90 users. The official didn't disclose the identity of the targets but said the victims were spread across more than a dozen countries, mostly in Europe. WhatsApp users were sent infected files that required no user interaction to compromise their devices, a technique called a "zero-click" hack and known for its stealth. 

“The official said WhatsApp had since disrupted the hacking effort and was referring targets to Canadian internet watchdog group Citizen Lab,” Reuters reports. The official did not say how it was determined that Paragon was the culprit, but added that law enforcement agencies and industry partners had been notified; he gave no further details.

FBI didn’t respond immediately

“The FBI did not immediately return a message seeking comment,” Reuters said.

Citizen Lab researcher John Scott-Railton said the discovery of Paragon spyware targeting WhatsApp users "is a reminder that mercenary spyware continues to proliferate and as it does, so we continue to see familiar patterns of problematic use."

Ethical implications concerning spying software

Spyware businesses like Paragon sell advanced surveillance software to government clients and present their services as "critical to fighting crime and protecting national security," Reuters notes. However, history suggests that such surveillance tools have frequently been used for illicit spying, in this case on journalists, activists, opposition politicians, and around 50 U.S. officials. This raises questions about the unchecked use of such technology.

Paragon, which was reportedly acquired by Florida-based investment group AE Industrial Partners last month, has tried to position itself publicly as one of the industry's more responsible players. On its website, Paragon advertises "ethically based tools, teams, and insights to disrupt intractable threats," and media reports citing people acquainted with the company say Paragon only sells to governments in stable democratic countries, Reuters notes.

The Evolving Role of Multi-Factor Authentication in Cybersecurity

 


In recent years, the cybersecurity landscape has faced an unprecedented wave of threats. State-sponsored cybercriminals and less experienced attackers armed with sophisticated tools from the dark web are relentlessly targeting weak links in global cybersecurity systems. End users, often the most vulnerable element in the security chain, are frequently exploited. As cyber threats grow increasingly sophisticated, multi-factor authentication (MFA) has emerged as a critical tool to address the limitations of password-based security systems.

The Importance of MFA in Modern Cybersecurity

Passwords, while convenient, have proven insufficient to protect against unauthorized access. MFA significantly enhances account security by adding an extra layer of protection, preventing account compromise even when login credentials are stolen. According to a Microsoft study, MFA can block 99.9% of account compromise attacks. By requiring multiple forms of verification—such as passwords, biometrics, or device-based authentication—MFA creates significant barriers for hackers, making unauthorized access extremely difficult.

Regulations and industry standards are also driving the adoption of MFA. Organizations are increasingly required to implement MFA to safeguard sensitive data and comply with security protocols. As a cornerstone of modern cybersecurity strategies, MFA has proven effective in protecting against breaches, ensuring the integrity of digital ecosystems, and fostering trust in organizational security frameworks.

However, as cyber threats evolve, traditional MFA systems are becoming increasingly inadequate. Many legacy MFA systems rely on outdated technology, making them vulnerable to phishing attacks, ransomware campaigns, and sophisticated exploits. The advent of generative AI tools has further exacerbated the situation, enabling attackers to create highly convincing phishing campaigns, automate complex exploits, and identify security gaps in real-time.

Users are also growing frustrated with cumbersome and inconsistent authentication processes, which undermine adherence to security protocols and erode organizational defenses. This situation underscores the urgent need for a reevaluation of security strategies and the adoption of more robust, adaptive measures.

The Role of AI in Phishing and MFA Vulnerabilities

Artificial intelligence (AI) has become a double-edged sword in cybersecurity. While it offers powerful tools for enhancing security, it also poses significant threats when misused by cybercriminals. AI-driven phishing attacks, for instance, are now virtually indistinguishable from legitimate communications. Traditional phishing indicators—such as typographical errors, excessive urgency, and implausible offers—are often absent in these attacks.

AI enables attackers to craft emails and messages that appear authentic, cleverly designed to deceive even well-trained users. Beyond mere imitation, AI systems can analyze corporate communication patterns and replicate them with remarkable accuracy. Chatbots powered by AI can interact with users in real-time, while deepfake technologies allow cybercriminals to impersonate trusted individuals with unprecedented ease. These advancements have transformed phishing from a crude practice into a precise, calculated science.

Outdated MFA systems are particularly vulnerable to these AI-driven attacks, exposing organizations to large-scale, highly successful campaigns. As generative AI continues to evolve at an exponential rate, the potential for misuse highlights the urgent need for robust, adaptive security measures.

Comprehensive Multi-Factor Authentication: A Closer Look

Multi-Factor Authentication (MFA) remains a cornerstone of cybersecurity, utilizing multiple verification steps to ensure that only authorized users gain access to systems or data. By incorporating layers of authentication, MFA significantly enhances security against evolving cyber threats. The process typically begins with the user providing credentials, such as a username and password. Once verified, an additional layer of authentication—such as a one-time password (OTP), biometric input, or other pre-set methods—is required. Access is only granted after all factors are successfully confirmed.

Key forms of MFA authentication include:

  1. Knowledge-Based Authentication: This involves information known only to the user, such as passwords or PINs. While widely used, these methods are vulnerable to phishing and social engineering attacks.
  2. Possession-Based Authentication: This requires the user to possess a physical item, such as a smartphone with an authentication app, a smart card, or a security token. These devices often generate temporary codes that must be used in combination with a password.
  3. Biometric Authentication: This verifies a user's identity through unique physical traits, such as fingerprints or facial recognition, adding an extra layer of security and personalization.
  4. Location-Based Authentication: This uses GPS data or IP addresses to determine the user's geographical location, restricting access to trusted or authorized areas.
  5. Behavioral Biometrics: This tracks and monitors unique user behaviors, such as typing speed, voice characteristics, or walking patterns, providing an adaptive layer of security.

The combination of these diverse approaches creates a robust defense against unauthorized access, ensuring superior protection against increasingly sophisticated cyberattacks. As organizations strive to safeguard sensitive data and maintain security, the integration of comprehensive MFA solutions is essential.
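As a concrete illustration of the possession-based factor, the one-time codes generated by authenticator apps typically follow the TOTP algorithm of RFC 6238. A minimal sketch using only the Python standard library follows; the secret is the RFC's published test key, not a real credential:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, when=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (HMAC-SHA1, 30 s window)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if when is None else when) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", T = 59 s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", when=59, digits=8))  # 94287082
```

Because the server and the user's device derive the same short-lived code from a shared secret and the current time, a stolen password alone is not enough to log in.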

The cybersecurity landscape is evolving rapidly, with AI-driven threats posing new challenges to traditional security measures like MFA. While MFA remains a critical tool for enhancing security, its effectiveness depends on the adoption of modern, adaptive solutions that can counter sophisticated attacks. By integrating advanced MFA methods and staying vigilant against emerging threats, organizations can better protect their systems and data in an increasingly complex digital environment.

The Evolution of Data Protection: Moving Beyond Passwords

 


As new threats emerge and defensive strategies evolve, the landscape of data protection is undergoing significant changes. With February 1 marking Change Your Password Day, it’s a timely reminder of the importance of strong password habits to safeguard digital information.

While conventional wisdom has long emphasized regularly updating passwords, cybersecurity experts, including those at the National Institute of Standards and Technology (NIST), have re-evaluated this approach. Current recommendations focus on creating complex yet easy-to-remember passphrases and integrating multi-factor authentication (MFA) as an additional layer of security.

Microsoft’s Vision for a Passwordless Future

Microsoft has long envisioned a world where passwords are no longer the primary method of authentication. Instead, the company advocates for the use of passkeys. While this vision has been clear for some time, the specifics of how this transition would occur have only recently been clarified.

In a detailed update from Microsoft’s Identity and Access Management team, Sangeeta Ranjit, Group Product Manager, and Scott Bingham, Principal Product Manager, outlined the anticipated process. They highlighted that cybercriminals are increasingly aware of the declining relevance of passwords and are intensifying password-focused attacks while they still can.

Microsoft has confirmed that passwords will eventually be phased out for authentication. Although over a billion users are expected to adopt passkeys soon, a significant number may continue using both passkeys and traditional passwords simultaneously. This dual usage introduces risks, as both methods can be exploited, potentially leading to privacy breaches.

According to Bingham and Ranjit, the long-term focus must be on phishing-resistant authentication techniques and the complete elimination of passwords within organizations. Simplifying password management while enhancing security remains a critical challenge.

The Need for Advanced Security Solutions

While passwords still play a role in authentication, they are no longer sufficient as the sole defense against increasingly sophisticated cyber threats. The shift toward passwordless authentication requires the development of new technologies that provide robust security without complicating the user experience.

One such solution is compromised credential monitoring, which detects when sensitive information, such as passwords, is exposed on the dark web. This technology promptly notifies administrators or affected users, enabling them to take immediate corrective actions, such as changing compromised credentials.
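Such monitoring can be done without the full password or its hash ever leaving the device. Services such as Have I Been Pwned use a k-anonymity scheme: the client sends only the first five characters of the password's SHA-1 hash and checks the returned suffixes locally. A minimal sketch of the client-side half (the helper name is ours; this is not an official client):

```python
import hashlib

def split_sha1(password):
    """Split a password's SHA-1 hex digest into the 5-character prefix
    that is sent to the monitoring service and the 35-character suffix
    that is matched locally, so the full hash never leaves the client."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = split_sha1("password")
# A real client would then fetch the suffix list for `prefix`
# (e.g. GET https://api.pwnedpasswords.com/range/<prefix>) and check
# membership locally.
print(prefix)  # 5BAA6
```

The service learns only that some password hashing to the prefix `5BAA6` was queried, one of hundreds of candidates, which is what makes the scheme privacy-preserving.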

As the era of passwords draws to a close, organizations and individuals must embrace more secure and user-friendly authentication methods. By adopting advanced technologies and staying informed about the latest developments, we can better protect our digital information in an ever-evolving threat landscape.

Weak Cloud Credentials Behind Most Cyber Attacks: Google Cloud Report

 



A recent Google Cloud report has found a very troubling trend: nearly half of all cloud-related attacks in late 2024 were caused by weak or missing account credentials. This is seriously endangering businesses and giving attackers easy access to sensitive systems.


What the Report Found

The Threat Horizons Report, which was produced by Google's security experts, looked into cyberattacks on cloud accounts. The study found that the primary method of access was poor credential management, such as weak passwords or lack of multi-factor authentication (MFA). These weak spots comprised nearly 50% of all incidents Google Cloud analyzed.

Another factor was misconfigured cloud services, which accounted for more than a third of all attacks. The report further noted a worrying trend of attacks on application programming interfaces (APIs) and user interfaces, which made up around 20% of the incidents. Together, these point to several areas where cloud security is found wanting. 


How Weak Credentials Cause Big Problems

Weak credentials do not just unlock the door for attackers; they let them cause widespread destruction. For instance, in April 2024, over 160 Snowflake accounts were breached due to poor password practices. High-profile companies impacted included AT&T, Advance Auto Parts, and Pure Storage, and the breaches involved massive data leaks. 

Attackers are also finding accounts with excessive permissions, known as overprivileged service accounts. These make it even easier for hackers to move deeper into a network, often harming multiple systems within an organization. Google concluded that once attackers are inside, more than 60 percent of their subsequent actions involve attempts to move laterally within systems. 

The report warns that a single stolen password can trigger a chain reaction. Hackers can use it to take control of apps, access critical data, and even bypass security systems like MFA. This allows them to establish trust and carry out more sophisticated attacks, such as tricking employees with fake messages.


How Businesses Can Stay Safe

To prevent such attacks, organizations should focus on proper security practices. Google Cloud suggests using multi-factor authentication, limiting excessive permissions, and fixing misconfigurations in cloud systems. These steps will limit the damage caused by stolen credentials and prevent attackers from digging deeper.
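As an illustration of the "limit excessive permissions" advice, an audit script can flag service accounts whose role bindings contain wildcard permissions. The data structure and account names below are invented for illustration; a real audit would pull bindings from the cloud provider's IAM API:

```python
# Illustrative only: binding records and account names are made up.

bindings = [
    {"account": "svc-backup", "permissions": ["storage.objects.get"]},
    {"account": "svc-deploy", "permissions": ["*"]},
    {"account": "svc-ci",     "permissions": ["compute.instances.*"]},
]

def overprivileged(bindings):
    """Flag accounts whose permission list contains any wildcard."""
    return [b["account"] for b in bindings
            if any("*" in perm for perm in b["permissions"])]

print(overprivileged(bindings))  # ['svc-deploy', 'svc-ci']
```

Reviewing such a list regularly shrinks the blast radius if any one credential is stolen.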

This report is a reminder that weak passwords and poor security habits are not just small mistakes; they can lead to serious consequences for businesses everywhere.


GM Faces FTC Ban on Selling Customer Driving Data for Five Years

 



General Motors (GM) and its OnStar division have been barred from selling customer driving data for the next five years. This decision follows an investigation that revealed GM was sharing sensitive customer information without proper consent.  

How Did This Happen?

This became public after it was discovered that GM had been gathering detailed information about how customers drove their vehicles. This included how fast they accelerated, how hard they braked, and how far they travelled. Rather than keeping this data private, GM sold it to third parties, including insurance companies and data brokers.

Many customers did not know about this practice and complained when their insurance premiums suddenly increased. According to reports, one customer complained that they had enrolled in OnStar to enjoy its tracking capabilities, not to have their data sold to third parties.

FTC's Allegations

The Federal Trade Commission (FTC) accused GM of misleading customers during the enrollment process for OnStar’s connected vehicle services and Smart Driver program. According to the FTC, GM failed to inform users that their driving data would be collected and sold.

FTC Chair Lina Khan said GM tracked and commercially sold consumers' extremely granular geolocation data and driving behaviour, collected as frequently as every few seconds, and that the settlement action is being taken to protect privacy and prevent people from being subjected to unauthorized surveillance, according to officials.

Terms of Settlement

Terms of the agreement require GM to:
1. Clearly explain its data collection practices.  
2. Obtain consent before collecting or sharing any driving data.  
3. Allow customers to delete their data upon request.  
Additionally, GM has ended its OnStar Smart Driver program, which was central to the controversy.

In a brief response, GM stated that it is committed to safeguarding customer privacy but did not address the allegations in detail.

Why This Matters  

This case highlights the growing importance of privacy in the digital age. It serves as a warning to companies about the consequences of using customer data without transparency. For consumers, it’s a reminder to carefully review the terms of services they sign up for and demand accountability from businesses handling personal information.

With this action, the FTC aims to ensure that companies prioritize ethical practices and respect customers' privacy.







Smart Meter Privacy Under Scrutiny as Warnings Reach Millions in UK

 


According to a campaign group that has criticized government net zero policies, smart meters may become the next step in "snooping" on household energy consumption. Ministers are discussing the possibility of sharing household energy usage with third parties who can assist customers in finding cheaper energy deals and lower carbon tariffs from competitors. 

The European watchdog responsible for protecting personal data has warned that high-tech monitors tracking households' energy use could pose a major privacy risk. A report released by the European Data Protection Supervisor (EDPS) states that smart meters, which were due to be installed in every home in the UK by 2021, will be used not only to monitor energy consumption but also to track a great deal more data. 

According to the EDPS, "while the widespread rollout of smart meters will bring some substantial benefits, it will also provide us with the opportunity to collect huge amounts of personal information." Net zero campaigners have claimed that smart meters are a means of spying on homes, and a privacy dispute has broken out over government proposals that would allow energy companies to harvest household smart meter data in pursuit of net zero. 

In the UK, the Telegraph newspaper reports that the government is consulting on letting consumers share their energy usage with third parties who can direct them to lower-cost deals and lower-carbon tariffs from competing suppliers. The Telegraph quoted Neil Record, a former Bank of England economist and current chairman of Net Zero Watch, as saying that smart meters could have serious privacy implications. 

According to him, energy companies collect a large amount of consumer information, which is why he advised the public to remain vigilant about the growing number of external entities gaining access to household information. Record explained that, once these measures are authorized, those entities would be able to view a detailed picture of household activities in real time. 

Record added that the public might not fully comprehend the extent to which the data is being shared, or the possible consequences of that access. Nick Hunn, founder of the wireless technology consulting firm WiFore, also commented on the matter, highlighting the original intent behind the smart meter rollout. He noted that the initiative was designed to let consumers access their own energy usage data, empowering them to make informed decisions about consumption and costs. Meeting net zero targets will be impossible without smart meters. 

They allow energy companies to get real-time data on how much energy households are using and can be used to manage demand as needed. Using smart meters, for instance, households can be rewarded for cutting energy use during peak hours, reducing the need to build new gas-fired power plants. Energy firms can also offer free electricity to households when wind energy is in abundance. The Government aims to have smart meters in three-quarters of all households by the end of 2025, at a cost of £13.5 billion. 

A recent study by WiFore revealed that approximately four million smart meters in homes are not working properly. According to Nick Hunn, the firm's founder: "This is essentially what was intended at the beginning of the rollout of smart meters: that consumers would be able to see what energy data was affecting them so that they could make rational decisions about how much they were spending and how much they were using."

Why Clearing Cache and Cookies Matters for Safe Browsing

 


Clearing your cache and cookies may seem like a minor step, but it plays a big part in improving online safety and smoothing your browsing experience. While these tools are intended to make navigating the web faster and easier, they can sometimes create problems. Let's break this down into simple terms to help you understand why refreshing your browser is a good idea.

What are cache and cookies?

Cache: Think of the cache as your browser's short-term memory. When you visit a website, your browser saves parts of it—like images, fonts, and scripts—so the site loads faster the next time. For example, if you shop online often, product images or banners may appear quickly because they have been stored in your cache. This feature improves your browsing speed and reduces internet usage.

Cookies: Cookies are tiny text files that are stored on your browser. They help the websites remember things about you, such as your login details or preferences. For instance, they can keep you logged in to your email or remember items in your shopping cart. There are two main types of cookies:  

  • First-party cookies: Created by the website you're visiting to improve your experience.
  • Third-party cookies: Set by other websites, usually advertisers, to track your activity across different sites.

Why Cache and Cookies Can Be Slippery

Cache Risks: The cache does speed things up, but it can sometimes create problems. Cached files may become outdated or corrupt, causing a website to load incorrectly. Attackers can also exploit cached data through "web cache poisoning," tricking the browser into serving malicious content.
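Browsers decide whether a cached file is still usable with a freshness check along the lines of HTTP's Cache-Control: max-age directive (RFC 9111). A simplified sketch of that rule; real caches also handle revalidation, heuristics, and Age headers:

```python
import time

def is_fresh(stored_at, max_age, now=None):
    """True while the cached response's age is under max_age seconds."""
    age = (time.time() if now is None else now) - stored_at
    return age < max_age

# A response cached at t=1000 with max-age=3600 is fresh at t=2000,
# stale at t=5000 (the browser would then refetch or revalidate).
print(is_fresh(1000, 3600, now=2000), is_fresh(1000, 3600, now=5000))
```

Clearing the cache simply discards everything, fresh or stale, forcing the browser to fetch clean copies.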

Cookie Risks: Cookies can be misused too. If someone steals your cookies, they could access your accounts without needing your password. Third-party cookies are particularly invasive, as they track your online behavior to create detailed profiles for targeted advertising.  

Why Clear Cache and Cookies?  

1. Fix Website Problems: Clearing the cache deletes outdated files, helping websites function smoothly.  

2. Protect Your Privacy: Removing cookies stops advertisers from tracking you and reduces the risk of hackers accessing your accounts.  

3. Secure Common Devices: If you’re using a public or shared computer, clearing cookies ensures your data isn’t accessible to the next user.  

How to Clear Cache and Cookies  

 Here is a quick tutorial for Google Chrome.

1. Open the browser and click on the three dots in the top-right corner.  

2. Go to Settings and select Privacy and Security.  

3. Click Clear Browsing Data.  

4. Check the boxes for "Cookies and other site data" and "Cached images and files."  

5. Select a time range (e.g., last hour or all time) and click Clear Data.
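
The effect of step 4 can be mimicked in code: Python's standard-library cookie jar behaves much like a browser's cookie store, and clearing it, selectively or entirely, is a one-line operation. The domains and cookie names below are invented for the sketch:

```python
from http.cookiejar import Cookie, CookieJar

def make_cookie(name: str, value: str, domain: str) -> Cookie:
    """Build a minimal session cookie for the jar (illustrative values)."""
    return Cookie(
        version=0, name=name, value=value,
        port=None, port_specified=False,
        domain=domain, domain_specified=True, domain_initial_dot=False,
        path="/", path_specified=True,
        secure=False, expires=None, discard=True,
        comment=None, comment_url=None, rest={}, rfc2109=False,
    )

jar = CookieJar()
jar.set_cookie(make_cookie("session_id", "abc123", "example.com"))
jar.set_cookie(make_cookie("tracker", "xyz", "ads.example.net"))
print(len(jar))               # 2 cookies stored

jar.clear("ads.example.net")  # drop cookies for one domain, like clearing a single site
print(len(jar))               # 1

jar.clear()                   # drop everything, like choosing the "All time" range
print(len(jar))               # 0
```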

Clearing your cache and cookies is essentially the refresh button for your browser. It helps resolve problems, increases security, and guarantees a smoother, safer browsing experience. Regularly doing this simple task can make all the difference to your online privacy and functionality.


GDPR Violation by EU: A Case of Self-Accountability

 


In a groundbreaking decision on Wednesday, the European Union General Court held the EU Commission liable for damages incurred by a German citizen because the Commission failed to adhere to its own data protection legislation.

The court found that the Commission had transferred the citizen's personal data to the United States without adequate safeguards, and awarded the citizen 400 euros ($412) in compensation. In doing so, the EU General Court concluded that the EU had violated its own privacy rules, which are governed by the General Data Protection Regulation (GDPR).

The ruling marks the first time in history that the EU has been ordered to pay for breaching its own rules. The case began when a German citizen, registering for a conference through a European Commission webpage, used the "Sign in with Facebook" option.

Clicking that button transferred information about the user's browser, device, and IP address through Amazon Web Services' content delivery network and ultimately on to servers in the United States run by Facebook's parent company, Meta Platforms. According to the court, this transfer of data was conducted without proper safeguards, which constitutes a breach of GDPR rules.

The EU was ordered to pay the €400 (about $412) directly to the plaintiff for breaching GDPR rules. The magnitude and frequency of fines imposed by different national data protection authorities (DPAs) have varied greatly since GDPR was introduced, reflecting both the severity of violations and the rigour of enforcement. The International Network of Privacy Law Professionals has catalogued a total of 311 fines, and analysing them reveals several key trends.

The Netherlands, Turkey, and Slovakia have been major focal points for GDPR enforcement, with the Netherlands leading in high-value fines. Romania and Slovakia, meanwhile, frequently appear on the list of lower fines, indicating that even less severe violations are being enforced. Overall, the implementation of the GDPR has been something of a mixed bag since its introduction in 2018. There is no denying that the EU has captured public attention with the major fines it has imposed on Silicon Valley giants. However, enforcement takes a very long time; even the EU's first self-imposed penalty for violating one person's privacy took over two years to complete.

Approximately three out of every four data protection authorities have stated that they lack the budget and personnel needed to investigate violations, and numerous examples show that this byzantine collection of laws has not curbed the invasive practices of surveillance capitalism. Perhaps the EU could begin by following its own rules and see if that helps.

The General Data Protection Regulation (GDPR) established a comprehensive framework for data protection. Enacted to safeguard individuals' data and ensure their privacy, it set rigorous standards for the collection, processing, and storage of personal data. Nevertheless, in an unexpected development, the European Union itself was found to have violated these very laws, causing an unprecedented uproar.

A recent internal audit revealed a serious weakness in data management practices within European institutions, exposing the personal information of EU citizens to the risk of misuse or unauthorized access. Ultimately, the EU General Court handed down a landmark decision stating that the EU had failed to comply with its own data protection laws as a result of this breach.

Since the GDPR took effect in 2018, organisations have been required to obtain user consent before collecting or using personal data, which is why cookie acceptance notifications are now commonplace. The regulation has become the defining framework for data privacy. By limiting the amount of information companies can collect and making its use more transparent, GDPR aims to empower individuals while posing a significant compliance challenge for technology companies.

It is worth mentioning that Meta has faced substantial penalties for non-compliance and is among those most heavily impacted. In a notable case last year, Meta was fined $1.3 billion for failing to adequately protect European users' data during its transfer to U.S. servers, where it could be exposed to American intelligence agencies.

The company has also received a $417 million fine for violations involving Instagram's privacy practices and a $232 million fine for insufficient transparency regarding WhatsApp's data processing. Meta is not alone in facing GDPR penalties: Amazon was fined $887 million by the European Union in 2021 for similar violations.

A Facebook login integration from Meta's ecosystem was a central factor in the EU's recent breach of its own data privacy regulations. The incident illustrates the challenges that even the GDPR's own enforcers face in adhering to its strict requirements.

India Proposes New Draft Rules Under Digital Personal Data Protection Act, 2023




The Ministry of Electronics and Information Technology (MeitY) announced on January 3, 2025, the release of draft rules under the Digital Personal Data Protection Act, 2023 for public feedback. A significant provision in this draft mandates that parental consent must be obtained before processing the personal data of children under 18 years of age, including creating social media accounts. This move aims to strengthen online safety measures for minors and regulate how digital platforms handle their data.

The draft rules explicitly require social media platforms to secure verifiable parental consent before allowing minors under 18 to open accounts. This provision is intended to safeguard children from online risks such as cyberbullying, data breaches, and exposure to inappropriate content. Verification may involve government-issued identification or digital identity tools like Digital Lockers.
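
As a purely hypothetical sketch of how a platform might implement such a gate (the names, the under-18 threshold from the draft, and the consent flag are illustrative assumptions, not anything prescribed by the rules):

```python
from dataclasses import dataclass

@dataclass
class SignupRequest:
    age: int
    parental_consent_verified: bool  # e.g., confirmed via government ID or a Digital Locker

def may_create_account(req: SignupRequest) -> bool:
    """Allow signup for adults, or for minors only with verified parental consent."""
    if req.age >= 18:
        return True
    return req.parental_consent_verified

print(may_create_account(SignupRequest(age=21, parental_consent_verified=False)))  # True
print(may_create_account(SignupRequest(age=15, parental_consent_verified=False)))  # False
print(may_create_account(SignupRequest(age=15, parental_consent_verified=True)))   # True
```

In practice the consent flag would come from an identity verification workflow rather than a simple boolean field, but the decision logic reduces to this check.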

MeitY has invited the public to share their opinions and suggestions regarding the draft rules through the government’s citizen engagement platform, MyGov.in. The consultation window remains open until February 18, 2025. Public feedback will be reviewed before the finalization of the rules.

Consumer Rights and Data Protection Measures

The draft rules enhance consumer data protection by introducing several key rights and safeguards:
  • Data Deletion Requests: Users can request companies to delete their personal data.
  • Transparency Obligations: Companies must explain why user data is being collected and how it will be used.
  • Penalties for Data Breaches: Data fiduciaries will face fines of up to ₹250 crore for data breaches.

To ensure compliance, the government plans to establish a Data Protection Board, an independent digital regulatory body. The Board will oversee data protection practices, conduct investigations, enforce penalties, and regulate consent managers. Consent managers must register with the Board and maintain a minimum net worth of ₹12 crore.

Mixed Reactions to the Proposed Rules

The draft rules have received a blend of support and criticism. Supporters, like Saneh Lata, a teacher and mother of two from Dwarka, Delhi, appreciate the move, citing social media as a significant distraction for children. Critics, however, argue that the regulations may lead to excessive government intervention in children's digital lives.

Certain institutions, such as educational organizations and child welfare bodies, may be exempt from some provisions to ensure uninterrupted educational and welfare services. Additionally, digital intermediaries like e-commerce, online gaming, and social media platforms are subject to specific guidelines tailored to their operations.

The proposed draft rules mark a significant step towards strengthening data privacy, especially for vulnerable groups like children and individuals under legal guardianship. By holding data fiduciaries accountable and empowering consumers with greater control over their data, the government aims to create a safer and more transparent digital ecosystem.