
Government Flags WhatsApp Account Bans as Indian Number Misuse Raises Cyber Fraud Concerns


The Indian government has expressed concern over WhatsApp banning an average of nearly 9.8 million Indian accounts every month until October, amid fears that Indian mobile numbers are being widely misused for scams and cybercrime. Officials familiar with the discussions said the government is engaging with the Meta-owned messaging platform to understand how such large-scale misuse can be prevented and how enforcement efforts can be strengthened. 

Authorities believe WhatsApp’s current approach of not sharing details of the mobile numbers linked to banned accounts is limiting the government’s ability to track spam, impersonation, and cyber fraud. While WhatsApp publishes monthly compliance reports disclosing the number of accounts it removes for policy violations, officials said the lack of information about the specific numbers involved reduces transparency and weakens enforcement efforts. 

India is WhatsApp’s largest market, and the platform identifies Indian accounts through the +91 country code. Government officials noted that in several cases, numbers banned on WhatsApp later reappear on other messaging platforms such as Telegram, where they continue to be used for fraudulent activities. The misuse of Indian phone numbers by scammers operating both within and outside the country remains a persistent issue, despite multiple measures taken to combat digital fraud. 

According to officials, over-the-top messaging platforms are frequently used for scams because once an account is registered using a mobile number, it can function without an active SIM card. This makes it extremely difficult for law enforcement agencies to trace perpetrators. Authorities estimate that nearly 95% of cases involving digital arrest scams and impersonation fraud currently originate on WhatsApp. 

Government representatives said identifying when a SIM card was issued and verifying the authenticity of its know-your-customer details are critical steps in tackling such crimes. Discussions are ongoing with WhatsApp and other OTT platforms to find mechanisms that balance user privacy with national security and fraud prevention. 

The government also issues direct requests to platforms to disable accounts linked to illegal activities. Data from the Department of Telecommunications shows that by November this year, around 2.9 million WhatsApp profiles and groups had been disabled following government directives. However, officials pointed out that while these removals are documented, there is little clarity around accounts banned independently by WhatsApp.

Former Ministry of Electronics and IT official Rakesh Maheshwari said the purpose of monthly compliance reports was to improve platform accountability. He added that if emerging patterns raise security concerns, authorities are justified in seeking additional information.  

WhatsApp has maintained that due to end-to-end encryption, its enforcement actions rely on behavioural indicators rather than message content. The company has also stated that sharing detailed account data involves complex legal and cross-border challenges. However, government officials argue that limited disclosure, even at the level of mobile numbers, poses a security risk when large-scale fraud is involved.

WhatsApp Enumeration Flaw Exposes Data of 3.5 Billion Users in Massive Scraping Incident


Security researchers in Austria uncovered a significant privacy vulnerability in WhatsApp that enabled them to collect the personal details of more than 3.5 billion registered users, an exposure they believe may be the largest publicly documented data leak to date. The issue stems from a long-standing feature that allows users to search WhatsApp accounts by entering phone numbers. While meant for convenience, the function can be exploited to automatically compile profiles at scale. 

Using phone numbers generated with a custom tool built on Google’s libphonenumber library, the research team was able to query account details at an astonishing rate of more than 100 million accounts per hour. They reported exceeding 7,000 automated lookups per second without facing IP bans or meaningful rate limiting. Their findings indicate that WhatsApp’s registered user base is larger than previously disclosed, going well beyond the platform’s public statement that it serves “over two billion” users globally.
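The researchers have not published their tooling, but the number-generation step they describe can be illustrated with a small, hedged sketch. The Python snippet below uses the phonenumbers package, the Python port of Google’s libphonenumber, to walk a hypothetical block of numbers and keep only candidates that match a national numbering plan; it deliberately stops at generation and does not contact WhatsApp or any other service.

```python
# Illustrative sketch only: generate plausible E.164 candidates with the
# `phonenumbers` package (Python port of Google's libphonenumber).
import phonenumbers
from phonenumbers import PhoneNumberFormat

def candidate_numbers(country_code: str, start: int, count: int):
    """Yield E.164-formatted numbers from a block that match a national plan."""
    for national in range(start, start + count):
        raw = f"+{country_code}{national}"
        try:
            parsed = phonenumbers.parse(raw, None)
        except phonenumbers.NumberParseException:
            continue
        # libphonenumber checks length and prefix against per-country metadata,
        # which is what lets an enumeration tool discard impossible numbers
        # before ever attempting a lookup.
        if phonenumbers.is_valid_number(parsed):
            yield phonenumbers.format_number(parsed, PhoneNumberFormat.E164)

# Hypothetical example: a tiny slice of the reserved US 555-01xx range.
for number in candidate_numbers("1", 2025550100, 20):
    print(number)
```

Filtering against numbering plans is what makes enumeration at this scale tractable: impossible numbers are discarded offline, so every lookup actually sent targets a plausible account.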

The scraped records included phone numbers, account names, profile photos, and, in some cases, personal text attached to accounts. Over half of the identified users had public profile images, and a substantial portion of those images contained identifiable faces. About 29 percent of accounts included text descriptions, which researchers noted could reveal sensitive personal information such as sexuality, political affiliation, drug use, professional identity, or links to other platforms, including LinkedIn and dating apps.

The study also revealed that millions of accounts belonged to phone numbers registered in countries where WhatsApp is restricted or banned, including China, Myanmar, and North Korea. Researchers warn that such exposure could put users in those regions at risk of government monitoring, penalties, or arrest.

Beyond state-level dangers, experts stress that the harvested dataset could be misused by cybercriminals conducting targeted phishing campaigns, fraudulent messaging schemes, robocalling, and identity-based scams. The team emphasized that the persistence of phone numbers poses an ongoing risk: half of the numbers leaked during Facebook’s large-scale 2021 data scraping incident were still active in WhatsApp’s ecosystem. 

Meta confirmed receiving the researchers’ disclosure through its bug bounty process. The company stated that it has since deployed updated anti-scraping defenses and thanked the researchers for responsibly deleting collected data. According to WhatsApp engineering leadership, the vulnerability did not expose private messages or encrypted content. 

The researchers validated Meta’s claim, noting that the original enumeration method is now blocked. However, they cautioned that it is hard to verify how complete the fix is, and they pointed to the nearly year-long gap between their initial report and effective remediation.

Whether this incident triggers systemic scrutiny or remains an isolated cautionary case, it underscores a critical reality: even services built around encryption can expose sensitive user metadata, creating new avenues for surveillance and exploitation.

Hacker Claims Responsibility for University of Pennsylvania Breach Exposing 1.2 Million Donor Records


A hacker has taken responsibility for the University of Pennsylvania’s recent “We got hacked” email incident, claiming the breach was far more extensive than initially reported. The attacker alleges that data on approximately 1.2 million donors, students, and alumni was exposed, along with internal documents from multiple university systems. The cyberattack surfaced last Friday when Penn alumni and students received inflammatory emails from legitimate Penn.edu addresses, which the university initially dismissed as “fraudulent and obviously fake.”  

According to the hacker, their group gained full access to a Penn employee’s PennKey single sign-on (SSO) credentials, allowing them to infiltrate critical systems such as the university’s VPN, Salesforce Marketing Cloud, SAP business intelligence platform, SharePoint, and Qlik analytics. The attackers claim to have exfiltrated sensitive personal data, including names, contact information, birth dates, estimated net worth, donation records, and demographic details such as religion, race, and sexual orientation. Screenshots and data samples shared with cybersecurity publication BleepingComputer appeared to confirm the hackers’ access to these systems.  

The hacker stated that the breach began on October 30th and that data extraction was completed by October 31st, after which the compromised credentials were revoked. In retaliation, the group allegedly used remaining access to the Salesforce Marketing Cloud to send the offensive emails to roughly 700,000 recipients. When asked about the method used to obtain the credentials, the hacker declined to specify but attributed the breach to weak security practices at the university. Following the intrusion, the hacker reportedly published a 1.7 GB archive containing spreadsheets, donor-related materials, and files allegedly sourced from Penn’s SharePoint and Box systems. 

The attacker told BleepingComputer that their motive was not political but financial, driven primarily by access to the university’s donor database. “We’re not politically motivated,” the hacker said. “The main goal was their vast, wonderfully wealthy donor database.” They added that they were not seeking ransom, claiming, “We don’t think they’d pay, and we can extract plenty of value out of the data ourselves.” Although the full donor database has not yet been released, the hacker warned it could be leaked in the coming months. 

In response, the University of Pennsylvania stated that it is investigating the incident and has referred the matter to the FBI. “We understand and share our community’s concerns and have reported this to the FBI,” a Penn spokesperson confirmed. “We are working with law enforcement as well as third-party technical experts to address this as rapidly as possible.” Experts warn that donors and affiliates affected by the breach should remain alert to potential phishing attempts and impersonation scams. 

With detailed personal and financial data now at risk, attackers could exploit the information to send fraudulent donation requests or gain access to victims’ online accounts. Recipients of any suspicious communications related to donations or university correspondence are advised to verify messages directly with Penn before responding. 

 The University of Pennsylvania breach highlights the growing risks faced by educational institutions holding vast amounts of personal and donor data, emphasizing the urgent need for robust access controls and system monitoring to prevent future compromises.

Connected Car Privacy Risks: How Modern Vehicles Secretly Track and Sell Driver Data


The thrill of a smooth drive—the roar of the engine, the grip of the tires, and the comfort of a high-end cabin—often hides a quieter, more unsettling reality. Modern cars are no longer just machines; they’re data-collecting devices on wheels. While you enjoy the luxury and performance, your vehicle’s sensors silently record your weight, listen through cabin microphones, track your every route, and log detailed driving behavior. This constant surveillance has turned cars into one of the most privacy-invasive consumer products ever made. 

The Mozilla Foundation recently reviewed 25 major car brands and declared that modern vehicles are “the worst product category we have ever reviewed for privacy.” Not a single automaker met even basic standards for protecting user data. The organization found that cars collect massive amounts of information—from location and driving patterns to biometric data—often without explicit user consent or transparency about where that data ends up. 

The Federal Trade Commission (FTC) has already taken notice. The agency recently pursued General Motors (GM) and its subsidiary OnStar for collecting and selling drivers’ precise location and behavioral data without obtaining clear consent. Investigations revealed that data from vehicles could be gathered as frequently as every three seconds, offering an extraordinarily detailed picture of a driver’s habits, destinations, and lifestyle. 

That information doesn’t stay within the automaker’s servers. Instead, it’s often shared or sold to data brokers, insurers, and marketing agencies. Driver behavior, acceleration patterns, late-night trips, or frequent stops at specific locations could be used to adjust insurance premiums, evaluate credit risk, or profile consumers in ways few drivers fully understand. 

Inside the car, the illusion of comfort and control masks a network of tracking systems. Voice assistants that adjust your seat or temperature remember your commands. Smartphone apps that unlock the vehicle transmit telemetry data back to corporate servers. Even infotainment systems and microphones quietly collect information that could identify you and your routines. The same technology that powers convenience features also enables invasive data collection at an unprecedented scale. 

For consumers, awareness is the first defense. Before buying a new vehicle, it’s worth asking the dealer what kind of data the car collects and how it’s used. If they cannot answer directly, it’s a strong indication of a lack of transparency. After purchase, disabling unnecessary connectivity or data-sharing features can help protect privacy. Declining participation in “driver score” programs or telematics-based insurance offerings is another step toward reclaiming control. 

As automakers continue to blend luxury with technology, the line between innovation and intrusion grows thinner. Every drive leaves behind a digital footprint that tells a story—where you live, work, shop, and even who rides with you. The true cost of modern convenience isn’t just monetary—it’s the surrender of privacy. The quiet hum of the engine as you pull into your driveway should represent freedom, not another connection to a data-hungry network.

VP.NET Launches SGX-Based VPN to Transform Online Privacy


The virtual private network market is filled with countless providers, each promising secure browsing and anonymity. In such a crowded space, VP.NET has emerged with the bold claim of changing how VPNs function altogether. The company says it is “the only VPN that can’t spy on you,” insisting that its system is built in a way that prevents monitoring, logging, or exposing any user data. 

To support its claims, VP.NET has gone a step further by releasing its source code to the public, allowing independent verification. VP.NET was co-founded by Andrew Lee, the entrepreneur behind Private Internet Access (PIA). According to the company, its mission is to treat digital privacy as a fundamental right and to secure it through technical design rather than relying on promises or policies. Guided by its principle of “don’t trust, verify,” the provider focuses on privacy-by-design to ensure that users are always protected. 

The technology behind VP.NET relies on Intel’s SGX (Software Guard Extensions). This system creates encrypted memory zones, also called enclaves, which remain isolated and inaccessible even to the VPN provider. Using this approach, VP.NET separates a user’s identity from their browsing activity, preventing any form of link between the two. 

The provider has also built a cryptographic mixer that severs the connection between users and the websites they visit. This mixer functions with a triple-layer identity mapping system, which the company claims makes tracking technically impossible. Each session generates temporary IDs, and no data such as IP addresses, browsing logs, traffic information, DNS queries, or timestamps are stored. 

VP.NET has also incorporated traffic obfuscation features and safeguards against correlation attacks, which are commonly used to unmask VPN users. In an effort to promote transparency, VP.NET has made its SGX source code publicly available on GitHub. By doing so, users and researchers can confirm that the correct code is running, the SGX enclave is authentic, and there has been no tampering. VP.NET describes its system as “zero trust by design,” emphasizing that its architecture makes it impossible to record user activity. 

The service runs on the WireGuard protocol and relies on its modern suite of cryptographic primitives: ChaCha20 for securing traffic, Poly1305 for authentication, Curve25519 for key exchange, and BLAKE2s for hashing. VP.NET is compatible with Windows, macOS, iOS, Android, and Linux, and all platforms receive the same protections. Each account allows up to five devices to connect simultaneously, slightly fewer than competitors such as NordVPN, Surfshark, and ExpressVPN allow. Server availability is currently limited to a handful of countries, including the US, UK, Germany, France, the Netherlands, and Japan.

However, all servers are SGX-enabled to maintain strong privacy. While the company operates from the United States, a jurisdiction often criticized for weak privacy laws, VP.NET argues that its architecture makes the question of location irrelevant since no user data exists to be handed over. 
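For readers unfamiliar with the cipher suite listed above, the short Python sketch below (using the third-party cryptography package plus the standard library’s hashlib and os modules) demonstrates the same primitives in isolation: a Curve25519 (X25519) key exchange, a BLAKE2s hash to derive a symmetric key, and ChaCha20-Poly1305 authenticated encryption. It is only an illustration of the building blocks, not WireGuard’s Noise-based handshake and not VP.NET’s implementation.

```python
# Illustration of the primitives named above; not a real VPN handshake.
import hashlib
import os

from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

# Curve25519 key exchange: both sides derive the same shared secret.
client_private = X25519PrivateKey.generate()
server_private = X25519PrivateKey.generate()
shared_secret = client_private.exchange(server_private.public_key())
assert shared_secret == server_private.exchange(client_private.public_key())

# BLAKE2s condenses the shared secret into a 32-byte symmetric key.
# (WireGuard's real key schedule is considerably more involved.)
symmetric_key = hashlib.blake2s(shared_secret).digest()

# ChaCha20-Poly1305 provides encryption and authentication in one pass.
aead = ChaCha20Poly1305(symmetric_key)
nonce = os.urandom(12)
packet = aead.encrypt(nonce, b"tunnelled payload", None)

# Decryption fails loudly if the ciphertext or nonce has been tampered with.
assert aead.decrypt(nonce, packet, None) == b"tunnelled payload"
```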

Despite being relatively new, VP.NET is positioning itself as part of a new wave of VPN providers alongside competitors like Obscura VPN and NymVPN, all of which are introducing fresh approaches to strengthen privacy. 

With surveillance and tracking threats becoming increasingly sophisticated, VP.NET’s SGX-based system represents a technical shift that could redefine how users think about online security and anonymity.

Federal Judge Allows Amazon Alexa Users’ Privacy Lawsuit to Proceed Nationwide


A federal judge in Seattle has ruled that Amazon must face a nationwide lawsuit involving tens of millions of Alexa users. The case alleges that the company improperly recorded and stored private conversations without user consent. U.S. District Judge Robert Lasnik determined that Alexa owners met the legal requirements to pursue collective legal action for damages and an injunction to halt the alleged practices. 

The lawsuit claims Amazon violated Washington state law by failing to disclose that it retained and potentially used voice recordings for commercial purposes. Plaintiffs argue that Alexa was intentionally designed to secretly capture billions of private conversations, not just the voice commands directed at the device. According to their claim, these recordings may have been stored and repurposed without permission, raising serious privacy concerns. Amazon strongly disputes the allegations. 

The company insists that Alexa includes multiple safeguards to prevent accidental activation and denies that any evidence shows it recorded conversations belonging to the plaintiffs. Despite Amazon’s defense, Judge Lasnik stated that millions of users may have been impacted in a similar manner, allowing the case to move forward. Plaintiffs are also seeking an order requiring Amazon to delete any recordings and related data it may still hold. The broader issue at stake in this case centers on privacy rights within the home.

If proven, the claims suggest that sensitive conversations could have been intercepted and stored without explicit approval from users. Privacy experts caution that voice data, if mishandled or exposed, can lead to identity risks, unauthorized information sharing, and long-term security threats. Critics further argue that the lawsuit highlights the growing power imbalance between consumers and large technology companies. Amazon has previously faced scrutiny over its corporate practices, including its environmental footprint. 

A 2023 report revealed that the company’s expanding data centers in Virginia would consume more energy than the entire city of Seattle, fueling additional criticism about the company’s long-term sustainability and accountability. The case against Amazon underscores the increasing tension between technological convenience and personal privacy. 

As voice-activated assistants become commonplace in homes, courts will likely play a decisive role in determining the boundaries of data collection and consumer protection. The outcome of this lawsuit could set a precedent for how tech companies handle user data and whether customers can trust that private conversations remain private.

Why Web3 Exchanges Must Prioritize Security, Privacy, and Fairness to Retain Users


In the evolving Web3 landscape, a platform’s survival hinges on its ability to meet community expectations. If users perceive an exchange as unfair, insecure, or intrusive, they’ll swiftly move on. The same applies to any doubts about the platform’s transparency, its ability to safeguard user data, or its capacity to deliver the features users value.

The challenge lies in balancing ideal user experience with realistic limitations. While complete invulnerability isn’t feasible, exchanges must adopt rigorous security protocols that align with industry best practices. Beyond technical defenses, they must also enforce strict data privacy policies and ensure customer funds remain entirely under user control. 

So, how can an exchange rise to these expectations without compromising service quality? The key lies in maintaining equilibrium between protection and functionality. A robust exchange must operate with enterprise-level security, including encryption at a high standard. Since smart contract flaws can remain hidden for long periods, it’s essential that platforms perform internal and third-party audits. 

Security firms and penetration testers, like red teams, simulate cyberattacks to expose and address weaknesses before attackers can exploit them. Users evaluating exchanges should consider not just the presence of encryption but also whether the platform uses external experts to continuously test its defenses. In handling funds, exchanges must mitigate risks such as consensus failures and ensure their infrastructure can validate and process inter-chain transactions securely. 

However, these protective measures shouldn’t come at the cost of speed or efficiency. Metrics such as transactions per second (TPS), consensus time, and finality should remain optimized for a seamless experience. Equally important is protecting user privacy. Web3 users face threats ranging from data leaks and surveillance to the misuse of trading data by advanced bots. 

These issues demand concrete actions—not vague assurances. Transparent privacy policies and secure data practices are essential. Enclave Markets has set an example in privacy-focused trading. Their off-chain enclave prevents malicious actors from seeing trade activity, effectively eliminating front-running and ensuring fair execution with zero spread and no slippage.  

Another often overlooked area is fairness in reward programs. Many exchanges structure incentives in ways that disproportionately benefit bots or large-scale traders. Enclave Markets addresses this with a more balanced rewards system that favors genuine users over manipulators. Their recently introduced EdgeBot allows users to track and trade tokens directly within Telegram, minimizing friction and response time. 

This type of intuitive innovation reflects a deep understanding of user needs. Ultimately, users must take responsibility to verify if a platform truly upholds the principles of fairness, security, and privacy. These aren’t optional features—they’re the foundation of any trustworthy Web3 exchange.

T-Mobile Denies Involvement After Hackers Claim Massive Customer Data Breach


T-Mobile is once again in the cybersecurity spotlight after a hacking group claimed to have obtained sensitive personal information belonging to 64 million customers. The hackers alleged the data was freshly taken as of June 1, 2025, and listed their find on a well-known dark web forum popular among cybercriminals and data traders.  

The leaked trove reportedly contains highly personal information, including full names, birthdates, tax identification numbers, addresses, contact details, device and cookie IDs, and IP addresses. Such data can be extremely valuable to cybercriminals for fraud, identity theft, or phishing attacks. Cybernews, which analyzed a sample of the data, confirmed its sensitive nature, raising alarm over the scale and potential damage of the breach.  

Yet, T-Mobile has come forward to strongly deny any connection to the alleged hack. In a statement to The Mobile Report, the telecom company asserted that the leaked data does not belong to T-Mobile or any of its customers. “Any reports of a T-Mobile data breach are inaccurate. We have reviewed the sample data provided and can confirm the data does not relate to T-Mobile or our customers,” the company stated. 

Despite T-Mobile’s denial, cybersecurity analysts remain cautious. Cybernews pointed out that portions of the leaked data mirror details from previous breaches that targeted T-Mobile, suggesting there may be some overlap with older incidents. This has sparked speculation that the latest claim may not be based on a new breach, but rather a repackaging of previously stolen information to create hype or confusion. 

Adding to the uncertainty, Have I Been Pwned—a trusted platform used to monitor data breaches—has yet to list the supposed breach, which could support the theory that the leaked data is not new. Still, the situation has left many T-Mobile customers in limbo, unsure whether their data has truly been compromised again. 
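Have I Been Pwned does expose a documented REST API for exactly this kind of check. The Python sketch below queries its v3 breachedaccount endpoint with the requests library; the email address and API key are placeholders, and the service requires a paid key and a user-agent header for this endpoint.

```python
# Minimal sketch: list breaches associated with an email address via the
# Have I Been Pwned v3 API. Values below are placeholders, not real credentials.
import requests  # third-party: pip install requests

HIBP_API_KEY = "your-api-key-here"  # obtain from haveibeenpwned.com/API/Key

def breaches_for(email: str) -> list[str]:
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}",
        headers={
            "hibp-api-key": HIBP_API_KEY,
            "user-agent": "breach-check-example",  # HIBP rejects requests without one
        },
        params={"truncateResponse": "false"},  # return full breach records
        timeout=10,
    )
    if resp.status_code == 404:
        return []  # 404 means the address appears in no indexed breach
    resp.raise_for_status()
    return [breach["Name"] for breach in resp.json()]

print(breaches_for("user@example.com"))
```

A 404 from the API only means the address is absent from breaches HIBP has indexed; as noted above, that does not by itself confirm or refute the latest T-Mobile claim.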

If the claims prove to be true, it would be another in a series of cybersecurity setbacks for T-Mobile. The company only recently began issuing compensation checks related to its 2021 data breach, suggesting that resolution in such matters can take years. 

For now, the legitimacy of this latest breach remains unclear. Until further evidence surfaces or an independent investigation confirms or refutes the claims, customers are advised to remain vigilant and monitor their accounts for any unusual activity.