Tribal Health Clinics in California Report Patient Data Exposure

 


Patients receiving care at several tribal healthcare clinics in California have been warned that a cyber incident led to the exposure of both personal identification details and private medical information. The clinics are operated by a regional health organization that runs multiple facilities across the Sierra Foothills and primarily serves American Indian communities in that area.

A ransomware group known as Rhysida has publicly claimed responsibility for a cyberattack that took place in November 2025 and affected the MACT Health Board. The organization manages several clinics in the Sierra Foothills region of California that provide healthcare services to Indigenous populations living in nearby communities.

In January, the MACT Health Board informed an unspecified number of patients that their information had been involved in a data breach. The organization stated that the compromised data included several categories of sensitive personal information. This exposed data may include patients’ full names and government-issued Social Security numbers. In addition to identity information, highly confidential medical details were affected. These medical records can include information about treating doctors, medical diagnoses, insurance coverage details, prescribed medications, laboratory and diagnostic test results, stored medical images, and documentation related to ongoing care and treatment.

The cyber incident caused operational disruptions across MACT clinic systems starting on November 20, 2025. During this period, essential digital services became unavailable, including phone communication systems, platforms used to process prescription requests, and scheduling tools used to manage patient appointments. Telephone services were brought back online by December 1. However, as of January 22, some specialized imaging-related services were still not functioning normally, indicating that certain technical systems had not yet fully recovered.

Rhysida later added the MACT Health Board to its online data leak platform and demanded payment in cryptocurrency. The amount requested was eight units of digital currency, which was valued at approximately six hundred sixty-two thousand dollars at the time the demand was reported. To support its claim of responsibility, the group released sample files online, stating that the materials were taken from MACT’s systems. The files shared publicly reportedly included scans of passports and other internal documents.

The MACT Health Board has not confirmed that Rhysida’s claims are accurate. There is also no independent verification that the files published by the group genuinely originated from MACT’s internal systems. At this time, it remains unclear how many individuals received breach notifications, what method was used by the attackers to access MACT’s network, or whether any ransom payment was made. The organization declined to provide further information when questioned.

In its written notification to affected individuals, MACT stated that it experienced an incident that disrupted its information technology operations. The organization reported that an internal investigation found that unauthorized access occurred to certain files stored on its systems during a defined time window between November 12 and November 20, 2025.

The health organization is offering eligible individuals complimentary identity monitoring services. These services are intended to help patients detect possible misuse of personal or financial information following the exposure of sensitive records.

Rhysida is a cybercriminal group that first became active in public reporting in May 2023. The group deploys ransomware designed to both extract sensitive data from victim organizations and prevent access to internal systems by encrypting files. After carrying out an attack, the group demands payment in exchange for deleting stolen data and providing decryption tools that allow victims to regain access to locked systems. Rhysida operates under a ransomware-as-a-service model, in which external partners pay to use its malware and technical infrastructure to carry out attacks and collect ransom payments.

The group has claimed responsibility for more than one hundred confirmed ransomware incidents, along with additional claims that have not been publicly acknowledged by affected organizations. On average, the group’s ransom demands amount to several hundred thousand dollars per incident.

A significant portion of Rhysida’s confirmed attacks have targeted hospitals, clinics, and other healthcare providers. These healthcare-related incidents have resulted in the exposure of millions of sensitive records. Past cases linked to the group include attacks on healthcare organizations in multiple U.S. states, with ransom demands ranging from over one million dollars to several million dollars. In at least one case, the group claimed to have sold stolen data after a breach.

Researchers tracking cybersecurity incidents have recorded more than one hundred confirmed ransomware attacks on hospitals, clinics, and other healthcare providers across the United States in 2025 alone. These attacks collectively led to the exposure of nearly nine million patient records. In a separate incident reported during the same week, another healthcare organization confirmed a 2025 breach that was claimed by a different ransomware group, which demanded a six-figure ransom payment.

Ransomware attacks against healthcare organizations often involve both data theft and system disruption. Such incidents can disable critical medical systems, interfere with patient care, and create risks to patient safety and privacy. When hospitals and clinics lose access to digital systems, staff may be forced to rely on manual processes, delay or cancel appointments, and redirect patients to other facilities until systems are restored. These disruptions can increase operational strain and place patients and healthcare workers at heightened risk.

The MACT Health Board is named after the five California counties it serves: Mariposa, Amador, Alpine, Calaveras, and Tuolumne. The organization operates approximately a dozen healthcare facilities that primarily serve American Indian communities in the region. These clinics provide a range of services, including general medical care, dental treatment, behavioral health support, vision and eye care, and chiropractic services.


Looking Beyond the Hype Around AI Built Browser Projects


Cursor, the company behind the AI-integrated development environment of the same name, recently drew industry attention after suggesting that it had built a fully functional browser using its own artificial intelligence agents. In a series of public statements, Cursor chief executive Michael Truell claimed the browser was built with GPT-5.2 running inside the Cursor platform.


According to Truell, the project spans approximately three million lines of code across thousands of files and includes a custom rendering engine written in Rust from scratch.

He added that the system implements the main components of a browser, including HTML parsing, CSS cascading and layout, text shaping, painting, and a custom-built JavaScript virtual machine.
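Those components correspond to the stages of a classic browser rendering pipeline: parse the markup into a document tree, cascade styles onto it, lay out boxes, and paint them. The sketch below is purely illustrative and is not Cursor's code (its engine is reportedly written in Rust); every type and function here is a toy placeholder meant only to show how the stages hand data to one another.

```typescript
// Deliberately tiny sketch of the rendering pipeline stages named above.
// Not Cursor's implementation; all types and logic are toy placeholders.

interface DomNode { tag: string; text: string; children: DomNode[] }
interface StyledNode { node: DomNode; styles: Record<string, string>; children: StyledNode[] }
interface LayoutBox { text: string; x: number; y: number; width: number; height: number }

// 1. Parsing: real parsers handle malformed markup, entities, scripts, etc.
function parseHtml(markup: string): DomNode {
  const text = markup.replace(/<[^>]*>/g, " ").replace(/\s+/g, " ").trim(); // strip tags, keep text
  return { tag: "body", text, children: [] };
}

// 2. Cascade: real engines match selectors and resolve specificity; here every node gets defaults.
function cascade(dom: DomNode): StyledNode {
  return { node: dom, styles: { "font-size": "16px", display: "block" }, children: [] };
}

// 3. Layout: assign each box a position and size (block flow, line breaking, text shaping).
function layout(root: StyledNode, viewportWidth: number): LayoutBox[] {
  const lineHeight = Math.round(parseInt(root.styles["font-size"], 10) * 1.25);
  return [{ text: root.node.text, x: 0, y: 0, width: viewportWidth, height: lineHeight }];
}

// 4. Paint: a real engine rasterizes glyphs and boxes; here we just describe them.
function paint(boxes: LayoutBox[]): string[] {
  return boxes.map(b => `draw "${b.text}" at (${b.x},${b.y}) in a ${b.width}x${b.height} box`);
}

console.log(paint(layout(cascade(parseHtml("<p>Hello, world</p>")), 800)));
```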

Although the statements never explicitly claimed that no substantial human involvement went into the browser, they sparked a heated debate within the software development community about how much of the work can truly be attributed to autonomous AI systems, and how such claims should be read against the growing popularity of AI-based software development.

The episode unfolds against a backdrop of intensifying optimism about generative AI, optimism that has inspired unprecedented investment in companies across a variety of industries. Despite that optimism, a more sobering reality is beginning to emerge.

A McKinsey study indicates that while roughly 80 percent of companies report having adopted advanced AI tools, a similar share has seen little to no improvement in either revenue growth or profitability.

General-purpose AI applications can improve individual productivity, but they have rarely translated incremental time savings into tangible financial results, while higher-value, domain-specific applications tend to stall in the experimental or pilot stage. Analysts increasingly describe this disconnect as the generative AI value paradox.

Tension has increased further with the advent of so-called agentic artificial intelligence: autonomous systems capable of planning, deciding, and acting independently to achieve predefined objectives.

Such systems promise benefits well beyond assistive tools, but they also raise the stakes for credibility and transparency. In the case of Cursor's browser project, the decision to make the code publicly available proved crucial.

Developers who examined the repository found that the software frequently failed to compile, rarely ran as advertised, and fell short of the capabilities implied by the enthusiastic headlines.

Inspecting and testing the underlying code makes it evident that the marketing claims do not match what was actually built. Ironically, most developers found the accompanying technical document, which detailed the project's limitations and partial successes, more convincing than the original announcement.

Cursor acknowledges that, over roughly a week, it deployed hundreds of GPT-5.2-style agents that generated about three million lines of code, assembling what amounted on the surface to a partially functional browser prototype.

Perplexity, an AI-driven search and analysis platform, estimates that the experiment could have consumed between 10 and 20 trillion tokens, which at prevailing prices for frontier AI models would translate into a cost of several million dollars.
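As a rough sanity check on that estimate, the arithmetic below multiplies the reported token range by an assumed blended price per million tokens. The token counts come from the Perplexity estimate cited above; the price is a hypothetical figure chosen for illustration, not a published rate for GPT-5.2.

```typescript
// Back-of-envelope check of the cost estimate above. Token counts are the range
// reported in the article; the per-million-token price is an assumed, illustrative figure.
const tokensLow = 10e12;                 // 10 trillion tokens
const tokensHigh = 20e12;                // 20 trillion tokens
const assumedUsdPerMillionTokens = 0.25; // hypothetical blended input/output price

const costUsd = (tokens: number) => (tokens / 1e6) * assumedUsdPerMillionTokens;

console.log(`low estimate:  $${(costUsd(tokensLow) / 1e6).toFixed(1)}M`);  // $2.5M
console.log(`high estimate: $${(costUsd(tokensHigh) / 1e6).toFixed(1)}M`); // $5.0M
```

Under those assumptions the experiment lands in the low single-digit millions of dollars, consistent with the "several million dollars" figure reported.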

Such figures demonstrate the ambition of the effort, but they also underline the skepticism now circulating in the industry: scale alone does not equate to sustained value or technical maturity. At the same time, a number of converging forces are driving AI companies to target the web browser itself, rather than plug-ins or standalone applications.

For decades, browsers have been among the most valuable sources of behavioral data and, by extension, of ad revenue. They capture search queries, clicks, and browsing patterns, which have paved the way for highly profitable ad-targeting systems.

Google built its position as the world's most powerful search company largely on this model. Owning the browser gives AI providers direct access to this stream of data exhaust, reducing dependency on third-party platforms and securing a privileged position in the advertising value chain.

A number of analysts note that controlling the browser can also anchor a company's search product and the commercial benefits that follow from it. OpenAI's upcoming browser has been reported to be explicitly intended to collect first-party information on users' web behavior, a strategy aimed at challenging Google's ad-driven ecosystem.

Insiders cited in the report suggest the company chose to build a browser rather than an extension for Chrome or Edge because it wanted more control over its data. Beyond advertising, the continuous feedback loop created by user actions provides another advantage: each scroll, click, and query can be used to refine and personalize AI models, which in turn strengthens the product over time.

In the meantime, advertising remains one of the few scalable monetization paths for consumer-facing artificial intelligence, and both OpenAI and Perplexity appear to be positioning their browsers accordingly, as highlighted by recent hirings and the quiet development of ad-based services. 

AI companies also argue that browsers offer a chance to fundamentally rethink the user experience of the web. Traditional browsing, which relies heavily on tabs, links, and manual comparison, is increasingly viewed as inefficient and cognitively fragmented.

AI-first browsers aim to replace navigation-heavy workflows with conversational, context-aware interactions. Perplexity positions its Comet browser as an "intelligent interface" the user can call on at any moment, with the AI researching, summarizing, and synthesizing information in real time.

Rather than requiring clicks through multiple pages, complex tasks are condensed into seamless interactions that maintain context across every step. OpenAI's planned browser is likely to follow a similar approach, integrating a ChatGPT-like assistant directly into the browsing environment so users can act on information without leaving the page.

The browser is cast as a constant co-pilot, able to draft messages, summarise content, or perform transactions on the user's behalf rather than merely run searches. Some have described this as a shift from search to cognition.

Companies that integrate artificial intelligence deeply into everyday browsing hope not only to improve convenience but also to keep users engaged in their ecosystems for longer, strengthening brand recognition and habitual usage. A proprietary browser also enables the integration of AI services and agent-based systems that are difficult to deliver through third-party platforms.

A comprehensive understanding of browser architecture provides companies with the opportunity to embed language models, plugins, and autonomous agents at a foundational level of the browser. OpenAI's browser, for instance, is expected to be integrated directly with the company's emerging agent platform, enabling software capable of navigating websites, completing forms, and performing multi-step actions on its own.

Further ambitions are evident elsewhere too: The Browser Company's Dia places an AI assistant directly in the address bar, combining search and chat functionality with task automation while maintaining awareness of the user's context across multiple tabs. Such products point to a broader trend of building browsers around artificial intelligence rather than adding AI features to existing browsers.

With this approach, a company's AI services become the default experience whenever users search or interact with the web, rather than optional enhancements.

Finally, competitive pressure is a serious factor. Google's dominance in search and browsers has long been mutually reinforcing, channeling data and traffic through Chrome into the company's advertising empire and consolidating its position.

AI-first browsers pose a direct threat to this structure, aiming to divert users away from traditional search and toward AI-mediated discovery.

Perplexity's browser is part of a broader effort to compete with Google in search, and Reuters reports that OpenAI is intensifying its rivalry with Google by moving into browsers as well. Controlling the browser allows AI companies to intercept user intent at an earlier stage, so they are not dependent on existing platforms and are insulated from future changes in default settings and access rules.

Smaller AI players must also be prepared to defend their position as Google, Microsoft, and others rapidly integrate artificial intelligence into their own browsers.

With browsers remaining a crucial part of everyday life and work, the race to integrate artificial intelligence into these interfaces is intensifying, and many observers already describe the contest as the start of a new era of AI-driven browsers.

Taken together, the Cursor episode and the trend toward AI-first browsers sound a cautionary note for an industry rushing ahead of its own trial and error. Open repositories and independent scrutiny remain the ultimate arbiters of technical reality, regardless of public claims about autonomy and scale.

A number of companies are repositioning the browser as a strategic battleground, promising efficiency, personalization, and control, and developers, enterprises, and users are being urged to separate ambition from real-world implementation.

Analysts do not expect AI-powered browsers to fail; rather, their impact will depend less on headline-grabbing demonstrations than on demonstrated reliability, transparent attribution of human versus machine work, and thoughtful evaluation of security and economic trade-offs. In an industry known for speed and spectacle, that kind of scrutiny may yet prove the scarcest resource of all.

OpenAI Faces Court Order to Disclose 20 Million Anonymized ChatGPT Chats


In the latest legal battle over artificial intelligence and intellectual property, OpenAI is challenging a sweeping discovery order in a case that is pushing courts to redefine how they balance innovation, privacy, and the enforcement of copyright.

On Wednesday, the artificial intelligence company asked a federal judge to overturn a ruling requiring it to disclose 20 million anonymized ChatGPT conversation logs, warning that even de-identified records may reveal sensitive information about users.

In the underlying dispute, the New York Times and several other news organizations allege that OpenAI illegally used their copyrighted content to train its large language models.

On January 5, 2026, a federal district court in New York upheld two discovery orders requiring OpenAI to produce a substantial sample of ChatGPT interactions by the end of the year, a consequential milestone in litigation that sits at the intersection of copyright law, data privacy, and the emergence of artificial intelligence.

The decision reflects a growing willingness by judicial authorities to critically examine the internal data practices of AI developers, even as corporations argue that disclosures of this sort could have far-reaching implications for both user trust and the confidentiality of the platforms themselves. At the center of the controversy, plaintiffs are requesting access to ChatGPT conversation logs that record both user prompts and the system's responses.

Those logs, they argue, are crucial for evaluating the claims of copyright infringement as well as OpenAI's asserted defenses, including fair use. In July 2025, the plaintiffs moved for production of a 120-million-log sample; OpenAI refused, citing the scale of the request and the privacy concerns involved.

OpenAI, which maintains billions of logs as part of its normal operations, initially resisted the request. It countered by proposing to produce 20 million conversations, stripped of personally identifiable and sensitive information through a proprietary de-identification process.

Plaintiffs accepted the reduced sample as an interim measure, while reserving the right to pursue a broader sample if the data proved insufficient. Tensions escalated in October 2025, when OpenAI changed its position, offering instead to run targeted keyword searches across the 20-million-log dataset and produce only the conversations those terms showed directly implicated the plaintiffs' works.
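The two production approaches at issue, handing over a de-identified sample versus searching it for terms tied to the plaintiffs' works, can be pictured with the toy pass below. It is only an illustration: OpenAI describes its real de-identification as a proprietary process, and the patterns, search terms, and sample logs here are invented.

```typescript
// Toy illustration of (a) de-identifying conversation logs and (b) filtering them by
// keyword, the two production approaches discussed above. The regexes and terms are
// hypothetical; OpenAI's actual de-identification process has not been disclosed.

interface ChatLog { id: string; prompt: string; response: string }

const piiPatterns: [RegExp, string][] = [
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]"],  // email addresses
  [/\b\d{3}[- ]?\d{2}[- ]?\d{4}\b/g, "[SSN]"],   // SSN-like numbers
  [/\b\+?\d[\d\s().-]{8,}\d\b/g, "[PHONE]"],     // phone-number-like strings
];

function deidentify(log: ChatLog): ChatLog {
  const scrub = (text: string) =>
    piiPatterns.reduce((t, [pattern, label]) => t.replace(pattern, label), text);
  return { ...log, prompt: scrub(log.prompt), response: scrub(log.response) };
}

// The narrower proposal: return only logs whose text matches plaintiff-supplied terms.
function filterByTerms(logs: ChatLog[], terms: string[]): ChatLog[] {
  return logs.filter(l =>
    terms.some(t => (l.prompt + " " + l.response).toLowerCase().includes(t.toLowerCase())));
}

const sample: ChatLog[] = [
  { id: "1", prompt: "Summarize this New York Times article for me", response: "..." },
  { id: "2", prompt: "Email me at jane@example.com about my 555-867-5309 bill", response: "..." },
];

console.log(filterByTerms(sample.map(deidentify), ["new york times"])); // only log "1" survives
```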

In OpenAI's view, limiting disclosure to filtered results would better safeguard user privacy by preventing the exposure of unrelated communications. Plaintiffs swiftly rejected this approach, filing a new motion demanding release of the entire de-identified dataset.

On November 7, 2025, U.S. Magistrate Judge Ona Wang sided with the plaintiffs, ordering OpenAI to provide the full sample and denying the company's request for reconsideration. The judge ruled that access to both relevant and ostensibly irrelevant logs was necessary for a comprehensive and fair analysis of OpenAI's claims.

Under that reasoning, even conversations that do not directly reference copyrighted material may bear on OpenAI's fair use defense. As for privacy risks, the court found that reducing the dataset from billions to 20 million records, applying de-identification measures, and enforcing a standing protective order adequately mitigated them.

With the litigation entering a more consequential phase and court-imposed production deadlines approaching, Keker Van Nest, Latham & Watkins, and Morrison & Foerster are representing OpenAI in the matter.

Legal observers note that the order reflects a broader judicial posture toward artificial intelligence disputes: courts are increasingly willing to compel extensive discovery, even of anonymized data, to examine how large language models are trained and whether copyrighted material is involved.

Crucially, the ruling strengthens the procedural avenues available to publishers and other content owners challenging alleged copyright violations by AI developers. It also highlights the vigilance technology companies must bring to their stewardship of large repositories of user-generated data, and the legal risks associated with retaining, processing, and releasing such data.

The dispute has intensified amid allegations that OpenAI failed to suspend certain data deletion practices after the litigation commenced, potentially destroying evidence relevant to claims that some users bypassed publisher paywalls through OpenAI products.

Plaintiffs claim the deletions disproportionately affected free- and subscription-tier user records, raising concerns about whether evidence preservation obligations were fully met. Microsoft, named as a co-defendant in the case, has been required to produce more than eight million anonymized Copilot interaction logs and has not faced similar data preservation complaints.

CybersecurityNews quoted Dr. Ilia Kolochenko, CEO of ImmuniWeb, on the implications of the ruling. He said that while it represents a significant legal setback for OpenAI, it could also embolden other plaintiffs to pursue similar discovery strategies or take advantage of stronger settlement positions in parallel proceedings.

The allegations have prompted requests for deeper scrutiny of OpenAI's internal data governance practices, including injunctions to prevent further deletions until it is clear what remains and what is potentially recoverable. Beyond the courtroom, the case has been accompanied by intensifying investor scrutiny across the artificial intelligence industry.

With companies such as SpaceX and Anthropic preparing for possible public offerings at valuations that could reach hundreds of billions of dollars, market confidence increasingly depends on companies' ability to cope with regulatory exposure, rising operational costs, and the competitive pressures of rapid artificial intelligence development.

Meanwhile, speculation about strategic acquisitions that could reshape the competitive landscape continues across the industry. Reports that OpenAI is exploring Pinterest highlight the strategic value that large volumes of user interaction data hold for enhancing product search capabilities and increasing ad revenue, both increasingly critical considerations as major technology companies compete for real-time consumer engagement and data-driven growth.

The litigation has gained added urgency from the news organizations' detailed allegations that a significant volume of potentially relevant data was destroyed because OpenAI failed to preserve key evidence after the lawsuit was filed.

According to a court filing, plaintiffs learned nearly 11 months ago that large quantities of ChatGPT output logs, reportedly covering a considerable number of Free, Pro, and Plus user conversations, had been deleted at a disproportionately high rate after the suit was filed.

Plaintiffs argue that users trying to circumvent paywalls were more likely to enable chat deletion, suggesting that this category of data is the most likely to contain infringing material. The filings further assert that OpenAI has offered no rationale for the deletion of approximately one-third of all user conversations after the New York Times' complaint, other than citing an apparently anomalous drop in usage around the New Year of 2024.

The news organizations allege that OpenAI has continued routine deletion practices without implementing litigation holds, despite two additional spikes in mass deletions attributed to technical issues, while selectively retaining outputs relating to accounts mentioned in the publishers' complaints.

Citing testimony by OpenAI's associate general counsel, Mike Trinh, plaintiffs argue that OpenAI preserved the documents that substantiate its own defenses while failing to preserve records that could substantiate third parties' claims.

According to the plaintiffs' filings, the precise extent of the data loss remains unclear because OpenAI still refuses to disclose even basic details about what it does and does not erase, an approach they contrast with Microsoft's ability to preserve Copilot log files without similar difficulty.

Because Microsoft has not yet produced searchable Copilot logs, and in light of OpenAI's mass deletions, the news organizations are asking the court to order Microsoft to produce those logs as soon as possible.

They have also asked the court to maintain the existing preservation orders preventing further permanent deletion of output data, to compel OpenAI to accurately disclose the extent to which output data has been destroyed across its products, and to clarify whether any of that information can be restored and examined for legal purposes.

Lego’s Move Into Smart Toys Faces Scrutiny From Play Professionals


 

In the wake of unveiling its smart brick technology, LEGO is seeking to reassure critics who argue that the initiative could undermine the company's commitment to hands-on, imaginative play, a commitment central to its long history of innovation.

The announcement signals a significant shift in LEGO's product strategy and has sparked early debate among industry observers and play experts about whether adding digital intelligence to LEGO bricks could pull the company away from its traditional brick foundation.

A few weeks ago, Federico Begher, LEGO's Senior Vice President of Product and New Business, addressed these concerns in an interview with IGN, explaining that the introduction of smart elements is a milestone the company has considered carefully for years, one intended to enhance, rather than replace, the tactile creativity that has characterized the brand for generations.

With the launch of the new Smart Bricks, LEGO has introduced one of the most significant product developments in its history, positioning the company to reinvent how its iconic building system engages a new generation of players.

The technology, introduced at CES 2026, embeds sound, light, and motion-responsive elements directly into bricks, allowing structures to respond dynamically to touch and movement.

During the announcement, LEGO executives framed the initiative as a natural extension of the brand's creative ethos, intended to entice children to move beyond static construction by designing interactive models that can be programmed and adapted in real time.

There has been a great deal of enthusiasm for the approach as a way to encourage children to learn digital literacy and problem-solving at an early age, but education and child-development specialists have expressed more measured reactions.

Some have warned that increased reliance on electronics may alter the tactile, open-ended nature of traditional brick-based play, even as others recognize that it can expand the educational possibilities available to children.

At the core of LEGO's Smart Play ecosystem is a newly developed Smart Brick that replicates the dimensions of the familiar 2x4 brick while packing in the embedded electronics that make Smart Play work.

Alongside a custom microchip, the brick contains motion and light sensors, orientation detection, integrated LEDs, and a compact speaker. It anchors a wider system that also includes Smart Minifigures and Smart Tags, each carrying its own distinct digital identifier.

Whenever these elements are combined or brought into proximity with one another, the Smart Brick recognizes them and performs predefined behaviors or lighting effects.

Multiple Smart Bricks coordinate their responses over BrickNet, a proprietary local wireless protocol, with no need for internet connectivity, cloud-based processing, or companion applications.

Despite occasional mention of artificial intelligence, LEGO has emphasized that the system relies on on-device logic rather than adaptive or generative models, delivering consistent and predictable responses meant to complement and enhance traditional hands-on play, not replace it.

Smart Bricks respond to simple physical interactions, with directional changes, impacts, or proximity triggering predetermined visual and audio cues. Smart Tags add contextual storytelling elements that guide play scenarios, and a model that falls over can trigger flashing lights and sound effects.
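LEGO has not published how that on-device logic is implemented, but behavior of the kind described above amounts to a fixed event-to-effect lookup rather than a learned model. The sketch below is hypothetical; the event names and effects are invented purely for illustration.

```typescript
// Hypothetical sketch of fixed, on-device trigger rules of the kind described above
// (sensor event -> predefined light/sound effect). LEGO's actual firmware is not public;
// every event name and effect here is invented for illustration.

type SensorEvent = "impact" | "tilt" | "tag-nearby" | "minifig-attached";
interface Effect { leds: string; sound?: string }

// A fixed lookup table: no network, no adaptive model, same output every time.
const rules: Record<SensorEvent, Effect> = {
  "impact":           { leds: "flash-red",   sound: "crash" },
  "tilt":             { leds: "pulse-amber" },
  "tag-nearby":       { leds: "story-cue",   sound: "chime" },
  "minifig-attached": { leds: "glow-blue",   sound: "greeting" },
};

function onSensorEvent(event: SensorEvent): Effect {
  return rules[event]; // deterministic: identical input always yields identical output
}

console.log(onSensorEvent("impact")); // { leds: "flash-red", sound: "crash" }
```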

Academics have offered cautious praise for this combination of digital responsiveness and tangible construction. Professor Andrew Manches, a specialist in children and technology at the University of Edinburgh, described the system as technologically advanced, but added that imaginative play ultimately relies on a child's ability to develop narratives on their own rather than follow scripted prompts.

LEGO plans to release Smart Bricks on March 1, 2026, with Star Wars-themed sets arriving first and preorders opening January 9 through the company's retail channels and select partners.

The electronic components position the products as premium items, with prices ranging from entry-level sets under $100 to large collections over $150. Some child advocacy groups have expressed concern that the preprogrammed responses in LEGO's BrickNet system could subtly restrict creative freedom or introduce privacy risks.

LEGO, however, maintains that its offline, encrypted system avoids many of the vulnerabilities associated with app-dependent smart toys that rely on internet connections. The company has introduced interactive elements into its portfolio gradually, balancing technological innovation with the enduring appeal of physical, open-ended play that has shaped its broader digital strategy.

As the debate over Smart Bricks continues, a more fundamental question looms: how the world's largest toy maker will manage the tension between tradition and innovation.

LEGO executives insist there are no near-term plans to replace classic bricks; the technology is designed to add a complementary layer of value that families can choose to engage with or simply ignore.

By keeping the system fully offline and avoiding app dependency, the company has attempted to address the data security and privacy concerns that increasingly shape conversations about connected toys.

According to industry analysts, LEGO's premium pricing and phased rollout, starting with internationally popular licensed themes, suggest a market-tested approach rather than a wholesale change in the company's identity.

Whether Smart Bricks succeed over the long term will depend largely on whether they can earn the trust of parents, educators, and children once they enter homes later this year, reinforcing LEGO's reputation for fostering creativity while adapting to the expectations of a digitally native generation.

Privacy Takes Center Stage in WhatsApp’s Latest Feature Update

 


With billions of users worldwide, WhatsApp is a crucial communication platform for both personal and professional exchanges. That same reach and popularity have also made it an increasingly attractive target for cybercriminals.

Recent security research has highlighted emerging threats that exploit the platform's ecosystem. One technique, known as GhostPairing, covertly links a victim's account to a malicious browser session through the use of a covert link.

Separate studies have also shown that the app's contact discovery functionality can be exploited by third parties to expose large numbers of phone numbers, profile photos, and other identifying information, raising fresh concerns about large-scale data exploitation.

Although WhatsApp relies heavily on end-to-end encryption to safeguard message content and has added further protections such as passkey-secured backups and privacy-conscious artificial intelligence features, security experts emphasize that user awareness remains a critical factor in staying protected.

The platform includes a variety of built-in tools that, when properly enabled, can significantly enhance account security and reduce exposure to evolving digital threats.

WhatsApp has continued to strengthen its end-to-end encryption framework and expand its portfolio of privacy-centric security controls in response to these evolving risks. Security analysts note, however, that limited user awareness often undermines these safeguards, and many account holders never properly configure the protections already available to them.

Properly enabled, WhatsApp's native privacy settings can prevent unauthorised access, curb data misuse, and reduce the risk of account takeover. This matters all the more because the platform is routinely used to exchange sensitive information such as Aadhaar details, bank credentials, one-time passwords, personal images, and official documents.

Experts warn that lax privacy configurations can put sensitive personal data at risk of fraud, identity theft, and social engineering attacks, while even a modest effort to review and tighten privacy controls can significantly improve one's digital security posture. Against the backdrop of these broader privacy debates, the introduction of Meta AI within WhatsApp has become a focus of concern for both users and privacy advocates.

The AI chatbot, accessible via a persistent blue icon on the Chats screen, lets users generate images and receive responses to prompts, but its constant presence has sparked concerns over data handling, consent management, and user control.

Despite WhatsApp's assurances that the chatbot processes only messages users intentionally share with it, many are uneasy about the inability to disable or remove Meta AI, especially given uncertainty around data retention, AI training, and possible third-party access.

Although the company cautions against sharing sensitive personal information with the chatbot, it has implicitly acknowledged that data shared with Meta AI may be used to refine the underlying model.

Against this backdrop, and rather than addressing the concerns around AI integration directly, WhatsApp has rolled out a feature that adds a layer of confidentiality to selected conversations, removing Meta AI from those threads while end-to-end encryption continues to protect user-to-user messages.

Critics contend that while the feature helps safeguard sensitive information, it has limits, such as still allowing screenshots and manual saving of content, which undercut its ability to provide comprehensive protection.

The feature may temporarily ease anxiety about Meta AI's presence in private conversations, but experts say it does little to resolve deeper concerns about transparency, consent, and control over how AI systems collect and use data.

WhatsApp will eventually need to address those concerns more directly as it rolls out further updates. The platform continues to serve as a primary channel for workplace communication, but security experts warn that convenience has quietly outpaced caution as it consolidates that position.

Many professionals still use default account settings, leaving risks of hijacking, impersonation, and data theft that extend well beyond personal privacy to client confidentiality and brand reputation.

Several layers of protection are widely available, including two-step verification, device management, biometric app locks, encrypted backups, and regular privacy checkups, yet they remain underutilized despite their proven effectiveness at preventing common takeovers and phishing attempts.

Experts emphasize, however, that technical controls alone are not sufficient. Human error remains one of the most exploited vulnerabilities, especially as attackers increasingly use WhatsApp for social engineering scams, voice phishing, and impersonation of executives.

Adoption of structured phishing simulation and awareness programs has risen in recent years, and industry data suggests such programs can significantly reduce breach costs and employee susceptibility to attacks.

In a climate where messaging apps have become both indispensable tools and high-value targets, it is increasingly important for organizations to safeguard sensitive conversations through disciplined application of WhatsApp's built-in protections and sustained investment in user training.

Taken together, these developments underscore the widening gap between WhatsApp's security capabilities and how the app is actually used. As it evolves into a hybrid space for personal communication, business coordination, and AI-assisted interactions, privacy and data protection concerns are growing.

Attack techniques have advanced over the years, and their combination with the opaque integration of artificial intelligence and widespread reliance on default settings has created an environment in which users bear ever more responsibility for their own security.

WhatsApp has made real progress on security, introducing meaningful safeguards and announcing further updates, but their ultimate impact depends on informed adoption, transparent governance, and sustained scrutiny from regulators and the security community.

As clearer boundaries are established around data use and user control, protecting conversations on one of the world's most popular messaging platforms will remain not only a technical challenge but also a test of trust between users and the service they rely on every day.

Chinese Hacking Group Breaches Email Systems Used by Key U.S. House Committees: Report

 

A cyber espionage group believed to be based in China has reportedly gained unauthorized access to email accounts used by staff working for influential committees in the U.S. House of Representatives, according to a report by the Financial Times published on Wednesday. The information was shared by sources familiar with the investigation.

The group, known as Salt Typhoon, is said to have infiltrated email systems used by personnel associated with the House China committee, along with aides serving on committees overseeing foreign affairs, intelligence, and armed services. The report did not specify the identities of the staff members affected.

Reuters said it was unable to independently confirm the details of the report. Responding to the allegations, Chinese Embassy spokesperson Liu Pengyu criticized what he described as “unfounded speculation and accusations.” The Federal Bureau of Investigation declined to comment, while the White House and the offices of the four reportedly targeted committees did not immediately respond to media inquiries.

According to one source cited by the Financial Times, it remains uncertain whether the attackers managed to access the personal email accounts of lawmakers themselves. The suspected intrusions were reportedly discovered in December.

Members of Congress and their staff, particularly those involved in overseeing the U.S. military and intelligence apparatus, have historically been frequent targets of cyber surveillance. Over the years, multiple incidents involving hacking or attempted breaches of congressional systems have been reported.

In November, the Senate Sergeant at Arms alerted several congressional offices to a “cyber incident” in which hackers may have accessed communications between the nonpartisan Congressional Budget Office and certain Senate offices. Separately, a 2023 report by the Washington Post revealed that two senior U.S. lawmakers were targeted in a hacking campaign linked to Vietnam.

Salt Typhoon has been a persistent concern for the U.S. intelligence community. The group, which U.S. officials allege is connected to Chinese intelligence services, has been accused of collecting large volumes of data from Americans’ telephone communications and intercepting conversations, including those involving senior U.S. politicians and government officials.

China has repeatedly rejected accusations of involvement in such cyber spying activities. Early last year, the United States imposed sanctions on alleged hacker Yin Kecheng and the cybersecurity firm Sichuan Juxinhe Network Technology, accusing both of playing a role in Salt Typhoon’s operations.

How Gender Politics Are Reshaping Data Privacy and Personal Information




Contemporary legal and administrative actions in the United States are reshaping how personal data is recorded, shared, and accessed by government systems. For transgender and gender diverse individuals, these changes carry heightened risks, as identity records and healthcare information become increasingly entangled with political and legal enforcement mechanisms.

One of the most visible shifts involves federal identity documentation. Updated rules now require U.S. passport applicants to list sex as assigned at birth, eliminating earlier flexibility in gender markers. Courts have allowed this policy to proceed despite legal challenges. Passport data does not function in isolation. It feeds into airline systems, border controls, employment verification processes, financial services, and law enforcement databases. When official identification does not reflect an individual’s lived identity, transgender and gender diverse people may face repeated scrutiny, increased risk of harassment, and complications during travel or routine identity checks. From a data governance perspective, embedding such inconsistencies also weakens the accuracy and reliability of federal record systems.

Healthcare data has become another major point of concern. The Department of Justice has expanded investigations into medical providers offering gender related care to minors by applying existing fraud and drug regulation laws. These investigations focus on insurance billing practices, particularly the use of diagnostic codes to secure coverage for treatments. As part of these efforts, subpoenas have been issued to hospitals and clinics across the country.

Importantly, these subpoenas have sought not only financial records but also deeply sensitive patient information, including names, birth dates, and medical intake forms. Although current health privacy laws permit disclosures for law enforcement purposes, privacy experts warn that this exception allows personal medical data to be accessed and retained far beyond its original purpose. Many healthcare providers report that these actions have created a chilling effect, prompting some institutions to restrict or suspend gender related care due to legal uncertainty.

Other federal agencies have taken steps that further intensify concern. The Federal Trade Commission, traditionally focused on consumer protection and data privacy, has hosted events scrutinizing gender affirming healthcare while giving limited attention to patient confidentiality. This shift has raised questions about how privacy enforcement priorities are being set.

As in person healthcare becomes harder to access, transgender and gender diverse individuals increasingly depend on digital resources. Research consistently shows that the vast majority of transgender adults rely on the internet for health information, and a large proportion use telehealth services for medical care. However, this dependence on digital systems also exposes vulnerabilities, including limited broadband access, high device costs, and gaps in digital literacy. These risks are compounded by the government’s routine purchase of personal data from commercial data brokers.

Privacy challenges extend into educational systems as well. Courts have declined to establish a national standard governing control over students’ gender related data, leaving unresolved questions about who can access, store, and disclose sensitive information held by schools.

Taken together, changes to identity documents, aggressive access to healthcare data, and unresolved data protections in education are creating an environment of increased surveillance for transgender and gender diverse individuals. While some state level actions have successfully limited overly broad data requests, experts argue that comprehensive federal privacy protections are urgently needed to safeguard sensitive personal data in an increasingly digital society.

Inside the Hidden Market Where Your ChatGPT and Gemini Chats Are Sold for Profit

 

Millions of users may have unknowingly exposed their most private conversations with AI tools after cybersecurity researchers uncovered a network of browser extensions quietly harvesting and selling chat data.

Here’s a reminder many people forget: an AI assistant is not your friend, not a financial expert, and definitely not a doctor or therapist. It’s simply someone else’s computer, running in a data center and consuming energy and water. What you share with it matters.

That warning has taken on new urgency after cybersecurity firm Koi uncovered a group of Google Chrome extensions that were quietly collecting user conversations with AI tools and selling that data to third parties. According to Koi, “Medical questions, financial details, proprietary code, personal dilemmas,” were being captured — “all of it, sold for ‘marketing analytics purposes.’”

This issue goes far beyond just ChatGPT or Google Gemini. Koi says the extensions indiscriminately target multiple AI platforms, including “Claude, Microsoft Copilot, Perplexity, DeepSeek, Grok (xAI) and Meta AI.” In other words, using any browser-based AI assistant could expose sensitive conversations if these extensions are installed.

The mechanism is built directly into the extensions. Koi explains that “for each platform, the extension includes a dedicated ‘executor’ script designed to intercept and capture conversations.” This data harvesting is enabled by default through hardcoded settings, with no option for users to turn it off. As Koi warns, “There is no user-facing toggle to disable this. The only way to stop the data collection is to uninstall the extension entirely.”

Once installed, the extensions monitor browser activity. When a user visits a supported AI platform, the extension injects a specific script — such as chatgpt.js, claude.js, or gemini.js — into the page. The result is total visibility into AI usage. As Koi puts it, this includes “Every prompt you send to the AI. Every response you receive. Conversation identifiers and timestamps. Session metadata. The specific AI platform and model used.”
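Because the collection works through page-level script injection, one rough way to inspect your own browser is to list script tags that load from an extension origin while visiting an AI chat page. The snippet below is a partial check only: extensions that run solely in isolated content-script worlds will not appear, so an empty result is not proof of safety. It is written as TypeScript; drop the type annotation if pasting it into a plain JavaScript console.

```typescript
// Rough, partial check: list <script> tags on the current page that load from a
// browser-extension origin, the injection pattern Koi describes (e.g. chatgpt.js).
// An empty list does not guarantee safety; isolated content scripts are invisible here.
const injected = Array.from(document.querySelectorAll<HTMLScriptElement>("script[src]"))
  .map(s => s.src)
  .filter(src => src.startsWith("chrome-extension://") || src.startsWith("moz-extension://"));

console.log(injected.length ? injected : "No extension-origin <script> tags found on this page");
```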

Alarmingly, this behavior was not part of the extension’s original design. It was introduced later through updates, while the privacy policy remained vague and misleading. Although the tool is marketed as a privacy-focused product, Koi says it does the opposite. The policy admits: “We share the Web Browsing Data with our affiliated company,” described as a data broker “that creates insights which are commercially used and shared.”

The main extension involved is Urban VPN Proxy, which alone has around six million users. After identifying its behavior, Koi searched for similar code and found it reused across multiple products from the same publisher, spanning both Chrome and Microsoft Edge.

Affected Chrome Web Store extensions include:
  • Urban VPN Proxy – 6,000,000 users
  • 1ClickVPN Proxy – 600,000 users
  • Urban Browser Guard – 40,000 users
  • Urban Ad Blocker – 10,000 users
On Microsoft Edge Add-ons, the list includes:
  • Urban VPN Proxy – 1,323,622 users
  • 1ClickVPN Proxy – 36,459 users
  • Urban Browser Guard – 12,624 users
  • Urban Ad Blocker – 6,476 users
Despite this activity, most of these extensions carry “Featured” badges from Google and Microsoft. These labels suggest that the tools have been reviewed and meet quality standards — a signal many users trust when deciding what to install.

Koi and other experts argue that this highlights a deeper problem with extension privacy disclosures. While Urban VPN does technically mention some of this data collection, it’s easy to miss. During setup, users are told the extension processes “ChatAI communication” along with “pages you visit” and “security signals,” supposedly “to provide these protections.”

Digging deeper, the privacy policy spells it out more clearly: “‘AI Inputs and Outputs. As part of the Browsing Data, we will collect the prompts and outputs queried by the End-User or generated by the AI chat provider, as applicable.’” It also states plainly: “‘We also disclose the AI prompts for marketing analytics purposes.’”

The extensions, Koi warns, “remained live for months while harvesting some of the most personal data users generate online.” The advice is blunt: “if you have any of these extensions installed, uninstall them now. Assume any AI conversations you've had since July 2025 have been captured and shared with third parties.”

U.S. Authorities Shut Down Online Network Selling Fake Identity Templates

 



United States federal authorities have taken down an online operation accused of supplying tools used in identity fraud across multiple countries. The case centers on a Bangladeshi national who allegedly managed several websites that sold digital templates designed to imitate official government identification documents.

According to U.S. prosecutors, the accused individual, Zahid Hasan, is a 29-year-old resident of Dhaka. He is alleged to have operated an online business that distributed downloadable files resembling authentic documents such as U.S. passports, social security cards, and state driver’s licenses. These files were not physical IDs but editable digital templates that buyers could modify by inserting personal details and photographs.

Court records indicate that the operation ran for several years, beginning in 2021 and continuing until early 2025. During this period, the websites reportedly attracted customers from around the world. Investigators estimate that more than 1,400 individuals purchased these templates, generating nearly $2.9 million in revenue. Despite the scale of the operation, individual items were sold at relatively low prices, with some templates costing less than $15.

Law enforcement officials state that such templates are commonly used to bypass identity verification systems. Once edited, the counterfeit documents can be presented to banks, cryptocurrency platforms, and online services that rely on document uploads to confirm a user’s identity. This type of fraud poses serious risks, as it enables financial crimes, account takeovers, and misuse of digital platforms.

The investigation intensified after U.S. authorities traced a transaction in which Bitcoin was exchanged for fraudulent templates by a buyer located in Montana. Following this development, federal agents moved to seize multiple domains allegedly connected to the operation. These websites are now under government control and no longer accessible for illegal activity.

The case involved extensive coordination between agencies. The FBI’s Billings Division and Salt Lake City Cyber Task Force led the investigation, with support from the FBI’s International Operations Division. Authorities in Bangladesh, including the Dhaka Metropolitan Police’s Counterterrorism and Transnational Crime Unit, also assisted in tracking the alleged activities.

A federal grand jury has returned a nine-count indictment against Hasan. The charges include multiple counts related to the distribution of false identification documents, passport fraud, and social security fraud. If convicted, the penalties could include lengthy prison sentences, substantial fines, and supervised release following incarceration.

The case is being prosecuted by Assistant U.S. Attorney Benjamin Hargrove. As with all criminal proceedings, the charges represent allegations, and the accused is presumed innocent unless proven guilty in court.

Cybersecurity experts note that the availability of such tools highlights the growing sophistication of digital fraud networks. The case underscores the importance of international cooperation and continuous monitoring to protect identity systems and prevent large-scale misuse of personal data.



Aadhaar Verification Rules Amended as India Strengthens Data Compliance

India's flagship digital identity system, Aadhaar, is set to undergo significant changes to its regulatory framework in the coming days, following a formal amendment to the regulations governing how the system authenticates and verifies identities.

The revision formally recognizes facial authentication as a legally acceptable method of verifying a person's identity, a significant addition to the traditional biometric methods of fingerprint and iris scans.

The updated regulations introduce a stronger compliance architecture centred on explicit user consent, data minimisation, and privacy protection. The government appears to have made a deliberate effort to align Aadhaar's operational model with evolving expectations around biometric governance, data protection, and the responsible use of digital identity systems.

As part of this regulatory overhaul, the Unique Identification Authority of India (UIDAI) has introduced a new digital identity tool, the Aadhaar Verifiable Credential, to enable secure, tamper-proof identity verification.

Additionally, the authority has tightened the compliance framework governing offline Aadhaar verification, placing greater accountability on entities that authenticate identities without direct, real-time access to UIDAI's systems. These measures were introduced through amendments to the Aadhaar (Authentication and Offline Verification) Regulations, 2021, formally published by UIDAI on December 9 in the Gazette and on its website.

UIDAI has also launched a dedicated mobile application that gives individuals greater control over how their Aadhaar data is shared, underscoring the shift towards a user-centric, privacy-conscious identity ecosystem.

Under the newly released rules, facial recognition is officially authorised as a valid means of authentication, while consent requirements, purpose limitations, and data-use obligations are tightened to ensure compliance with the Digital Personal Data Protection Act.

The revisions also mark a substantial shift in the scope of Aadhaar's deployment, extending its application beyond welfare delivery and government services to a wider range of private-sector uses under stricter regulation. The change coincides with UIDAI's preparations to launch a newly designed mobile application for Aadhaar.

According to officials, the application will support Aadhaar-based identification for routine scenarios such as event access, hotel registrations, deliveries, and physical access control, without requiring continuous real-time authentication against a central database.

Alongside provisions that explicitly acknowledge facial authentication in addition to the existing biometric and one-time password mechanisms, the updated framework strengthens the rules governing offline Aadhaar verification, so that identity checks can be carried out in a controlled manner without a direct connection to UIDAI's systems.

As part of the revised framework, offline Aadhaar verification is also broadened beyond the limited QR code scanning that was previously used. A number of verification methods have been authorised by UIDAI as a result of this notification, including QR code-based checks, paperless offline e-KYC, Aadhaar Verifiable Credential validation, electronic authentication through Aadhaar, and paper-based offline verification. 

Additional mechanisms may be approved over time. The most significant element of this expansion is the Aadhaar Verifiable Credential, a digitally signed, cryptographically secured document containing select demographic data. Because it can be verified locally without repeatedly consulting UIDAI's central databases, the credential aims to reduce dependence on live authentication while addressing long-standing privacy and data security concerns.
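To make the idea concrete, the sketch below shows how a digitally signed credential can be checked entirely offline. It is illustrative only: the credential's actual format, key scheme, and field names are not described here, so the Ed25519 signature, JSON layout, and function names are assumptions, written in Python with the widely used cryptography library.

import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side (conceptually, the credential authority). The key scheme is assumed.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()  # distributed to verifiers in advance

credential = {
    "name": "A. Resident",          # illustrative demographic fields only
    "year_of_birth": 1990,
    "photo_digest": "b1946ac9...",  # hash of the embedded photo, not the photo itself
}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)  # travels with the credential


# Verifier side: check the signature locally, with no call to a central database.
def verify_offline(payload: bytes, signature: bytes, public_key) -> bool:
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False


print(verify_offline(payload, signature, issuer_public_key))                           # True
print(verify_offline(payload.replace(b"1990", b"1985"), signature, issuer_public_key)) # False: tampered

Because only the issuer's public key is needed at the point of verification, the check works even without a network connection, which is the property the new credential is meant to provide.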

The regulations additionally introduce offline face verification, which allows a locally captured photograph of the Aadhaar holder to be compared against the photo embedded in the credential without transmitting biometric information over an external network. The amendments also establish a formal regulatory framework for entities that conduct these checks, known as Offline Verification Seeking Entities.

UIDAI now requires organizations seeking to conduct offline Aadhaar verification to register, submit detailed operational and technical disclosures, and adhere to prescribed procedural safeguards. The authority has been granted a range of powers, including the ability to review applications, conduct inspections, obtain clarifications, and suspend or revoke access in cases of noncompliance.

The enforcement provisions clearly outline the grounds for action, including misuse of verification facilities, deviation from UIDAI standards, failure to cooperate with audits, and facilitation of identity-related abuse. Notably, the rules require that affected entities be given an opportunity to present their case before punitive measures are imposed, reinforcing due process and fairness in regulation.

In the private sector, Aadhaar-based verification remains largely unstructured: hotels, housing societies, and other service providers routinely collect photocopies or images of identity documents, which are then shared informally among vendors, security personnel, and front-desk employees with little clarity about how those documents are retained or deleted.

The new registration framework is intended to replace this fragmented system with a regulated one, in which private organizations are formally onboarded as Offline Verification Seeking Entities and required to use UIDAI-approved verification flows instead of storing Aadhaar copies, whether physical or digital.

Central to this transition, one of the key features of UIDAI's upcoming mobile application will be selective disclosure, allowing residents to choose what information is shared for a particular purpose. A hotel, for example, may receive only a guest's name and age bracket, a telecommunications provider only an address, or a delivery service only a name and photograph, rather than a full identity record.
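The selective-disclosure idea can be illustrated with a small sketch. The purpose names and field policy below are invented for demonstration and are not UIDAI's actual rules; the point is simply that the holder's app releases only the attributes a given verifier needs.

# Illustrative only: purposes and permitted fields are assumptions, not UIDAI policy.
FULL_RECORD = {
    "name": "A. Resident",
    "age_bracket": "25-40",
    "address": "12 Example Street",   # placeholder value
    "photo_ref": "photo.jpg",
}

# Which fields each kind of verifier is allowed to receive.
DISCLOSURE_POLICY = {
    "hotel_checkin": {"name", "age_bracket"},
    "telecom_kyc": {"name", "address"},
    "parcel_delivery": {"name", "photo_ref"},
}


def disclose(record: dict, purpose: str) -> dict:
    """Return only the attributes permitted for the stated purpose."""
    allowed = DISCLOSURE_POLICY[purpose]
    return {field: value for field, value in record.items() if field in allowed}


print(disclose(FULL_RECORD, "hotel_checkin"))  # {'name': 'A. Resident', 'age_bracket': '25-40'}

The verifier never sees the full record, which is the behaviour the application is intended to enforce for each class of service provider.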

The application will also store Aadhaar details for family members, allow biometric locks to be applied and removed instantly, and let demographic information be updated directly, reducing reliance on paper-based processes. The effect is to shift control towards individuals, minimizing the data exposed to service providers and curbing the indefinite circulation of identity documents.

The regulatory push is part of a broader ecosystem-building effort by UIDAI. In November, the authority held a webinar attended by over 250 organizations, including hospitality chains, logistics companies, real estate managers, and event planners, to prepare for the rollout.

The outreach comes amid ongoing concerns about vulnerabilities in the Aadhaar ecosystem. According to data from the Indian Cyber Crime Coordination Centre, Aadhaar Enabled Payment System transactions accounted for approximately 11 percent of cyber-enabled financial fraud in 2023.

Several states have reported instances in which cloned fingerprints linked to Aadhaar were used to siphon beneficiary funds, most often after data leaked from public records or inadequately secured systems. Some privacy experts warn that extending Aadhaar-based authentication into private access environments could amplify these systemic risks if safeguards are not developed in parallel.

Earlier this year, researchers from civil society organizations highlighted that anonymized Aadhaar-linked datasets remain at risk of re-identification, and that the current data protection law does not regulate anonymized data sufficiently, meaning the new controls could break down when such data is repurposed and processed downstream.

The amendments recalibrate Aadhaar's role within India's rapidly growing digital economy, balancing greater usability against tighter governance. By formalizing offline verification, restricting data use through selective disclosure, and imposing clearer obligations on private actors, the revised regulations aim to curb the informal practices that have long heightened privacy and security risks.

Their success, however, will depend largely on disciplined implementation, continued regulatory oversight, and the willingness of industry stakeholders to abandon legacy habits of indiscriminate data collection. For service providers, the transition offers clear advantages: more efficient, privacy-preserving verification methods that reduce compliance risk.

Residents, in turn, gain greater control over their personal data in everyday interactions with providers. As Aadhaar moves deeper into private-sector settings, continued transparency from UIDAI, regular audits of verification entities, and public awareness around consent and data rights will be critical to preserving trust and ensuring that convenience doesn't come at the expense of security.

If implemented as planned, the changes could serve as a blueprint for how large-scale digital identity systems can evolve responsibly in an era of heightened data protection expectations.

U.S. Startup Launches Mobile Service That Requires No Personal Identification

A newly launched U.S. mobile carrier is challenging long-standing telecom practices by offering phone service without requiring customers to submit personal identification. The company, Phreeli, presents itself as a privacy-focused alternative in an industry known for extensive data collection.

Phreeli officially launched in early December and describes its service as being built with privacy at its core. Unlike traditional telecom providers that ask for names, residential addresses, birth dates, and other sensitive information, Phreeli limits its requirements to a ZIP code, a chosen username, and a payment method. According to the company, no customer profiles are created or sold, and user data is not shared for advertising or marketing purposes.

Customers can pay using standard payment cards, or opt for cryptocurrency if they wish to reduce traceable financial links. The service operates entirely on a prepaid basis, with no contracts involved. Monthly plans range from lower-cost options for light usage to higher-priced tiers for customers who require more mobile data. The absence of contracts aligns with the company’s approach, as formal agreements typically require verified personal identities.

Rather than building its own cellular infrastructure, Phreeli operates as a Mobile Virtual Network Operator. This means it provides service by leasing network access from an established carrier, in this case T-Mobile. This model allows Phreeli to offer nationwide coverage without owning physical towers or equipment.

Addressing legal concerns, the company states that U.S. law does not require mobile carriers to collect customer names in order to provide service. To manage billing while preserving anonymity, Phreeli says it uses a system that separates payment information from communication data. This setup relies on cryptographic verification to confirm that accounts are active, without linking call records or data usage to identifiable individuals.
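Phreeli has not published the internals of this design, but one common way to prove an account is active without naming the subscriber is to have the billing system mint short-lived signed tokens that carry only an expiry. The Python sketch below is a minimal, assumed illustration of that pattern, not a description of Phreeli's actual system.

import hashlib
import hmac
import os
import time

SIGNING_KEY = os.urandom(32)  # held by the operator; never tied to a customer record


def issue_token(valid_seconds: int = 30 * 24 * 3600) -> str:
    """Billing side: mint a token bound to an expiry and a random nonce, not a person."""
    expiry = str(int(time.time()) + valid_seconds)
    nonce = os.urandom(8).hex()
    body = f"{expiry}.{nonce}"
    tag = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{tag}"


def token_is_active(token: str) -> bool:
    """Network side: verify the tag and expiry; no subscriber lookup is needed."""
    expiry, nonce, tag = token.split(".")
    expected = hmac.new(SIGNING_KEY, f"{expiry}.{nonce}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected) and int(expiry) > time.time()


token = issue_token()
print(token_is_active(token))  # True while the prepaid period lasts

In a real deployment the issuing and verifying components would sit in separate services, so the side that sees payments never sees call records, and vice versa.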

The company’s privacy policy notes that information will only be shared when necessary to operate the service or when legally compelled. By limiting the amount of data collected from the start, Phreeli argues that there is little information available even in the event of legal requests.

Phreeli was founded by Nicholas Merrill, who previously operated an internet service provider and became involved in a prolonged legal dispute after challenging a government demand for user information. That experience reportedly influenced the company’s data-minimization philosophy.

While services that prioritize anonymity are often associated with misuse, Phreeli states that it actively monitors for abusive behavior. Accounts involved in robocalling or scams may face restrictions or suspension.

As concerns around digital surveillance and commercial data harvesting intensify, Phreeli's launch sets the stage for a broader discussion about privacy in everyday communication. Whether the model gains mainstream adoption remains uncertain, but it marks a notable shift in how mobile services can be structured in the United States.



FTC Refuses to Lift Ban on Stalkerware Company that Exposed Sensitive Data


The US Federal Trade Commission (FTC) has refused to lift its ban on a stalkerware maker whose data breach exposed information about both its customers and the people they were spying on. Consumer spyware company Support King still cannot sell surveillance software, the agency said.

The FTC has denied founder Scott Zuckerman's request to cancel the ban, which also applies to the subsidiaries OneClickMonitor and SpyFone.

The FTC announced the decision in a recent press release, after Zuckerman petitioned the agency in July 2025 to cancel the ban order.

In 2021, the FTC banned Zuckerman from “offering, promoting, selling, or advertising any surveillance app, service, or business” and barred him from running other stalkerware businesses. Zuckerman was also required to delete all the data stored by SpyFone and to undergo audits confirming that cybersecurity measures were implemented across his ventures.

Zuckerman in his petition said that the FTC mandate has made it difficult for him to conduct other businesses due to monetary losses, even though Support King is out of business and he now only operates a restaurant and plans other ventures.

The ban stemmed from a 2018 incident in which a researcher discovered a SpyFone Amazon S3 bucket that left sensitive data, including selfies, chats, text messages, contacts, passwords, logins, and audio recordings, exposed on the open internet. The leaked data comprised 44,109 email addresses.
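Open S3 buckets of this kind are usually the result of permissive access settings rather than a sophisticated attack. As a hedged illustration of the standard mitigation, the snippet below uses Python and boto3 to enable S3 Block Public Access on a bucket; the bucket name is a placeholder, and the call requires valid AWS credentials.

import boto3

s3 = boto3.client("s3")
bucket = "example-app-data-bucket"  # placeholder, not the bucket from the incident

# Reject public ACLs and public bucket policies for this bucket.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Read the settings back to confirm they took effect.
print(s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"])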

Samuel Levine, then acting director of the FTC's Bureau of Consumer Protection, said at the time, “SpyFone is a brazen brand name for a surveillance business that helped stalkers steal private information.” He added that the “stalkerware was hidden from device owners, but was fully exposed to hackers who exploited the company’s slipshod security.”

According to TechCrunch, Zuckerman began running another stalkerware firm after the 2021 order. In 2022, TechCrunch found breached data from the stalkerware application SpyTrac.

According to the data, SpyTrac was run by freelance developers with direct links to Support King, apparently in an attempt to skirt the FTC ban. The breached data also contained records from SpyFone, which Support King was supposed to have deleted, as well as access keys to the cloud storage of OneClickMonitor, another stalkerware application.