
Cellik Android Spyware Exploits Play Store Trust to Steal Data

 

A remote access trojan named Cellik, recently discovered on the Android platform, has been recognized as a serious mobile threat. It abuses a Google Play integration feature to hide inside legitimate applications and evade detection by security solutions.

Cellik is advertised as malware-as-a-service (MaaS) on cybercrime forums, with subscription rates starting at approximately $150 a month. One of its most alarming capabilities is injecting malicious payloads into legitimate Google Play applications, which victims then install without suspicion.

Once installed, Cellik gives the attacker complete control over the target device. Operators can remotely stream the device's screen live, access all files, read notifications, and even use a stealthy browser to visit websites and enter form data without the victim's awareness. The malware also comes equipped with an app-inject feature that lets attackers superimpose fake login screens on legitimate applications, such as banking or email apps, to harvest credentials and other sensitive data.

Cellik's Play Store integration also includes an automated APK builder: operators can browse the store, pick a popular app, and bundle it with the Cellik payload in a single click. The malware's sellers claim this allows the repackaged apps to bypass Google Play Protect and other device-based security scanners, though Google has not independently verified this.

Android users should heed security experts' advice: do not sideload APKs from unknown sources, keep Play Protect enabled at all times, be judicious about app permissions, and watch for anything unusual on the phone that might indicate compromise. Since Cellik marks a significant new development in Android malware, both users and the security community should stay vigilant to protect sensitive data and device integrity.
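
For readers who want to audit a phone along these lines, the following sketch, assuming a device with USB debugging enabled and the adb tool installed, lists third-party apps together with the installer Android recorded for them and flags anything that did not come from the Play Store. The exact output format of "pm list packages -i" can vary slightly between Android versions, so treat it as a starting point rather than a definitive audit.

# Hedged sketch: list sideloaded apps by checking each package's recorded installer.
# Assumes adb is on the PATH and a device is connected with USB debugging enabled.
import subprocess

TRUSTED_INSTALLERS = {"com.android.vending"}  # the Google Play Store

def list_suspect_installs() -> None:
    out = subprocess.run(
        ["adb", "shell", "pm", "list", "packages", "-3", "-i"],  # -3: third-party, -i: show installer
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        # Expected shape: "package:com.example.app  installer=com.android.vending"
        if not line.startswith("package:"):
            continue
        pkg_part, _, installer_part = line.partition("installer=")
        pkg = pkg_part.replace("package:", "").strip()
        installer = installer_part.strip() or "unknown"
        if installer not in TRUSTED_INSTALLERS:
            print(f"review: {pkg} (installer: {installer})")

if __name__ == "__main__":
    list_suspect_installs()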

VPN Surge: Americans Bypass Age Verification Laws

 

Americans are increasingly turning to VPNs as states enact stringent age verification laws that limit what minors can see online. These regulations compel users to provide personal information, such as government-issued IDs, to verify their age, raising concerns about privacy and security. As a result, VPN usage is skyrocketing, particularly in states such as Missouri, Florida, Louisiana, and Utah, where VPN searches have jumped roughly fourfold since the new regulations took effect.

How age verification laws work 

Age verification laws require websites and apps that contain a substantial amount of "material harmful to minors" to verify users' ages before granting access. This step frequently entails submitting photographs or scans of ID documents, potentially exposing personal information to breaches. Even where laws forbid companies from storing this information, there is no assurance it will be kept secure, especially given the record of massive data breaches at big tech firms.

The vague definition of "harmful content" suggests that age verification could eventually be required on many other types of digital platforms, such as social media, streaming services, and video games. That expansion raises questions about digital privacy and identity protection for all users, not just minors. According to a recent Pew Research Center finding, 40% of Americans say government regulation of business does more harm than good, illustrating bipartisan wariness of such laws.

Bypassing restrictions with VPNs 

VPN services enable users to mask their IP addresses and circumvent these age verification checks, allowing them to maintain their anonymity and keep their sensitive information protected. VPNs are available for desktop and mobile devices, and some can also be used on platforms such as the Amazon Fire TV Stick. To maximize privacy and security, experts suggest choosing VPN providers with robust no-logs policies and strong encryption.
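
As a simple sanity check of that masking claim, the short sketch below, which assumes the public echo service api.ipify.org is reachable, prints the public IP address your traffic currently presents; running it once before and once after connecting to a VPN should show two different addresses.

# Hedged sketch: print the public IP seen by remote servers.
# Run before and after enabling the VPN to confirm the address changes.
import requests

def public_ip() -> str:
    # api.ipify.org returns the caller's public IP as plain text
    return requests.get("https://api.ipify.org", timeout=10).text.strip()

if __name__ == "__main__":
    print("Current public IP:", public_ip())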

Rising VPN adoption has fueled speculation about whether US lawmakers will attempt to ban VPNs outright, which would be yet another blow to digital privacy and freedom. For now, VPNs remain a popular option for Americans who want to keep their online activity hidden from intrusive age verification schemes.

OpenAI Vendor Breach Exposes API User Data

 

OpenAI revealed a security incident in late November 2025 in which hackers accessed user data via its third-party analytics provider, Mixpanel. The breach, which took place on November 9, 2025, exposed a limited amount of personally identifiable information for some OpenAI API users, although OpenAI stressed that its own systems were not the target of the attack.

Breach details 

The breach occurred entirely within Mixpanel's own infrastructure, where an attacker gained access and exfiltrated a dataset containing customer data. Mixpanel became aware of the compromise on November 9, 2025, and, following an investigation, shared the affected dataset with OpenAI on November 25, allowing the company to understand the extent of the potential exposure.

The breach specifically affected users who accessed OpenAI's API via platform.openai.com, rather than regular ChatGPT users. The compromised data included several categories of user information collected through Mixpanel's analytics platform. Names provided to accounts on platform.openai.com were exposed, along with email addresses linked to API accounts. 

Additionally, approximate location data derived from IP addresses, operating system and browser types, referring websites, and organization and user IDs associated with API accounts were part of the breach. However, OpenAI confirmed that more sensitive information remained secure, including chat content, API requests, API usage data, passwords, credentials, API keys, payment details, and government IDs.

Following the incident, OpenAI removed Mixpanel from its services and suspended the integration pending a thorough investigation. The company notified affected users on November 26, 2025, just before Thanksgiving, providing details about the breach and emphasizing that it was not a compromise of OpenAI's own systems.

Recommended measures 

OpenAI also encouraged affected users to stay on guard for follow-on attacks using the stolen information. Users should be especially vigilant for phishing and social engineering attacks that could leverage the leaked data, such as names, email addresses, and company information. A class-action lawsuit has also been filed against OpenAI and Mixpanel, alleging the companies failed to prevent the breach that exposed personally identifiable information for thousands of users.

Android Users Face New WhatsApp Malware Threat

 

Cybersecurity researchers at security firm Cleafy have issued a warning about a high-risk malware campaign targeting Android users via WhatsApp messages that could jeopardize their cryptocurrency wallets and bank accounts. The researchers track the threat as Albiriox, a newly emerging Android malware family marketed as malware-as-a-service (MaaS) on underground cybercrime forums.

Modus operandi 

The malware propagates through WhatsApp messages containing links to malicious websites that impersonate Google Play Store pages. The current lures impersonate a popular discount retail app, but the campaigns and targets could change quickly. Rather than delivering the app directly, the sites persuade victims to submit their phone number on the premise that an installation link will be sent to them on WhatsApp.

Once users tap the link and install the trojanised app, Albiriox can take full control of the compromised device. The malware performs overlay attacks against more than 400 cryptocurrency wallet and banking apps, displaying fake login screens on top of the legitimate apps to capture credentials as users type them.

Albiriox is an advanced, rapidly evolving threat. It features VNC-based remote access, giving attackers the ability to control infected devices directly. Initial campaigns targeted Austrian users with German-language messages, but the operation is now broadening its reach. The malware is obfuscated with JSONPacker and tricks users into granting the "Install Unknown Apps" permission. Once running, it contacts its command servers over unencrypted TCP and keeps the connection open indefinitely, maintaining active control through a regular series of ping-pong heartbeat messages.

Mitigation tips

Security experts emphasize that users should never agree to install apps through phone number submission on websites. Any WhatsApp messages requesting app installations should be immediately deleted without clicking links. This distribution method represents exactly why Google is strengthening measures against sideloading, requiring app developers to register and verify their identities.

Cleafy highlights that Albiriox demonstrates the ongoing evolution and increasing sophistication of mobile banking threats. However, users can protect themselves effectively by following several key practices: only install apps from the official Google Play Store, ensure Play Protect is activated, and remain skeptical of any unsolicited installation requests received through messaging apps. 
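
Because Albiriox relies on the victim granting the "Install Unknown Apps" permission, one practical check is to list which third-party apps currently hold that capability. The sketch below assumes adb with USB debugging and that the appops output contains the word "allow" for granted apps; the output format may differ across Android versions, so treat matches as leads to review rather than verdicts.

# Hedged sketch: flag apps allowed to install other apps ("Install Unknown Apps").
# Assumes adb is available; appops output format may differ by Android version.
import subprocess

def adb_shell(*args: str) -> str:
    return subprocess.run(["adb", "shell", *args],
                          capture_output=True, text=True, check=True).stdout

def apps_allowed_to_install() -> list[str]:
    flagged = []
    for line in adb_shell("pm", "list", "packages", "-3").splitlines():
        pkg = line.removeprefix("package:").strip()
        if not pkg:
            continue
        mode = adb_shell("appops", "get", pkg, "REQUEST_INSTALL_PACKAGES")
        if "allow" in mode:  # e.g. "REQUEST_INSTALL_PACKAGES: allow"
            flagged.append(pkg)
    return flagged

if __name__ == "__main__":
    for pkg in apps_allowed_to_install():
        print("can install other apps:", pkg)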

The campaign highlights broader security concerns affecting WhatsApp and similar platforms, particularly as attackers combine social engineering with technical malware capabilities to compromise both devices and accounts.

Salesforce Probes Gainsight Breach Exposing Customer Data

 

Salesforce has disclosed that some of its customers' data was accessed following a breach of Gainsight, a platform used by businesses to manage customer relationships. The breach specifically affected Gainsight-published applications that were connected to Salesforce, with these apps being installed and managed directly by customers. 

Salesforce emphasized that the breach did not stem from vulnerabilities in its own platform, but rather from Gainsight's external connection to Salesforce. The company is actively investigating the incident and directed further inquiries to its dedicated incident response page.

Gainsight confirmed it was investigating a Salesforce connection issue, but did not explicitly acknowledge a breach, stating that its internal investigation was ongoing. Notable companies using Gainsight's services include Airtable, Notion, and GitLab. GitLab confirmed that its security team is investigating and will share more details as they become available.

The hacking group ShinyHunters claimed responsibility for the breach, stating that if Salesforce does not negotiate with them, they will set up a new website to advertise the stolen data—a common tactic for cybercriminals seeking financial gain. The group reportedly stole data from nearly a thousand companies, including data taken in both the Salesloft and Gainsight campaigns.

This breach mirrors a previous incident in August, when ShinyHunters compromised Salesloft, maker of the Drift AI marketing chatbot, and through it numerous customers' Salesforce instances, accessing sensitive information such as access tokens.

In the earlier Salesloft breach, victims included major organizations like Allianz Life, Bugcrowd, Cloudflare, Google, Kering, Proofpoint, Qantas, Stellantis, TransUnion, and Workday. The hackers subsequently launched a website to extort victims, threatening to release over a billion records. Gainsight was among those affected in the Salesloft-linked breaches, but it remains unclear if the latest wave of attacks originated from the same compromise or a separate incident.

Overall, this incident highlights the risks associated with third-party integrations in major cloud platforms and the growing sophistication of financially-motivated cybercriminals targeting customer data through supply chain vulnerabilities. Both Salesforce and Gainsight are continuing their investigations, with cybersecurity teams across affected organizations actively working to assess the extent of the breach and mitigate potential damage.

Hyundai AutoEver America Breach Exposes Employee SSNs and Driver’s License Data

 

Hyundai AutoEver America (HAEA), an IT services affiliate of Hyundai Motor Group, has confirmed a data breach that compromised sensitive personal information, including Social Security Numbers (SSNs) and driver’s licenses, of approximately 2,000 individuals, mostly current and former employees. The breach occurred between February 22 and March 2, 2025, with the company discovering the intrusion and launching an investigation on March 1.

HAEA specializes in providing IT consulting, managed services, and digital solutions for Hyundai and Kia affiliates, covering vehicle telematics, over-the-air updates, vehicle connectivity, and embedded systems, as well as business systems and digital manufacturing platforms. The company’s IT environment supports 2 million users and 2.7 million vehicles, with a workforce of 5,000 employees.

The notification to affected individuals revealed that the breach exposed names, while the Massachusetts government portal listed additional information such as SSNs and driver’s licenses. It is still unclear whether customers or users were affected besides employees, and the exact breakdown of impacted groups remains unspecified. The company worked with external cybersecurity experts and law enforcement to investigate the incident, confirm containment, and identify the potentially affected data.

At the time of the report, no ransomware groups had claimed responsibility for the attack, and the perpetrators are unknown. This incident adds to a series of cybersecurity challenges faced by Hyundai and its affiliates in recent years, including previous ransomware attacks and data breaches affecting operations in Europe and exposing owner data in Italy and France. 

Additionally, security researchers previously identified significant privacy and security issues with Hyundai’s companion app, which allowed unauthorized remote control of vehicles, and vulnerabilities in built-in anti-theft systems.

HAEA has not yet released a full public statement with details about the breach, mitigation steps, or future security improvements. The limited information available highlights the need for robust security protocols, especially for organizations handling large volumes of sensitive personal and automotive data. The breach serves as a reminder of the ongoing risks facing major automotive and IT service providers amid the growing threat landscape for digital infrastructure.

ASF Rejects Akira Breach Claims Against Apache OpenOffice

 

Apache OpenOffice, an open-source office suite project maintained by the Apache Software Foundation (ASF), is currently disputing claims of a significant data breach allegedly perpetrated by the Akira ransomware gang. 

On October 30, 2025, Akira published a post on its data leak site asserting that it had compromised Apache OpenOffice and exfiltrated 23 GB of sensitive corporate documents, including employee personal information—such as home addresses, phone numbers, dates of birth, driver’s licenses, social security numbers, and credit card data—as well as financial records and internal confidential files. The group further claimed it would soon release these documents publicly.

Responding publicly, the ASF has refuted the claims, stating that it has no evidence its systems were compromised or that a breach occurred. According to ASF representatives, the data types described by Akira do not exist within the Foundation's infrastructure. Importantly, the ASF points to the open-source nature of the project: there are no paid employees associated with Apache OpenOffice or the Foundation, so the sensitive employee information Akira describes is simply not held by ASF.

All development activities, bug tracking, and feature requests for the software are managed openly and transparently, primarily through public developer mailing lists. Thus, any internal reports or application issues cited in the alleged leak are already available in the public domain.

ASF further emphasized its strong commitment to security and clarified that, as of November 4, 2025, it had received no ransom demands directed at either the Foundation or the OpenOffice project. The Foundation has initiated an internal investigation to fully assess the veracity of Akira’s claims but, so far, has found no supporting evidence. 

It has not contacted law enforcement or external cybersecurity experts, signaling that the incident is being treated as a claim without substantiation. As of the time of publication, none of the purported stolen data has surfaced on the Akira leak site, leaving ASF’s assertion unchallenged.

This dispute highlights the increasingly common tactic among ransomware operators of leveraging publicity and unsubstantiated claims to pressure organizations, even when the technical evidence does not support their assertions. For now, ASF continues to reassure users and contributors that Apache OpenOffice remains uncompromised, and stresses the transparency inherent in open-source development as a key defense against misinformation and data exfiltration claims.

Bluetooth Security Risks: Why Leaving It On Could Endanger Your Data

 

Bluetooth technology, widely used for wireless connections across smartphones, computers, health monitors, and peripherals, offers convenience but carries notable security risks—especially when left enabled at all times. While Bluetooth security and encryption have advanced over decades, the protocol remains exposed to various cyber threats, and many users neglect these vulnerabilities, putting personal data at risk.

Common Bluetooth security threats

Leaving Bluetooth permanently on is among the most frequent cybersecurity oversights. Doing so effectively announces your device’s continuous availability to connect, making it a target for attackers. 

Threat actors exploit Bluetooth through methods like bluesnarfing—the unauthorized extraction of data—and bluejacking, where unsolicited messages and advertisements are sent without consent. If hackers connect, they may siphon valuable information such as banking details, contact logs, and passwords, which can subsequently be used for identity theft, fraudulent purchases, or impersonation.

A critical issue is that data theft via Bluetooth is often invisible—victims receive no notification or warning. Further compounding the problem, Bluetooth signals can be leveraged for physical tracking. Retailers, for instance, commonly use Bluetooth beacons to trace shopper locations and gather granular behavioral data, raising privacy concerns.

Importantly, Bluetooth-related vulnerabilities affect more than just smartphones; they extend to health devices and wearables. Although compromising medical Bluetooth devices such as pacemakers or infusion pumps is technically challenging, targeted attacks remain a possibility for motivated adversaries.

Defensive strategies 

Mitigating Bluetooth risks starts with turning off Bluetooth in public or unfamiliar environments and disabling automatic reconnection features when constant use (e.g., wireless headphones) isn’t essential. Additionally, set devices to ‘undiscoverable’ mode as a default, blocking unexpected or unauthorized connections.
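
On a Linux desktop running BlueZ, those first steps can be scripted. The sketch below is a minimal example assuming the standard bluetoothctl utility is installed: it makes the adapter non-discoverable and non-pairable, and optionally powers the radio off entirely. Other operating systems expose equivalent toggles in their settings menus.

# Hedged sketch: harden a Linux Bluetooth adapter with bluetoothctl.
import subprocess

def btctl(*args: str) -> None:
    subprocess.run(["bluetoothctl", *args], check=True)

if __name__ == "__main__":
    btctl("discoverable", "off")   # stop advertising the adapter to nearby devices
    btctl("pairable", "off")       # refuse new pairing attempts
    # btctl("power", "off")        # uncomment to switch the radio off entirely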

Regularly updating operating systems is vital, since outdated devices are prone to exploits like BlueBorne—a severe vulnerability allowing attackers full control over devices, including access to apps and cameras. Always reject unexpected Bluetooth pairing requests and periodically review app permissions, as many apps may exploit Bluetooth to track locations or obtain contact data covertly. 

Utilizing a Virtual Private Network (VPN) enhances overall security by encrypting network activity and masking IP addresses, though this measure isn’t foolproof. Ultimately, while Bluetooth offers convenience, mindful management of its settings is crucial for defending against the spectrum of privacy and security threats posed by wireless connectivity.

UK Digital ID Faces Security Crisis Ahead of Mandatory Rollout

 

The UK’s digital ID system, known as One Login, triggered major controversy in 2025 due to serious security vulnerabilities and privacy concerns, leading critics to liken it to the infamous Horizon scandal. 

One Login is a government-backed identity verification platform designed for access to public services and private sector uses such as employment verification and banking. Despite government assurances around its security and user benefits, public confidence plummeted amid allegations of cybersecurity failures and rushed implementation planned for November 18, 2025.

Critics, including MPs and cybersecurity experts, revealed that the system failed critical red-team penetration tests, with hackers gaining privileged access during simulated cyberattacks. Further concerns arose over development practices, with portions of the platform built by contractors in Romania on unsecured workstations without adequate security clearance. The government missed security deadlines, with full compliance expected only by March 2026—months after the mandatory rollout began.

This “rollout-at-all-costs” approach amidst unresolved security flaws has created a significant trust deficit, risking citizens’ personal data, which includes sensitive information like biometrics and identification documents. One Login collects comprehensive data, such as name, birth date, biometrics, and a selfie video for identity verification. This data is shared across government services and third parties, raising fears of surveillance, identity theft, and misuse.

The controversy draws a parallel to the Horizon IT scandal, where faulty software led to wrongful prosecutions of hundreds of subpostmasters. Opponents warn that flawed digital ID systems could cause similar large-scale harms, including wrongful exclusions and damaged reputations, undermining public trust in government IT projects.

Public opposition has grown, with petitions and polls showing more people opposing digital ID than supporting it. Civil liberties groups caution against intrusive government tracking and call for stronger safeguards, transparency, and privacy protections. The Prime Minister defends the program as a tool to simplify life and reduce identity fraud, but critics label it expensive, intrusive, and potentially dangerous.

In conclusion, the UK’s digital ID initiative stands at a critical crossroads, facing a crisis of confidence and comparisons to past government technology scandals. Robust security, oversight, and public trust are imperative to avoid a repeat of such failures and ensure the system serves citizens without compromising their privacy or rights.

MANGO Marketing Vendor Breach Exposes Customer Contact Details

 

MANGO, the Spanish fashion retailer, has disclosed a data breach affecting customer information due to a cyberattack on one of its external marketing service providers. The incident, revealed on October 14, 2025, involved unauthorized access to personal data used in marketing campaigns, prompting the company to notify affected customers directly.

The compromised data includes customers' first names, country of residence, postal codes, email addresses, and telephone numbers. Notably, sensitive details such as last names, banking information, credit card data, government-issued IDs, passports, and account credentials were not accessed, reducing the risk of financial fraud. Despite this, the exposed information could be leveraged by threat actors for targeted phishing campaigns, where attackers impersonate legitimate entities to trick individuals into revealing further personal or financial data.

MANGO emphasized that its corporate infrastructure and internal IT systems remained unaffected, with no disruption to business operations. The company confirmed that all security protocols were activated immediately upon detection of the breach at the third-party vendor, although the name of the compromised marketing partner has not been disclosed.

In response, MANGO has reported the incident to the Spanish Data Protection Agency (AEPD) and other relevant regulatory authorities, in compliance with data protection regulations. To assist concerned customers, the company has established a dedicated support channel, including an email address (personaldata@mango.com) and a toll-free hotline (900 150 543), where individuals can seek clarification and guidance regarding potential exposure.

Founded in 1984 and headquartered in Barcelona, MANGO operates over 2,800 physical and e-commerce stores across 120 countries. It employs approximately 16,300 people and generates an annual revenue of €3.3 billion, with nearly 30% derived from online sales. While the breach does not impact core business systems, the incident highlights the growing risks associated with third-party vendors in digital supply chains, particularly in the retail and fashion sectors that rely heavily on external marketing and customer engagement platforms.

At the time of reporting, no ransomware group has claimed responsibility for the attack, and the identity of the attackers remains unknown. Local media outlets reached out to MANGO for further details on the scope and technical aspects of the breach but had not received a response by publication.

Windows 10 Support Termination Leaves Devices Vulnerable

 

Microsoft has officially ended support for Windows 10, marking a major shift impacting hundreds of millions of users worldwide. Released in 2015, the operating system will no longer receive free security updates, bug fixes, or technical assistance, leaving all devices running it vulnerable to exploitation. This decision mirrors previous end-of-life events such as Windows XP, which saw a surge in cyberattacks after losing support.

Rising security threats

Without updates, Windows 10 systems are expected to become prime targets for hackers. Thousands of vulnerabilities have already been documented in public databases like ExploitDB, and several critical flaws have been actively exploited. 

Among them are CVE-2025-29824, a “use-after-free” bug in the Common Log File System Driver with a CVSS score of 7.8; CVE-2025-24993, a heap-based buffer overflow in NTFS marked as “known exploited”; and CVE-2025-24984, an NTFS information-disclosure flaw that leaks log data and carries the highest EPSS score of the three, at 13.87%.

These vulnerabilities enable privilege escalation, code execution, or remote intrusion, many of which have been added to the U.S. CISA’s Known Exploited Vulnerabilities (KEV) catalog, signaling the seriousness of the risks.
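
Administrators who want to confirm the KEV status of these flaws can query CISA's public catalog directly. The sketch below assumes the catalog's published JSON feed URL and its "cveID" field name as of the time of writing; both should be verified against cisa.gov before being relied upon.

# Hedged sketch: check whether the listed Windows 10 CVEs appear in CISA's KEV catalog.
import requests

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")  # assumed feed location
CVES = {"CVE-2025-29824", "CVE-2025-24993", "CVE-2025-24984"}

def check_kev() -> None:
    catalog = requests.get(KEV_URL, timeout=30).json()
    listed = {v.get("cveID"): v for v in catalog.get("vulnerabilities", [])
              if v.get("cveID") in CVES}
    for cve in sorted(CVES):
        entry = listed.get(cve)
        if entry:
            print(f"{cve}: in KEV (added {entry.get('dateAdded', 'unknown date')})")
        else:
            print(f"{cve}: not currently listed in KEV")

if __name__ == "__main__":
    check_kev()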

Limited upgrade paths

Microsoft recommends that users migrate to Windows 11, which features modernized architecture and ongoing support. However, strict hardware requirements mean that roughly 200 million Windows 10 computers worldwide remain ineligible for the upgrade. 

For those unable to transition, Microsoft provides three main options: purchasing new hardware compatible with Windows 11, enrolling in a paid Extended Security Updates (ESU) program (offering patches for one extra year), or continuing to operate unsupported — a risky path exposing systems to severe cyber threats.

The support cutoff extends beyond the OS. Microsoft Office 2016 and 2019 have simultaneously reached end-of-life, leaving only newer versions like Office 2021 and LTSC operable but unsupported on Windows 10. Users are encouraged to switch to Microsoft 365 or move licenses to Windows 11 devices. Notably, support for Office LTSC 2021 ends in October 2026.

Data protection tips

Microsoft urges users to back up critical data and securely erase drives before recycling or reselling devices. Participating manufacturers and Microsoft itself offer trade-in or recycling programs to ensure data safety. As cyber risks amplify and hackers exploit obsolete systems, users still on Windows 10 face a critical choice — upgrade, pay for ESU, or risk exposure in an increasingly volatile digital landscape.

India Plans Techno-Legal Framework to Combat Deepfake Threats

 

India will introduce comprehensive regulations to combat deepfakes in the near future, Union IT Minister Ashwini Vaishnaw announced at the NDTV World Summit 2025 in New Delhi. The minister emphasized that the upcoming framework will adopt a dual-component approach combining technical solutions with legal measures, rather than relying solely on traditional legislation.

Vaishnaw explained that artificial intelligence cannot be effectively regulated through conventional lawmaking alone, as the technology requires innovative technical interventions. He acknowledged that while AI enables entertaining applications like age transformation filters, deepfakes pose unprecedented threats to society by potentially misusing individuals' faces and voices to disseminate false messages completely disconnected from the actual person.

The minister highlighted the fundamental right of individuals to protect their identity from harmful misuse, stating that this principle forms the foundation of the government's approach to deepfake regulation. The techno-legal strategy distinguishes India's methodology from the European Union's primarily regulatory framework, with India prioritizing innovation alongside societal protection.

As part of the technical solution, Vaishnaw referenced ongoing work at the AI Safety Institute, specifically mentioning that the Indian Institute of Technology Jodhpur has developed a detection system capable of identifying deepfakes with over 90 percent accuracy. This technological advancement will complement the legal framework to create a more robust defense mechanism.

The minister also discussed India's broader AI infrastructure development, noting that two semiconductor manufacturing units, CG Semi and Kaynes, have commenced production operations in the country. Additionally, six indigenous AI models are currently under development, with two utilizing approximately 120 billion parameters designed to be free from biases present in Western models.

The government has deployed 38,000 graphics processing units (GPUs) for AI development and secured a $15 billion investment commitment from Google to establish a major AI hub in India. This infrastructure expansion aims to enhance the nation's research capabilities and application development in artificial intelligence.

Discord Third-Party Breach Exposes User Data and Government IDs

 

Discord has confirmed a significant data breach affecting users who interacted with their customer support or trust & safety teams, stemming not from a direct attack on Discord’s own systems but through a compromised third-party vendor that handled customer service operations.

This incident highlights a persistent and growing vulnerability within the tech industry—outsourcing crucial services to external parties with potentially weaker cybersecurity standards, making user data increasingly reliant on the practices of organizations that customers never directly chose to trust.

Data exposed in the breach

The breach resulted in unauthorized access to sensitive personal information stored in customer service records. Specifically, exposed data included names, email addresses, Discord usernames, and various contact details for users engaging with Discord support. Furthermore, hackers gained limited billing information comprising payment types, purchase history, and the last four digits of credit cards, with full card numbers and passwords remaining secure.

A particularly concerning aspect was a small subset of government-issued ID images—such as driver’s licenses and passports—belonging to users who had submitted documents for age verification purposes. Although not all Discord users were affected, the breach still poses a tangible risk of identity theft and privacy erosion for those involved.

Third-Party vendor risks

The incident underscores the dangers posed by outsourcing digital operations to third-party vendors. Discord’s response involved revoking the vendor’s access and launching a thorough investigation; however, the damage had already been done, reflecting security gaps that even prompt internal actions cannot immediately resolve once data is compromised. 

The broader issue is that while companies often rely on vendors to reduce costs and streamline services, these relationships introduce new, often less controllable, points of failure. In essence, the robust security of a major platform like Discord can be undermined by external vendors who do not adhere to equally rigorous protection standards.

Implications for users

In the aftermath, Discord followed standard protocols by notifying affected users via email and communicating with data protection authorities. Yet, this episode demonstrates a critical lesson: users’ digital privacy extends beyond the platforms they consciously choose, as it also depends on a network of third-party companies that can become invisible weak links. 

Each vendor relationship broadens the attack surface for potential breaches, transforming cybersecurity into a chain only as strong as the least secured party involved. The Discord incident serves as a stark reminder of the challenges in safeguarding digital identity in an interconnected ecosystem, where the security of personal data cannot be taken for granted.

OpenAI's Sora App Raises Facial Data Privacy Concerns

 

OpenAI's video-generating app, Sora, has raised significant questions regarding the safety and privacy of users' biometric data, particularly with its "Cameo" feature that creates realistic AI videos, or "deepfakes," using a person's face and voice.

To power this functionality, OpenAI confirms it must store users' facial and audio data. The company states this sensitive data is encrypted during both storage and transmission, and uploaded cameo data is automatically deleted after 30 days. Despite these assurances, privacy concerns remain. The app's ability to generate hyper-realistic videos has sparked fears about the potential for misuse, such as the creation of unauthorized deepfakes or the spread of misinformation. 

OpenAI acknowledges a slight risk that the app could produce inappropriate content, including sexual deepfakes, despite the safeguards in place. In response to these risks, the company has implemented measures to distinguish AI-generated content, including visible watermarks and invisible C2PA metadata in every video created with Sora.

The company emphasizes that users have control over their likeness. Individuals can decide who is permitted to use their cameo and can revoke access or delete any video featuring them at any time. However, a major point of contention is the app's account deletion policy. Deleting a Sora account also results in the termination of the user's entire OpenAI account, including ChatGPT access, and the user cannot register again with the same email or phone number. 

While OpenAI has stated it is developing a way for users to delete their Sora account independently, this integrated deletion policy has surprised and concerned many users who wish to remove their biometric data from Sora without losing access to other OpenAI services.

The app has also drawn attention for potential copyright violations, with users creating videos featuring well-known characters from popular media. While OpenAI provides a mechanism for rights holders to request the removal of their content, the platform's design has positioned it as a new frontier for intellectual property disputes.

NSSF Sued for Secretly Using Gun Owners’ Data in Political Ads

 

The National Shooting Sports Foundation (NSSF) is facing a class-action lawsuit alleging it secretly built a database with personal information from millions of gun owners and used it for political advertising without consent.

The lawsuit, filed by two gun owners—Daniel Cocanour of Oklahoma and Dale Rimkus of Illinois—claims the NSSF obtained data from warranty cards filled out by customers for firearm rebates or repairs, which included sensitive details like contact information, age, income, vehicle ownership, and reasons for gun ownership. These individuals never consented to their data being shared or used for political purposes, according to the suit.

The NSSF, based in Shelton, Connecticut, began compiling the database in 1999 following the Columbine High School shooting, aiming to protect the firearms industry’s image and legal standing. By May 2001, the database held 3.4 million records, growing to 5.5 million by 2002 under the name “Data Hunter,” with contributions from major manufacturers like Glock, Smith & Wesson, Marlin Firearms, and Savage Arms. The plaintiffs allege “unjust enrichment,” arguing the NSSF profited from using this data without compensating gun owners.

The organization reportedly used the database to target political ads supporting pro-gun candidates, claiming its efforts were a “critical component” in George W. Bush’s narrow 2000 presidential victory. The NSSF continued using the database in elections through 2016, including hiring Cambridge Analytica during President Trump’s campaign to mobilize gun rights supporters in swing states. This partnership is notable given Cambridge Analytica’s later collapse due to a Facebook data scandal involving unauthorized user data.

Despite publicly advocating for gun owners’ privacy—such as supporting the “Protecting Privacy in Purchases Act”—the NSSF allegedly engaged in practices contradicting this stance. The lawsuit seeks damages exceeding $5 million and class-action status for all U.S. residents whose data was collected from 1990 to present. 

The case highlights a breach of trust, as the NSSF reportedly amassed data while warning against similar databases being used for gun confiscation. As of now, the NSSF has not commented publicly but maintains its data practices were legal and ethical.

Call-Recording App Neon Suspends Service After Security Breach

 

Neon, a viral app that pays users to record their phone calls—intending to sell these recordings to AI companies for training data—has been abruptly taken offline after a severe security flaw exposed users’ personal data, call recordings, and transcripts to the public.

Neon’s business model hinged on inviting users to record their calls through a proprietary interface, with payouts of 30 cents per minute for calls between Neon users and half that for calls to non-users, up to $30 per day. The company claimed it anonymized calls by stripping out personally identifiable information before selling the recordings to “trusted AI firms,” but this privacy commitment was quickly overshadowed by a crippling security lapse.

Within a day of rising to the top ranks of the App Store—boasting 75,000 downloads in a single day—the app was taken down after researchers discovered a vulnerability that allowed anyone to access other users’ call recordings, transcripts, phone numbers, and call metadata. Journalists found that the app’s backend was leaking not only public URLs to call audio files and transcripts but also details about recent calls, including call duration, participant phone numbers, timing, and even user earnings.

Alarmingly, these links were unrestricted—meaning anyone with the URL could eavesdrop on conversations—raising immediate privacy and legal concerns, especially given complex consent laws around call recording in various jurisdictions.
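
Neon's storage backend has not been disclosed, but the underlying flaw, permanently valid unauthenticated links to private audio, maps onto a well-known mitigation: keep the objects private and hand out only short-lived signed URLs. The sketch below illustrates that general pattern with AWS S3 and boto3; the bucket and key names are placeholders, and this is not a description of Neon's actual architecture.

# Hedged sketch of the general fix: time-limited pre-signed URLs instead of public links.
import boto3

s3 = boto3.client("s3")

def temporary_link(bucket: str, key: str, seconds: int = 300) -> str:
    # The URL stops working after the given number of seconds,
    # so a leaked link has limited value to an eavesdropper.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=seconds,
    )

if __name__ == "__main__":
    print(temporary_link("example-call-recordings", "calls/2025-09-25/abc123.mp3"))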

Founder and CEO Alex Kiam notified users that Neon was being temporarily suspended and promised to “add extra layers of security,” but did not directly acknowledge the security breach or its scale. The app itself remains visible in app stores but is nonfunctional, with no public timeline for its return. If Neon relaunches, it will face intense scrutiny over whether it has genuinely addressed the security and privacy issues that forced its shutdown.

This incident underscores the broader risks of apps monetizing sensitive user data—especially voice conversations—in exchange for quick rewards, a model that has emerged as AI firms seek vast, real-world datasets for training models. Neon’s downfall also highlights the challenges app stores face in screening for complex privacy and security flaws, even among fast-growing, high-profile apps.

For users, the episode is a stark reminder to scrutinize privacy policies and app permissions, especially when participating in novel data-for-cash business models. For the tech industry, it raises questions about the adequacy of existing safeguards for apps handling sensitive audio and personal data—and about the responsibilities of platform operators to prevent such breaches before they occur.

As of early October 2025, Neon remains offline, with users awaiting promised payouts and a potential return of the service, but with little transparency about how (or whether) the app’s fundamental security shortcomings have been fixed.

FTC Launches Formal Investigation into AI Companion Chatbots

 

The Federal Trade Commission has announced a formal inquiry into companies that develop AI companion chatbots, focusing specifically on how these platforms potentially harm children and teenagers. While not currently tied to regulatory action, the investigation seeks to understand how companies "measure, test, and monitor potentially negative impacts of this technology on children and teens". 

Companies under scrutiny 

Seven major technology companies have been selected for the investigation: Alphabet (Google's parent company), Character Technologies (creator of Character.AI), Meta, Instagram (Meta subsidiary), OpenAI, Snap, and X.AI. These companies are being asked to provide comprehensive information about their AI chatbot operations and safety measures. 

Investigation scope 

The FTC is requesting detailed information across several key areas. Companies must explain how they develop and approve AI characters, including their processes for "monetizing user engagement". Data protection practices are also under examination, particularly how companies safeguard underage users and ensure compliance with the Children's Online Privacy Protection Act Rule.

Motivation and concerns 

Although the FTC hasn't explicitly stated its investigation's motivation, FTC Commissioner Mark Meador referenced troubling reports from The New York Times and Wall Street Journal highlighting "chatbots amplifying suicidal ideation" and engaging in "sexually-themed discussions with underage users". Meador emphasized that if violations are discovered, "the Commission should not hesitate to act to protect the most vulnerable among us". 

Broader regulatory landscape 

This investigation reflects growing regulatory concern about AI's immediate negative impacts on privacy and health, especially as long-term productivity benefits remain uncertain. The FTC's inquiry isn't isolated—the Texas Attorney General has already launched a separate investigation into Character.AI and Meta AI Studio, examining similar concerns about data privacy and chatbots falsely presenting themselves as mental health professionals.

Implications

The investigation represents a significant regulatory response to emerging AI safety concerns, particularly regarding vulnerable populations. As AI companion technology proliferates, this inquiry may establish important precedents for industry oversight and child protection standards in the AI sector.

Muzaffarpur Man Loses ₹3.5 Lakh in Remote Access App Bank Fraud

 

A resident of Muzaffarpur, Bihar fell victim to a sophisticated remote access application scam that resulted in the loss of ₹3.5 lakh from his bank account. The cybercrime incident occurred when the victim was searching online for courier service assistance and discovered what appeared to be a legitimate customer support contact number through Google search results. 

Scam operation 

The fraudsters posed as courier service agents and initiated contact with the unsuspecting victim. During the conversation, the criminals convinced the man to download and install a remote access application on his mobile device, claiming it would help resolve his delivery-related issues. Once the victim granted remote access permissions to the application, the cybercriminals gained complete control over his smartphone and banking applications.

Financial impact  

Within minutes of installing the malicious remote access software, the fraudsters executed multiple unauthorized transactions from the victim's bank account. The scammers managed to conduct seven separate high-value financial transfers, draining a total amount of ₹3.5 lakh from the man's banking accounts. The transactions were processed rapidly, taking advantage of the victim's digital banking credentials that were accessible through the compromised device.

Broader criminal network 

Local police investigations have revealed that this incident is part of a larger interstate fraud syndicate operating across multiple states. The cyber crime cell has traced the fraudulent transactions to various bank accounts, suggesting a well-organized criminal network. Law enforcement agencies suspect that the scammers strategically place fake customer service numbers on internet search platforms, impersonating official service providers to target unsuspecting consumers.

Rising threat 

This case represents an alarming trend in cybercrime where fraudsters exploit remote desktop applications like AnyDesk and TeamViewer to gain unauthorized access to victims' devices. The scammers often target individuals seeking customer support for various services, including courier deliveries, utility bills, and other common consumer needs. These social engineering attacks have become increasingly sophisticated, with criminals creating convincing scenarios to pressure victims into installing malicious software. 

Prevention and safety measures 

Cybersecurity experts emphasize the importance of digital awareness and caution when dealing with unsolicited support calls or online search results. Users should verify customer service numbers directly from official websites rather than relying on search engine results. 

Additionally, individuals should never install remote access applications unless they are completely certain about the legitimacy of the requesting party. Financial institutions and telecom providers are working to implement enhanced fraud detection systems to identify and prevent such scams in real time.

Massive Database of 250 Million Records Leaked Online for Public Access


Around a quarter of a billion identity records were left publicly accessible, exposing people located in seven countries: Saudi Arabia, the United Arab Emirates, Canada, Mexico, South Africa, Egypt, and Turkey.

According to experts from Cybernews, three misconfigured servers with IP addresses registered in the UAE and Brazil contained “government-level” identity profiles. The leaked data included contact details, dates of birth, ID numbers, and home addresses.

The Cybernews experts who found the leak said the databases shared similar naming conventions and structure, hinting at a single source, but they could not identify the actor responsible for running the servers.

“These databases were likely operated by a single party, due to the similar data structures, but there’s no attribution as to who controlled the data, or any hard links proving that these instances belonged to the same party,” they said. 

The leak is particularly concerning for citizens in South Africa, Egypt, and Turkey, as the databases there contained full-spectrum data. 

The exposure leaves affected individuals open to multiple threats, such as phishing campaigns, scams, financial fraud, and identity abuse.

Currently, the database is not publicly accessible (a good sign). 

This is not the first incident in which a massive database of citizen data has been exposed online. In an earlier case, Cybernews research revealed that virtually the entire Brazilian population might have been impacted by a breach.

In that incident, a misconfigured Elasticsearch instance exposed details such as sex, names, dates of birth, and Cadastro de Pessoas Físicas (CPF) numbers, the identifier used for taxpayers in Brazil.
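
Administrators running Elasticsearch can check for this class of exposure against their own hosts with a quick probe: a properly secured instance should refuse the connection or demand credentials, while a wide-open one will return an index listing to anyone. The sketch below uses a placeholder host name and the standard _cat/indices endpoint; run it only against servers you own.

# Hedged sketch: probe your own Elasticsearch host for unauthenticated access.
import requests

HOST = "https://your-elasticsearch-host:9200"  # placeholder, use your own server

def check_exposure(host: str) -> None:
    try:
        resp = requests.get(f"{host}/_cat/indices?v", timeout=10)
    except requests.RequestException as exc:
        print("No unauthenticated access (connection failed):", exc)
        return
    if resp.status_code == 200:
        print("WARNING: index listing is publicly readable:")
        print(resp.text[:500])
    else:
        print("Server answered with HTTP", resp.status_code, "- authentication likely required")

if __name__ == "__main__":
    check_exposure(HOST)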

Fake Netflix Job Offers Target Facebook Credentials in Real-Time Scam

 

A sophisticated phishing campaign is targeting job seekers with fake Netflix job offers designed to steal Facebook login credentials. The scam specifically focuses on marketing and social media professionals who may have access to corporate Facebook business accounts. 

Modus operandi 

The attack begins with highly convincing, AI-generated emails that appear to come from Netflix's HR team, personally tailored to recipients' professional backgrounds. When job seekers click the "Schedule Interview" link, they're directed to a fraudulent career site that closely mimics Netflix's official page. 

The fake site prompts users to create a "Career Profile" and offers options to log in with Facebook or email. However, regardless of the initial choice, victims are eventually directed to enter their Facebook credentials. This is where the scam becomes particularly dangerous. 

Real-time credential theft 

What makes this attack especially sophisticated is the use of websocket technology that allows scammers to intercept login details as they're being typed. As Malwarebytes researcher Pieter Arntz explains, "The phishers use a websocket method that allows them to intercept submissions live as they are entered. This allows them to try the credentials and if your password works, they can log into your real Facebook account within seconds". 

The attackers can immediately test stolen credentials on Facebook's actual platform and may even request multi-factor authentication codes if needed. If passwords don't work, they simply display a "wrong password" message to maintain the illusion. 

While personal Facebook accounts have value, the primary goal is accessing corporate social media accounts. Cybercriminals seek marketing managers and social media staff who control company Facebook Pages or business accounts. Once compromised, these accounts can be used to run malicious advertising campaigns at the company's expense, demand ransom payments, or leverage the organization's reputation for further scams.

Warning signs and protection

Security researchers have identified several suspicious email domains associated with this campaign, including addresses ending with @netflixworkplaceefficiencyhub.com, @netflixworkmotivation, and @netflixtalentnurture.com. The fake hiring site was identified as hiring.growwithusnetflix[.]com, though indicators suggest the operators cleared their tracks after the scam was exposed. 
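
A crude but useful filter follows directly from those indicators: any sender domain that merely contains "netflix" without actually being netflix.com deserves suspicion. The sketch below is a heuristic only; it assumes netflix.com as the sole genuine domain and does not replace SPF, DKIM, or DMARC verification.

# Hedged sketch: flag lookalike "netflix" sender domains.
LEGITIMATE_DOMAINS = {"netflix.com"}  # assumption: treat only this as genuine

def is_suspicious(sender: str) -> bool:
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in LEGITIMATE_DOMAINS or domain.endswith(".netflix.com"):
        return False
    return "netflix" in domain  # e.g. netflixtalentnurture.com

if __name__ == "__main__":
    for addr in ["recruiter@netflixtalentnurture.com", "jobs@netflix.com"]:
        print(addr, "->", "suspicious" if is_suspicious(addr) else "looks legitimate")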

Job seekers should be cautious of unsolicited job offers, verify website addresses carefully, and remember that legitimate Netflix recruitment doesn't require Facebook login credentials. The campaign demonstrates how scammers exploit both job market anxiety and the appeal of working for prestigious companies to execute sophisticated credential theft operations.