The FTC has denied founder Scott Zuckerman's request to cancel the ban, which also applies to the subsidiaries OneClickMonitor and SpyFone.
The FTC announced the move in a press release after Zuckerman petitioned the agency in July 2025 to cancel the ban order.
The FTC banned Zuckerman from “offering, promoting, selling, or advertising any surveillance app, service, or business” in 2021 and barred him from running other stalkerware businesses. Zuckerman was also required to delete all the data stored by SpyFone and to undergo audits verifying that his ventures implemented cybersecurity measures. Samuel Levine, then acting director of the FTC's Bureau of Consumer Protection, said that the "stalkerware was hidden from device owners, but was fully exposed to hackers who exploited the company’s slipshod security."
In his petition, Zuckerman said the FTC mandate has caused monetary losses that make it difficult for him to conduct other businesses, even though Support King is out of business and he now operates only a restaurant while planning other ventures.
The ban stems from a 2018 incident in which a researcher discovered an Amazon S3 bucket belonging to SpyFone that left sensitive data, such as selfies, chats, texts, contacts, passwords, logins, and audio recordings, exposed on the open internet. The leaked data comprised 44,109 email addresses.
According to Levine, “SpyFone is a brazen brand name for a surveillance business that helped stalkers steal private information.”
According to TechCrunch, Zuckerman started running another stalkerware firm after the 2021 order. In 2022, TechCrunch obtained breached data from the stalkerware application SpyTrac.
The data showed that SpyTrac was run by freelance developers with direct links to Support King, in an apparent attempt to evade the FTC ban. Additionally, the breached data contained records from SpyFone, which Support King was supposed to delete, as well as access keys to the cloud storage of OneClickMonitor, another stalkerware application.
The Indian government is pushing a telecom industry proposal that would compel smartphone companies to enable satellite-based location tracking that stays active around the clock for surveillance.
Tech giants Samsung, Google, and Apple have opposed the move over privacy concerns. Privacy debates flared in India after the government was forced to repeal an order mandating that smartphone companies pre-install a state-run cyber safety application on all devices, with activists and opposition politicians raising concerns about possible spying.
The government has recently been concerned that agencies don't get accurate locations when legal requests are sent to telecom companies during investigations. Telecom operators currently rely only on cellular tower data, which provides an estimated area rather than a precise location and can be inaccurate.
The Cellular Operators Association of India (COAI), representing Bharti Airtel and Reliance Jio, suggested that accurate user locations could be provided if the government mandates smartphone firms to turn on A-GPS technology, which combines cellular data and satellite signals.
If implemented, location services would be activated on smartphones with no option to disable them. Samsung, Google, and Apple strongly oppose the proposal. No comparable mandate to track user location exists anywhere else in the world, according to lobbying group India Cellular & Electronics Association (ICEA), which represents Google and Apple.
Reuters reached out to India's IT and home ministries for clarity on the telecom industry's proposal but received no replies. According to digital forensics expert Junade Ali, the "proposal would see phones operate as a dedicated surveillance device."
According to technology experts, A-GPS technology, which is normally activated only when specific apps are running or emergency calls are being made, could give authorities location data accurate enough to track a person to within a meter.
Globally, governments are constantly looking for new ways to track the movements and data of mobile users. All Russian mobile phones, for example, are mandated to have a state-sponsored communications app installed. With 735 million smartphones as of mid-2025, India is the second-largest mobile market in the world.
According to Counterpoint Research, more than 95% of these devices run Google's Android operating system, while the rest run Apple's iOS.
Apple and Google cautioned that their user base includes members of the armed forces, judges, business executives, and journalists, whose devices store sensitive data, and that the proposed location tracking would jeopardize their security.
According to the telecom industry, even the outdated method of location tracking is becoming troublesome because smartphone manufacturers notify users via pop-up messages that their "carrier is trying to access your location."
Threat actors are abusing a zero-day bug in Gogs, a popular self-hosted Git service. The open-source project hasn't fixed it yet.
Over 700 instances have been impacted in these attacks. Wiz researchers described the discovery as "accidental," made in July while they were analyzing malware on a compromised system. During the investigation, the experts "identified that the threat actor was leveraging a previously unknown flaw to compromise instances" and "responsibly disclosed this vulnerability to the maintainers."
The team informed Gogs' maintainers about the bug, who are now working on the fix.
The flaw, tracked as CVE-2025-8110, is primarily a bypass of an earlier patched flaw (CVE-2024-55947) that lets authorized users overwrite files outside the repository, leading to remote code execution (RCE).
Gogs is written in Go and lets users host Git repositories on their own servers or cloud infrastructure, without relying on GitHub or other third parties.
Git and Gogs allow symbolic links, which act as shortcuts to another file and can point to objects outside the repository. The Gogs API also allows file changes outside the regular Git protocol.
The previous patch didn't address this symbolic-link exploit, which lets threat actors leverage the flaw to remotely deploy malicious code.
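The symlink escape can be illustrated with a short, self-contained sketch. This is a generic illustration of the vulnerability class on a POSIX system, not the actual Gogs exploit; all paths and filenames are invented:

```python
import tempfile
import pathlib

# A service that writes files "inside" a repository without resolving
# symbolic links can be tricked into writing outside it.
root = pathlib.Path(tempfile.mkdtemp())
repo = root / "repo"
repo.mkdir()
outside = root / "outside.txt"          # a path beyond the repo boundary

# The attacker commits a symlink named "config" pointing outside the repo.
(repo / "config").symlink_to(outside)

# A naive file-update API handler follows the link when writing, so the
# data lands outside the repository instead of inside it.
(repo / "config").write_text("attacker-controlled contents\n")

print(outside.read_text())
```

A safe handler would resolve the path first and reject any target that escapes the repository root.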
While researchers haven't linked the attacks to any particular gang or person, they believe the threat actors are based in Asia.
Last year, Mandiant found Chinese state-sponsored hackers abusing a critical flaw in F5 via Supershell and selling access to impacted UK government agencies, US defense organizations, and others.
Researchers still don't know what threat actors are doing with access to compromised instances. "In the environments where we have visibility, the malware was removed quickly so we did not see any post-exploitation activity. We don't have visibility into other compromised servers, beyond knowing they're compromised," researchers said.
Wiz has advised users to immediately disable open registration (if not needed) and to limit internet exposure by shielding self-hosted Git services behind a VPN. Users should watch for new repositories with unexpected use of the PutContents API or random eight-character names.
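As a rough sketch of that last indicator, a defender could flag repository names matching the random eight-character pattern Wiz describes. The regex and sample names below are invented; real detection would combine this with API audit logs:

```python
import re

# Hypothetical heuristic: attacker-created repositories in this campaign
# reportedly used random 8-character lowercase names.
RANDOM_NAME = re.compile(r"[a-z0-9]{8}")

def flag_suspicious(repo_names):
    """Return repo names that look like random 8-character identifiers."""
    return [name for name in repo_names if RANDOM_NAME.fullmatch(name)]

repos = ["infra-tools", "x7kq2mwz", "docs", "a1b2c3d4"]
print(flag_suspicious(repos))  # ['x7kq2mwz', 'a1b2c3d4']
```

A heuristic like this will produce false positives on its own, so it belongs in a triage queue rather than an automated block list.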
For more details, readers can see the full list of indicators published by the researchers.
Microsoft stirred controversy when it revealed a Teams update that could tell your organization when you're not at work. Now Google has done the same. Say goodbye to end-to-end encryption: with this new Android update, your RCS and SMS texts are no longer private.
According to Android Authority, "Google is rolling out Android RCS Archival on Pixel (and other Android) phones, allowing employers to intercept and archive RCS chats on work-managed devices. In simpler terms, your employer will now be able to read your RCS chats in Google Messages despite end-to-end encryption.”
This applies only to work-managed devices and doesn't impact personal devices. In regulated industries, it simply adds RCS archiving to existing SMS archiving. Within an organization, however, texting is different from emailing: employees sometimes share details of their non-work lives over text. End-to-end encryption kept those conversations safe, but that will no longer be the case.
There is a lot of misunderstanding around end-to-end encryption. It protects messages while they are being sent, but once they arrive on a device, they are decrypted and readable by whoever controls that device.
According to Google, this is "a dependable, Android-supported solution for message archival, which is also backwards compatible with SMS and MMS messages as well. Employees will see a clear notification on their device whenever the archival feature is active.”
With this update, getting a phone at work is no longer as good as it seems. Employees have always been wary of the risks of over-sharing on email, which is easy to monitor. Texts seemed different.
The update will make things different. According to Google, “this new capability, available on Google Pixel and other compatible Android Enterprise devices gives your employees all the benefits of RCS — like typing indicators, read receipts, and end-to-end encryption between Android devices — while ensuring your organization meets its regulatory requirements.”
Because of organizational surveillance, employees sometimes turn to shadow IT systems such as WhatsApp and Signal to communicate with colleagues. The new Google update will only make things worse.
“Earlier,” Google said, “employers had to block the use of RCS entirely to meet these compliance requirements; this update simply allows organizations to support modern messaging — giving employees messaging benefits like high-quality media sharing and typing indicators — while maintaining the same compliance standards that already apply to SMS messaging."
The US Cybersecurity and Infrastructure Security Agency (CISA) added a critical security flaw affecting React Server Components (RSC) to its Known Exploited Vulnerabilities (KEV) catalog after exploitation in the wild.
The flaw, CVE-2025-55182 (CVSS score: 10.0), also dubbed React2Shell, is a remote code execution (RCE) vulnerability that an unauthenticated threat actor can trigger without any setup.
According to the CISA advisory, "Meta React Server Components contains a remote code execution vulnerability that could allow unauthenticated remote code execution by exploiting a flaw in how React decodes payloads sent to React Server Function endpoints."
The problem comes from unsafe deserialization in the library's Flight protocol, which React uses to communicate between a client and server. It allows an unauthorized, remote attacker to execute arbitrary commands on the server by sending specially crafted HTTP requests. Unsafe deserialization, the conversion of untrusted text into live objects, is considered a dangerous class of software vulnerability.
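React's Flight protocol is JavaScript, so the following is a deliberately unrelated Python sketch of the same vulnerability class, using the standard pickle module: a deserializer that can reconstruct arbitrary objects may end up running attacker-chosen code.

```python
import pickle

class Evil:
    def __reduce__(self):
        # __reduce__ tells pickle how to rebuild the object; an attacker
        # can make it return any callable to invoke during deserialization.
        return (print, ("code ran during deserialization!",))

payload = pickle.dumps(Evil())   # what an attacker would send over the wire
pickle.loads(payload)            # "deserializing" executes print(...)
```

This is why Python's own documentation warns never to unpickle untrusted data, and why deserializers for untrusted input must be restricted to plain data types.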
"The React2Shell vulnerability resides in the react-server package, specifically in how it parses object references during deserialization," said Martin Zugec, technical solutions director at Bitdefender.
The incident surfaced when Amazon said it found attack attempts from infrastructure related to Chinese hacking groups such as Jackpot Panda and Earth Lamia. "Within hours of the public disclosure of CVE-2025-55182 (React2Shell) on December 3, 2025, Amazon threat intelligence teams observed active exploitation attempts by multiple China state-nexus threat groups, including Earth Lamia and Jackpot Panda," AWS said.
Some attacks deployed cryptocurrency miners after running "cheap math" PowerShell commands to confirm successful exploitation, then dropped in-memory downloaders capable of fetching additional payloads from a remote server.
According to data shared by attack surface management platform Censys, there are about 2.15 million instances of internet-facing services that may be affected by this vulnerability. This comprises exposed web services using React Server Components and exposed instances of frameworks such as Next.js, Waku, React Router, and RedwoodSDK.
In a likely phishing attempt, more than four employees of the Kasaragod and Wayanad Collectorates received WhatsApp messages from accounts impersonating their district Collectors and asking for urgent money transfers. The numbers have since been reported to the cyber police, according to Collectorate officials.
The texts came from Vietnam-based numbers but displayed the profile pictures of the respective Collectors, Inbasekar K in Kasaragod and D R Meghasree in Wayanad.
In one incident, the scammers also shared a Google Pay number, but the target didn't proceed. According to the official, "the employees who received the messages were saved simply because they recognised the Collector’s tone and style of communication."
Two employees in Wayanad received texts, each from a different Vietnam-based number. In Kasaragod, Collector Inbasekar said many employees received the phishing texts on WhatsApp, two of whom reported the incident. No employee lost money.
The scam used a similar script in both districts. The first text read: "Hello, how are you? Where are you currently?" In Wayanad, the first message arrived around 4 pm, and in Kasaragod around 5:30 pm. When an employee replied, a follow-up text arrived: "Very good. Please do something urgently." The exchange followed the typical urgency-based pitch used by scammers.
The numbers have been reported to the cyber police. According to Wayanad officials, "Once the messages were identified as fake, screenshots were immediately circulated across all internal WhatsApp groups." The Cyber Unit has blocked both the Vietnam-linked and the Google Pay numbers.
The Kasaragod Collector cautioned the public and staff to be careful when receiving texts asking for money transfers. Notably, in both incidents the texts were sent to staff working on the Special Intensive Revision of electoral rolls, suggesting the scammers were exploiting the pressure under which booth-level employees are working.
According to cyber security experts, the fake identity scams are increasingly targeting top government officials. Scammers are exploiting hierarchical structures to trick officials into acting promptly. “Police have urged government employees and the public to avoid responding to unsolicited WhatsApp messages requesting money, verify communication through official phone numbers or email, and report suspicious messages immediately to cybercrime authorities,” the New Indian Express reported.
AI-powered threat hunting will only be successful if the data infrastructure is strong too.
Threat hunting powered by AI, automation, or human investigation will only ever be as effective as the data infrastructure it stands on. Security teams sometimes build AI on top of fragmented or poorly governed data, which creates problems later for both AI and human analysts. Even sophisticated algorithms can't handle inconsistent or incomplete data, and AI trained on poor data will produce poor results.
Correlated data anchors the operation. It reduces noise and helps surface patterns that manual systems can't.
Correlating and pre-transforming the data makes it easier for LLMs and other AI tools to work with, and allows connected components to surface naturally.
The same person may show up under entirely distinct names as an IAM principal in AWS, a committer in GitHub, and a document owner in Google Workspace. Looking at any one of those signals gives you only a small portion of the truth.
Considered together, they provide behavioral clarity. Downloading dozens of items from Google Workspace may look odd on its own, but it becomes obviously malicious if the same user also clones dozens of repositories to a personal laptop and opens up a public S3 bucket minutes later.
Correlations that previously took hours, or were impossible, become instant when data from logs, configurations, code repositories, and identity systems are all housed in one location.
For instance, lateral movement using stolen short-lived credentials frequently passes across multiple systems before being discovered. A compromised developer laptop might assume several IAM roles, launch new instances, and access internal databases. Endpoint logs show the local compromise, but the full extent of the intrusion cannot be demonstrated without IAM and network data.
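The correlation idea can be sketched in a few lines: map each system-specific identifier to one canonical person, then merge events into a single behavioral timeline. Every identifier and event below is invented for illustration:

```python
from collections import defaultdict

# Map each per-system identifier to one canonical identity.
identity_map = {
    "arn:aws:iam::123456789012:user/jdoe": "jane.doe",   # AWS IAM principal
    "jdoe-gh": "jane.doe",                               # GitHub committer
    "jane.doe@example.com": "jane.doe",                  # Workspace owner
}

events = [
    ("jane.doe@example.com", "workspace_bulk_download"),
    ("jdoe-gh", "cloned_repos_to_personal_laptop"),
    ("arn:aws:iam::123456789012:user/jdoe", "made_s3_bucket_public"),
]

timeline = defaultdict(list)
for actor, action in events:
    timeline[identity_map[actor]].append(action)

# Individually each event is noise; merged, they tell one suspicious story.
print(dict(timeline))
```

In practice the identity map itself is the hard part, usually built from HR records, SSO assertions, and email addresses rather than maintained by hand.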
Spyware, often presented as a trustworthy mobile application, can steal your data, track your whereabouts, record conversations, monitor your social media activity, take screenshots of your activities, and more. It can end up on your phone via phishing, a phony mobile application, or once-reliable software that was upgraded over the air to become an information thief.
Types of malware
Legitimate apps are frequently bundled with nuisanceware, which modifies your homepage or search-engine settings, interrupts your web browsing with pop-ups, and may collect your browsing information to sell to advertising networks and agencies.
Though often classed as malvertising, nuisanceware is typically not harmful or a threat to your fundamental security. Rather, many such packages focus on generating revenue by persuading users to view or click on advertisements.
There is also generic mobile spyware. These types of malware collect information from the operating system and clipboard, along with potentially valuable items such as account credentials or bitcoin wallet data. Spyware isn't always targeted; it may be employed in spray-and-pray phishing attempts.
More advanced than simple spyware is what is sometimes referred to as stalkerware. This unethical and frequently harmful spyware can occasionally be found on desktop computers but is more often installed on phones.
Lastly, there is government-grade commercial spyware. One of the best-known examples is Pegasus, which is sold to governments as a tool for law enforcement and counterterrorism.
Pegasus has been discovered on smartphones owned by lawyers, journalists, activists, and political dissidents. Commercial-grade spyware is unlikely to affect you unless you belong to a group of particular interest to a government with ethical dilemmas, because such spyware is expensive and requires careful victim selection and targeting.
There are signs that you may be the target of a spyware or stalkerware operator.
Receiving strange or unexpected emails or messages on social media could be a sign of a spyware infection attempt. You should remove these without downloading any files or clicking any links.
Recent attacks on WhatsApp allege that Meta's mega-messenger collects user data to generate ad money. WhatsApp strongly denies these fresh accusations, but it didn't help that a message of its own appeared to imply the same.
The recent attacks have two prominent origins. Few critics are as well-known as Elon Musk, particularly when the criticism appears on X, the platform he owns. Musk asserted on the Joe Rogan Experience that "WhatsApp knows enough about what you're texting to know what ads to show you," calling it "a serious security flaw."
These so-called "hooks for advertising" are typically thought to rely on metadata, which includes information on who messages whom, when, and how frequently, as well as other information from other sources that is included in a user's profile.
The message content itself is shielded by end-to-end encryption, which is the default setting for all 3 billion WhatsApp users. Signal's open-source encryption protocol, which the Meta platform adopted and modified for its own use, is the foundation of WhatsApp's security. So, in light of these new attacks, do you suddenly need to stop using WhatsApp?
In reality, WhatsApp's content is completely encrypted. There has never been any proof that Meta, WhatsApp, or anybody else can read the content itself. However, the platform you are utilizing is controlled by Meta, and it is aware of your identity. It does gather information on how you use the platform.
Additionally, it shares information with Meta so that Meta can "show relevant offers/ads." Signal has a small fraction of WhatsApp's user base, but it does not gather metadata in the same manner. Consider using Signal instead for sensitive content. Steer clear of Telegram, which is not end-to-end encrypted by default, and RCS, which is not yet cross-platform encrypted.
Remember that end-to-end encryption only safeguards your data while it is in transit. It has no effect on the security of your content on the device. I can read all of your messages, whether or not they are end-to-end encrypted, if I have control over your iPhone or Android.
The news first broke in December last year, when the WSJ reported that officials at the Departments of Justice, Commerce, and Defense had launched investigations into the company over national security threats tied to China.
Currently, the proposal has gotten interagency approval. According to the Washington Post, "Commerce officials concluded TP-Link Systems products pose a risk because the US-based company's products handle sensitive American data and because the officials believe it remains subject to jurisdiction or influence by the Chinese government."
But TP-Link's connections to the Chinese government are not confirmed. The company denies any such ties and disputes being characterized as a Chinese company.
The company was founded in China in 1996. After the October 2024 investigation, the company split into two: TP-Link Systems and TP-Link Technologies. "TP-Link's unusual degree of vulnerabilities and required compliance with [Chinese] law are in and of themselves disconcerting. When combined with the [Chinese] government's common use of [home office] routers like TP-Link to perpetrate extensive cyberattacks in the United States, it becomes significantly alarming" the officials wrote in October 2024.
The company has dominated the US router market since the COVID pandemic, rising from 20% of total router sales in 2019 to 65% in 2025.
The US DoJ is investigating whether TP-Link engaged in predatory pricing by artificially lowering its prices to kill the competition.
The potential ban has gone through an interagency review and is being handled by the Department of Commerce. Experts say the ban may yet be dropped amid the Trump administration's ongoing negotiations with China.
In a year when ingenious and sophisticated prompt-injection tactics have been proliferating, security researcher hxr1 discovered a far more conventional method of weaponizing rampant AI. In a proof-of-concept (PoC) provided exclusively to Dark Reading, he detailed a living-off-the-land (LotL) attack that utilizes trusted files from the Open Neural Network Exchange (ONNX) to bypass security engines.
Cybersecurity programs are only as effective as their designers make them. They may detect excessive amounts of data exfiltrating from a network, or an unfamiliar .exe file launching, because these are known signs of suspicious activity. However, if malware appears on a system in a way they are unfamiliar with, they are unlikely to notice it.
That's what makes AI so tricky. New software, procedures, and systems that incorporate AI capabilities create new, invisible channels for the spread of cyberattacks.
Since 2018, the Windows operating system has gradually added features that enable apps to carry out AI inference locally without requiring a connection to a cloud service. Windows Hello, Photos, and Office programs use inbuilt AI to carry out facial recognition, object identification, and productivity tasks, respectively. They accomplish this by calling the Windows Machine Learning (ML) application programming interface (API), which loads ML models as ONNX files.
ONNX files are automatically trusted by Windows and security software. Why wouldn't they be? Although malware can be found in EXEs, PDFs, and other formats, no threat actors in the wild have yet shown that they plan to, or are capable of, using neural networks as weapons. However, there are plenty of ways to make it feasible.
Planting a malicious payload in the metadata of a neural network is a simple way to infect it. The tradeoff is that the payload would remain in plain text, making it much easier for a security tool to incidentally detect it.
Piecemeal malware embedding among the model's named nodes, inputs, and outputs would be more challenging but more covert. Alternatively, an attacker may utilize sophisticated steganography to hide a payload inside the neural network's own weights.
As long as a loader is nearby that can call the necessary Windows APIs to unpack the payload, reassemble it in memory, and run it, all three approaches will function, and the latter two are very covert. Trying to reconstruct a fragmented payload from a neural network would be like trying to reassemble a needle from bits of it spread through a haystack.
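A toy sketch of the piecemeal-embedding idea, using only harmless strings and not tied to any real ONNX tooling: a benign payload is split into fragments hidden inside innocuous-looking node names, and a "loader" reassembles it in memory:

```python
import base64

# "Payload" here is a harmless demo string, base64-encoded.
payload = base64.b64encode(b"echo harmless-demo").decode()
chunks = [payload[i:i + 8] for i in range(0, len(payload), 8)]

# Attacker side: each fragment rides along inside a model's named nodes.
node_names = [f"conv_{i}_{chunk}" for i, chunk in enumerate(chunks)]

# Loader side: strip the camouflage, reassemble, decode.
recovered = "".join(name.split("_", 2)[2] for name in node_names)
print(base64.b64decode(recovered).decode())  # echo harmless-demo
```

Any single node name looks like an ordinary layer label, which is what makes signature-based scanning of the model file so unreliable against this technique.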
The breach earlier this year targeted a prominent U.S. non-profit working in advocacy and demonstrated advanced techniques and shared tooling among Chinese cyber espionage groups such as APT41, Space Pirates, and Kelp.
The attackers struck again in April with various malicious probes checking both internal network access and internet connectivity, particularly targeting a system at 192.0.0.88. They used various tactics and protocols, showing both determination and technical adaptability in reaching particular internal resources.
Following the connectivity tests, the hackers used tools like netstat for network reconnaissance and created a scheduled task via the Windows command-line tools.
This task ran a genuine MSBuild.exe app that processed an outbound.xml file to inject code into csc.exe and connect to a C2 server.
These steps point to automation (through scheduled tasks) and persistence via system-level privileges, increasing the complexity of the compromise and the potential damage.
The techniques and toolkit show traces of various Chinese espionage groups. The hackers weaponized genuine software components through DLL sideloading, abusing vetysafe.exe (a VipreAV component signed by Sunbelt Software, Inc.) to load a malicious payload called sbamres.dll.
This tactic was earlier found in campaigns linked to Earth Longzhi and Space Pirates, the former also known as an APT41 subgroup.
Notably, the same tactic was found in cases connected to Kelp, showing the extent of tool sharing within Chinese APTs.
The future of browsing is AI, and it watches everything you do online.
Security and privacy aren’t always the same thing, but there’s a reason that people who specialize in one care deeply about the other.
Recently, Perplexity released its AI-powered Comet Browser, and the Brave Software team disclosed that AI-powered browsers can follow malicious prompts hidden in images on the web.
We have long known that AI-powered browsers (and AI browser add-ons for other browsers) are vulnerable to a type of attack known as a prompt injection attack. But this is the first time we've seen the browser execute commands that are concealed from the user.
That is the security aspect. On the privacy side, experts who evaluated the Comet Browser discovered that it records everything you do while using it, including search and browsing history as well as information about the URLs you visit.
In short, while new AI-powered browser tools do fulfill the promise of integrating your favorite chatbot into your web browsing experience, their developers have not yet addressed the privacy and security threats they pose. Be careful when using these.
Researchers studied the ten biggest VPN attacks in recent history. Many were not triggered by foreign hostile actors; some resulted from basic human faults such as leaked credentials, third-party mistakes, or poor management.
Atlas, an AI-powered web browser built with ChatGPT at its core, is meant to do more than let users navigate the internet. It can read, summarize, and even complete internet tasks for the user, such as arranging appointments or finding lodging.
Atlas looked for social media posts and other websites that mentioned or discussed the story. For the New York Times piece, it created a summary using information from other publications, such as The Guardian, The Washington Post, Reuters, and The Associated Press, all of which, with the exception of Reuters, have partnerships or agreements with OpenAI.
With the end of support for Windows 10 approaching, many businesses are asking themselves how many of their devices, servers, or endpoints are already (or will soon be) unsupported. More importantly, what hidden weaknesses does this introduce into compliance, auditability, and access governance?
Most IT leaders understand the urge to keep outdated systems running for a little longer, patch what they can, and get the most value out of the existing infrastructure.
However, without regular upgrades, endpoint security technologies lose their effectiveness, audit trails become more difficult to maintain, and compliance reporting becomes a game of guesswork.
Research confirms the magnitude of the problem. According to Microsoft's newest Digital Defense Report, more than 90% of ransomware assaults that reach the encryption stage originate on unmanaged devices that lack sufficient security controls.
Unsupported systems frequently fall into this category, making them ideal candidates for exploitation. Furthermore, because these vulnerabilities exist at the infrastructure level rather than in individual files, they are frequently undetectable until an incident happens.
Hackers don't have to break your defense. They just need to wait for you to leave a window open. With the end of support for Windows 10 approaching, hackers are already predicting that many businesses will fall behind.
Waiting carries a high cost. Breaches on unsupported infrastructure can result in higher cleanup costs, longer downtime, and greater reputational harm than attacks on supported systems. Because compliance frameworks evolve quicker than legacy systems, staying put risks falling behind on standards that influence contracts, customer trust, and potentially your ability to do business.
Although unsupported systems may appear to be small technical defects, they quickly escalate into enterprise-level threats. The longer they remain in play, the larger the gap they create in endpoint security, compliance, and overall data security. Addressing even one unsupported system now can drastically reduce risk and give IT management more peace of mind.
TDMs have a clear choice: modernize proactively or leave the door open for the next attack.
Imagine walking into your local supermarket to buy a two-litre bottle of milk. You pay $3, but the person ahead of you pays $3.50, and the next shopper pays only $2. While this might sound strange, it reflects a growing practice known as surveillance pricing, where companies use personal data and artificial intelligence (AI) to determine how much each customer should pay. It is an increasingly common practice, and since we are directly subject to it, it is worth understanding how it works.
What is surveillance pricing?
Surveillance pricing refers to the use of digital tracking and AI to set individualised prices based on consumer behaviour. By analysing a person’s online activity, shopping habits, and even technical details like their device or location, retailers estimate each customer’s “pain point”, the maximum amount they are likely to pay for a product or service.
A recent report from the U.S. Federal Trade Commission (FTC) highlighted that businesses can collect such information through website pixels, cookies, account registrations, or email sign-ups. These tools allow them to observe browsing time, clicks, scrolling speed, and even mouse movements. Together, these insights reveal how interested a shopper is in a product, how urgent their need may be, and how much they can be charged without hesitation.
Growing concerns about fairness
In mid-2024, Delta Air Lines disclosed that a small percentage of its domestic ticket pricing was already determined using AI, with plans to expand this method to more routes. The revelation led U.S. lawmakers to question whether customer data was being used to charge certain passengers higher fares. Although Delta stated that it does not use AI for “predatory or discriminatory” pricing, the issue drew attention to how such technology could reshape consumer costs.
Former FTC Chair Lina Khan has also warned that some businesses can predict each consumer’s willingness to pay by analysing their digital patterns. This ability, she said, could allow companies to push prices to the upper limit of what individuals can afford, often without their knowledge.
How does it work?
AI-driven pricing systems use vast amounts of data, including login details, purchase history, device type, and location to classify shoppers by “price sensitivity.” The software then tests different price levels to see which one yields the highest profit.
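The segmentation-and-testing loop described above can be sketched in a few lines. This is a deliberately toy model: the behavioural signals, scoring thresholds, and markup multipliers below are invented for illustration and are not drawn from any real retailer's system.

```python
# Illustrative sketch of price-sensitivity segmentation. All signal
# names, thresholds, and multipliers are hypothetical.

def price_sensitivity(profile: dict) -> str:
    """Assign a coarse sensitivity tier from behavioural signals."""
    score = 0
    if profile.get("device") == "high-end":    # pricier device: read as less price-sensitive
        score += 1
    if profile.get("repeat_visits", 0) > 3:    # repeated views suggest urgency
        score += 1
    if profile.get("abandoned_carts", 0) > 0:  # cart abandonment suggests hesitation
        score -= 1
    return "low" if score >= 2 else "high" if score <= 0 else "medium"

def quoted_price(base: float, tier: str) -> float:
    """Nudge the base price up for less sensitive shoppers, down for more."""
    markup = {"low": 1.15, "medium": 1.0, "high": 0.90}
    return round(base * markup[tier], 2)

shopper = {"device": "high-end", "repeat_visits": 5, "abandoned_carts": 0}
tier = price_sensitivity(shopper)
print(tier, quoted_price(3.00, tier))  # prints: low 3.45 -- the $3 milk, marked up
```

A real system would replace the hand-written rules with a trained model and would A/B-test price points at scale, but the shape is the same: classify the shopper, then pick the price that classification predicts they will tolerate.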
The FTC’s surveillance pricing study documented several real-world examples of this practice.
Real-world examples and evidence
Ride-hailing platforms have long faced questions about this kind of data-driven pricing. In 2016, Uber’s former head of economic research noted that users with low battery life were more likely to accept surge pricing. A 2023 Belgian newspaper investigation later reported small differences in Uber fares depending on a phone’s battery level. Uber denied that battery status affects fares, saying its prices depend only on driver supply and ride demand.
Is this new?
The concept itself isn’t new. Dynamic pricing has existed for decades, but digital surveillance has made it far more sophisticated. In the early 2000s, Amazon experimented with varying prices for DVDs based on browsing data, sparking backlash from consumers who discovered the differences. Similarly, the UK’s Norwich Union once used satellite tracking for a “Pay As You Drive” car insurance model, which was discontinued after privacy concerns.
The future of pricing
Today’s combination of big data and AI allows retailers to create precise, individualised pricing models that adjust instantly. Experts warn this could undermine fair competition, reduce transparency, and widen inequality between consumers. Regulators like the FTC are now studying these systems closely to understand their impact on market fairness and consumer privacy.
For shoppers, awareness is key. Comparing prices across devices, clearing cookies, and using privacy tools can help reduce personal data tracking. As AI continues to shape how businesses price their products, understanding surveillance pricing is becoming essential to protect both privacy and pocket.
Because Incognito mode leaves no trace in your browsing history, you may believe you are safe. That is not entirely accurate: Incognito has limitations and doesn’t guarantee private browsing. But that doesn’t mean the feature is useless.
Private browsing mode is designed to keep your local browsing history private. When you open an Incognito window, the browser starts a separate session and temporarily stores session data such as history and cookies. Once the private session is closed, that temporary data is deleted and does not appear in your browsing history.
Incognito mode mainly keeps your browsing data hidden from other people who use your device.
A common misconception among users is that it makes them invisible on the internet and hides everything they browse online. But that is not true.
1. It doesn’t hide user activity from the Internet Service Provider (ISP)
Every request you send travels through your ISP’s network (encrypted DNS providers are a partial exception). Your ISP can monitor activity on its network, including every domain you visit and any unencrypted traffic. On a corporate Wi-Fi network, your network admin can likewise see which websites you visit.
2. Incognito mode doesn’t stop websites from tracking users
In Incognito mode, cookies are discarded when the session ends, but websites can still track your activity through device and browser fingerprinting. Sites build profiles from a combination of characteristics such as screen resolution, installed extensions, operating system, and time zone.
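The idea behind fingerprinting is simple: combine enough stable attributes and you get an identifier that survives cookie deletion. Below is a minimal sketch of that idea; the attribute names and values are hypothetical examples, and real fingerprinting scripts collect far more signals (canvas rendering, fonts, audio stack, and so on).

```python
import hashlib

# Illustrative sketch: derive a stable identifier from browser/device
# attributes a site can read without cookies. Attribute values are
# hypothetical examples.

def fingerprint(attrs: dict) -> str:
    # Sort keys so the same attributes always yield the same canonical string
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visitor = {
    "screen": "1920x1080",
    "timezone": "UTC+2",
    "user_agent": "Mozilla/5.0 ...",
    "extensions": "adblock,translate",
}

# The same attributes in a fresh Incognito session produce the same
# fingerprint, so the visitor is recognised even with cookies gone.
print(fingerprint(visitor) == fingerprint(dict(visitor)))  # True
```

This is why clearing cookies or opening a private window alone does not defeat tracking: the fingerprint is recomputed from the device itself on every visit.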
3. Incognito mode doesn’t hide your IP address
If you are blocked from a website, using Incognito mode won’t make it accessible, because it can’t change your IP address.
Incognito mode may give a false sense of security, but it doesn’t ensure privacy; its main benefit is on shared devices.
There are other ways to protect your online privacy, such as using a VPN, the Tor Browser, or tracker-blocking browser extensions.
Earlier this year, the US Immigration and Customs Enforcement (ICE) paid $825,000 to a company that builds vehicles outfitted with law-enforcement technology, including fake cellphone towers known as cell-site simulators, which are used to surveil phones.
The contract was made with a Maryland-based company called TechOps Specialty Vehicles (TOSV). TOSV signed another contract with ICE for $818,000 last year during the Biden administration.
The latest federal contract shows how such technologies are being used to support the Trump administration's deportation crackdown.
In September 2025, Forbes discovered an unsealed search warrant revealing that ICE used a cell-site simulator to surveil a person alleged to be a member of a criminal gang who had been ordered to leave the US in 2023. Forbes also reported finding a contract for a "cell site simulator."
Cell-site simulators have also been called "stingrays." Today they are better known as IMSI catchers, after the International Mobile Subscriber Identity (IMSI), a unique number used to identify every cellphone subscriber in the world.
These tools mimic a cellphone tower, tricking every device in range into connecting to them and allowing law enforcement to identify the real-world location of phone owners. Some cell-site simulators can also intercept texts, internet traffic, and regular calls.
Authorities have been using stingray devices for more than a decade. Their use is controversial because authorities sometimes deploy them without a warrant.
According to experts, these devices also sweep up data from innocent bystanders, and their workings are kept secret because authorities are bound by strict non-disclosure agreements. ICE has been a notable user of cell-site simulators: a document revealed in 2020 that the agency used them 466 times between 2017 and 2019.