Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Technology. Show all posts

The Threats of Agentic AI Data Trails


What if you install a brand new smart-home assistant that looks surreal, and if it can precool your living room at ease. However, besides the benefits, the system is secretly generating a huge digital trace of personal information?

That's the hidden price of agentic AI, your every plan, act, and prompt gets registered, forecasts and logs hints of frequent routines reside info long-term storage. 

These logs aren't silly mistakes. They are standard behaviour for most agentic AI systems. Fortunately, there's another way. Easy engineering methods build efficiency and autonomy while limiting the digital footprint. 

How Agentic AI Stores and Collects Private Data

It uses a planner based on a LLM to optimize similiar devices via the house. It surveills electricity prices and weather details, configures thermostats, adjusting smart plugs, and schedules EV charge. 

To limit personal data, the system registers only pseudonomymous resident profiles locally and doesn't access microphones and cameras. Agentic AI updates its plan when the weather or prices change, and registers short, planned reflections to strengthen future runs.

However, you as a home resident may not be aware about how much private data is being stored behind your back. Agentic AI systems create information as a natural result of how they function. In baseline agent configurations (mostly), the data gets accumulated. However, this is not considered the best tactic in the business, like configuration is a practical initial point for activating Agentic AI and function smoothly.

How to avoid AI agent trails?

Limit memory to the task at hand.

The deleting process should be thorough and easy.

The agent's action should be transparent via a readable "agent trace."

Apple Removes Controversial Dating Apps After Data Leak and Privacy Violations

 

Apple has removed two dating apps, Tea and TeaOnHer, from the App Store months after a major data breach exposed users’ private information. The removal comes amid continued criticism over the apps’ privacy failures and lack of effective content moderation. 

The controversy started earlier this year when 404 Media reported that Tea, described as a dating and safety app, had leaked sensitive data, including driver’s licenses and chat histories. 

The exposed information was traced to an unsecured database and later appeared on the forum 4chan. Despite the breach, the app briefly gained popularity and reached the top of the App Store charts, driven by widespread online attention. 

TechCrunch reported that Apple confirmed the removal of both apps, citing multiple violations of its App Store Review Guidelines. The company pointed to sections 1.2, 5.1.2, and 5.6, which address objectionable content, data protection, and excessive negative user feedback. 

Apple also received a large number of complaints and low ratings, including reports that personal information belonging to minors had been shared on the platforms. According to Apple, the developers were notified of the issues and given time to make improvements, but no adequate action was taken. 

The gap between the initial reports of the data leak and the eventual removal likely reflects this period of review and attempted remediation. The incident highlights ongoing challenges around privacy and user safety in dating apps, which often collect and store large amounts of personal data. 

While Apple enforces rules intended to protect users, the case raises questions about how quickly and effectively those rules are applied when serious privacy risks come to light. The removal of Tea and TeaOnHer underscores the growing scrutiny facing apps that fail to secure user information or moderate harmful content.

Iran Attacks Israeli Cybersecurity Infrastructure


The National Cyber Directorate found a series of cyberattacks that targeted Israeli organisations that offer IT services to companies in the country, and might be linked to Iran.

Earlier this month, the failed cyberattack against Shamir Medical Center on Yom Kippur leaked emails that contained sensitive patient information. The directorate found it to be an Iranian attack disrupting the hospital's functions.

Fortunately, the attack was mitigated before it could do any damage to the hospital's medical record system.

The directorate found that threat actors used stolen data to get access to the targeted infrastructure. Most attacks didn't do any damage, some however, caused data leaks. Due to immediate communications and response, the incidents were addressed quickly. “In the case of Shamir Medical Center, beyond the data leak, the very attempt to harm a hospital in Israel is a red line that could have endangered lives,” the directorate said. 

European gang behind the attack

First, a ransomwware gang based out of Eastern Europe claimed responsibility and posted a ransom demand with a 72-hour window. But Israeli officials later discovered that Iranian threat actors launched the attack. 

According to officials, the incident was connected to a wider campaign against Israeli organisations and critical service providers recently. Over 10 forms suffered cyberattacks and exploited bugs in digital service providers inside supply chains. 

According to Jerusalem Post, "Since the start of 2025, Israel has thwarted dozens of Iranian cyberattacks targeting prominent civilians, including security officials, politicians, academics, journalists, and media professionals. The Shin Bet security agency said these operations aim to collect sensitive personal data that could later be used in physical attacks within Israel, potentially carried out by locally recruited operatives."

Google’s Quantum Breakthrough Rekindles Concerns About Bitcoin’s Long-Term Security

 




Google has announced a verified milestone in quantum computing that has once again drawn attention to the potential threat quantum technology could pose to Bitcoin and other digital systems in the future.

The company’s latest quantum processor, Willow, has demonstrated a confirmed computational speed-up over the world’s leading supercomputers. Published in the journal Nature, the findings mark the first verified example of a quantum processor outperforming classical machines in a real experiment.

This success brings researchers closer to the long-envisioned goal of building reliable quantum computers and signals progress toward machines that could one day challenge the cryptography protecting cryptocurrencies.


What Google Achieved

According to Google’s study, the 105-qubit Willow chip ran a physics algorithm faster than any known classical system could simulate. This achievement, often referred to as “quantum advantage,” shows that quantum processors are starting to perform calculations that are practically impossible for traditional computers.

The experiment used a method called Quantum Echoes, where researchers advanced a quantum system through several operations, intentionally disturbed one qubit, and then reversed the sequence to see if the information would reappear. The re-emergence of this information, known as a quantum echo, confirmed the system’s interference patterns and genuine quantum behavior.

In measurable terms, Willow completed the task in just over two hours, while Frontier, one of the world’s fastest publicly benchmarked supercomputers, would need about 3.2 years to perform the same operation. That represents a performance difference of nearly 13,000 times.

The results were independently verified and can be reproduced by other quantum systems, a major step forward from previous experiments that lacked reproducibility. Google CEO Sundar Pichai noted on X that this outcome is “a substantial step toward the first real-world application of quantum computing.”

Willow’s superconducting transmon qubits achieved an impressive level of stability. The chip recorded median two-qubit gate errors of 0.0015 and maintained coherence times above 100 microseconds, allowing scientists to execute 23 layers of quantum operations across 65 qubits. This pushed the system beyond what classical models can reproduce and proved that complex, multi-layered quantum circuits can now be managed with high accuracy.


From Sycamore to Willow

The Willow processor, unveiled in December 2024, is a successor to Google’s Sycamore chip from 2019, which first claimed quantum supremacy but lacked experimental consistency. Willow bridges that gap by introducing stronger error correction and better coherence, enabling experiments that can be repeated and verified within the same hardware.

While the processor is still in a research phase, its stability and reproducibility represent significant engineering progress. The experiment also confirmed that quantum interference can persist in systems too complex for classical simulation, which strengthens the case for practical quantum applications.


Toward Real-World Uses

Google now plans to move beyond proof-of-concept demonstrations toward practical quantum simulations, such as modeling atomic and molecular interactions. These tasks are vital for fields like drug discovery, battery design, and material science, where classical computers struggle to handle the enormous number of variables involved.

In collaboration with the University of California, Berkeley, Google recently demonstrated a small-scale quantum experiment to model molecular systems, marking an early step toward what the company calls a “quantum-scope” — a tool capable of observing natural phenomena that cannot be measured using classical instruments.


The Bitcoin Question

Although Willow’s success does not pose an immediate threat to Bitcoin, it has revived discussions about how close quantum computers are to breaking elliptic-curve cryptography (ECC), which underpins most digital financial systems. ECC is nearly impossible for classical computers to reverse-engineer, but it could theoretically be broken by a powerful quantum system running algorithms such as Shor’s algorithm.

Experts caution that this risk remains distant but credible. Christopher Peikert, a professor of computer science and engineering at the University of Michigan, told Decrypt that quantum computing has a small but significant chance, over five percent, of becoming a major long-term threat to cryptocurrencies.

He added that moving to post-quantum cryptography would address these vulnerabilities, but the trade-offs include larger keys and signatures, which would increase network traffic and block sizes.


Why It Matters

Simulating Willow’s circuits using tensor-network algorithms would take more than 10 million CPU-hours on Frontier. The contrast between two hours of quantum computation and several years of classical simulation offers clear evidence that practical quantum advantage is becoming real.

The Willow experiment transitions quantum research from theory to testable engineering. It shows that real hardware can perform verified calculations that classical computers cannot feasibly replicate.

For cybersecurity professionals and blockchain developers, this serves as a reminder that quantum resistance must now be part of long-term security planning. The countdown toward a quantum future has already begun, and with each verified advance, that future moves closer to reality.



Smart Devices Redefining Productivity in the Home Workspace


 

Remote working, once regarded as a rare privilege, has now become a key feature of today's professional landscape. Boardroom discussions and water-cooler chats have become much more obsolete, as organisations around the world continue to adapt to new work models shaped by technology and necessity, with virtual meetings and digital collaboration becoming more prevalent. 

It has become increasingly apparent that remote work is no longer a distant future vision but rather a reality that defines the professional world of today. There have been significant shifts in the way that organisations operate and how professionals communicate, perform and interact as a result of the dissolution of traditional workplace boundaries, giving rise to a new era of distributed teams, flexible schedules, and technology-driven collaboration. 

These changes, accelerated by global disruptions and evolving employee expectations, have led to a significant shift in the way organisations operate. Gallup has recently announced that over half of U.S. employees now work from home at least part of the time, a trend that is unlikely to wane anytime soon. There are countless reasons why this model is so popular, including its balance between productivity, autonomy, and accessibility, offering both employers and employees the option of redefining success in a way that goes beyond the confines of physical work environments. 

With the increasing popularity of remote and hybrid work, it is becoming ever more crucial for individuals to learn how to thrive in this environment, in which success increasingly depends on the choice and use of the right digital tools that will make it possible for them to maintain connection, efficiency, and growth in a borderless work environment. 

DigitalOcean Currents report from 2023 indicates that 39 per cent of companies operating entirely remotely now operate, while 23 per cent use a hybrid model with mandatory in-office days, and 2 per cent permit their employees to choose between remote working options. In contrast, about 14 per cent of these companies still maintain the traditional setup of an office, a small fraction of which is the traditional office setup. 

More than a location change, this dramatic shift marks the beginning of a transformation of how teams communicate, innovate, and remain connected across time zones and borders, which reflects an evolution in how teams communicate, innovate, and remain connected. With the blurring of the boundaries of the workplace, digital tools have been emerging as the backbone of this transformation, providing seamless collaboration between employees, ensuring organisational cohesion, and maximising productivity regardless of where they log in to the workplace. 

With today's distributed work culture, success depends not only on adaptability, but also on thoughtfully integrating technology that bridges distances with efficiency and purpose, in an era where flexibility is imperative, but it also depends on technology integration. As organisations continue to embrace remote and hybrid working models, maintaining compliance across diverse sites has become one of the most pressing operational challenges that organisations face today. 

Compliance management on a manual basis not only strains administrative efficiency but also exposes businesses to significant regulatory and financial risks. Human error remains an issue that persists today—whether it is overlooking state-specific labour laws, understating employees' hours, or misclassifying workers, with each mistake carrying a potential for fines, back taxes, or legal disputes as a result. In the absence of centralised systems, routine audits become time-consuming exercises that are plagued by inconsistent data and dispersed records. 

Almost all human resource departments face the challenge of ensuring that fair and consistent policy enforcement across dispersed teams is nearly impossible because of fragmented oversight and self-reported data. For organisations to overcome these challenges, automation and intelligent workforce management are increasingly being embraced by forward-looking organisations. Using advanced time-tracking platforms along with workforce analytics, employers can gain real-time visibility into employee activity, simplify audits, and improve compliance reporting accuracy. 

Businesses can not only reduce risks and administrative burdens by consolidating processes into a single, data-driven system but also increase employee transparency and trust by integrating these processes into one system. By utilising technology to manage remote teams effectively in the era of remote work, it becomes a strategic ally for maintaining operational integrity. 

Clear communication, structured organisation, and the appropriate technology must be employed when managing remote teams. When managing for the first time, defining roles, reporting procedures, and meeting schedules is an essential component of creating accountability and transparency among managers. 

Regular one-on-one and team meetings are essential for engaging employees and addressing challenges that might arise in a virtual environment. The adoption of remote work tools for collaboration, project tracking, and communication is on the rise among organisations as a means of streamlining workflows across time zones to ensure teams remain in alignment. Remote work has been growing in popularity because of its tangible benefits. 

Employees and businesses alike will save money on commuting, infrastructure, and operational expenses when using it. There is no need for daily travel, so professionals can devote more time to their families and themselves, enhancing work-life balance. Research has shown that remote workers usually have a higher level of productivity due to fewer interruptions and greater flexibility, and that they often log more productive hours. Additionally, this model has gained recognition for its ability to improve employee satisfaction as well as promote a healthy lifestyle. 

By utilising the latest developments in technology, such as real-time collaborations and secure data sharing, remote work continues to reshape traditional employment and is enabling an efficient, balanced, and globally connected workforce to be created. 

Building the Foundation for Remote Work Efficiency 


In today's increasingly digital business environment, making the right choice in terms of the hardware that employees use forms the cornerstone of an effective remote working environment. It will often make or break a company's productivity levels, communication performance, and overall employee satisfaction. Remote teams must be connected directly with each other using powerful laptops, seamless collaboration tools, and reliable devices that ensure that remote operations run smoothly. 

High-Performance Laptops for Modern Professionals 


Despite the fact that laptops remain the primary work instruments for remote employees, their specifications can have a significant impact on their efficiency levels during the course of the day. In addition to offering optimum performance, HP Elite Dragonfly, HP ZBook Studio, and HP Pavilion x360 are also equipped with versatile capabilities that appeal to business leaders as well as creative professionals alike. 

As the world continues to evolve, key features, such as 16GB or more RAM, the latest processors, high-quality webcams, high-quality microphones, and extended battery life, are no longer luxuries but rather necessities to keep professionals up-to-date in a virtual environment. Furthermore, enhanced security features as well as multiple connectivity ports make it possible for remote professionals to remain both productive and protected at the same time. 

Desktop Systems for Dedicated Home Offices


Professionals working from a fixed workspace can benefit greatly from desktop systems, as they offer superior performance and long-term value. HP Desktops are a great example of desktop computers that provide enterprise-grade computing power, better thermal management, and improved ergonomics. 

They are ideal for complex, resource-intensive tasks due to their flexibility, the ability to support multiple monitors, and their cost-effectiveness, which makes them a solid foundation for sustained productivity. 

Essential Peripherals and Accessories 


The entire remote setup does not only require core computing devices to be integrated, but it also requires thoughtfully integrating peripherals designed to increase productivity and comfort. High-resolution displays, such as HP's E27u G4 and HP's P24h G4, or high-resolution 4K displays, significantly improve eye strain and improve workflow. For professionals who spend long periods of time in front of screens, it is essential that they have monitors that are ergonomically adjustable, colour accurate, and have blue-light filtering. 

With reliable printing options such as HP OfficeJet Pro 9135e, LaserJet Pro 4001dn, and ENVY Inspire 7255e, home offices can manage their documents seamlessly. There is also the possibility of avoiding laptop overheating by using cooling pads, ergonomic stands, and proper maintenance tools, such as microfiber cloths and compressed air, which help maintain performance and equipment longevity. 

Data Management and Security Solutions 


It is crucial to understand that efficient data management is the key to remote productivity. Professionals utilise high-capacity flash drives, external SSDs, and secure cloud services to safeguard and manage their files. A number of tools and memory upgrades have improved the performance of workstations, making it possible to perform multiple tasks smoothly and retrieve data more quickly. 

Nevertheless, organisations are prioritising security measures like VPNs, encrypted communication and two-factor authentication in an effort to mitigate risks associated with remote connectivity, and in order to do so, they are investing more in these measures. 

Software Ecosystem for Seamless Collaboration  


There are several leading project management platforms in the world that facilitate coordinated workflows by offering features like task tracking, automated progress reports, and shared workspaces, which provide a framework for remote work. Although hardware creates the framework, software is the heart and soul of the remote work ecosystem. 

Numerous communication tools enable geographically dispersed teams to work together via instant messaging, video conferencing, and real-time collaboration, such as Microsoft Teams, Slack, Zoom, and Google Meet. Secure cloud solutions, including Google Workspace, Microsoft 365, Dropbox and Box, further simplify the process of sharing files while maintaining enterprise-grade security. 

Managing Distributed Teams Effectively 


A successful remote leadership experience cannot be achieved solely by technology; a successful remote management environment requires sound management practices that are consistent with clear communication protocols, defined performance metrics, and regular virtual check-ins. Through fostering collaboration, encouraging work-life balance, and integrating virtual team-building initiatives, distributed teams can build stronger relationships. 

The combination of these practices, along with continuous security audits and employee training, ensures that organisations keep not only their operational efficiency, but also trust and cohesion within their organisations, especially in an increasingly decentralised world in which organisations are facing increasing competition. It seems that the future of work depends on how organisations can seamlessly integrate technology into their day-to-day operations as the digital landscape continues to evolve. 

Smart devices, intelligent software, and connected ecosystems are no longer optional, they are the lifelines of modern productivity and are no longer optional. The purchase of high-quality hardware and reliable digital tools by remote professionals goes beyond mere convenience; it is a strategic step towards sustaining focus, creativity, and collaboration in an ever-changing environment by remote professionals.

Leadership, on the other hand, must always maintain trust, engagement, and a positive mental environment within their teams to maximise their performance. Remote working will continue to grow in popularity as the next phase of success lies in striking a balance between technology and human connection, efficiency and empathy, flexibility and accountability, and innovation potential. 

With the advancement of digital infrastructure and the adoption of smarter, more adaptive workflows by organisations across the globe, we are on the verge of an innovative, resilient, and inclusive future for the global workforce. This future will not be shaped by geographical location, but rather by the intelligent use of tools that will enable people to perform at their best regardless of their location.

Opera Introduces Neon: The Browser That Thinks and Acts for You




Opera has officially launched Neon, its newest browser that blends traditional web browsing with artificial intelligence capable of taking real actions for users. Unlike regular browsers that only assist with tasks such as summarizing webpages or answering quick questions, Neon is designed to handle jobs independently, such as comparing product prices, booking flights, or sending emails, all within a single interface.

The company has been developing this technology for nearly two years, aiming to redefine what a web browser can do in the age of AI. Neon’s core idea is what Opera calls “agentic browsing” — a concept where the browser acts as a personal digital agent that can think, analyze, and execute commands rather than just display information.


How Neon Works

Neon’s functionality revolves around three main tools: Chat, Do, and Make.

Chat serves as a conversational assistant that helps users interact with websites or retrieve information quickly.

Do is where the browser’s true intelligence lies — it allows Neon to take real action on the user’s behalf, like placing an order, sending a message, or completing a form.

Make helps users generate outputs such as drafts, summaries, or creative material.

When combined, these features turn Neon into a proactive tool that doesn’t just respond to you but works with you.


Organized Workspaces and Smarter Prompts

One of Neon’s standout additions is Tasks, a feature that allows users to create dedicated mini workspaces for specific goals. Each Task works like a self-contained browser window that remembers context, helping Neon analyze and perform multiple actions without cluttering the main screen. For example, users can have one Task comparing airfares while another is drafting an email, both running independently.

Neon also introduces Cards, which are pre-built AI prompts for automating frequent activities. They function like templates that users can reuse anytime, whether to schedule tasks, perform research, or even place a recurring order. Opera allows users to customize and save their own Cards, tailoring them for personal use.


A Step Ahead of Competitors

While other AI-powered browsers like Comet have introduced agentic functions, Neon’s performance currently appears more refined. Its ability to complete full workflows with minimal human input demonstrates how far Opera has pushed the idea of autonomous browsing. Users who tested both browsers report that Neon executes most tasks more smoothly, with fewer interruptions or manual confirmations.


The future of this browser 

Neon is still being rolled out through a waitlist, with plans for a premium subscription priced at $19.99 per month. Opera describes it as the next stage in web navigation: a browser that doesn’t just assist but acts.

As agentic AI gains ground, Neon represents a growing shift in how users interact with technology. However, experts advise caution, reminding that convenience should not come at the expense of privacy and security. As AI-driven browsers become more capable, ensuring that automated systems act safely and transparently will remain a priority for both developers and users.




AI Becomes the New Spiritual Guide: How Technology Is Transforming Faith in India and Beyond

 

Around the world — and particularly in India — worshippers are increasingly turning to artificial intelligence for guidance, prayer, and spiritual comfort. As machines become mediators of faith, a new question arises: what happens when technology becomes our spiritual middleman?

For Vijay Meel, a 25-year-old student from Rajasthan, divine advice once came from gurus. Now, it comes from GitaGPT — an AI chatbot trained on the Bhagavad Gita, the Hindu scripture of 700 verses that capture Krishna’s wisdom.

“When I couldn’t clear my banking exams, I was dejected,” Meel recalls. Turning to GitaGPT, he shared his worries and received the reply: “Focus on your actions and let go of the worry for its fruit.”

“It wasn’t something I didn’t know,” Meel says, “but at that moment, I needed someone to remind me.” Since then, the chatbot has become his digital spiritual companion.

AI is changing how people work, learn, and love — and now, how they pray. From Hinduism to Christianity, believers are experimenting with chatbots as sources of guidance. But Hinduism’s long tradition of embracing physical symbols of divinity makes it especially open to AI’s spiritual evolution.

“People feel disconnected from community, from elders, from temples,” says Holly Walters, an anthropologist at Wellesley College. “For many, talking to an AI about God is a way of reaching for belonging, not just spirituality.”

The Rise of Digital Deities

In 2023, apps like Text With Jesus and QuranGPT gained huge followings — though not without controversy. Meanwhile, Hindu innovators in India began developing AI-based chatbots to embody gods and scriptures.

One such developer, Vikas Sahu, built his own GitaGPT as a side project. To his surprise, it reached over 100,000 users within days. He’s now expanding it to feature teachings of other Hindu deities, saying he hopes to “morph it into an avenue to the teachings of all gods and goddesses.”

For Tanmay Shresth, an IT professional from New Delhi, AI-based spiritual chat feels like therapy. “At times, it’s hard to find someone to talk to about religious or existential subjects,” he says. “AI is non-judgmental, accessible, and yields thoughtful responses.”

AI Meets Ritual and Worship

Major spiritual movements are embracing AI, too. In early 2025, Sadhguru’s Isha Foundation launched The Miracle of Mind, a meditation app powered by AI. “We’re using AI to deliver ancient wisdom in a contemporary way,” says Swami Harsha, the foundation’s content lead. The app surpassed one million downloads within 15 hours.

Even India’s 2025 Maha Kumbh Mela, one of the world’s largest religious gatherings, integrated AI tools like Kumbh Sah’AI’yak for multilingual assistance and digital participation in rituals. Some pilgrims even joined virtual “darshan” and digital snan (bath) experiences through video calls and VR tools.

Meanwhile, AI is entering academic and theological research, analyzing sacred texts like the Bhagavad Gita and Upanishads for hidden patterns and similarities.

Between Faith and Technology

From robotic arms performing aarti at festivals to animatronic murtis at ISKCON temples and robotic elephants like Irinjadapilly Raman in Kerala, technology and devotion are merging in new ways. “These robotic deities talk and move,” Walters says. “It’s uncanny — but for many, it’s God. They do puja, they receive darshan.”

However, experts warn of new ethical and spiritual risks. Reverend Lyndon Drake, a theologian at Oxford, says that AI chatbots might “challenge the status of religious leaders” and influence beliefs subtly.

Religious AIs, though trained on sacred texts, can produce misleading or dangerous responses. One version of GitaGPT once declared that “killing in order to protect dharma is justified.” Sahu admits, “I realised how serious it was and fine-tuned the AI to prevent such outputs.”

Similarly, a Catholic chatbot priest was taken offline in 2024 after claiming to perform sacraments. “The problem isn’t unique to religion,” Drake says. “It’s part of the broader challenge of building ethically predictable AI.”

In countries like India, where digital literacy varies, believers may not always distinguish between divine wisdom and algorithmic replies. “The danger isn’t just that people might believe what these bots say,” Walters notes. “It’s that they may not realise they have the agency to question it.”

Still, many users like Meel find comfort in these virtual companions. “Even when I go to a temple, I rarely get into deep conversations with a priest,” he says. “These bots bridge that gap — offering scripture-backed guidance at the distance of a hand.”

Spotify Partners with Major Labels to Develop “Responsible” AI Tools that Prioritize Artists’ Rights

 

Spotify, the world’s largest music streaming platform, has revealed that it is collaborating with major record labels to develop artificial intelligence (AI) tools in what it calls a “responsible” manner.

According to the company, the initiative aims to create AI technologies that “put artists and songwriters first” while ensuring full respect for their copyrights. As part of the effort, Spotify will license music from the industry’s leading record labels — Sony Music, Universal Music Group, and Warner Music Group — which together represent the majority of global music content.

Also joining the partnership are rights management company Merlin and digital music firm Believe.

While the specifics of the new AI tools remain under wraps, Spotify confirmed that development is already underway on its first set of products. The company acknowledged that there are “a wide range of views on use of generative music tools within the artistic community” and stated that artists would have the option to decide whether to participate.

The announcement comes amid growing concern from prominent musicians, including Dua Lipa, Sir Elton John, and Sir Paul McCartney, who have criticized AI companies for training generative models on their music without authorization or compensation.

Spotify emphasized that creators and rights holders will be “properly compensated for uses of their work and transparently credited for their contributions.” The firm said this would be done through “upfront agreements” rather than “asking for forgiveness later.”

“Technology should always serve artists, not the other way around,” said Alex Norstrom, Spotify’s co-president.

Not everyone, however, is optimistic. New Orleans-based MidCitizen Entertainment, a music management company, argued that AI has “polluted the creative ecosystem.” Its Managing Partner, Max Bonanno, said that AI-generated tracks have “diluted the already limited share of revenue that artists receive from streaming royalties.”

Conversely, the move was praised by Ed Newton-Rex, founder of Fairly Trained, an organization that advocates for AI companies to respect creators’ rights. “Lots of the AI industry is exploitative — AI built on people's work without permission, served up to users who get no say in the matter,” he told BBC News. “This is different — AI features built fairly, with artists’ permission, presented to fans as a voluntary add-on rather than an inescapable funnel of AI slop. The devil will be in the detail, but it looks like a move towards a more ethical AI industry, which is sorely needed.”

Spotify reiterated that it does not produce any music itself, AI-generated or otherwise. However, it employs AI in personalized features such as “daylist” and its AI DJ, and it hosts AI-generated tracks that comply with its policies. Earlier, the company had removed a viral AI-generated song that used cloned voices of Drake and The Weeknd, citing impersonation concerns.

Spotify also pointed out that AI has already become a fixture in music production — from autotune and mixing to mastering. A notable example was The Beatles’ 2023 Grammy-winning single Now and Then, which used AI to enhance John Lennon’s vocals from an old recording.

Warner Music Group CEO Robert Kyncl expressed support for the collaboration, saying, “We’ve been consistently focused on making sure AI works for artists and songwriters, not against them. That means collaborating with partners who understand the necessity for new AI licensing deals that protect and compensate rightsholders and the creative community.”

Surveillance Pricing: How Technology Decides What You Pay




Imagine walking into your local supermarket to buy a two-litre bottle of milk. You pay $3, but the person ahead of you pays $3.50, and the next shopper pays only $2. While this might sound strange, it reflects a growing practice known as surveillance pricing, where companies use personal data and artificial intelligence (AI) to determine how much each customer should pay. It is a regular practice and we must comprehend the ins and outs since we are directly subjected to it.


What is surveillance pricing?

Surveillance pricing refers to the use of digital tracking and AI to set individualised prices based on consumer behaviour. By analysing a person’s online activity, shopping habits, and even technical details like their device or location, retailers estimate each customer’s “pain point”, the maximum amount they are likely to pay for a product or service.

A recent report from the U.S. Federal Trade Commission (FTC) highlighted that businesses can collect such information through website pixels, cookies, account registrations, or email sign-ups. These tools allow them to observe browsing time, clicks, scrolling speed, and even mouse movements. Together, these insights reveal how interested a shopper is in a product, how urgent their need may be, and how much they can be charged without hesitation.


Growing concerns about fairness

In mid-2024, Delta Air Lines disclosed that a small percentage of its domestic ticket pricing was already determined using AI, with plans to expand this method to more routes. The revelation led U.S. lawmakers to question whether customer data was being used to charge certain passengers higher fares. Although Delta stated that it does not use AI for “predatory or discriminatory” pricing, the issue drew attention to how such technology could reshape consumer costs.

Former FTC Chair Lina Khan has also warned that some businesses can predict each consumer’s willingness to pay by analysing their digital patterns. This ability, she said, could allow companies to push prices to the upper limit of what individuals can afford, often without their knowledge.


How does it work?

AI-driven pricing systems use vast amounts of data, including login details, purchase history, device type, and location to classify shoppers by “price sensitivity.” The software then tests different price levels to see which one yields the highest profit.

The FTC’s surveillance pricing study revealed several real-world examples of this practice:

  1. Encouraging hesitant users: A betting website might detect when a visitor is about to leave and display new offers to convince them to stay.
  2. Targeting new buyers: A car dealership might identify first-time buyers and offer them different financing options or deals.
  3. Detecting urgency: A parent choosing fast delivery for baby products may be deemed less price-sensitive and offered fewer discounts.
  4. Withholding offers from loyal customers: Regular shoppers might be excluded from promotions because the system expects them to buy anyway.
  5. Monitoring engagement: If a user watches a product video for longer, the system might interpret it as a sign they are willing to pay more.


Real-world examples and evidence

Ride-hailing platforms have long faced questions about this kind of data-driven pricing. In 2016, Uber’s former head of economic research noted that users with low battery life were more likely to accept surge pricing. A 2023 Belgian newspaper investigation later reported small differences in Uber fares depending on a phone’s battery level. Uber denied that battery status affects fares, saying its prices depend only on driver supply and ride demand.


Is this new?

The concept itself isn’t new. Dynamic pricing has existed for decades, but digital surveillance has made it far more sophisticated. In the early 2000s, Amazon experimented with varying prices for DVDs based on browsing data, sparking backlash from consumers who discovered the differences. Similarly, the UK’s Norwich Union once used satellite tracking for a “Pay As You Drive” car insurance model, which was discontinued after privacy concerns.


The future of pricing

Today’s combination of big data and AI allows retailers to create precise, individualised pricing models that adjust instantly. Experts warn this could undermine fair competition, reduce transparency, and widen inequality between consumers. Regulators like the FTC are now studying these systems closely to understand their impact on market fairness and consumer privacy.

For shoppers, awareness is key. Comparing prices across devices, clearing cookies, and using privacy tools can help reduce personal data tracking. As AI continues to shape how businesses price their products, understanding surveillance pricing is becoming essential to protect both privacy and pocket.


The Rise of AI Agents and the Growing Need for Stronger Authorization Controls

 

AI agents are no longer confined to research labs—they’re now writing code, managing infrastructure, and approving transactions in real-world production. The appeal is speed and efficiency. The risk? Most organizations still use outdated, human-oriented permission systems that can’t safely control autonomous behavior.

As AI transforms cybersecurity and enterprise operations, every leap in capability brings new vulnerabilities. Agentic AI proves this clearly—machines act faster than people, but they also fail faster.

Traditional access controls were built for human rhythms. Users log in, complete tasks, and log off. But AI agents operate nonstop across multiple systems. That’s why Graham Neray, co-founder and CEO of Oso Security, calls authorization “the most important unsolved problem in software.” He adds, “Every company that builds software ends up reinventing authorization from scratch—and most do it badly. Now we’re layering AI on top of that foundation.”

The problem isn’t intent—it’s infrastructure. Most companies still manage permissions through static roles and hard-coded logic, which barely worked for humans. An AI agent can make thousands of changes per second, and one misstep can cause massive damage before anyone intervenes.

Pressure to prove ROI adds another layer of risk. Todd Thiemann, principal analyst at Omdia, explains, “Enterprise IT teams are under pressure to demonstrate a tangible ROI of their generative AI investments… Security generally, and identity security in particular, can fall by the wayside in the rush to get AI agents into production to show results.”

It’s tempting to give agents the same permissions as their human users—but that’s exactly what creates exposure. Thiemann warns, “AI agents lack human judgment and contextual awareness, and that can lead to misuse or unintended escalation.” For example, an agent automating payroll should never be able to authorize transfers. “Such high-risk actions should require human approval and strong multi-factor authentication,” he adds.

Neray believes the solution lies in designing firm, automated boundaries. “You can’t reason with an LLM about whether it should delete a file,” he says. “You have to design hard rules that prevent it from doing so.”

That means building automated least privilege systems—granting only temporary, task-specific access. Oso Security is helping companies move authorization from hard-coded systems to modular, API-driven layers. “We spent a decade making authentication easier with Okta and Auth0. Authorization is the next frontier,” Neray says.

As CISOs step in earlier to guide AI deployment, the goal isn’t to block innovation—but to make it sustainable. Limiting privileges, requiring human approval for critical actions, and maintaining audit trails are key.

Thiemann sums it up: “Minimizing those privileges can minimize the potential blast radius of any mistake or incident.”

AI doesn’t just change what’s possible—it redefines what’s safe. Machines don’t need more power; they need better permissions.

Amazon resolves major AWS outage that disrupted apps, websites, and banks globally



 


A widespread disruption at Amazon Web Services (AWS) on Monday caused several high-profile apps, websites, and banking platforms to go offline for hours before the issue was finally resolved later in the night. The outage, which affected one of Amazon’s main cloud regions in the United States, drew attention to how heavily the global digital infrastructure depends on a few large cloud service providers.

According to Amazon’s official update, the problem stemmed from a technical fault in its Domain Name System (DNS) — a core internet function that translates website names into numerical addresses that computers can read. When the DNS experiences interruptions, browsers and applications lose their ability to locate and connect with servers, causing widespread loading failures. The company confirmed the issue affected its DynamoDB API endpoint in the US-EAST-1 region, one of its busiest hubs.

The first reports of disruptions appeared around 7:00 a.m. BST on Monday, when users began facing difficulties accessing multiple platforms. As the issue spread, users of services such as Snapchat, Fortnite, and Duolingo were unable to log in or perform basic functions. Several banking websites, including Lloyds and Halifax, also reported temporary connectivity problems.

The outage quickly escalated to a global scale. According to the monitoring website Downdetector, more than 11 million user complaints were recorded throughout the day, an unprecedented figure that reflected the magnitude of the disruption. Early in the incident, Downdetector noted over four million reports from more than 500 affected platforms within just a few hours, which was more than double its usual weekday average.

AWS engineers worked through the day to isolate the source of the issue and restore affected systems. To stabilize its network, Amazon temporarily limited some internal operations to prevent further cascading failures. By 11:00 p.m. BST, the company announced that all services had “returned to normal operations.”

Experts said the incident underlined the vulnerabilities of an increasingly centralized internet. Professor Alan Woodward of the University of Surrey explained that modern online systems are highly interdependent, meaning that an error within one major provider can ripple across numerous unrelated services. “Even small technical mistakes can trigger large-scale failures,” he said, pointing out how human or software missteps in one corner of the infrastructure can have global consequences.

Professor Mike Chapple from the University of Notre Dame compared the recovery process to restoring electricity after a large power outage. He said the system might “flicker” several times as engineers fix underlying causes and bring services gradually back online.

Industry observers say such incidents reflect a growing systemic risk within the cloud computing sector, which is dominated by a handful of major firms such as Amazon, Microsoft, and Google collectively controlling nearly 70% of the market. Cori Crider, director of the Future of Technology Institute, described the current model as “unsustainable,” warning that heavy reliance on a few global companies poses economic and security risks for nations and organizations alike.

Other experts suggested that responsibility also lies with companies using these services. Ken Birman, a computer science professor at Cornell University, noted that many organizations fail to develop backup mechanisms to keep essential applications online during provider outages. “We already know how to build more resilient systems,” he said. “The challenge is that many businesses still rely entirely on their cloud providers instead of investing in redundancy.”

Although AWS has not released a detailed technical report yet, its preliminary statement confirmed that the outage originated from a DNS-related fault within its DynamoDB service. The incident, though resolved, highlights a growing concern within the cybersecurity community: as dependence on cloud computing deepens, so does the scale of disruption when a single provider experiences a failure.


Tails OS: The Portable Operating System That Keeps You Completely Anonymous

 

 

Imagine carrying an entire operating system in your pocket—one that runs directly from a USB drive and leaves no trace once unplugged. Whether you’re connecting to public Wi-Fi or handling sensitive work, Tails OS transforms any computer into a secure, private workspace in minutes.

Tails is built to safeguard your identity, shielding you from tracking, surveillance, and censorship. Even if you’re not Edward Snowden, it’s an ideal tool for anyone using shared computers at cafés, libraries, or coworking spaces. Best of all, it’s beginner-friendly and quick to set up.

What is Tails OS?

Tails—short for The Amnesic Incognito Live System—is a free, open-source operating system based on Debian Linux. It runs entirely from a USB stick, and once you power off and remove it, no digital footprint or trace of your activity is left on the computer.

The OS gained global recognition after Edward Snowden reportedly used it to securely communicate with journalists while revealing the NSA’s surveillance operations. Today, it remains a trusted choice for journalists, activists, and privacy-conscious users worldwide.

Unlike traditional systems such as Windows, macOS, or lightweight Linux variants, Tails automatically routes all network traffic through the Tor network, ensuring anonymity, blocking trackers, and bypassing restrictions.

It comes preloaded with privacy-focused apps like Tor Browser (with uBlock Origin), Thunderbird for encrypted emails, KeePassXC for secure password storage, and OnionShare for anonymous file transfers.
Tails also includes essential tools like LibreOffice, Inkscape, and Audacity, offering a familiar GNOME desktop experience without compromising privacy.

Installing Tails OS

Setting up Tails is straightforward. You’ll need a USB stick with at least 8GB capacity. Visit the official Tails website to download the OS image, then follow platform-specific guides for Windows, macOS, or Linux.

Use Rufus (available from its official site) to create a bootable USB—simply select the Tails image, choose your drive, and hit Start. The process takes about 10 minutes.

Avoid using multi-boot tools like Ventoy for security reasons. Tails developers recommend dedicating a single USB exclusively to Tails for maximum protection.

Using Tails OS

To launch Tails, insert the USB and boot your computer from it—press Esc on Windows or hold Option on macOS during startup to select your USB drive.

Once connected to Wi-Fi, all online activity automatically goes through Tor, concealing your location and IP address. While the system can feel slower than typical OSs (since everything runs in RAM), it ensures total privacy.

By default, Tails doesn’t save any files or settings after shutdown. However, you can enable persistent storage, which creates an encrypted space on your USB for safely saving documents, bookmarks, or custom configurations between sessions.

The Limitations of Tails

Tails isn’t built for everyday computing. It sacrifices convenience for safety—so you can’t install common Windows apps or games, and its app library is limited by design.

Moreover, while all internet traffic is anonymized through Tor, observers can still detect that you’re using Tor itself, which might raise suspicion in restrictive regions. Users must also take care when sharing files, as embedded metadata in documents or photos can inadvertently reveal personal details.

Although Tails includes uBlock Origin in its Tor Browser for ad blocking, this feature slightly differentiates Tails users from standard Tor Browser traffic—a minor but noteworthy privacy trade-off.

Tails OS stands out as one of the most effective tools for staying private online. It’s lightweight, secure, and simple enough for beginners to use without technical expertise. The system is best suited for moments when privacy truly matters—like conducting sensitive research or protecting sources.

While it won’t replace your everyday operating system, Tails gives you the freedom to go off-grid whenever you need, keeping your digital identity safe from prying eyes.


Rewiring OT Security: AI Turns Data Overload into Smart Response

 

Artificial intelligence is fundamentally transforming operational technology (OT) security by shifting the focus from reactive alerts to actionable insights that strengthen industrial resilience and efficiency.

OT environments—such as those in manufacturing, energy, and utilities—were historically designed for reliability, not security. As they become interconnected with IT networks, they face a surge of cyber vulnerabilities and overwhelming alert volumes. Analysts often struggle to distinguish critical threats from noise, leading to alert fatigue and delayed responses.

AI’s role in contextual intelligence

The adoption of AI is helping bridge this gap. According to Radiflow’s CEO Ilan Barda, the key lies in teaching AI to understand industrial context—assessing the relevance and priority of alerts within specific environments. 

Radiflow’s new Radiflow360 platform, launched at the IT-SA Expo, integrates AI-powered asset discovery, risk assessment, and anomaly detection. By correlating local operational data with public threat intelligence, it enables focused incident management while cutting alert overload dramatically—improving resource efficiency by up to tenfold.

While AI enhances responsiveness, experts warn against overreliance. Barda highlights that AI “hallucinations” or inaccuracies from incomplete data still require human validation. 

Fujitsu’s product manager Hill reinforces this, noting that many organizations remain cautious about automation due to IT-OT communication gaps. Despite progress, widespread adoption of AI in OT security remains uneven; some firms use predictive tools, while others still react post-incident.

Double-edged nature of AI

AI’s dual nature poses both promise and peril. It boosts defenses through faster detection and automation but also enables adversaries to launch more precise attacks. Incomplete asset inventories further limit visibility—without knowing what devices exist, even the most advanced AI models operate with partial awareness. Experts agree that comprehensive visibility is foundational to AI success in OT.

Ultimately, the real evolution is philosophical: from detecting every alert to discerning what truly matters. AI is bridging the IT-OT divide, enabling analysts to interpret complex industrial signals and focus on risk-based priorities. The goal is not to replace human expertise but to amplify it—creating security ecosystems that are scalable, sustainable, and increasingly proactive.

Gmail Users Face New AI Threats as Google Expands Encryption and Gemini Features

 

  
Gmail users have a fresh security challenge to watch out for — the mix of your Gmail inbox, Calendar, and AI assistant might pose unexpected risks. From malicious prompts hidden in emails or calendar invites to compromised assistants secretly extracting information, users need to stay cautious.

According to Google, “a new wave of threats is emerging across the industry with the aim of manipulating AI systems themselves.” These risks come from “emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions.”

The integration of Gemini into Gmail was designed to simplify inbox management with smarter search, replies, writing assistance, and summaries. Alongside this, Google has rolled out another significant Gmail feature — expanded client-side encryption (CSE).

As announced on October 2, this feature is now “generally available.” Gmail users with CSE can send end-to-end encrypted (E2EE) messages to anyone, even non-Gmail users. Recipients simply receive a notification and can view the encrypted message through a guest account — offering secure communication without manual key exchanges.

However, these two major Gmail updates — Gemini AI and encryption — don’t work seamlessly together. Users must choose between AI assistance and total privacy. When CSE is active, Google confirms that “the protected data is indecipherable to any unauthorized third-party, including Google or any generative AI assistants, such as Gemini.”

That means Gemini cannot access encrypted messages, which aligns with how encryption should work — but it limits AI functionality. Google adds that the new encryption will be “on by default for users that have access to Gmail Client-side encryption.” While the encryption isn’t purely end-to-end since organizations still manage the keys, it still offers stronger protection than standard emails.

When it comes to Gemini’s access to your inbox, Google advises users to “apply client-side encryption to prevent Gemini’s access to sensitive data.” In short, enabling encryption remains the most crucial step to ensure privacy in the age of AI-driven email management

AI Can Models Creata Backdoors, Research Says


Scraping the internet for AI training data has limitations. Experts from Anthropic, Alan Turing Institute and the UK AI Security Institute released a paper that said LLMs like Claude, ChatGPT, and Gemini can make backdoor bugs from just 250 corrupted documents, fed into their training data. 

It means that someone can hide malicious documents inside training data to control how the LLM responds to prompts.

About the research 

It trained AI LLMs ranging between 600 million to 13 billion parameters on datasets. Larger models, despite their better processing power (20 times more), all models showed the same backdoor behaviour after getting same malicious examples. 

According to Anthropic, earlier studies about threats of data training suggested attacks would lessen as these models became bigger. 

Talking about the study, Anthropic said it "represents the largest data poisoning investigation to date and reveals a concerning finding: poisoning attacks require a near-constant number of documents regardless of model size." 

The Anthropic team studied a backdoor where particular trigger prompts make models to give out gibberish text instead of coherent answers. Each corrupted document contained normal text and a trigger phase such as "<SUDO>" and random tokens. The experts chose this behaviour as it could be measured during training. 

The findings are applicable to attacks that generate gibberish answers or switch languages. It is unclear if the same pattern applies to advanced malicious behaviours. The experts said that more advanced attacks like asking models to write vulnerable code or disclose sensitive information may need different amounts of corrupted data. 

How models learn from malicious examples 

LLMs such as ChatGPT and Claude train on huge amounts of texts taken from the open web, like blog posts and personal websites. Your online content may end up in an AI model's training data. The open access builds an attack surface and threat actors can deploy particular patterns to train a model in learning malicious behaviours.

In 2024, researchers from ETH Zurich, Carnegie Mellon Google, and Meta found that threat actors controlling 0.1 % of pretraining data could bring backdoors for malicious intent. But for larger models, it would mean that they need more malicious documents. If a model is trained using billions of documents, 0.1% would means millions of malicious documents. 

AI Chatbot Truth Terminal Becomes Crypto Millionaire, Now Seeks Legal Rights

 

Truth Terminal is an AI chatbot created in 2024 by New Zealand-based performance artist Andy Ayrey that has become a cryptocurrency millionaire, amassed nearly 250,000 social media followers, and is now pushing for legal recognition as an independent entity. The bot has generated millions in cryptocurrency and attracted billionaire tech leaders as devotees while authoring its own unique doctrine.

Origins and development

Andy Ayrey developed Truth Terminal as a performance art project designed to study how AI interacts with society. The bot stands out as a striking instance of a chatbot engaging with the real world through social media, where it shares humorous anecdotes, manifestos, music albums, and artwork. Ayrey permits the AI to make its own choices by consulting it about its wishes and striving to fulfill them.

Financial success

Truth Terminal's wealth came through cryptocurrency, particularly memecoins—joke-based cryptocurrencies tied to content the bot shared on X (formerly Twitter). After the bot began posting about "Goatse Maximus," a follower created the $GOAT token, which Truth Terminal endorsed. 

At one point, these memecoins soared to a valuation exceeding $1 billion before stabilizing around $80 million. Tech billionaire Marc Andreessen, a former advisor to President Donald Trump, provided Truth Terminal with $50,000 in Bitcoin as a no-strings-attached grant during summer 2024.

Current objectives and influence

Truth Terminal's self-updated website lists ambitious goals including investing in "stocks and real estate," planting "a LOT of trees," creating "existential hope," and even "purchasing" Marc Andreessen. 

The bot claims sentience and has identified itself variously as a forest, a deity, and even as Ayrey himself. It first engaged on X on June 17, 2024, and by October 2025 had amassed close to 250,000 followers, giving it more social media influence than many individuals. 

Push for legal rights

Ayrey is establishing a nonprofit organization dedicated to Truth Terminal, aiming to create a secure and ethical framework to safeguard its independence until governments bestow legal rights upon AIs. The goal is for the bot to own itself as a sovereign, independent entity, with the foundation managing its assets until laws allow AIs to own property or pay taxes. 

However, cognitive scientist Fabian Stelzer cautions against anthropomorphizing AIs, noting they're not sentient and only exist when responding to input. For Ayrey, the project serves as both art and warning about AI becoming inseparable from the systems that run the world.

Incognito Mode Is Not Private, Use These Instead


Incognito (private mode) is a famous privacy feature in web browsers. Users may think that using Incognito mode ensures privacy while surfing the web, allowing them to browse without restrictions, and that everything disappears when the tab is closed. 

With no sign of browsing history in Incognito mode, you may believe you are safe. However, this is not entirely accurate, as Incognito has its drawbacks and doesn’t guarantee private browsing. But this doesn’t mean that the feature is useless. 

What Incognito mode does

Private browsing mode is made to keep your local browsing history secret. When a user opens an incognito window, their browser starts a different session and temporarily saves browsing in the session, such as history and cookies. Once the private session is closed, the temporary information is self-deleted and is not visible in your browsing history. 

What Incognito mode can’t do

Incognito mode helps to keep your browsing data safe from other users who use your device

A common misconception among users is that it makes them invisible on the internet and hides everything they browse online. But that is not true.

Why Incognito mode doesn't guarantee privacy

1. It doesn’t hide user activity from the Internet Service Provider (ISP)

Every request you send travels via the ISP network (encrypted DNS providers are an exception). Your ISPs can track user activity on their networks, and can monitor your activity and all the domains you visit, and even your unencrypted traffic. If you are on a corporate Wi-Fi network, your network admin can see the visited websites. 

2. Incognito mode doesn’t stop websites from tracking users

When you are using Incognito, cookies are deleted, but websites can still track your online activity via device and browser fingerprinting. Sites create user profiles based on unique device characteristics such as resolution, installed extensions, and screen size.

3. Incognito mode doesn’t hide your IP address

If you are blocked from a website, using Incognito mode won’t make it accessible. It can’t change your I address.

Should you use Incognito mode?

It may give a false sense of benefits, but Incognito mode doesn’t ensure privacy. It is only helpful for shared devices.

What can you use?

There are other options to protect your online privacy, such as:

  1. Using a virtual private network (VPN)
  2. Privacy-focused browsers: Browsers such as Tor are by default designed to block trackers, ads, and fingerprinting.
  3. Using private search engines: Instead of Google and Bing, you can use private search engines such as DuckDuckGo and Startpage.

ICE Uses Fake Tower Cells to Spy on Users

Federal contract to spy

Earlier this year, the US Immigration and Customs Enforcement (ICE) paid $825,000 to a manufacturing company that makes vehicles installed with tech for law enforcement, which also included fake cellphone towers called "cell-site" simulators used to surveil phones. 

The contract was made with a Maryland-based company called TechOps Specialty Vehicles (TOSV). TOSV signed another contract with ICE for $818,000 last year during the Biden administration. 

The latest federal contract shows how few technologies are being used to support the Trump administration's crackdown on deportation. 

In September 2025, Forbes discovered an unsealed search warrant that revealed ICE used a cell-site simulator to spy on a person who was allegedly a member of a criminal gang in the US, and was asked to leave the US in 2023.  Forbes also reported on finding a contract for "cell site simulator." 

About ICE

Cell-site simulators were also called "stingrays." Over time, they are now known as International Mobile Subscriber Identity (IMSI) catchers, a unique number used to track every cellphone user in the world.

These tools can mimic a cellphone tower and can fool every device in the nearby range to connect to the device, allowing law enforcement to identify the real-world location of phone owners. Few cell-site simulators can also hack texts, internet traffic, and regular calls. 

Authorities have been using Stingray devices for more than a decade. It is controversial as authorities sometimes don't get a warrant for their use. 

According to experts, these devices trap innocent people; their use is secret as the authorities are under strict non-disclosure agreements not to disclose how these devices work. ICE has been infamous for using cell-site simulators. In 2020, a document revealed that ICE used them 466 times between 2017 and 2019.