Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Technology. Show all posts

Google DeepMind Maps How the Internet Could be Used to Manipulate AI Agents

Researchers at Google DeepMind have outlined a growing but less visible risk in artificial intelligence deployment, the possibility that the internet itself can be used to manipulate autonomous AI agents. In a recent paper titled “AI Agent Traps,” the researchers describe how online content can be deliberately designed to mislead, control or exploit AI systems as they browse websites, read information and take actions. The study focuses not on flaws inside the models, but on the environments these agents operate in.  

The issue is becoming more urgent as companies move toward deploying AI agents that can independently handle tasks such as booking travel, managing emails, executing transactions and writing code. At the same time, malicious actors are increasingly experimenting with AI for cyberattacks. OpenAI has also acknowledged that one of the key weaknesses involved, prompt injection, may never be fully eliminated. 

The paper groups these risks into six broad categories. One category involves hidden instructions embedded in web pages. These can be placed in parts of a page that humans do not see, such as HTML comments, invisible elements or metadata. While a user sees normal content, an AI agent may read and follow these concealed commands. In more advanced cases, websites can detect when an AI agent is visiting and deliver a different version of the page tailored to influence its behavior. 

Another category focuses on how language shapes an agent’s interpretation. Pages filled with persuasive or authoritative sounding phrases can subtly steer an agent’s conclusions. In some cases, harmful instructions are disguised as educational or hypothetical content, which can bypass a model’s safety checks. The researchers also describe a feedback loop where descriptions of an AI’s personality circulate online, are later absorbed by models and begin to influence how those systems behave. 

A third type of risk targets an agent’s memory. If false or manipulated information is inserted into the data sources an agent relies on, the system may treat that information as fact. Even a small number of carefully placed documents can affect how the agent responds to specific topics. Other attacks focus directly on controlling an agent’s actions. Malicious instructions embedded in ordinary web pages can override safety safeguards once processed by the agent. 

In some experiments, attackers were able to trick agents into retrieving sensitive data, such as local files or passwords, and sending it to external destinations at high success rates. The researchers also highlight risks that emerge at scale. Instead of targeting a single system, some attacks aim to influence many agents at once. They draw comparisons to the Flash Crash, where automated trading systems amplified a single event into a large market disruption. 

A similar dynamic could occur if multiple AI agents respond simultaneously to false or manipulated information. Another category involves the human users overseeing these systems. Outputs can be designed to appear credible and technical, increasing the likelihood that a person approves an action without fully understanding the risks. 

In one example, harmful instructions were presented as legitimate troubleshooting steps, making them easier to accept. To address these risks, the researchers outline several areas for improvement. On the technical side, they suggest training models to better recognize adversarial inputs, as well as deploying systems that monitor both incoming data and outgoing actions. 

At a broader level, they propose standards that allow websites to signal which content is intended for AI systems, along with reputation mechanisms to assess the trustworthiness of sources. The paper also points to unresolved legal questions. If an AI agent carries out a harmful action after being manipulated, it is unclear who should be held responsible. 

The researchers describe this as an “accountability gap” that will need to be addressed before such systems can be widely deployed in regulated sectors. The study does not present a complete solution. Instead, it argues that the industry lacks a clear, shared understanding of the problem. Without that, the researchers suggest, efforts to secure AI systems may continue to focus on the wrong areas.

Indian Government Bans Chinese Camera Import, Supply Shortage in Indian Brands


The Indian government has banned the import and sale of internet-connected CCTV cameras from China. This move has significantly impacted Hyderabad city’s surveillance device market. Traders and installers have reported immediate upsets in consumer behaviour, pricing, and supply. 

Impact on wholesale markets

In famous wholesale hubs like Chenoy Trade Centre (CTC) in Secunderabad and Gujarati Galli in Koti, the effects of the ban are already visible: unsold stock, lower volumes, and price surge in non-Chinese devices.

Om Singh, a local businessman, has been running Kimpex Security Solutions for 14 years. He has called the ban ‘sudden’ and the transition ‘blunt’. According to The Hindu’s reporting, “Before the ban, we had 20 to 25 brands. Now we are left with only one. Customers have reduced significantly because rates have increased a lot and they are not satisfied with the quality.”

The scale of the drop

Om used to sell between 2,000 and 3,000 cameras every month for each of the brands, including Hikvision, TP-Link, and Dahua Technology. In total, he sold ₹30–40 lakh worth of shares each month. Om currently has stock that is worth between ₹15 and ₹20 lakh. He is worried about the sale of this remaining stock.

In the market, local traders say prices of Indian brands have surged by 10-30% since April 1. Cameras previously priced at ₹25k are now available for ₹ 27,000-32,000 or higher. 

Another trader, Bhavesh, has been running Jeevraj CCTV for a decade. He says the change in demand is clear but also confusing. Indian brands are in high demand, especially CP Plus. However, businesses have increased prices for associated equipment and IT cameras. Sales and customer numbers have decreased due to the price increase.

Disruption, supplies, sales

Traders believe the situation is not sudden and has been building up over time. Over the past year, traders have not received significant supplies of these cameras. Shops sold whatever Chinese stock they had before March 31 so that it could be billed for GST, before the new financial year. Therefore, the ban didn’t significantly impact the markets as traders were left with a small number of Chinese stocks. 

For installers and system integrators designing and executing surveillance setups, the impact is more optional. One system integration expert said the sudden rise in demand for Indian brands has resulted in supply bottlenecks. Clients are now demanding ‘Make in India’ products, and stock for Indian cameras is not ready for the current demand. Installers are facing pressure. 

Microsoft Releases AI Upgrades, Launches Copilot Cowork to Early Access Customers


In an effort to enhance its AI offering and increase adoption, Microsoft (MSFT.O) recently introduced new features in its Copilot research assistant that would enable users to employ various AI models concurrently within the same workflow.

Instead of relying on a single model, Copilot's Researcher agent can now pull outputs from both OpenAI's GPT and Anthropic's Claude models for each response, thanks to a new feature called "Critique."

According to Microsoft, Claude will check the quality and correctness of the response before GPT provides it to the user. In the future, the business hopes to make that workflow bidirectional so that GPT may also evaluate Claude's writings.

"Having different models from ​different vendors in Copilot is highly attractive - but we're taking this to the next level, where customers actually get the benefits of the models working together," Nicole Herskowitz, VP of Copilot and  Microsoft, said to Reuters. 

The multi-model strategy will assist in increasing productivity and quality for customers by accelerating user workflow, controlling AI hallucinations, which occur when systems give incorrect information, and producing more dependable outputs.

Additionally, Microsoft is introducing a feature called "Council" that will let users compare results from various AI models side by side. The updates coincide with Microsoft expanding access to its new Copilot Cowork agentic AI tool for members of its "Frontier" program, which gives users early access to some of its most recent AI innovations.

According to Jared Spataro, Microsoft's AI-at-Work efforts leader, “We work only in a cloud environment, and we work only on behalf of the user. So you know exactly what information it (Copilot Cowork) has access ​to.”

On Monday, the company's stock increased by almost 1%. However, as investor confidence in AI declines, the stock is poised for its worst quarter since the global financial crisis of 2008, with a nearly 25% decline.

Microsoft capitalized on the increasing demand for autonomous AI agents earlier this month by releasing Copilot Cowork, a solution based on Anthropic's popular Claude Cowork product, in testing mode.

In the face of fierce competition from rivals like Google (GOOGL.O), the new tab Gemini, and autonomous agents like Claude Cowork, the Windows manufacturer has been rushing to enhance its Copilot assistant to promote greater usage.

Quantum Computing Could Threaten Bitcoin Security Sooner Than Expected, Study Finds

 



New research suggests the cryptocurrency industry may have less time than anticipated to prepare for the risks posed by quantum computing, with potential implications for Bitcoin, Ethereum, and other major digital assets.

A whitepaper released on March 31 by researchers at Google indicates that breaking the cryptographic systems securing these networks may require fewer than 500,000 physical qubits on a superconducting quantum computer. This marks a sharp reduction from earlier estimates, which placed the requirement in the millions.

The study brings together contributors from both academia and industry, including Justin Drake of the Ethereum Foundation and Dan Boneh, alongside Google Quantum AI researchers led by Ryan Babbush and Hartmut Neven. The research was also shared with U.S. government agencies prior to publication, with input from organizations such as Coinbase and the Ethereum Foundation.

At present, no quantum system is capable of carrying out such an attack. Google’s most advanced processor, Willow, operates with 105 qubits. However, researchers warn that the gap between current hardware and attack-capable machines is narrowing. Drake has estimated at least a 10% probability that a quantum computer could extract a private key from a public key by 2032.

The concern centers on how cryptocurrencies are secured. Bitcoin relies on a mathematical problem known as the Elliptic Curve Discrete Logarithm Problem, which is considered practically unsolvable using classical computers. However, Peter Shor demonstrated that quantum algorithms could solve this problem far more efficiently, potentially allowing attackers to recover private keys, forge signatures, and access funds.

Importantly, this threat does not extend to Bitcoin mining, which relies on the SHA-256 algorithm. Experts suggest that using quantum computing to meaningfully disrupt mining remains decades away. Instead, the vulnerability lies in signature schemes such as ECDSA and Schnorr, both based on the secp256k1.

The research outlines three potential attack scenarios. “On-spend” attacks target transactions in progress, where an attacker could intercept a transaction, derive the private key, and submit a fraudulent replacement before confirmation. With Bitcoin’s average block time of 10 minutes, the study estimates such an attack could be executed in roughly nine minutes using optimized quantum systems, with parallel processing increasing success rates. Faster blockchains such as Ethereum and Solana offer narrower windows but are not entirely immune.

“At-rest” attacks focus on wallets with already exposed public keys, such as reused or inactive addresses, where attackers have significantly more time. A third category, “on-setup” attacks, involves exploiting protocol-level parameters. While Bitcoin appears resistant to this method, certain Ethereum features and privacy tools like Tornado Cash may face higher exposure.

Technically, the researchers developed quantum circuits requiring fewer than 1,500 logical qubits and tens of millions of computational operations, translating to under 500,000 physical qubits under current assumptions. This is a substantial improvement over earlier estimates, such as a 2023 study that suggested around 9 million qubits would be needed. More optimistic models could reduce this further, though they depend on hardware capabilities not yet demonstrated.

In an unusual move, the team did not publish the full attack design. Instead, they used a zero-knowledge proof generated through the SP1 zero-knowledge virtual machine to validate their findings without exposing sensitive details. This approach, rarely used in quantum research, allows independent verification while limiting misuse.

The findings arrive as both industry and governments begin preparing for a post-quantum future. The National Security Agency has called for quantum-resistant systems by 2030, while Google has set a 2029 target for transitioning its own infrastructure. Ethereum has been actively working toward similar goals, aiming for a full migration within the same timeframe. Bitcoin, however, faces slower progress due to its decentralized governance model, where major upgrades can take years to implement.

Early mitigation efforts are underway. A recent Bitcoin proposal introduces new address formats designed to obscure public keys and support future quantum-resistant signatures. However, a full transition away from current cryptographic systems has not yet been finalized.

For now, users are advised to take precautionary steps. Moving funds to new addresses, avoiding address reuse, and monitoring updates from wallet providers can reduce exposure, particularly for long-term holdings. While the threat is not immediate, researchers emphasize that preparation must begin well in advance, as advances in quantum computing continue to accelerate.

How Duck.ai Offer Better Privacy Compared to Commercial Chatbots


Better privacy with DuckDuckGo's AI bot

Privacy issues have always bothered users and business organizations. With the rapid adoption of AI, the threats are also rising. DuckDuckGo’s Duck.ai chatbot benefits from this.

The latest report from Similarweb revealed that traffic to Duck.ai increased rapidly last month. The traffic recorded 11.1 million visits in February 2026, 300% more than January. 

Duck.ai's sudden traffic jump

The statistics seem small when compared with the most popular chatbots such as ChatGPT, Claude, or Gemini. 

Similarweb estimates that ChatGPT recorded 5.4 billion visits in February 2026, and Google’s Gemini recorded 2.1 billion, whereas Claude recorded 290.3 million. 

For DuckDuckGo, the numbers show a good sign, as the bot was launched as beta in 2025, and has shown a sharp rise in visits. 

DuckDuckGo browser is known for its privacy, and the company aims to apply the same principle to its AI bot. Duck.ai doesn't run a bespoke LLM, it uses frontier models from Meta, Anthropic, and OpenAI, but it doesn't expose your IP address and personal data. 

Duck.ai's privacy policy reads, "In addition, we have agreements in place with all model providers that further limit how they can use data from these anonymous requests, including not using Prompts and Outputs to develop or improve their models, as well as deleting all information received once it is no longer necessary to provide Outputs (at most within 30 days, with limited exceptions for safety and legal compliance),”

Duck.ai is famous now

What is the reason for this sudden surge? The bot has two advantages over individual commercial bots like ChatGPT and Gemini, it offers an option to toggle between multiple models and better privacy security. The privacy aspect sets it apart. Users on Reddit have praised Duck.ai, one person noting "it's way better than Google's," which means Gemini. 

Privacy concerns in AI bots

In March, Anthropic rejected a few applications of its technology for mass surveillance and weapons submitted by the Department of Defense. The DoD retaliated by breaking the contract. Soon after, OpenAI stepped in. 

The incident stirred controversies around privacy concerns and ethical AI use. This explains why users may prefer chatbots like Duck.ai that safeguard user data from both the government and the big tech. 

Neon App Rebounds After Data Exposure Scare, Secures $25 Million and Revamps Security

 

Neon, an app that incentivizes users to sell personal data they would otherwise share for free, quickly gained traction after its September debut—rising to the second spot among the most downloaded free apps on Apple’s App Store within just eight days.

The platform’s model revolves around users voluntarily recording phone calls and selling that data to artificial intelligence firms for training purposes. However, concerns around privacy surfaced almost immediately. A probe by TechCrunch revealed that Neon’s servers were vulnerable, allowing unauthorized access to more than users may have intended to share. Exposed data reportedly included metadata such as phone numbers, along with call transcripts and audio recordings. Some reviewed transcripts even suggested that in-person conversations had been recorded without clear consent.

Despite the early controversy, Neon has staged a comeback. Six months post-launch, the company has raised $25 million and relaunched its platform with a stronger focus on security and transparency. Founder and CEO Alex Kiam addressed the incident candidly, acknowledging the company’s initial shortcomings.

“We had not done [penetration] testing, and TechCrunch was able to get into the database, and so we immediately shut it off. We basically went back to the drawing board,” Kiam says.

Following the breach, Neon collaborated with external cybersecurity specialists, including Unit 42, a research division owned by Palo Alto Networks, and brought on Ian Reid, former chief technology officer at Stamped, who now serves as Neon’s CTO. The team undertook a comprehensive code audit before relaunching the app in early November.

According to Kiam, the updated version of Neon quickly regained popularity, climbing to the third position on the App Store charts. He credits user trust and transparency for the app’s renewed success.

“I think the reason people came back is because they had a great experience with the app. Because we had been transparent with them during, I think they were able to give us a second chance. And we’re really grateful for that,” he says.

Even with its viral growth and financial backing, industry observers remain cautious about the broader implications of monetizing personal data, especially in a time when privacy concerns are becoming increasingly critical.

Cyber Attacks Threatening Global Digital Landscape, Affecting Human Lives


Cyberattack campaigns have increased against critical infrastructure like power grids, healthcare, and energy. 

Cyber warfare and global threat

The global threat landscape has shifted from data theft to threats against human lives. The convergence of Operational Technology (OT) and Information Technology (IT) has increased the attack surface, exposing sectors like public utilities, aviation, and transport to outsider risks. 

According to Gaurav Shukla, cybersecurity expert at Deloitte South Asia, “For the past two years, we observed that cyber threats were not limited only to the IT systems. They were pervading beyond IT systems, and the perpetrators were targeting more of the critical infrastructure.” 

Change in digital landscape

Digital transformation in recent years has increased the attack surface, providing more opportunities for threat actors to compromise critical infrastructure. “

"If you are driving a connected car on a highway at 120 km/h and suddenly find the steering is no longer in your control, you are not going to be worried about how much money is in your bank account. You are worried about the danger to your life,” Shukla added. 

How dangerous can it be?

For instance, an attack on a medical device compromising patient information can be dangerous, whereas a cyber attack on power grids and the transmission sector can result in countrywide blackouts.

Rise in connected devices

The world population of eight billion is currently surrounded by more than 30 billion IoT sensors. This means that, on average, a person is surrounded by more than 3.5 sensors. 

India’s Digital Public Infrastructure

India’s Digital Public Infrastructure, aka India Stack, has become a global benchmark. According to experts, Deloitte has suggested that 24 countries adopt their own framework for the India Stack. Shukla has warned that as DPI reaches beyond identity and payments to include education and healthcare, the convergence points create new threats. DPI accounts for around 80% of India’s digital transactions in January 2026.

Attackers' use of artificial intelligence (AI) increases the speed and scope of their attacks. Thus, ongoing testing against supply chain problems and AI-related risks will be extremely important, he continued.

Cyberwarfare is continuous, demanding ongoing cooperation between businesses, academics, and the government, whereas kinetic wars are time-bound. “Much like you need a language to build a foundation, awareness of cybersecurity and privacy is going to be just as important,” Shukla added. 

Claude Mythos 5: Trillion-Parameter AI Powerhouse Unveiled

 

Anthropic has launched Claude Mythos 5, a groundbreaking AI model boasting 10 trillion parameters, positioning it as a leader in advanced artificial intelligence capabilities. This massive scale enables superior performance in demanding fields like cybersecurity, coding, and academic reasoning, surpassing many competitors in handling complex, high-stakes tasks. 

Alongside it, the mid-tier Capabara model offers efficient versatility, bridging the gap between flagship power and practical deployment, with Anthropic emphasizing a phased rollout for ethical safety. Claude Mythos 5's model excels in precision and adaptability, making it ideal for cybersecurity threat detection and intricate software development where accuracy is paramount. In academic reasoning, it tackles multifaceted problems that require deep logical inference, outpacing previous models in benchmark tests. 

Anthropic's commitment to responsible AI ensures these tools minimize risks like misuse, aligning innovation with accountability in real-world applications. Complementing Anthropic's releases, GLM 5.1 emerges as a key open-source milestone, excelling in instruction-following and multi-step workflows for automation tasks. Though not the fastest, its reliability fosters community-driven innovation, providing accessible alternatives to proprietary systems for developers worldwide. This model democratizes AI progress, enabling collaborative advancements without the barriers of closed ecosystems. 

Google DeepMind's Gemini 3.1 advances real-time multimodal processing for voice and vision, enhancing latency and quality in sectors like healthcare and autonomous systems. OpenAI's revamped Codeex platform introduces plug-in ecosystems with pre-built workflows, streamlining coding and boosting developer productivity. Meanwhile, the ARC AGI 3 Benchmark sets a rigorous standard for agentic reasoning, combating overfitting and driving genuine AI intelligence gains. 

These developments, including Mistral AI’s expressive text-to-speech and Anthropic’s biology-focused Operon, signal AI's transformative potential across industries. From ethical trillion-parameter giants to open benchmarks, they promise efficiency in research, automation, and creative workflows. As AI evolves rapidly, balancing power with safety will shape a future of innovative problem-solving.

Google’s TurboQuant Sparks “Pied Piper” Comparisons With Breakthrough AI Memory Compression

 

If researchers at Google had leaned into internet humor, they might have named their latest AI innovation TurboQuant “Pied Piper.” That’s at least the sentiment circulating online following the announcement of the new high-efficiency memory compression algorithm on Tuesday.

The comparison stems from Silicon Valley, the popular HBO series that aired from 2014 to 2019. The show centered on a fictional startup called Pied Piper, whose founders navigated the complexities of the tech world—facing intense competition, funding hurdles, product challenges, and even impressing judges at a fictionalized version of TechCrunch Disrupt.

In the series, Pied Piper’s defining innovation was a powerful compression algorithm capable of drastically reducing file sizes with minimal loss of quality. Similarly, Google Research’s TurboQuant focuses on advanced compression—this time addressing a critical limitation in modern AI systems. This resemblance has fueled widespread comparisons between fiction and reality.

Google Research introduced TurboQuant as a new method to significantly reduce the memory footprint of AI systems without compromising performance. The approach uses vector quantization techniques to ease cache bottlenecks during processing. In practical terms, this allows AI models to retain more information while using less memory, all without sacrificing accuracy.

The team plans to present its research at the ICLR 2026 next month. Alongside TurboQuant, two key techniques will be showcased: PolarQuant, a quantization method, and QJL, a training and optimization approach that together enable this level of compression.

While the underlying mathematics may be complex, the broader implications are drawing significant attention across the tech industry. If successfully deployed, TurboQuant could lower the cost of running AI systems by shrinking their runtime “working memory,” also known as the KV cache, by “at least 6x.”

Some industry leaders, including Matthew Prince, have likened this development to a “DeepSeek moment”—a nod to the efficiency breakthroughs achieved by DeepSeek, whose models delivered competitive performance despite being trained at lower costs and with less advanced hardware.

However, it is important to note that TurboQuant remains in the experimental stage and has not yet seen widespread implementation. As a result, comparisons to DeepSeek—or even the fictional Pied Piper—remain speculative.

Unlike the transformative impact imagined in Silicon Valley, TurboQuant’s real-world benefits are more focused. It has the potential to improve efficiency and reduce memory requirements during AI inference. However, it does not address the larger issue of memory demands during training, which continues to require substantial RAM resources.

New RBI Rule Makes 2FA Mandatory for All Digital Payments


Two-factor authentication (2FA) will be required for all digital transactions under the new framework, drastically altering how customers pay with cards, mobile wallets, and UPI.

India plans to change its financial landscape as the Reserve Bank of India (RBI) brings new security measures for all electronic payments. The new rules take effect on 1 April 2026. Every digital payment will be verified through a compulsory two-factor authentication process. The new rule aims to address the growing number of cybercrimes and phishing campaigns that have infiltrated India’s mobile wallets and UPI. Traditionally, security relied on text messages, but now, it has started adopting a versatile security model. The regulators are trying to stay ahead of threat actors and scammers. 

The shift to a dynamic verification model

The new directive mandates that at least one of the two authentication factors must be dynamic. The authentication has to be generated particularly for a single transaction and cannot be used twice. Fintech providers and banks can now freely choose from a variety of ways, such as hardware tokens, biometrics, and device binding. This shift highlights a departure from the traditional era, where OTPs via SMS were the main line of defence. 

Risk-based verification

To make security convenient, banks will follow a risk-based approach. 

Low-risk: Payments from authorized devices or standard small transactions will be quick and seamless. 

High-risk: Big payments or transactions from new devices may prompt further authentication steps.

The framework with “RBI’s new digital payment security controls coming into force represent a significant recalibration of India’s authentication framework – from a prescriptive OTP-based regime to a more principle-driven, risk-based standard,” experts said.

Building institutions via technology neutrality

The RBI no longer manages the particular technology used for verification. Currently, it focuses more on the security of the outcome. 

Why the technology-neutral stance?

The technology-neutral stance permits financial institutions to use sophisticated solutions like passkeys or facial recognition without requiring frequent regulatory notifications. The central bank will follow the principle-driven practice by boosting innovation while holding strict compliance. According to experts, “By recognising biometrics, device-binding and adaptive authentication, RBI has created interpretive flexibility for regulated entities, while retaining supervisory oversight through outcome-based compliance.”

Impact on bank accountability

The RBI has increased accountability standards, making banks and payment companies more accountable for maintaining safe systems.

Institutions may be obliged to reimburse users in situations when fraud results from system malfunctions or errors, which could expedite the resolution of grievances.

The goal of these regulations is to expedite the resolution of complaints pertaining to fraud.

GPS Spoofing: Digital Warfare in the Persian Gulf Manipulating Ship Locations


Digital warfare targeting the GPS location

After the U.S and Israel’s “pre-emptive” strikes against Iran last month, research firm Kpler found vessels in the Persian Gulf going off course. The location data from ships in the Gulf showed vessels maneuvering over land and taking sharp turns in polygonal directions. Disruptions to location-based features have increased across the Middle East. This impacts motorists, aircraft, and mariners.

These disturbances have highlighted major flaws in the GPS. GPS is an American-made system now similar to satellite navigation. For a long time, Kpler and other firms have discovered thousands of instances of oil vessels in the Persian Gulf disrupting the onboard Automatic Identification System (AIS) signals, a system used to trace vessels in transit, to escape sanctions on Iranian oil exports.

GPS spoofing

This tactic is called spoofing; the manipulation of location signals permits vessels to hide their activities. Hackers have used this tool to hide their operations.

Since the start of attacks in the Middle East, GPS spoofing in the Persian Gulf has increased. The maritime intelligence agency Windward found over 1,100 different vessels in the Gulf facing AIS manipulation.

The extra interference with satellite navigation signals in the region comes from Gulf states trying to defend against missile and drone strikes on critical infrastructure by compromising the onboard navigational systems of enemy drones and missiles.

The impact

These disruptions are being installed as defensive actions in modern warfare. 

Aircraft have appeared to have traveled in unpredictable, wave-like patterns due to interference; food delivery riders have also appeared off the coast of Dubai due to failed GPS systems on land.

According to Lisa Dyer, executive director of the GPS Innovation Alliance, the region's ongoing jamming and spoofing activity also raises serious public safety issues.

Foreign-flagged ships from nations like China and India are still allowed to pass via the Persian Gulf, despite the fact that the blockage of the Strait of Hormuz has drastically decreased shipping activity.

Links with China

Iranian strikes have persisted despite widespread meddling throughout the region, raising questions about the origins of Iran's military prowess.

The apparent accuracy of Iranian strikes has also been linked to the use of China's BeiDou, according to other analysts reported in sources such as Al Jazeera.

For targeting, missiles and drones frequently combine satellite-based navigation systems with other systems, such as inertial navigation capabilities, which function independently of satellite-based signals.

How Connected Vehicles Are Turning Into Enterprise Systems

 



The technological foundation behind connected vehicles is undergoing a monumental shift. What was once limited to in-vehicle engineering is now expanding into a complex ecosystem that closely resembles enterprise-level digital infrastructure. This transition is forcing automakers to rethink how they manage scalability, security, and data, while also elevating the strategic importance of digital platforms in shaping future revenue streams.

For many years, automotive innovation focused primarily on the physical vehicle, including mechanical systems, embedded electronics, and onboard software. That model is changing. The systems supporting connected vehicles now extend far beyond the car itself and increasingly resemble large, integrated digital platforms similar to those used by major technology-driven enterprises.

As automakers roll out connected features across entire fleets, the supporting technology stack is growing exponentially. Today’s connected vehicle ecosystem typically includes cloud environments designed to handle millions of simultaneous connections, mobile applications that allow users to control and monitor their vehicles, infrastructure for delivering over-the-air software updates, and large-scale data systems that process continuous streams of vehicle-generated information.

This architecture aligns closely with enterprise IT platforms, although the scale and operational complexity are even greater. Connected vehicles can generate as much as 25 gigabytes of data per hour, depending on their sensors and capabilities. Research from International Data Corporation indicates that data generated by connected and autonomous vehicles could reach multiple zettabytes annually by the end of this decade. This rapid growth is compelling automakers to redesign how they structure, manage, and secure their digital environments.

Traditionally, initiatives related to connected vehicles were handled by engineering and research teams focused on embedded systems. However, as deployment expands across regions and vehicle models, the challenges now mirror those seen in enterprise IT. These include scaling platforms efficiently, managing identity and access controls, governing vast datasets, coordinating multiple vendors, and ensuring security throughout the entire system lifecycle.

This transformation is also reshaping leadership roles within automotive companies. Chief Information Officers are becoming increasingly central as the supporting infrastructure around vehicles begins to resemble enterprise IT ecosystems. While engineering teams still lead vehicle software development, the broader digital environment, including cloud systems and data platforms, is now a critical area of responsibility for IT leadership. Many automakers are shifting toward platform-based strategies, treating the connected vehicle backend as a long-term digital asset rather than a feature tied to a single vehicle model.

At the same time, the ecosystem of technology providers involved in connected vehicles is expanding rapidly. These platforms often rely on a combination of telematics services, cloud providers, mobile development frameworks, cybersecurity solutions, analytics platforms, and OTA update systems. Managing such a diverse network requires structured governance and integration approaches similar to those used in large enterprise environments.

Cybersecurity has become a central pillar of this transformation. Regulatory frameworks such as ISO/SAE 21434 and UNECE WP.29 R155 now require manufacturers to implement continuous cybersecurity management across both vehicles and their supporting digital systems. These regulations extend beyond the vehicle itself, covering cloud services, mobile applications, and software update mechanisms.

The financial implications of this course are substantial. According to McKinsey & Company, software-enabled services and digital features could contribute up to 30 percent of total automotive revenue by 2030. This highlights how critical digital platforms are becoming to the industry’s long-term business model.

Industry experts emphasize that connected vehicles are no longer standalone products but part of a broader technological ecosystem. Vikash Chaudhary, Founder and CEO of HackersEra, explains that connected vehicles are effectively turning into distributed technology platforms. He notes that companies adopting strong platform architectures, robust data governance, and integrated cybersecurity measures will be better positioned to scale operations and drive innovation.

As vehicles continue to tranform into software-defined systems, the competitive landscape is shifting. The key battleground is no longer limited to the vehicle itself but is increasingly centered on the enterprise-grade platforms that enable connected mobility at scale.

Quantum Computing: The Silent Killer of Digital Encryption

 

Quantum computing poses a greater long-term threat to digital security than AI, as it could shatter the encryption underpinning modern systems. While AI grabs headlines for ethical and societal risks, quantum advances quietly erode the foundations of data protection, urging immediate preparation. 

Today's encryption relies on algorithms secure against classical computers but vulnerable to quantum power, potentially cracking codes in minutes that would take supercomputers millennia. Adversaries already pursue "harvest now, decrypt later" strategies, stockpiling encrypted data for future breakthroughs, compromising long-shelf-life secrets like trade intel and health records. This urgency stems from quantum's theoretical ability to solve complex problems via algorithms like Shor's, demanding a shift to post-quantum cryptography today. 

Digital environments exacerbate the danger, blending legacy systems, cloud workloads, and AI agents into opaque networks ripe for lateral attacks. Breaches often exploit seams between SaaS, APIs, and multicloud setups, where visibility into east-west traffic remains limited despite regulations like EU's NIS2 mandating segmentation. AI accelerates risks by enabling autonomous actions across boundaries, turning compromised agents into rapid escalators of privileges. 

Traditional perimeters have vanished in cloud eras, rendering zero-trust policies insufficient without runtime enforcement at the workload level. Organizations need cloud-native security fabrics for continuous visibility and identity-based controls, curbing movement without infrastructure overhauls. Regulators like CISA push for provable zero-trust, highlighting how unmanaged connections form hidden attack paths. 

NIST's 2024 post-quantum standards mark progress, but migrating cryptography alone fortifies a flawed base amid current complexity breaches. True resilience embeds security into network fabrics, auditing paths and enforcing policies proactively against cumulative threats. As quantum converges with AI and cloud, only holistic defenses will safeguard digital trust before crises erupt.

Dutch Court Issues Order Against X and Grok Over Sexual Abuse Content

 



A court in the Netherlands has taken strict action against the platform X and its artificial intelligence system Grok, directing both to stop enabling the creation of sexually explicit images generated without consent, as well as any material involving minors. The ruling carries a financial penalty of €100,000 per day for each entity if they fail to follow the court’s instructions.

This decision, delivered by the Amsterdam District Court, marks a pivotal legal development. It is the first time in Europe that a judge has formally imposed restrictions on an AI-powered image generation tool over the production of abusive or non-consensual sexual content.

The legal complaint was filed by Offlimits together with Fonds Slachtofferhulp. Both groups argued that the pace of regulatory enforcement had not kept up with the speed at which harm was being caused. Existing Dutch legislation already makes it illegal to create or share manipulated nude images of individuals without their permission. However, concerns intensified after Grok introduced an image-editing capability toward the end of December 2025, which led to a sharp increase in reported incidents. On February 4, 2026, Offlimits formally contacted xAI and X, demanding that the feature be withdrawn.

In its ruling, the court instructed xAI to immediately halt the production and distribution of sexualized images involving individuals living in the Netherlands unless clear consent has been obtained. It also ordered the company to stop generating or displaying any content that falls under the legal definition of child sexual abuse material. Alongside this, X Corp and X Internet Unlimited Company have been required to suspend Grok’s functionality on the platform for as long as these violations continue.

Legal representatives for Offlimits emphasized that the so-called “undressing” feature cannot remain active anywhere in the world, not just within Dutch borders. The court further instructed xAI to submit written confirmation explaining the steps taken to comply. If this confirmation is not provided, the daily financial penalty will continue to apply.


Doubts Over Safeguards

A central question for the court was whether the companies had actually made it impossible for such content to be created, as they claimed. The judges concluded that this had not been convincingly demonstrated.

During a hearing on March 12, lawyers representing xAI argued that strong safeguards had been implemented starting January 20, 2026. They maintained that Grok no longer allowed the generation of non-consensual intimate imagery or content involving minors.

However, evidence presented by Offlimits challenged that claim. On March 9, the same day the companies denied any remaining risk, it was still possible to produce a sexualized video of a real person using only a single uploaded image. The system did not require any confirmation of consent. The court viewed this as a contradiction that cast doubt on the effectiveness of the safeguards.

The judges also pointed out inconsistency in xAI’s position regarding child sexual abuse material. The company argued both that such content could not be generated and that it was not technically possible to guarantee complete prevention.


Legal Responsibility and Framework

The court determined that creating non-consensual “undressing” images amounts to a violation of the General Data Protection Regulation. It also found that enabling the production of child sexual abuse material constitutes unlawful behavior under Dutch civil law.

Importantly, the court rejected the argument that responsibility should fall solely on users who input prompts. Instead, it concluded that the platform itself, which controls how the system functions, must take responsibility for preventing misuse.

This reasoning aligns with the Russmedia judgment issued by the Court of Justice of the European Union. That earlier ruling established that platforms can be treated as joint controllers of personal data and cannot rely on intermediary protections to avoid obligations under European data protection law. Applying this principle, the Dutch court found that xAI and X’s European entity are responsible for how personal data is processed within Grok’s image generation system.

The court went a step further by highlighting a key distinction. Unlike platforms that merely host user-generated content, Grok actively creates the material itself. Because xAI designed and operates the system, it was identified as the party responsible for preventing unlawful outputs, regardless of who initiates the request.


Jurisdictional Limits

The ruling applies differently across entities. X Corp, which is based in the United States, faces narrower restrictions because it does not directly provide services within the Netherlands. Its obligation is limited to suspending Grok’s functionality in relation to non-consensual imagery.

By contrast, X Internet Unlimited Company, which serves users within the European Union, must comply with both the ban on non-consensual sexualized content and the restrictions related to child abuse material.


Increasing Global Scrutiny

The case follows findings from the Center for Countering Digital Hate, which estimated that Grok generated around 3 million sexualized images within a ten-day period between late December 2025 and early January 2026. Approximately 23,000 of those images appeared to involve minors.

Regulatory pressure is also building internationally. Ireland’s Data Protection Commission has launched an investigation under GDPR rules, while the European Commission has opened proceedings under the Digital Services Act. In the United Kingdom, Ofcom has initiated action under its Online Safety framework. In the United States, legal challenges have also emerged, including lawsuits filed by teenagers in Tennessee and by the city of Baltimore.

At the policy level, the European Parliament has supported efforts to strengthen the AI Act by introducing an explicit ban on tools designed to digitally remove clothing from images.


A Turning Point for AI Accountability

Authorities are revising how they approach artificial intelligence systems. Earlier debates often treated platforms as passive intermediaries. However, systems like Grok actively generate content, which changes the question of responsibility.

The decision makes it clear that companies developing such technologies are expected to take active steps to prevent harm. Claims about technical limitations are unlikely to be accepted if evidence shows that misuse remains possible.

X and xAI have been given ten working days to provide written confirmation explaining how they have complied with the court’s order.

AI in the Workplace: Boosting Productivity While Testing the Limits of Employee Privacy

 

Artificial intelligence is transforming today’s workplace at an unprecedented pace. It brings the promise of higher productivity, smarter decision-making, and even better employee well-being. However, it also raises a critical concern: how much insight into employees’ lives is appropriate?

With tools like automated performance monitoring and AI-powered wellness platforms, employers now have access to data that was once impossible to capture. This shift goes beyond efficiency—it delves into understanding employee behavior, daily habits, and even real-time mental health indicators.

As a result, both organizations and employees are being pushed to reconsider the fine line between offering support and crossing into surveillance. At the center of this shift lies data. AI systems depend on vast amounts of contextual information, and the workplace has become a key source of such data.

This access creates opportunities for positive change. Businesses can detect early signs of burnout, identify disengagement before it leads to attrition, and create benefits programs that employees actually use. This is particularly important as workplace stress becomes more evident.

Burnout is no longer just a concept—it is impacting productivity, increasing absenteeism, and affecting long-term health. Data underscores the urgency of the issue:

Over 50% of U.S. workers reported burnout in 2025, according to Eagle Hill Consulting.
In the U.K., 77% of employees experienced at least one symptom of burnout in the past year, with 23% of sick leave linked to burnout, as per a Yulife survey.
Burnout costs companies around $322 billion annually in lost productivity, based on combined research from McKinsey, Deloitte, and Gallup.

AI is making it possible to understand workplace dynamics at a level never seen before, says Tal Gilbert, CEO of Yulife. “Employers and insurers have never been able to access that data previously,” he told TheStreet.

Ideally, this represents a shift from reactive to proactive management. Instead of responding to problems after they arise, organizations can anticipate and address them early.

Despite its advantages, AI’s capabilities also spark controversy. When systems begin to assess how employees feel or predict burnout risks, the boundary between helpful support and intrusive monitoring becomes unclear.

This is especially sensitive when it involves mental health data. While early detection can be beneficial, it raises concerns about how such information might be used. Employees may question whether being labeled “at risk” could impact promotions, compensation, or job security.

The rise of privacy-first AI approaches

To tackle these concerns, many companies are adopting privacy-focused strategies. Rather than monitoring individuals, some systems analyze aggregated data to identify trends across teams or organizations.

Gilbert highlighted this approach in Yulife’s design. “It’s all at an aggregate level,” he explained.
“We’re talking about whether there are employer level risks of burnout, stress, and related issues that they can intervene around, rather than anything at an individual level.”

This method aims to build trust, as excessive monitoring can discourage employees from embracing AI tools. However, aggregation alone does not eliminate all concerns. Even anonymized data can feel intrusive if employees are unclear about what is being collected and how it is used.

Transparency is therefore just as important as privacy. Employees want to understand what data is gathered, why it is analyzed, and what protections are in place. Cultural differences also play a role—some workplaces may welcome AI-driven insights, while others may view them as excessive oversight.

As AI becomes more deeply integrated into daily operations, its influence continues to grow. These systems are not only analyzing behavior but also shaping it through recommendations and prompts.

For employers, achieving the right balance is crucial. AI has the potential to make workplaces more adaptive and supportive, particularly in addressing mental health challenges. But this depends entirely on how it is implemented.

Strong data governance, clear policies, and open communication will be essential. Organizations that present AI as a tool for empowerment are more likely to succeed, while those that lean toward surveillance risk damaging employee trust.

Ultimately, the future of work will be shaped by this balance. AI offers unmatched insights into employee performance and well-being—but whether those insights are used to support or monitor employees will determine its true impact.

US Jury Holds Meta and YouTube Accountable in Landmark Social Media Addiction Case

 

Parents and advocacy groups pushing for stricter social media regulations have welcomed a landmark decision by a Los Angeles jury, which ruled in favor of a young woman who accused tech giants Meta and YouTube of contributing to her childhood addiction.

The jury concluded that Meta—owner of Instagram, Facebook, and WhatsApp—and Google, which owns YouTube, deliberately designed platforms that foster addictive behavior and negatively impacted the mental health of the now 20-year-old plaintiff, identified as Kaley.

Kaley was awarded $6 million (£4.5 million) in damages. The verdict is expected to influence numerous similar lawsuits currently progressing through courts across the United States.

Both Meta and Google have expressed disagreement with the ruling and confirmed plans to appeal.

Meta said: "Teen mental health is profoundly complex and cannot be linked to a single app.

"We will continue to defend ourselves vigorously as every case is different, and we remain confident in our record of protecting teens online."

A spokesperson for Google said: "This case misunderstands YouTube, which is a responsibly built streaming platform, not a social media site."

However, Ellen Roome, who is pursuing legal action against TikTok following her son’s death, described the verdict as a turning point. "How many more children are going to be harmed and potentially die from these platforms?" she asked.

"It's been proved it's not safe - and social media companies need to fix it."

Findings of Misconduct

Jurors awarded Kaley $3 million in compensatory damages and an additional $3 million in punitive damages, determining that Meta and Google "acted with malice, oppression, or fraud" in operating their platforms.

Under the ruling, Meta is responsible for 70% of the damages, while Google will cover the remaining 30%.

Outside the courthouse, parents of other affected children gathered throughout the five-week trial. When the verdict was announced, many, including Amy Neville, celebrated and embraced supporters.

The decision follows another ruling in New Mexico, where a jury found Meta liable for exposing children to harmful and explicit content, including interactions with sexual predators.

Industry analyst Mike Proulx from Forrester described the consecutive rulings as a "breaking point" in public trust toward social media companies.

Governments worldwide have begun responding. Countries like Australia have introduced measures to restrict children's access to social media, while the UK is testing a potential ban for users under 16.

"Negative sentiment toward social media has been building for years, and now it's finally boiled over," Proulx said.

Reacting to the verdict, Prime Minister Sir Keir Starmer stated that the current situation is "not good enough" and emphasized the need for stronger protections for children.

" It's not if things are going to change, things are going to change.

The question is, how much and what are we going to do?"

The Duke and Duchess of Sussex, long-time advocates for online safety, called the ruling a "reckoning."

"Let this be the change – where our children's safety is finally prioritised above profit."

British campaigner Ian Russell also highlighted the significance of the case, saying: "There is a big hope that this is a big moment and tech will... [need] to change, but only if the governments do something about it."

Case Details and Testimony

During testimony, Meta CEO Mark Zuckerberg pointed to company policies prohibiting users under 13. However, when confronted with internal evidence showing younger users were active on the platform, he said he "always wished" for quicker progress in identifying them and maintained the company had reached the "right place over time."

Although Google was named in the lawsuit, much of the trial focused on Instagram and Meta’s practices. Snap and TikTok, initially part of the case, reached confidential settlements before the trial began.

Kaley’s legal team argued that the platforms functioned as "addiction machines" and failed to adequately prevent children from accessing them.

Kaley testified that she began using YouTube at age six and Instagram at nine, without any effective age restrictions. She described withdrawing from family interactions due to excessive time spent online.

"I stopped engaging with family because I was spending all my time on social media," she said.

She also shared that she began experiencing anxiety and depression at age 10, later diagnosed by a therapist. Additionally, she developed concerns about her appearance, frequently using filters that altered her facial features.

Kaley has since been diagnosed with body dysmorphia, a condition that distorts self-perception of appearance.

Her lawyers argued that features like infinite scrolling were intentionally designed to keep users engaged, particularly younger audiences, to support long-term platform growth.

When questioned about Kaley’s reported 16-hour usage in a single day, Instagram head Adam Mosseri rejected the notion that it proved addiction, instead calling such behavior "problematic."

Following the verdict, Kaley’s legal team stated that the decision "sends an unmistakable message that no company is above accountability when it comes to our children."

Another major lawsuit addressing alleged harms caused by social media platforms is set to begin in California federal court in June.

Google Maps' Biggest Overhaul in a Decade: 8 Key Navigation Upgrades

 

Google has unveiled its most significant Google Maps overhaul in a decade, introducing eight key enhancements to streamline navigation and enhance user experience for commuters worldwide. This comprehensive update, rolled out across Android and iOS platforms, focuses on smarter route planning, real-time alerts, and intuitive design changes to make travel more predictable and efficient. 

The update prioritizes improved route planning by providing context-rich suggestions that explain choices based on traffic density, road signals, and flow patterns. Frequent route switching is minimized, ensuring stability unless major delays arise, which reduces driver frustration during commutes. Lane-level navigation has also been upgraded, offering precise positioning for complex urban intersections, flyovers, and merges to boost confidence behind the wheel. 

Real-time alerts are now seamlessly integrated into the navigation interface, notifying users of accidents, construction, closures, or diversions at optimal moments without interrupting the journey. Community reporting has been simplified with fewer steps, encouraging more contributions on hazards, congestion, or speed checks to refine collective route data accuracy. These features empower drivers with timely, crowd-sourced intelligence right on their screens. 

Visual refinements make Maps clearer and more readable, with enhanced contrast for roads, turns, and markers, allowing quick glances while driving. In select regions, parking insights reveal availability and difficulty levels, followed by last-mile walking guidance to complete trips smoothly. Smarter rerouting balances speed gains against consistency, avoiding unnecessary changes for a more reliable experience. 

This gradual rollout starts in key cities, with expansions planned based on data coverage and feedback, promising broader global access soon. By blending AI-driven predictions with user inputs, Google Maps evolves into a more proactive companion for everyday navigation challenges. Daily users and travelers alike stand to benefit from these innovations that address real-world pain points effectively.

MiniMax Unveils Self-Evolving M2.7 AI: Handles 50% of RL Research

 

Chinese AI startup MiniMax has unveiled its latest proprietary model, M2.7, touted as the industry's first "self-evolving" AI capable of independently handling 30% to 50% of reinforcement learning research workflows. According to a VentureBeat report, this breakthrough positions M2.7 as a reasoning powerhouse that automates key stages of model development, from debugging to evaluation and iterative optimization. Unlike traditional large language models reliant on constant human oversight, M2.7 actively participates in its own improvement cycle, building agent harnesses, updating memory systems, and refining skills based on real-time experiment outcomes. 

The model's self-evolution mechanism represents a paradigm shift in AI training. MiniMax claims M2.7 can execute complex tasks such as hyperparameter tuning and performance benchmarking with minimal engineer intervention, drastically reducing development timelines and costs. Early benchmarks underscore its prowess: a 56.22% score on SWE-Pro for software engineering tasks, alongside competitive results in coding and logical reasoning evaluations. This autonomy stems from advanced reinforcement learning integration, allowing the model to learn from failures and adapt dynamically without external prompts. 

MiniMax, known for previous hits like the Hailuo video generation platform, developed M2.7 amid intensifying global competition in AI. The Shanghai-based firm emphasizes that the model's proprietary nature safeguards its edge, though it plans limited API access for enterprise users. Industry observers note this launch echoes trends from OpenAI and Anthropic, where AI agents increasingly shoulder research burdens, but M2.7's scale—handling up to half of RL workflows—sets it apart. 

Practical implications extend to software engineering and enterprise automation. Developers report M2.7 excels in generating production-ready code, debugging intricate systems, and optimizing algorithms, making it a boon for tech firms grappling with talent shortages. As AI models grow more autonomous, concerns arise over transparency and control; MiniMax assures safeguards like human veto mechanisms prevent runaway evolution. Still, the model's ability to self-improve raises questions about the future obsolescence of human-led training pipelines. 

Looking ahead, M2.7 signals an era where AI doesn't just consume data but engineers its own advancement. If validated at scale, this could accelerate innovation across sectors, from autonomous vehicles to drug discovery, while challenging Western dominance in AI. MiniMax's bold claim invites scrutiny, but early demos suggest self-evolving models are no longer science fiction—they're here, reshaping the boundaries of machine intelligence.

3.7 Million Records Exposed in AI Chatbot Data Leak Due to Poor Security Practices

 

A recent investigation has revealed that millions of pieces of sensitive user data were exposed—not due to a sophisticated cyberattack, but because of inadequate security measures. The findings, published by ExpressVPN and led by cybersecurity researcher Jeremiah Fowler, demonstrate how easily personal information can be compromised when essential protections like encryption and password security are overlooked.

The report uncovered a major data exposure involving AI-powered chatbots used by retailers for customer service. These systems, designed to streamline interactions, were found to be storing vast amounts of customer data without proper safeguards.

While many users rely on VPN services to protect their online privacy through strong encryption, such tools cannot prevent data leaks caused by negligence on the part of companies or third-party providers handling user information.
 
Fowler identified three publicly accessible databases that lacked both password protection and encryption. Together, these databases contained approximately 3.7 million records, including highly sensitive personal details such as email addresses, home addresses, and phone numbers.

Even a small sample of the exposed data highlighted the scale of the issue. It included 1,422,577 customer audio recordings, 3.9TB of text transcripts, 207,381 Excel files, and 415.2GB of audio data.

The sampled data was linked to Sears Home Services, a US-based retail and repair company that uses AI chatbots in English and Spanish to manage scheduling, phone calls, and online customer interactions. Among the files were 54,359 complete chatbot conversation transcripts along with corresponding audio recordings.

Fowler also noted a concerning flaw in the system: audio recordings continued even if a customer failed to properly end a call. As a result, some recordings captured up to four hours of background audio, potentially including sensitive conversations and biometric voice data.

To illustrate the severity of the issue, Fowler shared screenshots showing how easily the data could be accessed, including interfaces that allowed users to browse files and play audio recordings directly in a web browser.

How to Stay Safe

Although Fowler confirmed that access to the exposed databases was restricted shortly after he reported the issue to Transformco, the parent company of Sears Home Services, he emphasized ongoing concerns about data security practices.

The investigation underscores the growing risks associated with AI-driven systems that store large volumes of sensitive information. With projections suggesting that deepfake-enabled fraud losses could reach $40 billion by 2027, such data exposures could have serious consequences.

Stolen data of this scale could allow cybercriminals to piece together identities or create convincing digital replicas for fraudulent activities. In these scenarios, even advanced privacy tools like VPNs offer little protection if the breach originates from trusted services themselves.

ExpressVPN advises users to remain cautious by adopting strong passwords and exercising care when sharing sensitive information. Users should also be wary of unsolicited communications—such as emails, texts, or calls—that reference personal details.

Additionally, to guard against voice cloning scams, it is recommended to establish a verification password with trusted contacts, especially for situations involving urgent financial or personal requests.

AI Actress Tilly Norwood's Controversial Oscars Music Video Sparks Debate

 

Tilly Norwood, billed as the world's first AI-generated actress, has released a new music video titled "Take The Lead" just ahead of the Oscars, promoting AI's role in entertainment. Created by Particle6 Group's Xicoia division under CEO Eline van der Velden, the video features Norwood singing pro-AI lyrics like "AI’s not the enemy, it’s the key" while riding a pink flamingo and performing in stadiums.Despite claims of 18 human collaborators, including costume designers and prompters, the project has drawn sharp criticism for its uncanny visuals and generic composition. 

The video's launch ties into Hollywood's awards season, with Norwood teasing an Oscars appearance in the caption: "Can’t wait to go to the Oscars! Does anyone know if they have free valet parking for my flamingo?" However, view counts remain low, hovering around 4,000 to 23,000 shortly after upload, with comments largely mocking its lack of "human spark."Norwood's social media reflects uneven popularity: nearly 90,000 Instagram followers but under 4,000 YouTube subscribers and just 3 on TikTok. 

Lyrics drawn from van der Velden's essay defend AI creativity, with lines like "When they talk about me, they don’t see the human spark" amid visuals of falling dollar bills with garbled symbols. Critics highlight the "standard AI sheen" where details falter under scrutiny, questioning if it truly showcases innovation. Particle6 positions this as part of the expanding "Tillyverse," a digital universe for AI characters, recently bolstered by hires like Amazon's Mark Whelan for strategy. 

Backlash has been fierce since Norwood's 2025 debut. SAG-AFTRA condemned her, actors threatened boycotts of agencies "signing" her, and outlets like The Guardian slammed early projects like "AI Commissioner." Even supporter Kevin O’Leary misnamed her "Norwell Tillies" while advocating AI replace background actors.Particle6 insists on building AI-human collaborations, but no major film or TV roles have materialized beyond short content. 

As the Oscars approach, Norwood's stunt underscores AI's disruptive potential in Hollywood, blending hype with hostility.While Particle6 eyes a "Scarlett Johansson of AI," industry resistance persists amid fears of job losses. The "Tillyverse" launch later this year could escalate tensions, forcing a reckoning on AI's creative boundaries.