In famous wholesale hubs like Chenoy Trade Centre (CTC) in Secunderabad and Gujarati Galli in Koti, the effects of the ban are already visible: unsold stock, lower volumes, and price surges for non-Chinese devices.
Om Singh, a local businessman, has been running Kimpex Security Solutions for 14 years. He has called the ban ‘sudden’ and the transition ‘blunt’. According to The Hindu’s reporting, “Before the ban, we had 20 to 25 brands. Now we are left with only one. Customers have reduced significantly because rates have increased a lot and they are not satisfied with the quality.”
Om used to sell between 2,000 and 3,000 cameras every month across brands including Hikvision, TP-Link, and Dahua Technology. In total, he sold ₹30–40 lakh worth of stock each month. Om is now left with inventory worth ₹15–20 lakh and is worried about selling it.
In the market, local traders say prices of Indian brands have surged by 10–30% since April 1. Cameras previously priced at ₹25,000 now sell for ₹27,000–32,000 or more.
Another trader, Bhavesh, has been running Jeevraj CCTV for a decade. He says the change in demand is clear but also confusing. Indian brands are in high demand, especially CP Plus, but businesses have raised prices for IP cameras and associated equipment, and sales and customer numbers have fallen as a result.
Traders believe the situation is not sudden and has been building up over time. Over the past year, they have not received significant supplies of these cameras, and shops sold whatever Chinese stock they had before March 31 so that it could be billed under GST before the new financial year. As a result, the ban did not hit the markets hard: traders were left holding relatively little Chinese stock.
For installers and system integrators who design and execute surveillance setups, the impact is more operational. One system integration expert said the sudden rise in demand for Indian brands has created supply bottlenecks: clients now demand ‘Make in India’ products, but supplies of Indian cameras cannot keep up with current demand, putting installers under pressure.
Instead of relying on a single model, Copilot's Researcher agent can now pull outputs from both OpenAI's GPT and Anthropic's Claude models for each response, thanks to a new feature called "Critique."
According to Microsoft, Claude will check the quality and correctness of GPT's response before it is delivered to the user. In the future, the company hopes to make that workflow bidirectional, so that GPT can also evaluate Claude's output.
"Having different models from different vendors in Copilot is highly attractive - but we're taking this to the next level, where customers actually get the benefits of the models working together," Nicole Herskowitz, VP of Copilot at Microsoft, said to Reuters.
The multi-model strategy is intended to boost productivity and quality for customers by accelerating user workflows, curbing AI hallucinations (which occur when systems present incorrect information), and producing more dependable outputs.
Additionally, Microsoft is introducing a feature called "Council" that will let users compare results from various AI models side by side. The updates coincide with Microsoft expanding access to its new Copilot Cowork agentic AI tool for members of its "Frontier" program, which gives users early access to some of its most recent AI innovations.
According to Jared Spataro, who leads Microsoft's AI-at-Work efforts, “We work only in a cloud environment, and we work only on behalf of the user. So you know exactly what information it (Copilot Cowork) has access to.”
On Monday, the company's stock rose by almost 1%. However, with investor confidence in AI waning, the stock is on track for its worst quarter since the 2008 global financial crisis, down nearly 25%.
Microsoft capitalized on the increasing demand for autonomous AI agents earlier this month by releasing Copilot Cowork, a solution based on Anthropic's popular Claude Cowork product, in testing mode.
In the face of fierce competition from rivals like Google (GOOGL.O) with its Gemini assistant, and from autonomous agents like Claude Cowork, the Windows maker has been rushing to enhance its Copilot assistant to drive greater usage.
New research suggests the cryptocurrency industry may have less time than anticipated to prepare for the risks posed by quantum computing, with potential implications for Bitcoin, Ethereum, and other major digital assets.
A whitepaper released on March 31 by researchers at Google indicates that breaking the cryptographic systems securing these networks may require fewer than 500,000 physical qubits on a superconducting quantum computer. This marks a sharp reduction from earlier estimates, which placed the requirement in the millions.
The study brings together contributors from both academia and industry, including Justin Drake of the Ethereum Foundation and Dan Boneh, alongside Google Quantum AI researchers led by Ryan Babbush and Hartmut Neven. The research was also shared with U.S. government agencies prior to publication, with input from organizations such as Coinbase and the Ethereum Foundation.
At present, no quantum system is capable of carrying out such an attack. Google’s most advanced processor, Willow, operates with 105 qubits. However, researchers warn that the gap between current hardware and attack-capable machines is narrowing. Drake has estimated at least a 10% probability that a quantum computer could extract a private key from a public key by 2032.
The concern centers on how cryptocurrencies are secured. Bitcoin relies on a mathematical problem known as the Elliptic Curve Discrete Logarithm Problem, which is considered practically unsolvable using classical computers. However, Peter Shor demonstrated that quantum algorithms could solve this problem far more efficiently, potentially allowing attackers to recover private keys, forge signatures, and access funds.
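The asymmetry can be stated compactly. As a rough sketch using standard textbook figures (not results from the Google paper):

```latex
% ECDLP on secp256k1: given base point P and public key Q, recover k
Q = kP, \qquad k \in \{1, \dots, n-1\}, \quad n \approx 2^{256}
% Best known classical attack (Pollard's rho): exponential in the key size
T_{\text{classical}} = O(\sqrt{n}) \approx 2^{128}\ \text{group operations}
% Shor's quantum algorithm: polynomial in the key size
T_{\text{quantum}} = O\big((\log n)^{3}\big)
```

The gap between $2^{128}$ operations and a low-degree polynomial in the key length is why a sufficiently large quantum computer changes the picture entirely.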
Importantly, this threat does not extend to Bitcoin mining, which relies on the SHA-256 algorithm; experts suggest that using quantum computing to meaningfully disrupt mining remains decades away. Instead, the vulnerability lies in signature schemes such as ECDSA and Schnorr, both based on the secp256k1 curve.
The research outlines three potential attack scenarios. “On-spend” attacks target transactions in progress, where an attacker could intercept a transaction, derive the private key, and submit a fraudulent replacement before confirmation. With Bitcoin’s average block time of 10 minutes, the study estimates such an attack could be executed in roughly nine minutes using optimized quantum systems, with parallel processing increasing success rates. Faster blockchains such as Ethereum and Solana offer narrower windows but are not entirely immune.
“At-rest” attacks focus on wallets with already exposed public keys, such as reused or inactive addresses, where attackers have significantly more time. A third category, “on-setup” attacks, involves exploiting protocol-level parameters. While Bitcoin appears resistant to this method, certain Ethereum features and privacy tools like Tornado Cash may face higher exposure.
Technically, the researchers developed quantum circuits requiring fewer than 1,500 logical qubits and tens of millions of computational operations, translating to under 500,000 physical qubits under current assumptions. This is a substantial improvement over earlier estimates, such as a 2023 study that suggested around 9 million qubits would be needed. More optimistic models could reduce this further, though they depend on hardware capabilities not yet demonstrated.
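To get a feel for the scale, the jump from logical to physical qubits can be sketched with back-of-envelope surface-code arithmetic. The overhead formula and the code distance below are illustrative assumptions, not parameters from the paper.

```python
# Rough surface-code overhead estimate. The 2 * d^2 factor and the code
# distance d are illustrative assumptions, not figures from the paper.
def physical_qubits(logical_qubits: int, code_distance: int) -> int:
    # Each surface-code logical qubit needs roughly 2 * d^2 physical qubits
    # (data qubits plus measurement ancillas).
    return logical_qubits * 2 * code_distance ** 2

# ~1,500 logical qubits at code distance 12 already lands in the ballpark of
# the paper's "fewer than 500,000 physical qubits" figure.
print(physical_qubits(1_500, 12))  # 432000
```

Under these toy assumptions, tightening the required code distance (better physical error rates) is what drives the headline qubit count down.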
In an unusual move, the team did not publish the full attack design. Instead, they used a zero-knowledge proof generated through the SP1 zero-knowledge virtual machine to validate their findings without exposing sensitive details. This approach, rarely used in quantum research, allows independent verification while limiting misuse.
The findings arrive as both industry and governments begin preparing for a post-quantum future. The National Security Agency has called for quantum-resistant systems by 2030, while Google has set a 2029 target for transitioning its own infrastructure. Ethereum has been actively working toward similar goals, aiming for a full migration within the same timeframe. Bitcoin, however, faces slower progress due to its decentralized governance model, where major upgrades can take years to implement.
Early mitigation efforts are underway. A recent Bitcoin proposal introduces new address formats designed to obscure public keys and support future quantum-resistant signatures. However, a full transition away from current cryptographic systems has not yet been finalized.
For now, users are advised to take precautionary steps. Moving funds to new addresses, avoiding address reuse, and monitoring updates from wallet providers can reduce exposure, particularly for long-term holdings. While the threat is not immediate, researchers emphasize that preparation must begin well in advance, as advances in quantum computing continue to accelerate.
Privacy concerns have long troubled users and businesses, and with the rapid adoption of AI, the threats are only rising. DuckDuckGo's Duck.ai chatbot is benefiting from this.
The latest report from Similarweb shows that traffic to Duck.ai surged last month: the site recorded 11.1 million visits in February 2026, 300% more than in January.
The statistics seem small when compared with the most popular chatbots such as ChatGPT, Claude, or Gemini.
Similarweb estimates that ChatGPT recorded 5.4 billion visits in February 2026, and Google’s Gemini recorded 2.1 billion, whereas Claude recorded 290.3 million.
For DuckDuckGo, the numbers are a good sign: the bot launched in beta in 2025 and has since shown a sharp rise in visits.
The DuckDuckGo browser is known for its privacy, and the company applies the same principle to its AI bot. Duck.ai doesn't run a bespoke LLM; it uses frontier models from Meta, Anthropic, and OpenAI, but it doesn't expose your IP address or personal data to them.
Duck.ai's privacy policy reads, "In addition, we have agreements in place with all model providers that further limit how they can use data from these anonymous requests, including not using Prompts and Outputs to develop or improve their models, as well as deleting all information received once it is no longer necessary to provide Outputs (at most within 30 days, with limited exceptions for safety and legal compliance),”
What explains the sudden surge? The bot has two advantages over individual commercial bots like ChatGPT and Gemini: the option to toggle between multiple models, and stronger privacy protections. The privacy aspect is what sets it apart. Users on Reddit have praised Duck.ai, with one person noting "it's way better than Google's," referring to Gemini.
In March, Anthropic rejected several Department of Defense requests to use its technology for mass surveillance and weapons. The DoD responded by terminating the contract, and OpenAI soon stepped in.
The incident stirred controversy around privacy concerns and ethical AI use, and it helps explain why users may prefer chatbots like Duck.ai that shield user data from both governments and big tech.
The global threat landscape has shifted from data theft to threats against human lives. The convergence of Operational Technology (OT) and Information Technology (IT) has expanded the attack surface, exposing sectors like public utilities, aviation, and transport to outside risks.
According to Gaurav Shukla, cybersecurity expert at Deloitte South Asia, “For the past two years, we observed that cyber threats were not limited only to the IT systems. They were pervading beyond IT systems, and the perpetrators were targeting more of the critical infrastructure.”
Digital transformation in recent years has increased the attack surface, providing more opportunities for threat actors to compromise critical infrastructure.
"If you are driving a connected car on a highway at 120 km/h and suddenly find the steering is no longer in your control, you are not going to be worried about how much money is in your bank account. You are worried about the danger to your life,” Shukla added.
For instance, an attack on a medical device that compromises patient data endangers individuals, whereas a cyber attack on power grids and the transmission sector can cause countrywide blackouts.
The world's population of eight billion is surrounded by more than 30 billion IoT sensors, meaning that, on average, each person is surrounded by roughly 3.75 sensors.
India’s Digital Public Infrastructure (DPI), also known as India Stack, has become a global benchmark; according to Deloitte, some 24 countries are looking to adopt frameworks modeled on it. Shukla warned that as DPI expands beyond identity and payments into education and healthcare, the new convergence points create new threats. As of January 2026, DPI accounted for around 80% of India’s digital transactions.
Attackers' use of artificial intelligence (AI) is increasing the speed and scope of their attacks, so continuous testing against supply-chain weaknesses and AI-related risks will be critical, he continued.
Kinetic wars are time-bound, but cyberwarfare is continuous, demanding ongoing cooperation between businesses, academia, and government. “Much like you need a language to build a foundation, awareness of cybersecurity and privacy is going to be just as important,” Shukla added.
India’s financial landscape is set to change as the Reserve Bank of India (RBI) introduces new security measures for all electronic payments, effective 1 April 2026. Every digital payment will have to be verified through a compulsory two-factor authentication process. The rules aim to counter the growing number of cybercrimes and phishing campaigns that have infiltrated India’s mobile wallets and UPI. Security has traditionally relied on text messages; the new framework adopts a more versatile model as regulators try to stay ahead of threat actors and scammers.
The new directive mandates that at least one of the two authentication factors must be dynamic: generated specifically for a single transaction and impossible to reuse. Fintech providers and banks can choose from a variety of methods, such as hardware tokens, biometrics, and device binding. This marks a departure from the era when OTPs sent via SMS were the main line of defence.
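As a sketch of what a "dynamic" factor means in practice, the snippet below derives a one-time code from a per-transaction message, so the same code can never authorize a second payment. The field layout, key handling, and function names here are hypothetical illustrations, not anything specified by the RBI directive.

```python
import hashlib
import hmac

# Hypothetical sketch of a dynamic, single-use second factor: an HOTP-style
# code bound to one specific transaction (not an RBI-specified scheme).
def transaction_code(key: bytes, txn_id: str, amount_paise: int, nonce: str) -> str:
    # Binding the code to the transaction ID, amount, and a fresh nonce means
    # it cannot be replayed to authorize any other payment.
    msg = f"{txn_id}|{amount_paise}|{nonce}".encode()
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    # Truncate to a 6-digit code, as in standard HOTP/TOTP-style schemes
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

code = transaction_code(b"shared-secret", "TXN-001", 2_500_000, "a1b2c3")
print(code)  # a 6-digit code unique to this transaction
```

Because the nonce is fresh for every payment, even an attacker who intercepts the code learns nothing usable for a different transaction, which is the property the directive's "dynamic" requirement targets.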
To make security convenient, banks will follow a risk-based approach.
Low-risk: Payments from authorized devices or standard small transactions will be quick and seamless.
High-risk: Big payments or transactions from new devices may prompt further authentication steps.
“RBI’s new digital payment security controls coming into force represent a significant recalibration of India’s authentication framework – from a prescriptive OTP-based regime to a more principle-driven, risk-based standard,” experts said.
The RBI no longer prescribes the particular technology used for verification; it now focuses on the security of the outcome.
The technology-neutral stance permits financial institutions to use sophisticated solutions like passkeys or facial recognition without requiring frequent regulatory notifications. The central bank will follow the principle-driven practice by boosting innovation while holding strict compliance. According to experts, “By recognising biometrics, device-binding and adaptive authentication, RBI has created interpretive flexibility for regulated entities, while retaining supervisory oversight through outcome-based compliance.”
The RBI has also raised accountability standards, making banks and payment companies more responsible for maintaining safe systems. Institutions may be obliged to reimburse users when fraud results from system malfunctions or errors, a provision intended to expedite the resolution of fraud-related grievances.
After the U.S. and Israel’s “pre-emptive” strikes against Iran last month, research firm Kpler found vessels in the Persian Gulf going off course: location data showed ships apparently maneuvering over land and tracing sharp, polygonal turns. Disruptions to location-based services have increased across the Middle East, affecting motorists, aircraft, and mariners.
These disturbances have highlighted major flaws in GPS, the American-built system that has become synonymous with satellite navigation. For years, Kpler and other firms have documented thousands of instances of oil vessels in the Persian Gulf manipulating their onboard Automatic Identification System (AIS) signals, which are used to track vessels in transit, in order to evade sanctions on Iranian oil exports.
This tactic is called spoofing: the manipulation of location signals allows vessels to hide their activities. Hackers have used the same technique to mask their operations.
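One common way analysts detect spoofed tracks is a plausibility check on consecutive position reports: if two fixes imply a physically impossible speed, the data is suspect. The heuristic below is an illustrative assumption, not the actual method used by Kpler or Windward.

```python
import math

# Illustrative spoofing heuristic (an assumption, not Kpler's or Windward's
# actual method): flag AIS position reports that imply impossible speeds.
def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_implausible(fix_a, fix_b, max_knots=40):
    """fix = (lat, lon, unix_seconds); True if the implied speed exceeds max_knots."""
    dist_km = haversine_km(fix_a[0], fix_a[1], fix_b[0], fix_b[1])
    hours = (fix_b[2] - fix_a[2]) / 3600
    return dist_km / 1.852 / hours > max_knots  # km -> nautical miles -> knots

# A jump of ~111 km in 10 minutes implies roughly 360 knots: clearly spoofed.
print(is_implausible((26.0, 52.0, 0), (27.0, 52.0, 600)))  # True
```

Real detection pipelines layer further checks on top of this, such as tracks crossing land or many vessels reporting identical positions, but the speed test alone catches the crudest manipulation.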
Since the start of attacks in the Middle East, GPS spoofing in the Persian Gulf has increased. The maritime intelligence agency Windward found over 1,100 different vessels in the Gulf facing AIS manipulation.
Additional interference with satellite navigation signals in the region comes from Gulf states trying to defend critical infrastructure against missile and drone strikes by disrupting the onboard navigation systems of incoming drones and missiles.
These disruptions are being deployed as defensive measures in modern warfare.
Interference has made aircraft appear to fly in unpredictable, wave-like patterns, and food delivery riders in Dubai have shown up off the coast on maps because GPS failed on land.
According to Lisa Dyer, executive director of the GPS Innovation Alliance, the region's ongoing jamming and spoofing activity also raises serious public safety issues.
Foreign-flagged ships from nations like China and India are still allowed to pass through the Persian Gulf, even though the blockage of the Strait of Hormuz has drastically reduced shipping activity.
Iranian strikes have persisted despite the widespread interference throughout the region, raising questions about the origins of Iran’s targeting capability.
Other analysts, cited by outlets such as Al Jazeera, have linked the apparent accuracy of Iranian strikes to the use of China’s BeiDou system.
For targeting, missiles and drones frequently combine satellite-based navigation systems with other systems, such as inertial navigation capabilities, which function independently of satellite-based signals.
The technological foundation behind connected vehicles is undergoing a monumental shift. What was once limited to in-vehicle engineering is now expanding into a complex ecosystem that closely resembles enterprise-level digital infrastructure. This transition is forcing automakers to rethink how they manage scalability, security, and data, while also elevating the strategic importance of digital platforms in shaping future revenue streams.
For many years, automotive innovation focused primarily on the physical vehicle, including mechanical systems, embedded electronics, and onboard software. That model is changing. The systems supporting connected vehicles now extend far beyond the car itself and increasingly resemble large, integrated digital platforms similar to those used by major technology-driven enterprises.
As automakers roll out connected features across entire fleets, the supporting technology stack is growing exponentially. Today’s connected vehicle ecosystem typically includes cloud environments designed to handle millions of simultaneous connections, mobile applications that allow users to control and monitor their vehicles, infrastructure for delivering over-the-air software updates, and large-scale data systems that process continuous streams of vehicle-generated information.
This architecture aligns closely with enterprise IT platforms, although the scale and operational complexity are even greater. Connected vehicles can generate as much as 25 gigabytes of data per hour, depending on their sensors and capabilities. Research from International Data Corporation indicates that data generated by connected and autonomous vehicles could reach multiple zettabytes annually by the end of this decade. This rapid growth is compelling automakers to redesign how they structure, manage, and secure their digital environments.
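The scale claim is easy to sanity-check with rough arithmetic. The fleet size and daily driving hours below are hypothetical assumptions chosen only to show how quickly per-vehicle gigabytes compound into zettabytes.

```python
# Back-of-envelope check on fleet-scale data volumes. Fleet size and daily
# driving hours are hypothetical assumptions, not figures from IDC.
GB = 10**9   # bytes in a gigabyte (decimal)
ZB = 10**21  # bytes in a zettabyte

gb_per_hour = 25            # per-vehicle data rate cited above
vehicles = 100_000_000      # hypothetical global connected fleet
hours_per_day = 2           # hypothetical average daily driving time

yearly_bytes = vehicles * hours_per_day * gb_per_hour * 365 * GB
print(yearly_bytes / ZB)    # roughly 1.8 zettabytes per year
```

Even with conservative inputs, a fleet of this size lands in zettabyte territory annually, which is why automakers are rearchitecting their data platforms rather than scaling up existing ones.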
Traditionally, initiatives related to connected vehicles were handled by engineering and research teams focused on embedded systems. However, as deployment expands across regions and vehicle models, the challenges now mirror those seen in enterprise IT. These include scaling platforms efficiently, managing identity and access controls, governing vast datasets, coordinating multiple vendors, and ensuring security throughout the entire system lifecycle.
This transformation is also reshaping leadership roles within automotive companies. Chief Information Officers are becoming increasingly central as the supporting infrastructure around vehicles begins to resemble enterprise IT ecosystems. While engineering teams still lead vehicle software development, the broader digital environment, including cloud systems and data platforms, is now a critical area of responsibility for IT leadership. Many automakers are shifting toward platform-based strategies, treating the connected vehicle backend as a long-term digital asset rather than a feature tied to a single vehicle model.
At the same time, the ecosystem of technology providers involved in connected vehicles is expanding rapidly. These platforms often rely on a combination of telematics services, cloud providers, mobile development frameworks, cybersecurity solutions, analytics platforms, and OTA update systems. Managing such a diverse network requires structured governance and integration approaches similar to those used in large enterprise environments.
Cybersecurity has become a central pillar of this transformation. Regulatory frameworks such as ISO/SAE 21434 and UNECE WP.29 R155 now require manufacturers to implement continuous cybersecurity management across both vehicles and their supporting digital systems. These regulations extend beyond the vehicle itself, covering cloud services, mobile applications, and software update mechanisms.
The financial implications of this course are substantial. According to McKinsey & Company, software-enabled services and digital features could contribute up to 30 percent of total automotive revenue by 2030. This highlights how critical digital platforms are becoming to the industry’s long-term business model.
Industry experts emphasize that connected vehicles are no longer standalone products but part of a broader technological ecosystem. Vikash Chaudhary, Founder and CEO of HackersEra, explains that connected vehicles are effectively turning into distributed technology platforms. He notes that companies adopting strong platform architectures, robust data governance, and integrated cybersecurity measures will be better positioned to scale operations and drive innovation.
As vehicles continue to transform into software-defined systems, the competitive landscape is shifting. The key battleground is no longer limited to the vehicle itself but is increasingly centered on the enterprise-grade platforms that enable connected mobility at scale.
A court in the Netherlands has taken strict action against the platform X and its artificial intelligence system Grok, directing both to stop enabling the creation of sexually explicit images generated without consent, as well as any material involving minors. The ruling carries a financial penalty of €100,000 per day for each entity if they fail to follow the court’s instructions.
This decision, delivered by the Amsterdam District Court, marks a pivotal legal development. It is the first time in Europe that a judge has formally imposed restrictions on an AI-powered image generation tool over the production of abusive or non-consensual sexual content.
The legal complaint was filed by Offlimits together with Fonds Slachtofferhulp. Both groups argued that the pace of regulatory enforcement had not kept up with the speed at which harm was being caused. Existing Dutch legislation already makes it illegal to create or share manipulated nude images of individuals without their permission. However, concerns intensified after Grok introduced an image-editing capability toward the end of December 2025, which led to a sharp increase in reported incidents. On February 4, 2026, Offlimits formally contacted xAI and X, demanding that the feature be withdrawn.
In its ruling, the court instructed xAI to immediately halt the production and distribution of sexualized images involving individuals living in the Netherlands unless clear consent has been obtained. It also ordered the company to stop generating or displaying any content that falls under the legal definition of child sexual abuse material. Alongside this, X Corp and X Internet Unlimited Company have been required to suspend Grok’s functionality on the platform for as long as these violations continue.
Legal representatives for Offlimits emphasized that the so-called “undressing” feature cannot remain active anywhere in the world, not just within Dutch borders. The court further instructed xAI to submit written confirmation explaining the steps taken to comply. If this confirmation is not provided, the daily financial penalty will continue to apply.
Doubts Over Safeguards
A central question for the court was whether the companies had actually made it impossible for such content to be created, as they claimed. The judges concluded that this had not been convincingly demonstrated.
During a hearing on March 12, lawyers representing xAI argued that strong safeguards had been implemented starting January 20, 2026. They maintained that Grok no longer allowed the generation of non-consensual intimate imagery or content involving minors.
However, evidence presented by Offlimits challenged that claim. On March 9, the same day the companies denied any remaining risk, it was still possible to produce a sexualized video of a real person using only a single uploaded image. The system did not require any confirmation of consent. The court viewed this as a contradiction that cast doubt on the effectiveness of the safeguards.
The judges also pointed out inconsistency in xAI’s position regarding child sexual abuse material. The company argued both that such content could not be generated and that it was not technically possible to guarantee complete prevention.
Legal Responsibility and Framework
The court determined that creating non-consensual “undressing” images amounts to a violation of the General Data Protection Regulation. It also found that enabling the production of child sexual abuse material constitutes unlawful behavior under Dutch civil law.
Importantly, the court rejected the argument that responsibility should fall solely on users who input prompts. Instead, it concluded that the platform itself, which controls how the system functions, must take responsibility for preventing misuse.
This reasoning aligns with the Russmedia judgment issued by the Court of Justice of the European Union. That earlier ruling established that platforms can be treated as joint controllers of personal data and cannot rely on intermediary protections to avoid obligations under European data protection law. Applying this principle, the Dutch court found that xAI and X’s European entity are responsible for how personal data is processed within Grok’s image generation system.
The court went a step further by highlighting a key distinction. Unlike platforms that merely host user-generated content, Grok actively creates the material itself. Because xAI designed and operates the system, it was identified as the party responsible for preventing unlawful outputs, regardless of who initiates the request.
Jurisdictional Limits
The ruling applies differently across entities. X Corp, which is based in the United States, faces narrower restrictions because it does not directly provide services within the Netherlands. Its obligation is limited to suspending Grok’s functionality in relation to non-consensual imagery.
By contrast, X Internet Unlimited Company, which serves users within the European Union, must comply with both the ban on non-consensual sexualized content and the restrictions related to child abuse material.
Increasing Global Scrutiny
The case follows findings from the Center for Countering Digital Hate, which estimated that Grok generated around 3 million sexualized images within a ten-day period between late December 2025 and early January 2026. Approximately 23,000 of those images appeared to involve minors.
Regulatory pressure is also building internationally. Ireland’s Data Protection Commission has launched an investigation under GDPR rules, while the European Commission has opened proceedings under the Digital Services Act. In the United Kingdom, Ofcom has initiated action under its Online Safety framework. In the United States, legal challenges have also emerged, including lawsuits filed by teenagers in Tennessee and by the city of Baltimore.
At the policy level, the European Parliament has supported efforts to strengthen the AI Act by introducing an explicit ban on tools designed to digitally remove clothing from images.
A Turning Point for AI Accountability
Authorities are revising how they approach artificial intelligence systems. Earlier debates often treated platforms as passive intermediaries. However, systems like Grok actively generate content, which changes the question of responsibility.
The decision makes it clear that companies developing such technologies are expected to take active steps to prevent harm. Claims about technical limitations are unlikely to be accepted if evidence shows that misuse remains possible.
X and xAI have been given ten working days to provide written confirmation explaining how they have complied with the court’s order.