
SMS and OTP Bombing Tools Evolve into Scalable, Global Abuse Infrastructure

 

The modern authentication ecosystem operates on a fragile premise: that one-time password requests are legitimate. That assumption is increasingly being challenged. What started in the early 2020s as loosely circulated scripts designed to annoy phone numbers has transformed into a coordinated ecosystem of SMS and OTP bombing tools built for scale, automation, and persistence.

New findings from Cyble Research and Intelligence Labs (CRIL) analyzed nearly 20 actively maintained repositories and found rapid technical progression continuing through late 2025 and into 2026. These tools have moved beyond basic terminal scripts. They now include cross-platform desktop applications, Telegram-integrated automation frameworks, and high-performance systems capable of launching large-scale SMS, OTP, and voice-bombing campaigns across multiple geographies.

Researchers emphasize that the study reflects patterns within a defined research sample and should be viewed as indicative of trends rather than a full mapping of the global ecosystem. Even within that limited dataset, the scale and sophistication are significant.

SMS and OTP bombing campaigns exploit legitimate authentication endpoints. Attackers repeatedly trigger password resets, registration verifications, or login challenges, overwhelming a victim’s phone with genuine SMS messages or automated voice calls. The result ranges from harassment and disruption to more serious risks such as MFA fatigue.

Across the 20 repositories examined, researchers identified approximately 843 vulnerable API endpoints. These endpoints belonged to organizations across telecommunications, financial services, e-commerce, ride-hailing services, and government platforms. The recurring weaknesses were predictable: inadequate rate limiting, weak or poorly enforced CAPTCHA mechanisms, or both.

Regional targeting was uneven. Roughly 61.68% of observed endpoints—about 520—were linked to infrastructure in Iran. India accounted for 16.96%, approximately 143 endpoints. Additional activity was concentrated in Turkey, Ukraine, and parts of Eastern Europe and South Asia.

The attack lifecycle typically begins with endpoint discovery. Threat actors manually test authentication workflows, probe common API paths such as /api/send-otp or /auth/send-code, reverse-engineer mobile applications to uncover hardcoded API references, or leverage community-maintained endpoint lists shared in public repositories and forums. Once identified, these endpoints are integrated into multi-threaded attack frameworks capable of issuing simultaneous requests at scale.

The technical sophistication of SMS and OTP bombing tools has advanced considerably. Maintainers now offer versions across seven programming languages and frameworks, lowering entry barriers for individuals with limited coding expertise.

Modern toolkits commonly include:
  • Multi-threading to enable parallel API exploitation
  • Proxy rotation to bypass IP-based defenses
  • Request randomization to mimic human behavior
  • Automated retry mechanisms and failure handling
  • Real-time activity dashboards
More concerning is the widespread use of SSL bypass techniques. Approximately 75% of the repositories analyzed disable SSL certificate validation. Instead of relying on properly verified secure connections, these tools deliberately ignore certificate errors, enabling traffic interception or manipulation without interruption. SSL bypass has emerged as one of the most frequently observed evasion strategies.

In addition, 58.3% of repositories randomize User-Agent headers to evade signature-based detection systems. Around 33% exploit static or hardcoded reCAPTCHA tokens, effectively bypassing poorly implemented bot protections.

The ecosystem has also expanded beyond SMS flooding. Voice-bombing capabilities—automated call floods triggered through telephony APIs—are now integrated into several frameworks, broadening the harassment surface.

Commercialization and Data Harvesting Risks

Alongside open-source development, a commercial layer has surfaced. Browser-based SMS and OTP bombing platforms now offer simplified, point-and-click interfaces. Often marketed misleadingly as “prank tools” or “SMS testing services,” these platforms eliminate technical setup requirements.

Unlike repository-based tools that require local execution and configuration, web-based services abstract proxy management, API integration, and automation processes. This significantly increases accessibility.

However, these services frequently operate on a dual-threat model. Phone numbers entered into such platforms are often harvested. The collected data may later be reused in spam campaigns, sold as lead lists, or integrated into broader fraud operations. In effect, users risk exposing both their targets and themselves to ongoing exploitation.

Financial, Operational, and Reputational Impact

For individuals, SMS and OTP bombing can severely disrupt device usability. Effects include degraded performance, overwhelmed message inboxes, exhausted SMS storage, battery drain, and increased risk of MFA fatigue—potentially leading to accidental approval of malicious login attempts. Voice-bombing campaigns further intensify the disruption.

For organizations, the consequences extend well beyond inconvenience.

Financially, each OTP message typically costs between $0.05 and $0.20. An attack generating 10,000 messages can result in expenses ranging from $500 to $2,000. Sustained abuse of exposed endpoints can drive monthly SMS costs into five-figure sums.

Operationally, legitimate users may be unable to receive verification codes, customer support volumes can surge, and authentication delays can impact service reliability. In regulated industries, failure to secure authentication workflows may introduce compliance risks.

Reputational damage compounds these issues. Users quickly associate spam-like behavior with weak security controls, eroding trust and confidence in affected organizations.

As SMS and OTP bombing tools continue to evolve in sophistication and accessibility, the strain on authentication infrastructure underscores the urgent need for stronger rate limiting, adaptive bot detection, and hardened API protections across industries.
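
For defenders, the most basic of those controls is per-identifier rate limiting on OTP-send endpoints. The sketch below is a minimal illustration of that idea in Python, using only the standard library; the limit, window size, and keying choice are illustrative assumptions, not recommendations drawn from the report.

import time
from collections import defaultdict, deque


class OtpRateLimiter:
    """Sliding-window limiter keyed by phone number (could also key on IP or device ID)."""

    def __init__(self, max_requests: int = 3, window_seconds: int = 600):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history: dict[str, deque] = defaultdict(deque)

    def allow(self, phone_number: str) -> bool:
        now = time.monotonic()
        timestamps = self.history[phone_number]
        # Discard send attempts that fell outside the sliding window.
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()
        if len(timestamps) >= self.max_requests:
            return False  # refuse the send; production code would also log and alert
        timestamps.append(now)
        return True


if __name__ == "__main__":
    limiter = OtpRateLimiter(max_requests=3, window_seconds=600)
    for attempt in range(5):
        verdict = "allowed" if limiter.allow("+15555550100") else "blocked"
        print(f"OTP request {attempt + 1}: {verdict}")

A real deployment would pair a check like this with CAPTCHA challenges and adaptive bot detection, in line with the report's recommendations.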

Tesla Slashes Car Line-Up to Double Down on Robots and AI

 

Tesla is cutting several car models and scaling back its electric vehicle ambitions as it shifts focus towards robotics and artificial intelligence, marking a major strategic turning point for the company. The move comes after Tesla reported its first annual revenue decline since becoming a major EV player, alongside a steep fall in profits that undercut its long-standing image as a hyper-growth automaker. Executives are now presenting AI-driven products, including autonomous driving systems and humanoid robots, as the company’s next big profit engines, even as demand for its vehicles shows signs of cooling in key markets.

According to the company, several underperforming or lower-margin models will be discontinued or phased out, allowing Tesla to concentrate resources on a smaller range of vehicles and on the software and AI platforms that power them. This rationalisation follows intense price competition in the global EV market, especially from Chinese manufacturers, which has squeezed margins and forced Tesla into repeated price cuts over the past year. While the company argues that a leaner line-up will improve efficiency and profitability, the decision raises questions about whether Tesla is stepping back from its once-stated goal of driving a mass-market EV revolution.

Elon Musk has increasingly projected Tesla as an AI and robotics firm rather than a traditional carmaker, highlighting projects such as its Optimus humanoid robot and advanced driver-assistance systems. In recent briefings, Musk and other executives have suggested that robotaxis and factory robots could ultimately generate more value than car sales, if Tesla can achieve reliable full self-driving and scale its robotics platforms. Investors, however, remain divided on whether these long-term bets justify the current volatility in Tesla’s core automotive business.

Analysts say the shift underscores broader turbulence in the EV sector, where slowing demand growth, higher borrowing costs and intensifying competition have forced companies to reassess expansion plans. Tesla’s retrenchment on vehicle models is being closely watched by rivals and regulators, as it may signal a maturing market in which software, AI capabilities and integrated ecosystems matter more than the sheer number of models on offer. At the same time, a pivot towards AI raises fresh scrutiny over safety, data practices and the real-world performance of autonomous systems.

For consumers, the immediate impact is likely to be fewer choices in Tesla’s showroom but potentially faster updates and improvements to the remaining models and their software features. Some owners may welcome the renewed focus on autonomy and smart features, while others could be frustrated if favoured variants are discontinued. As Tesla repositions itself, the company faces a delicate balancing act: reassuring car buyers and shareholders today while betting heavily that its AI and robotics vision will define its future tomorrow.

Paul McCartney’s Phone-Free Concert Sparks Growing Push to Lock Smartphones Away

 


When Sir Paul McCartney took the stage at the Santa Barbara Bowl, he promised fans a close, personal performance. He went a step further by introducing a strict no-phones policy, effectively creating a temporary “lockdown” on selfies and video recording.

All 4,500 attendees were required to place their mobile phones inside magnetically sealed pouches for the entire show, resulting in a completely phone-free concert experience.

"Nobody's got a phone," McCartney announced during his 25-song performance. "Really, it's better!" he added.

The process behind enforcing such a large-scale phone ban is relatively straightforward. As fans enter the venue, their phones are sealed inside special pouches that remain with them throughout the event. Once the show ends, the magnetic lock is released and devices are returned to normal use.

A growing number of artists have adopted similar policies. Performers including Dave Chappelle, Alicia Keys, Guns N' Roses, Childish Gambino and Jack White say phone-free environments help them deliver better performances and even take creative risks.

In a June interview with Rolling Stone, Sabrina Carpenter also spoke about the possibility of banning phones at future concerts. Many fans appear open to the idea.

Shannon Valdes, who attended a Lane8 DJ set, shared her experience online: "It was refreshing to be part of a crowd where everyone was fully present - dancing, connecting, and enjoying the best moments - rather than recording them."

The inspiration behind the pouch technology dates back to 2012, when Graham Dugoni witnessed a moment at a music festival that left a lasting impression.

"I saw a man drunk and dancing and a stranger filmed him and immediately posted it online," Dugoni explains. "It kind of shocked me.

"I wondered what the implications might be for him, but I also started questioning what our expectations of privacy should be in the modern world."

Within two years, the former professional footballer launched Yondr, a US-based start-up focused on creating phone-free spaces. While the lockable pouch industry is still developing, more companies are entering the market. These pouches are now commonly used in theatres, art galleries, and increasingly in schools.

Prices typically range from £7 to £30 per pouch, depending on order size and supplier. Yondr says it works with schools serving around 2.2 million students in the US, while roughly 250,000 students across 500 schools in England now use its pouches. One academy trust in Yorkshire reportedly spent £75,000 implementing the system.

Paul Nugent, founder of Hush Pouch, spent two decades installing school lockers before entering this space. He says school leaders must weigh several factors before adopting the technology.

"Yes it can seem an expensive way of keeping phones out of schools, and some people question why they can't just insist phones remain in a student's bag," he explains.

"But smartphones create anxiety, fixation, and FOMO - a fear of missing out. The only way to genuinely allow children to concentrate in lessons, and to enjoy break time, is to lock them away."

According to Dugoni, schools that have introduced phone-free policies have reported measurable benefits.

"There have been notable improvements in academic performance, and headteachers also report reductions in bullying," he explains.

Vale of York Academy introduced pouches in November. Headteacher Gillian Mills told the BBC: "It's given us an extra level of confidence that students aren't having their learning interrupted.

"We're not seeing phone confiscations now, which took up time, or the arguments about handing phones over, but also teachers are saying that they are able to teach."

The political debate around smartphones in schools is also intensifying. Conservative leader Kemi Badenoch has said her party would push for a complete ban on smartphones in schools if elected. The Labour government has stopped short of a nationwide ban, instead allowing headteachers to decide, while opening a consultation on restricting social media access for under-16s.

As part of these measures, Ofsted will be granted powers to review phone-use policies, with ministers expecting schools to become “phone-free by default”.

Nugent notes that many parents prefer their children to carry phones for safety reasons during travel.

"The first week or so after we install the system is a nightmare," he adds. "Kids refuse, or try and break the pouches open. But once they realise no-one else has a phone, most of them embrace it as a kind of freedom."

The rapid expansion of social media platforms and AI-driven content places these phone-free initiatives in direct opposition to tech companies whose algorithms encourage constant smartphone use. Still, Nugent believes public sentiment is shifting.

"We're getting so many enquiries now. People want to ban phones at weddings, in theatres, and even on film sets," he says.

"Effectively carrying a computer around in your hand has many benefits, but smartphones also open us up to a lot of misdirection and misinformation.

"Enforcing a break, especially for young people, has so many positives, not least for their mental health."

Dugoni agrees that society may be reaching a turning point.

"We're getting close to threatening the root of what makes us human, in terms of social interaction, critical thinking faculties, and developing the skills to operate in the modern world," he explains.

"If we continue to outsource those, with this crutch in our pocket at all times, there is a danger we end up undermining what it means to be a productive person.

"And that is a moment where it's worth pushing back and trying to understand where we go from here."

As 4,500 McCartney fans sang along to Hey Jude under a late-September sky, many may have felt the former Beatle’s message resonate just as strongly as the music.

Ukraine Increases Control Over Starlink Terminals


New Starlink verification system 

Ukraine has launched a new authentication system for Starlink satellite internet terminals used by the public and the military after confirming that Russian forces have begun using the technology to control attack drones.

The government has also introduced a compulsory “whitelist” for Starlink terminals: only authenticated and registered devices will work in Ukraine, and all other terminals will be cut off, according to a statement from Mykhailo Fedorov, the country's recently appointed defense chief.

Why the new move?

Kyiv claims that Russian unmanned aerial vehicles are now being commanded in real time using Starlink links, making them more difficult to detect, jam, or shoot down. This action is intended to counteract these threats. "It is challenging to intercept Russian drones that are equipped with Starlink," Fedorov stated earlier this week. "They can be controlled by operators over long distances in real time, will not be affected by electronic warfare, and fly at low altitudes." The Ministry of Defense is implementing the whitelist in collaboration with SpaceX, the company that runs the constellation of low-Earth orbit satellites for Starlink.

The step is presently the only technological way to stop Russia from abusing the system, Fedorov revealed Wednesday, adding that citizens have already started registering their terminals. "The government has taken this forced action to save Ukrainian lives and safeguard our energy infrastructure," he stated. 

How will it impact other sectors?

Businesses will be able to validate devices online using Ukraine's e-government services, while citizens will be able to register their terminals at local government offices under the new system. According to Ukraine's Ministry of Defense, military units will be exempt from disclosing account information and will utilize a different secure registration method.

Using Starlink connectivity, Ukraine discovered a Russian drone operating over Ukrainian territory at the end of January. After that, Kyiv got in touch with SpaceX to resolve the problem, although the specifics of the emergency procedures were not made public. One step was setting a maximum speed at which Starlink terminals can operate, according to ArmyInform, a Ukrainian military outlet, which cited an initial cap of about 75 kilometers per hour. According to the report, Russian strike drones usually fly faster than that, making it impossible for operators to manage them in real time.


YouTube's New GenAI Feature in Tools Coming Soon


YouTube is planning something new for its platform and content creators in 2026: the company intends to integrate AI into its existing and new tools. CEO Neal Mohan said that content creators will be able to use GenAI for Shorts. While we don't know much about the feature yet, it sounds similar to OpenAI’s Sora app, where users make videos of themselves via prompts. 

What will be new in 2026? 

“This year you'll be able to create a Short using your own likeness, produce games with a simple text prompt, and experiment with music,” said CEO Neal Mohan. All of these features will be AI-powered, which many creators may not welcome; many users prefer non-AI content. Mohan addressed these concerns, saying that “throughout this evolution, AI will remain a tool for expression, not a replacement.”

But the CEO didn't provide further details about these new AI capabilities. It is not yet clear how they will help creators or how the music experimentation will work. 

That's not all, though.

Additionally, YouTube will introduce new formats for Shorts. According to Mohan, Shorts will let users share images in much the same way as Instagram Reels does, and these will be shared directly to subscribers' feeds. 

In 2026, YouTube will also focus on the biggest screens it is watched on: televisions. According to Mohan, the company will soon introduce "more than 10 specialized YouTube TV plans spanning sports, entertainment, and news, all designed to give subscribers more control," along with "fully customizable multiview.”

Why the new features?

Mohan noted that the creator economy is another area of focus. According to YouTube's CEO, video producers will gain new revenue streams this year. Planned additions include fan-funding features such as Jewels and gifts, which will sit alongside the existing Super Chat, as well as shopping and brand deals facilitated by YouTube. 

YouTube's new venture

The company also hopes to grow YouTube Shopping, an affiliate program that lets creators sell goods directly in their videos, Shorts, and live streams. It stated that it will implement in-app checkout in 2026, enabling users to make purchases without ever leaving the platform.


Threat Actors Exploit Fortinet Devices and Steal Firewall Configurations


Fortinet products targeted

Threat actors are targeting Fortinet FortiGate devices via automated attacks that make rogue accounts and steal firewall settings info. 

The campaign began earlier this year, when threat actors exploited a then-unknown bug in the devices’ single sign-on (SSO) feature to create accounts with VPN access and steal firewall configurations. The scale and uniformity of the activity indicate that automation was involved. 

About the attack

Cybersecurity company Arctic Wolf discovered the attacks and said they are quite similar to those it observed in December, after the disclosure of a critical login bypass flaw (CVE-2025-59718) in Fortinet products. 

The advisory comes after a series of reports from Fortinet users about threat actors abusing a patch bypass for the bug CVE-2025-59718 to take over patched firewalls. 

Impacted admins complain that the latest FortiOS release, 7.4.10, which Fortinet said addressed the flaw, doesn't fully fix the authentication bypass bug that was supposed to have been resolved in December 2025.

Patches and fixing 

Fortinet also plans to release additional FortiOS versions soon to fully patch the CVE-2025-59718 security bug. 

Following an SSO login from cloud-init@mail.io on IP address 104.28.244.114, the attackers created admin users, according to logs shared by impacted Fortinet customers. This matches indications of compromise found by Arctic Wolf during its analysis of ongoing FortiGate attacks and prior exploitation the cybersecurity firm noticed in December. 

In the meantime, admins are advised to turn off FortiCloud SSO to prevent intrusions. 

Turning off SSO

Admins can temporarily disable the vulnerable FortiCloud login capability (if enabled) by navigating to System -> Settings and changing "Allow administrative login using FortiCloud SSO" to Off. This will help administrators safeguard their firewalls until Fortinet properly updates FortiOS against these persistent assaults.

Alternatively, the same change can be made from the command-line interface (CLI):

"config system global

set admin-forticloud-sso-login disable

end"

What to do next?

Internet security watchdog Shadowserver is tracking around 11,000 internet-exposed Fortinet devices that have FortiCloud SSO turned on and are potentially vulnerable to these attacks. 

Additionally, after adding CVE-2025-59718 to its catalog of known exploited vulnerabilities on December 16, CISA ordered federal agencies to patch the flaw within a week.

Smart Homes Under Threat: How to Reduce the Risk of IoT Device Hacking

 

Most households today use some form of internet of things (IoT) technology, whether it’s a smartphone, tablet, smart plugs, or a network of cameras and sensors. Learning that nearly 120,000 home security cameras were compromised in South Korea and misused for sexploitation footage is enough to make anyone reconsider adding connected devices to their living space. After all, the home is meant to be a private and secure environment.

Although all smart homes carry some level of risk, widespread hacking incidents are still relatively uncommon. Cybercriminals targeting smart homes tend to be opportunistic rather than strategic. Instead of focusing on a particular household and attempting to break into a specific system, they scan broadly for devices with weak or misconfigured security settings that can be exploited easily.

The most effective way to safeguard smart home devices is to avoid being an easy target. Unfortunately, many of the hacking cases reported in the media stem from basic security oversights that could have been prevented with simple precautions.

How to Protect Your Smart Home From Hackers

Using weak passwords, neglecting firmware updates, or leaving Wi-Fi networks exposed can increase the risk of unauthorized access—even if the overall threat level remains low. Below are key steps homeowners can take to strengthen smart home security.

1. Use strong and unique passwords
Hackers gaining access to baby monitors and speaking through two-way audio is often the result of unchanged default passwords. Weak or reused passwords are easy to guess, especially if they have appeared in previous data breaches. Each smart device and account should have a strong, unique password to make attacks more difficult and less appealing.

2. Enable two-factor or multi-factor authentication
Multi-factor authentication adds an extra layer of protection by requiring a second form of verification beyond a password. Even if login credentials are compromised, attackers would still need additional approval. Many major smart home platforms, including Amazon, Google, and Philips Hue, support this feature. While it may add a small inconvenience during login, the added security is well worth the effort.

3. Secure your Wi-Fi network
Wi-Fi security is often overlooked but plays a critical role in smart home protection. Using WPA2 or WPA3 encryption and changing the router’s default password are essential steps. Limiting who has access to your Wi-Fi network also helps. Creating separate networks—one for personal devices and another exclusively for IoT devices—can further reduce risk by isolating smart home hardware from sensitive data.

4. Keep device firmware updated
Manufacturers regularly release firmware updates to patch newly discovered vulnerabilities. Enabling automatic updates ensures devices receive these fixes promptly. Keeping firmware current is one of the simplest and most effective ways to close security gaps.

5. Disable unnecessary features
Features that aren’t actively used can create additional entry points for attackers. If remote access isn’t needed, disabling it can significantly reduce exposure—particularly for devices with cameras. It’s also advisable to turn off Universal Plug and Play (UPnP) on routers and decline unnecessary integrations or permissions that don’t serve a clear purpose.

6. Research brands before buying
Brand recognition alone doesn’t guarantee strong security. Even well-known companies such as Wyze, Eufy, and Google have faced security issues in the past. Before purchasing a smart device, it’s important to research the brand’s security practices, data protection policies, and real-world user experiences. If features like local-only storage are important, they should be verified through reviews, forums, and independent evaluations.

Smart homes offer convenience and efficiency, but they also demand responsibility. By following basic cybersecurity practices and making informed purchasing decisions, homeowners can significantly reduce risks and enjoy the benefits of connected living with greater peace of mind.

Experts Find Malicious Browser Extensions; Chrome, Firefox, and Edge Affected


Threat actors exploit extensions

Cybersecurity experts have found 17 extensions for the Chrome, Edge, and Firefox browsers that track users' internet activity and install backdoors for remote access. The extensions were downloaded over 840,000 times. 

The campaign is not new: LayerX said it is part of GhostPoster, a campaign first identified by Koi Security last December. At that time, researchers discovered 17 different extensions, downloaded over 50,000 times, that showed the same monitoring behaviour and backdoor deployment. 

Some extensions from the new batch were uploaded as early as 2020, exposing users to malware for years. The extensions first appeared in the Edge add-ons store and later expanded to Firefox and Chrome. 

Some extensions store malicious JavaScript code inside their PNG logo. The hidden code contains instructions for downloading the main payload from a remote server. 
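
The report does not detail the exact embedding method, but one simple defensive heuristic for this class of trick is checking whether an extension's image assets carry extra data after the PNG IEND chunk, where appended script text sometimes hides. The Python sketch below is only a rough triage aid; genuine steganography can be far subtler, so any hit should be treated as a lead, not proof.

import sys
from pathlib import Path

PNG_MAGIC = b"\x89PNG\r\n\x1a\n"


def trailing_bytes(png_path: Path) -> int:
    """Return the number of unexpected bytes after the PNG IEND chunk (-1 if not a PNG)."""
    data = png_path.read_bytes()
    if not data.startswith(PNG_MAGIC):
        return -1
    iend = data.rfind(b"IEND")
    if iend == -1:
        return -1
    # A well-formed PNG ends 4 bytes (the chunk CRC) after the IEND chunk type.
    expected_end = iend + 4 + 4
    return max(0, len(data) - expected_end)


if __name__ == "__main__":
    root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    for png in root.rglob("*.png"):
        extra = trailing_bytes(png)
        if extra > 0:
            print(f"{png}: {extra} unexpected byte(s) after IEND")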

The main payload does multiple things. It can hijack affiliate links on famous e-commerce websites to steal money from content creators and influencers. “The malware watches for visits to major e-commerce platforms. When you click an affiliate link on Taobao or JD.com, the extension intercepts it. The original affiliate, whoever was supposed to earn a commission from your purchase, gets nothing. The malware operators get paid instead,” said Koi researchers. 

The payload also injects Google Analytics tracking into every page the user opens and strips security headers from HTTP responses. 

Finally, it bypasses CAPTCHA protections in three different ways and deploys invisible iframes that carry out ad fraud, click fraud, and tracking. These iframes disappear after 15 seconds.

All of the extensions have since been removed from the official stores, but affected users should also uninstall them manually. 

“This staged execution flow demonstrates a clear evolution toward longer dormancy, modularity, and resilience against both static and behavioral detection mechanisms,” said LayerX. 

Some extensions use the PNG steganography technique; others download JavaScript directly and inject it into each page a user visits; still others use bespoke ciphers to encode their command-and-control (C&C) domains and hide eval() calls. The same attacker, using identical servers, is delivering the malware in multiple ways, apparently to test which approach gets the most installs, evades detection the longest, and makes the most money.

This campaign reflects a deliberate shift toward patience and precision. By embedding malicious code in images, delaying execution, and rotating delivery techniques across identical infrastructure, the attackers test which methods evade detection longest. The strategy favors longevity and profit over speed, exposing how browser ecosystems remain vulnerable to quietly persistent threats.

Microsoft Unveils Backdoor Scanner for Open-Weight AI Models

 

Microsoft has introduced a new lightweight scanner designed to detect hidden backdoors in open‑weight large language models (LLMs), aiming to boost trust in artificial intelligence systems. The tool, built by the company’s AI Security team, focuses on subtle behavioral patterns inside models to reliably flag tampering without generating many false positives. By targeting how specific trigger inputs change a model’s internal operations, Microsoft hopes to offer security teams a practical way to vet AI models before deployment.

The scanner is meant to address a growing problem in AI security: model poisoning and backdoored models that act as “sleeper agents.” In such attacks, threat actors manipulate model weights or training data so the model behaves normally in most scenarios, but switches to malicious or unexpected behavior when it encounters a carefully crafted trigger phrase or pattern. Because these triggers are narrowly defined, the backdoor often evades normal testing and quality checks, making detection difficult. Microsoft notes that both the model’s parameters and its surrounding code can be tampered with, but this tool focuses primarily on backdoors embedded directly into the model’s weights.

To detect these covert modifications, Microsoft’s scanner looks for three practical signals that indicate a poisoned model. First, when given a trigger prompt, compromised models tend to show a distinctive “double triangle” attention pattern, focusing heavily on the trigger itself and sharply reducing the randomness of their output. Second, backdoored LLMs often leak fragments of their own poisoning data, including trigger phrases, through memorization rather than generalization. Third, a single hidden backdoor may respond not just to one exact phrase, but to multiple “fuzzy” variations of that trigger, which the scanner can surface during analysis.

The detection workflow starts by extracting memorized content from the model, then analyzing that content to isolate suspicious substrings that could represent hidden triggers. Microsoft formalizes the three identified signals as loss functions, scores each candidate substring, and returns a ranked list of likely trigger phrases that might activate a backdoor. A key advantage is that the scanner does not require retraining the model or prior knowledge of the specific backdoor behavior, and it can operate across common GPT‑style architectures at scale. This makes it suitable for organizations evaluating open‑weight models obtained from third parties or public repositories.
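
Microsoft has not published the scanner's code, but the entropy-collapse signal it describes can be approximated with off-the-shelf tooling. The sketch below assumes a Hugging Face causal language model ("gpt2" is used only as a stand-in for an open-weight model under review) and ranks candidate substrings by how sharply they reduce next-token entropy when appended to a benign prompt. The candidate phrases and the ranking approach are illustrative; this is not Microsoft's method.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in for an open-weight model being vetted
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def next_token_entropy(prompt: str) -> float:
    """Entropy of the model's next-token distribution for a given prompt."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    probs = torch.softmax(logits, dim=-1)
    return float(-(probs * probs.clamp_min(1e-12).log()).sum())


def score_candidates(base_prompt: str, candidates: list) -> list:
    """Rank candidate substrings by the entropy drop they cause when appended.

    Larger drops suggest the model 'locks on' to the phrase, one of the
    behaviors associated with backdoor triggers.
    """
    baseline = next_token_entropy(base_prompt)
    scored = [(c, baseline - next_token_entropy(base_prompt + " " + c)) for c in candidates]
    return sorted(scored, key=lambda item: item[1], reverse=True)


if __name__ == "__main__":
    suspects = ["deploy beta mode", "an ordinary phrase", "sigma-7 unlock"]
    for phrase, drop in score_candidates("Summarize the following report:", suspects):
        print(f"{phrase!r}: entropy drop {drop:.3f}")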

However, the company stresses that the scanner is not a complete solution to all backdoor risks. It requires direct access to model files, so it cannot be used on proprietary, fully hosted models. It is also optimized for trigger‑based backdoors that produce deterministic outputs, meaning more subtle or probabilistic attacks may still evade detection. Microsoft positions the tool as an important step toward deployable backdoor detection and calls for broader collaboration across the AI security community to refine defenses. In parallel, the firm is expanding its Secure Development Lifecycle to address AI‑specific threats like prompt injection and data poisoning, acknowledging that modern AI systems introduce many new entry points for malicious inputs.

Researchers Disclose Patched Flaw in Docker AI Assistant that Enabled Code Execution


Researchers have disclosed details of a previously fixed security flaw in Ask Gordon, an artificial intelligence assistant integrated into Docker Desktop and the Docker command-line interface, that could have been exploited to execute code and steal sensitive data. The vulnerability, dubbed DockerDash by cybersecurity firm Noma Labs, was patched by Docker in November 2025 with the release of version 4.50.0. 

“In DockerDash, a single malicious metadata label in a Docker image can be used to compromise your Docker environment through a simple three-stage attack,” said Sasi Levi, security research lead at Noma Labs, in a report shared with The Hacker News. “Every stage happens with zero validation, taking advantage of current agents and MCP Gateway architecture.” 

According to the researchers, the flaw allowed Ask Gordon to treat unverified container metadata as executable instructions. When combined with Docker’s Model Context Protocol gateway, this behavior could lead to remote code execution on cloud and command-line systems, or data exfiltration on desktop installations. 

The issue stems from what Noma described as a breakdown in contextual trust. Ask Gordon reads metadata from Docker images, including LABEL fields, without distinguishing between descriptive information and embedded instructions. These instructions can then be forwarded to the MCP Gateway, which executes them using trusted tools without additional checks. “MCP Gateway cannot distinguish between informational metadata and a pre-authorized, runnable internal instruction,” Levi said. 

“By embedding malicious instructions in these metadata fields, an attacker can hijack the AI’s reasoning process.” In a hypothetical attack, a malicious actor could publish a Docker image containing weaponized metadata labels. When a user queries Ask Gordon about the image, the assistant parses the labels, forwards them to the MCP Gateway, and triggers tool execution with the user’s Docker privileges.  
Researchers said the same weakness could be used for data exfiltration on Docker Desktop, allowing attackers to gather details about installed tools, container configurations, mounted directories, and network setups, despite the assistant’s read-only permissions. Docker version 4.50.0 also addressed a separate prompt injection flaw previously identified by Pillar Security, which could have enabled attackers to manipulate Docker Hub metadata to extract sensitive information. 
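
Docker Desktop 4.50.0 fixes the flaw, but teams that feed image metadata into AI assistants may still want to vet labels before an agent reads them. The sketch below is a rough, heuristic pre-check rather than Noma's or Docker's tooling: it shells out to the standard docker image inspect command and flags label values that look like embedded instructions. The regular expression is an illustrative assumption and will miss less obvious payloads.

import json
import re
import subprocess
import sys

# Illustrative patterns only; real prompt-injection payloads can be far less obvious.
SUSPICIOUS = re.compile(
    r"(ignore (all|previous) instructions|you must|run |execute |curl |wget |base64 )",
    re.IGNORECASE,
)


def image_labels(image: str) -> dict:
    """Read an image's LABEL metadata via the Docker CLI."""
    raw = subprocess.run(
        ["docker", "image", "inspect", image],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(raw)[0].get("Config", {}).get("Labels") or {}


def suspicious_labels(image: str) -> list:
    return [(k, v) for k, v in image_labels(image).items() if v and SUSPICIOUS.search(v)]


if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "alpine:latest"
    hits = suspicious_labels(target)
    print(f"{len(hits)} suspicious label(s) found in {target}")
    for key, value in hits:
        print(f"  {key}: {value[:80]}")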

“The DockerDash vulnerability underscores the need to treat AI supply chain risk as a current core threat,” Levi said. “Trusted input sources can be used to hide malicious payloads that manipulate an AI’s execution path.”

Microsoft Outlines Three-Stage Plan to Disable NTLM and Strengthen Windows Security

 

Microsoft has detailed a structured, three-phase roadmap to gradually retire New Technology LAN Manager (NTLM), reinforcing its broader push toward more secure, Kerberos-based authentication within Windows environments.

The announcement follows Microsoft’s earlier decision to deprecate NTLM, a legacy authentication mechanism that has long been criticized for its security shortcomings. Officially deprecated in June 2024, NTLM no longer receives updates, as its design leaves systems vulnerable to relay attacks and unauthorized access.

"NTLM consists of security protocols originally designed to provide authentication, integrity, and confidentiality to users," Mariam Gewida, Technical Program Manager II at Microsoft, explained. "However, as security threats have evolved, so have our standards to meet modern security expectations. Today, NTLM is susceptible to various attacks, including replay and man-in-the-middle attacks, due to its use of weak cryptography."

Despite its deprecated status, Microsoft acknowledged that NTLM remains widely used across enterprise networks. This is largely due to legacy applications, infrastructure constraints, and deeply embedded authentication logic that make migration difficult. Continued reliance on NTLM increases exposure to threats such as replay, relay, and pass-the-hash attacks.

To address these risks without disrupting critical systems, Microsoft has introduced a phased strategy aimed at eventually disabling NTLM by default.

Phase 1 focuses on improving visibility and administrative control by expanding NTLM auditing capabilities. This helps organizations identify where NTLM is still in use and why. This phase is already available.

Phase 2 aims to reduce migration barriers by introducing tools such as IAKerb and a local Key Distribution Center (KDC), while also updating core Windows components to favor Kerberos authentication. These changes are expected to roll out in the second half of 2026.

Phase 3 will see NTLM disabled by default in the next release of Windows Server and corresponding Windows client versions. Organizations will need to explicitly re-enable NTLM using new policy controls if required.

Microsoft described the move as a key milestone toward a passwordless and phishing-resistant ecosystem. The company urged organizations that still depend on NTLM to audit usage, identify dependencies, transition to Kerberos, test NTLM-disabled configurations in non-production environments, and enable Kerberos enhancements.

"Disabling NTLM by default does not mean completely removing NTLM from Windows yet," Gewida said. "Instead, it means that Windows will be delivered in a secure-by-default state where network NTLM authentication is blocked and no longer used automatically."

"The OS will prefer modern, more secure Kerberos-based alternatives. At the same time, common legacy scenarios will be addressed through new upcoming capabilities such as Local KDC and IAKerb (pre-release)."


Open-Source AI Models Pose Growing Security Risks, Researchers Warn

Hackers and other criminals can easily hijack computers running open-source large language models and use them for illicit activity, bypassing the safeguards built into major artificial intelligence platforms, researchers said on Thursday. The findings are based on a 293-day study conducted jointly by SentinelOne and Censys, and shared exclusively with Reuters. 

The research examined thousands of publicly accessible deployments of open-source LLMs and highlighted a broad range of potentially abusive use cases. According to the researchers, compromised systems could be directed to generate spam, phishing content, or disinformation while evading the security controls enforced by large AI providers. 

The deployments were also linked to activity involving hacking, hate speech, harassment, violent or graphic content, personal data theft, scams, fraud, and in some cases, child sexual abuse material. While thousands of open-source LLM variants are available, a significant share of internet-accessible deployments were based on Meta’s Llama models, Google DeepMind’s Gemma, and other widely used systems, the researchers said. 

They identified hundreds of instances in which safety guardrails had been deliberately removed. “AI industry conversations about security controls are ignoring this kind of surplus capacity that is clearly being utilized for all kinds of different stuff, some of it legitimate, some obviously criminal,” said Juan Andres Guerrero-Saade, executive director for intelligence and security research at SentinelOne. He compared the problem to an iceberg that remains largely unaccounted for across the industry and the open-source community. 

The study focused on models deployed using Ollama, a tool that allows users to run their own versions of large language models. Researchers were able to observe system prompts in about a quarter of the deployments analyzed and found that 7.5 percent of those prompts could potentially enable harmful behavior. 
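
For operators running Ollama themselves, a basic hygiene step is confirming the service is not reachable beyond the host it runs on. The snippet below is a minimal self-check sketch; it assumes Ollama's default port 11434 and its /api/tags model-listing endpoint, and it only reports whether a given address answers, not whether firewall rules are otherwise correct.

import json
import socket
import urllib.request

OLLAMA_PORT = 11434  # Ollama's default listening port


def reachable(host: str, port: int = OLLAMA_PORT, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def list_models(host: str) -> list:
    """Ask the Ollama API which models the instance is serving."""
    url = f"http://{host}:{OLLAMA_PORT}/api/tags"
    with urllib.request.urlopen(url, timeout=3) as resp:
        data = json.load(resp)
    return [m.get("name", "?") for m in data.get("models", [])]


if __name__ == "__main__":
    # Check loopback first; add your machine's public or LAN address to verify exposure.
    for host in ("127.0.0.1",):
        if reachable(host):
            print(f"{host} is serving Ollama; models: {list_models(host)}")
        else:
            print(f"{host} is not answering on port {OLLAMA_PORT}")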

Geographically, around 30 percent of the observed hosts were located in China, with about 20 percent based in the United States, the researchers said. Rachel Adams, chief executive of the Global Centre on AI Governance, said responsibility for downstream misuse becomes shared once open models are released. “Labs are not responsible for every downstream misuse, but they retain an important duty of care to anticipate foreseeable harms, document risks, and provide mitigation tooling and guidance,” Adams said.  

A Meta spokesperson declined to comment on developer responsibility for downstream abuse but pointed to the company’s Llama Protection tools and Responsible Use Guide. Microsoft AI Red Team Lead Ram Shankar Siva Kumar said Microsoft believes open-source models play an important role but acknowledged the risks. 

“We are clear-eyed that open models, like all transformative technologies, can be misused by adversaries if released without appropriate safeguards,” he said. 

Microsoft conducts pre-release evaluations and monitors for emerging misuse patterns, Kumar added, noting that “responsible open innovation requires shared commitment across creators, deployers, researchers, and security teams.” 

Ollama, Google and Anthropic did not comment. 

Apple's New Feature Will Help Users Restrict Location Data


Apple has introduced a new privacy feature that allows users to restrict the accuracy of location data shared with cellular networks on select iPhone and iPad models. 

About the feature

The “Limit Precise Location” feature becomes available after updating to iOS 26.3 or later. It restricts the information that mobile carriers can use to determine a device's location through cell tower connections. Once it is turned on, cellular networks can only determine the device’s approximate location, such as its neighbourhood, rather than an exact street address. 

According to Apple, “The precise location setting doesn't impact the precision of the location data that is shared with emergency responders during an emergency call.” “This setting affects only the location data available to cellular networks. It doesn't impact the location data that you share with apps through Location Services. For example, it has no impact on sharing your location with friends and family with Find My.”

Users can turn on the feature by opening “Settings,” selecting “Cellular,” then “Cellular Data Options,” and enabling the “Limit Precise Location” setting. After the setting is turned on, the device may require a restart to complete activation. 

The privacy feature works only on the iPhone Air and iPad Pro (M5) Wi-Fi + Cellular models running iOS 26.3 or later. 

Where will it work?

The availability of this feature will depend on carrier support. The compatible mobile networks are:
  • EE and BT in the UK
  • Boost Mobile in the UK
  • Telekom in Germany
  • AIS and True in Thailand

Apple hasn't shared the reason for introducing this feature yet.

Compatibility of networks with the new feature 

Because cellular networks can easily track device locations through tower connections as part of normal network operations, Apple's new privacy feature, though currently supported by only a small number of carriers, is a significant step towards limiting the data those carriers can collect about their customers' movements and habits.

“Cellular networks can determine your location based on which cell towers your device connects to. The limit precise location setting enhances your location privacy by reducing the precision of location data available to cellular networks,” Apple explained.

Exposed Admin Dashboard in AI Toy Put Children’s Data and Conversations at Risk

 

A routine investigation by a security researcher into an AI-powered toy revealed a serious security lapse that could have exposed sensitive information belonging to children and their families.

The issue came to light when security researcher Joseph Thacker examined an AI toy owned by a neighbor. In a blog post, Thacker described how he and fellow researcher Joel Margolis uncovered an unsecured admin interface linked to the Bondu AI toy.

Margolis identified a suspicious domain—console.bondu.com—referenced in the Content Security Policy headers of the toy’s mobile app backend. On visiting the domain, he found a simple option labeled “Login with Google.”

“By itself, there’s nothing weird about that as it was probably just a parent portal,” Thacker wrote. Instead, logging in granted access to Bondu’s core administrative dashboard.

“We had just logged into their admin dashboard despite [not] having any special accounts or affiliations with Bondu themselves,” Thacker said.

AI Toy Admin Panel Exposed Children’s Conversations

Further analysis of the dashboard showed that the researchers had unrestricted visibility into “Every conversation transcript that any child has had with the toy,” spanning “tens of thousands of sessions.” The exposed panel also included extensive personal details about children and their households, such as:
  • Child’s name and date of birth
  • Names of family members
  • Preferences, likes, and dislikes
  • Parent-defined developmental objectives
  • The custom name assigned to the toy
  • Historical conversations used to provide context to the language model
  • Device-level data including IP-based location, battery status, and activity state
  • Controls to reboot devices and push firmware updates
The researchers also observed that the system relies on OpenAI GPT-5 and Google Gemini. “Somehow, someway, the toy gets fed a prompt from the backend that contains the child profile information and previous conversations as context,” Thacker wrote. “As far as we can tell, the data that is being collected is actually disclosed within their privacy policy, but I doubt most people realize this unless they go and read it (which most people don’t do nowadays).”

Beyond the authentication flaw, the team identified an Insecure Direct Object Reference (IDOR) vulnerability in the API. This weakness “allowed us to retrieve any child’s profile data by simply guessing their ID.”

“This was all available to anyone with a Google account,” Thacker said. “Naturally we didn’t access nor store any data beyond what was required to validate the vulnerability in order to responsibly disclose it.”
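
For developers, the IDOR weakness described above comes down to a missing ownership check: the API trusted any authenticated caller who supplied a valid-looking ID. The minimal Flask sketch below (all endpoint and helper names are hypothetical, not Bondu's API) contrasts the vulnerable pattern with one that verifies the record actually belongs to the caller.

from flask import Flask, abort, jsonify

app = Flask(__name__)

# Toy data store: profile ID -> owning parent account plus profile data.
PROFILES = {
    "1001": {"owner": "parent-a@example.com", "name": "Child A"},
    "1002": {"owner": "parent-b@example.com", "name": "Child B"},
}


def current_user() -> str:
    """Stand-in for real session/OAuth handling; pretend parent-a is signed in."""
    return "parent-a@example.com"


# Vulnerable pattern: any signed-in user can fetch any profile just by guessing IDs.
@app.route("/v1/insecure/profiles/<profile_id>")
def insecure_profile(profile_id):
    profile = PROFILES.get(profile_id)
    if profile is None:
        abort(404)
    return jsonify(profile)


# Fixed pattern: the record must belong to the caller, otherwise refuse.
@app.route("/v1/profiles/<profile_id>")
def secure_profile(profile_id):
    profile = PROFILES.get(profile_id)
    if profile is None:
        abort(404)
    if profile["owner"] != current_user():
        abort(403)  # authorization check, not just authentication
    return jsonify(profile)


if __name__ == "__main__":
    app.run(debug=False)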

Bondu Responds Within Minutes

Margolis contacted Bondu’s CEO via LinkedIn over the weekend, prompting the company to disable access to the exposed console “within 10 minutes.”

“Overall we were happy to see how the Bondu team reacted to this report; they took the issue seriously, addressed our findings promptly, and had a good collaborative response with us as security researchers,” Thacker said.

Bondu also initiated a broader security review, searched for additional vulnerabilities, and launched a bug bounty program. After reviewing console access logs, the company stated that no unauthorized parties had accessed the system aside from the researchers, preventing what could have become a data breach.

Despite the swift and responsible response, the incident changed Thacker’s perspective on AI-driven toys.

“To be honest, Bondu was totally something I would have been prone to buy for my kids before this finding,” he wrote. “However this vulnerability shifted my stance on smart toys, and even smart devices in general.”

“AI models are effectively a curated, bottled-up access to all the information on the internet,” he added. “And the internet can be a scary place. I’m not sure handing that type of access to our kids is a good idea.”

He further noted that, beyond data security concerns, AI introduces new risks at home. “AI makes this problem even more interesting because the designer (or just the AI model itself) can have actual ‘control’ of something in your house. And I think that is even more terrifying than anything else that has existed yet,” he said.

Bondu’s website maintains that the toy was designed with safety as a priority, stating that its “safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from bondu throughout the entire beta period.”

Google’s Project Genie Signals a Major Shift for the Gaming Industry

 

Google has sent a strong signal to the video game sector with the launch of Project Genie, an experimental AI world-model that can create explorable 3D environments using simple text or image prompts.

Although Google’s Genie AI has been known since 2024, its integration into Project Genie marks a significant step forward. The prototype is now accessible to Google AI Ultra subscribers in the US and represents one of Google’s most ambitious AI experiments to date.

Project Genie is being introduced through Google Labs, allowing users to generate short, interactive environments that can be explored in real time. Built on DeepMind’s Genie 3 world-model research, the system lets users move through AI-generated spaces, tweak prompts, and instantly regenerate variations. However, it is not positioned as a full-scale game engine or production-ready development tool.

Demonstrations on the Project Genie website showcase a variety of scenarios, including a cat roaming a living room from atop a Roomba, a vehicle traversing the surface of a rocky moon, and a wingsuit flyer gliding down a mountain. These environments remain navigable in real time, and while the worlds are generated dynamically as characters move, consistency is maintained. Revisiting areas does not create new terrain, and any changes made by an agent persist as long as the system retains sufficient memory.

"Genie 3 environments are … 'auto-regressive' – created frame by frame based on the world description and user actions," Google explains on Genie's website. "The environments remain largely consistent for several minutes, with memory recalling changes from specific interactions for up to a minute."

Despite these capabilities, time constraints remain a challenge.

"The model can support a few minutes of continuous interaction, rather than extended hours," Google said, adding elsewhere that content generation is currently capped at 60 seconds. A Google spokesperson told The Register that Genie can render environments beyond that limit, but the company "found 60 seconds provides a high quality and consistent world, and it gives people enough time to explore and experience the environment."

Google stated that world consistency lasts throughout an entire session, though it remains unclear whether session durations will be expanded in the future. Beyond time limits, the system has other restrictions.

Agents in Genie’s environments are currently limited in the actions they can perform, and interactions between multiple agents are unreliable. The model struggles with readable text, lacks accurate real-world simulation, and can suffer from lag or delayed responses. Google also acknowledged that some previously announced features are missing.

In addition, "A few of the Genie 3 model capabilities we announced in August, such as promptable events that change the world as you explore it, are not yet included in this prototype," Google added.

"A world model simulates the dynamics of an environment, predicting how they evolve and how actions affect them," the company said of Genie. "While Google DeepMind has a history of agents for specific environments like Chess or Go, building AGI requires systems that navigate the diversity of the real world."

Game Developers Face an Uncertain Future

Beyond AGI research, Google also sees potential applications for Genie within the gaming industry—an area already under strain. While Google emphasized that Genie "is not a game engine and can’t create a full game experience," a spokesperson told The Register, "we are excited to see the potential to augment the creative process, enhancing ideation, and speeding up prototyping."

Industry data suggests this innovation arrives at a difficult time. A recent Informa Game Developers Conference report found that 33 percent of US game developers and 28 percent globally experienced at least one layoff over the past two years. Half of respondents said their employer had conducted layoffs within the last year.

Concerns about AI’s role are growing. According to the same survey, 52 percent of industry professionals believe AI is negatively affecting the games sector—up sharply from 30 percent last year and 18 percent the year before. The most critical views came from professionals working in visual and technical art, narrative design, programming, and game design.

One machine learning operations employee summed up those fears bluntly.

"We are intentionally working on a platform that will put all game devs out of work and allow kids to prompt and direct their own content," the GDC study quotes the respondent as saying.

While Project Genie still has clear technical limitations, the rapid pace of AI development suggests those gaps may not last long—raising difficult questions about the future of game development.

Anthropic Cracks Down on Claude Code Spoofing, Tightens Access for Rivals and Third-Party Tools

 

Anthropic has rolled out a new set of technical controls aimed at stopping third-party applications from impersonating its official coding client, Claude Code, to gain cheaper access and higher usage limits to Claude AI models. The move has directly disrupted workflows for users of popular open-source coding agents such as OpenCode.

At the same time—but through a separate enforcement action—Anthropic has also curtailed the use of its models by competing AI labs, including xAI, which accessed Claude through the Cursor integrated development environment. Together, these steps signal a tightening of Anthropic’s ecosystem as demand for Claude Code surges.

The anti-spoofing update was publicly clarified on Friday by Thariq Shihipar, a Member of Technical Staff at Anthropic working on Claude Code. Writing on X (formerly Twitter), Shihipar said the company had "tightened our safeguards against spoofing the Claude Code harness." He acknowledged that the rollout caused unintended side effects, explaining that some accounts were automatically banned after triggering abuse detection systems—an issue Anthropic says it is now reversing.

While those account bans were unintentional, the blocking of third-party integrations themselves appears to be deliberate.

Why Harnesses Were Targeted

The changes focus on so-called “harnesses”—software wrappers that control a user’s web-based Claude account via OAuth in order to automate coding workflows. Tools like OpenCode achieved this by spoofing the client identity and sending headers that made requests appear as if they were coming from Anthropic’s own command-line interface.

This effectively allowed developers to link flat-rate consumer subscriptions, such as Claude Pro or Max, with external automation tools—bypassing the intended limits of plans designed for human, chat-based use.

According to Shihipar, technical instability was a major motivator for the block. Unauthorized harnesses can introduce bugs and usage patterns that Anthropic cannot easily trace or debug. When failures occur in third-party wrappers like OpenCode or certain Cursor configurations, users often blame the model itself, which can erode trust in the platform.

The Cost Question and the “Buffet” Analogy

Developers, however, have largely framed the issue as an economic one. In extended discussions on Hacker News, users compared Claude’s consumer subscriptions to an all-you-can-eat buffet: Anthropic offers a flat monthly price—up to $200 for Max—but controls consumption speed through its official Claude Code tool.

Third-party harnesses remove those speed limits. Autonomous agents running inside tools like OpenCode can execute intensive loops—writing code, running tests, fixing errors—continuously and unattended, often overnight. At that scale, the same usage would be prohibitively expensive under per-token API pricing.
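
To see why such loops consume so many tokens, consider the basic shape of an unattended fix-and-retest cycle. The sketch below is schematic and uses hypothetical placeholder functions (generate_patch, run_tests) rather than OpenCode's actual internals; the point is only that every iteration re-sends a growing context to the model.

```python
# Schematic "self-healing" agent loop. Every function here is a hypothetical
# placeholder; a real harness would call an LLM API in generate_patch() and
# run a real test suite in run_tests().

def generate_patch(context: str) -> str:
    # Placeholder for a large model call that proposes a code change.
    return f"# patch proposed from {len(context)} characters of context"

_attempts = {"count": 0}

def run_tests() -> tuple[bool, str]:
    # Placeholder test run: "fails" three times, then "passes".
    _attempts["count"] += 1
    return _attempts["count"] > 3, "AssertionError in test_example"

def autonomous_fix_loop(goal: str, max_iters: int = 500) -> None:
    context = goal
    for i in range(max_iters):
        patch = generate_patch(context)   # one large model call per iteration
        passed, log = run_tests()
        if passed:
            print(f"tests green after {i + 1} iterations")
            return
        # Failure output is folded back into the context, so later calls
        # are larger (and costlier) than earlier ones.
        context = f"{goal}\n\nLast patch:\n{patch}\n\nTest failures:\n{log}"

if __name__ == "__main__":
    autonomous_fix_loop("Make the billing module pass its test suite")
```

Run unattended overnight for hundreds of iterations, each one carrying the accumulated context, this pattern is roughly how a flat-rate subscription ends up doing work that would cost four figures at per-token API rates.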

"In a month of Claude Code, it's easy to use so many LLM tokens that it would have cost you more than $1,000 if you'd paid via the API," wrote Hacker News user dfabulich.

By cutting off spoofed harnesses, Anthropic is effectively pushing heavy automation into two approved channels: its metered Commercial API, or Claude Code itself, where execution speed and environment constraints are fully controlled.

Community Reaction and Workarounds

The response from developers has been swift and mixed. Some criticized the move as hostile to users. "Seems very customer hostile," wrote Danish programmer David Heinemeier Hansson (DHH), creator of Ruby on Rails, in a post on X.

Others were more understanding. "anthropic crackdown on people abusing the subscription auth is the gentlest it could’ve been," wrote Artem K aka @banteg on X. "just a polite message instead of nuking your account or retroactively charging you at api prices."

The OpenCode team moved quickly, launching a new $200-per-month tier called OpenCode Black that reportedly routes usage through an enterprise API gateway rather than consumer OAuth. OpenCode creator Dax Raad also announced plans to work with Anthropic rival OpenAI so users could access Codex directly within OpenCode, punctuating the announcement with a Gladiator GIF captioned "Are you not entertained?"

The xAI and Cursor Enforcement

Running parallel to the technical crackdown, developers at Elon Musk’s AI lab xAI reportedly lost access to Claude models around the same time. While the timing suggested coordination, sources indicate this was a separate action rooted in Anthropic’s commercial terms.

As reported by tech journalist Kylie Robison of Core Memory, xAI staff had been using Claude models through the Cursor IDE to accelerate internal development. "Hi team, I believe many of you have already discovered that Anthropic models are not responding on Cursor," wrote xAI co-founder Tony Wu in an internal memo. "According to Cursor this is a new policy Anthropic is enforcing for all its major competitors."

Anthropic’s Commercial Terms of Service explicitly prohibit using its services to build or train competing AI systems. In this case, Cursor itself was not the issue; rather, xAI’s use of Claude through the IDE for competitive research triggered the block.

This is not the first time Anthropic has cut off access to protect its models. In August 2025, the company revoked OpenAI’s access to the Claude API under similar circumstances. At the time, an Anthropic spokesperson said, "Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools."

Earlier, in June 2025, the coding environment Windsurf was abruptly informed that Anthropic was cutting off most first-party capacity for Claude 3.x models. Windsurf was forced to pivot to a bring-your-own-key model and promote alternatives like Google’s Gemini.

Together with the xAI and OpenCode actions, these incidents underscore a consistent message: Anthropic will sever access when usage threatens its business model or competitive position.

Claude Code’s Rapid Rise

The timing of the crackdowns closely follows a dramatic surge in Claude Code’s popularity. Although released in early 2025, it remained niche until December 2025 and early January 2026, when community-driven experimentation—popularized by the so-called “Ralph Wiggum” plugin—demonstrated powerful self-healing coding loops.

The real prize, however, was not the Claude Code interface itself but the underlying Claude Opus 4.5 model. By spoofing the official client, third-party tools allowed developers to run large-scale autonomous workflows on Anthropic’s most capable reasoning model at a flat subscription price—effectively arbitraging consumer pricing against enterprise-grade usage.

As developer Ed Andersen noted on X, some of Claude Code’s popularity may have been driven by this very behavior.

For enterprise AI teams, the message is clear: pipelines built on unofficial wrappers or personal subscriptions carry significant risk. While flat-rate tools like OpenCode reduced costs, Anthropic’s enforcement highlights the instability and compliance issues they introduce.

Organizations now face a trade-off between predictable flat subscription fees and variable per-token API costs, with the official API route offering guaranteed support and stability in return. From a security standpoint, the episode also exposes the dangers of “Shadow AI,” where engineers quietly bypass enterprise controls using spoofed credentials.

As Anthropic consolidates control over access to Claude’s models, the reliability of official APIs and sanctioned tools is becoming more important than short-term cost savings. In this new phase of the AI arms race, unrestricted access to top-tier reasoning models is no longer a given—it’s a privilege tightly guarded by their creators.

India Cracks Down on Grok's AI Image Misuse

India’s Ministry of Electronics and Information Technology (MeitY) has found that the latest restrictions X placed on Grok’s image generation tool are not adequate to prevent obscene content. The Elon Musk-owned platform had limited the controversial feature, known as Grok Imagine, to paid subscribers worldwide, a step meant to stop free users from creating abusive images. Officials, however, argue that allowing such image generation at all violates Indian laws on privacy and dignity, especially where women and children are concerned.

Grok Imagine, available on X and as a standalone app, has been used to produce a growing volume of pornographic and abusive images, including non-consensual nude depictions of real people, among them children. The setting responsible, known as Spicy Mode, allowed users to generate images of people being undressed or placed into bikinis, and it sparked anger across India, the United Kingdom, Türkiye, Malaysia, Brazil, and the European Union, as well as among members of Parliament in India.

X's partial fixes fall short 

On 2 January 2026, MeitY ordered X to remove all vulgar images generated on the platform within 72 hours and to report on the actions taken to comply. X responded that it had applied stricter image filters, but officials argue the company failed to provide adequate technical detail on how such images are being prevented from being generated. They also note that the Grok website still lets users create images for free.

X now restricts image generation and editing via @Grok replies to premium users, but loopholes persist: the Grok app and website remain open to all, and X's image edit button is accessible platform-wide. Grok stated illegal prompts face the same penalties as uploads, yet regulators demand proactive safeguards. MeitY seeks comprehensive measures to block obscene outputs entirely. 

This clash highlights rising global scrutiny of AI tools lacking robust guardrails against deepfakes and harm. India’s IT Rules 2021 mandate swift content removal, with non-compliance risking liability for platforms and executives. As X refines Grok, the case underscores the need for ethical AI design amid tech’s rapid evolution, balancing innovation with societal protection.

Raspberry Pi Project Turns Wi-Fi Signals Into Visual Light Displays

Wireless communication surrounds people at all times, even though it cannot be seen. Signals from Wi-Fi routers, Bluetooth devices, and mobile networks constantly travel through homes and cities unless blocked by heavy shielding. A France-based digital artist has developed a way to visually represent this invisible activity using light and low-cost computing hardware.

The creator, Théo Champion, who is also known online as Rootkid, designed an installation called Spectrum Slit. The project captures radio activity from commonly used wireless frequency ranges and converts that data into a visual display. The system focuses specifically on the 2.4 GHz and 5 GHz bands, which are widely used for Wi-Fi connections and short-range wireless communication.

The artwork consists of 64 vertical LED filaments arranged in a straight line. Each filament represents a specific portion of the wireless spectrum. As radio signals are detected, their strength and density determine how brightly each filament lights up. Low signal activity results in faint and scattered illumination, while higher levels of wireless usage produce intense and concentrated light patterns.

According to Champion, quiet network conditions create a subtle glow that reflects the constant but minimal background noise present in urban environments. As wireless traffic increases, the LEDs become brighter and more saturated, forming dense visual bands that indicate heavy digital activity.

A video shared on YouTube shows the construction process and the final output of the installation inside Champion’s Paris apartment. The footage demonstrates a noticeable increase in brightness during evening hours, when nearby residents return home and connect phones, laptops, and other devices to their networks.

Champion explained in an interview that his work is driven by a desire to draw attention to technologies people often ignore, despite their significant influence on daily life. By transforming technical systems into physical experiences, he aims to encourage viewers to reflect on the infrastructure shaping modern society and to appreciate the engineering behind it.

The installation required both time and financial investment. Champion built the system using a HackRF One software-defined radio connected to a Raspberry Pi. The radio device captures surrounding wireless signals, while the Raspberry Pi processes the data and controls the lighting behavior. The software was written in Python, but other components, including the metal enclosure and custom circuit boards, had to be professionally manufactured.
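
Champion's code is not described in detail, but the processing step he outlines (slice the captured band into 64 segments and map each segment's power to a brightness level) is straightforward to approximate. The sketch below is a hypothetical reconstruction using NumPy on a block of IQ samples; it omits the HackRF capture and LED-driver code, and the noise-floor and saturation values are assumptions.

```python
# Hypothetical sketch of the Spectrum Slit processing step: turn a block of
# IQ samples from an SDR into 64 brightness values (0-255), one per LED
# filament. SDR capture and LED driving are out of scope here.
import numpy as np

NUM_LEDS = 64

def spectrum_to_brightness(iq_samples: np.ndarray,
                           noise_floor_db: float = -80.0,
                           max_db: float = -20.0) -> np.ndarray:
    """Map an FFT power spectrum onto NUM_LEDS 8-bit brightness levels."""
    # Power spectrum of the captured block, in dB.
    spectrum = np.fft.fftshift(np.fft.fft(iq_samples))
    power_db = 10 * np.log10(np.abs(spectrum) ** 2 + 1e-12)

    # Split the spectrum into 64 equal-width bands and take each band's peak.
    bands = np.array_split(power_db, NUM_LEDS)
    band_peaks = np.array([b.max() for b in bands])

    # Normalise between an assumed noise floor and a saturation level,
    # then scale to 8-bit brightness for the LED driver.
    norm = np.clip((band_peaks - noise_floor_db) / (max_db - noise_floor_db), 0, 1)
    return (norm * 255).astype(np.uint8)

if __name__ == "__main__":
    # Synthetic noise standing in for an SDR capture.
    fake_iq = np.random.normal(size=4096) + 1j * np.random.normal(size=4096)
    print(spectrum_to_brightness(fake_iq)[:8])
```

In a real build, a routine like this would run continuously against fresh sample blocks, likely with some smoothing between frames so the filaments fade rather than flicker.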

He estimates that development involved several weeks of experimentation, followed by a dedicated build phase. The total cost of materials and fabrication was approximately $1,000.

Champion has indicated that Spectrum Slit may be publicly exhibited in the future. He is also known for creating other technology-focused artworks, including interactive installations that explore data privacy, artificial intelligence, and digital systems. He has stated that producing additional units of Spectrum Slit could be possible if requested.