Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Technology. Show all posts

Smart Homes Under Threat: How to Reduce the Risk of IoT Device Hacking

 

Most households today use some form of internet of things (IoT) technology, whether it’s a smartphone, tablet, smart plugs, or a network of cameras and sensors. Learning that nearly 120,000 home security cameras were compromised in South Korea and misused for sexploitation footage is enough to make anyone reconsider adding connected devices to their living space. After all, the home is meant to be a private and secure environment.

Although all smart homes carry some level of risk, widespread hacking incidents are still relatively uncommon. Cybercriminals targeting smart homes tend to be opportunistic rather than strategic. Instead of focusing on a particular household and attempting to break into a specific system, they scan broadly for devices with weak or misconfigured security settings that can be exploited easily.

The most effective way to safeguard smart home devices is to avoid being an easy target. Unfortunately, many of the hacking cases reported in the media stem from basic security oversights that could have been prevented with simple precautions.

How to Protect Your Smart Home From Hackers

Using weak passwords, neglecting firmware updates, or leaving Wi-Fi networks exposed can increase the risk of unauthorized access—even if the overall threat level remains low. Below are key steps homeowners can take to strengthen smart home security.

1. Use strong and unique passwords
Hackers gaining access to baby monitors and speaking through two-way audio is often the result of unchanged default passwords. Weak or reused passwords are easy to guess, especially if they have appeared in previous data breaches. Each smart device and account should have a strong, unique password to make attacks more difficult and less appealing.

2. Enable two-factor or multi-factor authentication
Multi-factor authentication adds an extra layer of protection by requiring a second form of verification beyond a password. Even if login credentials are compromised, attackers would still need additional approval. Many major smart home platforms, including Amazon, Google, and Philips Hue, support this feature. While it may add a small inconvenience during login, the added security is well worth the effort.

3. Secure your Wi-Fi network
Wi-Fi security is often overlooked but plays a critical role in smart home protection. Using WPA2 or WPA3 encryption and changing the router’s default password are essential steps. Limiting who has access to your Wi-Fi network also helps. Creating separate networks—one for personal devices and another exclusively for IoT devices—can further reduce risk by isolating smart home hardware from sensitive data.

4. Keep device firmware updated
Manufacturers regularly release firmware updates to patch newly discovered vulnerabilities. Enabling automatic updates ensures devices receive these fixes promptly. Keeping firmware current is one of the simplest and most effective ways to close security gaps.

5. Disable unnecessary features
Features that aren’t actively used can create additional entry points for attackers. If remote access isn’t needed, disabling it can significantly reduce exposure—particularly for devices with cameras. It’s also advisable to turn off Universal Plug and Play (UPnP) on routers and decline unnecessary integrations or permissions that don’t serve a clear purpose.

6. Research brands before buying
Brand recognition alone doesn’t guarantee strong security. Even well-known companies such as Wyze, Eufy, and Google have faced security issues in the past. Before purchasing a smart device, it’s important to research the brand’s security practices, data protection policies, and real-world user experiences. If features like local-only storage are important, they should be verified through reviews, forums, and independent evaluations.

Smart homes offer convenience and efficiency, but they also demand responsibility. By following basic cybersecurity practices and making informed purchasing decisions, homeowners can significantly reduce risks and enjoy the benefits of connected living with greater peace of mind.

Experts Find Malicious Browser Extensions, Chrome, Safari, and Edge Affected


Threat actors exploit extensions

Cybersecurity experts found 17 extensions for Chrome, Edge, and Firefox browsers which track user's internet activity and install backdoors for access. The extensions were downloaded over 840,000 times. 

The campaign is not new. LayerX claimed that the campaign is part of GhostPoster, another campaign first found by Koi Security last year in December. Last year, researchers discovered 17 different extensions that were downloaded over 50,000 times and showed the same monitoring behaviour and deploying backdoors. 

Few extensions from the new batch were uploaded in 2020, exposing users to malware for years. The extensions appeared in places like the Edge store and later expanded to Firefox and Chrome. 

Few extensions stored malicious JavaScript code in the PNG logo. The code is a kind of instruction on downloading the main payload from a remote server. 

The main payload does multiple things. It can hijack affiliate links on famous e-commerce websites to steal money from content creators and influencers. “The malware watches for visits to major e-commerce platforms. When you click an affiliate link on Taobao or JD.com, the extension intercepts it. The original affiliate, whoever was supposed to earn a commission from your purchase, gets nothing. The malware operators get paid instead,” said Koi researchers. 

After that, it deploys Google Analytics tracking into every page that people open, and removes security headers from HTTP responses. 

In the end, it escapes CAPTCHA via three different ways, and deploy invisible iframes that do ad frauds, click frauds, and tracking. These iframes disappear after 15 seconds.

Besides this, all extensions were deleted from the repositories, but users shoul also remove them personally. 

This staged execution flow demonstrates a clear evolution toward longer dormancy, modularity, and resilience against both static and behavioral detection mechanisms,” said LayerX. 

The PNG steganography technique is employed by some. Some people download JavaScript directly and include it into each page you visit. Others employ bespoke ciphers to encode the C&C domains and use concealed eval() calls. The same assailant. identical servers. many methods of delivery. This appears to be testing several strategies to see which one gets the most installs, avoids detection the longest, and makes the most money.

This campaign reflects a deliberate shift toward patience and precision. By embedding malicious code in images, delaying execution, and rotating delivery techniques across identical infrastructure, the attackers test which methods evade detection longest. The strategy favors longevity and profit over speed, exposing how browser ecosystems remain vulnerable to quietly persistent threats.

Microsoft Unveils Backdoor Scanner for Open-Weight AI Models

 

Microsoft has introduced a new lightweight scanner designed to detect hidden backdoors in open‑weight large language models (LLMs), aiming to boost trust in artificial intelligence systems. The tool, built by the company’s AI Security team, focuses on subtle behavioral patterns inside models to reliably flag tampering without generating many false outcomes. By targeting how specific trigger inputs change a model’s internal operations, Microsoft hopes to offer security teams a practical way to vet AI models before deployment.

The scanner is meant to address a growing problem in AI security: model poisoning and backdoored models that act as “sleeper agents.” In such attacks, threat actors manipulate model weights or training data so the model behaves normally in most scenarios, but switches to malicious or unexpected behavior when it encounters a carefully crafted trigger phrase or pattern. Because these triggers are narrowly defined, the backdoor often evades normal testing and quality checks, making detection difficult. Microsoft notes that both the model’s parameters and its surrounding code can be tampered with, but this tool focuses primarily on backdoors embedded directly into the model’s weights.

To detect these covert modifications, Microsoft’s scanner looks for three practical signals that indicate a poisoned model. First, when given a trigger prompt, compromised models tend to show a distinctive “double triangle” attention pattern, focusing heavily on the trigger itself and sharply reducing the randomness of their output. Second, backdoored LLMs often leak fragments of their own poisoning data, including trigger phrases, through memorization rather than generalization. Third, a single hidden backdoor may respond not just to one exact phrase, but to multiple “fuzzy” variations of that trigger, which the scanner can surface during analysis.

The detection workflow starts by extracting memorized content from the model, then analyzing that content to isolate suspicious substrings that could represent hidden triggers. Microsoft formalizes the three identified signals as loss functions, scores each candidate substring, and returns a ranked list of likely trigger phrases that might activate a backdoor. A key advantage is that the scanner does not require retraining the model or prior knowledge of the specific backdoor behavior, and it can operate across common GPT‑style architectures at scale. This makes it suitable for organizations evaluating open‑weight models obtained from third parties or public repositories.

However, the company stresses that the scanner is not a complete solution to all backdoor risks. It requires direct access to model files, so it cannot be used on proprietary, fully hosted models. It is also optimized for trigger‑based backdoors that produce deterministic outputs, meaning more subtle or probabilistic attacks may still evade detection. Microsoft positions the tool as an important step toward deployable backdoor detection and calls for broader collaboration across the AI security community to refine defenses. In parallel, the firm is expanding its Secure Development Lifecycle to address AI‑specific threats like prompt injection and data poisoning, acknowledging that modern AI systems introduce many new entry points for malicious inputs.

Researchers Disclose Patched Flaw in Docker AI Assistant that Enabled Code Execution


Researchers have disclosed details of a previously fixed security flaw in Ask Gordon, an artificial intelligence assistant integrated into Docker Desktop and the Docker command-line interface, that could have been exploited to execute code and steal sensitive data. The vulnerability, dubbed DockerDash by cybersecurity firm Noma Labs, was patched by Docker in November 2025 with the release of version 4.50.0. 

“In DockerDash, a single malicious metadata label in a Docker image can be used to compromise your Docker environment through a simple three-stage attack,” said Sasi Levi, security research lead at Noma Labs, in a report shared with The Hacker News. “Every stage happens with zero validation, taking advantage of current agents and MCP Gateway architecture.” 

According to the researchers, the flaw allowed Ask Gordon to treat unverified container metadata as executable instructions. When combined with Docker’s Model Context Protocol gateway, this behavior could lead to remote code execution on cloud and command-line systems, or data exfiltration on desktop installations. 

The issue stems from what Noma described as a breakdown in contextual trust. Ask Gordon reads metadata from Docker images, including LABEL fields, without distinguishing between descriptive information and embedded instructions. These instructions can then be forwarded to the MCP Gateway, which executes them using trusted tools without additional checks. “MCP Gateway cannot distinguish between informational metadata and a pre-authorized, runnable internal instruction,” Levi said. 

“By embedding malicious instructions in these metadata fields, an attacker can hijack the AI’s reasoning process.” In a hypothetical attack, a malicious actor could publish a Docker image containing weaponized metadata labels. When a user queries Ask Gordon about the image, the assistant parses the labels, forwards them to the MCP Gateway, and triggers tool execution with the user’s Docker privileges.  
Researchers said the same weakness could be used for data exfiltration on Docker Desktop, allowing attackers to gather details about installed tools, container configurations, mounted directories, and network setups, despite the assistant’s read-only permissions. Docker version 4.50.0 also addressed a separate prompt injection flaw previously identified by Pillar Security, which could have enabled attackers to manipulate Docker Hub metadata to extract sensitive information. 

“The DockerDash vulnerability underscores the need to treat AI supply chain risk as a current core threat,” Levi said. “Trusted input sources can be used to hide malicious payloads that manipulate an AI’s execution path.”

Microsoft Outlines Three-Stage Plan to Disable NTLM and Strengthen Windows Security

 

Microsoft has detailed a structured, three-phase roadmap to gradually retire New Technology LAN Manager (NTLM), reinforcing its broader push toward more secure, Kerberos-based authentication within Windows environments.

The announcement follows Microsoft’s earlier decision to deprecate NTLM, a legacy authentication mechanism that has long been criticized for its security shortcomings. Officially deprecated in June 2024, NTLM no longer receives updates, as its design leaves systems vulnerable to relay attacks and unauthorized access.

"NTLM consists of security protocols originally designed to provide authentication, integrity, and confidentiality to users," Mariam Gewida, Technical Program Manager II at Microsoft, explained. "However, as security threats have evolved, so have our standards to meet modern security expectations. Today, NTLM is susceptible to various attacks, including replay and man-in-the-middle attacks, due to its use of weak cryptography."

Despite its deprecated status, Microsoft acknowledged that NTLM remains widely used across enterprise networks. This is largely due to legacy applications, infrastructure constraints, and deeply embedded authentication logic that make migration difficult. Continued reliance on NTLM increases exposure to threats such as replay, relay, and pass-the-hash attacks.

To address these risks without disrupting critical systems, Microsoft has introduced a phased strategy aimed at eventually disabling NTLM by default.

Phase 1 focuses on improving visibility and administrative control by expanding NTLM auditing capabilities. This helps organizations identify where NTLM is still in use and why. This phase is already available.

Phase 2 aims to reduce migration barriers by introducing tools such as IAKerb and a local Key Distribution Center (KDC), while also updating core Windows components to favor Kerberos authentication. These changes are expected to roll out in the second half of 2026.

Phase 3 will see NTLM disabled by default in the next release of Windows Server and corresponding Windows client versions. Organizations will need to explicitly re-enable NTLM using new policy controls if required.

Microsoft described the move as a key milestone toward a passwordless and phishing-resistant ecosystem. The company urged organizations that still depend on NTLM to audit usage, identify dependencies, transition to Kerberos, test NTLM-disabled configurations in non-production environments, and enable Kerberos enhancements.

"Disabling NTLM by default does not mean completely removing NTLM from Windows yet," Gewida said. "Instead, it means that Windows will be delivered in a secure-by-default state where network NTLM authentication is blocked and no longer used automatically."

"The OS will prefer modern, more secure Kerberos-based alternatives. At the same time, common legacy scenarios will be addressed through new upcoming capabilities such as Local KDC and IAKerb (pre-release)."


Open-Source AI Models Pose Growing Security Risks, Researchers Warn

Hackers and other criminals can easily hijack computers running open-source large language models and use them for illicit activity, bypassing the safeguards built into major artificial intelligence platforms, researchers said on Thursday. The findings are based on a 293-day study conducted jointly by SentinelOne and Censys, and shared exclusively with Reuters. 

The research examined thousands of publicly accessible deployments of open-source LLMs and highlighted a broad range of potentially abusive use cases. According to the researchers, compromised systems could be directed to generate spam, phishing content, or disinformation while evading the security controls enforced by large AI providers. 

The deployments were also linked to activity involving hacking, hate speech, harassment, violent or graphic content, personal data theft, scams, fraud, and in some cases, child sexual abuse material. While thousands of open-source LLM variants are available, a significant share of internet-accessible deployments were based on Meta’s Llama models, Google DeepMind’s Gemma, and other widely used systems, the researchers said. 

They identified hundreds of instances in which safety guardrails had been deliberately removed. “AI industry conversations about security controls are ignoring this kind of surplus capacity that is clearly being utilized for all kinds of different stuff, some of it legitimate, some obviously criminal,” said Juan Andres Guerrero-Saade, executive director for intelligence and security research at SentinelOne. He compared the problem to an iceberg that remains largely unaccounted for across the industry and the open-source community. 

The study focused on models deployed using Ollama, a tool that allows users to run their own versions of large language models. Researchers were able to observe system prompts in about a quarter of the deployments analyzed and found that 7.5 percent of those prompts could potentially enable harmful behavior. 

Geographically, around 30 per cent of the observed hosts were located in China, with about 20 per cent based in the United States, the researchers said. Rachel Adams, chief executive of the Global Centre on AI Governance, said responsibility for downstream misuse becomes shared once open models are released.  “Labs are not responsible for every downstream misuse, but they retain an important duty of care to anticipate foreseeable harms, document risks, and provide mitigation tooling and guidance,” Adams said.  

A Meta spokesperson declined to comment on developer responsibility for downstream abuse but pointed to the company’s Llama Protection tools and Responsible Use Guide. Microsoft AI Red Team Lead Ram Shankar Siva Kumar said Microsoft believes open-source models play an important role but acknowledged the risks. 

“We are clear-eyed that open models, like all transformative technologies, can be misused by adversaries if released without appropriate safeguards,” he said. 

Microsoft conducts pre-release evaluations and monitors for emerging misuse patterns, Kumar added, noting that “responsible open innovation requires shared commitment across creators, deployers, researchers, and security teams.” 

Ollama, Google and Anthropic did not comment. 

Apple's New Feature Will Help Users Restrict Location Data


Apple has introduced a new privacy feature that allows users to restrict the accuracy of location data shared with cellular networks on a few iPad models and iPhone. 

About the feature

The “Limit Precise Location” feature will start after updating to iOS26.3 or later. It restricts the information that mobile carriers use to decide locations through cell tower connections. Once turned on, cellular networks can only detect the device’s location, like neighbourhood instead of accurate street address. 

According to Apple, “The precise location setting doesn't impact the precision of the location data that is shared with emergency responders during an emergency call.” “This setting affects only the location data available to cellular networks. It doesn't impact the location data that you share with apps through Location Services. For example, it has no impact on sharing your location with friends and family with Find My.”

Users can turn on the feature by opening “Settings,” selecting “Cellular,” “Cellular Data Options,” and clicking the “Limit Precise Location” setting. After turning on limited precise location, the device may trigger a device restart to complete activation. 

The privacy enhancement feature works only on iPhone Air, iPad Pro (M5) Wi-Fi + Cellular variants running on iOS 26.3 or later. 

Where will it work?

The availability of this feature will depend on carrier support. The mobile networks compatible are:

EE and BT in the UK

Boost Mobile in the UK

Telecom in Germany 

AIS and True in Thailand 

Apple hasn't shared the reason for introducing this feature yet.

Compatibility of networks with the new feature 

Apple's new privacy feature, which is currently only supported by a small number of networks, is a significant step towards ensuring that carriers can only collect limited data on their customers' movements and habits because cellular networks can easily track device locations via tower connections for network operations.

“Cellular networks can determine your location based on which cell towers your device connects to. The limit precise location setting enhances your location privacy by reducing the precision of location data available to cellular networks,”

Exposed Admin Dashboard in AI Toy Put Children’s Data and Conversations at Risk

 

A routine investigation by a security researcher into an AI-powered toy revealed a serious security lapse that could have exposed sensitive information belonging to children and their families.

The issue came to light when security researcher Joseph Thacker examined an AI toy owned by a neighbor. In a blog post, Thacker described how he and fellow researcher Joel Margolis uncovered an unsecured admin interface linked to the Bondu AI toy.

Margolis identified a suspicious domain—console.bondu.com—referenced in the Content Security Policy headers of the toy’s mobile app backend. On visiting the domain, he found a simple option labeled “Login with Google.”

“By itself, there’s nothing weird about that as it was probably just a parent portal,” Thacker wrote. Instead, logging in granted access to Bondu’s core administrative dashboard.

“We had just logged into their admin dashboard despite [not] having any special accounts or affiliations with Bondu themselves,” Thacker said.

AI Toy Admin Panel Exposed Children’s Conversations

Further analysis of the dashboard showed that the researchers had unrestricted visibility into “Every conversation transcript that any child has had with the toy,” spanning “tens of thousands of sessions.” The exposed panel also included extensive personal details about children and their households, such as:
  • Child’s name and date of birth
  • Names of family members
  • Preferences, likes, and dislikes
  • Parent-defined developmental objectives
  • The custom name assigned to the toy
  • Historical conversations used to provide context to the language model
  • Device-level data including IP-based location, battery status, and activity state
  • Controls to reboot devices and push firmware updates
The researchers also observed that the system relies on OpenAI GPT-5 and Google Gemini. “Somehow, someway, the toy gets fed a prompt from the backend that contains the child profile information and previous conversations as context,” Thacker wrote. “As far as we can tell, the data that is being collected is actually disclosed within their privacy policy, but I doubt most people realize this unless they go and read it (which most people don’t do nowadays).”

Beyond the authentication flaw, the team identified an Insecure Direct Object Reference (IDOR) vulnerability in the API. This weakness “allowed us to retrieve any child’s profile data by simply guessing their ID.”

“This was all available to anyone with a Google account,” Thacker said. “Naturally we didn’t access nor store any data beyond what was required to validate the vulnerability in order to responsibly disclose it.”

Bondu Responds Within Minutes

Margolis contacted Bondu’s CEO via LinkedIn over the weekend, prompting the company to disable access to the exposed console “within 10 minutes.”

“Overall we were happy to see how the Bondu team reacted to this report; they took the issue seriously, addressed our findings promptly, and had a good collaborative response with us as security researchers,” Thacker said.

Bondu also initiated a broader security review, searched for additional vulnerabilities, and launched a bug bounty program. After reviewing console access logs, the company stated that no unauthorized parties had accessed the system aside from the researchers, preventing what could have become a data breach.

Despite the swift and responsible response, the incident changed Thacker’s perspective on AI-driven toys.

“To be honest, Bondu was totally something I would have been prone to buy for my kids before this finding,” he wrote. “However this vulnerability shifted my stance on smart toys, and even smart devices in general.”

“AI models are effectively a curated, bottled-up access to all the information on the internet,” he added. “And the internet can be a scary place. I’m not sure handing that type of access to our kids is a good idea.”

He further noted that, beyond data security concerns, AI introduces new risks at home. “AI makes this problem even more interesting because the designer (or just the AI model itself) can have actual ‘control’ of something in your house. And I think that is even more terrifying than anything else that has existed yet,” he said.

Bondu’s website maintains that the toy was designed with safety as a priority, stating that its “safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from bondu throughout the entire beta period.”