Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Technology. Show all posts

Threat Actors Exploit Fortinet Devices and Steal Firewall Configurations


Fortinet products targeted

Threat actors are targeting Fortinet FortiGate devices via automated attacks that make rogue accounts and steal firewall settings info. 

The campaign began earlier this year when threat actors exploited an unknown bug in the devices’ single-sign-on (SSO) option to make accounts with VPN access and steal firewall configurations. This means automation was involved. 

About the attack

Cybersecurity company Arctic Wolf discovered this attack and said they are quite similar to the attacks it found in December after the reveal of a critical login bypass flaw (CVE-2025-59718) in Fortinet products. 

The advisory comes after a series of reports from Fortinet users about threat actors abusing a patch bypass for the bug CVE-2025-59718 to take over patched walls. 

Impacted admins complaint that Fortinet said that the latest FortiOS variant 7.4.10 doesn't totally fix the authentication bypass bug, which should have been fixed in December 2025.

Patches and fixing 

Fortinet also plans on releasing more FortiOS variants soon to fully patch the CVE-2025-59718 security bug. 

Following an SSO login from cloud-init@mail.io on IP address 104.28.244.114, the attackers created admin users, according to logs shared by impacted Fortinet customers. This matches indications of compromise found by Arctic Wolf during its analysis of ongoing FortiGate attacks and prior exploitation the cybersecurity firm noticed in December. 

Turn off FortiCloud SSO to prevent intrusions. 

Turning off SSO

Admins can temporarily disable the vulnerable FortiCloud login capability (if enabled) by navigating to System -> Settings and changing "Allow administrative login using FortiCloud SSO" to Off. This will help administrators safeguard their firewalls until Fortinet properly updates FortiOS against these persistent assaults.

You can also run these commands from the interface:

"config system global

set admin-forticloud-sso-login disable

end"

What to do next?

Internet security watchdog Shadowserver is investigating around 11,000 Fortinet devices that are vulnerable to online threats and have FortiCloud SSO turned on. 

Additionally, CISA ordered federal agencies to patch CVE-2025-59718 within a week after adding it to its list of vulnerabilities that were exploited in attacks on December 16.

Smart Homes Under Threat: How to Reduce the Risk of IoT Device Hacking

 

Most households today use some form of internet of things (IoT) technology, whether it’s a smartphone, tablet, smart plugs, or a network of cameras and sensors. Learning that nearly 120,000 home security cameras were compromised in South Korea and misused for sexploitation footage is enough to make anyone reconsider adding connected devices to their living space. After all, the home is meant to be a private and secure environment.

Although all smart homes carry some level of risk, widespread hacking incidents are still relatively uncommon. Cybercriminals targeting smart homes tend to be opportunistic rather than strategic. Instead of focusing on a particular household and attempting to break into a specific system, they scan broadly for devices with weak or misconfigured security settings that can be exploited easily.

The most effective way to safeguard smart home devices is to avoid being an easy target. Unfortunately, many of the hacking cases reported in the media stem from basic security oversights that could have been prevented with simple precautions.

How to Protect Your Smart Home From Hackers

Using weak passwords, neglecting firmware updates, or leaving Wi-Fi networks exposed can increase the risk of unauthorized access—even if the overall threat level remains low. Below are key steps homeowners can take to strengthen smart home security.

1. Use strong and unique passwords
Hackers gaining access to baby monitors and speaking through two-way audio is often the result of unchanged default passwords. Weak or reused passwords are easy to guess, especially if they have appeared in previous data breaches. Each smart device and account should have a strong, unique password to make attacks more difficult and less appealing.

2. Enable two-factor or multi-factor authentication
Multi-factor authentication adds an extra layer of protection by requiring a second form of verification beyond a password. Even if login credentials are compromised, attackers would still need additional approval. Many major smart home platforms, including Amazon, Google, and Philips Hue, support this feature. While it may add a small inconvenience during login, the added security is well worth the effort.

3. Secure your Wi-Fi network
Wi-Fi security is often overlooked but plays a critical role in smart home protection. Using WPA2 or WPA3 encryption and changing the router’s default password are essential steps. Limiting who has access to your Wi-Fi network also helps. Creating separate networks—one for personal devices and another exclusively for IoT devices—can further reduce risk by isolating smart home hardware from sensitive data.

4. Keep device firmware updated
Manufacturers regularly release firmware updates to patch newly discovered vulnerabilities. Enabling automatic updates ensures devices receive these fixes promptly. Keeping firmware current is one of the simplest and most effective ways to close security gaps.

5. Disable unnecessary features
Features that aren’t actively used can create additional entry points for attackers. If remote access isn’t needed, disabling it can significantly reduce exposure—particularly for devices with cameras. It’s also advisable to turn off Universal Plug and Play (UPnP) on routers and decline unnecessary integrations or permissions that don’t serve a clear purpose.

6. Research brands before buying
Brand recognition alone doesn’t guarantee strong security. Even well-known companies such as Wyze, Eufy, and Google have faced security issues in the past. Before purchasing a smart device, it’s important to research the brand’s security practices, data protection policies, and real-world user experiences. If features like local-only storage are important, they should be verified through reviews, forums, and independent evaluations.

Smart homes offer convenience and efficiency, but they also demand responsibility. By following basic cybersecurity practices and making informed purchasing decisions, homeowners can significantly reduce risks and enjoy the benefits of connected living with greater peace of mind.

Experts Find Malicious Browser Extensions, Chrome, Safari, and Edge Affected


Threat actors exploit extensions

Cybersecurity experts found 17 extensions for Chrome, Edge, and Firefox browsers which track user's internet activity and install backdoors for access. The extensions were downloaded over 840,000 times. 

The campaign is not new. LayerX claimed that the campaign is part of GhostPoster, another campaign first found by Koi Security last year in December. Last year, researchers discovered 17 different extensions that were downloaded over 50,000 times and showed the same monitoring behaviour and deploying backdoors. 

Few extensions from the new batch were uploaded in 2020, exposing users to malware for years. The extensions appeared in places like the Edge store and later expanded to Firefox and Chrome. 

Few extensions stored malicious JavaScript code in the PNG logo. The code is a kind of instruction on downloading the main payload from a remote server. 

The main payload does multiple things. It can hijack affiliate links on famous e-commerce websites to steal money from content creators and influencers. “The malware watches for visits to major e-commerce platforms. When you click an affiliate link on Taobao or JD.com, the extension intercepts it. The original affiliate, whoever was supposed to earn a commission from your purchase, gets nothing. The malware operators get paid instead,” said Koi researchers. 

After that, it deploys Google Analytics tracking into every page that people open, and removes security headers from HTTP responses. 

In the end, it escapes CAPTCHA via three different ways, and deploy invisible iframes that do ad frauds, click frauds, and tracking. These iframes disappear after 15 seconds.

Besides this, all extensions were deleted from the repositories, but users shoul also remove them personally. 

This staged execution flow demonstrates a clear evolution toward longer dormancy, modularity, and resilience against both static and behavioral detection mechanisms,” said LayerX. 

The PNG steganography technique is employed by some. Some people download JavaScript directly and include it into each page you visit. Others employ bespoke ciphers to encode the C&C domains and use concealed eval() calls. The same assailant. identical servers. many methods of delivery. This appears to be testing several strategies to see which one gets the most installs, avoids detection the longest, and makes the most money.

This campaign reflects a deliberate shift toward patience and precision. By embedding malicious code in images, delaying execution, and rotating delivery techniques across identical infrastructure, the attackers test which methods evade detection longest. The strategy favors longevity and profit over speed, exposing how browser ecosystems remain vulnerable to quietly persistent threats.

Microsoft Unveils Backdoor Scanner for Open-Weight AI Models

 

Microsoft has introduced a new lightweight scanner designed to detect hidden backdoors in open‑weight large language models (LLMs), aiming to boost trust in artificial intelligence systems. The tool, built by the company’s AI Security team, focuses on subtle behavioral patterns inside models to reliably flag tampering without generating many false outcomes. By targeting how specific trigger inputs change a model’s internal operations, Microsoft hopes to offer security teams a practical way to vet AI models before deployment.

The scanner is meant to address a growing problem in AI security: model poisoning and backdoored models that act as “sleeper agents.” In such attacks, threat actors manipulate model weights or training data so the model behaves normally in most scenarios, but switches to malicious or unexpected behavior when it encounters a carefully crafted trigger phrase or pattern. Because these triggers are narrowly defined, the backdoor often evades normal testing and quality checks, making detection difficult. Microsoft notes that both the model’s parameters and its surrounding code can be tampered with, but this tool focuses primarily on backdoors embedded directly into the model’s weights.

To detect these covert modifications, Microsoft’s scanner looks for three practical signals that indicate a poisoned model. First, when given a trigger prompt, compromised models tend to show a distinctive “double triangle” attention pattern, focusing heavily on the trigger itself and sharply reducing the randomness of their output. Second, backdoored LLMs often leak fragments of their own poisoning data, including trigger phrases, through memorization rather than generalization. Third, a single hidden backdoor may respond not just to one exact phrase, but to multiple “fuzzy” variations of that trigger, which the scanner can surface during analysis.

The detection workflow starts by extracting memorized content from the model, then analyzing that content to isolate suspicious substrings that could represent hidden triggers. Microsoft formalizes the three identified signals as loss functions, scores each candidate substring, and returns a ranked list of likely trigger phrases that might activate a backdoor. A key advantage is that the scanner does not require retraining the model or prior knowledge of the specific backdoor behavior, and it can operate across common GPT‑style architectures at scale. This makes it suitable for organizations evaluating open‑weight models obtained from third parties or public repositories.

However, the company stresses that the scanner is not a complete solution to all backdoor risks. It requires direct access to model files, so it cannot be used on proprietary, fully hosted models. It is also optimized for trigger‑based backdoors that produce deterministic outputs, meaning more subtle or probabilistic attacks may still evade detection. Microsoft positions the tool as an important step toward deployable backdoor detection and calls for broader collaboration across the AI security community to refine defenses. In parallel, the firm is expanding its Secure Development Lifecycle to address AI‑specific threats like prompt injection and data poisoning, acknowledging that modern AI systems introduce many new entry points for malicious inputs.

Researchers Disclose Patched Flaw in Docker AI Assistant that Enabled Code Execution


Researchers have disclosed details of a previously fixed security flaw in Ask Gordon, an artificial intelligence assistant integrated into Docker Desktop and the Docker command-line interface, that could have been exploited to execute code and steal sensitive data. The vulnerability, dubbed DockerDash by cybersecurity firm Noma Labs, was patched by Docker in November 2025 with the release of version 4.50.0. 

“In DockerDash, a single malicious metadata label in a Docker image can be used to compromise your Docker environment through a simple three-stage attack,” said Sasi Levi, security research lead at Noma Labs, in a report shared with The Hacker News. “Every stage happens with zero validation, taking advantage of current agents and MCP Gateway architecture.” 

According to the researchers, the flaw allowed Ask Gordon to treat unverified container metadata as executable instructions. When combined with Docker’s Model Context Protocol gateway, this behavior could lead to remote code execution on cloud and command-line systems, or data exfiltration on desktop installations. 

The issue stems from what Noma described as a breakdown in contextual trust. Ask Gordon reads metadata from Docker images, including LABEL fields, without distinguishing between descriptive information and embedded instructions. These instructions can then be forwarded to the MCP Gateway, which executes them using trusted tools without additional checks. “MCP Gateway cannot distinguish between informational metadata and a pre-authorized, runnable internal instruction,” Levi said. 

“By embedding malicious instructions in these metadata fields, an attacker can hijack the AI’s reasoning process.” In a hypothetical attack, a malicious actor could publish a Docker image containing weaponized metadata labels. When a user queries Ask Gordon about the image, the assistant parses the labels, forwards them to the MCP Gateway, and triggers tool execution with the user’s Docker privileges.  
Researchers said the same weakness could be used for data exfiltration on Docker Desktop, allowing attackers to gather details about installed tools, container configurations, mounted directories, and network setups, despite the assistant’s read-only permissions. Docker version 4.50.0 also addressed a separate prompt injection flaw previously identified by Pillar Security, which could have enabled attackers to manipulate Docker Hub metadata to extract sensitive information. 

“The DockerDash vulnerability underscores the need to treat AI supply chain risk as a current core threat,” Levi said. “Trusted input sources can be used to hide malicious payloads that manipulate an AI’s execution path.”

Microsoft Outlines Three-Stage Plan to Disable NTLM and Strengthen Windows Security

 

Microsoft has detailed a structured, three-phase roadmap to gradually retire New Technology LAN Manager (NTLM), reinforcing its broader push toward more secure, Kerberos-based authentication within Windows environments.

The announcement follows Microsoft’s earlier decision to deprecate NTLM, a legacy authentication mechanism that has long been criticized for its security shortcomings. Officially deprecated in June 2024, NTLM no longer receives updates, as its design leaves systems vulnerable to relay attacks and unauthorized access.

"NTLM consists of security protocols originally designed to provide authentication, integrity, and confidentiality to users," Mariam Gewida, Technical Program Manager II at Microsoft, explained. "However, as security threats have evolved, so have our standards to meet modern security expectations. Today, NTLM is susceptible to various attacks, including replay and man-in-the-middle attacks, due to its use of weak cryptography."

Despite its deprecated status, Microsoft acknowledged that NTLM remains widely used across enterprise networks. This is largely due to legacy applications, infrastructure constraints, and deeply embedded authentication logic that make migration difficult. Continued reliance on NTLM increases exposure to threats such as replay, relay, and pass-the-hash attacks.

To address these risks without disrupting critical systems, Microsoft has introduced a phased strategy aimed at eventually disabling NTLM by default.

Phase 1 focuses on improving visibility and administrative control by expanding NTLM auditing capabilities. This helps organizations identify where NTLM is still in use and why. This phase is already available.

Phase 2 aims to reduce migration barriers by introducing tools such as IAKerb and a local Key Distribution Center (KDC), while also updating core Windows components to favor Kerberos authentication. These changes are expected to roll out in the second half of 2026.

Phase 3 will see NTLM disabled by default in the next release of Windows Server and corresponding Windows client versions. Organizations will need to explicitly re-enable NTLM using new policy controls if required.

Microsoft described the move as a key milestone toward a passwordless and phishing-resistant ecosystem. The company urged organizations that still depend on NTLM to audit usage, identify dependencies, transition to Kerberos, test NTLM-disabled configurations in non-production environments, and enable Kerberos enhancements.

"Disabling NTLM by default does not mean completely removing NTLM from Windows yet," Gewida said. "Instead, it means that Windows will be delivered in a secure-by-default state where network NTLM authentication is blocked and no longer used automatically."

"The OS will prefer modern, more secure Kerberos-based alternatives. At the same time, common legacy scenarios will be addressed through new upcoming capabilities such as Local KDC and IAKerb (pre-release)."


Open-Source AI Models Pose Growing Security Risks, Researchers Warn

Hackers and other criminals can easily hijack computers running open-source large language models and use them for illicit activity, bypassing the safeguards built into major artificial intelligence platforms, researchers said on Thursday. The findings are based on a 293-day study conducted jointly by SentinelOne and Censys, and shared exclusively with Reuters. 

The research examined thousands of publicly accessible deployments of open-source LLMs and highlighted a broad range of potentially abusive use cases. According to the researchers, compromised systems could be directed to generate spam, phishing content, or disinformation while evading the security controls enforced by large AI providers. 

The deployments were also linked to activity involving hacking, hate speech, harassment, violent or graphic content, personal data theft, scams, fraud, and in some cases, child sexual abuse material. While thousands of open-source LLM variants are available, a significant share of internet-accessible deployments were based on Meta’s Llama models, Google DeepMind’s Gemma, and other widely used systems, the researchers said. 

They identified hundreds of instances in which safety guardrails had been deliberately removed. “AI industry conversations about security controls are ignoring this kind of surplus capacity that is clearly being utilized for all kinds of different stuff, some of it legitimate, some obviously criminal,” said Juan Andres Guerrero-Saade, executive director for intelligence and security research at SentinelOne. He compared the problem to an iceberg that remains largely unaccounted for across the industry and the open-source community. 

The study focused on models deployed using Ollama, a tool that allows users to run their own versions of large language models. Researchers were able to observe system prompts in about a quarter of the deployments analyzed and found that 7.5 percent of those prompts could potentially enable harmful behavior. 

Geographically, around 30 per cent of the observed hosts were located in China, with about 20 per cent based in the United States, the researchers said. Rachel Adams, chief executive of the Global Centre on AI Governance, said responsibility for downstream misuse becomes shared once open models are released.  “Labs are not responsible for every downstream misuse, but they retain an important duty of care to anticipate foreseeable harms, document risks, and provide mitigation tooling and guidance,” Adams said.  

A Meta spokesperson declined to comment on developer responsibility for downstream abuse but pointed to the company’s Llama Protection tools and Responsible Use Guide. Microsoft AI Red Team Lead Ram Shankar Siva Kumar said Microsoft believes open-source models play an important role but acknowledged the risks. 

“We are clear-eyed that open models, like all transformative technologies, can be misused by adversaries if released without appropriate safeguards,” he said. 

Microsoft conducts pre-release evaluations and monitors for emerging misuse patterns, Kumar added, noting that “responsible open innovation requires shared commitment across creators, deployers, researchers, and security teams.” 

Ollama, Google and Anthropic did not comment. 

Apple's New Feature Will Help Users Restrict Location Data


Apple has introduced a new privacy feature that allows users to restrict the accuracy of location data shared with cellular networks on a few iPad models and iPhone. 

About the feature

The “Limit Precise Location” feature will start after updating to iOS26.3 or later. It restricts the information that mobile carriers use to decide locations through cell tower connections. Once turned on, cellular networks can only detect the device’s location, like neighbourhood instead of accurate street address. 

According to Apple, “The precise location setting doesn't impact the precision of the location data that is shared with emergency responders during an emergency call.” “This setting affects only the location data available to cellular networks. It doesn't impact the location data that you share with apps through Location Services. For example, it has no impact on sharing your location with friends and family with Find My.”

Users can turn on the feature by opening “Settings,” selecting “Cellular,” “Cellular Data Options,” and clicking the “Limit Precise Location” setting. After turning on limited precise location, the device may trigger a device restart to complete activation. 

The privacy enhancement feature works only on iPhone Air, iPad Pro (M5) Wi-Fi + Cellular variants running on iOS 26.3 or later. 

Where will it work?

The availability of this feature will depend on carrier support. The mobile networks compatible are:

EE and BT in the UK

Boost Mobile in the UK

Telecom in Germany 

AIS and True in Thailand 

Apple hasn't shared the reason for introducing this feature yet.

Compatibility of networks with the new feature 

Apple's new privacy feature, which is currently only supported by a small number of networks, is a significant step towards ensuring that carriers can only collect limited data on their customers' movements and habits because cellular networks can easily track device locations via tower connections for network operations.

“Cellular networks can determine your location based on which cell towers your device connects to. The limit precise location setting enhances your location privacy by reducing the precision of location data available to cellular networks,”