
Here's Why Businesses Need to be Wary of Document-Borne Malware

 

Cybersecurity experts are constantly on the lookout for novel attack tactics as criminal groups adapt to stronger defences against ransomware and phishing. Alongside the latest developments, however, some traditional strategies keep resurfacing, or rather, they never really went away.

Document-borne malware is one such strategy. Once believed to be a relic of early cyber warfare, this tactic remains a significant threat, especially for organisations that handle huge volumes of sensitive data, such as those in critical infrastructure.

The lure for perpetrators is evident. Routine files, including Word documents, PDFs, and Excel spreadsheets, are intrinsically trusted and freely exchanged between enterprises, often via cloud-based systems. With modern security measures focussing on endpoints, networks, and email filtering, seemingly innocuous files can serve as the ideal Trojan horse. 

Why malicious actors use document-borne malware

Attacks using malicious documents may seem like a relic. The tactic is decades old, but that doesn't make it any less damaging for organisations. And while the concept is not novel, threat groups keep modernising it to bypass conventional safeguards, which means this seemingly outdated method remains a threat even in the most security-conscious sectors.

As with other email-based techniques, attackers often prefer to hide in plain sight. The majority of attacks use standard file types like PDFs, Word documents, and Excel spreadsheets to carry malware. Malware is typically concealed in macros, encoded in scripts like JavaScript within PDFs, or hidden behind obfuscated file formats and layers of encryption and archiving. 

These unassuming files are paired with common social engineering lures, such as a supplier invoice or a user submission form. Spoofed addresses and hacked accounts are among the email attack techniques that help mask the malicious content.

Organisations' challenges in defending against these threats 

Security analysts note that document security is frequently disregarded in favour of other domains, such as endpoint protection and the network perimeter. Document-borne attacks are commonplace enough to be dismissed as routine, yet sophisticated enough to evade the majority of standard security measures.

There is an overreliance on signature-based antivirus solutions, which frequently fail to detect new document-borne threats. And while security teams are often alert to harmful macros, vectors such as ActiveX controls, OLE objects, and JavaScript embedded in PDFs may be overlooked.
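As a rough illustration of why signatures alone fall short, the sketch below shows the kind of static heuristic that flags embedded active content in a PDF by its structural markers rather than by a known hash. It is deliberately minimal: the marker list is incomplete, and real scanners also decompress object streams and decode obfuscated names.

```python
import sys

# PDF name objects that commonly indicate embedded active content.
# Illustrative, not exhaustive: real scanners also inspect compressed
# object streams and hex-encoded names, which this raw byte scan misses.
SUSPICIOUS_MARKERS = [b"/JavaScript", b"/JS", b"/OpenAction",
                      b"/AA", b"/Launch", b"/EmbeddedFile"]

def flag_active_content(path: str) -> list[str]:
    """Return the suspicious structural markers found in a PDF's raw bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [marker.decode() for marker in SUSPICIOUS_MARKERS if marker in data]

if __name__ == "__main__":
    hits = flag_active_content(sys.argv[1])
    if hits:
        print("Potentially active content:", ", ".join(hits))
    else:
        print("No obvious active-content markers (which is not proof of safety).")
```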

Attackers have also discovered that there is a considerable mental blind spot when it comes to documents that appear to have been supplied via conventional cloud-based routes. Even when staff have received phishing awareness training, there is a propensity to instinctively believe a document that arrives from an expected source, such as Google or Office 365.

Mitigation tips 

As with other evolving cyberattack techniques, a multi-layered defence is essential against document-borne threats. One critical step is to take a multi-engine approach to malware scanning. While threat actors may be able to deceive a single detection engine, layering several technologies increases the likelihood of detecting concealed malware and minimises false negatives.
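A minimal sketch of that aggregation logic might look like the following. The first command uses ClamAV's real clamscan CLI; "vendor2-scan" is a hypothetical placeholder for whichever additional engine an organisation licenses.

```python
import subprocess

# Commands that scan a file and exit non-zero on detection (or error).
# clamscan is ClamAV's real CLI; "vendor2-scan" is a hypothetical
# stand-in for a second licensed engine.
ENGINES = [
    ["clamscan", "--no-summary"],
    ["vendor2-scan", "--quiet"],
]

def scan_with_all_engines(path: str) -> bool:
    """Return True if any engine flags the file; one hit is enough to quarantine."""
    flagged = False
    for cmd in ENGINES:
        try:
            result = subprocess.run(cmd + [path], capture_output=True, timeout=120)
        except FileNotFoundError:
            continue  # engine not installed on this host; skip it
        if result.returncode != 0:  # for clamscan, exit code 1 means a detection
            print(f"{cmd[0]} flagged {path}")
            flagged = True
    return flagged
```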

Content Disarm and Reconstruction (CDR) tools are also critical. These sanitise files by stripping out malicious macros, scripts, and other active content while keeping the usable document intact. Suspect files can then be detonated in advanced sandboxes, where previously unknown threats can reveal their malicious behaviour inside a controlled environment.
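To make the CDR idea concrete, here is a toy sketch that rebuilds an Office document without its VBA macro payload. The filenames are hypothetical, and a production CDR product does far more: rewriting relationship parts, handling OLE objects, and covering dozens of formats.

```python
import zipfile

# Parts of an Office Open XML archive that carry VBA macros.
MACRO_PARTS = {"word/vbaProject.bin", "word/vbaData.xml"}

def strip_macros(src: str, dst: str) -> None:
    """Copy an Office document, dropping its embedded macro parts.

    Toy illustration only: real CDR also rewrites [Content_Types].xml
    and the relationship files that reference the removed parts.
    """
    with zipfile.ZipFile(src) as zin, \
         zipfile.ZipFile(dst, "w", zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            if item.filename in MACRO_PARTS:
                continue  # drop the macro payload
            zout.writestr(item, zin.read(item.filename))

strip_macros("invoice.docm", "invoice_sanitised.docm")  # hypothetical filenames
```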

The network should also be configured with strict file rules, such as blocking high-risk file categories and requiring user authentication before document uploads. Setting file size limits can also help flag malicious documents that have been bloated by hidden code. Efficiency and dependability matter here too: organisations must be able to detect malicious documents in their regular incoming traffic while maintaining a rapid, consistent workflow for customers.
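Those file rules reduce to a few lines of policy at the upload boundary. The sketch below checks an extension allowlist and a size cap before a document enters the workflow; the specific values are illustrative and should be tuned to an organisation's own traffic.

```python
from pathlib import Path

# Illustrative policy values; tune both to your own traffic profile.
ALLOWED_EXTENSIONS = {".pdf", ".docx", ".xlsx", ".png", ".jpg"}
MAX_SIZE_BYTES = 10 * 1024 * 1024  # 10 MB; padded malware is often oversized

def upload_allowed(path: str) -> tuple[bool, str]:
    """Apply simple file rules before a document enters the workflow."""
    p = Path(path)
    if p.suffix.lower() not in ALLOWED_EXTENSIONS:
        return False, f"blocked file type: {p.suffix or '(none)'}"
    if p.stat().st_size > MAX_SIZE_BYTES:
        return False, "file exceeds size limit (possible padding)"
    return True, "ok"
```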

Security Teams Struggle to Keep Up With Generative AI Threats, Cobalt Warns

 

A growing number of cybersecurity professionals are expressing concern that generative AI is evolving too rapidly for their teams to manage. 

According to new research by penetration testing company Cobalt, over one-third of security leaders and practitioners admit that the pace of genAI development has outstripped their ability to respond. Nearly half of those surveyed (48%) said they wish they could pause and reassess their defense strategies in light of these emerging threats—though they acknowledge that such a break isn’t realistic. 

In fact, 72% of respondents listed generative AI-related attacks as their top IT security risk. Despite this, one in three organizations still isn’t conducting regular security evaluations of their large language model (LLM) deployments, including basic penetration testing. 

Cobalt CTO Gunter Ollmann warned that the security landscape is shifting and that the foundational controls many organizations rely on are quickly becoming outdated. “Our research shows that while generative AI is transforming how businesses operate, it’s also exposing them to risks they’re not prepared for,” said Ollmann. “Security frameworks must evolve or risk falling behind.”

The study revealed a divide between leadership and practitioners. Executives such as CISOs and VPs are more concerned about long-term threats like adversarial AI attacks, with 76% listing them as a top issue. Meanwhile, 45% of practitioners are more focused on immediate operational challenges such as model inaccuracies, compared with 36% of executives.

A majority of leaders—52%—are open to rethinking their cybersecurity strategies to address genAI threats. Among practitioners, only 43% shared this view. The top genAI-related concerns identified by the survey included the risk of sensitive information disclosure (46%), model poisoning or theft (42%), data inaccuracies (40%), and leakage of training data (37%). Around half of respondents also expressed a desire for more transparency from software vendors about how vulnerabilities are identified and patched, highlighting a widening trust gap in the AI supply chain. 

Cobalt’s internal pentest data shows a worrying trend: while 69% of high-risk vulnerabilities are typically fixed across all test types, only 21% of critical flaws found in LLM tests are resolved. This is especially alarming considering that nearly one-third of LLM vulnerabilities are classified as serious. Interestingly, the average time to resolve these LLM-specific vulnerabilities is just 19 days—the fastest across all categories. 

However, researchers noted this may be because organizations prioritize easier, low-effort fixes rather than tackling more complex threats embedded in foundational AI models. Ollmann compared the current scenario to the early days of cloud adoption, where innovation outpaced security readiness. He emphasized that traditional controls aren’t enough in the age of LLMs. “Security teams can’t afford to be reactive anymore,” he concluded. “They must move toward continuous, programmatic AI testing if they want to keep up.”

New Report Ranks Best And Worst Generative AI Tools For Privacy

 

Most generative AI companies use client data to train their chatbots. For this, they may use private or public data. Some services take a more flexible and non-intrusive approach to gathering customer data. Not so much for others. A recent analysis from data removal firm Incogni weighs the benefits and drawbacks of AI in terms of protecting your personal data and privacy.

As part of its "Gen AI and LLM Data Privacy Ranking 2025," Incogni analysed nine well-known generative AI services and evaluated their data privacy practices against 11 distinct criteria, which addressed the following questions:

  • What kind of data do the models get trained on? 
  • Is it possible to train the models using user conversations? 
  • Can non-service providers or other appropriate entities receive prompts? 
  • Can the private data from users be erased from the training dataset?
  • How clear is it when training is done via prompts? 
  • How simple is it to locate details about the training process of models? 
  • Does the data collection process have a clear privacy policy?
  • How easy is it to read the privacy statement? 
  • Which resources are used to gather information about users?
  • Are third parties given access to the data? 
  • What information is gathered by the AI apps?

The research covered Mistral AI's Le Chat, OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, Inflection AI's Pi, DeepSeek, Microsoft Copilot, Google Gemini, and Meta AI. Each service performed well on certain questions and poorly on others.

For instance, Grok performed poorly on the readability of its privacy policy but received a decent rating for how clearly it communicates that prompts are used for training. As another example, the ratings that ChatGPT and Gemini received for gathering data from their mobile apps varied significantly between the iOS and Android versions.

However, Le Chat emerged as the most privacy-friendly AI service overall. It did well in the transparency category despite losing a few points, collects only a small amount of data, and scored highly on the privacy concerns unique to AI.

Second place went to ChatGPT. Researchers at Incogni were somewhat concerned about how user data interacts with the service and how OpenAI trains its models, but ChatGPT explains the company's privacy standards in detail, lets you know what happens to your data, and gives explicit instructions on how to restrict how your data is used. Grok took third place, followed by Claude and Pi. Each performed reasonably well at protecting user privacy overall, though with issues in certain areas.

"Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind," Incogni noted in its report. "These platforms ranked highest when it comes to how transparent they are on how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models. ChatGPT turned out to be the most transparent about whether prompts will be used for model training and had a clear privacy policy.” 

In its investigation, Incogni discovered that AI firms exchange data with a variety of parties, including service providers, law enforcement, members of the same corporate group, research partners, affiliates, and third parties. 

"Microsoft's privacy policy implies that user prompts may be shared with 'third parties that perform online advertising services for Microsoft or that use Microsoft's advertising technologies,'" Incogni added in the report. "DeepSeek's and Meta's privacy policies indicate that prompts can be shared with companies within its corporate group. Meta's and Anthropic's privacy policies can reasonably be understood to indicate that prompts are shared with research collaborators.” 

Some providers let you prevent your prompts from being used to train their models; this is true of Grok, Mistral AI, Copilot, and ChatGPT. Based on their privacy policies and other resources, however, other services do not appear to offer a way to stop this kind of data collection, among them Gemini, DeepSeek, Pi AI, and Meta AI. In response to this concern, Anthropic stated that it never collects user input for model training.

Ultimately, a clear and understandable privacy policy goes a long way toward helping you determine what information is being gathered and how to opt out.

Iranian Hackers Threaten More Trump Email Leaks Amid Rising U.S. Cyber Tensions

 

Iran-linked hackers have renewed threats against the U.S., claiming they plan to release more emails allegedly stolen from former President Donald Trump’s associates. The announcement follows earlier leaks during the 2024 presidential race, when a batch of messages was distributed to the media. 

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) responded by calling the incident “digital propaganda,” warning it was a calculated attempt to discredit public officials and mislead the public. CISA added that those responsible would be held accountable, describing the operation as part of a broader campaign by hostile foreign actors to sow division. 

Speaking virtually with Reuters, a hacker using the alias “Robert” claimed the group accessed roughly 100 GB of emails from individuals including Trump adviser Roger Stone, legal counsel Lindsey Halligan, White House chief of staff Susie Wiles, and Trump critic Stormy Daniels. Though the hackers hinted at selling the material, they provided no specifics or content. 

The initial leaks reportedly involved internal discussions, legal matters, and possible financial dealings involving RFK Jr.’s legal team. Some information was verified, but had little influence on the election, which Trump ultimately won. U.S. authorities later linked the operation to Iran’s Revolutionary Guard, though the hackers declined to confirm this. 

Soon after Trump ordered airstrikes on Iranian nuclear sites, Iranian-aligned hackers began launching cyberattacks. Truth Social, Trump’s platform, was briefly knocked offline by a distributed denial-of-service (DDoS) attack claimed by a group known as “313 Team.” Security experts confirmed the group’s ties to Iranian and pro-Palestinian cyber networks. 

The outage occurred shortly after Trump posted about the strikes. Users encountered error messages, and monitoring organizations warned that “313 Team” operates within a wider ecosystem of groups supporting anti-U.S. cyber activity. 

The Department of Homeland Security (DHS) issued a national alert on June 22, citing rising cyber threats linked to Iran-Israel tensions. The bulletin highlighted increased risks to U.S. infrastructure, especially from loosely affiliated hacktivists and state-backed cyber actors. DHS also warned that extremist rhetoric could trigger lone-wolf attacks inspired by Iran’s ideology. 

Federal agencies remain on high alert, with targeted sectors including defense, finance, and energy. Though large-scale service disruptions have not yet occurred, cybersecurity teams have documented attempted breaches. Two groups backing the Palestinian cause claimed responsibility for further attacks across more than a dozen U.S. sectors. 

At the same time, the U.S. faces internal challenges in cyber preparedness. The recent dismissal of Gen. Timothy Haugh, who led both the NSA and Cyber Command, has created leadership uncertainty. Budget cuts to election security programs have added to concerns. 

While a military ceasefire between Iran and Israel may be holding, experts warn the cyber conflict is far from over. Independent threat actors and ideological sympathizers could continue launching attacks. Analysts stress the need for sustained investment in cybersecurity infrastructure—both public and private—as digital warfare becomes a long-term concern.

Air India Express Flight Returns Mid-Air After Suspected GPS Spoofing Near Jammu Border

 

In an unusual and concerning incident, an Air India Express flight en route from Delhi to Jammu was forced to return to Indira Gandhi International Airport on Monday due to suspected GPS spoofing near India's border region.

Carrying 160 passengers, the flight reportedly reached Jammu’s airspace but was unable to land and began circling the area before flying back to Delhi. A replacement flight was arranged approximately six hours later, and passengers reached Jammu significantly behind schedule.

Spoofing involves the intentional manipulation of GPS signals, which are vital for aircraft navigation, feeding the aircraft false or misleading position data. Flight No. IX 2564, operated on an Airbus A320, departed from Terminal 3 at 11:05 am and returned at 1:28 pm, according to flight tracking platform FlightAware.

An Air India Express spokesperson stated, “Our Delhi–Jammu flight returned to Delhi as a precautionary measure, following a suspected GPS interference incident. Subsequently, an alternative flight was organised to connect guests to Jammu. We regret the inconvenience caused. Instances of GPS signal interference have been reported by operators while flying over certain sensitive regions.”

Given the aircraft’s proximity to Pakistan, the pilot is believed to have opted for a precautionary return rather than risk a deviation into potentially hostile airspace.

Aviation expert Captain Mohan Ranganathan noted, “For the last two years, there have been reports of GPS spoofing in places like Pakistan, Iran, some parts of the Middle East and even Myanmar. It often happens in war zones. This kind of spoofing is done deliberately, but we cannot say for sure who is involved in it.”

The incident adds to growing global concerns around aviation safety in geopolitically sensitive regions.

Jailbroken Mistral And Grok Tools Are Used by Attackers to Build Powerful Malware

 

The latest findings from Cato Networks suggest that a number of jailbroken and uncensored AI tool variants marketed on hacker forums were probably built on well-known commercial large language models such as Mistral AI's Mixtral and xAI's Grok.

While some commercial AI companies have attempted to build safety and security safeguards into their models to keep them from writing malware outright, providing detailed instructions for building bombs, or engaging in other malicious behaviours, a parallel underground market has developed that sells uncensored versions of the technology.

These "WormGPTs," which receive their name from one of the first AI tools that was promoted on underground hacker forums in 2023, are typically assembled from open-source models and other toolkits. They are capable of creating code, finding and analysing vulnerabilities, and then being sold and promoted online. However, two variants promoted on BreachForums in the last year had simpler roots, according to researcher Vitaly Simonovich of Cato Networks.

Named after one of the first AI tools that was promoted on underground hacker forums in 2023, these "WormGPTs" are typically assembled from open-source models and other toolkits and are capable of generating code, searching for and analysing vulnerabilities, and then being sold and marketed online. 

However, Vitaly Simonovich, a researcher at Cato Networks, reveals that two variations promoted on BreachForums in the last year had straightforward origins. “Cato CTRL has discovered previously unreported WormGPT variants that are powered by xAI’s Grok and Mistral AI’s Mixtral,” he wrote. 

One version, promoted on BreachForums in February, was accessible via Telegram. It referred to itself as an “Uncensored Assistant” but otherwise described its function in a positive and uncontroversial manner. After gaining access to both models and beginning his investigation, Simonovich discovered that they were, as promised, largely unfiltered.

In addition to other offensive capabilities, the models could create phishing emails and build credential-stealing PowerShell malware on demand. However, Simonovich discovered prompt-based guardrails meant to hide one thing: the initial system prompts used to build those tools. He was able to evade the constraints by using an LLM jailbreaking technique to expose the first 200 tokens processed by the system, and the answer identified xAI's Grok as the underlying model driving the tool.

“It appears to be a wrapper on top of Grok and uses the system prompt to define its character and instruct it to bypass Grok’s guardrails to produce malicious content,” Simonovich added.

Another WormGPT variant, promoted in October 2024 with the subject line "WormGPT / 'Hacking' & UNCENSORED AI," was described as an artificial intelligence-based language model focused on "cyber security and hacking issues." The seller stated that the tools give customers "access to information about how cyber attacks are carried out, how to detect vulnerabilities, or how to take defensive measures," but emphasised that neither they nor the product accept legal responsibility for the user's actions.

New Malicious Python Package Found Stealing Cloud Credentials

 


A dangerous piece of malware has been discovered hidden inside a Python software package, raising serious concerns about the security of open-source tools often used by developers.

Security experts at JFrog recently found a harmful package uploaded to the Python Package Index (PyPI) – a popular online repository where developers share and download software components. This specific package, named chimera-sandbox-extensions, was designed to secretly collect sensitive information from developers, especially those working with cloud infrastructure.

The package was uploaded by a user going by the name chimerai and appears to target users of the Chimera sandbox, a platform developers use for testing. Once installed, the package launches a chain of events that unfolds in multiple stages.

It starts with a function called check_update(), which tries to contact a list of web domains produced by a domain generation algorithm (DGA). Of these, only one domain was found to be active at the time of analysis. This connection allows the malware to download a hidden tool that fetches an authentication token, which is then used to download a second, more harmful tool written in Python.
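JFrog has not published the exact algorithm in this summary, but the generic sketch below shows how a seeded DGA works and why analysts reimplement one once extracted: enumerating every candidate domain lets defenders block or sinkhole them in advance. The seed, count, and TLD here are all hypothetical.

```python
import hashlib

def candidate_domains(seed: str, count: int = 10) -> list[str]:
    """Generic DGA sketch: derive deterministic domain names from a seed.

    The seed and TLD are hypothetical, not the values used by
    chimera-sandbox-extensions. Given the real algorithm, a defender
    can generate the full candidate list and pre-emptively block it.
    """
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}-{i}".encode()).hexdigest()
        domains.append(digest[:16] + ".com")
    return domains

print(candidate_domains("example-seed"))
```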

This second stage of the malware focuses on stealing valuable information. It attempts to gather data such as Git settings, CI/CD pipeline details, AWS access tokens, configuration files from tools like Zscaler and JAMF, and other system-level information. All of this stolen data is bundled into a structured file and sent back to a remote server controlled by the attackers.

According to JFrog’s research, the malware was likely designed to go even further, possibly launching a third phase of attack. However, researchers did not find evidence of this additional step in the version they analyzed.

After JFrog alerted the maintainers of PyPI, the malicious package was removed from the platform. However, the incident serves as a reminder of the growing complexity and danger of software supply chain attacks. Unlike basic infostealers, this malware showed signs of being deliberately crafted to infiltrate professional development environments.

Cybersecurity experts are urging development and IT security teams to stay alert. They recommend using multiple layers of protection, regularly reviewing third-party packages, and staying updated on new threats to avoid falling victim to such sophisticated attacks.
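One lightweight way to act on the "review your third-party packages" advice is to compare what is actually installed against an allowlist of vetted packages. The sketch below does this with the standard library; the approved set is hypothetical, and it complements rather than replaces vulnerability scanners such as pip-audit.

```python
from importlib.metadata import distributions

# Hypothetical allowlist: packages your team has actually reviewed.
APPROVED = {"pip", "setuptools", "requests", "numpy"}

def unreviewed_packages() -> list[str]:
    """List installed distributions that nobody has vetted.

    This won't catch a malicious package you already approved, but it
    surfaces anything that crept into the environment outside the
    review process, e.g. via a transitive dependency.
    """
    installed = {dist.metadata["Name"].lower() for dist in distributions()}
    return sorted(installed - {name.lower() for name in APPROVED})

if __name__ == "__main__":
    for name in unreviewed_packages():
        print(f"unreviewed package: {name}")
```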

As open-source tools continue to be essential in software development, such incidents highlight the need for stronger checks and awareness across the development community.

CISA Warns of Renewed Exploits Targeting TP-Link Routers with Critical Flaws

 

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has raised fresh concerns about several outdated TP-Link router models that are being actively exploited by cybercriminals. Despite the flaw being identified years ago, it has re-emerged in recent attack campaigns, prompting its addition to CISA’s Known Exploited Vulnerabilities (KEV) catalog. 

The security issue is a command injection vulnerability carrying a high CVSS severity rating of 8.8. It impacts three specific models: the TP-Link TL-WR940N, TL-WR841N, and TL-WR740N. The flaw lies in the routers' web-based management interface, where improperly validated input lets attackers execute unauthorized commands directly on the device. This makes it possible to take control of a router remotely if remote management is enabled, or locally from the same network.
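TP-Link's firmware is closed source, so the actual flawed code isn't public; the sketch below is a generic Python illustration of the command injection class being described, alongside the standard fix of validating input and keeping it away from a shell.

```python
import ipaddress
import subprocess

def ping_vulnerable(host: str) -> bytes:
    # VULNERABLE: the string is handed to a shell, so input such as
    # "8.8.8.8; reboot" appends an attacker-controlled command.
    return subprocess.check_output(f"ping -c 1 {host}", shell=True)

def ping_safe(host: str) -> bytes:
    # Fix 1: validate the input against the type we expect.
    ipaddress.ip_address(host)  # raises ValueError for anything but an IP
    # Fix 2: pass arguments as a list so no shell ever parses them.
    return subprocess.check_output(["ping", "-c", "1", host])
```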

Although this vulnerability has been publicly known for years, recent activity suggests that malicious actors are targeting these devices once again. According to cybersecurity researchers, the attack surface remains significant because these routers are still in use across many households and small offices. 

CISA has mandated that all federal agencies remove the affected router models from their networks by July 7, 2025. It also strongly recommends that other organizations and individuals replace the devices to avoid potential exploitation. 

The affected routers are particularly vulnerable because they are no longer supported by the manufacturer. The TL-WR940N last received a firmware update in 2016, the TL-WR841N in 2015, and the TL-WR740N has gone without updates for over 15 years. As these devices have reached end-of-life status, no further security patches will be provided. Users are urged to upgrade to newer routers that are regularly updated by manufacturers. 

Modern Wi-Fi routers often include enhanced performance, support for more devices, and built-in security protections. Some brands even offer network-wide security features to safeguard connected devices against malware and intrusion attempts. Additionally, using antivirus software with extra security tools, such as VPNs and threat detection, can further protect against online threats. 

Outdated routers not only put your personal information at risk but also slow down internet speed and struggle to manage today’s connected home environments. Replacing obsolete hardware is an important step in defending your digital life. 

Ensuring you’re using a router that receives timely security updates, combined with good cybersecurity habits, can significantly reduce your exposure to cyberattacks. 

CISA’s warning is a clear signal that relying on aging technology leaves both individuals and organizations vulnerable to renewed threats.