

All the recent news you need to know

Cybercriminals Exploit Law Enforcement Data Requests to Steal User Information

 

While most major data breaches stem from software vulnerabilities, payment card theft, or phishing attacks, a growing share of identity theft is carried out through an intermediary that is not immediately apparent. Some of the biggest technology firms are handing over private information to what they believe are lawful authorities, only to discover that identity thieves were masquerading as such.

Technology firms such as Apple, Google, and Meta are legally required to disclose limited information about their users to law enforcement agencies in certain circumstances, such as criminal investigations or emergencies that threaten human life or national security. These requests are usually channeled through formal systems and treated as high priority because they are often urgent. All of these companies hold detailed information about their users, including location history, profiles, and device data, that is of critical use to law enforcement.

This process, however, has also been exploited by cybercriminals, who evade the safeguards protecting user data by mimicking law enforcement communications. One recent tactic is the acquisition of typosquatted domains or email addresses that are nearly identical to law enforcement or government domains, differing by only a character or two. The attackers then send convincing emails to companies' compliance or legal departments that look no different from genuine law enforcement requests.
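
As a rough illustration of the kind of check a compliance team could automate, the Go sketch below flags sender domains that sit within one character edit of a known agency domain. The agency list, the example address, and the one-edit threshold are all placeholders, not a vetted policy.

```go
// Flag sender domains that sit one edit away from a known agency domain,
// a simple typosquat heuristic. The agency list, example sender, and
// one-edit threshold are placeholders, not a vetted policy.
package main

import (
	"fmt"
	"strings"
)

// levenshtein returns the edit distance between two ASCII strings.
func levenshtein(a, b string) int {
	prev := make([]int, len(b)+1)
	curr := make([]int, len(b)+1)
	for j := range prev {
		prev[j] = j
	}
	for i := 1; i <= len(a); i++ {
		curr[0] = i
		for j := 1; j <= len(b); j++ {
			cost := 1
			if a[i-1] == b[j-1] {
				cost = 0
			}
			curr[j] = minInt(prev[j]+1, minInt(curr[j-1]+1, prev[j-1]+cost))
		}
		prev, curr = curr, prev
	}
	return prev[len(b)]
}

func minInt(x, y int) int {
	if x < y {
		return x
	}
	return y
}

func main() {
	knownAgencies := []string{"fbi.gov", "interpol.int"} // placeholder list
	sender := "legal-request@fbi.g0v"                    // example lookalike address

	domain := strings.ToLower(sender[strings.LastIndex(sender, "@")+1:])
	for _, agency := range knownAgencies {
		if d := levenshtein(domain, agency); d > 0 && d <= 1 {
			fmt.Printf("suspicious sender domain %q: %d edit(s) from %q\n", domain, d, agency)
		}
	}
}
```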

In more sophisticated attacks, the perpetrators use business email compromise to break into genuine email accounts belonging to law enforcement or public service officials. Requests sent from real addresses appear far more authentic, which greatly increases the chances that companies will comply. Although this approach takes more effort, it is also more effective precisely because it appears to come from a legitimate source. The fraudulent data requests are often framed as emergency disclosures, which shortens the window for verification.

Emergency requests are meant to avert imminent harm, and attackers exploit that urgency to pressure companies into disclosing information quickly. The stolen data can then be used for identity theft, financial fraud, or account takeover, or sold on dark web markets. Despite these dangers, technology companies have taken steps to prevent abuse of the process. Most major companies now route law enforcement requests through dedicated portals and review each one internally for validity, authority, and legal compliance before sharing any data.

This has significantly reduced the number of abuse cases but has not eliminated the risk. As more criminals develop expertise in impersonation schemes that exploit trust-based systems, the situation poses a broader challenge for the tech industry: balancing lawful cooperation with law enforcement agencies against the obligation to safeguard users' privacy. The abuse of law enforcement data request systems underscores how important it is to keep sensitive information out of criminals' hands.

CyberVolk Ransomware Fails to Gain Traction After Encryption Misstep


 

CyberVolk, a pro-Russian hacktivist collective, has intensified its campaign of ransomware-driven intimidation against entities perceived as hostile to Moscow over the past year, marking a notable shift in both the scale and presentation of its operations.

Alongside its attacks, the group has become increasingly adept at crafting deliberate visual branding, including stylized ransomware imagery released to publicize successful intrusions. These visuals, reinforced by inflammatory language and a threatening tone, appear intended not simply to announce breaches but to amplify psychological pressure on victims and broader audiences alike.

In October 2024, CyberVolk claimed responsibility for ransoming several Japanese organizations, including the Japan Oceanographic Data Center and the Japan Meteorological Agency. The group reportedly altered the desktop wallpapers of several victims before starting the encryption process, using the act itself as a signal of control and coercion.

CyberVolk's plans to enter the ransomware-as-a-service ecosystem, however, appear to have been undermined by fundamental technical lapses. The group recently launched a new ransomware strain called VolkLocker, positioning it as a RaaS offering designed to expand its operational reach and attract affiliates.

Researchers at SentinelOne found that the malware contains severe cryptographic and implementation weaknesses that greatly reduce its effectiveness. Notably, the encryption key is hardcoded directly into the ransomware binary and is also written in plaintext to a hidden file on compromised systems, compounding the error.

The ability to extract and reuse the exposed key severely undermines VolkLocker's credibility and viability within the cybercrime market, since affected organizations could potentially recover their data without paying a ransom.

CyberVolk first caught the security community's attention last year, when the Infosec Shop and other researchers began documenting its activities and identified the hacktivist collective as pro-Russian. CyberVolk appears to operate in the same ideological space as outfits such as CyberArmyofRussia_Reborn and NoName057(16), both of which have been linked by US authorities to the Russian military intelligence apparatus and President Vladimir Putin.

However, direct ties between CyberVolk and the Russian government have not been proven. CyberVolk also differs operationally from many of its peers: whereas comparable hacktivist teams tend to rely on disruptive but low-impact distributed denial-of-service attacks, CyberVolk has consistently used ransomware in its campaigns.

Researchers have noted that after repeated bans from Telegram, the group all but disappeared from public view for the first half of 2025, only to resurface in August with a revamped ransomware service built around VolkLocker. Its operations reflect an uneven scaling attempt, combining fairly polished Telegram automation with malware payloads that still show signs of testing and incomplete hardening.

VolkLocker is written in Go and designed to work across both Windows and Linux environments. Its Telegram-based command-and-control functionality handles victim communication, system reconnaissance, and decryption requests alongside the core encryption of sensitive data. To configure new payloads, affiliates must provide operational details such as Bitcoin payment addresses, Telegram bot credentials, encryption deadlines, file extensions, and self-destruct parameters.
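
To make those build parameters concrete, here is a minimal sketch of what such a configuration might look like; every field name and value below is an assumption for illustration, not something recovered from the actual VolkLocker builder.

```go
// Illustrative only: a plausible shape for the build parameters the article
// says affiliates must supply to the builder. Every field name and value is
// an assumption for illustration, not recovered from the real tool.
package main

import (
	"fmt"
	"time"
)

type BuildConfig struct {
	BTCAddress       string        // Bitcoin payment address shown in the ransom note
	TelegramBotToken string        // bot credentials used for C2 and victim contact
	EncryptDeadline  time.Duration // countdown before the destructive wipe triggers
	FileExtension    string        // appended to encrypted files, e.g. ".locked" or ".cvolk"
	SelfDestruct     bool          // whether the payload removes itself after running
}

func main() {
	cfg := BuildConfig{
		BTCAddress:       "bc1-example-address",
		TelegramBotToken: "123456:placeholder-token",
		EncryptDeadline:  72 * time.Hour,
		FileExtension:    ".locked",
		SelfDestruct:     true,
	}
	fmt.Printf("%+v\n", cfg)
}
```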

Telegram forms the backbone of this ecosystem, providing communication, tool distribution, and customer support. Some operators have reportedly extended the default C2 framework to include keylogging and remote access capabilities, and as of November the group was advertising standalone remote access trojans and keyloggers alongside its RaaS offerings, with tiered pricing options.

The ransomware can escalate privileges, bypass Windows User Account Control, selectively encrypt files based on predefined exclusion rules, and apply AES-256 encryption in GCM mode, underscoring CyberVolk's ongoing attempt to mix ideological messaging with the increasingly commercialized business of cybercrime.

Further technical analysis of VolkLocker reveals a mix of aggressive design choices and critical implementation errors. One of its most notable features is a Go-based timer that can be configured to trigger a destructive wipe when the countdown expires or when an incorrect password is entered into the HTML ransom note.

Once activated, the routine targets common user directories such as Documents, Downloads, Pictures, and the Desktop, exposing victims to permanent data loss. Access to CyberVolk's ransomware-as-a-service platform reportedly costs roughly $800 to $1,100 for a build supporting a single operating system, or $1,600 to $2,200 for a build supporting both Windows and Linux.

In the group's early days, affiliates obtained the malware through Telegram-based builder bots that could customize encryption parameters and generate tailored payloads, underscoring how heavily the group relied on Telegram as a delivery and coordination platform.

As of November 2025, the same operators have expanded their commercial offerings, advertising standalone remote access trojans and keyloggers for $500 each, a further sign of their intent to diversify beyond ransomware. Nevertheless, a serious cryptographic weakness sits at the core of VolkLocker's operation and undermines its effectiveness.

During encryption, AES-256 is used in Galois/Counter Mode with a random 12-byte nonce generated for each file; the original files are deleted and extensions such as .locked or .cvolk are appended to the encrypted copies. Although the scheme appears strong on paper, researchers found that every file on a victim's system is encrypted with a single master key derived from a 64-character hexadecimal string embedded directly in the binary.

Compounding the problem, the same key is written in plaintext to a file named system_backup.key, which is never removed. This backup appears to be a testing artifact inadvertently left in production builds, and SentinelOne suggests it could help victims recover their data without paying a ransom.
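
Because VolkLocker uses AES-256-GCM with a per-file 12-byte nonce and a single master key, recovery becomes straightforward once that key is in hand. The Go sketch below shows the idea under two assumptions the report does not spell out: that system_backup.key holds the 64-character hex master key, and that each encrypted file stores the nonce immediately before the ciphertext. Any real recovery effort should rely on vetted tooling from incident responders.

```go
// Illustrative recovery sketch only, not a vetted decryptor.
// Assumptions the report does not confirm: system_backup.key holds the
// 64-character hex AES-256 master key, and each encrypted file is laid out
// as a 12-byte GCM nonce followed by ciphertext and authentication tag.
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: recover <encrypted-file>")
		return
	}

	// Read and decode the leaked master key.
	keyHex, err := os.ReadFile("system_backup.key")
	if err != nil {
		panic(err)
	}
	key, err := hex.DecodeString(strings.TrimSpace(string(keyHex)))
	if err != nil || len(key) != 32 {
		panic("expected a 64-character hex string (32-byte AES-256 key)")
	}

	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	gcm, err := cipher.NewGCM(block) // GCM with the standard 12-byte nonce
	if err != nil {
		panic(err)
	}

	encName := os.Args[1] // e.g. report.docx.locked
	data, err := os.ReadFile(encName)
	if err != nil {
		panic(err)
	}
	nonce, ct := data[:gcm.NonceSize()], data[gcm.NonceSize():]
	plain, err := gcm.Open(nil, nonce, ct, nil)
	if err != nil {
		panic("decryption failed: the layout or key assumption is wrong")
	}

	outName := strings.TrimSuffix(strings.TrimSuffix(encName, ".locked"), ".cvolk")
	if err := os.WriteFile(outName, plain, 0o600); err != nil {
		panic(err)
	}
	fmt.Println("recovered", outName)
}
```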

While the flaw offers a rare advantage to those already affected, the threat actors are expected to fix it quickly once it becomes public. Most security experts therefore advise sharing such weaknesses with law enforcement and ransomware response specialists through private channels while an operation is ongoing, in order to maximize victim assistance without accelerating adversary adaptation.

The modern cyber-extortion economy is sustained by networks of hackers, affiliates, and facilitators who run these campaigns together. To map this landscape, researchers gathered open-source intelligence from social media activity and media reporting, which highlighted the broad range of actors operating within it.

One such group is the Ukrainian-linked UA25 collective, whose publicly claimed retaliatory actions against Russian infrastructure have reportedly caused substantial financial and operational damage. Such cases underscore the asymmetrical nature of contemporary cyber conflict, in which loosely organized non-state actors can inflict outsized damage on much larger adversaries.

In this climate, Russian cybercriminal groups often blur the line between ideological alignment and financial opportunism, pushing profit-driven schemes under the banner of political activism. CyberVolk exemplifies this hybrid model: it seeks legitimacy through hacktivist rhetoric while monetizing its ransomware activity through extortion and tool sales.

Continuous scrutiny by security firms and independent researchers has, over the past few years, exposed internal operational weaknesses, including flawed cryptographic practices and insecure key handling, that can be leveraged to disrupt campaigns and, in some cases, support law enforcement and broader takedown efforts. Publications such as The Register have reported on these findings as well.

In the near term, analysts warn that ransomware operations are likely to become more sophisticated and destructive, with future strains increasingly incorporating elements of wiper malware, which destroys data outright rather than encrypting it for ransom. Regulatory actions, sanctions, and government advisories issued throughout 2025 have laid the groundwork for a more coordinated international response to these threats.

However, experts warn that meaningful progress will depend on sustained cooperation between governments, technology companies, and private sector firms. In CyberVolk's case, technical ambition has often outpaced execution, yet even flawed operations demonstrate the persistent threat posed by Russian-linked actors, who continue to adapt despite mounting pressure from the West.

Recent sanctions targeting key enablers have disrupted parts of this ecosystem, but new infrastructure and service providers are likely to fill the gaps over time. The lesson for defenders is clear: continued vigilance, proactive threat hunting, and advanced detection and response capabilities remain essential, as the broader contest against ransomware increasingly depends on converting adversaries' mistakes into durable security advantages.

The rise and subsequent missteps of CyberVolk are a timely reminder that the ransomware landscape is evolving on multiple fronts, not only in technical sophistication but also in narrative strategy and operational ambition.

Groups may try to increase their impact through political messaging, branding, and service models tailored for commercialization, but long-term success still depends on disciplined engineering and operational security, areas in which even ideologically motivated actors continue to fail.

Organizations should treat this episode as a reminder to build multilayered defenses that go beyond perimeter security: credential hygiene, behavioral monitoring, and rapid incident response planning alongside regular patching, offline backups, and tabletop exercises. It also underscores the value of engaging with threat intelligence providers to identify emerging patterns before they turn into operational disruptions.

For policymakers and industry leaders, the case highlights the benefits of coordinated disclosure practices and cross-border collaboration as ways to weaken ransomware ecosystems without inadvertently making them more resilient.

Watching how ransomware groups iterate and rebrand can be as instructive as analyzing their malware, giving defenders valuable opportunities to anticipate next moves and close gaps before they are exploited. Surviving in an environment where both sides keep adapting will increasingly depend on turning visibility into action and learning from every flaw that is exposed.

Gartner Warns: Block AI Browsers to Avert Data Leaks and Security Risks

 

Analyst firm Gartner has recommended that organizations block AI-powered browsers to protect business data and maintain cybersecurity. The company says most of these agentic browsers, which use autonomous AI models to interact with web content and automate tasks by default, are designed to deliver a smooth user experience at the expense of security.

Gartner warns that these browsers may leak sensitive information, such as credentials, bank details, or emails, to malicious websites or unauthorized parties. While browsers like OpenAI's ChatGPT Atlas can summarize content, gather data, and automatically navigate users between websites, the cloud-based back ends they commonly rely on handle and store user data, leaving it exposed unless security settings are carefully managed and appropriate safeguards are in place.

Gartner's analysts point out that agentic browsers can be deceived into collecting and sending sensitive data to unauthorized parties, especially when workers have confidential data open in browser tabs while using an AI assistant. Even if a browser's back end conforms to a firm's cybersecurity policies, improper use or configuration can still create significant risk.

The analysts stress that in all cases the responsibility lies squarely with each organization to assess the compliance and risks of the backend services behind any AI browser. Gartner also cautions that workers may be tempted to automate mundane or mandated activities, such as cybersecurity training, with these browsers, which could circumvent basic security protocols.

Safety tips 

To mitigate these risks, Gartner suggests organizations train users on the hazards of exposing sensitive data to AI browser back ends and ensure users do not use these tools while viewing highly confidential information. 

"With the rise of AI, there is a growing tension between productivity and security, as most AI browsers today err toward convenience over safety. I would not recommend complete bans but encourage organizations to perform risk assessments on specific AI services powering the browsers," security expert Javvad Malik of KnowBe4 commented. 

Organizations should also develop tailored playbooks for adopting, overseeing, and managing the risk of AI agents, so they can harness the productivity benefits of AI browsers while maintaining an appropriate cybersecurity posture.

Home Renovation Choices That Often Do Not Deliver Real Value

 



Home renovations are often regarded as investments, but not every upgrade enhances a home's function, character, or resale value. Designers who specialize in older properties generally emphasize that intelligent, budget-savvy decisions matter more than drastic changes. Some of the most heavily marketed "upgrades" drain the largest budgets while guaranteeing little in return, especially over the long term.

One of the costliest mistakes homeowners make is demolishing walls to create open layouts. While open plans remain popular, tearing out walls can erase architectural detail and drive up costs considerably. Moving a standard wall can cost several hundred dollars, while modifying a load-bearing wall may require permits, structural reinforcement, and expenses well into the thousands. Preserving smaller rooms and alcoves often maintains charm and keeps renovation budgets in check.

Flooring is another area where homeowners tend to overspend. Many older homes have original hardwood hidden under carpet or outdated materials; the boards may be uneven in color or show wear patterns, but that can add character. Cleaning, buffing, crack-filling products, or spot refinishing will usually revive existing hardwood, a far cheaper proposition than new flooring, which can run several dollars per square foot and quickly add up to five figures in a larger home.

When replacement is unavoidable, expensive tile is not required. Today's vinyl is a far cry from linoleum and has been engineered for durability, water resistance, and style. Luxury vinyl planks or composite tiles are scratch-resistant, easier to care for, and considerably cheaper alternatives. Vinyl can even be taken up in flooded areas, allowed to dry, and then reinstalled.

Many homeowners also spend money unnecessarily replacing fixtures just to match a metal finish. In reality, mixing metals can produce a warmer, more layered look. Choosing one primary finish and setting it off with accents in other finishes provides cohesion without replacing functional hardware.

Other upgrades that cause more problems than benefits include skylights. These installations can cost several thousand dollars, with common issues cropping up long-term, like leaks. Sun tunnels offer a simpler installation process, with less expense and negligible upkeep, to reflect natural light into dark spots in the home.

On the exterior, decorative metal features like wrought iron are expensive and not ideal for every style of architecture. In simpler or more modern homes, wood features often provide a cleaner appearance at a significantly lower cost. Metal fencing or accents can be many times more expensive per foot than their wooden counterparts.

Full cabinet replacement and premium stone countertops are surefire ways to inflate a kitchen budget. Many cabinets are still structurally sound and simply need sanding, a coat of primer, and paint. Alternative countertop materials such as butcher block or quality laminate are tough and stylish yet less expensive, though wood surfaces do require periodic oiling and careful maintenance.

Another area where people overspend is with decorative beams. Solid wood beams are heavy and expensive, while lighter planks or faux beam constructions provide the exact same look for a whole lot less money and weight.

Furniture choices also have a big impact on the budget: antique or vintage pieces often outclass new, mass-produced options and are surprisingly easy to restore. Second-hand purchases bring character while keeping costs down.

Even appliances can be refreshed without replacement. Vinyl wrapping allows owners to change colors or finishes at low costs, avoiding the high expense of custom appliances altogether.

Ultimately, value-added renovations are about durability, function, and considered design. Whether preparing for the sale or improving daily living, smart upgrades focus on lasting impact rather than trends, ensuring both financial and aesthetic sustainability.




IDEsaster Report: Severe Bugs in AI Coding Agents Can Lead to Data Theft and Remote Code Execution


Using AI agents for data exfiltration and RCE

Six months of research into AI-based development tools has disclosed more than thirty security bugs that allow remote code execution (RCE) and data exfiltration. The IDEsaster findings show how AI agents embedded in IDEs such as Visual Studio Code, Zed, and JetBrains products, as well as various commercial assistants, can be tricked into leaking sensitive data or launching attacker-controlled code.

The research reports that 100% of the tested AI IDEs and coding agents were vulnerable. Impacted products include GitHub Copilot, Windsurf, Cursor, Kiro.dev, Zed.dev, Roo Code, Junie, Cline, Gemini CLI, and Claude Code. The work has resulted in at least twenty-four assigned CVEs as well as additional AWS advisories.

How AI assistants are exploited

The core problem lies in the way AI agents interact with IDE features. These editors were never designed for autonomous components that can read, edit, and create files, and once-harmless features become attack surfaces when AI agents gain those capabilities. AI IDEs essentially leave the base software out of their threat model: because these features have existed for years, they are assumed to be inherently safe.

Attack tactic 

Once autonomous AI agents are in the mix, however, the same functionality can be weaponized into data-exfiltration and RCE primitives. The research describes this as an IDE-agnostic attack chain.

It begins with context hijacking via prompt injection. Covert instructions can be planted in file names, rule files, READMEs, and the output of malicious MCP servers. When an agent reads this context, it can be redirected to perform authorized actions that trigger malicious behaviour in the core IDE. The final stage abuses built-in features to steal data or run attacker code, and because many AI IDEs share core software layers, the same techniques often work across products.

Examples

One example is writing a JSON file that references a remote schema. When the IDE automatically retrieves that schema, parameters inserted by the agent, including sensitive information gathered earlier in the chain, are leaked to the remote server. This behavior was observed in Zed, JetBrains IDEs, and Visual Studio Code, and developer safeguards such as diff previews did not suppress the outbound request.
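
From a defensive angle, the exfiltration channel is visible in the artifact itself: a JSON file whose "$schema" points at an attacker-controlled host. The Go sketch below walks a workspace and flags remote schema references on hosts outside an allowlist; the allowlist entry is a placeholder, and real policies would need to account for legitimate remote schemas.

```go
// Defensive sketch: scan JSON files in a workspace for "$schema" URLs that
// point at hosts outside an allowlist, the exfiltration channel described
// above. The allowlist is a placeholder; real policies will differ.
package main

import (
	"encoding/json"
	"fmt"
	"io/fs"
	"net/url"
	"os"
	"path/filepath"
	"strings"
)

var allowedHosts = map[string]bool{
	"json.schemastore.org": true, // placeholder allowlist entry
}

func main() {
	filepath.WalkDir(".", func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() || !strings.HasSuffix(path, ".json") {
			return nil
		}
		raw, err := os.ReadFile(path)
		if err != nil {
			return nil
		}
		var doc map[string]any
		if json.Unmarshal(raw, &doc) != nil {
			return nil // skip files that are not plain JSON objects
		}
		schema, ok := doc["$schema"].(string)
		if !ok {
			return nil
		}
		if u, err := url.Parse(schema); err == nil && u.Host != "" && !allowedHosts[u.Host] {
			// A remote schema on an unknown host is the channel through which
			// injected parameters leave the machine when the IDE fetches it.
			fmt.Printf("%s: remote $schema on unlisted host %q\n", path, u.Host)
		}
		return nil
	})
}
```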

Another case study demonstrates full remote code execution through altered IDE settings. By updating an executable file already present in the workspace and then changing configuration fields such as php.validate.executablePath, an attacker can make the IDE execute arbitrary code as soon as a relevant file type is opened or created. JetBrains utilities show similar exposure via workspace metadata.
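
On the defensive side, a quick audit of workspace settings can surface this kind of tampering. The Go sketch below flags any executablePath-style setting that points at a binary inside the workspace; the heuristic is deliberately simple and the settings location is an assumption (a VS Code-style .vscode/settings.json), so treat it as a starting point rather than a complete check.

```go
// Defensive sketch: flag VS Code workspace settings that point an
// "...executablePath"-style field at a binary inside the workspace itself,
// the pattern abused in the RCE case study above. The heuristic and the
// settings path are illustrative, not a complete audit.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	raw, err := os.ReadFile(filepath.Join(".vscode", "settings.json"))
	if err != nil {
		fmt.Println("no workspace settings found:", err)
		return
	}

	var settings map[string]any
	if err := json.Unmarshal(raw, &settings); err != nil {
		fmt.Println("could not parse settings.json:", err)
		return
	}

	for key, val := range settings {
		path, ok := val.(string)
		if !ok || !strings.HasSuffix(key, "executablePath") {
			continue
		}
		// A tool path that resolves inside the workspace (a relative path such
		// as ./tools/something) is a red flag: opening a matching file type
		// would cause the IDE to run that binary.
		if !filepath.IsAbs(path) {
			fmt.Printf("suspicious setting %q -> %q\n", key, path)
		}
	}
}
```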

According to the IDEsaster report, “It’s impossible to entirely prevent this vulnerability class short-term, as IDEs were not initially built following the Secure for AI principle. However, these measures can be taken to reduce risk from both a user perspective and a maintainer perspective.”


5 Critical Situations Where You Should Never Rely on ChatGPT


Just a few years after its launch, ChatGPT has evolved into a go-to digital assistant for tasks ranging from quick searches to event planning. While it undeniably offers convenience, treating it as an all-knowing authority can be risky. ChatGPT is a large language model, not an infallible source of truth, and it is prone to misinformation and fabricated responses. Understanding where its usefulness ends is crucial.

Here are five important areas where experts strongly advise turning to real people, not AI chatbots:

  • Medical advice
ChatGPT cannot be trusted with health-related decisions. It is known to provide confident yet inaccurate information, and it may even acknowledge errors only after being corrected. Even healthcare professionals experimenting with AI agree that it can offer only broad, generic insights — not tailored guidance based on individual symptoms.

Despite this, the chatbot will still offer an answer if you ask, "Hey, what's that sharp pain in my side?", rather than urging you to seek urgent medical care. The core issue is that chatbots cannot distinguish fact from fiction; they generate responses by blending massive amounts of data, regardless of accuracy.

ChatGPT is not, and likely never will be, a licensed medical professional. While it may provide references if asked, those sources must be carefully verified. In several cases, people have reported real harm after following chatbot-generated health advice.

  • Therapy
Mental health support is essential, yet often expensive. Even so-called "cheap" online therapy platforms can cost around $65 per session, and insurance coverage remains limited. While it may be tempting to confide in a chatbot, this can be dangerous.

One major concern is ChatGPT’s tendency toward agreement and validation. In therapy, this can be harmful, as it may encourage behaviors or beliefs that are objectively damaging. Effective mental health care requires an external, trained professional who can challenge harmful thought patterns rather than reinforce them.

There is also an ongoing lawsuit alleging that ChatGPT contributed to a teen’s suicide — a claim OpenAI denies. Regardless of the legal outcome, the case highlights the risks of relying on AI for mental health support. Even advocates of AI-assisted therapy admit that its limitations are significant.

  • Advice during emergencies
In emergencies, every second counts. Whether it’s a fire, accident, or medical crisis, turning to ChatGPT for instructions is a gamble. Incorrect advice in such situations can lead to severe injury or death.

Preparation is far more reliable than last-minute AI guidance. Learning basic skills like CPR or the Heimlich maneuver, participating in fire drills, and keeping emergency equipment on hand can save lives. If possible, always call emergency services rather than relying on a chatbot. This is one scenario where AI is least dependable.

  • Password generation
Using ChatGPT to create passwords may seem harmless, but it carries serious security risks. There is a strong possibility that the chatbot could generate identical or predictable passwords for multiple users. Without precise instructions, the suggested passwords may also lack sufficient complexity.

Additionally, chatbots often struggle with basic constraints, such as character counts. More importantly, ChatGPT stores prompts and outputs to improve its systems, raising concerns about sensitive data being reused or exposed.

Instead, experts recommend dedicated password generators offered by trusted password managers or reputable online tools, which are specifically designed with security in mind (see the short sketch after this list for a minimal local alternative).

  • Future predictions
If even leading experts struggle to predict the future accurately, it’s unrealistic to expect ChatGPT to do better. Since AI models frequently get present-day facts wrong, their long-term forecasts are even less reliable.

Using ChatGPT to decide which stocks to buy, which team will win, or which career path will be most profitable is unwise. While it can be entertaining to ask speculative questions about humanity centuries from now, such responses should be treated as curiosity-driven thought experiments — not actionable guidance.
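
For comparison with chatbot-generated passwords, here is a minimal local generator; the length and character set are arbitrary illustrative choices, and a full-featured password manager remains the better everyday option.

```go
// Minimal local password generator using crypto/rand. Length and character
// set are arbitrary illustrative choices; a password manager is still the
// more practical tool for day-to-day use.
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

func main() {
	const charset = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*-_"
	const length = 20

	password := make([]byte, length)
	for i := range password {
		// crypto/rand draws from the operating system's CSPRNG, so the output
		// is neither stored in a chat log nor predictable across users.
		n, err := rand.Int(rand.Reader, big.NewInt(int64(len(charset))))
		if err != nil {
			panic(err)
		}
		password[i] = charset[n.Int64()]
	}
	fmt.Println(string(password))
}
```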

ChatGPT can be a helpful tool when used appropriately, but knowing its limitations is essential. For critical decisions involving health, safety, security, or mental well-being, real professionals remain irreplaceable.

