
Pentagon Chief Hegseth Revealed Key Yemen War Plans in Second Signal Chat, Source Claims

 

In a chat group that included his wife, brother, and personal attorney, U.S. Defence Secretary Pete Hegseth provided specifics of a strike on Yemen's Iran-aligned Houthis in March, a person familiar with the situation told Reuters earlier this week. 

The disclosure of a second Signal chat raises fresh questions about Hegseth's use of an unclassified messaging system to share highly sensitive security details. It comes at a particularly delicate moment for him: senior officials were removed from the Pentagon last week as part of an internal leak investigation.

In the second chat, Hegseth shared details of the attack similar to those revealed last month by The Atlantic magazine, whose editor-in-chief, Jeffrey Goldberg, had been accidentally added to a separate Signal chat that included all of President Donald Trump's most senior national security officials.

The individual familiar with the situation, who spoke on condition of anonymity, said the second chat comprised around a dozen people and was set up during Hegseth's confirmation process to discuss administrative matters rather than actual military planning. According to this person, the chat nonetheless included details of the air strike schedule.

Jennifer Hegseth, the secretary's wife and a former Fox News producer, has attended classified meetings with foreign military counterparts, according to photographs released by the Pentagon. During a meeting with his British counterpart at the Pentagon in March, she was seen sitting behind him. Hegseth's brother serves as a Department of Homeland Security liaison to the Pentagon.

The Trump administration has aggressively pursued leaks, an effort Hegseth has enthusiastically backed at the Pentagon. Pentagon spokesperson Sean Parnell said, without providing evidence, that the media was "enthusiastically taking the grievances of disgruntled former employees as the sole sources for their article."

Hegseth's tumultuous moment

Democratic lawmakers stated Hegseth could no longer continue in his position. "We keep learning how Pete Hegseth put lives at risk," Senate Minority Leader Chuck Schumer said in a post to X. "But Trump is still too weak to fire him. Pete Hegseth must be fired.”

Senator Tammy Duckworth, an Iraq War veteran who was severely injured in combat in 2004, stated that Hegseth "must resign in disgrace.” 

The latest disclosure comes just days after Dan Caldwell, one of Hegseth's top aides, was escorted from the Pentagon after being identified in an investigation into leaks at the Department of Defence. Although Caldwell is less well-known than other senior Pentagon officials, he has played an important role for Hegseth, who designated him as the Pentagon's point of contact in the first Signal chat.

Security Analysts Express Concerns Over AI-Generated Doll Trend

 

If you've been scrolling through social media recently, you've probably seen a lot of... dolls. There are dolls all over X and on Facebook feeds. Instagram? Dolls. TikTok?

You guessed it: dolls, as well as doll-making techniques. There are even dolls on LinkedIn, undoubtedly the most serious and least entertaining member of the club. You can refer to it as the Barbie AI treatment or the Barbie box trend. If Barbie isn't your thing, you can try AI action figures, action figure starter packs, or the ChatGPT action figure fad. However, regardless of the hashtag, dolls appear to be everywhere. 

And, while they share some similarities (boxes and packaging resembling Mattel's Barbie, personality-driven accessories, a plastic-looking smile), they're all as unique as the people who post them, with the exception of one key common feature: they're not real. 

In this emerging trend, users turn to generative AI tools like ChatGPT to reimagine themselves as dolls or action figures, complete with accessories. It has proven hugely popular, and not just among influencers.
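For readers curious how such an image might be scripted rather than typed into a chat window, here is a minimal sketch using OpenAI's Python SDK; the model choice and prompt are illustrative assumptions, not a reconstruction of any particular post:

```python
# Minimal sketch of scripting a "doll in a box" image with OpenAI's Python
# SDK. The model name and prompt are illustrative assumptions; most people
# simply type a similar prompt into the ChatGPT interface.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",
    prompt=(
        "A boxed action-figure version of a social media marketer, "
        "in toy packaging with accessories: laptop, coffee cup, phone"
    ),
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # link to the generated image
```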

Politicians, celebrities, and major brands have all joined in. Journalists covering the trend have created images of themselves with cameras and microphones (though this journalist won't put you through that). Users have created renditions of almost every well-known figure, including billionaire Elon Musk and actress and singer Ariana Grande.

The Verge, a tech media outlet, reports that the trend started on LinkedIn, the professional social network popular with marketers seeking engagement. That is why so many of the dolls you see are promoting a company or business. (Think "social media marketer doll," or even "SEO manager doll.")

Privacy concerns

From a social perspective, the popularity of the doll-generating trend isn't surprising at all, according to Matthew Guzdial, an assistant professor of computing science at the University of Alberta.

"This is the kind of internet trend we've had since we've had social media. Maybe it used to be things like a forwarded email or a quiz where you'd share the results," Guzdial noted. 

But as with any AI trend, there are some concerns over its data use. Generative AI in general poses substantial data privacy challenges. As the Stanford University Institute for Human-Centered Artificial Intelligence (Stanford HAI) points out, data privacy concerns and the internet are nothing new, but AI is so "data-hungry" that it magnifies the risk. 

Safety tips 

As we have seen, one of the major risks of participating in viral AI trends is the potential for your conversation history to be compromised by unauthorised or malicious parties. To stay safe, researchers recommend taking the following steps: 

Protect your account: This includes enabling two-factor authentication (2FA), creating strong, unique passwords for each service, and avoiding logins on shared computers. (A short password-generation sketch follows these tips.)

Minimise the real data you give to the AI model: Fornés suggests using nicknames or substitute data instead. You should also consider using a separate ID solely for interactions with AI models.

Use the tool cautiously and properly: When feasible, use the AI model in incognito mode and without activating the history or conversational memory functions.
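As a small illustration of the "unique passwords" advice above, here is a sketch of generating a strong random password in Python; in practice a password manager does this for you, and the length and character set here are assumptions:

```python
# Sketch of generating a strong, unique password per service, as the
# account-protection tip above suggests. Length and alphabet are
# assumptions; a password manager normally handles this for you.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # generate a different one for every service
```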

Black Basta: Exposing the Ransomware Outfit Through Leaked Chat Logs

 

The cybersecurity sector experienced an extraordinary breach in February 2025 that revealed the inner workings of the well-known ransomware gang Black Basta. 

Trustwave SpiderLabs researchers have now taken an in-depth look at the disclosed contents, which reveal how the gang thinks and operates, including discussions about tactics and the effectiveness of various attack tools, and even debates over the ethical and legal implications of targeting Ascension Health.

The messages were initially posted to MEGA before being reuploaded straight to Telegram on February 11 by the online identity ExploitWhispers. The JSON-based dataset contained over 190,000 messages allegedly sent by group members between September 18, 2023 and September 28, 2024. 
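To give a sense of how analysts might begin sifting a dump like this, here is a minimal sketch of filtering such a JSON chat log by date range; the file name and field names are assumptions, since the article does not describe the leak's actual schema:

```python
# Hypothetical sketch of filtering a leaked JSON chat log by date range.
# The file name and the "timestamp" field are assumptions; the leak's
# real schema is not described in this article.
import json
from datetime import datetime

with open("black_basta_leak.json", encoding="utf-8") as f:
    messages = json.load(f)

start, end = datetime(2023, 9, 18), datetime(2024, 9, 28)
in_range = [
    m for m in messages
    if start <= datetime.fromisoformat(m["timestamp"]) <= end
]
print(f"{len(in_range)} of {len(messages)} messages fall in the leak window")
```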

This data dump offers rare insight into the group's infrastructure, tactics, and internal decision-making, with obvious links to the infamous Conti leaks of 2022. The leak does not document every detail of the group's inner workings, but it provides an unfiltered view of how one of the most financially successful and active ransomware organisations of recent years functions behind the scenes. Black Basta has been operating since 2022.

The outfit normally keeps a low profile while carrying out operations that target organisations across a variety of sectors and demand millions in ransom payments. The messages show members' remarkable autonomy and ingenuity in adapting quickly to changing security situations. The leak also revealed Black Basta's reliance on social engineering: while traditional phishing efforts remain common, the group sometimes takes a more personalised approach.

The chat logs provide greater insight into Black Basta's strategic approach to vulnerability exploitation. The group actively seeks common and unique vulnerabilities, acquiring zero-day exploits to gain a competitive advantage. 

Its weaponisation policy reveals a deliberate effort to increase the impact of its attacks, with Cobalt Strike frequently deployed for command and control operations. Notably, Black Basta created a custom proxy architecture dubbed "Coba PROXY" to manage massive amounts of C2 traffic, which improved both stealth and resilience. Beyond its technological expertise, the leak provides insight into Black Basta's negotiation strategies.

The gang uses aggressive and psychologically manipulative tactics to coerce victims into paying ransoms, with strategic delays and coercive rhetoric as standard tools for extracting the maximum financial return. Even more alarming is its expansion into previously off-limits targets, such as CIS-based financial institutions.

While the immediate impact of the breach is unknown, the disclosure of Black Basta's inner workings gives cybersecurity specialists a unique chance to adapt and respond. Understanding the group's methodology supports the creation of more effective defensive strategies, thereby increasing resilience to future ransomware attacks.

AI and Privacy – Issues and Challenges

 

Artificial intelligence is changing cybersecurity and digital privacy. It promises better security but also raises concerns about ethical boundaries, data exploitation, and surveillance. From facial recognition software to predictive crime prevention, AI-driven systems are becoming ever more integrated into daily life, leaving consumers wondering where to draw the line between safety and overreach.

The same artificial intelligence (AI) tools that help spot online threats, optimise security procedures, and stop fraud can also be used for intrusive data collection, behavioural tracking, and mass surveillance. The use of AI-powered surveillance in corporate data mining, law enforcement profiling, and government tracking has drawn criticism in recent years. Without clear regulations and transparency, AI runs the risk of undermining rather than defending basic rights.

AI and data ethics

Despite encouraging developments, there are numerous instances of AI-driven inventions going awry, raising serious questions. Facial recognition company Clearview AI amassed one of the largest facial recognition databases in the world by illegally scraping billions of photos from social media. Clearview's technology was employed by governments and law enforcement organisations across the globe, prompting lawsuits and regulatory action over mass surveillance.

The UK Department for Work and Pensions used an AI system to detect welfare fraud. An internal investigation suggested that the system disproportionately targeted people based on their age, disability, marital status, and nationality. This bias resulted in certain groups being unfairly singled out for fraud investigations, raising questions about discrimination and the ethical use of artificial intelligence in public services. Despite earlier assurances of impartiality, the findings have fuelled calls for greater transparency and oversight of government AI use.

Regulations and consumer protection

Governments worldwide are moving to regulate the ethical use of AI, and a number of significant regulations will have a direct impact on consumers. The European Union's AI Act, whose obligations begin to phase in during 2025, divides AI applications into risk categories.

High-risk technologies, such as biometric surveillance and facial recognition, will face strict rules to guarantee transparency and ethical deployment. The EU's commitment to responsible AI governance is further reinforced by the prospect of severe sanctions for non-compliant companies.

In the United States, the California Consumer Privacy Act gives individuals more control over their personal data. Consumers have the right to know what information firms gather about them, to request its erasure, and to opt out of data sales. This law adds an important layer of privacy protection in an era when AI-powered data processing is becoming ever more common.

The White House has recently introduced the AI Bill of Rights, a framework aimed at encouraging responsible AI practices. While not legally enforceable, it emphasises the need for privacy, transparency, and algorithmic fairness, pointing to a larger push for ethical AI development in policymaking.

More Than Half of Companies Lack AI-driven Cyber Threat Plans, Report Finds

 

Mimecast has discovered that over 55% of organisations do not have specific plans in place to deal with AI-driven cyberthreats. The cybersecurity company's most recent "State of Human Risk" report, which is based on a global survey of 1,100 IT security professionals, emphasises growing concerns about insider threats, cybersecurity budget shortages, and vulnerabilities related to artificial intelligence. 

According to the report, establishing a structured cybersecurity strategy has improved the risk posture of 96% of organisations. The threat landscape is still becoming more complicated, though, and insider threats and AI-driven attacks are posing new challenges for security leaders. 

“Despite the complexity of challenges facing organisations—including increased insider risk, larger attack surfaces from collaboration tools, and sophisticated AI attacks—organisations are still too eager to simply throw point solutions at the problem,” stated Mimecast’s human risk strategist VP, Masha Sedova. “With short-staffed IT and security teams and an unrelenting threat landscape, organisations must shift to a human-centric platform approach that connects the dots between employees and technology to keep the business secure.” 

According to the survey, 95% of organisations use AI for insider risk assessments, endpoint security, and threat detection, yet 81% are concerned about data leakage from generative AI (GenAI) tools. More than half lack defined tactics to resist AI-driven attacks, and 46% are not confident in their ability to defend against AI-powered phishing and deepfake threats.

Data loss from internal sources is expected to increase over the next year, according to 66% of IT leaders, while insider security incidents have increased by 43%. The average cost of insider-driven data breaches, leaks, or theft is $13.9 million per incident, according to the research. Furthermore, 79% of organisations think that the increased usage of collaboration technologies has increased security concerns, making them more vulnerable to both deliberate and accidental data breaches. 

With only 8% of employees responsible for 80% of security incidents, the report highlights a move away from traditional security awareness training towards proactive Human Risk Management. To identify and eliminate threats early, organisations are implementing behavioural analytics and AI-driven monitoring. The fact that 72% of security leaders believe human-centric cybersecurity solutions will be essential over the next five years points to a shift towards more sophisticated threat detection and risk mitigation techniques.

Terror Outfits Are Using Crypto Funds For Donations in India: TRM Labs

 

TRM Labs, a blockchain intelligence firm based in San Francisco and recognised by the World Economic Forum, recently published a report revealing links between the Islamic State Khorasan Province (ISKP) and ISIS-affiliated fund-collecting networks in India. ISKP, an Afghanistan-based terrorist outfit, is reportedly using the cryptocurrency Monero (XMR) to gather funds.

Following the withdrawal of US troops from Afghanistan, the ISKP garnered significant attention. The "TRM Labs 2025 Crypto Crime Report," published on February 10th, focuses on illicit cryptocurrency transactions in 2024. According to the report, such transactions fell by 24% compared to 2023.

However, the report also emphasises the evolving techniques employed by terrorist organisations.

TRM Labs' report uncovered on-chain ties between ISKP-affiliated addresses and covert fundraising campaigns in India. "On-chain" here refers to activity recorded directly on a blockchain's public ledger, where flows between addresses can be traced. The TRM report states that the ISKP has begun receiving donations in Monero (XMR).

News reports state that Voice of Khorasan, a periodical created by ISKP's media branch, al-Azaim, announced the commencement of the organisation's first donation drive accepting Monero. Since then, the outfit's fundraising appeals have consistently requested donations in Monero.

According to the report, ISKP and other terrorist organisations increasingly favour Monero because its blockchain anonymity features help conceal transactions. (Monero currently trades at around ₹19,017.77.) However, the report notes that terrorist groups will continue to prefer more stable cryptocurrencies over Monero for the foreseeable future, given its volatility and the risk of crackdowns.

Furthermore, reliance on cryptocurrency mixers and anonymous wallets has risen. Online forums are now the primary venues for exchanging guidance on best practices and locating providers with the strongest security guarantees. People are using fake documents to get around Know Your Customer (KYC) rules enforced by exchanges, making it difficult for law enforcement to trace illicit transactions.

In contrast to Bitcoin and other well-known digital assets, Monero has gained attention for sophisticated privacy features that make transactions harder to trace. This makes it a tempting option for those engaged in illicit financial activity.

North Korean Hackers Exploit ZIP Files in Sophisticated Cyber Attacks

 

State-sponsored hacking group APT37 (ScarCruft) is deploying advanced cyber-espionage tactics to infiltrate systems using malicious ZIP files containing LNK shortcuts. These files are typically disguised as documents related to North Korean affairs or trade agreements and are spread through phishing emails.

Once opened, the attack unfolds in multiple stages, leveraging PowerShell scripts and batch files to install the RokRat remote access Trojan (RAT) as the final payload.

The infection starts with carefully crafted phishing emails, often using real information from legitimate websites to enhance credibility. These emails contain malicious ZIP attachments housing LNK files. When executed, the LNK file verifies its directory path, relocating itself to %temp% if necessary.

It then extracts multiple components, including:
  • A decoy HWPX document
  • A batch script (shark.bat)
  • Additional payloads such as caption.dat and elephant.dat

The shark.bat script discreetly executes PowerShell commands, launching the elephant.dat script, which decrypts caption.dat using an XOR key. The decrypted content is then executed in memory, ultimately deploying the RokRat RAT.
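To illustrate the kind of XOR decoding described above, here is a minimal sketch; the file names come from the article, but the key value and payload layout are assumptions:

```python
# Minimal sketch of repeating-key XOR decoding, as described for
# elephant.dat and caption.dat above. The key value and payload layout
# are assumptions for illustration; only the file names come from the article.
from pathlib import Path

def xor_decrypt(data: bytes, key: bytes) -> bytes:
    """XOR every byte of data with the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

encrypted = Path("caption.dat").read_bytes()
decrypted = xor_decrypt(encrypted, b"\x5a")  # hypothetical single-byte key
Path("caption.decoded").write_bytes(decrypted)
```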

Once active, RokRat collects detailed system information, such as:
  • Operating system version
  • Computer name
  • Logged-in user details
  • Running processes
  • Screenshots of the infected system
The stolen data is then exfiltrated to command-and-control (C2) servers via legitimate cloud services like pCloud, Yandex, and Dropbox, utilizing their APIs to send, download, and delete files while embedding OAuth tokens for stealthy communication.

RokRat also allows attackers to execute remote commands, conduct system reconnaissance, and terminate processes. To avoid detection, it implements anti-analysis techniques, including:
  • Detecting virtual environments via VMware Tools
  • Sandbox detection by creating and deleting temporary files
  • Debugger detection using IsDebuggerPresent
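For a rough feel of the debugger check named above, here is a Windows-only Python/ctypes analogy; RokRat itself is native code, so this illustrates the API call rather than the malware's implementation:

```python
# Windows-only illustration of the IsDebuggerPresent check named above.
# RokRat is native code; this Python/ctypes version is only an analogy.
import ctypes

if ctypes.windll.kernel32.IsDebuggerPresent():
    print("Debugger detected - a sample like this would exit or stall.")
else:
    print("No debugger detected.")
```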
The malware ensures secure communication by encrypting data using XOR and RSA encryption, while C2 commands are received in AES-CBC encrypted form, decrypted locally, and executed on the compromised system. These commands facilitate data collection, file deletion, and malware termination.
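As a sketch of the AES-CBC command decryption described above, here is how such a routine could look in Python with the `cryptography` library; the key size, IV transport, and padding scheme are assumptions, since the article does not specify RokRat's exact scheme:

```python
# Sketch of AES-CBC decryption like the C2 command handling described above.
# Key size, IV transport, and PKCS7 padding are assumptions; the article
# does not specify RokRat's exact scheme.
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def decrypt_command(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = decryptor.update(ciphertext) + decryptor.finalize()
    unpadder = padding.PKCS7(algorithms.AES.block_size).unpadder()
    return unpadder.update(padded) + unpadder.finalize()
```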

By leveraging legitimate cloud services, RokRat seamlessly blends into normal network traffic, making detection more challenging.

As researchers tracking the campaign observed: “This sophisticated approach highlights the evolving tactics of APT37, as they continue to adapt and expand their operations beyond traditional targets, now focusing on both Windows and Android platforms through phishing campaigns.”

As APT37 refines its cyberattack strategies, organizations must remain vigilant against such persistent threats and enhance their cybersecurity defenses.

Quantum Computers Threaten to Breach Online Security in Minutes

 

By some estimates, an ideal quantum computer could break RSA-2048, one of the strongest encryption standards in wide use today, in as little as 10 seconds. Quantum computing applies the principles of quantum physics to process information using quantum bits (qubits) rather than standard computer bits. Unlike traditional bits, which are either 0 or 1, qubits can exist in a superposition of both states at once. This capacity makes quantum computers extremely effective at solving complex problems, particularly in cryptography, artificial intelligence, and materials research.

While this computational leap opens up incredible opportunities across businesses, it also raises serious security concerns. When quantum computers achieve their full capacity, they will be able to break through standard encryption methods used to safeguard our most sensitive data. While the timescale for commercial availability of fully working quantum computers is still uncertain, projections vary widely.

The Boston Consulting Group predicts a significant quantum advantage between 2030 and 2040, although Gartner believes that developments in quantum computing could begin to undermine present encryption approaches as early as 2029, with complete vulnerability by 2034. Regardless of the precise timetable, the conclusion is unanimous: the era of quantum computing is quickly approaching. 

Building quantum resilience 

To address this impending threat, organisations must: 

  • Adopt new cryptographic algorithms that are resistant to quantum attacks, such as post-quantum cryptography (PQC). The National Institute of Standards and Technology (NIST) recently published its first set of PQC standards (FIPS 203, FIPS 204, and FIPS 205) to help organisations safeguard their data against quantum attacks.
  • Develop crypto agility so that new cryptographic methods can be adopted without massive system overhauls as threats continue to evolve; upgrades will be required across the entire infrastructure.

This requires four essential steps: 

Discover and assess: Map out where your organisation utilises cryptography and evaluate the quantum threats to those assets. Identify the crown jewels and the potential business consequences. (A short inventory sketch follows these steps.)

Strategise: Take stock of the current cryptography inventory, weigh asset lifetimes against quantum threat timelines, assign quantum risk levels to essential business assets, and create a comprehensive PQC migration path.

Modernise: Implement quantum-resilient algorithms while remaining consistent with overall company strategy.

Enhance: Maintain crypto agility by providing regular updates, asset assessments, modular procedures, continual education, and compliance monitoring. 
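To make the "Discover and assess" step concrete, here is a minimal sketch that flags certificates whose public keys rely on quantum-vulnerable algorithms, using Python's `cryptography` library; the certificate directory is an assumption for illustration:

```python
# Minimal sketch for the "Discover and assess" step: flag certificates
# whose public keys use quantum-vulnerable algorithms (RSA/ECC).
# The "certs" directory is an assumption for illustration.
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

def audit_certificate(path: Path) -> str:
    cert = x509.load_pem_x509_certificate(path.read_bytes())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        return f"{path.name}: RSA-{key.key_size} (quantum-vulnerable)"
    if isinstance(key, ec.EllipticCurvePublicKey):
        return f"{path.name}: ECC {key.curve.name} (quantum-vulnerable)"
    return f"{path.name}: {type(key).__name__} (review manually)"

for pem in sorted(Path("certs").glob("*.pem")):  # hypothetical cert store
    print(audit_certificate(pem))
```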

The urgency to act 

In the past, cryptographic migrations have often taken more than ten years to complete. Early adopters of quantum-resistant encryption have encountered wide-ranging effects, such as interoperability issues, infrastructure rewrites, and other upgrade challenges, resulting in multi-year delays to modernisation programmes.

The lengthy implementation period makes getting started immediately crucial, even though the shift to PQC is a practical challenge given how extensively cryptography is dispersed throughout digital infrastructure. Prioritising crypto agility will help organisations safeguard critical data before quantum threats materialise.