
SilentCryptoMiner Campaign Threatens YouTubers Into Posting Malware in Videos

Experts have discovered an advanced malware campaign that exploits the rising popularity of Windows Packet Divert drivers, which are commonly used to bypass internet restrictions.

Malware targets YouTubers 

Hackers are spreading SilentCryptoMiner malware disguised as genuine software, and it has affected more than 2,000 victims in Russia alone. The attack hinges on tricking YouTubers with large follower bases into spreading malicious links.

“Such software is often distributed in the form of archives with text installation instructions, in which the developers recommend disabling security solutions, citing false positives,” reports Kaspersky's Securelist. This helps threat actors by “allowing them to persist in an unprotected system without the risk of detection.”

Innocent YouTubers Turned Into Victims

“Most active of all have been schemes for distributing popular stealers, remote access tools (RATs), Trojans that provide hidden remote access, and miners that harness computing power to mine cryptocurrency.” Malware families commonly seen in this distribution scheme include Phemedrone, DCRat, NJRat, and XWorm.

In one incident, a YouTuber with 60,000 subscribers posted videos containing links to infected archives, gaining over 400,000 views. The malicious files were hosted on gitrok[.]com, with the download counter exceeding 40,000.

Blackmail and distributing malware

Threat actors have adopted a new distribution scheme: they send bogus copyright strikes to content creators and influencers and threaten to have their channels shut down unless they post videos containing malicious links. This scare tactic abuses the reach of popular YouTubers to push malware to a much larger audience.

The infection chain starts with a modified start script that launches an additional executable via PowerShell.

According to the Securelist report, the loader, written in Python, is packaged with PyInstaller and retrieves the next-stage payload from hardcoded domains. The second-stage loader runs environment checks, adds the AppData directory to Microsoft Defender's exclusions, and downloads the final payload, SilentCryptoMiner.
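From the defensive side, the Defender-exclusion step is easy to hunt for. Below is a minimal sketch of a triage heuristic that flags exclusions under user-writable paths; the sample text format is an assumption modeled on the `ExclusionPath` field of PowerShell's `Get-MpPreference` output, and the path split is a rough heuristic, not a robust parser:

```python
import re

def suspicious_exclusions(mp_preference_text: str) -> list[str]:
    """Flag Defender exclusion paths under user-writable locations.

    `mp_preference_text` is assumed to resemble the ExclusionPath
    section of `Get-MpPreference` output (hypothetical sample below).
    """
    # Pull everything between the braces of the ExclusionPath field.
    m = re.search(r"ExclusionPath\s*:\s*\{([^}]*)\}", mp_preference_text)
    if not m:
        return []
    # Naive comma split; real paths containing commas would need care.
    paths = [p.strip() for p in m.group(1).split(",")]
    # AppData/Temp exclusions are a common persistence red flag.
    risky = ("\\appdata\\", "\\temp\\")
    return [p for p in paths if any(r in p.lower() for r in risky)]

sample = r"ExclusionPath : {C:\Program Files\App, C:\Users\bob\AppData\Roaming\svc}"
print(suspicious_exclusions(sample))  # the AppData path is flagged
```

On a real Windows host the input text would come from running `Get-MpPreference` in PowerShell; the logic here only illustrates why an AppData exclusion deserves a second look.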

The infamous SilentCryptoMiner

SilentCryptoMiner can mine multiple cryptocurrencies using different algorithms. For stealth, it uses process hollowing to inject the miner code into a legitimate system process.

The malware also evades security checks: it pauses mining while specific monitored processes are running and scans for virtual-machine indicators.

Google Fixes YouTube Security Flaw That Exposed User Emails

 



A critical security vulnerability in YouTube allowed attackers to uncover the email addresses of any account on the platform. Cybersecurity researchers discovered the flaw and reported it to Google, which promptly fixed the issue. While no known attacks exploited the vulnerability, the potential consequences could have been severe, especially for users who rely on anonymity.


How the Vulnerability Worked

The flaw was identified by researchers Brutecat and Nathan, as reported by BleepingComputer. It involved an internal identifier used within Google’s ecosystem, known as the Gaia ID. Every YouTube account has a unique Gaia ID, which links it to Google’s services.

The exploit worked by blocking a YouTube account and then accessing its Gaia ID through the live chat function. Once attackers retrieved this identifier, they found a way to trace it back to the account’s registered email address. This loophole could have exposed the contact details of millions of users without their knowledge.


Google’s Reaction and Fix

Google confirmed that the issue was present from September 2024 to February 2025. Once informed, the company swiftly implemented a fix to prevent further risk. Google assured users that there were no reports of major misuse but acknowledged that the vulnerability had the potential for harm.


Why This Was a Serious Threat

The exposure of email addresses poses various risks, including phishing attempts, hacking threats, and identity theft. This is particularly concerning for individuals who depend on anonymity, such as whistleblowers, journalists, and activists. If their private details were leaked, it could have led to real-world dangers, not just online harassment.

Businesses also faced risks, as malicious actors could have used this flaw to target official YouTube accounts, leading to scams, fraud, or reputational damage.


Lessons and Preventive Measures

The importance of strong security measures and rapid responses to discovered flaws cannot be overstated. Users are encouraged to take precautions such as enabling two-factor authentication (2FA), using strong, unique passwords, and being cautious of suspicious emails or login attempts.

Tech companies, including Google, must consistently audit security systems and respond quickly to any potential weaknesses.

Although the security flaw was patched before any confirmed incidents occurred, this event serves as a reminder of the omnipresent risks in the digital world. By staying informed and following security best practices, both users and companies can work towards a safer online experience.



Amazon and Audible Face Scrutiny Amid Questionable Content Surge

 


Amazon's audio platforms, Amazon Music and Audible, have been inundated with bogus listings that try to trick customers into clicking on dubious "forex trading" sites, Telegram channels, and suspicious links claiming to offer pirated software. Abusing Spotify playlists and podcasts to promote pirated software, video game cheat codes, spam links, and "warez" sites is becoming increasingly common.

To push Spotify web player results into search engines such as Google, threat actors inject targeted keywords and links into the titles and descriptions of playlists and podcasts, boosting SEO for their dubious online properties. These listings use playlist names, podcast descriptions, and bogus "episodes" to encourage listeners to visit external links that may lead to a security breach.

Many threat actors also exploit Google's Looker Studio (formerly Google Data Studio) to boost the search engine ranking of illicit websites promoting spam, torrents, and pirated content. According to BleepingComputer, one method used in this SEO poisoning attack relies on Google's datastudio.google.com subdomain, which appears to lend credibility to the malicious website.

Aside from mass email spam campaigns, spammers are also using Audible podcasts to advertise their illicit activities. Any digital platform open to the public can be spammed, and none is immune. What makes the Spotify and Amazon cases interesting is that one would instinctively assume the overhead of podcasting and digital music distribution would deter spammers, pushing them toward lower-hanging fruit such as spammy social media posts or YouTube videos with misleading descriptions.

The most recent instance was a Spotify playlist entitled "Sony Vegas Pro 13 Crack...", which appeared to drive traffic to several "free" software sites listed in its title and description. Karol Paciorek, the cybersecurity enthusiast who spotted the playlist, noted that cybercriminals exploit Spotify for malware distribution because its tracks and pages are easily indexed by search engines, making it an attractive host for malicious links.

Looker Studio (formerly Google Data Studio) is Google's web-based business intelligence tool that lets users build customizable reports and dashboards to visualize and analyze data. It has been used legitimately, for example, to track and visualize the download counts of open-source packages over a four-week window. There are many legitimate business cases for Looker Studio, but like any web service it can be misused by malicious actors looking to host questionable content or manipulate search engine results for illicit URLs.

Recent SEO poisoning campaigns have targeted keywords related to the U.S. midterm elections and pushed malicious Zoom, TeamViewer, and Visual Studio installers. Before publishing this article, BleepingComputer reached out to Google to better understand how it plans to address the issue.

Firstory, a service launched in 2019, enables podcasters to distribute their shows across the globe and connect with audiences. It publishes podcasts to Spotify, but acknowledges that spam is an ongoing issue it is actively trying to curtail.

Spam accounts and misleading content remain persistent challenges for digital platforms, according to Stanley Yu, co-founder of Firstory, in a statement provided to BleepingComputer. Yu emphasized that addressing these issues is an ongoing priority for the company. To tackle the growing threat of unauthorized and spammy content, Firstory has implemented a multifaceted approach. This includes active collaboration with major streaming platforms to detect and remove infringing material swiftly. 

The company has also developed and employed advanced technologies to scan podcast titles and show notes for specific keywords associated with spam, ensuring early identification and mitigation of potential violations. Furthermore, Firstory proactively monitors and blocks suspicious email addresses commonly used by malicious actors to infiltrate and disrupt digital ecosystems. By integrating technology-driven solutions with strategic partnerships, Firstory aims to set a higher standard for content integrity across platforms. 
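The keyword scanning Firstory describes can be pictured as a simple filter over titles and show notes. The keyword list, scoring, and threshold below are illustrative assumptions, not Firstory's actual rules:

```python
# Hedged sketch: a toy spam-keyword scan for podcast titles/show notes.
# The keyword list and threshold are assumptions for illustration only.
SPAM_KEYWORDS = ("crack", "keygen", "free download", "telegram", "warez")

def spam_score(text: str) -> int:
    """Count how many known spam keywords appear in the text."""
    lowered = text.lower()
    return sum(1 for kw in SPAM_KEYWORDS if kw in lowered)

def is_probably_spam(title: str, notes: str, threshold: int = 2) -> bool:
    """Flag a listing when enough spam keywords co-occur."""
    return spam_score(title + " " + notes) >= threshold

print(is_probably_spam("Sony Vegas Pro 13 Crack", "Free download via Telegram"))
```

A production system would pair such keyword matching with sender-reputation signals (like the blocked email addresses mentioned above) to keep false positives down.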

The company’s commitment reflects a broader industry imperative to protect users and maintain trust in an ever-expanding digital landscape. As digital platforms evolve, sustained vigilance and innovation will be essential to counter emerging threats and foster a safer, more reliable online environment.

YouTube: A Prime Target for Cybercriminals

As one of today's most popular social media platforms, YouTube frequently attracts cybercriminals who exploit it to run scams and distribute malware. These schemes often involve videos masquerading as tutorials for popular software or ads for cryptocurrency giveaways. In other cases, fraudsters embed malicious links in video descriptions or comments, making them appear as legitimate resources related to the video's content.

The theft of popular YouTube channels elevates these fraudulent campaigns, allowing cybercriminals to reach a vast audience of regular YouTube users. These stolen channels are repurposed to spread various scams and info-stealing malware, often through links to pirated and malware-infected software, movies, and game cheats. For YouTubers, losing access to their accounts can be distressing, leading to significant income loss and lasting reputational damage.

Most YouTube channel takeovers begin with phishing. Attackers create fake websites and send emails that appear to be from YouTube or Google, tricking targets into revealing their login credentials. Often, these emails promise sponsorship or collaboration deals, including attachments or links to supposed terms and conditions.

If accounts lack two-factor authentication (2FA) or if attackers circumvent this extra security measure, the threat becomes even more severe. Since late 2021, YouTube content creators have been required to use 2FA on the Google account associated with their channel. However, in some cases, such as the breach of Linus Tech Tips, attackers bypassed passwords and 2FA codes by stealing session cookies from victims' browsers, allowing them to sidestep additional security checks.

Attackers also use lists of usernames and passwords from past data breaches to hack into existing accounts, exploiting the fact that many people reuse passwords across different sites. Additionally, brute-force attacks, where automated tools try numerous password combinations, can be effective, especially if users have weak or common passwords and neglect 2FA.

Recent Trends and Malware

The AhnLab Security Intelligence Center (ASEC) recently reported an increase in hijacked YouTube channels, including one with 800,000 subscribers, used to distribute malware like RedLine Stealer, Vidar, and Lumma Stealer. According to the ESET Threat Report H2 2023, Lumma Stealer particularly surged in the latter half of last year, targeting crypto wallets, login credentials, and 2FA browser extensions. As noted in the ESET Threat Report H1 2024, these tools remain significant threats, often posing as game cheats or software cracks on YouTube.

In some cases, cybercriminals hijack Google accounts and quickly create and post thousands of videos distributing info-stealing malware. Victims may end up with compromised devices that further jeopardize their accounts on other platforms like Instagram, Facebook, X, Twitch, and Steam.

Staying Safe on YouTube

To protect yourself on YouTube, follow these tips:

  • Use Strong and Unique Login Credentials: Create robust passwords or passphrases and avoid reusing them across multiple sites. Consider using passkeys for added security.
  • Employ Strong 2FA: Use 2FA not just on your Google account, but also on all your accounts. Opt for authentication apps or hardware security keys over SMS-based methods.
  • Be Cautious with Emails and Links: Be wary of emails or messages claiming to be from YouTube or Google, especially if they request personal information or account credentials. Verify the sender's email address and avoid clicking on suspicious links or downloading unknown attachments.
  • Keep Software Updated: Ensure your operating system, browser, and other software are up-to-date to protect against known vulnerabilities.
  • Monitor Account Activity: Regularly check your account for any suspicious actions or login attempts. If you suspect your channel has been compromised, follow Google's guidance.
  • Stay Informed: Keep abreast of the latest cyber threats and scams targeting you online, including on YouTube, to better avoid falling victim.
  • Report and Block Suspicious Content: Report any suspicious or harmful content, comments, links, or users to YouTube and block such users to prevent further contact.
  • Secure Your Devices: Use multi-layered security software across your devices to guard against various threats.

Terrorist Tactics: How ISIS Duped Viewers with Fake CNN and Al Jazeera Channels


ISIS, a terrorist organization, allegedly launched fake channels on Google-owned YouTube and on Facebook that posed as the global news outlets CNN and Al Jazeera. The goal was to lend credibility to ISIS propaganda and ease its spread.

According to research by the Institute for Strategic Dialogue, the operators managed two YouTube channels, as well as accounts on Facebook and X (formerly Twitter), with the help of the outlet 'War and Media'.

The campaign went live in March of this year. False profiles resembling reputable channels were used on Facebook and YouTube to spread propaganda. The videos remained live on YouTube for more than a month; it is unclear when they were taken down from Facebook.

The Deceptive Channels

ISIS operatives set up multiple fake channels on YouTube, each mimicking the branding and style of reputable news outlets. These channels featured professionally edited videos, complete with logos and graphics reminiscent of CNN and Al Jazeera. The content ranged from news updates to opinion pieces, all designed to lend an air of credibility.

Tactics and Objectives

1. Impersonation: By posing as established media organizations, ISIS aimed to deceive viewers into believing that the content was authentic. Unsuspecting users might stumble upon these channels while searching for legitimate news, inadvertently consuming extremist propaganda.

2. Content Variety: The fake channels covered various topics related to ISIS’s global expansion. Videos included recruitment messages, calls for violence, and glorification of terrorist acts. The diversity of content allowed them to reach a broader audience.

3. Evading Moderation: YouTube’s content moderation algorithms struggled to detect these fake channels. The professional production quality and branding made it challenging to distinguish them from genuine news sources. As a result, the channels remained active for over a month before being taken down.

Challenges for Social Media Platforms

  • Algorithmic Blind Spots: Algorithms designed to identify extremist content often fail when faced with sophisticated deception. The reliance on visual cues (such as logos) can be exploited by malicious actors.
  • Speed vs. Accuracy: Platforms must strike a balance between rapid takedowns and accurate content assessment. Delayed action allows harmful content to spread, while hasty removal risks false positives.
  • User Vigilance: Users play a crucial role in reporting suspicious content. However, the resemblance to legitimate news channels makes it difficult for them to discern fake from real.

Why is this harmful for Facebook, X users, and YouTube users?

This new method of creating phony social media channels impersonating renowned news broadcasters such as CNN and Al Jazeera shows how the terrorist organization's approach to evading content moderation on social media platforms has evolved.

Unsuspecting users may be influenced by these "honeypot" efforts, which, according to the research, will only grow more sophisticated, making it even more difficult to restrict the spread of terrorist content online.

Meta Addresses AI Chatbot's YouTube Training Data Assertion

 


Artificial intelligence systems like ChatGPT will eventually run out of the tens of trillions of words people have written and shared on the web, the text that keeps making them smarter. In a study released on Thursday, researchers at Epoch AI estimate that tech companies will exhaust the available public training data for AI language models sometime between 2026 and 2032.

Meta's AI chatbot is more forthcoming about its training data than Meta itself. Meta, formerly known as Facebook, has been pushing into generative AI since last year, aiming to keep up with the public interest sparked by the launch of OpenAI's ChatGPT in late 2022. In April of this year, Meta AI's chat and image-generation features were expanded across all of its apps, including Instagram and WhatsApp. However, little about how Meta AI was trained has been disclosed to date.

Business Insider asked Meta AI a series of questions about the data it was trained on and how Meta obtained that data. The chatbot said it had been trained on a large dataset of transcriptions of YouTube videos. It also said Meta runs its own web scraper bot, referred to as "MSAE" (Meta Scraping and Extraction), which scrapes huge amounts of information from the web for AI training. Meta had never previously disclosed this scraper.

YouTube's terms of service do not allow data collection by bots and scrapers, nor the use of such data without permission; OpenAI has recently come under scrutiny for purportedly doing exactly that. A Meta spokesperson, asked about Meta AI's answers regarding its scraper and training data, suggested that the chatbot's responses may be inaccurate.

The spokesperson explained that generative AI requires large amounts of data to train effectively, so data from a wide variety of sources is utilised, including publicly available information online as well as annotated data. Meta AI said that, as part of its initial training, 3.7 million YouTube videos had been transcribed by a third party, and the chatbot confirmed that Meta did not use its scraper bot to scrape YouTube videos directly. Asked further about its YouTube training data, Meta AI replied that another dataset of transcriptions from 6 million YouTube videos, also compiled by a third party, was part of its training set.

Besides the 1.5 million YouTube transcriptions and subtitles already in its training dataset, the company reportedly added two more sets of YouTube subtitles, one with 2.5 million subtitles and another with 1.5 million, as well as transcriptions from 2,500 YouTube videos showcasing TED Talks. According to Meta AI, all of these datasets were compiled by third parties. The chatbot said the company takes steps to avoid gathering copyrighted information, yet it also indicated that Meta scrapes the web on an ongoing basis in some form.

Several queries returned results citing sources including NBC News, CNN, and The Financial Times. In most cases, Meta AI does not cite sources for its responses unless specifically asked. According to BI's reporting, a new paid deal could give Meta access to more AI training data, which could improve Meta AI's results in the future. Meta AI also said it abides by robots.txt, a protocol that website owners can use to ostensibly prevent bots from scraping their pages for AI training.
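The robots.txt mechanism mentioned here is easy to demonstrate with Python's standard library; the GPTBot rule below is a made-up example of a site policy, not any real site's file:

```python
from urllib.robotparser import RobotFileParser

# A made-up robots.txt that blocks one AI crawler from /private/
rules = [
    "User-agent: GPTBot",
    "Disallow: /private/",
    "",
    "User-agent: *",
    "Allow: /",
]

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("GPTBot", "https://example.com/private/data"))  # False
print(rp.can_fetch("GPTBot", "https://example.com/index.html"))    # True
```

Compliance is voluntary: the file expresses the site owner's wishes, and nothing technically stops a scraper that chooses to ignore it, which is why "abides by robots.txt" is a policy claim rather than a guarantee.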

Meta built the chatbot on its large language model, Llama. Although Llama 3 was released in April, around the time Meta AI was expanded, Meta has yet to release an accompanying paper or disclose the model's training data. A Meta blog post said the huge set of 15 trillion tokens used to train Llama 3 came from "publicly available sources." Web scrapers such as OpenAI's GPTBot, Google's GoogleBot, and Common Crawl's CCBot can effectively extract almost all content accessible on the web.

The content is stored in massive datasets fed into LLMs and often regurgitated by generative AI tools like ChatGPT. Several ongoing lawsuits concern owned and copyrighted content being freely absorbed by the world's biggest tech companies. The US Copyright Office is expected to release new guidance on acceptable uses for AI companies later this year. 


Are YouTube Game Cracks Hiding Malware?


Recently, cybersecurity researchers have unearthed a disturbing trend: threat actors are exploiting YouTube to distribute malware disguised as video game cracks. This alarming course of action poses a significant risk to unsuspecting users, especially those seeking free software downloads.

According to findings by Proofpoint Emerging Threats, cybercriminals are leveraging popular video-sharing platforms to target home users, who often lack the robust defences of corporate networks. The plan of action involves creating deceptive videos offering free access to software and video game enhancements, but the links provided lead to malicious content.

The malware, including variants such as Vidar, StealC, and Lumma Stealer, is camouflaged within seemingly innocuous downloads, enticing users with promises of game cheats or software upgrades. What's particularly troubling is the deliberate targeting of younger audiences, with malicious content masquerading as enhancements for games popular among children.

The investigation uncovered several compromised YouTube accounts, with previously dormant channels suddenly flooded with English-language videos promoting cracked software. These videos, uploaded within a short timeframe, contained links to malware-infected files hosted on platforms like MediaFire and Discord.

One example highlighted by researchers featured a video claiming to enhance a popular game, accompanied by a MediaFire link leading to a password-protected file harbouring Vidar Stealer malware. Similarly, other videos promised clean files but included instructions on disabling antivirus software, further endangering unsuspecting users.

Moreover, cybercriminals exploited the identity of "Empress," a well-known entity within software piracy communities, to disseminate malware disguised as cracked game content. Visual cues provided within the videos streamlined the process of installing Vidar Stealer malware, presenting it as authentic game modifications.

Analysis of the malware revealed a common tactic of bloating file sizes to evade detection, with payloads expanding to approximately 800 MB. Furthermore, the malware utilised social media platforms like Telegram and Discord for command and control (C2) activities, complicating detection efforts.
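The file-bloating trick is cheap to spot in triage: padded binaries typically end in long runs of a single filler byte. Below is a rough heuristic sketch; the probe size and threshold are assumptions, not a published detection rule:

```python
def tail_padding_ratio(data: bytes, probe: int = 4096) -> float:
    """Fraction of the file's tail made up of its most common byte.

    Heavily padded binaries (e.g. stealers inflated to ~800 MB) tend
    to end in megabytes of one filler byte, so a ratio near 1.0 on
    the tail is a cheap red flag. Probe size/threshold are heuristics.
    """
    tail = data[-probe:] if len(data) > probe else data
    if not tail:
        return 0.0
    most_common = max(set(tail), key=tail.count)
    return tail.count(most_common) / len(tail)

# A toy "padded" blob: 1 KB of varied content, then 1 MB of zero padding
padded = bytes(range(256)) * 4 + b"\x00" * (1024 * 1024)
print(tail_padding_ratio(padded) > 0.99)  # True
```

Real tooling would also compare section entropy and the on-disk size against the PE headers, since attackers can pad with pseudo-random bytes instead of a constant filler.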

The research again underscores the need for heightened awareness among users, particularly regarding online content promising free software or game cheats. While YouTube has been proactive in removing reported malicious accounts, the threat remains pervasive, targeting home users vulnerable to deceptive tactics.

As cybercriminals continue to refine their methods, it is imperative for individuals to exercise caution when downloading software from unverified sources. Staying informed about emerging threats and adopting cybersecurity best practices can reduce the risk of falling victim to such schemes.


Dangerous Trends: YouTube Stream-Jacking Attacks Reach Alarming Levels

 


Stream-jacking attacks against major streaming platforms have been rising sharply. Cybercriminals aim to compromise high-profile accounts, especially those with large follower counts, so that their deceptive messages reach a wide audience.

Bitdefender Labs researchers have been actively monitoring stream-jacking attacks targeting high-profile YouTube accounts since October 2023, used to conduct various crypto-doubling scams. In a new report, Bitdefender reveals a sudden rise in stream-jacking attacks against popular streaming services such as YouTube, where malicious links or lures are pushed to users.

While most of these attacks result in a complete account takeover, in some instances the attackers manage the account without fully locking out its owner. On average, YouTube creators keep about 55% of the ad revenue from their channels: for every $100 an advertiser pays Google, the creator receives $55.

Accordingly, YouTube creators earn an average of about $0.018 per view, which equates to roughly $18 per 1,000 views. On that basis, the top ten YouTubers collectively earned around $300 million in 2021, an increase of 40% over the previous year.
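Note that the per-view figure consistent with $18 per 1,000 views is $0.018, not $0.18. A quick back-of-the-envelope check (the rates are the article's illustrative figures, not official YouTube numbers):

```python
# Sanity-check the creator-revenue arithmetic from the figures above
# (illustrative rates, not official YouTube payout numbers).
revenue_share = 0.55            # creator keeps 55% of advertiser spend
advertiser_spend = 100.0
creator_cut = advertiser_spend * revenue_share   # $55 per $100 spent

per_view = 0.018                # ≈ $0.018 per monetized view
per_thousand_views = per_view * 1000             # ≈ $18 per 1,000 views

print(creator_cut, round(per_thousand_views, 2))
```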

As creators earn more, they make more videos, and as with any success and high payouts, it is not uncommon for them to attract unwanted attention. Despite its security features and privacy policies, YouTube takes a distinctive approach to dealing with scams.

As on other social media, scammers commonly target victims on YouTube with fake products. Followers are lured to look-alike channels with promises of rewards, then scammed. According to the researchers, cybercriminals conducting stream-jacking attacks on YouTube use accounts with large follower counts to spread fraudulent messages.

The researchers also found that cybercriminals commonly push suspicious pop-ups promoting the same content, with malicious intent, into end-users' feeds. The campaign primarily targets YouTube channels with large followings, since these can easily be monetized by demanding a ransom from the channel owner or by distributing malware to subscribers and viewers.

Many of the hijacked channels belong to top-rated brands, including Tesla, with millions of followers and millions of views on their videos. Most often, the content the attackers publish relates to Tesla or other Elon Musk ventures (usually via deepfakes) and includes QR codes that lead to phishing or fraudulent websites.

The criminals use a variety of tricks, such as restricting comments to users who have been subscribed to the channel for more than ten or fifteen years (reducing the risk that viewers aware of the scam will alert others) and putting their websites behind Cloudflare (making automated analysis difficult).

If YouTube detects that a channel is operating maliciously, the channel is permanently deleted, and all videos, playlists, views, subscribers, monetisation, and so on are lost, although this can be avoided if the channel owner contacts YouTube in time. Several scams have taken advantage of the recent news coverage of the Bitcoin ETF.

MicroStrategy and its former CEO, Michael Saylor, have been the subject of fraudulent broadcasts since late December 2023, with titles referencing the Bitcoin ETF's potential success to build credibility with viewers.

These broadcasts often feature looped deepfakes of Michael Saylor encouraging people to participate in fake giveaways. To enhance credibility, the compromised channels use official MicroStrategy emblems, banners, and playlists, and in some cases even link to the official channels.

The video thumbnails are identical across all instances of these broadcasts, regardless of where they are accessed. The channel names undergo many alterations post-takeover, ranging from "MicroStrategy US" and "MicroStrategy Live" to "Micro Strategy" and others, usually with subtle variations such as trailing spaces or parentheses.