
Amazon and Audible Face Scrutiny Amid Questionable Content Surge

 


Amazon's digital media services, Amazon Music and Audible, have been inundated with bogus listings that try to trick customers into clicking through to dubious "forex trading" sites, Telegram channels, and suspicious links claiming to offer pirated software. Abusing playlists and podcasts to promote pirated software, video game cheat codes, spam links, and "warez" websites is becoming increasingly common on Spotify as well. 

To push Spotify web player results into search engines such as Google, threat actors inject targeted keywords and links into the titles and descriptions of playlists and podcasts, boosting SEO for their dubious online properties. These listings use playlist names, podcast descriptions, and bogus "episodes" to encourage listeners to visit external links that may lead to a security breach. 

A significant number of threat actors also exploit Google's Looker Studio (formerly Google Data Studio) to boost the search engine ranking of illicit websites that promote spam, torrents, and pirated content. According to BleepingComputer, this SEO poisoning attack abuses Google's datastudio.google.com subdomain, which lends apparent credibility to the malicious content. 

Aside from mass email spam campaigns, spammers are also using Audible podcasts to advertise their illicit activities. No digital platform that is open to the public is immune to spam. What makes the Spotify and Amazon cases interesting is that one would instinctively assume the overhead of podcasting and digital music distribution would deter spammers, pushing them toward lower-hanging fruit such as spammy social media posts or YouTube videos with misleading descriptions. 

The most recent instance of this was a Spotify playlist entitled "Sony Vegas Pro 13 Crack...", which seemed to drive traffic to several "free" software sites listed in the playlist's title and description. Karol Paciorek, a cybersecurity enthusiast who spotted the playlist, noted that cybercriminals exploit Spotify for malware distribution because its tracks and pages are easily indexed by search engines, making it an attractive place to plant malicious links. 

Looker Studio (formerly Google Data Studio) is Google's web-based business intelligence tool, allowing users to create customizable reports and dashboards to visualize and analyze their data. It has been used legitimately in the past, for example, to track and visualize the download counts of open source packages over a four-week period. There are many legitimate business cases for Looker Studio, but like any other web service, it can be misused by malicious actors looking to host questionable content or manipulate search engine results for illicit URLs. 

Recent SEO poisoning campaigns have targeted keywords related to the U.S. midterm election, as well as pushing malicious Zoom, TeamViewer, and Visual Studio installers. Ahead of this article's publication, BleepingComputer reached out to Google to ask how it plans to address the abuse.

Firstory, a service launched in 2019, enables podcasters to distribute their shows across the globe and connect with audiences. It publishes podcasts to Spotify, but it acknowledges that spam is an ongoing issue it is actively trying to curtail. 

Spam accounts and misleading content remain persistent challenges for digital platforms, according to Stanley Yu, co-founder of Firstory, in a statement provided to BleepingComputer. Yu emphasized that addressing these issues is an ongoing priority for the company. To tackle the growing threat of unauthorized and spammy content, Firstory has implemented a multifaceted approach. This includes active collaboration with major streaming platforms to detect and remove infringing material swiftly. 

The company has also developed and employed advanced technologies to scan podcast titles and show notes for specific keywords associated with spam, ensuring early identification and mitigation of potential violations. Furthermore, Firstory proactively monitors and blocks suspicious email addresses commonly used by malicious actors to infiltrate and disrupt digital ecosystems. By integrating technology-driven solutions with strategic partnerships, Firstory aims to set a higher standard for content integrity across platforms. 

The company’s commitment reflects a broader industry imperative to protect users and maintain trust in an ever-expanding digital landscape. As digital platforms evolve, sustained vigilance and innovation will be essential to counter emerging threats and foster a safer, more reliable online environment.

YouTube: A Prime Target for Cybercriminals

As one of today's most popular social media platforms, YouTube frequently attracts cybercriminals who exploit it to run scams and distribute malware. These schemes often involve videos masquerading as tutorials for popular software or ads for cryptocurrency giveaways. In other cases, fraudsters embed malicious links in video descriptions or comments, making them appear as legitimate resources related to the video's content.

The theft of popular YouTube channels elevates these fraudulent campaigns, allowing cybercriminals to reach a vast audience of regular YouTube users. These stolen channels are repurposed to spread various scams and info-stealing malware, often through links to pirated and malware-infected software, movies, and game cheats. For YouTubers, losing access to their accounts can be distressing, leading to significant income loss and lasting reputational damage.

Most YouTube channel takeovers begin with phishing. Attackers create fake websites and send emails that appear to be from YouTube or Google, tricking targets into revealing their login credentials. Often, these emails promise sponsorship or collaboration deals, including attachments or links to supposed terms and conditions.

If accounts lack two-factor authentication (2FA) or if attackers circumvent this extra security measure, the threat becomes even more severe. Since late 2021, YouTube content creators have been required to use 2FA on the Google account associated with their channel. However, in some cases, such as the breach of Linus Tech Tips, attackers bypassed passwords and 2FA codes by stealing session cookies from victims' browsers, allowing them to sidestep additional security checks.

Attackers also use lists of usernames and passwords from past data breaches to hack into existing accounts, exploiting the fact that many people reuse passwords across different sites. Additionally, brute-force attacks, where automated tools try numerous password combinations, can be effective, especially if users have weak or common passwords and neglect 2FA.
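The scale that makes brute-force attacks effective against weak passwords is easy to quantify. A minimal sketch (the guess rate is an illustrative assumption, roughly in line with offline GPU cracking, not a figure from any report cited here):

```python
def seconds_to_crack(charset_size: int, length: int, guesses_per_second: float) -> float:
    """Worst-case time to exhaust the keyspace of a random password."""
    keyspace = charset_size ** length
    return keyspace / guesses_per_second

RATE = 1e10  # illustrative guess rate for an offline attack

# A short lowercase-only password vs. a long mixed-character one.
short_weak = seconds_to_crack(26, 6, RATE)    # well under a second
long_strong = seconds_to_crack(94, 12, RATE)  # astronomically long

print(f"6 lowercase letters: {short_weak:.3f} seconds")
print(f"12 printable chars:  {long_strong / 3.15e7:,.0f} years")
```

The gap of many orders of magnitude between the two cases is why the advice below stresses long, unique passphrases, and why 2FA matters even when the password is strong.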

Recent Trends and Malware

The AhnLab Security Intelligence Center (ASEC) recently reported an increase in hijacked YouTube channels, including one with 800,000 subscribers, used to distribute malware like RedLine Stealer, Vidar, and Lumma Stealer. According to the ESET Threat Report H2 2023, Lumma Stealer particularly surged in the latter half of last year, targeting crypto wallets, login credentials, and 2FA browser extensions. As noted in the ESET Threat Report H1 2024, these tools remain significant threats, often posing as game cheats or software cracks on YouTube.

In some cases, cybercriminals hijack Google accounts and quickly create and post thousands of videos distributing info-stealing malware. Victims may end up with compromised devices that further jeopardize their accounts on other platforms like Instagram, Facebook, X, Twitch, and Steam.

Staying Safe on YouTube

To protect yourself on YouTube, follow these tips:

  • Use Strong and Unique Login Credentials: Create robust passwords or passphrases and avoid reusing them across multiple sites. Consider using passkeys for added security.
  • Employ Strong 2FA: Use 2FA not just on your Google account, but also on all your accounts. Opt for authentication apps or hardware security keys over SMS-based methods.
  • Be Cautious with Emails and Links: Be wary of emails or messages claiming to be from YouTube or Google, especially if they request personal information or account credentials. Verify the sender's email address and avoid clicking on suspicious links or downloading unknown attachments.
  • Keep Software Updated: Ensure your operating system, browser, and other software are up-to-date to protect against known vulnerabilities.
  • Monitor Account Activity: Regularly check your account for any suspicious actions or login attempts. If you suspect your channel has been compromised, follow Google's guidance.
  • Stay Informed: Keep abreast of the latest cyber threats and scams targeting you online, including on YouTube, to better avoid falling victim.
  • Report and Block Suspicious Content: Report any suspicious or harmful content, comments, links, or users to YouTube and block such users to prevent further contact.
  • Secure Your Devices: Use multi-layered security software across your devices to guard against various threats.

Terrorist Tactics: How ISIS Duped Viewers with Fake CNN and Al Jazeera Channels


The terrorist organization ISIS allegedly launched fake channels impersonating CNN and Al Jazeera on YouTube and Facebook. Through their feeds, the channels claimed to be global news platforms. The goal was to lend credibility to ISIS propaganda and ease its spread.

According to research by the Institute for Strategic Dialogue, the operatives managed two YouTube channels as well as two accounts on Facebook and X (formerly Twitter) under the outlet name 'War and Media'.

The campaign went live in March of this year, using false profiles that resembled reputable channels on Facebook and YouTube to spread propaganda. The videos remained live on YouTube for more than a month; it is unclear when they were taken down from Facebook.

The Deceptive Channels

ISIS operatives set up multiple fake channels on YouTube, each mimicking the branding and style of reputable news outlets. These channels featured professionally edited videos, complete with logos and graphics reminiscent of CNN and Al Jazeera. The content ranged from news updates to opinion pieces, all designed to lend an air of credibility.

Tactics and Objectives

1. Impersonation: By posing as established media organizations, ISIS aimed to deceive viewers into believing that the content was authentic. Unsuspecting users might stumble upon these channels while searching for legitimate news, inadvertently consuming extremist propaganda.

2. Content Variety: The fake channels covered various topics related to ISIS’s global expansion. Videos included recruitment messages, calls for violence, and glorification of terrorist acts. The diversity of content allowed them to reach a broader audience.

3. Evading Moderation: YouTube’s content moderation algorithms struggled to detect these fake channels. The professional production quality and branding made it challenging to distinguish them from genuine news sources. As a result, the channels remained active for over a month before being taken down.

Challenges for Social Media Platforms

  • Algorithmic Blind Spots: Algorithms designed to identify extremist content often fail when faced with sophisticated deception. The reliance on visual cues (such as logos) can be exploited by malicious actors.
  • Speed vs. Accuracy: Platforms must strike a balance between rapid takedowns and accurate content assessment. Delayed action allows harmful content to spread, while hasty removal risks false positives.
  • User Vigilance: Users play a crucial role in reporting suspicious content. However, the resemblance to legitimate news channels makes it difficult for them to discern fake from real.

Why is this harmful for Facebook, X, and YouTube users?

This new method of creating phony social media channels for renowned news broadcasters such as CNN and Al Jazeera shows how far the terrorist organization's tactics for evading content moderation on social media platforms have developed.

Unsuspecting users may be influenced by these "honeypot" efforts, which, the research warns, will become more sophisticated, making it even more difficult to restrict the spread of terrorist content online.

Meta Addresses AI Chatbot's YouTube Training Data Assertion

 


Artificial intelligence systems like ChatGPT will eventually run out of the tens of trillions of words people have written and shared on the web, the data that keeps making them smarter. In a study released on Thursday by Epoch AI, researchers estimate that tech companies will exhaust the available public training data for AI language models sometime between 2026 and 2032. 

Meta's AI chatbot turns out to be more forthcoming about its training data than Meta itself. Meta, formerly known as Facebook, has been pushing into generative AI since last year, aiming to keep up with the public interest sparked by the launch of OpenAI's ChatGPT in late 2022. In April of this year, Meta AI's chat and image generation features were expanded to all of its apps, including Instagram and WhatsApp. However, little information about how Meta AI was trained has been released to date. 

Business Insider asked Meta AI a series of questions about the data it was trained on and how Meta obtained that data. The chatbot said it had been trained on a large dataset of transcriptions from YouTube videos, and that Meta operates its own web scraper bot, referred to as "MSAE" (Meta Scraping and Extraction), which scrapes huge amounts of information off the web for AI training. Meta had never previously disclosed this scraper. 

YouTube's terms of service prohibit collecting its data with bots and scrapers, and prohibit using such data without YouTube's permission; OpenAI has recently come under scrutiny for purportedly doing exactly that. A Meta spokesperson said Meta AI had given correct answers regarding its scraper and training data, while cautioning that the chatbot can also be wrong. 

The spokesperson explained that generative AI requires a large amount of data to train effectively, so data from a wide variety of sources is used, including publicly available information online as well as annotated data. Meta AI said that transcriptions of 3.7 million YouTube videos, compiled by a third party, were part of its initial training, and the chatbot confirmed it did not use Meta's own scraper bot to scrape YouTube videos directly. Asked further about its YouTube training data, Meta AI said another third-party dataset with transcriptions of 6 million YouTube videos was also part of its training set.

Besides the YouTube transcriptions already mentioned, the company also added two more sets of YouTube subtitles to its training data, one with 2.5 million subtitles and another with 1.5 million, as well as transcriptions of 2,500 YouTube videos showcasing TED Talks. Meta AI said all of these datasets were collected and compiled by third parties. According to the chatbot, Meta takes steps to avoid gathering copyrighted information. Even so, Meta appears to scrape the web in some ongoing manner. 

In response to several queries, Meta AI displayed sources including NBC News, CNN, and The Financial Times, though in most cases it does not cite sources unless specifically asked. According to BI's reporting, a new paid deal could give Meta AI access to more training data, which could improve its results in the future. Meta AI also said it abides by the robots.txt protocol, a set of guidelines that website owners can use to ostensibly prevent bots from scraping pages for AI training. 
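The robots.txt protocol mentioned above is purely advisory: a site owner publishes rules, and a well-behaved crawler checks them before fetching, but compliance is voluntary. A minimal sketch using Python's standard urllib.robotparser, with hypothetical rules a site might publish to opt out of AI scrapers:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a site owner might publish to block known AI crawlers.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler checks before fetching; nothing technically enforces this.
print(parser.can_fetch("GPTBot", "https://example.com/article"))       # blocked
print(parser.can_fetch("SomeBrowser", "https://example.com/article"))  # allowed
```

This is why "abides by robots.txt" is a policy claim rather than a technical guarantee: the file only constrains crawlers that choose to honor it.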

Meta built the chatbot on its large language model, Llama. Although Llama 3 was released in April, around the time Meta AI was expanded, Meta has yet to release an accompanying paper or disclose the model's training data. A Meta blog post said only that the 15 trillion tokens used to train Llama 3 came from "publicly available sources." Web scrapers such as OpenAI's GPTBot, Google's GoogleBot, and Common Crawl's CCBot can effectively extract almost all content accessible on the web. 

The content is stored in massive datasets fed into LLMs and often regurgitated by generative AI tools like ChatGPT. Several ongoing lawsuits concern owned and copyrighted content being freely absorbed by the world's biggest tech companies. The US Copyright Office is expected to release new guidance on acceptable uses for AI companies later this year. 


Are YouTube Game Cracks Hiding Malware?


Recently, cybersecurity researchers have unearthed a disturbing trend: threat actors are exploiting YouTube to distribute malware disguised as video game cracks. This alarming course of action poses a significant risk to unsuspecting users, especially those seeking free software downloads.

According to findings by Proofpoint Emerging Threats, cybercriminals are leveraging popular video-sharing platforms to target home users, who often lack the robust defences of corporate networks. The plan of action involves creating deceptive videos offering free access to software and video game enhancements, but the links provided lead to malicious content.

The malware, including variants such as Vidar, StealC, and Lumma Stealer, is camouflaged within seemingly innocuous downloads, enticing users with promises of game cheats or software upgrades. What's particularly troubling is the deliberate targeting of younger audiences, with malicious content masquerading as enhancements for games popular among children.

The investigation uncovered several compromised YouTube accounts, with previously dormant channels suddenly flooded with English-language videos promoting cracked software. These videos, uploaded within a short timeframe, contained links to malware-infected files hosted on platforms like MediaFire and Discord.

One example highlighted by researchers featured a video claiming to enhance a popular game, accompanied by a MediaFire link leading to a password-protected file harbouring Vidar Stealer malware. Similarly, other videos promised clean files but included instructions on disabling antivirus software, further endangering unsuspecting users.

Moreover, cybercriminals exploited the identity of "Empress," a well-known entity within software piracy communities, to disseminate malware disguised as cracked game content. Visual cues provided within the videos streamlined the process of installing Vidar Stealer malware, presenting it as authentic game modifications.

Analysis of the malware revealed a common tactic of bloating file sizes to evade detection, with payloads expanding to approximately 800 MB. Furthermore, the malware utilised social media platforms like Telegram and Discord for command and control (C2) activities, complicating detection efforts.

Research into the matter has again underscored the need for heightened awareness among users, particularly regarding suspicious online content promising free software or game cheats. While YouTube has been proactive in removing reported malicious accounts, the threat remains pervasive, targeting non-enterprise users vulnerable to deceptive tactics.

As cybercriminals continue to refine their methods, it's imperative for individuals to exercise caution when downloading software from unverified sources. Staying informed about emerging threats and adopting cybersecurity best practices can reduce the risk of falling victim to such schemes.


Dangerous Trends: YouTube Stream-Jacking Attacks Reach Alarming Levels

 


Stream-jacking attacks against major streaming platforms are on the rise. Cybercriminals aim to compromise high-profile accounts, especially those with large follower counts, so that their deceptive messages reach as large an audience as possible. 

Bitdefender Labs researchers have been actively monitoring stream-jacking attacks targeting high-profile YouTube accounts since October 2023, used to run various crypto-doubling scams. A new Bitdefender report reveals a sudden rise in stream-jacking attacks on popular streaming services such as YouTube, in which malicious links or lures are pushed to users. 

While most of these attacks result in a complete account takeover, in some instances the attackers gain only partial control of the account. YouTube creators receive about 55% of the ad revenue their channels generate: for every $100 an advertiser pays Google, the creator is paid $55. 

Accordingly, YouTube creators earn an average of about $0.018 per view, which equates to roughly $18 per 1,000 views. The top ten YouTubers collectively earned around $300 million in 2021, a 40% increase over the previous year. 

As creators earn more, they make more videos, and as with any success and high payouts, that attracts unwanted attention. Despite its security features and privacy policies, YouTube takes a notably different approach to dealing with scams. 

It is not uncommon for scammers to target victims with fake products on YouTube, just as they do on social media: followers are lured to mimicked channels with promised rewards and then scammed. According to the researchers, cybercriminals carrying out stream-jacking attacks on YouTube use accounts with large numbers of followers to spread fraudulent messages to users. 

The researchers also found cybercriminals pushing suspicious pop-ups into users' feeds that mimic legitimate content but carry malicious intent. The campaign is primarily aimed at YouTube channels with large followings, which cybercriminals can easily monetize by demanding a ransom from the channel owner or distributing malware to subscribers and viewers.

Many of the targeted channels belong to top brands, including Tesla, with millions of followers and millions of views on their videos. The content the attackers publish is most often related to Tesla or other Elon Musk ventures (usually via deepfakes) and includes QR codes that lead to phishing or otherwise fraudulent websites. 

The criminals use a variety of tricks, such as restricting comments to users who have been subscribed to the channel for more than ten or fifteen years (reducing the risk that viewers aware of the scam will alert others) and putting their websites behind Cloudflare (making automated analysis difficult). 

In addition, if YouTube detects that a channel is operating maliciously, the channel is permanently deleted, meaning all videos, playlists, views, subscribers, monetisation, and so on are lost, though this can sometimes be reversed if the channel owner contacts YouTube. Several scams have also taken advantage of the recent news coverage of the Bitcoin ETF. 

MicroStrategy and its former CEO, Michael Saylor, have been the subject of fraudulent broadcasts since late December 2023, exploiting title references to the Bitcoin ETF's potential success to build credibility with viewers. 

These broadcasts often feature looped deepfakes of Michael Saylor encouraging people to participate in fake giveaways. To enhance their credibility, the compromised channels use official MicroStrategy emblems, banners, and playlists, and in some cases even link to the official channels.

The video thumbnails are identical across all instances of these broadcasts, regardless of where they are accessed. Post-takeover, the channel names have gone through many variations, such as MicroStrategy US, MicroStrategy Live, and Micro Strategy, usually with subtle alterations like trailing spaces or parentheses.

Trading Tomorrow's Technology for Today's Privacy: The AI Conundrum in 2024

 


Artificial Intelligence (AI) continually absorbs and redistributes humanity's collective knowledge through machine learning algorithms, and it is fast becoming all-pervasive. It is increasingly clear that as the technology advances, so do concerns about its approach to data management, or the lack thereof. As 2024 begins, certain developments will have long-lasting impacts. 

Google's recent integration of Bard, its chat-based AI tool, into a host of other Google apps and services is a good example of how generative AI is moving directly into consumer life through text, images, and voice. 

A super-charged version of Google Assistant, Bard connects to everything from Gmail, Docs, and Drive to Google Maps, YouTube, Google Flights, and hotel search. In a conversational, natural-language mode, Bard can filter enormous amounts of online data while providing personalized responses to individual users, in an unprecedented way. 

It can create shopping lists, summarize emails, and book trips: all the things a personal assistant would do, for those without one. And as 2023 showed, not everything one sees or hears on the internet is real, whether in politics, movies, or even wars. 

As artificial intelligence technology continues to advance rapidly, the advent of deepfakes has raised concern in India about their potential to influence electoral politics, especially during the Lok Sabha elections planned for next year. 

A deepfake uses artificial intelligence to create video or audio of people doing or saying things they never did, spreading misinformation and damaging reputations. The sharp rise in deepfakes has caused widespread concern in the country. 

In the wake of the massive leap in public consciousness about the importance of generative AI that occurred in 2023, individuals and businesses will be putting artificial intelligence at the centre of even more decisions in the coming year. 

Artificial intelligence is no longer a new concept. In 2023, ChatGPT, MidJourney, Google Bard, corporate chatbots, and other artificial intelligence tools took the internet by storm. Many have commended their capabilities, while others have expressed concerns about plagiarism and the threat they pose to certain careers, including content creation roles in the marketing industry. 

Whatever one thinks of artificial intelligence, there is no denying it has dramatically changed the privacy landscape. Most people will agree that AI tools are trained on data collected from both their creators and their users. 

Maintaining transparency about how this data is handled is difficult, precisely because it is hard to see how it is being handled. Users may also forget that their conversations with AI are not as private as text conversations with other humans, and they may inadvertently disclose sensitive data along the way. 

The GDPR already protects users from fully automated decisions that determine the course of their lives; for example, an AI cannot deny a bank loan based solely on its analysis of someone's financial situation. Proposed legislation in many parts of the world will bring more regulation of artificial intelligence in 2024. 

Additionally, AI developers will likely continue to refine their tools into (hopefully) more privacy-conscious products as the laws governing them become more complex. Zamir anticipates that Bard Extensions will become even more personalized and integrated with the online shopping experience, such as auto-filling checkout forms, tracking shipments, and automatically comparing prices. 

All of that entails some risk, according to him: unauthorized access to personal and financial information during automated form-filling, malicious interception of real-time tracking information, and even manipulated data in price comparisons. 

In 2024, the tapestry of artificial intelligence will undergo a major transformation, stirring debate on privacy and security. From Google's Bard to deepfake anxieties, users riding the wave of AI integration should navigate this technological odyssey with vigilant minds, not blind to its implications. The future of AI must be guided by a moral compass, one that steers innovation and ensures that AI responsibly enriches lives.

Google's Ad Blocker Crackdown Sparks Controversy

 

Google's recent intensified crackdown on ad blockers has raised concerns among consumers and proponents of digital rights. The move reveals a multifaceted effort involving purposeful browser slowdowns and strict actions on YouTube, as reported by multiple sources.

According to Channel News, YouTube's ad blocker crackdown has reached new heights. Users attempting to bypass ads on the platform are facing increased resistance, with reports of ad blockers becoming less effective. This raises questions about the future of ad blocking on one of the world's most popular video-sharing platforms.

Google has taken a controversial step by intentionally slowing down browsers to penalize users employing ad blockers. This aggressive tactic, designed to discourage the use of ad-blocking extensions, has sparked outrage among users who rely on these tools for a smoother online experience.

The Register delves deeper into Google's strategy, outlining the technical aspects of how the search giant is implementing browser slowdowns. The article suggests that this move is not only an attempt to protect its advertising revenue but also a way to assert control over the online advertising ecosystem.

While Google argues that these measures are necessary to maintain a fair and sustainable digital advertising landscape, critics argue that such actions limit user freedom and choice. The concern is not merely about the impact on ad-blocker users; it also raises questions about the broader implications for online privacy and the control that tech giants exert over users' online experiences.

As the internet becomes increasingly integral to daily life, the balance between user empowerment and the interests of digital platforms is a delicate one. Google's recent actions are sure to reignite the debate on the ethics of ad blocking and the extent to which tech companies can dictate user behavior.

Google's strong action against ad blockers serves as a reminder of the continuous conflict between user autonomy and the profit-driven objectives of digital titans. These activities have consequences that go beyond the advertising industry and spark a broader conversation about the future of online privacy and the power corporations have over the digital environment.

Allegations of Spying in the EU Hit YouTube as it Targets Ad Blockers

 

YouTube's widespread use of ads, many of which are unavoidable, has raised concerns among some users. While some accept ads as a necessary part of the free video streaming experience, privacy advocate Alexander Hanff has taken issue with YouTube and its parent company, Google, over their ad practices. Hanff has filed a civil complaint with the Irish Data Protection Commission, alleging that YouTube's use of JavaScript code to detect and disable ad blockers violates data protection regulations.

Additionally, Hanff has filed a similar complaint against Meta, the company behind Instagram and Facebook, claiming that Meta's collection of personal data without explicit consent is illegal. Meta is accused of using surveillance technology to track user behavior and tailoring ads based on this information, a practice that Hanff believes violates Irish law.

These complaints come amid a growing focus on data privacy and security in the EU, which has implemented stricter regulations for Big Tech companies. In response, Google has expanded its Ads Transparency Center to provide more details on how advertisers target consumers and how ads are displayed. 

The company has also established a separate Transparency Center to showcase its safety policy development and enforcement processes. Google has committed to continued collaboration with the European Commission to ensure compliance with regulations.

Hanff's complaints could be the first of many against Google, Meta, and other tech giants, as legislators and the public alike express increasing concerns over market competition and data privacy. 

If additional regulations are implemented, these companies will have to adapt their practices accordingly. The potential impact on their profits remains to be seen, but compliance could ultimately prove less costly than facing financial penalties.

YouTube Faces Scrutiny from EU Regulators Over Its Ad Blocker Crackdown


Alexander Hanff, a privacy activist, has complained to the European Commission, claiming that YouTube's new ad blocker detection violates European law. 

In response to Hanff's claims, a German Pirate Party MEP asked the European Commission for a legal position on two key issues: whether this type of detection is "absolutely necessary to provide a service such as YouTube," and whether the "protection of information stored on the device (Article 5(3) ePR)" also covers information about whether the user's device hides or blocks certain page elements, or whether ad-blocking software is used on the device.

YouTube’s New Policy 

Recently, YouTube began requiring users to stop using ad blockers; those who do not will receive notifications that may prevent them from accessing material on the platform. The new rules, which YouTube says are intended to increase revenue for creators, will apply in most countries.

However, the reasons the company provides are unlikely to hold up in Europe. Privacy experts have noted that YouTube's demand that free users allow advertisements runs afoul of EU legislation. Because the platform can now identify users who have installed ad blockers to avoid seeing advertisements, YouTube has been accused of spying on its users.

EU regulators have already warned tech giants such as Google and Apple. YouTube could be the next platform to face lengthy legal battles with the authorities as it attempts to defend the methods used to detect ad blockers and compel free viewers to watch advertisements regularly between videos. As a result of these developments, many users have uninstalled ad blockers from their browsers.

According to experts, YouTube is violating not only digital laws but also certain fundamental consumer rights. If the platform is found to be in violation with its anti-ad-blocker rules, it would likely have to change its position in the region, something Meta was recently forced to do with Instagram and Facebook.

The social networking giant has decided that users of Facebook and Instagram who do not want to see ads while browsing must sign up for monthly subscriptions, which make the platforms ad-free.  

Taming Your Android: A Step-by-Step Guide to Restricting Background App Data

 


It is no secret that Android smartphones are popular among the younger generation for the possibilities they offer. Unfortunately, beneath the device's chic surface lurk apps capable of devouring tons of data, sneakily gnawing away at users' allowances while leaving them in the dark as to where it all goes. 

Certainly, mobile apps offer smartphone users a delightful experience through a rich variety of features, from games and photo and video editors to social messengers, educational apps, and music players. 

Most of these apps need an internet connection to work at their best, so users must spend that data wisely. Data costs can add up quickly when several such apps run on one device, since the software consumes a large amount of internet data. 

The best way to solve this problem is to set a restriction on how much data a specific app can use, preventing data overload. 

Although Android devices are incredibly versatile and capable of handling a wide variety of tasks, they can drain a user's data plan quickly. The best way to minimize usage is to limit background data consumption: some apps regularly pull down large amounts of data even when they are not actively in use. 

The good news is that Android provides a way to stop any app from using data in the background. Third-party apps may simplify the process further and add more options. 

Depending on the app, settings may also be available to limit how much data is used, particularly in apps that exchange media. By disabling data-hungry actions such as media auto-downloads in WhatsApp, for example, users can reduce that app's data use.

To prevent apps from using data in the background while the phone is not in use, users can also turn off mobile data entirely. This comes with caveats: it stops all apps from using data and suppresses notifications of background updates for the duration. But it does eliminate data costs. 

Limiting Background Data for All Applications

Restricting background data can also extend the battery life of an Android device. Note that when background data is turned off, the device will not update apps, sync with accounts, or check for new email. 

In the end, perhaps the most important benefit of restricting background data is controlling how much cellular data is used. As a rule of thumb, limiting background data helps users on limited plans stay within their monthly allotment. 

Using the following steps, users can block apps from accessing mobile data on a Samsung, Google, OnePlus, or any other Android phone. The basic steps are the same regardless of manufacturer, but the menus may differ. 

Swipe down from the top of the screen and tap the settings icon to open the device settings. 

To view data usage, go to Network & Internet > Data usage or Connections > Data usage, depending on the device. The top of that menu displays the amount of data used in the current period.

To find out how much data each app has been consuming recently, select the App data usage or Mobile data usage option. The list is usually sorted with the apps that consume the most data at the top. 

Choose an app from the list, starting with the one that consumes the most data. Users will then see data usage statistics for that application, including its background usage. 

The amount of data that YouTube alone consumes may be surprising. To stop an app from using cellular data in the background, tap it and turn off the Allow background data usage option. 

Moreover, some devices offer a separate option to allow data usage while Data Saver is on; turning that off as well keeps the app fully restricted. 

Whenever the device's Data Saver is active, a restricted app cannot consume mobile data.

Data Usage Warnings and Limits

Setting a data warning and usage limit on an Android device can help users avoid costly overage fees. 

When they reach the data warning limit, their device will notify them that they are close to exceeding their data plan. If users continue to use data after reaching the limit, their device will automatically restrict their data usage. 

This means that they may not be able to access certain features, such as streaming video or music, until their next billing cycle.
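The warn-then-restrict behaviour described above can be sketched as a simple threshold check. This is an illustrative model only; the 90% warning threshold below is an assumption, not Android's documented default:

```python
def data_status(used_mb: float, limit_mb: float, warn_fraction: float = 0.9) -> str:
    """Classify usage against a plan limit: 'ok', 'warning', or 'restricted'."""
    if used_mb >= limit_mb:
        return "restricted"  # past the limit: the device cuts off mobile data
    if used_mb >= warn_fraction * limit_mb:
        return "warning"     # near the limit: the device notifies the user
    return "ok"

print(data_status(4500, 5000))  # nearing a 5 GB cap -> "warning"
```

On a real device, of course, the thresholds are set in the Data usage menu and enforced by the system rather than by the app.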

Unveiling DogeRAT: The Malware Exploiting Counterfeit Netflix, Instagram, and YouTube

 


In a recent study, Indian analysts discovered a powerful malware strain known as DogeRAT, which infects a range of devices and targets a wide range of industries.

The malicious software spreads through social media apps by impersonating popular Android applications such as YouTube, Netflix, Instagram, and Opera Mini. The operators of DogeRAT are running a campaign in which they try to steal information from victims, including banking details, and to take control of their devices. 

In this digital era, smartphones have become an integral part of our everyday lives. With the help of a few taps on the screen, it is possible to perform multiple tasks on the device. Even though smartphones are becoming more popular, many people are still unaware of the dangers lurking online. 

Furthermore, cybercriminals continually devise innovative tactics to deceive even the smartest and most tech-savvy individuals. Some have created dangerous counterfeit apps that mimic popular brands' logos, typefaces, and interfaces. 

False applications such as these are loaded with malware designed to steal sensitive user information. DogeRAT has reportedly been disguised as legitimate mobile applications, including games, productivity tools, and entertainment apps such as Netflix and YouTube, and is disseminated through social networking sites and messaging apps such as Telegram. 

It is a new piece of Android malware, built on open-source software, that infects smartphones and tablets to spy on businesses and steal sensitive financial and personal information. 

Once installed on a victim's device, the malware can steal sensitive information, including contacts, messages, and other personal data. Hackers can also gain remote access to an infected device and use it for malicious activities such as sending spam messages, making unauthorized payments, modifying files, viewing call records, and even taking photos with the device's front and rear cameras. 

In addition to modifying Remote Access Trojans (RATs), criminals are repurposing malicious apps and distributing them to spread their scams. These campaigns are cheap and simple to set up, take little time to execute, and yield significant profits. 

A guide to protecting against malware threats

Malware attacks have been prominent in the past few months, even though they are not novel. To protect your device, it is essential to stay aware of, and take precautions against, the latest threats. 

Whatever device you use, consider the following points to protect your data and personal information from malware attacks:

  • Be careful about which links and attachments you open: they may contain malware or lead to malicious websites. 
  • Keep your software updated: regularly updating your operating system and applications is the most effective defense against malware, ensuring security vulnerabilities are patched. 
  • Use reliable security solutions: invest in antivirus tools to protect your device from malware and other threats. 
  • Do not click on links or open attachments in emails that seem suspicious or too good to be true. 
  • Learn about common attack techniques: familiarity with how malware spreads is essential to defending against cyberattacks.   

Taking proactive measures and exercising caution are the most effective ways for individuals to combat this threat. Source applications exclusively from trusted and verified platforms, authenticate developers where possible, and remain vigilant about suspicious links, emails, and messages.

To ensure overall security, it is essential to apply device updates, operating system upgrades, and antivirus updates as soon as they become available. 

Moreover, it is strongly recommended to follow good cybersecurity practices, including using strong, unique passwords and enabling two-factor authentication. 

By staying informed about emerging cybersecurity threats and consistently applying these precautions, users can significantly reduce their susceptibility to malware such as DogeRAT.

Don't Get Hooked: How Scammers are Reeling in YouTube Users with Authentic Email Phishing

YouTube phishing scam

Are you a YouTube user? Beware of a new phishing scam that has been making rounds lately! In recent times, YouTube users have been targeted by a new phishing scam. The scammers use an authentic email address from YouTube, which makes it difficult to differentiate between a genuine email and a fraudulent one. 

What is a phishing scam?

Phishing scams are fraudulent attempts to obtain sensitive information, such as usernames, passwords, and credit card details, by posing as a trustworthy entity in electronic communication. Typically, scammers use social engineering techniques to trick users into clicking a malicious link or downloading malware.

What is the new YouTube phishing scam?

The new YouTube phishing scam involves the use of an authentic email address from YouTube. The email appears to be from YouTube's support team, and it informs the user that their channel is at risk of being deleted due to a copyright infringement violation. 

The email contains a link to a website where the user is asked to enter their YouTube login credentials. Once the user enters their login credentials, the scammers can access the user's account and potentially steal sensitive information or perform unauthorized actions.

How to identify the new YouTube phishing scam?

The new YouTube phishing scam is difficult to identify because the email address used by the scammers appears to be genuine. However, there are a few signs that you can look out for to identify the scam:

  • Check the sender's email address: Even though the email address appears to be genuine, you should always check the sender's email address carefully. In most cases, scammers use a similar email address to the genuine one but with a few minor differences.
  • Check the content of the email: The new YouTube phishing scam typically informs the user that their channel is at risk of being deleted due to a copyright infringement violation. However, if you have not received any copyright infringement notice, then you should be cautious.
  • Check the link in the email: Always check the link in the email before clicking on it. Hover your mouse over the link and check if the URL is genuine. If you are unsure, do not click on the link.
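The link check in particular can be automated. Here is a minimal sketch, assuming a hypothetical allow-list of official domains; the point is that a lookalike host such as youtube.com.account-verify.net fails the check even though it starts with "youtube.com":

```python
from urllib.parse import urlparse

OFFICIAL_DOMAINS = {"youtube.com", "google.com"}  # assumed allow-list, for illustration

def looks_official(url: str) -> bool:
    """True only if the link's hostname is an official domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(looks_official("https://www.youtube.com/account"))          # genuine domain
print(looks_official("https://youtube.com.account-verify.net/"))  # lookalike domain
```

Matching on the parsed hostname, rather than searching the raw URL string, is what defeats the lookalike trick.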

How to protect yourself from the new YouTube phishing scam?

To protect yourself from the new YouTube phishing scam, follow these tips:

  • Enable two-factor authentication: Two-factor authentication adds an extra layer of security to your account. Even if the scammers obtain your login credentials, they will not be able to access your account without the second factor of authentication.
  • Do not share your login credentials: Never share your login credentials with anyone, even if the email appears to be from a genuine source.
  • Report suspicious emails: If you receive a suspicious email, report it to YouTube immediately. This will help to prevent other users from falling victim to the scam.
  • Keep your software up to date: Keep your operating system and software up to date to ensure that you have the latest security patches and updates.

Stay cautious

The new phishing scam using an authentic email address is a serious threat to YouTube users. However, by following the tips mentioned in this blog, you can protect yourself from falling victim to the scam. Always be vigilant and cautious when dealing with emails that request sensitive information. Remember, if you are unsure, do not click on the link.


YouTube Charged for Data Gathering on UK Minors

YouTube may be collecting the personal data of a million children, according to the research. The claim alleges that YouTube violates the 'age-appropriate design code' set forth by the Information Commissioner's Office (ICO).

To operate in the UK, online services must comply with the country's data protection rules on minors' personal information. The UK implemented the Data Protection Act 2018 in line with the General Data Protection Regulation (GDPR).

These details include the location from which kids view, the device they use, and their preferred types of videos, according to Duncan McCann, Head of Accountability at the 5Rights Foundation.

According to McCann, the streaming service has violated recently established child protection rules by capturing the location, viewing habits, and preferences of potentially millions of youngsters who visit the main YouTube website.

According to attorney and data protection specialist Jonathan Compton of DMH Stallard, YouTube could face a fine of up to £17.5 million, or 4% of its annual global revenue. Nor is the YouTube website the only potential violator of the ICO Children's Code: in a study published last month by Comparitech, researchers found that one in four Google Play apps did not adhere to the Age Appropriate Design Code. 

A spokesperson for YouTube said, "Over the years, we've made efforts to protect kids and families, like developing a dedicated kids app, implementing new data standards for children's content, and delivering more age-appropriate experiences."

Building on that long-standing strategy, and following the code's additional recommendations, YouTube has adopted extra safeguards to support children's privacy, such as more protective default settings and a dedicated YouTube Supervised Experience. 




England's Online Safety Bill: A Quick Look

The UK government has decided that the polarizing Online Safety Bill will no longer include its 'legal but harmful' provision. The law was presented in parliament early this year, following years of discussion.

Michelle Donelan, the culture secretary, said adult social media users will have more control over what they saw and refuted claims that regulations safeguarding them were being weakened.

According to media sources, the government responded to mounting worries that the now-scrapped section would have caused platforms to censor speech severely. According to a BBC report, the provision would have required the highest-risk platforms to remove content that was legal but harmful.

The government contends that the modifications do not compromise the safeguards for kids. Technology companies will still be required to prevent children, who are classified as those under 18, from viewing anything that could seriously hurt them. Businesses must disclose how they plan to verify the age of their users; some, like Instagram, are deploying age-verification technologies.

Ian Russell, the father of Molly Russell, a youngster who took her own life after watching online material about suicide and self-harm, claimed that the measure had been weakened and that the change might be made for political gain in order to hasten its passage.

Under the scrapped provision, platforms like Facebook, Instagram, and YouTube would have been instructed to stop exposing users to content about eating disorders, self-harm, and misogynistic messages. Under the revised bill, adults will be able to access and upload anything that is lawful, if a platform's terms of service permit it, but children must still be shielded from hazardous content.

There will be exceptions to allow for reasonable debate, but the protections might cover anything that encourages eating disorders or incites hatred on the basis of race, ethnicity, sexual orientation, or gender reassignment.

Dr. Monica Horten, a tech policy specialist with the Open Rights Group, said the bill is vague about how businesses will be expected to determine the age of their users.

The communications and media regulator Ofcom, which has the authority to fine businesses up to 10% of their global turnover, will largely be responsible for enforcing the new rules.







OnionPoison: Malicious Tor Browser Installer Distributed Through a YouTube Video

 

Researchers at Kaspersky have detected a trojanized version of the Windows installer for the Tor Browser being distributed through a popular Chinese-language YouTube channel. 
 
The malware campaign, dubbed OnionPoison, allegedly reaches internet users through a Chinese-language YouTube video offering advice on 'staying anonymous online.' 
 
Below the video, the threat actors placed a link to the official Tor website alongside a second link to a cloud-sharing service hosting a Tor installer that had been modified to include malicious code.  
 
The YouTube channel has more than 180,000 subscribers, and the video was the top result for the YouTube query 'Tor浏览器' ('Tor Browser'). Posted in January 2022, the video had more than 64,000 views at the time of discovery in March 2022, Kaspersky reported. The malicious installer deploys a Tor Browser configured to expose user data, including the list of installed software, browsing history, and data entered into website forms. The researchers also found spyware in a library bundled with the browser. 
 
“More importantly, one of the libraries bundled with the malicious Tor Browser is infected with spyware that collects various personal data and sends it to a command and control server. The spyware also provides the functionality to execute shell commands on the victim machine, giving the attacker control over it [...] We decided to dub this campaign ‘OnionPoison’, naming it after the onion routing technique that is used in Tor Browser.” reads the analysis conducted by Kaspersky. 
 
It is worth mentioning that the Tor Browser is banned in China on account of the country's extensive internet censorship. As a result, users often download the browser from third-party websites, making them more likely to be exposed to scams and deceived into downloading malicious installers.  
 
The OnionPoison campaign does not appear to be financially motivated, as the threat actors showed no interest in harvesting credentials or cryptocurrency wallets.  
 
In light of this, the researchers are warning China-based users and companies to avoid downloading software from third-party websites to prevent becoming targets of threat actors.  
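When a download genuinely must come from a mirror, one practical safeguard is to compare the file's SHA-256 hash against the value published on the official torproject.org download page. A minimal sketch; the expected hash below is a placeholder, not a real Tor Browser checksum:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file in chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

# EXPECTED is copied by hand from the official download page (placeholder value here).
EXPECTED = "0000000000000000000000000000000000000000000000000000000000000000"
# Only run the installer if sha256_of("torbrowser-install-win64.exe") == EXPECTED.
```

A hash match only proves the file is the one the official site published, so the expected value must itself come from the genuine torproject.org page, not from the mirror.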
 

Social Media Used to Target Victims of Investment Scams

Security researchers have discovered a huge investment scam campaign that targets victims across Europe through online and telephone channels. Fake investment scams have been around for a while, so many people are familiar with them.

Over 10,000 malicious websites tailored for consumers in the UK, Belgium, the Netherlands, Germany, Poland, Portugal, Norway, Sweden, and the Czech Republic are included in the "gigantic network infrastructure" spotted by Group-IB.

The scammers work hard to promote the campaigns on numerous social media sites, and even compromise Facebook and YouTube accounts to get in front of as many users as they can.

The operation aims to mislead consumers into believing they have the chance to invest in high-yield opportunities, and to persuade them to deposit a minimum of 250 EUR ($255) to sign up for the phony services.

Scam operation

  • Posts promoting phony investment schemes on hacked social media accounts, such as Facebook and YouTube, are the first to entice victims.
  • Images of regional or international celebrities are frequently used to give the illusion that the scam is real.
  • The scammers then demand contact information. In a sophisticated social engineering scam, a 'customer agent' from a call center contacts the victim and offers the investment terms and conditions.
  • Eventually, the victim is persuaded to make a deposit of at least 250 EUR, and the information they provided on the false website is either saved and utilized in other attacks or sold on the dark web.
  • After the victim deposits the money, they are given access to a fictitious investment dashboard that claims to allow them to monitor daily earnings.
  • When the victim tries to withdraw funds from the site and is instead asked for a final payment, the fraud is discovered.

Over 5000 of the 11,197 domains used in the campaign were still operational as of this writing.

When an investment platform interests you, it is advisable to check that it comes from a reputable broker. Searching for user reviews and looking for patterns across a large number of comments can also help spot fraud. 


Hacker Alert! British Army's YouTube and Twitter Accounts Hijacked

 


About the Crypto Scam

Threat actors hacked the Twitter and YouTube accounts of the British Army. A malicious third party compromised the accounts last Sunday; when users opened them, they were redirected to cryptocurrency scams. 

The Ministry of Defence (MoD) press office reported the incident around 7 PM on Twitter, saying it was aware of the breach of the army's YouTube and Twitter accounts and that an inquiry had been set up to look into the issue. 

Information security is a matter of utmost importance for the army, the MoD office said, adding that the army is working to resolve the problem and that it would offer no further comment until the investigation is complete. 

However, an update four hours later said the problem had been fixed.

What are the reports saying?

Although the posts mentioned only YouTube and Twitter, other reports suggest that the Facebook account was also hijacked. According to those reports, the threat actors posted promotional links to various crypto and NFT scams, including phishing links to a fraudulent mint of The Possessed NFT collection. 

On YouTube, the threat actors rebranded the entire account to resemble the investment firm Ark Invest and posted live-stream videos featuring celebrities such as Elon Musk and Jack Dorsey. 

What makes this attack unique?

This is a classic crypto scam: the hackers used videos to promote QR codes and told viewers that any crypto sent to the displayed addresses would be doubled. The MoD has now taken down all the content rebranded by the hackers. 

"Just last week, high street bank Santander warned of a predicted 87% year-on-year increase in celebrity-endorsed cryptocurrency scams in the UK in 2022. It reported a 61% increase in the cases it dealt with between Q4 2021 and Q1 2022, with the average cost of these scams increasing 65% year-on-year in the first quarter to reach £11,872" says InfoSecurity.

Stolen TikTok Videos have Infiltrated YouTube Shorts

 

Scammers are taking full advantage of the debut of Google's new TikTok competitor, YouTube Shorts, which has proven an excellent platform for feeding stolen content to an enormous engaged audience. Researchers have cautioned that this content is being exploited to run rackets such as advertising adult dating websites, hustling diet pills, and selling marked-up commodities. Although YouTube Shorts is still in beta, scammers have had plenty of time to shift their best TikTok-tested flimflams over to the Google cosmos, already populated by billions of viewers. 

Satnam Narang, a Tenable analyst, has been analyzing social media for over a decade and discovered that scammers are having great success stealing TikTok's most viral videos and exploiting them on YouTube Shorts to get viewers to click on a variety of sites and links. Narang examined 50 distinct YouTube channels and discovered that, as of December, they had accumulated 3.2 billion views across at least 38,293 videos stolen from TikTok creators. He stated that the YouTube channels had over 3 million subscribers. 

The most common type of fraud Narang discovered was the use of extremely popular TikTok videos, especially challenges showing gorgeous women, to serve links to adult dating sites that run affiliate programmes that pay for clicks.

These websites incentivize affiliates by paying them on a cost-per-action (CPA) or cost-per-lead (CPL) basis. Scammers have started abusing these affiliate offers to make money by duping users of social media networks. They only need to persuade consumers to visit these adult dating websites and sign up with an email address, whether valid or not. Once a visitor to an adult dating website becomes a registered user, the fraudster earns anywhere from $2 to $4 for the successful CPL conversion. 
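As a rough illustration of why this scheme is lucrative at Shorts-level view counts, the economics can be sketched as follows. The $2 to $4 per-lead range comes from the article; the view count and conversion rate below are purely hypothetical assumptions.

```python
# Back-of-the-envelope CPL revenue estimate for a scam channel.
# The $2-$4 per-lead payout range is reported in the article; the
# view count and conversion rate are hypothetical assumptions.
def estimated_cpl_revenue(views, conversion_rate, payout_per_lead):
    """Return expected affiliate revenue for a batch of video views."""
    leads = views * conversion_rate
    return leads * payout_per_lead

views = 1_000_000          # hypothetical total video views
conversion_rate = 0.001    # assume 0.1% of viewers sign up
low = estimated_cpl_revenue(views, conversion_rate, 2.0)
high = estimated_cpl_revenue(views, conversion_rate, 4.0)
print(f"Estimated payout range: ${low:,.0f}-${high:,.0f}")
# -> Estimated payout range: $2,000-$4,000
```

Even at a 0.1% conversion rate, a single million-view batch of stolen clips would net a few thousand dollars, which is why the volume of reuploads matters more to scammers than any single video.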

“While adult-dating scams proliferate across many platforms, the introduction of YouTube Shorts, with its enormous potential reach and built-in audience, is fertile ground that will only serve to help these scams become even more widespread,” Narang explained. “This trend is alarming because of how successful these tactics have become so quickly on YouTube Shorts, based on the volume of video views and subscribers on these fake channels promoting stolen content.” 

Viewers of YouTube Shorts were also offered advertisements with viral TikTok exercise videos for trending products, such as the pants dubbed "the leggings" on social media. The famous leggings, with a seam across the back to improve even the flattest posterior, were being offered on YouTube Shorts at a markup by scammers expecting the new breed of customers wouldn't notice the padded price, Narang discovered.

YouTube Videos Spread Password Stealing Malware

 

Named after the Trojan horse of Greek legend, a Trojan is a form of malware that disguises itself as a legitimate file or program in order to fool unsuspecting users into downloading it onto their computers. This is how naive users give cyberattackers unauthorized remote access. Threat actors can then monitor a user's activities (web browsing, computer usage, and so on) to collect and exfiltrate sensitive data, erase files, or download additional malware onto the PC, among other things. 

Threat actors are getting more inventive: they have begun to use YouTube videos to spread malware via links embedded in video descriptions. Cluster25 security researcher Frost said that malware campaigns promoting various password-stealing Trojans have increased significantly on YouTube. Frost believes two clusters of malicious activity are operating at the same time, one distributing RedLine malware and the other distributing Raccoon Stealer. 

Malicious actors start by launching dozens of new YouTube channels dedicated to software cracks, licenses, how-to instructions, bitcoin mining, game hacks, VPN software, and just about any other popular topic. The videos demonstrate how to complete a task using a specific piece of software, and the video description claims to link to the program shown; in reality, that link is what delivers the malware.

"We are aware of this campaign and are currently taking action to block activity by this threat actor and flagging all links to Safe Browsing. As always, we are continuously improving our detection methods and investing in new tools and features that automatically identify and stop threats like this one. It is also important that users remain aware of these types of threats and take appropriate action to further protect themselves," said Google. 

According to the researcher, thousands of videos and channels were created as part of this massive malware campaign, with 100 new videos and 81 new channels launched in only twenty minutes. Frost said threat actors use stolen Google accounts to create new YouTube channels to spread the malware, creating an infinite, ever-growing loop. 
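To put that twenty-minute figure in perspective, a simple extrapolation (assuming, hypothetically, that the observed pace were sustained) shows the scale such a loop can reach:

```python
# Extrapolating the upload rate reported by Cluster25: 100 new videos
# and 81 new channels observed in a 20-minute window. Sustaining this
# pace around the clock is a hypothetical assumption for illustration.
videos, channels, window_min = 100, 81, 20
per_hour = 60 / window_min  # number of 20-minute windows per hour

videos_per_hour = videos * per_hour      # 300
channels_per_hour = channels * per_hour  # 243
print(f"~{videos_per_hour:.0f} videos/hour, ~{channels_per_hour:.0f} channels/hour")
print(f"~{videos_per_hour * 24:.0f} videos/day, ~{channels_per_hour * 24:.0f} channels/day")
```

At that pace the campaign would generate thousands of new videos per day, far more than manual takedowns could keep up with.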

"The threat actors have thousands of new channels available because they infect new clients every day. As part of these attacks, they steal victim's Google credentials, which are then used to create new YouTube Videos to distribute the malware," Frost said. 

These campaigns demonstrate the importance of not downloading programmes from the Internet at random, as video publishers cannot check every link published to sites like YouTube. As a result, before downloading and installing anything from a website, users should check whether the site has a solid reputation and can be trusted.
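Part of that caution can be automated. As a minimal sketch (the allowlisted hosts below are illustrative placeholders, not a recommendation; a real deployment would query a curated reputation service), a script might extract the hostname from a download link and refuse anything not on a vetted allowlist:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of trusted download hosts; in practice this
# would come from a maintained reputation feed, not a hard-coded set.
TRUSTED_HOSTS = {"github.com", "python.org", "mozilla.org"}

def is_trusted_download(url: str) -> bool:
    """Return True only if the link's host (or a parent domain) is allowlisted."""
    host = urlparse(url).hostname or ""
    return any(host == t or host.endswith("." + t) for t in TRUSTED_HOSTS)

print(is_trusted_download("https://github.com/user/tool/releases"))         # True
print(is_trusted_download("http://free-crack-downloads.example/setup.exe")) # False
```

An allowlist is deliberately stricter than a blocklist here: since the campaigns above churn out fresh channels and domains daily, denying unknown hosts by default fails safe.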