
n8n Supply Chain Attack Exploits Community Nodes In Google Ads Integration to Steal Tokens


Hackers were found uploading a set of eight packages to the npm registry that posed as integrations for the n8n workflow automation platform, in a campaign designed to steal developers’ OAuth credentials.

About the exploit 

One package, called “n8n-nodes-hfgjf-irtuinvcm-lasdqewriit”, mimics the Google Ads integration: it asks users to connect their ad account through a fake form, then sends the captured OAuth credentials to servers under the threat actors’ control.

Endor Labs released a report on the incident. "The attack represents a new escalation in supply chain threats,” it said, adding that “unlike traditional npm malware, which often targets developer credentials, this campaign exploited workflow automation platforms that act as centralized credential vaults – holding OAuth tokens, API keys, and sensitive credentials for dozens of integrated services like Google Ads, Stripe, and Salesforce in a single location."

Attack tactic 

Experts are not yet certain whether all of the packages share the same malicious functionality. ReversingLabs' Spectra Assure analysed several of them and found no security issues in most; in one package, “n8n-nodes-zl-vietts,” however, it identified a component with a known malware history.

The campaign may still be active, as an updated version of the package “n8n-nodes-gg-udhasudsh-hgjkhg-official” was posted to npm recently.

Once installed as a community node, the malicious package behaves like a typical n8n integration, displaying the expected configuration screens. When a workflow runs, however, it executes code that decrypts the stored tokens using n8n’s master key and exfiltrates the stolen data to a remote server.

This is the first time a supply chain attack has specifically targeted the n8n ecosystem, with hackers exploiting the trust placed in community integrations.

New risks in ad integration 

The findings highlight the security risks of integrating untrusted workflows, which expand the attack surface. Developers are advised to audit packages before installing them, scrutinize package metadata for anomalies, and rely on official n8n integrations.
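One cheap metadata check can even be sketched in a few lines: the gibberish segments in this campaign's package names ("hfgjf", "hgjkhg") contain long consonant runs that rarely occur in legitimate package names. The heuristic and threshold below are illustrative only, a starting point rather than a substitute for a full audit:

```python
import re

VOWELS = set("aeiou")

def longest_consonant_run(segment: str) -> int:
    # Length of the longest unbroken run of consonant letters.
    run = best = 0
    for ch in segment:
        if ch.isalpha() and ch not in VOWELS:
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best

def looks_suspicious(package_name: str, threshold: int = 4) -> bool:
    # Split on hyphens/digits and flag any segment with a long
    # consonant run -- keyboard-mash names rarely contain vowels.
    segments = re.split(r"[^a-z]+", package_name.lower())
    return any(longest_consonant_run(s) >= threshold for s in segments)
```

Against the names reported in this campaign the heuristic fires, while ordinary community-node names such as a hypothetical "n8n-nodes-google-ads" pass.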

According to researchers Kiran Raj and Henrik Plate, "Community nodes run with the same level of access as n8n itself. They can read environment variables, access the file system, make outbound network requests, and, most critically, receive decrypted API keys and OAuth tokens during workflow execution.”

Phishing Expands Beyond Email: Why New Tactics Demand New Defences

 


Phishing has long been associated with deceptive emails, but attackers are now widening their reach. Malicious links are increasingly being delivered through social media, instant messaging platforms, text messages, and even search engine ads. This shift is reshaping the way organisations must think about defence.


From the inbox to every app

Work used to be confined to company networks and email inboxes, which made security controls easier to enforce. Today’s workplace is spread across cloud platforms, SaaS tools, and dozens of communication channels. Employees are accessible through multiple apps, and each one creates new openings for attackers.

Links no longer arrive only in email. Adversaries exploit WhatsApp, LinkedIn, Signal, SMS, and even in-app messaging, often using legitimate SaaS accounts to bypass email filters. With enterprises relying on hundreds of apps with varying security settings, the attack surface has grown dramatically.


Why detection lags behind

Phishing that occurs outside email is rarely reported because most industry data comes from email security vendors. If the email layer is bypassed, companies must rely heavily on user reports. Web proxies offer limited coverage, but advanced phishing kits now use obfuscation techniques, such as altering webpage code or hiding scripts to disguise what the browser is actually displaying.
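As a toy illustration of the kind of signals defenders look for in disguised page code, a pattern-based check over inline script text can flag common hiding tricks. The patterns below are illustrative examples only; real phishing kits use far heavier obfuscation than a regex list can catch:

```python
import re

# Constructs frequently abused to hide what a page really does:
# runtime evaluation, base64 decoding, char-code assembly, DOM rewriting.
SUSPICIOUS_PATTERNS = [
    r"\beval\s*\(",
    r"\batob\s*\(",
    r"String\.fromCharCode",
    r"document\.write\s*\(",
]

def script_looks_obfuscated(script_text: str) -> bool:
    return any(re.search(p, script_text) for p in SUSPICIOUS_PATTERNS)
```

A plain `console.log` call passes, while a script that decodes and evaluates hidden content trips the check.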

Even when spotted, non-email phishing is harder to contain. A malicious post on social media cannot be recalled or blocked for all employees like an email. Attackers also rotate domains quickly, rendering URL blocks ineffective.


Personal and corporate boundaries blur

Another challenge is the overlap of personal and professional accounts. Staff routinely log into LinkedIn, X, WhatsApp, or Reddit on work devices. Malicious ads placed on search engines also appear credible to employees browsing for company resources.

This overlap makes corporate compromise more likely. Stolen credentials from personal accounts can provide access to business systems. In one high-profile incident in 2023, an employee’s personal Google profile synced credentials from a work device. When the personal device was breached, it exposed a support account linked to more than a hundred customers.


Real-world campaigns

Recent campaigns illustrate the trend. On LinkedIn, attackers used compromised executive accounts to promote fake investment opportunities, luring targets through legitimate services like Google Sites before leading them to phishing pages designed to steal Google Workspace credentials.

In another case, malicious Google ads appeared above genuine login pages. Victims were tricked into entering details on counterfeit sites hosted on convincing subdomains, later tied to a campaign by the Scattered Spider group.


The bigger impact of one breach

A compromised account grants far more than access to email. With single sign-on integrations, attackers can reach multiple connected applications, from collaboration tools to customer databases. This enables lateral movement within organisations, escalating a single breach into a widespread incident.

Traditional email filters are no longer enough. Security teams need solutions that monitor browser behaviour directly, detect attempts to steal credentials in real time, and block attacks regardless of where the link originates. In addition, enforcing multi-factor authentication, reducing unnecessary syncing across devices, and educating employees about phishing outside of email remain critical steps.

Phishing today is about targeting identity, not just inboxes. Organisations that continue to see it as an email-only problem risk being left unprepared against attackers who have already moved on.


Fake DeepSeek AI Installers Deliver BrowserVenom Malware



Cybersecurity researchers have issued a warning about a sophisticated campaign targeting users who attempt to access DeepSeek-R1, a widely recognized open-source large language model (LLM). Cybercriminals have launched a malicious operation that capitalises on the soaring global interest in artificial intelligence tools, exploiting unsuspecting users through deceptive tactics.


A detailed investigation by Kaspersky uncovered a newly discovered Windows-based malware strain known as BrowserVenom, which threat actors distribute through a combination of malvertising and phishing. Beyond intercepting and manipulating web traffic, the malware enables attackers to stealthily harvest sensitive data, including passwords, browsing history, and personal information.

Cybercriminals are reportedly using Google Ads to redirect users to a fraudulent website, deepseek-platform[.]com, carefully designed to replicate the official DeepSeek homepage. By imitating the branding and layout of a legitimate DeepSeek-R1 download page, they deceive victims into downloading malicious files.

The emergence of BrowserVenom marks a notable shift in the threat landscape, as attackers leverage the surge of interest in AI technologies to deliver malware at scale. The campaign highlights the growing sophistication of social engineering tactics and serves as a reminder to verify the source of any AI-related software before installing it.

Security analysis shows the attackers created a deceptive installer posing as the authentic DeepSeek-R1 language model. The installer is carefully disguised to appear legitimate, but it delivers BrowserVenom, an advanced malware strain that reroutes all browser traffic through the attackers' servers.

This redirection capability lets cybercriminals intercept and manipulate internet traffic, giving them direct access to victims' sensitive personal information. The malware's breadth of functionality is especially worrying: once embedded in a system, it can monitor user behaviour, harvest login credentials, retrieve session cookies, and steal financial data, emails, and documents, some of which may be transmitted in plaintext.

With this level of access, cybercriminals can commit financial fraud and identity theft, or sell stolen data on underground marketplaces. Kaspersky reports that the campaign has already compromised systems in several countries, with confirmed infections in Brazil, Cuba, Mexico, India, Nepal, South Africa, and Egypt, highlighting the threat's global reach.

The primary infection vector is a phishing site designed to look exactly like DeepSeek's official platform, which induces users to download the trojanized installer. Because BrowserVenom is still spreading, experts warn that it poses an ongoing threat to users worldwide, especially those who download open-source AI tools without verifying the authenticity of the source.

A comprehensive investigation of the BrowserVenom campaign reveals a highly orchestrated infection chain that begins at a malicious phishing website hosted at https[:]//deepseek-platform[.]com. The attackers used malvertising to place sponsored search results strategically atop result pages for queries such as "DeepSeek R1" and similar terms.

These deceptive tactics exploit the growing popularity of open-source AI models to lure users onto a lookalike site that convincingly resembles the DeepSeek homepage. On arrival, the fake site silently detects the visitor's operating system.

For Windows users, the primary targets of this attack, the interface displays a single prominent "Try now" button offering the DeepSeek-R1 model for free. The site has been observed serving slightly modified layouts on other platforms, but every version shares the same goal: luring users into clicking and unknowingly initiating an infection. The BrowserVenom operators also added further touches to enhance the campaign's credibility and reduce suspicion.

To that end, multiple CAPTCHA mechanisms are integrated at various points in the attack chain. Besides lending the fake DeepSeek-R1 download site a sense of legitimacy, the CAPTCHA challenges act as a form of social engineering, reinforcing the illusion that the site is secure and trustworthy. According to cybersecurity researchers, the first CAPTCHA is triggered when a user clicks the "Try now" button on the fraudulent platform.

At this point the victim is presented with a fake CAPTCHA page that mimics a standard bot-verification interface. Interestingly, it is not merely cosmetic: an embedded snippet of JavaScript evaluates whether a real person is performing the interaction, running several verification checks to identify and block automated access.

Behind this screen sits a layer of heavily obfuscated JavaScript that performs advanced checks to confirm the visitor is a human rather than a security scanner. The attackers have run similar malicious campaigns in the past using dynamic scripts and evasion logic, underscoring the operation's technical sophistication.

Once the CAPTCHA is completed, the user is redirected to a secondary page located at proxy1.php, where a "Download now" button appears. Clicking this final prompt downloads the tampered executable AI_Launcher_1.21.exe from 
https://r1deepseek-ai[.]com/gg/cc/AI_Launcher_1.21.exe.

Running this executable installs the malware. The entire process, from the initial search to installation, is disguised as a legitimate user experience, illustrating how cybercriminals combine social engineering with technical sophistication to spread malware at international scale.

The download is accompanied by a second CAPTCHA, this time resembling Cloudflare's Turnstile verification, complete with the familiar "I am not a robot" checkbox. The user is thus misled at every step of the infection chain, sustaining an illusion of safety while the trojanized installer stealthily deploys BrowserVenom.

After the second CAPTCHA, the victim is offered a choice between two AI deployment platforms, "Ollama" and "LM Studio," both legitimate tools for running local versions of AI models like DeepSeek. Whichever option is selected, the result is the same: BrowserVenom is silently downloaded and executed in the background.

This sophisticated use of fake CAPTCHAs reflects a broader trend of cybercriminals weaponising familiar security mechanisms to disguise malicious activity. Similar attacks have risen over the past few years, including recent phishing campaigns that used fake Cloudflare CAPTCHA pages to coax users into executing malicious commands.

When executed, the installer runs a dual-layered operation that mixes visual legitimacy with covert malicious activity. The user sees a convincing installation interface for what appears to be a large language model deployment tool, while a hidden background process simultaneously deploys the browser malware. This behind-the-scenes sequence is designed to bypass traditional security measures and maintain stealth.

A crucial evasion technique occurs during installation: the installer executes an AES-encrypted PowerShell command that excludes the user's directory from Windows Defender scans. By removing the malware's operating path from routine antivirus oversight, the attackers greatly improve the odds that the payload installs undetected.
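The outer encoding layer of such PowerShell abuse can often be peeled back during analysis. The sketch below handles only the standard `-EncodedCommand` encoding (base64 over UTF-16LE text); the AES layer described in this campaign is custom, and the `Add-MpPreference` command shown is an illustrative example of a Defender-exclusion command, not the campaign's actual payload:

```python
import base64

def decode_encoded_command(b64: str) -> str:
    # PowerShell's -EncodedCommand flag takes a base64 string of
    # UTF-16LE text; decoding both layers recovers the plaintext command.
    return base64.b64decode(b64).decode("utf-16-le")

# Round-trip an exclusion command of the kind described above.
cmd = "Add-MpPreference -ExclusionPath $env:USERPROFILE"
encoded = base64.b64encode(cmd.encode("utf-16-le")).decode("ascii")
```

Feeding `encoded` back through `decode_encoded_command` yields the original command text, which is the first step an analyst takes before tackling any further encryption layers.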

The installer then downloads additional payloads via obfuscated scripts, further complicating detection and analysis. Finally, the BrowserVenom payload is injected directly into system memory, never touching the disk, which helps it evade signature-based antivirus detection.

Once embedded in the system, BrowserVenom's primary function is to redirect all browser traffic through an attacker-controlled proxy server. To do so, the malware installs a rogue root certificate that enables HTTPS interception and modifies browser configurations across Google Chrome, Microsoft Edge, Mozilla Firefox, and other Chromium- and Gecko-based browsers.

These changes let the malware intercept and manipulate secure web traffic without arousing suspicion. It also updates user preferences and browser shortcuts to persist through reboots and manual removal attempts. Researchers found Russian-language elements embedded in the phishing website and distribution infrastructure, strongly suggesting Russian-speaking threat actors developed the malware.

Confirmed infections have been reported in Brazil, Cuba, Mexico, India, Nepal, South Africa, and Egypt, demonstrating the campaign's global spread and aggressive strategy. The malware communicates with command-and-control (C2) infrastructure at the IP address 141.105.130[.]106, using port 37121, which is hardcoded into the proxy settings it deploys. This allows BrowserVenom to hijack victim traffic and route it through attacker-controlled channels without the user's knowledge.
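Because the proxy endpoint is hardcoded, defenders can hunt for this infection by comparing collected proxy settings against the published indicator. A minimal sketch, assuming proxy configurations have already been gathered as `host:port` strings:

```python
# Known C2 endpoint from the report (defanged in prose as 141.105.130[.]106:37121).
KNOWN_C2 = {("141.105.130.106", 37121)}

def proxy_matches_ioc(proxy_setting: str) -> bool:
    """Compare a 'host:port' proxy string against known C2 endpoints."""
    host, sep, port = proxy_setting.rpartition(":")
    if not sep:
        return False  # no port component at all
    try:
        return (host, int(port)) in KNOWN_C2
    except ValueError:
        return False  # non-numeric port
```

A matching entry in a fleet's browser or system proxy settings is a strong signal that the host should be isolated and examined.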

Security experts emphasise the growing threat of cyberattacks that exploit the AI boom, particularly the use of popular LLM tools as bait. Users are strongly advised to practise strict digital hygiene: verify URLs, check SSL certificates, and avoid downloading software from unofficial sources or advertisements.
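URL verification can be partially automated. The following is a minimal sketch of an allowlist check that would catch a lookalike domain of the kind used in this campaign; the allowlist contents are an assumption for illustration, and the real official domain should be confirmed independently:

```python
from urllib.parse import urlparse

# Hypothetical allowlist for illustration only.
OFFICIAL_DOMAINS = {"deepseek.com"}

def is_official(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Accept the domain itself or a true subdomain; a lookalike such as
    # "deepseek-platform.com" matches neither condition.
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)
```

The key detail is matching on the full registered domain rather than a substring, since phishing domains deliberately embed the brand name.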

The surge of interest in artificial intelligence has drawn sophisticated cybercriminal networks, making proactive vigilance essential for users across all geographies and industries. The BrowserVenom incident underscores how deceptive modern tactics have become and why users need greater awareness of AI-related threats.

Adversaries now blend authentic-looking interfaces, advanced evasion methods, and social engineering into a single seamless attack, so traditional security habits are no longer sufficient. Organizations and individuals alike need real-time threat intelligence, behavioral detection tools, and cautious digital habits. As AI-driven threats grow more sophisticated, continuous vigilance is required to keep malicious innovation from staying a step ahead.

Google Unveils AI With Deep Reasoning and Creative Video Capabilities

 


At its annual Google Marketing Live 2025 event on Wednesday, May 21, Google unveiled a comprehensive suite of artificial intelligence-powered tools intended to cement its position at the forefront of digital commerce and advertising.

The new tools are designed to change how brands engage consumers and drive measurable growth through artificial intelligence, part of a strategic push by Google to redefine the future of advertising and online shopping. In her presentation, Vidhya Srinivasan, Vice President and General Manager of Google Ads and Commerce, stressed the significance of the shift: “The future of advertising is already here, fueled by artificial intelligence.”

Google followed this declaration by announcing advanced solutions for smarter bidding, dynamic creative generation, and intelligent agent-based assistants that adjust in real time to user behaviour and changing market conditions. The launch comes at a critical moment, as generative AI platforms and conversational search tools put unprecedented pressure on traditional search and shopping channels, diverting users away from them.

By treating this technological disruption as an opportunity for brands and marketers worldwide, Google underscores its commitment to staying ahead of the curve. The company's journey into artificial intelligence dates back further than many people realise, evolving steadily since its founding in 1998.

While Google is best known for its groundbreaking PageRank algorithm, its formal commitment to artificial intelligence accelerated through the mid-2000s, with milestones such as the acquisition of Pyra Labs in 2003 and the launch of Google Translate in 2006. These early efforts laid the foundation for AI-driven content analysis and translation. Google Instant, introduced in 2010, showed how predictive algorithms could enhance the user experience by suggesting search queries in real time.

AI research and innovation gained further importance in the years that followed, as evidenced by the establishment of Google X in 2011 and the strategic acquisition of DeepMind in 2014, the reinforcement-learning pioneer that created the historic AlphaGo system. Since 2016, Google Assistant and advanced tools like TensorFlow have helped democratize machine learning development.

Breakthroughs such as Duplex highlighted AI's growing conversational sophistication, and more recently Google's AI has embraced multimodal capabilities, with models like BERT, LaMDA, and PaLM transforming language understanding and dialogue. This rich legacy underscores AI's central role in Google's transformation across search, creativity, and business solutions.

At Google I/O, its annual developer conference, the company reaffirmed its leadership in the rapidly developing field of artificial intelligence with an impressive lineup of innovations that promise to change the way people interact with technology. This year's event put a heavy emphasis on AI-driven transformation, showcasing next-generation models and tools far beyond those of previous years.

The announcements ranged from AI assistants with deeper contextual intelligence to the generation of entire videos with dialogue, a monumental leap in AI's creative and cognitive capabilities. The centrepiece of the technological display was the unveiling of Gemini 2.5, Google's most advanced AI model, positioned as the flagship of the Gemini series and setting new industry standards for reasoning, speed, and contextual awareness.

Gemini 2.5 outperforms its predecessors and rivals, including Google's own Gemini Flash, redefining expectations for what artificial intelligence can do. Its most significant advantage is enhanced problem-solving: rather than merely retrieving information, it acts as a true cognitive assistant, delivering precise, contextually aware responses to complex, layered queries.

Despite its expanded capabilities, the model runs faster and more efficiently, easing integration into real-time applications from customer support to high-level planning tools. Its advanced grasp of contextual cues enables more coherent, intelligent conversations that feel closer to collaborating with a person than interacting with a machine. This marks a paradigm shift rather than an incremental improvement.

It signals that artificial intelligence is moving toward systems that can reason, adapt, and contribute meaningfully across creative, technical, and commercial spheres. Google I/O 2025 thus previews a future in which AI is integral to productivity, innovation, and experience design for digital creators, businesses, and developers alike.

Building on this momentum, Google announced major improvements to its Gemini large language model lineup, another step in its quest to develop more powerful, adaptive AI systems. The new iterations, Gemini 2.5 Flash and Gemini 2.5 Pro, feature significant architectural improvements aimed at optimising performance across a wide range of uses.

Gemini 2.5 Flash, a fast, lightweight model built for high-speed use, reaches general availability in early June 2025, with the more advanced Pro version to follow shortly after. The Pro model's most notable feature is “Deep Think”, which applies parallel processing and advanced reasoning techniques to complex tasks.

Inspired by AlphaGo's strategic modelling, Deep Think lets the AI explore multiple solution paths simultaneously, producing faster and more accurate results. This positions the model as a cutting-edge option for high-level reasoning, mathematical analysis, and competitive programming. At a press briefing, Google DeepMind CEO Demis Hassabis highlighted the model's breakthrough performance on USAMO 2025, one of the world's most challenging mathematics competitions, and on LiveCodeBench, a popular benchmark for advanced coding.

“Deep Think pushed the performance of models to the limit, resulting in groundbreaking results,” Hassabis said. In line with its commitment to ethical AI deployment, Google is adopting a cautious release strategy: Deep Think will initially be available only to a limited group of trusted testers, whose feedback will help ensure safety, reliability, and transparency.

This deliberate rollout demonstrates Google's intent to scale frontier AI capabilities responsibly while maintaining trust and control. On the creative side, Google announced two powerful generative media models: Veo 3 for video generation and Imagen 4 for images, both significant breakthroughs in generative media technology.

These innovations give creators a deeper, more immersive toolkit for visual and audio storytelling with remarkable realism and precision. Veo 3 represents a transformative leap in video generation: for the first time, AI-generated videos are no longer silent, motion-only clips.

With fully synchronised audio, including ambient sounds, sound effects, and real-time dialogue between characters, Veo 3's output feels closer to a cinematic production than a simple algorithmic result. "For the first time in history, we are entering a new era of video creation," said Demis Hassabis, highlighting both the visual fidelity and the auditory depth of the new model. Google has folded these breakthroughs into Flow, a new AI-powered filmmaking platform built for creative professionals.

Flow combines Google's most advanced generative models in an intuitive interface, letting storytellers design cinematic sequences with greater ease and fluidity than ever before. The company says Flow recreates an intuitive, inspired creative process in which iteration feels effortless and ideas evolve naturally. Several filmmakers have already combined Flow with traditional methods to create short films illustrating the technology's creative potential.

Imagen 4, the latest update to Google's image generation model, delivers striking improvements in visual clarity and fine detail, particularly in typography and text rendering. These improvements make it a powerful tool for marketers, designers, and content creators who need polished visuals that combine high-quality imagery with precise, readable text.

Whether for branding, digital campaigns, or presentations, Imagen 4 is a significant step forward in AI-driven visual storytelling. Amid fierce competition from leading technology companies, Google has also made significant advances in autonomous AI agents as the landscape of intelligent automation rapidly evolves.

Microsoft's GitHub Copilot has already shown how powerful AI-driven development assistants can be, and OpenAI's Codex continues to push those boundaries. In this context, Google introduced innovative tools like Stitch and Jules, which can generate a complete website, codebase, or user interface with minimal human input, signalling a revolution in how developers build software and digital content. The convergence of autonomous AI technologies from multiple industry giants underscores a trend toward automating increasingly complex knowledge work.

Through the use of these AI systemorganisationsons can respond quickly to changing market demands and evolving consumer preferences by providing real-time recommendations and dynamic adjustments. Through such responsiveness, an organisation is able to optimise operational efficiency, maximise resource utilisation, and create sustainable growth by ensuring that the company remains tightly aligned with its strategic goals. AI provides businesses with actionable insights that enable them to compete more effectively in an increasingly complex and fast-paced market place by providing actionable insights. 

Aside from software and business applications, Google's AI innovations also have great potential to have a dramatic impact on the healthcare sector, where advancements in diagnostic accuracy and personalised treatment planning have the potential to greatly improve the outcomes for patients. Furthermore, improvements in the field of natural language processing and multimodal interaction models will help provide more intuitive, accessible and useful user interfaces for users from diverse backgrounds, thus reducing barriers to adoption and enabling them to make the most of technology. 

In the future, when artificial intelligence becomes an integral part of today's everyday lives, its influence will be transformative, affecting industries, redefining workflows, and generating profound social effects. The fact that Google leads the way in this space not only implies a future where artificial intelligence will augment human capabilities, but it also signals the arrival of a new era of progress in science, economics, and culture as a whole.

Scammers Exploit Google and PayPal’s Infrastructure to Steal Users Private Data

 

Cybersecurity experts discovered a sophisticated phishing campaign that used Google Ads and PayPal's infrastructure to defraud users and obtain sensitive personal information. 

The attackers abused vulnerabilities in Google's ad standards and PayPal's "no-code checkout" feature to create fake payment links that appeared authentic, duping victims into communicating with fake customer care agents. 

Malicious actors created fraudulent adverts imitating PayPal. These adverts shown in the top search results on Google, displaying the official PayPal domain to boost user trust. A flaw in Google's landing page regulations allowed these advertisements to send consumers to fraudulent sites hosted on PayPal's legitimate domain.

The URLs used the format paypal.com/ncp/payment/[unique ID], which was designed to allow merchants to securely accept payments without requiring technical knowledge. 

Scammers took advantage of this functionality by customising payment pages with misleading information, such as fake customer service phone numbers labelled as "PayPal Assistance." Victims, particularly those using mobile devices with limited screen area, were more likely to fall for the scam due to the challenges in spotting the fake nature of the links. 

Mobile devices: A key target 

Due to the inherent limitations of smaller screens, mobile users were the campaign's main target. Users of smartphones frequently rely on the top search results without scrolling further, which increases their vulnerability to clicking on malicious ads. Additionally, once they were directed to the phoney payment pages, users would see PayPal's official domain in their browser address bar, which further confirmed the scam's legitimacy. 

Victims who called the fake help numbers were most likely tricked into disclosing sensitive information or making unauthorised payments. According to MalwareBytes Report, this attack highlights how cybercriminals may use trusted platforms such as Google and PayPal to conduct sophisticated scams. Scammers successfully bypassed typical security measures by combining technical flaws with social engineering techniques, preying on people' trust in well-known brands.

The campaign has been reported to Google and PayPal, yet new malicious adverts utilising similar techniques continue to appear. Experts advise people to use caution when interacting with online adverts and to prioritise organic search results above sponsored links when looking for legitimate customer service information. Security technologies such as ad blockers and anti-phishing software can also help to reduce risks by blocking malicious links.

Cybercriminals Use Google Ads and URL Cloaking to Spread Malware

 

Cybercriminals are increasingly using Google ads and sophisticated cloaking techniques to push malware onto unsuspecting users. The latest example involves a fake Homebrew website that tricked users into downloading an infostealer designed to steal sensitive data, including login credentials and banking details. Security researcher Ryan Chenkie first noticed the malicious Google ad, which displayed the correct Homebrew URL, “brew.sh,” making it appear legitimate. 

However, once users clicked on the ad, they were redirected to a fraudulent clone hosted at “brewe.sh.” The deception was so convincing that even experienced users might not have spotted the trick before engaging with the site. The technique used in this campaign, known as URL cloaking, allows cybercriminals to manipulate how links appear in ads. According to Google, these attackers create thousands of accounts and use advanced text manipulation to bypass detection by both automated systems and human reviewers. This makes it difficult to catch fraudulent ads before they reach users. 

While Google has since removed the ad and is ramping up its security efforts, the issue highlights ongoing vulnerabilities in online advertising. The malware behind this attack, identified by security researcher JAMESWT as AmosStealer (also known as Atomic), is specifically designed for macOS systems. Developed in Swift, it is capable of running on both Intel and Apple Silicon devices. AmosStealer is a subscription-based malware service, sold to cybercriminals for $1,000 per month. 

Once installed, it can extract browser history, login credentials, bank account details, cryptocurrency wallet information, and other sensitive data. What makes this attack particularly alarming is its target audience. Homebrew is a package manager used primarily by macOS and Linux users, who are generally more tech-savvy than the average internet user. This suggests that cybercriminals are refining their tactics to deceive even experienced users. By leveraging Google’s ad platform to lend credibility to their fake sites, these attackers can reach a broader audience and increase their success rate.  

To protect against such malware campaigns, users should take extra precautions. Checking an ad’s displayed URL is no longer sufficient — verifying the website address after the page loads is crucial. Even a minor change in spelling, such as replacing a single letter, can indicate a fraudulent site. Another effective defense is avoiding Google ads altogether. Legitimate websites always appear in organic search results below the ads, so skipping the top links can help users avoid potential scams. 

Instead of clicking on ads, users should manually search for the company or product name to locate the official website. For those looking to minimize risks from malicious ads, alternative search engines like DuckDuckGo or Qwant offer more privacy-focused browsing experiences with stricter ad filtering. As cybercriminals continue to evolve their tactics, adopting safer browsing habits and remaining vigilant online is essential to avoiding security threats.

Hackers Employ Fake Mac Homebrew Google Ads in Novel Malicious Campaign

 

Hackers are once more exploiting Google advertisements to disseminate malware, using a fake Homebrew website to compromise Macs and Linux systems with an infostealer that harvests credentials, browsing data, and cryptocurrency wallets. 

Ryan Chenkie discovered the fraudulent Google ad campaign and warned on X regarding the potential of malware infection. The malware employed in this operation is AmosStealer (aka 'Atomic'), an infostealer intended for macOS devices and sold to malicious actors on a monthly subscription basis for $1,000. 

The malware recently appeared in various malvertising campaigns promoting bogus Google Meet conferencing pages, and it is now the preferred stealer for fraudsters targeting Apple customers. 

Targeting Homebrew customers 

Homebrew is a popular open-source package manager for macOS and Linux that lets you install, update, and manage software using the command line. 

A fraudulent Google advertising featured the correct Homebrew URL, "brew.sh," misleading even seasoned users into clicking it. However, the ad redirected users to a bogus Homebrew website hosted at "brewe.sh". Malvertisers have extensively exploited this URL strategy to trick users into visiting what appears to be a legitimate website for a project or organisation.

When the visitor arrives at the site, he or she is requested to install Homebrew by copying and pasting a command from the macOS Terminal or Linux shell prompt. The official Homebrew website provides a similar command for installing legitimate software. However, running the command displayed on the bogus website will download and execute malware on the device. 

Cybersecurity expert JAMESWT discovered that the malware injected in this case [VirusTotal] is Amos, a potent infostealer that targets over 50 cryptocurrency extensions, desktop wallets, and online browser data. Mike McQuaid, Homebrew's project leader, indicated that the project is aware of the situation but that it is beyond its control, criticising Google's lack of oversight. 

"Mac Homebrew Project Leader here. This seems taken down now," McQuaid stated on X. "There's little we can do about this really, it keeps happening again and again and Google seems to like taking money from scammers. Please signal-boost this and hopefully someone at Google will fix this for good.”

At the time of writing, the malicious ad has been removed, but the campaign could still run through other redirection domains, therefore Homebrew users should be aware of sponsored project adverts.

To mitigate the risk of malware infection, while clicking on a link in Google, make sure you are directed to the authentic site for a project or company before entering sensitive information or installing software. Another safe option is to bookmark official project websites that you need to visit frequently when sourcing software and utilise them instead of searching online every time.

Google Ads Phishing Scam Reaches New Extreme, Experts Warn of Ongoing Threat


Cybercriminals Target Google Ads Users in Sophisticated Phishing Attacks

Cybercriminals are intensifying their phishing campaigns against Google Ads users, employing advanced techniques to steal credentials and bypass two-factor authentication (2FA). This new wave of attacks is considered one of the most aggressive credential theft schemes, enabling hackers to gain unauthorized access to advertiser accounts and exploit them for fraudulent purposes.

According to cybersecurity firm Malwarebytes, attackers are creating highly convincing fake Google Ads login pages to deceive advertisers into entering their credentials. Once stolen, these login details allow hackers to fully control compromised accounts, running malicious ads or reselling access on cybercrime forums. Jérôme Segura, Senior Director of Research at Malwarebytes, described the campaign as a significant escalation in malvertising tactics, potentially affecting thousands of advertisers worldwide.

How the Attack Works

The attack process is alarmingly effective. Cybercriminals design fake Google Ads login pages that closely mimic official ones. When advertisers enter their credentials, the phishing kits deployed by attackers capture login details, session cookies, and even 2FA tokens. With this information, hackers can take over accounts instantly, running deceptive ads or selling access to these accounts on the dark web.

Additionally, attackers use techniques like cloaking to bypass Google’s ad policies. Cloaking involves showing different content to Google’s reviewers and unsuspecting users, allowing fraudulent ads to pass through Google's checks while leading victims to harmful websites.

Google’s Response and Recommendations

Google has acknowledged the issue and stated that measures are being taken to address the threat. “We have strict policies to prevent deceptive ads and actively remove bad actors from our platforms,” a Google spokesperson explained. The company is urging advertisers to take immediate steps if they suspect their accounts have been compromised. These steps include resetting passwords, reviewing account activity, and enabling enhanced security measures like security keys.

Cybersecurity experts, including Segura, recommend advertisers exercise caution when clicking on sponsored ads, even those that appear legitimate. Additional safety measures include:

  • Using ad blockers to limit exposure to malicious ads.
  • Regularly monitoring account activity for any unauthorized changes.
  • Being vigilant about the authenticity of login pages, especially for critical services like Google Ads.

Despite Google’s ongoing efforts to combat these attacks, the scale and sophistication of phishing campaigns continue to grow. This underscores the need for increased vigilance and robust cybersecurity practices to protect sensitive information and prevent accounts from being exploited by cybercriminals.