Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Gmail. Show all posts

Google Expands Gemini in Gmail, Forcing Billions to Reconsider Privacy, Control, and AI Dependence

 




Google has introduced one of the most extensive updates to Gmail in its history, warning that the scale of change driven by artificial intelligence may feel overwhelming for users. While some discussions have focused on surface-level changes such as switching email addresses, the company has emphasized that the real transformation lies in how AI is now embedded into everyday tools used by nearly two billion people. This shift requires far more serious attention.

At the center of this evolution is Gemini, Google’s artificial intelligence system, which is being integrated more deeply into Gmail and other core services. In a recent update shared through a short video message, Gmail’s product leadership acknowledged that the rapid pace of AI innovation can leave users feeling overloaded, with too many new features and decisions emerging at once.

Gmail has traditionally been built around convenience, scale, and seamless integration rather than strict privacy-first principles. Although its spam filters and malware detection systems are widely used and generally effective, they are not flawless. Importantly, Gmail has not typically been the platform users turn to for strong privacy assurances.

The introduction of Gemini changes this bbalance substantially. Google has clarified that it does not use email content to train its AI models. However, the way these tools function introduces new concerns. Features that automatically draft emails, summarize conversations, or search inbox content require access to emails that may contain highly sensitive personal or professional information.

To address this, Google describes Gemini as a temporary assistant that operates within a limited session. The company compares this interaction to allowing a helper into a private room containing your inbox. The assistant completes its task and then exits, with the accessed information disappearing afterward. According to Google, Gemini does not retain or learn from the data it processes during these interactions.

Despite these assurances, concerns remain. Even if the data is not stored long term, granting a cloud-based AI system access to private communications introduces an inherent level of risk. Additionally, while Google has denied automatically enrolling users into AI training programs, many of these AI-powered features are expected to be enabled by default. This shifts responsibility to users, who must actively decide how much access they are willing to allow.

This is not a decision that can be ignored. Once AI tools become integrated into daily workflows, they are difficult to remove. Relying on default settings or delaying action could result in long-term dependence on systems that users may not fully understand or control.

Shortly after promoting these updates, Gmail experienced a disruption that affected its core functionality. Users reported delays in sending and receiving emails, and Google acknowledged the issue while working on a fix. Initially, no estimated resolution time was provided. Later the same day, the company confirmed that the issue had been resolved.

According to Google’s official status update, the disruption was fixed on April 8, 2026, at 14:49 PDT. The cause was identified as a “noisy neighbor,” a term used in cloud computing to describe a situation where one service consumes excessive shared resources, negatively impacting the performance of others operating on the same infrastructure.

With a user base of approximately two billion, even a short-lived outage becomes of grave concern. More importantly, it emphasises the scale at which Gmail operates and reinforces why decisions around AI integration are critical for users worldwide.

The central issue now facing users is the balance between convenience and security. Google presents Gemini as a helpful and well-behaved assistant that enhances productivity without overstepping boundaries. However, like any guest given access to a private space, it requires clear rules and careful oversight.

This tension becomes even more visible when considering Google’s parallel efforts to strengthen security. The company recently expanded client-side encryption for Gmail on mobile devices. While this may sound similar to end-to-end encryption used in messaging apps, it is not the same. This form of encryption operates at an organizational level, primarily for enterprise users, and does not provide the same device-specific privacy protections commonly associated with true end-to-end encryption.

More critically, enabling this additional layer of encryption dynamically limits Gmail’s functionality. When it is turned on, several features become unavailable. Users can no longer use confidential mode, access delegated accounts, apply advanced email layouts, or send bulk emails using multi-send options. Features such as suggested meeting times, pop-out or full-screen compose windows, and sending emails to group recipients are also disabled.

In addition, personalization and usability tools are affected. Email signatures, emojis, and printing functions stop working. AI-powered tools, including Google’s intelligent writing and assistance features, are also unavailable. Other smart Gmail features are disabled, and certain mobile capabilities, such as screen recording and taking screenshots on Android devices, are restricted.

These limitations exist because encrypted data cannot be accessed by AI systems. As a result, users are forced to choose between stronger data protection and access to advanced features. The same mechanisms that secure information also prevent AI tools from functioning effectively.

This reflects a bigger challenge across the technology industry. Privacy and security measures often limit the capabilities of AI systems, which depend on access to data to operate. In Gmail’s case, these two priorities do not align easily and, in many ways, directly conflict.

From a wider perspective, this also highlights a fundamental limitation of email itself. The technology was developed in an earlier era and was not designed to handle modern cybersecurity threats. Its underlying structure lacks the robust protections found in newer communication platforms.

As artificial intelligence becomes more deeply integrated into everyday tools, users are being asked to make more informed and deliberate decisions about how their data is used. While Google presents Gemini as a controlled and temporary assistant, the responsibility ultimately lies with users to determine their comfort level.

For highly sensitive communication, relying solely on email may no longer be the safest option. Exploring alternative platforms with stronger built-in security may be necessary. Ultimately, this moment represents a critical choice: whether the convenience offered by AI is worth the level of access it requires.

Gmail Address Change Feature Fails to Address Core Security Risks, Report Warns

 

A recent update by Google allowing users to change their Gmail address has drawn attention, but cybersecurity experts say it does little to solve deeper issues tied to email privacy and security. 

The feature, which has gained visibility following its rollout in the United States, lets users modify their primary Gmail address while keeping the old one active as an alias. 

The change has been framed as a way to move beyond outdated or inappropriate usernames created years ago. Google CEO Sundar Pichai highlighted the shift in a public post, noting that users no longer need to be tied to early-era email identities. 

However, experts say the update does not address the main problem facing email users today, widespread exposure of email addresses to marketers, data brokers and cybercriminals. 

Once an email address is used online, it is likely to be stored across multiple databases, making it a long-term target for spam and phishing attempts. Changing the visible username does not remove that exposure, especially since older addresses continue to function. 

Jake Moore, a cybersecurity specialist at ESET, said the ability to edit email addresses reflects a broader shift in how digital identity works, but warned it could introduce new risks. “Old addresses will still work as aliases,” he said, adding that this could increase the risk of impersonation and phishing attacks. 

Security researchers also point to the absence of a built-in privacy feature similar to Apple’s “Hide My Email,” which allows users to generate disposable email addresses for sign-ups and online transactions. These temporary addresses can be disabled at any time, limiting long-term exposure. 

Without a comparable system, Gmail users who change their address may still need to share their primary email widely, continuing the cycle of data exposure. 

The update may also create new vulnerabilities in the short term. Cybersecurity reports indicate that attackers are already using the feature as a lure in phishing campaigns, sending emails that direct users to fake login pages designed to steal account credentials. 

There are also early signs of increased spam activity. Online forums have reported a rise in unwanted emails, with some researchers suggesting the address change feature could allow attackers to bypass existing spam filters and start fresh. 

According to security researchers cited by industry outlets, many email filtering systems rely heavily on known sender addresses. 

If attackers rotate or modify those addresses, they may temporarily evade detection until new filters are applied. At the same time, changing a Gmail address does not stop unwanted messages from reaching the original account, since it remains active in the background. 

Experts say the update highlights a broader issue in email security. While giving users more flexibility over their identity, it does not reduce reliance on a single, permanent address that is repeatedly shared across services. 

They suggest that more effective solutions would include tools that limit how widely a primary email address is distributed, along with stronger controls over incoming messages. 

For now, users are being advised to treat emails related to the new feature with caution, particularly those that include links to account settings, as these may be part of phishing attempts.

Hackers Use Fake Legal Emails to Spread Casbaneiro Malware

 



A coordinated phishing operation is targeting Spanish-speaking users in both Latin America and Europe, using layered infection methods to deploy banking malware on Windows systems.

The campaign delivers the Casbaneiro trojan, also referred to as Metamorfo, and relies on an additional malware strain called Horabot to assist in spreading the infection. Investigators have linked the activity to a Brazil-based cybercrime group tracked as Augmented Marauder and Water Saci, which was first publicly reported by Trend Micro in October 2025.

Technical findings shared by BlueVoyant researchers Thomas Elkins and Joshua Green show that the attackers operate through multiple entry points. Their approach combines phishing emails, automated messaging through WhatsApp, and social engineering techniques such as ClickFix. This setup allows them to simultaneously target everyday users and corporate environments. While WhatsApp-based scripts are mainly used to reach consumers in Latin America, the group also runs an email takeover mechanism aimed at breaching business systems in both Latin America and Europe.

The attack begins with an email crafted to resemble a legal notice, often framed as a court-related message. Recipients are urged to open a password-protected PDF file attached to the email. Inside the document, a link directs the user to a harmful website, which triggers the download of a compressed ZIP file. Opening this file leads to the execution of intermediate components, including HTML Application files and Visual Basic scripts.

The VBS script conducts several checks before continuing, including verifying the presence of antivirus tools such as Avast. These checks are designed to avoid analysis or detection. Once completed, the script contacts an external server to download further payloads. Among these are AutoIt-based loaders that unpack encrypted files with extensions like “.ia” and “.at,” eventually activating both Casbaneiro and Horabot on the infected system.

Casbaneiro serves as the main malware responsible for financial theft, while Horabot is used to expand the attack’s reach. After installation, Casbaneiro communicates with a command server to retrieve a PowerShell script. This script uses Horabot to extract contact lists from Microsoft Outlook and send phishing emails from the victim’s own account.

A key change in this campaign is the use of dynamically generated phishing documents. Instead of distributing a fixed malicious file, the malware sends a request to a remote server, including a randomly created four-digit code. The server responds by generating a unique, password-protected PDF designed to mimic a Spanish judicial summons. This file is then attached to phishing emails sent to new targets, making each message appear more personalized and credible.

The operation also uses a secondary Horabot-related file that acts as both a spam tool and an account hijacker. It targets email services such as Yahoo, Gmail, and Microsoft Live, enabling attackers to send phishing messages through compromised Outlook accounts. Researchers note that Horabot has been used in attacks across Latin America since at least November 2020.

Earlier campaigns linked to Water Saci relied heavily on WhatsApp Web to spread malware in a self-propagating manner, including banking threats like Maverick and Casbaneiro. More recent activity, as observed by Kaspersky, shows the use of ClickFix tactics, where users are tricked into executing malicious HTA files under the pretense of resolving technical issues.

Researchers conclude that the attackers are continuously refining their methods by combining multiple delivery channels. The use of WhatsApp automation, dynamically generated PDF lures, and ClickFix techniques allows them to bypass security controls more effectively. The group appears to operate parallel attack chains, switching between WhatsApp-driven distribution and email-based infection methods powered by Horabot, depending on the target environment.

This activity points to a wider change in how cybercriminal operations are structured, where threat actors increasingly depend on adaptable tactics, automated tools, and manipulation of user behavior to maintain and expand attacks across different regions.

Gmail Credentials Appear in Massive 183 Million Infostealer Data Leak, but Google Confirms No New Breach




A vast cache of 183 million email addresses and passwords has surfaced in the Have I Been Pwned (HIBP) database, raising concern among Gmail users and prompting Google to issue an official clarification. The newly indexed dataset stems from infostealer malware logs and credential-stuffing lists collected over time, rather than a fresh attack targeting Gmail or any other single provider.


The Origin of the Dataset

The large collection, analyzed by HIBP founder Troy Hunt, contains records captured by infostealer malware that had been active for nearly a year. The data, supplied by Synthient, amounted to roughly 3.5 terabytes, comprising nearly 23 billion rows of stolen information. Each entry typically includes a website name, an email address, and its corresponding password, exposing a wide range of online accounts across various platforms.

Synthient’s Benjamin Brundage explained that this compilation was drawn from continuous monitoring of underground marketplaces and malware operations. The dataset, referred to as the “Synthient threat data,” was later forwarded to HIBP for indexing and public awareness.


How Much of the Data Is New

Upon analysis, Hunt discovered that most of the credentials had appeared in previous breaches. Out of a 94,000-record sample, about 92 percent matched older data, while approximately 8 percent represented new and unseen credentials. This translates to over 16 million previously unrecorded email addresses, fresh data that had not been part of any known breaches or stealer logs before.

To test authenticity, Hunt contacted several users whose credentials appeared in the sample. One respondent verified that the password listed alongside their Gmail address was indeed correct, confirming that the dataset contained legitimate credentials rather than fabricated or corrupted data.


Gmail Accounts Included, but No Evidence of a Gmail Hack

The inclusion of Gmail addresses led some reports to suggest that Gmail itself had been breached. However, Google has publicly refuted these claims, stating that no new compromise has taken place. According to Google, the reports stem from a misunderstanding of how infostealer databases operate, they simply aggregate previously stolen credentials from different malware incidents, not from a new intrusion into Gmail systems.

Google emphasized that Gmail’s security systems remain robust and that users are protected through ongoing monitoring and proactive account protection measures. The company said it routinely detects large credential dumps and initiates password resets to protect affected accounts.

In a statement, Google advised users to adopt stronger account protection measures: “Reports of a Gmail breach are false. Infostealer databases gather credentials from across the web, not from a targeted Gmail attack. Users can enhance their safety by enabling two-step verification and adopting passkeys as a secure alternative to passwords.”


What Users Should Do

Experts recommend that individuals check their accounts on Have I Been Pwned to determine whether their credentials appear in this dataset. Users are also advised to enable multi-factor authentication, switch to passkeys, and avoid reusing passwords across multiple accounts.

Gmail users can utilize Google’s built-in Password Manager to identify weak or compromised passwords. The password checkup feature, accessible from Chrome’s settings, can alert users about reused or exposed credentials and prompt immediate password changes.

If an account cannot be accessed, users should proceed to Google’s account recovery page and follow the verification steps provided. Google also reminded users that it automatically requests password resets when it detects exposure in large credential leaks.


The Broader Security Implications

Cybersecurity professionals stress that while this incident does not involve a new system breach, it reinforces the ongoing threat posed by infostealer malware and poor password hygiene. Sachin Jade, Chief Product Officer at Cyware, highlighted that credential monitoring has become a vital part of any mature cybersecurity strategy. He explained that although this dataset results from older breaches, “credential-based attacks remain one of the leading causes of data compromise.”

Jade further noted that organizations should integrate credential monitoring into their broader risk management frameworks. This helps security teams prioritize response strategies, enforce adaptive authentication, and limit lateral movement by attackers using stolen passwords.

Ultimately, this collection of 183 million credentials serves as a reminder that password leaks, whether new or recycled, continue to feed cybercriminal activity. Continuous vigilance, proactive password management, and layered security practices remain the strongest defenses against such risks.


OpenAI Patches ChatGPT Gmail Flaw Exploited by Hackers in Deep Research Attacks

 

OpenAI has fixed a security vulnerability that could have allowed hackers to manipulate ChatGPT into leaking sensitive data from a victim’s Gmail inbox. The flaw, uncovered by cybersecurity company Radware and reported by Bloomberg, involved ChatGPT’s “deep research” feature. This function enables the AI to carry out advanced tasks such as web browsing and analyzing files or emails stored in services like Gmail, Google Drive, and Microsoft OneDrive. While useful, the tool also created a potential entry point for attackers to exploit.  

Radware discovered that if a user requested ChatGPT to perform a deep research task on their Gmail inbox, hackers could trigger the AI into executing malicious instructions hidden inside a carefully designed email. These hidden commands could manipulate the chatbot into scanning private messages, extracting information such as names or email addresses, and sending it to a hacker-controlled server. The vulnerability worked by embedding secret instructions within an email disguised as a legitimate message, such as one about human resources processes. 

The proof-of-concept attack was challenging to develop, requiring a detailed phishing email crafted specifically to bypass safeguards. However, if triggered under the right conditions, the vulnerability acted like a digital landmine. Once ChatGPT began analyzing the inbox, it would unknowingly carry out the malicious code and exfiltrate data “without user confirmation and without rendering anything in the user interface,” Radware explained. 

This type of exploit is particularly difficult for conventional security tools to catch. Since the data transfer originates from OpenAI’s own infrastructure rather than the victim’s device or browser, standard defenses like secure web gateways, endpoint monitoring, or browser policies are unable to detect or block it. This highlights the growing challenge of AI-driven attacks that bypass traditional cybersecurity protections. 

In response to the discovery, OpenAI stated that developing safe AI systems remains a top priority. A spokesperson told PCMag that the company continues to implement safeguards against malicious use and values external research that helps strengthen its defenses. According to Radware, the flaw was patched in August, with OpenAI acknowledging the fix in September.

The findings emphasize the broader risk of prompt injection attacks, where hackers insert hidden commands into web content or messages to manipulate AI systems. Both Anthropic and Brave Software recently warned that similar vulnerabilities could affect AI-enabled browsers and extensions. Radware recommends protective measures such as sanitizing emails to remove potential hidden instructions and enhancing monitoring of chatbot activities to reduce exploitation risks.

OpenAI Rolls Out Premium Data Connections for ChatGPT Users


The ChatGPT solution has become a transformative artificial intelligence solution widely adopted by individuals and businesses alike seeking to improve their operations. Developed by OpenAI, this sophisticated artificial intelligence platform has been proven to be very effective in assisting users with drafting compelling emails, developing creative content, or conducting complex data analysis by streamlining a wide range of workflows. 

OpenAI is continuously enhancing ChatGPT's capabilities through new integrations and advanced features that make it easier to integrate into the daily workflows of an organisation; however, an understanding of the platform's pricing models is vital for any organisation that aims to use it efficiently on a day-to-day basis. A business or an entrepreneur in the United Kingdom that is considering ChatGPT's subscription options may find that managing international payments can be an additional challenge, especially when the exchange rate fluctuates or conversion fees are hidden.

In this context, the Wise Business multi-currency credit card offers a practical solution for maintaining financial control as well as maintaining cost transparency. This payment tool, which provides companies with the ability to hold and spend in more than 40 currencies, enables them to settle subscription payments without incurring excessive currency conversion charges, which makes it easier for them to manage budgets as well as adopt cutting-edge technology. 

A suite of premium features has been recently introduced by OpenAI that aims to enhance the ChatGPT experience for subscribers by enhancing its premium features. There is now an option available to paid users to use advanced reasoning models that include O1 and O3, which allow users to make more sophisticated analytical and problem-solving decisions. 

The subscription comes with more than just enhanced reasoning; it also includes an upgraded voice mode that makes conversational interactions more natural, as well as improved memory capabilities that allow the AI to retain context over the course of a long period of time. It has also been enhanced with the addition of a powerful coding assistant designed to help developers automate workflows and speed up the software development process. 

To expand the creative possibilities even further, OpenAI has adjusted token limits, which allow for greater amounts of input and output text and allow users to generate more images without interruption. In addition to expedited image generation via a priority queue, subscribers have the option of achieving faster turnaround times during high-demand periods. 

In addition to maintaining full access to the latest models, paid accounts are also provided with consistent performance, as they are not forced to switch to less advanced models when server capacity gets strained-a limitation that free users may still have to deal with. While OpenAI has put in a lot of effort into enriching the paid version of the platform, the free users have not been left out. GPT-4o has effectively replaced the older GPT-4 model, allowing complimentary accounts to take advantage of more capable technology without having to fall back to a fallback downgrade. 

In addition to basic imaging tools, free users will also receive the same priority in generation queues as paid users, although they will also have access to basic imaging tools. With its dedication to making AI broadly accessible, OpenAI has made additional features such as ChatGPT Search, integrated shopping assistance, and limited memory available free of charge, reflecting its commitment to making AI accessible to the public. 

ChatGPT's free version continues to be a compelling option for people who utilise the software only sporadically-perhaps to write occasional emails, research occasionally, and create simple images. In addition, individuals or organisations who frequently run into usage limits, such as waiting for long periods of time for token resettings, may find that upgrading to a paid plan is an extremely beneficial decision, as it unlocks uninterrupted access as well as advanced capabilities. 

In order to transform ChatGPT into a more versatile and deeply integrated virtual assistant, OpenAI has introduced a new feature, called Connectors, which is designed to transform the platform into an even more seamless virtual assistant. It has been enabled by this new feature for ChatGPT to seamlessly interface with a variety of external applications and data sources, allowing the AI to retrieve and synthesise information from external sources in real time while responding to user queries. 

With the introduction of Connectors, the company is moving forward towards providing a more personal and contextually relevant experience for our users. In the case of an upcoming family vacation, for example, ChatGPT can be instructed by users to scan their Gmail accounts in order to compile all correspondence regarding the trip. This allows users to streamline travel plans rather than having to go through emails manually. 

With its level of integration, Gemini is similar to its rivals, which enjoy advantages from Google's ownership of a variety of popular services such as Gmail and Calendar. As a result of Connectors, individuals and businesses will be able to redefine how they engage with AI tools in a new way. OpenAI intends to create a comprehensive digital assistant by giving ChatGPT secure access to personal or organisational data that is residing across multiple services, by creating an integrated digital assistant that anticipates needs, surfaces critical insights, streamlines decision-making processes, and provides insights. 

There is an increased demand for highly customised and intelligent assistance, which is why other AI developers are likely to pursue similar integrations to remain competitive. The strategy behind Connectors is ultimately to position ChatGPT as a central hub for productivity — an artificial intelligence that is capable of understanding, organising, and acting upon every aspect of a user’s digital life. 

In spite of the convenience and efficiency associated with this approach, it also illustrates the need to ensure that personal information remains protected while providing robust data security and transparency in order for users to take advantage of these powerful integrations as they become mainstream. In its official X (formerly Twitter) account, OpenAI has recently announced the availability of Connectors that can integrate with Google Drive, Dropbox, SharePoint, and Box as part of ChatGPT outside of the Deep Research environment. 

As part of this expansion, users will be able to link their cloud storage accounts directly to ChatGPT, enabling the AI to retrieve and process their personal and professional data, enabling it to create responses on their own. As stated by OpenAI in their announcement, this functionality is "perfect for adding your own context to your ChatGPT during your daily work," highlighting the company's ambition of making ChatGPT more intelligent and contextually aware. 

It is important to note, however, that access to these newly released Connectors is confined to specific subscriptions and geographical restrictions. A ChatGPT Pro subscription, which costs $200 per month, is exclusive to ChatGPT Pro subscribers only and is currently available worldwide, except for the European Economic Area (EEA), Switzerland and the United Kingdom. Consequently, users whose plans are lower-tier, such as ChatGPT Plus subscribers paying $20 per month, or who live in Europe, cannot use these integrations at this time. 

Typically, the staggered rollout of new technologies is a reflection of broader challenges associated with regulatory compliance within the EU, where stricter data protection regulations as well as artificial intelligence governance frameworks often delay their availability. Deep Research remains relatively limited in terms of the Connectors available outside the company. However, Deep Research provides the same extensive integration support as Deep Research does. 

In the ChatGPT Plus and Pro packages, users leveraging Deep Research capabilities can access a much broader array of integrations — for example, Outlook, Teams, Gmail, Google Drive, and Linear — but there are some restrictions on regions as well. Additionally, organisations with Team plans, Enterprise plans, or Educational plans have access to additional Deep Research features, including SharePoint, Dropbox, and Box, which are available to them as part of their Deep Research features. 

Additionally, OpenAI is now offering the Model Context Protocol (MCP), a framework which allows workspace administrators to create customised Connectors based on their needs. By integrating ChatGPT with proprietary data systems, organizations can create secure, tailored integrations, enabling highly specialized use cases for internal workflows and knowledge management that are highly specialized. 

With the increasing adoption of artificial intelligence solutions by companies, it is anticipated that the catalogue of Connectors will rapidly expand, offering users the option of incorporating external data sources into their conversations. The dynamic nature of this market underscores that technology giants like Google have the advantage over their competitors, as their AI assistants, such as Gemini, can be seamlessly integrated throughout all of their services, including the search engine. 

The OpenAI strategy, on the other hand, relies heavily on building a network of third-party integrations to create a similar assistant experience for its users. It is now generally possible to access the new Connectors in the ChatGPT interface, although users will have to refresh their browsers or update the app in order to activate the new features. 

As AI-powered productivity tools continue to become more widely adopted, the continued growth and refinement of these integrations will likely play a central role in defining the future of AI-powered productivity tools. A strategic approach is recommended for organisations and professionals evaluating ChatGPT as generative AI capabilities continue to mature, as it will help them weigh the advantages and drawbacks of deeper integration against operational needs, budget limitations, and regulatory considerations that will likely affect their decisions.

As a result of the introduction of Connectors and the advanced subscription tiers, people are clearly on a trajectory toward more personalised and dynamic AI assistance, which is able to ingest and contextualise diverse data sources. As a result of this evolution, it is also becoming increasingly important to establish strong frameworks for data governance, to establish clear controls for access to the data, and to ensure adherence to privacy regulations.

If companies intend to stay competitive in an increasingly automated landscape by investing early in these capabilities, they can be in a better position to utilise the potential of AI and set clear policies that balance innovation with accountability by leveraging the efficiencies of AI in the process. In the future, the organisations that are actively developing internal expertise, testing carefully selected integrations, and cultivating a culture of responsible AI usage will be the most prepared to fully realise the potential of artificial intelligence and to maintain a competitive edge for years to come.

Russian Threat Actors Circumvent Gmail Security with App Password Theft


 

As part of Google's Threat Intelligence Group (GTIG), security researchers discovered a highly sophisticated cyber-espionage campaign orchestrated by Russian threat actors. They succeeded in circumventing Google's multi-factor authentication (MFA) protections for Gmail accounts by successfully circumventing it. 

A group of researchers found that the attackers used highly targeted and convincing social engineering tactics by impersonating Department of State officials in order to establish trust with their victims in the process. As soon as a rapport had been built, the perpetrators manipulated their victims into creating app-specific passwords. 

These passwords are unique 16-character codes created by Google which enable secure access to certain applications and devices when two-factor authentication is enabled. As a result of using these app passwords, which bypass conventional two-factor authentication, the attackers were able to gain persistent access to sensitive emails through Gmail accounts undetected. 

It is clear from this operation that state-sponsored cyber actors are becoming increasingly inventive, and there is also a persistent risk posed by seemingly secure mechanisms for recovering and accessing accounts. According to Google, this activity was carried out by a threat cluster designated UNC6293, which is closely related to the Russian hacking group known as APT29. It is believed that UNC6293 has been closely linked to APT29, a state-sponsored hacker collective. 

APT29 has garnered attention as one of the most sophisticated and sophisticated Advanced Persistent Threat (APT) groups sponsored by the Russian government, and according to intelligence analysts, that group is an extension of the Russian Foreign Intelligence Service (SVR). It is important to note that over the past decade this clandestine collective has orchestrated a number of high-profile cyber-espionage campaigns targeting strategic entities like the U.S. government, NATO member organizations, and prominent research institutes all over the world, including the U.S. government, NATO, and a wide range of academic institutions. 

APT29's operators have a reputation for carrying out prolonged infiltration operations that can remain undetected for extended periods of time, characterised by their focus on stealth and persistence. The tradecraft of their hackers is consistently based on refined social engineering techniques that enable them to blend into legitimate communications and exploit the trust of their intended targets through their tradecraft. 

By crafting highly convincing narratives and gradually manipulating individuals into compromising security controls in a step-by-step manner, APT29 has demonstrated that it has the ability to bypass even highly sophisticated technical defence systems. This combination of patience, technical expertise, and psychological manipulation has earned the group a reputation as one of the most formidable cyber-espionage threats associated with Russian state interests. 

A multitude of names are used by this prolific group in the cybersecurity community, including BlueBravo, Cloaked Ursa, Cosy Bear, CozyLarch, ICECAP, Midnight Blizzard, and The Dukes. In contrast to conventional phishing campaigns, which are based on a sense of urgency or intimidation designed to elicit a quick response, this campaign unfolded in a methodical manner over several weeks. 

There was a deliberate approach by the attackers, slowly creating a sense of trust and familiarity with their intended targets. To make their deception more convincing, they distributed phishing emails, which appeared to be official meeting invitations that they crafted. Often, these messages were carefully constructed to appear authentic and often included the “@state.gov” domain as the CC field for at least four fabricated email addresses. 

The aim of this tactic was to create a sense of legitimacy around the communication and reduce the likelihood that the recipients would scrutinise it, which in turn increased the chances of the communication being exploited effectively. It has been confirmed that the British writer, Keir Giles, a senior consulting fellow at Chatham House, a renowned global affairs think tank, was a victim of this sophisticated campaign. 

A report indicates Giles was involved in a lengthy email correspondence with a person who claimed to be Claudia S Weber, who represented the U.S. Department of State, according to reports. More than ten carefully crafted messages were sent over several weeks, deliberately timed to coincide with Washington's standard business hours. Over time, the attacker gradually gained credibility and trust among the people who sent the messages. 

It is worth noting that the emails were sent from legitimate addresses, which were configured so that no delivery errors would occur, which further strengthened the ruse. When this trust was firmly established, the adversary escalated the scheme by sending a six-page PDF document with a cover letter resembling an official State Department letterhead that appeared to be an official State Department document. 

As a result of the instructions provided in the document, the target was instructed to access Google's account settings page, to create a 16-character app-specific password labelled "ms.state.gov, and to return the code via email under the guise of completing secure onboarding. As a result of the app password, the threat actors ended up gaining sustained access to the victim's Gmail account, bypassing multi-factor authentication altogether as they were able to access their accounts regularly. 

As the Citizen Lab experts were reviewing the emails and PDF at Giles' request, they noted that the emails and PDF were free from subtle language inconsistencies and grammatical errors that are often associated with fraudulent communications. In fact, based on the precision of the language, researchers have suspected that advanced generative AI tools have been deployed to craft polished, credible content for the purpose of evading scrutiny and enhancing the overall effectiveness of the deception as well. 

There was a well-planned, incremental strategy behind the attack campaign that was specifically geared towards increasing the likelihood that the targeted targets would cooperate willingly. As one documented instance illustrates, the threat actor tried to entice a leading academic expert to participate in a private online discussion under the pretext of joining a secure State Department forum to obtain his consent.

In order to enable guest access to Google's platform, the victim was instructed to create an app-specific password using Google's account settings. In fact, the attacker used this credential to gain access to the victim's Gmail account with complete control over all multi-factor authentication procedures, enabling them to effectively circumvent all of the measures in place. 

According to security researchers, the phishing outreach was carefully crafted to look like a routine, legitimate onboarding process, thus making it more convincing. In addition to the widespread trust that many Americans place in official communications issued by U.S. government institutions, the attackers exploited the general lack of awareness of the dangers of app-specific passwords, as well as their widespread reliance on official communications. 

A narrative of official protocol, woven together with professional-sounding language, was a powerful way of making the perpetrators more credible and decreasing the possibility of the target questioning their authenticity in their request. According to cybersecurity experts, several individuals who are at higher risk from this campaign - journalists, policymakers, academics, and researchers - should enrol in Google's Advanced Protection Program (APP). 

A major component of this initiative is the restriction of access to only verified applications and devices, which offers enhanced safeguards. The experts also advise organisations that whenever possible, they should disable the use of app-specific passwords and set up robust internal policies that require any unusual or sensitive requests to be verified, especially those originating from reputable institutions or government entities, as well as implement robust internal policies requiring these types of requests. 

The intensification of training for personnel most vulnerable to these prolonged social engineering attacks, coupled with the implementation of clear, secure channels for communication between the organisation and its staff, would help prevent the occurrence of similar breaches in the future. As a result of this incident, it serves as an excellent reminder that even mature security ecosystems remain vulnerable to a determined adversary combining psychological manipulation with technical subterfuge when attempting to harm them. 

With threat actors continually refining their methods, organisations and individuals must recognise that robust cybersecurity is much more than merely a set of tools or policies. In order to combat cyberattacks as effectively as possible, it is essential to cultivate a culture of vigilance, scepticism, and continuous education. In particular, professionals who routinely take part in sensitive research, diplomatic relations, or public relations should assume they are high-value targets and adopt a proactive defence posture. 

Consequently, any unsolicited instructions must be verified by a separate, trusted channel, hardware security keys should be used to supplement authentication, and account settings should be reviewed regularly for unauthorised changes. For their part, institutions should ensure that security protocols are both accessible and clearly communicated as they are technically sound by investing in advanced threat intelligence, simulating sophisticated phishing scenarios, and investing in advanced threat intelligence. 

Fundamentally, resilience against state-sponsored cyber-espionage is determined by the ability to plan in advance not only how adversaries are going to deploy their tactics, but also the trust they will exploit in order to reach their goals.

Malicious PyPI Packages Exploit Gmail to Steal Sensitive Data

 

Cybersecurity researchers have uncovered a disturbing new tactic involving malicious PyPI packages that use Gmail to exfiltrate stolen data and communicate with threat actors. The discovery, made by security firm Socket, led to the removal of the infected packages from the Python Package Index (PyPI), although not before considerable damage had already occurred.

Socket reported identifying seven malicious packages on PyPI, some of which had been listed for more than four years. Collectively, these packages had been downloaded over 55,000 times. Most were spoofed versions of the legitimate "Coffin" package, with deceptive names such as Coffin-Codes-Pro, Coffin-Codes, NET2, Coffin-Codes-NET, Coffin-Codes-2022, Coffin2022, and Coffin-Grave. Another package was titled cfc-bsb.

According to the researchers, once installed, these packages would connect to Gmail using hardcoded credentials and initiate communication with a command-and-control (C2) server. They would then establish a WebSockets tunnel that leverages Gmail’s email server, allowing the traffic to bypass traditional firewalls and security systems.

This setup enabled attackers to remotely execute code, extract files, and gain unauthorized access to targeted systems.

Evidence suggests that the attackers were mainly targeting cryptocurrency assets. One of the email addresses used by the malware featured terms like “blockchain” and “bitcoin” — an indication of its intent.

“Coffin-Codes-Pro establishes a connection to Gmail’s SMTP server using hardcoded credentials, namely sphacoffin@gmail[.]com and a password,” the report says.
“It then sends a message to a second email address, blockchain[.]bitcoins2020@gmail[.]com politely and demurely signaling that the implant is working.”

Socket has issued a warning to all Python developers and users who may have installed these packages, advising them to remove the compromised libraries immediately, and rotate all sensitive credentials.

The researchers further advised developers to remain alert for suspicious outbound connections:

“especially SMTP traffic”, and warned them not to trust a package just because it was a few years old.
“To protect your codebase, always verify package authenticity by checking download counts, publisher history, and GitHub repository links,” they added.

“Regular dependency audits help catch unexpected or malicious packages early. Keep strict access controls on private keys, carefully limiting who can view or import them in development. Use isolated, dedicated environments when testing third-party scripts to contain potentially harmful code.”

Don’t Delete Spam Emails Too Quickly — Here’s Why


 

Most of us delete spam emails as soon as they land in our inbox. They’re irritating, unwanted, and often contain suspicious content. But what many people don’t know is that keeping them, at least briefly can actually help improve your email security in the long run.


How Spam Helps Train Your Email Filter

Email services like Gmail, Outlook, and others have systems that learn to detect unwanted emails over time. But for these systems to improve, they need to be shown which emails are spam. That’s why it’s better to mark suspicious messages as spam instead of just deleting them.

If you’re using a desktop email app like Outlook or Thunderbird, flagging such emails as “junk” helps the program recognize future threats better. If you're reading emails through a browser, you can select the unwanted message and use the “Spam” or “Move to Junk” option to send it to the right folder.

Doing this regularly not only protects your own inbox but can also help your co-workers if you’re using a shared office mail system. The more spam messages you report, the faster the system learns to block similar ones.


No Need to Worry About Storage

Spam folders usually empty themselves after 30 days. So you don’t have to worry about them piling up unless you want to manually clear them every month.


Never Click 'Unsubscribe' on Random Emails

Some emails, especially promotional ones, come with an unsubscribe button. While this can work with genuine newsletters, using it on spam emails is risky. Clicking “unsubscribe” tells scammers that your email address is real and active. This can lead to more dangerous emails or even malware attacks.


How to Stay Safe from Email Scams

1. Be alert. If something feels off, don’t open it.

2. Avoid acting quickly. Scammers often try to pressure you.

3. Don’t click on unknown links. Instead, visit websites directly.

4. Never open files from unknown sources. They can hide harmful programs.

5. Use security tools. Good antivirus software can detect harmful links and block spam automatically.


Helpful Software You Can Use

Programs like Bitdefender offer full protection from online threats. They can block viruses, dangerous attachments, and suspicious websites. Bitdefender also includes a chatbot where you can send messages to check if they’re scams. Another option is Avast One, which keeps your devices safe from fake websites and spam, even on your phone. Both are easy to use and budget-friendly.

While it may seem odd, keeping spam emails for a short time and using them to train your inbox filter can actually make your online experience safer. Just remember — never click links or download files from unknown senders. Taking small steps can protect you from big problems.

Gmail Users Face a New Dilemma Between AI Features and Data Privacy

 



Google’s Gmail is now offering two new upgrades, but here’s the catch— they don’t work well together. This means Gmail’s billions of users are being asked to pick a side: better privacy or smarter features. And this decision could affect how their emails are handled in the future.

Let’s break it down. One upgrade focuses on stronger protection of your emails, which works like advanced encryption. This keeps your emails private, even Google won’t be able to read them. The second upgrade brings in artificial intelligence tools to improve how you search and use Gmail, promising quicker, more helpful results.

But there’s a problem. If your emails are fully protected, Gmail’s AI tools can’t read them to include in its search results. So, if you choose privacy, you might lose out on the benefits of smarter searches. On the other hand, if you want AI help, you’ll need to let Google access more of your email content.

This challenge isn’t unique to Gmail. Many tech companies are trying to combine stronger security with AI-powered features, but the two don’t always work together. Apple tried solving this with a system that processes data securely on your device. However, delays in rolling out their new AI tools have made their solution uncertain for now.

Some reports explain the choice like this: if you turn on AI features, Google will use your data to power smart tools. If you turn it off, you’ll have better privacy, but lose some useful options. The real issue is that opting out isn’t always easy. Some settings may remain active unless you manually turn them off, and fully securing your emails still isn’t simple.

Even when extra security is enabled, email systems have limitations. For example, Apple’s iCloud Mail doesn’t use full end-to-end encryption because it must work with global email networks. So even private emails may not be completely safe.

This issue goes beyond Gmail. Other platforms are facing similar challenges. WhatsApp, for example, added a privacy mode that blocks saving chats and media, but also limits AI-related features. OpenAI’s ChatGPT can now remember what you told it in past conversations, which may feel helpful but also raises questions about how your personal data is being stored.

In the end, users need to think carefully. AI tools can make email more useful, but they come with trade-offs. Email has never been a perfectly secure space, and with smarter AI, new threats like scams and data misuse may grow. That’s why it’s important to weigh both sides before making a choice.



Google Rolls Out Simplified End-to-End Encryption for Gmail Enterprise Users

 

Google has begun the phased rollout of a new end-to-end encryption (E2EE) system for Gmail enterprise users, simplifying the process of sending encrypted emails across different platforms.

While businesses could previously adopt the S/MIME (Secure/Multipurpose Internet Mail Extensions) protocol for encrypted communication, it involved a resource-intensive setup — including issuing and managing certificates for all users and exchanging them before messages could be sent.

With the introduction of Gmail’s enhanced E2EE model, Google says users can now send encrypted emails to anyone, regardless of their email service, without needing to handle complex certificate configurations.

"This capability, requiring minimal efforts for both IT teams and end users, abstracts away the traditional IT complexity and substandard user experiences of existing solutions, while preserving enhanced data sovereignty, privacy, and security controls," Google said today.

The rollout starts in beta with support for encrypted messages sent within the same organization. In the coming weeks, users will be able to send encrypted emails to any Gmail inbox — and eventually to any email address, Google added.

"We're rolling this out in a phased approach, starting today, in beta, with the ability to send E2EE emails to Gmail users in your own organization. In the coming weeks, users will be able to send E2EE emails to any Gmail inbox, and, later this year, to any email inbox."

To compose an encrypted message, users can simply toggle the “Additional encryption” option while drafting their email. If the recipient is a Gmail user with either an enterprise or personal account, the message will decrypt automatically.

For users on the Gmail mobile app or non-Gmail email services, a secure link will redirect them to view the encrypted message in a restricted version of Gmail. These recipients can log in using a guest Google Workspace account to read and respond securely.

If the recipient already has S/MIME enabled, Gmail will continue to use that protocol automatically for encryption — just as it does today.

The new encryption capability is powered by Gmail's client-side encryption (CSE), a Workspace control that allows organizations to manage their own encryption keys outside of Google’s infrastructure. This ensures sensitive messages and attachments are encrypted locally on the client device before being sent to the cloud.

The approach supports compliance with various regulatory frameworks, including data sovereignty, HIPAA, and export control policies, by ensuring that encrypted content is inaccessible to both Google and any external entities.

Gmail’s CSE feature has been available to Google Workspace Enterprise Plus, Education Plus, and Education Standard customers since February 2023. It was initially introduced in beta for Gmail on the web in December 2022, following earlier launches across Google Drive, Docs, Sheets, Slides, Meet, and Calendar.

Gmail Upgrade Announced by Google with Three Billion Users Affected

 


The Google team has officially announced the launch of a major update to Gmail, which will enhance functionality, improve the user experience, and strengthen security. It is anticipated that this update to one of the world’s most commonly used email platforms will have a significant impact on both individuals as well as businesses, providing a more seamless, efficient, and secure way to manage digital communications for individuals and businesses alike.

The Gmail email service, which was founded in 2004 and has consistently revolutionized the email industry with its extensive storage, advanced features, and intuitive interface, has continuously revolutionized the email industry. In recent years, it has grown its capabilities by integrating with Google Drive, Google Chat, and Google Meet, thus strengthening its position within the larger Google Workspace ecosystem by extending its capabilities. 

The recent advancements from Google reflect the company’s commitment to innovation and leadership in the digital communication technology sector, particularly as the competitive pressures intensify in the email and productivity services sector. Privacy remains a crucial concern as the digital world continues to evolve. Google has stressed the company’s commitment to safeguarding user data, and is ensuring that user privacy remains of the utmost importance. 

In a statement released by the company, it was stated that the new tool could be managed through personalization settings, so users would be able to customize their experience according to their preferences, allowing them to tailor their experience accordingly. 

However, industry experts suggest that users check their settings carefully to ensure their data is handled in a manner that aligns with their privacy expectations, despite these assurances. Those who are seeking to gain a greater sense of control over their personal information may find it prudent to disable AI training features. In particular, this measured approach is indicative of broader discussions regarding the trade-off between advanced functionality and data privacy, especially as the competition from Microsoft and other major technology companies continues to gain ground. 

Increasingly, AI-powered services are analyzing user data and this has raised concerns about privacy and data security, which has led to a rise in privacy concerns. Chrome search histories, for example, offer highly personal insights into a person’s search patterns, as well as how those searches are phrased. As long as users grant permission to use historical data, the integration of AI will allow the company to utilize this historical data to create a better user experience.

It is also important to remember, however, that this technology is not simply a tool for executive assistants, but rather an extremely sophisticated platform that is operated by one of the largest digital marketing companies in the world. In the same vein, Microsoft's recent approach to integrating artificial intelligence with its services has created a controversy about user consent and data access, leading users to exercise caution and remain vigilant.

According to PC World, Copilot AI, the company's software for analyzing files stored on OneDrive, now has an automatic opt-in option. Users may not have been aware that this feature, introduced a few months ago, allowed them to consent to its use before the change. It has been assured that users will have full Although users have over their data they have AI-driven access to cloud-stored files, the transparency of such integrations is s being questioned as well as the extent of their data. There remain many concerns among businesses that are still being questioned. Businesses remain concerned aboutness, specifically about privacy issues.

The results of Global Data (cited by Verdict) indicate that more than 75% of organizations are concerned about these risks, contributing to a slowdown in the adoption of artificial intelligence. A study also indicates that 59% of organizations lack confidence in integrating artificial intelligence into their operations, with only 21% reporting an extensive or very extensive deployment of artificial intelligence. 

In the same way that individual users struggle to keep up with the rapid evolution of artificial intelligence technologies, businesses are often unaware of the security and privacy threats that these innovations pose. As a consequence, industry experts advise organizations to prioritize governance and control mechanisms before adopting AI-based solutions to maintain control over their data. CISOs (chief information security officers) might need to adopt a more cautious approach to mitigate potential risks, such as restricting AI adoption until comprehensive safeguards have been implemented. 

The introduction of AI-powered innovations is often presented as seamless and efficient tools, but they are supported by extensive frameworks for collecting and analyzing data. For these systems to work effectively, they must have well-defined policies in place that protect sensitive data from being exposed or misused. As AI adoption continues to grow, the importance of stringent regulation and corporate oversight will only increase. 

To improve the usability, security and efficiency of Gmail, as well as make it easier for both individuals and businesses, Google's latest update has been introduced to the Gmail platform. There are several features included in this update, including AI-driven features, improved interfaces, and improved search capabilities, which will streamline email management and strengthen security against cybersecurity threats. 

By integrating Google Workspace deeper, businesses will benefit from improved security measures that safeguard sensitive information while enabling teams to work more efficiently and effectively. This will allow businesses to collaborate more seamlessly while reducing cybersecurity risks. The improvements added by Google to Gmail allow it to be a critical tool within corporate environments, enhancing productivity, communication, and teamwork. With this update, Google confirms Gmail's reputation as a leading email and productivity tool. 

In addition to optimizing the user experience, integrating intelligent automation, strengthening security protocols, and expanding collaborative features, the platform maintains its position as a leading digital communication platform. During the rollout over the coming months, users can expect a more robust and secure email environment that keeps pace with the changing demands of today's digital interactions as the rollout progresses.

Why You Shouldn’t Delete Spam Emails Right Away

 



Unwanted emails, commonly known as spam, fill up inboxes daily. Many people delete them without a second thought, assuming it’s the best way to get rid of them. However, cybersecurity experts advise against this. Instead of deleting spam messages immediately, marking them as junk can improve your email provider’s ability to filter them out in the future.  


The Importance of Marking Emails as Spam  

Most email services, such as Gmail, Outlook, and Yahoo, use automatic spam filters to separate important emails from unwanted ones. These filters rely on user feedback to improve their accuracy. If you simply delete spam emails without marking them as junk, the system does not learn from them and may not filter similar messages in the future.  

Here’s how you can help improve your email’s spam filter:  

• If you use an email app (like Outlook or Thunderbird): Manually mark unwanted messages as spam if they appear in your inbox. This teaches the software to recognize similar messages and block them.  

• If you check your email in a web browser: If a spam message ends up in your inbox instead of the spam folder, select it and move it to the junk folder. This helps train the system to detect similar threats.  

By following these steps, you not only reduce spam in your inbox but also contribute to improving the filtering system for other users.  


Why You Should Never Click "Unsubscribe" on Suspicious Emails  

Many spam emails include an option to "unsubscribe," which might seem like an easy way to stop receiving them. However, clicking this button can be risky.  

Cybercriminals send millions of emails to random addresses, hoping to find active users. When you click "unsubscribe," you confirm that your email address is valid and actively monitored. Instead of stopping, spammers may send you even more unwanted emails. In some cases, clicking the link can also direct you to malicious websites or even install harmful software on your device.  

To stay safe, avoid clicking "unsubscribe" on emails from unknown sources. Instead, mark them as spam and move them to the junk folder.  


Simple Ways to Protect Yourself from Spam  

Spam emails are not just a nuisance; they can also be dangerous. Some contain links to fake websites, tricking people into revealing personal information. Others may carry harmful attachments that install malware on your device. To protect yourself, follow these simple steps:  

1. Stay Alert: If an email seems suspicious or asks for personal information, be cautious. Legitimate companies do not ask for sensitive details through email.  

2. Avoid Acting in a Hurry: Scammers often create a sense of urgency, pressuring you to act quickly. If an email claims you must take immediate action, think twice before responding.  

3. Do Not Click on Unknown Links: If an email contains a link, avoid clicking it. Instead, visit the official website by typing the web address into your browser.  

4. Avoid Opening Attachments from Unknown Senders: Malware can be hidden in email attachments, including PDFs, Word documents, and ZIP files. Open attachments only if you trust the sender.  

5. Use Security Software: Install antivirus and anti-spam software to help detect and block harmful emails before they reach your inbox.  


Spam emails may seem harmless, but how you handle them can affect your online security. Instead of deleting them right away, marking them as spam helps email providers refine their filters and block similar messages in the future. Additionally, never click "unsubscribe" in suspicious emails, as it can lead to more spam or even security threats. By following simple email safety habits, you can reduce risks and keep your inbox secure.

Google to Introduce QR Codes for Gmail 2FA Amid Rising Security Concerns

 

Google is set to introduce QR codes as a replacement for SMS-based two-factor authentication (2FA) codes for Gmail users in the coming months. While this security update aims to improve authentication methods, it also raises concerns, as QR code-related scams have been increasing. Even Google’s own threat intelligence team and law enforcement agencies have warned about the risks associated with malicious QR codes. QR codes, short for Quick Response codes, were originally developed in 1994 for the Japanese automotive industry. Unlike traditional barcodes, QR codes store data in both horizontal and vertical directions, allowing them to hold more information. 

A QR code consists of several components, including finder patterns in three corners that help scanners properly align the code. The black and white squares encode data in binary format, while error correction codes ensure scanning remains possible even if part of the code is damaged. When scanned, the embedded data—often a URL—is extracted and displayed to the user. However, the ability to store and quickly access URLs makes QR codes an attractive tool for cybercriminals. Research from Cisco Talos in November 2024 found that 60% of emails containing QR codes were spam, and many included phishing links. While some emails use QR codes for legitimate purposes, such as event registrations, others trick users into revealing sensitive information. 

According to Cisco Talos researcher Jaeson Schultz, phishing attacks often use QR codes for fraudulent multi-factor authentication requests to steal login credentials. There have been multiple incidents of QR code scams in recent months. In one case, a 70-year-old woman scanned a QR code at a parking meter, believing she was paying for parking, but instead, she unknowingly subscribed to a premium gaming service. Another attack involved scammers distributing printed QR codes disguised as official government severe weather alerts, tricking users into downloading malicious software. Google itself has warned that Russian cybercriminals have exploited QR codes to target victims through the Signal app’s linked devices feature. 

Despite these risks, users can protect themselves by following basic security practices. It is essential to verify where a QR code link leads before clicking. A legitimate QR code should provide additional context, such as a recognizable company name or instructions. Physical QR codes should be checked for tampering, as attackers often place fraudulent stickers over legitimate ones. Users should also avoid downloading apps directly from QR codes and instead use official app stores. 

Additionally, QR-based payment requests in emails should be verified through a company’s official website or customer service. By exercising caution, users can mitigate the risks associated with QR codes while benefiting from their convenience.

2FA Under Attack as Astaroth Phishing Kit Spreads

 


Astaroth is the latest phishing tool discovered by cybercriminals. It has advanced capabilities that allow it to circumvent security measures such as two-factor authentication (2FA) when used against it. In January 2025, Astaroth made its public debut across multiple platforms, including Gmail, Yahoo, and Office 365, with sophisticated technologies such as session hijacking and real-time credentials interceptions, which compromise user accounts across multiple platforms. 

SlashNext researchers claim Astaroth makes use of a reverse proxy called an evilginx-style proxy to place itself between legitimate login pages and users. As a result, the tool is capable of intercepting and capturing sensitive credentials, such as usernames, passwords, 2FA tokens, and session cookies, without triggering security alerts, thereby making the tool effective. 

It has been demonstrated that attackers who have obtained these session cookies will be able to hijack authenticated sessions, bypass additional security protocols, and gain unauthorized access to user accounts once they have acquired these cookies. Astaroth demonstrates the evolution of cyber threats and the sophistication of phishing techniques that compromise online security. This development highlights how cybercriminals have been evolving their methods of phishing over the years.

Clearly, Astaroth highlights how cybercriminals' tactics have evolved over the last decade, as phishing has evolved into a lucrative business. The sophistication of sophisticated attacks has now reached a point where it is now marketed like commercial software products, with regular updates, customer support, and testing guarantees attached to them. 

The attacker can intercept real-time credentials and use reverse proxy techniques in order to hijack authenticated sessions in order to bypass even the most robust phishing defences, such as Multi Factor Authentication (MFA), which are designed to protect against phishing attacks. Due to the widespread availability of phishing kits such as Astaroth, which significantly reduces the barrier to entry, less experienced cybercriminals are now capable of conducting highly effective attacks given that the barriers to entry have been significantly lowered. 

The key to mitigating these threats is to adopt a comprehensive, multilayered security strategy that is both comprehensive and multifaceted. It must have a password manager, endpoint security controls, real-time threat monitoring, and ongoing employee training to ensure that employees are aware of cybersecurity threats in real time. 

As an additional consideration, implementing Privillege Access Management (PAM) is equally vital, since it prevents unauthorized access to critical systems, even if login credentials are compromised, through the use of PAM. Business owners remain vulnerable to increasingly sophisticated phishing techniques that can circumvent the traditional defenses of their organisations without appropriate proactive security measures. 

The Astaroth phishing kit has been developed to enable a more effective method of bypassing multi-factor authentication (MFA). By using an evilginx reverse proxy, it intercepts authentication processes in real time as they are happening. By using Astaroth, attackers will be able to steal authenticated sessions and hack them seamlessly with no technical knowledge. Astaroth is different from traditional phishing tools, which capture only static credentials; instead, it dynamically retrieves authorization tokens, 2FA tokens, and session cookies. This tool is a man-in-the-middle attack that renders conventional anti-phishing defenses and multi-factor authentication protections ineffective by acting as an intermediary. 

Discovered by SlashNext Threat Researchers on cybercrime marketplaces, Astaroth is marketed as a tool that can be used easily. It is a 2-in-1 solution that sells for $2000 and includes six months of continuous updates, which includes the newest bypass techniques, as well as pre-purchase testing to demonstrate its effectiveness in real-world attacks if the buyer wants to establish credibility within cybercriminal networks. There is no doubt that the sophistication of phishing kits such as Astaroth, as well as the implementation of behaviour-based authentication, endpoint security controls, and continuous threat monitoring, are critical to organizations in order to defend themselves from these ever-evolving cyber threats that are continually evolving. 

As a means of expanding the company's customer base, Astaroth's developers have publicly revealed the methodologies they use to bypass security measures, such as reCAPTCHA or BotGuard, as a way of demonstrating the kit's effectiveness at circumventing automatic security measures. Cybercriminals in cybercrime forums and underground marketplaces are actively promoting Astaroth among their communities and are primarily distributing it through Telegram, leading to its widespread adoption among cybercriminals world-wide. 

There are several advantages to using these platforms, the most important of which is their accessibility, along with the anonymity they provide. This makes monitoring, tracking, and disrupting the sale and distribution of phishing kits very challenging for law enforcement agencies. There is a particular application known as Telegram which is commonly used by cybercriminals to communicate and to distribute their illicit activities due to its end-to-end encryption, private groups, and minimal oversight. This makes it very difficult for law enforcement to trace illicit activities on Telegram. 

It may not only facilitate the proliferation of Astaroth on the dark web, but also on underground marketplaces - both of which allow threat actors to engage in peer-to-peer transactions without disclosing their identities to each other. The fact that these platforms are decentralized, along with the fact that cryptocurrency payments are used in conjunction with them, adds more layers of protection for cybercriminals, making it even more difficult for authorities to take enforcement action against them. Astaroth continue to be embraced by cybercriminal communities and is lowering the barrier to entry for less-experienced attackers, which in turn is promoting phishing-as-a-service (PhaaS) models which are becoming more prevalent as a consequence. 

Due to the complexities posed by sophisticated phishing kits like Astaroth, security professionals emphasize the need for proactive security measures, which include real-time threat intelligence, endpoint detection, and multi-layered authentication strategies, as well as real-time threat intelligence. Aside from offering custom hosting solutions, Astaroth also offers bulletproof hosting, which will make Astaroth more resilient against legal authorities’ efforts to take down its websites. 

Cybercriminals are able to conduct attacks with minimal disruption in jurisdictions with weak regulatory oversight when using the phishing kit since it operates in jurisdictions that lack regulatory oversight. As a Field CTO of SlashNext, J Stephen Kowski believes that the emergence of Astaroth with regards to authentication is one of the most important implication that could be borne out by the fact that even the most robust authentication systems can be compromised if the attackers obtain the two-factor authentication (2FA) codes and session information during the authentication process in real time. 

Thomas Richards, Principal Consultant and Network and Red Team Practice Director at Black Duck, a Burlington, Massachusetts-based provider of application security solutions, has emphasized the sophistication and severity of the Astaroth phishing kit. According to Richards, this phishing kit demonstrates an advanced level of complexity, making it increasingly difficult for users to identify and avoid such attacks. "Traditional security awareness training often instructs users to recognize phishing attempts by looking for red flags such as suspicious URLs, grammatical errors, or lack of SSL certification. 

However, Astaroth’s highly sophisticated approach significantly reduces these indicators, making detection far more challenging," Richards stated. Furthermore, the infrastructure supporting these attacks is often hosted by providers that do not cooperate with law enforcement agencies, complicating efforts to dismantle these operations. In response to this growing threat, the United States and several European nations have imposed sanctions on countries that provide bulletproof hosting services, which are frequently exploited by cybercriminals to evade legal action. 

Richards advises users to exercise extreme caution when receiving emails that appear to originate from legitimate organizations and contain urgent requests for immediate action. Rather than clicking on embedded links, users should manually navigate to the official website to verify the authenticity of any alerts or account-related issues. This proactive approach is essential in mitigating the risks posed by advanced phishing campaigns like Astaroth. 

Organizations must implement advanced security measures beyond traditional login protections in order to protect themselves from these threats. According to Thomas Richards, a Principal Consultant and Network and Red Team Practice Director for Black Duck, a Burlington-based company that provides applications security solutions, Astaroth's phishing kit is sophisticated and quite severe. As Richards points out, this phishing kit shows a remarkable degree of complexity, which makes it increasingly difficult for users to identify and avoid attacks such as these as they run across them. 

It has always been taught to users during traditional security awareness training to look for red flags, such as suspicious URLs, grammatical errors, or a lack of SSL certification, so they can identify phishing attempts. Although these indicators are largely reduced by Astaroth's highly sophisticated approach, Richards noted that the detection of them is much more challenging as a result. The infrastructure that supports these malicious attacks is typically hosted by providers who do not cooperate with law enforcement agencies, which complicates the process of dismantling these attacks.

Several European countries and the United States have increased sanctions in response to its growing threat, increasing the chance that these countries (including the United States) will use defenseless host hosting services, which are regularly exploited by cybercriminals to avoid legal action and avoid repercussions for their crimes. 

The American scientist Richards urges users to exercise extreme caution if they receive an email that appears to be coming from a legitimate organization and contains urgent requests for action that need to be taken immediately. As a precaution, users should not click on embedded links in emails, but instead should visit the official site to verify the authenticity of any alerts they receive or account-related issues. Taking a proactive approach effectively mitigates the threats posed by advanced phishing campaigns such as Astaroth.