
Google Cloud to Enforce Multi-Factor Authentication for Enhanced Security in 2025

 


As part of its commitment to protecting users, Google has announced that by the end of 2025 all Google Cloud accounts will have to use multi-factor authentication (MFA), also called two-step verification. Given the sensitive nature of cloud deployments, and the fact that phishing and stolen credentials remain among the top attack vectors observed by Mandiant Threat Intelligence, Google Cloud users will be required to complete a second step of verification, said Mayank Upadhyay, Google Cloud's VP of Engineering and Distinguished Engineer.

Google's cloud division plans to make multi-factor authentication (MFA) mandatory for all users by the end of 2025, part of its broader push to improve account security across the company. The tech giant announced that it will begin the transition with a phased rollout to help users adapt to the change more smoothly.

The technology and cybersecurity industries have long recommended multi-factor authentication as a highly effective control. By adding an extra verification step, MFA dramatically reduces the risk of unauthorized logins, data breaches, and account takeovers, even when a user's password is compromised. As attackers mount increasingly sophisticated campaigns against cloud infrastructure and sensitive data, Google's push for mandatory MFA is part of a growing trend in cybersecurity.

According to the announcement, Google plans to require multi-factor authentication (MFA) for all Google Cloud accounts by the end of 2025. Google says MFA will strengthen security while maintaining a smooth and convenient online experience. The company reports that around 70% of Google users have already enabled the feature, and security consultants are urging those still on the fence to switch to MFA at once. Both users and administrators who access Google Cloud will be affected by the new requirement.

Generally speaking, this change will not affect general consumer Google accounts. In the official announcement, Mayank Upadhyay, Google Cloud's VP of Engineering and Distinguished Engineer, stated that the company plans to roll out mandatory MFA throughout 2025 in a phased approach, with assistance provided to help customers plan their deployments. Here is what that phased approach to the mandatory 2FA requirement means in practice for Google Cloud users.

The rollout will have three phases. The first, which begins in November 2024, focuses on encouraging adoption: Google will nudge users who still sign in with only a password to enable 2FA, something Google estimates roughly 70% of its users have already done. The Google Cloud console will surface regular reminders and information, and resources will be available to help organizations raise awareness, plan and document their MFA rollout, run tests, and enable MFA for users with minimal friction.

In the second phase, which Google has only scheduled for "early 2025" without committing to a concrete date, MFA will become required for all users, new or existing, who sign in to Google Cloud with a password.

It is important to note that all new Google Cloud users will be required to set up two-factor authentication to sign in; this is a hard requirement with no exceptions. Once the 2FA notification appears in the Google Cloud console, the Firebase console, and the gcloud command-line tool, Upadhyay said, users will need to enrol in 2FA to keep using those tools. The final phase will arrive by the end of 2025, when 2FA becomes mandatory for all users who sign in to Google Cloud through federated authentication.

The announcement confirmed that there will be flexible options for meeting this requirement: users can enable 2FA with their primary identity provider before accessing Google Cloud, or add the extra layer through Google's own system by enabling 2FA on the Google account they use with the cloud service. Chris Fuller, senior director of technical field operations at Obsidian Security, noted that the threat landscape has rapidly grown more sophisticated as MFA has become more prevalent, pointing to breach data showing that 89% of compromised accounts had MFA enabled.

"Phishing-as-a-service toolkits such as the Mamba toolkit, which can be bought for $250 a month, along with non-human identity compromises, suggest that identity compromise will continue regardless of the technology used," he said. "Google's phased rollout is designed to ease users into the new requirement, which could otherwise have met resistance due to perceived friction in the user experience, especially if it were imposed suddenly," added Patrick Tiquet, Vice President of Security and Compliance at Keeper Security. Tiquet further emphasized that organizations using Google Cloud will need to prepare strategically for MFA implementation across their workforce.

This preparation includes comprehensive employee training on the critical role of multi-factor authentication in safeguarding organizational data and systems. Effective MFA adoption may be supported by tools such as password managers, which can streamline the process by securely storing and automatically filling MFA codes. Proper planning and training will be essential for organizations to successfully integrate MFA and enhance security measures across their teams.

Securing Generative AI: Tackling Unique Risks and Challenges

 

Generative AI has introduced a new wave of technological innovation, but it also brings a set of unique challenges and risks. According to Phil Venables, Chief Information Security Officer of Google Cloud, addressing these risks requires expanding traditional cybersecurity measures. Generative AI models are prone to issues such as hallucinations—where the model produces inaccurate or nonsensical content—and the leaking of sensitive information through model outputs. These risks necessitate the development of tailored security strategies to ensure safe and reliable AI use. 

One of the primary concerns with generative AI is data integrity. Models rely heavily on vast datasets for training, and any compromise in this data can lead to significant security vulnerabilities. Venables emphasizes the importance of maintaining the provenance of training data and implementing controls to protect its integrity. Without proper safeguards, models can be manipulated through data poisoning, which can result in the production of biased or harmful outputs. Another significant risk involves prompt manipulation, where adversaries exploit vulnerabilities in the AI model to produce unintended outcomes. 

This can include injecting malicious prompts or using adversarial tactics to bypass the model’s controls. Venables highlights the necessity of robust input filtering mechanisms to prevent such manipulations. Organizations should deploy comprehensive logging and monitoring systems to detect and respond to suspicious activities in real time. In addition to securing inputs, controlling the outputs of AI models is equally critical. Venables recommends the implementation of “circuit breakers”—mechanisms that monitor and regulate model outputs to prevent harmful or unintended actions. This ensures that even if an input is manipulated, the resulting output is still within acceptable parameters. Infrastructure security also plays a vital role in safeguarding generative AI systems. 

Venables advises enterprises to adopt end-to-end security practices that cover the entire lifecycle of AI deployment, from model training to production. This includes sandboxing AI applications, enforcing the least privilege principle, and maintaining strict access controls on models, data, and infrastructure. Ultimately, securing generative AI requires a holistic approach that combines innovative security measures with traditional cybersecurity practices. 
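To make the input-filtering and output "circuit breaker" ideas discussed above concrete, here is a minimal Python sketch of how such guardrails might wrap a model call. The generate() function, the pattern lists, and the log messages are illustrative placeholders, not Google Cloud's actual controls.

import re
import logging

log = logging.getLogger("genai_guardrails")

BLOCKED_INPUT_PATTERNS = [r"ignore (all )?previous instructions"]   # crude prompt-injection check
BLOCKED_OUTPUT_PATTERNS = [r"\b(?:\d[ -]*?){13,16}\b"]              # e.g. card-number-like strings

def generate(prompt: str) -> str:
    # Placeholder for the real model call.
    return "model output"

def guarded_generate(prompt: str) -> str:
    # Input filtering: reject prompts that look like injection attempts.
    for pattern in BLOCKED_INPUT_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            log.warning("blocked suspicious prompt")
            return "Request rejected by input filter."

    output = generate(prompt)

    # Circuit breaker: withhold outputs that match sensitive-data patterns,
    # so a manipulated input still cannot produce a harmful response.
    for pattern in BLOCKED_OUTPUT_PATTERNS:
        if re.search(pattern, output):
            log.warning("circuit breaker tripped on model output")
            return "Response withheld by output policy."
    return output

In practice both the input and output checks would feed the same logging and monitoring pipeline described above, so that blocked requests can be investigated in real time.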

By focusing on data integrity, robust monitoring, and comprehensive infrastructure controls, organizations can mitigate the unique risks posed by generative AI. This proactive approach ensures that AI systems are not only effective but also safe and trustworthy, enabling enterprises to fully leverage the potential of this groundbreaking technology while minimizing associated risks.

Think You’re Safe? Cyberattackers Are Exploiting Flaws in Record Time

 


Attackers are exploiting software vulnerabilities at an unprecedented pace, Mandiant has announced. According to the cybersecurity firm's newly released report, which analysed 138 vulnerabilities exploited in 2023, attackers now exploit a flaw within five days of disclosure on average. That speed makes it paramount for organisations to apply system updates quickly. The study, published on the Google Cloud blog, shows that the window has shrunk for both previously unknown vulnerabilities, called zero-days, and known ones, called N-days.

Exploitation Speed Is Increasing

According to Mandiant's research, the time-to-exploit, a statistic measuring the average number of days attackers take to exploit a vulnerability after it is discovered, has been shrinking rapidly. In 2018 it took attackers roughly 63 days to exploit vulnerabilities; in 2023 it took merely five. This shows that attackers are getting far more efficient at exploiting security flaws before vendors can patch them satisfactorily.
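As a simple illustration of how the time-to-exploit metric is computed, the Python sketch below averages the gap between disclosure and first observed exploitation over a set of made-up sample records; the CVE identifiers and dates are invented for the example.

from datetime import date
from statistics import mean

# Invented sample observations, each pairing a disclosure date with the date of
# first observed exploitation in the wild.
observations = [
    {"cve": "CVE-EXAMPLE-0001", "disclosed": date(2023, 3, 1), "first_exploited": date(2023, 3, 4)},
    {"cve": "CVE-EXAMPLE-0002", "disclosed": date(2023, 6, 10), "first_exploited": date(2023, 6, 17)},
]

time_to_exploit = mean((o["first_exploited"] - o["disclosed"]).days for o in observations)
print(f"average time-to-exploit: {time_to_exploit:.1f} days")  # 5.0 days for this sample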

Zero-Day and N-Day Vulnerabilities

The report distinguishes between zero-day vulnerabilities, undisclosed and unpatched flaws that attackers exploit before a fix exists, and N-day vulnerabilities, known flaws that attackers target after patches have been released. In 2023 the mix of targeted vulnerabilities shifted, with zero-day exploitation rising to a 70:30 ratio relative to N-day attacks. The trend shows that attackers increasingly favour zero-day exploits, likely because they grant immediate access to systems and sensitive data before the vulnerability is publicly known.

Timing and Frequency of Exploitation

The data also shows that N-day vulnerabilities are most heavily targeted in the first few weeks after a patch is released. Of the observed N-day exploitations, 56% occurred within the first month after a patch was released, 5% within just one day, and 29% within the first week. This pace makes it critical for organizations to apply patches as soon as they become available.

Widening Scope for Attack Targets

Over the past ten years, attackers have enormously widened their scope by targeting a growing list of vendors: according to the report, the number of affected vendors rose from 25 in 2018 to 56 in 2023. This expansion makes life harder for security teams, who now face a significantly larger attack surface and a greater chance of attacks across many systems and software applications.


Case Studies Exposing Different Exploits

Mandiant's report includes case studies showing how attackers exploit vulnerabilities. For example, CVE-2023-28121, a vulnerability in the WooCommerce Payments plugin for WordPress, was disclosed in March 2023. It saw little exploitation at first, but attacks surged once technical details and a weaponized tool were published online: exploitation began one day after the tool's release and peaked at 1.3 million attack attempts in a single day. This rapid escalation shows how sought-after certain vulnerabilities become once exploit tools are widely available.


CVE-2023-27997, a vulnerability in the SSL VPN component of Fortinet's FortiOS, followed a different timeline. Although the flaw received widespread media attention when it was first disclosed, exploitation only began about two to three months later, probably because the exploit is difficult to carry out and requires intricate techniques. The WooCommerce plugin exploit, by contrast, was far simpler, requiring little more than a crafted HTTP header.

Complexity of Patching Systems

While timely patching is essential, it is not easy, especially across massive fleets of systems. Fred Raynal, CEO of Quarkslab, notes that patching two or three devices is feasible, but patching thousands of them requires significant coordination and resources. The complexity is even greater for devices such as mobile phones, where updates must pass through multiple layers before they finally reach the user.

Some critical systems, such as energy platforms or healthcare devices, are even harder to patch. In such environments, reliability and uninterrupted operation may be prioritized above security updates; according to Raynal, some companies even prohibit patching because of the risk of operational disruption, leaving devices with known vulnerabilities unpatched.

The Urgency of Timely Patching

According to Mandiant, this attack timeline means organisations face adversaries who exploit vulnerabilities faster than ever before. The report concludes that staying ahead of attackers requires more than timely patching: it demands securing the increasingly complex, multi-layered systems that make up more and more of the world's digital infrastructure.


Protecting Your Business from Snowflake Platform Exploitation by UNC5537

 

A recent report from Mandiant, a subsidiary of Google Cloud, has uncovered a significant cyber threat involving the exploitation of the Snowflake platform. A financially motivated threat actor, identified as UNC5537, targeted around 165 organizations' Snowflake customer instances, aiming to steal and exfiltrate data for extortion and sale. Snowflake, a widely-used cloud data platform, enables the storage and analysis of vast amounts of data. The threat actor gained access to this data by using compromised credentials, which were obtained either through infostealer malware or purchased from other cybercriminals. 

UNC5537 is known for advertising stolen data on cybercrime forums and attempting to extort victims. The sold data can be used for various malicious purposes, including cyber espionage, competitive intelligence, and financial fraud. The joint statement from Snowflake, Mandiant, and cybersecurity firm CrowdStrike clarifies that there is no evidence of a vulnerability, misconfiguration, or breach within Snowflake’s platform itself. 

Additionally, there is no indication that current or former Snowflake employees' credentials were compromised. Instead, the attackers acquired credentials from infostealer malware campaigns that infected systems not owned by Snowflake, which allowed them to access and exfiltrate data from the affected Snowflake customer accounts. Mandiant's research revealed that UNC5537 primarily used credentials stolen by various infostealer malware families, such as Vidar, Risepro, Redline, Raccoon Stealer, Lumma, and Metastealer. Many of these credentials dated back to November 2020 but remained usable. The majority of credentials exploited by UNC5537 were exposed in earlier infostealer incidents.

The initial compromise often occurred on contractor systems used for personal activities like gaming and downloading pirated software, which are common vectors for spreading infostealers. Once obtained, the threat actor used these credentials to access Snowflake accounts and extract valuable customer data. UNC5537 also purchased credentials from cybercriminal marketplaces, often through Initial Access Brokers who specialize in selling stolen corporate access. The underground market for infostealer-obtained credentials is robust, with large lists of stolen credentials available for free or for purchase on the dark web and other platforms. 

According to Mandiant, 10% of overall intrusions in 2023 began with stolen credentials, making it the fourth most common initial intrusion vector. To protect your business from similar threats, it is crucial to implement robust cybersecurity measures. This includes regular monitoring and updating of all systems to protect against infostealer malware, enforcing strong password policies, and ensuring that all software is kept up to date with the latest security patches. Employee training on cybersecurity best practices, especially regarding the dangers of downloading pirated software and engaging in risky online behavior, is also essential. 

Moreover, consider using multi-factor authentication (MFA) to add an extra layer of security to your accounts. Regularly audit your systems for any unusual activity or unauthorized access attempts. Engage with reputable cybersecurity firms to conduct thorough security assessments and implement advanced threat detection solutions. By staying vigilant and proactive, businesses can better protect themselves from the threats posed by cybercriminals like UNC5537 and ensure the security and integrity of their data.
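As one concrete way to perform the kind of audit described above, the Python sketch below uses the snowflake-connector-python package to look for recent successful logins that presented only a password and no second factor, based on Snowflake's documented ACCOUNT_USAGE.LOGIN_HISTORY view. The account, user, and time window are placeholders, and the query should be adapted to your environment and access model.

import snowflake.connector

# Placeholder connection details; externalbrowser auth avoids hard-coding a password.
conn = snowflake.connector.connect(
    account="my_account",
    user="security_auditor",
    authenticator="externalbrowser",
)

QUERY = """
SELECT event_timestamp, user_name, client_ip, reported_client_type
FROM snowflake.account_usage.login_history
WHERE is_success = 'YES'
  AND first_authentication_factor = 'PASSWORD'
  AND second_authentication_factor IS NULL
  AND event_timestamp > DATEADD(day, -30, CURRENT_TIMESTAMP())
ORDER BY event_timestamp DESC;
"""

cur = conn.cursor()
for ts, user, ip, client in cur.execute(QUERY):
    # Flag password-only logins for follow-up: unexpected IPs, stale accounts, etc.
    print(ts, user, ip, client)
cur.close()
conn.close()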

Google Launches Next-Gen Large Language Model, PaLM 2

Google has launched its latest large language model, PaLM 2, in a bid to regain its position as a leader in artificial intelligence. PaLM 2 is an advanced language model that can understand the nuances of human language and generate responses that are both accurate and natural-sounding.

The new model is based on a transformer architecture, which is a type of deep learning neural network that excels at understanding the relationships between words and phrases in a language. PaLM 2 is trained on a massive dataset of language, which enables it to learn from a diverse range of sources and improve its accuracy and comprehension over time.

PaLM 2 has several features that set it apart from previous language models. One of these is its ability to learn from multiple sources simultaneously, which allows it to understand a broader range of language than previous models. It can also generate more diverse and natural-sounding responses, making it ideal for applications such as chatbots and virtual assistants.

Google has already begun using PaLM 2 in its products and services, such as Google Search and Google Assistant. The model has also been made available to developers through Google Cloud AI, allowing them to build more advanced applications and services that can understand and respond to human language more accurately.
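For developers, access typically goes through the Vertex AI Python SDK. The short sketch below shows one plausible way to call a PaLM 2-based text model (the text-bison model) from Python; the project ID and prompt are placeholders, and the exact model names available depend on your Google Cloud project and region.

# Requires: pip install google-cloud-aiplatform
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholder project

model = TextGenerationModel.from_pretrained("text-bison")  # PaLM 2 text model
response = model.predict("Explain what a transformer architecture is in two sentences.")
print(response.text)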

The launch of PaLM 2 is significant for Google, as it comes at a time when the company is facing increased competition from other tech giants such as Microsoft and OpenAI. Both of these companies have recently launched large language models of their own, which are also based on transformer architectures.

Google hopes that PaLM 2 will help it to regain its position as a leader in AI research and development. The company has invested heavily in machine learning and natural language processing over the years, and PaLM 2 is a testament to its ongoing commitment to these fields.

In conclusion, Google's PaLM 2 is an advanced language model that has the potential to revolutionize the way we interact with technology. Its ability to understand and respond to human language more accurately and naturally is a significant step forward in the development of AI, and it will be exciting to see how developers and businesses leverage this technology to build more advanced applications and services.


Cybersecurity and the Cloud in Modern Times

 


Due to the advent of remote work, most companies, even those in heritage industries, have had to adopt SaaS (software as a service) and other cloud tools to remain competitive and agile. Modern cloud-based platforms such as Zoom, Slack, and Salesforce have become critical to effective collaboration among knowledge workers working from home. Riding this tailwind, public cloud hosting providers like Amazon Web Services, Microsoft Azure, and Google Cloud have seen phenomenal growth in recent years: Gartner predicted that spending on cloud providers would reach $178 billion in 2022, up from $141 billion in 2021.

While public cloud providers have made modern software tools easy to consume, the shift to the cloud has created plenty of cybersecurity challenges. Cloud-first security represents a paradigm shift from traditional, on-premise security. Before this change, customers had complete control over their environments and their security: they hosted applications in their own data centers, were responsible for the entire environment, and operated their networks as a "walled castle" in which they controlled and secured the network and applications themselves.

When customers consume public cloud services, however, they must share responsibility for security with the cloud service provider.

If your company stores data in an Amazon Web Services data center, for example, you are still responsible for configuring and managing your own cybersecurity policies as part of your compliance program, and for monitoring for security breaches, even though you do not control the underlying data center. In other words, when customers adopt public clouds they give up full control over the environment while retaining responsibility for how their data is secured. Unsurprisingly, concern about security remains one of the most common barriers to cloud adoption.

In addition, cloud environments are harder to secure than traditional ones. Modern cloud architectures rely heavily on microservices, a design in which each component of an application (a search bar, a recommendation page, a billing page, and so on) is built and deployed independently. Cloud environments can also run as many as ten times the number of workloads (virtual machines, servers, containers, microservices) as comparable on-premise systems. This fragmentation and complexity breeds access control issues and increases the chance of developer errors, such as leaving a sensitive password in an AWS database where it can be exposed to the public. Simply put, the cloud presents a wider and more complex attack surface than local computing environments.

Embrace the cloud-first era of cybersecurity

The cloud has brought not just added complexity but also an inversion from a top-down to a bottom-up sales model, with security buying decisions increasingly made not by CISOs (Chief Information Security Officers) but by developers.

Two factors have contributed to this. First, the cloud makes application development far more efficient, so cybersecurity has become part of the development process rather than an afterthought. Traditionally, developers were responsible for writing code and shipping releases while the CISO's team handled security, so the responsibilities were cleanly split. In modern companies the cloud has made it easy to update code and release product changes every day or every week; it is now normal for our favorite apps, such as Netflix, Amazon, and Uber, to update themselves frequently, whereas not long ago we had to patch them manually to keep them running smoothly. With revised code being deployed so much more often, cybersecurity has become a problem developers have to care about directly.

Second, the early adopters and power users of the cloud are primarily digital start-ups and mid-sized businesses, which are more decentralized in their decision-making. At large enterprises, CISOs have traditionally played an active role in security decisions, making purchasing choices on behalf of the rest of the organization after rigorous proof-of-concept, negotiation, and cost-benefit processes. Start-ups and mid-sized customers approach security buying very differently, and they frequently leave security decision-making to their developer teams.

This shift to a bottom-up sales model means cybersecurity software is about to be built and sold in a completely different way. A sales model aimed at developers looks nothing like one designed for CISOs. Developers prefer self-serve features; they like to try a product before they have to buy it. That calls for a self-serve, freemium model that attracts a large number of inbound free users at the top of the funnel and builds a customer base from them. It is completely different from the traditional model used by security incumbents, which rely on huge sales teams doing outbound, sales-led selling of large deals to CIOs.

Mainframes are Still Used in 9 Out of 10 Banks, Google Cloud Wishes to Mitigate

 



It has been announced that Google Cloud is introducing a simpler, lower-risk way for enterprises to migrate their legacy mainframe estates to the cloud. The newly launched service, Dual Run, is based on technology originally developed by Banco Santander and aims to simplify planning and execution.

As a result, customers can perform real-time testing before they transition to Google Cloud Platform as their primary system to ensure their cloud workloads are performing as expected, running securely, and meeting regulatory compliance requirements – without stopping their application or negatively impacting user experience.

"This is a simple concept, but it is difficult to implement - it hasn't been done yet," Nirav Mehta, Google Cloud's senior director of product management for cloud infrastructure solutions and growth, told Protocol in an interview on Tuesday. "This solution will substantially reduce the risk associated with moving mainframe applications to the cloud."

Dual Run creates a parallel instance of mainframe workloads on virtual machines in Google Cloud Platform (GCP). As Mehta describes it, a launcher/splitter architecture sits at each interface that receives incoming requests or triggers scheduled work, duplicating the activity to both systems and returning the "primary" system's response.

A real-time monitoring dashboard shows the differences in transaction responses between the mainframe and GCP deployments, and a single output hub provides one point of contact during the roll-out period for all batch information that needs to be sent out and collected.
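The following Python sketch illustrates the launcher/splitter idea in miniature: each request is duplicated to a primary (mainframe) and a secondary (GCP) back end, the primary response is returned to the caller, and any divergence is logged for the monitoring dashboard. The endpoint URLs and comparison logic are assumptions for illustration, not Google's implementation.

import logging
from concurrent.futures import ThreadPoolExecutor

import requests  # assumed available

PRIMARY_URL = "https://mainframe.internal/api"      # hypothetical mainframe front end
SECONDARY_URL = "https://gcp-replica.internal/api"  # hypothetical GCP replica

log = logging.getLogger("dual_run")
executor = ThreadPoolExecutor(max_workers=8)

def handle(path: str, payload: dict) -> dict:
    # Fan the same request out to both systems in parallel.
    primary_future = executor.submit(requests.post, f"{PRIMARY_URL}/{path}", json=payload, timeout=10)
    secondary_future = executor.submit(requests.post, f"{SECONDARY_URL}/{path}", json=payload, timeout=10)

    primary = primary_future.result()
    try:
        secondary = secondary_future.result()
        if primary.json() != secondary.json():
            # Divergences feed the monitoring dashboard rather than the caller.
            log.warning("response mismatch on %s", path)
    except Exception as exc:
        log.warning("secondary failed on %s: %s", path, exc)

    # The caller only ever sees the primary (mainframe) answer.
    return primary.json()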

Once customers are comfortable running Google Cloud as the primary system, they can retire their mainframes or keep them on as backups.

For quite some time the mainframe remains the primary system handling customer requests, while the cloud instance is nothing more than a secondary system running the same requests, Mehta explained. As part of the monitoring process, customers keep a record of the responses coming back from both the mainframe and Google Cloud to determine whether the cloud instance performs as well as the mainframe. At some point they switch over, making Google Cloud the primary system and the mainframe the secondary one.

Dual Run, currently in preview, was developed for a wide range of industries, including financial services, healthcare, manufacturing, retail, and the public sector. Approximately 90% of North America's biggest banks still use mainframes, according to Mehta, as do 23 of the 25 largest U.S. retailers.

"All of these companies are looking to modernize their old mainframe applications and take them to the cloud to maximize security, scalability, and cost efficiency," he said. However, because these systems are so mission-critical - and mainframes are especially unique in this regard since they've been around for so long and contain so much legacy technology - they perceive a lot of risks, so they do not bring them to the cloud."

In May, Banco Santander, a Google Cloud customer, reported on its progress digitizing its core banking platform, saying that 80% of its IT infrastructure had been moved to the cloud using in-house software called Gravity to automate the process. Google Cloud has acquired an exclusive license to the technology, and its engineers have spent the past six months working with Santander to adapt it for end-to-end mainframe migrations for customers across a wide variety of industries.

Mehta explained that Santander's original software covered only a narrow use case; the changes Google has made make the solution relevant to virtually any mainframe customer, which is a big deal for anyone still running mainframes.

A URL Parsing Bug Left an Internal Google Cloud Project Open to SSRF Attacks

 

According to security researcher David Schütz, a URL parsing flaw exposed an internal Google Cloud project to server-side request forgery (SSRF) attacks. The bug, which Schütz detailed in a video and blog post, might have allowed an attacker to gain access to sensitive resources and perhaps launch harmful code.

Server-side request forgery is a web security flaw that allows an attacker to force a server-side application to send HTTP requests to any domain the attacker chooses. The attacker may cause the server to connect to internal-only services within the organization's infrastructure in a conventional SSRF attack. They may also be able to force the server to connect to arbitrary external systems, exposing sensitive data such as authorization credentials. 

Unauthorized activities or access to data within the company can often arise from a successful SSRF attack, either in the vulnerable application itself or on other back-end systems with which the programme can interface. The SSRF vulnerability could allow an attacker to execute arbitrary commands in some circumstances. An SSRF vulnerability that establishes connections with external third-party systems could lead to malicious attacks that appear to come from the company that hosts the vulnerable application. 

While researching Discovery Documents, data structures that give specifications for Google API services, Schütz discovered the problem. While looking through the Discovery Documents, Schütz came upon an intriguing service named Jobs API, which had the appearance of being an internal service. The Jobs API led him to an application on the Google App Engine that acted as a proxy, allowing him to access the API through Google's public product marketing pages. The proxy acted as an intermediate between the user and the API, which meant it had an access token that could be used to launch SSRF attacks. 

Request URLs were run through a whitelist intended to restrict access to internal Google resources. Schütz, however, was able to confuse the URL parser and bypass the whitelist, allowing him to send requests to any server he wanted, including a Google Cloud VPS server he controlled. The request revealed the proxy app's access token, which he could then use to send requests to other Google Cloud projects.
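To illustrate the class of bug (without reproducing Schütz's exact payload), the Python sketch below shows how a hand-rolled allow-list check and a standards-compliant URL parser can disagree about which host a request will actually reach. The host names are hypothetical.

from urllib.parse import urlparse

ALLOWED_HOST = "jobs.googleapis.example"  # hypothetical allow-listed internal API host

def naive_is_allowed(url: str) -> bool:
    # Flawed check: only looks at the text immediately after the scheme,
    # the way a hand-rolled parser might.
    after_scheme = url.split("://", 1)[1]
    return after_scheme.startswith(ALLOWED_HOST)

def actual_target_host(url: str) -> str:
    # What a standards-compliant HTTP client will really connect to.
    return urlparse(url).hostname or ""

crafted = f"https://{ALLOWED_HOST}@attacker.example/steal-token"

print(naive_is_allowed(crafted))    # True  -> passes the allow-list
print(actual_target_host(crafted))  # attacker.example -> the real destination

Because the allow-listed name sits in the userinfo portion of the URL, the naive check passes while the request actually goes to the attacker-controlled host, which is exactly the kind of parser disagreement that enables SSRF.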

“This issue feels like an industry-wide problem since different applications are parsing URLs based on different specifications,” Schütz said. “After disclosing the initial issue in the Google JS library, I have already seen this getting fixed in products from different companies as well. Even though, this issue still keeps popping up even at Google. This SSRF is a great example of it.”

Cyber Attackers Hijacked Google and Microsoft Services for Malicious Phishing Emails

 

Over recent months, the cybersecurity industry has seen a huge increase in malicious attackers exploiting the networks of Microsoft and Google to host and deliver threats through Office 365 and Azure. 

Threat actors have capitalized on the pandemic-driven rush to cloud-based business services, concealing themselves behind omnipresent, trusted services from Microsoft and Google to make their email phishing scams appear legitimate; and it works.

In the first three months of 2021 alone, researchers found that 7 million malicious emails were sent from Microsoft Office 365 and 45 million from Google's infrastructure. The Proofpoint team said cyber-criminals had used Office 365, Azure, OneDrive, SharePoint, G Suite, and Firebase to send phishing emails and host attacks.

“The malicious message volume from these trusted cloud services exceeded that of any botnet in 2020, and the trusted reputation of these domains, including outlook.com and sharepoint.com, increases the difficulty of detection for defenders,” the report, issued on Wednesday, explained. “This authenticity perception is essential, as email recently regained its status as the top vector for ransomware; and threat actors increasingly leverage the supply chain and partner ecosystem to compromise accounts, steal credentials and siphon funds.” 

Proofpoint estimated that 95% of organizations were targeted by cloud account attacks, that more than half of those attacks succeeded, and that more than 30% of the targeted organizations were compromised.

Once attackers have passwords, they can easily move in and out of multiple services and send out further, more persuasive phishing emails.

Proofpoint offered many examples of campaigns hosted on Microsoft and Google infrastructure that tried to trick users into handing over their credentials or other details.

In one March campaign, attackers used Gmail to send fake employee-benefits messages carrying a Microsoft Excel attachment that delivered The Trick banking trojan to steal credentials whenever macros were enabled.

Another Gmail-hosted attack in February tried to persuade users to enter their passwords to open zipped MS Word documents; upon opening, Xorist ransomware was delivered.

The use of Gmail and Microsoft by attackers to give their emails a patina of credibility is part of a broader trend: threats are developing increasingly persuasive appeals. 

“Our research demonstrates that attackers are using both Microsoft and Google infrastructure to disseminate malicious messages and target people, as they leverage popular cloud-collaboration tools,” the Proofpoint report added. “When coupled with heightened ransomware, supply chain, and cloud account compromise, advanced people-centric email protection must remain a top priority for security leaders.”