A recent Google Cloud report has identified a troubling trend: nearly half of all cloud-related attacks in late 2024 were caused by weak or missing account credentials, giving attackers easy access to sensitive systems and putting businesses at serious risk.
What the Report Found
The Threat Horizons Report, produced by Google's security experts, examined cyberattacks on cloud accounts. The study found that the most common means of initial access was poor credential management, such as weak passwords or a lack of multi-factor authentication (MFA). These weaknesses made up nearly 50% of all incidents Google Cloud analyzed.
Another major factor was misconfigured cloud services, which accounted for more than a third of all attacks. The report also noted a worrying rise in attacks on application programming interfaces (APIs) and user interfaces, which made up around 20% of incidents. Together, these findings point to several areas where cloud security is falling short.
How Weak Credentials Cause Big Problems
Weak credentials do not just unlock the door for attackers; they let them cause widespread damage. For instance, in April 2024, over 160 Snowflake customer accounts were breached due to poor password practices. High-profile companies affected included AT&T, Advance Auto Parts, and Pure Storage, and the breaches involved massive data leaks.
Attackers are also seeking out accounts with excessive permissions, known as overprivileged service accounts. These make it easier for hackers to move deeper into a network, often compromising multiple systems within an organization. Google found that more than 60 percent of attacker actions after initial access involved attempts to move laterally within systems.
The report warns that a single stolen password can trigger a chain reaction. Hackers can use it to take control of apps, access critical data, and even bypass security systems like MFA. This allows them to establish trust and carry out more sophisticated attacks, such as tricking employees with fake messages.
How Businesses Can Stay Safe
To prevent such attacks, organizations should focus on proper security practices. Google Cloud suggests using multi-factor authentication, limiting excessive permissions, and fixing misconfigurations in cloud systems. These steps will limit the damage caused by stolen credentials and prevent attackers from digging deeper.
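As a concrete illustration of the "limit excessive permissions" advice, the following is a minimal sketch, not an official Google tool, that flags broad legacy roles granted to service accounts in a project. It assumes the gcloud CLI is installed and authenticated, and the project ID is a placeholder.

```python
"""Minimal sketch: flag broad legacy ("primitive") roles granted to service
accounts in a Google Cloud project. Assumes the gcloud CLI is installed and
authenticated; "my-project" is a placeholder project ID."""
import json
import subprocess

# Broad legacy roles that usually indicate an overprivileged account.
PRIMITIVE_ROLES = {"roles/owner", "roles/editor", "roles/viewer"}

def audit_service_accounts(project_id: str) -> list[tuple[str, str]]:
    # Fetch the project's IAM policy as JSON via the gcloud CLI.
    raw = subprocess.run(
        ["gcloud", "projects", "get-iam-policy", project_id, "--format=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    policy = json.loads(raw)

    findings = []
    for binding in policy.get("bindings", []):
        if binding["role"] not in PRIMITIVE_ROLES:
            continue
        for member in binding.get("members", []):
            # Service account members are prefixed with "serviceAccount:".
            if member.startswith("serviceAccount:"):
                findings.append((member, binding["role"]))
    return findings

if __name__ == "__main__":
    for member, role in audit_service_accounts("my-project"):
        print(f"Overprivileged: {member} has {role}")
```

Bindings flagged this way would typically be replaced with narrowly scoped predefined or custom roles, in line with the least-privilege guidance in the report.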
This report is a reminder that weak passwords and poor security habits are not just small mistakes; they can lead to serious consequences for businesses everywhere.
Under a feature called Google Maps Timeline, starting in December, Google will save user location data for a maximum of 180 days. Once that period ends, the data will be erased from Google Cloud servers.
The new policy means Google will only retain a user's movements and whereabouts for six months. Users have the option to store the data on a personal device, but the cloud copy will be permanently deleted from Google's servers.
The privacy change is a welcome one. Smartphones must balance privacy and convenience when it comes to data storage, and few categories of data are more sensitive than location history.
Users can adjust their settings to suit them best, but the majority stick with the defaults. The concern arises when Google uses that data to generate insights (based on anonymized location data) or to improve services such as its ads products.
The change to Google Maps Timeline addresses questions about data privacy and security. The benefits include:
Better privacy: By restricting how long location data is stored in the cloud, Google reduces the potential for data misuse. A shorter retention period means less historical data is exposed to threat actors if there is a breach.
More control for users: The option to retain location data on their own devices gives users ownership of their personal data. They can choose whether to delete their location history or keep it.
Accountability from Google: The move is a positive sign toward building transparency and trust, showing a commitment to user privacy.
There are also potential drawbacks.
Impact on services: Google features that rely on location history for tailored suggestions may be affected, and users may notice less accurate location-based recommendations and targeted ads.
Data retention burden: For users who want to keep their data for longer, the change can be inconvenient; they will have to back up the data themselves if they want to retain it for more than 180 days.
Attackers are exploiting software vulnerabilities at an unprecedented pace, Mandiant has announced. According to the cybersecurity firm's newly released report, which analyzed 138 vulnerabilities exploited in 2023, attackers now exploit a vulnerability within five days on average. Because of this speed, it has become paramount for organisations to apply system updates quickly. The study, published on the Google Cloud blog, shows that the window between disclosure and exploitation has shrunk sharply for both previously unknown vulnerabilities, known as zero-days, and known ones, called N-days.
Exploitation Speed Is Increasing
According to Mandiant's research, the time-to-exploit, a metric measuring the average number of days attackers take to exploit a vulnerability after it is discovered, has been shrinking rapidly. In 2018, it took attackers nearly 63 days on average to exploit a vulnerability; in 2023, it took just five. This shows that attackers are becoming far more efficient at exploiting security flaws before developers and users can patch them.
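To make the metric concrete, here is a minimal sketch of how a time-to-exploit average can be computed from disclosure and first-exploitation dates. The identifiers and dates below are invented for illustration and are not taken from Mandiant's dataset.

```python
"""Sketch of the time-to-exploit (TTE) metric: the average number of days
between a vulnerability's disclosure and its first observed exploitation.
The records below are invented for illustration, not Mandiant data."""
from datetime import date
from statistics import mean

# (identifier, disclosure date, first observed exploitation date)
records = [
    ("CVE-EXAMPLE-0001", date(2023, 3, 1), date(2023, 3, 4)),
    ("CVE-EXAMPLE-0002", date(2023, 5, 10), date(2023, 5, 12)),
    ("CVE-EXAMPLE-0003", date(2023, 8, 20), date(2023, 8, 30)),
]

def time_to_exploit_days(disclosed: date, exploited: date) -> int:
    # Days elapsed between disclosure and first observed exploitation.
    return (exploited - disclosed).days

average_tte = mean(time_to_exploit_days(d, e) for _, d, e in records)
print(f"Average time-to-exploit: {average_tte:.1f} days")  # 5.0 days here
```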
Zero-Day and N-Day Vulnerabilities
The report distinguishes between zero-day vulnerabilities, undisclosed and unpatched flaws that attackers can exploit immediately, and N-day vulnerabilities, known flaws that attackers target after patches have been released. In 2023, the mix of vulnerabilities targeted shifted, with the ratio of zero-day to N-day exploitation rising to 70:30. This trend suggests attackers now prefer zero-day exploits, possibly because they provide immediate access to systems and sensitive data before the flaw is publicly known.
Timing and Frequency of Exploitation
The report also shows that N-day vulnerabilities are most likely to be exploited in the first few weeks after a patch is released. Of the observed N-day exploitations, 56% occurred within the first month after a patch was released, 29% within the first week, and 5% within just one day. This pace makes it critical for organizations to apply patches as soon as they become available.
Widening Scope for Attack Targets
Over the past decade, attackers have greatly widened their scope by targeting a growing list of vendors. According to the report, the number of vendors targeted rose from 25 in 2018 to 56 in 2023. This expansion makes life harder for security teams, which now face a significantly larger attack surface and a growing likelihood of attacks against many different systems and software applications.
Case Studies of Contrasting Exploits
Mandiant also published case studies on how attackers exploit vulnerabilities. For example, CVE-2023-28121, a vulnerability in the WooCommerce Payments plugin for WordPress, was disclosed in March 2023. It saw little exploitation at first, but became heavily exploited after technical details of how to abuse the flaw were published online. Attacks began one day after the release of a weaponized tool, peaking at 1.3 million attack attempts in a single day. This rapid escalation shows how quickly certain vulnerabilities come into high demand once exploit tools are publicly available.
CVE-2023-27997, a vulnerability in the SSL-VPN component of Fortinet's FortiOS, followed a different timeline. Despite heavy media attention when the flaw was first disclosed, attackers took roughly two to three months to begin exploiting it, most likely because the exploit is difficult to pull off and requires intricate techniques. By contrast, the WooCommerce plugin exploit was far simpler, requiring little more than a crafted HTTP header.
Complexity of Patching Systems
While timely patching is essential, it is not easy, especially when rolling out patches across massive fleets of systems. Fred Raynal, CEO of Quarkslab, noted that patching two or three devices is feasible, but patching thousands requires significant coordination and resources. The complexity is even greater for devices such as mobile phones, where updates must pass through multiple layers before they finally reach the user.
Some critical systems, such as energy platforms or healthcare devices, are even harder to patch. In these environments, reliability and uninterrupted operation may take priority over security updates. According to Raynal, some companies even prohibit patching because of the risk of operational disruption, leaving devices with known vulnerabilities unpatched.
The Urgency of Timely Patching
As Mandiant's findings show, organisations now face attackers who exploit vulnerabilities faster than ever before. The report concludes that staying ahead of them takes more than timely patching alone, given the increasingly complex, multi-layered systems that make up a growing share of the world's digital infrastructure.
Google has launched its latest large language model, PaLM 2, in a bid to regain its position as a leader in artificial intelligence. PaLM 2 is an advanced language model that can understand the nuances of human language and generate responses that are both accurate and natural-sounding.
The new model is based on a transformer architecture, which is a type of deep learning neural network that excels at understanding the relationships between words and phrases in a language. PaLM 2 is trained on a massive dataset of language, which enables it to learn from a diverse range of sources and improve its accuracy and comprehension over time.
PaLM 2 has several features that set it apart from previous language models. One of these is its ability to learn from multiple sources simultaneously, which allows it to understand a broader range of language than previous models. It can also generate more diverse and natural-sounding responses, making it ideal for applications such as chatbots and virtual assistants.
Google has already begun using PaLM 2 in its products and services, such as Google Search and Google Assistant. The model has also been made available to developers through Google Cloud AI, allowing them to build more advanced applications and services that can understand and respond to human language more accurately.
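To give a sense of what that developer access looks like, here is a hedged sketch of calling a PaLM 2-based text model through the Vertex AI Python SDK. The model name (text-bison), project ID, and region are assumptions based on Google Cloud's publicly documented SDK at the time, not details from this article, and the exact names may change as the platform evolves.

```python
# Minimal sketch: querying a PaLM 2-based text model via the Vertex AI SDK.
# Assumes `pip install google-cloud-aiplatform` and authenticated credentials;
# the project ID, region, and model name below are illustrative assumptions.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison@001")
response = model.predict(
    "Explain in two sentences why transformer models handle long-range "
    "context well.",
    temperature=0.2,        # low temperature for more deterministic output
    max_output_tokens=128,  # cap the length of the generated reply
)
print(response.text)
```

In practice, developers wrap calls like this inside the chatbots and virtual assistants the article mentions, tuning parameters such as temperature to suit their use case.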
The launch of PaLM 2 is significant for Google, as it comes at a time when the company is facing increased competition from other tech giants such as Microsoft and OpenAI. Both of these companies have recently launched large language models of their own, which are also based on transformer architectures.
Google hopes that PaLM 2 will help it to regain its position as a leader in AI research and development. The company has invested heavily in machine learning and natural language processing over the years, and PaLM 2 is a testament to its ongoing commitment to these fields.
In conclusion, Google's PaLM 2 is an advanced language model that has the potential to revolutionize the way we interact with technology. Its ability to understand and respond to human language more accurately and naturally is a significant step forward in the development of AI, and it will be exciting to see how developers and businesses leverage this technology to build more advanced applications and services.