
Google Report Warns Cybercrime Poses a National Security Threat


When discussing national security threats in the digital landscape, attention often shifts to suspected state-backed hackers, such as those affiliated with China targeting the U.S. Treasury or Russian ransomware groups claiming to hold sensitive FBI data. However, a recent report from the Google Threat Intelligence Group highlights that financially motivated cybercrime, even when unlinked to state actors, can pose equally severe risks to national security.

“A single incident can be impactful enough on its own to have a severe consequence on the victim and disrupt citizens' access to critical goods and services,” Google warns, emphasizing the need to categorize cybercrime as a national security priority requiring global cooperation.

Despite cybercriminal activity comprising the vast majority of malicious online behavior, national security experts predominantly focus on state-sponsored hacking groups, according to the February 12 Google Threat Intelligence Group report. While state-backed attacks undoubtedly pose a critical threat, Google argues that cybercrime and state-sponsored cyber warfare cannot be evaluated in isolation.

“A hospital disrupted by a state-backed group using a wiper and a hospital disrupted by a financially motivated group using ransomware have the same impact on patient care,” Google analysts assert. “Likewise, sensitive data stolen from an organization and posted on a data leak site can be exploited by an adversary in the same way data exfiltrated in an espionage operation can be.”

The escalation of cyberattacks on healthcare providers underscores the severity of this threat. Millions of patient records have been stolen, and even blood donor supply chains have been affected. “Healthcare's share of posts on data leak sites has doubled over the past three years,” Google notes, “even as the number of data leak sites tracked by Google Threat Intelligence Group has increased by nearly 50% year over year.”

The report highlights how Russia has integrated cybercriminal capabilities into warfare, citing the military intelligence-linked Sandworm unit (APT44), which leverages cybercrime-sourced malware for espionage and disruption in Ukraine. Iran-based threat actors similarly deploy ransomware to generate revenue while conducting espionage. Chinese spy groups supplement their operations with cybercrime, and North Korean state-backed hackers engage in cyber theft to fund the regime. “North Korea has heavily targeted cryptocurrencies, compromising exchanges and individual victims’ crypto wallets,” Google states.

These findings illustrate how nation-states increasingly procure cyber capabilities through criminal networks, leveraging cybercrime to facilitate espionage, data theft, and financial gain. Addressing this challenge requires acknowledging cybercrime as a fundamental national security issue.

“Cybercrime involves collaboration between disparate groups often across borders and without respect to sovereignty,” Google explains. Therefore, any solution must involve international cooperation between law enforcement and intelligence agencies to track, arrest, and prosecute cybercriminals effectively.

whoAMI Name Confusion Attacks Can Expose AWS Accounts to Malicious Code Execution


Datadog Security Labs researchers have developed a new name confusion attack technique, dubbed whoAMI, that allows threat actors to execute arbitrary code within an Amazon Web Services (AWS) account by publishing an Amazon Machine Image (AMI) with a specially crafted name.

The researchers warn that, at scale, the attack could affect thousands of AWS accounts, with roughly 1% of organizations believed to be vulnerable. An Amazon Machine Image (AMI) is a virtual machine image used to launch Elastic Compute Cloud (EC2) instances. Users can reference an AMI directly by its ID or use the AWS API to search for the latest version by name.

Datadog Security Labs noted that anyone can publish an AMI to the Community AMI catalog; to verify that a catalog search returns an official AMI rather than one published by a malicious actor, users can specify the owner attribute.

When searching for AMIs, using the owner attribute ensures that results come from verified sources such as Amazon or trusted providers. If the owners attribute is omitted from an AMI search, an attacker can publish a malicious AMI with a more recent creation date, making it the first result in automated queries. The attack succeeds when a victim searches using the name filter without specifying the owner, owner-alias, or owner-id criteria and retrieves the most recently created image.
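As a rough illustration of the difference, the Python sketch below (assuming boto3, valid AWS credentials, a hypothetical name pattern, and an example owner account ID) contrasts an unsafe lookup that simply trusts the newest matching image with a lookup pinned to a trusted publisher:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical name pattern used for illustration only.
NAME_PATTERN = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"

def latest_ami_unsafe() -> str:
    """Vulnerable lookup: name filter only, so any community AMI that
    matches the pattern and has a newer CreationDate wins."""
    images = ec2.describe_images(
        Filters=[{"Name": "name", "Values": [NAME_PATTERN]}],
    )["Images"]
    return max(images, key=lambda img: img["CreationDate"])["ImageId"]

def latest_ami_safe() -> str:
    """Safer lookup: the Owners parameter pins results to a trusted
    publisher, so attacker-published look-alikes are excluded."""
    images = ec2.describe_images(
        Owners=["099720109477"],  # example owner ID (commonly cited for Canonical); verify before use
        Filters=[{"Name": "name", "Values": [NAME_PATTERN]}],
    )["Images"]
    return max(images, key=lambda img: img["CreationDate"])["ImageId"]
```

The same principle applies to infrastructure-as-code tooling: any "most recent" AMI lookup should always be constrained by an owner.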

“To exploit this configuration, an attacker can create a malicious AMI with a name that matches the above pattern and that is newer than any other AMIs that also match the pattern. The attacker can then either make the AMI public or privately share it with the targeted AWS account.” reads the advisory published by the company. 

The researchers published a video proof-of-concept of the attack and built an AMI with a C2 backdoor preinstalled (attacker AWS Account ID: 864899841852, victim AWS Account ID: 438465165216).

“This research demonstrated the existence and potential impact of a name confusion attack targeting AWS’s community AMI catalog. Though the vulnerable components fall on the customer side of the shared responsibility model, there are now controls in place to help you prevent and/or detect this vulnerability in your environments and code,” the report concluded. “Since we initially shared our findings with AWS, they have released Allowed AMIs, an excellent new guardrail that can be used by all AWS customers to prevent the whoAMI attack from succeeding, and we strongly encourage adoption of this control. This is really great work by the EC2 team!” 

In November last year, HashiCorp addressed the issue in terraform-provider-aws 5.77, which now warns when "most_recent = true" is used without an owner filter. The warning is slated to become an error in version 6.0.

Google Cloud to Enforce Multi-Factor Authentication for Enhanced Security in 2025

As part of its commitment to protecting users' privacy, Google has announced that by the end of 2025 all Google Cloud accounts will be required to use multi-factor authentication (MFA), also called two-step verification. Given the sensitive nature of cloud deployments, and with phishing and stolen credentials remaining among the top attack vectors observed by Mandiant Threat Intelligence, Google believes it is time to require this additional layer of protection for all Google Cloud users, Mayank Upadhyay, Google Cloud's VP of Engineering and Distinguished Engineer, explained in the announcement.

Google's cloud division plans to make MFA mandatory for all users by the end of 2025 as part of its broader effort to improve account security. The company says the transition will begin with a phased rollout to help users adapt smoothly to the change.

The technology and cybersecurity industries have long recommended multi-factor authentication as one of the most effective account protections. By requiring an additional verification step, MFA dramatically reduces the risk of unauthorized logins, data breaches, and account takeovers, even if a user's password is compromised. As attackers continue to mount sophisticated campaigns against cloud infrastructure and sensitive data, Google's push for mandatory MFA fits a growing trend across the industry.
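For readers unfamiliar with how a typical second factor works under the hood, the Python sketch below walks through a time-based one-time password (TOTP) flow using the pyotp library. It is a generic illustration rather than Google's implementation, and the account name and issuer shown are hypothetical.

```python
import pyotp

# Enrollment: the service generates a per-user secret and shares it with the
# user's authenticator app (usually via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Provisioning URI that an authenticator app can import as a QR code.
uri = totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCloud")
print(uri)

# Login: after checking the password, the service asks for the current
# 6-digit code and verifies it against the shared secret.
code = totp.now()          # in practice, typed in by the user from their app
assert totp.verify(code)   # codes rotate every 30 seconds by default
print("second factor accepted")
```

Because the code changes every 30 seconds and is derived from a secret the attacker does not hold, a stolen password alone is no longer enough to sign in.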

Google says the requirement is designed to strengthen security while keeping the sign-in experience smooth and convenient. The company estimates that around 70% of its users already use some form of two-step verification, and security consultants are urging those who have not yet enabled it to do so now. The mandate will apply to both users and administrators with access to Google Cloud.

The change will not affect general consumer Google accounts. In the announcement, Upadhyay said the company plans to roll out mandatory MFA throughout 2025 in a phased approach, with resources provided to help organizations plan their deployments. Here is what that phased approach means in practice.

The rollout has three phases. The first, beginning in November 2024, focuses on encouraging adoption: users who currently sign in with only a password will see regular reminders and helpful information in the Google Cloud console, along with resources to raise awareness, plan and document the rollout, run tests, and enable MFA smoothly.

In the second phase, Google will begin requiring MFA for all users, new and existing, who sign in to Google Cloud with a password. The company has not committed to a concrete date, saying only that this phase is scheduled for "early 2025".

Once that phase begins, the requirement is mandatory: new and existing users who sign in to Google Cloud with a password will have to enroll in 2FA. Notifications will appear in the Google Cloud Console, the Firebase Console, and the gcloud CLI, and Upadhyay has said that users will need to enroll in 2FA to continue using those tools. The final phase, arriving by the end of 2025, extends the requirement to all users who sign in to Google Cloud through federated authentication.

The announcement confirmed that there will be flexible options for meeting this requirement: federated users can enable 2FA with their primary identity provider before accessing Google Cloud, or add the extra layer through Google's own system using their Google account. Chris Fuller, senior director of technical field operations at Obsidian Security, cautioned that the threat landscape has rapidly grown more sophisticated in response to the increased prevalence of MFA, noting that breach data shows 89% of compromised accounts had MFA enabled.

Fuller added that phishing-as-a-service toolkits, including the Mamba toolkit available for $250 a month, along with non-human identity compromises, suggest that identity attacks will continue regardless of the technology in use. "Google's phased rollout is designed to ease users into the new requirement, which could have been met with resistance due to perceived friction in the user experience, especially when the requirement is implemented suddenly," said Patrick Tiquet, Vice President of Security and Compliance at Keeper Security. Tiquet further emphasized that organizations leveraging Google Cloud will need to strategically prepare for MFA implementation across their workforce.

This preparation includes comprehensive employee training on the critical role of multi-factor authentication in safeguarding organizational data and systems. Effective MFA adoption may be supported by tools such as password managers, which can streamline the process by securely storing and automatically filling MFA codes. Proper planning and training will be essential for organizations to successfully integrate MFA and enhance security measures across their teams.

Microsoft Introduces AI Solution for Erasing Ex from Memories

Director Vikramaditya Motwane's new Hindi film, CTRL, tells the story of an emotionally distressed woman who turns to artificial intelligence to erase her past. The movie is ostensibly about data and privacy, but it also rests on a simple truth: humans are social animals who need someone to listen to them, guide them, or simply be there as they go through life. Microsoft AI CEO Mustafa Suleyman spoke about this recently in a CNBC interview.

In the interview, Suleyman explained that the company is engineering AI companions to watch "what we are doing and to remember what we are doing," creating a close relationship between AI and humans. Microsoft, OpenAI, and Google have all announced AI assistants aimed at the workplace.

Microsoft CEO Satya Nadella has announced that Windows will gain a new feature called Recall. Its semantic search goes beyond simple keyword matching, digging into a user's digital history to recreate moments from the past and trace them back to when they happened. Suleyman, meanwhile, has announced a redesign of Copilot, the company's artificial intelligence assistant.

The revamped Copilot reflects Suleyman's vision of an AI companion that will change the way users interact with technology in their day-to-day lives. After joining Microsoft earlier this year, when the company strategically hired key staff from Inflection AI, Suleyman wrote a 700-word memo describing what he refers to as a "technological paradigm shift."

Copilot has been redesigned to create a more personalized and supportive AI experience that adapts to users' needs over time, much like Inflection AI's Pi product. The Wall Street Journal reported that Nadella explained in an interview that "Recall is not just about documents."

A sophisticated AI model embedded directly on the device takes screenshots of the user's activity and feeds them into an on-board database for analysis. Using the device's neural processing hardware, every image and interaction becomes searchable, including the contents of the images themselves. The feature has raised concerns, with Elon Musk warning in a characteristically blunt post that it is akin to an episode of Black Mirror and saying he would be turning it off.
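To make the capture-and-index idea concrete, here is a deliberately minimal Python sketch of a local screenshot logger. It illustrates the general pattern only and is not Microsoft's Recall pipeline; it assumes Pillow is installed (ImageGrab works on Windows and macOS) and uses placeholder file and database names.

```python
import sqlite3
import time
from datetime import datetime
from pathlib import Path

from PIL import ImageGrab  # pip install pillow

DB = sqlite3.connect("activity_index.db")  # placeholder database name
DB.execute("CREATE TABLE IF NOT EXISTS shots (taken_at TEXT, path TEXT)")

def capture_once(folder: Path) -> None:
    """Grab the screen and record when and where the image was stored."""
    folder.mkdir(exist_ok=True)
    stamp = datetime.now().isoformat(timespec="seconds")
    path = folder / f"shot_{stamp.replace(':', '-')}.png"
    ImageGrab.grab().save(path)
    DB.execute("INSERT INTO shots VALUES (?, ?)", (stamp, str(path)))
    DB.commit()

def shots_between(start: str, end: str) -> list[tuple[str, str]]:
    """Query the index for screenshots taken in a given time window."""
    cur = DB.execute(
        "SELECT taken_at, path FROM shots WHERE taken_at BETWEEN ? AND ?",
        (start, end),
    )
    return cur.fetchall()

if __name__ == "__main__":
    for _ in range(3):                 # capture a few frames, 10 seconds apart
        capture_once(Path("shots"))
        time.sleep(10)
    print(shots_between("2024-01-01T00:00:00", "2030-01-01T00:00:00"))
```

Recall reportedly goes much further, using on-device models to make the content of each frame searchable; the sketch above only records when and where each screenshot was stored, which is also why such features raise the privacy questions discussed here.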

OpenAI has introduced the ChatGPT desktop application, now powered by the latest GPT-4o model, which represents a significant advancement in artificial intelligence technology. This AI assistant offers real-time screen-reading capabilities, positioning itself as an indispensable support tool for professionals in need of timely assistance. Its enhanced functionality goes beyond merely following user commands; it actively learns from the user's workflow, adapts to individual habits, and anticipates future needs, even taking proactive actions when required. This marks a new era of intelligent and responsive AI companions. 

Jensen Huang also highlighted the advanced capabilities of AI Companion 2.0, emphasizing that this system does not just observe and support workflows—it learns and evolves with them, making it a more intuitive and helpful partner for users in their professional endeavors. Meanwhile, Zoom has introduced Zoom Workplace, an AI-powered collaboration platform designed to elevate teamwork and productivity in corporate environments. The platform now offers over 40 new features, which include updates to the Zoom AI Companion for various services such as Zoom Phone, Team Chat, Events, Contact Center, and the "Ask AI Companion" feature. 

The AI Companion functions as a generative AI assistant seamlessly integrated throughout Zoom’s platform, enhancing productivity, fostering stronger collaboration among team members, and enabling users to refine and develop their skills through AI-supported insights and assistance. The rapid advancements in artificial intelligence continue to reshape the technological landscape, as companies like Microsoft, OpenAI, and Google lead the charge in developing AI companions to support both personal and professional endeavors.

These AI solutions are designed to not only enhance productivity but also provide a more personalized, intuitive experience for users. From Microsoft’s innovative Recall feature to the revamped Copilot and the broad integration of AI companions across platforms like Zoom, these developments mark a significant shift in how humans interact with technology. While the potential benefits are vast, these innovations also raise important questions about data privacy, human-AI relationships, and the ethical implications of such immersive technology. 

As AI continues to evolve and become a more integral part of everyday life, the balance between its benefits and the concerns it may generate will undoubtedly shape the future of AI integration across industries. Microsoft and its competitors remain at the forefront of this technological revolution, striving to create tools that are not only functional but also responsive to the evolving needs of users in a rapidly changing digital world.

Cyberattack on Maui's Community Clinic Affects 123,000 Individuals in May


The Community Clinic of Maui, also known as Mālama, recently notified over 123,000 individuals that their personal data had been compromised during a cyberattack in May. Hackers gained access to sensitive information between May 4 and May 7, including Social Security numbers, passport details, financial account information (such as CVV codes and expiration dates), and extensive medical records.

In addition to this, hackers obtained routing numbers, bank names, financial account details, and some biometric data. A total of 123,882 people were affected by the breach, which resulted in the clinic taking its servers offline.

Local reports suggested the incident was a ransomware attack, sparking public frustration as Mālama was forced to close for nearly two weeks. Upon reopening at the end of May, the clinic operated with limited services, and nurses had to rely on paper charts due to system-wide computer outages.

Following the attack, Mālama worked with law enforcement and cybersecurity experts to investigate the breach, with the findings confirmed on August 7. 

In a statement on its website, the clinic offered complimentary credit monitoring to those whose Social Security numbers may have been exposed, although a regulatory filing in Maine indicated that identity theft protection services were not provided. The organization has not responded to requests for clarification, and a law firm is reportedly exploring potential lawsuits against Mālama related to the breach.

The ransomware group LockBit, which was taken down by law enforcement earlier this year, claimed responsibility for the attack in June. On Tuesday, Europol and other agencies announced a coordinated effort to target the gang, resulting in four arrests and the seizure of servers critical to LockBit's operations in France, the U.K., and Spain.

In 2024, healthcare providers across the U.S. have been increasingly targeted by cyberattacks, disrupting services and threatening public safety. Notably, McLaren Health Care and Ascension, two major health systems, have faced severe ransomware incidents, and last week, one of the region's only Level 1 trauma centers had to turn away ambulances following a cyberattack.

Seattle Port Suffers Data Breach, Rhysida Ransomware Suspected


The ransomware attack significantly disrupted the port's operations, highlighting the challenges that critical infrastructure providers face in the immediate aftermath of a cybersecurity breach. While recovery efforts are well underway, some systems remain affected.

Most affected systems have been restored, but the port's website, internal portals, and the airport's mobile app remain offline. Despite this, officials reported that the majority of flights have adhered to their schedules, and cruise ship operations have remained unaffected.

The port made it clear that it refused to meet the attackers' demands, warning that the hackers may attempt to post stolen data on the dark web. "The Port of Seattle does not plan to pay the criminals responsible for this cyberattack," Steve Metruck, the port's executive director, said in an update on Friday. "Paying them would go against the values of the port and our responsibility to wisely manage taxpayer funds."

Port authorities have confirmed that some data was compromised by the Rhysida group in mid-to-late August. An investigation is ongoing to determine the specific nature of the stolen information, and those affected will be informed as soon as the analysis is complete.

In November 2023, the Cybersecurity and Infrastructure Security Agency (CISA) and the FBI issued a joint advisory regarding the Rhysida group.

Metruck emphasized the port's efforts not only to restore operations but to use the experience to strengthen future security. "We remain committed to building a more resilient port and will share insights from this incident to help safeguard other businesses, critical infrastructure, and the public," he said.

Fortinet Confirms Data Breach Involving Limited Number of Customers, Linked to Hacker "Fortibitch"


Fortinet has disclosed a data breach impacting a "small number" of its clients after a hacker, using the alias "Fortibitch," leaked 440GB of customer information on BreachForums. The hacker claimed to have accessed the data from an Azure SharePoint site, following the company's refusal to meet a ransom demand. This incident emphasizes the need for companies to secure data stored in third-party cloud services, cybersecurity experts have noted.

In a statement released on September 12, Fortinet reported that the breach involved unauthorized access to files stored on its cloud-based shared file drive. The company did not confirm the exact source of the breach but reassured that the affected data represented less than 0.3% of its over 775,000 customers—approximately 2,300 organizations. Fortinet also stated that no malicious activity had been detected around the compromised data, and no ransomware or data encryption was involved. The company has since implemented protective measures and directly communicated with impacted customers.

Dark Reading noted that the hacker also leaked financial and marketing documents, product information, HR data from India, and some employee records. After unsuccessful attempts to extort the company, the hacker released the data. There was also a mention of Fortinet’s acquisitions of Lacework and NextDLP, as well as references to a Ukrainian threat group, though no direct connections were identified.

This breach highlights the growing risk of cloud data exposure. A recent analysis by Metomic revealed that more than 40% of sensitive files on Google Drive were vulnerable, with many shared publicly or with external email addresses. Experts stress the importance of using multifactor authentication (MFA), limiting employee access, and regularly monitoring cloud environments to detect and mitigate potential security lapses. They also recommend encrypting sensitive data both in transit and at rest, and enforcing zero-trust principles to reduce the risk of unauthorized access.
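As a small illustration of that last recommendation, the Python sketch below encrypts a file client-side before it ever reaches a shared cloud drive, using the third-party cryptography library. The filenames are placeholders, and the key handling is deliberately simplified; in production the key would come from a key management service rather than being generated next to the data.

```python
from cryptography.fernet import Fernet

# In practice the key would come from a key management service, not be
# generated and kept alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a local file before uploading it to a shared cloud drive.
with open("customer_report.xlsx", "rb") as f:          # placeholder filename
    ciphertext = fernet.encrypt(f.read())

with open("customer_report.xlsx.enc", "wb") as f:
    f.write(ciphertext)

# Later, an authorized user holding the key can recover the original content.
with open("customer_report.xlsx.enc", "rb") as f:
    plaintext = fernet.decrypt(f.read())
```

With client-side encryption in place, a leaked SharePoint or Google Drive file is unreadable to anyone who does not also obtain the key.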

Introducing the "World's Most Private VPN" – Now Open for Testers


A Virtual Private Network (VPN) is a security tool that encrypts your internet connection and disguises your IP address. It does this by rerouting your data through an encrypted tunnel to one of the VPN's servers.

While the technical details can be complex, using a VPN is straightforward: you select a server location and click connect. NymVPN distinguishes itself from other VPN services by offering users a choice on how their traffic is rerouted.

The Fast mode is designed for everyday online activities like messaging, casual browsing, and streaming. As its name suggests, this mode prioritizes speed, routing traffic through a fully decentralized network over two server hops. With upcoming support for WireGuard, users can expect even faster connections.

The Anonymous mode is tailored for highly sensitive activities and is what sets NymVPN apart from competitors. In this mode, traffic is routed through five different servers and supplemented with "network noise," making it exceptionally challenging for any third party to intercept the data.

NymVPN’s mix network is inspired by the concept of mix networks introduced by cryptographer David Chaum in the 1980s. The Mixnet approach, independently developed by Chelsea Manning while incarcerated for leaking classified documents to WikiLeaks, employs several strategies to confound data surveillance efforts, including data fragmentation, dummy data packets, timing delays, and data packet shuffling.

“With advancements in AI-driven data analytics, data surveillance capabilities are growing stronger. There’s a need for advanced decentralized networks that can thwart these tracking attempts, not just now but in the future,” explains the provider in a blog post.

In short, NymVPN's mixnet disrupts traffic analysis by fragmenting data into uniform packets, injecting dummy packets, introducing timing delays, and shuffling packets as they pass through each hop.
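To make those techniques concrete, here is a deliberately simplified toy simulation of a single mix node in Python. It is purely illustrative and is not based on Nym's actual packet format or codebase: it pads packets to a uniform size, injects dummy cover packets, shuffles the batch, and forwards each packet after a random delay.

```python
import os
import random
import time

PACKET_SIZE = 512  # pad every packet to a fixed size so lengths leak nothing

def pad(payload: bytes) -> bytes:
    """Pad (or truncate) a payload to the uniform packet size."""
    return payload[:PACKET_SIZE].ljust(PACKET_SIZE, b"\x00")

def dummy_packet() -> bytes:
    """Cover traffic: random bytes indistinguishable in size from real data."""
    return os.urandom(PACKET_SIZE)

def mix_node(incoming: list[bytes], dummy_ratio: float = 0.5) -> list[bytes]:
    """Toy mix node: pad, add dummies, shuffle, and delay before forwarding."""
    batch = [pad(p) for p in incoming]
    batch += [dummy_packet() for _ in range(int(len(batch) * dummy_ratio))]
    random.shuffle(batch)                      # break input/output ordering
    forwarded = []
    for packet in batch:
        time.sleep(random.uniform(0.0, 0.05))  # random timing delay per packet
        forwarded.append(packet)
    return forwarded

if __name__ == "__main__":
    messages = [b"hello", b"sensitive query", b"another message"]
    out = mix_node(messages)
    print(f"{len(messages)} real packets in, {len(out)} uniform packets out")
```

In a real mixnet, each packet is also wrapped in layers of encryption so that every hop learns only the previous and next node, which is what makes the shuffling effective against an observer watching the whole network.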

When NymVPN first launched in its alpha phase in November, Nym CEO Harry Halpin explained: "AI models are effective at analyzing data by identifying patterns. Our VPN counters this by adding fake traffic, mixing traffic, and scrambling the patterns. In essence, while our service functions like a VPN, it's essentially an anti-artificial intelligence machine."

How to Use NymVPN Beta

The NymVPN team is now inviting users to explore the VPN in its beta phase, test its features, and provide feedback.

To start using NymVPN, visit nymvpn.com and enter your email address. You’ll receive a confirmation email shortly; verify your subscription through the link provided.

While you wait, you can download the NymVPN app on your preferred device. The service offers applications for all major operating systems, including Android, iOS, Windows, macOS, and Linux.

Once you have installed the app, you’ll receive an anonymous credential, which you can enter under the "Add Your Credential" section in the NymVPN app's settings. You’re all set to explore and determine if this is truly the most private VPN available.