
Cyberattacks Skyrocket in India: Are We Ready for the Digital Danger Ahead?


 

India is experiencing a rise in cyberattacks, particularly targeting its key sectors such as finance, government, manufacturing, and healthcare. This increase has prompted the Reserve Bank of India (RBI) to urge banks and financial institutions to strengthen their cybersecurity measures.

As India continues to digitise its infrastructure, it has become more vulnerable to cyberattacks. Earlier this year, hackers stole and leaked 7.5 million records from boAt, a leading Indian company that makes wireless audio and wearable devices. This is just one example of how cybercriminals are targeting Indian businesses and institutions.

The RBI has expressed concern about the growing risks in the financial sector due to rapid digitisation. In 2023 alone, India’s national cybersecurity team, CERT-In, handled about 16 million cyber incidents, a massive increase from just 53,000 incidents in 2017. Most banks and non-banking financial companies (NBFCs) now see cybersecurity as a major challenge as they move towards digital technology. The RBI’s report highlights that the speed at which information and rumours can spread digitally could threaten financial stability. Cybercriminals are increasingly focusing on financial institutions rather than individual customers.

The public sector, including government agencies, has also seen a dramatic rise in cyberattacks. Many organisations report that these attacks have increased by at least 50%. Earlier this year, a hacking group targeted government agencies and energy companies using a type of malware known as HackBrowserData. Additionally, countries like Pakistan and China have been intensifying their cyberattacks on Indian organisations, with operations like the recent Cosmic Leopard campaign.

According to a report by Cloudflare, 83% of organisations in India experienced at least one cybersecurity incident in the last year, placing India among the top countries in Asia facing such threats. Globally, India is the fifth most breached nation, underscoring the urgent need for stronger cybersecurity measures.

Indian companies are most worried about threats related to cloud computing, connected devices, and software vulnerabilities. The adoption of new technologies like artificial intelligence (AI) and cloud computing, combined with the shift to remote work, has accelerated digital transformation, but it also increases the need for stronger security measures.

Manu Dwivedi, a cybersecurity expert from PwC India, points out that AI-powered phishing and sophisticated social engineering techniques have made ransomware a top concern for organisations. As more companies use cloud services and open-source software, the risk of cyberattacks grows. Dwivedi also stresses the importance of protecting against insider threats, which requires a mix of strategy, culture, training, and governance.

AI is playing a growing role in both defending against and enabling cyberattacks. While AI has the potential to improve security, it also introduces new risks. Cybercriminals are beginning to use AI to create more advanced malware that can avoid detection. Dwivedi warns that as AI continues to evolve, it may become harder to track how these tools are being misused by attackers.

Partha Gopalakrishnan, founder of PG Advisors, emphasises the need for India to update its cybersecurity laws. The current law, the Information Technology Act of 2000, is outdated and does not fully address today’s digital threats. Gopalakrishnan also stressed the growing demand for AI skills in India, suggesting that businesses should focus on training in both AI and cybersecurity to close the skills gap. He warns that as AI becomes more accessible, it could empower a wider range of people to carry out sophisticated cyberattacks.

India’s digital growth presents great opportunities, but it also comes with serious challenges. It’s crucial for Indian businesses and government agencies to develop comprehensive cybersecurity strategies and stay vigilant.


Cloud Security Report Highlights Misconfiguration and IAM as Top Threats

Traditional cloud security issues once associated with service providers are declining in significance, according to the Cloud Security Alliance's 2024 Top Threats report. However, new challenges persist.


Misconfigurations, weak identity and access management (IAM), and insecure application programming interfaces (APIs) continue to pose the most significant risks to cloud environments. These issues have held top rankings for several years, indicating their persistent nature and the industry's ongoing focus on addressing them.
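
Checks for the most common of these misconfigurations are straightforward to automate. As a rough illustration (not taken from the report), the sketch below uses AWS's boto3 SDK to flag S3 buckets whose "block public access" settings are incomplete; it assumes boto3 is installed and AWS credentials are already configured.

```python
# Minimal sketch: flag S3 buckets without full "block public access" settings.
# Illustrative only; a real cloud security posture check covers far more.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            print(f"{name}: public access not fully blocked -> {cfg}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured at all")
        else:
            raise
```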

Other critical concerns include inadequate cloud security strategies, vulnerabilities in third-party resources and software development, accidental data leaks, and system weaknesses. While threats like denial of service and shared technology vulnerabilities have diminished in impact, the report highlights the growing sophistication of attacks, including the use of artificial intelligence.

The cloud security landscape is also influenced by increasing supply chain risks, evolving regulations, and the rise of ransomware-as-a-service (RaaS). Organizations must adapt their security practices to address these challenges and protect their cloud environments.

The report's findings are based on a comprehensive survey of cybersecurity professionals, emphasizing the importance of these issues within the industry.
 
Key Takeaways:
* Misconfigurations, IAM, and API security remain top cloud security concerns.
* Attacks are becoming more sophisticated, requiring proactive security measures.
* Supply chain risks, regulatory changes, and ransomware pose additional threats.
* Organizations must prioritize cloud security to mitigate financial and reputational risks. 

3 Billion Attacks and Counting: The IDF’s Cyber Resilience


The Battlefield: Cloud Computing

Cloud computing has become an integral part of modern military operations. The IDF relies heavily on cloud-based systems for everything from troop management to logistics, communication, and intelligence gathering. These systems allow for flexibility, scalability, and efficient resource allocation.

However, they also make attractive targets for cyber adversaries seeking to disrupt operations, steal sensitive information, or compromise critical infrastructure.

The Israel Defense Forces' cloud computing network has been subjected to almost three billion cyberattacks since the conflict between Israel and Hamas began on October 7, according to the officer in charge of the military's computing unit. All of the attacks were detected, however, and none did any damage.

Col. Racheli Dembinsky, chief of the IDF's Center of Computing and Information Systems (Mamram), disclosed the figure on Wednesday during the "IT for IDF" conference in Rishon Lezion.

According to Dembinsky, the attacks targeted operational cloud computing, which is used by numerous systems that serve troops on the ground during conflict to communicate information and forces' whereabouts.

The Scale of the Threat

Three billion attacks may sound staggering, and indeed it is a staggering figure. These attacks targeted operational cloud computing resources used by troops on the ground during combat. Imagine the strain on the network as thousands of soldiers accessed critical data simultaneously while under fire. Despite this immense pressure, Mamram’s cybersecurity experts managed to fend off every attempt.

Dembinsky did not specify the types of assaults or the level of danger they posed, but she did state that they were all blocked and that no systems were penetrated at any time.

Mamram, the IDF's central computing system unit, is responsible for the infrastructure and defense of the military's remote servers.

Hamas terrorists stormed Israel on October 7, killing over 1,200 people, the majority of them civilians, and capturing 251. It has also been stated that cyberattacks were launched against Israel on October 7; Dembinsky corroborated this.

The Human Element

While technology played a crucial role, the expertise and dedication of Mamram’s personnel truly made a difference. These cyber warriors worked tirelessly, analyzing attack vectors, identifying vulnerabilities, and devising countermeasures. Their commitment to safeguarding Israel’s digital infrastructure was unwavering.

Since the start of the war, some cyberattacks have succeeded against Israeli civilian computer systems. Iranian-backed hackers targeted the Israel State Archives in November, and it was only recently restored to service. Hackers also successfully targeted the computer systems of the city of Modiin Illit.

The Defense Strategy

Last month, Israel's cyber defense chief, Gaby Portnoy, stated that Iran's cyberattacks have become more active since the start of the war, directed not only against Israel but also against its allies.

The Decline of Serverless Computing: Lessons For Enterprises To Learn

In the rapidly changing world of cloud technology, serverless computing, once hailed as a groundbreaking innovation, is now losing its relevance. When it first emerged over a decade ago, serverless computing promised to free developers from managing detailed compute and storage configurations by handling everything automatically at the time of execution. It seemed like a natural evolution from Platform-as-a-Service (PaaS) systems, which were already simplifying aspects of computing. 

Many industry experts and enthusiasts jumped on the serverless bandwagon, predicting it would revolutionize cloud computing. However, some seasoned professionals, wary of the hype, recognized that serverless would play a strategic role rather than be a game-changer. Today, serverless technology is increasingly overshadowed by newer trends and innovations in the cloud marketplace. 

Why Did Serverless Lose Its Shine? 

Initially praised for simplifying infrastructure management and scalability, serverless computing has been pushed to the periphery by the rise of other cloud paradigms, such as edge computing and microclouds. These new paradigms offer more tailored solutions that cater to specific business needs, moving away from the one-size-fits-all approach of serverless computing. One significant factor in the decline of serverless is the explosion of generative AI. 

Cloud providers are heavily investing in AI-driven solutions, which require specialized computing resources and substantial data management capabilities. Traditional serverless models often fall short in meeting these demands, leading companies to opt for more static and predictable solutions. The concept of ubiquitous computing, which involves embedding computation into everyday objects, further exemplifies this shift. This requires continuous, low-latency processing that traditional serverless frameworks might struggle to deliver consistently. As a result, serverless models are increasingly marginalized in favour of more integrated and pervasive computing environments. 

What Can Enterprises Learn? 

For enterprises, the fading prominence of serverless cloud technology signals a need to reassess their technology strategies. Organizations must embrace emerging paradigms like edge computing, microclouds, and AI-driven solutions to stay competitive. 

The rise of AI and ubiquitous computing necessitates specialized computing resources and innovative application designs. Businesses should focus on selecting the right technology stack to meet their specific needs rather than chasing the latest cloud hype. While serverless has played a role in cloud evolution, its impact is limited compared to the newer, more nuanced solutions now available.

37signals Boosts Profits by Over $1 Million by Exiting Cloud Computing

 


This year, software company 37signals has made headlines with its decision to leave cloud computing, resulting in a significant profit boost of over $1 million (£790,000). This move highlights a growing trend among businesses reassessing the value of cloud services versus traditional in-house infrastructure. 37signals, known for its project management tool Basecamp and its HEY email service, decided to transition away from cloud providers and manage its own servers.

This shift has not only reduced their operating expenses but also provided greater control over their infrastructure. By avoiding the recurring costs associated with cloud services, 37signals has been able to retain more revenue, contributing directly to its increased profitability. The decision to leave the cloud stems from various factors. While cloud computing offers scalability and flexibility, it often comes with high costs that can accumulate over time, especially for companies with predictable workloads. 

By managing their own servers, companies like 37signals can optimize performance and cut costs associated with data transfer and storage. Furthermore, this move has implications for data security and privacy. Controlling their own infrastructure allows companies to implement stricter security measures tailored to their needs, reducing reliance on third-party vendors. This can be particularly important for firms handling sensitive information, as it minimizes potential vulnerabilities associated with shared cloud environments. 37signals’ successful transition away from cloud computing is part of a broader industry trend. Other companies are also evaluating the cost-benefit balance of cloud services. 

For some, the flexibility and ease of scaling offered by cloud solutions remain invaluable, while others, like 37signals, find that in-house infrastructure provides a more cost-effective and secure alternative. As more companies share their experiences and outcomes, it will be interesting to see how the landscape of cloud computing evolves. Businesses must carefully consider their unique needs, workloads, and security requirements when deciding whether to invest in cloud services or return to more traditional infrastructure solutions. 

The decision by 37signals to leave the cloud and the subsequent financial benefits they’ve reaped could encourage other companies to reevaluate their own strategies. By weighing the pros and cons, businesses can make informed decisions that align with their financial and operational goals.

Rethinking the Cloud: Why Companies Are Returning to Private Solutions


In the past ten years, public cloud computing has dramatically changed the IT industry, promising businesses limitless scalability and flexibility. By reducing the need for internal infrastructure and specialised personnel, many companies have eagerly embraced public cloud services. However, as their cloud strategies evolve, some organisations are finding that the expected financial benefits and operational flexibility are not always achieved. This has led to a new trend: cloud repatriation, where businesses move some of their workloads back from public cloud services to private cloud environments.

Choosing to repatriate workloads requires careful consideration and strategic thinking. Organisations must thoroughly understand their specific needs and the nature of their workloads. Key factors include how data is accessed, what needs to be protected, and cost implications. A successful repatriation strategy is nuanced, ensuring that critical workloads are placed in the most suitable environments.

One major factor driving cloud repatriation is the rise of edge computing. Research from Virtana indicates that most organisations now use hybrid cloud strategies, with over 80% operating in multiple clouds and around 75% utilising private clouds. This trend is especially noticeable in industries like retail, industrial sectors, transit, and healthcare, where control over computing resources is crucial. The growth of Internet of Things (IoT) devices has played a defining role, as these devices collect vast amounts of data at the network edge.

Initially, sending IoT data to the public cloud for processing made sense. But as the number of connected devices has grown, the benefits of analysing data at the edge have become clear. Edge computing offers near real-time responses, improved reliability for critical systems, and reduced downtime—essential for maintaining competitiveness and profitability. Consequently, many organisations are moving workloads back from the public cloud to take advantage of localised edge computing.

Concerns over data sovereignty and privacy are also driving cloud repatriation. In sectors like healthcare and financial services, businesses handle large amounts of sensitive data. Maintaining control over this information is vital to protect assets and prevent unauthorised access or breaches. Increased scrutiny from CIOs, CTOs, and boards has heightened the focus on data sovereignty and privacy, leading to more careful evaluations of third-party cloud solutions.

Public clouds may be suitable for workloads not bound by strict data sovereignty laws. However, many organisations find that private cloud solutions are necessary to meet compliance requirements. Factors to consider include the level of control, oversight, portability, and customization needed for specific workloads. Keeping data within trusted environments offers operational and strategic benefits, such as greater control over data access, usage, and sharing.

The trend towards cloud repatriation reflects a growing realisation that the public cloud is not always the best choice for every workload. Organisations are increasingly making strategic decisions to align their IT infrastructure with their specific needs and priorities. 



Apple's Private Cloud Compute: Enhancing AI with Unparalleled Privacy and Security

 

At Apple's WWDC 2024, much attention was given to its "Apple Intelligence" features, but the company also emphasized its commitment to user privacy. To support Apple Intelligence, Apple introduced Private Cloud Compute (PCC), a cloud-based AI processing system designed to extend Apple's rigorous security and privacy standards to the cloud. Private Cloud Compute ensures that personal user data sent to the cloud remains inaccessible to anyone other than the user, including Apple itself. 

Apple described it as the most advanced security architecture ever deployed for cloud AI compute at scale. Built with custom Apple silicon and a hardened operating system designed specifically for privacy, PCC aims to protect user data robustly. Apple's statement highlighted that PCC's security foundation lies in its compute node, a custom-built server hardware that incorporates the security features of Apple silicon, such as Secure Enclave and Secure Boot. This hardware is paired with a new operating system, a hardened subset of iOS and macOS, tailored for Large Language Model (LLM) inference workloads with a narrow attack surface. 

Although details about the new OS for PCC are limited, Apple plans to make software images of every production build of PCC publicly available for security research. This includes every application and relevant executable, and the OS itself, published within 90 days of inclusion in the log or after relevant software updates are available. Apple's approach to PCC demonstrates its commitment to maintaining high privacy and security standards while expanding its AI capabilities. By leveraging custom hardware and a specially designed operating system, Apple aims to provide a secure environment for cloud-based AI processing, ensuring that user data remains protected. 

Apple's initiative is particularly significant in the current digital landscape, where concerns about data privacy and security are paramount. Users increasingly demand transparency and control over their data, and companies are under pressure to provide robust protections against cyber threats. By implementing PCC, Apple not only addresses these concerns but also sets a new benchmark for cloud-based AI processing security. The introduction of PCC is a strategic move that underscores Apple's broader vision of integrating advanced AI capabilities with uncompromised user privacy. 

As AI technologies become more integrated into everyday applications, the need for secure processing environments becomes critical. PCC's architecture, built on the strong security foundations of Apple silicon, aims to meet this need by ensuring that sensitive data remains private and secure. Furthermore, Apple's decision to make PCC's software images available for security research reflects its commitment to transparency and collaboration within the cybersecurity community. This move allows security experts to scrutinize the system, identify potential vulnerabilities, and contribute to enhancing its security. Such openness is essential for building trust and ensuring the robustness of security measures in an increasingly interconnected world. 

In conclusion, Apple's Private Cloud Compute represents a significant advancement in cloud-based AI processing, combining the power of Apple silicon with a specially designed operating system to create a secure and private environment for user data. By prioritizing security and transparency, Apple sets a high standard for the industry, demonstrating that advanced AI capabilities can be achieved without compromising user privacy. As PCC is rolled out, it will be interesting to see how this initiative shapes the future of cloud-based AI and influences best practices in data security and privacy.

Why Is Active Directory a Big Deal?

 


A comprehensive study by XM Cyber and the Cyentia Institute has unveiled a startling reality: a staggering 80% of cybersecurity vulnerabilities within organisations stem from issues related to Active Directory. This might sound like tech jargon, but Active Directory is, in essence, a crucial part of how computers in a company talk to each other.

Active Directory functions as the central nervous system of an organisation's digital environment. Its vulnerabilities, often stemming from misconfigurations and attempts to compromise user credentials, pose significant risks. Tools like Mimikatz further exacerbate these vulnerabilities, enabling malicious actors to exploit weaknesses and gain unauthorised access.
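
A routine first step defenders take against such weaknesses is auditing privileged groups for stale or unexpected accounts. The sketch below (illustrative, not from the study) uses the Python ldap3 library to list members of a "Domain Admins" group; the server address, credentials, and distinguished names are hypothetical placeholders.

```python
# Illustrative audit: list "Domain Admins" members and their last logon time.
# Server, credentials, and DNs below are placeholders, not a real environment.
from ldap3 import Server, Connection, SUBTREE

server = Server("ldaps://dc1.example.local")
conn = Connection(server, user="EXAMPLE\\auditor", password="<secret>", auto_bind=True)

conn.search(
    search_base="DC=example,DC=local",
    search_filter="(&(objectClass=user)(memberOf=CN=Domain Admins,CN=Users,DC=example,DC=local))",
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "lastLogonTimestamp"],
)
for entry in conn.entries:
    print(entry.sAMAccountName, entry.lastLogonTimestamp)
```

Accounts that never log on yet retain admin rights are exactly the kind of exposure the study describes.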

Cloud Computing: New Risks, Same Problems

Even though we talk a lot about keeping things safe in the cloud, it turns out that's not always the case. More than half of the problems affecting important assets in companies come from cloud services. This means attackers can jump between regular computer networks and the cloud, making it harder to keep things safe.

Different Industries, Different Worries

When it comes to who's facing the most trouble, it depends on the industry. Some, like energy and manufacturing, have more issues with things being exposed on the internet. Others, like healthcare, deal with way more problems overall, which makes sense since they have a lot of sensitive data. Tailored strategies are essential, emphasising the importance of proactive measures to mitigate risks effectively.

What We Need to Do

Zur Ulianitzky, Vice President of Security Research at XM Cyber, emphasises the need for a holistic approach to exposure management. With a mere 2% of vulnerabilities residing in critical 'choke points,' organisations must broaden their focus beyond traditional vulnerability patching. Prioritising identity management, Active Directory security, and cloud hygiene is vital in making sure our cloud services are safe.

We need to be smarter about how we protect our computer systems. We can't just focus on fixing things after they've gone wrong. We need to be proactive and think about all the ways someone could try to break in. By doing this, we can make sure our businesses stay safe from cyber threats. Only through concerted efforts and strategic investments in cybersecurity can organisations stay ahead of the curve and protect against the ever-present spectre of cyber threats.



Nvidia Unveils Latest AI Chip, Promising 30x Faster Performance

 

Nvidia, a dominant force in the semiconductor industry, has once again raised the bar with its latest unveiling of the B200 "Blackwell" chip. Promising an astonishing 30 times faster performance than its predecessor, this cutting-edge AI chip represents a significant leap forward in computational capabilities. The announcement was made at Nvidia's annual developer conference, where CEO Jensen Huang showcased not only the groundbreaking new chip but also a suite of innovative software tools designed to enhance system efficiency and streamline AI integration for businesses. 

The excitement surrounding the conference was palpable, with attendees likening the atmosphere to the early days of tech presentations by industry visionaries like Steve Jobs. Bob O'Donnell from Technalysis Research, who was present at the event, remarked, "the buzz was in the air," underscoring the anticipation and enthusiasm for Nvidia's latest innovations. 

One of the key highlights of the conference was Nvidia's collaboration with major tech giants such as Amazon, Google, Microsoft, and OpenAI, all of whom expressed keen interest in leveraging the capabilities of the new B200 chip for their cloud-computing services and AI initiatives. With an 80% market share and a track record of delivering cutting-edge solutions, Nvidia aims to solidify its position as a leader in the AI space. 

In addition to the B200 chip, Nvidia also announced plans for a new line of chips tailored for automotive applications. These chips will enable functionalities like in-vehicle chatbots, further expanding the scope of AI integration in the automotive industry. Chinese electric vehicle manufacturers BYD and Xpeng have already signed up to incorporate Nvidia's new chips into their vehicles, signalling strong industry endorsement. 

Furthermore, Nvidia demonstrated its commitment to advancing robotics technology by introducing a series of chips specifically designed for humanoid robots. This move underscores the company's versatility and its role in shaping the future of AI-powered innovations across various sectors. Founded in 1993, Nvidia initially gained recognition for its graphics processing chips, particularly in the gaming industry. 

However, its strategic investments in machine learning capabilities have propelled it to the forefront of the AI revolution. Despite facing increasing competition from rivals like AMD and Intel, Nvidia remains a dominant force in the market, capitalizing on the rapid expansion of AI-driven technologies. As the demand for AI solutions continues to soar, Nvidia's latest advancements position it as a key player in driving innovation and shaping the trajectory of AI adoption in the business world. With its track record of delivering high-performance chips and cutting-edge software tools, Nvidia is poised to capitalize on the myriad opportunities presented by the burgeoning AI market.

Escalating Global Threats Targeting Cloud Infrastructure

 

Cloud computing's rapid uptake has fundamentally changed how businesses manage and store their data. However, as cloud environments have grown in popularity, cyber threats targeting them have risen alarmingly. Recent studies and industry publications show that the sophistication of attacks on clouds is rising globally, illuminating the changing character of cyber threats.

According to a comprehensive global study on cybersecurity, the sophistication of attacks on clouds has witnessed a notable surge. The report emphasizes the need for enhanced security measures to counter these evolving threats. One of the key findings reveals that India, a major player in the IT industry, has experienced a significant increase in cloud-related cyber incidents. This highlights the urgency for organizations to prioritize their cloud security strategies to safeguard sensitive data.

The Thales Data Threat Report's analysis highlights the escalating severity of the threat. According to the research, the biggest causes of cloud data breaches worldwide are a rise in ransomware attacks and human error. Since fraudsters are using ever more sophisticated techniques, organizations must deploy strong security measures to safeguard their cloud assets. Ensuring the security, integrity, and availability of data is crucial as cloud-based services increasingly permeate company operations.

Experts caution that these growing risks to cloud platforms demand a proactive, multi-layered approach to cybersecurity. Traditional security measures alone are no longer sufficient. To manage threats effectively, organizations must use cutting-edge technologies and create a thorough security strategy. The report also emphasizes the importance of data security and encryption techniques, which are essential for securing cloud-stored data.

A research report on the global cybersecurity market likewise stresses the necessity for stronger security measures. It highlights the rising demand for cybersecurity solutions and services to counter the increasingly complex nature of cyber threats, and it shows that businesses across a range of industries are investing more in advanced security tools to safeguard their cloud infrastructure and fend off sophisticated attacks.

Industry experts stress the value of keeping up with the most recent security trends and implementing preventative security measures in light of these findings. To inform employees of the possible hazards involved with cloud-based operations, organizations must emphasize security awareness training. Strong access controls, frequent vulnerability scans, and the use of threat intelligence tools are essential elements in enhancing cloud security.

Organizations must remain vigilant and proactive in their cybersecurity efforts as the sophistication of cloud threats continues to rise internationally. Protecting cloud environments against developing cyber threats requires putting in place a thorough security strategy, utilizing cutting-edge technology, and promoting a culture of security awareness.



GAO Urges Federal Agencies to Implement Key Cloud Security Practices

The Government Accountability Office (GAO) has called on federal agencies to fully implement essential cloud security practices in order to enhance their cybersecurity posture. In a recent report, the GAO highlighted the importance of adopting and adhering to these practices to mitigate risks associated with cloud computing.

According to the GAO, four federal departments have not fully implemented cloud security practices, which puts their systems and data at increased vulnerability. The report emphasizes that addressing these shortcomings is critical for ensuring the confidentiality, integrity, and availability of sensitive information stored in the cloud.

Cloud computing offers numerous benefits to federal agencies, including increased efficiency, scalability, and cost-effectiveness. However, it also introduces unique cybersecurity challenges that must be addressed proactively. The GAO report outlines several key security practices that agencies should prioritize to strengthen their cloud security posture.

One of the primary recommendations is to implement strong identity and access management controls. This involves ensuring that only authorized individuals have access to sensitive data and systems and that user privileges are properly managed and monitored. By implementing multi-factor authentication and robust user access controls, agencies can significantly reduce the risk of unauthorized access.

Another crucial aspect highlighted by the GAO is the need for comprehensive data protection measures. This includes encrypting sensitive data both at rest and in transit, implementing secure data backup and recovery processes, and regularly testing the effectiveness of these measures. By employing encryption and backup protocols, agencies can minimize the impact of data breaches or system failures.
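
For illustration, here is a minimal sketch of what encryption at rest can look like in code, using the Fernet recipe from the widely used Python cryptography package (AES-128-CBC plus an HMAC integrity check). This is not drawn from the GAO report; in a real deployment the key would live in a key-management service rather than being generated inline.

```python
# Minimal sketch of symmetric encryption at rest with the "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: fetch from a KMS/HSM, never hard-code
f = Fernet(key)

ciphertext = f.encrypt(b"sensitive record")          # store only the ciphertext
assert f.decrypt(ciphertext) == b"sensitive record"  # recover with the same key
```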

Additionally, the GAO emphasizes the importance of monitoring and logging activities within cloud environments. By implementing robust logging mechanisms and real-time monitoring tools, agencies can detect and respond to security incidents promptly. This enables them to identify unauthorized access attempts, suspicious activities, and potential vulnerabilities that could be exploited by attackers.
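
As a rough sketch of what such monitoring builds on, the snippet below (illustrative, not from the report) uses boto3 to pull the last 24 hours of console sign-in events from AWS CloudTrail, a typical starting point for spotting suspicious logins.

```python
# Illustrative sketch: review recent console sign-in events via AWS CloudTrail.
# Assumes boto3 is installed and AWS credentials are configured.
from datetime import datetime, timedelta
import boto3

ct = boto3.client("cloudtrail")
events = ct.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
)
for event in events["Events"]:
    print(event["EventTime"], event.get("Username", "?"), event["EventName"])
```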

The GAO report further highlights the significance of training and awareness programs for agency personnel. It recommends providing comprehensive cybersecurity training to employees, ensuring they are aware of potential threats, best practices, and their role in maintaining a secure cloud environment. Regular training and awareness initiatives can help strengthen the overall security culture within agencies.

The GAO study concludes by serving as a reminder to government agencies of the significance of fully implementing important cloud security measures. Agencies can dramatically improve their cybersecurity posture in the cloud by giving priority to identity and access control, data protection, monitoring, and training. Federal agencies must act quickly on these recommendations and set aside the necessary funds to guarantee the integrity and security of their cloud-based systems and data.

The Rising Popularity of Remote Browser Isolation


The Importance of Browser Isolation in a Remote Work Environment

The COVID-19 pandemic has caused a seismic shift in the way we work, with remote work becoming the norm for many organizations. While this has brought numerous benefits, it has also presented new security challenges. In response, companies have turned to remote browser isolation as a solution. 

According to the "Innovation Insight for Remote Browser Isolation" report by Menlo Security, remote browser isolation is a rapidly evolving technology that is gaining popularity due to its ability to provide a secure browsing experience. In this blog, we will explore some of the key findings of this report and examine the growing importance of remote browser isolation in today's business landscape.

Amit Jain, Senior Director of Product Management at Zscaler, a cloud-based security company, suggests that with the increasing number of remote employees using cloud services, browser isolation has become essential to safeguarding both corporate cloud services and employees' devices.

He says, "For modern enterprises, the Internet is now the corporate network. This shift has enabled workers to work from anywhere while being able to access the information they need for their jobs through cloud-based apps and private apps via the Web, while this has provided maximum flexibility to workers, it has also significantly expanded the attack surface and has the potential to expose data."

Key Trends in Remote Browser Isolation: An Analysis of Menlo Security's Report

1. Growing Popularity of Remote Browser Isolation: It is quickly gaining traction as a key security technology, with many organizations recognizing its ability to protect against web-based threats.

2. Increased Need for Scalable Solutions: As more companies adopt remote work policies, the need for scalable remote browser isolation solutions has become more pressing. Many companies are exploring cloud-based solutions to meet this need.

3. The Importance of User Experience: Despite its security benefits, remote browser isolation can be challenging to implement in a way that provides a seamless user experience. The report highlights the importance of user experience in driving the adoption and suggests that solutions that prioritize ease of use are likely to gain traction.

4. New Threats and Attack Vectors: As with any security technology, remote browser isolation is not immune to evolving threats and attack vectors. The report discusses some of the emerging threats that remote browser isolation must contend with and suggests that ongoing innovation in this space will be critical in order to stay ahead of attackers.

5. Integration with Other Security Technologies: Remote browser isolation is most effective when integrated with other security technologies such as secure web gateways and endpoint security solutions. 

Browser Isolation Solutions: Will companies isolate?

Gartner says, "By 2022, 25% of enterprises will adopt browser isolation techniques for some high-risk users and use cases, up from less than 1% in 2017. By effectively isolating endpoints from browser-executable code, attacks that compromise end-user systems will be reduced by 70%, while eliminating the need to detect or identify malware."

Larger companies operating in regulated industries have tended to adopt remote browser isolation due to its ease of deployment and its physical air gap, which provides an additional layer of security. 

Small and medium-sized enterprises tend to opt for local browser isolation technology due to its flexibility. As expected, vendors have varying opinions on whether standalone or integrated solutions are preferable.

Mr. Jain from Zscaler said "The technology should be fully integrated into the zero trust platform providing threat protection for all Web activity and preventing data loss from sanctioned SaaS and corporate private apps. Moreover, HTML smuggling [and other] attacks can be better thwarted by an architecture which involves a tighter combination of browser isolation and sandbox technologies."

As cloud usage has increased, browser isolation has become even more important. Cloud services are often accessed through web browsers, and if a user's device is compromised, the sensitive data stored in the cloud is also at risk. However, using browser isolation significantly reduces the risk of a data breach.

Mark Guntrip, senior director at Menlo Security, said, "It's not the fact of what we do — it's the fact that we do it without interfering with that digital experience of the end user. So they can interact with whatever they want. They can click on whatever they want, but we hold anything that's active away from them."



Massive DDoS Attack Thwarted by Cloudflare

 

Cloudflare says it has stopped the largest volumetric distributed denial-of-service (DDoS) attack it has seen to date. According to the company, the attack emanated from more than 30,000 IP addresses and prioritized firms such as gaming providers, hosting providers, cloud computing platforms, and cryptocurrency enterprises.

The largest of the attacks, the biggest HTTP DDoS attack documented to date, topped 71 million requests per second (rps), per Cloudflare's analysis. That volume is 35% greater than the previous record of 45 million rps, recorded in June 2022.

In response to this stream of continuously escalating attacks, the FBI charged six suspects for their involvement in running 'booter' or 'stresser' platforms, which anybody can use to execute DDoS attacks, and seized dozens of Internet domains. The action was part of Operation PowerOFF, a larger, coordinated worldwide law enforcement operation against DDoS-for-hire services.

Cloudflare has been collaborating with the victims to take down the botnet and is providing service providers with a free botnet threat feed that transmits threat intelligence about ongoing attacks originating from their hosted autonomous systems.

Researchers cautioned organizations to act now, before the next campaign: protecting against DDoS attacks is crucial for organizations of all sizes. While DDoS attacks on non-critical websites might not result in permanent harm or safety hazards, attacks against internet-facing equipment and patient-connected technology in the healthcare industry put patients' safety at risk.
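
One building block of such protection is rate limiting. The toy sketch below shows a single-process sliding-window limiter; real mitigations such as Cloudflare's run distributed versions of this idea across thousands of edge nodes, but the core bookkeeping is the same.

```python
# Toy sliding-window rate limiter: drop a client exceeding N requests per window.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 1.0
MAX_REQUESTS = 100

history: dict[str, deque] = defaultdict(deque)

def allow(client_ip: str) -> bool:
    now = time.monotonic()
    q = history[client_ip]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()              # forget requests that fell out of the window
    if len(q) >= MAX_REQUESTS:
        return False             # over budget: drop or challenge this request
    q.append(now)
    return True
```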



An In-Depth Exploration Of Cloud Hacking And Its Methods

 


Regardless of a business's size or industry, cloud computing is an increasingly popular IT practice among companies. It is a technological process that provides different services through the Internet on an on-demand basis. The resources involved range from tools and applications such as software and servers to databases, networking, and data storage. Unsurprisingly, this growing popularity has also made the cloud one of the industry's most common targets: cloud hacking has risen along with cloud adoption.

By using the Internet to store files, cloud computing offers the possibility of saving files to a remote database instead of a proprietary hard drive or a local storage device. As long as an electronic device has access to the Internet, it can access that data and the software programs that run on it.

It has therefore become the preferred option for both people and businesses for several reasons, including cost savings, increased productivity, speed and efficiency, performance, and security. 

As cloud computing grows more and more popular, it is hardly surprising that the cloud has become a target for hackers; the threat of cyber-hacking has risen rapidly following the widespread adoption of cloud computing.

Cloud computing resources must be integrated into a company's cybersecurity strategy as an integral part of its defense against cybercrime. Using ethical hackers to scan cloud environments for vulnerabilities allows businesses to maintain the highest degree of security, enabling them to patch security flaws before attackers can exploit them.

How Does Ethical Hacking Work in Cloud Computing?


Because the options for cloud computing are so diverse, some form of cloud computing is now used by 98 percent of companies. Cloud services are often perceived as more secure than their on-premises counterparts, but they come with their own set of problems when it comes to cloud hacking.

In the wake of the exponential rise in cyberattacks on cloud-based applications, businesses need trusted security experts who can fix vulnerabilities and close any holes through which attackers could enter their systems.

Protecting cloud computing resources from security vulnerabilities is just as essential as protecting any other part of an information technology system. Ethical hackers wear many hats in cloud computing, but a major part of their work is identifying security weaknesses and vulnerabilities in organizations' computing infrastructure in order to strengthen the security of the cloud service.


The Types of Cloud Computing: What Are They?


It is worth knowing that there are several different types of cloud computing, and you can select among them according to your requirements. As a first step in classifying cloud services, start by determining where they are physically hosted:

Public clouds are services hosted and provided by third parties and made available to the general public.

Private clouds are cloud services dedicated to a single organization. Depending on its needs, they can be hosted by the company itself or by a third-party service provider.

Alternatively, a customer may adopt a hybrid cloud strategy, using both public and private cloud services; for example, a public cloud application paired with a private cloud database that stores sensitive data.

Ethical hackers should also familiarize themselves with the following cloud service models, which describe how cloud offerings are delivered over the Internet:

Software as a Service (SaaS) is often misunderstood. It means that the cloud provider is responsible for updating and maintaining the software applications for the customer; productivity suites such as Microsoft Office 365 are a common business example.

Platform as a Service (PaaS) gives customers a managed platform on which to develop and run their own applications. Microsoft Azure and Google App Engine are well-known examples.

As the name suggests, Infrastructure as a Service (IaaS) offers customers subscription-based access to hardware resources such as compute, memory, storage, and networking. Customers must, however, provide their own software to run on that infrastructure.

Cloud hacking methodology: Essentials


Having covered “What is cloud hacking?” and “What is cloud exploitation?”, we will now examine the methodology of cloud hacking. These are some examples of the kinds of attacks that ethical hackers must be aware of in the world of cloud computing:

Brute-force attacks: a brute-force attack is the simplest way to break into a cloud-based service. It involves trying many combinations of usernames and passwords until one works. After gaining access, adversaries can wreak havoc on the system and exfiltrate data from the cloud just as any other attacker can.

Phishing takes a different approach: the attacker impersonates a trusted third party in order to steal credentials from users. Spear phishing is a more sophisticated variant in which the message is tailored to a particular individual and laced with very specific personal details.

Credential stuffing exploits password reuse. When employees at an organization reuse their usernames and passwords across multiple services, an adversary can take a list of credentials stolen in a previous attack and test whether they are valid accounts on a different IT system, putting the company at risk.
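
A common defence against credential stuffing is to reject passwords that already appear in known breaches. The sketch below checks a candidate password against the Have I Been Pwned "Pwned Passwords" range API using k-anonymity, so only the first five characters of the password's SHA-1 hash ever leave the machine (illustrative; requires the requests package).

```python
# Check a password against known breach corpora via the HIBP range API.
import hashlib
import requests

def times_pwned(password: str) -> int:
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)   # number of breaches containing this password
    return 0

print(times_pwned("password123"))  # a large count: reject at signup or reset
```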

As the industry continues to advance cloud computing, ethical hackers play an active role in the process. Cyberattacks on cloud infrastructure have been increasing over the past few years, and ethical hacking is a key factor in ensuring that businesses of every size, in every sector, have appropriate defenses in place.

Researchers Discovered a Vulnerability in Microsoft Azure's Cosmos DB

 

According to a copy of the email and a cyber security researcher, Microsoft warned thousands of its cloud computing customers, including some of the world's largest organizations, that intruders might read, update, or even delete their major databases. Researchers uncovered a "serious" vulnerability in Cosmos DB, a Microsoft Azure flagship database product, that allows an attacker to read, write, and remove data from Cosmos DB customers. 

Microsoft's proprietary database service Cosmos DB was launched in 2017 and is offered through the tech giant's cloud computing platform Azure. Coca-Cola, ExxonMobil, and Schneider Electric are just a few of the world's major organizations that utilize it to manage their data. Many of Microsoft's own programs, such as Skype, Xbox, and Office, use Cosmos DB.

Wiz's research team realized it was possible to gain access to keys that controlled access to databases owned by tens of thousands of companies. (Ami Luttwak, Wiz's Chief Technology Officer, was previously CTO of Microsoft's Cloud Security Group.) Because Microsoft is unable to change those keys on its own, it emailed customers on Thursday telling them to create new ones. According to an email Microsoft sent to Wiz, the company agreed to pay the firm $40,000 for discovering and reporting the flaw.
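
For customers wondering what "create new ones" involves, key rotation can be scripted. The snippet below is a hedged illustration using Azure's Python management SDK (azure-mgmt-cosmosdb); the resource names are placeholders, and the exact operation name may differ between SDK versions, so treat it as a sketch rather than a recipe.

```python
# Hedged sketch: regenerate a Cosmos DB account key, then roll apps over to it.
# Names in angle brackets are placeholders; operation names vary by SDK version.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient

client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.database_accounts.begin_regenerate_key(
    "<resource-group>", "<account-name>", {"key_kind": "primary"}
)
poller.wait()  # once complete, update applications to use the new key
```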

Wiz, which was founded by ex-Microsoft workers, identified the flaw on August 9, 2021. Three days later, the cybersecurity firm notified Microsoft about the problem. Microsoft's security teams disabled the vulnerable feature within 48 hours, according to Wiz. 

There was no evidence that the flaw had been exploited, according to Microsoft's notification to customers. The email stated, "We have no indication that external entities other than the researcher (Wiz) had access to the primary read-write key."

“This is the worst cloud vulnerability you can imagine. It is a long-lasting secret,” Luttwak told Reuters. “This is the central database of Azure, and we were able to get access to any customer database that we wanted.” Even clients who have not been contacted by Microsoft may have had their keys swiped by attackers, giving them access until their keys are changed, according to Luttwak. 

The flaw was found in Jupyter Notebook, a visualization feature that had been available for years but was only enabled by default in Cosmos DB in February.

Microsoft has been plagued by bad security news for months. The company was breached by the same alleged Russian government hackers who compromised SolarWinds, and Microsoft source code was stolen. Then, while a patch was being developed, a wave of hackers broke into Exchange email servers.

Researchers Detail a New Two-Step Cryptography Technique

 

Cloud computing is the on-demand availability of computer system resources, in particular data storage and computing power, without direct active management by the user. The term is commonly used to describe data centers that serve many Internet users. Cloud computing's primary objective is to provide rapid, simple, and cost-effective computing and data storage services. The cloud environment, however, presents data privacy problems.

Cryptography is the key method used to strengthen cloud computing security. This mathematical technique protects saved or transmitted data by encrypting it, so that only the intended recipient can understand it. Various encryption techniques exist, but none is perfectly secure, and new approaches are continually being sought to counter the growing risks to data privacy and security.

With all that in mind, the most important question is: how does the two-step cryptography technique work?

A group of researchers from India and Yemen has described a novel two-step cryptographic method, the first to combine genetic technologies with mathematical techniques. Their explanatory study is published in the International Journal of Intelligent Networks in KeAi. According to the authors, the method can create a highly secure and flexible encrypted environment, which could trigger a paradigm shift in data secrecy.

The paper’s corresponding author, Fursan Thabit of Swami Ramanand Teerth Marathwada University in India, explains: “Some existing famous ciphers use the Feistel structure for encryption and decryption. Others use the Network SP (Substitution-Permutation). The first level of our encryption uses a logical-mathematical function inspired by a combination of the two. Not only does it improve the complexity of the encryption, but it also increases energy efficiency by reducing the number of encryption rounds required.”
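
For readers unfamiliar with the Feistel structure Thabit mentions, the toy sketch below shows its defining property: the same routine decrypts when the round keys are applied in reverse. It is purely illustrative, with a deliberately trivial round function, and is far weaker than any real cipher, including the researchers' own.

```python
# Toy Feistel network: encryption and decryption share one routine.
def round_fn(half: int, key: int) -> int:
    return ((half * 31) ^ key) & 0xFFFF      # placeholder mixing function

def feistel(block: int, keys: list[int]) -> int:
    left, right = block >> 16, block & 0xFFFF
    for k in keys:
        left, right = right, left ^ round_fn(right, k)
    return (right << 16) | left              # final swap makes it self-inverse

keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]      # a small number of rounds
ciphertext = feistel(0xDEADBEEF, keys)
assert feistel(ciphertext, keys[::-1]) == 0xDEADBEEF  # reversed keys decrypt
```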

The second encryption layer is inspired by genetic structures, based on the Central Dogma of Molecular Biology (CDMB). It models the actual genetic processes: encoding (binary-to-DNA base translation), transcription (regeneration from DNA to mRNA), and translation (regeneration from mRNA to protein).
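
As a purely illustrative example of that encoding step (the 2-bit-per-base mapping below is a common convention in DNA cryptography literature, not necessarily the one this paper uses), here is a binary-to-DNA translation followed by a DNA-to-mRNA transcription:

```python
# Toy binary -> DNA encoding, then DNA -> mRNA transcription (complement, T->U).
TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
TRANSCRIBE = str.maketrans("ACGT", "UGCA")

def to_dna(data: bytes) -> str:
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

dna = to_dna(b"hi")
print(dna, "->", dna.translate(TRANSCRIBE))  # CGGACGGC -> GCCUGCCG
```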

The researchers are the first to integrate the concepts of DNA, RNA, and genetic engineering for cryptographic purposes, and the first to merge the genetic encryption process with mathematics to create a complex key.

By evaluating encryption time, decryption time, throughput, and the length of the ciphertexts produced, the researchers assessed the robustness of their novel algorithm. They observed that it offers strong security and is extremely versatile compared with several other genetic encryption approaches and existing symmetric-key encryption techniques. It also takes less time than most other procedures.

Moreover, the algorithm's clear structure (two layers of encryption comprising only four coding rounds) reduces computational complexity and the processing power required.

Thabit explains: “That clear structure means each round requires only simple math and a genetics simulation process.”

Data Breach at Digital Ocean Leaves Customer Billing Data Exposed

 

Digital Ocean, a cloud solutions provider, has informed certain clients that their billing information may have been exposed after someone exploited a flaw in the company's central database.

US-based Digital Ocean, Inc. is a cloud computing vendor headquartered in New York City with data centers around the globe. Digital Ocean offers cloud services that help developers build and scale applications that run concurrently across multiple computers.

Digital Ocean stated in an email to clients that the unauthorized access took place between 9 and 22 April 2021 but was only confirmed on 26 April.

“An unauthorized user gained access to some of your billing account details through a flaw that has been fixed,” the company told customers. Digital Ocean says that only a “small percentage” of its users were affected and that no customer action is necessary.

The leaked billing information includes the customer's name, address, payment card expiry date, the last four digits of the payment card, and the name of the bank that issued the card. The company pointed out that full credit card details are not stored, so that information was not exposed.

“According to our logs approximately 1% of billing profiles were impacted,” Tyler Healy, VP of security at Digital Ocean, told Security Week in an emailed statement. “This issue has been fixed and we have informed the impacted users and notified the relevant data protection authorities.”

Digital Ocean added that over one million developers from countries around the world use its services.

Last year the company told its customers that some of their information had been exposed after a document was accidentally published, though at the time it said the exposure did not appear to be malicious.

Furthermore, the email read: “yesterday we learned that a digital ocean owned document from 2018 was unintentionally made available via a public link. This document contained your email addresses and/or account name (the name you gave your account at sign-up) as well as some data about your account that may have included Droplet count, bandwidth usage, some support or sales communications notes, and the amount you paid during 2018. After a detailed review by our security team, we identified it was accessed at least 5 times before the document was taken down.”

The company also said it will train its employees in protecting customer data, establish new protocols to warn users of possible exposures in a more timely fashion, and make process changes to avoid future exposure of data.

Over 6 Lakh Attempted Attacks on Mumbai Cloud Server Honeypot

At least 678,013 login attempts were made on a Mumbai cloud server honeypot over a month, the second-highest figure among 10 honeypots placed globally; only the honeypot in Ohio, US, saw more, recording over 950,000 login attempts in the same period, global cybersecurity major Sophos said on Wednesday. This demonstrates how cybercriminals automatically scan for weak, open cloud buckets.

A honeypot is a system intended to mimic likely targets of cyberattackers so that security researchers can monitor cybercriminal behaviour. The first login attempt on the Mumbai honeypot came within 55 minutes and 11 seconds of it going live.
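
Conceptually, a low-interaction honeypot is little more than a listener that records whoever knocks. The minimal sketch below logs TCP connection attempts on port 2222 (standing in for SSH); research honeypots like those in the Sophos study additionally emulate the real service so attackers reveal the credentials they try.

```python
# Minimal connection-logging honeypot: record every attempt, serve nothing.
import socket
from datetime import datetime

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 2222))   # 2222 stands in for a service attackers probe
    srv.listen()
    while True:
        conn, (ip, port) = srv.accept()
        print(f"{datetime.utcnow().isoformat()} connection attempt from {ip}:{port}")
        conn.close()
```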

On average, the cloud servers were hit by 13 attempted attacks per minute per honeypot. The honeypots were set up in 10 of the most popular Amazon Web Services (AWS) data centres in the world (California, Frankfurt, Ireland, London, Mumbai, Ohio, Paris, Sao Paulo, Singapore, and Sydney) over a 30-day period.

Sophos announced the findings of its report, Exposed: Cyberattacks on Cloud Honeypots.

With businesses across the globe increasingly adopting Cloud technology, the report revealed the extent to which businesses migrating to hybrid and all-Cloud platforms are at risk. It has thus become vital for businesses to ensure compliance and to know what to protect.

“The aggressive speed and scale of attacks on devices demonstrates the use of botnets to target an organisation’s cloud platform. In some instances, it may be a human attacker. However, regardless of this, companies need to set a security strategy to protect what they are putting into the cloud,” said Sunil Sharma, managing director, sales at Sophos (India & SAARC).

However, multiple development teams within an organization and an ever-changing, auto-scaling environment make this difficult for IT security, a gap Sophos says its Cloud Optix product addresses.

Key features in Sophos Cloud Optix include:

Smart Visibility – Automatic discovery of an organization's assets across AWS, Microsoft Azure and Google Cloud Platform (GCP) environments via a single console, giving security teams complete visibility into everything they have in the cloud so they can respond to and remediate security risks in minutes (a sketch of this kind of discovery follows this list).

Continuous Cloud Compliance – Keeps up with continually changing compliance regulations and best-practice policies by automatically detecting changes to cloud environments in near real-time.

AI-Based Monitoring and Analytics – Shrinks incident response and resolution times from days or weeks to minutes. The artificial intelligence detects risky resource configurations and suspicious network behaviour, raising smart alerts and offering optional automatic risk remediation.
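As a rough sketch of the kind of asset discovery described under Smart Visibility above (an illustration assuming AWS credentials are already configured in the environment, not Cloud Optix's actual implementation), the snippet below inventories EC2 instances and S3 buckets with the boto3 SDK:

```python
import boto3

# Sketch of single-account cloud asset discovery: list compute and
# storage resources so a security team can see what actually exists.
# Cloud Optix-style tools do this continuously across many accounts.
ec2 = boto3.client("ec2", region_name="ap-south-1")  # Mumbai region
s3 = boto3.client("s3")

# Enumerate EC2 instances and flag their public exposure.
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        public_ip = instance.get("PublicIpAddress", "none")
        print(f"EC2 {instance['InstanceId']}: "
              f"state={instance['State']['Name']}, public IP={public_ip}")

# Enumerate S3 buckets; each would then be checked for public access.
for bucket in s3.list_buckets()["Buckets"]:
    print(f"S3 bucket: {bucket['Name']}")
```

A real discovery pipeline would repeat this across every region, account, and provider, and feed the resulting inventory into compliance checks.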

"US’ Giant Military Contract Has a Hitch", Says Deap Ubhi, an Entrepreneur of Indian Descent.





The founder of a local search site “Burrp!”, Deap Ubhi is a lesser known entrepreneur.

He joined Amazon in 2014, where he encouraged start-ups and other organizations to embrace its cloud computing products.

Less than two years later, he left to start a company that provided technology to restaurants.

Later, he joined a Pentagon effort to recruit technologists. He wanted to build a highly effective search engine and, by his own account, to help the American people.

But as it turns out, Ubhi's role at the Pentagon has landed him right in the midst of one of the most prominent federal IT contract disputes.

The $10 billion deal to bring cloud computing to the Pentagon attracted the top tech companies when the project was announced in 2017.

Microsoft, Amazon, IBM, Oracle and Google all wanted to win the deal.

But there was a catch: the contract would go to only one cloud vendor, and Amazon was widely seen as the frontrunner capable of meeting the Pentagon's demands.

This is where Ubhi comes in, particularly his ties to Amazon, where he now works again.

Oracle, widely seen as having little chance of winning the deal, vehemently criticized the single-vendor approach.

The company is now arguing in federal court that Ubhi's alleged bias towards Amazon tainted the deal.

Before the suit was filed, the Pentagon had found no improper influence by Ubhi and continued evaluating the bids despite Oracle's lawsuit.

Later, more information about Ubhi surfaced, and the Pentagon declined a request to disclose it.

The winner was to be announced in April. When contacted, Amazon, Ubhi, and the Pentagon all declined to comment.

Oracle did not comment on the issue outside of court, but during the proceedings it pointed to Ubhi's outspoken enthusiasm for Amazon, citing a tweet from his handle as evidence.

According to the White House press secretary, the US president has played no part in this war of the vendors.

President Trump has never intervened in a government contract before, so any involvement on his part in this situation would be a first.

The cloud contract, known as the Joint Enterprise Defense Infrastructure (JEDI), is being overseen by a Defense Department procurement official.

The identity of the official who will actually choose the winner has not yet been disclosed.

The Pentagon's broader transition to cloud computing is being handled by a team directed by its chief information officer, Dana Deasy.

Cloud computing is expected to contribute a great deal on the battlefield, so the American government is keen to award the contract to the best vendor.

Reportedly, Ubhi worked for a time on market research for JEDI while he was at the Pentagon.

In court, Oracle cited internal documents in which Ubhi voiced support for a single-cloud approach.

Oracle also contends that Ubhi influenced the decision to select a single cloud provider.

In response, Amazon said that Ubhi worked on JEDI for only seven weeks, at the project's early stages, and that more than 70 people were involved in its development.

Amazon and Ubhi's start-up ‘Tablehero’ were reportedly to enter into a partnership, though there is no proof of this as yet. Ubhi has also not been replying to investors' emails.

The Pentagon maintained that a single cloud would allow it to move faster and ensure better security, a position later upheld by the Government Accountability Office.

Both IBM and Oracle filed strong protests with the Government Accountability Office, which were denied in Oracle's case and rejected in IBM's.

Oracle, which has a small share of the cloud market, then took the issue to the US federal courts.

Oracle's lawsuit stands to benefit Microsoft, which has since improved its cloud capabilities and could now be a strong competitor to Amazon.

Whether or not Ubhi shaped the contract, the Pentagon's justifications support its decision to use a single-cloud approach.

The chief motivation behind that decision has always been helping the defense establishment make better data-driven decisions.

Multi-factor authentication bypassed to hack Office 365 & G Suite Cloud accounts

Massive IMAP-based password-spraying attacks successfully breached Microsoft Office 365 and G Suite accounts, circumventing multi-factor authentication (MFA), according to an analysis by Proofpoint.

As noted by Proofpoint's Information Protection Research Team in a recent report, a six-month study of major cloud service tenants found attackers targeting legacy protocols with stolen credential dumps to increase the speed and efficiency of brute-force attacks.

According to the Proofpoint study, IMAP is the most abused protocol: as a legacy protocol, it can bypass MFA, and failed IMAP logins often do not trigger account lock-out.

This technique exploits the fact that IMAP, as a legacy authentication protocol, bypasses MFA, allowing malicious actors to perform credential-stuffing attacks against accounts that would otherwise be protected.
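To see the mechanics, the hedged sketch below shows what a single legacy IMAP authentication attempt looks like in Python; the host and credentials are placeholders, not real targets. Notice that the exchange consists solely of a username and password, so there is no point at which a second factor could be requested.

```python
import imaplib

# Sketch of why legacy IMAP sidesteps MFA: the protocol's LOGIN command
# carries only a username and password, so the exchange has no step at
# which a second factor could be demanded. All values are placeholders.
HOST = "imap.example.com"          # hypothetical tenant's IMAP endpoint
USER = "user@example.com"          # hypothetical account
PASSWORD = "password-from-a-dump"  # stuffed/sprayed credentials go here

try:
    conn = imaplib.IMAP4_SSL(HOST, 993)
    conn.login(USER, PASSWORD)  # single-factor check: it succeeds or fails
    print("Login accepted: mailbox exposed despite MFA on the web sign-in")
    conn.logout()
except imaplib.IMAP4.error:
    # A spraying tool would quietly move on to the next username/password
    # pair, staying under per-account lock-out thresholds.
    print("Login rejected")
```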

These attacks take a more considered approach than the traditional brute force attack, which throws many username-and-password combinations at a single account: in password spraying, attackers instead try a small set of likely passwords across a large number of usernames, staying below lock-out thresholds.
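From the defender's side, this inverts what suspicious log activity looks like: a few attempts each across many accounts rather than many attempts against one. Below is a minimal sketch of flagging that shape, using a hypothetical log format:

```python
from collections import defaultdict

# Hypothetical auth-log records: (source_ip, username, login_succeeded).
# Password spraying shows up as one source touching many distinct
# accounts with only a few attempts each, the opposite of a classic
# brute force hammering a single account.
events = [
    ("203.0.113.7", "alice", False),
    ("203.0.113.7", "bob", False),
    ("203.0.113.7", "carol", True),
    ("198.51.100.2", "dave", False),
]

accounts_by_source = defaultdict(set)
for ip, user, _succeeded in events:
    accounts_by_source[ip].add(user)

SPRAY_THRESHOLD = 3  # tune per environment; deliberately low for the demo
for ip, users in accounts_by_source.items():
    if len(users) >= SPRAY_THRESHOLD:
        print(f"Possible password spraying from {ip}: "
              f"{len(users)} distinct accounts targeted")
```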

Proofpoint's analysis of over one hundred thousand unauthorized logins across millions of monitored cloud user accounts found that:

▬ 72% of tenants were targeted at least once by threat actors
▬ 40% of tenants had at least one compromised account in their environment
▬ Over 2% of active user-accounts were targeted by malicious actors
▬ 15 out of every 10,000 active user-accounts were successfully breached by attackers

The analysis also revealed that around 60% of all Microsoft Office 365 and G Suite tenants were targeted with IMAP-based password-spraying attacks and that, as a direct result, approximately 25% of the Office 365 and G Suite tenants attacked this way experienced a successful breach.

Crunching the numbers overall, Proofpoint concluded that threat actors achieved a striking 44% success rate in breaching accounts at targeted organizations.

The attackers' ultimate aim is to launch internal phishing campaigns and establish a strong foothold within the organization; internal phishing attempts are far harder to detect than external ones.