
Emerging Cybersecurity Threats in 2025: Shadow AI, Deepfakes, and Open-Source Risks

 

Cybersecurity continues to be a growing concern as organizations worldwide face an increasing number of sophisticated attacks. In early 2024, businesses encountered an average of 1,308 cyberattacks per organization per week, a sharp 28% rise from the previous year. This surge highlights the rapid evolution of cyber threats and the pressing need for stronger security strategies. As technology advances, cybercriminals are leveraging artificial intelligence, exploiting open-source vulnerabilities, and using advanced deception techniques to bypass security measures.

One of the biggest cybersecurity risks in 2025 is ransomware, which remains a persistent and highly disruptive threat. Attackers use this method to encrypt critical data, demanding payment for its release. Many cybercriminals now employ double extortion tactics, where they not only lock an organization’s files but also threaten to leak sensitive information if their demands are not met. These attacks can cripple businesses, leading to financial losses and reputational damage. The growing sophistication of ransomware groups makes it imperative for companies to enhance their defensive measures, implement regular backups, and invest in proactive threat detection systems. 

Another significant concern is the rise of Initial Access Brokers (IABs), cybercriminals who specialize in breaching corporate networks and selling that access, often in the form of stolen credentials, to other threat actors. These brokers enable large-scale cyberattacks by making it far easier for ransomware gangs and other groups to infiltrate their targets. The trade has made stolen login credentials a valuable commodity on the dark web, increasing the risk of data breaches and financial fraud. Organizations must prioritize multi-factor authentication (MFA) and continuous monitoring to mitigate these risks.
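To illustrate why MFA blunts stolen-credential attacks, here is a minimal sketch of a time-based one-time password (TOTP) second factor built on the open-source pyotp library. The account names are placeholders, and a real deployment would handle secret storage and enrollment far more carefully:

```python
# Minimal TOTP second-factor check - an illustrative sketch, not a
# production MFA system. Assumes the pyotp library is installed.
import pyotp

# Generated once at enrollment, stored server-side per user, and
# provisioned to the user's authenticator app via the URI below.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:",
      totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

def verify_second_factor(submitted_code: str) -> bool:
    # valid_window=1 tolerates one 30-second step of clock drift.
    return totp.verify(submitted_code, valid_window=1)

# Even with valid (possibly stolen) credentials, login succeeds only
# when the current code from the authenticator app is also supplied.
print(verify_second_factor(totp.now()))   # True
print(verify_second_factor("000000"))     # almost certainly False
```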

A new and rapidly growing cybersecurity challenge is the use of unauthorized artificial intelligence tools, often referred to as Shadow AI. Employees frequently adopt AI-driven applications without proper security oversight, leading to potential data leaks and vulnerabilities. In some cases, AI-powered bots have unintentionally exposed sensitive financial information because their default settings lacked robust security controls. As AI becomes more integrated into workplaces, businesses must establish clear policies to regulate its use and ensure proper safeguards are in place.
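As a deliberately simple illustration of such a safeguard, the sketch below screens text for sensitive patterns before it is sent to an external AI tool. The patterns are assumptions for demonstration; a production deployment would rely on a dedicated data-loss-prevention product:

```python
# Illustrative sketch: block obviously sensitive text from being sent
# to an external AI service. The patterns are simplified assumptions;
# production DLP systems use far richer detection.
import re

SENSITIVE_PATTERNS = {
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api key":     re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "iban":        re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in the prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

prompt = "Summarise Q3: card 4111 1111 1111 1111, key sk-abcdef1234567890XY"
findings = screen_prompt(prompt)
if findings:
    print("Blocked: prompt appears to contain", ", ".join(findings))
else:
    print("Prompt allowed")
```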

Deepfake technology has also emerged as a major cybersecurity threat. Cybercriminals are using AI-generated deepfake videos and audio recordings to impersonate high-ranking officials and deceive employees into transferring funds or sharing confidential data.

A recent incident involved a Hong Kong-based company losing $25 million after an employee fell victim to a deepfake video call that convincingly mimicked their CFO. This alarming development underscores the need for advanced fraud detection systems and enhanced verification protocols to prevent such scams.

Open-source software vulnerabilities are another critical concern. Many businesses and government institutions rely on open-source platforms, but these systems are increasingly being targeted by attackers. Cybercriminals have infiltrated open-source projects, gaining the trust of developers before injecting malicious code.

A notable case involved a widely used Linux tool where a contributor inserted a backdoor after gradually establishing credibility within the project. If not for a vigilant security expert, the backdoor could have remained undetected, potentially compromising millions of systems. This incident highlights the importance of stricter security audits and increased funding for open-source security initiatives. 
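One baseline defense against tampered open-source artifacts is verifying each download against the project's published checksum before installing it. The sketch below shows the idea with a placeholder file name and digest; note that a checksum catches tampering in transit or on mirrors, but not a backdoor committed by a trusted maintainer, which is why signature verification and code audits remain essential:

```python
# Verify a downloaded release artifact against its published SHA-256
# digest before installing it. File name and digest are placeholders.
import hashlib
import sys

EXPECTED_SHA256 = "replace-with-the-digest-published-by-the-project"

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

actual = sha256_of("package-1.2.3.tar.gz")
if actual != EXPECTED_SHA256:
    sys.exit(f"Checksum mismatch - refusing to install (got {actual})")
print("Checksum verified")
```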

To address these emerging threats, organizations and governments must take proactive measures. Strengthening regulatory frameworks, investing in AI-driven threat detection, and enhancing collaboration between cybersecurity experts and policymakers will be crucial in mitigating risks. The cybersecurity landscape is evolving at an unprecedented pace, and without a proactive approach, businesses and individuals alike will remain vulnerable to increasingly sophisticated attacks.

Phishing Attacks Surge by 30% in Australia Amid Growing Cyber Threats

 

Australia witnessed a sharp 30% rise in phishing emails last year, as cybercriminals increasingly targeted the Asia-Pacific (APAC) region, according to a recent study by security firm Abnormal Security. The region's expanding presence in critical industries, such as data centers and telecommunications, has made it a prime target for cyber threats.

Across APAC, credential phishing attacks surged by 30.5% between 2023 and 2024, with New Zealand experiencing a 30% rise, while Japan and Singapore each faced steeper increases of 37%. Among all advanced email-based threats, including business email compromise (BEC) and malware attacks, phishing saw the most significant spike.

“The surge in attack volume across the APAC region can likely be attributed to several factors, including the strategic significance of its countries as epicentres for trade, finance, and defence,” said Tim Bentley, Vice President of APJ at Abnormal Security.

“This makes organisations in the region attractive targets for complex email campaigns designed to exploit economic dynamics, disrupt essential industries, and steal sensitive data.”

Between 2023 and 2024, advanced email attacks across APAC—including Australia, New Zealand, Japan, and Singapore—rose by 26.9% on a median monthly basis. The increase was particularly notable between Q1 and Q2 of 2024 (16%) and further escalated from Q2 to Q3 (20%).

While phishing remains the primary attack method, BEC scams, including executive impersonation and payment fraud, grew by 6% year-over-year. A single successful BEC attack cost an average of US$137,000 in 2023, according to Abnormal Security.

Australia has long been a key target for cybercriminals. A 2023 Rubrik survey revealed that Australian organizations faced the highest data breach rates globally.

Antoine Le Tard, Vice President for Asia-Pacific and Japan at Rubrik, previously noted that Australia’s status as an early adopter of cloud and enterprise security solutions may have led to rapid deployment at the expense of robust cybersecurity measures.

The Australian Signals Directorate reported that only 15% of government agencies met the minimum cybersecurity standards in 2024, a steep drop from 25% in 2023. The reluctance to adopt passkey authentication methods further reflects the cybersecurity maturity challenges in the public sector.

The widespread accessibility of AI chatbots has altered the cybersecurity landscape, making phishing attacks more sophisticated. Jailbroken AI models in particular let cybercriminals generate convincing phishing content effortlessly, lowering the technical barrier for attackers.

AI-driven cyber threats are on the rise, with AI-powered chatbots listed among the top security risks for 2025. According to Vipre, BEC attacks in Q2 2024 increased by 20% year-over-year, with two-fifths of these scams generated using AI tools.

In June, HP intercepted a malware-laden email campaign featuring a script that was “highly likely” created using generative AI. Cybercriminals are also leveraging AI chatbots to establish trust with victims before launching scams—mirroring how businesses use AI for customer engagement.

Big Tech’s Data-Driven AI: Transparency, Consent, and Your Privacy

In the evolving world of AI, data transparency and user privacy are gaining significant attention as companies rely on massive amounts of information to fuel their AI models. While Big Tech giants need enormous datasets to train their AI systems, legal frameworks increasingly require these firms to clarify what they do with users’ personal data. Today, many major tech players use customer data to train AI models, but the specifics often remain obscure to the average user. 

In some instances, companies operate on an "opt-in" model where data usage requires explicit user consent. In others, it is "opt-out": data is used automatically unless the user takes steps to decline, and even this can vary based on regional regulations. For example, Meta's data-use policies for Facebook and Instagram are "opt-out" only in Europe and Brazil, not in the U.S., where laws like the California Consumer Privacy Act (CCPA) enforce more transparency but allow only limited control.

The industry’s quest for data has led to a “land grab,” as companies race to stockpile information before emerging laws impose stricter guidelines. This data frenzy affects users differently across sectors: consumer platforms like social media often limit users’ choice to restrict data use, while enterprise software clients expect privacy guarantees.  

Controversy around data use has even caused some firms to change course. Adobe, following backlash over concerns it might use business customers' content for AI training, pledged not to employ such data for model development. Similarly, Apple has crafted a privacy-first architecture for its AI, promising on-device processing whenever possible and, when necessary, processing in a private, Apple-controlled cloud. Microsoft's AI, including its Copilot+ features, has faced scrutiny as well.

Privacy concerns delayed some features, prompting the company to refine how data like screenshots and app usage are managed. OpenAI, a leader in generative AI, offers varied data-use policies for free and paid users, giving businesses greater control over data than typical consumers.

Microsoft Warns of 600 Million Daily Cyberattacks and Sophisticated Nation-State Tactics

 

A new security report from Microsoft reveals a complex and evolving cyber landscape where cutting-edge technologies, state-sponsored activities, and organized crime are converging, posing unprecedented challenges. To combat these threats, a united global effort is more critical than ever.

According to Microsoft's 2024 Digital Defense Report, over 600 million cyberattacks by criminals and nation-states take place daily, targeting individuals, businesses, and governments worldwide.

A key finding of the 110-page report is the increasing sophistication of cyber threats. Both criminal organizations and state-sponsored actors are leveraging advanced technologies, including generative AI, to enhance their attacks. This technological evolution has made cyber defenses more difficult to maintain.

One of the report’s most concerning observations is the growing collaboration between cybercrime syndicates and nation-state groups. These partnerships are leading to the sharing of tools and techniques, further blurring the lines between criminal and government-backed cyber operations and creating more diverse and effective attack methods.

State-sponsored actors, particularly, are ramping up their cyber activities, motivated by goals ranging from financial gain to intelligence collection, with a strong focus on military targets. For example, Russian threat actors have outsourced parts of their cyber-espionage campaigns to criminal groups, targeting at least 50 Ukrainian military devices with malware. Meanwhile, Iranian actors have combined ransomware attacks with influence operations, and North Korean groups are developing new ransomware variants like FakePenny, aimed at aerospace and defense industries. Chinese cyber efforts remain consistent, continuing to target Taiwan and Southeast Asia.

With the U.S. presidential election approaching, the report raises concerns about foreign interference. Although the public conversation around this issue has quieted since 2020, Russia, Iran, and China are exploiting geopolitical tensions to undermine trust in democratic systems. Other hotspots for cyber activity include countries involved in military conflicts or regional disputes, such as Israel, Ukraine, the UAE, and Taiwan.

Microsoft stresses that addressing these growing threats requires collaboration between the public and private sectors, as well as advancements in policy and cybersecurity practices. Enhanced multi-factor authentication, attack surface reduction, and stronger protections for cloud infrastructure are increasingly essential as the cyber threat landscape continues to evolve.

Supreme Court Directive Mandates Self-Declaration Certificates for Advertisements

 

In a landmark ruling, the Supreme Court of India recently directed every advertiser and advertising agency to submit a self-declaration certificate confirming that their advertisements do not make misleading claims and comply with all relevant regulatory guidelines before broadcasting or publishing. This directive stems from the case of Indian Medical Association vs Union of India. 

To enforce this directive, the Ministry of Information and Broadcasting has issued comprehensive guidelines outlining the procedure for obtaining these certificates, which became mandatory from June 18, 2024, onwards. This move is expected to significantly impact advertisers, especially those using deepfakes generated by Generative AI (GenAI) on social media platforms like Instagram, Facebook, and YouTube. The use of deepfakes in advertisements has been a growing concern. 

In a previous op-ed titled “Urgently needed: A law to protect consumers from deepfake ads,” the rising menace of deepfake ads making misleading or fraudulent claims was highlighted, emphasizing the adverse effects on consumer rights and public figures. A survey conducted by McAfee revealed that 75% of Indians encountered deepfake content, with 38% falling victim to deepfake scams, and 18% directly affected by such fraudulent schemes. Alarmingly, 57% of those targeted mistook celebrity deepfakes for genuine content. The new guidelines aim to address these issues by requiring advertisers to provide bona fide details and final versions of advertisements to support their declarations. This measure is expected to aid in identifying and locating advertisers, thus facilitating tracking once complaints are filed. 

Additionally, it empowers courts to impose substantial fines on offenders. Despite the potential benefits, industry bodies such as the Internet and Mobile Association of India (IAMAI), the Indian Newspaper Society (INS), and the Indian Society of Advertisers (ISA) have expressed concerns over the additional compliance burden, particularly for smaller advertisers. These bodies argue that while self-certification has merit, the process needs to be streamlined to avoid hampering legitimate advertising activities. The challenge of regulating AI-enabled deepfake ads is further complicated by the sheer volume of digital advertisements, which makes it impractical for regulators to review each one.

Therefore, it is suggested that online platforms be obligated to filter out deepfake ads, leveraging their technology and resources for efficient detection. The Ministry of Electronics and Information Technology highlighted the negligence of social media intermediaries in fulfilling their due diligence obligations under the IT Rules in a March 2024 advisory. 

Although non-binding, the advisory stipulates that intermediaries must not allow unlawful content on their platforms. The Supreme Court is set to hear the matter again on July 9, 2024, when industry bodies are expected to present their views on the new guidelines. This intervention could address the shortcomings of current regulatory approaches and set a precedent for robust measures against deceptive advertising practices. 

As the country grapples with the growing threat of dark patterns in online ads, the apex court’s involvement is crucial in ensuring consumer protection and the integrity of advertising practices in India.

Adapting Cybersecurity Policies to Combat AI-Driven Threats

 

Over the last few years, the landscape of cyber threats has significantly evolved. The once-common traditional phishing emails, marked by obvious language errors, clear malicious intent, and unbelievable narratives, have seen a decline. Modern email security systems can easily detect these rudimentary attacks, and recipients have grown savvy enough to recognize and ignore them. Consequently, this basic form of phishing is quickly becoming obsolete. 

However, as traditional phishing diminishes, a more sophisticated and troubling threat has emerged. Cybercriminals are now leveraging advanced generative AI (GenAI) tools to execute complex social engineering attacks, including spear-phishing, VIP impersonation, and business email compromise (BEC). In light of these developments, Chief Information Security Officers (CISOs) must adapt their cybersecurity strategies and implement new, robust policies to address these advanced threats.

One critical measure is implementing segregation of duties (SoD) in handling sensitive data and assets. For example, any changes to bank account information for invoices or payroll should require approval from multiple individuals. This multi-step verification process ensures that even if one employee falls victim to a social engineering attack, others can intercept and prevent fraudulent actions.
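To make the idea concrete, here is a minimal sketch of a dual-approval rule as it might appear in an internal payments tool; the names, the account number, and the two-approver threshold are illustrative assumptions, not a prescribed design:

```python
# Illustrative dual-approval (segregation of duties) check for changes
# to payee bank details. Names and the approver threshold are assumptions.
from dataclasses import dataclass, field

REQUIRED_APPROVALS = 2

@dataclass
class BankDetailChange:
    payee: str
    new_account: str
    requested_by: str
    approvals: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # The requester can never count as one of their own approvers.
        if approver == self.requested_by:
            raise PermissionError("Requester cannot approve their own change")
        self.approvals.add(approver)

    def can_apply(self) -> bool:
        return len(self.approvals) >= REQUIRED_APPROVALS

change = BankDetailChange("Acme Ltd", "NL91ABNA0417164300", requested_by="bob")
change.approve("alice")
print(change.can_apply())   # False - one approval is not enough
change.approve("carol")
print(change.can_apply())   # True - two independent approvers signed off
```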

Regular and comprehensive security training is also crucial. Employees, especially those handling sensitive information and executives who are prime targets for BEC, should undergo continuous security education. This training should include live sessions, security awareness videos, and phishing simulations based on real-world scenarios. By investing in such training, employees can become the first line of defense against sophisticated cyber threats. Additionally, gamifying the training process, such as rewarding employees for reporting phishing attempts, can boost engagement and effectiveness.

Encouraging a culture of reporting suspicious emails is another essential policy. Employees should be urged to report all potentially malicious emails rather than simply deleting or ignoring them. This practice allows the Security Operations Center (SOC) team to stay informed about ongoing threats and enhances organizational security awareness. Clear policies should emphasize that it's better to report false positives than to overlook potential threats, fostering a vigilant and cautious organizational culture.

To mitigate social engineering risks, organizations should also restrict access to sensitive information on a need-to-know basis. Simple policy changes, like keeping company names out of public job listings, can significantly reduce the risk of social engineering attacks: limiting the availability of organizational details helps prevent cybercriminals from gathering the information needed to craft convincing attacks.
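In code, the need-to-know principle often reduces to an explicit allowlist consulted before any sensitive record is returned. A minimal sketch, with role and record names invented for illustration:

```python
# Sketch of a need-to-know access check: a caller sees a record only if
# their role is explicitly allowlisted for it. Role names are assumptions.
NEED_TO_KNOW = {
    "payroll_records":  {"hr_manager", "payroll_admin"},
    "vendor_contracts": {"procurement", "legal"},
}

def fetch(record_type: str, role: str) -> str:
    allowed = NEED_TO_KNOW.get(record_type, set())
    if role not in allowed:
        raise PermissionError(f"{role} has no need-to-know for {record_type}")
    return f"<contents of {record_type}>"

print(fetch("payroll_records", "payroll_admin"))   # allowed
try:
    fetch("payroll_records", "sales_rep")          # denied
except PermissionError as e:
    print("Denied:", e)
```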

Given the rapid advancements in generative AI, it's imperative for organizations to adopt adaptive security systems. Shifting from static to dynamic security measures, supported by AI-enabled defensive tools, ensures that security capabilities remain effective against evolving threats. This proactive approach helps organizations stay ahead of the latest attack vectors. The rise of generative AI has fundamentally changed the field of cybersecurity: in a short time, these technologies have reshaped the threat landscape, making it essential for CISOs to continuously update their strategies. Effective, current policies are vital for maintaining a strong security posture.

These measures serve as a starting point for CISOs to refine and enhance their cybersecurity policies, ensuring they are prepared for the challenges posed by AI-driven threats. In this ever-changing environment, staying ahead of cybercriminals requires constant vigilance and adaptation.

Africa's Cyber Threats Rise With AI Development

 

In 2023, a majority of African economies witnessed a decline in overall cyber threats, signaling a positive trend. However, notable exceptions were observed, with Kenya experiencing a substantial 68% increase in ransomware attacks, while South Africa encountered a notable 29% surge in phishing incidents targeting sensitive data. 

This evolving landscape underscores a significant paradigm shift. Cyber adversaries are increasingly setting their sights on critical infrastructure across Africa, accompanied by a discernible inclination towards integrating artificial intelligence (AI) into their modus operandi. Insights derived from Kaspersky's telemetry data reveal a growing reliance on AI, particularly large language models (LLMs), to orchestrate more sophisticated social engineering tactics. 

The Reasons Behind the Rising Cyber Threats

AI's Growing Influence: 

Kaspersky researcher Maher Yamout attributes the surge in attacks across Africa partly to AI technologies like LLMs, which make cybercrime more accessible. These advancements have led to the creation of convincing phishing emails, synthetic identities, and deepfakes, exacerbating existing AI inequalities.

Hacking Critical Infrastructure: 

Kaspersky notes a significant rise in attacks on operational technology (OT), with 38% of OT computers facing threats in 2023. Cybercriminals and nation-state groups both contribute to this threat landscape, and rising tensions have fueled the emergence of hacktivism driven by socio-cultural and economic motives.

Mobile Internet, Mobile Threats: With mobile devices being the primary means of internet access in Africa, Dark Reading reports a 10% rise in mobile threats in 2023, including ransomware and SMS phishing attacks. The global shift to remote work further amplifies mobile threats, presenting challenges in safeguarding personal and corporate data.

Furthermore, according to Interpol's African Cyberthreat Assessment 2023 report, Africa has historically been a hotspot for social engineering threats, particularly noting the prevalence of BEC (business email compromise) actors like the SilverTerrier group. This underscores the persistent challenges posed by cybercriminals operating within the region. 

Kaspersky's report echoes these concerns, noting a growing trend of citizens in Africa and the META region being targeted by cybercriminals. This alarming development emphasizes the urgent need for enhanced cybersecurity measures to safeguard individuals and businesses against evolving threats. 

Further, analysis from a 2023 Positive Technologies report reveals that BEC attacks remain the primary cyber threat to organizations and individuals in the region. The financial, telecom, government, and retail sectors are particularly vulnerable, collectively accounting for over half of all reported attacks. 

The Positive Technologies report also highlights key findings regarding the nature of cyber attacks in Africa. Notably, 80% of attacks on African organizations involve malware, indicating the widespread use of malicious software to compromise systems and networks. 

Additionally, a staggering 91% of attacks targeting African citizens incorporate a social engineering component, illustrating the effectiveness of deceptive tactics in exploiting unsuspecting individuals. 

What can be done to stem the surge of cyber-attacks?

Various studies advocate for patching software, managing credentials, and securing endpoints to combat ransomware groups exploiting vulnerabilities. Unpatched software, vulnerable web services, and weak remote access services are cited as common entry points for attackers in Africa.

Security Trends to Monitor in 2024

 

As the new year unfolds, the business landscape finds itself on the brink of a dynamic era, rich with possibilities, challenges, and transformative trends. In the realm of enterprise security, 2024 is poised to usher in a series of significant shifts, demanding careful attention from organizations worldwide.

Automation Takes Center Stage: In recent years, the integration of Artificial Intelligence (AI) and Machine Learning (ML) technologies has become increasingly evident, setting the stage for a surge in automation within the cybersecurity domain. As the threat landscape evolves, the use of AI and ML algorithms for automated threat detection is gaining prominence. This involves the analysis of vast datasets to identify anomalies and predict potential cyber attacks before they materialize.
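As a toy illustration of automated anomaly detection, the sketch below trains scikit-learn's IsolationForest on a handful of made-up login-activity features and flags an outlier. The features, data, and contamination rate are all assumptions for demonstration, not a production detector:

```python
# Toy anomaly detection over login events with scikit-learn's
# IsolationForest. Features and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [hour_of_day, MB_transferred, failed_logins_last_hour]
normal_activity = np.array([
    [9, 12, 0], [10, 8, 1], [11, 15, 0], [14, 20, 0],
    [15, 10, 1], [16, 18, 0], [9, 9, 0], [13, 14, 0],
])
model = IsolationForest(contamination=0.1, random_state=42).fit(normal_activity)

# 3 a.m., huge transfer, many failed logins: predict() returns -1 for outliers.
suspicious = np.array([[3, 900, 12]])
print(model.predict(suspicious))                # likely [-1] (anomalous)
print(model.predict(np.array([[10, 11, 0]])))   # likely [1] (normal)
```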

Endpoint protection is experiencing heightened sophistication, with AI playing a pivotal role in proactively identifying and responding to real-time threats. Notably, Apple's introduction of declarative device management underscores the industry's shift towards automation, where AI integration enables endpoints to autonomously troubleshoot and resolve issues. This marks a significant step forward in reducing equipment downtime and achieving substantial cost savings.

Navigating the Dark Side of Generative AI: In 2024, the risks associated with the rapid adoption of generative AI technologies are coming to the forefront. The use of AI coding bots for code generation gained substantial traction in 2023, to the point where companies, including tech giant Samsung, banned tools like ChatGPT in office environments after employees pasted sensitive internal code into them.

Despite the prevalence of large language models (LLMs) for code generation, concerns are rising about the integrity of the generated code. Companies, in their pursuit of agility, may deploy AI-generated code without thorough scrutiny for potential security flaws, posing a tangible risk of data breaches with severe consequences. Additionally, the year 2024 is anticipated to witness a surge in AI-driven cyber attacks, with attackers leveraging the technology to craft hyper-realistic phishing scams and automate social engineering endeavours.
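Returning to the code-integrity concern above, even a lightweight automated review gate can catch the most obvious hazards before AI-generated code ships. The sketch below uses Python's ast module to flag a few dangerous calls; the flagged names are a small assumed sample, and real reviews would pair full static-analysis tooling with human scrutiny:

```python
# Minimal review gate for generated code: flag a few dangerous calls
# with Python's ast module. The flagged names are an assumed sample;
# real reviews use full static-analysis tools and human eyes.
import ast

DANGEROUS_CALLS = {"eval", "exec", "compile", "__import__"}

def flag_risky_calls(source: str) -> list[str]:
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS_CALLS):
            findings.append(f"line {node.lineno}: call to {node.func.id}()")
    return findings

generated = "user_input = input()\nresult = eval(user_input)\n"
for finding in flag_risky_calls(generated):
    print("REVIEW:", finding)   # REVIEW: line 2: call to eval()
```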

A Paradigm Shift to Passwordless Authentication: The persistent discourse around moving beyond traditional passwords is expected to materialize in a significant way in 2024. Biometric authentication, including fingerprint and face unlock technologies, is gaining familiarity as a promising candidate for a more secure and user-friendly authentication system.

The integration of passkeys, which combine device-held cryptographic keys with biometric or PIN unlock, offers several advantages, eliminating the need for users to remember passwords. This approach provides a secure and versatile user verification method across devices and accounts. Major tech players like Google and Apple are actively introducing their own passkey solutions, signalling a collective industry push toward a passwordless future. These developments suggest that 2024 could be a pivotal year, marking a widespread shift towards more secure and user-friendly authentication methods.
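Under the hood, a passkey login is a public-key challenge-response: the server issues a random challenge, the device signs it with a private key that never leaves the device, and the server verifies the signature with the public key stored at registration. The sketch below, built on the widely used cryptography library, shows only that core exchange; real passkeys implement the full WebAuthn protocol around it:

```python
# Core of a passkey-style login: sign a server challenge with a device-held
# private key, verify with the registered public key. This sketches only the
# signature exchange; real passkeys implement the full WebAuthn protocol.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Registration: the device creates a key pair; the server stores the public key.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Login: the server issues a fresh random challenge...
challenge = os.urandom(32)

# ...the device signs it (after local biometric / PIN unlock)...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies. No shared secret or password ever travels.
try:
    server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Login verified")
except InvalidSignature:
    print("Login rejected")
```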

Overall, the landscape of enterprise security beckons with immense potential, fueled by advancements in automation, the challenges of generative AI, and the imminent shift towards passwordless authentication. Businesses are urged to stay vigilant, adapt to these transformative trends, and navigate the evolving cybersecurity landscape for a secure and resilient future.