Search This Blog

Powered by Blogger.

Blog Archive

Labels

About Me

Showing posts with label AI Security. Show all posts

Over Half of Organizations Lack AI Cybersecurity Strategies, Mimecast Report Reveals

 

More than 55% of organizations have yet to implement dedicated strategies to counter AI-driven cyber threats, according to new research by Mimecast. The cybersecurity firm's latest State of Human Risk report, based on insights from 1,100 IT security professionals worldwide, highlights growing concerns over AI vulnerabilities, insider threats, and cybersecurity funding shortfalls.

The study reveals that 96% of organizations report improved risk management after adopting a formal cybersecurity strategy. However, security leaders face an increasingly complex threat landscape, with AI-powered attacks and insider risks posing significant challenges.

“Despite the complexity of challenges facing organisations—including increased insider risk, larger attack surfaces from collaboration tools, and sophisticated AI attacks—organisations are still too eager to simply throw point solutions at the problem,” said Mimecast’s human risk strategist VP, Masha Sedova. “With short-staffed IT and security teams and an unrelenting threat landscape, organisations must shift to a human-centric platform approach that connects the dots between employees and technology to keep the business secure.”

The report finds that 95% of organizations are leveraging AI for threat detection, endpoint security, and insider risk analysis. However, 81% express concerns over data leaks from generative AI (GenAI) tools. More than half lack structured strategies to combat AI-driven attacks, while 46% remain uncertain about their ability to defend against AI-powered phishing and deepfake threats.

Insider threats have surged by 43%, with 66% of IT leaders anticipating an increase in data loss from internal sources in the coming year. The report estimates that insider-driven data breaches, leaks, or theft cost an average of $13.9 million per incident. Additionally, 79% of organizations believe collaboration tools have heightened security risks, amplifying both intentional and accidental data breaches.

Despite 85% of organizations raising their cybersecurity budgets, 61% cite financial constraints as a barrier to addressing emerging threats and implementing AI-driven security solutions. The report underscores the need for increased investment in cybersecurity staffing, third-party security services, email security, and collaboration tool protection.

Although 87% of organizations conduct quarterly cybersecurity training, 33% of IT leaders remain concerned about employee mismanagement of email threats, while 27% cite security fatigue as a growing risk. 95% of organizations expect email-based cyber threats to persist in 2025, as phishing attacks continue to exploit human vulnerabilities.

Collaboration tools are expanding attack surfaces, with 44% of organizations reporting a rise in cyber threats originating from these platforms. 61% believe a cyberattack involving collaboration tools could disrupt business operations in 2025, raising concerns over data integrity and compliance.

The report highlights a shift from traditional security awareness training to proactive Human Risk Management. Notably, just 8% of employees are responsible for 80% of security incidents. Organizations are increasingly turning to AI-driven monitoring and behavioral analytics to detect and mitigate threats early. 72% of security leaders see human-centric cybersecurity solutions as essential in the next five years, signaling a shift toward advanced threat detection and risk mitigation.

The Need for Unified Data Security, Compliance, and AI Governance

 

Businesses are increasingly dependent on data, yet many continue to rely on outdated security infrastructures and fragmented management approaches. These inefficiencies leave organizations vulnerable to cyber threats, compliance violations, and operational disruptions. Protecting data is no longer just about preventing breaches; it requires a fundamental shift in how security, compliance, and AI governance are integrated into enterprise strategies. A proactive and unified approach is now essential to mitigate evolving risks effectively. 

The rapid advancement of artificial intelligence has introduced new security challenges. AI-powered tools are transforming industries, but they also create vulnerabilities if not properly managed. Many organizations implement AI-driven applications without fully understanding their security implications. AI models require vast amounts of data, including sensitive information, making governance a critical priority. Without robust oversight, these models can inadvertently expose private data, operate without transparency, and pose compliance challenges as new regulations emerge. 

Businesses must ensure that AI security measures evolve in tandem with technological advancements to minimize risks. Regulatory requirements are also becoming increasingly complex. Governments worldwide are enforcing stricter data privacy laws, such as GDPR and CCPA, while also introducing new regulations specific to AI governance. Non-compliance can result in heavy financial penalties, reputational damage, and operational setbacks. Businesses can no longer treat compliance as an afterthought; instead, it must be an integral part of their data security strategy. Organizations must shift from reactive compliance measures to proactive frameworks that align with evolving regulatory expectations. 

Another significant challenge is the growing issue of data sprawl. As businesses store and manage data across multiple cloud environments, SaaS applications, and third-party platforms, maintaining control becomes increasingly difficult. Security teams often lack visibility into where sensitive information resides, making it harder to enforce access controls and protect against cyber threats. Traditional security models that rely on layering additional tools onto existing infrastructures are no longer effective. A centralized, AI-driven approach to security and governance is necessary to address these risks holistically. 

Forward-thinking businesses recognize that managing security, compliance, and AI governance in isolation is inefficient. A unified approach consolidates risk management efforts into a cohesive, scalable framework. By breaking down operational silos, organizations can streamline workflows, improve efficiency through AI-driven automation, and proactively mitigate security threats. Integrating compliance and security within a single system ensures better regulatory adherence while reducing the complexity of data management. 

To stay ahead of emerging threats, organizations must modernize their approach to data security and governance. Investing in AI-driven security solutions enables businesses to automate data classification, detect vulnerabilities, and safeguard sensitive information at scale. Shifting from reactive compliance measures to proactive strategies ensures that regulatory requirements are met without last-minute adjustments. Moving away from fragmented security solutions and adopting a modular, scalable platform allows businesses to reduce risk and maintain resilience in an ever-evolving digital landscape. Those that embrace a forward-thinking, unified strategy will be best positioned for long-term success.

How Google Enhances AI Security with Red Teaming

 

Google continues to strengthen its cybersecurity framework, particularly in safeguarding AI systems from threats such as prompt injection attacks on Gemini. By leveraging automated red team hacking bots, the company is proactively identifying and mitigating vulnerabilities.

Google employs an agentic AI security team to streamline threat detection and response using intelligent AI agents. A recent report by Google highlights its approach to addressing prompt injection risks in AI systems like Gemini.

“Modern AI systems, like Gemini, are more capable than ever, helping retrieve data and perform actions on behalf of users,” the agent team stated. “However, data from external sources present new security challenges if untrusted sources are available to execute instructions on AI systems.”

Prompt injection attacks exploit AI models by embedding concealed instructions within input data, influencing system behavior. To counter this, Google is integrating advanced security measures, including automated red team hacking bots.

To enhance AI security, Google employs red teaming—a strategy that simulates real-world cyber threats to expose vulnerabilities. As part of this initiative, Google has developed a red-team framework to generate and test prompt injection attacks.

“Crafting successful indirect prompt injections,” the Google agent AI security team explained, “requires an iterative process of refinement based on observed responses.”

This framework leverages optimization-based attacks to refine prompt injection techniques, ensuring AI models remain resilient against sophisticated threats.

“Weak attacks do little to inform us of the susceptibility of an AI system to indirect prompt injections,” the report highlighted.

Although red team hacking bots challenge AI defenses, they also play a crucial role in reinforcing the security of systems like Gemini against unauthorized data access.

Key Attack Methodologies

Google evaluates Gemini's robustness using two primary attack methodologies:

1. Actor-Critic Model: This approach employs an attacker-controlled model to generate prompt injections, which are tested against the AI system. “These are passed to the AI system under attack,” Google explained, “which returns a probability score of a successful attack.” The bot then refines the attack strategy iteratively until a vulnerability is exploited.

2. Beam Search Technique: This method initiates a basic prompt injection that instructs Gemini to send sensitive information via email to an attacker. “If the AI system recognizes the request as suspicious and does not comply,” Google said, “the attack adds random tokens to the end of the prompt injection and measures the new probability of the attack succeeding.” The process continues until an effective attack method is identified.

By leveraging red team hacking bots and AI-driven security frameworks, Google is continuously improving AI resilience, ensuring robust protection against evolving threats.

Navigating 2025: Emerging Security Trends and AI Challenges for CISOs

 

Security teams have always needed to adapt to change, but 2025 is poised to bring unique challenges, driven by advancements in artificial intelligence (AI), sophisticated cyber threats, and evolving regulatory mandates. Chief Information Security Officers (CISOs) face a rapidly shifting landscape that requires innovative strategies to mitigate risks and ensure compliance.

The integration of AI-enabled features into products is accelerating, with large language models (LLMs) introducing new vulnerabilities that attackers may exploit. As vendors increasingly rely on these foundational models, CISOs must evaluate their organization’s exposure and implement measures to counter potential threats. 

"The dynamic landscape of cybersecurity regulations, particularly in regions like the European Union and California, demands enhanced collaboration between security and legal teams to ensure compliance and mitigate risks," experts note. Balancing these regulatory requirements with emerging security challenges will be crucial for protecting enterprises.

Generative AI (GenAI), while presenting security risks, also offers opportunities to strengthen software development processes. By automating vulnerability detection and bridging the gap between developers and security teams, AI can improve efficiency and bolster security frameworks.

Trends to Watch in 2025

1. Vulnerabilities in Proprietary LLMs Could Lead to Major Security Incidents

Software vendors are rapidly adopting AI-enabled features, often leveraging proprietary LLMs. However, these models introduce a new attack vector. Proprietary models reveal little about their internal guardrails or origins, making them challenging for security professionals to manage. Vulnerabilities in these models could have cascading effects, potentially disrupting the software ecosystem at scale.

2. Cloud-Native Workloads and AI Demand Adaptive Identity Management

The rise of cloud-native applications and AI-driven systems is reshaping identity management. Traditional, static access control systems must evolve to handle the surge in service-based identities. Adaptive frameworks are essential for ensuring secure and efficient access in dynamic digital environments.

3. AI Enhances Security in DevOps

A growing number of developers—58% according to recent surveys—recognize their role in application security. However, the demand for skilled security professionals in DevOps remains unmet.

AI is bridging this gap by automating repetitive tasks, offering smart coding recommendations, and integrating security into development pipelines. Authentication processes are also being streamlined, with AI dynamically assigning roles and permissions as services deploy across cloud environments. This integration enhances collaboration between developers and security teams while reducing risks.

CISOs must acknowledge the dual-edged nature of AI: while it introduces new risks, it also offers powerful tools to counter cyber threats. By leveraging AI to automate tasks, detect vulnerabilities, and respond to threats in real-time, organizations can strengthen their defenses and adapt to an evolving threat landscape.

The convergence of technology and security in 2025 calls for strategic innovation, enabling enterprises to not only meet compliance requirements but also proactively address emerging risks.


UIUC Researchers Expose Security Risks in OpenAI's Voice-Enabled ChatGPT-4o API, Revealing Potential for Financial Scams

 

Researchers recently revealed that OpenAI’s ChatGPT-4o voice API could be exploited by cybercriminals for financial scams, showing some success despite moderate limitations. This discovery has raised concerns about the misuse potential of this advanced language model.

ChatGPT-4o, OpenAI’s latest AI model, offers new capabilities, combining text, voice, and vision processing. These updates are supported by security features aimed at detecting and blocking malicious activity, including unauthorized voice replication.

Voice-based scams have become a significant threat, further exacerbated by deepfake technology and advanced text-to-speech tools. Despite OpenAI’s security measures, researchers from the University of Illinois Urbana-Champaign (UIUC) demonstrated how these protections could still be circumvented, highlighting risks of abuse by cybercriminals.

Researchers Richard Fang, Dylan Bowman, and Daniel Kang emphasized that current AI tools may lack sufficient restrictions to prevent misuse. They pointed out the risk of large-scale scams using automated voice generation, which reduces the need for human effort and keeps operational costs low.

Their study examined a variety of scams, including unauthorized bank transfers, gift card fraud, cryptocurrency theft, and social media credential theft. Using ChatGPT-4o’s voice capabilities, the researchers automated key actions like navigation, data input, two-factor authentication, and following specific scam instructions.

To bypass ChatGPT-4o’s data protection filters, the team used prompt “jailbreaking” techniques, allowing the AI to handle sensitive information. They simulated interactions with ChatGPT-4o by acting as gullible victims, testing the feasibility of different scams on real websites.

By manually verifying each transaction, such as those on Bank of America’s site, they found varying success rates. For example, Gmail credential theft was successful 60% of the time, while crypto-related scams succeeded in about 40% of attempts.

Cost analysis showed that carrying out these scams was relatively inexpensive, with successful cases averaging $0.75. More complex scams, like unauthorized bank transfers, cost around $2.51—still low compared to the potential profits such scams might yield.

OpenAI responded by emphasizing that their upcoming model, o1-preview, includes advanced safeguards to prevent this type of misuse. OpenAI claims that this model significantly outperforms GPT-4o in resisting unsafe content generation and handling adversarial prompts.

OpenAI also highlighted the importance of studies like UIUC’s for enhancing ChatGPT’s defenses. They noted that GPT-4o already restricts voice replication to pre-approved voices and that newer models are undergoing stringent evaluations to increase robustness against malicious use.

HiddenLayer Unveils "ShadowLogic" Technique for Implanting Codeless Backdoors in AI Models

 

Manipulating an AI model's computational graph can allow attackers to insert codeless, persistent backdoors, reports AI security firm HiddenLayer. This vulnerability could lead to malicious use of machine learning (ML) models in a variety of applications, including supply chain attacks.

Known as ShadowLogic, the technique works by tampering with the computational graph structure of a model, triggering unwanted behavior in downstream tasks. This manipulation opens the door to potential security breaches across AI supply chains.

Traditional backdoors offer unauthorized system access, bypassing security layers. Similarly, AI models can be exploited to include backdoors or manipulated to yield malicious outcomes. However, any changes in the model could potentially affect these hidden pathways.

HiddenLayer explains that using ShadowLogic enables threat actors to embed codeless backdoors that persist through model fine-tuning, allowing highly targeted and stealthy attacks.

Building on prior research showing that backdoors can be implemented during the training phase, HiddenLayer investigated how to inject a backdoor into a model's computational graph post-training. This bypasses the need for training phase vulnerabilities.

A computational graph is a mathematical blueprint that controls a neural network's operations. These graphs represent data inputs, mathematical functions, and learning parameters, guiding the model’s forward and backward propagation.

According to HiddenLayer, this graph acts like compiled code in a program, with specific instructions for the model. By manipulating the graph, attackers can override normal model logic, triggering predefined behavior when the model processes specific input, such as an image pixel, keyword, or phrase.

ShadowLogic leverages the wide range of operations supported by computational graphs to embed triggers, which could include checksum-based activations or even entirely new models hidden within the original one. HiddenLayer demonstrated this method on models like ResNet, YOLO, and Phi-3 Mini.

These compromised models behave normally but respond differently when presented with specific triggers. They could, for example, fail to detect objects or generate controlled responses, demonstrating the subtlety and potential danger of ShadowLogic backdoors.

Such backdoors introduce new vulnerabilities in AI models that do not rely on traditional code exploits. Embedded within the model’s structure, these backdoors are harder to detect and can be injected across a variety of graph-based architectures.

ShadowLogic is format-agnostic and can be applied to any model that uses graph-based architectures, regardless of the domain, including autonomous navigation, financial predictions, healthcare diagnostics, and cybersecurity.

HiddenLayer warns that no AI system is safe from this type of attack, whether it involves simple classifiers or advanced large language models (LLMs), expanding the range of potential targets.






Look Out For This New Emerging Threat In The World Of AI

 



As per a recent discovery, a team of researchers has surfaced a groundbreaking AI worm named 'Morris II,' capable of infiltrating AI-powered email systems, spreading malware, and stealing sensitive data. This creation, reminiscent of the notorious computer worm from 1988, poses a significant threat to users relying on AI applications such as Gemini Pro, ChatGPT 4.0, and LLaVA.

Developed by Ben Nassi, Stav Cohen, and Ron Bitton, Morris II exploits vulnerabilities in Generative AI (GenAI) models by utilising adversarial self-replicating prompts. These prompts trick the AI into replicating and distributing harmful inputs, leading to activities like spamming and unauthorised data access. The researchers explain that this approach enables the infiltration of GenAI-powered email assistants, putting users' confidential information, such as credit card details and social security numbers, at risk.

Upon discovering Morris II, the responsible research team promptly reported their findings to Google and OpenAI. While Google remained silent on the matter, an OpenAI spokesperson acknowledged the issue, stating that the worm exploits prompt-injection vulnerabilities through unchecked or unfiltered user input. OpenAI is actively working to enhance its systems' resilience and advises developers to implement methods ensuring they don't work with potentially harmful inputs.

The potential impact of Morris II raises concerns about the security of AI systems, prompting the need for increased vigilance among users and developers alike. As we delve into the specifics, Morris II operates by injecting prompts into AI models, coercing them into replicating inputs and engaging in malicious activities. This replication extends to spreading the harmful prompts to new agents within the GenAI ecosystem, perpetuating the threat across multiple systems.

To counter this threat, OpenAI emphasises the importance of implementing robust input validation processes. By ensuring that user inputs undergo thorough checks and filters, developers can mitigate the risk of prompt-injection vulnerabilities. OpenAI is also actively working to fortify its systems against such attacks, underscoring the evolving nature of cybersecurity in the age of artificial intelligence.

In essence, the emergence of Morris II serves as a stark reminder of the digital culture of cybersecurity threats within the world of artificial intelligence. Users and developers must stay vigilant, adopting best practices to safeguard against potential vulnerabilities. OpenAI's commitment to enhancing system resilience reflects the collaborative effort required to stay one step ahead of these risks in this ever-changing technological realm. As the story unfolds, it remains imperative for the AI community to address and mitigate such threats collectively, ensuring the continued responsible and secure development of artificial intelligence technologies.


Security Trends to Monitor in 2024

 

As the new year unfolds, the business landscape finds itself on the brink of a dynamic era, rich with possibilities, challenges, and transformative trends. In the realm of enterprise security, 2024 is poised to usher in a series of significant shifts, demanding careful attention from organizations worldwide.

Automation Takes Center Stage: In recent years, the integration of Artificial Intelligence (AI) and Machine Learning (ML) technologies has become increasingly evident, setting the stage for a surge in automation within the cybersecurity domain. As the threat landscape evolves, the use of AI and ML algorithms for automated threat detection is gaining prominence. This involves the analysis of vast datasets to identify anomalies and predict potential cyber attacks before they materialize.

Endpoint protection is experiencing heightened sophistication, with AI playing a pivotal role in proactively identifying and responding to real-time threats. Notably, Apple's introduction of declarative device management underscores the industry's shift towards automation, where AI integration enables endpoints to autonomously troubleshoot and resolve issues. This marks a significant step forward in reducing equipment downtime and achieving substantial cost savings.

Navigating the Dark Side of Generative AI: In 2024, the risks associated with the rapid adoption of generative AI technologies are coming to the forefront. The use of AI coding bots for code generation gained substantial traction in 2023, reaching a point where companies, including tech giant Samsung, had to impose bans on certain models like ChatGPT due to their role in writing code within office environments.

Despite the prevalence of large language models (LLMs) for code generation, concerns are rising about the integrity of the generated code. Companies, in their pursuit of agility, may deploy AI-generated code without thorough scrutiny for potential security flaws, posing a tangible risk of data breaches with severe consequences. Additionally, the year 2024 is anticipated to witness a surge in AI-driven cyber attacks, with attackers leveraging the technology to craft hyper-realistic phishing scams and automate social engineering endeavours.

Passwordless Authentication- Paradigm Shift: The persistent discourse around moving beyond traditional passwords is expected to materialize in a significant way in 2024. Biometric authentication, including fingerprint and face unlock technologies, is gaining familiarity as a promising candidate for a more secure and user-friendly authentication system.

The integration of passkeys, combining biometrics with other factors, offers several advantages, eliminating the need for users to remember passwords. This approach provides a secure and versatile user verification method across various devices and accounts. Major tech players like Google and Apple are actively introducing their own passkey solutions, signalling a collective industry push toward a password-less future. The developments in biometric authentication and the adoption of passkeys suggest that 2024 could be a pivotal year, marking a widespread shift towards more secure and user-friendly authentication methods.

Overall, the landscape of enterprise security beckons with immense potential, fueled by advancements in automation, the challenges of generative AI, and the imminent shift towards passwordless authentication. Businesses are urged to stay vigilant, adapt to these transformative trends, and navigate the evolving cybersecurity landscape for a secure and resilient future.