Search This Blog

Powered by Blogger.

Blog Archive

Labels

About Me

Showing posts with label AI Security. Show all posts

Silicon Valley Crosswalk Buttons Hacked With AI Voices Mimicking Tech Billionaires

 

A strange tech prank unfolded across Silicon Valley this past weekend after crosswalk buttons in several cities began playing AI-generated voice messages impersonating Elon Musk and Mark Zuckerberg.  

Pedestrians reported hearing bizarre and oddly personal phrases coming from audio-enabled crosswalk systems in Menlo Park, Palo Alto, and Redwood City. The altered voices were crafted to sound like the two tech moguls, with messages that ranged from humorous to unsettling. One button, using a voice resembling Zuckerberg, declared: “We’re putting AI into every corner of your life, and you can’t stop it.” Another, mimicking Musk, joked about loneliness and buying a Cybertruck to fill the void.  

The origins of the incident remain unknown, but online speculation points to possible hacktivism—potentially aimed at critiquing Silicon Valley’s AI dominance or simply poking fun at tech culture. Videos of the voice spoof spread quickly on TikTok and X, with users commenting on the surreal experience and sarcastically suggesting the crosswalks had been “venture funded.” This situation prompts serious concern. 

Local officials confirmed they’re investigating the breach and working to restore normal functionality. According to early reports, the tampering may have taken place on Friday. These crosswalk buttons aren’t new—they’re part of accessibility technology designed to help visually impaired pedestrians cross streets safely by playing audio cues. But this incident highlights how vulnerable public infrastructure can be to digital interference. Security researchers have warned in the past that these systems, often managed with default settings and unsecured firmware, can be compromised if not properly protected. 

One expert, physical penetration specialist Deviant Ollam, has previously demonstrated how such buttons can be manipulated using unchanged passwords or open ports. Polara, a leading manufacturer of these audio-enabled buttons, did not respond to requests for comment. The silence leaves open questions about how widespread the vulnerability might be and what cybersecurity measures, if any, are in place. This AI voice hack not only exposed weaknesses in public technology but also raised broader questions about the blending of artificial intelligence, infrastructure, and data privacy. 

What began as a strange and comedic moment at the crosswalk is now fueling a much larger conversation about the cybersecurity risks of increasingly connected cities. With AI becoming more embedded in daily life, events like this might be just the beginning of new kinds of public tech disruptions.

Hackers Can Spy on Screens Using HDMI Radiation and AI Models

 

You may feel safe behind your screen, but it turns out that privacy might be more of an illusion than a fact. New research reveals that hackers have found an alarming way to peek at what’s happening on your display—without ever touching your computer. By tapping into the faint electromagnetic radiation that HDMI cables emit, they can now “listen in” on your screen and reconstruct what’s being shown with startling accuracy. 

Here’s how it works: when digital signals travel through HDMI cables from your computer to a monitor, they unintentionally give off tiny bursts of radiation. These signals, invisible to the naked eye, can be picked up using radio antennas or small, discreet devices planted nearby. Once captured, advanced AI tools get to work, decoding the radiation into readable screen content. 

The results? Up to 70% accuracy in reconstructing text—meaning everything from passwords and emails to private messages could be exposed. This new technique represents a serious leap in digital espionage. It doesn’t rely on malware or breaking into a network. Instead, it simply listens to the electronic “whispers” your hardware makes. It’s silent, stealthy, and completely undetectable to the average user. 

Worryingly, this method is already reportedly in use against high-profile targets like government agencies and critical infrastructure sites. These organizations often store and manage sensitive data that, if leaked, could cause major damage. While some have implemented shielding to block these emissions, not all are fully protected. And because this form of surveillance leaves virtually no trace, many attacks could be flying under the radar entirely. 

Hackers can go about this in two main ways: one, by sneaking a signal-collecting device into a location; or two, by using specialized antennas from nearby—like the building next door. Either way, they can eavesdrop on what’s displayed without ever getting physically close to the device. This new threat underscores the need for stronger physical and digital protections. 

As cyberattacks become more innovative, simply securing your data with passwords and firewalls isn’t enough. Shielding cables and securing workspaces might soon be as important as having good antivirus software. The digital age has brought us many conveniences—but with it comes a new breed of invisible spies.

Over Half of Organizations Lack AI Cybersecurity Strategies, Mimecast Report Reveals

 

More than 55% of organizations have yet to implement dedicated strategies to counter AI-driven cyber threats, according to new research by Mimecast. The cybersecurity firm's latest State of Human Risk report, based on insights from 1,100 IT security professionals worldwide, highlights growing concerns over AI vulnerabilities, insider threats, and cybersecurity funding shortfalls.

The study reveals that 96% of organizations report improved risk management after adopting a formal cybersecurity strategy. However, security leaders face an increasingly complex threat landscape, with AI-powered attacks and insider risks posing significant challenges.

“Despite the complexity of challenges facing organisations—including increased insider risk, larger attack surfaces from collaboration tools, and sophisticated AI attacks—organisations are still too eager to simply throw point solutions at the problem,” said Mimecast’s human risk strategist VP, Masha Sedova. “With short-staffed IT and security teams and an unrelenting threat landscape, organisations must shift to a human-centric platform approach that connects the dots between employees and technology to keep the business secure.”

The report finds that 95% of organizations are leveraging AI for threat detection, endpoint security, and insider risk analysis. However, 81% express concerns over data leaks from generative AI (GenAI) tools. More than half lack structured strategies to combat AI-driven attacks, while 46% remain uncertain about their ability to defend against AI-powered phishing and deepfake threats.

Insider threats have surged by 43%, with 66% of IT leaders anticipating an increase in data loss from internal sources in the coming year. The report estimates that insider-driven data breaches, leaks, or theft cost an average of $13.9 million per incident. Additionally, 79% of organizations believe collaboration tools have heightened security risks, amplifying both intentional and accidental data breaches.

Despite 85% of organizations raising their cybersecurity budgets, 61% cite financial constraints as a barrier to addressing emerging threats and implementing AI-driven security solutions. The report underscores the need for increased investment in cybersecurity staffing, third-party security services, email security, and collaboration tool protection.

Although 87% of organizations conduct quarterly cybersecurity training, 33% of IT leaders remain concerned about employee mismanagement of email threats, while 27% cite security fatigue as a growing risk. 95% of organizations expect email-based cyber threats to persist in 2025, as phishing attacks continue to exploit human vulnerabilities.

Collaboration tools are expanding attack surfaces, with 44% of organizations reporting a rise in cyber threats originating from these platforms. 61% believe a cyberattack involving collaboration tools could disrupt business operations in 2025, raising concerns over data integrity and compliance.

The report highlights a shift from traditional security awareness training to proactive Human Risk Management. Notably, just 8% of employees are responsible for 80% of security incidents. Organizations are increasingly turning to AI-driven monitoring and behavioral analytics to detect and mitigate threats early. 72% of security leaders see human-centric cybersecurity solutions as essential in the next five years, signaling a shift toward advanced threat detection and risk mitigation.

The Need for Unified Data Security, Compliance, and AI Governance

 

Businesses are increasingly dependent on data, yet many continue to rely on outdated security infrastructures and fragmented management approaches. These inefficiencies leave organizations vulnerable to cyber threats, compliance violations, and operational disruptions. Protecting data is no longer just about preventing breaches; it requires a fundamental shift in how security, compliance, and AI governance are integrated into enterprise strategies. A proactive and unified approach is now essential to mitigate evolving risks effectively. 

The rapid advancement of artificial intelligence has introduced new security challenges. AI-powered tools are transforming industries, but they also create vulnerabilities if not properly managed. Many organizations implement AI-driven applications without fully understanding their security implications. AI models require vast amounts of data, including sensitive information, making governance a critical priority. Without robust oversight, these models can inadvertently expose private data, operate without transparency, and pose compliance challenges as new regulations emerge. 

Businesses must ensure that AI security measures evolve in tandem with technological advancements to minimize risks. Regulatory requirements are also becoming increasingly complex. Governments worldwide are enforcing stricter data privacy laws, such as GDPR and CCPA, while also introducing new regulations specific to AI governance. Non-compliance can result in heavy financial penalties, reputational damage, and operational setbacks. Businesses can no longer treat compliance as an afterthought; instead, it must be an integral part of their data security strategy. Organizations must shift from reactive compliance measures to proactive frameworks that align with evolving regulatory expectations. 

Another significant challenge is the growing issue of data sprawl. As businesses store and manage data across multiple cloud environments, SaaS applications, and third-party platforms, maintaining control becomes increasingly difficult. Security teams often lack visibility into where sensitive information resides, making it harder to enforce access controls and protect against cyber threats. Traditional security models that rely on layering additional tools onto existing infrastructures are no longer effective. A centralized, AI-driven approach to security and governance is necessary to address these risks holistically. 

Forward-thinking businesses recognize that managing security, compliance, and AI governance in isolation is inefficient. A unified approach consolidates risk management efforts into a cohesive, scalable framework. By breaking down operational silos, organizations can streamline workflows, improve efficiency through AI-driven automation, and proactively mitigate security threats. Integrating compliance and security within a single system ensures better regulatory adherence while reducing the complexity of data management. 

To stay ahead of emerging threats, organizations must modernize their approach to data security and governance. Investing in AI-driven security solutions enables businesses to automate data classification, detect vulnerabilities, and safeguard sensitive information at scale. Shifting from reactive compliance measures to proactive strategies ensures that regulatory requirements are met without last-minute adjustments. Moving away from fragmented security solutions and adopting a modular, scalable platform allows businesses to reduce risk and maintain resilience in an ever-evolving digital landscape. Those that embrace a forward-thinking, unified strategy will be best positioned for long-term success.

How Google Enhances AI Security with Red Teaming

 

Google continues to strengthen its cybersecurity framework, particularly in safeguarding AI systems from threats such as prompt injection attacks on Gemini. By leveraging automated red team hacking bots, the company is proactively identifying and mitigating vulnerabilities.

Google employs an agentic AI security team to streamline threat detection and response using intelligent AI agents. A recent report by Google highlights its approach to addressing prompt injection risks in AI systems like Gemini.

“Modern AI systems, like Gemini, are more capable than ever, helping retrieve data and perform actions on behalf of users,” the agent team stated. “However, data from external sources present new security challenges if untrusted sources are available to execute instructions on AI systems.”

Prompt injection attacks exploit AI models by embedding concealed instructions within input data, influencing system behavior. To counter this, Google is integrating advanced security measures, including automated red team hacking bots.

To enhance AI security, Google employs red teaming—a strategy that simulates real-world cyber threats to expose vulnerabilities. As part of this initiative, Google has developed a red-team framework to generate and test prompt injection attacks.

“Crafting successful indirect prompt injections,” the Google agent AI security team explained, “requires an iterative process of refinement based on observed responses.”

This framework leverages optimization-based attacks to refine prompt injection techniques, ensuring AI models remain resilient against sophisticated threats.

“Weak attacks do little to inform us of the susceptibility of an AI system to indirect prompt injections,” the report highlighted.

Although red team hacking bots challenge AI defenses, they also play a crucial role in reinforcing the security of systems like Gemini against unauthorized data access.

Key Attack Methodologies

Google evaluates Gemini's robustness using two primary attack methodologies:

1. Actor-Critic Model: This approach employs an attacker-controlled model to generate prompt injections, which are tested against the AI system. “These are passed to the AI system under attack,” Google explained, “which returns a probability score of a successful attack.” The bot then refines the attack strategy iteratively until a vulnerability is exploited.

2. Beam Search Technique: This method initiates a basic prompt injection that instructs Gemini to send sensitive information via email to an attacker. “If the AI system recognizes the request as suspicious and does not comply,” Google said, “the attack adds random tokens to the end of the prompt injection and measures the new probability of the attack succeeding.” The process continues until an effective attack method is identified.

By leveraging red team hacking bots and AI-driven security frameworks, Google is continuously improving AI resilience, ensuring robust protection against evolving threats.

Navigating 2025: Emerging Security Trends and AI Challenges for CISOs

 

Security teams have always needed to adapt to change, but 2025 is poised to bring unique challenges, driven by advancements in artificial intelligence (AI), sophisticated cyber threats, and evolving regulatory mandates. Chief Information Security Officers (CISOs) face a rapidly shifting landscape that requires innovative strategies to mitigate risks and ensure compliance.

The integration of AI-enabled features into products is accelerating, with large language models (LLMs) introducing new vulnerabilities that attackers may exploit. As vendors increasingly rely on these foundational models, CISOs must evaluate their organization’s exposure and implement measures to counter potential threats. 

"The dynamic landscape of cybersecurity regulations, particularly in regions like the European Union and California, demands enhanced collaboration between security and legal teams to ensure compliance and mitigate risks," experts note. Balancing these regulatory requirements with emerging security challenges will be crucial for protecting enterprises.

Generative AI (GenAI), while presenting security risks, also offers opportunities to strengthen software development processes. By automating vulnerability detection and bridging the gap between developers and security teams, AI can improve efficiency and bolster security frameworks.

Trends to Watch in 2025

1. Vulnerabilities in Proprietary LLMs Could Lead to Major Security Incidents

Software vendors are rapidly adopting AI-enabled features, often leveraging proprietary LLMs. However, these models introduce a new attack vector. Proprietary models reveal little about their internal guardrails or origins, making them challenging for security professionals to manage. Vulnerabilities in these models could have cascading effects, potentially disrupting the software ecosystem at scale.

2. Cloud-Native Workloads and AI Demand Adaptive Identity Management

The rise of cloud-native applications and AI-driven systems is reshaping identity management. Traditional, static access control systems must evolve to handle the surge in service-based identities. Adaptive frameworks are essential for ensuring secure and efficient access in dynamic digital environments.

3. AI Enhances Security in DevOps

A growing number of developers—58% according to recent surveys—recognize their role in application security. However, the demand for skilled security professionals in DevOps remains unmet.

AI is bridging this gap by automating repetitive tasks, offering smart coding recommendations, and integrating security into development pipelines. Authentication processes are also being streamlined, with AI dynamically assigning roles and permissions as services deploy across cloud environments. This integration enhances collaboration between developers and security teams while reducing risks.

CISOs must acknowledge the dual-edged nature of AI: while it introduces new risks, it also offers powerful tools to counter cyber threats. By leveraging AI to automate tasks, detect vulnerabilities, and respond to threats in real-time, organizations can strengthen their defenses and adapt to an evolving threat landscape.

The convergence of technology and security in 2025 calls for strategic innovation, enabling enterprises to not only meet compliance requirements but also proactively address emerging risks.


UIUC Researchers Expose Security Risks in OpenAI's Voice-Enabled ChatGPT-4o API, Revealing Potential for Financial Scams

 

Researchers recently revealed that OpenAI’s ChatGPT-4o voice API could be exploited by cybercriminals for financial scams, showing some success despite moderate limitations. This discovery has raised concerns about the misuse potential of this advanced language model.

ChatGPT-4o, OpenAI’s latest AI model, offers new capabilities, combining text, voice, and vision processing. These updates are supported by security features aimed at detecting and blocking malicious activity, including unauthorized voice replication.

Voice-based scams have become a significant threat, further exacerbated by deepfake technology and advanced text-to-speech tools. Despite OpenAI’s security measures, researchers from the University of Illinois Urbana-Champaign (UIUC) demonstrated how these protections could still be circumvented, highlighting risks of abuse by cybercriminals.

Researchers Richard Fang, Dylan Bowman, and Daniel Kang emphasized that current AI tools may lack sufficient restrictions to prevent misuse. They pointed out the risk of large-scale scams using automated voice generation, which reduces the need for human effort and keeps operational costs low.

Their study examined a variety of scams, including unauthorized bank transfers, gift card fraud, cryptocurrency theft, and social media credential theft. Using ChatGPT-4o’s voice capabilities, the researchers automated key actions like navigation, data input, two-factor authentication, and following specific scam instructions.

To bypass ChatGPT-4o’s data protection filters, the team used prompt “jailbreaking” techniques, allowing the AI to handle sensitive information. They simulated interactions with ChatGPT-4o by acting as gullible victims, testing the feasibility of different scams on real websites.

By manually verifying each transaction, such as those on Bank of America’s site, they found varying success rates. For example, Gmail credential theft was successful 60% of the time, while crypto-related scams succeeded in about 40% of attempts.

Cost analysis showed that carrying out these scams was relatively inexpensive, with successful cases averaging $0.75. More complex scams, like unauthorized bank transfers, cost around $2.51—still low compared to the potential profits such scams might yield.

OpenAI responded by emphasizing that their upcoming model, o1-preview, includes advanced safeguards to prevent this type of misuse. OpenAI claims that this model significantly outperforms GPT-4o in resisting unsafe content generation and handling adversarial prompts.

OpenAI also highlighted the importance of studies like UIUC’s for enhancing ChatGPT’s defenses. They noted that GPT-4o already restricts voice replication to pre-approved voices and that newer models are undergoing stringent evaluations to increase robustness against malicious use.

HiddenLayer Unveils "ShadowLogic" Technique for Implanting Codeless Backdoors in AI Models

 

Manipulating an AI model's computational graph can allow attackers to insert codeless, persistent backdoors, reports AI security firm HiddenLayer. This vulnerability could lead to malicious use of machine learning (ML) models in a variety of applications, including supply chain attacks.

Known as ShadowLogic, the technique works by tampering with the computational graph structure of a model, triggering unwanted behavior in downstream tasks. This manipulation opens the door to potential security breaches across AI supply chains.

Traditional backdoors offer unauthorized system access, bypassing security layers. Similarly, AI models can be exploited to include backdoors or manipulated to yield malicious outcomes. However, any changes in the model could potentially affect these hidden pathways.

HiddenLayer explains that using ShadowLogic enables threat actors to embed codeless backdoors that persist through model fine-tuning, allowing highly targeted and stealthy attacks.

Building on prior research showing that backdoors can be implemented during the training phase, HiddenLayer investigated how to inject a backdoor into a model's computational graph post-training. This bypasses the need for training phase vulnerabilities.

A computational graph is a mathematical blueprint that controls a neural network's operations. These graphs represent data inputs, mathematical functions, and learning parameters, guiding the model’s forward and backward propagation.

According to HiddenLayer, this graph acts like compiled code in a program, with specific instructions for the model. By manipulating the graph, attackers can override normal model logic, triggering predefined behavior when the model processes specific input, such as an image pixel, keyword, or phrase.

ShadowLogic leverages the wide range of operations supported by computational graphs to embed triggers, which could include checksum-based activations or even entirely new models hidden within the original one. HiddenLayer demonstrated this method on models like ResNet, YOLO, and Phi-3 Mini.

These compromised models behave normally but respond differently when presented with specific triggers. They could, for example, fail to detect objects or generate controlled responses, demonstrating the subtlety and potential danger of ShadowLogic backdoors.

Such backdoors introduce new vulnerabilities in AI models that do not rely on traditional code exploits. Embedded within the model’s structure, these backdoors are harder to detect and can be injected across a variety of graph-based architectures.

ShadowLogic is format-agnostic and can be applied to any model that uses graph-based architectures, regardless of the domain, including autonomous navigation, financial predictions, healthcare diagnostics, and cybersecurity.

HiddenLayer warns that no AI system is safe from this type of attack, whether it involves simple classifiers or advanced large language models (LLMs), expanding the range of potential targets.