Security experts have identified a new kind of cyberattack that hides instructions inside ordinary pictures. The commands are invisible in the full-size image and appear only when the photo is automatically resized by artificial intelligence (AI) systems.
The attack works by adjusting specific pixels in a large picture. To the human eye, the image looks normal. But once an AI platform scales it down, those tiny adjustments blend together into readable text. If the system interprets that text as a command, it may carry out harmful actions without the user’s consent.
Researchers tested this method on several AI tools, including interfaces that connect with services like calendars and emails. In one demonstration, a seemingly harmless image was uploaded to an AI command-line tool. Because the tool automatically approved external requests, the hidden message forced it to send calendar data to an attacker’s email account.
The root of the problem lies in how computers shrink images. When reducing a picture, algorithms merge many pixels into fewer ones. Popular methods include nearest-neighbor, bilinear, and bicubic interpolation, and each produces different, predictable patterns when downscaling an image. Attackers can take advantage of that predictability by designing images that reveal commands only after scaling.
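To see why this matters, consider how differently common methods answer the same question. The short Python sketch below (using Pillow, with a hypothetical input file and a 224x224 target size typical of vision models) downscales one image three ways and measures how much the results disagree; it is the predictability of these differences that attackers exploit.

```python
# A minimal sketch: the same image downscaled with different
# interpolation methods yields different pixel values.
from PIL import Image
import numpy as np

# Hypothetical input: a large image reduced to 224x224,
# a size commonly used by vision models.
img = Image.open("input.png").convert("L")

methods = {
    "nearest": Image.Resampling.NEAREST,
    "bilinear": Image.Resampling.BILINEAR,
    "bicubic": Image.Resampling.BICUBIC,
}
small = {name: np.asarray(img.resize((224, 224), m))
         for name, m in methods.items()}

# The results differ pixel by pixel; an attacker tunes the source
# image so that one specific method produces readable text.
diff = np.abs(small["bilinear"].astype(int) - small["bicubic"].astype(int))
print("mean bilinear/bicubic disagreement:", diff.mean())
```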
To prove this, the researchers released Anamorpher, an open-source tool that generates such images. The tool can tailor pictures for different scaling methods and software libraries like TensorFlow, OpenCV, PyTorch, or Pillow. By hiding adjustments in dark parts of an image, attackers can make subtle brightness shifts that only show up when downscaled, turning backgrounds into letters or symbols.
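The core embedding trick can be illustrated in a few lines. The following is a deliberately simplified sketch, not Anamorpher's actual algorithm: it relies on the fact that nearest-neighbor downscaling keeps only one source pixel per block, so overwriting just those pixels rewrites the small image while barely changing the large one. The file names, image sizes, and 8x factor are assumptions for illustration.

```python
# Simplified embedding sketch (not Anamorpher's actual algorithm).
from PIL import Image
import numpy as np

K = 8  # assumed downscale factor: e.g. 1024 -> 128

# Assumed inputs: a 1024x1024 cover image and a 128x128 payload
# image containing the text the attacker wants to surface.
cover = np.asarray(Image.open("cover.png").convert("L")).copy()
payload = np.asarray(Image.open("payload.png").convert("L"))

# Nearest-neighbor sampling keeps one pixel per KxK block; for
# Pillow this lands near the block centre (an assumption checked
# against integer-factor downscales). Overwrite only those pixels.
cover[K // 2::K, K // 2::K] = payload

Image.fromarray(cover).save("attack.png")
# attack.png looks like the cover at full size (only 1 in 64 pixels
# changed), but a nearest-neighbor downscale returns the payload.
```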
Mobile phones and edge devices are at particular risk. These systems often force images into fixed sizes and rely on compression to save processing power. That makes them more likely to expose hidden content.
The researchers also built a way to identify which scaling method a system uses. They uploaded test images with patterns like checkerboards, circles, and stripes; the resulting artifacts, such as blurring, ringing, or color shifts, revealed which algorithm was at play.
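A rough version of that fingerprinting idea looks like this in Python: downscale a high-frequency test pattern with each candidate method locally, then compare against what the black-box system returns. Here the system's output is simulated, since we cannot call a real service.

```python
# Fingerprinting sketch: match local downscales against the
# (here simulated) output of the system under test.
from PIL import Image
import numpy as np

# 1-pixel checkerboard: maximally sensitive to the choice of kernel.
size = 512
board = (np.indices((size, size)).sum(axis=0) % 2 * 255).astype(np.uint8)
probe = Image.fromarray(board)

candidates = {
    "nearest": Image.Resampling.NEAREST,
    "bilinear": Image.Resampling.BILINEAR,
    "bicubic": Image.Resampling.BICUBIC,
    "lanczos": Image.Resampling.LANCZOS,
}

# Stand-in for the black-box system's preprocessed output.
observed = np.asarray(probe.resize((64, 64), Image.Resampling.BICUBIC))

for name, method in candidates.items():
    local = np.asarray(probe.resize((64, 64), method))
    score = np.abs(local.astype(int) - observed.astype(int)).mean()
    print(name, score)
# The method with the smallest difference is the likely scaler.
```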
This discovery also connects to core ideas in signal processing, particularly the Nyquist-Shannon sampling theorem. When an image is sampled below the rate its finest details require, distortions called aliasing appear. Attackers exploit this effect to create new patterns that were not visible in the original photo.
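Aliasing is easy to demonstrate. In the hedged sketch below, fine stripes are sampled well below their Nyquist rate; because nearest-neighbor resizing applies no low-pass filter, the result is not faint stripes but a coarser pattern that never existed in the source.

```python
# A tiny aliasing demonstration with Pillow.
from PIL import Image
import numpy as np

# Vertical stripes that repeat every 3 pixels: too fine to survive
# a naive 8x downscale, which samples far below the Nyquist rate.
x = np.indices((512, 512))[1]
stripes = (((x % 3) < 1).astype(np.uint8)) * 255
img = Image.fromarray(stripes)

# NEAREST does no low-pass filtering, so the output is a new,
# coarser pattern rather than a blurred version of the original.
aliased = img.resize((64, 64), Image.Resampling.NEAREST)
aliased.save("aliased.png")
```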
According to the researchers, simply switching scaling methods is not a fix. Instead, they suggest avoiding automatic resizing altogether by enforcing strict limits on upload dimensions. Where resizing is necessary, platforms should show users a preview of exactly what the AI system will process. They also advise requiring explicit user confirmation before any text detected inside an image can trigger sensitive operations.
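The preview suggestion translates naturally into code. The sketch below is illustrative only (the function names and the 224x224 model size are assumptions, not from the research); the key point is that the preview must come from the exact same resize call the inference pipeline uses.

```python
# Hedged mitigation sketch: show the user what the model will see.
from PIL import Image

MODEL_SIZE = (224, 224)  # assumed model input size

def preprocess_for_model(path: str) -> Image.Image:
    # Must be the *same* resize call as the inference pipeline,
    # otherwise the preview will not match what the model sees.
    return Image.open(path).convert("RGB").resize(
        MODEL_SIZE, Image.Resampling.BICUBIC)

def handle_upload(path: str) -> Image.Image:
    preview = preprocess_for_model(path)
    preview.save("what_the_model_sees.png")  # surface this to the user
    return preview
```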
This new attack builds on past research into adversarial images and prompt injection. While earlier studies focused on fooling image-recognition models, today’s risks are greater because modern AI systems are connected to real-world tools and services. Without stronger safeguards, even an innocent-looking photo could become a gateway for data theft.
Flashpoint’s Global Threat Intelligence Index report is based on more than 3.6 petabytes of data analyzed by the company’s experts. According to the report, hackers stole credentials from 5.8 million compromised devices. This sharp rise in credential theft is problematic because stolen credentials can give attackers access to organizational data, even when accounts are protected by multi-factor authentication (MFA).
The report also includes findings that should concern security teams.
Through June 2025, the firm had identified more than 20,000 vulnerabilities, 12,200 of which have not been reported in the National Vulnerability Database (NVD), meaning security teams that rely on it are never alerted. Around 7,000 of these have public exploits available, exposing organizations to severe threats.
According to the report’s authors, “The digital attack surface continues to expand, and the volume of disclosed vulnerabilities is growing at a record pace – up by a staggering 246% since February 2025. This explosion, coupled with a 179% increase in publicly available exploit code, intensifies the pressure on security teams. It’s no longer feasible to triage and remediate every vulnerability.”
Both trends fuel ransomware attacks, since initial access most often comes through vulnerability exploitation or stolen credentials. Total reported breaches have increased by 179% since 2024, with manufacturing (22%), technology (18%), and retail (13%) hit hardest. The report also disclosed 3,104 data breaches in the first half of this year, linked to 9.5 billion hacked records.
Flashpoint reports that “Over the past four months, data breaches surged by 235%, with unauthorized access accounting for nearly 78% of all reported incidents. Data breaches are both the genesis and culmination of threat actor campaigns, serving as a source of continuous fuel for cybercrime activity.”
In June, the Identity Theft Resource Center (ITRC) warned that 2025 could become a record year for data breaches in the US.
A recently observed version of the banking malware known as Coyote has begun using a lesser-known Windows feature, originally designed to help users with disabilities, to gather sensitive information from infected systems. This marks the first confirmed use of Microsoft’s UI Automation (UIA) framework by malware for this purpose in real-world attacks.
The UI Automation framework is part of Windows’ accessibility system. It allows assistive tools, such as screen readers, to interact with software by analyzing and controlling user interface (UI) elements, like buttons, text boxes, and navigation bars. Unfortunately, this same capability is now being turned into a tool for cybercrime.
What is the malware doing?
According to recent findings from cybersecurity researchers, this new Coyote variant targets online banking and cryptocurrency exchange platforms by monitoring user activity on the infected device. When a person accesses a banking or crypto website through a browser, the malware scans the visible elements of the application’s interface using UIA. It checks things like the tab names and address bar to figure out which website is open.
If the malware recognizes a target website based on a preset list of 75 financial services, it continues tracking activity. This list includes major banks and crypto platforms, with a focus on Brazilian users.
If the browser window title doesn’t give away the website, the malware digs deeper. It uses UIA to scan through nested elements in the browser, such as open tabs or address bars, to extract URLs. These URLs are then compared to its list of targets. While current evidence shows this technique is being used mainly for tracking, researchers have also demonstrated that it could be used to steal login credentials in the future.
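For defenders who want to understand the technique, the pattern is easy to reproduce with standard automation libraries. The Python sketch below uses pywinauto’s UIA backend to do what Coyote does: read a browser’s title and address bar, then match them against a target list. The control title “Address and search bar” matches Chrome’s English UI and, like the target list, is an assumption here.

```python
# Hedged sketch of the UIA technique, for defensive study only.
from pywinauto import Desktop

TARGETS = ["mybank.example", "crypto-exchange.example"]  # stand-in list

# Attach to a visible Chrome window through UI Automation.
win = Desktop(backend="uia").window(title_re=".*Chrome.*")

# First check the window title, as the malware does...
title = win.window_text()

# ...and if that is inconclusive, walk down to the address bar's
# Edit control and read its value pattern to recover the URL.
url = win.child_window(title="Address and search bar",
                       control_type="Edit").get_value()

if any(t in title or t in url for t in TARGETS):
    print("a target site is open:", url)
```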
Why is this alarming?
This form of cyberattack bypasses many traditional security tools, such as antivirus programs and endpoint detection systems, making it harder to detect. The concern grows when you consider that accessibility tools are supposed to help people with disabilities, not become a pathway for cybercriminals.
The potential abuse of accessibility features is not limited to Windows. On Android, similar tactics have long been used by malicious apps, prompting developers to build stricter safeguards. Experts believe it may now be time for Microsoft to take similar steps to limit misuse of its accessibility systems.
While no official comment has been made regarding new protections, the discovery highlights how tools built for good can be misused if not properly secured. For now, the best defense remains vigilance, from users as well as from developers of operating systems and applications.