- System profile details
- Usernames and passwords
- Browser data (cookies, web history, login credentials)
- Cryptocurrency wallet info
- Telegram data
- OpenVPN profiles
- Keychain and Apple Notes data
- Files from local folders
Cookie pop-ups can be annoying, and your first reaction is to get rid of them immediately by hitting that “accept all” button. But is there anything else you can do?
Cookies are small files saved by websites that store information used to personalize your experience, particularly on the sites you visit most. They may remember your login details, preferred news topics, or shopping preferences based on your browsing history. Advertisers also use cookies to track your browsing behaviour and serve targeted ads.
Session cookies: These are temporary, used for things like tracking items in your shopping cart. They are automatically deleted when the browser session ends.
Persistent cookies: As the name suggests, these cookies stick around for longer periods, for example to save login details so you can access your email faster. Their lifetime can range from days to years.
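The difference between the two types comes down to a single attribute on the Set-Cookie header. Below is a minimal sketch in TypeScript for Node.js; the cookie names and the 30-day lifetime are illustrative, not taken from any particular site.

```ts
import { createServer } from "node:http";

const THIRTY_DAYS_SECONDS = 60 * 60 * 24 * 30;

createServer((_req, res) => {
  res.setHeader("Set-Cookie", [
    // Session cookie: no Expires/Max-Age, so the browser drops it when the session ends.
    "cart_id=abc123; Path=/; HttpOnly; Secure; SameSite=Lax",
    // Persistent cookie: Max-Age keeps it on disk until the deadline passes.
    `remember_login=token456; Path=/; Max-Age=${THIRTY_DAYS_SECONDS}; HttpOnly; Secure; SameSite=Lax`,
  ]);
  res.end("cookies set");
}).listen(8080);
```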
When you visit a website, pop-ups inform you about the “essential cookies” that you can’t opt out of; without them, core features such as shopping carts wouldn’t work. In the settings, however, you can opt out of “non-essential cookies.”
Wayne Memorial Hospital in the US has notified 163,440 people about a year-old data breach, dating back to May 2024, that exposed details such as: names, Social Security numbers, user IDs and passwords, financial account numbers, credit and debit card numbers with expiration dates and CVV codes, medical history, diagnoses, treatments, prescriptions, lab test results and images, health insurance, Medicare, and Medicaid numbers, healthcare provider numbers, state-issued ID numbers, and dates of birth.
Initially, the hospital informed only 2,500 people about the attack in August 2024. Ransomware group Monti took responsibility for the attack and warned that it would leak the data by July 8, 2024.
Wayne Memorial Hospital, however, has not confirmed Monti’s claim. As of now, it is not known whether the hospital paid a ransom, what amount Monti demanded, why the hospital took more than a year to inform victims, or how the threat actors compromised its infrastructure.
According to the notice sent to victims, “On June 3, 2024, WMH detected a ransomware event, whereby an unauthorized third party gained access to WMH’s network, encrypted some of WMH’s data, and left a ransom note on WMH’s network.” The forensic investigation by WMH found evidence of unauthorized access to a few WMH systems between “May 30, 2024, and June 3, 2024.”
The hospital has offered victims one year of free credit monitoring and fraud assistance via CyberScout. The deadline to apply is three months from the date of the notice letter.
Monti is a ransomware gang that shares similarities with the Conti group. Its first claimed breach dates to February 2023, although the group has been active since June 2022. Monti is infamous for abusing software bugs such as Log4Shell. It both encrypts target systems and steals data, pressuring victims to pay a ransom in exchange for deleting the stolen data and restoring their systems.
To date, Monti has claimed responsibility for 16 attacks, two of which hit healthcare providers.
In April 2023, the Italian healthcare authority ASL Avezzano-Sulmona-L’Aquila reported a ransomware attack that caused large-scale disruption for a month. Monti demanded a $3 million ransom for 500 GB of stolen data; ASL denied paying.
Excelsior Orthopaedics informed 394,752 people about a June 2024 data compromise.
As artificial intelligence becomes part of daily workflows, attackers are exploring new ways to exploit its weaknesses. Recent research has revealed a method where seemingly harmless images uploaded to AI systems can conceal hidden instructions, tricking chatbots into performing actions without the user’s awareness.
How hidden commands emerge
The risk lies in how AI platforms process images. To reduce computing costs, most systems shrink images before analysis, a step known as downscaling. During this resizing, certain pixel patterns, deliberately placed by an attacker, can form shapes or text that the model interprets as user input. While the original image looks ordinary to the human eye, the downscaled version quietly delivers instructions to the system.
This technique is not entirely new. Academic studies as early as 2020 suggested that scaling algorithms such as bicubic or bilinear resampling could be manipulated to reveal invisible content. What is new is the demonstration of this tactic against modern AI interfaces, proving that such attacks are practical rather than theoretical.
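As an illustration, the resize step can be reproduced offline to see what a model actually receives. Below is a minimal sketch in TypeScript using the sharp image library; the 512-pixel target size and the bicubic-style kernel are assumptions, since real preprocessing pipelines vary by provider.

```ts
// Reproduce the downscaling step offline and save the result for human inspection.
import sharp from "sharp";

async function previewWhatTheModelSees(inputPath: string): Promise<void> {
  await sharp(inputPath)
    .resize(512, 512, { fit: "inside", kernel: "cubic" }) // bicubic-style resampling
    .toFile("downscaled-preview.png");
  // Open downscaled-preview.png: pixel patterns that are invisible at full
  // resolution may collapse into readable text at this size.
}

previewWhatTheModelSees("upload.png").catch(console.error);
```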
Why this matters
Multimodal systems, which handle both text and images, are increasingly connected to calendars, messaging apps, and workplace tools. A hidden prompt inside an uploaded image could, in theory, request access to private information or trigger actions without explicit permission. One test case showed that calendar data could be forwarded externally, illustrating the potential for identity theft or information leaks.
The real concern is scale. As organizations integrate AI assistants into daily operations, even one overlooked vulnerability could compromise sensitive communications or financial data. Because the manipulation happens inside the preprocessing stage, traditional defenses such as firewalls or antivirus tools are unlikely to detect it.
Building safer AI systems
Defending against this form of “prompt injection” requires layered strategies. For users, simple precautions include checking how an image looks after resizing and confirming any unusual system requests. For developers, stronger measures are necessary: restricting image dimensions, sanitizing inputs before models interpret them, requiring explicit confirmation for sensitive actions, and testing models against adversarial image samples.
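For the developer-side measures, a minimal sketch of what such a guardrail could look like is below (TypeScript with the sharp library; the 2048-pixel limit and the reject-then-re-encode flow are illustrative assumptions, not any platform’s actual API).

```ts
import sharp from "sharp";

const MAX_DIM = 2048; // assumed upper bound on accepted image dimensions

// Reject oversized uploads and re-encode the rest, which drops metadata and
// normalizes pixel data before the model ever sees the image. Sensitive
// actions should still require explicit user confirmation downstream.
async function sanitizeUpload(inputPath: string): Promise<Buffer> {
  const meta = await sharp(inputPath).metadata();
  if (!meta.width || !meta.height || meta.width > MAX_DIM || meta.height > MAX_DIM) {
    throw new Error("image rejected: missing or excessive dimensions");
  }
  return sharp(inputPath).png().toBuffer();
}
```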
Researchers stress that piecemeal fixes will not be enough. Only systematic design changes such as enforcing secure defaults and monitoring for hidden instructions can meaningfully reduce the risks.
Images are no longer guaranteed to be safe when processed by AI systems. As attackers learn to hide commands where only machines can read them, users and developers alike must treat every upload with caution. By prioritizing proactive defenses, the industry can limit these threats before they escalate into real-world breaches.
Anthropic announced last week that it will update its terms of service and privacy policy to allow the use of chats for training its AI model “Claude.” Users of all subscription levels (Claude Free, Pro, Max, and Claude Code) will be affected by the update. Anthropic’s new Consumer Terms and Privacy Policy will take effect from September 28, 2025.
If you are a Claude user, you can delay accepting the new policy by choosing ‘Not now’; after September 28, however, your account will be opted in by default to share your chat transcripts for training the AI model.
The new policy follows the genAI boom, in which the demand for massive amounts of training data has prompted various tech companies to quietly rethink and update their terms of service. These changes let companies use your data to train their own AI models or hand it to other companies to improve their AI bots.
"By participating, you’ll help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations. You’ll also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users," Anthropic said.
Earlier this year, in July, WeTransfer, the popular file-sharing platform, ran into controversy when it changed its terms of service agreement, facing immediate backlash from its users and the online community. The new terms stated that files uploaded to the platform could be used to improve machine learning models. After the incident, the platform has been trying to fix things by removing “any mention of AI and machine learning from the document,” according to the Indian Express.
With rising concerns that using personal data to train AI models compromises user privacy, companies are increasingly offering users the option to opt out of AI training.
Cybersecurity researchers have found what they describe as the first-ever AI-powered ransomware strain. ESET researchers Peter Strycek and Anton Cherepanov discovered the strain and have termed it “PromptLock.” "During infection, the AI autonomously decides which files to search, copy, or encrypt — marking a potential turning point in how cybercriminals operate," ESET said.
The malware has not yet been spotted in any cyberattack, the researchers say. PromptLock appears to be in development and poised for launch.
Although cybercriminals have used GenAI tools to create malware in the past, PromptLock is the first known ransomware built around an AI model. According to Cherepanov’s LinkedIn post, PromptLock uses OpenAI’s gpt-oss:20b model through the Ollama API to generate new scripts.
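For context, the Ollama API referenced here is a documented local HTTP interface. The sketch below (TypeScript) only shows what a generic request to that endpoint looks like so defenders know what this class of traffic resembles; it is not PromptLock’s code, which has not been published, and the prompt is a placeholder.

```ts
// Generic call to Ollama's documented /api/generate endpoint on a local host.
// Illustrative only; not taken from PromptLock, whose code is not public.
async function queryLocalModel(task: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "gpt-oss:20b", prompt: task, stream: false }),
  });
  const data = await res.json();
  return data.response; // the generated text a caller would act on
}
```

Unexpected workstation traffic to port 11434, or processes piping model output straight into a script interpreter, are the kinds of signals defenders can watch for.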
Cherepanov’s LinkedIn post highlighted that the ransomware scripts can exfiltrate files and encrypt data, and may gain the ability to destroy files in the future. He said that “while multiple indicators suggest that the sample is a proof-of-concept (PoC) or a work-in-progress rather than an operational threat in the wild, we believe it is crucial to raise awareness within the cybersecurity community about such emerging risks.”
According to Dark Reading’s conversation with the ESET researchers, AI-based ransomware is a serious threat for security teams. Strycek and Cherepanov are still working to learn more about PromptLock, but wanted to warn security teams about the ransomware immediately.
ESET on X noted that "the PromptLock ransomware is written in #Golang, and we have identified both Windows and Linux variants uploaded to VirusTotal."
Thanks to rapid AI adoption across the industry, threat actors have already started using AI tools in phishing campaigns, creating fake content and malicious websites. AI-powered ransomware, however, will be an even tougher challenge for cybersecurity defenders.
According to a report by Proofpoint, the majority of CISOs fear a material cyberattack in the next 12 months. These concerns highlight the increasing risks and cultural shifts among CISOs.
“76% of CISOs anticipate a material cyberattack in the next year, with human risk and GenAI-driven data loss topping their concerns,” Proofpoint said. Against this backdrop, corporate stakeholders are trying to better understand the technology risks their organizations face and how well protected they are.
Experts believe that CISOs are becoming more open about these attacks, thanks to SEC disclosure rules, stricter regulations, board expectations, and enquiries. The report surveyed 1,600 CISOs worldwide, all from organizations with more than 1,000 employees.
The study highlights rising concern about doing business amid ongoing cyberattacks. Although the majority of CISOs are confident about their cybersecurity culture, six out of 10 said their organizations are not prepared for a cyberattack. A majority of CISOs also said they would favour paying a ransom to prevent the leak of sensitive data.
AI has emerged as both a top concern and a top priority for CISOs. Two-thirds of CISOs believe that enabling GenAI tools is a top priority over the next two years, despite the ongoing risks. In the US, however, 80% of CISOs worry about possible data breaches through GenAI platforms.
With adoption rates rising, organizations have started to move from restriction to governance. “Most are responding with guardrails: 67% have implemented usage guidelines, and 68% are exploring AI-powered defenses, though enthusiasm has cooled from 87% last year. More than half (59%) restrict employee use of GenAI tools altogether,” Proofpoint said.
Named “nodejs-smtp,” the package imitates the legitimate email library nodemailer, copying its README description, page styling, and tagline, and has drawn around 347 downloads since it was uploaded to the npm registry earlier this year by a user called “nikotimon.”
It is no longer available. Socket researcher Kirill Boychenko said, "On import, the package uses Electron tooling to unpack Atomic Wallet's app.asar, replace a vendor bundle with a malicious payload, repackage the application, and remove traces by deleting its working directory.”
The aim is to overwrite the recipient address of outgoing cryptocurrency transactions with hard-coded wallet addresses controlled by the attacker. To escape developers’ attention, the package also delivers on its promise of working as an SMTP-based mailer.
This surfaced after ReversingLabs found an npm package called "pdf-to-office" that achieved the same result by tampering with the “app.asar” archives of the Exodus and Atomic wallets, modifying a JavaScript file inside them to launch the clipper function.
According to Boychenko, “this campaign shows how a routine import on a developer workstation can quietly modify a separate desktop application and persist across reboots.” He also said that “by using import time execution and Electron packaging, a lookalike mailer becomes a wallet drainer that alters Atomic and Exodus on compromised Windows systems."
In short, a routine import on a developer’s PC can silently change a different desktop application and survive reboots: by abusing import-time execution and Electron packaging, a lookalike mailer turns into a wallet drainer. Security teams should watch for wallet drainers delivered through package registries.
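One cheap sanity check follows from the fact that an Electron app.asar is just a file on disk that any process with write access can repack. The sketch below (TypeScript for Node.js) compares the archive’s hash against a baseline recorded after a clean install; the wallet path and baseline value are illustrative assumptions.

```ts
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Assumed install location; adjust for the app and platform being checked.
const ASAR_PATH = "C:/Users/dev/AppData/Local/Programs/atomic/resources/app.asar";
const KNOWN_GOOD_SHA256 = "<hash recorded after a clean install>"; // placeholder baseline

const actual = createHash("sha256").update(readFileSync(ASAR_PATH)).digest("hex");
if (actual !== KNOWN_GOOD_SHA256) {
  console.warn("app.asar differs from the recorded baseline; investigate before use");
}
```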
Also known as SIMjacking, SIM swapping is a tactic where a cybercriminal convinces your mobile carrier to port your phone number to a SIM card they control. The result is that you lose access to your phone number and service, while the cybercriminal gains full access to it.
To convince the carrier to perform a SIM swap, the threat actor needs personal information about you. They can get it from data breaches traded on the dark web, trick you into handing it over through a phishing scam, or harvest it from your social media profiles if the information is public.
Once they have the information, the threat actor calls customer support and requests that your number be moved to a new SIM card. In most cases, the carrier doesn’t need much convincing.
An attacker with your phone number can impersonate you to friends and family and extort money. Your wider account security is also at risk, as most online services use your phone number for account recovery.
SIM swapping is especially dangerous because SMS-based two-factor authentication (2FA) is still widely used. Many services require us to activate 2FA on our accounts, and some deliver codes only through SMS.
You can also check your carrier’s website to see whether there is an option to block SIM change requests. This is one way to secure your phone number.
If that option isn’t available from your carrier, look for the option to set a PIN or secret phrase. Some carriers let users set these and will call you back to confirm changes to your account.
Avoid SMS-based 2FA where possible; use passkeys instead (a registration sketch follows below).
Use a SIM PIN for your phone to lock your SIM card.
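On the passkey recommendation above, registration goes through the browser’s standard WebAuthn API rather than anything tied to a phone number. A minimal, illustrative browser-side sketch in TypeScript follows; all field values are placeholders, and in a real flow the challenge and user ID come from the server.

```ts
async function registerPasskey(): Promise<void> {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in real flows
      rp: { name: "Example Service" },
      user: {
        id: new TextEncoder().encode("user-1234"), // placeholder user handle
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: { userVerification: "required" },
    },
  });
  // The credential is sent to the server for verification and storage; unlike
  // an SMS code, nothing here travels over the phone network.
  console.log(credential?.id);
}
```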