Fraud has evolved into a calculated industry powered by technology, psychology, and precision targeting. Gone are the days when scams could ...
The mega-messenger from Meta is allegedly collecting user data to generate ad money, according to recent attacks on WhatsApp. WhatsApp strongly opposes these fresh accusations, but it didn't help that a message of its own appeared to imply the same.
There are two prominent origins of the recent attacks. Few experts are as well-known as Elon Musk, particularly when it occurs on X, the platform he owns. Musk asserted on the Joe Rogan Experience that "WhatsApp knows enough about what you're texting to know what ads to show you." "That is a serious security flaw."
These so-called "hooks for advertising" are typically thought to rely on metadata, which includes information on who messages whom, when, and how frequently, as well as other information from other sources that is included in a user's profile.
The message content itself is shielded by end-to-end encryption, which is the default setting for all 3 billion WhatsApp users. Signal's open-source encryption protocol, which the Meta platform adopted and modified for its own use, is the foundation of WhatsApp's security. So, in light of these new attacks, do you suddenly need to stop using WhatsApp?
In reality, WhatsApp's content is completely encrypted. There has never been any proof that Meta, WhatsApp, or anybody else can read the content itself. However, the platform you are utilizing is controlled by Meta, and it is aware of your identity. It does gather information on how you use the platform.
Additionally, it shares information with Meta so that it can "show relevant offers/ads." Signal has a small portion of WhatsApp's user base, but it does not gather metadata in the same manner. Think about using Signal instead for sensitive content. Steer clear of Telegram since it is not end-to-end encrypted and RCS because it is not yet cross-platform encrypted.
Remember that end-to-end encryption only safeguards your data while it is in transit. It has no effect on the security of your content on the device. I can read all of your messages, whether or not they are end-to-end encrypted, if I have control over your iPhone or Android.
A prompt injection occurs when hackers embed secret instructions inside what looks like an ordinary input. The AI can’t tell the difference between developer-given rules and user input, so it processes everything as one continuous prompt. This loophole lets attackers trick the model into following their commands — stealing data, installing malware, or even hijacking smart home devices.
Security experts warn that these malicious instructions can be hidden in everyday digital spaces — web pages, calendar invites, PDFs, or even emails. Attackers disguise their prompts using invisible Unicode characters, white text on white backgrounds, or zero-sized fonts. The AI then reads and executes these hidden commands without realizing they are malicious — and the user remains completely unaware that an attack has occurred.
For instance, a company might upload a market research report for analysis, unaware that the file secretly contains instructions to share confidential pricing data. The AI dutifully completes both tasks, leaking sensitive information without flagging any issue.
In another chilling example from the Black Hat security conference, hidden prompts in calendar invites caused AI systems to turn off lights, open windows, and even activate boilers — all because users innocently asked Gemini to summarize their schedules.
Prompt injection attacks mainly fall into two categories:
Direct Prompt Injection: Attackers directly type malicious commands that override the AI’s normal functions.
Indirect Prompt Injection: Hackers hide commands in external files or links that the AI processes later — a far stealthier and more dangerous method.
There are also advanced techniques like multi-agent infections (where prompts spread like viruses between AI systems), multimodal attacks (hiding commands in images, audio, or video), hybrid attacks (combining prompt injection with traditional exploits like XSS), and recursive injections (where AI generates new prompts that further compromise itself).
It’s crucial to note that prompt injection isn’t the same as “jailbreaking.” While jailbreaking tries to bypass safety filters for restricted content, prompt injection reprograms the AI entirely — often without the user realizing it.
Even though many solutions focus on corporate users, individuals can also protect themselves:
The University of Pennsylvania is investigating a cybersecurity incident after unknown hackers gained access to internal email accounts and sent thousands of misleading messages to students, alumni, and staff on Friday morning. The fraudulent emails, which appeared to come from the university’s Graduate School of Education (GSE), contained inflammatory and false statements aimed at discrediting the institution.
The messages, distributed through multiple legitimate @upenn.edu accounts, mocked the university’s data protection standards and included offensive remarks about its internal policies. Some messages falsely claimed the university violated the Family Educational Rights and Privacy Act (FERPA) and threatened to release private student data. Several recipients reported receiving the same message multiple times from different Penn-affiliated senders.
In a statement to media outlets, Penn spokesperson Ron Ozio confirmed that the university’s incident response team is actively handling the situation. He described the email as “fraudulent,” adding that the content “does not reflect the mission or actions of Penn or Penn GSE.” The university emphasized that it is coordinating with cybersecurity specialists to contain the breach and determine the extent of access obtained by the attackers.
Preliminary findings suggest the threat actors may have compromised university email accounts, likely through credential theft or phishing, and used them to send the mass messages. According to reports, the attackers claim to have obtained extensive data including donor, student, and alumni records, and have threatened to leak it online. However, Penn has not verified these claims and continues to assess which systems were affected.
The timing and tone of the hackers’ messages suggest that their motive may extend beyond simple disruption. The emails referenced university fundraising efforts and included statements like “please stop giving us money,” implying an intent to undermine donor confidence. Analysts also noted that the incident followed Penn’s public rejection of a White House initiative known as the “Compact for Academic Excellence in Higher Education.”
That proposal, which several universities declined to sign, sought to impose federal funding conditions that included banning affirmative action in admissions and hiring, freezing tuition for five years, capping international enrollment, and enforcing policies that critics say would marginalize LGBTQ+ and gender-nonconforming students. In response, Penn President J. Larry Jameson had stated that such conditions “conflict with the viewpoint diversity and freedom of expression central to higher education.”
The university has advised all recipients to disregard the fake messages and avoid clicking on any embedded links or attachments. Anyone concerned about personal information exposure has been urged to monitor their accounts and report suspicious activity. Penn has promised to issue direct notifications if any verified data exposure is confirmed.
The growing risk of reputational and data threats faced by universities, which hold vast troves of academic and financial records cannot be more critical. As investigations take place, cybersecurity experts stress that academic institutions must adopt continuous monitoring, strict credential management, and transparent communication with affected communities when such attacks occur.