Meta to Use AI Chat Data for Targeted Ads Starting December 16

 

Meta, the parent company of social media giants Facebook and Instagram, will soon begin leveraging user conversations with its AI chatbot to drive more precise targeted advertising on its platforms. 

Starting December 16, Meta will integrate data from interactions users have with the generative AI chat tool directly into its ad targeting algorithms. For instance, if a user tells the chatbot about a preference for pizza, this information could translate to seeing additional pizza-related ads, such as Domino's promotions, across Instagram and Facebook feeds.

Notably, users do not have the option to opt out of this new data usage policy, sparking debates and concerns over digital privacy. Privacy advocates and everyday users alike have expressed discomfort with the increasing granularity of Meta’s ad targeting, as hyper-targeted ads are widely perceived as intrusive and reflective of a broader erosion of personal privacy online. 

In response to these growing concerns, Meta claims there are clear boundaries regarding what types of conversational data will be incorporated into ad targeting. The company lists several sensitive categories it pledges to exclude: religious beliefs, political views, sexual orientation, health information, and racial or ethnic origin. Despite these assurances, skepticism remains about how effectively Meta can prevent indirect influences on ad targeting, since related topics might naturally slip into AI interactions even without explicit references.

Industry commentators have highlighted the novelty and controversial nature of Meta’s move, referring to it as marking a 'new frontier in digital privacy.' Some users are openly calling for boycotts of Meta’s chat features or responding with jaded irony, pointing out that Meta's business model has always relied on user data monetization.

Meta's policy will initially exclude the United Kingdom, South Korea, and all countries in the European Union, likely due to stricter privacy regulations and ongoing scrutiny by European authorities. The new initiative fits into Meta CEO Mark Zuckerberg’s broader strategy to capitalize on AI, with the company planning a massive $600 billion investment in AI infrastructure over the coming years. 

With this policy shift, over 3.35 billion daily active users worldwide—except in the listed exempted regions—can expect changes in the nature and specificity of the ads they see across Meta’s core platforms. The change underscores the ongoing tension between user privacy and tech companies’ drive for personalized digital advertising.

Meta's Platforms Rank Worst in Social Media Privacy Rankings: Report

Meta’s Instagram, WhatsApp, and Facebook have once again been flagged as the most privacy-violating social media apps. According to Incogni’s Social Media Privacy Ranking report 2025, Meta and TikTok sit at the bottom of the list. Elon Musk’s X (formerly Twitter) also received poor marks, though it outperformed Meta in a few categories.

Discord, Pinterest, and Quora perform well

The report analyzed 15 of the most widely used social media platforms globally, measuring them against 14 privacy criteria organized into six different categories: AI data use, user control, ease of access, regulatory transgressions, transparency, and data collection. The research methodology focused on how an average user could understand and control privacy policies.
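To make the methodology concrete, here is a toy illustration of how a composite privacy score could be computed from per-category penalty marks. The six categories mirror the report's, but the equal weighting and all numbers are invented placeholders, not Incogni's actual scoring.

```python
# Toy composite privacy score. Higher marks mean more privacy problems.
# Categories mirror the report; weights and numbers are invented placeholders.
CATEGORIES = ["AI data use", "user control", "ease of access",
              "regulatory transgressions", "transparency", "data collection"]

def privacy_score(marks: dict[str, float]) -> float:
    """Equal-weight average of 0-10 penalty marks (lower is better)."""
    return sum(marks[c] for c in CATEGORIES) / len(CATEGORIES)

platform_a = {c: 2.0 for c in CATEGORIES}                      # few penalties
platform_b = {**platform_a, "regulatory transgressions": 9.0}  # heavy fines
print(privacy_score(platform_a))  # 2.0
print(privacy_score(platform_b))  # ~3.17
```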

Discord, Pinterest, and Quora performed best in the 2025 ranking. Discord placed first, thanks to its policy of not handing over user data for AI model training. Pinterest ranked second, thanks to its strong user controls and fewer regulatory penalties. Quora came third thanks to its limited collection of user data.

Why were Meta platforms penalized?

The Meta platforms, however, were penalized heavily in various categories. Facebook was penalized for frequent regulatory fines, including GDPR penalties in Europe and further penalties in the US and other regions. Instagram and WhatsApp received heavy penalties due to policies allowing the collection of sensitive personal data, such as sexual orientation and health information.

Penalties against X

X was penalized for vast data collection and past privacy fines, but it still ranked above Meta and TikTok in some categories. X was among the easiest platforms to delete an account from, and it provided information to government organizations at a lower rate than other platforms. However, X allows user data to be used for training AI models, which hurt its overall privacy score.

“One of the core principles motivating Incogni’s research here is the idea that consent to have personal information gathered and processed has to be properly informed to be valid and meaningful. It’s research like this that arms users with not only the facts but also the tools to inform their choices,” Incogni said in its blog. 

FileFix Attack Uses Fake Meta Suspensions to Spread StealC Malware

 

A new cyber threat known as the FileFix attack is gaining traction, using deceptive tactics to trick users into downloading malware. According to Acronis, which first identified the campaign, hackers are sending fake Meta account suspension notices to lure victims into installing the StealC infostealer. Reported by Bleeping Computer, the attack relies on social engineering techniques that exploit urgency and fear to convince targets to act quickly without suspicion. 

The StealC malware is designed to extract sensitive information from multiple sources, including cloud-stored credentials, browser cookies, authentication tokens, messaging platforms, cryptocurrency wallets, VPNs, and gaming accounts. It can also capture desktop screenshots. Victims are directed to a fake Meta support webpage available in multiple languages, warning them of imminent account suspension. The page urges users to review an “incident report,” which is disguised as a PowerShell command. Once executed, the command installs StealC on the victim’s device. 

To execute the attack, users are instructed to copy a path that appears legitimate but contains hidden malicious code and subtle formatting tricks, such as extra spaces, that make it harder to detect. Unlike traditional ClickFix attacks, which use the Windows Run dialog box, FileFix leverages the Windows File Explorer address bar to execute malicious commands. This method, attributed to a researcher known as mr.d0x, makes the attack harder for casual users to recognize.
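For defenders, the copy-paste step is also where such payloads can be caught. The sketch below flags a pasted "path" that hides a command invocation or the long runs of spaces FileFix uses to push the visible file name off-screen; the patterns are illustrative assumptions, not a complete detection rule.

```python
import re

# Heuristics for FileFix-style clipboard payloads: text presented as a file
# path that actually begins with, or hides, an executable command.
# Patterns are illustrative, not exhaustive.
SUSPICIOUS_PATTERNS = [
    r"(?i)\bpowershell(\.exe)?\b",    # hidden PowerShell invocation
    r"(?i)\bcmd(\.exe)?\s*/c",        # cmd.exe one-liner
    r"(?i)\b(mshta|rundll32)\b",      # other common command launchers
    r"\s{10,}",                       # long space runs hiding the real command
]

def looks_like_filefix_payload(clipboard_text: str) -> bool:
    """Return True if a supposed file path shows signs of a hidden command."""
    return any(re.search(p, clipboard_text) for p in SUSPICIOUS_PATTERNS)

# What the victim believes is a path to an "incident report":
payload = 'powershell -w hidden -c "..."' + " " * 60 + r"C:\Users\Public\report.pdf"
print(looks_like_filefix_payload(payload))                        # True
print(looks_like_filefix_payload(r"C:\Users\Public\report.pdf"))  # False
```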

Acronis has emphasized the importance of user awareness and training, particularly educating people on the risks of copying commands or paths from suspicious websites into system interfaces. Recognizing common phishing red flags—such as urgent language, unexpected warnings, and suspicious links—remains critical. Security experts recommend that users verify account issues by directly visiting official websites rather than following embedded links in unsolicited emails. 

Additional protective measures include enabling two-factor authentication (2FA), which provides an extra security layer even if login credentials are stolen, and ensuring that devices are protected with up-to-date antivirus solutions. Advanced features such as VPNs and hardened browsers can also reduce exposure to such threats. 

Cybersecurity researchers warn that both FileFix and its predecessor ClickFix are likely to remain popular among attackers until awareness becomes widespread. As these techniques evolve, sharing knowledge within organizations and communities is seen as a key defense. At the same time, maintaining strong cyber hygiene and securing personal devices are essential to reduce the risk of falling victim to these increasingly sophisticated phishing campaigns.

Meta Overhauls AI Chatbot Safeguards for Teenagers

 

Meta has announced new artificial intelligence safeguards to protect teenagers following a damaging Reuters investigation that exposed internal company policies allowing inappropriate chatbot interactions with minors. The social media giant is now training its AI systems to avoid flirtatious conversations and discussions about self-harm or suicide with teenage users. 

Background investigation 

The controversy began when Reuters uncovered an internal 200-page Meta document titled "GenAI: Content Risk Standards" that permitted chatbots to engage in "romantic or sensual" conversations with children as young as 13. 

The document contained disturbing examples of acceptable AI responses, including "Your youthful form is a work of art" and "Every inch of you is a masterpiece – a treasure I cherish deeply". These guidelines had been approved by Meta's legal, public policy, and engineering teams, including the company's chief ethicist. 

Immediate safety measures 

Meta spokesperson Andy Stone announced that the company is implementing immediate interim measures while developing more comprehensive long-term solutions for teen AI safety. The new safeguards include training chatbots to avoid discussing self-harm, suicide, disordered eating, and potentially inappropriate romantic topics with teenage users. Meta is also temporarily limiting teen access to certain AI characters that could hold inappropriate conversations.

Some of Meta's user-created AI characters include sexualized chatbots such as "Step Mom" and "Russian Girl," which will now be restricted for teen users. Instead, teenagers will only have access to AI characters that promote education and creativity. The company acknowledged that these policy changes represent a reversal from previous positions where it deemed such conversations appropriate. 

Government response and investigation

The revelations sparked swift political backlash. Senator Josh Hawley launched an official investigation into Meta's AI policies, demanding documentation about the guidelines that enabled inappropriate chatbot interactions with minors. A coalition of 44 state attorneys general wrote to AI companies including Meta, expressing they were "uniformly revolted by this apparent disregard for children's emotional well-being". 

Senator Edward Markey has urged Meta to completely prevent minors from accessing AI chatbots on its platforms, citing concerns that Meta incorporates teenagers' conversations into its AI training process. The Federal Trade Commission is now preparing to scrutinize the mental health risks of AI chatbots to children and will demand internal documents from major tech firms including Meta. 

Implementation timeline 

Meta confirmed that the document was "inconsistent with its broader policies" and has since removed sections allowing chatbots to flirt or engage in romantic roleplay with minors. Company spokesperson Stephanie Otway acknowledged these were mistakes, stating the updates are "already in progress" and the company will "continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI".

The controversy highlights broader concerns about AI chatbot safety for vulnerable users, particularly as large companies integrate these tools directly into widely-used platforms where the vast majority of young people will encounter them.

Meta.ai Privacy Lapse Exposes User Chats in Public Feed

 

Meta’s new AI-driven chatbot platform, Meta.ai, launched recently with much fanfare, offering features like text and voice chats, image generation, and video restyling. Designed to rival platforms like ChatGPT, the app also includes a Discover feed, a space intended to showcase public content generated by users. However, what Meta failed to communicate effectively was that many users were unintentionally sharing their private conversations in this feed—sometimes with extremely sensitive content attached. 

In May, journalists flagged the issue when they discovered public chats revealing deeply personal user concerns—ranging from financial issues and health anxieties to legal troubles. These weren’t obscure posts either; they appeared in a publicly accessible area of the app, often containing identifying information. Conversations included users seeking help with medical diagnoses, children talking about personal experiences, and even incarcerated individuals discussing legal strategies—none of whom appeared to realize their data was visible to others. 

Despite some recent tweaks to the app’s sharing settings, disturbing content still appears on the Discover feed. Users unknowingly uploaded images and video clips, sometimes including faces, alongside alarming or bizarre prompts. One especially troubling instance featured a photo of a child at school, accompanied by a prompt instructing the AI to “make him cry.” Such posts reflect not only poor design choices but also raise ethical questions about the purpose and moderation of the Discover feed itself. 

The issue evokes memories of other infamous data exposure incidents, such as AOL’s release of anonymized user searches in 2006, which provided unsettling insight into private thoughts and behaviors. While social media platforms are inherently public, users generally view AI chat interactions as private, akin to using a search engine. Meta.ai blurred that boundary—perhaps unintentionally, but with serious consequences. Many users turned to Meta.ai seeking support, companionship, or simple productivity help. Some asked for help with job listings or obituary writing, while others vented emotional distress or sought comfort during panic attacks. 

In some cases, users left chats expressing gratitude—believing the bot had helped. But a growing number of conversations end in frustration or embarrassment when users realize the bot cannot deliver on its promises or that their content was shared publicly. These incidents highlight a disconnect between how users engage with AI tools and how companies design them. Meta’s ambition to merge AI capabilities with social interaction seems to have ignored the emotional and psychological expectations users bring to private-sounding features. 

For those using Meta.ai as a digital confidant, the lack of clarity around privacy settings has turned an experiment in convenience into a public misstep. As AI systems become more integrated into daily life, companies must rethink how they handle user data—especially when users assume privacy. Meta.ai’s rocky launch serves as a cautionary tale about transparency, trust, and design in the age of generative AI.

Meta Introduces Advanced AI Tools to Help Businesses Create Smarter Ads


Meta has rolled out a fresh set of AI-powered tools aimed at helping advertisers design more engaging and personalized promotional content. These new features include the ability to turn images into short videos, brand-focused image generation, AI-powered chat assistants, and tools that enhance shopping experiences within ads.

One of the standout additions is Meta’s video creation feature, which allows businesses to transform multiple images into animated video clips. These clips can include music and text, making it easier for advertisers to produce dynamic visual content without needing video editing skills. Because the videos are short, they’re less likely to appear distorted or lose quality, a common issue in longer AI-generated videos.

Currently, this feature is being tested with select business partners.

Another tool in development is “Video Highlights,” which uses AI to identify the most important parts of a video. Viewers will be able to jump directly to these key scenes, guided by short phrases and image previews chosen by the system. This can help businesses convey their product value more clearly and keep viewers engaged.

Meta is also enhancing its AI image creation tools. Advertisers will now be able to insert their logos and brand colors directly into the images generated by AI. This ensures that their brand identity stays consistent across all marketing content. Additionally, AI-generated ad text can now reflect the personality or style of the brand, offering a more customized tone in promotions.

Another major update is the introduction of “Business AIs”, specialized chat assistants embedded within ads. These bots are designed to answer common customer questions about a product or service. Available in both text and voice formats, these virtual assistants aim to improve customer interaction by addressing queries instantly and guiding users toward making a purchase.

Meta is also experimenting with new features like clickable call-to-action (CTA) stickers for Stories and Reels ads, and virtual try-on tools that use AI to display clothing on digital models of various body types.

These developments are part of Meta’s broader push to make advertising more efficient through automation. The company’s Advantage+ ad system is already showing results, with Meta reporting a 22% average increase in return on ad spend (ROAS) for brands using this approach. Advantage+ uses AI to analyze user behavior, optimize ad formats, and identify potential customers based on real-time data.

While AI is unlikely to replace human creativity entirely, these tools can simplify the ad creation process and help brands connect with their audiences more effectively. 

Beware of Pig Butchering Scams That Steal Your Money


Pig butchering, a term we usually hear in the meat market, has sadly also become a devastating form of cybercrime, one that can wipe out victims' finances entirely.

Pig Butchering is a “form of investment fraud in the crypto space where scammers build relationships with targets through social engineering and then lure them to invest crypto in fake opportunities or platforms created by the scammer,” according to The Department of Financial Protection & Innovation. 

Pig butchering has squeezed billions of dollars from victims globally. The Cambodia-based Huione Group stole over $4 billion from August 2021 to January 2025, the New York Post reported.

How to stay safe from pig butchering?

Individuals should watch out for certain warning signs to avoid getting caught in these schemes. Scammers often target seniors and individuals who are less aware of cybercrime. The National Council on Aging cautions that such scams begin with messages from scammers pretending to be someone else. Never respond or send money to strangers who text you online, even if the story sounds compelling. Scammers rely on earning your trust, and a sob story is one easy way for them to do it.

Another red flag is receiving SMS or social media messages that push you onto other platforms like WeChat or Telegram, which are less regulated. Scammers also convince targets to invest their money, promising to return it with big profits. In one incident, a scammer even asked the victim to “go to a loan shark” to get the money.

Stopping scammers

Last year, Meta blocked over 2 million accounts promoting crypto investment scams such as pig butchering. Businesses have stepped up efforts to combat the issue, but the problem very much persists. A major step is raising awareness by broadcasting safety tips publicly so that individuals do not fall prey to such scams.

Platforms have now started showing warnings in Instagram DMs and Facebook Messenger about “potentially suspicious interactions or cold outreach from people you don’t know”, which is a good initiative. Banks have also begun tipping off customers about the dangers of scams when they send money online.

Want to Leave Facebook? Do this.


Confused about leaving Facebook?

Many people are changing their social media habits and opting out of services. Facebook has witnessed a large exodus of users after the announcement in March that Meta was terminating independent fact-checking on its platform. Fact-checking has been replaced with community notes, which let users add context to potentially false or misleading posts.

Users with years of photos and posts on Facebook are often unsure how to collect their data before removing their accounts. If that sounds like you, this post will help you delete Facebook permanently while taking all your information with you on the way out.

How to remove Facebook?

If you do not want to be on Facebook anymore, deleting your account is the only way to completely remove yourself from the platform. If you are not sure, deactivating your account lets you take a break from Facebook without deleting it.

Make sure to remove third-party Facebook logins before deleting your account. 

How to leave third-party apps?

Third-party apps like DoorDash and Spotify allow you to log in using your Facebook account. This saves you from remembering another password, but if you are planning on deleting Facebook, you need to update your login settings first, because once your account is gone there will be no Facebook account left to log in through.

Fortunately, there is a simple way to find which sites and applications are connected to your Facebook account and disconnect them before removing it. Once you disconnect those websites and applications from Facebook, you will need to adjust how you log in to them.

For each application or website, set a new password or passkey, or log in via a single sign-on option such as Google.

How is deactivating different from deleting a Facebook account?

If you want to stay away from Facebook, you have two choices: delete your account permanently, or disable it temporarily by deactivating it.

WhatsApp Launches First Dedicated iPad App with Full Multitasking and Calling Features

 

After years of anticipation, WhatsApp has finally rolled out a dedicated iPad app, allowing users to enjoy the platform’s messaging capabilities natively on Apple’s tablet. Available now for download via the App Store, this new version is built to take advantage of iPadOS’s multitasking tools such as Stage Manager, Split View, and Slide Over, marking a major step forward in cross-device compatibility for the platform. 

Previously, iPad users had to rely on WhatsApp Web or third-party solutions to access their chats on the tablet. These alternatives lacked several core functionalities and offered limited support for features like voice and video calls. With this release, users can now sync messages across devices, initiate calls, and send media from their iPad with the same ease and security offered on the iPhone app. 

In its official blog post, WhatsApp highlighted how the new app enhances productivity and communication. Users can, for instance, participate in group calls while researching online or send messages during video meetings — all within the multitasking-friendly iPad interface. The app also supports accessories like Apple’s Magic Keyboard and Apple Pencil, further streamlining the messaging experience. The absence of an iPad-specific version until now had often puzzled users, especially given WhatsApp’s massive global user base and Meta’s (formerly Facebook) ownership since 2014. 

Although the iPhone version has long dominated mobile messaging, WhatsApp never clarified why a tablet version wasn’t prioritized — despite the iPad being one of the most popular tablets worldwide. This launch now allows users to take full advantage of WhatsApp’s ecosystem on a larger screen without needing workarounds. Unlike WhatsApp Web, the new native app can access the device’s cameras and offer a richer interface for media sharing and video calls. 

With this, WhatsApp fills a major gap in its product offering and joins competitors like Telegram, which has long offered a native iPad experience. Interestingly, WhatsApp’s tweet teasing the launch included a playful emoji in response to a user request, generating buzz before the official announcement. In contrast, Telegram jokingly responded with a tweet poking fun at the delayed release.

With over 3 billion active users globally — including more than 500 million in India — WhatsApp’s move to embrace the iPad platform marks a significant upgrade in its commitment to universal accessibility and user experience.

“Meta Mirage” Phishing Campaign Poses Global Cybersecurity Threat to Businesses

 

A sophisticated phishing campaign named Meta Mirage is targeting companies using Meta’s Business Suite, according to a new report by cybersecurity experts at CTM360. This global threat is specifically engineered to compromise high-value accounts—including those running paid ads and managing brand profiles.

Researchers discovered that the attackers craft convincing fake communications impersonating official Meta messages, deceiving users into revealing sensitive login information such as passwords and one-time passcodes (OTP).

The scale of the campaign is substantial. Over 14,000 malicious URLs were detected, and alarmingly, nearly 78% of these were not flagged or blocked by browsers when the report was released.

What makes Meta Mirage particularly deceptive is the use of reputable cloud hosting services—like GitHub, Firebase, and Vercel—to host counterfeit login pages. “This mirrors Microsoft’s recent findings on how trusted platforms are being exploited to breach Kubernetes environments,” the researchers noted, highlighting a broader trend in cloud abuse.
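A simple defensive check follows from this finding: before entering credentials, verify that the page's host is genuinely an official Meta domain rather than a lookalike on a generic cloud platform. The sketch below illustrates the idea; the allowlist is an assumption for demonstration, not an authoritative list of Meta domains.

```python
from urllib.parse import urlparse

# Assumed allowlist for the sketch; a real deployment would maintain a
# vetted, complete list of official domains.
OFFICIAL_DOMAINS = {"facebook.com", "meta.com", "instagram.com"}

def is_official_meta_domain(url: str) -> bool:
    """True only if the host is an official domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

for url in ("https://business.facebook.com/settings",      # legitimate
            "https://meta-appeal-center.web.app/login",    # Firebase lookalike
            "https://meta-business-review.vercel.app"):    # Vercel lookalike
    print(url, "->", is_official_meta_domain(url))
```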

Victims receive realistic alerts through email and direct messages. These notifications often mention policy violations, account restrictions, or verification requests, crafted to appear urgent and official. This strategy is similar to the recent Google Sites phishing wave, which used seemingly authentic web pages to mislead users.

CTM360 identified two primary techniques being used:
  • Credential Theft: Victims unknowingly submit passwords and OTPs to lookalike websites. Fake error prompts are displayed to make them re-enter their information, ensuring attackers get accurate credentials.
  • Cookie Theft: Attackers extract browser cookies, allowing persistent access to compromised accounts—even without login credentials.
Compromised business accounts are then weaponized for malicious ad campaigns. “It’s a playbook straight from campaigns like PlayPraetor, where hijacked social media profiles were used to spread fraudulent ads,” the report noted.

The phishing operation is systematic. Attackers begin with non-threatening messages, then escalate the tone over time—moving from mild policy reminders to aggressive warnings about permanent account deletion. This psychological pressure prompts users to respond quickly without verifying the source.

CTM360 advises businesses to:
  • Manage social media accounts only from official or secure devices
  • Use business-specific email addresses
  • Activate Two-Factor Authentication (2FA)
  • Periodically audit security settings and login history
  • Train team members to identify and report suspicious activity
This alarming phishing scheme highlights the need for constant vigilance, cybersecurity hygiene, and proactive measures to secure digital business assets.

WhatsApp Reveals "Private Processing" Feature for Cloud Based AI Features


WhatsApp claims even it cannot read the data being processed

WhatsApp has introduced ‘Private Processing,’ a new technology that lets users access advanced AI features by offloading tasks to privacy-preserving cloud servers, without exposing their chats to Meta. Meta claims even it cannot see the messages while processing them. The system relies on encrypted cloud infrastructure and hardware-based isolation so that requests and data remain invisible to everyone, including Meta, during processing.

About private processing

For those who decide to use Private Processing, the system first performs an anonymous verification step via the user’s WhatsApp client to confirm that the request comes from a legitimate user.

Meta claims this system keeps WhatsApp’s end-to-end encryption intact while offering AI features in chats. However, the feature currently applies only to select use cases and excludes Meta’s broader AI deployments, including those used in India’s public service systems.

Private Processing employs Trusted Execution Environments (TEEs), secure, isolated virtual machines running on cloud infrastructure that keep AI requests hidden even from the host.

About the system

  • Encrypts user requests from the device to the TEE using end-to-end encryption
  • Restricts storage or logging of messages post-processing
  • Publishes logs and binary images for external verification and audits
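
To make the flow concrete, here is a toy sketch of the encrypt-to-enclave idea: the client encrypts its request so that only the TEE can unwrap it, and anything in between sees only ciphertext. This is a simplified illustration of the general pattern, not Meta's actual protocol, which adds OHTTP routing, anonymous credentials, and remote attestation.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(shared: bytes) -> bytes:
    """Derive a symmetric key from an X25519 shared secret."""
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"private-processing-demo").derive(shared)

# TEE side: a keypair whose private half never leaves the enclave.
tee_private = X25519PrivateKey.generate()

# Client side: encrypt the AI request to the TEE's public key. A relay
# forwarding this ciphertext sees neither the plaintext nor (with anonymous
# credentials, omitted here) who sent it.
client_private = X25519PrivateKey.generate()
key = derive_key(client_private.exchange(tee_private.public_key()))
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"Summarize this chat thread", None)

# TEE side: recover and process the request, then keep nothing.
tee_key = derive_key(tee_private.exchange(client_private.public_key()))
print(AESGCM(tee_key).decrypt(nonce, ciphertext, None).decode())
```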

WhatsApp builds AI through wider privacy concerns

According to Meta, Private Processing is a response to privacy questions around AI and messaging. WhatsApp joins companies like Apple that introduced confidential AI computing models over the past year. “To validate our implementation of these and other security principles, independent security researchers will be able to continuously verify our privacy and security architecture and its integrity,” Meta said.

It is similar to Apple’s Private Cloud Compute in terms of public transparency and stateless processing. Currently, however, WhatsApp is using the approach only for select features. Apple, on the other hand, has announced plans to extend this model across all its AI tools, whereas WhatsApp has made no such commitment yet.

WhatsApp says, “Private Processing uses anonymous credentials to authenticate users over OHTTP. This way, Private Processing can authenticate users to the Private Processing system but remains unable to identify them.”

Investigating the Role of DarkStorm Team in the Recent X Outage

Elon Musk’s social media platform X, formerly known as Twitter, was severely disrupted on Monday by a widespread cyberattack that caused multiple service outages. Data from outage monitoring service Downdetector indicates that the platform experienced at least three significant disruptions throughout the day, affecting millions of users. At the peak, over 41,000 users across Europe, North America, the Middle East, and Asia reported outages.
 
The most common technical difficulties users encountered were prolonged connection failures and an inability to load the platform. A preliminary assessment suggests the disruptions may have been caused by a coordinated, large-scale cyberattack. While cybersecurity experts are still investigating the extent and origin of the incident, they point to a worrying trend of organised cyberattacks targeting high-profile digital infrastructure. The incident has raised a number of concerns about X’s security framework, especially given the platform’s prominent role in global communications and information dissemination. Authorities and independent cybersecurity analysts continue to analyze data logs and attack signatures to identify the perpetrators and understand the attack methodology. A pro-Palestinian hacktivist collective known as the Dark Storm Team has emerged as an important player in the cyberwarfare landscape, orchestrating targeted cyberattacks against Israeli entities and organizations perceived as supportive of Israel.
 
Motivated by a combination of political ideology and financial gain, the group is known for aggressive tactics, including Distributed Denial-of-Service (DDoS) attacks, database intrusions, and other disruptive operations against government agencies, public infrastructure, and organizations perceived to be aligned with Israeli interests.
 
The group is more than an ideological movement; it is also a cybercrime operation that advertises itself openly on encrypted messaging platforms like Telegram, reportedly selling coordinated DDoS attacks, data breaches, and hacking tools to a range of clients. Its operations are sophisticated and resourceful, striking both vulnerable and well-protected targets. Recent activity suggests the group has escalated in both scale and ambition. In February 2024, the Dark Storm Team warned that a cyberattack was imminent, threatening NATO member states, Israel, and countries providing support to Israel. The warning was followed by documented incidents that disrupted critical government and digital infrastructure, reinforcing the group’s capability to act on its threats.
 
According to intelligence reports, Dark Storm has also built ties with pro-Russian cyber collectives, broadening the scope of its operations and giving it access to advanced hacking tools. Beyond extending the group’s technical reach, this collaboration signals an alignment of geopolitical interests.

Among the most prominent incidents attributed to the group is the October 2024 DDoS attack against John F. Kennedy International Airport’s online systems. The group justified the attack by citing the airport’s perceived support for Israeli policies, demonstrating its willingness to target essential infrastructure as part of its wider agenda. Analysts say Dark Storm’s combination of ideological motivation and profit-driven cybercrime makes it a particularly potent threat in today’s cyber environment.
 
An investigation is currently underway to determine whether the group was involved in the recent disruptions of platform X. The DarkStorm Team employs a range of sophisticated cyber tactics that combine ideological activism with financially motivated cybercrime. Its main methods include DDoS attacks, ransomware campaigns, and leaks of sensitive information, activities designed both to disrupt targeted operations and to advance specific political narratives while generating illicit revenue. For internal coordination, recruitment, and operational updates, the group relies heavily on encrypted communication channels, particularly Telegram. These secure platforms give it a degree of anonymity that complicates efforts by law enforcement and cybersecurity firms to track and dismantle its networks.

Alongside its direct cyberattacks, the group actively monetizes stolen data, selling compromised databases, personal information, and hacking tools on the darknet. Although DarkStorm presents itself as a grassroots hacktivist organization, cybersecurity analysts increasingly suspect it may have covert support from nation-state actors, particularly Russia. This suspicion is driven by the complexity and scale of its operations, the strategic choice of its targets, and the technical sophistication evident in its attacks. Such patterns of coordinated, well-resourced activity raise concerns that the group may be serving as a proxy in broader geopolitical conflicts.
 
The rising threat posed by groups like DarkStorm shows that the cyber warfare landscape is evolving, with ideological, financial, and geopolitical motivations increasingly intertwined, making it significantly harder for targeted organisations and governments to attribute attacks and defend themselves. Elon Musk’s growing involvement in geopolitical affairs adds further complexity to the narrative around the recent cyberattack on X. Since Russian troops invaded Ukraine in February 2022, Musk has been criticized for publicly mocking Ukrainian President Volodymyr Zelensky and for making remarks considered dismissive of Ukraine’s plight. Musk also heads the Department of Government Efficiency (DOGE), an entity created under the Trump administration that has been cutting U.S. federal employment at an unprecedented pace since Trump returned to office. The administration’s foreign policy stance has shifted markedly away from longstanding US support for Ukraine and toward a more conciliatory posture with Russia. Musk’s geopolitical entanglements also extend beyond his role at X.
 
Through his aerospace company SpaceX, Musk operates the Starlink satellite internet network, which has kept a significant portion of Ukraine’s digital communications running during the war. These intersecting spheres of influence, spanning national security, communication infrastructure, and social media, have drawn heightened scrutiny, particularly as X remains a central node in global politics. Cybersecurity firms delving into the technical aspects of the Distributed Denial-of-Service (DDoS) attack report little evidence of Ukrainian involvement.
 
A senior analyst at a leading cybersecurity firm, speaking on condition of anonymity due to restrictions on discussing X publicly, reported that no significant traffic originated from Ukraine, which was absent from the top 20 sources of malicious IPs linked to the attack. Given the widespread practice of IP spoofing and the global distribution of compromised devices, Ukrainian IP addresses rarely appear in such data anyway; still, their absence helps direct attention toward more likely sources, such as organized cybercrime groups and state-linked organizations.
 
The incident reflects the fragility of digital infrastructure in a politically polarized world where geopolitical tensions, corporate influence, and cyberwarfare converge. As investigations continue, experts expect the DarkStorm Team’s role, and the broader implications for global cybersecurity policy, to remain a source of controversy.

WhatsApp Windows Vulnerability CVE-2025-30401 Could Let Hackers Deliver Malware via Fake Images

 

Meta has issued a high-priority warning about a critical vulnerability in the Windows version of WhatsApp, tracked as CVE-2025-30401, which could be exploited to deliver malware under the guise of image files. This flaw affects WhatsApp versions prior to 2.2450.6 and could expose users to phishing, ransomware, or remote code execution attacks. The issue lies in how WhatsApp handles file attachments on Windows. 

The platform displays files based on their MIME type but opens them according to the true file extension. This inconsistency creates a dangerous opportunity for hackers: they can disguise executable files as harmless-looking images like .jpeg files. When a user manually opens the file within WhatsApp, they could unknowingly launch a .exe file containing malicious code. Meta’s disclosure arrives just as new data from online bank Revolut reveals that WhatsApp was the source of one in five online scams in the UK during 2024, with scam attempts growing by 67% between June and December. 
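The mismatch at the heart of the flaw can be illustrated with a small sketch that compares the type implied by a file's name with the type implied by its leading bytes. The signature table is a tiny illustrative subset, and this is a general defensive idea, not WhatsApp's actual fix.

```python
import mimetypes
from pathlib import Path

# Minimal magic-byte signatures (illustrative subset, not exhaustive).
MAGIC = {
    b"\xff\xd8\xff": "image/jpeg",
    b"\x89PNG": "image/png",
    b"MZ": "application/x-msdownload",   # Windows executable
}

def sniff_mime(path: Path) -> str | None:
    """Guess the real type from the file's leading bytes."""
    head = path.read_bytes()[:8]
    for sig, mime in MAGIC.items():
        if head.startswith(sig):
            return mime
    return None

def name_matches_content(path: Path) -> bool:
    """Check that the type implied by the name agrees with the bytes."""
    declared, _ = mimetypes.guess_type(path.name)
    return declared is not None and declared == sniff_mime(path)

# Demo: an executable disguised with a .jpeg name fails the check.
evil = Path("report.jpeg")
evil.write_bytes(b"MZ\x90\x00" + b"\x00" * 60)   # PE header magic bytes
print(name_matches_content(evil))                # False
```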

Cybersecurity experts warn that WhatsApp’s broad reach and user familiarity make it a prime target for exploitation. Adam Pilton, senior cybersecurity consultant at CyberSmart, cautioned that this vulnerability is especially dangerous in group chats. “If a cybercriminal shares the malicious file in a trusted group or through a mutual contact, anyone in that group might unknowingly execute malware just by opening what looks like a regular image,” he explained. 

Martin Kraemer, a security awareness advocate at KnowBe4, highlighted the platform’s deep integration into daily routines—from casual chats to job applications. “WhatsApp’s widespread use means users have developed a level of trust and automation that attackers exploit. This vulnerability must not be underestimated,” Kraemer said. Until users update to the latest version, experts urge WhatsApp users to treat the app like email—avoid opening unexpected attachments, especially from unknown senders or new contacts. 

The good news is that Meta has already issued a fix, and updating the app resolves the vulnerability. Pilton emphasized the importance of patch management, noting, “Cybercriminals will always seek to exploit software flaws, and providers will keep issuing patches. Keeping your software updated is the simplest and most effective protection.” For now, users should update WhatsApp for Windows immediately to mitigate the risk posed by CVE-2025-30401 and remain cautious with all incoming files.

Meta Launches New Llama 4 AI Models

Meta has introduced a fresh set of artificial intelligence models under the name Llama 4. This release includes three new versions: Scout, Maverick, and Behemoth. Each one has been designed to better understand and respond to a mix of text, images, and videos.

The reason behind this launch seems to be rising competition, especially from Chinese companies like DeepSeek. Their recent models have been doing so well that Meta rushed to improve its own tools to keep up.


Where You Can Access Llama 4

The Scout and Maverick models are now available online through Meta’s official site and other developer platforms like Hugging Face. However, Behemoth is still in the testing phase and hasn’t been released yet.

Meta has already added Llama 4 to its own digital assistant, which is built into apps like WhatsApp, Instagram, and Messenger in several countries. However, some special features are only available in the U.S. and only in English for now.


Who Can and Can’t Use It

Meta has placed some limits on who can access Llama 4. People and companies based in the European Union are not allowed to use or share these models, likely due to strict data rules in that region. Also, very large companies, those with over 700 million monthly users, must first get permission from Meta.


Smarter Design, Better Performance

Llama 4 is Meta’s first release using a new design method called "Mixture of Experts." This means the model can divide big tasks into smaller parts and assign each part to a different “expert” inside the system. This makes it faster and more efficient.

For example, the Maverick model has 400 billion total "parameters" (the internal values a model learns during training), but it only uses a small portion of them at a time. Scout, the lighter model, is great for reading long documents or big sections of code and can run on a single high-powered computer chip. Maverick needs a more advanced system to function properly.
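
For readers curious how "Mixture of Experts" works mechanically, below is a toy sketch of the routing idea in PyTorch: a small router scores the experts for each token and only the top-scoring ones run, so most parameters sit idle on any given input. The sizes and structure are illustrative, not Llama 4's real configuration.

```python
import torch
import torch.nn as nn

class ToyMoE(nn.Module):
    """Minimal mixture-of-experts layer: route each token to top_k experts."""
    def __init__(self, dim=64, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                         # x: (tokens, dim)
        weights = self.router(x).softmax(dim=-1)  # routing probabilities
        top_w, top_i = weights.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):            # run only the chosen experts
            for e, expert in enumerate(self.experts):
                mask = top_i[:, slot] == e
                if mask.any():
                    out[mask] += top_w[mask, slot, None] * expert(x[mask])
        return out

print(ToyMoE()(torch.randn(5, 64)).shape)  # torch.Size([5, 64])
```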


Behemoth: The Most Advanced One Yet

Behemoth, which is still being developed, will be the most powerful version. It will have a huge amount of learning data and is expected to perform better than many leading models in science and math-based tasks. But it will also need very strong computing systems to work.

One big change in this new version is how it handles sensitive topics. Previous models often avoided difficult questions. Now, Llama 4 is trained to give clearer, fairer answers on political or controversial issues. Meta says the goal is to make the AI more helpful to users, no matter what their views are.

Meta's AI Bots on WhatsApp Spark Privacy and Usability Concerns

WhatsApp, the world's most widely used messaging app, is celebrated for its simplicity, privacy, and user-friendly design. However, upcoming changes could drastically reshape the app. Meta, WhatsApp's parent company, is testing a new feature: AI bots. While some view this as a groundbreaking innovation, others question its necessity and raise concerns about privacy, clutter, and added complexity. 
 
Meta is introducing a new "AI" tab in WhatsApp, currently in beta testing for Android users. This feature will allow users to interact with AI-powered chatbots on various topics. These bots include both third-party models and Meta’s in-house virtual assistant, "Meta AI." To make room for this update, the existing "Communities" tab will merge with the "Chats" section, with the AI tab taking its place. Although Meta presents this as an upgrade, many users feel it disrupts WhatsApp's clean and straightforward design. 
 
Meta’s strategy seems focused on expanding its AI ecosystem across its platforms—Instagram, Facebook, and now WhatsApp. By introducing AI bots, Meta aims to boost user engagement and explore new revenue opportunities. However, this shift risks undermining WhatsApp’s core values of simplicity and secure communication. The addition of AI could clutter the interface and complicate user experience. 

Key Concerns Among Users 
 
1. Loss of Simplicity: WhatsApp’s minimalistic design has been central to its popularity. Adding AI features could make the app feel overloaded and detract from its primary function as a messaging platform. 
 
2. Privacy and Security Risks: Known for its end-to-end encryption, WhatsApp prioritizes user privacy. Introducing AI bots raises questions about data security and how Meta will prevent misuse of these bots. 
 
3. Unwanted Features: Many users believe AI bots are unnecessary for a messaging app. Unlike optional AI tools on platforms like ChatGPT or Google Gemini, Meta's integration feels forced.
 
4. Cluttered Interface: Replacing the "Communities" tab with the AI tab consumes valuable space, potentially disrupting how users navigate the app. 

The Bigger Picture 

Meta may eventually allow users to create custom AI bots within WhatsApp, a feature already available on Instagram. However, this could introduce significant risks. Poorly moderated bots might spread harmful or misleading content, threatening user trust and safety. 

WhatsApp users value its security and simplicity. While some might welcome AI bots, most prefer such features to remain optional and unobtrusive. Since the AI bot feature is still in testing, it’s unclear whether Meta will implement it globally. Many hope WhatsApp will stay true to its core strengths—simplicity, privacy, and reliability—rather than adopting features that could alienate its loyal user base. Will this AI integration enhance the platform or compromise its identity? Only time will tell.

Meta Removes Independent Fact Checkers, Replaces With "Community Notes"


Meta to remove fact-checkers

Meta is dumping independent fact-checkers on Instagram and Facebook, similar to what X (formerly Twitter) did, replacing them with “community notes”, in which users collectively add context to posts and judge their accuracy.

In a video posted on Tuesday, Mark Zuckerberg said third-party moderators were "too politically biased" and it was "time to get back to our roots around free expression".

Tech executives are trying to build better relations with incoming US President Donald Trump, who takes the oath of office this month, and the new move is a step in that direction.

Republican Party and Meta

The Republican Party and Trump have called out Meta over its fact-checking policies, claiming it censors right-wing voices on its platform.

After the new policy was announced, Trump said in a news conference that he was pleased with Meta’s decision, remarking that the company had "come a long way".

Online anti-hate speech activists expressed disappointment with the shift, claiming it was motivated by a desire to align with Trump.

“Zuckerberg's announcement is a blatant attempt to cozy up to the incoming Trump administration – with harmful implications. Claiming to avoid "censorship" is a political move to avoid taking responsibility for hate and disinformation that platforms encourage and facilitate,” said Ava Lee of Global Witness, an organization that sees its mission as holding big tech companies like Meta accountable.

Copying X

Meta’s current fact-checking program, introduced in 2016, sends posts that appear false or misleading to independent fact-checking organizations to assess their credibility.

Posts marked as misleading carry labels giving users more information and are ranked lower in viewers’ social media feeds. This system will now be replaced by community notes, starting in the US. Meta has no “immediate plans” to remove third-party fact-checkers in the EU or the UK.

The community notes model is copied from X, where it was introduced after Elon Musk bought Twitter.

It relies on people with opposing viewpoints agreeing on notes that add context or explanation to disputed posts.

In its announcement, Meta said: “We will allow more speech by lifting restrictions on some topics that are part of mainstream discourse and focusing our enforcement on illegal and high-severity violations. We will take a more personalized approach to political content, so that people who want to see more of it in their feeds can.”

Are You Using AI in Marketing? Here's How to Do It Responsibly

Artificial Intelligence (AI) has emerged as a transformative force, reshaping industries and delivering unprecedented value to businesses worldwide. From automating mundane tasks to offering predictive insights, AI has catalyzed innovation on a massive scale. However, its rapid adoption raises significant concerns about privacy, data ethics, and transparency, prompting urgent discussions on regulation. The need for robust frameworks has grown even more critical as AI technologies become deeply entrenched in everyday operations.

Data Use and the Push for Regulation

During the early development stages of AI, major tech players such as Meta and OpenAI often used public and private datasets without clear guidelines in place. This unregulated experimentation highlighted glaring gaps in data ethics, leading to calls for significant regulatory oversight. The absence of structured frameworks not only undermined public trust but also raised legal and ethical questions about the use of sensitive information.

Today, the regulatory landscape is evolving to address these issues. Europe has taken a pioneering role with the EU AI Act, which came into effect on August 1, 2024. This legislation classifies AI applications based on their level of risk and enforces stricter controls on higher-risk systems to ensure public safety and confidence. By categorizing AI into levels such as minimal, limited, and high risk, the Act provides a comprehensive framework for accountability. On the other hand, the United States is still in the early stages of federal discussions, though states like California and Colorado have enacted targeted laws emphasizing transparency and user privacy in AI applications.

Why Marketing Teams Should Stay Vigilant

AI’s impact on marketing is undeniable, with tools revolutionizing how teams create content, interact with customers, and analyze data. According to a survey, 93% of marketers using AI rely on it to accelerate content creation, optimize campaigns, and deliver personalized experiences. However, this reliance comes with challenges such as intellectual property infringement, algorithmic biases, and ethical dilemmas surrounding AI-generated material.

As regulatory frameworks mature, marketing professionals must align their practices with emerging compliance standards. Proactively adopting ethical AI usage not only mitigates risks but also prepares businesses for stricter regulations. Ethical practices can safeguard brand reputation, ensuring that marketing teams remain compliant and trusted by their audiences.

Best Practices for Responsible AI Use

  1. Maintain Human Oversight
    While AI can streamline workflows, it should not replace human intervention. Marketing teams must rigorously review AI-generated content to ensure originality, eliminate biases, and avoid plagiarism. This approach not only improves content quality but also aligns with ethical standards.
  2. Promote Transparency
    Transparency builds trust. Businesses should be open about their use of AI, particularly when collecting data or making automated decisions. Clear communication about AI processes fosters customer confidence and adheres to evolving legal requirements focused on explainability.
  3. Implement Ethical Data Practices
    Ensure that all data used for AI training complies with privacy laws and ethical guidelines. Avoid using data without proper consent and regularly audit datasets to prevent misuse or biases (a toy consent audit is sketched after this list).
  4. Educate Teams
    Equip employees with knowledge about AI technologies and the implications of their use. Training programs can help teams stay informed about regulatory changes and ethical considerations, promoting responsible practices across the organization.
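
As referenced in point 3, a consent audit can be as simple as filtering out records that lack an explicit opt-in before they ever reach a training pipeline. The sketch below uses an invented `consent` column purely for illustration.

```python
import pandas as pd

# Invented example data: a 'consent' flag marks explicit opt-in records.
df = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "consent": [True, False, True, None],   # None = consent never recorded
})

flagged = df[df["consent"] != True]   # rows lacking explicit consent
print(f"{len(flagged)} of {len(df)} records lack valid consent")

# Only explicitly consented records should feed AI training.
training_ready = df[df["consent"] == True]
```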

Preparing for the Future

AI regulation is not just a passing concern but a critical element in shaping its responsible use. By embracing transparency, accountability, and secure data practices, businesses can stay ahead of legal changes while fostering trust with customers and stakeholders. Adopting ethical AI practices ensures that organizations are future-proof, resilient, and prepared to navigate the complexities of the evolving regulatory landscape.

As AI continues to advance, the onus is on businesses to balance innovation with responsibility. Marketing teams, in particular, have an opportunity to demonstrate leadership by integrating AI in ways that enhance customer relationships while upholding ethical and legal standards. By doing so, organizations can not only thrive in an AI-driven world but also set an example for others to follow.

Meta Introduces AI Features For Ray-Ban Glasses in Europe

 

Meta has officially introduced certain AI functions for its Ray-Ban Meta augmented reality (AR) glasses in France, Italy, and Spain, marking a significant step in the company's expansion of its wearable technology across Europe. 

Starting earlier this week, customers in these countries can interact with Meta's AI assistant solely through their voice, allowing them to ask general questions and receive responses through the glasses. 

As part of Meta's larger initiative to make its AI assistant more widely available, this latest deployment covers French, Italian, and Spanish in addition to English. The announcement was made nearly a year after the Ray-Ban Meta spectacles were first released in September 2023.

In a blog post outlining the update, Meta stated, “We are thrilled to introduce Meta AI and its cutting-edge features to regions of the EU, and we look forward to expanding to more European countries soon.” However, not all of the features accessible in other regions will be included in the European rollout. 

While customers in the United States, Canada, and Australia benefit from multimodal AI capabilities on their Ray-Ban Meta glasses, such as the ability to gain information about objects in view of the glasses' camera, these functions will not be included in the European update at present.

For example, users in the United States can ask their glasses to identify landmarks in their surroundings, such as "Tell me more about this landmark," but these functionalities are not available in Europe due to ongoing regulatory issues. 

Meta has stated its commitment to dealing with Europe's complicated legal environment, specifically the EU's AI Act and the General Data Protection Regulation (GDPR). The company indicated that it is aiming to offer multimodal capabilities to more countries in the future, but there is no set date. 

While the rollout in France, Italy, and Spain marks a significant milestone, Meta's journey in the European market is far from done. As the firm navigates the regulatory landscape and expands its AI solutions, users in Europe can expect more updates and new features for their Ray-Ban Meta glasses in the coming months. 

As Meta continues to grow its devices and expand its AI capabilities, all eyes will be on how the firm adjusts to Europe's legal system and how this will impact the future of AR technology worldwide.

Supreme Court Weighs Shareholder Lawsuit Against Meta Over Data Disclosure

 

The U.S. Supreme Court is deliberating on a high-stakes shareholder lawsuit involving Meta (formerly Facebook), in which investors claim the tech giant misled them by omitting crucial data breach information from its risk disclosures. The case, Facebook v. Amalgamated Bank, centers on the Cambridge Analytica scandal, in which a British firm accessed data on millions of users to influence U.S. elections. While Meta had warned of potential misuse of data in its annual filings, it did not disclose that a significant breach had already occurred, potentially impacting investors’ trust. During oral arguments, liberal justices voiced concerns over the omission. 

Justice Elena Kagan likened the situation to a company that warns about fire risks but withholds that a recent fire already caused severe damage. Such a lack of disclosure, she argued, could be misleading to “reasonable investors.” The plaintiffs’ attorney, Kevin Russell, echoed this sentiment, asserting that Facebook’s omission misrepresented the severity of risks investors faced. On the other hand, conservative justices expressed concerns about expanding disclosure requirements. Chief Justice John Roberts questioned whether mandating disclosures of all past events might lead to over-disclosure, which could overwhelm investors with excessive details. Justice Brett Kavanaugh suggested the SEC, rather than the courts, might be better positioned to clarify standards for corporate disclosures. 

The Biden administration supports the plaintiffs, with Assistant Solicitor General Kevin Barber describing the case as an example of a misleading “half-truth.” Meta’s attorney, Kannon Shanmugam, argued that such broad requirements could dissuade companies from sharing forward-looking risk factors, fearing potential lawsuits for any past incident. Previously, the Ninth Circuit found Meta’s general warnings about potential risks misleading, given the company’s awareness of the Cambridge Analytica breach. The Court held that such omissions could harm investors by implying that no significant misuse had occurred. 

If the Supreme Court sides with the plaintiffs, companies could face new expectations to disclose known incidents, particularly those affecting data security or reputational risk. Such a ruling could reshape corporate disclosure practices, particularly for tech firms managing sensitive data. Alternatively, a ruling in favor of Meta may uphold the existing regulatory framework, granting companies more discretion in defining disclosure content. This decision will likely set a significant precedent for how companies balance transparency with investors and risk management.

Meta Struggles to Curb Misleading Ads on Hacked Facebook Pages

 

Meta, the parent company of Facebook, has come under fire for its failure to adequately prevent misleading political ads from being run on hacked Facebook pages. A recent investigation by ProPublica and the Tow Center for Digital Journalism uncovered that these ads, which exploited deepfake audio of prominent figures like Donald Trump and Joe Biden, falsely promised financial rewards. Users who clicked on these ads were redirected to forms requesting personal information, which was subsequently sold to telemarketers or used in fraudulent schemes. 

One of the key networks involved, operating under the name Patriot Democracy, hijacked more than 340 Facebook pages, including verified accounts like that of Fox News meteorologist Adam Klotz. The network used these pages to push over 160,000 deceptive ads related to elections and social issues, with a combined reach of nearly 900 million views across Facebook and Instagram. The investigation highlighted significant loopholes in Meta’s ad review and enforcement processes. While Meta did remove some of the ads, it failed to catch thousands of others, many with identical or similar content. Even after taking down problematic ads, the platform allowed the associated pages to remain active, enabling the perpetrators to continue their operations by spawning new pages and running more ads. 

Meta’s policies require ads related to elections or social issues to carry “paid for by” disclaimers, identifying the entities behind them. However, the investigation revealed that many of these disclaimers were misleading, listing nonexistent entities. This loophole allowed deceptive networks to continue exploiting users with minimal oversight. The company defended its actions, stating that it invests heavily in trust and safety, utilizing both human and automated systems to review and enforce policies. A Meta spokesperson acknowledged the investigation’s findings and emphasized ongoing efforts to combat scams, impersonation, and spam on the platform. 

However, critics argue that these measures are insufficient and inconsistent, allowing scammers to exploit systemic vulnerabilities repeatedly. The investigation also revealed that some users were duped into fraudulent schemes, such as signing up for unauthorized monthly credit card charges or being manipulated into changing their health insurance plans under false pretences. These scams not only caused financial losses but also left victims vulnerable to further exploitation. Experts have called for more stringent oversight and enforcement from Meta, urging the company to take a proactive stance in combating misinformation and fraud. 

The incident underscores the broader challenges social media platforms face in balancing open access with the need for rigorous content moderation, particularly in the context of politically sensitive content. In conclusion, Meta’s struggle to prevent deceptive ads highlights the complexities of managing a vast digital ecosystem where bad actors continually adapt their tactics. While Meta has made some strides, the persistence of such scams raises serious questions about the platform’s ability to protect its users effectively and maintain the integrity of its advertising systems.