
WhatsApp Windows Vulnerability CVE-2025-30401 Could Let Hackers Deliver Malware via Fake Images


Meta has issued a high-priority warning about a critical vulnerability in the Windows version of WhatsApp, tracked as CVE-2025-30401, which could be exploited to deliver malware under the guise of image files. This flaw affects WhatsApp versions prior to 2.2450.6 and could expose users to phishing, ransomware, or remote code execution attacks. The issue lies in how WhatsApp handles file attachments on Windows. 

The platform displays files based on their MIME type but opens them according to the true file extension. This inconsistency creates a dangerous opportunity for hackers: they can disguise executable files as harmless-looking images like .jpeg files. When a user manually opens the file within WhatsApp, they could unknowingly launch a .exe file containing malicious code. Meta’s disclosure arrives just as new data from online bank Revolut reveals that WhatsApp was the source of one in five online scams in the UK during 2024, with scam attempts growing by 67% between June and December. 
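To see why this mismatch is dangerous, consider how a client could validate attachments instead: decide how to handle a file from its actual leading bytes (its "magic number") rather than from the advertised MIME type or display name. The sketch below is a minimal, hypothetical Python illustration of that check, not WhatsApp's code; the signature table and helper names are invented for the example.

```python
# Hypothetical sketch: flag attachments whose advertised MIME type does not
# match what the file's leading bytes (magic numbers) say it really is.
MAGIC_SIGNATURES = {
    b"\xff\xd8\xff": "image/jpeg",            # JPEG
    b"\x89PNG\r\n\x1a\n": "image/png",        # PNG
    b"MZ": "application/x-msdownload",        # Windows PE executable (.exe)
}

def sniff_mime(data: bytes) -> str:
    """Infer a MIME type from the file contents, ignoring its name."""
    for magic, mime in MAGIC_SIGNATURES.items():
        if data.startswith(magic):
            return mime
    return "application/octet-stream"

def is_spoofed(advertised_mime: str, data: bytes) -> bool:
    """True when the advertised type disagrees with the sniffed type."""
    return sniff_mime(data) != advertised_mime

# An executable dressed up as a photo is caught by content sniffing:
payload = b"MZ\x90\x00" + b"\x00" * 60        # PE files start with 'MZ'
print(is_spoofed("image/jpeg", payload))      # True
```

The sketch simply illustrates why letting two different attributes of the same file drive the display decision and the open decision invites spoofing.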

Cybersecurity experts warn that WhatsApp’s broad reach and user familiarity make it a prime target for exploitation. Adam Pilton, senior cybersecurity consultant at CyberSmart, cautioned that this vulnerability is especially dangerous in group chats. “If a cybercriminal shares the malicious file in a trusted group or through a mutual contact, anyone in that group might unknowingly execute malware just by opening what looks like a regular image,” he explained. 

Martin Kraemer, a security awareness advocate at KnowBe4, highlighted the platform’s deep integration into daily routines—from casual chats to job applications. “WhatsApp’s widespread use means users have developed a level of trust and automation that attackers exploit. This vulnerability must not be underestimated,” Kraemer said. Until they have updated to the latest version, experts urge WhatsApp users to treat the app like email: avoid opening unexpected attachments, especially from unknown senders or new contacts.

The good news is that Meta has already issued a fix, and updating the app resolves the vulnerability. Pilton emphasized the importance of patch management, noting, “Cybercriminals will always seek to exploit software flaws, and providers will keep issuing patches. Keeping your software updated is the simplest and most effective protection.” For now, users should update WhatsApp for Windows immediately to mitigate the risk posed by CVE-2025-30401 and remain cautious with all incoming files.

Meta Launches New Llama 4 AI Models

Meta has introduced a fresh set of artificial intelligence models under the name Llama 4. This release includes three new versions: Scout, Maverick, and Behemoth. Each one has been designed to better understand and respond to a mix of text, images, and videos.

The reason behind this launch seems to be rising competition, especially from Chinese companies like DeepSeek. Their recent models have been doing so well that Meta rushed to improve its own tools to keep up.


Where You Can Access Llama 4

The Scout and Maverick models are now available online through Meta’s official site and other developer platforms like Hugging Face. However, Behemoth is still in the testing phase and hasn’t been released yet.

Meta has already added Llama 4 to its own digital assistant, which is built into apps like WhatsApp, Instagram, and Messenger in several countries. However, some special features are only available in the U.S. and only in English for now.


Who Can and Can’t Use It

Meta has placed some limits on who can access Llama 4. People and companies based in the European Union are not allowed to use or share these models, likely due to strict data rules in that region. Also, very large companies (those with more than 700 million monthly users) must first get permission from Meta.


Smarter Design, Better Performance

Llama 4 is Meta’s first release using a new design method called "Mixture of Experts." Instead of running the whole network for every request, a routing step sends each piece of input to a small number of specialized “experts” inside the system, so only part of the model does the work at any moment. This makes it faster and more efficient.

For example, the Maverick model has 400 billion total "parameters" (the internal values a model learns during training, a rough measure of its capacity), but it only activates a small fraction of them at a time. Scout, the lighter model, is great for reading long documents or big sections of code and can run on a single high-powered computer chip. Maverick needs a more advanced system to function properly.
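To make the idea concrete, here is a toy sketch of top-k expert routing written in plain Python with NumPy. It illustrates the general Mixture of Experts pattern only; the sizes, the softmax gate, and every name in it are arbitrary choices for the example, not Llama 4's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_EXPERTS, TOP_K = 8, 4, 2               # toy sizes, not Llama 4's

# Each "expert" is its own small weight matrix; a gate scores them all.
experts = [rng.standard_normal((DIM, DIM)) * 0.1 for _ in range(N_EXPERTS)]
gate_w = rng.standard_normal((DIM, N_EXPERTS)) * 0.1

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Send x to its TOP_K best-scoring experts and mix their outputs."""
    scores = x @ gate_w                        # one score per expert
    chosen = np.argsort(scores)[-TOP_K:]       # indices of the top k
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                   # softmax over the chosen few
    # Only k of the N expert networks run for this input; the rest sit
    # idle, which is how total parameters can far exceed active ones.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

print(moe_forward(rng.standard_normal(DIM)).shape)  # (8,)
```

In a full model this routing happens inside many layers and for every token, which is how Maverick can hold 400 billion parameters while activating only a small fraction of them for any given request.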


Behemoth: The Most Advanced One Yet

Behemoth, which is still being developed, will be the most powerful version. It is being trained on a huge amount of data and is expected to perform better than many leading models on science and math tasks. But it will also need very powerful computing systems to run.

One big change in this new version is how it handles sensitive topics. Previous models often avoided difficult questions. Now, Llama 4 is trained to give clearer, fairer answers on political or controversial issues. Meta says the goal is to make the AI more helpful to users, no matter what their views are.

Meta's AI Bots on WhatsApp Spark Privacy and Usability Concerns

WhatsApp, the world's most widely used messaging app, is celebrated for its simplicity, privacy, and user-friendly design. However, upcoming changes could drastically reshape the app. Meta, WhatsApp's parent company, is testing a new feature: AI bots. While some view this as a groundbreaking innovation, others question its necessity and raise concerns about privacy, clutter, and added complexity. 
 
Meta is introducing a new "AI" tab in WhatsApp, currently in beta testing for Android users. This feature will allow users to interact with AI-powered chatbots on various topics. These bots include both third-party models and Meta’s in-house virtual assistant, "Meta AI." To make room for this update, the existing "Communities" tab will merge with the "Chats" section, with the AI tab taking its place. Although Meta presents this as an upgrade, many users feel it disrupts WhatsApp's clean and straightforward design. 
 
Meta’s strategy seems focused on expanding its AI ecosystem across its platforms—Instagram, Facebook, and now WhatsApp. By introducing AI bots, Meta aims to boost user engagement and explore new revenue opportunities. However, this shift risks undermining WhatsApp’s core values of simplicity and secure communication. The addition of AI could clutter the interface and complicate user experience. 

Key Concerns Among Users 
 
1. Loss of Simplicity: WhatsApp’s minimalistic design has been central to its popularity. Adding AI features could make the app feel overloaded and detract from its primary function as a messaging platform. 
 
2. Privacy and Security Risks: Known for its end-to-end encryption, WhatsApp prioritizes user privacy. Introducing AI bots raises questions about data security and how Meta will prevent misuse of these bots. 
 
3. Unwanted Features: Many users believe AI bots are unnecessary in a messaging app. Unlike standalone AI tools such as ChatGPT or Google Gemini, which people choose to use deliberately, Meta's integration feels forced.
 
4. Cluttered Interface: Replacing the "Communities" tab with the AI tab consumes valuable space, potentially disrupting how users navigate the app. 

The Bigger Picture 

Meta may eventually allow users to create custom AI bots within WhatsApp, a feature already available on Instagram. However, this could introduce significant risks. Poorly moderated bots might spread harmful or misleading content, threatening user trust and safety. 

WhatsApp users value its security and simplicity. While some might welcome AI bots, most prefer such features to remain optional and unobtrusive. Since the AI bot feature is still in testing, it’s unclear whether Meta will implement it globally. Many hope WhatsApp will stay true to its core strengths—simplicity, privacy, and reliability—rather than adopting features that could alienate its loyal user base. Will this AI integration enhance the platform or compromise its identity? Only time will tell.

Meta Removes Independent Fact Checkers, Replaces With "Community Notes"


Meta is dropping independent fact-checkers on Instagram and Facebook, following the lead of X (formerly Twitter), and replacing them with “community notes,” where users’ comments decide the accuracy of a post.

On Tuesday, Mark Zuckerberg said in a video that third-party moderators were "too politically biased" and that it was "time to get back to our roots around free expression".

Tech executives are trying to build better relations with incoming US President Donald Trump, who takes office this month, and this move is widely seen as a step in that direction.

Republican Party and Meta

The Republican Party and Trump have criticized Meta over its fact-checking policies, arguing that it censors right-wing voices on its platforms.

After the new policy was announced, Trump said at a news conference that he was pleased with the decision and that Meta had "come a long way".

Online anti-hate speech activists expressed disappointment with the shift, claiming it was motivated by a desire to align with Trump.

“Zuckerberg's announcement is a blatant attempt to cozy up to the incoming Trump administration – with harmful implications. Claiming to avoid "censorship" is a political move to avoid taking responsibility for hate and disinformation that platforms encourage and facilitate,” said Ava Lee of Global Witness, an organization that campaigns to hold big tech companies like Meta accountable.

Copying X

Meta’s current fact-checking program, introduced in 2016, sends posts that appear false or misleading to independent fact-checking organizations to judge their credibility.

Posts judged misleading carry labels that give users more context and are ranked lower in viewers’ social media feeds. This system will now be replaced by community notes, starting in the US. Meta says it has no “immediate plans” to remove third-party fact-checkers in the EU or the UK.

The community notes approach is copied from X, which rolled it out widely after Elon Musk bought Twitter.

It relies on contributors with opposing viewpoints agreeing on notes that add context or explanation to disputed posts.
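The mechanism behind that requirement can be sketched in a few lines: a note is published only when raters who usually disagree both find it helpful. The hypothetical Python below is a toy version of that "bridging" idea, not X's or Meta's actual algorithm; the camp labels and the threshold are invented for the example.

```python
# Toy "bridging" rule: a note surfaces only if every camp of raters
# independently finds it helpful, not just an overall majority.
ratings = [
    {"rater": "a", "camp": "left",  "helpful": True},
    {"rater": "b", "camp": "right", "helpful": True},
    {"rater": "c", "camp": "right", "helpful": False},
]

def note_is_shown(ratings, threshold=0.6):
    """Require a helpful majority within each camp of raters."""
    camps = {r["camp"] for r in ratings}
    for camp in camps:
        votes = [r["helpful"] for r in ratings if r["camp"] == camp]
        if sum(votes) / len(votes) < threshold:
            return False
    return True

print(note_is_shown(ratings))  # False: the 'right' camp is split 1-1
```

A simple overall majority would have published this note (two of three raters found it helpful); requiring agreement across camps is what distinguishes community notes from ordinary voting.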

In its announcement, Meta said: “We will allow more speech by lifting restrictions on some topics that are part of mainstream discourse and focusing our enforcement on illegal and high-severity violations. We will take a more personalized approach to political content, so that people who want to see more of it in their feeds can.”

Are You Using AI in Marketing? Here's How to Do It Responsibly

Artificial Intelligence (AI) has emerged as a transformative force, reshaping industries and delivering unprecedented value to businesses worldwide. From automating mundane tasks to offering predictive insights, AI has catalyzed innovation on a massive scale. However, its rapid adoption raises significant concerns about privacy, data ethics, and transparency, prompting urgent discussions on regulation. The need for robust frameworks has grown even more critical as AI technologies become deeply entrenched in everyday operations.

Data Use and the Push for Regulation

During the early development stages of AI, major tech players such as Meta and OpenAI often used public and private datasets without clear guidelines in place. This unregulated experimentation highlighted glaring gaps in data ethics, leading to calls for significant regulatory oversight. The absence of structured frameworks not only undermined public trust but also raised legal and ethical questions about the use of sensitive information.

Today, the regulatory landscape is evolving to address these issues. Europe has taken a pioneering role with the EU AI Act, which came into effect on August 1, 2024. This legislation classifies AI applications based on their level of risk and enforces stricter controls on higher-risk systems to ensure public safety and confidence. By categorizing AI into levels such as minimal, limited, and high risk, the Act provides a comprehensive framework for accountability. On the other hand, the United States is still in the early stages of federal discussions, though states like California and Colorado have enacted targeted laws emphasizing transparency and user privacy in AI applications.

Why Marketing Teams Should Stay Vigilant

AI’s impact on marketing is undeniable, with tools revolutionizing how teams create content, interact with customers, and analyze data. According to a survey, 93% of marketers using AI rely on it to accelerate content creation, optimize campaigns, and deliver personalized experiences. However, this reliance comes with challenges such as intellectual property infringement, algorithmic biases, and ethical dilemmas surrounding AI-generated material.

As regulatory frameworks mature, marketing professionals must align their practices with emerging compliance standards. Proactively adopting ethical AI usage not only mitigates risks but also prepares businesses for stricter regulations. Ethical practices can safeguard brand reputation, ensuring that marketing teams remain compliant and trusted by their audiences.

Best Practices for Responsible AI Use

  1. Maintain Human Oversight
    While AI can streamline workflows, it should not replace human intervention. Marketing teams must rigorously review AI-generated content to ensure originality, eliminate biases, and avoid plagiarism. This approach not only improves content quality but also aligns with ethical standards.
  2. Promote Transparency
    Transparency builds trust. Businesses should be open about their use of AI, particularly when collecting data or making automated decisions. Clear communication about AI processes fosters customer confidence and adheres to evolving legal requirements focused on explainability.
  3. Implement Ethical Data Practices
    Ensure that all data used for AI training complies with privacy laws and ethical guidelines. Avoid using data without proper consent and regularly audit datasets to prevent misuse or biases; a minimal audit sketch follows this list.
  4. Educate Teams
    Equip employees with knowledge about AI technologies and the implications of their use. Training programs can help teams stay informed about regulatory changes and ethical considerations, promoting responsible practices across the organization.
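As a concrete, deliberately simplified illustration of such an audit, the hypothetical Python sketch below scans a set of training records for entries that lack a consent flag and for heavy skew in a sensitive attribute. The field names (consent, region) are assumptions made up for this example, not a standard schema.

```python
from collections import Counter

# Hypothetical training records; 'consent' and 'region' are example fields.
records = [
    {"text": "ad copy A", "consent": True,  "region": "EU"},
    {"text": "ad copy B", "consent": False, "region": "US"},
    {"text": "ad copy C", "consent": True,  "region": "US"},
    {"text": "ad copy D", "consent": True,  "region": "US"},
]

def audit(dataset):
    """Count records lacking consent and measure skew in a sensitive field."""
    missing_consent = sum(1 for r in dataset if not r.get("consent"))
    counts = Counter(r["region"] for r in dataset)
    shares = {k: v / len(dataset) for k, v in counts.items()}
    return {"missing_consent": missing_consent, "region_share": shares}

print(audit(records))
# {'missing_consent': 1, 'region_share': {'EU': 0.25, 'US': 0.75}}
```

Records flagged this way would be removed or re-consented before training; in practice the same pass would also check licensing and personally identifiable information.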

Preparing for the Future

AI regulation is not just a passing concern but a critical element in shaping its responsible use. By embracing transparency, accountability, and secure data practices, businesses can stay ahead of legal changes while fostering trust with customers and stakeholders. Adopting ethical AI practices ensures that organizations are future-proof, resilient, and prepared to navigate the complexities of the evolving regulatory landscape.

As AI continues to advance, the onus is on businesses to balance innovation with responsibility. Marketing teams, in particular, have an opportunity to demonstrate leadership by integrating AI in ways that enhance customer relationships while upholding ethical and legal standards. By doing so, organizations can not only thrive in an AI-driven world but also set an example for others to follow.

Meta Introduces AI Features For Ray-Ban Glasses in Europe


Meta has officially introduced certain AI functions for its Ray-Ban Meta augmented reality (AR) glasses in France, Italy, and Spain, marking a significant step in the company's rollout of its wearable technology across Europe. 

Starting earlier this week, customers in these nations have been able to interact with Meta's AI assistant solely through their voice, asking general questions and receiving responses through the glasses. 

As part of Meta's larger initiative to make its AI assistant more widely available, this latest deployment covers French, Italian, and Spanish in addition to English. The announcement comes more than a year after the Ray-Ban Meta glasses were first released in September 2023.

In a blog post outlining the update, Meta stated, "We are thrilled to introduce Meta AI and its cutting-edge features to regions of the EU, and we look forward to expanding to more European countries soon.” However, not all of the features accessible in other regions will be included in the European rollout. 

While customers in the United States, Canada, and Australia benefit from multimodal AI capabilities on their Ray-Ban Meta glasses, such as the ability to gain information about objects in view of the glasses' camera, these functions will not be included in the European update at present.

For example, users in the United States can ask their glasses to identify landmarks in their surroundings, such as "Tell me more about this landmark," but these functionalities are not available in Europe due to ongoing regulatory issues. 

Meta has stated its commitment to dealing with Europe's complicated legal environment, specifically the EU's AI Act and the General Data Protection Regulation (GDPR). The company indicated that it is aiming to offer multimodal capabilities to more countries in the future, but there is no set date. 

While the rollout in France, Italy, and Spain marks a significant milestone, Meta's journey in the European market is far from done. As the firm navigates the regulatory landscape and expands its AI solutions, users in Europe can expect more updates and new features for their Ray-Ban Meta glasses in the coming months. 

As Meta continues to grow its devices and expand its AI capabilities, all eyes will be on how the firm adjusts to Europe's legal system and how this will impact the future of AR technology worldwide.

Supreme Court Weighs Shareholder Lawsuit Against Meta Over Data Disclosure


The U.S. Supreme Court is deliberating on a high-stakes shareholder lawsuit involving Meta (formerly Facebook), where investors claim the tech giant misled them by omitting crucial data breach information from its risk disclosures. The case, Facebook v. Amalgamated Bank, centers on the Cambridge Analytica scandal, in which a British firm accessed data on millions of users to influence U.S. elections. While Meta had warned of potential misuse of data in its annual filings, it did not disclose that a significant breach had already occurred, potentially impacting investors’ trust. During oral arguments, liberal justices voiced concerns over the omission. 

Justice Elena Kagan likened the situation to a company that warns about fire risks but withholds that a recent fire already caused severe damage. Such a lack of disclosure, she argued, could be misleading to “reasonable investors.” The plaintiffs’ attorney, Kevin Russell, echoed this sentiment, asserting that Facebook’s omission misrepresented the severity of risks investors faced. On the other hand, conservative justices expressed concerns about expanding disclosure requirements. Chief Justice John Roberts questioned whether mandating disclosures of all past events might lead to over-disclosure, which could overwhelm investors with excessive details. Justice Brett Kavanaugh suggested the SEC, rather than the courts, might be better positioned to clarify standards for corporate disclosures. 

The Biden administration supports the plaintiffs, with Assistant Solicitor General Kevin Barber describing the case as an example of a misleading “half-truth.” Meta’s attorney, Kannon Shanmugam, argued that such broad requirements could dissuade companies from sharing forward-looking risk factors, fearing potential lawsuits for any past incident. Previously, the Ninth Circuit found Meta’s general warnings about potential risks misleading, given the company’s awareness of the Cambridge Analytica breach. That court held that such omissions could harm investors by implying that no significant misuse had occurred. 

If the Supreme Court sides with the plaintiffs, companies could face new expectations to disclose known incidents, particularly those affecting data security or reputational risk. Such a ruling could reshape corporate disclosure practices, particularly for tech firms managing sensitive data. Alternatively, a ruling in favor of Meta may uphold the existing regulatory framework, granting companies more discretion in defining disclosure content. This decision will likely set a significant precedent for how companies balance transparency with investors and risk management.

Meta Struggles to Curb Misleading Ads on Hacked Facebook Pages


Meta, the parent company of Facebook, has come under fire for its failure to adequately prevent misleading political ads from being run on hacked Facebook pages. A recent investigation by ProPublica and the Tow Center for Digital Journalism uncovered that these ads, which exploited deepfake audio of prominent figures like Donald Trump and Joe Biden, falsely promised financial rewards. Users who clicked on these ads were redirected to forms requesting personal information, which was subsequently sold to telemarketers or used in fraudulent schemes. 

One of the key networks involved, operating under the name Patriot Democracy, hijacked more than 340 Facebook pages, including verified accounts like that of Fox News meteorologist Adam Klotz. The network used these pages to push over 160,000 deceptive ads related to elections and social issues, with a combined reach of nearly 900 million views across Facebook and Instagram. The investigation highlighted significant loopholes in Meta’s ad review and enforcement processes. While Meta did remove some of the ads, it failed to catch thousands of others, many with identical or similar content. Even after taking down problematic ads, the platform allowed the associated pages to remain active, enabling the perpetrators to continue their operations by spawning new pages and running more ads. 

Meta’s policies require ads related to elections or social issues to carry “paid for by” disclaimers, identifying the entities behind them. However, the investigation revealed that many of these disclaimers were misleading, listing nonexistent entities. This loophole allowed deceptive networks to continue exploiting users with minimal oversight. The company defended its actions, stating that it invests heavily in trust and safety, utilizing both human and automated systems to review and enforce policies. A Meta spokesperson acknowledged the investigation’s findings and emphasized ongoing efforts to combat scams, impersonation, and spam on the platform. 

However, critics argue that these measures are insufficient and inconsistent, allowing scammers to exploit systemic vulnerabilities repeatedly. The investigation also revealed that some users were duped into fraudulent schemes, such as signing up for unauthorized monthly credit card charges or being manipulated into changing their health insurance plans under false pretences. These scams not only caused financial losses but also left victims vulnerable to further exploitation. Experts have called for more stringent oversight and enforcement from Meta, urging the company to take a proactive stance in combating misinformation and fraud. 

The incident underscores the broader challenges social media platforms face in balancing open access with the need for rigorous content moderation, particularly in the context of politically sensitive content. In conclusion, Meta’s struggle to prevent deceptive ads highlights the complexities of managing a vast digital ecosystem where bad actors continually adapt their tactics. While Meta has made some strides, the persistence of such scams raises serious questions about the platform’s ability to protect its users effectively and maintain the integrity of its advertising systems.