
Meta's AI Bots on WhatsApp Spark Privacy and Usability Concerns

WhatsApp, the world's most widely used messaging app, is celebrated for its simplicity, privacy, and user-friendly design. However, upcoming changes could drastically reshape the app. Meta, WhatsApp's parent company, is testing a new feature: AI bots. While some view this as a groundbreaking innovation, others question its necessity and raise concerns about privacy, clutter, and added complexity. 
 
Meta is introducing a new "AI" tab in WhatsApp, currently in beta testing for Android users. This feature will allow users to interact with AI-powered chatbots on various topics. These bots include both third-party models and Meta’s in-house virtual assistant, "Meta AI." To make room for this update, the existing "Communities" tab will merge with the "Chats" section, with the AI tab taking its place. Although Meta presents this as an upgrade, many users feel it disrupts WhatsApp's clean and straightforward design. 
 
Meta’s strategy seems focused on expanding its AI ecosystem across its platforms—Instagram, Facebook, and now WhatsApp. By introducing AI bots, Meta aims to boost user engagement and explore new revenue opportunities. However, this shift risks undermining WhatsApp’s core values of simplicity and secure communication. The addition of AI could clutter the interface and complicate user experience. 

Key Concerns Among Users 
 
1. Loss of Simplicity: WhatsApp’s minimalistic design has been central to its popularity. Adding AI features could make the app feel overloaded and detract from its primary function as a messaging platform. 
 
2. Privacy and Security Risks: Known for its end-to-end encryption, WhatsApp prioritizes user privacy. Introducing AI bots raises questions about data security and how Meta will prevent misuse of these bots. 
 
3. Unwanted Features: Many users believe AI bots are unnecessary in a messaging app. Unlike standalone AI tools such as ChatGPT or Google Gemini, which users opt into, Meta's integration feels forced.
 
4. Cluttered Interface: Replacing the "Communities" tab with the AI tab consumes valuable space, potentially disrupting how users navigate the app. 

The Bigger Picture 

Meta may eventually allow users to create custom AI bots within WhatsApp, a feature already available on Instagram. However, this could introduce significant risks. Poorly moderated bots might spread harmful or misleading content, threatening user trust and safety. 

WhatsApp users value its security and simplicity. While some might welcome AI bots, most prefer such features to remain optional and unobtrusive. Since the AI bot feature is still in testing, it’s unclear whether Meta will implement it globally. Many hope WhatsApp will stay true to its core strengths—simplicity, privacy, and reliability—rather than adopting features that could alienate its loyal user base. Will this AI integration enhance the platform or compromise its identity? Only time will tell.

Meta Removes Independent Fact Checkers, Replaces With "Community Notes"



Meta is dropping independent fact-checkers on Instagram and Facebook, following the lead of X (formerly Twitter), and replacing them with "community notes," in which users' comments determine the accuracy of a post.

In a video posted on Tuesday, Mark Zuckerberg said third-party moderators were "too politically biased" and that it was "time to get back to our roots around free expression".

Tech executives have been trying to build better relations with incoming US President Donald Trump, who takes the oath of office this month, and the move is widely seen as a step in that direction.

Republican Party and Meta

The Republican Party and Trump have criticized Meta's fact-checking policies, arguing that they censor right-wing voices on its platforms.

After the new policy was announced, Trump told a news conference he was pleased with Meta's decision, saying the company had "come a long way".

Online anti-hate speech activists expressed disappointment with the shift, claiming it was motivated by a desire to align with Trump.

“Zuckerberg's announcement is a blatant attempt to cozy up to the incoming Trump administration – with harmful implications. Claiming to avoid "censorship" is a political move to avoid taking responsibility for hate and disinformation that platforms encourage and facilitate,” said Ava Lee of Global Witness, an organization that campaigns to hold big tech companies like Meta accountable.

Copying X

Meta's current fact-checking program, introduced in 2016, sends posts that appear false or misleading to independent fact-checking organizations to assess their credibility.

Posts judged misleading are given labels that offer users more context and are ranked lower in viewers' social media feeds. This system will now be replaced by community notes, starting in the US. Meta says it has no “immediate plans” to remove third-party fact-checkers in the EU or the UK.

The community notes model is borrowed from X, which expanded the feature after Elon Musk bought Twitter.

Under that system, contributors with opposing viewpoints must agree on notes that add context or explanation to disputed posts.

In its announcement, Meta said: “We will allow more speech by lifting restrictions on some topics that are part of mainstream discourse and focusing our enforcement on illegal and high-severity violations. We will take a more personalized approach to political content, so that people who want to see more of it in their feeds can.”

Are You Using AI in Marketing? Here's How to Do It Responsibly

Artificial Intelligence (AI) has emerged as a transformative force, reshaping industries and delivering unprecedented value to businesses worldwide. From automating mundane tasks to offering predictive insights, AI has catalyzed innovation on a massive scale. However, its rapid adoption raises significant concerns about privacy, data ethics, and transparency, prompting urgent discussions on regulation. The need for robust frameworks has grown even more critical as AI technologies become deeply entrenched in everyday operations.

Data Use and the Push for Regulation

During the early development stages of AI, major tech players such as Meta and OpenAI often used public and private datasets without clear guidelines in place. This unregulated experimentation highlighted glaring gaps in data ethics, leading to calls for significant regulatory oversight. The absence of structured frameworks not only undermined public trust but also raised legal and ethical questions about the use of sensitive information.

Today, the regulatory landscape is evolving to address these issues. Europe has taken a pioneering role with the EU AI Act, which came into effect on August 1, 2024. This legislation classifies AI applications by level of risk and enforces stricter controls on higher-risk systems to ensure public safety and confidence. By sorting AI into four tiers (minimal, limited, high, and unacceptable risk), the Act provides a comprehensive framework for accountability. The United States, by contrast, is still in the early stages of federal discussions, though states like California and Colorado have enacted targeted laws emphasizing transparency and user privacy in AI applications.

Why Marketing Teams Should Stay Vigilant

AI’s impact on marketing is undeniable, with tools revolutionizing how teams create content, interact with customers, and analyze data. According to a survey, 93% of marketers using AI rely on it to accelerate content creation, optimize campaigns, and deliver personalized experiences. However, this reliance comes with challenges such as intellectual property infringement, algorithmic biases, and ethical dilemmas surrounding AI-generated material.

As regulatory frameworks mature, marketing professionals must align their practices with emerging compliance standards. Proactively adopting ethical AI usage not only mitigates risks but also prepares businesses for stricter regulations. Ethical practices can safeguard brand reputation, ensuring that marketing teams remain compliant and trusted by their audiences.

Best Practices for Responsible AI Use

  1. Maintain Human Oversight
    While AI can streamline workflows, it should not replace human intervention. Marketing teams must rigorously review AI-generated content to ensure originality, eliminate biases, and avoid plagiarism. This approach not only improves content quality but also aligns with ethical standards.
  2. Promote Transparency
    Transparency builds trust. Businesses should be open about their use of AI, particularly when collecting data or making automated decisions. Clear communication about AI processes fosters customer confidence and adheres to evolving legal requirements focused on explainability.
  3. Implement Ethical Data Practices
    Ensure that all data used for AI training complies with privacy laws and ethical guidelines. Avoid using data without proper consent and regularly audit datasets to prevent misuse or biases (see the sketch after this list).
  4. Educate Teams
    Equip employees with knowledge about AI technologies and the implications of their use. Training programs can help teams stay informed about regulatory changes and ethical considerations, promoting responsible practices across the organization.
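
To make the auditing step in point 3 concrete, here is a minimal sketch of a pre-training data audit. It is an illustration rather than a compliance tool: the regex patterns, the record format, and the sample data are all assumptions, and a real audit would cover far more PII types and consent checks.

```python
import re

# Hypothetical pre-training audit: flag records that contain common PII
# patterns (emails, US-style phone numbers, SSN-like strings) so they can
# be reviewed for consent before being used to train or prompt an AI tool.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def audit_records(records):
    """Return (record_index, pii_type) pairs for every suspected hit."""
    findings = []
    for i, text in enumerate(records):
        for pii_type, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                findings.append((i, pii_type))
    return findings

if __name__ == "__main__":
    sample = [
        "Great campaign results this quarter!",
        "Reach Jane at jane.doe@example.com or 555-867-5309.",
    ]
    for idx, kind in audit_records(sample):
        print(f"Record {idx}: possible {kind}; confirm consent before use")
```

Even a simple gate like this makes the "regularly audit datasets" advice operational: records that trip a pattern are routed to a human reviewer before any AI tool sees them.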

Preparing for the Future

AI regulation is not just a passing concern but a critical element in shaping its responsible use. By embracing transparency, accountability, and secure data practices, businesses can stay ahead of legal changes while fostering trust with customers and stakeholders. Adopting ethical AI practices ensures that organizations are future-proof, resilient, and prepared to navigate the complexities of the evolving regulatory landscape.

As AI continues to advance, the onus is on businesses to balance innovation with responsibility. Marketing teams, in particular, have an opportunity to demonstrate leadership by integrating AI in ways that enhance customer relationships while upholding ethical and legal standards. By doing so, organizations can not only thrive in an AI-driven world but also set an example for others to follow.

Meta Introduces AI Features For Ray-Ban Glasses in Europe

Meta has officially introduced certain AI functions for its Ray-Ban Meta smart glasses in France, Italy, and Spain, marking a significant step in the company's rollout of its wearable technology across Europe.

Starting earlier this week, customers in these countries have been able to interact with Meta's AI assistant entirely by voice, asking general questions and receiving responses through the glasses.

As part of Meta's larger initiative to make its AI assistant more widely available, this latest deployment supports French, Italian, and Spanish in addition to English. The announcement comes more than a year after the Ray-Ban Meta glasses were first released in September 2023.

In a blog post outlining the update, Meta stated, "We are thrilled to introduce Meta AI and its cutting-edge features to regions of the EU, and we look forward to expanding to more European countries soon.” However, not all of the features accessible in other regions will be included in the European rollout. 

While customers in the United States, Canada, and Australia benefit from multimodal AI capabilities on their Ray-Ban Meta glasses, such as the ability to gain information about objects in view of the glasses' camera, these functions will not be included in the European update at present.

For example, users in the United States can ask their glasses to identify landmarks in their surroundings, such as "Tell me more about this landmark," but these functionalities are not available in Europe due to ongoing regulatory issues. 

Meta has stated its commitment to dealing with Europe's complicated legal environment, specifically the EU's AI Act and the General Data Protection Regulation (GDPR). The company indicated that it is aiming to offer multimodal capabilities to more countries in the future, but there is no set date. 

While the rollout in France, Italy, and Spain marks a significant milestone, Meta's journey in the European market is far from done. As the firm navigates the regulatory landscape and expands its AI solutions, users in Europe can expect more updates and new features for their Ray-Ban Meta glasses in the coming months. 

As Meta continues to grow its devices and expand its AI capabilities, all eyes will be on how the firm adjusts to Europe's legal system and how this will impact the future of AR technology worldwide.

Supreme Court Weighs Shareholder Lawsuit Against Meta Over Data Disclosure

The U.S. Supreme Court is deliberating on a high-stakes shareholder lawsuit involving Meta (formerly Facebook), where investors claim the tech giant misled them by omitting crucial data breach information from its risk disclosures. The case, Facebook v. Amalgamated Bank, centers around the Cambridge Analytica scandal, where a British firm accessed data on millions of users to influence U.S. elections. While Meta had warned of potential misuse of data in its annual filings, it did not disclose that a significant breach had already occurred, potentially impacting investors’ trust. During oral arguments, liberal justices voiced concerns over the omission. 

Justice Elena Kagan likened the situation to a company that warns about fire risks but withholds that a recent fire already caused severe damage. Such a lack of disclosure, she argued, could be misleading to “reasonable investors.” The plaintiffs’ attorney, Kevin Russell, echoed this sentiment, asserting that Facebook’s omission misrepresented the severity of risks investors faced. On the other hand, conservative justices expressed concerns about expanding disclosure requirements. Chief Justice John Roberts questioned whether mandating disclosures of all past events might lead to over-disclosure, which could overwhelm investors with excessive details. Justice Brett Kavanaugh suggested the SEC, rather than the courts, might be better positioned to clarify standards for corporate disclosures. 

The Biden administration supports the plaintiffs, with Assistant Solicitor General Kevin Barber describing the case as an example of a misleading “half-truth.” Meta’s attorney, Kannon Shanmugam, argued that such broad requirements could dissuade companies from sharing forward-looking risk factors, fearing potential lawsuits for any past incident. Previously, the Ninth Circuit found Meta’s general warnings about potential risks misleading, given the company’s awareness of the Cambridge Analytica breach. The Court held that such omissions could harm investors by implying that no significant misuse had occurred. 

If the Supreme Court sides with the plaintiffs, companies could face new expectations to disclose known incidents, particularly those affecting data security or reputational risk. Such a ruling could reshape corporate disclosure practices, particularly for tech firms managing sensitive data. Alternatively, a ruling in favor of Meta may uphold the existing regulatory framework, granting companies more discretion in defining disclosure content. This decision will likely set a significant precedent for how companies balance transparency with investors and risk management.

Meta Struggles to Curb Misleading Ads on Hacked Facebook Pages

Meta, the parent company of Facebook, has come under fire for its failure to adequately prevent misleading political ads from being run on hacked Facebook pages. A recent investigation by ProPublica and the Tow Center for Digital Journalism uncovered that these ads, which exploited deepfake audio of prominent figures like Donald Trump and Joe Biden, falsely promised financial rewards. Users who clicked on these ads were redirected to forms requesting personal information, which was subsequently sold to telemarketers or used in fraudulent schemes. 

One of the key networks involved, operating under the name Patriot Democracy, hijacked more than 340 Facebook pages, including verified accounts like that of Fox News meteorologist Adam Klotz. The network used these pages to push over 160,000 deceptive ads related to elections and social issues, with a combined reach of nearly 900 million views across Facebook and Instagram. The investigation highlighted significant loopholes in Meta’s ad review and enforcement processes. While Meta did remove some of the ads, it failed to catch thousands of others, many with identical or similar content. Even after taking down problematic ads, the platform allowed the associated pages to remain active, enabling the perpetrators to continue their operations by spawning new pages and running more ads. 

Meta’s policies require ads related to elections or social issues to carry “paid for by” disclaimers, identifying the entities behind them. However, the investigation revealed that many of these disclaimers were misleading, listing nonexistent entities. This loophole allowed deceptive networks to continue exploiting users with minimal oversight. The company defended its actions, stating that it invests heavily in trust and safety, utilizing both human and automated systems to review and enforce policies. A Meta spokesperson acknowledged the investigation’s findings and emphasized ongoing efforts to combat scams, impersonation, and spam on the platform. 

However, critics argue that these measures are insufficient and inconsistent, allowing scammers to exploit systemic vulnerabilities repeatedly. The investigation also revealed that some users were duped into fraudulent schemes, such as signing up for unauthorized monthly credit card charges or being manipulated into changing their health insurance plans under false pretences. These scams not only caused financial losses but also left victims vulnerable to further exploitation. Experts have called for more stringent oversight and enforcement from Meta, urging the company to take a proactive stance in combating misinformation and fraud. 

The incident underscores the broader challenges social media platforms face in balancing open access with the need for rigorous content moderation, particularly in the context of politically sensitive content. In conclusion, Meta’s struggle to prevent deceptive ads highlights the complexities of managing a vast digital ecosystem where bad actors continually adapt their tactics. While Meta has made some strides, the persistence of such scams raises serious questions about the platform’s ability to protect its users effectively and maintain the integrity of its advertising systems.

Vietnamese Hackers Target Digital Marketers in Malware Attack

Cyble Research and Intelligence Labs recently unearthed an elaborate, multi-stage malware attack targeting job seekers as well as digital marketing professionals. The attackers, a Vietnamese threat actor, mounted sophisticated attacks built around Quasar RAT, a remote access tool that gives a hacker complete control of an infected computer.


Phishing emails and LNK files as entry points

The attack begins with phishing emails carrying an attached archive file. Inside the archive is a malicious LNK (Windows shortcut) file disguised as a PDF. Once the LNK is launched, it executes PowerShell commands that download additional malicious scripts from a third-party source, bypassing most detection solutions. The method proves especially potent in non-virtualized environments, where the malware can remain undiscovered on the system.
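
As an illustration of that disguise, here is a small defensive sketch (mine, not from the Cyble report) that lists the members of a ZIP archive and flags Windows shortcuts masquerading as documents, such as "invoice.pdf.lnk". The archive name and extension list are assumptions.

```python
import zipfile

# Defensive sketch: list ZIP members whose names imitate documents but are
# actually Windows shortcuts (e.g. "invoice.pdf.lnk"), the disguise used
# as the entry point described above. File names here are hypothetical.
DECOY_EXTENSIONS = (".pdf", ".doc", ".docx", ".xls", ".xlsx")

def flag_disguised_shortcuts(archive_path):
    findings = []
    with zipfile.ZipFile(archive_path) as zf:
        for name in zf.namelist():
            lower = name.lower()
            if lower.endswith(".lnk") and lower[:-4].endswith(DECOY_EXTENSIONS):
                findings.append(name)
    return findings

if __name__ == "__main__":
    for name in flag_disguised_shortcuts("attachment.zip"):
        print(f"Suspicious member: {name} (shortcut disguised as a document)")
```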


Quasar RAT Deployment

Next, the attackers decrypt the malware payload using hardcoded keys and launch Quasar RAT, a remote access trojan that gives them total control over the compromised system. From there, data can be stolen, further malware can be planted, and the infected device can be operated remotely.

The campaign primarily targets digital marketers in the United States who use Meta (Facebook, Instagram) advertising. The malware files used in the attack were tailored to this type of user, which amplified the campaign's chances of success.


Spread using Ducktail Malware

In July 2022, the same Vietnamese threat actors expanded their activities with the launch of Ducktail, malware that specifically targeted digital marketing professionals. The group added information stealers and other RATs to its attacks and has used malware-as-a-service (MaaS) platforms to scale up and keep the campaign versatile over time.


Evasion of Detection in Virtual Environments

Its ability to evade detection in virtual environments makes this malware attack all the more sophisticated. The attackers use the "output.bat" file to determine whether the malware is running in a virtual environment, scanning for several hard drive manufacturer strings and virtual machine signatures such as "QEMU" and "VirtualBox." If the malware detects that it has been launched inside a virtual machine, it stops executing immediately to cut any analysis short.

If no virtual environment is detected, the attack proceeds: the script decodes further payloads, including a decoy PDF and a batch file, and stores them in the victim's Downloads folder under seemingly innocent names such as "PositionApplied_VoyMedia.pdf."
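
The following sketch shows what a check of this kind can look like; it is an approximation based on the description above, not the actual "output.bat" logic. It assumes a Windows host with PowerShell available and simply inspects hard drive model strings for the signatures mentioned.

```python
import subprocess

# Sketch of the virtual-machine check described above (assumption: a
# Windows host with PowerShell). The malware reportedly scans hard drive
# strings for signatures like "QEMU" and "VirtualBox"; the same query
# lets an analyst see exactly what such a check would find.
VM_SIGNATURES = ("qemu", "virtualbox", "vmware")

def disk_models():
    result = subprocess.run(
        ["powershell", "-NoProfile",
         "Get-CimInstance Win32_DiskDrive | Select-Object -ExpandProperty Model"],
        capture_output=True, text=True,
    )
    return [line.strip() for line in result.stdout.splitlines() if line.strip()]

def looks_like_vm():
    return any(sig in model.lower() for model in disk_models() for sig in VM_SIGNATURES)

if __name__ == "__main__":
    print("VM signatures found" if looks_like_vm() else "No VM signatures found")
```

Hardening an analysis sandbox against this technique therefore means presenting realistic hardware strings to the guest.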


Decryption and Execution Methods

Once the PowerShell script has fully executed, it decrypts strings from the "output.bat" file using hardcoded keys and decompresses them through GZip streams. It then produces a .NET executable that runs entirely in memory, giving the malware further cover against detection by antivirus software.
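
A minimal sketch of that decode step follows. The report specifies only hardcoded keys and GZip streams, so the single-byte XOR cipher and key value here are assumptions for illustration; the point is that the payload is recovered entirely in memory.

```python
import gzip

# Minimal sketch of the decode step: strings are decrypted with a
# hardcoded key and decompressed via GZip before an executable is run
# in memory. The XOR cipher and key are assumptions; the report does
# not specify the actual cipher.
HARDCODED_KEY = 0x5A  # hypothetical key

def decrypt_and_decompress(blob: bytes) -> bytes:
    decrypted = bytes(b ^ HARDCODED_KEY for b in blob)
    return gzip.decompress(decrypted)

if __name__ == "__main__":
    # Round-trip demo: compress and "encrypt" a payload, then recover it
    # entirely in memory, never touching disk.
    payload = b"demo payload"
    blob = bytes(b ^ HARDCODED_KEY for b in gzip.compress(payload))
    assert decrypt_and_decompress(blob) == payload
    print("Payload recovered in memory")
```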

The malware itself also performs a whole cycle of checks to determine whether it is running in a sandbox or emulated environment. It looks for known file names and DLL modules common in virtualized settings, and it measures timing discrepancies to detect emulation. If any of these checks suggests a virtual environment, the malware throws an exception, bringing all subsequent activity to a halt.
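
The timing side of those checks can be sketched as follows. Many sandboxes fast-forward sleep calls to speed up analysis, so a sleep that returns far too early suggests a manipulated clock; the durations and threshold below are assumptions for illustration, not the malware's actual values.

```python
import time

# Illustrative timing check of the kind described above (a sketch, not
# the actual malware code). Sandboxes often skip or accelerate Sleep()
# calls, so a sleep that returns much earlier than requested hints that
# the clock is being manipulated.
def sleep_was_skipped(requested_ms: int = 2000, tolerance_ms: int = 500) -> bool:
    start = time.perf_counter()
    time.sleep(requested_ms / 1000)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return elapsed_ms < requested_ms - tolerance_ms

if __name__ == "__main__":
    if sleep_was_skipped():
        print("Sleep returned early: possible accelerated analysis environment")
    else:
        print("Sleep behaved normally")
```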

Once the malware has infected a system, it immediately checks for administrative privileges; if it lacks them, it uses PowerShell commands to escalate. After gaining administrative control, it ensures persistence by copying itself to a hidden folder inside the Windows directory and modifying the Windows registry so that it executes automatically at startup.
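
The report does not name the registry location, but a classic autostart spot (an assumption here) is the per-user "Run" key. The defensive sketch below simply enumerates that key so unexpected entries can be spotted; it requires Windows.

```python
import winreg

# Defensive sketch: enumerate the per-user "Run" key, a classic autostart
# location (the report says only that the registry is modified for
# startup; the exact key is an assumption). Unexpected entries here are
# worth investigating. Windows only.
RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def autostart_entries():
    entries = {}
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:  # no more values to enumerate
                break
            entries[name] = value
            index += 1
    return entries

if __name__ == "__main__":
    for name, command in autostart_entries().items():
        print(f"{name}: {command}")
```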


Defence Evasion and Further Damage 

To stay unnoticed, the malware employs supplementary defence evasion techniques. It disables Windows event tracing functions, making it more difficult for security software to track its activities, and it encrypts and compresses key components so that their actions are even harder to identify.

The last stage of the attack deploys Quasar RAT, which handles both data theft and long-term access to the infected system. This adapted version of Quasar RAT is less detectable, so security software will not easily identify or remove it.

In sum, this is a multi-stage malware attack against digital marketing professionals, especially those working in Meta advertising: a sophisticated and dangerous operation that combines phishing emails and PowerShell commands with advanced evasion techniques to make it even harder to detect and stop. Security experts advise extreme caution when handling email attachments, especially in non-virtualized environments, and stress that all software and systems must be kept up to date to prevent this kind of threat.


Harvard Student Uses Meta Ray-Ban 2 Glasses and AI for Real-Time Data Scraping

A recent demonstration by Harvard student AnhPhu Nguyen using Meta Ray-Ban 2 smart glasses has revealed the alarming potential for privacy invasion through advanced AI-powered facial recognition technology. Nguyen’s experiment involved using these $379 smart glasses, equipped with a livestreaming feature, to capture faces in real-time. He then employed publicly available software to scan the internet for more images and data related to the individuals in view. 

By linking facial recognition data with databases such as voter registration records and other publicly available sources, Nguyen was able to quickly gather sensitive personal information like names, addresses, phone numbers, and even social security numbers. This process takes mere seconds, thanks to the integration of an advanced Large Language Model (LLM) similar to ChatGPT, which compiles the scraped data into a comprehensive profile and sends it to Nguyen’s phone. Nguyen claims his goal is not malicious, but rather to raise awareness about the potential threats posed by this technology. 

To that end, he has even shared a guide on how to remove personal information from certain databases he used. However, the effectiveness of these solutions is minimal compared to the vast scale of potential privacy violations enabled by facial recognition software. In fact, the concern over privacy breaches is only heightened by the fact that many databases and websites have already been compromised by bad actors. Earlier this year, for example, hackers broke into the National Public Data background check company, stealing nearly three billion records and reportedly exposing Social Security numbers for a large share of the US population.

 This kind of privacy invasion will likely become even more widespread and harder to avoid as AI systems become more capable. Nguyen’s experiment demonstrated how easily someone could exploit a few small details to build trust and deceive people in person, raising ethical and security concerns about the future of facial recognition and data gathering technologies. While Nguyen has chosen not to release the software he developed, which he has dubbed “I-Xray,” the implications are clear. 

If a college student can achieve this level of access and sophistication, it is reasonable to assume that similar, if not more invasive, activities could already be happening on a much larger scale. This echoes the privacy warnings raised by whistleblowers like Edward Snowden, who have long warned of the hidden risks and pervasive surveillance capabilities in the digital age.