
What Happens When Spyware Hits a Phone and How to Stay Safe

 



Although advanced spyware attacks do not affect most smartphone users, cybersecurity researchers stress that awareness is essential as these tools continue to spread globally. Even individuals who are not public figures are advised to remain cautious.

In December, hundreds of iPhone and Android users received official threat alerts stating that their devices had been targeted by spyware. Shortly after these notifications, Apple and Google released security patches addressing vulnerabilities that experts believe were exploited to install the malware on a small number of phones.

Spyware poses an extreme risk because it allows attackers to monitor nearly every activity on a smartphone. This includes access to calls, messages, keystrokes, screenshots, notifications, and even encrypted platforms such as WhatsApp and Signal. Despite its intrusive capabilities, spyware is usually deployed in targeted operations against journalists, political figures, activists, and business leaders in sensitive industries.

High-profile cases have demonstrated the seriousness of these attacks. Former Amazon chief executive Jeff Bezos and Hanan Elatr, the wife of murdered Saudi dissident Jamal Khashoggi, were both compromised through Pegasus spyware developed by the NSO Group. These incidents illustrate how personal data can be accessed without user awareness.

Spyware activity remains concentrated within these circles, but researchers suggest its reach may be expanding. In early December, Google issued threat notifications and disclosed findings showing that an exploit chain had been used to silently install Predator spyware. Around the same time, the U.S. Cybersecurity and Infrastructure Security Agency warned that attackers were actively exploiting mobile messaging applications using commercial surveillance tools.

One of the most dangerous techniques is the zero-click attack, in which a device can be infected without the user clicking a link, opening a message, or downloading a file. According to Malwarebytes researcher Pieter Arntz, once a device is infected, attackers can read messages, track keystrokes, capture screenshots, monitor notifications, and access banking applications. Rocky Cole of iVerify adds that spyware can also extract emails and texts, steal credentials, send messages, and access cloud accounts.

Spyware may also spread through malicious links, fake applications, infected images, browser vulnerabilities, or harmful browser extensions. Recorded Future’s Richard LaTulip notes that recent research into malicious extensions shows how tools that appear harmless can function as surveillance mechanisms. These methods, often associated with nation-state actors, are designed to remain hidden and persistent.

Governments and spyware vendors frequently claim such tools are used only for law enforcement or national security. However, Amnesty International researcher Rebecca White says journalists, activists, and others worldwide have been unlawfully targeted with spyware as a method of repression. Thai activist Niraphorn Onnkhaow was targeted multiple times during pro-democracy protests between 2020 and 2021, eventually withdrawing from activism out of fear that her data could be misused.

Detecting spyware is challenging. Devices may show subtle signs such as overheating, performance issues, or unexpected camera or microphone activation. Official threat alerts from Apple, Google, or Meta should be treated seriously. Leaked private information can also indicate compromise.

To reduce risk, Apple offers Lockdown Mode, which limits certain functions to reduce attack surfaces. Apple security executive Ivan Krstić states that widespread iPhone malware has not been observed outside mercenary spyware campaigns. Apple has also introduced Memory Integrity Enforcement, an always-on protection designed to block memory-based exploits.

Google provides Advanced Protection for Android, enhanced in Android 16 with intrusion logging, USB safeguards, and network restrictions.

Experts recommend avoiding unknown links, limiting app installations, keeping devices updated, avoiding sideloading, and restarting phones periodically. However, confirmed infections often require replacing the device entirely. Organizations such as Amnesty International, Access Now, and Reporters Without Borders offer assistance to individuals who believe they have been targeted.
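Readers at elevated risk can go a step beyond watching for symptoms. Amnesty International's Security Lab maintains the open-source Mobile Verification Toolkit (MVT), which scans a phone backup for known indicators of compromise from campaigns such as Pegasus and Predator. The outline below sketches an iPhone check; it assumes MVT is installed via pip and that a local encrypted iTunes/Finder backup exists, and exact command flags can vary between MVT releases, so treat this as a starting point rather than a recipe.

    # Install the toolkit (requires Python 3)
    pip install mvt

    # Fetch the latest public indicators of compromise (IOCs)
    mvt-ios download-iocs

    # Decrypt a local encrypted backup (prompts for the backup password)
    mvt-ios decrypt-backup -d ~/mvt/decrypted ~/Library/Application\ Support/MobileSync/Backup/<device-udid>

    # Scan the decrypted backup against the downloaded IOCs
    mvt-ios check-backup --output ~/mvt/results ~/mvt/decrypted

Matches are written to the output folder as JSON files; a confirmed hit is a reason to contact one of the support organizations mentioned above rather than to attempt cleanup alone.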

Security specialists advise staying cautious without allowing fear to disrupt normal device use.

Instagram Refutes Breach Allegations After Claims of 17 Million User Records Circulating Online

 



Instagram has firmly denied claims of a new data breach following reports that personal details linked to more than 17 million accounts are being shared across online forums. The company stated that its internal systems were not compromised and that user accounts remain secure.

The clarification comes after concerns emerged around a technical flaw that allowed unknown actors to repeatedly trigger password reset emails for Instagram users. Meta, Instagram’s parent company, confirmed that this issue has been fixed. According to the company, the flaw did not provide access to accounts or expose passwords. Users who received unexpected reset emails were advised to ignore them, as no action is required.

Public attention intensified after cybersecurity alerts suggested that a large dataset allegedly connected to Instagram accounts had been released online. The data, which was reportedly shared without charge on several hacking forums, was claimed to have been collected through an unverified Instagram API vulnerability dating back to 2024.

The dataset is said to include information from over 17 million profiles. The exposed details reportedly vary by record and include usernames, internal account IDs, names, email addresses, phone numbers, and, in some cases, physical addresses. Analysis of the data shows that not all records contain complete personal details, with some entries listing only basic identifiers such as a username and account ID.

Researchers discussing the incident on social media platforms have suggested that the data may not be recent. Some claim it could originate from an older scraping incident, possibly dating back to 2022. However, no technical evidence has been publicly provided to support these claims. Meta has also stated that it has no record of Instagram API breaches occurring in either 2022 or 2024.

Instagram has previously dealt with scraping-related incidents. In one earlier case, a vulnerability allowed attackers to collect and sell personal information associated with millions of accounts. Due to this history, cybersecurity experts believe the newly surfaced dataset could be a collection of older information gathered from multiple sources over several years, rather than the result of a newly discovered vulnerability.

Attempts to verify the origin of the data have so far been unsuccessful. The individual responsible for releasing the dataset did not respond to requests seeking clarification on when or how the information was obtained.

At present, there is no confirmation that this situation represents a new breach of Instagram’s systems. No evidence has been provided to demonstrate that the data was extracted through a recently exploited flaw, and Meta maintains that there has been no unauthorized access to its infrastructure.

While passwords are not included in the leaked information, users are still urged to remain cautious. Such datasets are often used in phishing emails, scam messages, and social engineering attacks designed to trick individuals into revealing additional information.

Users who receive password reset emails or login codes they did not request should delete them and take no further action. Enabling two-factor authentication is strongly recommended, as it provides an added layer of security against unauthorized access attempts.
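Part of why two-factor authentication helps here: codes from an authenticator app are derived from a shared secret plus the current time, so a leaked dataset of usernames, emails, and phone numbers is not enough to produce them. As a rough illustration of the standard TOTP scheme (RFC 6238), here is a minimal Python sketch; the base32 secret is a made-up demo value, and real services verify codes server-side with tolerance for clock drift.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        # Decode the base32 shared secret that an authenticator app stores
        key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
        # The moving factor is the number of 30-second windows since the Unix epoch
        counter = struct.pack(">Q", int(time.time()) // period)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        # Dynamic truncation (RFC 4226): choose 4 bytes based on the last nibble
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; prints the current 6-digit code

Because the secret never travels with the account data that leaks in scrapes, an attacker armed with such a dataset still cannot complete a login on an account protected this way.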


Facebook Tests Paid Access for Sharing Multiple Links

 



Facebook is testing a new policy that places restrictions on how many external links certain users can include in their posts. The change, which is currently being trialled on a limited basis, introduces a monthly cap on link sharing unless users pay for a subscription.

Some users in the United Kingdom and the United States have received in-app notifications informing them that they will only be allowed to share a small number of links in Facebook posts without payment. To continue sharing links beyond that limit, users are offered a subscription priced at £9.99 per month.

Meta, the company that owns Facebook, has confirmed the test and described it as limited in scope. According to the company, the purpose is to assess whether the option to post a higher volume of link-based content provides additional value to users who choose to subscribe.

Industry observers say the experiment reflects Meta’s broader effort to generate revenue from more areas of its platforms. Social media analyst Matt Navarra said the move signals a shift toward monetising essential platform functions rather than optional extras.

He explained that the test is not primarily about identity verification. Instead, it places practical features that users rely on for visibility and reach behind a paid tier. In his view, Meta is now charging for what he describes as “survival features” rather than premium add-ons.

Meta already offers a paid service called Meta Verified, which provides subscribers on Facebook and Instagram with a blue verification badge, enhanced account support, and safeguards against impersonation. Navarra said that after attaching a price to these services, Meta now appears to be applying a similar approach to content distribution itself.

He noted that this includes the basic ability to direct users away from Facebook to external websites, a function that creators and businesses depend on to grow audiences, drive traffic, and promote services.

Navarra was among those who received a notification about the test. He said he was informed that from 16 December onward, he would only be able to include two links per month in Facebook posts unless he subscribed.

For creators and businesses, he said the message is clear. If Facebook plays a role in their audience growth or traffic strategy, that access may now require payment. He added that while platforms have been moving in this direction for some time, the policy makes it explicit.

The test comes as social media platforms increasingly encourage users to verify their accounts in exchange for added features or improved engagement. Platforms such as LinkedIn have also adopted similar models.

After acquiring Twitter, now known as X, in 2022, Elon Musk restructured the platform’s verification system. Blue verification badges were made available only to paying users, who also received increased visibility in replies and recommendation feeds.

That approach proved controversial and resulted in regulatory scrutiny, including a fine imposed by European authorities in December. Despite the criticism, Meta later introduced a comparable paid verification model.

Meta has also announced plans to introduce a “community notes” system, similar to X, allowing users to flag potentially misleading posts. This follows reductions in traditional moderation and third-party fact-checking efforts.

According to Meta, the link-sharing test applies only to a selected group of users who operate Pages or use Facebook’s professional mode. These tools are widely used by creators and businesses to publish content and analyse audience engagement.

Navarra said the test highlights a difficult reality for creators. He argued that Facebook is becoming less reliable as a source of external traffic and is increasingly steering users away from treating the platform as a traffic engine.

He added that the experiment reinforces a long-standing pattern. Meta, he said, ultimately designs its systems to serve its own priorities first.

According to analysts, tests like this underline the risks of building a business that depends too heavily on a single platform. Changes to access, visibility, or pricing can occur with little warning, leaving creators and businesses vulnerable.

Meta has emphasized that the policy remains a trial. However, the experiment illustrates how social media companies continue to reassess which core functions remain free and which are moving behind paywalls.

Meta Begins Removing Under-16 Users Ahead of Australia’s New Social Media Ban

 



Meta has started taking down accounts belonging to Australians under 16 on Instagram, Facebook and Threads, beginning a week before Australia’s new age-restriction law comes into force. The company recently alerted users it believes are between 13 and 15 that their profiles would soon be shut down, and the rollout has now begun.

Current estimates suggest that hundreds of thousands of accounts will be affected across Meta’s platforms. Since Threads operates through Instagram credentials, any underage Instagram account will also lose access to Threads.

Australia’s new policy, which becomes fully active on 10 December, prevents anyone under 16 from holding an account on major social media sites. This law is the first of its kind globally. Platforms that fail to take meaningful action can face penalties reaching up to 49.5 million Australian dollars. The responsibility to monitor and enforce this age limit rests with the companies, not parents or children.

A Meta spokesperson explained that following the new rules will require ongoing adjustments, as compliance involves several layers of technology and review. The company has argued that the government should shift age verification to app stores, where users could verify their age once when downloading an app. Meta claims this would reduce the need for children to repeatedly confirm their age across multiple platforms and may better protect privacy.

Before their accounts are removed, underage users can download and store their photos, videos and messages. Those who believe Meta has made an incorrect assessment can request a review and prove their age by submitting government identification or a short video-based verification.

The new law affects a wide list of services, including Facebook, Instagram, Snapchat, TikTok, Threads, YouTube, X, Reddit, Twitch and Kick. However, platforms designed for younger audiences or tools used primarily for education, such as YouTube Kids, Google Classroom and messaging apps like WhatsApp, are not included. Authorities have also been examining whether children are shifting to lesser-known apps, and companies behind emerging platforms like Lemon8 and Yope have already begun evaluating whether they fall under the new rules.

Government officials have stated that the goal is to reduce children’s exposure to harmful online material, which includes violent content, misogynistic messages, eating disorder promotion, suicide-related material and grooming attempts. A national study reported that the vast majority of children aged 10 to 15 use social media, with many encountering unsafe or damaging content.

Critics, however, warn that age verification tools may misidentify users, create privacy risks or fail to stop determined teenagers from using alternative accounts. Others argue that removing teens from regulated platforms might push them toward unmonitored apps, reducing online safety rather than improving it.

Australian authorities expect challenges in the early weeks of implementation but maintain that the long-term goal is to reduce risks for the youngest generation of online users.



Meta Cleared of Monopoly Charges in FTC Antitrust Case

 

A U.S. federal judge ruled that Meta does not hold a monopoly in the social media market, rejecting the FTC's antitrust lawsuit seeking divestiture of Instagram and WhatsApp. The FTC, joined by multiple states, filed the suit in December 2020, alleging Meta (formerly Facebook) violated Section 2 of the Sherman Act by acquiring Instagram for $1 billion in 2012 and WhatsApp for $19 billion in 2014. 

The agency alleged these moves were part of a "buy-or-bury" strategy to eliminate rivals in "personal social networking services" (PSNS), stifling innovation, increasing ads, and weakening privacy. It claimed Meta's dominance left consumers with few alternatives, excluding platforms like TikTok and YouTube from its narrow market definition.

Trial and ruling

U.S. District Judge James Boasberg oversaw a seven-week trial ending in May 2025, featuring testimony from Meta CEO Mark Zuckerberg, who highlighted competition from TikTok and YouTube. In an 89-page opinion on November 18, 2025, Boasberg ruled that the FTC failed to prove current monopoly power, noting the social media landscape's rapid evolution with surging apps, new features, and AI content. He emphasized that Meta's market share, below 50% and declining in a broader market that includes Snapchat, TikTok, and YouTube, showed no insulation from rivals.

Key arguments and evidence

The FTC presented internal emails suggesting Zuckerberg feared Instagram and WhatsApp as threats, arguing the acquisitions suppressed competition and harmed users via heavier ads and less privacy. Boasberg dismissed this, finding direct evidence such as supra-competitive profits or price hikes insufficient to prove monopoly, and rejected the PSNS market definition as outdated given overlapping uses across apps. Meta countered that regulators approved the deals initially and that forcing divestiture would hurt U.S. innovation.

Implications

Meta hailed the decision as affirming fierce competition and its contributions to growth, avoiding operational upheaval for its 3.54 billion daily users. The FTC expressed disappointment and is reviewing options, marking a setback amid wins against Google but ongoing cases versus Apple and Amazon. Experts view it as reinforcing consumer-focused antitrust in dynamic tech markets.

WhatsApp’s “We See You” Post Sparks Privacy Panic Among Users

 

WhatsApp found itself in an unexpected storm this week after a lighthearted social media post went terribly wrong. The Meta-owned messaging platform, known for emphasizing privacy and end-to-end encryption, sparked alarm when it posted a playful message on X that read, “people who end messages with ‘lol’ we see you, we honor you.” What was meant as a fun cultural nod quickly became a PR misstep, as users were unsettled by the phrase “we see you,” which seemed to contradict WhatsApp’s most fundamental promise—that it can’t see users’ messages at all. 

Within minutes, the post went viral, amassing over five million views and an avalanche of concerned replies. “What about end-to-end encryption?” several users asked, worried that WhatsApp was implying it had access to private conversations. The company quickly attempted to clarify the misunderstanding, replying, “We meant ‘we see you’ figuratively lol (see what we did there?). Your personal messages are protected by end-to-end encryption and no one, not even WhatsApp, can see them.” 

Despite the clarification, the irony wasn’t lost on users—or critics. A platform that has spent years assuring its three billion users that their messages are private had just posted a statement that could easily be read as the opposite. The timing and phrasing of the post made it a perfect recipe for confusion, especially given the long-running public skepticism around Meta’s privacy practices. WhatsApp continued to explain that the message was simply a humorous way to connect with users who frequently end their chats with “lol.” 

The company reiterated that nothing about its encryption or privacy commitments had changed, emphasizing that personal messages remain visible only to senders and recipients. “We see you,” they clarified, was intended as a metaphor for understanding user habits—not an admission of surveillance. The situation became even more ironic considering it unfolded on X, Elon Musk’s platform, where he has previously clashed with WhatsApp over privacy concerns. 

Musk has repeatedly criticized Meta’s handling of user data, and many expect him to seize on this incident as yet another opportunity to highlight his stance on digital privacy. Ultimately, the backlash served as a reminder of how easily tone can be misinterpreted when privacy is the core of your brand. A simple social media joke, meant to be endearing, became a viral lesson in communication strategy. 

For WhatsApp, the encryption remains intact, the messages still unreadable—but the marketing team has learned an important rule: never joke about “seeing” your users when your entire platform is built on not seeing them at all.

Privacy Laws Struggle to Keep Up with Meta’s ‘Luxury Surveillance’ Glasses


Meta’s newest smart glasses have reignited concerns about privacy, as many believe the company is inching toward a world where constant surveillance becomes ordinary. 

Introduced at Meta’s recent Connect event, the glasses reflect the kind of future that science fiction has long warned about, where everyone can record anyone at any moment and privacy nearly disappears. This is not the first time the tech industry has tried to make wearable cameras mainstream. 

More than ten years ago, Google launched Google Glass, which quickly became a public failure. People mocked its users as “Glassholes,” criticizing how easily the device could invade personal space. The backlash revealed that society was not ready for technology that quietly records others without their consent. 

Meta appears to have taken a different approach. By partnering with Ray-Ban, the company has created glasses that look fashionable and ordinary. Small cameras are placed near the nose bridge or along the outer rims, and a faint LED light is the only sign that recording is taking place. 

The glasses include a built-in display, voice-controlled artificial intelligence, and a wristband that lets the wearer start filming or livestreaming with a simple gesture. All recorded footage is instantly uploaded to Meta’s servers. 

Even with these improvements in design, the legal and ethical issues remain. Current privacy regulations are too outdated to deal with the challenges that come with such advanced wearable devices. 

Experts believe that social pressure and public disapproval may still be stronger than any law in discouraging misuse. As Meta promotes its vision of smart eyewear, critics warn that what is really being made normal is a culture of surveillance. 

The sleek design and luxury branding may make the technology more appealing, but the real risk lies in how easily people may accept being watched everywhere they go.

Tech Giants Pour Billions Into AI Race for Market Dominance

 

Tech giants are intensifying their investments in artificial intelligence, fueling an industry boom that has driven stock markets to unprecedented heights. Fresh earnings reports from Meta, Alphabet, and Microsoft underscore the immense sums being poured into AI infrastructure—from data centers to advanced chips—despite lingering doubts about the speed of returns.

Meta announced that its 2025 capital expenditures will range between $70 billion and $72 billion, slightly higher than its earlier forecast. The company also revealed plans for substantially larger spending growth in 2026 as it seeks to compete more aggressively with players like OpenAI.

During a call with analysts, CEO Mark Zuckerberg defended Meta’s aggressive investment strategy, emphasizing AI’s transformative potential in driving both new product development and enhancing its core advertising business. He described the firm’s infrastructure as operating in a “compute-starved” state and argued that accelerating spending was essential to unlocking future growth.

Alphabet, parent to Google and YouTube, also raised its annual capital spending outlook to between $91 billion and $93 billion—up from $85 billion earlier this year. This nearly doubles what the company spent in 2024 and highlights its determination to stay at the forefront of large-scale AI development.

Microsoft’s quarterly report similarly showcased its expanding investment efforts. The company disclosed $34.9 billion in capital expenditures through September 30, surpassing analyst expectations and climbing from $24 billion in the previous quarter. CEO Satya Nadella said Microsoft continues to ramp up AI spending in both infrastructure and talent to seize what he called a “massive opportunity.” He noted that Azure and the company’s broader portfolio of AI tools are already having tangible real-world effects.

Investor enthusiasm surrounding these bold AI commitments has helped lift the share prices of all three firms above the broader S&P 500 index. Still, Wall Street remains keenly interested in seeing whether these heavy capital outlays will translate into measurable profits.

Bank of America senior economist Aditya Bhave observed that robust consumer activity and AI-driven business investment have been the key pillars supporting U.S. economic resilience. As long as the latter remains strong, he said, it signals continued GDP growth. Despite an 83 percent profit drop for Meta due to a one-time tax charge, Microsoft and Alphabet reported profit increases of 12 percent and 33 percent, respectively.

EU Accuses Meta of Violating Digital Services Act Over Content Reporting Rules

 

The European Commission has accused Meta of breaching the European Union’s Digital Services Act (DSA), alleging that Facebook and Instagram fail to provide users with simple and accessible ways to report illegal content. 

In a preliminary ruling, the Commission said Meta’s platforms use “dark patterns” or deceptive design techniques that make it unnecessarily difficult for users to flag material such as child sexual abuse or terrorist content. 

“Neither Facebook nor Instagram appear to provide a user-friendly and easily accessible ‘Notice and Action’ mechanism,” the Commission said in a statement. “Meta’s systems impose several unnecessary steps and additional demands on users.” 

The EC also found that Meta’s appeal processes do not allow users to present explanations or evidence when contesting content moderation decisions, limiting their ability to challenge removals or restrictions. 

If the findings are confirmed, Meta could face penalties of up to 6% of its global annual turnover, along with possible periodic fines for non-compliance. Meta has the opportunity to respond before a final decision is issued. 

Meta pushes back 

Meta said it disagrees with the European Commission’s interpretation and maintains that its operations comply with the DSA. “We disagree with any suggestion that we have breached the DSA,” the company said. 

“We have made significant changes to our content reporting options, appeals process, and data access tools since the law came into force, and we believe these meet the EU’s requirements.”

Transatlantic tensions rise 

The case comes amid mounting tensions between Brussels and Washington over the regulation of US tech giants. The Trump administration has warned that EU measures targeting American firms could trigger new tariffs. US Federal Trade Commission (FTC) Chair Andrew Ferguson recently sent letters to several technology companies, cautioning that “censoring Americans to comply with a foreign power’s laws” could violate US law. 

TikTok also under scrutiny 

Meta is not alone in facing EU scrutiny. The Commission also said it had preliminary evidence that Meta and TikTok failed to provide adequate data access to independent researchers, another key DSA requirement. The EC argued that the platforms’ processes for granting researchers access to public data are “burdensome” and result in “partial or unreliable data”, undermining studies on issues such as online harms to minors. TikTok, for its part, said it remains “committed to transparency” and has shared data with nearly 1,000 research teams. However, the company warned that some DSA requirements may conflict with Europe’s data privacy law, the GDPR. 

“If it is not possible to fully comply with both, we urge regulators to provide clarity on how these obligations should be reconciled,” TikTok said. 

What’s next 

The EU’s investigation adds to the growing list of challenges facing global social media companies under the DSA, a sweeping law designed to increase accountability and transparency in online platforms. 

If confirmed, the ruling could set a major precedent for enforcement under the DSA, which has already prompted major compliance efforts across the tech industry.

EU Accuses Meta of Breaching Digital Rules, Raises Questions on Global Tech Compliance

 




The European Commission has accused Meta Platforms, the parent company of Facebook and Instagram, of violating the European Union’s Digital Services Act (DSA) by making it unnecessarily difficult for users to report illegal online content and challenge moderation decisions.

In its preliminary findings, the Commission said both platforms lack a user-friendly “Notice and Action” system — the mechanism that allows people to flag unlawful material such as child sexual abuse content or terrorist propaganda. Regulators noted that users face multiple steps and confusing options before they can file a report. The Commission also claimed that Meta’s interface relies on “dark patterns”, which are design features that subtly discourage users from completing certain actions, such as submitting reports.

According to the Commission, Meta’s appeal process also falls short of DSA requirements. The current system allegedly prevents users from adding explanations or submitting supporting evidence when disputing a moderation decision. This, the regulator said, limits users’ ability to express why they believe a decision was unfair and weakens the overall transparency of Meta’s content moderation practices.

The European Commission’s findings are not final, and Meta has the opportunity to respond before any enforcement action is taken. If the Commission confirms these violations, it could issue a non-compliance decision, which may result in penalties of up to 6 percent of Meta’s global annual revenue. The Commission may also impose recurring fines until the company aligns its operations with EU law.

Meta, in a public statement, said it “disagrees with any suggestion” that it breached the DSA. The company stated that it has already made several updates to comply with the law, including revisions to content reporting options, appeals procedures, and data access tools.

The European Commission also raised similar concerns about TikTok, saying that both companies have limited researchers’ access to public data on their platforms. The DSA requires large online platforms to provide sufficient data access so independent researchers can analyze potential harms — for example, whether minors are exposed to illegal or harmful content. The Commission’s review concluded that the data-access tools of Facebook, Instagram, and TikTok are burdensome and leave researchers with incomplete or unreliable datasets, which hinders academic and policy research.

TikTok responded that it has provided data to almost 1,000 research teams and remains committed to transparency. However, the company noted that the DSA’s data-sharing obligations sometimes conflict with the General Data Protection Regulation (GDPR), making it difficult to comply with both laws simultaneously. TikTok urged European regulators to offer clarity on how these two frameworks should be balanced.

Beyond Europe, the investigation may strain relations with the United States. American officials have previously criticized the EU for imposing regulatory burdens on U.S.-based tech firms. U.S. FTC Chairman Andrew Ferguson recently warned companies that censoring or modifying content to satisfy foreign governments could violate U.S. law. Former President Donald Trump has also expressed opposition to EU digital rules and even threatened tariffs against countries enforcing them.

For now, the Commission’s investigation continues. If confirmed, the case could set a major precedent for how global social media companies manage user safety, transparency, and accountability under Europe’s strict online governance laws.


Meta to Use AI Chat Data for Targeted Ads Starting December 16

 

Meta, the parent company of social media giants Facebook and Instagram, will soon begin leveraging user conversations with its AI chatbot to drive more precise targeted advertising on its platforms. 

Starting December 16, Meta will integrate data from interactions users have with the generative AI chat tool directly into its ad targeting algorithms. For instance, if a user tells the chatbot about a preference for pizza, this information could translate to seeing additional pizza-related ads, such as Domino's promotions, across Instagram and Facebook feeds.

Notably, users do not have the option to opt out of this new data usage policy, sparking debates and concerns over digital privacy. Privacy advocates and everyday users alike have expressed discomfort with the increasing granularity of Meta’s ad targeting, as hyper-targeted ads are widely perceived as intrusive and reflective of a broader erosion of personal privacy online. 

In response to these growing concerns, Meta claims there are clear boundaries regarding what types of conversational data will be incorporated into ad targeting. The company lists several sensitive categories it pledges to exclude: religious beliefs, political views, sexual orientation, health information, and racial or ethnic origin. Despite these assurances, skepticism remains about how effectively Meta can prevent indirect influences on ad targeting, since related topics might naturally slip into AI interactions even without explicit references.

Industry commentators have highlighted the novelty and controversial nature of Meta’s move, referring to it as marking a 'new frontier in digital privacy.' Some users are openly calling for boycotts of Meta’s chat features or responding with jaded irony, pointing out that Meta's business model has always relied on user data monetization.

Meta's policy will initially exclude the United Kingdom, South Korea, and all countries in the European Union, likely due to stricter privacy regulations and ongoing scrutiny by European authorities. The new initiative fits into Meta CEO Mark Zuckerberg’s broader strategy to capitalize on AI, with the company planning a massive $600 billion investment in AI infrastructure over the coming years. 

With this policy shift, over 3.35 billion daily active users worldwide—except in the listed exempted regions—can expect changes in the nature and specificity of the ads they see across Meta’s core platforms. The change underscores the ongoing tension between user privacy and tech companies’ drive for personalized digital advertising.

Meta's Platforms Rank Worst in Social Media Privacy Rankings: Report

Meta’s Instagram, WhatsApp, and Facebook have once again been flagged as the most privacy-violating social media apps. According to Incogni’s Social Media Privacy Ranking report 2025, Meta and TikTok are at the bottom of the list. Elon Musk’s X (formerly Twitter) has also received poor rankings in various categories, but has done better than Meta in a few categories.

Discord, Pinterest, and Quora perform well

The report analyzed 15 of the most widely used social media platforms globally, measuring them against 14 privacy criteria organized into six different categories: AI data use, user control, ease of access, regulatory transgressions, transparency, and data collection. The research methodology focused on how an average user could understand and control privacy policies.

Discord, Pinterest, and Quora performed best in the 2025 ranking. Discord placed first, thanks to its stance against handing over user data for AI model training. Pinterest ranked second on the strength of its user controls and fewer regulatory penalties, while Quora came third for its limited collection of user data.

Why were Meta platforms penalized?

Meta’s platforms, however, were penalized heavily across several categories. Facebook was docked for frequent regulatory fines, such as GDPR penalties in Europe and further penalties in the US and other regions. Instagram and WhatsApp received heavy penalties for policies that allow the collection of sensitive personal data, such as sexual orientation and health information. X faced penalties for vast data collection.

Penalties against X

X was penalized for vast data collection and past privacy fines, but it still ranked above Meta and TikTok in some categories. X was among the easiest platforms to delete an account from, and it provided information to government organizations at a lower rate than other platforms. However, X allows user data to be used for training AI models, which dragged down its overall privacy score.

“One of the core principles motivating Incogni’s research here is the idea that consent to have personal information gathered and processed has to be properly informed to be valid and meaningful. It’s research like this that arms users with not only the facts but also the tools to inform their choices,” Incogni said in its blog. 

FileFix Attack Uses Fake Meta Suspensions to Spread StealC Malware

 

A new cyber threat known as the FileFix attack is gaining traction, using deceptive tactics to trick users into downloading malware. According to Acronis, which first identified the campaign, hackers are sending fake Meta account suspension notices to lure victims into installing the StealC infostealer. Reported by Bleeping Computer, the attack relies on social engineering techniques that exploit urgency and fear to convince targets to act quickly without suspicion. 

The StealC malware is designed to extract sensitive information from multiple sources, including cloud-stored credentials, browser cookies, authentication tokens, messaging platforms, cryptocurrency wallets, VPNs, and gaming accounts. It can also capture desktop screenshots. Victims are directed to a fake Meta support webpage available in multiple languages, warning them of imminent account suspension. The page urges users to review an “incident report,” which is disguised as a PowerShell command. Once executed, the command installs StealC on the victim’s device. 

To execute the attack, users are instructed to copy a path that appears legitimate but contains hidden malicious code and subtle formatting tricks, such as extra spaces, making it harder to detect. Unlike traditional ClickFix attacks, which use the Windows Run dialog box, FileFix leverages the Windows File Explorer address bar to execute malicious commands. This method, attributed to the researcher known as mr.d0x, makes the attack harder for casual users to recognize.
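To make the mechanics concrete, below is a simplified, defanged reconstruction of the kind of clipboard payload the report describes; the file path is invented, and a harmless Write-Host stands in for the real StealC installer:

    powershell.exe -c "Write-Host 'demo: payload would run here'"                                                  # C:\company\internal-secure\Incident_Report.pdf

The long run of spaces pushes the command itself out of the visible portion of File Explorer’s address bar, leaving only the plausible-looking path in view when the victim pastes. Pressing Enter makes Explorer hand the entire line to PowerShell, and the leading # turns the fake path into an inert comment. Simply refusing to paste anything from a webpage into the address bar defeats the trick entirely.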

Acronis has emphasized the importance of user awareness and training, particularly educating people on the risks of copying commands or paths from suspicious websites into system interfaces. Recognizing common phishing red flags—such as urgent language, unexpected warnings, and suspicious links—remains critical. Security experts recommend that users verify account issues by directly visiting official websites rather than following embedded links in unsolicited emails. 

Additional protective measures include enabling two-factor authentication (2FA), which provides an extra security layer even if login credentials are stolen, and ensuring that devices are protected with up-to-date antivirus solutions. Advanced features such as VPNs and hardened browsers can also reduce exposure to such threats. 

Cybersecurity researchers warn that both FileFix and its predecessor ClickFix are likely to remain popular among attackers until awareness becomes widespread. As these techniques evolve, sharing knowledge within organizations and communities is seen as a key defense. At the same time, maintaining strong cyber hygiene and securing personal devices are essential to reduce the risk of falling victim to these increasingly sophisticated phishing campaigns.

Meta Overhauls AI Chatbot Safeguards for Teenagers

 

Meta has announced new artificial intelligence safeguards to protect teenagers following a damaging Reuters investigation that exposed internal company policies allowing inappropriate chatbot interactions with minors. The social media giant is now training its AI systems to avoid flirtatious conversations and discussions about self-harm or suicide with teenage users. 

Background investigation 

The controversy began when Reuters uncovered an internal 200-page Meta document titled "GenAI: Content Risk Standards" that permitted chatbots to engage in "romantic or sensual" conversations with children as young as 13. 

The document contained disturbing examples of acceptable AI responses, including "Your youthful form is a work of art" and "Every inch of you is a masterpiece – a treasure I cherish deeply". These guidelines had been approved by Meta's legal, public policy, and engineering teams, including the company's chief ethicist. 

Immediate safety measures 

Meta spokesperson Andy Stone announced that the company is implementing immediate interim measures while developing more comprehensive long-term solutions for teen AI safety. The new safeguards include training chatbots to avoid discussing self-harm, suicide, disordered eating, and potentially inappropriate romantic topics with teenage users. Meta is also temporarily limiting teen access to certain AI characters that could hold inappropriate conversations.

Some of Meta's user-created AI characters include sexualized chatbots such as "Step Mom" and "Russian Girl," which will now be restricted for teen users. Instead, teenagers will only have access to AI characters that promote education and creativity. The company acknowledged that these policy changes represent a reversal from previous positions where it deemed such conversations appropriate. 

Government response and investigation

The revelations sparked swift political backlash. Senator Josh Hawley launched an official investigation into Meta's AI policies, demanding documentation about the guidelines that enabled inappropriate chatbot interactions with minors. A coalition of 44 state attorneys general wrote to AI companies including Meta, expressing they were "uniformly revolted by this apparent disregard for children's emotional well-being". 

Senator Edward Markey has urged Meta to completely prevent minors from accessing AI chatbots on its platforms, citing concerns that Meta incorporates teenagers' conversations into its AI training process. The Federal Trade Commission is now preparing to scrutinize the mental health risks of AI chatbots to children and will demand internal documents from major tech firms including Meta. 

Implementation timeline 

Meta confirmed that the revised document was "inconsistent with its broader policies" and has since removed sections allowing chatbots to flirt or engage in romantic roleplay with minors. Company spokesperson Stephanie Otway acknowledged these were mistakes, stating the updates are "already in progress" and the company will "continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI". 

The controversy highlights broader concerns about AI chatbot safety for vulnerable users, particularly as large companies integrate these tools directly into widely-used platforms where the vast majority of young people will encounter them.

Meta.ai Privacy Lapse Exposes User Chats in Public Feed

 

Meta’s new AI-driven chatbot platform, Meta.ai, launched recently with much fanfare, offering features like text and voice chats, image generation, and video restyling. Designed to rival platforms like ChatGPT, the app also includes a Discover feed, a space intended to showcase public content generated by users. However, what Meta failed to communicate effectively was that many users were unintentionally sharing their private conversations in this feed—sometimes with extremely sensitive content attached. 

In May, journalists flagged the issue when they discovered public chats revealing deeply personal user concerns—ranging from financial issues and health anxieties to legal troubles. These weren’t obscure posts either; they appeared in a publicly accessible area of the app, often containing identifying information. Conversations included users seeking help with medical diagnoses, children talking about personal experiences, and even incarcerated individuals discussing legal strategies—none of whom appeared to realize their data was visible to others. 

Despite some recent tweaks to the app’s sharing settings, disturbing content still appears on the Discover feed. Users unknowingly uploaded images and video clips, sometimes including faces, alongside alarming or bizarre prompts. One especially troubling instance featured a photo of a child at school, accompanied by a prompt instructing the AI to “make him cry.” Such posts reflect not only poor design choices but also raise ethical questions about the purpose and moderation of the Discover feed itself. 

The issue evokes memories of other infamous data exposure incidents, such as AOL’s release of anonymized user searches in 2006, which provided unsettling insight into private thoughts and behaviors. While social media platforms are inherently public, users generally view AI chat interactions as private, akin to using a search engine. Meta.ai blurred that boundary—perhaps unintentionally, but with serious consequences. Many users turned to Meta.ai seeking support, companionship, or simple productivity help. Some asked for help with job listings or obituary writing, while others vented emotional distress or sought comfort during panic attacks. 

In some cases, users left chats expressing gratitude—believing the bot had helped. But a growing number of conversations end in frustration or embarrassment when users realize the bot cannot deliver on its promises or that their content was shared publicly. These incidents highlight a disconnect between how users engage with AI tools and how companies design them. Meta’s ambition to merge AI capabilities with social interaction seems to have ignored the emotional and psychological expectations users bring to private-sounding features. 

For those using Meta.ai as a digital confidant, the lack of clarity around privacy settings has turned an experiment in convenience into a public misstep. As AI systems become more integrated into daily life, companies must rethink how they handle user data—especially when users assume privacy. Meta.ai’s rocky launch serves as a cautionary tale about transparency, trust, and design in the age of generative AI.

Meta Introduces Advanced AI Tools to Help Businesses Create Smarter Ads


Meta has rolled out a fresh set of AI-powered tools aimed at helping advertisers design more engaging and personalized promotional content. These new features include the ability to turn images into short videos, brand-focused image generation, AI-powered chat assistants, and tools that enhance shopping experiences within ads.

One of the standout additions is Meta’s video creation feature, which allows businesses to transform multiple images into animated video clips. These clips can include music and text, making it easier for advertisers to produce dynamic visual content without needing video editing skills. Because the videos are short, they’re less likely to appear distorted or lose quality, a common issue in longer AI-generated videos.

Currently, this feature is being tested with select business partners.

Another tool in development is “Video Highlights,” which uses AI to identify the most important parts of a video. Viewers will be able to jump directly to these key scenes, guided by short phrases and image previews chosen by the system. This can help businesses convey their product value more clearly and keep viewers engaged.

Meta is also enhancing its AI image creation tools. Advertisers will now be able to insert their logos and brand colors directly into the images generated by AI. This ensures that their brand identity stays consistent across all marketing content. Additionally, AI-generated ad text can now reflect the personality or style of the brand, offering a more customized tone in promotions.

Another major update is the introduction of “Business AIs”, specialized chat assistants embedded within ads. These bots are designed to answer common customer questions about a product or service. Available in both text and voice formats, these virtual assistants aim to improve customer interaction by addressing queries instantly and guiding users toward making a purchase.

Meta is also experimenting with new features like clickable call-to-action (CTA) stickers for Stories and Reels ads, and virtual try-on tools that use AI to display clothing on digital models of various body types.

These developments are part of Meta’s broader push to make advertising more efficient through automation. The company’s Advantage+ ad system is already showing results, with Meta reporting a 22% average increase in return on ad spend (ROAS) for brands using this approach. Advantage+ uses AI to analyze user behavior, optimize ad formats, and identify potential customers based on real-time data.

While AI is unlikely to replace human creativity entirely, these tools can simplify the ad creation process and help brands connect with their audiences more effectively. 

Beware of Pig Butchering Scams That Steal Your Money


Pig butchering, a term borrowed from the meat trade, has sadly become a devastating form of cybercrime that can wipe out victims’ finances entirely.

Pig Butchering is a “form of investment fraud in the crypto space where scammers build relationships with targets through social engineering and then lure them to invest crypto in fake opportunities or platforms created by the scammer,” according to The Department of Financial Protection & Innovation. 

Pig butchering has squeezed billions of dollars from victims globally. The Cambodia-based Huione Group stole over $4 billion between August 2021 and January 2025, the New York Post reported.

How to stay safe from pig butchering?

Individuals should watch out for warning signs to avoid getting caught in these schemes. Scammers often target seniors and people who are less familiar with cybercrime. The National Council on Aging cautions that such scams typically begin with messages from scammers pretending to be someone else. Never respond or send money to strangers who contact you online, even if the story sounds compelling. Scammers rely on earning your trust, and a sob story is one easy way for them to do it.

Another red flag is an SMS or social media message that steers you to another platform such as WeChat or Telegram, where there is less oversight. Scammers then convince victims to invest money, promising to return it with big profits. In one incident, a scammer even asked the victim to “go to a loan shark” to get the money.

Stopping scammers

Last year, Meta blocked over 2 million accounts promoting crypto investment scams such as pig butchering. Businesses have stepped up efforts to combat the problem, but it very much persists. A major step is raising awareness through public posts broadcasting safety tips, so that individuals do not fall prey to such scams.

Organizations have started placing warnings in Instagram DMs and Facebook Messenger alerting users to “potentially suspicious interactions or cold outreach from people you don’t know”, a welcome initiative. Banks have also begun tipping off customers to the dangers of scams when they send money online.

Want to Leave Facebook? Do this.


Confused about leaving Facebook?

Many people are changing their social media habits and opting out of services. Facebook has seen a large exodus of users since the announcement in March that Meta was ending independent fact-checking on its platform. Fact-checking has been replaced with community notes, which let users add context to potentially false or misleading posts.

Users with years of photos and posts on Facebook are often unsure how to collect their data before removing their accounts. If that describes you, this post will help you delete Facebook permanently while taking all your information with you on the way out.

How to remove Facebook?

If you do not want to be on Facebook anymore, deleting your account is the only way to remove yourself from the platform completely. If you are not sure, deactivating your account lets you take a break from Facebook without deleting it.

Make sure to remove third-party Facebook logins before deleting your account. 

How to leave third-party apps?

Third-party apps like DoorDash and Spotify let you log in using your Facebook account. This saves you from remembering another password, but if you plan to delete Facebook, you must update your login settings first: once your account is gone, there will be no Facebook account to log in through.

Fortunately, there is a simple way to find which of your sites and applications are connected to Facebook and disconnect them before removing your account. Once you have disconnected them, you will need to adjust how you log in to each one.

Visit each application or website to set a new password or passkey, or switch to a single sign-on option such as Google.

How is deleting different from deactivating a Facebook account?

If you want to step away from Facebook, you have two choices: delete your account permanently, or deactivate it temporarily.

WhatsApp Launches First Dedicated iPad App with Full Multitasking and Calling Features

 

After years of anticipation, WhatsApp has finally rolled out a dedicated iPad app, allowing users to enjoy the platform’s messaging capabilities natively on Apple’s tablet. Available now for download via the App Store, this new version is built to take advantage of iPadOS’s multitasking tools such as Stage Manager, Split View, and Slide Over, marking a major step forward in cross-device compatibility for the platform. 

Previously, iPad users had to rely on WhatsApp Web or third-party solutions to access their chats on the tablet. These alternatives lacked several core functionalities and offered limited support for features like voice and video calls. With this release, users can now sync messages across devices, initiate calls, and send media from their iPad with the same ease and security offered on the iPhone app. 

In its official blog post, WhatsApp highlighted how the new app enhances productivity and communication. Users can, for instance, participate in group calls while researching online or send messages during video meetings — all within the multitasking-friendly iPad interface. The app also supports accessories like Apple’s Magic Keyboard and Apple Pencil, further streamlining the messaging experience. The absence of an iPad-specific version until now had often puzzled users, especially given WhatsApp’s massive global user base and Meta’s (formerly Facebook) ownership since 2014. 

Although the iPhone version has long dominated mobile messaging, WhatsApp never clarified why a tablet version wasn’t prioritized — despite the iPad being one of the most popular tablets worldwide. This launch now allows users to take full advantage of WhatsApp’s ecosystem on a larger screen without needing workarounds. Unlike WhatsApp Web, the new native app can access the device’s cameras and offer a richer interface for media sharing and video calls. 

With this, WhatsApp fills a major gap in its product offering and joins competitors like Telegram, which has long offered a native iPad experience. Interestingly, WhatsApp’s tweet teasing the launch included a playful emoji in response to a user request, generating buzz before the official announcement. In contrast, Telegram jokingly responded with a tweet poking fun at the delayed release.

With over 3 billion active users globally — including more than 500 million in India — WhatsApp’s move to embrace the iPad platform marks a significant upgrade in its commitment to universal accessibility and user experience.

“Meta Mirage” Phishing Campaign Poses Global Cybersecurity Threat to Businesses

 

A sophisticated phishing campaign named Meta Mirage is targeting companies using Meta’s Business Suite, according to a new report by cybersecurity experts at CTM360. This global threat is specifically engineered to compromise high-value accounts—including those running paid ads and managing brand profiles.

Researchers discovered that the attackers craft convincing fake communications impersonating official Meta messages, deceiving users into revealing sensitive login information such as passwords and one-time passcodes (OTP).

The scale of the campaign is substantial. Over 14,000 malicious URLs were detected, and alarmingly, nearly 78% of these were not flagged or blocked by browsers when the report was released.

What makes Meta Mirage particularly deceptive is the use of reputable cloud hosting services—like GitHub, Firebase, and Vercel—to host counterfeit login pages. “This mirrors Microsoft’s recent findings on how trusted platforms are being exploited to breach Kubernetes environments,” the researchers noted, highlighting a broader trend in cloud abuse.

Victims receive realistic alerts through email and direct messages. These notifications often mention policy violations, account restrictions, or verification requests, crafted to appear urgent and official. This strategy is similar to the recent Google Sites phishing wave, which used seemingly authentic web pages to mislead users.

CTM360 identified two primary techniques being used:
  • Credential Theft: Victims unknowingly submit passwords and OTPs to lookalike websites. Fake error prompts are displayed to make them re-enter their information, ensuring attackers get accurate credentials.
  • Cookie Theft: Attackers extract browser cookies, allowing persistent access to compromised accounts—even without login credentials.
Compromised business accounts are then weaponized for malicious ad campaigns. “It’s a playbook straight from campaigns like PlayPraetor, where hijacked social media profiles were used to spread fraudulent ads,” the report noted.
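One lightweight, programmatic guard against lookalike login pages of this kind is to check a link’s registrable domain before trusting it, since counterfeit pages parked on generic cloud hosts do not share Meta’s domains. The minimal Python sketch below illustrates the idea; the allow-list and example URLs are purely illustrative.

    from urllib.parse import urlparse

    # Domains treated as legitimate for Meta logins (illustrative allow-list)
    TRUSTED_DOMAINS = {"facebook.com", "meta.com", "instagram.com"}

    def is_trusted_login_url(url: str) -> bool:
        """True only if the URL's host is a trusted domain or one of its subdomains."""
        host = (urlparse(url).hostname or "").lower()
        return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

    # A lookalike page on a generic cloud host fails the check:
    print(is_trusted_login_url("https://business-meta-support.pages.example/login"))  # False
    print(is_trusted_login_url("https://www.facebook.com/login"))                     # True

A check like this is no substitute for 2FA, but it captures the habit security teams drill: trust the actual domain in the address bar, not the branding on the page.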

The phishing operation is systematic. Attackers begin with non-threatening messages, then escalate the tone over time—moving from mild policy reminders to aggressive warnings about permanent account deletion. This psychological pressure prompts users to respond quickly without verifying the source.

CTM360 advises businesses to:
  • Manage social media accounts only from official or secure devices
  • Use business-specific email addresses
  • Activate Two-Factor Authentication (2FA)
  • Periodically audit security settings and login history
  • Train team members to identify and report suspicious activity
This alarming phishing scheme highlights the need for constant vigilance, cybersecurity hygiene, and proactive measures to secure digital business assets.