
Meta Introduces AI Features For Ray-Ban Glasses in Europe

 

Meta has officially introduced certain AI functions for its Ray-Ban Meta smart glasses in France, Italy, and Spain, marking a significant step in the company's rollout of its wearable technology across Europe.

Starting earlier this week, customers in these countries have been able to interact with Meta's AI assistant solely through their voice, asking general questions and receiving responses through the glasses.

As part of Meta's larger initiative to make its AI assistant more widely available, this latest deployment covers French, Italian, and Spanish in addition to English. The announcement was made nearly a year after the Ray-Ban Meta spectacles were first released in September 2023.

In a blog post outlining the update, Meta stated, "We are thrilled to introduce Meta AI and its cutting-edge features to regions of the EU, and we look forward to expanding to more European countries soon." However, not all of the features accessible in other regions will be included in the European rollout.

While customers in the United States, Canada, and Australia benefit from multimodal AI capabilities on their Ray-Ban Meta glasses, such as the ability to gain information about objects in view of the glasses' camera, these functions will not be included in the European update at present.

For example, users in the United States can ask their glasses to identify landmarks in their surroundings, such as "Tell me more about this landmark," but these functionalities are not available in Europe due to ongoing regulatory issues. 

Meta has stated its commitment to dealing with Europe's complicated legal environment, specifically the EU's AI Act and the General Data Protection Regulation (GDPR). The company indicated that it is aiming to offer multimodal capabilities to more countries in the future, but there is no set date. 

While the rollout in France, Italy, and Spain marks a significant milestone, Meta's journey in the European market is far from done. As the firm navigates the regulatory landscape and expands its AI solutions, users in Europe can expect more updates and new features for their Ray-Ban Meta glasses in the coming months. 

As Meta continues to grow its devices and expand its AI capabilities, all eyes will be on how the firm adjusts to Europe's legal system and how this will impact the future of AR technology worldwide.

Supreme Court Weighs Shareholder Lawsuit Against Meta Over Data Disclosure

 

The U.S. Supreme Court is deliberating on a high-stakes shareholder lawsuit involving Meta (formerly Facebook), where investors claim the tech giant misled them by omitting crucial data breach information from its risk disclosures. The case, Facebook v. Amalgamated Bank, centers around the Cambridge Analytica scandal, where a British firm accessed data on millions of users to influence U.S. elections. While Meta had warned of potential misuse of data in its annual filings, it did not disclose that a significant breach had already occurred, potentially impacting investors’ trust. During oral arguments, liberal justices voiced concerns over the omission. 

Justice Elena Kagan likened the situation to a company that warns about fire risks but withholds that a recent fire already caused severe damage. Such a lack of disclosure, she argued, could be misleading to “reasonable investors.” The plaintiffs’ attorney, Kevin Russell, echoed this sentiment, asserting that Facebook’s omission misrepresented the severity of risks investors faced. On the other hand, conservative justices expressed concerns about expanding disclosure requirements. Chief Justice John Roberts questioned whether mandating disclosures of all past events might lead to over-disclosure, which could overwhelm investors with excessive details. Justice Brett Kavanaugh suggested the SEC, rather than the courts, might be better positioned to clarify standards for corporate disclosures. 

The Biden administration supports the plaintiffs, with Assistant Solicitor General Kevin Barber describing the case as an example of a misleading “half-truth.” Meta’s attorney, Kannon Shanmugam, argued that such broad requirements could dissuade companies from sharing forward-looking risk factors, fearing potential lawsuits for any past incident. Previously, the Ninth Circuit found Meta’s general warnings about potential risks misleading, given the company’s awareness of the Cambridge Analytica breach. The appeals court held that such omissions could harm investors by implying that no significant misuse had occurred.

If the Supreme Court sides with the plaintiffs, companies could face new expectations to disclose known incidents, particularly those affecting data security or reputational risk. Such a ruling could reshape corporate disclosure practices, particularly for tech firms managing sensitive data. Alternatively, a ruling in favor of Meta may uphold the existing regulatory framework, granting companies more discretion in defining disclosure content. This decision will likely set a significant precedent for how companies balance transparency with investors and risk management.

Meta Struggles to Curb Misleading Ads on Hacked Facebook Pages

 

Meta, the parent company of Facebook, has come under fire for its failure to adequately prevent misleading political ads from being run on hacked Facebook pages. A recent investigation by ProPublica and the Tow Center for Digital Journalism uncovered that these ads, which exploited deepfake audio of prominent figures like Donald Trump and Joe Biden, falsely promised financial rewards. Users who clicked on these ads were redirected to forms requesting personal information, which was subsequently sold to telemarketers or used in fraudulent schemes. 

One of the key networks involved, operating under the name Patriot Democracy, hijacked more than 340 Facebook pages, including verified accounts like that of Fox News meteorologist Adam Klotz. The network used these pages to push over 160,000 deceptive ads related to elections and social issues, with a combined reach of nearly 900 million views across Facebook and Instagram. The investigation highlighted significant loopholes in Meta’s ad review and enforcement processes. While Meta did remove some of the ads, it failed to catch thousands of others, many with identical or similar content. Even after taking down problematic ads, the platform allowed the associated pages to remain active, enabling the perpetrators to continue their operations by spawning new pages and running more ads. 

Meta’s policies require ads related to elections or social issues to carry “paid for by” disclaimers, identifying the entities behind them. However, the investigation revealed that many of these disclaimers were misleading, listing nonexistent entities. This loophole allowed deceptive networks to continue exploiting users with minimal oversight. The company defended its actions, stating that it invests heavily in trust and safety, utilizing both human and automated systems to review and enforce policies. A Meta spokesperson acknowledged the investigation’s findings and emphasized ongoing efforts to combat scams, impersonation, and spam on the platform. 

However, critics argue that these measures are insufficient and inconsistent, allowing scammers to exploit systemic vulnerabilities repeatedly. The investigation also revealed that some users were duped into fraudulent schemes, such as signing up for unauthorized monthly credit card charges or being manipulated into changing their health insurance plans under false pretences. These scams not only caused financial losses but also left victims vulnerable to further exploitation. Experts have called for more stringent oversight and enforcement from Meta, urging the company to take a proactive stance in combating misinformation and fraud. 

The incident underscores the broader challenges social media platforms face in balancing open access with the need for rigorous content moderation, particularly in the context of politically sensitive content. In conclusion, Meta’s struggle to prevent deceptive ads highlights the complexities of managing a vast digital ecosystem where bad actors continually adapt their tactics. While Meta has made some strides, the persistence of such scams raises serious questions about the platform’s ability to protect its users effectively and maintain the integrity of its advertising systems.

Vietnamese Hackers Target Digital Marketers in Malware Attack

Cyble Research and Intelligence Labs recently uncovered an elaborate, multi-stage malware attack targeting not only job seekers but also digital marketing professionals. The attackers, a Vietnamese threat actor, compromised systems through a series of sophisticated steps, ultimately deploying Quasar RAT, a remote access tool that gives a hacker complete control of an infected computer.


Phishing emails and LNK files as entry points

The attack begins with phishing emails carrying an attached archive file. Inside the archive is a malicious LNK (Windows shortcut) file disguised as a PDF. Once launched, the LNK executes PowerShell commands that download additional malicious scripts from a third-party source, evading most detection solutions. The method proves especially potent in non-virtualized environments, where the malware can remain undiscovered on the system.
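For defenders, one quick way to screen for this lure is to flag shortcut files hiding inside mailed archives. The sketch below is a minimal, hypothetical triage helper in Python; it is not taken from the Cyble report, and the file names are invented.

```python
# Hypothetical triage helper: list archive members that are Windows
# shortcuts (.lnk), including ones posing as documents ("*.pdf.lnk").
import zipfile

def flag_lnk_decoys(archive_path: str) -> list[str]:
    """Return the names of .lnk members inside a ZIP attachment."""
    with zipfile.ZipFile(archive_path) as zf:
        return [name for name in zf.namelist()
                if name.lower().endswith(".lnk")]

if __name__ == "__main__":
    # Invented file name; any shortcut inside a mailed archive is suspect.
    print(flag_lnk_decoys("JobDescription.zip"))
```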


Quasar RAT Deployment

Next, the attackers decrypt the malware payload with hardcoded keys and launch Quasar RAT, a remote access trojan that grants total control over the compromised system. Data can be stolen, additional malware can be planted, and the infected device can be operated remotely.

The campaign primarily targets digital marketers in the United States who work with Meta (Facebook, Instagram) advertisements. The malicious files used in the attack were tailored to this type of user, which has amplified the campaign's chances of success.


Spread using Ducktail Malware

In July 2022, the same Vietnamese threat actors expanded their activities with the launch of Ducktail malware, which specifically targeted digital marketing professionals. The group has folded information stealers and other RATs into its attacks and has used malware-as-a-service (MaaS) platforms to scale up and keep its campaigns versatile over time.


Evasion of Detection in Virtual Environments

Its ability to evade detection in virtual environments makes this malware attack all the more sophisticated. The "output.bat" component determines whether it is running in a virtual environment by scanning hard drive manufacturer strings for virtual machine signatures such as "QEMU" and "VirtualBox." If the malware detects that it is running inside a virtual machine, it stops execution immediately to frustrate analysis.
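To see how such a check works, the rough sketch below reproduces the idea from the defender's side, the way a sandbox operator might verify that an analysis VM is not trivially fingerprintable. It shells out to the deprecated but still-present Windows wmic utility; the marker list is an assumption based on the signatures named above.

```python
# Illustrative only: report whether disk model strings expose common
# hypervisor markers, the same signal the malware reportedly checks.
import subprocess

VM_MARKERS = ("QEMU", "VBOX", "VIRTUALBOX", "VMWARE")

def disk_models_look_virtual() -> bool:
    out = subprocess.run(
        ["wmic", "diskdrive", "get", "model"],  # Windows-only query
        capture_output=True, text=True, check=False,
    ).stdout.upper()
    return any(marker in out for marker in VM_MARKERS)

if __name__ == "__main__":
    print("VM markers visible:", disk_models_look_virtual())
```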

If no virtual environment is detected, the attack proceeds. The malware decodes further scripts, including a decoy PDF and a batch file, and stores them in the victim's Downloads folder under innocuous names such as "PositionApplied_VoyMedia.pdf."


Decryption and Execution Methods

Once the PowerShell script runs, it decrypts strings from the "output.bat" file using hardcoded keys and decompresses them through GZip streams. It then produces a .NET executable that runs entirely in memory, giving the malware further cover against detection by antivirus software.

The malware itself also performs a whole cycle of checks to determine whether it is running in a sandbox or emulated environment. It looks for known file names and DLL modules common in virtualized settings and measures timing discrepancies to detect emulation. If these checks suggest a virtual environment, the malware throws an exception, bringing all subsequent activity to a halt.

Once the malware has infected a system, it immediately checks for administrative privileges. If it does not have them, it uses PowerShell commands for privilege escalation. After gaining administrative control, it establishes persistence by copying itself to a hidden folder inside the Windows directory and modifying the Windows registry so that it executes automatically at startup.
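Persistence of this kind is straightforward to audit. As a minimal defensive sketch (current user only; inspecting HKLM requires elevation), the Python below enumerates the classic Run key that autostart entries like this abuse:

```python
# Defensive sketch: enumerate HKCU Run-key autostart entries, the kind
# of registry persistence described above.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def list_autoruns() -> list[tuple[str, str]]:
    entries = []
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, index)
            except OSError:  # no more values under the key
                break
            entries.append((name, str(value)))
            index += 1
    return entries

if __name__ == "__main__":
    for name, command in list_autoruns():
        print(f"{name}: {command}")
```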


Defence Evasion and Further Damage 

The malware also employs supplementary defence evasion techniques to stay unnoticed. It disables Windows event tracing, which makes its activities harder for security software to track, and it encrypts and compresses key components so that their actions are even harder to identify.

The final stage of the attack deploys Quasar RAT. The remote access tool handles both data theft and long-term access to the infected system. This adapted build of Quasar RAT is less detectable, making it difficult for security software to identify or remove.

This multi-stage malware attack on digital marketing professionals, particularly those working with Meta advertising, is a sophisticated and dangerous operation, combining phishing emails and PowerShell commands with advanced evasion techniques that make it harder to detect and stop. Security experts advise extreme caution when handling email attachments, especially in non-virtualized environments, and stress that all software and systems must be kept up to date to fend off this kind of threat.


Harvard Student Uses Meta Ray-Ban 2 Glasses and AI for Real-Time Data Scraping

A recent demonstration by Harvard student AnhPhu Nguyen using Meta Ray-Ban 2 smart glasses has revealed the alarming potential for privacy invasion through advanced AI-powered facial recognition technology. Nguyen’s experiment involved using these $379 smart glasses, equipped with a livestreaming feature, to capture faces in real-time. He then employed publicly available software to scan the internet for more images and data related to the individuals in view. 

By linking facial recognition data with databases such as voter registration records and other publicly available sources, Nguyen was able to quickly gather sensitive personal information like names, addresses, phone numbers, and even social security numbers. This process takes mere seconds, thanks to the integration of an advanced Large Language Model (LLM) similar to ChatGPT, which compiles the scraped data into a comprehensive profile and sends it to Nguyen’s phone. Nguyen claims his goal is not malicious, but rather to raise awareness about the potential threats posed by this technology. 

To that end, he has even shared a guide on how to remove personal information from certain databases he used. However, the effectiveness of such measures is minimal compared with the vast scale of potential privacy violations enabled by facial recognition software. The concern is only heightened by the fact that many databases and websites have already been compromised by bad actors. Earlier this year, for example, hackers broke into the National Public Data background check company, stealing nearly three billion records that reportedly included social security numbers of people in the United States.

 This kind of privacy invasion will likely become even more widespread and harder to avoid as AI systems become more capable. Nguyen’s experiment demonstrated how easily someone could exploit a few small details to build trust and deceive people in person, raising ethical and security concerns about the future of facial recognition and data gathering technologies. While Nguyen has chosen not to release the software he developed, which he has dubbed “I-Xray,” the implications are clear. 

If a college student can achieve this level of access and sophistication, it is reasonable to assume that similar, if not more invasive, activities could already be happening on a much larger scale. This echoes the privacy warnings raised by whistleblowers like Edward Snowden, who have long warned of the hidden risks and pervasive surveillance capabilities in the digital age.

Meta Penalized $101 Million for Storing Passwords in Plaintext, Faces Heightened EU Oversight

 

Meta, the parent company of Facebook, has been fined €91 million (US$101 million) by the Irish Data Protection Commission (DPC) following the revelation that the company stored millions of user passwords in plaintext.

Plaintext refers to readable data that does not need a decryption key to access. It can be any file or message, including text or binary data, that has not been encrypted yet. Plaintext is often used in tasks like document writing, coding, and email. In encryption, plaintext is the input that gets converted into ciphertext, which is the secured, unreadable version.

The breach, discovered during an internal review and disclosed in 2019, involved sensitive user data being accessible to over 2,000 engineers, who collectively queried the password database more than 9 million times. This fine adds to Meta’s growing list of penalties under the European Union’s General Data Protection Regulation (GDPR), which has cost the company more than €2 billion since the regulation was introduced in 2018. Notably, Meta is appealing a record €1.2 billion fine issued last year, making the company one of the most scrutinized by European regulators.

Meta identified the security lapse during a routine check of its data storage practices. The company stated that no evidence was found to suggest that any internal personnel had misused the passwords or that external entities had accessed the data. Despite these assurances, the incident brought to light a major oversight, as modern security protocols universally require passwords to be encrypted through cryptographic hashing rather than stored in plaintext. 

Password hashing, the standard across most industries, ensures that original passwords cannot be easily retrieved. Algorithms like Bcrypt, PBKDF2, and SHA512crypt are specifically designed to slow down attempts to crack hashed passwords, using computationally expensive processes that deter attackers. Meta's failure to employ such methods represents a serious departure from accepted practices. 
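As a rough illustration of what that standard practice looks like, here is a minimal Python sketch using the third-party bcrypt package (an assumed choice for the example; it says nothing about Meta's actual stack):

```python
# Minimal password-hashing sketch with bcrypt (pip install bcrypt).
import bcrypt

password = b"correct horse battery staple"

# gensalt() embeds a random salt; the cost factor (rounds) makes
# each guess computationally expensive for an attacker.
digest = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
print(digest)  # e.g. b'$2b$12$...' -- only this digest is stored

# Verification re-derives the hash from the stored salt and compares.
assert bcrypt.checkpw(password, digest)
assert not bcrypt.checkpw(b"wrong guess", digest)
```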

Graham Doyle, Deputy Commissioner at the DPC, highlighted the risks of Meta’s actions: "Storing user passwords in plaintext is widely recognized as a significant security vulnerability. Such data must be protected adequately to prevent abuse." 

As Meta continues to grapple with regulatory fines and pressures, this latest penalty underscores the EU's rigorous enforcement of data protection laws under GDPR. The company faces growing demands to revamp its security protocols and align with global privacy standards to avoid further sanctions.   

Meta Fined €91 Million by EU Privacy Regulator for Improper Password Storage

 

On Friday, Meta was fined €91 million ($101.5 million) by the European Union's primary privacy regulator for accidentally storing some user passwords without proper encryption or protection.

The investigation began five years ago when Meta informed Ireland's Data Protection Commission (DPC) that it had mistakenly saved certain passwords in plaintext format. At the time, Meta publicly admitted to the issue, and the DPC confirmed that no external parties had access to the passwords.

"It is a widely accepted practice that passwords should not be stored in plaintext due to the potential risk of misuse by unauthorized individuals," stated Graham Doyle, Deputy Commissioner of the Irish DPC.

A Meta spokesperson mentioned that the company took swift action to resolve the error after it was detected during a 2019 security audit. Additionally, there is no evidence suggesting the passwords were misused or accessed inappropriately.

Throughout the investigation, Meta cooperated fully with the DPC, the spokesperson added in a statement on Friday.

Given that many major U.S. tech firms base their European operations in Ireland, the DPC serves as the leading privacy regulator in the EU. To date, Meta has been fined a total of €2.5 billion for violations under the General Data Protection Regulation (GDPR), which was introduced in 2018. This includes a record €1.2 billion penalty issued in 2023, which Meta is currently appealing.

AI-powered Ray-Ban Meta Smart Glasses Raise Concerns About Data Privacy


Ray-Ban Meta smart glasses are the new wearable tech on the market. Successors to the Ray-Ban Stories launched in 2021, these AI-powered smart glasses have sparked debate in the community. Though useful, the tech has raised concerns over data security and privacy among users.

Features of the Smart Glasses

The AI-powered glasses offer a range of advanced features that improve the user experience, including open-ear speakers, a touch panel, and a camera. The glasses can play music, capture images and videos, and offer real-time information via the Meta AI assistant. These features hint at a future where tech is woven into our daily lives.

Data Privacy and Security: Concerns

Meta makes most of its money from advertising, which raises concerns about how images captured through the glasses will be used by the company. Given Meta's history of privacy and data security problems, users are skeptical about how their data will be used if Meta captures images without consent.

Adding to this concern is the introduction of AI into the smart glasses. AI has already caused controversy over inaccurate information, easy manipulation, and racial bias.

When users capture images or videos via the smart glasses, Meta Cloud processes them with AI. Meta's website says, "All photos processed with AI are stored and used to improve Meta products and will be used to train Meta’s AI with help from trained reviewers."

According to Meta, the processing analyses text, objects, and other contents of the image, and any information collected is used under Meta's Privacy Policy. In simple terms, images sent to the cloud can be used to train Meta's AI, creating potential for misuse.

What do Users Think?

Evolving tech like smart glasses has had a major impact on how we document our lives, but it has also sparked debates around privacy and user surveillance.

For instance, people in Canada can be photographed publicly without their consent, but if the purpose is commercial, suitable restrictions apply to prevent harm or distress.

Meta has released guidelines encouraging users to exercise caution and respect the rights of others while wearing the glasses. The guidelines suggest making a clear announcement before using the camera for live streaming and turning off the device when entering a private place.

Meta's reliance on user behavior to uphold privacy standards is not enough to address the concerns around surveillance, consent, and data misuse. Meta's history of privacy battles and its data-driven business model raise questions about whether the current measures can uphold privacy in the evolving digital landscape.

FTC Report Exposes Mass Data Surveillance by Some of the World's Biggest Social Media Companies

According to a new report published by the Federal Trade Commission (FTC), Facebook (now Meta), YouTube, WhatsApp, and other platforms have engaged in mass surveillance practices while raking in billions of dollars. The investigation, which began in December 2020, exposed the scale of these platforms' collection, monetization, and exploitation of users' personal information.

The FTC's 129-page report details how such companies, including Amazon's Twitch, Reddit, Twitter (now X), and TikTok's ByteDance, accumulate vast amounts of personal data. This data, largely collected without users' full awareness, becomes the foundation of highly profitable business models, most notably paid targeted advertising. Meta reported that 98% of its second-quarter revenue of $39.07 billion came from ads on Facebook and Instagram, which rely on data harvested from users.

Data Collection Beyond Expectation 

What is perhaps most striking is the amount of data amassed and how it is gathered. Companies buy additional information from third-party brokers, including income levels, location data, and personal interests, to build profiles of users' online behaviour. Such data is used to fine-tune targeted ads and boost profitability, yet users are largely unaware of the extent of these practices.

Lack of User Control

Despite all that is collected, the report concludes that users have little control over what is done with their personal information. People are told that their data is used to deliver targeted advertising and recommendations, but they lack meaningful tools to direct or limit that use. In many cases, even after users request deletion of all their information, platforms retain de-identified data or cannot remove every trace of personal information.

Recommendations of FTC for Transparency

The report calls on these organisations to be transparent about what data is collected and how it will be used, so consumers retain some stake in their information. The FTC also recommended stronger federal privacy legislation to restrict surveillance and give consumers more control over their data.

The findings have fuelled debate about privacy and the regulations that protect users in a digital world where personal information is constantly tracked and monetized. The report further emphasised the need for companies to adopt more transparent practices that safeguard user privacy.

Big Tech Prioritizes Security with Zuckerberg at the Helm

Reports indicate that some of the largest tech firms are paying millions of dollars each year to safeguard their CEOs, with some paying far more than others. Security costs for top executives, including home monitoring, personal security details, bodyguards, and consulting services, have risen significantly, according to a Fortune report.

Bill Herzog, CEO of LionHeart Security Services, says heavy emphasis is placed on securing high-profile CEOs, given the risks they face. Security services from LionHeart cost $60 per hour or more, which could represent an annual budget of over $1 million for two guards working full-time.

Meanwhile, two months after Meta cut thousands of jobs from its technical teams, its employees are still feeling the consequences. Employees support the Facebook core app in many ways, from groups to messaging, and have spent weeks redistributing responsibilities left behind by departed colleagues, according to four current and former employees who asked to remain anonymous to discuss internal issues. Many remaining employees are adjusting to new management, learning completely new roles, and, in some cases, simply trying to get their heads around what is happening.

In terms of personal security for Mark Zuckerberg, Meta invested $23.4 million in 2023, the most among its peers. Of that, $9.4 million went to direct security costs, while a pre-tax allowance of $14 million was reserved for additional security-related expenses.

Alphabet Inc. spent about $6.8 million in 2023, while Tesla Inc. paid $2.4 million that year for the security of its CEO, Elon Musk. Other technology giants, such as NVIDIA Corporation and Apple Inc., have also invested heavily in keeping their CEOs safe, spending $2.2 million and $820,309, respectively, in 2023.

In recent years, tech companies have become more aware of the importance of security for their top executives. With the rising risks faced by high-profile leaders, demand for these services has grown and costs have climbed with it. The scale of the money these organizations put into security measures makes clear how much importance they place on the safety of their leaders.

The report also highlights the risks involved in leading a major tech company in today's world. Since founding Facebook two decades ago, Zuckerberg has faced increasing scrutiny to prove he is doing what is necessary to ensure the safety of children on Meta's platforms. During a recent hearing of the Senate Judiciary Committee, he apologized directly to parents who say their children have suffered harm due to content on Meta's platforms, including Facebook and Instagram.

This apology came after intense questioning from lawmakers about Meta’s efforts to protect children from harmful content, including non-consensual explicit images. Despite Meta’s investments in safety measures, the company continues to face criticism for not doing enough to prevent these harms. Zuckerberg's apology reflected both an acknowledgement of these issues and his willingness to accept responsibility for them. 

However, it also highlighted the ongoing challenges Meta faces in addressing safety concerns. The question of whether Mark Zuckerberg should step down as Meta's CEO is multifaceted and complex. The ethical concerns and controversy surrounding his conduct have seriously eroded public trust in the company's leadership.

At the same time, Meta has been well positioned for success thanks to his visionary approach and deep insight into the company. What matters in the end is what will benefit the company's future. If Zuckerberg can genuinely demonstrate that he is addressing the ethical issues and making the platform more transparent, he may well deserve to keep his position at Meta.

If these issues persist, however, the business may require a change in leadership to restore trust and sustain a more ethical outlook.

Scammers Exploit Messaging Apps and Social Media in Singapore

Singapore is experiencing a surge in scams and cybercrime, with fraudsters relying more on messaging and social media platforms to target unsuspecting victims. According to recent figures from the Singapore Police Force (SPF), platforms like Facebook, Instagram, WhatsApp, and Telegram have become common avenues for scammers, featuring in 45% of cases.

Scams and cybercrime rose markedly in the first half of 2024, with 28,751 cases recorded from January to June, compared with 24,367 in the same period of 2023. Scams made up 92.5% of these incidents, a 16.3% year-on-year increase. Financial losses linked to scams totalled SG$385.6 million (US$294.65 million), up a substantial 24.6% from the previous year. On average, each victim lost SG$14,503, a 7.1% increase from last year.

Scammers largely employed social engineering techniques, manipulating victims into transferring money themselves, which accounted for 86% of reported cases. Messaging apps were a key tool, featuring in 8,336 cases, up from 6,555 the previous year. WhatsApp was the most frequently used platform, appearing in more than half of these incidents. Telegram cases rose 137.5%, making it the platform involved in 45% of messaging-related scams.

Social media platforms were also widely used, with 7,737 scam cases reported. Facebook was the most commonly exploited platform, accounting for 64.4% of these cases, followed by Instagram at 18.6%. E-commerce scams were particularly prevalent on Facebook, with 50.9% of victims targeted through this platform.

Although individuals under 50 years old represented 74.2% of scam victims, those aged 65 and older faced the highest average financial losses. Scams involving impersonation of government officials were the most costly, with an average loss of SG$116,534 per case. Investment scams followed, with average losses of SG$40,080. These scams typically involved prolonged social engineering tactics, where fraudsters gradually gained the trust of their victims to carry out the fraud.

On a positive note, the number of malware-related scam cases saw a notable drop of 86.2% in the first half of 2024, with the total amount lost decreasing by 96.8% from SG$9.1 million in 2023 to SG$295,000 this year.

Despite the reduction in certain scam types, phishing scams and impersonation scams involving government officials continue to pose serious threats. Phishing scams alone accounted for SG$13.3 million in losses, making up 3.4% of total scam-related financial losses. The SPF reported 3,447 phishing cases, which involved fraudulent emails, text messages, and phone calls from scammers posing as officials from government agencies, financial institutions, and other businesses. Additionally, impersonation scams involving government employees increased by 58%, with 580 cases reported, leading to SG$67.5 million in losses, a 67.1% increase from the previous year.

As scammers continue to adapt and refine their methods, it remains crucial for the public to stay alert, especially when using messaging and social media platforms. Sound awareness and cautious behaviour are non-negotiable in avoiding these scams.


Russian Disinformation Network Struggles to Survive Crackdown

The Russian disinformation network, known as Doppelgänger, is facing difficulties as it attempts to secure its operations in response to increased efforts to shut it down. According to a recent report by the Bavarian State Office for the Protection of the Constitution (BayLfV), the network has been scrambling to protect its systems and data after its activities were exposed.

Doppelgänger’s Activities and Challenges

Doppelgänger has been active in spreading false information across Europe since at least May 2022. The network has created numerous fake social media accounts, fraudulent websites posing as reputable news sources, and its own fake news platforms. These activities have primarily targeted Germany, France, the United States, Ukraine, and Israel, aiming to mislead the public and spread disinformation.

BayLfV’s report indicates that Doppelgänger’s operators were forced to take immediate action to back up their systems and secure their operations after it was revealed that European hosting companies were unknowingly providing services to the network. The German agency monitored the network closely and discovered details about the working patterns of those involved, noting that they operated during Russian office hours and took breaks on Russian holidays.

Connections to Russia

Further investigation by BayLfV uncovered clear links between Doppelgänger and Russia. The network used Russian IP addresses and the Cyrillic alphabet in its operations, reinforcing its connection to the Kremlin. The network's activities were timed with Moscow and St. Petersburg working hours, further suggesting coordination with Russian time zones.

This crackdown comes after a joint investigation by digital rights groups Qurium and EU DisinfoLab, which exposed Doppelgänger's infrastructure spread across at least ten European countries. Although German authorities were aware of the network’s activities, they had not taken proper action until recently.

Social Media Giant Meta's Response

Facebook’s parent company, Meta, has been actively working to combat Doppelgänger’s influence on its platforms. Meta reported that the network has been forced to change its tactics due to ongoing enforcement efforts. Since May, Meta has removed over 5,000 accounts and pages linked to Doppelgänger, disrupting its operations.

In an attempt to avoid detection, Doppelgänger has shifted its focus to spoofing websites of nonpolitical and entertainment news outlets, such as Cosmopolitan and The New Yorker. However, Meta noted that most of these efforts are being caught quickly, either before they go live or shortly afterward, indicating that the network is struggling to maintain its previous level of influence.

Impact on Doppelgänger’s Operations

The pressure from law enforcement and social media platforms is clearly affecting Doppelgänger’s ability to operate. Meta highlighted that the quality of the network’s disinformation campaigns has declined as it struggles to adapt to the persistent enforcement. The goal is to continue increasing the cost of these operations for Doppelgänger, making it more difficult for the network to continue spreading false information.

This ongoing crackdown on Doppelgänger demonstrates the challenges in combating disinformation and the importance of coordinated efforts to protect the integrity of information in today’s digital environment.


The UK Erupts in Riots as Big Tech Stays Silent

For the past week, England and parts of Northern Ireland have been gripped by unrest, with communities experiencing heightened tensions and an extensive police presence. Social media platforms have played a significant role in spreading information, some of it harmful, during this period of turmoil. Despite this, major technology companies have remained largely silent, declining to address their role in the situation publicly.

Big Tech's Reluctance to Speak

Journalists at BBC News have been actively seeking responses from major tech firms regarding their actions during the unrest. However, these companies have not been forthcoming. With the exception of Telegram, which issued a brief statement, platforms like Meta, TikTok, Snapchat, and Signal have refrained from commenting on the matter.

Telegram's involvement became particularly concerning when a list containing the names and addresses of immigration lawyers was circulated on its platform. The Law Society of England and Wales expressed serious concerns, treating the list as a credible threat to its members. Although Telegram did not directly address the list, it did confirm that its moderators were monitoring the situation and removing content that incites violence, in line with the platform's terms of service.

Elon Musk's Twitter and the Spread of Misinformation

The platform formerly known as Twitter, now rebranded as X under Elon Musk's ownership, has also drawn massive attention. The site has been a hub for false claims, hate speech, and conspiracy theories during the unrest. Despite this, X has remained silent, offering no public statements. Musk, however, has been vocal on the platform, making controversial remarks that have only added fuel to the fire.

Musk's tweets have included inflammatory statements, such as predicting a civil war and questioning the UK's approach to protecting communities. His posts have sparked criticism from various quarters, including the UK Prime Minister's spokesperson. Musk even shared, and later deleted, an image promoting a conspiracy theory about detainment camps in the Falkland Islands, further underlining the platform's problematic role during this crisis.

Experts Weigh In on Big Tech's Silence

Industry experts believe that tech companies are deliberately staying silent to avoid getting embroiled in political controversies and regulatory challenges. Matt Navarra, a social media analyst, suggests that these firms hope public attention will shift away, allowing them to avoid accountability. Meanwhile, Adam Leon Smith of BCS, The Chartered Institute for IT, criticised the silence as "incredibly disrespectful" to the public.

Hanna Kahlert, a media analyst at Midia Research, offered a strategic perspective, arguing that companies might be cautious about making public statements that could later constrain their actions. These firms, she explained, prioritise activities that drive ad revenue, often at the expense of public safety and social responsibility.

What Comes Next?

As the UK grapples with the fallout from this unrest, there are growing calls for stronger regulation of social media platforms. The Online Safety Act, set to come into effect early next year, is expected to give the regulator Ofcom more powers to hold these companies accountable. However, some, including London Mayor Sadiq Khan, question whether the Act will be sufficient.

Prime Minister Keir Starmer has acknowledged the need for a broader review of social media in light of recent events. Professor Lorna Woods, an expert in internet law, pointed out that while the new legislation might address some issues, it might not be comprehensive enough to tackle all forms of harmful content.

A recent YouGov poll revealed that two-thirds of the British public want social media firms to be more accountable. As big tech remains silent, it appears that the UK is on the cusp of regulatory changes that could reshape the future of social media in the country.


Why Did Turkey Suddenly Ban Instagram? The Shocking Reason Revealed

On Friday, Turkey's Information and Communication Technologies Authority (ICTA) unexpectedly blocked Instagram access across the country. The ICTA, responsible for overseeing internet regulations, did not provide any specific reason for the ban. However, according to reports from Yeni Safak, a newspaper supportive of the government, the ban was likely a response to Instagram removing posts by Turkish users that expressed condolences for Hamas leader Ismail Haniyeh's death.

Many Turkish users faced difficulties accessing Instagram following the ban. Fahrettin Altun, the communications director for the Turkish presidency, publicly condemned Instagram, accusing it of censoring messages of sympathy for Haniyeh, whom he called a martyr. This incident has sparked significant controversy within Turkey.

Haniyeh’s Death and Its Aftermath

Ismail Haniyeh, the political leader of Hamas and a close associate of Turkish President Recep Tayyip Erdogan, was killed in an attack in Tehran on Wednesday, an act allegedly carried out by Israel. His death prompted widespread reactions in Turkey, with many taking to social media to express their condolences and solidarity, leading to the conflict with Instagram.

A History of Social Media Restrictions in Turkey

This is not the first instance of social media restrictions in Turkey. The country, with a population of 85 million, has over 50 million Instagram users, making such bans highly impactful. From April 2017 to January 2020, Turkey blocked access to Wikipedia over articles that linked the Turkish government to extremism, significantly limiting the flow of information.

This recent action against Instagram is part of a broader pattern of conflict between the Turkish government and social media companies. In April, Meta, the parent company of Facebook, had to suspend its Threads network in Turkey after authorities blocked its information sharing with Instagram. The episode reflects ongoing tensions between Turkey and major social media firms.

The blockage of Instagram illustrates the persistent struggle between the Turkish government and social media platforms over content regulation and freedom of expression. These restrictions pose crucial challenges to the dissemination of information and public discourse, affecting millions who rely on these platforms for news and communication. 

Turkey's decision to block Instagram is a testament to the complex dynamics between the government and digital platforms. As the situation unfolds, it will be essential to watch the responses from both Turkish authorities and the affected social media companies to grasp the broader implications for digital communication and freedom of speech in Turkey.


Tech Giants Face Backlash Over AI Privacy Concerns

Microsoft recently faced significant backlash over its new AI tool, Recall, leading to a delayed release. Recall, introduced last month as a feature of Microsoft's new AI companion, captures screen images every few seconds to create a searchable library, which can include sensitive information like passwords and private conversations. The tool's release was postponed indefinitely after criticism from data privacy experts, including the UK's Information Commissioner's Office (ICO).

In response, Microsoft announced changes to Recall. Initially planned for a broad release on June 18, 2024, it will first be available to Windows Insider Program users. The company assured that Recall would be turned off by default and emphasised its commitment to privacy and security. Despite these assurances, Microsoft declined to comment on claims that the tool posed a security risk.

Recall was showcased during Microsoft's developer conference, with Yusuf Mehdi, Corporate Vice President, highlighting its ability to access virtually anything on a user's PC. Following its debut, the ICO vowed to investigate privacy concerns. On June 13, Microsoft announced updates to Recall, reinforcing its "commitment to responsible AI" and privacy principles.

Adobe Overhauls Terms of Service 

Adobe faced a wave of criticism after updating its terms of service, which many users interpreted as allowing the company to use their work for AI training without proper consent. Users were required to agree to a clause granting Adobe a broad licence over their content, leading to suspicions that Adobe was using this content to train generative AI models like Firefly.

Adobe officials, including President David Wadhwani and Chief Trust Officer Dana Rao, denied these claims and clarified that the terms were misinterpreted. They reassured users that their content would not be used for AI training without explicit permission, except for submissions to the Adobe Stock marketplace. The company acknowledged the need for clearer communication and has since updated its terms to explicitly state these protections.

The controversy began with Firefly's release in March 2023, when artists noticed AI-generated imagery mimicking their styles. Users like YouTuber Sasha Yanshin cancelled their Adobe subscriptions in protest. Adobe's Chief Product Officer, Scott Belsky, admitted the wording was unclear and emphasised the importance of trust and transparency.

Meta Faces Scrutiny Over AI Training Practices

Meta, the parent company of Facebook and Instagram, has also been criticised for using user data to train its AI tools. Concerns were raised when Martin Keary, Vice President of Product Design at Muse Group, revealed that Meta planned to use public content from social media for AI training.

Meta responded by assuring users that it only used public content and did not access private messages or information from users under 18. An opt-out form was introduced for EU users, but U.S. users have limited options due to the lack of national privacy laws. Meta emphasised that its Llama 2 model was not trained on user data, but users remain concerned about their privacy.

Suspicion first arose in May 2024, when users questioned changes to Meta's privacy policy. Meta's official statement to European users clarified its practices, but the opt-out form, available under the Privacy Policy settings, remains a complex process: the company can only address user requests if they demonstrate that the AI "has knowledge" of them.

The recent actions by Microsoft, Adobe, and Meta highlight the growing tensions between tech giants and their users over data privacy and AI development. As these companies navigate user concerns and regulatory scrutiny, the debate over how AI tools should handle personal data continues to intensify. The tech industry's future will heavily depend on balancing innovation with ethical considerations and user trust.


Nvidia Climbs to Second Place in Global Market Value, Surpassing Apple

This month, Nvidia reached a historic milestone, overtaking Apple to become the world's second most valuable company on the back of overwhelming demand for the advanced chips that handle artificial intelligence tasks. Roughly $1.8 trillion has been added to the market value of the Santa Clara, California-based company over the past year, and its shares are up 147% this year alone.

The surge has lifted Nvidia's market capitalisation above $3 trillion, making it the first semiconductor company to reach that milestone. Excitement about artificial intelligence, which runs largely on Nvidia chips, has sent the shares sharply higher over the past few years and carried the company past Apple in value.

Nvidia's rise has made it the largest company in Silicon Valley, displacing Apple, whose share price has fallen on concerns about iPhone sales in China, among other issues. In a few weeks, Nvidia will carry out a 10-for-1 stock split, a move that could greatly increase the stock's appeal to retail investors. Nvidia's climb past Apple's market value signals a shift in Silicon Valley, where the company co-founded by Steve Jobs has dominated since the iPhone launched in 2007. On the day, Apple gained 0.78 per cent, while Microsoft, the world's most valuable company, gained 1.91 per cent.

With its graphics processing units fuelling the boom in artificial intelligence (AI), Nvidia's rally continues an extraordinary streak of gains. Revenue has climbed roughly 260 per cent year on year as tech titans such as Microsoft, Meta, Google, and Amazon race to implement AI.

Last month, Nvidia announced the 10-for-1 stock split as a way of making stock ownership more accessible to employees and investors. Nvidia shares have more than doubled in the first half of this year after almost tripling in 2023. With the split taking effect on Friday, the shares will become attractive to an even larger pool of small investors.
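As a toy illustration of the split arithmetic (all figures invented), a 10-for-1 split multiplies the share count by ten and divides the per-share price by ten, leaving the value of a holding, and the company's market capitalisation, unchanged:

```python
# Invented numbers: what a 10-for-1 split does to a holding.
shares, price = 5, 1200.00          # hypothetical pre-split position
value_before = shares * price

shares, price = shares * 10, price / 10   # the split itself

assert shares * price == value_before     # total value is unchanged
print(shares, price)                      # 50 120.0
```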

With Microsoft, Meta Platforms, and Alphabet all eager to enhance their artificial intelligence capabilities, Nvidia's stock price has surged 147% in 2024. By recent estimates, the company added close to $150 billion in market capitalisation in a single day, more than the entire market capitalisation of AT&T. The PHLX chip index rose 4.5%, and many companies have benefited from the optimism surrounding artificial intelligence, including Super Micro Computer, which builds AI-optimized servers using Nvidia chips.

During his visit to the Computex tech fair in Taiwan, Taiwan-born Jensen Huang, chairman and CEO of Nvidia, received extensive media coverage highlighting both his influence and the company's growing importance. Apple, by contrast, faces challenges from weak demand for iPhones in China and stiff competition from Chinese rivals, and some analysts say it has been slow to incorporate AI features compared with other tech giants.

According to LSEG data, Nvidia's stock trades at 39 times expected earnings. While still high, that is considerably cheaper than a year ago, when the stock traded at more than 70 times expected earnings.

Navigating Meta’s AI Data Training: Opt-Out Challenges and Privacy Considerations


The privacy policy update

Meta will reportedly amend its privacy policy beginning June 26 to allow its AI to be trained on your data.

The story spread on social media after Meta sent out emails and notifications to subscribers in the United Kingdom and the European Union informing them of the change and offering them the option to opt out of data collecting. 

One UK-based user, Phillip Bloom, publicly shared the message, alerting others to the impending changes, which appear to also affect Instagram users.

The AI training process

These changes give Meta permission to use your information and personal material from Meta-related services to train its AI. This means the social media giant will be able to use public Facebook posts, Instagram photographs and captions, and messages to Meta's AI chatbots to train its large language model and other AI capabilities.

Meta states that private messages will not be included in the training data, and the business emphasizes in its emails and notifications that each user (in a protected region) has the "right to object" to the data being utilized. 

Once implemented, the new policy will begin automatically extracting information from the affected types of material. To keep Meta from using your content, you can opt out right now through the relevant Facebook help page.

Keep in mind that this page will only load if you are in the European Union, the United Kingdom, or any country where Meta is required by law to provide an opt-out option.

Opting out: EU and UK users

If you live in the European Union, the United Kingdom, or another country with severe enough data protection regulations for Meta to provide an opt-out, go to the support page listed above, fill out the form, and submit it. 

You'll need to select your nation and explain why you're opting out in a text box, and you'll have the option to offer more information below that. You should receive a response indicating whether Meta will honor your request to opt out of having your data utilized. 

Prepare to push back: some users report that their requests are being denied, even though in countries governed by legislation such as the European Union's GDPR, Meta should be required to honor the request.

Challenges for users outside the EU and UK

There are a few caveats to consider. While the opt-out protects you, it does not guarantee that your posts will be protected if they are shared by friends or family members who have not opted out of AI data training.

Make sure that any family members who use Facebook or other Meta services opt out, if possible. This move isn't surprising given that Meta has been gradually expanding its AI offerings on its platforms. 

As a result, the utilization of user data, particularly among Meta services, was always expected. There is too much data for the corporation to pass up as training material for its numerous AI programs.

Meta to Train AI with Public Facebook and Instagram Posts

Meta, the company behind Facebook and Instagram, is set to begin using public posts from European users to train its artificial intelligence (AI) systems starting June 26. This decision has sparked discussions about privacy and GDPR compliance.

Utilising Public Data for AI

European users of Facebook and Instagram have recently been notified that their public posts could be used to help develop Meta's AI technologies. The information that might be utilised includes posts, photos, captions, and messages sent to an AI, but private messages are excluded. Meta has emphasised that only public data from user profiles will be used, and data from users under 18 will not be included.

GDPR Compliance and Legitimate Interest

Under the General Data Protection Regulation (GDPR), companies can process personal data if they demonstrate a legitimate interest. Meta argues that improving AI systems constitutes such an interest. Despite this, users have the right to opt out of having their data used for this purpose by submitting a form through Facebook or Instagram, although these forms are currently unavailable.

Even if users opt out, their data may still be used if they are featured in another user's public posts or images. Meta has provided a four-week notice period before collecting data to comply with privacy regulations.

Regulatory Concerns and Delays

The Irish Data Protection Commission (DPC) intervened following Meta's announcement, resulting in a temporary delay. The DPC requested clarifications from Meta, which the company has since addressed. Meta assured the regulator that only public data from EU users would be utilised and confirmed that data from minors would not be included.

Meta’s AI Development Efforts

Meta is heavily investing in AI research and development. The company’s latest large language model, Llama 3, released in April, powers its Meta AI assistant, though it is not yet available in Europe. Meta has previously used public posts to train its AI assistant but did not include this data in training the Llama 2 model.

In addition to developing AI software, Meta is also working on the hardware needed for AI operations, introducing custom-made chips last month.

Meta's initiative to use public posts for AI training highlights the ongoing balance between innovation and privacy. While an opt-out option is provided, its current unavailability and the potential use of data from non-consenting users underscore the complexities of data privacy.

European users should remain informed about their rights under GDPR and utilize the opt-out process when available. Despite some limitations, Meta's efforts to notify users and offer an opt-out reflect a step towards balancing technological advancement with privacy concerns.

This development marks a significant step in Meta's AI journey and underscores the critical role of transparency and regulatory oversight in handling personal data responsibly.


Facebook Account Takeovers: Can Tech Giant Stop Hijacking Scams?

A Go Public investigation discovered that Meta has allowed a scam campaign to flourish on Facebook, with fraudsters locking users out of their accounts and impersonating them.

According to the CBC, Lesa Lowery is one of many victims. Her Facebook account was taken over in early March, and for three days she watched helplessly as scammers posing as her duped her friends out of thousands of dollars for counterfeit goods.

Lowery's account was hijacked after she changed her password in response to a phishing email made to look like it came from Facebook. The scammer locked her out, and her friends were ultimately defrauded of $2,500. Many of Lowery's friends reported the incident to Facebook, but Meta did not act; the scammer deleted warnings and blocked friends. Lowery's former neighbor, Carol Stevens, lost $250 in the scam.

Are Meta’s efforts enough? 

Claudiu Popa, author of "The Canadian Cyberfraud Handbook," lambasted Meta for failing to protect its users even as it generates billions; the company's sales rose 16% to $185 billion last year.

In a written response to Go Public, Meta stated that it has "over 15,000 reviewers across the globe" to address violations, but did not explain why the retirement home scam was allowed to proceed.

Popa, a cybercrime specialist, believes that fraudsters employ AI to identify victims and craft convincing emails. According to Sapio Research, 85% of cybersecurity professionals believe that AI-powered attacks have increased.

In March, 41 US state attorneys general urged Meta to do more to assist customers as the number of Facebook account takeovers climbed. Meta indicated that it was working to fix the issue but did not disclose specifics. Credential stuffing attacks and data breaches can lead to account takeovers and the sale of stolen credential dumps.

According to The Register, Facebook accounts in the US have also been taken over through phone number recycling. New telecom customers are issued abandoned numbers that remain linked to the previous owner's accounts; a recycled number may receive a password reset request or a two-factor authentication token, potentially allowing unauthorised access.

Meta is aware of phone number recycling-related account takeovers; however, the social media giant noted that it "does not have control over telecom providers" reissuing phone numbers, and advised users to remove phone numbers from their Facebook accounts once those numbers are no longer registered to them.

Meanwhile, cybersecurity experts argue that the government should step in to address Facebook account takeovers. According to Popa, legislation is needed to compel companies like Meta to protect users and respond quickly to fraud.

Are Big Tech Companies Getting Rich from AI?

Big Tech companies like Amazon, Microsoft, and Alphabet have showcased impressive earnings, with a substantial boost from their advancements in artificial intelligence (AI) technology. Amazon's quarterly report revealed a 13% increase in net sales, primarily attributed to its AWS cloud computing segment, which saw a 17% sales boost fueled by new AI offerings such as the Amazon Q assistant and the Amazon Bedrock generative AI service. Similarly, Alphabet's stock price surged nearly 10% following its robust earnings report, which emphasised its AI-driven results. Microsoft also exceeded expectations, with its AI-heavy intelligent cloud division posting a 21% increase in revenue.

Net Neutrality Rules Reinstated

The Federal Communications Commission (FCC) has reinstated net neutrality rules, ensuring equal treatment of internet content by service providers. The move aims to prevent blocking, slowing down, or charging more for faster delivery of certain content, restoring regulations repealed in 2017. Advocates argue that net neutrality preserves fair access, while opponents cite the regulatory burden on broadband providers.

Strategies for Addressing Ransomware Threats

Ransomware attacks continue to pose a considerable threat to businesses, underscoring the need for proactive measures. Halcyon CEO Jon Miller emphasises the importance of understanding ransomware risks and implementing robust backup systems. Having a clear plan of action in case of an attack is essential, including measures to minimise disruption and restore systems efficiently. While paying the ransom may be a last resort in certain scenarios, it often invites repeated targeting, reinforcing the need to strengthen overall security posture. Collaboration among companies and the sharing of threat intelligence can also strengthen defences against ransomware.

Meta's AI-enabled Smart Glasses

Meta's collaboration with Ray-Ban has produced AI-enabled smart glasses that offer a seamless interface between the physical and digital worlds. Priced at $299, the glasses provide smartphone connectivity, music streaming, and camera features. Despite some limitations in identifying objects, they represent a potential gateway to widespread adoption of virtual reality (VR) technology.

IBM and Nvidia Announce Major Acquisitions

IBM's acquisition of HashiCorp for $6.4 billion aims to bolster its cloud solutions with HashiCorp's expertise in managing cloud systems and applications. Similarly, Nvidia's purchase of GPU orchestrator Run:ai enhances its capabilities in efficiently utilising chips for processing needs, further solidifying its competitive edge.

As businesses increasingly adopt AI technology, collaborative decision-making and comprehensive training initiatives are essential for successful implementation. IBM's survey suggests that 40% of employees will require AI-related training and reskilling in the next three years, emphasising the urgency of investing in workforce development.

In essence, the recent earnings reports and strategic moves by tech giants underscore the decisive role of AI in driving innovation and financial growth. However, amid these technological advancements, addressing cybersecurity threats like ransomware and ensuring equitable access to the internet remain crucial considerations for businesses and policymakers alike.