
Supreme Court Weighs Shareholder Lawsuit Against Meta Over Data Disclosure

 

The U.S. Supreme Court is deliberating on a high-stakes shareholder lawsuit involving Meta (formerly Facebook), where investors claim the tech giant misled them by omitting crucial data breach information from its risk disclosures. The case, Facebook v. Amalgamated Bank, centers around the Cambridge Analytica scandal, where a British firm accessed data on millions of users to influence U.S. elections. While Meta had warned of potential misuse of data in its annual filings, it did not disclose that a significant breach had already occurred, potentially impacting investors’ trust. During oral arguments, liberal justices voiced concerns over the omission. 

Justice Elena Kagan likened the situation to a company that warns about fire risks but withholds that a recent fire already caused severe damage. Such a lack of disclosure, she argued, could be misleading to “reasonable investors.” The plaintiffs’ attorney, Kevin Russell, echoed this sentiment, asserting that Facebook’s omission misrepresented the severity of risks investors faced. On the other hand, conservative justices expressed concerns about expanding disclosure requirements. Chief Justice John Roberts questioned whether mandating disclosures of all past events might lead to over-disclosure, which could overwhelm investors with excessive details. Justice Brett Kavanaugh suggested the SEC, rather than the courts, might be better positioned to clarify standards for corporate disclosures. 

The Biden administration supports the plaintiffs, with Assistant Solicitor General Kevin Barber describing the case as an example of a misleading “half-truth.” Meta’s attorney, Kannon Shanmugam, argued that such broad requirements could dissuade companies from sharing forward-looking risk factors, fearing potential lawsuits over any past incident. Previously, the Ninth Circuit found Meta’s general warnings about potential risks misleading, given the company’s awareness of the Cambridge Analytica breach. That court held that such omissions could harm investors by implying that no significant misuse had occurred.

If the Supreme Court sides with the plaintiffs, companies could face new expectations to disclose known incidents, particularly those affecting data security or reputational risk. Such a ruling could reshape corporate disclosure practices, particularly for tech firms managing sensitive data. Alternatively, a ruling in favor of Meta may uphold the existing regulatory framework, granting companies more discretion in defining disclosure content. This decision will likely set a significant precedent for how companies balance transparency with investors and risk management.

Facebook, Nvidia Push SCOTUS to Limit Investor Lawsuits

 




The US Supreme Court is set to hear two landmark cases involving Facebook and Nvidia that could reshape how investors sue the tech sector after scandals. Both companies are urging the Court to narrow the legal options available to investor groups, arguing that the claims against them are unfounded.


Facebook's Cambridge Analytica Case

The first case stems from the Cambridge Analytica scandal, in which a third party gained access to the data of hundreds of millions of users without adequate oversight or follow-up. Facebook reportedly paid more than $5 billion in penalties to the FTC and SEC for allegedly misleading users and investors about how it uses data. Even so, investor class-action lawsuits over the scandal remain, and Facebook is asking the Supreme Court to block such claims.

Facebook argues that the data risks it disclosed were framed as hypothetical and that it was not obliged to reveal that those risks had already materialized. The company also contends that forcing it to disclose every past data incident would lead to "over-disclosure," cluttering filings with detail that confuses rather than helps investors. Facebook maintains that disclosure rules should remain flexible: if the SEC wants specific kinds of incidents disclosed, it should create new regulations for that purpose.


Nvidia and the Cryptocurrency Boom

The second case involves Nvidia, the world's biggest graphics chip maker, which allegedly played down how much of its 2017-2018 revenue came from cryptocurrency mining. When the crypto market collapsed, Nvidia had to cut its earnings forecast, catching investors off guard. The SEC later fined Nvidia $5.5 million for not disclosing how much of its revenue was tied to the erratic crypto market.

Investors argue that Nvidia's statements were misleading given the risks the company actually faced. Nvidia responds that any misstatement was not made out of malice: demand in such an ever-changing market cannot be predicted, so errors, if any, were unintentional. The company adds that existing securities laws already impose very high standards precisely to deter "fishing expeditions," in which investors sue over financial losses without proper evidence. Nvidia's lawyers argue that relaxing these standards would invite more such cases and harm the economy as a whole.


Possible Impact of Supreme Court on Investor Litigation


The Supreme Court is scheduled to hear arguments in the Facebook case on November 6 and in the Nvidia case on November 13. The judgments could permanently alter the framework under which tech companies are held accountable to investors. A ruling in favour of Facebook and Nvidia would make it tougher for shareholders to file claims and collect damages after a firm suffers a crisis, giving tech companies some respite while narrowing the legal options open to shareholders.

These cases come at a time when the trend of business-friendly rulings from the Supreme Court is lowering the regulatory authority of agencies such as the SEC. Legal experts believe that this new conservative majority on the court may be more open than ever to appeals limiting "nuisance" lawsuits, arguing that these cases threaten business stability and economic growth.

In deciding these cases, the Court will effectively determine whether federal rules should allow private investors to enforce standards of corporate accountability, or whether that responsibility should rest primarily with regulatory bodies like the SEC.


Big Tech Prioritizes Security with Zuckerberg at the Helm

 


Reports indicate that some of the largest tech firms are paying millions of dollars each year to safeguard the CEOs of their companies, with some companies paying more than others depending on the industry. There has been a significant increase in the costs relating to security for top executives, including the cost of monitoring at home, personal security, bodyguards, and consulting services, according to a Fortune report.

Securing high-profile CEOs is a heavy point of emphasis given the risks they face, according to Bill Herzog, CEO of LionHeart Security Services. Meanwhile, two months after Meta cut thousands of jobs on its technical teams, its employees are still feeling the consequences. 

Employees supporting the core Facebook app, from groups to messaging, have spent weeks redistributing the responsibilities left behind by departed colleagues, according to four current and former employees who asked to remain anonymous to discuss internal matters. 

Many remaining employees are likely adjusting to new management, learning completely new roles, and, in some cases, just trying to get their heads around what is happening. LionHeart Security Services charges $60 per hour or more for its services, which could add up to an annual budget of over $1 million for two guards providing around-the-clock coverage. 

Meta spent $23.4 million on personal security for Mark Zuckerberg in 2023, far more than any of its peers. Of that, $9.4 million covered direct security costs, while a $14 million pre-tax allowance was set aside for additional security-related expenses that may arise. 

Alphabet Inc. spent about $6.8 million in 2023, while Tesla Inc. paid $2.4 million for the security of its CEO, Elon Musk. Other technology giants have also invested heavily in protecting their chief executives, with NVIDIA Corporation and Apple Inc. spending $2.2 million and $820,309, respectively, in 2023. 

In recent years, tech companies have become more aware of the importance of executive security. As the risks facing high-profile leaders have grown, demand for these services has risen and so have their costs. The scale of these investments makes clear how much weight these organizations place on the safety of their leaders. 

The report also highlights the risks involved in leading a major tech company in today's world. In the decade-plus since Zuckerberg built Meta's platforms, he has faced increasing scrutiny over whether he is doing what is necessary to keep children safe on them. During a recent hearing of the Senate Judiciary Committee, Facebook's founder, Mark Zuckerberg, apologized directly to parents who have complained their children are suffering harm due to content on Meta's platforms, including Facebook and Instagram. 

This apology came after intense questioning from lawmakers about Meta’s efforts to protect children from harmful content, including non-consensual explicit images. Despite Meta’s investments in safety measures, the company continues to face criticism for not doing enough to prevent these harms. Zuckerberg's apology reflected both an acknowledgement of these issues and his willingness to accept responsibility for them. 

However, the apology also highlighted the ongoing challenges Meta faces in addressing safety concerns. The question of whether Mark Zuckerberg should step down as Meta's CEO has no simple answer; there are many issues to consider. The ethical concerns and controversy surrounding his conduct have seriously eroded public trust in the company's leadership. 

At the same time, his visionary approach and deep insight into the company have contributed greatly to Meta's success. What matters in the end is what will best serve the company's future. If Zuckerberg can genuinely demonstrate that he is addressing the ethical issues and making the platform more transparent, he may well be justified in keeping his position at Meta. 

If these issues persist, however, the business may require a change in leadership to restore trust and maintain a more sustainable and ethical outlook.

Security Alert for Gmail, Facebook, and Amazon Users

 


Attacks on Gmail, Facebook, and Amazon accounts keep rising, leaving users increasingly anxious. Hackers are targeting passwords for these services through phishing campaigns that pose significant risks to personal information. 

A new warning advises Gmail, Facebook, and Amazon users that a fresh wave of password-hacking attacks is putting their personal information at risk. In an increasingly digital society, protecting your credentials is "the name of the game." These platforms are among the most popular in the world, so it is vital to understand the threats they face and how to prevent them. 

Cybersecurity experts predict a steady rise in such attacks over the year, and they note that password hacks against Gmail and Facebook, as well as attempts to access Amazon accounts, have grown dramatically more sophisticated. 

These attempts typically rely on phishing, brute-force attacks, and social engineering, all designed to exploit overly trusting users or weaknesses in the platforms themselves. Recent threat analyses, including those by Kaspersky, show that password-theft attacks have become increasingly common against Amazon users, Facebook users, and, above all, Google users. 

Based on its own study, Kaspersky reported a 40% year-on-year increase in attempts to lure users to malicious sites impersonating these brands. It is no surprise that attackers want Gmail, Facebook, and Amazon credentials: compromised accounts can be exploited for a wide range of cybercrime, from data theft and malware distribution to credit card fraud. 

A Google account acts as a skeleton key: it can unlock a trove of other account credentials as well as personal information. The contents of a Gmail inbox are immensely valuable, and given how popular the free web-based email service is, most people have one. According to Kaspersky, hackers are mainly targeting Google, Amazon, and Facebook passwords in their efforts to steal personal information. 

In the first half of 2024, Kaspersky reported a 243% increase in attack attempts, with the company itself blocking approximately 4 million of them. Facebook users were exposed to an estimated 3.7 million phishing attempts over the same period, and Amazon users to 3 million. Olga Svistunova, a data security expert at Kaspersky, warned that a criminal with access to a Gmail account may be able to reach "multiple services". 

A single compromise can therefore leak both business information and customers' personal data. Google accounts are considered especially valuable because they can be used to break into other accounts and to harvest personal information for further fraud attempts. 

Rui Ataide and Hermes Bojaxhi of the GuidePoint Research and Intelligence Team (GRIT) have detected a new and worrying phishing campaign targeting more than 130 U.S. organizations. The term "highly sophisticated threat actor" has been so overused in recent years that it has almost lost all meaning, but the tactics and intrusion capabilities employed by this as-yet-unnamed attacker led the GRIT researchers to conclude that the label is deserved. 

Like other spear-phishing campaigns, this attack targets specific employees within an organization rather than hitting every email account with the scattergun approach typical of ordinary phishing. The attack has also targeted other tech giants, including Microsoft and Apple, as well as numerous smaller companies, with DHL, Mastercard, Netflix, eBay, and HSBC among those involved.  

Cloud security provider Netskope, in a recent report, found a 2,000-fold increase in traffic to phishing pages delivered through Microsoft Sway, a cloud-based application for creating newsletters, presentations, and visual guides. Hackers are increasingly exploiting a technique known as “quishing,” a form of phishing that uses QR codes to lure users into logging into malicious websites, where their passwords are stolen. This method is particularly effective because QR codes can bypass email scanners designed to detect text-based threats. 

Additionally, since QR codes are frequently scanned with mobile devices—which often lack the robust security measures found on desktops and laptops—users become more vulnerable to these types of attacks. A new variant of QR code phishing has been recently detailed by J. Stephen Kowski, the Field Chief Technology Officer at SlashNext, in a LinkedIn article. Unlike traditional QR code phishing, which typically involves an image-based QR code redirecting users to a malicious site, this new method leverages Unicode text characters to create QR codes. 

According to Kowski, this approach presents three significant challenges for defenders: it evades image-based analysis, ensures accurate screen rendering, and creates a duality in appearance between the screen rendering and plain text, making detection more difficult. Given these emerging threats, individuals who frequently use platforms such as Google’s Gmail, Facebook, and Amazon, as well as other major online services, should exercise caution to avoid becoming victims of identity theft. The risk of falling prey to password-hacking attempts can be significantly reduced by adhering to best practices in security hygiene across different accounts and maintaining a high level of vigilance. 
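
To make the text-based variant concrete, the sketch below renders a QR code purely as Unicode half-block characters rather than as an image. It is an illustration only: it assumes the open-source Python qrcode package is installed, and the encoded URL is a harmless placeholder, not a real phishing page.

```python
# Illustration only: render a QR code as Unicode text rather than as an image.
# Assumes the open-source "qrcode" package (pip install qrcode) is installed;
# the URL below is a harmless placeholder.
import io

import qrcode

qr = qrcode.QRCode(border=1)
qr.add_data("https://example.com/login")  # placeholder, not a real phishing page
qr.make(fit=True)

buf = io.StringIO()
qr.print_ascii(out=buf, invert=True)  # writes half-block characters, no image file
print(buf.getvalue())
```

Because the output is ordinary text, an email filter that only inspects image attachments for QR codes never sees one, yet a phone camera can still scan the rendered blocks on screen, which is the blind spot described above.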

In today’s technology-driven world, personal awareness and proactive measures serve as the first line of defence against such cyber threats.

Protecting Business Accounts from Phishing Attacks 

1. Recognize Phishing Indicators

- Generic Domain Extensions: Be cautious of emails from generic domains like "@gmail.com" instead of corporate domains, as attackers use these to impersonate businesses.

- Misspelt Domains: Watch for near-identical domains that slightly alter legitimate ones, such as "Faceb0ok.com." These deceptive domains are used to trick users into providing sensitive information (a short detection sketch follows after this list). 

- Content Quality: Legitimate communications are typically polished and professional. Spelling errors, poor grammar, and unprofessional formatting are red flags of phishing attempts. 

- Urgency and Fear Tactics: Phishing messages often create a sense of urgency, pressuring recipients to act quickly to avoid negative consequences, such as account suspensions or security breaches. 

- Unusual Requests: Be wary of unexpected requests for money, personal information, or prompts to click links or download attachments. Hackers often impersonate trusted entities to deceive recipients. 

2. Implement Security Software 

- Install robust security tools, including firewalls, spam filters, and antivirus software, to guard against phishing attacks. 

- Utilize web filters to restrict access to malicious websites. 

- Regularly update software to patch vulnerabilities and protect against new threats. 

3. Use Multi-Factor Authentication (MFA) 

- Enhance account security by implementing MFA, which requires a second verification factor (e.g., a code, fingerprint, or secret question) in addition to a password. 

- MFA significantly reduces the risk of unauthorized access and helps safeguard business credentials. 

By staying vigilant, maintaining updated security software, and utilizing MFA, businesses can better protect their accounts and sensitive information from phishing attacks.
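
As flagged under the Misspelt Domains indicator above, here is a minimal sketch of how a mail filter might score incoming sender domains against a list of trusted brands. The trusted list, the 0.8 threshold, and the use of Python's standard difflib module are illustrative assumptions, not a vetted production rule set.

```python
# Minimal sketch: flag sender domains that look almost, but not quite, like
# trusted brands (e.g. "faceb0ok.com"). The trusted list and the 0.8 threshold
# are illustrative assumptions only.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"facebook.com", "google.com", "amazon.com", "gmail.com"}

def lookalike_score(domain: str, trusted: str) -> float:
    """Similarity ratio between 0.0 (unrelated) and 1.0 (identical)."""
    return SequenceMatcher(None, domain.lower(), trusted.lower()).ratio()

def is_suspicious(sender_domain: str, threshold: float = 0.8) -> bool:
    """Suspicious = very similar to a trusted domain without actually being one."""
    sender_domain = sender_domain.lower()
    if sender_domain in TRUSTED_DOMAINS:
        return False
    return any(lookalike_score(sender_domain, t) >= threshold for t in TRUSTED_DOMAINS)

if __name__ == "__main__":
    for d in ["faceb0ok.com", "facebook.com", "amaz0n-support.com", "example.org"]:
        print(f"{d:20} suspicious={is_suspicious(d)}")
```

Real-world filters typically combine this kind of string similarity with additional signals such as domain age, registrar reputation, and email authentication results (SPF, DKIM, DMARC).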

EU Claims Meta’s Paid Ad-Free Option Violates Digital Competition Rules

 

European Union regulators have accused Meta Platforms of violating the bloc’s new digital competition rules by compelling Facebook and Instagram users to either view ads or pay to avoid them. This move comes as part of Meta’s strategy to comply with Europe's stringent data privacy regulations.

Starting in November, Meta began offering European users the option to pay at least 10 euros ($10.75) per month for ad-free versions of Facebook and Instagram. This was in response to a ruling by the EU’s top court, which mandated that Meta must obtain user consent before displaying targeted ads, a decision that jeopardized Meta’s business model of personalized advertising.

The European Commission, the EU’s executive body, stated that preliminary findings from its investigation indicate that Meta’s “pay or consent” model breaches the Digital Markets Act (DMA) of the 27-nation bloc. According to the commission, Meta’s approach fails to provide users the right to “freely consent” to the use of their personal data across its various services for personalized ads.

The commission also criticized Meta for not offering a less personalized service that is equivalent to its social networks. Meta responded that its ad-free subscription model follows the direction set by the highest court in Europe and complies with the DMA, and said it intends to engage in constructive dialogue with the European Commission to resolve the investigation.

The investigation was launched soon after the DMA took effect in March, aiming to prevent tech “gatekeepers” from dominating digital markets through heavy financial penalties. One of the DMA's objectives is to reduce the power of Big Tech firms that have amassed vast amounts of personal data, giving them an advantage over competitors in online advertising and social media services. The commission suggested that Meta should offer an option that doesn’t rely on extensive personal data sharing for advertising purposes.

European Commissioner Thierry Breton, who oversees the bloc’s digital policy, emphasized that the DMA aims to empower users to decide how their data is used and to ensure that innovative companies can compete fairly with tech giants regarding data access.

Meta now has the opportunity to respond to the commission’s findings, with the investigation due to conclude by March 2025. The company could face fines of up to 10% of its annual global revenue, potentially amounting to billions of euros. Under the DMA, Meta is classified as one of seven online gatekeepers, with Facebook, Instagram, WhatsApp, Messenger, and its online ad business listed among two dozen “core platform services” that require the highest level of regulatory scrutiny.
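
To give a rough sense of scale for that 10% ceiling, the back-of-the-envelope calculation below uses Meta's reported full-year 2023 global revenue of roughly $135 billion; that revenue figure is an outside assumption used purely for illustration.

```python
# Rough scale of the DMA's maximum first-offence penalty for Meta.
# Assumption: ~$135 billion in worldwide revenue for 2023 (illustrative figure).
annual_revenue_usd = 135e9
dma_cap = 0.10  # DMA ceiling: up to 10% of annual global turnover

print(f"Maximum DMA fine: about ${annual_revenue_usd * dma_cap / 1e9:.1f} billion")
```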

This accusation against Meta is part of a series of regulatory actions by Brussels against major tech companies. Recently, the EU charged Apple with preventing app makers from directing users to cheaper options outside its App Store and accused Microsoft of violating antitrust laws by bundling its Teams app with its Office software.


Terrorist Tactics: How ISIS Duped Viewers with Fake CNN and Al Jazeera Channels


ISIS, a terrorist organization, allegedly launched fake channels on Google-owned YouTube and on Facebook that posed as the global news outlets CNN and Al Jazeera. The goal was to borrow the credibility of those brands and ease the spread of ISIS propaganda.

According to research by the Institute for Strategic Dialogue, the group ran two YouTube channels as well as accounts on Facebook and X (formerly Twitter) through an outlet called 'War and Media'.

The campaign went live in March of this year, using false profiles that resembled reputable channels to spread propaganda on Facebook and YouTube. The videos remained live on YouTube for more than a month; it is unclear when they were taken down from Facebook.

The Deceptive Channels

ISIS operatives set up multiple fake channels on YouTube, each mimicking the branding and style of reputable news outlets. These channels featured professionally edited videos, complete with logos and graphics reminiscent of CNN and Al Jazeera. The content ranged from news updates to opinion pieces, all designed to lend an air of credibility.

Tactics and Objectives

1. Impersonation: By posing as established media organizations, ISIS aimed to deceive viewers into believing that the content was authentic. Unsuspecting users might stumble upon these channels while searching for legitimate news, inadvertently consuming extremist propaganda.

2. Content Variety: The fake channels covered various topics related to ISIS’s global expansion. Videos included recruitment messages, calls for violence, and glorification of terrorist acts. The diversity of content allowed them to reach a broader audience.

3. Evading Moderation: YouTube’s content moderation algorithms struggled to detect these fake channels. The professional production quality and branding made it challenging to distinguish them from genuine news sources. As a result, the channels remained active for over a month before being taken down.

Challenges for Social Media Platforms

  • Algorithmic Blind Spots: Algorithms designed to identify extremist content often fail when faced with sophisticated deception. The reliance on visual cues (such as logos) can be exploited by malicious actors.
  • Speed vs. Accuracy: Platforms must strike a balance between rapid takedowns and accurate content assessment. Delayed action allows harmful content to spread, while hasty removal risks false positives.
  • User Vigilance: Users play a crucial role in reporting suspicious content. However, the resemblance to legitimate news channels makes it difficult for them to discern fake from real.

Why is this harmful for Facebook, X users, and YouTube users?

The creation of phony social media channels impersonating renowned news broadcasters such as CNN and Al Jazeera shows how far the terrorist organization's approach to evading content moderation on social media platforms has developed.

Unsuspecting users may be influenced by "honeypot" efforts, which, according to the research, will become more sophisticated, making it even more difficult to restrict the spread of terrorist content online.

Behind the Breach: How ARRL Fought Back Against Cyber Intruders


The American Radio Relay League (ARRL), the primary body for amateur radio in the United States, has released new details about the May 2024 cyberattack. The ARRL cyberattack took down its Logbook of the World (LoTW), leaving many members dissatisfied with the organization's perceived lack of information.

ARRL Targeted in Sophisticated Cyber Attack

According to a recent ARRL update, on or around May 12, 2024, the organization's network was attacked by a rogue international cyber gang. When the ARRL cyberattack was discovered, the organization quickly contacted the FBI and enlisted the assistance of third-party specialists in the investigation and cleanup efforts.

The FBI classified the ARRL cyberattack as "unique," owing to its nature of infiltrating network devices, servers, cloud-based services, and PCs.

ARRL's management swiftly formed an incident response team to contain the damage, repair servers, and test apps for appropriate operation.

In a statement, ARRL reiterated its commitment to resolving the issue: "Thank you for being patient and understanding as our staff works with an exceptional team of specialists to restore full operation to our systems and services. We will continue to provide members with updates as needed and to the degree possible."

The Attack

The cyber attack on ARRL was well-coordinated and multifaceted:

  • Infiltration: The attackers gained unauthorized access to ARRL’s network devices and servers. They exploited vulnerabilities, likely through phishing emails or compromised credentials.
  • Scope: The attack affected various systems, including communication channels, member databases, and administrative tools. The attackers aimed to disrupt services and compromise sensitive information.
  • Attribution: While ARRL has not publicly disclosed the identity of the cyber group, experts believe it to be an international entity with advanced capabilities.

ARRL’s Response

  • Emergency Measures: ARRL immediately isolated affected systems, shut down compromised servers, and engaged cybersecurity experts to assess the damage.
  • Collaboration with Law Enforcement: The organization promptly reported the incident to the FBI, which launched an investigation. Cooperation with law enforcement agencies is crucial in such cases.
  • Transparency: ARRL communicated transparently with its members, providing regular updates via email, website announcements, and social media. Transparency builds trust and helps members stay informed.
  • Recovery Efforts: ARRL worked tirelessly to restore services. Backups were crucial for data recovery, and the organization implemented additional security measures.

Lessons Learned

  • Vigilance: Organizations, regardless of size, must remain vigilant against cyber threats. Regular security audits, employee training, and robust incident response plans are essential.
  • Collaboration: Cybersecurity is a collective effort. Collaboration with law enforcement, industry peers, and security experts enhances resilience.
  • Communication: Transparent communication during a crisis fosters trust and ensures that affected parties receive timely information.
Despite ARRL's efforts, many members felt the organization was not forthcoming with information; one Facebook user wrote a lengthy post criticizing ARRL's approach to communication.

Meta to Train AI with Public Facebook and Instagram Posts

 


 

Meta, the company behind Facebook and Instagram, is set to begin using public posts from European users to train its artificial intelligence (AI) systems starting June 26. This decision has sparked discussions about privacy and GDPR compliance.

Utilising Public Data for AI

European users of Facebook and Instagram have recently been notified that their public posts could be used to help develop Meta's AI technologies. The information that might be utilised includes posts, photos, captions, and messages sent to an AI, but private messages are excluded. Meta has emphasised that only public data from user profiles will be used, and data from users under 18 will not be included.

GDPR Compliance and Legitimate Interest

Under the General Data Protection Regulation (GDPR), companies can process personal data if they demonstrate a legitimate interest. Meta argues that improving AI systems constitutes such an interest. Despite this, users have the right to opt out of having their data used for this purpose by submitting a form through Facebook or Instagram, although these forms are currently unavailable.

Even if users opt out, their data may still be used if they are featured in another user's public posts or images. Meta has provided a four-week notice period before collecting data to comply with privacy regulations.

Regulatory Concerns and Delays

The Irish Data Protection Commission (DPC) intervened following Meta's announcement, resulting in a temporary delay. The DPC requested clarifications from Meta, which the company has addressed. Meta assured that only public data from EU users would be utilized and confirmed that data from minors would not be included.

Meta’s AI Development Efforts

Meta is heavily investing in AI research and development. The company’s latest large language model, Llama 3, released in April, powers its Meta AI assistant, though it is not yet available in Europe. Meta has previously used public posts to train its AI assistant but did not include this data in training the Llama 2 model.

In addition to developing AI software, Meta is also working on the hardware needed for AI operations, introducing custom-made chips last month.

Meta's initiative to use public posts for AI training highlights the ongoing balance between innovation and privacy. While an opt-out option is provided, its current unavailability and the potential use of data from non-consenting users underscore the complexities of data privacy.

European users should remain informed about their rights under GDPR and utilize the opt-out process when available. Despite some limitations, Meta's efforts to notify users and offer an opt-out reflect a step towards balancing technological advancement with privacy concerns.

This development represents a striking move in Meta's AI journey and accentuates the critical role of transparency and regulatory oversight in handling personal data responsibly.


Gmail and Facebook Users Advised to Secure Their Accounts Immediately

 



In a recent report by Action Fraud, it has been disclosed that millions of Gmail and Facebook users are at risk of cyberattacks, with Brits losing a staggering £1.3 million to hackers. The data reveals that a concerning 22,530 individuals fell victim to account breaches in the past year alone.

According to Pauline Smith, Head of Action Fraud, the ubiquity of social media and email accounts makes everyone susceptible to fraudulent activities and cyberattacks. As technology advances, detecting fraud becomes increasingly challenging, emphasising the critical need for enhanced security measures.

The report highlights three primary methods exploited by hackers to compromise accounts: on-platform chain hacking, leaked passwords, and phishing. On-platform chain hacking involves cybercriminals seizing control of one account to infiltrate others. Additionally, leaked passwords from data breaches pose a significant threat to account security.

To safeguard against such threats, Action Fraud recommends adopting robust security practices. Firstly, users are advised to create strong and unique passwords for each of their email and social media accounts. One effective method suggested is combining three random words that hold personal significance, balancing memorability with security.
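
As a rough illustration of the three-random-words approach, the sketch below uses Python's secrets module to pick words with a cryptographically secure random generator. The short word list is a placeholder; a real generator would draw from a large dictionary, and words chosen truly at random are generally stronger than words with personal significance.

```python
# Sketch of a "three random words" passphrase generator.
# The tiny WORDS list is a placeholder; use a large dictionary file in practice.
import secrets

WORDS = ["harbour", "pixel", "mountain", "copper", "stereo", "lantern", "tiger", "velvet"]

def three_word_password(separator: str = "-") -> str:
    """Join three words chosen with a cryptographically secure RNG."""
    return separator.join(secrets.choice(WORDS) for _ in range(3))

print(three_word_password())  # e.g. copper-lantern-tiger
```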

Moreover, implementing 2-Step Verification (2SV) adds an extra layer of protection to accounts. With 2SV, users are prompted to provide additional verification, such as a code sent to their phone, when logging in from a new device or making significant changes to account settings. This additional step fortifies account security, mitigating the risk of unauthorised access even if passwords are compromised.
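
The sketch below shows the server side of that second step, verifying a time-based one-time password of the kind produced by an authenticator app. It assumes the third-party pyotp package and is a simplified illustration; in a real service the per-user secret would be provisioned once and stored securely, not generated on every run.

```python
# Simplified 2-Step Verification check using time-based one-time passwords (TOTP).
# Assumes the third-party "pyotp" package (pip install pyotp).
import pyotp

secret = pyotp.random_base32()   # shared once with the user's authenticator app
totp = pyotp.TOTP(secret)

code_from_user = totp.now()      # stand-in for the 6-digit code the user types in
if totp.verify(code_from_user):
    print("Second factor accepted: a stolen password alone is not enough.")
else:
    print("Code rejected or expired.")
```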

Recognizing the signs of phishing scams is also crucial in preventing account breaches. Users should remain vigilant for indicators such as spelling errors, urgent requests for information, and suspicious inquiries. By staying informed and cautious, individuals can reduce their vulnerability to cyber threats.

In response to the escalating concerns, tech giants like Google have implemented measures to enhance password security. Features such as password security alerts notify users of compromised, weak, or reused passwords, empowering them to take proactive steps to safeguard their accounts.

The prevalence of online account breaches means users must stay constantly alert about online security. By adopting best practices such as creating strong passwords, enabling 2-Step Verification, and recognizing phishing attempts, users can safeguard their personal information and financial assets from malicious actors.



Technical Glitch Causes Global Disruption for Meta Users

 


In a recent setback for Meta users, a widespread service outage occurred on March 5th, affecting hundreds of thousands worldwide. Meta's spokesperson, Andy Stone, attributed the disruption to a "technical issue," apologising for any inconvenience caused.

Shortly after the incident, multiple hacktivist groups, including Skynet, Godzilla, and Anonymous Sudan, claimed responsibility. Cybersecurity firm Cyberint suggested the disruption might indeed have been the result of a cyberattack, as abnormal traffic patterns indicative of a DDoS attack were detected.

The outage left Facebook and Instagram users unable to access the platforms, with many being inexplicably logged out. Some users, despite entering correct credentials, received "incorrect password" messages, raising concerns about a potential hacking event. Both desktop and mobile users, totaling over 550,000 on Facebook and 90,000 on Instagram globally, were impacted.

This isn't the first time Meta (formerly Facebook) faced such issues. In late 2021, a six-hour outage occurred when the Border Gateway Protocol (BGP) routes were withdrawn, effectively making Facebook servers inaccessible. The BGP functions like a railroad switchman, directing data packets' paths, and the absence of these routes caused a communication breakdown.
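
For readers curious how outsiders watched that 2021 withdrawal happen, the sketch below queries RIPEstat, a public routing-data service, for the prefixes currently announced by AS32934, the autonomous system number commonly attributed to Meta. The endpoint and response shape follow RIPEstat's documented announced-prefixes call, but treat those details as assumptions rather than a guaranteed interface.

```python
# Minimal sketch: ask RIPEstat (a public routing-data API) how many prefixes
# an operator currently announces in BGP. AS32934 is commonly listed as Meta's
# ASN; endpoint and response shape are assumptions based on RIPEstat's docs.
import json
import urllib.request

ASN = "AS32934"
URL = f"https://stat.ripe.net/data/announced-prefixes/data.json?resource={ASN}"

with urllib.request.urlopen(URL, timeout=30) as resp:
    payload = json.load(resp)

prefixes = payload.get("data", {}).get("prefixes", [])
print(f"{ASN} currently announces {len(prefixes)} prefixes")
# During the 2021 outage, this count collapsing toward zero was the outside
# world's first hint that Facebook's routes had been withdrawn.
```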

As the outage unfolded, users found themselves abruptly logged out of the platform, exacerbating the inconvenience. The disruption's ripple effect triggered concerns among users, with fears of a potential cyberattack amplifying the chaos.

It's worth noting that hacktivist groups often claim responsibility for disruptions they may not have caused, aiming to boost their perceived significance and capabilities. In this case, the true source of the disruption remains under investigation, and Meta continues to work on strengthening its systems against potential cyber threats.

In the contemporary sphere of technology, where service interruptions have become more prevalent, it is vital for online platforms to educate themselves on cybersecurity measures. Users are urged to exercise vigilance and adhere to best practices in online security, thus effectively mitigating the repercussions of such incidents.

This incident serves as a reminder of the interconnected nature of online platforms and the potential vulnerabilities that arise from technical glitches or malicious activities. Meta assures users that they are addressing the issue promptly and implementing measures to prevent future disruptions.

As the digital world persists in evolution, users and platforms alike must adapt to the dynamic landscape, emphasising the importance of cybersecurity awareness and resilient systems to ensure a secure online experience for all.




Meta's AI Ambitions Raised Privacy and Toxicity Concerns

In a groundbreaking announcement following Meta CEO Mark Zuckerberg's latest earnings report, concerns have been raised over the company's intention to utilize vast troves of user data from Facebook and Instagram to train its own AI systems, potentially creating a competing chatbot. 

Zuckerberg's revelation that Meta possesses more user data than what was employed in training ChatGPT has sparked widespread apprehension regarding privacy and toxicity issues. The decision to harness personal data from Facebook and Instagram posts and comments for the development of a rival chatbot has drawn scrutiny from both privacy advocates and industry observers. 

This move, unveiled by Zuckerberg, has intensified anxieties surrounding the handling of sensitive user information within Meta's ecosystem. As reported by Bloomberg, the disclosure of Meta's strategic shift towards leveraging its extensive user data for AI development has set off a wave of concerns regarding the implications for user privacy and the potential amplification of toxic behaviour within online interactions. 

Additionally, Meta may offer the resulting chatbot to the public free of charge, a prospect that has raised further concerns in the tech community. While accessible AI technology may seem promising, critics argue that Zuckerberg's ambitious plans lack adequate consideration of the potential consequences and ethical implications. 

Following the new development, Mark Zuckerberg reported to the public that he sees Facebook's continued user growth as an opportunity to leverage data from Facebook and Instagram to develop powerful, general-purpose artificial intelligence. With hundreds of billions of publicly shared images and tens of billions of public videos on these platforms, along with a significant volume of public text posts, Zuckerberg believes this data can provide unique insights and feedback loops to advance AI technology. 

Furthermore, as per Zuckerberg, Meta has access to an even larger dataset than Common Crawl, comprised of user-generated content from Facebook and Instagram, which could potentially enable the development of a more sophisticated chatbot. This advantage extends beyond sheer volume; the interactive nature of the data, particularly from comment threads, is invaluable for training conversational AI agents. This strategy mirrors OpenAI's approach of mining dialogue-rich platforms like Reddit to enhance the capabilities of its chatbot. 

What is Threatening? 

Meta's plan to train its AI on personal posts and conversations from Facebook comments raises significant privacy concerns. Additionally, the internet is rife with toxic content, including personal attacks, insults, racism, and sexism, which poses a challenge for any chatbot training system. Apple, known for its cautious approach, has faced delays in its Siri relaunch due to these issues. However, Meta's situation may be particularly problematic given the nature of its data sources. 

Facebook's Two Decades: Four Transformative Impacts on the World

 

As Facebook celebrates its 20th anniversary, it's a moment to reflect on the profound impact the platform has had on society. From revolutionizing social media to sparking privacy debates and reshaping political landscapes, Facebook, now under the umbrella of Meta, has left an indelible mark on the digital world. Here are four key ways in which Facebook has transformed our lives:

1. Revolutionizing Social Media Landscape:
Before Facebook, platforms like MySpace existed, but Mark Zuckerberg's creation quickly outshone them upon its 2004 launch. Within a year, it amassed a million users, surpassing MySpace within four years, propelled by innovations like photo tagging. Despite fluctuations, Facebook steadily grew, reaching over a billion monthly users by 2012 and 2.11 billion daily users by 2023. Despite waning popularity among youth, Facebook remains the world's foremost social network, reshaping online social interaction.

2. Monetization and Privacy Concerns:
Facebook demonstrated the value of user data, becoming a powerhouse in advertising alongside Google. However, its data handling has been contentious, facing fines for breaches like the Cambridge Analytica scandal. Despite generating over $40 billion in revenue in the last quarter of 2023, Meta, Facebook's parent company, has faced legal scrutiny and fines for mishandling personal data.

3. Politicization of the Internet:
Facebook's targeted advertising made it a pivotal tool in political campaigning worldwide, with significant spending observed, such as in the lead-up to the 2020 US presidential election. It also facilitated grassroots movements like the Arab Spring. However, its role in exacerbating human rights abuses, as seen in Myanmar, has drawn criticism.

4. Meta's Dominance:
Facebook's success enabled Meta, previously Facebook, to acquire and amplify companies like WhatsApp, Instagram, and Oculus. Meta boasts over three billion daily users across its platforms. When unable to acquire rivals, Meta has been accused of replicating their features, facing regulatory challenges and accusations of market dominance. The company is shifting focus to AI and the Metaverse, indicating a departure from its Facebook-centric origins.

Looking ahead, Facebook's enduring popularity poses a challenge amidst rapid industry evolution and Meta's strategic shifts. As Meta ventures into the Metaverse and AI, the future of Facebook's dominance remains uncertain, despite its monumental impact over the past two decades.

Mark Zuckerberg Apologizes to Families in Fiery US Senate Hearing


In a recent US Senate hearing, Mark Zuckerberg, the CEO of Meta (formerly Facebook), faced intense scrutiny over the impact of social media platforms on children. Families who claimed their children had been harmed by online content were present, and emotions ran high throughout the proceedings.

The Apology and Its Context

Zuckerberg's apology came after families shared heartbreaking stories of self-harm and suicide related to social media content. The hearing focused on protecting children online, and it provided a rare opportunity for US senators to question tech executives directly. Other CEOs, including those from TikTok, Snap, X (formerly Twitter), and Discord, were also in the hot seat.

The central theme was clear: How can we ensure the safety and well-being of young users in the digital age? The families' pain and frustration underscored the urgency of this question.

The Instagram Prompt and Child Sexual Abuse Material

One important topic during the hearing was an Instagram prompt related to child sexual abuse material. Zuckerberg acknowledged that the prompt was a mistake and expressed regret. The prompt mistakenly directed users to search for explicit content when they typed certain keywords. This incident raised concerns about the effectiveness of content moderation algorithms and the need for continuous improvement.

Zuckerberg defended the importance of free expression but also recognized the responsibility that comes with it. He emphasized the need to strike a balance between allowing diverse viewpoints and preventing harm. The challenge lies in identifying harmful content without stifling legitimate discourse.

Directing Users Toward Helpful Resources

During his testimony, Zuckerberg highlighted efforts to guide users toward helpful resources. When someone searches for self-harm-related content, Instagram now directs them to resources that promote mental health and well-being. While imperfect, this approach reflects a commitment to mitigating harm.

The Role of Parents and Educators

Zuckerberg encouraged parents to engage with their children about online safety and set boundaries. He acknowledged that technology companies cannot solve these issues alone; collaboration with schools and communities is essential.

Mark Zuckerberg's apology was a significant moment, but it cannot be the end. Protecting children online requires collective action from tech companies, policymakers, parents, and educators. We must continue to address the challenges posed by social media while fostering a healthy digital environment for the next generation.

As the hearing concluded, the families' pain remained palpable. Their stories serve as a stark reminder that behind every statistic and algorithm lies a real person—a child seeking connection, validation, and safety. 

Trading Tomorrow's Technology for Today's Privacy: The AI Conundrum in 2024

 


Artificial Intelligence (AI) continually absorbs and redistributes humanity's collective intelligence through machine-learning algorithms. The technology is fast becoming all-pervasive, and it is increasingly clear that as it advances, so do questions about how it manages data, or fails to. As 2024 begins, certain developments will have long-lasting impacts. 

Google's recent integration of Bard, its chat-based AI tool, into a host of other Google apps and services is a good example of how generative AI is being moved more directly into consumer life through text, images, and voice. 

A super-charged version of Google Assistant, Bard now connects to everything from Gmail, Docs, and Drive to Google Maps, YouTube, Google Flights, and hotel search. Working in a conversational, natural-language mode, it can filter enormous amounts of online data while providing personalized responses to individual users. 

It can create shopping lists, summarize emails, and book trips, doing the work of a personal assistant for those without one. At the same time, 2023 offered many reminders that not everything seen or heard on the internet is real, whether in politics, movies, or even wars. 

As artificial intelligence technology continues to advance rapidly, the advent of deepfakes has raised concern in India about their potential to influence electoral politics, especially ahead of next year's Lok Sabha elections. 

There has been a sharp rise in deepfakes, causing widespread concern in the country. A deepfake uses artificial intelligence to create video or audio depicting people doing or saying things they never did, spreading misinformation and damaging reputations. 

In the wake of the massive leap in public consciousness about the importance of generative AI that occurred in 2023, individuals and businesses will be putting artificial intelligence at the centre of even more decisions in the coming year. 

Artificial intelligence is no longer a new concept. In 2023, ChatGPT, MidJourney, Google Bard, corporate chatbots, and other artificial intelligence tools have taken the internet by storm. Their capabilities have been commended by many, while others have expressed concerns regarding plagiarism and the threat they pose to certain careers, including those related to content creation in the marketing industry. 

There is no denying that artificial intelligence has dramatically changed the privacy landscape. Whatever one's feelings about AI, most people agree that these tools are trained on data collected from both their creators and their users. 

It is often hard to maintain transparency about how this data is handled, precisely because it is hard to understand how it is being handled. Users may also forget that their conversations with an AI are not as private as text conversations with other humans, and they may inadvertently disclose sensitive data in the process. 

Under the GDPR, users are already protected from fully automated decisions that determine the course of their lives; for example, an AI cannot deny a bank loan solely on the basis of its analysis of someone's financial situation. Legislation proposed in many parts of the world will bring more regulatory enforcement around artificial intelligence (AI) in 2024. 

Additionally, AI developers will likely continue refining their tools into (hopefully) more privacy-conscious products as the laws governing them grow more complex. Zamir anticipates that Bard Extensions will become even more personalized and more deeply integrated with the online shopping experience, auto-filling checkout forms, tracking shipments, and automatically comparing prices. 

All of that entails risk, according to him: unauthorized access to personal and financial information during automated form-filling, malicious interception of real-time tracking data, and even manipulated data in price comparisons. 

During 2024, the tapestry of artificial intelligence will undergo a major transformation, one that will stir further debate on privacy and security. From Google's Bard to deepfake anxieties, users riding the wave of AI integration should keep vigilant minds and stay alert to the technology's implications. The future of AI must be woven with a moral compass, one that guides innovation and ensures that AI responsibly enriches lives.

Meta Extends Ad-Free Facebook and Instagram Premium Access Worldwide



With the introduction of its ad-free subscription service, Meta, the parent company of Facebook and Instagram, is offering European users the chance to enjoy their favourite social platforms without being bombarded with advertisements. A recent ruling by the EU's Court of Justice required Meta to obtain users' consent before personalizing ads for them, and this move shows the company complying with the European Union's changing regulatory framework. 

According to the announcement, users in these regions will be able to choose in November between continuing to use the platforms for free with ads or signing up for a paid, ad-free subscription. Subscribers' information will not be used to target adverts. 

Facebook and Instagram users in the European Union will soon be able to enjoy an ad-free experience, but at a cost. Starting in November of this year, they can opt into the new premium service offered by Meta, the parent company that owns and operates both platforms. 

Regarding pricing, users aged 18 and over will pay €9.99 per month (roughly $10.55) for ad-free access through a web browser, or €12.99 per month through the iOS and Android apps. Once enrolled, subscribers will see no ads on Facebook or Instagram, and their data and online activity will not be used to tailor future ads based on their browsing. 

Beginning March 1, 2024, each additional account added to a user's Account Center will incur an extra fee of €6 per month on the web and €8 per month on iOS and Android devices.
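
As a quick worked example of the published web pricing, suppose a user keeps one primary subscription and two additional linked accounts after March 1, 2024; the scenario itself is hypothetical.

```python
# Hypothetical scenario: one primary web subscription plus two linked accounts
# after March 1, 2024, using the published EU web-browser prices.
base_web = 9.99    # EUR per month for the first account (web)
extra_web = 6.00   # EUR per month for each additional Account Center account (web)

linked_accounts = 2
monthly_total = base_web + linked_accounts * extra_web
print(f"Monthly cost on the web: EUR {monthly_total:.2f}")  # EUR 21.99
```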

Historically, Meta has operated solely by offering free social networking services and selling advertising to companies that want to reach those users. The change illustrates how data privacy laws and other government policies, especially in Europe, are pushing technology companies to redesign their products to comply. 

The new European Union rules cover 27 countries with more than 450 million people, and companies such as Amazon, Apple, Google, and TikTok must also comply with them. According to Meta's estimates, about 258 million people in the region use Facebook each month, and roughly 257 million use Instagram. 

The higher iOS and Android prices reflect the fees that Apple and Google impose under their respective purchasing policies. Until March 1, 2024, a single subscription covers all accounts linked in a user's Account Center; after that date, each additional linked account costs €6 per month on the web and €8 per month on iOS and Android. 

In July, the European Court of Justice, the highest court in the European Union, effectively barred Meta from combining data collected from users across its various platforms, including Facebook, Instagram and WhatsApp, as well as from outside websites and apps, in order to protect user privacy. E.U. regulators had already fined Meta in January for making acceptance of personalized ads a condition of using Facebook, a decision issued in response to a violation of privacy regulations. Meta said that offering a subscription without adverts to its subscribers in Europe may be a way to comply in full with the July judgment. 

A subscription lets users access the platforms without seeing the advertising shown to free users. Facebook and Instagram have never offered a paid, ad-free tier; since the company's earliest days, founder Mark Zuckerberg has maintained that the services could be offered for free only if advertisements accompanied them. 

However, Meta is now offering Instagram and Facebook users a single, simple option to subscribe to both services. The move comes in response to pressure from the European Union, which is why the option is available only to users in the EU.

This means Instagram users in India will continue to see ads in their feeds, whether they want them or not. If the subscription proves popular in the European Union and Meta sees value in it, a similar plan could eventually be introduced in India.

The subscription price does seem steep, all the more so in Indian rupees, where the figures work out to roughly Rs 880 to Rs 1,150, but the prospect of an ad-free Instagram and Facebook is tempting. Meta also promises that subscribers' personal information will not be used for targeted advertising. 

Mark Zuckerberg has said in interviews that Facebook wants users to have free access to its services, with advertising providing the revenue so that both users and the company benefit, a point he and the company have repeated many times.

There will be no change to the ad-supported experience for users who continue to use Facebook and Instagram for free. Meta will keep offering tools and settings that let users influence which ads they see and what data is used to target them.

Advertisers will still be able to run personalised campaigns aimed at users in Europe who stick with the free, ad-supported services. To preserve value for both users and businesses, Meta says it will continue investing in tools that give people greater control over their ad experience on its platforms.

Because only users 18 and over can subscribe for an ad-free experience, Meta says it is exploring options for providing teenagers with a responsible ad experience in line with the evolving regulatory landscape.

Privacy Class Action Targets OpenAI and Microsoft

OpenAI and Microsoft are the targets of a new consumer privacy class action lawsuit. The action responds to alleged privacy violations in how the companies handled user data, and it could mark a turning point in the continuing debate over internet companies and consumer privacy rights.

The complaint, which was submitted on September 6, 2023, claims that OpenAI and Microsoft both failed to protect user information effectively, infringing on the rights of consumers to privacy. According to the plaintiffs, the corporations' policies for gathering, storing, and exchanging data did not adhere to current privacy laws.

The plaintiffs accuse OpenAI and Microsoft of amassing vast quantities of personal data without explicit user consent, potentially exposing sensitive information to unauthorized third parties. The complaint also questions the transparency of the companies' data-handling policies.

This lawsuit follows a string of high-profile privacy-related incidents in the tech industry, emphasizing the growing importance of protecting user data. Critics argue that as technology continues to play an increasingly integral role in daily life, companies must take more proactive measures to ensure the privacy and security of their users.

The case against OpenAI and Microsoft echoes similar legal battles involving other tech giants, including Meta (formerly Facebook), further underscoring the need for comprehensive privacy reform. Sarah Silverman, a prominent figure in the entertainment industry, recently filed a lawsuit against OpenAI, highlighting the potentially far-reaching implications of this case.

The outcome of this lawsuit could potentially set a precedent for future legal action against companies that fall short of safeguarding consumer privacy. It may also prompt a broader conversation about the role of regulatory bodies in enforcing stricter privacy standards within the tech industry.

As the legal proceedings unfold, all eyes will be on the courts to see how this case against OpenAI and Microsoft will shape the future of consumer privacy rights in the United States and potentially serve as a catalyst for more robust data protection measures across the industry.

Vietnamese Cybercriminals Exploit Malvertising to Target Facebook Business Accounts

Cybercriminals associated with the Vietnamese cybercrime ecosystem are exploiting social media platforms, including Meta-owned Facebook, as a means to distribute malware. 

According to Mohammad Kazem Hassan Nejad, a researcher from WithSecure, malicious actors have been utilizing deceptive ads to target victims with various scams and malvertising schemes. This tactic has become even more lucrative with businesses increasingly using social media for advertising, providing attackers with a new type of attack vector – hijacking business accounts.

Over the past year, cyber attacks against Meta Business and Facebook accounts have gained popularity, primarily driven by activity clusters like Ducktail and NodeStealer, known for targeting businesses and individuals operating on Facebook. 

Social engineering plays a crucial role in gaining unauthorized access to user accounts, with victims being approached through platforms such as Facebook, LinkedIn, WhatsApp, and freelance job portals like Upwork. Search engine poisoning is another method employed to promote fake software, including CapCut, Notepad++, OpenAI ChatGPT, Google Bard, and Meta Threads.

Common tactics among these cybercrime groups include the misuse of URL shorteners, the use of Telegram for command-and-control (C2), and legitimate cloud services like Trello, Discord, Dropbox, iCloud, OneDrive, and Mediafire to host malicious payloads.

Ducktail, for instance, employs lures related to branding and marketing projects to infiltrate individuals and businesses on Meta's Business platform. In recent attacks, job and recruitment-related themes have been used to activate infections. 

Potential targets are directed to fraudulent job postings on platforms like Upwork and Freelancer through Facebook ads or LinkedIn InMail. These postings contain links to compromised job description files hosted on cloud storage providers, leading to the deployment of the Ducktail stealer malware.

The Ducktail malware is designed to steal saved session cookies from browsers, with specific code tailored to take over Facebook business accounts. These compromised accounts are sold on underground marketplaces, fetching prices ranging from $15 to $340.

Recent attack sequences observed between February and March 2023 involve the use of shortcut and PowerShell files to download and launch the final malware. The malware has evolved to harvest personal information from various platforms, including X (formerly Twitter), TikTok Business, and Google Ads. It also uses stolen Facebook session cookies to create fraudulent ads and gain elevated privileges.

One of the primary methods used to take over a victim's compromised account involves adding the attacker's email address, changing the password, and locking the victim out of their Facebook account.

The malware has incorporated new features, such as using RestartManager (RM) to kill processes that lock browser databases, a technique commonly found in ransomware. Additionally, the final payload is obfuscated using a loader to dynamically decrypt and execute it, making analysis and detection more challenging.

To hinder analysis efforts, the threat actors use uniquely generated assembly names and rely on SmartAssembly, bloating, and compression to obfuscate the malware.

Researchers from Zscaler also observed instances where the threat actors initiated contact using compromised LinkedIn accounts belonging to users in the digital marketing field, leveraging the authenticity of these accounts to aid in social engineering tactics. This highlights the worm-like propagation of Ducktail, where stolen LinkedIn credentials and cookies are used to log in to victims' accounts and expand their reach.

Ducktail is just one of many Vietnamese threat actors employing shared tools and tactics for fraudulent schemes. A Ducktail copycat known as Duckport, which emerged in late March 2023, engages in information stealing and Meta Business account hijacking. Notably, Duckport differs from Ducktail in terms of Telegram channels used for command and control, source code implementation, and distribution, making them distinct threats.

Duckport employs a unique technique of sending victims links to branded sites related to the impersonated brand or company, redirecting them to download malicious archives from file hosting services. Unlike Ducktail, Duckport abuses online note-taking services in place of Telegram as its channel for passing commands to victims' machines, and it adds further information-stealing and account-hijacking capabilities, including taking screenshots.

"The Vietnamese-centric element of these threats and high degree of overlaps in terms of capabilities, infrastructure, and victimology suggests active working relationships between various threat actors, shared tooling and TTPs across these threat groups, or a fractured and service-oriented Vietnamese cybercriminal ecosystem (akin to ransomware-as-a-service model) centered around social media platforms such as Facebook," WithSecure said.

Norway Cracks Down on Meta with Fines over Facebook Privacy Breaches

Norway's Data Protection Authority (Datatilsynet) will fine Facebook owner Meta Platforms 1 million crowns ($98,500) per day from August 14 over privacy breaches tied to its advertising practices. A penalty of this magnitude could set a precedent with major implications for other countries in Europe.

Meta Platforms has asked a Norwegian court to stay the fine imposed by the country's data regulator, which found that the owner of Facebook and Instagram had breached users' privacy on both platforms. 

Meta has sought a temporary injunction to block enforcement of the order, and its petition will be heard over two days beginning August 22. Meta Platforms did not respond to a request for comment, and media inquiries were referred to the company's Norwegian lawyer. 

Datatilsynet has ordered Meta Platforms not to harvest personal data from users in Norway, including their physical locations, for use in behavioral advertising, that is, advertising targeted at specific user groups. 

The practice is widespread among Big Tech companies. Tobias Judin, head of Datatilsynet's international section, said Meta will be fined 1 million crowns per day from Monday if it does not comply with the order. 

Meta Platforms has challenged the fine in court, according to Datatilsynet. The order runs until November 3, and Datatilsynet can make it permanent by referring its decision to the European Data Protection Board, which has the authority to endorse the Norwegian regulator's position. 

If the board endorses the decision, its effect would extend across the entire European region; so far, Datatilsynet has not taken that step. Meta, meanwhile, has announced that it intends to ask users in the European Union for consent before businesses can target them with ads based on how they interact with Meta's services such as Facebook and Instagram. 

Judin said Meta's proposed approach to seeking consent falls short, and that the company must stop this data processing immediately rather than wait until a fully functional consent mechanism is in place. From Monday, he argued, people's rights are being violated, even if many of them are unaware of it. 

A Meta spokesperson said the decision to change its approach was driven by regulatory obligations in Europe, stemming from an order issued in January by the Irish Data Protection Commission under EU-wide data protection rules. 

The Irish authority, which acts as Meta's lead regulator within the European Union, required the company to review the legal basis it relies on to target users with advertisements. Norway is not a member of the European Union, but it is part of the European single market.