The US Supreme Court is set to hear two landmark cases involving Facebook and Nvidia that could reshape how investors sue the tech sector after corporate scandals. Both firms are urging the Court to narrow the legal options available to investor groups, arguing that the claims against them are unfounded.
Facebook's Cambridge Analytica Case
The first case stems from the Cambridge Analytica scandal, in which a third-party firm gained access to the personal data of tens of millions of users without adequate oversight or follow-up. Facebook reportedly paid over $5 billion in penalties to the FTC and SEC for allegedly misleading both users and investors about how it handles data. Still, investor class-action lawsuits over the scandal remain, and Facebook is appealing to the Supreme Court in an effort to block those claims.
Facebook argues that its earlier filings presented data risks as hypothetical and that it was under no obligation to disclose that such risks had already materialized. The company also argues that forcing it to disclose all past data incidents would lead to "over-disclosure," flooding reports with information that confuses rather than helps investors. Facebook believes disclosure rules should remain flexible; if the SEC wants specific kinds of incidents disclosed, it should write new regulations for that purpose.
Nvidia and the Cryptocurrency Boom
The second case involves Nvidia, the world's biggest graphics chip maker, which allegedly played down how much of its 2017-2018 revenue came from cryptocurrency mining. When the crypto market collapsed, Nvidia was forced to cut its earnings forecast, catching investors off guard. The SEC subsequently fined Nvidia $5.5 million for failing to disclose how much of its revenue was tied to the erratic crypto market.
Investors argue that Nvidia's statements were misleading given the risks the company actually faced. Nvidia responds that any misstatement was not deliberate: demand in such a fast-changing market cannot be predicted, so unintentional errors are inevitable. The company also notes that existing securities laws already impose very high pleading standards precisely to deter "fishing expeditions," in which investors sue over financial losses without proper evidence. Nvidia's lawyers contend that relaxing these standards would invite a flood of such cases, harming the economy as a whole.
Possible Impact of Supreme Court on Investor Litigation
The Supreme Court will hear arguments in the Facebook case on November 6 and in the Nvidia case on November 13. The rulings could permanently alter the framework under which tech companies are held accountable to investors. Judgments in favour of Facebook and Nvidia would make it tougher for shareholders to file claims and collect damages after a firm suffers a crisis, giving tech companies respite but narrowing the legal options open to shareholders.
These cases come at a time when a trend of business-friendly rulings from the Supreme Court is curtailing the regulatory authority of agencies such as the SEC. Legal experts believe the Court's conservative majority may be more open than ever to appeals that seek to limit "nuisance" lawsuits, which the appellants argue threaten business stability and economic growth.
In deciding these cases, the Court will determine whether federal rules should allow private investors to enforce standards of corporate accountability, or whether that responsibility should rest primarily with regulatory bodies like the SEC.
According to research by the Institute for Strategic Dialogue, ISIS supporters managed two YouTube channels as well as two accounts on Facebook and X (formerly Twitter) through an outlet called 'War and Media'.
The campaign went live in March of this year. False profiles resembling reputable channels were used on Facebook and YouTube to spread propaganda. The videos remained live on YouTube for more than a month; it is unclear when they were taken down from Facebook.
ISIS operatives set up multiple fake channels on YouTube, each mimicking the branding and style of reputable news outlets. These channels featured professionally edited videos, complete with logos and graphics reminiscent of CNN and Al Jazeera. The content ranged from news updates to opinion pieces, all designed to lend an air of credibility.
1. Impersonation: By posing as established media organizations, ISIS aimed to deceive viewers into believing that the content was authentic. Unsuspecting users might stumble upon these channels while searching for legitimate news, inadvertently consuming extremist propaganda.
2. Content Variety: The fake channels covered various topics related to ISIS’s global expansion. Videos included recruitment messages, calls for violence, and glorification of terrorist acts. The diversity of content allowed them to reach a broader audience.
3. Evading Moderation: YouTube’s content moderation algorithms struggled to detect these fake channels. The professional production quality and branding made it challenging to distinguish them from genuine news sources. As a result, the channels remained active for over a month before being taken down.
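To make one small piece of the moderation puzzle concrete, here is a crude, hypothetical impersonation check based on string similarity between a channel name and a list of known news brands. The brand list, similarity threshold, and channel name are all invented for illustration; real moderation pipelines combine far stronger signals (logo matching, audio fingerprints, account metadata), and this sketch is not how YouTube's systems actually work.

```python
# Illustrative sketch: flag channel names that closely imitate known news
# brands using simple string similarity. Brand list and threshold are
# hypothetical placeholders, not a real moderation policy.
from difflib import SequenceMatcher

KNOWN_BRANDS = ["CNN", "Al Jazeera", "BBC News", "Reuters"]  # hypothetical list

def impersonation_score(channel_name: str) -> tuple[str, float]:
    """Return the most similar known brand and the similarity ratio (0..1)."""
    best_brand, best_ratio = "", 0.0
    for brand in KNOWN_BRANDS:
        ratio = SequenceMatcher(None, channel_name.lower(), brand.lower()).ratio()
        if ratio > best_ratio:
            best_brand, best_ratio = brand, ratio
    return best_brand, best_ratio

# A lookalike name with one swapped character ("AI" vs "Al") scores highly.
brand, score = impersonation_score("AI Jazeera")
if score > 0.6:  # hypothetical review threshold
    print(f"Flag for review: resembles {brand} (similarity {score:.2f})")
```

Even this toy check catches single-character lookalikes, which hints at why the fake channels relied on borrowed logos and production polish rather than obviously misspelled names alone.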
This new tactic of creating phony social media channels impersonating renowned news broadcasters such as CNN and Al Jazeera reveals how far the terrorist organization's approach to evading content moderation on social media platforms has evolved.
Unsuspecting users may be drawn in by such "honeypot" efforts, which, according to the research, will only become more sophisticated, making it even more difficult to restrict the spread of terrorist content online.
According to a recent update from ARRL, the American Radio Relay League, the organization's network was attacked by an international cyber gang on or around May 12, 2024. When the cyberattack was discovered, the organization quickly contacted the FBI and enlisted third-party specialists to assist with the investigation and cleanup efforts.
The FBI classified the ARRL cyberattack as "unique," owing to the way it infiltrated network devices, servers, cloud-based services, and PCs.
ARRL's management swiftly formed an incident response team to contain the damage, repair servers, and test applications for proper operation.
In a statement, ARRL reiterated its commitment to resolving the issue: "Thank you for being patient and understanding as our staff works with an exceptional team of specialists to restore full operation to our systems and services. We will continue to provide members with updates as needed and to the degree possible."
By all accounts, the cyberattack on ARRL was well-coordinated and multifaceted.
Meta, the company behind Facebook and Instagram, is set to begin using public posts from European users to train its artificial intelligence (AI) systems starting June 26. This decision has sparked discussions about privacy and GDPR compliance.
Utilising Public Data for AI
European users of Facebook and Instagram have recently been notified that their public posts could be used to help develop Meta's AI technologies. The information that might be utilised includes posts, photos, captions, and messages sent to an AI, but private messages are excluded. Meta has emphasised that only public data from user profiles will be used, and data from users under 18 will not be included.
GDPR Compliance and Legitimate Interest
Under the General Data Protection Regulation (GDPR), companies can process personal data if they demonstrate a legitimate interest. Meta argues that improving AI systems constitutes such an interest. Despite this, users have the right to opt out of having their data used for this purpose by submitting a form through Facebook or Instagram, although these forms are currently unavailable.
Even if users opt out, their data may still be used if they are featured in another user's public posts or images. Meta has provided a four-week notice period before collecting data to comply with privacy regulations.
Regulatory Concerns and Delays
The Irish Data Protection Commission (DPC) intervened following Meta's announcement, resulting in a temporary delay. The DPC requested clarifications from Meta, which the company has addressed. Meta assured that only public data from EU users would be utilized and confirmed that data from minors would not be included.
Meta’s AI Development Efforts
Meta is heavily investing in AI research and development. The company’s latest large language model, Llama 3, released in April, powers its Meta AI assistant, though it is not yet available in Europe. Meta has previously used public posts to train its AI assistant but did not include this data in training the Llama 2 model.
In addition to developing AI software, Meta is also working on the hardware needed for AI operations, introducing custom-made chips last month.
Meta's initiative to use public posts for AI training highlights the ongoing balance between innovation and privacy. While an opt-out option is provided, its current unavailability and the potential use of data from non-consenting users underscore the complexities of data privacy.
European users should remain informed about their rights under GDPR and utilize the opt-out process when available. Despite some limitations, Meta's efforts to notify users and offer an opt-out reflect a step towards balancing technological advancement with privacy concerns.
This development represents a striking move in Meta's AI journey and accentuates the critical role of transparency and regulatory oversight in handling personal data responsibly.
In a recent report by Action Fraud, it has been disclosed that millions of Gmail and Facebook users are at risk of cyberattacks, with Brits losing a staggering £1.3 million to hackers. The data reveals that a concerning 22,530 individuals fell victim to account breaches in the past year alone.
According to Pauline Smith, Head of Action Fraud, the ubiquity of social media and email accounts makes everyone susceptible to fraudulent activities and cyberattacks. As technology advances, detecting fraud becomes increasingly challenging, emphasising the critical need for enhanced security measures.
The report highlights three primary methods exploited by hackers to compromise accounts: on-platform chain hacking, leaked passwords, and phishing. On-platform chain hacking involves cybercriminals seizing control of one account to infiltrate others. Additionally, leaked passwords from data breaches pose a significant threat to account security.
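As a concrete illustration of the leaked-password threat, the hedged sketch below checks a password against the public Have I Been Pwned "range" endpoint, which uses k-anonymity: only the first five characters of the password's SHA-1 hash ever leave the machine. The example password is, of course, made up, and this is one possible client, not an official tool.

```python
# Sketch: check whether a password appears in known breach corpora via the
# Have I Been Pwned range API. Only a 5-character hash prefix is sent.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times the password appears in known breaches."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Response lines look like "SUFFIX:COUNT"; find our hash suffix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if pwned_count("password123") > 0:  # illustrative weak password
    print("This password has appeared in breaches; choose another.")
```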
To safeguard against such threats, Action Fraud recommends adopting robust security practices. Firstly, users are advised to create strong and unique passwords for each of their email and social media accounts. One effective method suggested is combining three random words that hold personal significance, balancing memorability with security.
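A minimal sketch of the "three random words" idea follows, assuming a toy word list and Python's cryptographically secure `secrets` module; a real generator would draw from a dictionary of thousands of words, and the words shown are placeholders.

```python
# Sketch: generate a "three random words" passphrase from a word list.
# The list here is a tiny placeholder; use a large dictionary in practice.
import secrets

WORDS = ["harbour", "biscuit", "meadow", "lantern", "pebble", "orchid"]

def three_word_password(separator: str = "-") -> str:
    """Join three securely chosen random words, e.g. 'lantern-pebble-meadow'."""
    return separator.join(secrets.choice(WORDS) for _ in range(3))

print(three_word_password())
```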
Moreover, implementing 2-Step Verification (2SV) adds an extra layer of protection to accounts. With 2SV, users are prompted to provide additional verification, such as a code sent to their phone, when logging in from a new device or making significant changes to account settings. This additional step fortifies account security, mitigating the risk of unauthorised access even if passwords are compromised.
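For the curious, here is a hedged sketch of how one common form of 2SV, the time-based one-time password (TOTP, RFC 6238), derives the six-digit codes an authenticator app displays. The Base32 secret below is a well-known documentation test value, not a real credential, and individual services may use different digit counts or periods.

```python
# Sketch of TOTP (RFC 6238): HMAC-SHA1 over a 30-second time counter,
# dynamically truncated to a short numeric code (RFC 4226).
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period           # current 30-second step
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # prints the current 6-digit code
```

Because the code depends on a secret shared only with the service plus the current time, a stolen password alone is not enough to log in.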
Recognizing the signs of phishing scams is also crucial in preventing account breaches. Users should remain vigilant for indicators such as spelling errors, urgent requests for information, and suspicious inquiries. By staying informed and cautious, individuals can reduce their vulnerability to cyber threats.
In response to the escalating concerns, tech giants like Google have implemented measures to enhance password security. Features such as password security alerts notify users of compromised, weak, or reused passwords, empowering them to take proactive steps to safeguard their accounts.
The prevalence of online account breaches demands that users stay alert about online security. By adopting best practices such as creating strong passwords, enabling 2-Step Verification, and recognizing phishing attempts, users can safeguard their personal information and financial assets from malicious actors.
In a recent setback for Meta users, a widespread service outage occurred on March 5th, affecting hundreds of thousands worldwide. Meta's spokesperson, Andy Stone, attributed the disruption to a "technical issue," apologising for any inconvenience caused.
Shortly after the incident, multiple hacktivist groups, including Skynet, Godzilla, and Anonymous Sudan, claimed responsibility. However, cybersecurity firm Cyberint revealed that the disruption might have been a result of a cyberattack, as abnormal traffic patterns indicative of a DDoS attack were detected.
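The kind of "abnormal traffic pattern" signal Cyberint describes can be approximated, very roughly, by comparing current request rates against a rolling baseline. The sketch below is purely illustrative; the numbers and threshold are invented and bear no relation to Meta's actual telemetry or Cyberint's methods.

```python
# Sketch: flag a traffic anomaly when the current request rate exceeds the
# rolling baseline mean by k standard deviations. All figures illustrative.
from collections import deque
from statistics import mean, stdev

def is_traffic_anomalous(history: deque, current_rps: float, k: float = 3.0) -> bool:
    """Flag current requests-per-second above mean + k * stdev of the baseline."""
    if len(history) < 10:  # need some baseline before judging
        return False
    mu, sigma = mean(history), stdev(history)
    return current_rps > mu + k * max(sigma, 1.0)

baseline = deque([1000, 1040, 980, 1010, 995, 1020, 1005, 990, 1015, 1000], maxlen=60)
print(is_traffic_anomalous(baseline, 12000))  # True: a DDoS-like spike
print(is_traffic_anomalous(baseline, 1030))   # False: normal variation
```

Real detection systems look at many more dimensions (source distribution, packet types, geography), which is why a raw rate spike alone cannot distinguish an attack from a misconfiguration.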
The outage left Facebook and Instagram users unable to access the platforms, with many being inexplicably logged out. Some users, despite entering correct credentials, received "incorrect password" messages, raising concerns about a potential hacking event. Both desktop and mobile users were affected, with outage reports exceeding 550,000 for Facebook and 90,000 for Instagram globally.
This isn't the first time Meta (formerly Facebook) faced such issues. In late 2021, a six-hour outage occurred when the Border Gateway Protocol (BGP) routes were withdrawn, effectively making Facebook servers inaccessible. The BGP functions like a railroad switchman, directing data packets' paths, and the absence of these routes caused a communication breakdown.
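A toy model makes the 2021 failure mode concrete: once a network's routes are withdrawn, other networks simply have no path on which to forward packets. The prefix and AS number below are illustrative examples (AS32934 is the autonomous system publicly registered to Facebook/Meta), and a real BGP table is vastly larger and more dynamic.

```python
# Toy illustration of BGP route withdrawal: with no announced route for a
# prefix, other networks have nowhere to forward packets destined for it.
routing_table = {
    "157.240.0.0/16": "AS32934",  # illustrative Facebook prefix and ASN
    "8.8.8.0/24": "AS15169",
}

def next_hop(prefix: str) -> str:
    route = routing_table.get(prefix)
    return f"forward via {route}" if route else "unreachable: no route"

print(next_hop("157.240.0.0/16"))    # forward via AS32934
del routing_table["157.240.0.0/16"]  # analogue of the 2021 withdrawal
print(next_hop("157.240.0.0/16"))    # unreachable: no route
```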
As the outage unfolded, users found themselves abruptly logged out of the platform, exacerbating the inconvenience. The disruption's ripple effect triggered concerns among users, with fears of a potential cyberattack amplifying the chaos.
It's worth noting that hacktivist groups often claim responsibility for disruptions they may not have caused, aiming to boost their perceived significance and capabilities. In this case, the true source of the disruption remains under investigation, and Meta continues to work on strengthening its systems against potential cyber threats.
In today's technology landscape, where service interruptions have become more prevalent, it is vital for online platforms to invest in robust cybersecurity measures. Users, for their part, are urged to exercise vigilance and adhere to best practices in online security, which helps mitigate the repercussions of such incidents.
This incident serves as a reminder of the interconnected nature of online platforms and the potential vulnerabilities that arise from technical glitches or malicious activities. Meta assures users that they are addressing the issue promptly and implementing measures to prevent future disruptions.
As the digital world continues to evolve, users and platforms alike must adapt to the changing landscape, emphasising the importance of cybersecurity awareness and resilient systems to ensure a secure online experience for all.
Mark Zuckerberg's apology, delivered at a US Senate hearing, came after families shared heartbreaking stories of self-harm and suicide related to social media content. The hearing focused on protecting children online and provided a rare opportunity for US senators to question tech executives directly. Other CEOs, including those from TikTok, Snap, X (formerly Twitter), and Discord, were also in the hot seat.
The central theme was clear: How can we ensure the safety and well-being of young users in the digital age? The families' pain and frustration underscored the urgency of this question.
One important topic during the hearing was an Instagram prompt related to child sexual abuse material. Zuckerberg acknowledged that the prompt was a mistake and expressed regret. The prompt mistakenly directed users to search for explicit content when they typed certain keywords. This incident raised concerns about the effectiveness of content moderation algorithms and the need for continuous improvement.
Zuckerberg defended the importance of free expression but also recognized the responsibility that comes with it. He emphasized the need to strike a balance between allowing diverse viewpoints and preventing harm. The challenge lies in identifying harmful content without stifling legitimate discourse.
During his testimony, Zuckerberg highlighted efforts to guide users toward helpful resources. When someone searches for self-harm-related content, Instagram now directs them to resources that promote mental health and well-being. While imperfect, this approach reflects a commitment to mitigating harm.
Zuckerberg encouraged parents to engage with their children about online safety and set boundaries. He acknowledged that technology companies cannot solve these issues alone; collaboration with schools and communities is essential.
Mark Zuckerberg's apology was a significant moment, but it cannot be the end. Protecting children online requires collective action from tech companies, policymakers, parents, and educators. We must continue to address the challenges posed by social media while fostering a healthy digital environment for the next generation.
As the hearing concluded, the families' pain remained palpable. Their stories serve as a stark reminder that behind every statistic and algorithm lies a real person—a child seeking connection, validation, and safety.