


Big Tech’s New Rule: AI Age Checks Are Rolling Out Everywhere

Large online platforms are rapidly shifting to biometric age assurance systems, creating a scenario where users may lose access to their accounts or risk exposing sensitive personal information if automated systems make mistakes.

Online platforms have struggled for decades with how to screen underage users from adult-oriented content. Everything from explicit music tracks on Spotify to violent clips circulating on TikTok has long been available with minimal restrictions.

Recent regulatory pressure has changed this landscape. Laws such as the United Kingdom’s Online Safety Act and new state-level legislation in the United States have pushed companies including Reddit, Spotify, YouTube, and several adult-content distributors to deploy AI-driven age estimation and identity verification technologies. Pornhub’s parent company, Aylo, is also reevaluating whether it can comply with these laws after being blocked in more than a dozen US states.

These new systems require users to hand over highly sensitive personal data. Age estimation relies on analyzing one or more facial photos to infer a user’s age. Verification is more exact, but demands that the user upload a government-issued ID, which is among the most sensitive forms of personal documentation a person can share online.

Both methods depend heavily on automated facial recognition algorithms. The absence of human oversight or robust appeals mechanisms magnifies the consequences when these tools misclassify users. Incorrect age estimation can cut off access to entire categories of content or trigger more severe actions. Similar facial analysis systems have been used for years in law enforcement and in consumer applications such as Google Photos, with well-documented risks and misidentification incidents.

Refusing these checks often comes with penalties. Many services will simply block adult content until verification is completed. Others impose harsher measures. Spotify, for example, warns that accounts may be deactivated or removed altogether if age cannot be confirmed in regions where the platform enforces a minimum age requirement. According to the company, users are given ninety days to complete an ID check before their accounts face deletion.

This shift raises pressing questions about the long-term direction of these age enforcement systems. Companies frequently frame them as child-safety measures, but users are left wondering how long platforms will retain the biometric data they collect, and how well it will be protected in the meantime. Corporate promises can be short-lived. Numerous abandoned websites still leak personal data years after shutting down. The 23andMe bankruptcy renewed fears among genetic testing customers about what happens to their information if a company collapses. And even well-intentioned apps can create hazards. A safety-focused dating application called Tea ended up exposing seventy-two thousand users’ selfies and ID photos after a data breach.

Even when companies publicly state that they do not retain facial images or ID scans, risks remain. Discord recently revealed that age verification materials, including seventy thousand IDs, were compromised after a third-party contractor called 5CA was breached.

Platforms assert that user privacy is protected by strong safeguards, but the details often remain vague. When asked how YouTube secures age assurance data, Google offered only a general statement claiming that it employs advanced protections and allows users to adjust their privacy settings or delete data. It did not specify the precise security controls in place.

Spotify has outsourced its age assurance system to Yoti, a digital identity provider. The company states that it does not store facial images or ID scans submitted during verification. Yoti receives the data directly and deletes it immediately after the evaluation, according to Spotify. The platform retains only minimal information about the outcome: the user’s age in years, the method used, and the date the check occurred. Spotify adds that it uses measures such as pseudonymization, encryption, and limited retention policies to prevent unauthorized access. Yoti publicly discloses some technical safeguards, including use of TLS 1.2 by default and TLS 1.3 where supported.
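A minimal-retention scheme like the one Spotify describes can be illustrated with a short sketch. The field names and HMAC-based pseudonymization below are assumptions for illustration, not Spotify's or Yoti's actual implementation; the point is that only a salted pseudonym of the account and the outcome of the check are kept, never the selfie or ID scan.

```python
import datetime
import hashlib
import hmac

# Server-side secret used to derive pseudonyms (illustrative value;
# in practice this would be stored and rotated in a secrets manager).
SERVER_SECRET = b"rotate-me-regularly"

def pseudonymize(user_id: str) -> str:
    """Derive a stable pseudonym so the stored record cannot be linked
    back to the account without the server-side secret."""
    return hmac.new(SERVER_SECRET, user_id.encode(), hashlib.sha256).hexdigest()

def record_age_check(user_id: str, age_years: int, method: str) -> dict:
    """Store only the outcome of the check: age in years, the method
    used, and the date -- never the facial image or ID document."""
    return {
        "subject": pseudonymize(user_id),
        "age_years": age_years,
        "method": method,  # e.g. "facial_estimation" or "id_document"
        "checked_on": datetime.date.today().isoformat(),
    }

rec = record_age_check("user-12345", 27, "facial_estimation")
```

The record contains enough to enforce the age gate later, while a breach of this table alone reveals neither the user's identity nor any biometric material.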

Privacy specialists argue that these assurances are insufficient. Adam Schwartz, privacy litigation director at the Electronic Frontier Foundation, told PCMag that facial scanning systems represent an inherent threat, regardless of whether they are being used to predict age, identity, or demographic traits. He reiterated the organization’s stance supporting a ban on government deployment of facial recognition and strict regulation for private-sector use.

Schwartz raises several issues. Facial age estimation is imprecise by design, meaning it will inevitably classify some adults as minors and deny them access. Errors in facial analysis also tend to fall disproportionately on specific groups. Misidentification incidents involving people of color and women are well documented. Google Photos once mislabeled a Black software engineer and his friend as gorillas, underlining systemic flaws in training data and model accuracy. These biases translate directly into unequal treatment when facial scans determine whether someone is allowed to enter a website.

He also warns that widespread facial scanning increases privacy and security risks because faces function as permanent biometric identifiers. Unlike passwords, a person cannot replace their face if it becomes part of a leaked dataset. Schwartz notes that at least one age verification vendor has already suffered a breach, underscoring material vulnerabilities in the system.

Another major problem is the absence of meaningful recourse when AI misjudges a user’s age. Spotify’s approach illustrates the dilemma. If the algorithm flags a user as too young, the company may lock the account, enforce viewing restrictions, or require a government ID upload to correct the error. This places users in a difficult position, forcing them to choose between potentially losing access or surrendering more sensitive data.

Do not upload identity documents unless required, check a platform’s published privacy and retention statements before you comply, and use account recovery channels if you believe an automated decision is wrong. Companies and regulators must do better at reducing vendor exposure, increasing transparency, and ensuring appeals are effective. 

Despite these growing concerns, users continue to find ways around verification tools. Discord users have discovered that uploading photos of fictional characters can bypass facial age checks. Virtual private networks remain a viable method for accessing age-restricted platforms such as YouTube, just as they help users access content that is regionally restricted. Alternative applications like NewPipe offer similar functionality to YouTube without requiring formal age validation, though these tools often lack the refinement and features of mainstream platforms.


Australia Bans Under-16s from Social Media Starting December

 

Australia is introducing a world-first ban blocking under-16s from most major social media platforms, and Meta has begun shutting down or freezing teen accounts in advance of the law taking effect. 

From 10 December, Australians under 16 will be barred from using platforms including Instagram, Facebook, Threads, TikTok, YouTube, X, Reddit, Snapchat and others, with services facing fines up to A$50m if they do not take “reasonable steps” to keep underage users out. Prime Minister Anthony Albanese has called the measure “world-leading”, arguing it will protect children from online pressure, unwanted contact and other risks. 

Meta’s account shutdown plan

Meta has started messaging users it believes are 13–15 years old, telling them their Instagram, Facebook and Threads accounts will be deactivated from 4 December and that no new under-16 accounts can be created from that date. Affected teens are being urged to update contact details so they can be notified when eligible to rejoin, and are given options to download and save their photos, videos and messages before deactivation. 

Teens who say they are old enough to stay on the platforms can challenge Meta’s decision by submitting a “video selfie” for facial age estimation or uploading a driving licence or other government ID. These and other age-assurance tools were recently tested for the Australian government, which concluded that no single foolproof solution exists and that each method has trade-offs.

Enforcement, concerns and workarounds

Australia’s eSafety Commissioner says the goal is to shield teens from harm while online, but platforms warn tech-savvy young people may try to circumvent restrictions and argue instead for laws requiring parental consent for under-16s. In a related move, Roblox has said it will block under-16s from chatting with unknown adults and will introduce mandatory age verification for chat in Australia, New Zealand and the Netherlands from December before expanding globally. 

The eSafety regulator has listed the services subject to the ban: Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, X and YouTube. Exempt services include Discord, GitHub, Google Classroom, Lego Play, Messenger, Roblox, Steam and Steam Chat, WhatsApp and YouTube Kids, which are viewed as either educational, messaging-focused or more controlled environments for younger users.

Google’s High-Stakes AI Strategy: Chips, Investment, and Concerns of a Tech Bubble

 

At Google’s headquarters, engineers work on the company’s Tensor Processing Unit, or TPU: custom silicon built specifically for AI workloads. The device appears ordinary, but its role is anything but. Google expects these chips to eventually power nearly every AI action across its platforms, making them integral to the company’s long-term technological dominance. 

Google CEO Sundar Pichai has repeatedly described AI as the most transformative technology ever developed, more consequential than the internet, smartphones, or cloud computing. However, the excitement is accompanied by growing caution from economists and financial regulators. Institutions such as the Bank of England have signaled concern that the rapid rise in AI-related company valuations could lead to an abrupt correction. Even prominent industry leaders, including OpenAI CEO Sam Altman, have acknowledged that portions of the AI sector may already display speculative behavior. 

Despite those warnings, Google continues expanding its AI investment at record speed. The company now spends over $90 billion annually on AI infrastructure, tripling its investment from only a few years earlier. The strategy aligns with a larger trend: a small group of technology companies—including Microsoft, Meta, Nvidia, Apple, and Tesla—now represents roughly one-third of the total value of the U.S. S&P 500 market index. Analysts note that such concentration of financial power exceeds levels seen during the dot-com era. 

Within the secured TPU lab, the environment is loud, dominated by cooling units required to manage the extreme heat generated when chips process AI models. The TPU differs from traditional CPUs and GPUs because it is built specifically for machine learning applications, giving Google tighter efficiency and speed advantages while reducing reliance on external chip suppliers. The competition for advanced chips has intensified to the point where Silicon Valley executives openly negotiate and lobby for supply. 

Outside Google, several AI companies have seen share value fluctuations, with investors expressing caution about long-term financial sustainability. However, product development continues rapidly. Google’s recently launched Gemini 3.0 model positions the company to directly challenge OpenAI’s widely adopted ChatGPT.  

Beyond financial pressures, the AI sector must also confront resource challenges. Analysts estimate that global data centers could consume energy on the scale of an industrialized nation by 2030. Still, companies pursue ever-larger AI systems, motivated by the possibility of reaching artificial general intelligence—a milestone where machines match or exceed human reasoning ability. 

Whether the current acceleration becomes a long-term technological revolution or a temporary bubble remains unresolved. But the race to lead AI is already reshaping global markets, investment patterns, and the future of computing.

Virtual Machines on Nutanix AHV now in Akira’s Crosshairs; Enterprises must Close Gaps

 



Security agencies have issued a new warning about the Akira ransomware group after investigators confirmed that the operators have added Nutanix AHV virtual machines to their list of targets. This represents a significant expansion of the group’s capabilities, which had already included attacks on VMware ESXi and Microsoft Hyper-V environments. The update signals that Akira is no longer limiting itself to conventional endpoints or common hypervisors and is now actively pursuing a wider range of virtual infrastructure used in large organisations.

Although Akira was first known for intrusions affecting small and medium businesses across North America, Europe and Australia, the pattern of attacks has changed noticeably over the last year. Incident reports now show that the group is striking much larger companies, particularly those involved in manufacturing, IT services, healthcare operations, banking and financial services, and food-related industries. This shift suggests a strategic move toward high-value victims where disruptions can cause substantial operational impact and increase the pressure to pay ransom demands.

Analysts observing the group’s behaviour note that Akira has not simply created a few new variants. Instead, it has invested considerable effort into developing ransomware that functions across multiple operating systems, including Windows and Linux, and across several virtualisation platforms. Building such wide-reaching capability requires long-term planning, and researchers interpret this as evidence that the group aims to remain active for an extended period.


How attackers get into networks 

Investigations into real-world intrusions show that Akira typically begins by taking advantage of weak points in remote access systems and devices connected to the internet. Many victims used VPN systems that lacked multifactor authentication, making them vulnerable to attackers trying common password combinations or using previously leaked credentials. The group has also exploited publicly known vulnerabilities in networking products from major vendors and in backup platforms that had not been updated with security patches.
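The credential-stuffing weakness described above is why defenders pair multifactor authentication with throttling of failed logins. The following is a generic defensive sketch, not anything taken from the advisory; the threshold and window values are illustrative. It locks an account once too many failures land inside a sliding time window.

```python
import time
from collections import defaultdict, deque

MAX_FAILURES = 5      # illustrative threshold before lockout
WINDOW_SECONDS = 300  # 5-minute sliding window

# Per-account timestamps of recent failed attempts.
_failures = defaultdict(deque)

def register_failure(username, now=None):
    """Record a failed login attempt; return True if the account has
    crossed the failure threshold inside the window and should be
    temporarily locked pending MFA or manual review."""
    now = time.time() if now is None else now
    attempts = _failures[username]
    attempts.append(now)
    # Drop failures that have aged out of the sliding window.
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    return len(attempts) >= MAX_FAILURES
```

Rate-limiting alone does not stop an attacker who already holds valid leaked credentials, which is why the agencies' guidance treats MFA on VPN portals as the primary control.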

In addition to these weaknesses, Akira has used targeted phishing emails, misconfigured Remote Desktop Protocol portals, and exposed SSH interfaces on network routers. In some breaches, compromising a router allowed attackers to tunnel deeper into internal networks and reach critical servers, especially outdated backup systems that had not been maintained.

Once inside, the attackers survey the entire environment. They run commands designed to identify domain controllers and trust relationships between systems, giving them a map of how the network is structured. To avoid being detected, they often use remote-access tools that are normally employed by IT administrators, making their activity harder to differentiate from legitimate work. They also disable security software, create administrator-level user accounts for long-term access, and deploy tools capable of running commands on multiple machines at once.


Data theft and encryption techniques 

Akira uses a double-extortion method. The attackers first locate and collect sensitive corporate information, which they compress and transfer out of the network using well-known tools such as FileZilla, WinRAR, WinSCP or RClone. Some investigations show that this data extraction process can be completed in just a few hours. Once the information has been removed, they launch the ransomware encryptor, which uses modern encryption algorithms that are designed to work quickly and efficiently. Over time, the group has changed the file extensions that appear after encryption and has modified the names and placement of ransom notes. The ransomware also removes Windows shadow copies to block easy recovery options.
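Because the encryptor wipes Windows shadow copies before locking files, one common early-warning signal is a shadow-copy deletion command appearing in process or command-line logs. Below is a hypothetical log-scanning sketch; the two patterns are illustrative of commonly abused commands, not a complete detection rule.

```python
import re

# Command-line patterns commonly abused to destroy recovery points
# (illustrative list, not exhaustive).
SUSPICIOUS_PATTERNS = [
    re.compile(r"vssadmin(\.exe)?\s+delete\s+shadows", re.IGNORECASE),
    re.compile(r"wmic\s+shadowcopy\s+delete", re.IGNORECASE),
]

def flag_suspicious(command_lines):
    """Return the logged command lines that match known
    shadow-copy-deletion patterns, for analyst triage."""
    return [
        line for line in command_lines
        if any(p.search(line) for p in SUSPICIOUS_PATTERNS)
    ]
```

In practice such matching would run inside an EDR or SIEM pipeline, where a hit on these commands outside a scheduled maintenance window warrants immediate investigation.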


Why the threat continues to succeed 

Cybersecurity experts point out that Akira benefits from long-standing issues that many organisations fail to address. Network appliances, remote access devices, and backup servers often remain unpatched for months, giving attackers opportunities to exploit vulnerabilities that should have been resolved. These overlooked systems create gaps that remain unnoticed until an intrusion is already underway.


How organisations can strengthen defences 

While applying patches, enabling multifactor authentication, and keeping offline backups remain essential, the recent wave of incidents shows that more comprehensive measures are necessary. Specialists recommend dividing networks into smaller segments to limit lateral movement, monitoring administrator-level activity closely, and extending security controls to backup systems and virtualisation consoles. Organisations should also conduct complete ransomware readiness exercises that include not only technical recovery procedures but also legal considerations, communication strategies, and preparations for potential data leaks.
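Extending monitoring to backup systems can start with something as simple as alerting when backups go stale, since quietly failed backup jobs are exactly what attackers hope to find. A minimal, hypothetical freshness check (paths and the age threshold are illustrative):

```python
import os
import time

MAX_AGE_HOURS = 26  # illustrative: daily backups plus some slack

def stale_backups(paths, max_age_hours=MAX_AGE_HOURS, now=None):
    """Return the backup files whose last modification time is older
    than the allowed window -- a cheap signal that backup jobs have
    stopped running and recovery capability is degrading."""
    now = time.time() if now is None else now
    cutoff = now - max_age_hours * 3600
    return [p for p in paths if os.path.getmtime(p) < cutoff]
```

A check like this would feed an alerting system; the readiness exercises the specialists recommend then verify that the non-stale backups can actually be restored.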

Security researchers emphasise that companies must approach defence with the same mindset attackers use to find vulnerabilities. Identifying weaknesses before adversaries exploit them can make the difference between a minor disruption and a large-scale crisis.



Meta Cleared of Monopoly Charges in FTC Antitrust Case

 

A U.S. federal judge ruled that Meta does not hold a monopoly in the social media market, rejecting the FTC's antitrust lawsuit seeking divestiture of Instagram and WhatsApp. The FTC, joined by multiple states, filed the suit in December 2020, alleging Meta (formerly Facebook) violated Section 2 of the Sherman Act by acquiring Instagram for $1 billion in 2012 and WhatsApp for $19 billion in 2014. 

The agency characterized these acquisitions as part of a "buy-or-bury" strategy to eliminate rivals in "personal social networking services" (PSNS), stifling innovation, increasing ads, and weakening privacy. It claimed Meta's dominance left consumers with few alternatives, excluding platforms like TikTok and YouTube from its narrow market definition.

Trial and ruling

U.S. District Judge James Boasberg oversaw a seven-week trial ending in May 2025, featuring testimony from Meta CEO Mark Zuckerberg, who highlighted competition from TikTok and YouTube. In an 89-page opinion on November 18, 2025, Boasberg ruled the FTC failed to prove current monopoly power, noting the social media landscape's rapid evolution, with new apps, new features, and AI-generated content. He emphasized that Meta's market share, below 50 percent and declining in a broader market including Snapchat, TikTok, and YouTube, showed no insulation from rivals.

Key arguments and evidence

The FTC presented internal emails suggesting Zuckerberg feared Instagram and WhatsApp as threats, arguing the acquisitions suppressed competition and harmed users via heavier ads and less privacy. Boasberg dismissed this, finding direct evidence such as supra-competitive profits or price hikes insufficient to prove monopoly power, and rejected the PSNS market definition as outdated given overlapping uses across apps. Meta countered that regulators approved the deals initially and that forcing divestiture would hurt U.S. innovation.

Implications

Meta hailed the decision as affirming fierce competition and its contributions to growth, avoiding operational upheaval for its 3.54 billion daily users. The FTC expressed disappointment and is reviewing options, marking a setback amid wins against Google but ongoing cases versus Apple and Amazon. Experts view it as reinforcing consumer-focused antitrust in dynamic tech markets.

Aisuru Botnet Launches 15.72 Tbps DDoS Attack on Microsoft Azure Network

 

Microsoft has reported that its Azure platform recently experienced one of the largest distributed denial-of-service attacks recorded to date, attributed to the fast-growing Aisuru botnet. According to the company, the attack reached a staggering peak of 15.72 terabits per second and originated from more than 500,000 distinct IP addresses across multiple regions. The traffic surge consisted primarily of high-volume UDP floods and was directed toward a single public-facing Azure IP address located in Australia. At its height, the attack generated nearly 3.64 billion packets per second. 
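The two peak figures Microsoft reported are consistent with each other: dividing the bit rate by the packet rate gives an average packet size of roughly 540 bytes, plausible for a high-volume UDP flood. A quick back-of-the-envelope check:

```python
# Peak figures reported by Microsoft for the Azure attack.
bits_per_second = 15.72e12     # 15.72 Tbps
packets_per_second = 3.64e9    # 3.64 billion packets per second

# Average payload per packet: bits -> bytes, then divide by packet rate.
avg_packet_bytes = bits_per_second / 8 / packets_per_second
print(round(avg_packet_bytes))  # prints 540
```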

Microsoft said the activity was linked to Aisuru, a botnet categorized in the same threat class as the well-known TurboMirai malware family. Like Mirai, Aisuru spreads by compromising vulnerable Internet of Things (IoT) hardware, including home routers and cameras, particularly those operating on residential internet service providers in the United States and other countries. Azure Security senior product marketing manager Sean Whalen noted that the attack displayed limited source spoofing and used randomized ports, which ultimately made network tracing and provider-level mitigation more manageable. 

The same botnet has been connected to other record-setting cyber incidents in recent months. Cloudflare previously associated Aisuru with an attack that measured 22.2 Tbps and generated over 10.6 billion packets per second in September 2025, one of the highest traffic bursts observed in a short-duration DDoS event. Despite lasting only 40 seconds, that incident was comparable in bandwidth consumption to more than one million simultaneous 4K video streams. 

Within the same timeframe, researchers from Qi’anxin’s XLab division attributed another 11.5 Tbps attack to Aisuru and estimated the botnet was using around 300,000 infected devices. XLab’s reporting indicates rapid expansion earlier in 2025 after attackers compromised a TotoLink router firmware distribution server, resulting in the infection of approximately 100,000 additional devices. 

Industry reporting also suggests the botnet has targeted vulnerabilities in consumer equipment produced by major vendors, including D-Link, Linksys, Realtek-based systems, Zyxel hardware, and network equipment distributed through T-Mobile. 

The botnet’s growing presence has begun influencing unrelated systems such as DNS ranking services. Cybersecurity journalist Brian Krebs reported that Cloudflare removed several Aisuru-controlled domains from public ranking dashboards after they began appearing higher than widely used legitimate platforms. Cloudflare leadership confirmed that intentional traffic manipulation distorted ranking visibility, prompting new internal policies to suppress suspected malicious domain patterns. 

Cloudflare disclosed earlier this year that DDoS attacks across its network surged dramatically. The company recorded a 198% quarter-to-quarter rise and a 358% year-over-year increase, with more than 21.3 million attempted attacks against customers during 2024 and an additional 6.6 million incidents directed specifically at its own services during an extended multi-vector campaign.
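It is worth noting that a 198% rise means traffic grew to nearly three times its previous level, not to 198% of it. The conversion from a percentage increase to a multiplier is simple:

```python
def growth_multiplier(percent_increase):
    """A rise of N% means the new value is (1 + N/100) times the old."""
    return 1 + percent_increase / 100

quarterly = growth_multiplier(198)  # about 2.98x quarter over quarter
yearly = growth_multiplier(358)     # about 4.58x year over year
```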
