

All the recent news you need to know

Meta Cleared of Monopoly Charges in FTC Antitrust Case

 

A U.S. federal judge ruled that Meta does not hold a monopoly in the social media market, rejecting the FTC's antitrust lawsuit seeking divestiture of Instagram and WhatsApp. The FTC, joined by multiple states, filed the suit in December 2020, alleging Meta (formerly Facebook) violated Section 2 of the Sherman Act by acquiring Instagram for $1 billion in 2012 and WhatsApp for $19 billion in 2014. 

These acquisitions were allegedly part of a "buy-or-bury" strategy to eliminate rivals in "personal social networking services" (PSNS), stifling innovation, increasing ads, and weakening privacy. The agency claimed Meta's dominance left consumers with few alternatives, excluding platforms like TikTok and YouTube from its narrow market definition.

Trial and ruling

U.S. District Judge James Boasberg oversaw a seven-week trial ending in May 2025, featuring testimony from Meta CEO Mark Zuckerberg, who highlighted competition from TikTok and YouTube. In an 89-page opinion on November 18, 2025, Boasberg ruled that the FTC failed to prove current monopoly power, noting the social media landscape's rapid evolution with surging apps, new features, and AI content. He emphasized that Meta's market share, below 50% and declining in a broader market that includes Snapchat, TikTok, and YouTube, showed no insulation from rivals.

Key arguments and evidence

The FTC presented internal emails suggesting Zuckerberg viewed Instagram and WhatsApp as threats, arguing the acquisitions suppressed competition and harmed users through heavier ad loads and weaker privacy. Boasberg dismissed this, finding direct evidence of monopoly power, such as supra-competitive profits or price hikes, insufficient, and rejected the PSNS market definition as outdated given the overlapping uses across apps. Meta countered that regulators approved the deals at the time and that forcing divestiture would hurt U.S. innovation.

Implications

Meta hailed the decision as affirming fierce competition and its contributions to growth, avoiding operational upheaval for its 3.54 billion daily users. The FTC expressed disappointment and is reviewing options, marking a setback amid wins against Google but ongoing cases versus Apple and Amazon. Experts view it as reinforcing consumer-focused antitrust in dynamic tech markets.

Aisuru Botnet Launches 15.72 Tbps DDoS Attack on Microsoft Azure Network

 

Microsoft has reported that its Azure platform recently experienced one of the largest distributed denial-of-service attacks recorded to date, attributed to the fast-growing Aisuru botnet. According to the company, the attack reached a staggering peak of 15.72 terabits per second and originated from more than 500,000 distinct IP addresses across multiple regions. The traffic surge consisted primarily of high-volume UDP floods and was directed toward a single public-facing Azure IP address located in Australia. At its height, the attack generated nearly 3.64 billion packets per second. 
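The reported figures can be sanity-checked with simple arithmetic: dividing the peak bandwidth by the peak packet rate gives the average packet size, a minimal sketch in Python using only the numbers above.

```python
# Back-of-the-envelope check of the reported Azure attack figures:
# average packet size = peak bandwidth / peak packet rate.
peak_bps = 15.72e12   # 15.72 Tbps peak bandwidth
peak_pps = 3.64e9     # ~3.64 billion packets per second

bytes_per_packet = peak_bps / peak_pps / 8  # bits per packet, then bytes
print(f"~{bytes_per_packet:.0f} bytes per packet")  # → ~540 bytes
```

An average near 540 bytes is consistent with the mid-sized UDP payloads typical of high-volume flood traffic.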

Microsoft said the activity was linked to Aisuru, a botnet categorized in the same threat class as the well-known Turbo Mirai malware family. Like Mirai, Aisuru spreads by compromising vulnerable Internet of Things (IoT) hardware, including home routers and cameras, particularly those operating on residential internet service providers in the United States and additional countries. Azure Security senior product marketing manager Sean Whalen noted that the attack displayed limited source spoofing and used randomized ports, which ultimately made network tracing and provider-level mitigation more manageable. 

The same botnet has been connected to other record-setting cyber incidents in recent months. Cloudflare previously associated Aisuru with an attack that measured 22.2 Tbps and generated over 10.6 billion packets per second in September 2025, one of the highest traffic bursts observed in a short-duration DDoS event. Despite lasting only 40 seconds, that incident was comparable in bandwidth consumption to more than one million simultaneous 4K video streams. 
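Cloudflare's streaming comparison also checks out numerically: 22.2 Tbps spread across one million streams works out to a per-stream rate in typical 4K territory.

```python
# Verify the "one million 4K streams" comparison for the 22.2 Tbps incident.
peak_bps = 22.2e12        # 22.2 Tbps peak
streams = 1_000_000       # one million simultaneous streams
duration_s = 40           # attack lasted roughly 40 seconds

per_stream_mbps = peak_bps / streams / 1e6           # Mbps per stream
total_tb = peak_bps * duration_s / 8 / 1e12          # terabytes moved at peak rate
print(f"{per_stream_mbps:.1f} Mbps per stream, ~{total_tb:.0f} TB in {duration_s}s")
# → 22.2 Mbps per stream, ~111 TB in 40s
```

Roughly 22 Mbps per stream sits squarely in the usual 15–25 Mbps range for 4K video, so the comparison is a fair one.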

Within the same timeframe, researchers from Qi’anxin’s XLab division attributed another 11.5 Tbps attack to Aisuru and estimated the botnet was using around 300,000 infected devices. XLab’s reporting indicates rapid expansion earlier in 2025 after attackers compromised a TotoLink router firmware distribution server, resulting in the infection of approximately 100,000 additional devices. 

Industry reporting also suggests the botnet has targeted vulnerabilities in consumer equipment produced by major vendors, including D-Link, Linksys, Realtek-based systems, Zyxel hardware, and network equipment distributed through T-Mobile. 

The botnet’s growing presence has begun influencing unrelated systems such as DNS ranking services. Cybersecurity journalist Brian Krebs reported that Cloudflare removed several Aisuru-controlled domains from public ranking dashboards after they began appearing higher than widely used legitimate platforms. Cloudflare leadership confirmed that intentional traffic manipulation distorted ranking visibility, prompting new internal policies to suppress suspected malicious domain patterns. 

Cloudflare disclosed earlier this year that DDoS attacks across its network surged dramatically. The company recorded a 198% quarter-to-quarter rise and a 358% year-over-year increase, with more than 21.3 million attempted attacks against customers during 2024 and an additional 6.6 million incidents directed specifically at its own services during an extended multi-vector campaign.

Google CEO Flags Irrational Trends in AI Funding Surge

 


Sundar Pichai, CEO of Alphabet, has warned that the rapid increase in artificial intelligence investment is showing signs of "irrationality" in at least some sectors of the global economy, a candid assessment that has sharpened the global conversation around the accelerating AI economy.

Speaking exclusively with the BBC at Google's headquarters in California, Pichai expressed concern about the pace at which capital is flowing into the sector. He noted that any company, regardless of size or scope, could suffer from the distortions that occur when markets expand too quickly, even Google itself.

His comments come amid intense scrutiny of the AI landscape, fueled in part by Alphabet's own rapid rise: the company's market value has doubled within seven months, reaching $3.5 trillion. Pichai acknowledged that this transformational period will be one of growth for the industry, but warned that, as with previous technology booms, the market risks "overshooting" on investment.

Drawing a parallel with the boom and collapse of internet valuations in the late 1990s, he highlighted the historical pattern in which optimism breeds instability, ending in steep corrections, bankruptcies, and widespread job losses. Tempering that caution with optimism, Pichai underscored that AI infrastructure is currently being built at an unprecedented scale.

A spokesperson for Alphabet noted that the company's annual investment has tripled in just four years, rising from approximately $30 billion to more than $90 billion. Combined with commitments from other major players, cumulative investment across the sector now exceeds a trillion dollars.

Pichai described this rapid build-out as part of a broader "scale equation," in which computing infrastructure that took decades to establish is now being replicated within just a few years. The interview ranged across several challenges shaping the AI landscape, including escalating energy demand and its impact on climate targets, the UK's role in future investment, concerns about model accuracy, and the long-term outlook for employment in an automated society.

Much of that investor confidence rests on Alphabet's perceived ability to withstand competitive pressure from OpenAI.

Analysts have also focused on Alphabet’s development of specialized AI superchips as a potential competitive edge over Nvidia, which recently became the first firm to cross a $5 trillion valuation. Despite the surge in market values, some observers remain skeptical, pointing to the intricate network of approximately $1.4 trillion in investment commitments surrounding OpenAI.

OpenAI's revenue remains a small fraction of the investment it has attracted, inviting comparisons to the dot-com era, when optimism fueled runaway valuations before they collapsed into widespread losses and corporate failures in the late 1990s. Those parallels have once again brought concerns about ripple effects on jobs, household savings, and retirement assets to the forefront.


A prominent theme of Pichai's remarks was the company's global expansion, in particular its commitment to the United Kingdom as a key future hub for AI development. The company pledged in September to invest £5 billion over the next two years to strengthen UK infrastructure and research, including major investments in its London-based DeepMind artificial intelligence arm.

A few days ago, Pichai announced that, for the first time, Google plans to train its advanced models within the UK, an ambition long emphasized by government leaders who believe domestic model training could be a decisive step toward securing the country's position as the world's third major AI power, after the United States and China. Reiterating Alphabet's long-term stance, he said the company is "committed to investing a lot of money in the country."

Pichai also acknowledged the formidable energy challenges that accompany the rapid expansion of AI systems. Citing International Energy Agency data showing that artificial intelligence activity consumes roughly 1.5% of global electricity, he warned that nations, including the UK, must move quickly to create new power sources and infrastructure; failure to do so, he said, could hold back economic growth.

He acknowledged that some of Alphabet's climate objectives have been delayed by the growing energy demands of its AI operations, though he reiterated the company's commitment to achieving net-zero emissions by 2030 through continued investment in new energy technologies. Pichai also spoke about the wider changes AI is driving in society, calling it "the most profound technology" humans have ever developed.

While recognizing that AI will likely cause significant disruption to workplaces across sectors, he stressed that it will also create new forms of opportunity. The jobs of the future, he predicted, will favor those able to work alongside AI tools, whether in education, medicine, or any other field.

Individuals who adapt early will benefit most from the coming technological shift. Amid a global race to harness AI, Pichai's remarks serve as both a warning and a roadmap: disciplined investment, stronger infrastructure, and a workforce capable of embracing rapid innovation will all be crucial to capitalizing on the technology's transformative potential.

Policymakers, he argued, must take proactive measures on energy security and thoughtful regulation; investors should balance ambition with caution; and workers should seize the chance to gain the skills that will define the next era of productivity. The companies and nations that navigate this transition with clarity and foresight, he said, will shape the future of the AI-driven economy.

Why Long-Term AI Conversations Are Quietly Becoming a Major Corporate Security Weakness

 



Many organisations are starting to recognise a security problem that has been forming silently in the background. Conversations employees hold with public AI chatbots can accumulate into a long-term record of sensitive information, behavioural patterns, and internal decision-making. As reliance on AI tools increases, these stored interactions may become a serious vulnerability that companies have not fully accounted for.

The concern resurfaced after a viral trend in late 2024 in which social media users asked AI models to highlight things they “might not know” about themselves. Most treated it as a novelty, but the trend revealed a larger issue. Major AI providers routinely retain prompts, responses, and related metadata unless users disable retention or use enterprise controls. Over extended periods, these stored exchanges can unintentionally reveal how employees think, communicate, and handle confidential tasks.

This risk becomes more severe when considering the rise of unapproved AI use at work. Recent business research shows that while the majority of employees rely on consumer AI tools to automate or speed up tasks, only a fraction of companies officially track or authorise such usage. This gap means workers frequently insert sensitive data into external platforms without proper safeguards, enlarging the exposure surface beyond what internal security teams can monitor.

Vendor assurances do not fully eliminate the risk. Although companies like OpenAI, Google, and others emphasize encryption and temporary chat options, their systems still operate within legal and regulatory environments. One widely discussed court order in 2025 required the preservation of AI chat logs, including previously deleted exchanges. Even though the order was later withdrawn and the company resumed standard deletion timelines, the case reminded businesses that stored conversations can resurface unexpectedly.

Technical weaknesses also contribute to the threat. Security researchers have uncovered misconfigured databases operated by AI firms that contained user conversations, internal keys, and operational details. Other investigations have demonstrated that prompt-based manipulation in certain workplace AI features can cause private channel messages to leak. These findings show that vulnerabilities do not always come from user mistakes; sometimes the supporting AI infrastructure itself becomes an entry point.

Criminals have already shown how AI-generated impersonation can be exploited. A notable example involved attackers using synthetic voice technology to imitate an executive, tricking an employee into transferring funds. As AI models absorb years of prompt history, attackers could use stylistic and behavioural patterns to impersonate employees, tailor phishing messages, or replicate internal documents.

Despite these risks, many companies still lack comprehensive AI governance. Studies reveal that employees continue to insert confidential data into AI systems, sometimes knowingly, because it speeds up their work. Compliance requirements such as GDPR’s strict data minimisation rules make this behaviour even more dangerous, given the penalties for mishandling personal information.

Experts advise organisations to adopt structured controls. This includes building an inventory of approved AI tools, monitoring for unsanctioned usage, conducting risk assessments, and providing regular training so staff understand what should never be shared with external systems. Some analysts also suggest that instead of banning shadow AI outright, companies should guide employees toward secure, enterprise-level AI platforms.
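One of the controls described above, checking outbound AI-tool usage against an approved inventory, can be sketched minimally. This is a hypothetical illustration; the domain names and the approved list are assumptions, not any vendor's actual implementation.

```python
# Hypothetical sketch of a shadow-AI control: classify an outbound request's
# destination against an approved inventory of enterprise AI platforms.
APPROVED_AI_DOMAINS = {"enterprise-ai.example.com"}          # assumed approved inventory
KNOWN_CONSUMER_AI = {"chat.openai.com", "gemini.google.com"}  # illustrative consumer tools

def classify(domain: str) -> str:
    """Label a destination domain for security monitoring."""
    if domain in APPROVED_AI_DOMAINS:
        return "approved"
    if domain in KNOWN_CONSUMER_AI:
        return "unsanctioned AI tool"
    return "not AI-related"

print(classify("chat.openai.com"))  # flags consumer AI use for review
```

In practice such checks would sit in a proxy or CASB layer and feed alerts into existing monitoring, rather than run as standalone code.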

If companies fail to act, each casual AI conversation can slowly accumulate into a dataset capable of exposing confidential operations. While AI brings clear productivity benefits, unmanaged use may convert everyday workplace conversations into one of the most overlooked security liabilities of the decade.

Russian-Linked Surveillance Tech Firm Protei Hacked, Website Defaced and Data Published

 

A telecommunications technology provider with ties to Russian surveillance infrastructure has reportedly suffered a major cybersecurity breach. The company, Protei, which builds systems used by telecom providers to monitor online activity and restrict access to websites and platforms, had its website defaced and internal data stolen, according to information reviewed by TechCrunch. The firm originally operated from Russia but is now based in Jordan and supplies technology to clients across multiple regions, including the Middle East, Europe, Africa, Mexico, Kazakhstan and Pakistan. 

Protei develops a range of systems used by telecom operators, including conferencing platforms and connectivity services. However, the company is most widely associated with deep packet inspection (DPI) tools and network filtering technologies — software commonly used in countries where governments impose strict controls on online information flow and communication. These systems allow network providers to inspect traffic patterns, identify specific services or websites and enforce blocks or restrictions. 
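The filtering idea described above can be illustrated with a minimal sketch. This is not Protei's software; it only shows the general pattern in which a hostname extracted from traffic metadata (for example, the TLS SNI field) is matched against operator-defined block rules. All rule patterns here are hypothetical.

```python
# Illustrative sketch of hostname-based network filtering (not Protei's code):
# a hostname pulled from traffic metadata is matched against block rules.
from fnmatch import fnmatch

BLOCK_RULES = ["*.blocked-example.com", "chat.example.org"]  # hypothetical rules

def allow(sni_hostname: str) -> bool:
    """Return False if the hostname matches any operator block rule."""
    return not any(fnmatch(sni_hostname, rule) for rule in BLOCK_RULES)

print(allow("news.example.net"))         # True: no rule matches
print(allow("cdn.blocked-example.com"))  # False: wildcard rule matches
```

Production DPI systems operate on raw packet streams at line rate and combine many more signals (ports, protocol fingerprints, payload patterns), but the match-against-policy structure is the same.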

It remains uncertain exactly when the intrusion occurred, but archived pages from the Wayback Machine indicate the public defacement took place on November 8. The altered site contained a short message referencing the firm’s involvement in DPI technology and surveillance infrastructure. Although the webpage was restored quickly, the attackers reportedly extracted approximately 182 gigabytes of data from Protei’s systems, including email archives dating back several years. 

A copy of the exposed files was later supplied to Distributed Denial of Secrets (DDoSecrets), an organization known for cataloging leaked data from governments, law enforcement agencies and companies operating in surveillance or censorship markets. DDoSecrets confirmed receiving the dataset and made it available to researchers and journalists. 

Prior to publication, TechCrunch reached out to Protei leadership for clarification. Mohammad Jalal, who oversees the company’s Jordan branch, did not initially respond. After publication, he issued an email claiming the company is not connected to Russia and stating that Protei had no confirmed knowledge of unauthorized data extraction from its servers. 

The message left by the hacker suggested an ideological motive rather than a financial one. The wording referenced SORM — Russia’s lawful interception framework that enables intelligence agencies to access telecommunications data. Protei’s network filtering and DPI tools are believed to complement SORM deployments in regions where governments restrict digital freedoms. 

Reports from research organizations have previously linked Protei technology to censorship infrastructure. In 2023, Citizen Lab documented exchanges suggesting that Iranian telecommunications companies sought Protei’s systems to log network activity and block access to selected websites. Documents reviewed by the group indicated the company’s ability to deploy population-level filtering and targeted restrictions. 

The breach adds to growing scrutiny surrounding technology vendors supplying surveillance capabilities internationally, especially in environments where privacy protections and freedom of expression remain vulnerable.

Waymo Robotaxi Films Deadly San Francisco Shooting

 

A Waymo autonomous vehicle may have captured video footage of a fatal shooting incident in San Francisco's Mission neighborhood over the weekend, highlighting the emerging role of self-driving cars as potential witnesses in criminal investigations. The incident resulted in one man's death and left another person critically injured.

The incident and arrest

According to 9-1-1 dispatcher calls cited by the San Francisco Standard, a Waymo robotaxi was parked near the crime scene during the shooting. Police have identified the suspect as 23-year-old Larry Hudgson Jr., who was subsequently arrested without incident in a nearby neighborhood and booked into county jail. It remains unclear whether law enforcement has formally requested footage from the autonomous vehicle.

Privacy concerns

Waymo vehicles are equipped with extensive surveillance technology, featuring at least 29 cameras on their interiors and exteriors that continuously monitor their surroundings. This comprehensive camera coverage has drawn criticism from privacy advocates who describe the vehicles as "little mobile narcs" capable of widespread surveillance. The company maintains it does not routinely share data with law enforcement without proper legal requests.

Company policy on law enforcement access

Waymo co-CEO Tekedra Mawakana explained the company's approach during an interview with the New York Times podcast Hard Fork, emphasizing transparency in their privacy policy. The company follows legal processes when responding to footage requests and narrows the scope as necessary. Waymo representatives have stated they actively challenge data requests lacking valid legal basis or those considered overbroad.

This incident exemplifies how smart devices increasingly contribute to the surveillance economy and criminal investigations. Similar cases include Amazon being ordered to provide Echo device data for a 2017 New Hampshire murder investigation, Tesla cameras assisting in hate crime arrests in 2021, and Uber Eats delivery bot footage used in an abduction case. As autonomous vehicles become more prevalent in American cities, their role as digital witnesses in criminal cases appears inevitable.
