
Aisuru Botnet Launches 15.72 Tbps DDoS Attack on Microsoft Azure Network

 

Microsoft has reported that its Azure platform recently experienced one of the largest distributed denial-of-service attacks recorded to date, attributed to the fast-growing Aisuru botnet. According to the company, the attack reached a staggering peak of 15.72 terabits per second and originated from more than 500,000 distinct IP addresses across multiple regions. The traffic surge consisted primarily of high-volume UDP floods and was directed toward a single public-facing Azure IP address located in Australia. At its height, the attack generated nearly 3.64 billion packets per second. 

Microsoft said the activity was linked to Aisuru, a botnet that researchers place in the "TurboMirai" threat class, descended from the well-known Mirai malware family. Like Mirai, Aisuru spreads by compromising vulnerable Internet of Things (IoT) hardware, including home routers and cameras, particularly devices connected through residential internet service providers in the United States and other countries. Azure Security senior product marketing manager Sean Whalen noted that the attack involved minimal source spoofing and randomized source ports, which made network tracing and provider-level mitigation more manageable.

The same botnet has been connected to other record-setting cyber incidents in recent months. Cloudflare previously associated Aisuru with a September 2025 attack that measured 22.2 Tbps and generated over 10.6 billion packets per second, one of the highest traffic bursts observed in a short-duration DDoS event. Despite lasting only about 40 seconds, that incident consumed bandwidth comparable to more than one million simultaneous 4K video streams.
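For a sense of scale, here is a quick back-of-the-envelope check of the figures reported above; the only assumption beyond the reported numbers is the common rule-of-thumb bitrate for a 4K video stream (roughly 20-25 Mbps):

```python
# Rough sanity checks on the reported Aisuru attack figures.

# Azure attack: 15.72 Tbps delivered in 3.64 billion packets per second
# implies an average packet size of roughly 540 bytes.
bits_per_second = 15.72e12
packets_per_second = 3.64e9
avg_packet_bytes = bits_per_second / packets_per_second / 8
print(f"Average packet size: {avg_packet_bytes:.0f} bytes")  # ~540

# Cloudflare attack: 22.2 Tbps spread across one million streams works
# out to ~22 Mbps per stream, in line with a typical 4K video bitrate.
per_stream_mbps = 22.2e12 / 1e6 / 1e6
print(f"Per-stream bandwidth: {per_stream_mbps:.1f} Mbps")  # 22.2
```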

Within the same timeframe, researchers from Qi’anxin’s XLab division attributed another 11.5 Tbps attack to Aisuru and estimated the botnet was using around 300,000 infected devices. XLab’s reporting indicates rapid expansion earlier in 2025 after attackers compromised a TotoLink router firmware distribution server, resulting in the infection of approximately 100,000 additional devices. 

Industry reporting also suggests the botnet has targeted vulnerabilities in consumer equipment produced by major vendors, including D-Link, Linksys, Realtek-based systems, Zyxel hardware, and network equipment distributed through T-Mobile. 

The botnet’s growing presence has begun influencing unrelated systems such as DNS ranking services. Cybersecurity journalist Brian Krebs reported that Cloudflare removed several Aisuru-controlled domains from public ranking dashboards after they began appearing higher than widely used legitimate platforms. Cloudflare leadership confirmed that intentional traffic manipulation distorted ranking visibility, prompting new internal policies to suppress suspected malicious domain patterns. 

Cloudflare disclosed earlier this year that DDoS attacks across its network have surged dramatically. The company recorded a 198% quarter-over-quarter rise and a 358% year-over-year increase, with more than 21.3 million attempted attacks against customers during 2024 and an additional 6.6 million aimed directly at its own services during an extended multi-vector campaign.

Google CEO Flags Irrational Trends in AI Funding Surge

 


Sundar Pichai, the CEO of Alphabet, has warned that the rapid increase in artificial intelligence investment is showing signs of "irrationality" in at least some corners of the global economy, a candid assessment that has sharpened the global conversation around the accelerating AI boom.

Speaking exclusively with the BBC at Google's headquarters in California, Pichai expressed concern about the pace at which capital is flowing into the sector. He noted that no company, regardless of size or scope, would be immune to the distortions that can occur when markets expand too quickly, not even Google itself.

His comments come amid intense scrutiny of the AI landscape, fueled in part by Alphabet's own rapid rise: the company's market value has doubled within seven months to reach $3.5 trillion. Pichai acknowledged that this transformational period will bring growth for the industry, but warned that, as with previous technology booms, the market risks "overshooting" on investment.

Drawing a parallel with the boom and collapse of internet valuations in the late 1990s, he pointed to the historical pattern in which excess optimism breeds instability, steep corrections, bankruptcies, and widespread job losses. His caution, though, was tempered with optimism: AI infrastructure, he underscored, is currently being built at an unprecedented scale.

A spokesperson for Alphabet noted that the company's annual investment has tripled in just four years, rising from approximately $30 billion to more than $90 billion. Combined with commitments from other major players, the sector's cumulative investment now exceeds a trillion dollars.

Pichai described this rapid build-out as part of a broader "scale equation," in which computing infrastructure that took decades to establish is now being replicated within just a few years. The interview also covered several challenges shaping the AI landscape, including escalating energy demand and its impact on climate targets, the UK's role in future investment, concerns about model accuracy, and the long-term outlook for employment in an automated economy.

Investor confidence in Alphabet's ability to withstand competitive pressure from OpenAI has helped buoy that valuation.

Analysts have also focused on Alphabet's efforts to develop specialized AI chips, which could give it a competitive edge over Nvidia, which recently became the first firm to cross a $5 trillion valuation and competes directly with Alphabet. In spite of this surge in market value, some observers are skeptical, pointing to the intricate network of approximately $1.4 trillion in investment commitments surrounding OpenAI.

OpenAI's revenue, though significant in absolute terms, remains a tiny fraction of the investment it has attracted, inviting comparisons to the dot-com era of the late 1990s, when optimism fueled runaway valuations before they crashed into widespread losses and corporate failures. Concerns over the ripple effects on jobs, household savings, and retirement assets have been brought to the forefront once more.


A prominent theme of Pichai's remarks was the company's global expansion; he highlighted the firm's commitment to the United Kingdom as a key hub for future AI development. In September the company pledged to invest £5 billion over the next two years in strengthening UK infrastructure and research, including major investment in its London-based DeepMind artificial intelligence arm.

Pichai also announced that, for the first time, Google plans to train its advanced models within the UK, an ambition long emphasized by government leaders who believe domestic model training could be a decisive step towards securing the country's position as the world's third major AI power, after the United States and China. On Alphabet's long-term stance, he reiterated the company's commitment to the UK, saying it is "committed to investing a lot of money in the country."

Pichai also acknowledged the formidable energy challenges that accompany the rapid expansion of artificial intelligence systems. Citing International Energy Agency data showing that AI activity consumes roughly 1.5% of global electricity, he warned that nations, including the UK, must move quickly to build new power sources and infrastructure; failing to do so, he said, could hold back economic growth.

He acknowledged that the growing energy demands of Alphabet's AI operations have delayed some of the company's climate objectives, though he reiterated its commitment to achieving net zero emissions by 2030 through continued investment in new energy technologies. Pichai also spoke about the wider changes AI is driving in society, calling it "the most profound technology" humans have ever developed.

While he recognized that advanced systems will likely cause significant disruption to workplaces across sectors and industries, he stressed that AI will also create new forms of opportunity. The jobs of the future, he argued, will belong to those who can work alongside AI tools, whether in education, medicine, or any other field.

Individuals who adapt early will benefit most from the coming technological shift. Amid a global race to harness AI, Pichai's remarks ultimately serve as both a warning and a roadmap: disciplined investment, stronger infrastructure, and a workforce capable of embracing rapid innovation will all be crucial to capturing the technology's transformative potential.

Policymakers, the message runs, must take proactive measures on energy security and thoughtful regulation; investors should balance ambition with caution; and workers should seize the chance to gain the skills that will define the next era of productivity. The companies and nations that navigate this transition with clarity and foresight, he said, will be the ones shaping the future of the AI-driven economy.

Why Long-Term AI Conversations Are Quietly Becoming a Major Corporate Security Weakness

 



Many organisations are starting to recognise a security problem that has been forming silently in the background. Conversations employees hold with public AI chatbots can accumulate into a long-term record of sensitive information, behavioural patterns, and internal decision-making. As reliance on AI tools increases, these stored interactions may become a serious vulnerability that companies have not fully accounted for.

The concern resurfaced after a viral trend in late 2024 in which social media users asked AI models to highlight things they “might not know” about themselves. Most treated it as a novelty, but the trend revealed a larger issue. Major AI providers routinely retain prompts, responses, and related metadata unless users disable retention or use enterprise controls. Over extended periods, these stored exchanges can unintentionally reveal how employees think, communicate, and handle confidential tasks.

This risk becomes more severe when considering the rise of unapproved AI use at work. Recent business research shows that while the majority of employees rely on consumer AI tools to automate or speed up tasks, only a fraction of companies officially track or authorise such usage. This gap means workers frequently insert sensitive data into external platforms without proper safeguards, enlarging the exposure surface beyond what internal security teams can monitor.

Vendor assurances do not fully eliminate the risk. Although companies like OpenAI, Google, and others emphasize encryption and temporary chat options, their systems still operate within legal and regulatory environments. One widely discussed court order in 2025 required the preservation of AI chat logs, including previously deleted exchanges. Even though the order was later withdrawn and the company resumed standard deletion timelines, the case reminded businesses that stored conversations can resurface unexpectedly.

Technical weaknesses also contribute to the threat. Security researchers have uncovered misconfigured databases operated by AI firms that contained user conversations, internal keys, and operational details. Other investigations have demonstrated that prompt-based manipulation in certain workplace AI features can cause private channel messages to leak. These findings show that vulnerabilities do not always come from user mistakes; sometimes the supporting AI infrastructure itself becomes an entry point.

Criminals have already shown how AI-generated impersonation can be exploited. A notable example involved attackers using synthetic voice technology to imitate an executive, tricking an employee into transferring funds. As AI models absorb years of prompt history, attackers could use stylistic and behavioural patterns to impersonate employees, tailor phishing messages, or replicate internal documents.

Despite these risks, many companies still lack comprehensive AI governance. Studies reveal that employees continue to insert confidential data into AI systems, sometimes knowingly, because it speeds up their work. Compliance requirements such as GDPR’s strict data minimisation rules make this behaviour even more dangerous, given the penalties for mishandling personal information.

Experts advise organisations to adopt structured controls. This includes building an inventory of approved AI tools, monitoring for unsanctioned usage, conducting risk assessments, and providing regular training so staff understand what should never be shared with external systems. Some analysts also suggest that instead of banning shadow AI outright, companies should guide employees toward secure, enterprise-level AI platforms.
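As a minimal sketch of what the "monitoring for unsanctioned usage" control could look like in practice (the log format, domain list, and function names below are illustrative assumptions, not a vetted inventory), outbound proxy or DNS logs can be checked against the approved-tool list:

```python
# Hypothetical sketch: flag "shadow AI" traffic by comparing domains seen
# in outbound DNS/proxy logs against an approved AI tool inventory.

APPROVED_AI_DOMAINS = {"ai.internal.example.com"}  # sanctioned enterprise tools
KNOWN_AI_DOMAINS = {                               # consumer AI endpoints to watch
    "chat.openai.com",
    "gemini.google.com",
    "ai.internal.example.com",
}

def flag_unsanctioned(log_lines: list[str]) -> list[str]:
    """Return AI-related domains from the logs that are not approved."""
    flagged = []
    for line in log_lines:
        domain = line.split()[-1]  # assumes the domain is the last field
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            flagged.append(domain)
    return flagged

logs = ["2025-03-04T09:12:01 10.0.0.17 chat.openai.com"]
print(flag_unsanctioned(logs))  # ['chat.openai.com']
```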

If companies fail to act, each casual AI conversation can slowly accumulate into a dataset capable of exposing confidential operations. While AI brings clear productivity benefits, unmanaged use may convert everyday workplace conversations into one of the most overlooked security liabilities of the decade.

Russian-Linked Surveillance Tech Firm Protei Hacked, Website Defaced and Data Published

 

A telecommunications technology provider with ties to Russian surveillance infrastructure has reportedly suffered a major cybersecurity breach. The company, Protei, which builds systems used by telecom providers to monitor online activity and restrict access to websites and platforms, had its website defaced and internal data stolen, according to information reviewed by TechCrunch. The firm originally operated from Russia but is now based in Jordan, and supplies technology to clients across the Middle East, Europe, and Africa, as well as in Mexico, Kazakhstan, and Pakistan.

Protei develops a range of systems used by telecom operators, including conferencing platforms and connectivity services. However, the company is most widely associated with deep packet inspection (DPI) tools and network filtering technologies — software commonly used in countries where governments impose strict controls on online information flow and communication. These systems allow network providers to inspect traffic patterns, identify specific services or websites and enforce blocks or restrictions. 
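To make the mechanism concrete, here is a deliberately simplified sketch of the core idea behind DPI-based filtering. It is not Protei's software, and the hostname and function names are hypothetical; real DPI appliances reassemble flows and parse encrypted-protocol metadata such as the TLS SNI field rather than plaintext HTTP:

```python
# Toy illustration of DPI-style filtering: inspect application-layer
# payload bytes and decide whether to forward or drop the traffic.

BLOCKLIST = {b"blocked.example"}  # hypothetical censored hostname

def extract_http_host(payload: bytes) -> bytes | None:
    """Pull the Host header out of a plaintext HTTP request, if present."""
    for line in payload.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            return line.split(b":", 1)[1].strip()
    return None

def verdict(payload: bytes) -> str:
    """Return the action a filtering middlebox would take for this payload."""
    host = extract_http_host(payload)
    if host is not None and host in BLOCKLIST:
        return "DROP"      # enforce the block
    return "FORWARD"       # pass the traffic through untouched

request = b"GET / HTTP/1.1\r\nHost: blocked.example\r\n\r\n"
print(verdict(request))  # DROP
```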

It remains uncertain exactly when the intrusion occurred, but archived pages from the Wayback Machine indicate the public defacement took place on November 8. The altered site contained a short message referencing the firm’s involvement in DPI technology and surveillance infrastructure. Although the webpage was restored quickly, the attackers reportedly extracted approximately 182 gigabytes of data from Protei’s systems, including email archives dating back several years. 

A copy of the exposed files was later supplied to Distributed Denial of Secrets (DDoSecrets), an organization known for cataloging leaked data from governments, law enforcement agencies and companies operating in surveillance or censorship markets. DDoSecrets confirmed receiving the dataset and made it available to researchers and journalists. 

Prior to publication, TechCrunch reached out to Protei leadership for clarification. Mohammad Jalal, who oversees the company’s Jordan branch, did not initially respond. After publication, he issued an email claiming the company is not connected to Russia and stating that Protei had no confirmed knowledge of unauthorized data extraction from its servers. 

The message left by the hacker suggested an ideological motive rather than a financial one. The wording referenced SORM — Russia’s lawful interception framework that enables intelligence agencies to access telecommunications data. Protei’s network filtering and DPI tools are believed to complement SORM deployments in regions where governments restrict digital freedoms. 

Reports from research organizations have previously linked Protei technology to censorship infrastructure. In 2023, Citizen Lab documented exchanges suggesting that Iranian telecommunications companies sought Protei’s systems to log network activity and block access to selected websites. Documents reviewed by the group indicated the company’s ability to deploy population-level filtering and targeted restrictions. 

The breach adds to growing scrutiny surrounding technology vendors supplying surveillance capabilities internationally, especially in environments where privacy protections and freedom of expression remain vulnerable.

Waymo Robotaxi Films Deadly San Francisco Shooting

 

A Waymo autonomous vehicle may have captured video footage of a fatal shooting incident in San Francisco's Mission neighborhood over the weekend, highlighting the emerging role of self-driving cars as potential witnesses in criminal investigations. The incident resulted in one man's death and left another person critically injured.

The incident and arrest

According to 9-1-1 dispatcher calls cited by the San Francisco Standard, a Waymo robotaxi was parked near the crime scene during the shooting. Police have identified the suspect as 23-year-old Larry Hudgson Jr., who was subsequently arrested without incident in a nearby neighborhood and booked into county jail. It remains unclear whether law enforcement has formally requested footage from the autonomous vehicle.

Privacy concerns

Waymo vehicles are equipped with extensive surveillance technology, featuring at least 29 cameras on their interiors and exteriors that continuously monitor their surroundings. This comprehensive camera coverage has drawn criticism from privacy advocates who describe the vehicles as "little mobile narcs" capable of widespread surveillance. The company maintains it does not routinely share data with law enforcement without proper legal requests.

Company policy on law enforcement access

Waymo co-CEO Tekedra Mawakana explained the company's approach during an interview with the New York Times podcast Hard Fork, emphasizing transparency in their privacy policy. The company follows legal processes when responding to footage requests and narrows the scope as necessary. Waymo representatives have stated they actively challenge data requests lacking valid legal basis or those considered overbroad.

This incident exemplifies how smart devices increasingly contribute to the surveillance economy and criminal investigations. Similar cases include Amazon being ordered to provide Echo device data for a 2017 New Hampshire murder investigation, Tesla cameras assisting in hate crime arrests in 2021, and Uber Eats delivery bot footage used in an abduction case. As autonomous vehicles become more prevalent in American cities, their role as digital witnesses in criminal cases appears inevitable.

Digital Deception Drives a Sophisticated Era of Cybercrime


 

Digital technology is becoming ever more pervasive in everyday life, but a whole new spectrum of threats is quietly advancing beneath the surface of routine online behavior.

Cybercriminals are leveraging an ever-expanding toolkit, from the emotional manipulation embedded in deepfake videos, online betting platforms, harmful games, and romance scams to sophisticated phishing schemes and zero-day exploits, to target not only devices but the habits and vulnerabilities of their users.

Security experts have long stressed that understanding how attackers operate is the first line of defence for any organization. The Cyberabad Police became the latest agency to issue an alert to households, adding fresh urgency to the issue.

The advisory, titled "Caught in the Digital Web: Vigilance is the Only Shield," warns that criminals no longer force their way into homes; instead they slip silently through mobile screens, influencing children, young people, and families with manipulative content that shapes behavior, disrupts mental well-being, and undermines society at large.

Digital hygiene, in other words, is no longer optional; it has become a necessity in an era where deception is a key weapon of modern cybercrime.

Approximately 60% of breaches are now linked to human behavior, according to the Verizon Business 2025 Data Breach Investigations Report (DBIR), a finding that reinforces how intimately human behavior remains connected with cyber risk. The report shows social engineering techniques such as phishing and pretexting being adapted across geographies, industries, and organizational scales, exploiting users' daily reliance on seemingly harmless digital interactions.

The DBIR also finds that cybercriminals increasingly pose as trusted entities, exploiting familiar touchpoints such as parcel delivery alerts or password reset prompts, knowing that these everyday notifications naturally invite a quick click.

These once-basic tricks have evolved into sophisticated deception architectures in which the web itself becomes a weapon. Among the most alarming developments are fake software updates that mimic the look and feel of legitimate pop-ups, and links embedded in what appear to be trusted vendor newsletters that quietly redirect users to compromised websites.

Attackers have also been found coaxing individuals into pasting malicious commands into enterprise systems, turning essential workplace tools against their own users. Even long-standing security fixtures are being repurposed: verification prompts and "prove you are human" checkpoints are manipulated to funnel users towards infected attachments and rogue sites masquerading as legitimate webpages, cloaking attacks behind a façade of security.

Phishing-as-a-Service platforms now enable credential theft with greater precision and sophistication, and cybercriminals are deliberately harvesting multi-factor authentication data through campaigns aimed at specific sectors, further expanding the scope of credential theft.

In the resulting threat landscape, security itself is frequently used as camouflage, and defensive systems are only as strong as the trust users place in the screens before them. Yet even as attack techniques grow more sophisticated, experts contend the fundamentals remain unchanged: no company or individual can be effectively protected without understanding their own vulnerabilities.

The industry continues to emphasise improving visibility, reducing the digital attack surface, and adopting best practices to stay ahead of increasingly adaptive adversaries; the risks, however, extend far beyond the corporate perimeter. Research from Cybersecurity Experts United found that 62% of home burglaries were linked to personal information posted online, underscoring that digital behaviour now directly influences physical security.

A deeper layer to these crimes is the psychological impact on victims, ranging from persistent anxiety to long-term trauma. Studies also reveal that oversharing on social media is now a key enabler for modern burglars, with 78% of burglars surveyed admitting to mining publicly available posts for clues about travel plans, property layouts, and periods of absence from the home.

Homes mentioned in travel-related updates are reportedly 35% more likely to be targeted, and vacation-time burglaries are more common in areas with heavy social media usage; notably, a substantial percentage of these incidents involve women who publicly announced their travel plans online. This convergence of online exposure and real-world harm reverberates into many other areas as well.

Fraudulent transactions, identity theft, and cyber-enabled scams frequently spill over into physical crimes such as robbery and assault, a pattern security specialists predict will only grow more severe unless awareness campaigns and behavioral countermeasures are put in place. Growing digital connectivity has made comprehensive protection essential, from security precautions at home during travel to careful management of online identities.

Security experts warn that the line between the physical and digital worlds is becoming increasingly blurred, and behavioral resilience will matter as much as technological safeguards. As cybercrime evolves through ever more complex tactics, whether subtle manipulation, data theft, or the exploitation of online habits that expose homes and families, the need for greater public awareness and better-informed organizational responses keeps growing.

Authorities emphasize that reducing risk is not a matter of scattered, isolated measures but of adopting a holistic security mindset: limiting what we share, questioning what we click, and strengthening the systems that protect both our networks and our everyday lives. In an age when criminals increasingly weaponize trust, information, and routine behavior, collective vigilance may be our strongest defence.
