
AI Emotional Monitoring in the Workplace Raises New Privacy and Ethical Concerns

 

As artificial intelligence becomes more deeply woven into daily life, tools like ChatGPT have already demonstrated how appealing digital emotional support can be. While public discussions have largely focused on the risks of using AI for therapy—particularly for younger or vulnerable users—a quieter trend is unfolding inside workplaces. Increasingly, companies are deploying generative AI systems not just for productivity but to monitor emotional well-being and provide psychological support to employees. 

This shift accelerated after the pandemic reshaped workplaces and normalized remote communication. Now, industries including healthcare, corporate services and HR are turning to software that can identify stress, assess psychological health and respond to emotional distress. Unlike consumer-facing mental wellness apps, these systems sit inside corporate environments, raising questions about power dynamics, privacy boundaries and accountability. 

Some companies initially introduced AI-based counseling tools that mimic therapeutic conversation. Early research suggests people sometimes feel more validated by AI responses than by human interaction. One study found chatbot replies were perceived as equally or more empathetic than responses from licensed therapists. This is largely attributed to predictably supportive responses, lack of judgment and uninterrupted listening—qualities users say make it easier to discuss sensitive topics. 

Yet the workplace context changes everything. Studies show many employees hesitate to use employer-provided mental health tools due to fear that personal disclosures could resurface in performance reviews or influence job security. The concern is not irrational: some AI-powered platforms now go beyond conversation, analyzing emails, Slack messages and virtual meeting behavior to generate emotional profiles. These systems can detect tone shifts, estimate personal stress levels and map emotional trends across departments. 

One example involves workplace platforms using facial analytics to categorize emotional expression and assign wellness scores. While advocates claim this data can help organizations spot burnout and intervene early, critics warn it blurs the line between support and surveillance. The same system designed to offer empathy can simultaneously collect insights that may be used to evaluate morale, predict resignations or inform management decisions. 

Research indicates that constant monitoring can heighten stress rather than reduce it. Workers who know they are being analyzed tend to modulate behavior, speak differently or avoid emotional honesty altogether. The risk of misinterpretation is another concern: existing emotion-tracking models have demonstrated bias against marginalized groups, potentially leading to misread emotional cues and unfair conclusions. 

The growing use of AI-mediated emotional support raises broader organizational questions. If employees trust AI more than managers, what does that imply about leadership? And if AI becomes the primary emotional outlet, what happens to the human relationships workplaces rely on? 

Experts argue that AI can play a positive role, but only when paired with transparent data use policies, strict privacy protections and ethical limits. Ultimately, technology may help supplement emotional care—but it cannot replace the trust, nuance and accountability required to sustain healthy workplace relationships.

Google’s High-Stakes AI Strategy: Chips, Investment, and Concerns of a Tech Bubble

 

At Google’s headquarters, engineers work on the company’s Tensor Processing Unit, or TPU—custom silicon built specifically for AI workloads. The device appears ordinary, but its role is anything but. Google expects these chips to eventually power nearly every AI action across its platforms, making them integral to the company’s long-term technological dominance. 

Google CEO Sundar Pichai has repeatedly described AI as the most transformative technology ever developed, more consequential than the internet, smartphones, or cloud computing. However, the excitement is accompanied by growing caution from economists and financial regulators. Institutions such as the Bank of England have signaled concern that the rapid rise in AI-related company valuations could lead to an abrupt correction. Even prominent industry leaders, including OpenAI CEO Sam Altman, have acknowledged that portions of the AI sector may already display speculative behavior. 

Despite those warnings, Google continues expanding its AI investment at record speed. The company now spends over $90 billion annually on AI infrastructure, tripling its investment from only a few years earlier. The strategy aligns with a larger trend: a small group of technology companies—including Microsoft, Meta, Nvidia, Apple, and Tesla—now represents roughly one-third of the total value of the S&P 500 index. Analysts note that such concentration of financial power exceeds levels seen during the dot-com era. 

Within the secured TPU lab, the environment is loud, dominated by cooling units required to manage the extreme heat generated when chips process AI models. The TPU differs from traditional CPUs and GPUs because it is built specifically for machine learning applications, giving Google tighter efficiency and speed advantages while reducing reliance on external chip suppliers. The competition for advanced chips has intensified to the point where Silicon Valley executives openly negotiate and lobby for supply. 

Outside Google, several AI companies have seen share value fluctuations, with investors expressing caution about long-term financial sustainability. However, product development continues rapidly. Google’s recently launched Gemini 3.0 model positions the company to directly challenge OpenAI’s widely adopted ChatGPT.  

Beyond financial pressures, the AI sector must also confront resource challenges. Analysts estimate that global data centers could consume energy on the scale of an industrialized nation by 2030. Still, companies pursue ever-larger AI systems, motivated by the possibility of reaching artificial general intelligence—a milestone where machines match or exceed human reasoning ability. 

Whether the current acceleration becomes a long-term technological revolution or a temporary bubble remains unresolved. But the race to lead AI is already reshaping global markets, investment patterns, and the future of computing.

Massive Leak Exposes 1.3 Billion Passwords and 2 Billion Emails — Check If Your Credentials Are at Risk

 

If you haven’t recently checked whether your login details are floating around online, now is the time. A staggering 1.3 billion unique passwords and 2 billion unique email addresses have surfaced publicly — and not due to a fresh corporate breach.

Instead, this massive cache was uncovered after threat-intelligence firm Synthient combed through both the open web and the dark web for leaked credentials. You may recognize the company, as they previously discovered 183 million compromised email accounts.

Much of this enormous collection is made up of credential-stuffing lists, which bundle together login details stolen from various older breaches. Cybercriminals typically buy and trade these lists to attempt unauthorized logins across multiple platforms.

This time, Synthient pulled together all 2 billion emails and 1.3 billion passwords, and with help from Troy Hunt and Have I Been Pwned (HIBP), the entire dataset can now be searched so users can determine if their personal information is exposed.

The compilation was created by Synthient founder Benjamin Brundage, who spent months gathering leaked credentials from countless sources across hacker forums and malware dumps. The dataset includes both older breach data and newly stolen information harvested through info-stealing malware, which quietly extracts passwords from infected devices.

According to Troy Hunt, Brundage provided the raw data while Hunt independently verified its authenticity.

To test its validity, Hunt used one of his old email addresses — one he already knew had appeared in past credential lists. As expected, that address and several associated passwords were included in the dataset.

After that, Hunt contacted a group of HIBP subscribers for verification. By choosing some users whose data had never appeared in a breach and others with previously exposed data, he confirmed that the new dataset wasn’t just recycled information — fresh, previously unseen credentials were indeed present.

HIBP has since integrated the exposed passwords into its Pwned Passwords service. Importantly, this database never links email addresses to passwords, maintaining privacy while still allowing users to check if their passwords are compromised.

To see if any of your current passwords have been leaked, visit the Pwned Passwords page and enter them. Your full password is never sent to any server: the browser hashes it locally and transmits only the first five characters of that hash, a k-anonymity technique that lets the service return candidate matches without ever learning the password itself.
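For readers who prefer to script the check, the sketch below queries the same public Pwned Passwords range API that the web page uses: the password is hashed with SHA-1 locally, only the five-character hash prefix is sent, and the returned suffixes are compared on your own machine. It assumes the third-party requests package is installed.

```python
# Minimal sketch: check a password against Pwned Passwords via the
# k-anonymity range API. Only the first 5 hex characters of the SHA-1
# hash ever leave this machine; the full password and full hash stay local.
import hashlib
import requests  # third-party; pip install requests

def pwned_count(password: str) -> int:
    """Return how many times the password appears in the corpus (0 = not found)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]

    # The range endpoint returns every known suffix sharing this prefix,
    # one "SUFFIX:COUNT" pair per line.
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()

    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = pwned_count("password123")  # deliberately weak example
    print("Exposed in breaches" if hits else "Not found in the dataset", hits)
```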

If any password you use appears in the results, change it immediately. You can rely on a password manager to generate strong replacements, or use free password generators from tools like Bitwarden, LastPass, and Proton Pass.

The single most important cybersecurity rule remains the same: never reuse passwords. When criminals obtain one set of login credentials, they try them across other platforms — an attack method known as credential stuffing. Because so many people still repeat passwords, these attacks remain highly successful.

Make sure every account you own uses a strong, complex, and unique password. Password managers and built-in password generators are the easiest way to handle this.
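If you would rather generate a replacement yourself, a few lines of Python’s standard secrets module are enough for a cryptographically secure password; the length and character set below are illustrative choices, not a recommendation tied to any particular manager.

```python
# Minimal sketch: generate a strong random password with the standard
# library's cryptographically secure "secrets" module.
import secrets
import string

def generate_password(length: int = 20) -> str:
    # Letters, digits, and punctuation; trim the alphabet if a site rejects symbols.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```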

Even the best password may not protect you if it’s stolen through a breach or malware. That’s why Two-Factor Authentication (2FA) is crucial. With a second verification step — such as an authenticator app or security key — criminals won’t be able to access your account even if they know the password.
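As a concrete illustration of the authenticator-app flavor of 2FA, the sketch below uses the third-party pyotp library to derive time-based one-time codes from a shared secret. A real service would provision that secret once during enrollment, usually via a QR code; here a throwaway secret is generated purely for demonstration.

```python
# Sketch of TOTP, the mechanism behind most authenticator apps.
# Assumes the third-party "pyotp" package is installed (pip install pyotp).
import pyotp

secret = pyotp.random_base32()   # in practice, issued once by the service
totp = pyotp.TOTP(secret)

code = totp.now()                # 6-digit code that rotates every 30 seconds
print("Current code:", code)
print("Verifies:", totp.verify(code))  # True while the code is still valid
```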

You should also safeguard your devices against malware using reputable antivirus tools on Windows, Mac, and Android. Info-stealing malware, often spread through phishing attacks, remains one of the most common ways passwords are siphoned directly from user devices.

If you’re interested in going beyond passwords altogether, consider switching to passkeys. These use cryptographic key pairs rather than passwords, making them unguessable, non-reusable, and resistant to phishing attempts.
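To make the key-pair idea concrete, here is a deliberately simplified challenge-response sketch built on the third-party cryptography package: the private key never leaves the device, and the site stores only the public key it uses to verify a signed challenge. It illustrates the principle behind passkeys, not the actual WebAuthn/FIDO2 protocol, which adds origin binding, attestation, and more.

```python
# Simplified sketch of the key-pair idea behind passkeys (NOT real WebAuthn).
# Assumes the third-party "cryptography" package is installed.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the device creates a key pair and gives the site only the public key.
device_private_key = Ed25519PrivateKey.generate()
site_public_key = device_private_key.public_key()

# Login: the site sends a random challenge, the device signs it, the site verifies.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge)
site_public_key.verify(signature, challenge)  # raises InvalidSignature if forged
print("Challenge verified; no reusable secret ever crossed the wire.")
```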

Think of your password as the lock on your home’s front door: the stronger it is, the harder it is for intruders to break in. But even with strong habits, your information can still be exposed through breaches outside your control — one reason many experts, including Hunt, see passkeys as the future.

While it’s easy to panic after reading about massive leaks like this, staying consistent with good digital hygiene and regularly checking your exposure will keep you one step ahead of cybercriminals.

Why Long-Term AI Conversations Are Quietly Becoming a Major Corporate Security Weakness

 



Many organisations are starting to recognise a security problem that has been forming silently in the background. Conversations employees hold with public AI chatbots can accumulate into a long-term record of sensitive information, behavioural patterns, and internal decision-making. As reliance on AI tools increases, these stored interactions may become a serious vulnerability that companies have not fully accounted for.

The concern resurfaced after a viral trend in late 2024 in which social media users asked AI models to highlight things they “might not know” about themselves. Most treated it as a novelty, but the trend revealed a larger issue. Major AI providers routinely retain prompts, responses, and related metadata unless users disable retention or use enterprise controls. Over extended periods, these stored exchanges can unintentionally reveal how employees think, communicate, and handle confidential tasks.

This risk becomes more severe when considering the rise of unapproved AI use at work. Recent business research shows that while the majority of employees rely on consumer AI tools to automate or speed up tasks, only a fraction of companies officially track or authorise such usage. This gap means workers frequently insert sensitive data into external platforms without proper safeguards, enlarging the exposure surface beyond what internal security teams can monitor.

Vendor assurances do not fully eliminate the risk. Although companies like OpenAI, Google, and others emphasize encryption and temporary chat options, their systems still operate within legal and regulatory environments. One widely discussed court order in 2025 required the preservation of AI chat logs, including previously deleted exchanges. Even though the order was later withdrawn and the company resumed standard deletion timelines, the case reminded businesses that stored conversations can resurface unexpectedly.

Technical weaknesses also contribute to the threat. Security researchers have uncovered misconfigured databases operated by AI firms that contained user conversations, internal keys, and operational details. Other investigations have demonstrated that prompt-based manipulation in certain workplace AI features can cause private channel messages to leak. These findings show that vulnerabilities do not always come from user mistakes; sometimes the supporting AI infrastructure itself becomes an entry point.

Criminals have already shown how AI-generated impersonation can be exploited. A notable example involved attackers using synthetic voice technology to imitate an executive, tricking an employee into transferring funds. As AI models absorb years of prompt history, attackers could use stylistic and behavioural patterns to impersonate employees, tailor phishing messages, or replicate internal documents.

Despite these risks, many companies still lack comprehensive AI governance. Studies reveal that employees continue to insert confidential data into AI systems, sometimes knowingly, because it speeds up their work. Compliance requirements such as GDPR’s strict data minimisation rules make this behaviour even more dangerous, given the penalties for mishandling personal information.

Experts advise organisations to adopt structured controls. This includes building an inventory of approved AI tools, monitoring for unsanctioned usage, conducting risk assessments, and providing regular training so staff understand what should never be shared with external systems. Some analysts also suggest that instead of banning shadow AI outright, companies should guide employees toward secure, enterprise-level AI platforms.

If companies fail to act, each casual AI conversation can slowly accumulate into a dataset capable of exposing confidential operations. While AI brings clear productivity benefits, unmanaged use may convert everyday workplace conversations into one of the most overlooked security liabilities of the decade.

Cloudflare Outage Traced to Internal File Error After Initial Fears of Massive DDoS Attack

Cloudflare experienced a major disruption yesterday that knocked numerous websites and online services offline. At first, the company suspected it was under a massive “hyper-scale” DDoS attack.

“I worry this is the big botnet flexing,” Cloudflare co-founder and CEO Matthew Prince wrote in an internal chat, referring to concerns that the Aisuru botnet might be responsible. However, the team later confirmed that the issue originated from within Cloudflare’s own infrastructure: a critical configuration file unexpectedly grew in size and spread across the network.

This oversized file caused failures in software responsible for reading the data used by Cloudflare’s bot management system, which relies on machine learning to detect harmful traffic. As a result, Cloudflare’s core CDN, security tools, and other services were impacted.

“After we initially wrongly suspected the symptoms we were seeing were caused by a hyper-scale DDoS attack, we correctly identified the core issue and were able to stop the propagation of the larger-than-expected feature file and replace it with an earlier version of the file,” Prince explained in a post-mortem.

According to Prince, the issue began when changes to database permissions caused the system to generate duplicate entries inside a “feature file” used by the company’s bot detection model. The file then doubled in size and automatically replicated across Cloudflare’s global network.

Machines that route traffic through Cloudflare read this file to keep the bot management system updated. But the software had a strict size limit for this configuration file, and the bloated version exceeded that threshold, causing widespread failures. Once the old version was restored, traffic began returning to normal — though it took another 2.5 hours to stabilize the network after the sudden surge in requests.

Prince apologized for the downtime, noting the heavy dependence many online platforms have on Cloudflare. “On behalf of the entire team at Cloudflare, I would like to apologize for the pain we caused the Internet today,” he wrote, adding that outages are especially serious due to “Cloudflare’s importance in the Internet ecosystem.”

Cloudflare’s bot management system assigns bot scores using machine learning, helping customers filter legitimate traffic from malicious requests. The configuration file powering this system is updated every five minutes to adapt quickly to changing bot behaviors.

The faulty file was generated by a query on a ClickHouse database cluster. After new permissions were added, the query began returning additional metadata—duplicating columns and producing more rows than expected. Because the system caps features at 200, the oversized file triggered a panic state once deployed across Cloudflare’s servers.
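To illustrate why duplicated query output could bring down traffic handling, here is a hypothetical, heavily simplified sketch (not Cloudflare’s actual code) of a loader that enforces a hard 200-feature cap and fails outright when a bloated file exceeds it, rather than falling back to the last known-good version.

```python
# Hypothetical illustration (not Cloudflare's code): a hard cap on feature
# count turns a bloated config file into a crash instead of a graceful fallback.
FEATURE_LIMIT = 200  # fixed, preallocated capacity in the traffic-routing software

def load_feature_file(lines: list[str]) -> list[str]:
    features = [line.strip() for line in lines if line.strip()]
    if len(features) > FEATURE_LIMIT:
        # The real system hit a panic state here; a safer design would log the
        # error and keep serving the previous known-good file.
        raise RuntimeError(
            f"feature file has {len(features)} entries, limit is {FEATURE_LIMIT}"
        )
    return features

# Duplicate rows from the misbehaving database query push the file past the cap.
good = [f"feature_{i}" for i in range(180)]
bad = good * 2  # duplicated entries, as described in the post-mortem
try:
    load_feature_file(bad)
except RuntimeError as err:
    print("Refused to load:", err)
```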

The result was a dramatic surge in 5xx server errors. The pattern appeared irregular at first because only some database nodes were generating the bad file. Every five minutes, the system could push either a correct or incorrect version depending on which node handled the query, creating cyclical failures that initially resembled a distributed attack.

Eventually, all ClickHouse nodes began producing the faulty file consistently. Cloudflare resolved the issue by stopping the distribution of the corrupted file, manually injecting a stable version, and restarting its core proxy services. The network returned to normal later that day.

Prince called this Cloudflare’s most significant outage since 2019. To prevent similar incidents, the company plans to strengthen safeguards around internal configuration files, introduce more global kill switches, prevent system overloads caused by error logs, and review failure points across core components.

While Prince emphasized that no system can be guaranteed immune to outages, he noted that past failures have led Cloudflare to build more resilient systems each time.

Genesis Mission Launches as US Builds Closed-Loop AI System Linking National Laboratories

 

The United States has announced a major federal scientific initiative known as the Genesis Mission, framed by the administration as a transformational leap forward in how national research will be conducted. Revealed on November 24, 2025, the mission is described by the White House as the most ambitious federal science effort since the Manhattan Project. The accompanying executive order tasks the Department of Energy with creating an interconnected “closed-loop AI experimentation platform” that will join the nation’s supercomputers, 17 national laboratories, and decades of research datasets into one integrated system. 

Federal statements position the initiative as a way to speed scientific breakthroughs in areas such as quantum engineering, fusion, advanced semiconductors, biotechnology, and critical materials. DOE has called the system “the most complex scientific instrument ever built,” describing it as a mechanism designed to double research productivity by linking experiment automation, data processing, and AI models into a single continuous pipeline. The executive order requires DOE to progress rapidly, outlining milestones across the next nine months that include cataloging datasets, mapping computing capacity, and demonstrating early functionality for at least one scientific challenge. 

The Genesis Mission will not operate solely as a federal project. DOE’s launch materials confirm that the platform is being developed alongside a broad coalition of private, academic, nonprofit, cloud, and industrial partners. The roster includes major technology companies such as Microsoft, Google, OpenAI for Government, NVIDIA, AWS, Anthropic, Dell Technologies, IBM, and HPE, alongside aerospace companies, semiconductor firms, and energy providers. Their involvement signals that Genesis is designed not only to modernize public research, but also to serve as part of a broader industrial and national capability. 

However, key details remain unclear. The administration has not provided a cost estimate, funding breakdown, or explanation of how platform access will be structured. Major news organizations have already noted that the order contains no explicit budget allocation, meaning future appropriations or resource repurposing will determine implementation. This absence has sparked debate across the AI research community, particularly among smaller labs and industry observers who worry that the platform could indirectly benefit large frontier-model developers facing high computational costs. 

The order also lays the groundwork for standardized intellectual-property agreements, data governance rules, commercialization pathways, and security requirements—signaling a tightly controlled environment rather than an open-access scientific commons. Certain community reactions highlight how the initiative could reshape debates around open-source AI, public research access, and the balance of federal and private influence in high-performance computing. While its long-term shape is not yet clear, the Genesis Mission marks a pivotal shift in how the United States intends to organize, govern, and accelerate scientific advancement using artificial intelligence and national infrastructure.

UK’s Proposed Ban on Ransomware Payments Sparks Debate as Attacks Surge in 2025

 

Ransomware incidents continue to escalate, reigniting discussions around whether organizations should ever pay attackers. Cybercriminals are increasingly leveraging ransomware to extort significant sums from companies desperate to protect their internal and customer data.

Recent research revealed a 126% jump in ransomware activity in the first quarter of 2025, compared to the previous quarter — a spike that has prompted urgent attention.

In reaction to this rise, the UK government has proposed banning ransomware payments, a move intended to curb organizations from transferring large sums to cybercriminals in hopes of restoring their data or avoiding public scrutiny. Under the current proposal, the ban would initially apply to public sector bodies and Critical National Infrastructure (CNI) organizations, though there is growing interest in extending the policy across all UK businesses.

If this wider ban takes effect, organizations will need to adapt to a reality where paying attackers is no longer an option. Instead, they will have to prioritize robust resilience measures, thorough incident response planning, and faster recovery capabilities.

This raises a central debate: Are ransomware payment bans the right solution? And if implemented, how can organizations protect themselves without relying on a financial “escape route”?

Many organizations have long viewed ransom payments as a convenient way to restore operations — a perceived “get out of jail free” shortcut that avoids lengthy reporting, disclosure, or regulatory scrutiny.

But the reality is stark: when dealing with criminals, there are no guarantees. Paying a ransom reinforces an already thriving network of cybercriminal operations.

In spite of this, organizations continue to pay. Recent studies indicate that 41% of organizations in 2025 admitted to paying ransom demands, although only 67% of those who paid actually regained full access to their data. These figures highlight the willingness of companies to divert large budgets to ransom fees — investments that could otherwise strengthen cyber defenses and prevent attacks altogether.

There are strong arguments on both sides of the UK proposal. A payment ban removes the burden of negotiating with threat actors who have no obligation to keep their word. It also eliminates the possibility of paying for data that attackers may never return after receiving the funds.

Another issue is the ongoing stigma around publicly acknowledging a ransomware attack. To protect their reputation, many organizations choose to quietly meet attackers’ demands — enabling criminals to operate undetected and without law enforcement involvement.

A ban would change this dynamic entirely. Without the option to pay, organizations would be forced to report incidents, helping authorities investigate and track cybercriminal activity more effectively.

The broader hope behind the proposal is that, without profit incentives, ransomware attacks will eventually fade out. While optimistic, the UK government views this approach as one of the few viable long-term strategies to reduce ransomware incidents.

However, the near-term outlook is more complex. Attacks are unlikely to stop immediately, and eliminating the option to pay could leave organizations without a practical mechanism for retrieving highly sensitive data — including customer information — in the aftermath of an attack.

If ransomware payments become illegal, organizations must proactively invest in stronger cyber resilience. Small and medium businesses, which often lack internal cybersecurity expertise, can significantly benefit from partnering with a Managed Service Provider (MSP). MSPs manage IT systems and cybersecurity operations, allowing business leaders to focus on growth and innovation. Research shows that over 80% of SMEs now rely on MSPs for cybersecurity support.

Regular security awareness training is also essential. Educating employees on identifying phishing attempts and suspicious activity helps reduce human errors that often lead to ransomware breaches.

Furthermore, a tested and well-structured incident response plan is critical. Many organizations overlook this step, but it plays a major role in containing damage during an attack.

With the UK edging closer to implementing a nationwide ransomware payment ban, organizations cannot afford to wait. Strengthening cyber resilience is the most effective path forward. This includes deploying advanced security tools, working with MSPs, and building a thorough — and regularly tested — incident response strategy.

Businesses that act early will be far better equipped to withstand attacks in a world where paying ransom is no longer an option.

Akira Ramps up Ransomware Activity With New Variant And More Aggressive Intrusion Methods

 


Akira, one of the most active ransomware operations this year, has expanded its capabilities and increased the scale of its attacks, according to new threat intelligence shared by global security agencies. The group’s operators have upgraded their ransomware toolkit, continued to target a broad range of sectors, and sharply increased the financial impact of their attacks.

Data collected from public extortion portals shows that by the end of September 2025 the group had claimed roughly $244.17 million in ransom proceeds. Analysts note that this figure represents a steep rise compared to estimates released in early 2024. Current tracking data places Akira second in overall activity among hundreds of monitored ransomware groups, with more than 620 victim organisations listed this year.

The growing number of incidents has prompted an updated joint advisory from international cyber authorities. The latest report outlines newly observed techniques, warns of the group’s expanded targeting, and urges all organisations to review their defensive posture.

Researchers confirm that Akira has introduced a new ransomware strain, commonly referenced as Akira v2. This version is designed to encrypt files at higher speeds and make data recovery significantly harder. Systems affected by the new variant often show one of several file extensions, including .akira, .powerranges, .akiranew, and .aki. Victims typically find ransom instructions stored as text files in both the main system directory and user folders.

Investigations show that Akira actors gain entry through several familiar but effective routes. These include exploiting security gaps in edge devices and backup servers, taking advantage of authentication bypass and scripting flaws, and using buffer overflow vulnerabilities to run malicious code. Stolen or brute-forced credentials remain a common factor, especially when multi-factor authentication is disabled.

Once inside a network, the attackers quickly establish long-term access. They generate new domain accounts, including administrative profiles, and have repeatedly created an account named itadm during intrusions. The group also uses legitimate system tools to explore networks and identify sensitive assets. This includes commands used for domain discovery and open-source frameworks designed for remote execution. In many cases, the attackers uninstall endpoint detection products, change firewall rules, and disable antivirus tools to remain unnoticed.

The group has also expanded its focus to virtual and cloud-based environments. Security teams recently observed the encryption of virtual machine disk files on Nutanix AHV, in addition to previous activity on VMware ESXi and Hyper-V platforms. In one incident, operators temporarily powered down a domain controller to copy protected virtual disk files and load them onto a new virtual machine, allowing them to access privileged credentials.

Command and control activity is often routed through encrypted tunnels, and recent intrusions show the use of tunnelling services to mask traffic. Authorities warn that data theft can occur within hours of initial access.

Security agencies stress that the most effective defence remains prompt patching of known exploited vulnerabilities, enforcing multi-factor authentication on all remote services, monitoring for unusual account creation, and ensuring that backup systems are fully secured and tested.