
OpenAI’s Evolving Mission: A Shift from Safety to Profit?

 

OpenAI, the company behind ChatGPT, has quietly revised its stated mission and is now under scrutiny for it. Its 2023 filings described developing artificial intelligence that "safely benefits humanity," unconstrained by a need to generate profit. But a tax filing submitted in November 2025 for the prior year drops the word "safely." The edit arrives alongside structural shifts toward revenue-driven operations, and though small in wording, it feeds debate over the company's long-term priorities. OpenAI has offered no public explanation for removing the term tied to caution, and as finances now shape direction more openly, it remains uncertain whether oversight will keep pace with the change.

The shift has escaped widespread media attention, yet it matters deeply, particularly as OpenAI contends with legal actions alleging emotional manipulation, wrongful deaths, and careless design flaws. Specialists in charitable governance see the silence as telling, suggesting financial motives may now outweigh user well-being. How this unfolds offers insight into public oversight of influential organizations whose products can shape lives for better or worse.

What began in 2015 as a nonprofit effort aimed at serving the public good slowly changed course under the rising costs of building advanced AI systems. By 2019, financial pressure prompted the launch of a for-profit arm under chief executive Sam Altman. That change opened doors: Microsoft alone had committed more than $13 billion by 2024 across successive investments, and further capital injections pushed the organization steadily toward standard commercial structures. In October 2025, a formal separation took shape: a nonprofit entity named OpenAI Foundation remained, while operations moved into a new corporate body called OpenAI Group. The group operates as a public benefit corporation required to weigh wider social impacts, but how those duties are interpreted depends entirely on decisions made behind closed doors by its governing board.

The mission statement now reads "to ensure that artificial general intelligence benefits all of humanity." Gone are the earlier commitments to do so safely and without limits tied to profit. Some see the edit as clear evidence of a growing focus on revenue over caution; safety still appears on OpenAI's public site, but cutting it from the core mission text is telling. Oversight also becomes harder as governance lines blur between the two entities. The nonprofit Foundation retains only about 25% of shares in the Group, a sharp drop from its earlier level of control, and with many leaders sitting on both boards at once, impartial review grows unlikely. That raises doubts about how much power the safety committee actually holds under these conditions.

UK May Enforce Partial Ransomware Payment Ban as Cyber Reforms Advance

Governments around the world are testing varied methods to reduce cybercrime, and outlawing ransomware payouts stands out as especially controversial. In the United Kingdom, momentum is building toward limiting such payments, according to Jen Ellis, an expert closely involved in shaping national responses to ransomware threats.

A ban on ransom payments might come soon in Britain, according to Ellis, who shares leadership of the Ransomware Task Force at the Institute for Security and Technology. While she expects this step, she warns against seeing it as a fix-all: in her view, curbing victim payouts does little to reduce how often hackers strike, since offenders operate beyond such rules. Still, she argues, paying ransoms carries moral weight. Even if a ban's practical impact is narrow, letting money change hands rewards networks built on digital crime.

Before touching payment rules, Ellis anticipates UK authorities will strengthen the country's overall cybersecurity posture. An upgraded Cyber Action Plan has recently been published, reshaping the goals that govern how the country prepares for and reacts to digital threats and signalling a fresh push to overhaul national defences online.

A key piece of legislation now moving forward is the Cyber Security and Resilience Bill, which has just reached its second parliamentary debate stage. Should it become law, many businesses outside government will face stricter rules on disclosing breaches, and monitoring weak points in supplier networks will become compulsory. The aim is clearer insight into digital threats and fewer large-scale dangers tied to external vendors, with accountability shifting noticeably toward proactive defence, though details remain under review.

Only after those efforts advance, according to Ellis, might officials consider limiting ransomware payments. It is unclear when or how broadly such limits would take effect, but she anticipates they would not apply uniformly. Open questions include whether constraints would affect only major entities or particular sectors, whether exceptions would be permitted under set conditions, and whether groups allowed to make payments would first need authorization, especially to align with sanctions rules.

Speaking recently with Information Security Media Group, Ellis also touched on shifts in how ransomware groups operate. Not every group follows the same pattern: some now avoid extreme disruption, while outfits like Scattered Spider still stand out for acting boldly and unpredictably. Payment restrictions came up too, since they might reshape what both hackers and targeted organizations expect from these incidents.

Working alongside security chiefs and tech firms, Ellis leads NextJenSecurity to deepen insight into digital threats. Her involvement extends beyond the private sector - advising UK government units like the Cabinet Office’s cyber panel. Institutions ranging from the Royal United Services Institute to the CVE Program include her in key functions. Engagement with policy experts and advocacy groups forms part of her broader effort to reshape how online risks are understood.

State-Backed Hackers Are Turning to AI Tools to Plan, Build, and Scale Cyber Attacks

 



Cybersecurity investigators at Google have confirmed that state-sponsored hacking groups are actively relying on generative artificial intelligence to improve how they research targets, prepare cyber campaigns, and develop malicious tools. According to the company’s threat intelligence teams, North Korea–linked attackers were observed using the firm’s AI platform, Gemini, to collect and summarize publicly available information about organizations and employees they intended to target. This type of intelligence gathering allows attackers to better understand who works at sensitive companies, what technical roles exist, and how to approach victims in a convincing way.

Investigators explained that the attackers searched for details about leading cybersecurity and defense companies, along with information about specific job positions and salary ranges. These insights help threat actors craft more realistic fake identities and messages, often impersonating recruiters or professionals to gain the trust of their targets. Security experts warned that this activity closely resembles legitimate professional research, which makes it harder for defenders to distinguish normal online behavior from hostile preparation.

The hacking group involved, tracked as UNC2970, is linked to North Korea and overlaps with a network widely known as Lazarus Group. This group has previously run a long-term operation in which attackers pretended to offer job opportunities to professionals in aerospace, defense, and energy companies, only to deliver malware instead. Researchers say this group continues to focus heavily on defense-related targets and regularly impersonates corporate recruiters to begin contact with victims.

The misuse of AI is not limited to one actor. Multiple hacking groups connected to China and Iran were also found using AI tools to support different phases of their operations. Some groups used AI to gather targeted intelligence, including collecting email addresses and account details. Others relied on AI to analyze software weaknesses, prepare technical testing plans, interpret documentation from open-source tools, and debug exploit code. Certain actors used AI to build scanning tools and malicious web shells, while others created fake online identities to manipulate individuals into interacting with them. In several cases, attackers claimed to be security researchers or competition participants in order to bypass safety restrictions built into AI systems.

Researchers also identified malware that directly communicates with AI services to generate harmful code during an attack. One such tool, HONESTCUE, requests programming instructions from AI platforms and receives source code that is used to build additional malicious components on the victim’s system. Instead of storing files on disk, this malware compiles and runs code directly in memory using legitimate system tools, making detection and forensic analysis more difficult. Separately, investigators uncovered phishing kits designed to look like cryptocurrency exchanges. These fake platforms were built using automated website creation tools from Lovable AI and were used to trick victims into handing over login credentials. Parts of this activity were linked to a financially motivated group known as UNC5356.

Security teams also reported an increase in so-called ClickFix campaigns. In these schemes, attackers use public sharing features on AI platforms to publish convincing step-by-step guides that appear to fix common computer problems. In reality, these instructions lead users to install malware that steals personal and financial data. This trend was first flagged in late 2025 by Huntress.

Another growing threat involves model extraction attacks. In these cases, adversaries repeatedly query proprietary AI systems in order to observe how they respond and then train their own models to imitate the same behavior. In one large campaign, attackers sent more than 100,000 prompts to replicate how an AI model reasons across many tasks in different languages. Researchers at Praetorian demonstrated that a functional replica could be built using a relatively small number of queries and limited training time. Experts warned that keeping AI model parameters secret is not enough, because every response an AI system provides can be used as training data for attackers.
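The extraction idea described above can be reduced to a toy sketch: treat the target model as a black box, harvest query-response pairs, and fit a local replica from the responses alone. In the sketch below, the `oracle` function is a stand-in for a remote proprietary AI API (not any real service), and a simple linear model stands in for the far more complex models attackers actually copy; the mechanics of query-then-fit are the same in principle.

```python
# Minimal sketch of model extraction: query a black-box "oracle", record the
# responses, and train a local "student" that imitates it. All names here are
# illustrative; the oracle stands in for a remote proprietary model API.

import random

def oracle(x):
    # Hidden proprietary model the attacker cannot inspect: y = 3x + 7.
    return 3.0 * x + 7.0

# Step 1: the attacker sends many queries and records every response.
queries = [random.uniform(-10, 10) for _ in range(1000)]
answers = [oracle(x) for x in queries]

# Step 2: the attacker fits a replica purely from the harvested pairs
# (here, an ordinary least-squares line fit).
n = len(queries)
mx = sum(queries) / n
my = sum(answers) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(queries, answers))
         / sum((x - mx) ** 2 for x in queries))
intercept = my - slope * mx

def student(x):
    # The extracted copy now mimics the oracle without access to its insides.
    return slope * x + intercept
```

This is why researchers warn that keeping model parameters secret is insufficient: every response leaks information, and enough responses amount to a training set for a functional clone.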

Google, which launched its AI Cyber Defense Initiative in 2024, stated that artificial intelligence is increasingly amplifying the capabilities of cybercriminals by improving their efficiency and speed. Company representatives cautioned that as attackers integrate AI into routine operations, the volume and sophistication of attacks will continue to rise. Security specialists argue that defenders must adopt similar AI-powered tools to automate threat detection, accelerate response times, and operate at the same machine-level speed as modern attacks.


Shadowserver Finds 6,000 Exposed SmarterMail Servers Hit by Critical Flaw

 

More than 6,000 SmarterMail systems are reachable online and potentially at risk from a serious login vulnerability, according to the nonprofit cybersecurity group Shadowserver. The finding comes as hackers increasingly target outdated corporate mail setups left unprotected.


watchTowr informed SmarterTools about the security weakness on January 8, and a patch was released one week later, before an official CVE number had been assigned. The flaw was later designated CVE-2026-23760 and earned a top-tier severity rating because of how deeply intruders could penetrate affected systems.

A security advisory logged in the NIST National Vulnerability Database describes the issue in SmarterMail releases before build 9511. The flaw sits in the password reset API, where access control does not function properly: the force-reset-password feature accepts requests without requiring proof of identity, checking neither a reset token's validity nor current login details. Without any prior access, a threat actor who knows only an administrator's username can trigger a reset for that account, granting complete takeover of the affected system.
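The class of bug the NVD entry describes can be illustrated with a short sketch. This is not SmarterMail's actual code; the function names and data structures below are invented for illustration. The point is the missing identity check on the reset path, and what a correct handler has to verify before touching credentials.

```python
# Illustrative sketch only -- not SmarterMail's code. It shows the flaw class:
# a password-reset handler that changes credentials without any identity check.

USERS = {"admin": {"password": "old-secret", "is_admin": True}}
VALID_TOKENS = {}  # single-use reset tokens, issued out-of-band (e.g. email)

def force_reset_password_vulnerable(username, new_password):
    # BUG: no session check, no reset-token check. Anyone who knows a valid
    # username ("admin" is a safe guess) can overwrite that account's password.
    if username in USERS:
        USERS[username]["password"] = new_password
        return True
    return False

def force_reset_password_fixed(username, new_password,
                               session_user=None, reset_token=None):
    # A correct handler proves identity first: either a valid single-use
    # token for this account, or an authenticated admin session.
    token_ok = (reset_token is not None
                and reset_token == VALID_TOKENS.get(username))
    admin_ok = bool(USERS.get(session_user, {}).get("is_admin"))
    if username in USERS and (token_ok or admin_ok):
        USERS[username]["password"] = new_password
        return True
    return False
```

In the vulnerable version, a single unauthenticated request naming "admin" succeeds; in the fixed version, the same request is rejected because neither a token nor an authenticated session is present.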

According to watchTowr, abusing this weakness lets attackers seize admin accounts and gain full access to vulnerable SmarterMail systems, up to remote code execution. Knowing just one administrator username is enough, which makes such attacks much easier to carry out.

Shadowserver is now tracking more than 6,000 SmarterMail servers it flags as likely exposed: over 4,200 in North America and nearly 1,000 in Asia. Where patches remain unapplied, the risk is widespread, and organizations slow to update face a higher chance of compromise.

Separate scan data provided to BleepingComputer by Macnica analyst Yutaka Sejiyama put the number of vulnerable SmarterMail systems at more than 8,550. With attackers continuing to target the flaw, the uneven pace of patching across networks only adds to concerns about delayed fixes.

On January 21, watchTowr noted it had detected active exploitation attempts. The next day, confirmation came through Huntress, a cybersecurity company spotting similar incidents. Rather than isolated cases, what they saw pointed to broad, automated attacks aimed at exposed servers. 

The early warnings prompted CISA to add CVE-2026-23760 to its Known Exploited Vulnerabilities catalog, requiring U.S. federal agencies to remediate it by February 16. Flaws like this often become initial entry points, so security teams face rising pressure once hostile groups begin exploiting them, and government and corporate networks alike stand at higher risk after such weaknesses go public.

Separately, Shadowserver observed close to 800,000 IP addresses exposing Telnet services amid exploitation of a serious authentication loophole in GNU Inetutils' telnetd, highlighting how outdated systems still connected to the web can widen security exposure.

Federal Court Fines FIIG $2.5 Million for Major Cybersecurity Breaches; Schools Push Phone-Free Policies

 


Fixed income manager FIIG Securities has been ordered by the Federal Court to pay $2.5 million in penalties over serious cybersecurity shortcomings. The ruling follows findings that the firm failed to adequately safeguard client data over a four-year period, culminating in a significant cyberattack in 2023.

The breach impacted approximately 18,000 clients and resulted in the theft of around 385 gigabytes of sensitive data. Information exposed on the dark web included driver’s licences, passport details, bank account information and tax file numbers.

According to the court, between 13 March 2019 and 8 June 2023, FIIG failed to implement essential cybersecurity safeguards. These failures included insufficient allocation of financial and technological resources, lack of qualified cybersecurity personnel, absence of multi-factor authentication for remote access, weak password and privileged account controls, inadequate firewall and software configurations, and failure to conduct regular penetration testing and vulnerability scans.

The firm also lacked a structured software update process to address security vulnerabilities, did not have properly trained IT staff monitoring threat alerts, failed to provide mandatory cybersecurity awareness training to employees, and did not maintain or regularly test an appropriate cyber incident response plan.

In addition to the $2.5 million penalty, the court ordered FIIG to contribute $500,000 toward ASIC’s legal costs. The company must also undertake a compliance program, including appointing an independent expert to review and strengthen its cybersecurity and cyber resilience frameworks.

This marks the first instance in which the Federal Court has imposed civil penalties for cybersecurity breaches under general Australian Financial Services (AFS) licence obligations.

“FIIG admitted that it failed to comply with its AFS licence obligations and that adequate cyber security measures – suited to a firm of its size and the sensitivity of client data held – would have enabled it to detect and respond to the data breach sooner.

“It also admitted that complying with its own policies and procedures could have supported earlier detection and prevented some or all of the client information from being downloaded.”

ASIC deputy chair Sarah Court emphasised the regulator’s stance on cybersecurity compliance: “Cyber-attacks and data breaches are escalating in both scale and sophistication, and inadequate controls put clients and companies at real risk.

“ASIC expects financial services licensees to be on the front foot every day to protect their clients. FIIG wasn’t – and they put thousands of clients at risk.

“In this case, the consequences far exceeded what it would have cost FIIG to implement adequate controls in the first place.”

Responding to the ruling, FIIG stated: “FIIG accepts the Federal Court’s ruling related to a cybersecurity incident that occurred in 2023 and will comply with all obligations. We cooperated fully throughout the process and have continued to strengthen our systems, governance and controls. No client funds were impacted, and we remain focused on supporting our clients and maintaining the highest standards of information security.”

ASIC Steps Up Cyber Enforcement

The case underscores ASIC’s growing focus on cybersecurity enforcement within the financial services sector.

In July 2025, ASIC initiated civil proceedings against Fortnum Private Wealth Limited, alleging failures to appropriately manage and mitigate cybersecurity risks. Earlier, in May 2022, the Federal Court determined that AFS licensee RI Advice had breached its obligations by failing to maintain adequate risk management systems to address cybersecurity threats.

The Court stated: “Clients entrust licensees with sensitive and confidential information, and that trust carries clear responsibilities.”

In its 2026 key priorities document, ASIC identified cyberattacks, data breaches and weak operational resilience as major risks capable of undermining market integrity and harming consumers.

“Digitisation, legacy systems, reliance on third parties, and evolving threat actor capability continue to elevate cyber risk in ASIC’s view. ASIC is urging directors and financial services license holders to maintain robust risk management frameworks, test their operational resilience and crisis responses, and address vulnerabilities with their third-party service providers.”

Smartphone Restrictions Gain Momentum in Schools

Separately, debate over smartphone use in schools continues to intensify as institutions adopt phone-free policies to improve learning outcomes and student wellbeing.

Addressing concerns about the cost and necessity of phone restrictions, one advocate explained:

"Yes it can seem an expensive way of keeping phones out of schools, and some people question why they can't just insist phones remain in a student's bag," he explains.

"But smartphones create anxiety, fixation, and FOMO - a fear of missing out. The only way to genuinely allow children to concentrate in lessons, and to enjoy break time, is to lock them away."

Supporters argue that schools introducing phone-free systems have seen tangible improvements.

"There have been notable improvements in academic performance, and headteachers also report reductions in bullying," he explains.

Vale of York Academy implemented phone pouches in November. Headteacher Gillian Mills told the BBC:

"It's given us an extra level of confidence that students aren't having their learning interrupted.

"We're not seeing phone confiscations now, which took up time, or the arguments about handing phones over, but also teachers are saying that they are able to teach."

The political landscape is also responding. Conservative leader Kemi Badenoch has pledged to enforce a nationwide smartphone ban in schools if elected, while the Labour government has opted to leave decisions to headteachers and launched a consultation on limiting social media access for under-16s.

As part of broader measures, Ofsted will gain authority to assess school phone policies, with ministers signalling expectations that schools become “phone-free by default”.

Some parents, however, prefer their children to carry phones for safety during travel.

"The first week or so after we install the system is a nightmare," he adds. "Kids refuse, or try and break the pouches open. But once they realise no-one else has a phone, most of them embrace it as a kind of freedom."

The broader societal debate continues as smartphone use expands alongside social media and AI-driven content ecosystems.

"We're getting so many enquiries now. People want to ban phones at weddings, in theatres, and even on film sets," he says.

"Effectively carrying a computer around in your hand has many benefits, but smartphones also open us up to a lot of misdirection and misinformation.

"Enforcing a break, especially for young people, has so many positives, not least for their mental health."

Dugoni believes society may be approaching a critical moment:

"We're getting close to threatening the root of what makes us human, in terms of social interaction, critical thinking faculties, and developing the skills to operate in the modern world," he explains.

AI and Network Attacks Redefine Cybersecurity Risks on Safer Internet Day 2026

 

As Safer Internet Day 2026 approaches, expanding AI capabilities and a rise in network-based attacks are reshaping digital risk. Automated systems now drive both legitimate platforms and criminal activity, prompting leaders at Ping Identity, Cloudflare, KnowBe4, and WatchGuard to call for updated approaches to identity management, network security, and user education. Traditional defences are struggling against faster, more adaptive threats, pushing organisations to rethink protections across access, infrastructure, and human behaviour. While innovation delivers clear benefits, it also equips attackers with powerful tools, increasing risks for businesses, schools, and policymakers who fail to adapt.  

Ping Identity highlights a widening gap between legacy security models and modern AI operations. Systems designed for static environments are ill-suited to dynamic AI applications that operate independently and make real-time decisions. Alex Laurie, the company’s go-to-market CTO, explained that AI agents now behave like active users, initiating processes, accessing sensitive data, and choosing next steps without human prompts. Because their actions closely resemble those of real people, distinguishing between human and machine activity is increasingly difficult. Without proper oversight, these agents can introduce unpredictable risks and expand organisational attack surfaces. 

Laurie advocates moving beyond static credentials toward continuous, verified trust. Instead of assuming legitimacy after login, organisations should validate identity, intent, and context at every interaction. Access decisions must adapt in real time, guided by behaviour and current risk conditions. This approach enables AI innovation while protecting data and users in an environment filled with autonomous digital actors. 
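As a rough illustration of per-request, risk-adaptive access decisions, consider the sketch below. The signal names, weights, and thresholds are invented for illustration and are not Ping Identity's API; the pattern it shows is scoring each request from current context rather than trusting a one-time login.

```python
# Hedged sketch of "continuous, verified trust": every request is scored from
# live context instead of relying on a past login. All signals and thresholds
# here are illustrative assumptions, not any vendor's actual product logic.

def access_decision(request):
    risk = 0
    if request.get("new_device"):
        risk += 2                      # unfamiliar hardware
    if request.get("unusual_location"):
        risk += 2                      # geography deviates from history
    if request.get("sensitive_resource"):
        risk += 1                      # higher-value target raises the bar
    if request.get("is_ai_agent") and not request.get("agent_verified"):
        risk += 3                      # autonomous agent without attestation

    if risk >= 5:
        return "deny"
    if risk >= 3:
        return "step_up_auth"          # re-verify identity and intent now
    return "allow"
```

In this scheme an unverified AI agent connecting from an unusual location is denied outright, a new device touching sensitive data triggers re-verification, and an ordinary request proceeds, which is the real-time, context-driven behavior Laurie describes.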

Cloudflare also warns of AI’s dual-use nature. While it boosts efficiency, it accelerates cybercrime by making attacks faster, cheaper, and harder to detect. Pat Breen cited Australian data from 2024–25, when more than 1,200 cyber incidents required response, including a sharp rise in denial-of-service attacks. Such disruptions immediately impact essential services like healthcare, banking, education, transport, and government systems. Whether AI ultimately increases safety or risk depends on how quickly cyber defences evolve. 

KnowBe4’s Erich Kron stresses the importance of digital mindfulness as AI-generated content and deepfakes spread. Identifying fake content is no longer a technical skill but a basic life skill. Verifying information, protecting personal data, using strong authentication, and keeping software updated are critical habits for reducing harm.

WatchGuard Technologies reports a shift away from malware toward network-focused attacks. Anthony Daniel notes that this trend reinforces the need for Zero Trust strategies that verify every connection. Safer Internet Day underscores that cybersecurity is a shared responsibility, strengthened through consistent, everyday actions.

Black Hat Researcher Proves Air Gaps Fail to Secure Data

 

Air gaps, long hailed as the ultimate defense for sensitive data, are under siege according to Black Hat researcher Mordechai Guri. In a compelling presentation, Guri demonstrated multiple innovative methods to exfiltrate information from supposedly isolated computers, shattering the myth of complete offline security. These techniques exploit everyday hardware components, proving that physical disconnection alone cannot guarantee protection in high-stakes environments like government and military networks.

Guri's BeatCoin malware turns computer speakers into covert transmitters, emitting near-ultrasonic sounds inaudible to humans but detectable by nearby smartphones up to 10 meters away. This allows private keys or other secrets to leak out effortlessly. Even disabling speakers fails, as Fansmitter modulates fan speeds to alter blade frequencies, creating acoustic signals receivable by listening devices within 8 meters. For scenarios without microphones, the Mosquito attack repurposes speakers as rudimentary microphones via GPIO manipulation, enabling ultrasonic data transmission between air-gapped machines.
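The signalling idea underlying these acoustic channels is ordinary frequency-shift keying moved into the near-ultrasonic band. The toy sketch below encodes bits as 18 kHz and 19 kHz tones and recovers them by estimating frequency from zero crossings. All parameters are illustrative assumptions; real attacks tune frequencies, bit rates, and modulation to the specific speaker and receiver hardware.

```python
# Toy sketch of the acoustic covert-channel idea (frequency-shift keying in
# the near-ultrasonic band). Parameters are illustrative, not from BeatCoin.

import math

RATE = 48_000            # samples per second
F0, F1 = 18_000, 19_000  # tone frequencies for bit 0 and bit 1
BIT_SAMPLES = 4_800      # 0.1 s of audio per bit

def modulate(bits):
    # Emit one pure tone per bit. The small phase offset avoids samples that
    # land exactly on zero, which would confuse the crossing counter below.
    samples = []
    for b in bits:
        f = F1 if b else F0
        samples += [math.sin(2 * math.pi * f * t / RATE + 0.123)
                    for t in range(BIT_SAMPLES)]
    return samples

def demodulate(samples):
    # Rough frequency estimate per bit window via zero-crossing counting,
    # then pick whichever tone frequency is closer.
    bits = []
    for i in range(0, len(samples), BIT_SAMPLES):
        window = samples[i:i + BIT_SAMPLES]
        crossings = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)
        freq = crossings * RATE / (2 * len(window))
        bits.append(1 if abs(freq - F1) < abs(freq - F0) else 0)
    return bits
```

A nearby phone sampling the microphone at 48 kHz could, in principle, run the same crossing-count decoder; the tones sit above most adults' hearing range yet well within commodity hardware's passband, which is what makes the channel covert.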

Electromagnetic exploits further erode air-gap defenses. AirHopper manipulates monitor cables to radiate FM-band signals, capturable by a smartphone's built-in receiver. GSMem leverages CPU-RAM pathways to generate cellular-like transmissions detectable by basic feature phones, while USBee transforms USB ports into antennas for broad leakage. These methods highlight how standard peripherals become unwitting conduits for data escape.

Faraday cages, designed to block electromagnetic waves, offer no sanctuary either. Guri's ODINI attack generates low-frequency magnetic fields from CPU cores that penetrate these shields. PowerHammer goes further by inducing parasitic signals on building power lines, tappable by attackers monitoring electrical infrastructure. Such persistence underscores the vulnerability of even fortified setups.

While these attacks assume initial malware infection—often via USB or insiders—real-world precedents like Stuxnet validate the threat. Organizations must layer defenses with anomaly detection, hardware restrictions, and continuous monitoring beyond mere air-gapping. Guri's work urges a reevaluation of "secure" isolation strategies in an era of sophisticated side-channel threats.

SolarWinds Web Help Desk Exploited in Multi-Stage RCE Attack



According to Microsoft's disclosure of a multi-stage attack, the threat actors used internet-exposed SolarWinds Web Help Desk (WHD) instances to gain initial access, then moved laterally across the organization's network to other high-value assets.

However, it is unclear if the activity used a previously patched vulnerability (CVE-2025-26399, CVSS score: 9.8) or recently revealed vulnerabilities (CVE-2025-40551, CVSS score: 9.8, and CVE-2025-40536, CVSS score: 8.1), according to the Microsoft Defender Security Research Team.

"Since the attacks occurred in December 2025 and on machines vulnerable to both the old and new set of CVEs at the same time, we cannot reliably confirm the exact CVE used to gain an initial foothold," the company said in the report. 

About the exploit

CVE-2025-40551 and CVE-2025-26399 are both untrusted-data deserialization vulnerabilities that could result in remote code execution, while CVE-2025-40536 is a security control bypass that might enable an unauthenticated attacker to access some restricted functionality.

Citing evidence of active exploitation in the wild, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) added CVE-2025-40551 to its Known Exploited Vulnerabilities (KEV) catalog last week, requiring Federal Civilian Executive Branch (FCEB) agencies to apply fixes for the flaw by February 6, 2026.

The impact 

In the attacks Microsoft discovered, successful exploitation of an exposed SolarWinds WHD instance gave the attackers unauthenticated remote code execution, letting them run arbitrary commands within the WHD application environment.

Microsoft said that in at least one instance, the threat actors carried out a DCSync attack, impersonating a Domain Controller (DC) to request password hashes and other sensitive data from the Active Directory (AD) database.

What can users do?

To counter the attack, users are advised to update WHD instances, identify and remove any unauthorized RMM tools, rotate admin and service accounts, and isolate vulnerable hosts to contain the breach.

"This activity reflects a common but high-impact pattern: a single exposed application can provide a path to full domain compromise when vulnerabilities are unpatched or insufficiently monitored," Microsoft stated.