All the recent news you need to know

Anthropic AI Cyberattack Capabilities Raise Alarm Over Vulnerability Exploitation Risks

Artificial intelligence is reshaping cybersecurity faster than expected, and new evidence from Anthropic suggests it may fuel digital threats more intensely than ever before. Recently disclosed results indicate that the company's most advanced AI does not just detect flaws in code - it can proceed, on its own, to exploit them. That capability signals a turning point, subtly altering what attacks may look like ahead: a different kind of risk takes shape when machines act without waiting for human direction. What worries experts comes down to recent shifts in how attacks unfold.

A key moment came when Anthropic uncovered a complex espionage campaign in which hackers - likely state-backed - did not merely plan with artificial intelligence, but let it carry out actions during the breach itself. That shift matters because machine-driven systems are now performing intrusion tasks once handled only by people. Anthropic also revealed what its newest test model, Claude Mythos Preview, can do. The firm says the model found numerous serious flaws in common operating systems and software - flaws that had stayed hidden for long stretches of time. Beyond spotting issues, the system chained several weaknesses together into working attack methods, something usually done by expert humans.

What stands out is how little oversight these operations required, and how the combination of spotting weaknesses and acting on them marks a notable shift. This is not incremental change but something sharper: specialists like Mantas Mazeika point to AI-powered threats moving into uncharted territory, with automated systems ramping up attack frequency and reach. Allie Mellen adds another angle - the gap between detecting a flaw and weaponizing it shrinks fast under AI pressure, cutting companies' response windows to almost nothing. Among the issues Anthropic highlighted were lingering flaws in OpenBSD and FFmpeg, surfaced through the model's analysis, alongside intricate exploitation sequences targeting Linux servers.

Such discoveries raise questions about whether current defenses can match threats accelerated by artificial intelligence. For now, Anthropic is holding back public access entirely: the capability goes only to a select group of tech firms through a special program meant to spot weaknesses early. The move comes as others in tech voice similar worries about misuse, on the view that safety outweighs speed when the stakes involve advanced systems. Still, experts suggest such progress brings both danger and potential - though risky, the same tools might help uncover flaws early, shielding networks ahead of breaches.

Yet success depends on collaboration: firms, officials, and digital defenders must reshape how they handle code fixes and protection strategies, or the gains could falter under old habits. Advancing AI is now shaping how threats emerge and how defenses respond, and with speed on their side, those aiming to breach systems find new openings just as quickly as protectors build stronger shields. Staying ahead means defense must grow not just faster but smarter, matching each leap adversaries take before the gaps widen.

Chrome Advances User Protection with New Infostealer Mitigation Features

Google Chrome has taken a significant step toward hardening browser-level authentication security in response to the growing threat landscape by introducing Device Bound Session Credentials in its latest Windows update. 

As part of Chrome 146, this mechanism has been developed to address a long-standing vulnerability in web session management by preventing authenticated sessions from being portable across devices. It is based on the use of hardware-backed trust anchors that bind session credentials directly to the user's machine, thereby significantly increasing the barrier to attackers attempting to reuse stolen authentication tokens. 

With cryptographic safeguards enforced at the device level, the update reflects a broader shift in browser security architecture: reducing the impact of credential theft rather than merely reacting to it. Device Bound Session Credentials build on this foundation by generating a unique public/private key pair within secure hardware components, such as the Trusted Platform Module on Windows systems, and using it to authenticate sessions.

By design, these keys are not exportable, so session credentials cannot be replicated or transferred even if they are compromised at the software layer. Now available to Windows users, with macOS support expected in subsequent versions, the feature directly addresses the mechanics of modern session hijacking.

A typical attack scenario involves malicious payloads that launch information-stealer malware, which harvests cookies stored in the browser or silently intercepts newly established sessions. LummaC2 is one prominent example of such an infostealer family.

Because these cookies often persist beyond a single login, they give attackers a durable means of unauthorized access that sidesteps traditional authentication controls such as passwords and multi-factor authentication.

In addition to disrupting the attack chain at a structural level, Chrome's latest enhancement also limits the reuse and monetization of stolen session data across threat actor ecosystems by cryptographically anchoring session validity to the originating device.

Initially introduced in 2024, the underlying security model ties authentication to both user identity and hardware integrity. It accomplishes this by cryptographically binding each active session to device-resident security components, such as the Trusted Platform Module on Windows and the Secure Enclave on macOS.

The hardware-backed environment generates and safeguards the asymmetric key pairs used to sign and validate session data, and the private key is strictly non-transferable. Consequently, even if session artifacts such as cookies were extracted from the browser, they could not be reused on another system without the corresponding cryptographic context.

By ensuring that session validity is intrinsically linked to the device that generated it, this design fundamentally shifts the attack surface. The mechanism introduces an additional verification layer over the session's lifecycle: to be granted short-lived session cookies and to renew them, the browser must demonstrate to the server that it possesses the associated private key.

Rather than being a static token, each session is effectively a continuously validated cryptographic exchange. The system defaults to conventional session handling in environments without secure hardware support, preserving backward compatibility. 
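The proof-of-possession loop can be sketched in a few lines. This is an illustrative model of the idea, not Chrome's actual protocol: real DBSC registers a public key with the server and signs challenges with a non-exportable private key held in the TPM, whereas here an HMAC key stands in for the key pair so the sketch runs with the standard library alone. All class and function names are hypothetical.

```python
import hashlib
import hmac
import os
import secrets

class DeviceKey:
    """Stand-in for a hardware-bound key: signing is exposed, key bytes stay put."""
    def __init__(self):
        self._secret = os.urandom(32)            # never leaves the "device"

    def sign(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

    def registration_blob(self) -> bytes:
        # With a real asymmetric key this would be the *public* key; the
        # symmetric stand-in forces us to share the secret at registration.
        return self._secret

class SessionServer:
    def __init__(self, registration_blob: bytes):
        self._key = registration_blob            # bound at session creation
        self._challenge = b""

    def issue_challenge(self) -> bytes:
        self._challenge = secrets.token_bytes(16)
        return self._challenge

    def renew_cookie(self, proof: bytes) -> bool:
        # Renew the short-lived cookie only if the proof matches the challenge.
        expected = hmac.new(self._key, self._challenge, hashlib.sha256).digest()
        return hmac.compare_digest(proof, expected)

device = DeviceKey()
server = SessionServer(device.registration_blob())

# Legitimate browser: proves possession, cookie renewal succeeds.
ok = server.renew_cookie(device.sign(server.issue_challenge()))

# Cookie thief on another machine: has the cookie, lacks the device key.
challenge = server.issue_challenge()
forged = hmac.new(os.urandom(32), challenge, hashlib.sha256).digest()
stolen = server.renew_cookie(forged)

print(ok, stolen)
```

The point of the exchange is that a stolen cookie alone never satisfies `renew_cookie`; without the device-resident key, the session dies at its next renewal.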

Early telemetry indicates the approach is already altering attacker economics, with a measurable decline in session theft attempts. Through collaboration with Microsoft, the architecture is designed to evolve into an open web standard while incorporating privacy-centric safeguards.

The use of device-specific, non-reusable keys prevents cross-site correlation of user activity by design, enhancing both security and privacy without adding new tracking vectors. At the implementation level, the framework is designed to integrate with existing web architectures without imposing significant operational overhead on service providers.

Google Chrome itself assumes responsibility for key management, cryptographic validation, and dynamic cookie rotation, so minimal backend modification is needed to adopt hardware-bound session security.

In this manner, the protocol maintains compatibility with traditional session handling models while adding a layer of trust beneath them. The protocol is also designed around strict data minimization: only a per-session public key is shared for authentication, preventing the exposure of persistent device identifiers and minimizing the risk of cross-site tracking.

The open standard is being developed within the World Wide Web Consortium's Web Application Security Working Group, alongside Microsoft and in consultation with identity platform providers such as Okta, ensuring interoperability across diverse authentication ecosystems. After a controlled deployment in 2025, early results indicate a significant decrease in session hijacking incidents, reinforcing confidence in the broader rollout, which is now available for Windows in Chrome 146 and anticipated for macOS in the near future.

At the same time, development efforts are underway to extend capabilities to federated identity models, enable cross-origin key binding, and utilize existing trusted credentials, such as mutual TLS and hardware security keys, while exploring software-based alternatives to broaden enterprise adoption. Despite the introduction of hardware-based protections, adversarial adaptation has not been eliminated. 

Bypass techniques have emerged targeting Chrome's Application-Bound Encryption layer, largely through misuse of internal debugging interfaces originally intended for Chrome development and remote management. By enabling remote debugging over designated ports, attackers can extract cookies directly from the browser, circumventing traditional safeguards without resorting to more detectable methods such as memory scraping or process injection.

This method, observed with infostealer strains such as Phemedrone, is comparatively stealthy because it abuses legitimate browser functionality to evade conventional detection mechanisms. Browser processes launched with debugging flags, and anomalous activity targeting common ports such as 9222, are indicators of compromise.
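One of those indicators is simple to check for. The sketch below probes for a listener on common local DevTools ports; the port list and helper name are assumptions for illustration, and real triage would also inspect process command lines for --remote-debugging-port flags rather than rely on a port probe alone.

```python
import socket

DEVTOOLS_PORTS = (9222, 9229)   # remote-debugging ports commonly abused

def devtools_port_open(port: int, host: str = "127.0.0.1",
                       timeout: float = 0.5) -> bool:
    """Return True if something is listening on the given local port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

suspicious = [p for p in DEVTOOLS_PORTS if devtools_port_open(p)]
if suspicious:
    print(f"possible debugging-based cookie theft: ports {suspicious} open")
else:
    print("no remote-debugging listeners found")
```

An open port here is only a signal, not proof of compromise: developers legitimately run Chrome with debugging enabled, so the finding should feed a broader detection rule rather than trigger action on its own.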

Application-Bound Encryption was initially adopted for Windows environments, but similar techniques have been demonstrated to bypass protections on macOS and Linux, as well as native credential storage systems. Even where attribution of the malware families involved remains incomplete, the underlying vector points to a pattern of exploitation that could be replicated across the threat landscape.

As a result, security teams will note a persistent "cat-and-mouse" dynamic in identity and access management, in which defensive innovations are quickly met with countermeasures. Bypass strategies emerged within weeks of the feature's initial release, demonstrating the need for continuous monitoring, hardened configurations, and layered defenses to maintain the integrity of session-based authentication.

The development illustrates the broader need for organizations to move beyond single-layer defenses and adopt a multi-layered security posture. While hardware-bound session protection represents a significant advancement, its effectiveness ultimately depends on complementary controls across the environment.

Consequently, security teams should enforce strict browser configurations, monitor for anomalous debugging activity, and restrict access to remote management interfaces. Integrating endpoint detection with identity-aware access controls, shortening session lifespans, and enforcing continuous authentication checks can further reduce the window of exploitation.

As browser vendors continue to refine these mechanisms, enterprises should align their defensive strategies accordingly, treating session security as an evolving discipline that requires ongoing vigilance and adaptive response rather than a fixed safeguard.

Critical SGLang Vulnerability Allows Remote Code Execution via Malicious AI Model Files

A newly disclosed high-severity flaw in SGLang could enable attackers to remotely execute code on affected servers through specially crafted AI model files.

The issue, tracked as CVE-2026-5760, has received a CVSS score of 9.8 out of 10, placing it in the critical category. Security analysts have identified it as a command injection weakness that allows arbitrary code execution.

SGLang is an open-source framework built to efficiently run large language and multimodal models. Its popularity is reflected in its development activity, with more than 5,500 forks and over 26,000 stars on its public repository.

According to the CERT Coordination Center, the flaw affects the “/v1/rerank” endpoint. An attacker can exploit this functionality to run malicious code within the context of the SGLang service by using a specially designed GPT-Generated Unified Format (GGUF) model file.

The attack relies on embedding a malicious payload inside the tokenizer.chat_template parameter of the model file. This payload uses a server-side template injection technique through the Jinja2 templating engine and includes a specific trigger phrase that activates the vulnerable execution path.

Once the victim downloads and loads the model, often from repositories such as Hugging Face, the risk becomes active. When a request reaches the “/v1/rerank” endpoint, SGLang processes the chat template using its templating engine. At that moment, the injected payload is executed, allowing the attacker to run arbitrary Python code on the server and achieve remote code execution.

Security researcher Stuart Beck traced the root cause to unsafe template handling. Specifically, the framework uses a standard Jinja2 environment instead of a sandboxed configuration. Without isolation controls, untrusted templates can execute system-level code during rendering.

The attack unfolds in a defined sequence: a malicious GGUF model is created with an embedded payload; it includes a trigger phrase tied to the Qwen3 reranker logic located in “entrypoints/openai/serving_rerank.py”; the victim loads the model; a request hits the rerank endpoint; and the template is rendered using an unsafe environment, leading to execution of attacker-controlled Python code.

This vulnerability falls into the same class as earlier issues such as CVE-2024-34359, a critical flaw in llama_cpp_python, and CVE-2025-61620, which affected another model-serving system. These cases highlight a recurring pattern where unsafe template or model handling introduces execution risks.

To mitigate the issue, CERT/CC recommends replacing the current template engine configuration with a sandboxed alternative such as ImmutableSandboxedEnvironment. This would prevent execution of arbitrary Python code during template rendering. At the time of disclosure, no confirmed patch or vendor response had been issued.
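The difference between the vulnerable and recommended configurations can be shown with a benign probe. This is an illustration of the class of fix CERT/CC describes, not SGLang's actual patch; the template below walks Python's object graph the way chat_template SSTI payloads do, without executing anything harmful.

```python
from jinja2 import Environment
from jinja2.exceptions import SecurityError
from jinja2.sandbox import ImmutableSandboxedEnvironment

# Benign SSTI probe: reaches arbitrary Python objects via attribute access.
PROBE = "{{ ''.__class__.__mro__[1].__subclasses__()|length > 0 }}"

# A standard Environment (the vulnerable configuration) happily renders it,
# proving an attacker-controlled template can traverse to system internals.
unsafe = Environment().from_string(PROBE).render()

# A sandboxed environment rejects the same attribute access at render time.
try:
    ImmutableSandboxedEnvironment().from_string(PROBE).render()
    sandbox_blocked = False
except SecurityError:
    sandbox_blocked = True

print(unsafe, sandbox_blocked)
```

The sandboxed environment refuses underscore-prefixed attribute access and raises SecurityError, which is why swapping the environment class neutralizes this payload pattern even before a vendor patch lands.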

From a broader security lens, this incident reinforces a growing concern in AI infrastructure. Model files are increasingly being treated as trusted inputs, despite their ability to carry executable logic. As adoption expands, organizations must validate external models, restrict execution environments, and continuously monitor inference systems to reduce the risk of compromise.

ChipSoft Ransomware Attack Disrupts Dutch Healthcare Systems and HiX EHR Services

A sudden cyberattack on ChipSoft triggered widespread interruptions in essential health IT operations across the Netherlands, leading officials to isolate key network segments. Public-facing tools went down, and medical staff also lost functionality in core administrative environments, prompting urgent questions about resilience under pressure and the protection of sensitive records.

In response to the cyberattack, ChipSoft shut down multiple services, including Zorgportaal, HiX Mobile, and Zorgplatform, to limit possible damage. Hospitals across the country rely on ChipSoft's main system, HiX, making the company a central player in Dutch digital medical records. Clinics were warned to cut their connections to ChipSoft platforms until safety could be confirmed, preventive steps intended to reduce risk while experts handled the breach.

Confirmation came later via local news outlets, following early signals in public posts online. A company-issued notice cited signs of intrusion into operational systems and hinted at data exposure without confirming a full compromise. Soon afterward came official classification: Z-CERT labeled it a ransomware event and began coordinating the response across affected health entities. Outages then spread through several hospitals. Sint Jans Gasthuis in Weert felt the effects early, followed by disruptions at Laurentius Hospital in Roermond, while digital tools slowed or stopped altogether at VieCuri Medical Center in Venlo.

Flevo Hospital in Almere also saw restricted system availability soon afterward. Although certain departments kept running, the performance gaps between locations revealed deeper weaknesses: when cyber incidents strike, medical technology networks often struggle more than expected. Healthcare technology firms serve many hospitals at once, which makes them prime targets for ransomware attacks.

When one falls victim, the consequences ripple through linked facilities without warning: patient treatment slows, daily operations stumble, records become unreachable. Despite mentioning efforts to reduce harm, ChipSoft has shared little about what information may have been exposed, and confirmation of how deep the breach goes remains absent. The incident follows several earlier breaches at medical technology companies worldwide, further proof of rising exposure.

As hospitals shift more operations online, criminals increasingly target those holding vast amounts of vital data. Sometimes the draw is not speed but access: value attracts attention over time, and systems once isolated now face constant probing from distant actors watching for gaps. Work continues to regain control, with officials and digital defense units measuring the damage while bringing services back online.

The breach at ChipSoft highlights once more how vital strong cyber protections are within medical infrastructure, where even short outages can lead to severe outcomes far beyond the screen.

Apple Scam Targets Millions of iPhone Users

Apple users are once again being warned about a scam designed to look official, urgent, and believable. In this latest scheme, criminals send messages that appear to come from Apple Pay or Apple support, claiming there is suspicious activity, a locked account, or an unusually large charge. The goal is not to hack the iPhone itself, but to make the user panic and hand over information voluntarily. Because the messages use Apple branding and familiar wording, many victims may not realize they are dealing with fraud until money or login access has already been lost. 

What makes the scam especially dangerous is the way it combines pressure with a fake path to safety. Victims are often told to call a phone number or follow a link to resolve the problem, but that number connects them to a scammer pretending to be an Apple fraud specialist. Once the call begins, the attacker may ask for Apple ID credentials, verification codes, bank details, or even instructions to move money into a “safe” account. In some cases, scammers also try to convince victims to withdraw cash, creating a sense that immediate action is necessary to protect their funds. 

The psychology behind the scam is simple but effective. People are more likely to act quickly when they believe their account, payment card, or Apple Pay wallet is under attack. Scammers exploit that fear by sounding calm, professional, and helpful, which can make their requests feel legitimate. They may already know a few personal details about the target, making the call seem even more convincing. That mix of urgency, familiarity, and authority is why these scams continue to succeed across large groups of iPhone users. 

Users can protect themselves by treating unexpected Apple alerts with caution. Apple support does not ask for passwords, one-time codes, or instructions to transfer money, and it will not pressure users to act immediately over an unsolicited call. The safest response is to ignore the contact method in the message and independently open the official Apple app or website to check the account status. Users should also avoid clicking links in suspicious emails or texts, since those links may lead to fake login pages built to steal credentials. 

This scam is a reminder that modern fraud often targets human trust rather than software flaws. As attackers become better at mimicking legitimate Apple communications, users need to slow down and verify every urgent request before responding. A few extra seconds of caution can be the difference between protecting an account and losing access to money or personal data. In a world where scams increasingly look polished and professional, skepticism is one of the strongest defenses available.

From Demo to Deployment: Why AI Projects Struggle to Scale

In many cases, enthusiasm for artificial intelligence peaks during demonstrations, when controlled environments create a compelling vision of seamless capability. Yet one of the most challenging aspects of enterprise technology adoption remains the transition from that initial promise to sustained operational value.

The apparent simplicity of embedding such systems into real-world operations, where consistency, resilience, and accountability are non-negotiable, often masks the complexity involved. In practice it is generally not the intelligence of the model that causes difficulties, but the organization's ability to operationalise it within existing production ecosystems.

Early pilots establish technical feasibility, demonstrating that AI can perform defined tasks under ideal conditions. Scaling that capability demands far more than model accuracy: clean integration with legacy and modern infrastructure, clearly defined ownership across teams, disciplined cost management, and compliance with evolving regulatory frameworks.

This distinction between experimentation and operationalisation is the decisive factor behind most AI initiatives failing beyond the pilot phase. The gap becomes particularly evident when controlled demonstrations meet the unpredictability of live environments: demonstrations minimize friction by using structured datasets, stable inputs, and narrowly focused application scenarios.

Production systems, by contrast, face fragmented data pipelines, inconsistent input patterns, incomplete contextual signals, and stringent latency requirements. Edge cases are not exceptions but the norm, and systems must maintain stability under varying loads and constraints. As a result, the momentum generated by a successful demo is typically lost during wider deployment, as previously concealed limitations surface.

The challenge, then, is not to design an artificial intelligence system that performs well in isolation, but one that sustains performance under continuous operational pressure. Beyond model development, production-grade AI systems must be engineered as distributed systems, addressing fault tolerance, observability, scalability, and cost efficiency in a systematic manner.

To be effective, they must integrate seamlessly with existing services, provide monitoring and feedback loops, and evolve without introducing instability. That most AI initiatives fail in the transition from prototype to production underscores the importance of architectural discipline and operational maturity. Beyond the visible challenges of deployment, another fundamental constraint quietly determines the fate of most AI initiatives: the data ecosystem in which they are embedded.
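The monitoring and graceful-degradation patterns described above can be sketched concretely. This is a minimal, hypothetical illustration, not any real serving framework: `resilient_predict`, `flaky_model`, and the thresholds are invented names standing in for a production wrapper that records outcomes and degrades to a safe default instead of failing outright.

```python
import time
from collections import Counter

metrics = Counter()   # would feed an ops dashboard in a real deployment

def resilient_predict(model, features, timeout_s=0.5, fallback="REVIEW"):
    """Call the model, record latency and outcome, degrade gracefully on failure."""
    start = time.perf_counter()
    try:
        result = model(features)
        if time.perf_counter() - start > timeout_s:
            metrics["slow"] += 1              # observable, not fatal
        metrics["ok"] += 1
        return result
    except Exception:
        metrics["fallback"] += 1              # failures become feedback
        return fallback

def flaky_model(features):
    # Stand-in model: breaks on the malformed inputs production inevitably sends.
    if features.get("malformed"):
        raise ValueError("unexpected input schema")
    return "APPROVE"

clean = resilient_predict(flaky_model, {"amount": 10})        # normal path
degraded = resilient_predict(flaky_model, {"malformed": True})  # edge case
print(clean, degraded, dict(metrics))
```

The design choice being illustrated is that failures are treated as data: every fallback increments a counter the operations team can alert on, which is what turns a demo into a system that survives the edge cases the article calls "the norm".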

While organizations frequently focus on model selection and tooling, the real determinant of success is the structure, governance, and reliability of the data environment that supports continuous learning and decision-making at scale. In many enterprise settings, that prerequisite remains unmet.

According to industry assessments, a significant portion of organizations lack confidence in their ability to manage data efficiently for artificial intelligence, suggesting deeper structural gaps in how data is collected, organized, and maintained. Despite substantial data volumes, information is often scattered across disconnected systems: enterprise resource planning platforms, customer relationship management tools, legacy on-premises databases, spreadsheets, and a growing number of third-party services.

Fragmentation breeds inconsistent schema design, and weak or missing metadata layers limit visibility into data lineage while undermining governance controls. A system fed incomplete or unreliable inputs cannot be expected to produce stable, reproducible outcomes. The consequences of this misalignment become evident during production deployment: models trained on fragmented or poorly governed data exhibit unpredictable behavior over time and fail to generalize across applications.

Inconsistent data source dependencies start compromising operational workflows and eroding stakeholder trust. As confidence declines, leadership often responds by slowing or suspending broader AI rollouts, not because of technological deficiencies, but because the supporting data infrastructure is missing. This reinforces a broader pattern observed across enterprises: the transition from experimentation to operational scale is governed as much by data maturity as by system architecture.

As organizations move beyond isolated deployments, the discussion around artificial intelligence has begun to shift from capability to control. What first appears to be a technology-scaling problem gradually turns out to be a matter of designing accountable systems in which speed, governance, and operational clarity coexist without friction.

At this stage, success is no longer determined by isolated breakthroughs but by an organization's ability to weave artificial intelligence into its operating fabric. Many enterprises instinctively adopt centralised oversight structures, such as review boards and governance councils, to standardize decision-making in response to increased complexity and risk exposure. On their own, however, these mechanisms prove insufficient as AI adoption accelerates across business units.

Organizations that achieve scale integrate governance directly into execution pathways rather than relying solely on episodic review. Instead of evaluating each initiative individually, they define enterprise-wide standards and reusable solutions aligned with varying levels of risk: lower-risk use cases move through streamlined deployment paths, while higher-risk applications are systematically evaluated through structured frameworks with clearly assigned ownership.

This approach reduces ambiguity, shortens approval cycles, and lets teams operate confidently within predefined boundaries. Yet another constraint emerges in the form of data usage hesitancy, which has quietly limited AI initiatives: out of concern for security, compliance, and control, organizations often delay or restrict the use of real operational data.

Overcoming this barrier requires tangible operational safeguards, not just policy assurances. Assurance that data remains within controlled network environments, clear lifecycle management protocols, and real-time visibility into system usage and cost dynamics all help create the confidence needed to expand adoption.

As these mechanisms mature, decision makers gain the assurance needed to extend AI into critical workflows without introducing unmanaged risk. Scaling AI becomes less a matter of increasing the number of models than of aligning organizational structures to support them.

Clear ownership models, harmonised processes across departments, unified data foundations, and governance integrated into daily operations let companies expand AI initiatives with significantly less friction. Organizations that maintain AI as a standalone technology function, by contrast, tend to see fragmented adoption, inconsistent results, and a decline in stakeholder trust.

This shift places new demands on leadership. Long-term success is determined not by the sophistication of individual models but by how disciplined AI operations are across the organization: every deployment must withstand scrutiny under real-world conditions, where outputs need to be explainable, defensible, and reliable.

In response, forward-looking leaders are refocusing on a central question: how confidently can AI be scaled, rather than how rapidly it can be deployed. As governance is integrated into development and operational workflows, the perceived tradeoff between speed and control begins to dissolve, allowing the two to reinforce each other.

A recurring challenge across AI initiatives, from stalled pilots to data fragmentation and governance bottlenecks, indicates the absence of a coherent operating model. Effective organizations address this by developing a framework that connects business value to execution.

Such a framework defines the outcomes AI is required to deliver, establishes integration pathways into existing systems and decision processes, redesigns roles and workflows to accommodate AI-driven operations, and embeds mechanisms that ensure trust, safety, and continuous oversight.

When these elements align, artificial intelligence becomes a repeatable, scalable capability integrated into the organization's core operations rather than a perpetual experiment. For organizations that want to turn AI ambitions into reality, disciplined execution, not rapid experimentation, is the path forward.

Enforceable standards, investment in resilient data and systems foundations, and aligned accountability between business and technical functions are essential to success. Organizations that prioritize operational readiness, measurable outcomes, and controlled scalability are better positioned to turn isolated AI success stories into dependable enterprise capabilities.

Those that approach AI as an operational investment rather than a technological initiative will gain a competitive advantage in a market increasingly focused on trust, transparency, and performance.
