
Karnataka Unveils AI-Driven Bill to Enforce Swift Social Media Safety

 

Karnataka is set to revolutionize social media regulation with the draft Karnataka Responsible Social Media & Digital Safety Bill, 2026, submitted to Chief Minister Siddaramaiah. Prepared by the Karnataka State Policy and Planning Commission (KSPPC), this legislation emphasizes artificial intelligence (AI), rapid content moderation, and robust user protections, marking India's first state-level, AI-compliant, citizen-centric digital safety framework. S Mohanadass Hegde, a KSPPC member, highlighted its potential to foster responsible digital citizenship amid rising AI-driven threats. 

The primary focus is on tackling AI-generated content and deepfakes through mandatory labelling, precise legal definitions, and strict penalties for misuse. Platforms face enforceable timelines and are required to remove harmful content within 24 to 48 hours, shifting from advisory central guidelines to binding state action. This departs from national laws such as the Information Technology Act, 2000, and the IT Rules, 2021, which prioritize due diligence without such tight deadlines.

The bill establishes the Karnataka Digital Safety & Social Media Regulatory Authority to monitor compliance and address region-specific digital risks swiftly. Users gain rights to report harmful content, access time-bound grievance redressal, and protections against harassment and misinformation. Hegde noted that localized oversight enables faster responses than central bodies, enhancing enforcement through tech tools like fake news detection, deepfake tracking, and real-time dashboards. 

Prevention takes center stage with a digital awareness and media literacy program promoting fact-checking, critical thinking, and responsible online behavior. This educational push targets mental well-being, particularly for youth vulnerable to harmful trends and addiction risks, balancing punishment with proactive measures. A team member emphasized education as key to curbing violations before they escalate. Implementation unfolds in phases: initial awareness and institutional setup, followed by technology integration and full enforcement. Slated for legal vetting and monsoon session introduction in June-July 2026, the draft positions Karnataka as a leader in decentralized digital governance, offering a blueprint for other states amid evolving AI challenges.

SystemBC Infrastructure Breach Sheds Light on The Gentlemen Ransomware Network


 

The operators also appear to employ public channels to reinforce coercion, selectively disclosing victim information to increase pressure and speed up payment - a hybrid strategy that combines technical sophistication with calculated psychological leverage.

A recent Check Point analysis further contextualizes the scale of the operation: telemetry from a single SystemBC command-and-control node revealed 1,570 compromised systems. As a covert access facilitator, the malware establishes SOCKS5-based tunneling within infected environments while communicating with its control infrastructure over RC4-encrypted channels.
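RC4 is a very simple stream cipher, which is part of why lightweight implants favor it for channel obfuscation. As a rough illustration only - not SystemBC's actual implementation - a minimal RC4 routine in Python looks like this:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Classic RC4: key scheduling (KSA) followed by keystream generation (PRGA)."""
    # Key-scheduling algorithm: permute the state array S using the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation: XOR each data byte with a keystream byte
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# RC4 is symmetric: applying it twice with the same key round-trips the data,
# which is convenient for a two-way C2 channel
ct = rc4(b"beacon-key", b"GET /gate.php")
assert rc4(b"beacon-key", ct) == b"GET /gate.php"
```

Because RC4 offers no integrity protection and its keystream biases are well known, its appeal here is stealth and simplicity rather than cryptographic strength - which is also why RC4-encrypted beacons remain detectable to defenders who can recover the key from a sample.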

Aside from providing persistent remote access, this also allows for staged delivery of secondary payloads, deployed either on disk or directly in memory, which complicates traditional detection mechanisms. Since surfacing in July 2025, The Gentlemen have rapidly expanded their operational tempo, with hundreds of victims publicly listed on their leak infrastructure, underscoring the efficiency of their affiliate model and double-extortion strategies.

There is still no definitive indication of the initial intrusion vector, but observed attack patterns suggest the use of exposed services and credential compromise followed by a structured intrusion lifecycle that incorporates reconnaissance, propagation, and the deployment of tools, including frameworks such as Cobalt Strike and SystemBC. 

Of particular concern is the group's use of Group Policy Objects to propagate malicious components across domains, which indicates a degree of post-exploitation control that allows attackers to scale their impact quickly while remaining stealthy. SystemBC's broader technical background provides important context for its role in this campaign: the malware family traces to at least 2019, when it was designed as a covert SOCKS5 tunneling and proxying tool.

Over the past several years, its evolution into a payload delivery mechanism has made it particularly appealing to ransomware operators, who exploit its ability to discreetly deploy and execute secondary tools within compromised environments. Despite partial disruption attempts by law enforcement in 2024, SystemBC's infrastructure has proven highly resilient; prior threat intelligence indicates sustained activity at scale, including the compromise of large numbers of commercial virtual private servers used to relay malicious traffic.

The majority of victims associated with its deployment are located in enterprise-intensive regions such as the United States, the United Kingdom, Germany, Australia, and Romania, supporting the assessment that infections are largely the result of human-operated intrusions rather than indiscriminate mass exploitation. The observed attack workflows reflect a high degree of operational control following compromise.

Researchers found that attackers operated from domain controllers with elevated administrative privileges to validate credentials, perform reconnaissance, and move laterally. To extend access across networked systems, often through remote procedure calls, they deployed tools associated with advanced intrusion sets, including credential-harvesting utilities such as Mimikatz and adversary simulation frameworks such as Cobalt Strike.

The ransomware payload was staged internally and propagated through native mechanisms such as Group Policy Objects, resulting in near-simultaneous execution across domain-joined assets. The encryption routine generates a unique ephemeral key per file through elliptic curve key exchange combined with high-speed symmetric encryption, and applies partial encryption strategies to optimize execution time on larger datasets.
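The partial-encryption idea is worth making concrete: rather than encrypting every byte, the routine scrambles a block at the start of each fixed-size window, rendering large files unusable while touching only a fraction of the data. The sketch below is purely illustrative - the toy hash-based keystream stands in for the real symmetric cipher, and the parameters are invented, not taken from The Gentlemen's actual routine:

```python
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    """Toy counter-mode keystream (a stand-in for the real symmetric cipher)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def partial_encrypt(data: bytes, key: bytes,
                    block: int = 1024, stride: int = 4096) -> bytes:
    """Encrypt `block` bytes at the start of every `stride`-byte window.
    Large files become unusable while only ~25% of bytes are processed."""
    buf = bytearray(data)
    for off in range(0, len(buf), stride):
        chunk = bytes(buf[off:off + block])
        ks = keystream(key + off.to_bytes(8, "big"), len(chunk))
        buf[off:off + block] = bytes(b ^ k for b, k in zip(chunk, ks))
    return bytes(buf)

# In the real scheme each file gets its own ephemeral key derived via
# elliptic curve key exchange; here we simply generate one at random
file_key = os.urandom(32)
ct = partial_encrypt(b"A" * 10_000, file_key)
# XOR-based, so applying the routine again with the same key restores the data
assert partial_encrypt(ct, file_key) == b"A" * 10_000
```

The speed advantage is the point: for a multi-gigabyte database file, encrypting one quarter of the bytes is enough to destroy its structure while finishing four times faster, shrinking the window defenders have to interrupt the run.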

In addition to encrypting files, this malware systematically disables databases, backup services, and virtualisation processes, including forcefully shutting down virtual machines in ESXi environments as well as deleting shadow copies of data and system logs to hinder recovery and forensic investigation. There is still some uncertainty as to the precise role of SystemBC within The Gentlemen's broader operational stack, particularly the question of whether it is centrally managed or affiliate-driven. 

The convergence of proxy malware, post-exploitation frameworks, and a significant botnet footprint suggests a maturing and modular threat model. Researchers conclude that this integration indicates a transition toward structured, scalable attack orchestration, supported by shared infrastructure and tooling.

The defensive guidance also incorporates signature-based detection artifacts such as YARA rules and detailed indicators of compromise to help organizations identify and mitigate similar intrusion patterns before they escalate into a full-scale ransomware attack.


DARWIS Taka: A Web Vulnerability Scanner with AI-Powered Validation


DARWIS Taka, a new web vulnerability scanner, is now available for free and runs via Docker. It pairs a rules-based scanning engine with an optional AI layer that reviews each finding before it reaches the report, aimed squarely at the false-positive problem that has dogged vulnerability scanning for years.

Built in Rust, Taka ships with 88 detection rules across 29 categories covering common web vulnerabilities, and produces JSON or self-contained HTML reports. Setup instructions, the Docker configuration, and documentation are published on GitHub at github.com/CSPF-Founder/taka-docker.

Two modes of AI validation

Taka's AI layer runs in one of two modes. In passive (evidence-analysis) mode, the model reviews the data the scanner already collected and returns a verdict without sending any further traffic to the target. In active mode, the AI acts as a second-stage tester: it proposes a small number of targeted follow-up requests, such as paired true and false payloads for a suspected SQL injection, Taka executes them, and the responses are fed back to the AI for differential analysis. Active mode is more decisive on borderline findings but generates additional traffic.

In both modes, every result is tagged with a verdict (confirmed, likely false positive, or inconclusive), a confidence score, and the AI's written reasoning. The report surfaces those labels alongside a summary of how many findings fell into each bucket. Nothing is dropped silently, so reviewers see what the AI believed and why, and can focus triage on the findings marked confirmed.
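The verdict-tagging and bucketing described above can be sketched as a simple data model and counting pass. The field names and rule IDs here are hypothetical, not Taka's actual schema:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str
    verdict: str       # "confirmed" | "likely_false_positive" | "inconclusive"
    confidence: float  # 0.0 - 1.0
    reasoning: str     # the AI's written justification

def summarize(findings: list[Finding]) -> Counter:
    """Count findings per verdict bucket; nothing is dropped, only labeled."""
    return Counter(f.verdict for f in findings)

findings = [
    Finding("sqli-001", "confirmed", 0.95, "differential responses observed"),
    Finding("xss-014", "likely_false_positive", 0.80, "payload reflected but HTML-encoded"),
    Finding("sqli-002", "inconclusive", 0.40, "timing signal below threshold"),
]
summary = summarize(findings)
# Reviewers can jump straight to the findings the AI believed were real
triage_first = [f for f in findings if f.verdict == "confirmed"]
```

Keeping every finding with its label, rather than silently discarding low-confidence ones, is what lets a reviewer audit the AI's judgment after the fact.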

The validation layer currently supports Anthropic and OpenAI. The project team has tested Taka extensively with Anthropic's Claude Sonnet, which gave the best balance of reasoning quality and speed in their evaluation, and recommends it for the strongest results. AI validation is optional; without a key, Taka runs as a standard scanner with its own false-positive controls.

Scoring by evidence, not by single matches

Most scanners trigger on the first matcher that fires, which is why a single stray string in a response can produce a flood of bogus alerts. Taka uses a weighted scoring system instead. Each matcher in a rule, whether a status code, a regex, a header check, or a timing comparison, carries an integer weight reflecting how strong a signal it is. The rule declares a detection threshold, and a finding is raised only when the combined weight of the matchers that fired meets or exceeds that threshold.
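A minimal sketch of that weighted-scoring idea follows. The matcher names, weights, and threshold are invented for illustration and do not reflect Taka's actual rule format:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Matcher:
    name: str
    weight: int                    # how strong a signal this matcher is
    check: Callable[[dict], bool]  # inspects a captured HTTP response

@dataclass
class Rule:
    rule_id: str
    threshold: int                 # combined weight needed to raise a finding
    matchers: list[Matcher]

def evaluate(rule: Rule, response: dict) -> bool:
    """Raise a finding only when the summed weight of the matchers that
    fired meets or exceeds the rule's detection threshold."""
    score = sum(m.weight for m in rule.matchers if m.check(response))
    return score >= rule.threshold

sqli = Rule("sqli-error-based", threshold=50, matchers=[
    Matcher("status-500", 10, lambda r: r["status"] == 500),
    Matcher("sql-error-string", 30, lambda r: "SQL syntax" in r["body"]),
    Matcher("db-banner", 20, lambda r: "MySQL" in r["body"]),
])

# One stray string alone (weight 30) stays below the threshold of 50
assert not evaluate(sqli, {"status": 200, "body": "SQL syntax highlighting docs"})
# A 500 response plus an error string plus a DB banner (10+30+20) clears it
assert evaluate(sqli, {"status": 500,
                       "body": "You have an error in your SQL syntax; see the MySQL manual"})
```

The effect is that no single matcher can raise a finding on its own unless the rule explicitly gives it enough weight, which is exactly what suppresses the single-stray-string false positive.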

Built to run against real systems

A circuit breaker halts scanning against hosts showing signs of distress, per-host rate limiting caps concurrent requests, and a passive mode disables all attack payloads for environments where only non-intrusive checks are acceptable. Three scan depth levels (quick, standard, deep) trade coverage against runtime, while a two-phase execution model keeps time-based blind rules from interfering with the rest of the scan.
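A per-host circuit breaker of the kind described is a small state machine: trip after consecutive failures, refuse traffic while open, and let a probe through after a cooldown. This is an illustrative sketch with made-up thresholds, not Taka's implementation:

```python
import time
from collections import defaultdict

class CircuitBreaker:
    """Stop sending requests to a host after repeated failures;
    allow a retry probe once a cooldown period has elapsed."""

    def __init__(self, max_failures: int = 5, cooldown: float = 60.0):
        self.max_failures = max_failures
        self.cooldown = cooldown
        self.failures = defaultdict(int)  # host -> consecutive failure count
        self.opened_at = {}               # host -> time the breaker tripped

    def allow(self, host: str) -> bool:
        opened = self.opened_at.get(host)
        if opened is None:
            return True
        if time.monotonic() - opened >= self.cooldown:
            # Cooldown elapsed: half-open, let one probe request through
            del self.opened_at[host]
            self.failures[host] = 0
            return True
        return False

    def record(self, host: str, ok: bool) -> None:
        if ok:
            self.failures[host] = 0
            return
        self.failures[host] += 1
        if self.failures[host] >= self.max_failures:
            self.opened_at[host] = time.monotonic()

cb = CircuitBreaker(max_failures=3, cooldown=30.0)
for _ in range(3):
    cb.record("target.example", ok=False)  # e.g. timeouts or 5xx storms
assert not cb.allow("target.example")      # breaker open: back off this host
assert cb.allow("other.example")           # unaffected hosts keep scanning
```

Keeping the breaker per-host matters in a scanner: one struggling target should pause traffic to that target only, not stall the whole scan.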

A web interface ships with the tool for launching scans, inspecting findings alongside the raw evidence, and revisiting results.

Only the optional AI validation requires a third-party API key, supplied by the user. Taka is aimed at security engineers, penetration testers, bug bounty hunters, DevSecOps teams, and developers who want a scanner that respects their triage time.

Full setup instructions are available at github.com/CSPF-Founder/taka-docker.

Google Expands Gemini in Gmail, Forcing Billions to Reconsider Privacy, Control, and AI Dependence

 




Google has introduced one of the most extensive updates to Gmail in its history, warning that the scale of change driven by artificial intelligence may feel overwhelming for users. While some discussions have focused on surface-level changes such as switching email addresses, the company has emphasized that the real transformation lies in how AI is now embedded into everyday tools used by nearly two billion people. This shift requires far more serious attention.

At the center of this evolution is Gemini, Google’s artificial intelligence system, which is being integrated more deeply into Gmail and other core services. In a recent update shared through a short video message, Gmail’s product leadership acknowledged that the rapid pace of AI innovation can leave users feeling overloaded, with too many new features and decisions emerging at once.

Gmail has traditionally been built around convenience, scale, and seamless integration rather than strict privacy-first principles. Although its spam filters and malware detection systems are widely used and generally effective, they are not flawless. Importantly, Gmail has not typically been the platform users turn to for strong privacy assurances.

The introduction of Gemini changes this balance substantially. Google has clarified that it does not use email content to train its AI models. However, the way these tools function introduces new concerns. Features that automatically draft emails, summarize conversations, or search inbox content require access to emails that may contain highly sensitive personal or professional information.

To address this, Google describes Gemini as a temporary assistant that operates within a limited session. The company compares this interaction to allowing a helper into a private room containing your inbox. The assistant completes its task and then exits, with the accessed information disappearing afterward. According to Google, Gemini does not retain or learn from the data it processes during these interactions.

Despite these assurances, concerns remain. Even if the data is not stored long term, granting a cloud-based AI system access to private communications introduces an inherent level of risk. Additionally, while Google has denied automatically enrolling users into AI training programs, many of these AI-powered features are expected to be enabled by default. This shifts responsibility to users, who must actively decide how much access they are willing to allow.

This is not a decision that can be ignored. Once AI tools become integrated into daily workflows, they are difficult to remove. Relying on default settings or delaying action could result in long-term dependence on systems that users may not fully understand or control.

Shortly after promoting these updates, Gmail experienced a disruption that affected its core functionality. Users reported delays in sending and receiving emails, and Google acknowledged the issue while working on a fix. Initially, no estimated resolution time was provided. Later the same day, the company confirmed that the issue had been resolved.

According to Google’s official status update, the disruption was fixed on April 8, 2026, at 14:49 PDT. The cause was identified as a “noisy neighbor,” a term used in cloud computing to describe a situation where one service consumes excessive shared resources, negatively impacting the performance of others operating on the same infrastructure.

With a user base of approximately two billion, even a short-lived outage is a serious concern. More importantly, it underscores the scale at which Gmail operates and reinforces why decisions around AI integration are critical for users worldwide.

The central issue now facing users is the balance between convenience and security. Google presents Gemini as a helpful and well-behaved assistant that enhances productivity without overstepping boundaries. However, like any guest given access to a private space, it requires clear rules and careful oversight.

This tension becomes even more visible when considering Google’s parallel efforts to strengthen security. The company recently expanded client-side encryption for Gmail on mobile devices. While this may sound similar to end-to-end encryption used in messaging apps, it is not the same. This form of encryption operates at an organizational level, primarily for enterprise users, and does not provide the same device-specific privacy protections commonly associated with true end-to-end encryption.

More critically, enabling this additional layer of encryption sharply limits Gmail's functionality. When it is turned on, several features become unavailable. Users can no longer use confidential mode, access delegated accounts, apply advanced email layouts, or send bulk emails using multi-send options. Features such as suggested meeting times, pop-out or full-screen compose windows, and sending emails to group recipients are also disabled.

In addition, personalization and usability tools are affected. Email signatures, emojis, and printing functions stop working. AI-powered tools, including Google’s intelligent writing and assistance features, are also unavailable. Other smart Gmail features are disabled, and certain mobile capabilities, such as screen recording and taking screenshots on Android devices, are restricted.

These limitations exist because encrypted data cannot be accessed by AI systems. As a result, users are forced to choose between stronger data protection and access to advanced features. The same mechanisms that secure information also prevent AI tools from functioning effectively.

This reflects a bigger challenge across the technology industry. Privacy and security measures often limit the capabilities of AI systems, which depend on access to data to operate. In Gmail’s case, these two priorities do not align easily and, in many ways, directly conflict.

From a wider perspective, this also highlights a fundamental limitation of email itself. The technology was developed in an earlier era and was not designed to handle modern cybersecurity threats. Its underlying structure lacks the robust protections found in newer communication platforms.

As artificial intelligence becomes more deeply integrated into everyday tools, users are being asked to make more informed and deliberate decisions about how their data is used. While Google presents Gemini as a controlled and temporary assistant, the responsibility ultimately lies with users to determine their comfort level.

For highly sensitive communication, relying solely on email may no longer be the safest option. Exploring alternative platforms with stronger built-in security may be necessary. Ultimately, this moment represents a critical choice: whether the convenience offered by AI is worth the level of access it requires.

CISO Burnout Is Costing Businesses More Than Money

 

Businesses are increasingly feeling the financial and operational impact of CISO burnout, as overstretched security leaders make slower decisions, miss critical signals, and eventually leave their roles. The pressure of rising cyber threats, regulatory demands, and limited resources is turning the CISO position into a high‑turnover, high‑cost liability rather than a strategic asset. 

Why CISOs are burning out 

CISOs today face an “always‑on” workload, with AI‑driven attacks, expanding digital estates, and constant audits leaving little room for rest. Many report chronic stress, decision fatigue, and missed family events, while still working well beyond contracted hours to keep up. Boards often understand the pressure in theory, but fail to translate this into better staffing, budgets, or clearer priorities.

When a burned‑out CISO resigns or takes extended leave, firms pay not only recruitment and onboarding costs, but also the hidden price of lost productivity and disrupted projects. One expert estimates total CISO replacement costs can exceed 200% of salary when incident‑related losses, staff turnover, and delayed IT initiatives are factored in. Incidents that might have been caught earlier are more likely to slip through, raising breach‑related expenses and reputational damage. 

Impact on security and board confidence 

Burnout erodes cyber resilience by weakening threat detection, slowing crisis‑time decisions, and degrading communication of risk to the board. As CISOs disengage, security can become an afterthought, initiatives stall, and internal morale in security teams drops. This visibly undermines confidence at the top, making it harder to secure long‑term investment in modern security programs.

To break the cycle, companies must invest in prevention: realistic job design, adequate headcount, clear mandates, and mental‑health support. Some firms are shifting toward fractional or portfolio‑style CISOs, spreading responsibility and reducing single‑point pressure. Firms that treat CISO well‑being as a core part of risk management will likely see better retention, stronger security posture, and lower overall breach‑related costs.

Anthropic AI Cyberattack Capabilities Raise Alarm Over Vulnerability Exploitation Risks

 

Artificial intelligence is reshaping cybersecurity faster than expected, and evidence from Anthropic suggests it may fuel digital threats more intensely than ever before. Recently disclosed results indicate that its most advanced AI does not just detect flaws in code - it proceeds on its own to exploit them. That capability signals a turning point in what attacks may look like: a different kind of risk takes shape when machines act without waiting for human direction. What worries experts comes down to recent shifts in how attacks unfold.

One key moment arrived when Anthropic uncovered a complex espionage campaign in which hackers - likely backed by governments - didn't just plan with artificial intelligence; they let it carry out actions during the breach itself. That shift matters because machine-driven systems are now performing tasks once handled only by people inside digital intrusions. Anthropic also revealed what its newest test model, Claude Mythos Preview, can do. The firm says it found numerous serious flaws in common operating systems and software - flaws that had stayed hidden for long stretches of time. Beyond spotting issues, the system chained several weaknesses together into working attack methods, something usually done by expert humans.

What stands out is how little oversight was needed during these operations, and how this combination - spotting weaknesses and acting on them - marks a notable shift. This is not incremental change: specialists like Mantas Mazeika point to AI-powered threats moving into uncharted territory, with automated systems ramping up attack frequency and reach. Allie Mellen adds that the gap between detecting a flaw and weaponizing it shrinks fast under AI pressure, cutting companies' response windows down to almost nothing. Among the issues highlighted by Anthropic were lingering flaws in OpenBSD and FFmpeg, surfaced through the model's analysis, alongside intricate exploitation chains targeting Linux servers.

Such discoveries raise questions about whether current defenses can match threats accelerated by artificial intelligence. For now, Anthropic is withholding public access entirely: the model is available only to a select group of technology firms through a special program meant to spot weaknesses early. The move comes as others in the industry voice similar concerns about misuse - safety outweighs speed when the stakes involve advanced systems. Still, experts suggest such progress brings both danger and potential: the same tools might help uncover flaws early, shielding networks before breaches occur.

Yet success depends on collaboration: firms, officials, and digital defenders must reshape how they handle code fixes and protection strategies, or gains could falter under old habits. Advancing AI is now shaping the digital frontier, changing how threats emerge and how defenses respond. With speed on their side, those aiming to breach systems find new openings just as quickly as protectors build stronger shields. Staying ahead means defense must grow not just faster but smarter, matching each leap taken by adversaries before the gaps widen.
