
All the recent news you need to know

Salesforce’s New “Headless 360” Lets AI Agents Run Its Platform

 


Salesforce has introduced what it describes as the most crucial architectural overhaul in its 27-year history, launching a new initiative called “Headless 360.” The update is designed to allow artificial intelligence agents to control and operate the company’s entire platform without requiring a traditional graphical interface such as a dashboard or browser.

The announcement was made during the company’s annual TDX developer conference in San Francisco, where Salesforce revealed that it is releasing more than 100 new developer tools and capabilities, available immediately, that let AI systems interact directly with Salesforce environments. The move reflects a deeper shift in enterprise software: the rise of intelligent agents capable of reasoning and executing tasks is forcing companies to rethink whether conventional user interfaces are still necessary.

Salesforce’s answer to that question is direct: instead of designing software primarily for human interaction, the platform is now being rebuilt so that machines can access and operate it programmatically. According to the company, this transformation began over two years ago with a strategic decision to expose all internal capabilities rather than keeping them hidden behind user interfaces.

This shift is taking place during a period of uncertainty in the broader software industry. Concerns that advanced AI models developed by companies like OpenAI and Anthropic could disrupt traditional software business models have already impacted market performance. Industry indicators, including software-focused exchange-traded funds, have declined substantially, reflecting investor anxiety about the long-term relevance of existing SaaS platforms.

Senior leadership at Salesforce has indicated that the new architecture is based on practical challenges observed while deploying AI systems across enterprise clients. According to internal insights, building an AI agent is only the initial step. Organizations also face ongoing challenges related to development workflows, system reliability, updates, and long-term maintenance.

To address these challenges, Headless 360 is structured around three foundational pillars.

The first pillar focuses on development flexibility. Salesforce has introduced more than 60 tools based on Model Context Protocol, along with over 30 pre-configured coding capabilities. These allow external AI coding agents, including systems such as Claude Code, Cursor, Codex, and Windsurf, to gain direct, real-time access to a company’s Salesforce environment. This includes data, workflows, and underlying business logic. Developers are no longer required to use Salesforce’s own integrated development environment and can instead operate from any terminal or external setup.
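Conceptually, Model Context Protocol tools are invoked as JSON-RPC messages. The sketch below shows the general shape of a `tools/call` request an external coding agent might send; the tool name `query_records` and its arguments are hypothetical illustrations, not documented Salesforce tool names.

```python
import json

def build_tool_call(tool_name, arguments, request_id=1):
    """Build a minimal MCP-style JSON-RPC 2.0 `tools/call` request.

    The tool name and arguments passed in are illustrative only.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical example: an agent asking a Salesforce-side MCP server
# for open opportunities (the tool name is an assumption for illustration).
request = build_tool_call("query_records", {"object": "Opportunity", "stage": "Open"})
print(json.dumps(request, indent=2))
```

The point of the protocol layer is that any MCP-capable agent (Claude Code, Cursor, and so on) can discover and invoke such tools uniformly, without a Salesforce-specific SDK.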

In addition, Salesforce has upgraded its native development environment, Agentforce Vibes 2.0, by introducing an “open agent harness.” This system supports multiple agent frameworks, including those from OpenAI and Anthropic, and dynamically adjusts capabilities depending on which AI model is being used. The platform also supports multiple models simultaneously, including advanced systems like Claude Sonnet and GPT-5, while maintaining full awareness of the organization’s data from the start.

A notable technical enhancement is the introduction of native React support. During demonstrations, developers created a fully functional application using React instead of Salesforce’s traditional Lightning framework. The application connected to Salesforce data through GraphQL while still inheriting built-in security controls. This significantly expands front-end flexibility for developers.
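The React-plus-GraphQL pattern boils down to the front end sending a query document over HTTP and rendering the JSON result. The sketch below shows only the query construction, in Python for illustration; the object and field names are assumptions, not Salesforce's documented GraphQL schema, and a real request would also carry authentication.

```python
import json

def build_graphql_payload(object_name, fields, limit=10):
    """Construct a GraphQL query payload for a record query.

    Object and field names are placeholders; a real Salesforce GraphQL
    query follows the platform's actual schema and auth requirements.
    """
    field_list = " ".join(fields)
    query = f"query {{ {object_name}(first: {limit}) {{ {field_list} }} }}"
    return json.dumps({"query": query})

# Hypothetical query for account records, as a front end might POST it.
payload = build_graphql_payload("accounts", ["name", "industry"], limit=5)
```

Because the security controls are enforced server-side at the data layer, the front-end framework (React, Lightning, or anything else) becomes interchangeable.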

The second pillar focuses on deployment. Salesforce has introduced an “experience layer” that separates how an AI agent functions from how it is presented to users. This allows developers to design an experience once and deploy it across multiple platforms, including Slack, mobile applications, Microsoft Teams, ChatGPT, Claude, Gemini, and other compatible environments. Importantly, this can be done without rewriting code for each platform. The approach represents a change from requiring users to enter Salesforce interfaces to delivering Salesforce-powered experiences directly within existing workflows.
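The "design once, deploy anywhere" idea behind the experience layer can be sketched as a thin adapter pattern: agent logic produces one channel-neutral result, and per-channel renderers translate it. Everything below, including the class name, channel identifiers, and payload shapes, is an illustrative assumption rather than Salesforce's actual API.

```python
class AgentResponse:
    """Channel-neutral output produced once by the agent logic."""
    def __init__(self, text, actions=None):
        self.text = text
        self.actions = actions or []

def render(response, channel):
    """Translate one AgentResponse into a channel-specific payload.

    The channels and payload shapes here are hypothetical placeholders.
    """
    if channel == "slack":
        return {"blocks": [{"type": "section", "text": response.text}]}
    if channel == "teams":
        return {"type": "message", "text": response.text}
    # Fallback: plain text for chat surfaces such as ChatGPT or Claude.
    return {"text": response.text}

resp = AgentResponse("Your case was escalated to a specialist.")
slack_payload = render(resp, "slack")
teams_payload = render(resp, "teams")
```

The agent's behavior lives in one place; only the thin `render` step varies per surface, which is what allows deployment to new platforms without rewriting the agent.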

The third pillar addresses trust, control, and scalability. Salesforce has introduced a comprehensive set of tools that manage the entire lifecycle of AI agents. These include systems for testing, evaluation, monitoring, and experimentation. A central component is “Agent Script,” a new programming language designed to combine structured, rule-based logic with the flexible reasoning capabilities of AI models. It allows organizations to define which parts of a process must follow strict rules and which parts can rely on AI-driven decision-making.

Additional tools include a Testing Center that identifies logical errors and policy violations before deployment, custom evaluation systems that define performance standards, and an A/B testing interface that allows multiple agent versions to run simultaneously under real-world conditions.

One of the key technical challenges addressed by Salesforce is the difference between probabilistic and deterministic systems. AI agents do not always produce identical results, which can create instability in enterprise environments where consistency is critical. Early adopters reported that once agents were deployed, even small modifications could lead to unpredictable outcomes, forcing teams to repeat extensive testing processes.

Agent Script was developed to solve this problem by introducing a structured framework. It defines agent behavior as a state machine, where certain steps are fixed and controlled while others allow flexible reasoning. This approach ensures both reliability and adaptability.
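The state-machine idea can be illustrated with a minimal sketch: deterministic steps follow a fixed transition table, while designated states hand control to a model. The code below is ordinary Python, not Agent Script itself, and the step names and the stubbed `reason` function are assumptions for illustration.

```python
def reason(prompt):
    """Stand-in for an LLM call; a real agent would query a model here."""
    return "refund" if "refund" in prompt else "escalate"

FIXED = {"verify_identity": "classify_request"}  # rule-based transitions
FLEXIBLE = {"classify_request"}                  # states delegated to the model
TERMINAL = ("refund", "escalate")

def run(start, user_message):
    """Walk the state machine, recording every state visited."""
    state, trace = start, []
    while state:
        trace.append(state)
        if state in FLEXIBLE:
            # Flexible step: the model chooses the next state.
            state = reason(user_message)
        else:
            # Fixed step: the transition table is authoritative.
            state = FIXED.get(state)
        if state in TERMINAL:
            trace.append(state)
            break
    return trace
```

Here `verify_identity` always runs first and always leads to classification, regardless of what the model would prefer; only the classification step is probabilistic, which is the reliability-plus-adaptability split the article describes.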

Salesforce also distinguishes between two types of AI system architectures. Customer-facing agents, such as those used in sales or support, require strict control to ensure they follow predefined rules and maintain brand consistency. These operate within structured workflows. In contrast, employee-facing agents are designed to operate more freely, exploring multiple paths and refining their outputs dynamically before presenting results. Both systems operate on a unified underlying architecture, allowing organizations to manage them without maintaining separate platforms.

The company is also expanding its ecosystem. It now supports integration with a wide range of AI models, including those from Google and other providers. A new marketplace brings together thousands of applications and tools, supported by a $50 million initiative aimed at encouraging further development.

At the same time, Salesforce is taking a flexible approach to emerging technical standards such as Model Context Protocol. Rather than relying on a single method, the company is offering APIs, command-line interfaces, and protocol-based integrations simultaneously to remain adaptable as the industry evolves.

A real-world example shared during the announcement demonstrated how one company built an AI-powered customer service agent in just 12 days. The system now handles approximately half of customer interactions, improving efficiency while reducing operational costs.

Finally, Salesforce is also changing its business model. The company is shifting away from traditional per-user pricing toward a consumption-based approach, reflecting a future where AI agents, rather than human users, perform the majority of work within enterprise systems.

This transformation suggests a new layer in strategic operations. Instead of resisting the rise of AI, Salesforce is restructuring its platform to align with it, betting that its existing data infrastructure, enterprise integrations, and accumulated operational logic will continue to provide value even as software becomes increasingly autonomous.

Nvidia’s AI Launch Sparks Quantum Stock Surge, Minting Xanadu’s CEO a Billionaire

 

Quantum computing stocks jumped after Nvidia unveiled its Ising open-source AI model family, a move that investors interpreted as a strong validation of the sector. The result was a sharp rally in several names, with Xanadu standing out as the biggest winner and its founder Christian Weedbrook briefly joining the billionaire ranks.

Notably, Nvidia’s announcement did not introduce a new quantum computer; instead, it introduced software tools aimed at two of quantum computing’s hardest problems: calibration and error correction. Nvidia said Ising can make decoding up to 2.5 times faster and three times more accurate than pyMatching, which helped convince traders that the path to practical quantum systems may be improving faster than expected.

That enthusiasm quickly turned into extreme stock moves. Xanadu’s shares climbed from under $8 to roughly $40 in six trading sessions, while the Toronto exchange paused trading several times because of the speed of the move. Similar gains appeared across the sector, including D-Wave, IonQ, Rigetti, Infleqtion, and Quantum Computing, showing that the market was bidding up the whole group rather than just one company. 

For Xanadu, the rally created an extraordinary paper windfall. Weedbrook owns 15.6% of the company through multiple voting shares, and his stake was valued at about $1.5 billion to $1.6 billion during the surge. The story is notable because the company’s valuation moved dramatically on sentiment tied to Nvidia’s broader endorsement of quantum-related tooling, not on a fresh commercial breakthrough from Xanadu itself. 

The caveat is that quantum computing remains a high-expectation, low-certainty industry. Nvidia’s move suggests that investors increasingly view AI and quantum as complementary technologies, especially if software can help make fragile quantum hardware more usable. But the volatility also highlights the risk: when a sector is still early and speculative, a single announcement can create massive gains, even before the business fundamentals fully catch up.

Tinder And Zoom Introduce World ID Iris Scanning To Verify Humans Amid Rising AI Fake Profiles

 

Eye-scan technology is now rolling out on Tinder and Zoom to confirm that real people sit behind profiles, amid rising fears about AI mimics and bots. The move relies on identity checks from World ID, backed by Tools for Humanity, to distinguish actual humans from automated accounts. Verification works through unique iris patterns and runs quietly when someone logs in. Not every user sees it yet; testing will shape how widely it spreads. Behind the scenes, privacy safeguards aim to shield biometric data tightly. Shifts like these respond to digital trust gaps widening across social apps. Scanning begins at the iris, the ring of color in the eye, using either an app or a round device built for this purpose. Once confirmation comes through, a distinct digital ID lands on the person’s smartphone.

This key travels with them, opening access wherever systems accept it as proof that someone is human rather than automated software. Rising floods of fake online personas built by artificial intelligence fuel efforts like this one, and impersonations crafted by deepfakes grow more common, pushing such verification into sharper focus. Backed by Sam Altman, who also leads OpenAI, the project made its debut in San Francisco. At the event, he suggested the web may soon carry more machine-made content than human output, and that truth online might hinge on tools able to tell actual humans apart from artificial ones.

Such systems, he argued, are likely to become unavoidable. Fake accounts plague both Tinder and Zoom, complicating trust on these platforms. Driven by artificial intelligence, counterfeit profiles on Tinder deploy synthetic photos alongside prewritten messages. These setups often unfold into romantic deception aimed at extracting cash or sensitive details. Reports indicate massive monetary damage worldwide from similar frauds, with losses tallying in the billions across nations within just a few years.

Zoom faces a distinct yet connected challenge: deepfake-driven impersonation at work. In one well-documented incident, fraudsters deployed synthetic audio and video to mimic corporate leaders, tricking staff into sending large sums. Here, World ID steps in, adding stronger verification when stakes run high. The iris scans arrive after Match Group had already introduced video selfies to fight fake profiles on Tinder. Though not required, the newer check offers a tougher way to prove who you really are, and people at the company say it helps users feel more certain about others’ real identities.

What matters most is trust during interactions. Because irises differ so much between people, World ID uses them as a key part of its method. This setup aims to protect user privacy by creating an individual code instead of keeping sensitive data like home locations or full names. Even though it does not collect traditional identity markers, the technology still confirms real individuals. Growth has been steady, with expanding adoption seen on various digital services. 
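The privacy approach described, deriving an individual code rather than storing names or locations, can be sketched as a one-way hash of a biometric template. This is a deliberate simplification for illustration: World ID's actual protocol is considerably more involved, and the template format below is invented.

```python
import hashlib

def derive_identifier(iris_template: bytes, salt: bytes) -> str:
    """Derive a stable, non-reversible identifier from a biometric template.

    A one-way hash means the raw template need not be stored: the same
    input always yields the same identifier, but the identifier cannot
    be turned back into the biometric. (Illustrative only; real systems
    add error-tolerant encoding, since two captures of the same eye are
    never bit-identical.)
    """
    return hashlib.sha256(salt + iris_template).hexdigest()

# Placeholder bytes standing in for an encoded iris scan; not real data.
template = b"\x12\x9a\xf0\x55"
uid = derive_identifier(template, salt=b"per-deployment-salt")
```

The design choice is that services receive only the derived code, so proving "this is a unique human" never requires handing over the underlying biometric or any traditional identity markers.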

A large number of people - already in the millions - have gone through the sign-up process. Now shaping how we confirm who's behind a screen, artificial intelligence pushes biometrics deeper into everyday applications. Though concerns linger about data safety and user acceptance, this trend mirrors wider attempts across tech sectors to tackle rising confusion between real people and sophisticated automated fakes. Despite hesitation in some areas, systems that verify physical traits gain ground as tools for clearer online identities.

Fake CAPTCHA Lures Power IRSF Fraud and Crypto Theft Campaigns


 

Research by Infoblox reveals a new fraud operation that combines routine web security practices with telecom billing abuse, resulting in unauthorized mobile activity by using counterfeit CAPTCHA interfaces. 

In this scheme, familiar human verification prompts are repurposed as covert triggers for International Revenue Share Fraud, effectively converting a typical browser interaction into an event that is monetized through telecom billing. 

Several studies have demonstrated that users who navigate what appears to be a legitimate verification process may unknowingly authorize premium or international SMS transmissions, creating a direct revenue stream for threat actors. 

IRSF has presented challenges to telecom operators for decades, but this implementation introduces a previously undetected delivery vector that exploits user trust in widely used web validation mechanisms.

While individual charges may appear insignificant, the cumulative impacts at scale present carriers with measurable financial exposure, along with an increase in customer disputes resulting from opaque and unrecognized billing activity. 

Based on the analysis, the campaign appears to have been operating since mid-2020, reflecting a sustained and carefully developed exploitation approach. Through classic social engineering techniques and browser manipulation tactics, including back-button hijacking, the infrastructure effectively limits user navigation and reinforces the illusion of a legitimate verification process.

In addition, dozens of originating numbers were identified across multiple international jurisdictions, emphasizing the geographical dispersion of the monetization layer underpinning the scheme. The staged CAPTCHA sequence is designed to silently trigger multiple outbound SMS events, routing messages to a variety of premium-rate destinations rather than a single endpoint, thus maximizing revenue per interaction.

A delay in the manifestation of associated charges, which often occurs weeks after the event, further obscures attribution, reducing the chance that users recall or dispute the charges at bill time. Particularly significant is the integration of malicious traffic distribution systems within this operation, along with the repurposing of infrastructure typically used for malware delivery and phishing redirection into high-volume SMS fraud orchestration.

This convergence lets threat actors scale a campaign efficiently while maintaining operational stealth through layers of redirection and evasion mechanisms. The findings reveal a highly orchestrated, multi-phase fraud scheme that combines behavioral manipulation with telecom monetization.

By utilizing a pool of internationally distributed numbers, many registered in regions with higher SMS termination costs, including Azerbaijan, Egypt, and Myanmar, the operation maximizes per-transaction yields.

It is common practice for victims to be funneled through a series of convincing CAPTCHA challenges that are intended to trigger outbound messaging events to numerous premium-rate destinations discreetly, often resulting in several SMS transmissions within the same session. This layered interaction model, strengthened by browser-level interference, such as history manipulation, prevents users from leaving the website while maintaining the illusion that the application is legitimate. 

In this fraud model, the threat actor leverages inter-carrier settlement mechanisms to route traffic toward high-fee endpoints under revenue-sharing arrangements. Moreover, the integration of traffic distribution systems provides an additional level of operational precision, allowing targeted victimization while dynamically concealing malicious infrastructure from detection systems.

Based on industry assessments, artificially inflated traffic associated with such schemes remains among the most financially damaging forms of messaging abuse, with a significant share of telecom operators reporting both elevated traffic volumes and revenue leakage from this activity.

Individual users' seemingly trivial costs aggregate into a scalable and persistent revenue stream, demonstrating the ongoing viability of IRSF as a global fraud vector. Detailed investigations by Infoblox and Confiant further illustrate how abuse of Keitaro Tracker has enabled large-scale fraud ecosystems.

Keitaro was originally designed as a self-hosted ad performance tracking tool, but its conditional routing capabilities have been systematically repurposed by threat actors, who often operate with illegally obtained or cracked licenses, into a covert traffic distribution system and cloaking tool. Through this misuse, victims are diverted from seemingly legitimate entry points, such as sponsored social media advertisements, to fraudulent investment platforms promising AI-driven trading and guaranteed high returns.

As a method of enhancing credibility and engagement, campaigns frequently employ fabricated media narratives, including spoofed news coverage, synthetic endorsements, and deepfake video content attributed to actors such as FaiKast. In a four-month observation period, telemetry indicates more than 120 discrete campaigns were deployed in conjunction with Keitaro-linked infrastructure, resulting in significant DNS activity across thousands of domains. 

The majority of this traffic has been attributed to cryptocurrency-related fraud, particularly wallet draining schemes disguised as promotional airdrops involving widely recognized blockchain services and assets. 

The convergence of legacy investment scam tactics with adaptive traffic orchestration and artificial intelligence-based deception techniques demonstrates how scalable infrastructure is intertwined with persuasive social engineering to ensure maximum reach and financial extraction in an evolving threat landscape.

In terms of execution, the scheme contains carefully optimized conversion funnels that maximize engagement and monetization. A typical interaction sequence, consisting of multiple CAPTCHA stages, can trigger as many as 60 outbound SMS messages to a distributed network of international phone numbers, generating charges of around $30 per session.
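The per-session economics are easy to make concrete. If a session silently triggers up to 60 premium-rate messages totalling roughly $30, the implied average termination charge is about $0.50 per message; the victim-pool figure below is purely an illustrative assumption, not a number from the report.

```python
# Back-of-the-envelope economics for the described IRSF scheme.
messages_per_session = 60    # upper bound reported per CAPTCHA funnel
charge_per_session = 30.00   # approximate USD total per session

# Implied average premium-rate charge per message.
avg_charge_per_message = charge_per_session / messages_per_session

# Scale assumption: aggregate revenue over a hypothetical victim pool.
victims = 10_000             # illustrative figure only
total_revenue = victims * charge_per_session
```

This is the "high-volume, low-value" pattern in miniature: no single bill raises alarms, but the aggregate across thousands of sessions is substantial.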

Although modest for any single victim, this cost model scales across large victim pools, especially in countries with high- and mid-level termination rates across Europe and Eurasia. Campaign logic is further refined through client-side state management: cookies track progression metrics such as “successRate” and dynamically determine user pathways.

By selectively advancing, redirecting, or filtering participants into parallel fraud streams, adaptive routing improves targeting precision while fragmenting detection efforts, since traffic is distributed among multiple controlled endpoints.

Additionally, browser manipulation techniques, specifically JavaScript-driven history tampering, ensure persistence by redirecting users back into the fraudulent flow when they attempt to exit through standard navigation controls.

As a result, the user is faced with a constrained browsing environment that prolongs interaction time and increases the likelihood of repeated chargeable events before disengaging. Overall, the operation illustrates a shift in fraud engineering, converging telecom exploitation, adaptive web scripting, and traffic orchestration into a unified, revenue-generating system.

By embedding monetization triggers within seemingly benign user interactions, and by reinforcing those triggers with persistence mechanisms such as cookie-driven logic and navigation controls, threat actors are successfully industrializing high-volume, low-value fraud. According to Infoblox, these campaigns are not only technically sophisticated but also exploit systemic gaps in web platforms, advertising networks, and telecom billing frameworks.

As these tactics grow more sophisticated, they require coordinated mitigation in addition to detection: tighter controls across digital advertising supply chains, improved browser-level safeguards, and greater transparency around cross-border messaging charges will be needed to limit the scalability of such abuse.

Can AI Own Its Work? A Debate That Started With a Monkey Photo

 



A single photograph captured in a remote forest over a decade ago has become central to one of the most complex legal questions of the digital age: what happens when creative work is produced without direct human authorship? The answer now carries long-term consequences for artificial intelligence, creative industries, and ownership rights in the modern world.

The image in question originated in 2011, when wildlife photographer David Slater was documenting crested black macaques in Indonesia. These monkeys are not only endangered but also known for their highly expressive faces, making them attractive subjects for photography. However, Slater faced difficulty capturing close-up shots because the animals were wary of human presence.

To work around this, he positioned his camera on a tripod, enabled automatic focus, and used a flash, allowing the monkeys to approach and interact with the equipment without feeling threatened. His approach relied on curiosity rather than control. Eventually, one macaque handled the camera and pressed the shutter button while looking directly into the lens. The resulting image, widely known as the “monkey selfie,” appeared almost intentional, with the animal’s expression resembling a posed portrait.

While the photograph initially brought attention and recognition, it soon triggered an unexpected legal dispute. The core issue was deceptively simple: if a photograph is not taken by a human, can anyone claim ownership over it?

The situation escalated when the image was uploaded to Wikipedia, making it freely accessible worldwide. Slater objected to this distribution, arguing that he had lost approximately £10,000 in potential earnings because the image could now be used without payment. However, the Wikimedia Foundation refused to remove the photograph. Its reasoning was based on copyright law, which generally requires a human creator. Since the image was captured by an animal, the organisation classified it as public domain material.

This interpretation was later reinforced by the U.S. Copyright Office, which formally clarified that works produced without human authorship cannot be registered. In its guidance, the office explicitly listed a photograph taken by a monkey as an example of ineligible material, establishing a clear precedent.

The dispute took another unusual turn when People for the Ethical Treatment of Animals filed a lawsuit attempting to assign copyright ownership to the macaque itself. Although framed as a legal claim over the photograph, the case was widely interpreted as an effort to establish broader legal rights for animals. After several years of legal proceedings, a court dismissed the case, concluding that animals do not have the legal capacity to initiate lawsuits.

Legal experts later observed that, although the case focused on animal authorship, it introduced a broader conceptual challenge that would become more relevant with the rise of artificial intelligence. According to intellectual property lawyer Ryan Abbott, the debate could easily extend beyond animals to machines capable of producing creative outputs.

This possibility became reality when computer scientist Stephen Thaler attempted to secure copyright protection for an image generated by his AI system, DABUS. Thaler described the system as capable of independently producing ideas, arguing that it should be recognised as the sole creator of its output. He characterised the system as exhibiting a form of machine-based cognition, though this view is strongly disputed within the scientific community.

Despite these claims, the Copyright Office rejected the application, applying the same reasoning used in the monkey selfie case. Because the work was not created by a human, it could not qualify for copyright protection. This rejection led to a legal challenge that progressed through multiple levels of the U.S. judicial system.

When the case reached the Supreme Court of the United States, the court declined to hear it, leaving lower court rulings intact. The outcome effectively confirmed that, under current U.S. law, works generated entirely by artificial intelligence cannot be owned by anyone, including the developer of the system or the individual who prompted it.

This position has far-reaching implications for the creative economy. Copyright law exists to allow creators and organisations to control and monetise their work. Without ownership rights, it becomes difficult to build sustainable business models around fully AI-generated content. Legal scholar Stacey Dogan noted that this limitation reduces the likelihood of a future where machine-generated content completely replaces human-created media.

At the same time, the rapid expansion of generative AI tools continues to complicate the landscape. These systems function by analysing large datasets and producing outputs based on user instructions, often referred to as prompts. While they can generate text, images, and video at scale, their outputs raise questions about originality and authorship, particularly when human involvement is minimal.

Recent industry developments illustrate this uncertainty. Experimental AI-generated content has attracted large audiences online, suggesting a level of public interest, even if motivations such as novelty or criticism play a role. However, some technology companies have begun reassessing their AI content strategies, particularly where ownership and profitability remain unclear.

Expert opinion on the value of fully AI-generated content remains divided. Some specialists argue that such content lacks depth or authenticity, while others view AI as a useful tool for supporting human creativity rather than replacing it. This perspective positions AI as a collaborator rather than an independent creator.

Legal approaches also vary internationally. In the United Kingdom, copyright law allows ownership of computer-generated works by assigning authorship to the individual responsible for arranging their creation. However, this framework is currently being reconsidered as policymakers evaluate whether it remains appropriate in the context of modern AI systems.

One of the most complex unresolved issues involves hybrid creation. When humans actively guide, refine, and edit AI-generated outputs, determining ownership becomes less straightforward. A notable example involves an AI-assisted artwork that won a competition after extensive prompting and editing, raising questions about how much human contribution is required for copyright protection.

This debate is not entirely new. When photography first emerged, similar concerns were raised about whether cameras, rather than humans, were responsible for creative output. Over time, legal systems adapted by recognising the role of human intention and decision-making. Artificial intelligence now presents a more advanced version of that same challenge.

For now, the legal position in the United States remains clear: without meaningful human involvement, creative works cannot be protected by copyright. However, as AI becomes increasingly integrated into creative processes, the distinction between human and machine contribution is becoming more difficult to define.

What began as an unexpected interaction between a monkey and a camera has therefore evolved into a defining case in the global conversation about creativity, ownership, and technology. The decisions made in courts today will shape how creative work is produced, distributed, and valued in the future.



PhantomCore Exploits TrueConf Flaws to Breach Russian Networks

 

A pro-Ukrainian hacktivist group known as PhantomCore has been exploiting vulnerabilities in TrueConf video conferencing software to infiltrate Russian networks since September 2025. According to a Positive Technologies report, the attackers chained three undisclosed flaws in TrueConf Server, allowing them to bypass authentication, read sensitive files, and execute arbitrary commands remotely. Despite patches being released by TrueConf on August 27, 2025, the group independently reverse-engineered these issues, launching widespread attacks on Russian organizations without relying on public exploits. 

The vulnerabilities include BDU:2025-10114 (CVSS 7.5), an insufficient access control flaw enabling unauthenticated requests to admin endpoints like /admin/*; BDU:2025-10115 (CVSS 7.5), which permits arbitrary file reads; and the critical BDU:2025-10116 (CVSS 9.8), a command injection vulnerability allowing full OS command execution. This exploit chain grants attackers an initial foothold on vulnerable servers, facilitating lateral movement and persistence within victim environments.

PhantomCore's operations highlight their sophistication, as they maintain stealth for extended periods—up to 78 days in some cases—while targeting sectors like government, defense, and manufacturing. PhantomCore's tactics extend beyond TrueConf exploits, incorporating phishing with password-protected RAR archives containing PhantomRAT malware, a shift from earlier ZIP-based methods. Positive Technologies noted over 180 infections from May to July 2025 alone, peaking on June 30, with at least 49 hosts still under attacker control as of early 2026. The group's pro-Ukrainian affiliation aligns with geopolitical motives, focusing exclusively on Russian entities amid ongoing cyber-espionage waves. 

Organizations running TrueConf face heightened risks if unpatched, as attackers evolve tools to evade detection and conduct large-scale breaches. Immediate mitigations include applying the August 2025 patches, monitoring admin endpoints and command logs for anomalies, and segmenting video conferencing servers from core networks. Enhanced defenses against lateral movement, such as network micro-segmentation and behavioral analytics, are crucial to counter PhantomCore's persistence. 
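One of the recommended mitigations, monitoring admin endpoints for anomalies, can be sketched as a simple log filter that flags requests to /admin/* paths lacking an authenticated-session marker. The log format and field positions below are assumptions for illustration; TrueConf Server's actual log layout will differ and the filter would need adapting.

```python
def flag_unauthenticated_admin_requests(log_lines):
    """Return log lines that hit /admin/* without an authenticated marker.

    Assumes a simplified space-separated format:
        <ip> <method> <path> <auth:yes|no>
    This is a hypothetical layout used only to illustrate the check.
    """
    flagged = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 4:
            continue  # skip malformed lines
        _ip, _method, path, auth = parts[:4]
        if path.startswith("/admin/") and auth == "auth:no":
            flagged.append(line)
    return flagged

sample = [
    "203.0.113.7 GET /admin/config auth:no",   # matches the exploited pattern
    "198.51.100.2 GET /admin/users auth:yes",  # normal authenticated admin use
    "203.0.113.7 GET /index.html auth:no",     # anonymous but non-admin
]
alerts = flag_unauthenticated_admin_requests(sample)
```

Even a crude filter like this surfaces the access pattern behind BDU:2025-10114, where admin endpoints answered unauthenticated requests; in practice it should feed a SIEM rather than run ad hoc.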

This campaign underscores the dangers of unpatched collaboration tools in sensitive environments, where private zero-days can fuel nation-aligned hacktivism. Russian firms must prioritize vulnerability management and threat hunting, as PhantomCore's adaptability signals ongoing threats into 2026. By staying vigilant, defenders can disrupt such stealthy intrusions before they escalate to data exfiltration or sabotage.
