Authorities in Spain recently arrested 34 people alleged to be members of Black Axe, a global criminal network. The operation was conducted by the Spanish National Police and the Bavarian State Criminal Police Office, with support from Europol.
Twenty-eight individuals were arrested in Seville, three in Madrid, two in Malaga, and one in Barcelona. Among the 34 suspects, 10 are Nigerian nationals.
“The action resulted in 34 arrests and significant disruptions to the group's activities. Black Axe is a highly structured, hierarchical group with its origins in Nigeria and a global presence in dozens of countries,” Europol said in a press release on its website.
Black Axe is infamous for its involvement in a wide range of crimes, including fraud, human trafficking, prostitution, drug trafficking, armed robbery, kidnapping, and malicious spiritual practices. The gang is believed to earn billions of euros annually from these operations.
Officials suspect Black Axe of fraud worth more than 5.94 million euros. During the operation, investigators froze 119,352 euros in bank accounts and seized 66,403 euros in cash during home searches.
Cross-border cooperation between Germany and Spain included the deployment of two German officers on the ground on the day of the action, the exchange of intelligence, and the provision of analytical support to Spanish investigators.
The operation targeted the core of the organized crime network, which recruits money mules in underprivileged communities with high unemployment. Most of these vulnerable individuals are Spanish nationals and are used to support the network's illegal activities.
Europol supported the operation with a range of services, including intelligence analysis, a data sprint in Madrid, and on-the-spot assistance. Its involvement made it possible to map the organization's structure across countries, centralize data, exchange key intelligence packages, and coordinate the national investigations.
This strategy seeks to disrupt the group's operations and recover assets, addressing the difficulties created by the group's many small, scattered cases, its cross-border activities, and the blurring of its crimes into "ordinary" local offenses.
A critical security vulnerability has been identified in LangChain’s core library that could allow attackers to extract sensitive system data from artificial intelligence applications. The flaw, tracked as CVE-2025-68664, affects how the framework processes and reconstructs internal data, creating serious risks for organizations relying on AI-driven workflows.
LangChain is a widely adopted framework used to build applications powered by large language models, including chatbots, automation tools, and AI agents. Due to its extensive use across the AI ecosystem, security weaknesses within its core components can have widespread consequences.
The issue stems from how LangChain handles serialization and deserialization. These processes convert data into a transferable format and then rebuild it for use by the application. In this case, two core functions failed to properly safeguard user-controlled data that included a reserved internal marker used by LangChain to identify trusted objects. As a result, untrusted input could be mistakenly treated as legitimate system data.
This weakness becomes particularly dangerous when AI-generated outputs or manipulated prompts influence metadata fields used during logging, event streaming, or caching. When such data passes through repeated serialization and deserialization cycles, the system may unknowingly reconstruct malicious objects. This behavior falls under a known security category involving unsafe deserialization and has been rated critical, with a severity score of 9.3.
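To make the failure concrete, here is a minimal, self-contained Python sketch of the pattern the advisory describes. It is not LangChain's actual code: `TRUSTED_MARKER`, `Prompt`, `dumps`, and `loads` are hypothetical stand-ins for the framework's reserved internal marker, trusted object types, and serialization helpers.

```python
import json

# A minimal sketch of the flaw, not LangChain's actual code. The marker
# name and the Prompt class are hypothetical stand-ins for the framework's
# reserved internal marker and its trusted object types.
TRUSTED_MARKER = "__lc__"

class Prompt:
    """Toy 'trusted' object that the framework knows how to rebuild."""
    def __init__(self, template: str):
        self.template = template

def dumps(obj) -> str:
    if isinstance(obj, Prompt):
        # Trusted objects are tagged with the reserved marker...
        return json.dumps({TRUSTED_MARKER: "Prompt", "template": obj.template})
    # ...but user-controlled data is written out verbatim, with no escaping
    # even when it happens to contain the marker itself.
    return json.dumps(obj)

def loads(raw: str):
    data = json.loads(raw)
    # The rebuild step trusts the marker blindly: anything carrying it is
    # reconstructed as an internal object.
    if isinstance(data, dict) and data.get(TRUSTED_MARKER) == "Prompt":
        return Prompt(data["template"])
    return data

# Attacker-controlled metadata that merely *looks* like a trusted object.
# One dumps/loads cycle -- e.g. through a log or cache -- promotes it:
user_metadata = {TRUSTED_MARKER: "Prompt", "template": "{leak_me}"}
restored = loads(dumps(user_metadata))
print(type(restored).__name__)  # Prompt -- untrusted data became a trusted object
```

The missing step is escaping: a safe serializer would encode a user-supplied dict that contains the reserved key so it can never collide with genuine internal objects on the way back in.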
In practical terms, attackers could craft inputs that cause AI agents to leak environment variables, which often store highly sensitive information such as access tokens, API keys, and internal configuration secrets. In more advanced scenarios, specific approved components could be abused to transmit this data outward, including through unauthorized network requests. Certain templating features may further increase risk if invoked after unsafe deserialization, potentially opening paths toward code execution.
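The secret-leak scenario can be sketched the same way. Again, this is an illustrative toy rather than LangChain's real loader: it assumes a hypothetical "secret" reference type that the pre-patch deserializer resolves from environment variables, as the advisory describes.

```python
import json
import os

# Illustrative toy only -- not LangChain's real loader. The reserved marker
# and the "secret" reference type are stand-ins for the framework's internal
# conventions described in the advisory.
TRUSTED_MARKER = "__lc__"

os.environ["API_KEY"] = "sk-live-example"  # simulated sensitive secret

def vulnerable_loads(raw: str):
    data = json.loads(raw)
    if isinstance(data, dict) and data.get(TRUSTED_MARKER) == "secret":
        # Pre-patch behavior as described: secret references are resolved
        # from environment variables automatically.
        return os.environ.get(data["id"], "")
    return data

# A crafted payload smuggled in through a metadata field:
payload = json.dumps({TRUSTED_MARKER: "secret", "id": "API_KEY"})
print(vulnerable_loads(payload))  # sk-live-example -- secret now in attacker-visible output
```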
The vulnerability was discovered during security reviews focused on AI trust boundaries, where the researcher traced how untrusted data moved through internal processing paths. After responsible disclosure in early December 2025, the LangChain team acknowledged the issue and released security updates later that month.
The patched versions introduce stricter handling of internal object markers and disable automatic resolution of environment secrets by default, a feature that was previously enabled and contributed to the exposure risk. Developers are strongly advised to upgrade immediately and review related dependencies that interact with LangChain-core.
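Under the same toy assumptions, the patched behavior looks roughly like this: secret references resolve only from an explicitly supplied map, and the environment is no longer consulted by default.

```python
import json

TRUSTED_MARKER = "__lc__"  # same hypothetical stand-in as above

def patched_loads(raw: str, secrets_map: dict | None = None):
    data = json.loads(raw)
    if isinstance(data, dict) and data.get(TRUSTED_MARKER) == "secret":
        # Post-patch behavior as described: secrets resolve only from an
        # explicitly supplied map; the environment is not consulted by default.
        if secrets_map and data["id"] in secrets_map:
            return secrets_map[data["id"]]
        raise ValueError(f"unresolved secret reference: {data['id']}")
    return data

payload = json.dumps({TRUSTED_MARKER: "secret", "id": "API_KEY"})
try:
    patched_loads(payload)  # no explicit opt-in, so the reference fails loudly
except ValueError as err:
    print(err)  # unresolved secret reference: API_KEY
```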
Security experts stress that AI outputs should always be treated as untrusted input. Organizations are urged to audit logging, streaming, and caching mechanisms, limit deserialization wherever possible, and avoid exposing secrets unless inputs are fully validated. A similar vulnerability identified in LangChain's JavaScript ecosystem underscores the broader security challenges that arise as AI frameworks become more interconnected. One such defensive check is sketched below.
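A simple defensive measure, sketched under the same hypothetical marker as the examples above, is to reject untrusted data that carries the reserved marker before it ever reaches serialization. Filtering of this kind should complement, not replace, upgrading to a patched langchain-core release.

```python
# Defensive pre-serialization check, assuming the same hypothetical marker
# as the sketches above. Filtering complements, not replaces, upgrading to
# a patched langchain-core release.
def reject_reserved_marker(value, marker: str = "__lc__"):
    """Recursively refuse untrusted data that carries the reserved marker."""
    if isinstance(value, dict):
        if marker in value:
            raise ValueError("reserved serialization marker in untrusted input")
        for nested in value.values():
            reject_reserved_marker(nested, marker)
    elif isinstance(value, list):
        for item in value:
            reject_reserved_marker(item, marker)
    return value

# Screen model output before it enters logs or caches:
reject_reserved_marker({"note": "safe"})          # passes
# reject_reserved_marker({"__lc__": "Prompt"})    # would raise ValueError
```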
As AI adoption accelerates, maintaining strict data boundaries and secure design practices is essential to protecting both systems and users from emerging threats.