
LangChain Security Issue Puts AI Application Data at Risk

 



A critical security vulnerability has been identified in LangChain’s core library that could allow attackers to extract sensitive system data from artificial intelligence applications. The flaw, tracked as CVE-2025-68664, affects how the framework processes and reconstructs internal data, creating serious risks for organizations relying on AI-driven workflows.

LangChain is a widely adopted framework used to build applications powered by large language models, including chatbots, automation tools, and AI agents. Due to its extensive use across the AI ecosystem, security weaknesses within its core components can have widespread consequences.

The issue stems from how LangChain handles serialization and deserialization. These processes convert data into a transferable format and then rebuild it for use by the application. In this case, two core functions failed to properly safeguard user-controlled data that included a reserved internal marker used by LangChain to identify trusted objects. As a result, untrusted input could be mistakenly treated as legitimate system data.
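To illustrate the class of flaw, the simplified Python sketch below shows how a deserializer that trusts a reserved marker key can be tricked into reconstructing an object from user-controlled data. The marker name, constructor table, and "SecretLoader" object are hypothetical illustrations and do not reflect LangChain's actual internals.

```python
# Illustrative sketch only: the marker key, constructor table, and "SecretLoader"
# object are hypothetical and do not reflect LangChain's real implementation.
RESERVED_MARKER = "lc"  # assumed reserved key marking "trusted" serialized objects

TRUSTED_CONSTRUCTORS = {
    # pretend internal object that resolves an environment variable when revived
    "SecretLoader": lambda kwargs: f"<secret from {kwargs.get('env_var')}>",
}

def unsafe_revive(data: dict):
    """Rebuild objects from serialized data without checking where it came from."""
    if RESERVED_MARKER in data:
        # The flaw: user-controlled input that merely contains the reserved marker
        # is treated as trusted internal data and reconstructed.
        ctor = TRUSTED_CONSTRUCTORS.get(data.get("type"))
        if ctor is not None:
            return ctor(data.get("kwargs", {}))
    return data

# Attacker-supplied metadata that imitates an internal serialized object:
malicious = {"lc": 1, "type": "SecretLoader", "kwargs": {"env_var": "OPENAI_API_KEY"}}
print(unsafe_revive(malicious))  # the object is revived from untrusted input
```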

This weakness becomes particularly dangerous when AI-generated outputs or manipulated prompts influence metadata fields used during logging, event streaming, or caching. When such data passes through repeated serialization and deserialization cycles, the system may unknowingly reconstruct malicious objects. This behavior falls under a known security category involving unsafe deserialization and has been rated critical, with a severity score of 9.3.

In practical terms, attackers could craft inputs that cause AI agents to leak environment variables, which often store highly sensitive information such as access tokens, API keys, and internal configuration secrets. In more advanced scenarios, specific approved components could be abused to transmit this data outward, including through unauthorized network requests. Certain templating features may further increase risk if invoked after unsafe deserialization, potentially opening paths toward code execution.

The vulnerability was discovered during security reviews focused on AI trust boundaries, where the researcher traced how untrusted data moved through internal processing paths. After responsible disclosure in early December 2025, the LangChain team acknowledged the issue and released security updates later that month.

The patched versions introduce stricter handling of internal object markers and disable automatic resolution of environment secrets by default, a feature that was previously enabled and contributed to the exposure risk. Developers are strongly advised to upgrade immediately and review related dependencies that interact with langchain-core.

Security experts stress that AI outputs should always be treated as untrusted input. Organizations are urged to audit logging, streaming, and caching mechanisms, limit deserialization wherever possible, and avoid exposing secrets unless inputs are fully validated. A similar vulnerability identified in LangChain’s JavaScript ecosystem underscores broader security challenges as AI frameworks become more interconnected.
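As a practical starting point, a defensive filter can strip reserved keys out of untrusted metadata before it reaches logging, streaming, or caching paths. The sketch below assumes reserved key names such as "lc" purely for illustration; the keys actually reserved depend on the framework and version in use.

```python
# Minimal defensive sketch; the reserved key names are assumptions, not an
# authoritative list for any specific langchain-core version.
RESERVED_KEYS = {"lc", "type", "id"}

def sanitize_metadata(meta: dict) -> dict:
    """Drop reserved keys from untrusted metadata before it is logged, streamed, or cached."""
    clean = {}
    for key, value in meta.items():
        if key in RESERVED_KEYS:
            continue  # never let user- or AI-controlled data impersonate internal objects
        clean[key] = sanitize_metadata(value) if isinstance(value, dict) else value
    return clean

user_meta = {"trace_id": "abc-123", "lc": 1, "type": "SecretLoader"}
print(sanitize_metadata(user_meta))  # {'trace_id': 'abc-123'}
```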

As AI adoption accelerates, maintaining strict data boundaries and secure design practices is essential to protecting both systems and users from emerging threats.

FCC Tightens Rules on Foreign-Made Drones to Address U.S. Security Risks



The U.S. Federal Communications Commission has introduced new restrictions targeting drones and essential drone-related equipment manufactured outside the United States, citing concerns that such technology could pose serious national security and public safety risks.

Under this decision, the FCC has updated its Covered List to include uncrewed aircraft systems and their critical components that are produced in foreign countries. The move is being implemented under authority provided by recent provisions in the National Defense Authorization Act. In addition to drones themselves, the restrictions also apply to associated communication and video surveillance equipment and services.

The FCC explained that while drones are increasingly used for legitimate purposes such as innovation, infrastructure monitoring, and public safety operations, they can also be misused. According to the agency, malicious actors including criminals, hostile foreign entities, and terrorist groups could exploit drone technology to conduct surveillance, disrupt operations, or carry out physical attacks.

The decision was further shaped by an assessment carried out by an interagency group within the Executive Branch that specializes in national security. This review concluded that certain foreign-produced drones and their components present unacceptable risks to U.S. national security as well as to the safety and privacy of people within the country.

Officials noted that these risks include unauthorized monitoring, potential theft of sensitive data, and the possibility of drones being used for disruptive or destructive activities over U.S. territory. Components such as data transmission systems, navigation tools, flight controllers, ground stations, batteries, motors, and communication modules were highlighted as areas of concern.

The FCC also linked the timing of the decision to upcoming large-scale international events that the United States is expected to host, including the 2026 FIFA World Cup and the 2028 Summer Olympics. With increased drone activity likely during such events, regulators aim to strengthen control over national airspace and reduce potential security threats.

While the restrictions emphasize the importance of domestic production, the FCC clarified that exemptions may be granted. If the U.S. Department of Homeland Security determines that a specific drone or component does not pose a security risk, it may still be allowed for use.

The agency also reassured consumers that the new rules do not prevent individuals from continuing to use drones they have already purchased. Retailers are similarly permitted to sell and market drone models that received government approval earlier this year.

This development follows the recent signing of the National Defense Authorization Act for Fiscal Year 2026 by U.S. President Donald Trump, which includes broader measures aimed at protecting U.S. airspace from unmanned aircraft that could threaten public safety.

The FCC’s action builds on earlier updates to the Covered List, including the addition of certain foreign technology firms in the past, as part of a wider effort to limit national security risks linked to critical communications and surveillance technologies.




Hackers Are Posing as Police to Steal User Data from Tech Companies

 


Cybersecurity investigators are warning about a growing threat in which cybercriminals impersonate law enforcement officers to unlawfully obtain sensitive user information from major technology companies. These attackers exploit emergency data request systems that are designed to help police respond quickly in life-threatening situations.

In one documented incident earlier this year, a US internet service provider received what appeared to be an urgent email from a police officer requesting user data. The request was treated as authentic, and within a short time, the company shared private details belonging to a gamer based in New York. The information included personal identifiers such as name, residential address, phone numbers, and email contact. Later investigations revealed that the email was fraudulent and not sent by any law enforcement authority.

Journalistic review of internal evidence indicates that the message originated from an organized hacking group that profits by selling stolen personal data. These groups offer what is commonly referred to as doxing services, where private information is extracted from companies and delivered to paying clients.

One individual associated with the operation admitted involvement in the incident and claimed that similar impersonation tactics have worked against multiple large technology platforms. According to the individual, the process requires minimal time and relies on exploiting weak verification procedures. Some companies acknowledged receiving inquiries about these incidents but declined to provide further comment.

Law enforcement officials have expressed concern over the misuse of officer identities, particularly when attackers use real names, badge numbers, and department references to appear legitimate. This tactic significantly increases the likelihood that companies will comply without deeper scrutiny.

Under normal circumstances, police data requests are processed through formal legal channels, often taking several days. Emergency requests, however, are designed to bypass standard timelines when immediate harm is suspected. Hackers take advantage of this urgency by submitting forged documents that mimic legitimate legal language, seals, and citations.

Once attackers obtain a small amount of publicly accessible data, such as a username or IP address, they can convincingly frame their requests. In some cases, falsified warrants were used to seek even more sensitive records, including communication logs.

Evidence reviewed by journalists suggests the operation is extensive, involving hundreds of fraudulent requests and generating substantial financial gain. Materials such as call recordings and internal documents indicate repeated successful interactions with corporate legal teams. In certain cases, companies later detected irregularities and blocked further communication, introducing additional safeguards without disclosing technical details.

A concerning weakness lies in the fragmented nature of US law enforcement communication systems. With thousands of agencies using different email domains and formats, companies struggle to establish consistent verification standards. Attackers exploit this by registering domains that closely resemble legitimate police addresses and spoofing official phone numbers.
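One partial mitigation on the corporate side is to check a requesting domain against a maintained allowlist of agency domains and flag near-matches for manual review. The Python sketch below is a minimal illustration; the agency domains, similarity threshold, and example address are invented, and a match should never replace out-of-band verification.

```python
# Hedged sketch of sender-domain screening; domains, threshold, and the sample
# address are made up for illustration only.
from difflib import SequenceMatcher

KNOWN_AGENCY_DOMAINS = {"police.exampletown.gov", "sheriff.example-county.us"}

def assess_sender(sender_email: str) -> str:
    domain = sender_email.rsplit("@", 1)[-1].lower()
    if domain in KNOWN_AGENCY_DOMAINS:
        return "known domain - still confirm via the agency's published phone number"
    for known in KNOWN_AGENCY_DOMAINS:
        # crude lookalike check: flag domains that are nearly identical to an allowlisted one
        if SequenceMatcher(None, domain, known).ratio() > 0.8:
            return f"SUSPICIOUS: '{domain}' closely resembles '{known}'"
    return "unknown domain - escalate for manual verification"

print(assess_sender("records@police.examp1etown.gov"))  # flagged as a lookalike
```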

Experts note that many companies still rely on email-based systems for emergency data requests and publicly available submission guidelines. While intended to assist law enforcement, these instructions can unintentionally provide attackers with ready-made templates.

Although warnings about fake emergency requests have circulated for years, recent findings show the practice remains widespread. The issue highlights a broader challenge in balancing rapid response with rigorous verification, especially when human judgment is pressured by perceived urgency. Without systemic improvements, trust-based processes will continue to be abused.


Why Long-Term AI Conversations Are Quietly Becoming a Major Corporate Security Weakness

 



Many organisations are starting to recognise a security problem that has been forming silently in the background. Conversations employees hold with public AI chatbots can accumulate into a long-term record of sensitive information, behavioural patterns, and internal decision-making. As reliance on AI tools increases, these stored interactions may become a serious vulnerability that companies have not fully accounted for.

The concern resurfaced after a viral trend in late 2024 in which social media users asked AI models to highlight things they “might not know” about themselves. Most treated it as a novelty, but the trend revealed a larger issue. Major AI providers routinely retain prompts, responses, and related metadata unless users disable retention or use enterprise controls. Over extended periods, these stored exchanges can unintentionally reveal how employees think, communicate, and handle confidential tasks.

This risk becomes more severe when considering the rise of unapproved AI use at work. Recent business research shows that while the majority of employees rely on consumer AI tools to automate or speed up tasks, only a fraction of companies officially track or authorise such usage. This gap means workers frequently insert sensitive data into external platforms without proper safeguards, enlarging the exposure surface beyond what internal security teams can monitor.

Vendor assurances do not fully eliminate the risk. Although companies like OpenAI, Google, and others emphasize encryption and temporary chat options, their systems still operate within legal and regulatory environments. One widely discussed court order in 2025 required the preservation of AI chat logs, including previously deleted exchanges. Even though the order was later withdrawn and the company resumed standard deletion timelines, the case reminded businesses that stored conversations can resurface unexpectedly.

Technical weaknesses also contribute to the threat. Security researchers have uncovered misconfigured databases operated by AI firms that contained user conversations, internal keys, and operational details. Other investigations have demonstrated that prompt-based manipulation in certain workplace AI features can cause private channel messages to leak. These findings show that vulnerabilities do not always come from user mistakes; sometimes the supporting AI infrastructure itself becomes an entry point.

Criminals have already shown how AI-generated impersonation can be exploited. A notable example involved attackers using synthetic voice technology to imitate an executive, tricking an employee into transferring funds. As AI models absorb years of prompt history, attackers could use stylistic and behavioural patterns to impersonate employees, tailor phishing messages, or replicate internal documents.

Despite these risks, many companies still lack comprehensive AI governance. Studies reveal that employees continue to insert confidential data into AI systems, sometimes knowingly, because it speeds up their work. Compliance requirements such as GDPR’s strict data minimisation rules make this behaviour even more dangerous, given the penalties for mishandling personal information.

Experts advise organisations to adopt structured controls. This includes building an inventory of approved AI tools, monitoring for unsanctioned usage, conducting risk assessments, and providing regular training so staff understand what should never be shared with external systems. Some analysts also suggest that instead of banning shadow AI outright, companies should guide employees toward secure, enterprise-level AI platforms.
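A simple technical complement to such policies is a pre-submission filter that redacts likely credentials and personal identifiers before a prompt leaves the organisation for an external AI service. The Python sketch below is illustrative only; the regular expressions are examples, not a complete data-loss-prevention ruleset.

```python
# Illustrative pre-submission redaction filter; patterns are examples, not exhaustive.
import re

SECRET_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything that looks like a credential or personal identifier."""
    for label, pattern in SECRET_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(redact("Summarise this: contact jane.doe@corp.example, key sk-a1b2c3d4e5f6a7b8c9d0"))
```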

If companies fail to act, each casual AI conversation can slowly accumulate into a dataset capable of exposing confidential operations. While AI brings clear productivity benefits, unmanaged use may convert everyday workplace conversations into one of the most overlooked security liabilities of the decade.

Nearly 50% of IoT Device Connections Pose Security Threats, Study Finds

 




A new security analysis has revealed that nearly half of all network communications between Internet of Things (IoT) devices and traditional IT systems come from devices that pose serious cybersecurity risks.

The report, published by cybersecurity company Palo Alto Networks, analyzed data from over 27 million connected devices across various organizations. The findings show that 48.2 percent of these IoT-to-IT connections came from devices classified as high risk, while an additional 4 percent were labeled critical risk.

These figures underline a growing concern that many organizations are struggling to secure the rapidly expanding number of IoT devices on their networks. Experts noted that a large portion of these devices operate with outdated software, weak default settings, or insecure communication protocols, making them easy targets for cybercriminals.


Why It’s a Growing Threat

IoT devices, ranging from smart security cameras and sensors to industrial control systems, are often connected to the same network as the computers and servers used for daily business operations. This creates a problem: once a vulnerable IoT device is compromised, attackers can move deeper into the network, access sensitive data, and disrupt normal operations.

The study emphasized that the main cause behind such widespread exposure is poor network segmentation. Many organizations still run flat networks, where IoT devices and IT systems share the same environment without proper separation. This allows a hacker who infiltrates one device to move easily between systems and cause greater harm.
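A quick way to surface this kind of exposure is to check a device inventory for IoT devices that share a subnet with core IT systems. The Python sketch below is a simplified illustration; the inventory entries and the /24 subnet assumption are hypothetical.

```python
# Hedged sketch: flag IoT devices that sit in the same subnet as core IT systems,
# a rough indicator of a "flat network". Inventory and subnet size are assumptions.
from ipaddress import ip_address, ip_network

inventory = [
    {"name": "hr-database", "ip": "10.0.1.10", "kind": "it"},
    {"name": "lobby-camera", "ip": "10.0.1.42", "kind": "iot"},
    {"name": "warehouse-sensor", "ip": "10.0.8.7", "kind": "iot"},
]

it_subnets = {
    ip_network(f"{d['ip']}/24", strict=False) for d in inventory if d["kind"] == "it"
}

for device in inventory:
    if device["kind"] != "iot":
        continue
    addr = ip_address(device["ip"])
    if any(addr in net for net in it_subnets):
        print(f"flat-network risk: {device['name']} shares a /24 with IT systems")
```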


How Organizations Can Reduce Risk

Security professionals recommend several key actions for both small businesses and large enterprises to strengthen their defenses:

1. Separate Networks:

Keep IoT devices isolated from core IT infrastructure through proper network segmentation. This prevents threats in one area from spreading to another.

2. Adopt Zero Trust Principles:

Follow a security model that does not automatically trust any device or user. Each access request should be verified, and only the minimum level of access should be allowed.

3. Improve Device Visibility:

Maintain an accurate inventory of all devices connected to the network, including personal or unmanaged ones. This helps identify and secure weak points before they can be exploited.

4. Keep Systems Updated:

Regularly patch and update device firmware and software. Unpatched systems often contain known vulnerabilities that attackers can easily exploit.

5. Use Strong Endpoint Protection:

Deploy Endpoint Detection and Response (EDR) or Extended Detection and Response (XDR) tools across managed IT systems, and use monitoring solutions for IoT devices that cannot run these tools directly.


As organizations rely more on connected devices to improve efficiency, the attack surface grows wider. Without proper segmentation, monitoring, and consistent updates, one weak device can become an entry point for cyberattacks that threaten entire operations.

The report reinforces an important lesson: proactive network management is the foundation of cybersecurity. Ensuring visibility, limiting trust, and continuously updating systems can significantly reduce exposure to emerging IoT-based threats.




Aussie Telecom Breach Raises Alarm Over Customer Data Safety

 




A recent cyberattack on TPG Telecom has reignited concerns about how safe personal information really is in the hands of major companies. What the provider initially downplayed as a “limited” incident has in fact left hundreds of thousands of customers vulnerable to online scams.

The intrusion was uncovered on August 16, when unusual activity was detected in the systems of iiNet, one of TPG’s subsidiary brands. Hackers were able to get inside by misusing stolen employee logins, which granted access to iiNet’s order management platform. This internal tool is mainly used to handle service requests, but it contained far more sensitive data than many would expect.


Investigators now estimate that the attackers walked away with:

• Roughly 280,000 email addresses linked to iiNet accounts

• Close to 20,000 landline phone numbers

• Around 10,000 customer names, addresses, and contact details

• About 1,700 modem setup credentials


Although no banking details or government ID documents were exposed, cybersecurity experts caution that this type of information is highly valuable for criminals. Email addresses and phone numbers can be exploited to craft convincing phishing campaigns, while stolen modem passwords could give attackers the chance to install malware or hijack internet connections.

TPG has apologised for the breach and is reaching out directly to customers whose details were involved. Those not affected are also being notified for reassurance. So far, there have been no confirmed reports of the stolen records being used maliciously.

Even so, the risks are far from minor. Phishing messages that appear to come from trusted sources can lead victims to unknowingly share bank credentials, install harmful software, or hand over personal details that enable identity theft. As a result, affected customers are being urged to remain alert, treat incoming emails with suspicion, and update passwords wherever possible, especially on home modems.
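For customers replacing modem or account passwords, a randomly generated password is safer than a hand-picked one. The short Python sketch below uses the standard library's secrets module; the character set and length are reasonable defaults, not requirements set by TPG or iiNet.

```python
# Small sketch for generating a strong replacement password (e.g. for a home modem).
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def strong_password(length: int = 20) -> str:
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(strong_password())
```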

The company has said it is cooperating with regulators and tightening its security protocols. But the case underlines a growing reality: personal data does not need to include credit card numbers to become a target. Seemingly routine details, when collected in bulk, can still provide criminals with the tools they need to run scams.

As cyberattacks grow more frequent, customers are left with the burden of vigilance, while companies face rising pressure to prove that “limited” breaches do not translate into large-scale risks.



Stop! Don’t Let That AI App Spy on Your Inbox, Photos, and Calls

 



Artificial intelligence is now part of almost everything we use — from the apps on your phone to voice assistants and even touchscreen menus at restaurants. What once felt futuristic is quickly becoming everyday reality. But as AI gets more involved in our lives, it’s also starting to ask for more access to our private information, and that should raise concerns.

Many AI-powered tools today request broad permissions, sometimes more than they truly need to function. These requests often include access to your email, contacts, calendar, messages, or even files and photos stored on your device. While the goal may be to help you save time, the trade-off could be your privacy.

This situation is similar to how people once questioned why simple mobile apps, such as flashlight or calculator tools, needed access to personal data like location or contact lists. The reason? That information could be sold or used for profit. Now, some AI tools are taking the same route, asking for access to highly personal data to improve their systems or provide services.

One example is a new web browser powered by AI. It allows users to search, summarize emails, and manage calendars. But in exchange, it asks for a wide range of permissions like sending emails on your behalf, viewing your saved contacts, reading your calendar events, and sometimes even seeing employee directories at workplaces. While companies claim this data is stored locally and not misused, giving such broad access still carries serious risks.

Other AI apps promise to take notes during calls or schedule appointments. But to do this, they often request live access to your phone conversations, calendar, contacts, and browsing history. Some even go as far as reading photos on your device that haven’t been uploaded yet. That’s a lot of personal information for one assistant to manage.

Experts warn that these apps are capable of acting independently on your behalf, which means you must trust them not just to store your data safely but also to use it responsibly. The issue is that AI can make mistakes, and when that happens, real humans at these companies might look through your private information to figure out what went wrong.

So before granting an AI app permission to access your digital life, ask yourself: is the convenience really worth it? Giving these tools full access is like handing over a digital copy of your entire personal history, and once it’s done, there’s no taking it back.

Always read permission requests carefully. If an app asks for more than it needs, it’s okay to say no.

EU Border Security Database Found to Have Serious Cyber Flaws

 



A recent investigative report has revealed critical cybersecurity concerns in one of the European Union’s key border control systems. The system in question, known as the Second Generation Schengen Information System (SIS II), is a large-scale database used across Europe to track criminal suspects, unauthorized migrants, and missing property. While this system plays a major role in maintaining regional safety, new findings suggest its digital backbone may be weaker than expected.

According to a joint investigation by Bloomberg and Lighthouse Reports, SIS II contains a significant number of unresolved security issues. Though there is no confirmed case of data being stolen, experts warn that poor account management and delayed software fixes could leave the system open to misuse. One of the main issues flagged was the unusually high number of user accounts with access to the database, many of which reportedly had no clear purpose.
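Account sprawl of this kind is usually tackled with routine access reviews. The Python sketch below illustrates one simple audit rule, flagging accounts with no documented purpose or long inactivity; the account records, field names, and thresholds are invented for illustration and are not drawn from SIS II itself.

```python
# Illustrative account-hygiene audit: flag accounts with no documented purpose or
# long inactivity. Records, fields, and thresholds are assumptions for the example.
from datetime import date, timedelta

accounts = [
    {"user": "border-post-17", "purpose": "alert lookups", "last_login": date(2025, 11, 30)},
    {"user": "contractor-legacy", "purpose": "", "last_login": date(2023, 2, 4)},
]

STALE_AFTER = timedelta(days=180)
today = date(2025, 12, 1)  # pinned so the example output is reproducible

for acct in accounts:
    reasons = []
    if not acct["purpose"]:
        reasons.append("no documented purpose")
    if today - acct["last_login"] > STALE_AFTER:
        reasons.append("inactive for over 180 days")
    if reasons:
        print(f"review {acct['user']}: {', '.join(reasons)}")
```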

SIS II has been in use since 2013 and stores over 90 million records, most of which involve things like stolen vehicles and documents. However, about 1.7 million entries involve individuals. These personal records often remain unknown to those listed until they are stopped by police or immigration officers, raising concerns about privacy and oversight in the event of a breach.

One legal researcher familiar with European digital systems warned that a successful cyberattack could lead to wide-ranging consequences, potentially affecting millions of people across the EU.

Another growing concern is that SIS II is currently hosted on a closed, internal network—but that is about to change. The system is expected to be integrated with a new border management tool called the Entry/Exit System (EES), which will require travelers to provide fingerprints and facial images when entering or leaving countries in the Schengen zone. Since the EES will be accessible online, experts worry it could create a new path for hackers to reach SIS II, making the whole network more vulnerable.

The technical work behind SIS II is managed by a French company, but investigations show that fixing critical security problems has taken far longer than expected. Some fixes reportedly took several months or even years to implement, despite contractual rules that require urgent patches to be handled within two months.

The EU agency responsible for overseeing SIS II, known as eu-LISA, contracts much of the technical work to private firms. Internal audits raised concerns that management wasn’t always informed about known security risks. In response, the agency claimed that it regularly tests and monitors all systems under its supervision.

As Europe prepares to roll out more connected security tools, experts stress the need for stronger safeguards to protect sensitive data and prevent future breaches.