
Protecting Sensitive Data When Employees Use AI Chatbots


 

In today's digitised world, where artificial intelligence tools are rapidly reshaping how people work, communicate, and collaborate, a quiet but pressing risk has emerged: what individuals choose to share with chatbots may not remain entirely private.

A patient might ask ChatGPT for advice about an embarrassing medical condition, or an employee might upload sensitive corporate documents into Google's Gemini to generate a summary; either way, the information they disclose can end up feeding the algorithms that power these systems.

Many experts have pointed out that AI models are built on large datasets scraped from across the internet, including blogs, news articles, and social media posts, often without user consent, raising not only copyright problems but also significant privacy concerns.

Because machine learning processes are opaque, experts warn that once data has been ingested into a model's training pool, it is almost impossible to remove. Individuals and businesses alike are therefore forced to ask how much trust they can place in tools that, while extremely powerful, may also expose them to unseen risks.

This is particularly true in the age of hybrid work, where artificial intelligence tools such as ChatGPT are rapidly becoming a new frontier for data breaches. While these platforms offer businesses a number of valuable features, including the ability to draft content and troubleshoot software, they also carry inherent risks.

Experts warn that poor management of these tools can result in leakage of training data, privacy violations, and accidental disclosure of sensitive company information. The latest Fortinet Work From Anywhere study highlights the magnitude of the problem: nearly 62% of organisations reported experiencing data breaches as a result of the switch to remote working.

Analysts believe some of these incidents could have been prevented if employees had stayed on-premises with company-managed devices and applications. Nevertheless, security experts argue that the solution is not a return to the office, but a robust data loss prevention (DLP) framework that safeguards information in a decentralised work environment.

To prevent sensitive information from being lost, stolen, or leaked across networks, storage systems, endpoints, and cloud environments, a robust DLP strategy combines tools, technologies, and best practices. A successful framework covers data at rest, in motion, and in use, and ensures that it is continuously monitored and protected.

Experts outline four essential components that a framework must have to succeed:

  1. Classify company data and assign security levels across the network, and keep the network itself secure (a minimal sketch of such pattern-based classification follows below).
  2. Maintain strict compliance when storing, deleting, and retaining user information.
  3. Educate staff on clear policies that prevent accidental sharing of information or unauthorised access to it.
  4. Adopt protection tools that can detect phishing, ransomware, insider threats, and unintentional exposures.

Technology alone is not enough to protect organisations; clear policies are just as essential. With DLP implemented correctly, organisations are not only less likely to suffer leaks, but also more likely to comply with industry standards and government regulations.
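As a purely illustrative example, and not a description of any specific DLP product, the sketch below shows the flavour of pattern-based classification mentioned in the first component: a few regular expressions flag text containing card-number-like or email-like strings and assign it a security level. Real DLP tools use far richer detection, including machine learning and document fingerprinting; the patterns and labels here are assumptions for demonstration.

```python
import re

# Illustrative patterns only; production DLP uses much broader detection logic.
PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),        # crude card-number shape
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # simple email shape
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),        # rough NI number shape
}

def classify(text: str) -> str:
    """Return a hypothetical security level for a piece of text."""
    hits = [name for name, pattern in PATTERNS.items() if pattern.search(text)]
    if "payment_card" in hits or "uk_ni_number" in hits:
        return "restricted"      # highest sensitivity: block or encrypt
    if hits:
        return "confidential"    # contains personal identifiers
    return "internal"            # default level for unmatched content

if __name__ == "__main__":
    print(classify("Please pay invoice 42 to card 4111 1111 1111 1111"))  # restricted
    print(classify("Contact jane.doe@example.com for the summary"))       # confidential
    print(classify("Meeting moved to 3pm"))                               # internal
```

In practice, labels like these would feed the monitoring and blocking rules described above, so that "restricted" content is never allowed to leave the network unencrypted.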

Striking a balance between innovation and responsibility is crucial for businesses that adopt hybrid work and AI-based tools. Under the UK General Data Protection Regulation (UK GDPR), businesses that use AI platforms such as ChatGPT must meet strict obligations designed to protect personal information from unauthorised access.

Any data that could identify an individual - such as an employee file, customer contact details, or a client database - falls within the regulation's scope, and business owners remain responsible for protecting that data even when it is handled by third parties. To cope with this, companies need to evaluate carefully how external platforms process, store, and protect their data.

They often do so through legally binding Data Processing Agreements that specify confidentiality standards, privacy controls, and data deletion requirements. It is equally important that organisations inform individuals when their information is fed into artificial intelligence tools and, where necessary, obtain their explicit consent.

The law also requires firms to implement “appropriate technical and organisational measures”, which includes checking whether AI vendors store data overseas, ensuring it is held securely to prevent misuse, and determining what safeguards are in place. Beyond the financial penalties imposed for non-compliance, there is the risk of eroding employee and customer trust, which can be far harder to repair.

To keep data practices safe in the age of artificial intelligence, businesses are increasingly turning to Data Loss Prevention (DLP) solutions to automate the otherwise unmanageable task of monitoring vast networks of users, devices, and applications. Four primary categories of DLP software have emerged, defined by the state and flow of the information they protect.

Network DLP tools, often powered by artificial intelligence and machine learning, track data movement within and outside a company's systems to spot suspicious traffic, whether through downloads, transfers, or mobile connections. Endpoint DLP is installed directly on users' computers, monitoring memory, cached data, and files as they are accessed or transferred, and blocking unauthorised activity at the source.

Cloud DLP solutions safeguard information stored in online environments such as backups, archives, and databases, relying on encryption, scanning, and access controls to secure corporate assets. Email DLP, meanwhile, keeps sensitive details from leaking through internal and external correspondence, whether the exposure is accidental, malicious, or the result of a compromised mailbox.
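As a hedged illustration of the email category, and not a description of any vendor's product, the snippet below sketches how an outbound-mail check might quarantine a message whose body or subject contains a card-number-like string; the pattern and the "quarantine" verdict are assumptions for demonstration.

```python
import re
from email.message import EmailMessage

# Simplistic example pattern; real email DLP also inspects attachments, images, and links.
CARD_LIKE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def outbound_verdict(msg: EmailMessage) -> str:
    """Return a hypothetical verdict for an outgoing message: 'deliver' or 'quarantine'."""
    body = msg.get_body(preferencelist=("plain",))
    text = body.get_content() if body is not None else ""
    if CARD_LIKE.search(text) or CARD_LIKE.search(msg.get("Subject", "")):
        return "quarantine"   # hold the message for review instead of sending it
    return "deliver"

if __name__ == "__main__":
    msg = EmailMessage()
    msg["Subject"] = "Invoice details"
    msg.set_content("Card number is 4111 1111 1111 1111, please process today.")
    print(outbound_verdict(msg))   # quarantine
```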

While some businesses question whether their Extended Detection and Response (XDR) platforms already cover this ground, experts note that DLP serves a different purpose: XDR provides broad threat detection and incident response, while DLP focuses on protecting sensitive data, categorising information, reducing breach risks, and ultimately safeguarding company reputations.

Major technology companies have taken different approaches to the data their AI chatbots collect, often raising concerns about transparency and control. Google, for example, retains Gemini conversations for 18 months by default, though users can change this setting. Even with activity tracking disabled, chats are stored for at least 72 hours, and some conversations may be reviewed by human moderators to refine the system.

Google also warns users not to share confidential information and notes that conversations already flagged for human review cannot be erased. Meta's artificial intelligence assistant, available on Facebook, WhatsApp, and Instagram, is trained on public posts, photos, captions, and data scraped from around the web, though private messages are not used.

Citizens of the European Union and the United Kingdom can object to the use of their information for training under stricter privacy laws, but people in countries without such protections, such as the United States, have fewer options. Where Meta's opt-out is available at all, the process is complicated: users must submit evidence of their interactions with the chatbot to support the request.

Microsoft's Copilot, for its part, offers no opt-out mechanism for personal accounts; users can delete their interaction history through account settings, but there is no way to prevent future data retention. These practices show how patchy AI privacy controls can be, with users' choices often shaped more by the laws of their jurisdiction than by corporate policy.

The responsibility facing organisations as they navigate this evolving landscape is not only to comply with regulations and implement technical safeguards, but also to cultivate a culture of digital responsibility. Employees need to understand the value of the information they handle and to exercise caution when using AI-powered applications.

Proactive measures such as clear guidelines on chatbot usage, regular risk assessments, and checks that vendors meet stringent data protection standards can significantly reduce an organisation's threat exposure.

Businesses that implement a strong governance framework are not only better protected but can also take advantage of AI with confidence, enhancing productivity, streamlining workflows, and staying competitive in a data-driven economy. The goal, then, is not to avoid AI but to embrace it responsibly, balancing innovation with vigilance.

By combining regulatory compliance, advanced DLP solutions, and transparent communication with staff and stakeholders, a company can turn its use of AI from a potential liability into a strategic asset. Trust is currency in a marketplace where security is king, and companies that protect sensitive data will not only prevent costly breaches but also strengthen their reputation over the long run.

Two-factor authentication complicates security with privacy risks, unreliability, and permanent lockouts

 

Two-factor authentication has become the default standard for online security, showing up everywhere from banking portals to productivity tools. Its purpose is clear: even if someone steals your credentials, they still need a second verification step, usually through an email code, SMS, or an authenticator app. In theory, this additional barrier makes hacking more difficult, but in practice, the burden often falls more heavily on legitimate users than on attackers. For many people, what should be a security measure becomes a frustrating obstacle course, with multiple windows, constant device switching, and codes arriving at the least convenient times. 
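For readers curious what that second step actually computes, here is a minimal, illustrative sketch of the time-based one-time password (TOTP) scheme most authenticator apps follow (RFC 6238): a six-digit code derived from a shared secret and the current 30-second time window. This is a demonstration only, not any vendor's implementation, and the Base32 secret is a well-known demo value rather than a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a TOTP code from a Base32 shared secret and the current time."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                # current 30-second time step
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()    # HMAC-SHA1 per RFC 4226
    offset = digest[-1] & 0x0F                            # dynamic truncation offset
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

if __name__ == "__main__":
    # "JBSWY3DPEHPK3PXP" is a commonly used demo secret in TOTP examples.
    print(totp("JBSWY3DPEHPK3PXP"))
```

Because both the app and the server derive the code independently from the same secret, no code ever travels over SMS or email, which is why authenticator apps avoid some of the delivery problems discussed below.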

The problem lies in balancing protection with usability. While the odds of a random hacker attempting to log in may be low, users are the ones repeatedly forced through verification loops. VPN usage adds to the issue, since changing IP addresses often triggers additional checks. Instead of making accounts safer, the process can feel more like punishment for ordinary login attempts. 

Despite being promoted as a cornerstone of modern cybersecurity, two-factor authentication is only as strong as the delivery method. SMS codes remain widely used, even though SIM swapping is a well-documented threat. Email-based codes can also be problematic—if someone gains access to your primary inbox, they inherit every linked account. Even Big Tech companies sometimes struggle with reliable implementation, with failed code deliveries or inconsistent prompts leaving users stranded. A network outage or downtime at a provider can completely block access to essential services. 

Beyond inconvenience, 2FA introduces hidden privacy and security trade-offs. Every login generates more email or text messages, forcing users to hand over personal phone numbers and email addresses to multiple companies. This not only clutters inboxes but also creates new opportunities for spam or unwanted marketing. Providers like email hosts and carriers gain visibility into user activity, tracking which apps are accessed and when, raising further concerns about surveillance and data use. For users who value a clean inbox and minimal exposure, the system feels invasive rather than protective. 

The most damaging consequence is the risk of permanent lockouts. Losing access to a backup email or phone number can create a cascade of failures that trap users outside critical accounts. Recovery systems, often automated or handled by AI chatbots, provide little flexibility. Some users have experienced losing access entirely because verification codes went to accounts with their own 2FA requirements, resulting in a cycle that cannot be broken. The fallout can disrupt personal, academic, and professional life, with little recourse available. 

While two-factor authentication was designed as an essential layer of defense against account takeovers, its execution often causes more harm than good. Between unreliability, privacy risks, inbox clutter, and the looming threat of irreversible lockouts, the cost of this security tool raises serious questions about whether its benefits truly outweigh the risks.

China’s Ministry of State Security Warns of Biometric Data Risks in Crypto Reward Schemes

 

China’s Ministry of State Security (MSS) has issued a strong warning over the collection of biometric information by foreign companies in exchange for cryptocurrency rewards, describing the practice as a potential danger to both personal privacy and national security. The announcement, released on the MSS’s official WeChat account, highlighted reported incidents of large-scale iris scanning linked to digital token distributions. 

Although the statement did not specifically name the organization involved, the description closely matches Worldcoin, a project developed by Tools for Humanity. Worldcoin has drawn global attention for its use of spherical “orb” devices that scan an individual’s iris to generate a unique digital identity, which is then tied to distributions of its cryptocurrency, WLD. 

According to the MSS, the transfer of highly sensitive biometric data to foreign entities carries risks that extend far beyond its intended use. Such information could be misused in ways that compromise personal safety or even national security. The agency’s remarks add to a growing chorus of global concerns about how biometric data is handled, particularly within the cryptocurrency and decentralized finance sectors. 

Worldcoin, launched in 2023, has already faced investigations and regulatory pushback in several countries. Concerns have largely centered around data protection practices and whether users fully understand and consent to the collection of their biometric information. In May, Indonesian regulators suspended the company’s permit, citing irregularities in its identity verification services. The project later announced a voluntary pause of its proof-of-personhood operations in Indonesia to clarify compliance requirements. 

China has long maintained a restrictive approach toward cryptocurrencies, banning trading and initial coin offerings while warning against speculative risks. The MSS’s latest statement broadens this position, suggesting that data collection tied to crypto incentives is not only a consumer protection issue but also one of national security—particularly when foreign companies are involved in managing or storing sensitive personal data.  

The issue reflects a wider international debate about balancing innovation with privacy. Proponents of biometric-based verification argue it offers a scalable way to distinguish real human users from bots in the Web3 ecosystem. Critics counter that once biometric information is collected, the possibility of data leaks, misuse, or unauthorized access remains, even with encryption.

Similar privacy concerns have emerged globally. In Europe, regulators are reviewing Worldcoin’s activities under the GDPR framework, while Kenya suspended new registrations in 2023. The MSS has now urged Chinese citizens to be cautious about offers that involve trading personal data for cryptocurrency, signaling that further oversight of such projects could follow.

Free VPN Big Mama Raises Security Concerns Amid Cybercrime Links

 

Big Mama VPN, a free virtual private network app, is drawing scrutiny for its involvement in both legitimate and questionable online activities. The app, popular among Android users with over a million downloads, provides a free VPN service while also enabling users to sell access to their home internet connections. This service is marketed as a residential proxy, allowing buyers to use real IP addresses for activities ranging from ad verification to scraping pricing data. However, cybersecurity experts warn of significant risks tied to this dual functionality. 

Teenagers have recently gained attention for using Big Mama VPN to cheat in the virtual reality game Gorilla Tag. By side-loading the app onto Meta’s Oculus headsets, players exploit location delays to gain an unfair advantage. While this usage might seem relatively harmless, the real issue lies in how Big Mama’s residential proxy network operates. Researchers have linked the app to cybercrime forums where it is heavily promoted for use in activities such as distributed denial-of-service (DDoS) attacks, phishing campaigns, and botnets. Cybersecurity firm Trend Micro discovered that Meta VR headsets are among the most popular devices using Big Mama VPN, alongside Samsung and Xiaomi devices. 

They also identified a vulnerability in the VPN’s system, which could have allowed proxy users to access local networks. Big Mama reportedly addressed and fixed this flaw within a week of it being flagged. However, the larger problem persists: using Big Mama exposes users to significant privacy risks. When users download the VPN, they implicitly consent to having their internet connection routed for other users. This is outlined in the app’s terms and conditions, but many users fail to fully understand the implications. Through its proxy marketplace, Big Mama sells access to tens of thousands of IP addresses worldwide, accepting payments exclusively in cryptocurrency. 

Cybersecurity researchers at firms like Orange Cyberdefense and Kela have linked this marketplace to illicit activities, with over 1,000 posts about Big Mama appearing on cybercrime forums. Big Mama’s ambiguous ownership further complicates matters. While the company is registered in Romania, it previously listed an address in Wyoming. Its representative, using the alias Alex A, claims the company does not advertise on forums and logs user activity to cooperate with law enforcement. Despite these assurances, the app has been repeatedly flagged for its potential role in cyberattacks, including an incident reported by Cisco Talos. 

Free VPNs like Big Mama often come with hidden costs, sacrificing user privacy and security for financial viability. By selling access to residential proxies, Big Mama has opened doors for cybercriminals to exploit unsuspecting users’ internet connections. This serves as a cautionary tale about the dangers of free services in the digital age. Users are advised to exercise extreme caution when downloading apps, especially from unofficial sources, and to consider the potential trade-offs involved in using free VPN services.

The Privacy Risks of ChatGPT and AI Chatbots

 


AI chatbots like ChatGPT have captured widespread attention for their remarkable conversational abilities, allowing users to engage on diverse topics with ease. However, while these tools offer convenience and creativity, they also pose significant privacy risks. The very technology that powers lifelike interactions can also store, analyze, and potentially resurface user data, raising critical concerns about data security and ethical use.

The Data Behind AI's Conversational Skills

Chatbots like ChatGPT rely on Large Language Models (LLMs) trained on vast datasets to generate human-like responses. This training often includes learning from user interactions. Much like how John Connor taught the Terminator quirky catchphrases in Terminator 2: Judgment Day, these systems refine their capabilities through real-world inputs. However, this improvement process comes at a cost: personal data shared during conversations may be stored and analyzed, often without users fully understanding the implications.

For instance, OpenAI’s terms and conditions explicitly state that data shared with ChatGPT may be used to improve its models. Unless users actively opt out through privacy settings, all shared information—from casual remarks to sensitive details like financial data—can be logged and analyzed. Although OpenAI claims to anonymize and aggregate user data for further study, the risk of unintended exposure remains.

Real-World Privacy Breaches

Despite assurances of data security, breaches have occurred. In May 2023, hackers exploited a vulnerability in ChatGPT’s Redis library, compromising the personal data of around 101,000 users. This breach underscored the risks associated with storing chat histories, even when companies emphasize their commitment to privacy. Similarly, companies like Samsung faced internal crises when employees inadvertently uploaded confidential information to chatbots, prompting some organizations to ban generative AI tools altogether.

Governments and industries are starting to address these risks. For instance, in October 2023, President Joe Biden signed an executive order focusing on privacy and data protection in AI systems. While this marks a step in the right direction, legal frameworks remain unclear, particularly around the use of user data for training AI models without explicit consent. Current practices are often classified as “fair use,” leaving consumers exposed to potential misuse.

Protecting Yourself in the Absence of Clear Regulations

Until stricter regulations are implemented, users must take proactive steps to safeguard their privacy while interacting with AI chatbots. Here are some key practices to consider:

  1. Avoid Sharing Sensitive Information
    Treat chatbots as advanced algorithms, not confidants. Avoid disclosing personal, financial, or proprietary information, no matter how personable the AI seems (a minimal redaction sketch follows this list).
  2. Review Privacy Settings
    Many platforms offer options to opt out of data collection. Regularly review and adjust these settings to limit the data shared with AI systems.
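As a purely illustrative precaution, and not a feature of ChatGPT or any other platform, the sketch below strips obvious identifiers from a prompt before it is sent to a chatbot. The regular expressions are simplistic assumptions and will not catch every kind of sensitive detail; they only show the general idea of redacting before sharing.

```python
import re

# Assumed, simplistic patterns; real redaction needs broader coverage (names, addresses, IDs).
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # email-like strings
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),          # card-number-like strings
]

def redact(prompt: str) -> str:
    """Replace obvious identifiers with placeholders before sending text to a chatbot."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

if __name__ == "__main__":
    print(redact("Summarise this: invoice for jane.doe@example.com, card 4111 1111 1111 1111"))
    # -> Summarise this: invoice for [EMAIL], card [CARD]
```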

Sevco Report Exposes Privacy Risks in iOS and macOS Due to Mirroring Bug

 

A new cybersecurity report from Sevco has uncovered a critical vulnerability in macOS 15.0 Sequoia and iOS 18, which exposes personal data through iPhone apps when devices are mirrored onto work computers. The issue arose when Sevco researchers detected personal iOS apps showing up on corporate Mac devices. This triggered a deeper investigation into the problem, revealing a systemic issue affecting multiple upstream software vendors and customers. The bug creates two main concerns: employees’ personal data could be unintentionally accessed by their employers, and companies could face legal risks for collecting that data.  

Sevco highlighted that while employees may worry about their personal lives being exposed, companies also face potential data liability even if the access occurs unintentionally. This is especially true when personal iPhones are connected to company laptops or desktops, leading to private data becoming accessible. Sean Wright, a cybersecurity expert, commented that the severity of the issue depends on the level of trust employees have in their employers. According to Wright, individuals who are uncomfortable with their employers having access to their personal data should avoid using personal devices for work-related tasks or connecting them to corporate systems. Sevco’s report recommended several actions for companies and employees to mitigate this risk. 

Firstly, employees should stop using the mirroring app to prevent the exposure of personal information. In addition, companies should advise their employees not to connect personal devices to work computers. Another key step involves ensuring that third-party vendors do not inadvertently gather sensitive data from work devices. The cybersecurity experts at Sevco urged companies to take these steps while awaiting an official patch from Apple to resolve the issue. When Apple releases the patch, Sevco recommends that companies promptly apply it to halt the collection of private employee data. 

Moreover, companies should purge any previously collected employee information that might have been gathered through this vulnerability. This would help eliminate liability risks and ensure compliance with data protection regulations. This report highlights the importance of maintaining clear boundaries between personal and work devices. With an increasing reliance on seamless technology, including mirroring apps, the risks associated with these tools also escalate. 

While the convenience of moving between personal phones and work computers is appealing, privacy issues should not be overlooked. The Sevco report emphasizes the importance of being vigilant about security and privacy in the workplace, especially when using personal devices for professional tasks. Both employees and companies need to take proactive steps to safeguard personal information and reduce potential legal risks until a fix is made available.

The Hidden Risk of Airport Phone Charging Stations and Why You Should Avoid Them


Security experts have highlighted three compelling reasons why travellers should avoid charging their phones at airport charging stations. In light of these risks, it’s advisable to exercise caution when using any public charging point and to treat protecting your personal information as a priority.

Hidden dangers of airport phone charging stations

Malicious Software (Malware): Charging stations at airports can be tampered with to install malicious software (malware) on your device. This malware can quietly steal sensitive information like passwords and banking details. The Federal Bureau of Investigation (FBI) has also issued a warning against using public phone charging stations, including those found at airports.

Juice Jacking: Hackers use a technique called “juice jacking” to compromise devices. They install malware through a corrupted USB port, which can lock your device or even export all your data and passwords directly to the perpetrator. Since the power supply and data stream on smartphones pass through the same cable, hackers can take control of your personal information.

Data Exposure: Even if the charging station hasn’t been tampered with, charging your mobile phone at an airport can lead to unintentional data exposure. Charging stations can transfer both data and power. While phones prompt users to choose between “Charge only” and “Transfer files” modes, this protection is often bypassed with charging stations. As a result, your device could be vulnerable to data interception or exploitation, which can later be used for identity theft or sold on the dark web.

Protecting Your Personal Information

So, what can you do to safeguard your data? Here are some tips:

  1. Carry Your Own Charger: Invest in a portable charger or carry your own charging cable. This way, you won’t have to rely on public stations.
  2. Use Wall Outlets: If possible, use wall outlets instead of USB ports. Wall outlets are less likely to be compromised.
  3. Avoid Public USB Ports: If you must use a public charging station, choose a wall outlet or invest in a USB data blocker—a small device that allows charging while blocking data transfer.
  4. Enable USB Restricted Mode: Some smartphones offer a USB Restricted Mode. Enable it to prevent unauthorized data access via USB.
  5. Stay Informed: Keep an eye out for security advisories and warnings. Awareness is your best defense.