
The Privacy Risks of ChatGPT and AI Chatbots

AI chatbots like ChatGPT have captured widespread attention for their remarkable conversational abilities, allowing users to engage on diverse topics with ease. However, while these tools offer convenience and creativity, they also pose significant privacy risks. The very technology that powers lifelike interactions can also store, analyze, and potentially resurface user data, raising critical concerns about data security and ethical use.

The Data Behind AI's Conversational Skills

Chatbots like ChatGPT rely on Large Language Models (LLMs) trained on vast datasets to generate human-like responses. This training often includes learning from user interactions. Much like how John Connor taught the Terminator quirky catchphrases in Terminator 2: Judgment Day, these systems refine their capabilities through real-world inputs. However, this improvement process comes at a cost: personal data shared during conversations may be stored and analyzed, often without users fully understanding the implications.

For instance, OpenAI’s terms and conditions explicitly state that data shared with ChatGPT may be used to improve its models. Unless users actively opt out through their privacy settings, all shared information, from casual remarks to sensitive details like financial data, can be logged and analyzed. Although OpenAI claims to anonymize and aggregate user data for further study, the risk of unintended exposure remains.

Real-World Privacy Breaches

Despite assurances of data security, breaches have occurred. In March 2023, a bug in the open-source Redis library used by ChatGPT exposed some users’ chat titles and payment details, and a separate security report later identified roughly 101,000 compromised ChatGPT accounts circulating on dark-web marketplaces. These incidents underscored the risks of storing chat histories, even when companies emphasize their commitment to privacy. Similarly, Samsung faced an internal crisis when employees inadvertently pasted confidential material into a chatbot, prompting it and other organizations to restrict or ban generative AI tools altogether.

Governments and industries are starting to address these risks. In October 2023, President Joe Biden signed an executive order that, among other AI safeguards, addresses privacy and data protection. While this marks a step in the right direction, legal frameworks remain unclear, particularly around training AI models on user data without explicit consent. Such practices are often defended as “fair use,” leaving consumers exposed to potential misuse.

Protecting Yourself in the Absence of Clear Regulations

Until stricter regulations are implemented, users must take proactive steps to safeguard their privacy while interacting with AI chatbots. Here are some key practices to consider:

  1. Avoid Sharing Sensitive Information
    Treat chatbots as advanced algorithms, not confidants. Avoid disclosing personal, financial, or proprietary information, no matter how personable the AI seems. One practical habit is to scrub obvious identifiers from text before pasting it into a chatbot, as shown in the sketch after this list.
  2. Review Privacy Settings
    Many platforms offer options to opt out of data collection and model training. Regularly review and adjust these settings to limit the data shared with AI services.
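
To make the first practice concrete, here is a minimal Python sketch of that kind of pre-submission scrubbing. The regex patterns and the redact helper are illustrative assumptions, not part of any chatbot’s tooling, and a real deployment would need far more robust PII detection than a handful of regular expressions.

```python
import re

# Illustrative patterns only: real PII detection needs far more than regex.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,3}[ .-]?)?(?:\(\d{3}\)|\d{3})[ .-]?\d{3}[ .-]?\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # rough credit-card shape
}

def redact(text: str) -> str:
    """Replace anything matching a known PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "My card 4111 1111 1111 1111 was declined; email me at jane.doe@example.com."
print(redact(prompt))
# -> My card [CARD REDACTED] was declined; email me at [EMAIL REDACTED].
```

Even a crude filter like this catches the most common accidental disclosures. The broader point is that redaction has to happen on your side, before anything reaches the provider’s servers.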

DNA Testing Firm Atlas Biomed Vanishes: Concerns Over Sensitive Data

DNA-testing company Atlas Biomed appears to have halted operations without notifying customers about the fate of their sensitive genetic data. Based in London, the firm provided insights into users' genetic profiles and potential health risks. Customers report being unable to access their reports online, and the company has not responded to inquiries from the BBC.

Disgruntled clients describe the situation as "very alarming," expressing fears about the handling of their "most personal information." The Information Commissioner’s Office (ICO) confirmed receiving a complaint about the company. A spokesperson stated: "People have the right to expect that organisations will handle their personal information securely and responsibly." Experts warn that users of DNA-testing services are often "completely at the mercy" of companies when it comes to safeguarding sensitive data.

Lisa Topping from Essex, who paid £100 for a genetic report, described her frustration after the company’s website vanished. "I don’t know what someone else could do with [the data], but it’s the most personal information… I don’t know how comfortable I feel that they have just disappeared," she said.

Another customer, Kate Lake from Kent, paid £139 in 2023 but never received her report. Despite promises of a refund, the company went silent. "It’s like no-one was at home," she explained, demanding answers about the fate of her data.

Attempts by the BBC to contact the firm have been unsuccessful. Phone numbers are inactive, the London office appears abandoned, and social media accounts have been dormant since mid-2023. Online comments reveal widespread customer complaints.

Atlas Biomed remains registered with Companies House but has not filed accounts since December 2022. Notably, two of its current officers are listed at a Moscow address linked to a Russian billionaire who has since resigned from the company.

Cybersecurity expert Prof. Alan Woodward remarked on the "odd" connections: "If people knew the provenance of this company and how it operates, they might not be quite so ready to trust them with their DNA."

While no misuse of customer data has been confirmed, the lack of transparency raises concerns. Prof. Carissa Véliz, author of Privacy is Power, emphasized the unique sensitivity of DNA: "It is uniquely yours, you can’t change it, and it reveals your – and your family’s – biological strengths and weaknesses."

She added, "When you give your data to a company, you are completely at their mercy. We shouldn’t have to wait until something happens."

Atlas Biomed’s silence leaves its customers uncertain and alarmed about the safety of their most personal information.