Slack Faces Backlash Over AI Data Policy: Users Demand Clearer Privacy Practices

In February, Slack introduced its AI capabilities, positioning itself as a leader in integrating artificial intelligence into workplace communication. Recent developments, however, have sparked significant controversy. Slack's current policy, under which customer data is collected by default for training AI models, has drawn widespread criticism and calls for greater transparency and clarity.

The issue gained attention when Gergely Orosz, an engineer and writer, pointed out that Slack's terms of service allow the use of customer data for training AI models, despite reassurances from Slack engineers that this is not the case. Aaron Maurer, a Slack engineer, acknowledged the need for updated policies that explicitly detail how Slack AI interacts with customer data. This discrepancy between policy language and practical application has left many users uneasy. 

Slack's privacy principles state that customer data, including messages and files, may be used to develop AI and machine learning models. In contrast, the Slack AI page asserts that customer data is not used to train Slack AI models. This inconsistency has led users to demand that Slack update its privacy policies to reflect the actual use of data. The controversy intensified as users on platforms like Hacker News and Threads voiced their concerns. Many felt that Slack had not adequately notified users about the default opt-in for data sharing. 

The backlash prompted some users to opt out of data sharing, a process that requires contacting Slack directly with a specific request. Critics argue that this process is cumbersome and lacks transparency. Salesforce, Slack's parent company, has acknowledged the need for policy updates. A Salesforce spokesperson stated that Slack would clarify its policies to ensure users understand that customer data is not used to train generative AI models and that such data never leaves Slack's trust boundary. 

However, these changes have yet to address the broader issue of explicit user consent. Questions about Slack's compliance with the General Data Protection Regulation (GDPR) have also arisen. Where processing relies on consent, the GDPR requires that consent be explicit and informed, obtained through an opt-in mechanism rather than a default opt-in. Despite Slack's commitment to GDPR compliance, the current controversy suggests that its practices may not fully align with the regulation.

As more users opt out of data sharing and call for alternative chat services, Slack faces mounting pressure to revise its data policies comprehensively. This situation underscores the importance of transparency and user consent in data practices, particularly as AI continues to evolve and integrate into everyday tools. 

The recent backlash against Slack's AI data policy highlights a crucial issue in the digital age: the need for clear, transparent data practices that respect user consent. As Slack works to update its policies, the company must prioritize user trust and regulatory compliance to maintain its position as a trusted communication platform. This episode serves as a reminder for all companies leveraging AI to ensure their data practices are transparent and user-centric.

OpenAI’s ChatGPT accused of GDPR breaches

OpenAI, the maker of ChatGPT, has been accused of a series of data protection breaches in a GDPR complaint filed by a privacy researcher. The complaint argues that OpenAI infringes EU privacy rules in areas such as lawful basis, transparency, fairness, data access rights, and privacy by design.

The Complaint

The complaint frames the generative AI technology, and its maker's approach to developing and operating the viral tool, as essentially a systematic breach of the pan-EU regime. It also suggests that OpenAI overlooked the GDPR's requirement to undertake prior consultation with regulators (Article 36): if a proactive risk assessment had identified high risks to people's rights absent mitigating measures, that should have given the company pause and prompted it to consult regulators before proceeding.

Previous Concerns

This is not the first GDPR concern lobbed in ChatGPT's direction. Italy's privacy watchdog made headlines when it ordered OpenAI to stop processing Italians' data locally, directing the US-based company to tackle a preliminary list of problems it had identified in areas including lawful basis, information disclosures, user controls, and child safety.

Potential Consequences

If OpenAI's alleged GDPR breaches are confirmed, the company could face a fine of up to 4% of its annual global turnover or €20 million, whichever sum is greater. Neither OpenAI nor Poland's data protection authority, with which the complaint was filed, has commented on the complaint so far.
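For a rough sense of how that higher-tier cap works, here is a minimal Python sketch. The turnover figure in the example is purely illustrative, not OpenAI's actual revenue.

```python
def max_gdpr_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of a higher-tier GDPR fine (Article 83(5)):
    4% of annual worldwide turnover or EUR 20 million, whichever is greater."""
    return max(0.04 * annual_turnover_eur, 20_000_000)

# Illustrative only: a company with EUR 1 billion in annual turnover
# faces a cap of EUR 40 million, since 4% of turnover exceeds the
# EUR 20 million floor.
print(max_gdpr_fine_eur(1_000_000_000))  # 40000000.0
```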

OpenAI's ChatGPT stands accused of breaching the GDPR and, if the complaint is upheld, could face significant consequences. It remains to be seen how the situation will unfold and what impact it will have on the future of AI technology.