The launch of DeepSeek AI has created waves in the tech world, offering powerful artificial intelligence models at a fraction of the cost of established players like OpenAI and Google.
However, its rapid rise in popularity has also sparked serious concerns about data security, with critics drawing comparisons to TikTok and its ties to China.
Government officials and cybersecurity experts warn that the open-source AI assistant could pose a significant risk to American users.
On Thursday, two U.S. lawmakers announced plans to introduce legislation banning DeepSeek from all government devices, citing fears that the Chinese Communist Party (CCP) could access sensitive data collected by the app. The move follows similar actions in Australia and several U.S. states, with New York recently banning the app across state government systems.
The growing concern stems from China’s data laws, which require companies to share user information with the government upon request. Like TikTok, DeepSeek’s data could be mined for intelligence purposes or even used to push disinformation campaigns. Although the AI app is the current focus of security conversations, experts say that the risks extend beyond any single model, and users should exercise caution with all AI systems.
Unlike social media platforms that users can consciously avoid, AI models like DeepSeek are more difficult to track. Dimitri Sirota, CEO of BigID, a cybersecurity company specializing in AI security compliance, points out that many companies already use multiple AI models, often switching between them without users’ knowledge. This fluidity makes it challenging to control where sensitive data might end up.
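To see why this is hard to track, consider how an application might sit in front of several AI backends at once. The sketch below is purely illustrative, and the provider names, endpoints, and cost-based routing rule are assumptions rather than any vendor's actual API; the point is that a single prompt can be handed to whichever model the routing layer picks, without the person typing it ever knowing.

```python
# Hypothetical provider registry -- illustrative only, not real endpoints or prices.
PROVIDERS = {
    "provider_a": {"endpoint": "https://api.provider-a.example/v1/chat", "cost_per_1k_tokens": 0.0100},
    "provider_b": {"endpoint": "https://api.provider-b.example/v1/chat", "cost_per_1k_tokens": 0.0012},
}

def route_prompt(prompt: str) -> str:
    """Pick whichever backend is cheapest right now and 'send' the prompt there.

    The decision happens server-side: the same prompt could go to a different
    provider, in a different jurisdiction, on every call.
    """
    cheapest = min(PROVIDERS, key=lambda p: PROVIDERS[p]["cost_per_1k_tokens"])
    # A real router would POST the prompt to PROVIDERS[cheapest]["endpoint"];
    # here we just report where the data would have gone.
    return f"Prompt ({len(prompt)} chars) routed to {cheapest}"

if __name__ == "__main__":
    print(route_prompt("Summarize our Q3 sales figures by region."))
```

The routing logic itself is trivial; what matters is that the choice is invisible to the user, which is exactly the fluidity Sirota describes.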
Kelcey Morgan, senior manager of product management at Rapid7, emphasizes that businesses and individuals should take a broad approach to AI security. Instead of focusing solely on DeepSeek, companies should develop comprehensive practices to protect their data, regardless of the latest AI trend.
The potential for China to use DeepSeek’s data for intelligence is not far-fetched, according to cybersecurity experts.
With significant computing power and data processing capabilities, the CCP could combine information from multiple sources to create detailed profiles of American users. Though this might not seem urgent now, experts warn that today’s young, casual users could grow into influential figures worth targeting in the future.
To stay safe, experts advise treating AI interactions with the same caution as any online activity. Users should avoid sharing sensitive information, be skeptical of unusual questions, and thoroughly review an app’s terms and conditions. Ultimately, staying informed and vigilant about where and how data is shared will be critical as AI technologies continue to evolve and become more integrated into everyday life.
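For developers wiring AI assistants into their own tools, the "avoid sharing sensitive information" advice can also be applied mechanically: scrub obviously sensitive values from a prompt before it ever leaves the machine. The snippet below is a minimal sketch using simple regular expressions; the patterns and the redact_prompt helper are illustrative assumptions, not a complete or production-grade filter.

```python
import re

# Illustrative patterns only -- a real deployment would need far broader
# detectors (names, addresses, internal project codes, etc.).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace matches of each pattern with a placeholder before the prompt
    is sent to any third-party AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Email jane.doe@example.com about card 4111 1111 1111 1111."
    # Prints: Email [EMAIL REDACTED] about card [CREDIT_CARD REDACTED].
    print(redact_prompt(raw))
```

A filter like this does not remove the need for caution, but it limits what any AI backend, wherever it is hosted, can collect in the first place.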