Search This Blog

Powered by Blogger.

Blog Archive

Labels

About Me

Showing posts with label Data Regulation. Show all posts

Why European Regulators Are Investigating Chinese AI firm DeepSeek

 


European authorities are raising concerns about DeepSeek, a thriving Chinese artificial intelligence (AI) company, due to its data practices. Italy, Ireland, Belgium, Netherlands, France regulators are examining the data collection methods of this firm, seeing whether they comply with the European General Data Protection Regulation or, if they also might consider that personal data is anyway transferred unlawfully to China.

Hence, due to these issues, the Italian authority has released a temporary restrainment to access the DeepSeek chatbot R1 for the time-being under which investigation will be conducted on what and how data get used, and how much has affected training in the AI model.  


What Type of Data Does DeepSeek Actually Collect? 

DeepSeek collects three main forms of information from the user: 

1. Personal data such as names and emails.  

2. Device-related data, including IP addresses.  

3. Data from third parties, such as Apple or Google logins.  

Moreover, there is an action that an app would be able to opt to take if at all that user was active elsewhere on those devices for "Community Security." Unlike many companies I have said where there are actual timelines or limits on data retention, it is stated that retention of data can happen indefinitely by DeepSeek. This can also include possible sharing with others-advertisers, analytics firms, governments, and copyright holders.  

Noting that most AI companies like the case of OpenAI's ChatGPT and Anthropic's Claude have met such privacy issues, experts would observe that DeepSeek doesn't expressly provide users the rights to deletion or restrictions on its use of their data as mandated requirement in the GDPR.  


The Collected Data Where it Goes  

One of major problems of DeepSeek is that it saves user data in China. Supposedly, the company has secure security measures in place for the data set and observes local laws for data transfer, but from a legal perspective, there is no valid basis being presented by DeepSeek concerning the storing of data from its European users outside the EU.  

According to the EDPB, privacy laws in China lay more importance on "stability of community than that of individual privacy," thus permitting broadly-reaching access to personal data for purposes such as national security or criminal investigations. Yet it is not clear whether that of foreign users will be treated differently than that of Chinese citizens. 


Cybersecurity and Privacy Threats 

As accentuated by cyber crime indices in 2024, China is one of the countries most vulnerable to cyberattacks. Cisco's latest report shows that DeepSeek's AI model does not have such strong security against hacking attempts. Other AI models can block at least some "jailbreak" cyberattacks, while DeepSeek turned out to be completely vulnerable to such assaults, which have made it softer for manipulation. 


Should Users Worry? 

According to experts, users ought to exercise caution when using DeepSeek and avoid sharing highly sensitive personal details. The uncertain policies of the company with respect to data protection, storage in China, and relatively weak security defenses could avail pretty heavy risks to users' privacy and as such warrant such caution. 

European regulators will then determine whether DeepSeek will be allowed to conduct business in the EU as investigations continue. Until then, users should weigh risks against their possible exposure when interacting with the platform. 



CFPB US Agency Proposes Rule to Block Data Brokers from Selling Sensitive Personal Information

The Consumer Financial Protection Bureau (CFPB) has proposed a groundbreaking rule to restrict data brokers from selling Americans’ personal and financial information, marking a significant step toward strengthening privacy protections in the digital age. The rule, introduced under the Fair Credit Reporting Act (FCRA), targets practices that exploit regulatory loopholes, particularly the sale of sensitive data such as Social Security numbers and phone numbers.

CFPB's Initiative to Curb Data Exploitation

CFPB Director Rohit Chopra emphasized the agency’s commitment to addressing the “widespread evasion” of federal privacy laws by data brokers. He noted that these companies often operate outside the regulatory frameworks governing credit bureaus and tenant screening firms, profiting from data sales while exposing consumers to significant risks. 

"This rule represents a decisive step to ensure that those trafficking in Americans' most sensitive information face accountability," Chopra stated during a press briefing.

The proposed rule aims to reclassify data brokers under the same legal framework as credit bureaus and background check companies, thereby closing a longstanding regulatory gap. It would impose restrictions on selling data that identifies individuals, such as Social Security numbers, income histories, and credit scores, limiting the ability of data brokers to monetize private information.

Building on Momentum from Federal Initiatives

The CFPB’s proposal aligns with momentum from President Biden’s recent executive order targeting the sale of Americans’ personal data. The move reflects growing public and governmental scrutiny of data brokers, who have faced criticism for exploiting lax regulations to generate profits at the expense of consumer privacy.

Chopra underscored the dangers of unregulated data sales, describing the risks as "staggering." He highlighted the threat to individuals and national security posed by the unrestricted availability of Americans’ private information to virtually anyone willing to pay.

FCRA and the Call for Stronger Privacy Protections

The FCRA, enacted in 1970, was designed to ensure the privacy and accuracy of consumer data managed by reporting agencies. However, the absence of comprehensive national data protection laws has left Americans more vulnerable compared to citizens in other Western democracies.

If enacted, the new rule would represent a significant step in federal efforts to regulate data brokers, building on Congress’s original intent in passing the FCRA—to protect Americans’ personal data. The public will have until March 2025 to provide comments on the proposed rule, which could face challenges from the incoming administration's deregulatory stance.

Bipartisan Support and Industry Reactions

Despite potential political obstacles, Chopra pointed to bipartisan acknowledgment of the risks posed by data brokers: "This isn’t a partisan issue. The dangers of unregulated access to Americans’ private data are recognized across the political spectrum."

Stakeholder reactions, including those from consumer advocacy groups and the data broker industry, are expected to shape the final form of the rule. While some industry players may resist the changes, advocates for stronger privacy protections view the proposal as a much-needed step to safeguard consumer rights in an increasingly data-driven economy.

Potential Impact on the Digital Economy

If adopted, the rule would signify a pivotal shift in how sensitive data is handled in the U.S., setting a potential precedent for broader privacy protections. By regulating data brokers more stringently, the CFPB aims to strike a balance between protecting privacy rights and accommodating commercial interests.

Next Steps for the Proposed Rule

To advance the proposal, the CFPB recommends:

  1. Engaging Public Feedback
    Encourage diverse stakeholders to participate in the comment period to address concerns and refine the rule.
  2. Strengthening Compliance Mechanisms
    Develop clear guidelines and enforcement measures to ensure adherence by data brokers.
  3. Collaborating with Lawmakers
    Build bipartisan support to overcome political hurdles and facilitate legislative backing for the rule.
  4. Raising Awareness
    Educate consumers about their privacy rights and the implications of data sales on their personal security.

Looking Ahead

As the CFPB leads the charge on this critical issue, the debate over privacy rights versus commercial interests enters a decisive phase. The proposed rule has the potential to reshape the digital economy’s relationship with personal data, paving the way for stronger consumer protections and greater accountability among data brokers.

World's First AI Law: A Tough Blow for Tech Giants

World's First AI Law: A Tough Blow for Tech Giants

In May, EU member states, lawmakers, and the European Commission — the EU's executive body — finalized the AI Act, a significant guideline that intends to oversee how corporations create, use, and use AI. 

The European Union's major AI law goes into effect on Thursday, bringing significant implications for American technology companies.

About the AI Act

The AI Act is a piece of EU legislation that regulates AI. The law, first suggested by the European Commission in 2020, seeks to combat the harmful effects of artificial intelligence.

The legislation establishes a comprehensive and standardized regulatory framework for AI within the EU.

It will largely target huge U.S. tech businesses, which are currently the main architects and developers of the most advanced AI systems.

However, the laws will apply to a wide range of enterprises, including non-technology firms.

Tanguy Van Overstraeten, head of legal firm Linklaters' technology, media, and technology practice in Brussels, described the EU AI Act as "the first of its kind in the world." It is expected to influence many enterprises, particularly those building AI systems, as well as those implementing or simply employing them in certain scenarios, he said.

High-risk and low-risk AI systems

High-risk AI systems include self-driving cars, medical equipment, loan decisioning systems, educational scores, and remote biometric identification systems.

The regulation also prohibits all AI uses that are judged "unacceptable" in terms of danger. Unacceptable-risk artificial intelligence applications include "social scoring" systems that evaluate citizens based on data gathering and analysis, predictive policing, and the use of emotional detection technology in the workplace or schools.

Implication for US tech firms

Amid a global craze over artificial intelligence, US behemoths such as Microsoft, Google, Amazon, Apple, and Meta have been aggressively working with and investing billions of dollars in firms they believe can lead the field.

Given the massive computer infrastructure required to train and run AI models, cloud platforms such as Microsoft Azure, Amazon Web Services, and Google Cloud are critical to supporting AI development.

In this regard, Big Tech companies will likely be among the most aggressively targeted names under the new regulations.

Generative AI and EU

The EU AI Act defines generative AI as "general-purpose" artificial intelligence. This title refers to tools that are designed to do a wide range of jobs on a par with, if not better than, a person.

General-purpose AI models include but are not limited to OpenAI's GPT, Google's Gemini, and Anthropic's Claude.

The AI Act imposes stringent standards on these systems, including compliance with EU copyright law, disclosure of how models are trained, routine testing, and proper cybersecurity measures.