
Why European Regulators Are Investigating Chinese AI firm DeepSeek

European authorities are raising concerns about DeepSeek, a fast-growing Chinese artificial intelligence (AI) company, over its data practices. Regulators in Italy, Ireland, Belgium, the Netherlands, and France are examining the firm's data collection methods to determine whether they comply with the EU's General Data Protection Regulation (GDPR) and whether personal data is being transferred unlawfully to China.

In response, the Italian authority has temporarily blocked access to the DeepSeek chatbot R1 while it investigates what data is collected, how it is used, and how it was involved in training the AI model.


What Type of Data Does DeepSeek Actually Collect? 

DeepSeek collects three main forms of information from the user: 

1. Personal data such as names and emails.  

2. Device-related data, including IP addresses.  

3. Data from third parties, such as Apple or Google logins.  

Moreover, the app may monitor whether a user is active elsewhere on a device, citing "Community Security." Unlike many companies that publish concrete timelines or limits on data retention, DeepSeek states that it may retain data indefinitely. That data may also be shared with third parties, including advertisers, analytics firms, governments, and copyright holders.

While other AI companies, such as OpenAI with ChatGPT and Anthropic with Claude, have faced similar privacy scrutiny, experts observe that DeepSeek does not expressly grant users the rights to deletion of, or restrictions on the use of, their data, as the GDPR requires.


Where the Collected Data Goes

One of the main concerns about DeepSeek is that it stores user data in China. The company says it has security measures in place and observes local laws on data transfer, but from a legal perspective, DeepSeek has presented no valid basis under the GDPR for storing its European users' data outside the EU.

According to the EDPB, privacy laws in China place greater importance on "stability of community than that of individual privacy," permitting broad access to personal data for purposes such as national security or criminal investigations. It remains unclear whether foreign users' data will be treated differently from that of Chinese citizens.


Cybersecurity and Privacy Threats 

As highlighted by cybercrime indices in 2024, China is among the countries most exposed to cyberattacks. Cisco's latest report shows that DeepSeek's AI model lacks strong defenses against hacking attempts: while other AI models block at least some "jailbreak" attacks, DeepSeek proved completely vulnerable to them, making it easy to manipulate.


Should Users Worry? 

According to experts, users ought to exercise caution when using DeepSeek and avoid sharing highly sensitive personal details. The company's unclear data protection policies, storage of data in China, and relatively weak security defenses pose serious risks to users' privacy and warrant that caution.

As investigations continue, European regulators will determine whether DeepSeek will be allowed to conduct business in the EU. Until then, users should weigh the risks of possible exposure when interacting with the platform.



EU Officially Announces USB-C as Common Charging Standard

For tech enthusiasts and environmentalists in the European Union (EU), December 28, 2024, marked a major turning point as USB-C officially became the required standard for electronic gadgets.

The new policy mandates that phones, tablets, cameras, and other electronic devices marketed in the EU must have USB-C connectors. This move aims to minimise e-waste and make charging more convenient for customers. Even industry giants like Apple are required to adapt, signaling the end of proprietary charging standards in the region.

Apple’s Transition to USB-C

Apple has been slower than most Android manufacturers in adopting USB-C. The company introduced USB-C connectors with the iPhone 15 series in 2023, while older models, such as the iPhone 14 and the iPhone SE (3rd generation), continued to use the now-outdated Lightning connector.

To comply with the new EU regulations, Apple has discontinued the iPhone 14 and iPhone SE in the region, as these models include Lightning ports. While they remain available through third-party retailers until supplies run out, the regulation prohibits brands from directly selling non-USB-C devices in the EU. However, outside the EU, including in major markets like the United States, India, and China, these models are still available for purchase.

Looking Ahead: USB-C as the Future

Apple’s decision aligns with its broader strategy to phase out the Lightning connector entirely. The transition is expected to culminate in early 2025 with the release of a USB-C-equipped iPhone SE. This shift not only ensures compliance with EU regulations but also addresses consumer demands for a more streamlined charging experience.

The European Commission (EC) celebrated the implementation of this law with a playful yet impactful tweet, highlighting the benefits of a universal charging standard. “Today’s the day! USB-C is officially the common standard for electronic devices in the EU! It means: The same charger for all new phones, tablets & cameras; Harmonised fast-charging; Reduced e-waste; No more ‘Sorry, I don’t have that cable,’” the EC shared on X (formerly Twitter).

Environmental and Consumer Benefits

This law aims to alleviate the frustration of managing multiple chargers while addressing the growing environmental issues posed by e-waste. By standardising charging technology, the EU hopes to:

  • Simplify consumer choices
  • Extend the lifespan of accessories like cables and adapters
  • Reduce the volume of electronic waste

With the EU leading this shift, other regions may follow suit, further promoting sustainability and convenience in the tech industry.

Enhancing EU Cybersecurity: Key Takeaways from the NIS2 Directive


The European Union has taken a significant step forward with the introduction of the NIS2 Directive. This directive, which builds upon the original Network and Information Systems (NIS) Directive, aims to bolster cybersecurity across the EU by imposing stricter requirements and expanding its scope. But how far does the NIS2 Directive reach, and what implications does it have for organizations within the EU?

A Broader Scope

One of the most notable changes in the NIS2 Directive is its expanded scope. While the original NIS Directive primarily targeted operators of essential services and digital service providers, NIS2 extends its reach to include a wider range of sectors. This includes public administration entities, the healthcare sector, and providers of digital infrastructure. By broadening the scope, the EU aims to ensure that more entities are covered under the directive, thereby enhancing the overall cybersecurity posture of the region.

Enhanced Security Requirements

The directive brings more stringent security requirements for entities within its scope. Organizations are now required to implement robust cybersecurity measures, including risk management practices, incident response plans, and regular security assessments. These measures are designed to ensure that organizations are better prepared to prevent, detect, and respond to cyber threats.

Additionally, the directive emphasizes the importance of supply chain security. Organizations must now assess and manage the cybersecurity risks associated with their supply chains, ensuring that third-party vendors and partners adhere to the same high standards of security.

Incident Reporting Obligations

Another significant aspect of the NIS2 Directive is the enhanced incident reporting obligations. Under the new directive, organizations are required to report significant cybersecurity incidents to the relevant authorities within 24 hours of detection. This rapid reporting is crucial for enabling a swift response to cyber threats and minimizing the potential impact on critical infrastructure and services.

The directive also mandates that organizations provide detailed information about the incident, including the nature of the threat, the affected systems, and the measures taken to mitigate the impact. This level of transparency is intended to facilitate better coordination and information sharing among EU member states, ultimately strengthening the collective cybersecurity resilience of the region.

Governance and Accountability

Organizations are required to designate a responsible person or team for overseeing cybersecurity measures and ensuring compliance with the directive. This includes conducting regular audits and assessments to verify the effectiveness of the implemented security measures.

Organizations that fail to meet the requirements of the NIS2 Directive may face significant fines and other sanctions. This serves as a strong incentive for organizations to prioritize cybersecurity and ensure that they are fully compliant with the directive.

Challenges and Opportunities

Compliance with the directive poses challenges, but it also offers numerous opportunities. By implementing the required cybersecurity measures, organizations can significantly enhance their security posture and reduce the risk of cyber incidents. This not only protects their own operations but also contributes to the overall security of the EU.

The directive also encourages greater collaboration and information sharing among EU member states. This collective approach to cybersecurity can lead to more effective threat detection and response, ultimately making the region more resilient to cyber threats.

EU Claims Meta’s Paid Ad-Free Option Violates Digital Competition Rules

European Union regulators have accused Meta Platforms of violating the bloc’s new digital competition rules by compelling Facebook and Instagram users to either view ads or pay to avoid them. This move comes as part of Meta’s strategy to comply with Europe's stringent data privacy regulations.

Starting in November, Meta began offering European users the option to pay at least 10 euros ($10.75) per month for ad-free versions of Facebook and Instagram. This was in response to a ruling by the EU’s top court, which mandated that Meta must obtain user consent before displaying targeted ads, a decision that jeopardized Meta’s business model of personalized advertising.

The European Commission, the EU’s executive body, stated that preliminary findings from its investigation indicate that Meta’s “pay or consent” model breaches the Digital Markets Act (DMA) of the 27-nation bloc. According to the commission, Meta’s approach fails to provide users the right to “freely consent” to the use of their personal data across its various services for personalized ads.

The commission also criticized Meta for not offering a less personalized service that is equivalent to its social networks. Meta responded by stating that their subscription model for no ads aligns with the direction of the highest court in Europe and complies with the DMA. The company expressed its intent to engage in constructive dialogue with the European Commission to resolve the investigation.

The investigation was launched soon after the DMA took effect in March, aiming to prevent tech “gatekeepers” from dominating digital markets through heavy financial penalties. One of the DMA's objectives is to reduce the power of Big Tech firms that have amassed vast amounts of personal data, giving them an advantage over competitors in online advertising and social media services. The commission suggested that Meta should offer an option that doesn’t rely on extensive personal data sharing for advertising purposes.

European Commissioner Thierry Breton, who oversees the bloc’s digital policy, emphasized that the DMA aims to empower users to decide how their data is used and to ensure that innovative companies can compete fairly with tech giants regarding data access.

Meta now has the opportunity to respond to the commission’s findings, with the investigation due to conclude by March 2025. The company could face fines of up to 10% of its annual global revenue, potentially amounting to billions of euros. Under the DMA, Meta is classified as one of seven online gatekeepers, with Facebook, Instagram, WhatsApp, Messenger, and its online ad business listed among two dozen “core platform services” that require the highest level of regulatory scrutiny.

This accusation against Meta is part of a series of regulatory actions by Brussels against major tech companies. Recently, the EU charged Apple with preventing app makers from directing users to cheaper options outside its App Store and accused Microsoft of violating antitrust laws by bundling its Teams app with its Office software.


EU Proposes New Law to Allow Bulk Scanning of Chat Messages

The European elections have ended, and the European football tournament is in full flow; why not allow bulk searches of people's private communications, including encrypted ones? Activists around Europe are outraged by the proposed European Union legislation. 

EU governments were set to vote on Thursday at a key Permanent Representatives Committee meeting, though even that vote would not have been the final obstacle for the legislation, which aims to identify child sexual abuse material (CSAM). At the last minute, the contentious item was taken off the agenda.

However, experts believe that if the EU Council approves the Chat Control regulation, sooner or later it will be enacted at the end of a difficult political process. Activists have therefore asked Europeans to take action and keep up the pressure.

EU Council deaf to criticism

A regulation requiring chat services like Facebook Messenger and WhatsApp to sift through users' private chats for grooming and CSAM was in fact first proposed in 2022.

Needless to say, privacy experts denounced it, with cryptography professor Matthew Green stating that the document described "the most sophisticated mass surveillance machinery ever deployed outside of China and the USSR.” 

“Let me be clear what that means: to detect “grooming” is not simply searching for known CSAM. It isn’t using AI to detect new CSAM, which is also on the table. It’s running algorithms reading your actual text messages to figure out what you’re saying, at scale,” stated Green. 

However, the EU has not backed down, and the draft law is currently working its way through the system. Specifically, the proposed law would establish an "upload moderation" system to analyse all digital messages, including shared images, videos, and links.

The document is rather wild. Consider end-to-end encryption: on the one hand, the proposed legislation states that it is vital, but it also warns that encrypted messaging platforms may "inadvertently become secure zones where child sexual abuse material can be shared or disseminated." 

The method appears to involve scanning message content before encrypting it using apps such as WhatsApp, Messenger, or Signal. That sounds unconvincing, and it most likely is. 

Even if the regulation is approved by EU countries, additional problems may arise once the general public becomes aware of what is at stake. According to a study conducted last year by the European Digital Rights group, 66% of young people in the EU oppose the idea of having their private messages scanned.

EU Accuses Microsoft of Secretly Harvesting Children's Data

Noyb (None of Your Business), also known as the European Centre for Digital Rights, has filed two complaints against Microsoft under Article 77 of the GDPR, alleging that the tech giant breached schoolchildren's privacy rights through the Microsoft 365 Education service it provides to educational institutions.

Noyb claims that Microsoft used its contracts to shift GDPR responsibilities and privacy expectations onto institutions, even though these organisations had no reasonable means of meeting such obligations because they had no real control over the collected data.

The non-profit argued that as schools and educational institutions in the European Union came to depend more on digital services during the pandemic, large tech businesses took advantage of the trend to attract a new generation of committed clients. While noyb supports the modernization of education, it believes Microsoft breached various data protection rights by offering educational institutions access to its 365 Education services, leaving students, parents, and institutions with few options.

Noyb voiced concern about the market strength of software vendors like Microsoft, which allows them to dictate the terms and circumstances of their contracts with schools. The organisation claims that this power has enabled IT companies to transfer most of their legal obligations under the General Data Protection Regulation (GDPR) to educational institutions and municipal governments. 

In reality, according to noyb, neither local government nor educational institutions have the power to affect how Microsoft handles user data. Rather, they were frequently faced with a "take it or leave it" scenario, in which Microsoft controlled all financial decisions and decision-making authority while the schools were required to bear all associated risks.

“This take-it-or-leave-it approach by software vendors such as Microsoft is shifting all GDPR responsibilities to schools,” stated Maartje de Graaf, a data protection lawyer at noyb. “Microsoft holds all the key information about data processing in its software, but is pointing the finger at schools when it comes to exercising rights. Schools have no way of complying with the transparency and information obligations.” 

Two complaints 

Noyb filed the two complaints against Microsoft on behalf of two plaintiffs over suspected infringement of information privacy rules. The first complaint concerns a father who, in accordance with the GDPR, requested the personal data that Microsoft's 365 Education service had acquired about his daughter.

Microsoft, however, redirected the parent to the "data controller." After confirming with Microsoft that the school was the data controller, the parent contacted the school, which responded that it only had access to the student's email address used for sign-up.

The second complaint states that, despite the complainant giving no consent to cookies or tracking technologies, Microsoft 365 Education installed cookies that, according to Microsoft's own documentation, analyse user behaviour and collect browser data for advertising purposes. The non-profit alleged that this kind of invasive profiling was conducted without the school's knowledge or approval.

noyb has requested that the Austrian data protection authority (DSB) investigate and analyse the data collected and processed by Microsoft 365 Education, as neither Microsoft's own privacy documentation, the complainant's access requests, nor the non-profit's own research could shed light on this process, which it believes violates the GDPR's transparency provisions.

Navigating Meta’s AI Data Training: Opt-Out Challenges and Privacy Considerations


The privacy policy update

Meta will reportedly amend its privacy policy beginning June 26 to allow its AI to be trained on your data.

The story spread on social media after Meta sent emails and notifications to users in the United Kingdom and the European Union informing them of the change and offering them the option to opt out of data collection.

One UK-based user, Phillip Bloom, publicly published the message, informing everyone about the impending changes, which appear to also affect Instagram users.

The AI training process

These changes give Meta permission to use your information and personal material from Meta-related services to train its AI. This means the social media giant will be able to use public Facebook posts, Instagram photographs and captions, and messages to Meta's AI chatbots to train its large language model and other AI capabilities.

Meta states that private messages will not be included in the training data, and the business emphasizes in its emails and notifications that each user (in a protected region) has the "right to object" to the data being utilized. 

Once implemented, the new policy will automatically begin extracting information from the affected types of material. To prevent Meta from using your content, you can opt out now through Facebook's help page.

Keep in mind that this page will only load if you are in the European Union, the United Kingdom, or any country where Meta is required by law to provide an opt-out option.

Opting out: EU and UK users

If you live in the European Union, the United Kingdom, or another country with data protection regulations strict enough to oblige Meta to provide an opt-out, go to the support page mentioned above, fill out the form, and submit it.

You'll need to select your nation and explain why you're opting out in a text box, and you'll have the option to offer more information below that. You should receive a response indicating whether Meta will honor your request to opt out of having your data utilized. 

Prepare to push back: some users report that their requests are being denied, even though in countries governed by legislation such as the European Union's GDPR, Meta should be required to honor a request to opt out of having your data used.

Caveats for users who opt out

There are a few caveats to consider. While the opt-out protects you, it does not guarantee that your postings will be protected if they are shared by friends or family members who have not opted out of using data for AI training. 

Make sure that any family members who use Facebook or other Meta services opt out, if possible. This move isn't surprising given that Meta has been gradually expanding its AI offerings on its platforms. 

As a result, the utilization of user data, particularly among Meta services, was always expected. There is too much data for the corporation to pass up as training material for its numerous AI programs.

Sensitive Documents Vanish Under Mysterious Circumstances from Europol Headquarters

A significant security breach has impacted the European Union's law enforcement agency, Europol, according to a report by Politico. Last summer, a collection of highly confidential documents containing personal information about prominent Europol figures vanished under mysterious circumstances.

The missing files, which included sensitive data concerning top law enforcement officials such as Europol Executive Director Catherine De Bolle, were stored securely at Europol's headquarters in The Hague. An ongoing investigation was launched by European authorities following the discovery of the breach.

An internal communication dated September 18 revealed that Europol's management was alerted on September 6, 2023, to the disappearance of personal paper files belonging to several staff members. Subsequent checks uncovered additional missing files, prompting serious concerns regarding data security and privacy.

Europol took immediate steps to notify the individuals affected by the breach, as well as the European Data Protection Supervisor (EDPS). The incident poses significant risks not only to the individuals whose information was compromised but also to the agency's operations and ongoing investigations.

Adding to the gravity of the situation, Politico's report highlighted the unsettling discovery of some of the missing files by a member of the public in a public location in The Hague. However, key details surrounding the duration of the files' absence and the cause of the breach remain unclear.

Among the missing files were those belonging to Europol's top executives, including Catherine De Bolle and three deputy directors. These files contained a wealth of sensitive information, including human resources data.

In response to the breach, Europol took action against the agency's head of Human Resources, Massimiliano Bettin, placing him on administrative leave. Politico suggests that internal conflicts within the agency may have motivated the breach, speculating on potential motives for targeting Bettin specifically.

The security breach at Europol raises serious concerns about data protection and organizational security measures within the agency, prompting an urgent need for further investigation and safeguards to prevent future incidents.