
Enhancing EU Cybersecurity: Key Takeaways from the NIS2 Directive

The European Union has taken a significant step forward with the introduction of the NIS2 Directive. This directive, which builds upon the original Network and Information Systems (NIS) Directive, aims to bolster cybersecurity across the EU by imposing stricter requirements and expanding its scope. But how far does the NIS2 Directive reach, and what implications does it have for organizations within the EU?

A Broader Scope

One of the most notable changes in the NIS2 Directive is its expanded scope. While the original NIS Directive primarily targeted operators of essential services and digital service providers, NIS2 extends its reach to include a wider range of sectors. This includes public administration entities, the healthcare sector, and providers of digital infrastructure. By broadening the scope, the EU aims to ensure that more entities are covered under the directive, thereby enhancing the overall cybersecurity posture of the region.

Enhanced Security Requirements

The NIS2 Directive also imposes more stringent security requirements on entities within its scope. Organizations are now required to implement robust cybersecurity measures, including risk management practices, incident response plans, and regular security assessments. These measures are designed to ensure that organizations are better prepared to prevent, detect, and respond to cyber threats.

Additionally, the directive emphasizes the importance of supply chain security. Organizations must now assess and manage the cybersecurity risks associated with their supply chains, ensuring that third-party vendors and partners adhere to the same high standards of security.

Incident Reporting Obligations

Another significant aspect of the NIS2 Directive is the enhanced incident reporting obligations. Under the new directive, organizations are required to report significant cybersecurity incidents to the relevant authorities within 24 hours of detection. This rapid reporting is crucial for enabling a swift response to cyber threats and minimizing the potential impact on critical infrastructure and services.

The directive also mandates that organizations provide detailed information about the incident, including the nature of the threat, the affected systems, and the measures taken to mitigate the impact. This level of transparency is intended to facilitate better coordination and information sharing among EU member states, ultimately strengthening the collective cybersecurity resilience of the region.
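To make the reporting obligation more concrete, the required details could be modeled as a simple structured record. This is a hypothetical sketch only: the field names and the 24-hour helper below are illustrative, and the directive does not prescribe any particular schema or format.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of an NIS2-style incident report record.
# Field names are illustrative; the directive does not define a schema.
@dataclass
class IncidentReport:
    detected_at: datetime           # when the incident was detected
    threat_nature: str              # nature of the threat, e.g. "ransomware"
    affected_systems: list[str]     # systems impacted by the incident
    mitigation_measures: list[str]  # steps taken to limit the impact

    def reporting_deadline(self) -> datetime:
        # NIS2 requires notifying the relevant authority within
        # 24 hours of detecting a significant incident.
        return self.detected_at + timedelta(hours=24)

report = IncidentReport(
    detected_at=datetime(2024, 7, 1, 9, 0, tzinfo=timezone.utc),
    threat_nature="ransomware",
    affected_systems=["billing-db"],
    mitigation_measures=["isolated affected host"],
)
print(report.reporting_deadline())  # 2024-07-02 09:00:00+00:00
```

In practice the report would go to the national competent authority or CSIRT in whatever format it mandates; the point here is only that the directive's required elements map naturally onto a small structured record.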

Governance and Accountability

Under NIS2, organizations are required to designate a responsible person or team to oversee cybersecurity measures and ensure compliance with the directive. This includes conducting regular audits and assessments to verify the effectiveness of the implemented security measures.

Organizations that fail to meet the requirements of the NIS2 Directive may face significant fines and other sanctions. This serves as a strong incentive for organizations to prioritize cybersecurity and ensure that they are fully compliant with the directive.

Challenges and Opportunities

While compliance with the NIS2 Directive poses challenges, it also offers numerous opportunities. By implementing the required cybersecurity measures, organizations can significantly enhance their security posture and reduce the risk of cyber incidents. This not only protects their own operations but also contributes to the overall security of the EU.

The directive also encourages greater collaboration and information sharing among EU member states. This collective approach to cybersecurity can lead to more effective threat detection and response, ultimately making the region more resilient to cyber threats.

EU Claims Meta’s Paid Ad-Free Option Violates Digital Competition Rules


European Union regulators have accused Meta Platforms of violating the bloc’s new digital competition rules by compelling Facebook and Instagram users to either view ads or pay to avoid them. This move comes as part of Meta’s strategy to comply with Europe's stringent data privacy regulations.

Starting in November, Meta began offering European users the option to pay at least 10 euros ($10.75) per month for ad-free versions of Facebook and Instagram. This was in response to a ruling by the EU’s top court, which mandated that Meta must obtain user consent before displaying targeted ads, a decision that jeopardized Meta’s business model of personalized advertising.

The European Commission, the EU’s executive body, stated that preliminary findings from its investigation indicate that Meta’s “pay or consent” model breaches the Digital Markets Act (DMA) of the 27-nation bloc. According to the commission, Meta’s approach fails to provide users the right to “freely consent” to the use of their personal data across its various services for personalized ads.

The commission also criticized Meta for not offering a less personalized service that is equivalent to its social networks. Meta responded by stating that their subscription model for no ads aligns with the direction of the highest court in Europe and complies with the DMA. The company expressed its intent to engage in constructive dialogue with the European Commission to resolve the investigation.

The investigation was launched soon after the DMA took effect in March, aiming to prevent tech “gatekeepers” from dominating digital markets through heavy financial penalties. One of the DMA's objectives is to reduce the power of Big Tech firms that have amassed vast amounts of personal data, giving them an advantage over competitors in online advertising and social media services. The commission suggested that Meta should offer an option that doesn’t rely on extensive personal data sharing for advertising purposes.

European Commissioner Thierry Breton, who oversees the bloc’s digital policy, emphasized that the DMA aims to empower users to decide how their data is used and to ensure that innovative companies can compete fairly with tech giants regarding data access.

Meta now has the opportunity to respond to the commission’s findings, with the investigation due to conclude by March 2025. The company could face fines of up to 10% of its annual global revenue, potentially amounting to billions of euros. Under the DMA, Meta is classified as one of seven online gatekeepers, with Facebook, Instagram, WhatsApp, Messenger, and its online ad business listed among two dozen “core platform services” that require the highest level of regulatory scrutiny.

This accusation against Meta is part of a series of regulatory actions by Brussels against major tech companies. Recently, the EU charged Apple with preventing app makers from directing users to cheaper options outside its App Store and accused Microsoft of violating antitrust laws by bundling its Teams app with its Office software.


EU Proposes New Law to Allow Bulk Scanning of Chat Messages


The European elections have ended, and the European football tournament is in full flow; why not allow bulk searches of people's private communications, including encrypted ones? Activists around Europe are outraged by the proposed European Union legislation. 

EU governments were due to vote on Thursday in a significant Permanent Representatives Committee meeting, though even approval there would not have been the final hurdle for the legislation, which aims to identify child sexual abuse material (CSAM). At the last minute, the contentious item was taken off the agenda.

However, experts believe that if the EU Council approves the Chat Control regulation, sooner or later it will be enacted at the end of a difficult political process. Activists have therefore asked Europeans to take action and keep up the pressure.

EU Council deaf to criticism

A regulation requiring chat services like Facebook Messenger and WhatsApp to sift through users' private chats in order to look for grooming and CSAM was first proposed in 2022.

Needless to say, privacy experts denounced it, with cryptography professor Matthew Green stating that the document described "the most sophisticated mass surveillance machinery ever deployed outside of China and the USSR.” 

“Let me be clear what that means: to detect “grooming” is not simply searching for known CSAM. It isn’t using AI to detect new CSAM, which is also on the table. It’s running algorithms reading your actual text messages to figure out what you’re saying, at scale,” stated Green. 

However, the EU has not backed down, and the draft law is currently working its way through the system. Specifically, the proposed law would establish an "upload moderation" system to analyse all digital messages, including shared images, videos, and links.

The document is rather wild. Consider end-to-end encryption: on the one hand, the proposed legislation states that it is vital, but it also warns that encrypted messaging platforms may "inadvertently become secure zones where child sexual abuse material can be shared or disseminated." 

The method appears to involve scanning message content before it is encrypted by apps such as WhatsApp, Messenger, or Signal. That sounds unconvincing, and it most likely is.
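To illustrate what "scanning before encryption" would mean mechanically, here is a minimal, purely hypothetical sketch: content is fingerprinted and checked against a database of known-material hashes before it would ever reach the encryption layer. Real proposals discuss perceptual hashing and AI classifiers, which are far more complex; the exact SHA-256 check below is only the simplest possible illustration of the architecture.

```python
import hashlib

# Hypothetical database of fingerprints of known flagged material.
KNOWN_HASHES: set[str] = {"<hash-of-known-material>"}

def fingerprint(content: bytes) -> str:
    # Exact SHA-256 fingerprint. Real systems would use perceptual
    # hashes so that re-encoded or slightly altered media still match.
    return hashlib.sha256(content).hexdigest()

def scan_before_encrypt(content: bytes) -> bool:
    # Returns True if the content may be sent (no match found).
    # In the proposed "upload moderation" model, a check like this
    # would run on the device before the app encrypts the message.
    return fingerprint(content) not in KNOWN_HASHES

print(scan_before_encrypt(b"an ordinary holiday photo"))  # True
```

The sketch also makes the critics' point visible: the scanning step necessarily sees the plaintext, so the end-to-end encryption that follows no longer protects the content from the scanner.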

Even if the regulation is approved by EU countries, additional problems may arise once the general public becomes aware of what is at stake. According to a study conducted last year by the European Digital Rights group, 66% of young people in the EU oppose the idea of having their private messages scanned.

EU Accuses Microsoft of Secretly Harvesting Children's Data


Noyb (None of Your Business), also known as the European Centre for Digital Rights, has filed two complaints against Microsoft under Article 77 of the GDPR, alleging that the tech giant breached schoolchildren's privacy rights through the Microsoft 365 Education service it provides to educational institutions.

Noyb claims that Microsoft used its contracts to shift its GDPR responsibilities and privacy obligations onto institutions, even though those organisations had no reasonable means of meeting such obligations because they had no real control over the collected data.

The non-profit argued that as schools and educational institutions in the European Union came to depend more heavily on digital services during the pandemic, large tech businesses took advantage of the trend to attract a new generation of committed clients. While noyb supports the modernization of education, it believes Microsoft has breached various data protection rights by offering educational institutions access to its 365 Education services while leaving students, parents, and institutions with few options.

Noyb voiced concern about the market strength of software vendors like Microsoft, which allows them to dictate the terms and circumstances of their contracts with schools. The organisation claims that this power has enabled IT companies to transfer most of their legal obligations under the General Data Protection Regulation (GDPR) to educational institutions and municipal governments. 

In reality, according to noyb, neither local governments nor educational institutions have the power to affect how Microsoft handles user data. Rather, they were frequently faced with a "take it or leave it" scenario, in which Microsoft held all financial and decision-making authority while the schools were required to bear all associated risks.

“This take-it-or-leave-it approach by software vendors such as Microsoft is shifting all GDPR responsibilities to schools,” stated Maartje de Graaf, a data protection lawyer at noyb. “Microsoft holds all the key information about data processing in its software, but is pointing the finger at schools when it comes to exercising rights. Schools have no way of complying with the transparency and information obligations.” 

Two complaints 

Citing suspected infringement of information privacy rules, noyb represented two complainants against Microsoft. The first complaint concerns a father who, on behalf of his daughter and in accordance with GDPR regulations, requested the personal data acquired by Microsoft's 365 Education service.

However, Microsoft redirected the parent to the "data controller," and after confirming with Microsoft that the school was the data controller, the parent contacted the school, which responded that it only had access to the student's email address used for sign-up.

The second complaint stated that, despite the complainant not consenting to cookies or tracking technologies, Microsoft 365 Education installed cookies that, according to Microsoft's own documentation, analyse user behaviour and collect browser data, both of which are used for advertising purposes. The non-profit alleged that this type of invasive profiling was conducted without the school's knowledge or approval.

noyb has requested that the Austrian data protection authority (DSB) investigate and analyse the data collected and processed by Microsoft 365 Education, as neither Microsoft's own privacy documentation, the complainant's access requests, nor the non-profit's own research could shed light on this process, which it believes violates the GDPR's transparency provisions.

Navigating Meta’s AI Data Training: Opt-Out Challenges and Privacy Considerations

The privacy policy update

Meta will reportedly amend its privacy policy beginning June 26 to allow its AI to be trained on user data.

The story spread on social media after Meta sent emails and notifications to users in the United Kingdom and the European Union informing them of the change and offering them the option to opt out of data collection.

One UK-based user, Phillip Bloom, publicly shared the notification, informing everyone about the impending changes, which appear to also affect Instagram users.

The AI training process

These changes give Meta permission to use your information and personal content from Meta-related services to train its AI. This means the social media giant will be able to use public Facebook posts, Instagram photographs and captions, and messages to Meta's AI chatbots to train its large language models and other AI capabilities.

Meta states that private messages will not be included in the training data, and the business emphasizes in its emails and notifications that each user (in a protected region) has the "right to object" to the data being utilized. 

Once implemented, the new policy will allow Meta to automatically begin extracting information from the affected types of material. To prevent Meta from using your content, you can opt out right now via Facebook's dedicated help page.

Keep in mind that this page will only load if you are in the European Union, the United Kingdom, or any country where Meta is required by law to provide an opt-out option.

Opting out: EU and UK users

If you live in the European Union, the United Kingdom, or another country with data protection regulations strong enough to require Meta to provide an opt-out, go to the support page mentioned above, fill out the form, and submit it.

You'll need to select your country and explain in a text box why you're opting out, with the option to provide more information below that. You should then receive a response indicating whether Meta will honor your request to opt out of having your data used.

Prepare to push back: some users report that their requests are being denied, even though in countries covered by legislation such as the European Union's GDPR, Meta should be required to honor your request.

Challenges for users outside the EU and UK

There are a few caveats to consider. While the opt-out protects your own posts, it does not cover material shared by friends or family members who have not opted out of the use of their data for AI training.

Make sure that any family members who use Facebook or other Meta services opt out, if possible. This move isn't surprising given that Meta has been gradually expanding its AI offerings on its platforms. 

As a result, the use of user data across Meta services was always to be expected; there is simply too much data for the corporation to pass up as training material for its numerous AI programs.

Sensitive Documents Vanish Under Mysterious Circumstances from Europol Headquarters


A significant security breach has impacted the European Union's law enforcement agency, Europol, according to a report by Politico. Last summer, a collection of highly confidential documents containing personal information about prominent Europol figures vanished under mysterious circumstances.

The missing files, which included sensitive data concerning top law enforcement officials such as Europol Executive Director Catherine De Bolle, were stored securely at Europol's headquarters in The Hague. An ongoing investigation was launched by European authorities following the discovery of the breach.

An internal communication dated September 18 revealed that Europol's management was alerted on September 6, 2023, to the disappearance of personal paper files belonging to several staff members. Subsequent checks uncovered additional missing files, prompting serious concerns regarding data security and privacy.

Europol took immediate steps to notify the individuals affected by the breach, as well as the European Data Protection Supervisor (EDPS). The incident poses significant risks not only to the individuals whose information was compromised but also to the agency's operations and ongoing investigations.

Adding to the gravity of the situation, Politico's report highlighted the unsettling discovery of some of the missing files by a member of the public in a public location in The Hague. However, key details surrounding the duration of the files' absence and the cause of the breach remain unclear.

Among the missing files were those belonging to Europol's top executives, including Catherine De Bolle and three deputy directors. These files contained a wealth of sensitive information, including human resources data.

In response to the breach, Europol took action against the agency's head of Human Resources, Massimiliano Bettin, placing him on administrative leave. Politico suggests that internal conflicts within the agency may have motivated the breach, speculating on potential motives for targeting Bettin specifically.

The security breach at Europol raises serious concerns about data protection and organizational security measures within the agency, prompting an urgent need for further investigation and safeguards to prevent future incidents.

EU AI Act to Impact US Generative AI Deployments




In a move set to reshape the scope of AI deployment, the European Union's AI Act, slated to come into effect in May or June, aims to impose stricter regulations on the development and use of generative AI technology. The Act, which categorises AI use cases based on associated risks, prohibits certain applications like biometric categorization systems and emotion recognition in workplaces due to concerns over manipulation of human behaviour. This legislation will compel companies, regardless of their location, to adopt a more responsible approach to AI development and deployment.

For businesses venturing into generative AI adoption, compliance with the EU AI Act will necessitate a thorough evaluation of use cases through a risk assessment lens. Existing AI deployments will require comprehensive audits to ensure adherence to regulatory standards and mitigate potential penalties. While the Act provides a transition period for compliance, organisations must gear up to meet the stipulated requirements by 2026.

This isn't the first time US companies have faced disruption from overseas tech regulations. Similar to the impact of the GDPR on data privacy practices, the EU AI Act is expected to influence global AI governance standards. By aligning with EU regulations, US tech leaders may find themselves better positioned to comply with emerging regulatory mandates worldwide.

Despite the parallels with GDPR, regulating AI presents unique challenges. The rollout of GDPR witnessed numerous compliance hurdles, indicating the complexity of enforcing such regulations. Additionally, concerns persist regarding the efficacy of fines in deterring non-compliance among large corporations. The EU's proposed fines for AI Act violations range from 7.5 million to 35 million euros, but effective enforcement will require the establishment of robust regulatory mechanisms.

Addressing the AI talent gap is crucial for successful implementation and enforcement of the Act. Both the EU and the US recognize the need for upskilling to address the complexities of AI governance. While US efforts have focused on executive orders and policy initiatives, the EU's proactive approach is poised to drive AI enforcement forward.

For CIOs preparing for the AI Act's enforcement, understanding the tools and use cases within their organisations is imperative. By conducting comprehensive inventories and risk assessments, businesses can identify areas of potential non-compliance and take corrective measures. It's essential to recognize that seemingly low-risk AI applications may still pose significant challenges, particularly regarding data privacy and transparency.

Companies like TransUnion are taking a nuanced approach to AI deployment, tailoring their strategies to specific use cases. While embracing AI's potential benefits, they exercise caution in deploying complex, less explainable technologies, especially in sensitive areas like credit assessment.

As the EU AI Act reshapes the regulatory landscape, CIOs must proactively adapt their AI strategies to ensure compliance and mitigate risks. By prioritising transparency, accountability, and ethical considerations, organisations can navigate the evolving regulatory environment while harnessing the transformative power of AI responsibly.



ChatGPT Faces Data Protection Questions in Italy



OpenAI's ChatGPT is facing renewed scrutiny in Italy as the country's data protection authority, Garante, asserts that the AI chatbot may be in violation of data protection rules. This follows a previous ban imposed by Garante due to alleged breaches of European Union (EU) privacy regulations. Although the ban was lifted after OpenAI addressed concerns, Garante has persisted in its investigations and now claims to have identified elements suggesting potential data privacy violations.

Garante, known for its proactive stance on AI platform compliance with EU data privacy regulations, had initially banned ChatGPT over alleged breaches of EU privacy rules. Despite the reinstatement after OpenAI's efforts to address user consent issues, fresh concerns have prompted Garante to escalate its scrutiny. OpenAI, however, maintains that its practices are aligned with EU privacy laws, emphasising its active efforts to minimise the use of personal data in training its systems.

"We assure that our practices align with GDPR and privacy laws, emphasising our commitment to safeguarding people's data and privacy," stated the company. "Our focus is on enabling our AI to understand the world without delving into private individuals' lives. Actively minimising personal data in training systems like ChatGPT, we also decline requests for private or sensitive information about individuals."

In the past, OpenAI confirmed fulfilling numerous conditions demanded by Garante to lift the ChatGPT ban. The watchdog had imposed the ban due to exposed user messages and payment information, along with ChatGPT lacking a system to verify users' ages, potentially leading to inappropriate responses for children. Additionally, questions were raised about the legal basis for OpenAI collecting extensive data to train ChatGPT's algorithms. Concerns were voiced regarding the system potentially generating false information about individuals.

OpenAI's assertion of compliance with GDPR and privacy laws, coupled with its active steps to minimise personal data, appears to be a key element in addressing the issues that led to the initial ban. The company's efforts to meet Garante's conditions signal a commitment to resolving concerns related to user data protection and the responsible use of AI technologies. As the investigation proceeds, these assurances may play a crucial role in determining how OpenAI navigates the challenges posed by Garante's scrutiny of ChatGPT's data privacy practices.

In response to Garante's claims, OpenAI is gearing up to present its defence within a 30-day window provided by Garante. This period is crucial for OpenAI to clarify its data protection practices and demonstrate compliance with EU regulations. The backdrop to this investigation is the EU's General Data Protection Regulation (GDPR), introduced in 2018. Companies found in violation of data protection rules under the GDPR can face fines of up to 4% of their global turnover.

Garante's actions underscore the seriousness with which EU data protection authorities approach violations and their willingness to enforce penalties. This case involving ChatGPT reflects broader regulatory trends surrounding AI systems in the EU. In December, EU lawmakers and governments reached provisional terms for regulating AI systems like ChatGPT, emphasising comprehensive rules to govern AI technology with a focus on safeguarding data privacy and ensuring ethical practices.

OpenAI's cooperation and its ability to address concerns regarding personal data usage will play a pivotal role. The broader regulatory trends in the EU indicate a growing emphasis on establishing comprehensive guidelines for AI systems, addressing data protection and ethical considerations. For readers, these developments underscore the importance of compliance with data protection regulations and the ongoing efforts to establish clear guidelines for AI technologies in the EU.



Privacy Watchdog Fines Italy’s Trento City for Privacy Breaches in Use of AI


Italy’s privacy watchdog has recently fined the northern city of Trento for failing to comply with data protection rules in its use of artificial intelligence (AI) in street surveillance projects.

Trento is the first local administration in Italy to be sanctioned by the GPDP watchdog over its use of data from AI tools. The city has been fined 50,000 euros ($54,225) and urged to delete the data gathered in the two European Union-funded projects.

The privacy watchdog, known as one of the EU's most proactive authorities in assessing AI platform compliance with the bloc's data protection regulations, temporarily banned the well-known chatbot ChatGPT in Italy. In 2021, the authority also found that a facial recognition system tested by the Italian Interior Ministry did not comply with privacy laws.

The rapid advance of AI across many industries has raised concerns about personal data security and privacy rights.

Following a thorough investigation of the Trento projects, the GPDP found “multiple violations of privacy regulations,” it noted in a statement, while also recognizing that the municipality had acted in good faith.

It also noted that the data collected in the projects was not sufficiently anonymized and was illicitly shared with third-party entities.

“The decision by the regulator highlights how the current legislation is totally insufficient to regulate the use of AI to analyse large amounts of data and improve city security,” the municipality said in a statement.

Moreover, the government of Italy, led by Prime Minister Giorgia Meloni, has promised to make the AI revolution a highlight of its presidency of the Group of Seven (G7) major democracies.

Legislators and governments in the European Union reached a temporary agreement in December to regulate ChatGPT and other AI systems, bringing the technology one step closer to regulations. One major source of contention concerns the application of AI to biometric surveillance.  

European Union Expected to Block Amazon’s Acquisition of iRobot


Amazon.com Inc. has proposed a takeover of Roomba manufacturer iRobot Corp. The proposal is expected to be blocked by the European Union’s antitrust regulators, who are concerned the deal would have an adverse impact on other robot vacuum makers.

At a meeting with European Commission officials on Thursday, the e-commerce behemoth was informed that the transaction would probably be denied, according to sources familiar with the situation. The political leadership of the EU must still formally approve a final decision, which is required by February 14.  Meanwhile, Amazon declined to comment on the issue. 

On Friday, shares of iRobot, which is based in Bedford, Massachusetts, fell as much as 31% to $16.30, widening the deal spread to over $35, the largest since the merger was disclosed more than a year ago.

Regulators believe that other vacuum manufacturers may find it more difficult to compete as a result of iRobot's partnership with Amazon, particularly if Amazon decides to give Roomba advantages over competitors on its online store.

There will probably be opposition to the deal in the US as well. People familiar with the situation say that the Federal Trade Commission has been preparing a lawsuit to try to stop the transaction, although the three FTC commissioners have yet to vote on a challenge or hold a final meeting with Amazon to discuss the possible case.

The investigation over Amazon’s acquisition of iRobot was initiated in July 2023 by the European Commission (EC), the EU’s competition watchdog. 

The EC has until February 14 to make a decision; its college of 27 commissioners must formally approve the rejection before it becomes final.

While iRobot was preparing to expand its business in the smart home appliance market, its shares dipped 40% within hours of the Wall Street Journal first reporting the EU’s intentions.

Given that the company has been struggling with declining revenues, the acquisition by Amazon was initially viewed as a boon.

Regarding the situation, Matt Schruers, president of the tech lobbying group Computer and Communications Industry Association, commented: "If the objective is to have more competition in the home robotics sector, this makes no sense […] Blocking this deal may well leave consumers with fewer options, and regulators cannot sweep that fact under the rug."

Europol Dismantles Ukrainian Ransomware Gang

A well-known ransomware organization operating in Ukraine has been successfully taken down by an international team under the direction of Europol, marking a major win against cybercrime. In this operation, the criminal group behind several high-profile attacks was the target of multiple raids.

The joint effort, which included law enforcement agencies from various countries, highlights the growing need for global cooperation in combating cyber threats. The dismantled group had been a prominent player in the world of ransomware, utilizing sophisticated techniques to extort individuals and organizations.

The operation comes at a crucial time, with Ukraine already facing challenges due to ongoing geopolitical tensions. Europol's involvement underscores the commitment of the international community to address cyber threats regardless of the geopolitical landscape.

One of the key events leading to the takedown was a series of coordinated raids across Ukraine. These actions, supported by Europol, aimed at disrupting the ransomware gang's infrastructure and apprehending key individuals involved in the criminal activities. The raids not only targeted the group's operational base but also sought to gather crucial evidence for further investigations.

Europol, in a statement, emphasized the significance of international collaboration in combating cybercrime. "This successful operation demonstrates the power of coordinated efforts in tackling transnational threats. Cybercriminals operate globally, and law enforcement must respond with a united front," stated the Europol representative.

The dismantled ransomware gang was reportedly using the LockerGoga ransomware variant, known for its sophisticated encryption methods and targeted attacks on high-profile victims. The group's activities had raised concerns globally, making its takedown a priority for law enforcement agencies.

In the aftermath of the operation, cybersecurity experts are optimistic about the potential impact on reducing ransomware threats. However, they also stress the importance of continued vigilance and collaboration to stay ahead of evolving cyber threats.

As the international community celebrates this successful operation, it serves as a reminder of the ongoing battle against cybercrime. The events leading to the dismantlement of the Ukrainian-based ransomware gang underscore the necessity for countries to pool their resources and expertise to protect individuals, businesses, and critical infrastructure from the ever-evolving landscape of cyber threats.

YouTube Faces Scrutiny from EU Regulators for Forcing Users to Drop Ad Blockers


Alexander Hanff, a privacy activist, has filed a complaint with the European Commission, claiming that YouTube’s new ad blocker detection violates European law.

In response to Hanff’s claims, a German Pirate Party MEP asked the European Commission for a legal position on two key issues: whether this type of detection is "absolutely necessary to provide a service such as YouTube", and whether the "protection of information stored on the device (Article 5(3) ePR)" also covers information about whether the user's device hides or blocks certain page elements, or whether ad-blocking software is used on the device.

YouTube’s New Policy 

Recently, YouTube began requiring users to stop using ad blockers; those who keep them receive notifications and may eventually be blocked from accessing material on the platform. The new rules, which YouTube says are intended to increase revenue for creators, will apply in most countries.

However, the company's justifications are unlikely to hold up in Europe. Privacy experts have noted that YouTube's demand that free users allow advertisements may breach EU legislation. Because the platform can now detect whether users have installed ad blockers, YouTube has effectively been accused of spying on its users.

EU regulators have already warned tech giants such as Google and Apple, and YouTube could be the next platform to face lengthy legal battles as it defends the methods it uses to detect ad blockers and compel free viewers to watch advertisements between videos. In the wake of these developments, many users have uninstalled ad blockers from their browsers.

According to experts, YouTube may be violating not only digital laws but also certain fundamental consumer rights. If its anti-ad-blocker rules are found unlawful, the company will likely have to change its position in the region, something Meta was recently forced to do with Instagram and Facebook.

The social networking giant has decided that Facebook and Instagram users who do not want to see ads while browsing will be required to sign up for its monthly ad-free subscriptions.

Russian Exiled Journalist Says EU Should Ban Spyware


The editor-in-chief of the independent Russian news site Meduza has urged the European Union to enact a comprehensive ban on spyware, given that spyware has been frequently used to violate human rights.

According to Ivan Kolpakov, Meduza’s Latvia-based editor-in-chief, it is obvious that Europeans should be deeply concerned about Pegasus in light of the discoveries regarding the hacking of his colleague Galina Timchenko by an as-yet-unidentified EU country.

“If they can use it against an exiled journalist there are no guarantees they cannot use it against local journalists as well[…]Unfortunately, there are a lot of fans in Europe, and we are not only talking about Poland and Hungary, but Western European countries as well,” said Kolpakov.

Since last month, the European Commission has been working on guidelines for how governments may employ surveillance technologies like spyware in compliance with EU data privacy and national security rules. Although member states are responsible for their own national security, the Commission is considering adopting a position after learning that 14 EU governments had purchased the Pegasus technology from NSO Group.

Timchenko was apparently targeted by Pegasus in February 2023 while in Berlin for a private gathering of exiled Russian media workers. The meeting's subject was the threats posed by the Russian government's categorization of independent Russian media outlets as foreign agents.

Given the nature of Timchenko's work, Russia was initially suspected; but according to the digital rights organization Access Now, additional information suggests that one of the intelligence services of an EU member state, which one is still unknown, is more likely to blame.

One alleged motive for the hack is that several Baltic nations, to which Russia has consistently posed a threat, worry that FSB or GRU agents may have slipped across their borders among expatriate dissidents and journalists.

“It may happen and probably it actually happens, but in my opinion, it does not justify the usage of that kind of brutal tool as Pegasus against a prominent independent journalist,” Kolpakov said.

Kolpakov believes that the revelations have left the exiled community feeling they are not safe in Europe. “This spyware has to be banned here in Europe. It really violates human rights,” he added.     

Here's Why Twitter Rival Threads Isn’t Accessible in the E.U.

 

With the introduction of Threads, Meta's text-based conversation network, across 100 countries in July, Twitter is facing its most formidable rival yet after months of instability under its new owner. 

The app gained 30 million users in less than a day, including celebrities and media outlets, but its debut in Europe has been delayed due to concerns over data protection. 

The app has been delayed by "upcoming regulatory uncertainty," as Meta spokesperson Christine Pai put it, which is generally understood to be a reference to the EU's Digital Markets Act (DMA). 

Tech firms and regulatory sceptics have long maintained that rules like the DMA stifle innovation by imposing onerous user security measures, but the looming competition law hasn't stopped Meta from offering new products — and Meta hasn't suggested that a European launch will be cancelled. 

If anything, the DMA adds friction to a product's introduction, forcing the company to review how it safeguards users before releasing it into the open, even if that dents Threads' early momentum.

However, there is still a lot of uncertainty as companies wait for additional guidelines later this autumn, as well as an unanswered question: will compliance with Europe's standards undermine the design that has allowed Threads to grow so quickly?

Pai and other Meta representatives have declined to blame the Threads delay on any specific European tech regulation. However, comments from Instagram head Adam Mosseri suggest that the EU's Digital Markets Act is the culprit.

The regulation, which was passed last year, includes a slew of new rules aimed at preventing "gatekeepers", companies exceeding certain user-base and market-capitalization thresholds, from abusing their market position.

The DMA forbids companies as big as Meta from reusing a user's personal data — including name and location — across products for targeted advertising without the user's agreement.

According to Meta's privacy policies, it collects and uses information across its products to deliver adverts to consumers. Information from Apple’s App Store suggests that Threads could collect a wide range of personal data, including a user's contacts and search history, as well as health and location data.

According to Ireland's Independent newspaper, a representative for the Data Protection Commission (DPC) suggested that the watchdog had been in communication with Meta concerning Threads and that the platform would not be rolled out in the EU "at this point." 

Meta's history with EU regulators 

Two recent rulings have raised data and privacy concerns regarding Meta's operations in the European Union. In July, the Court of Justice of the European Union (CJEU) in Luxembourg ruled that a German watchdog may investigate privacy violations in which user consent was not obtained before the firm used personal data to target adverts at consumers.

Furthermore, in May, Ireland's Data Protection Commission (DPC), which oversees Meta across the EU, ordered Facebook to halt data transfers from the EU to the US and fined the internet giant a record 1.2 billion euros ($1.3 billion) for violating General Data Protection Regulation (GDPR) standards. 

Meta announced that it would appeal the ruling, claiming that it had been "singled out" by the DPC even though several other companies use identical data transfer mechanisms.

EU's Implementation of Crypto Rules Faces Multiple Obstacles Across Continent

 

The European Union (EU) has approved Markets in Crypto Assets (MiCA), a framework for regulating cryptocurrencies in Europe.

Christian Anders, CEO of the cryptocurrency company Btc.x, cautions that there may be difficulties in its successful deployment across the continent. 

Multiple obstacles 

Anders says the road to approval of the European MiCA standards is more like a marathon than a sprint. Even though the legislation gives the digital currency industry much-needed structure, making it a reality may call for extra diplomatic skill.

Sweden and other European countries, for example, might require more convincing before they completely embrace the changes. 

European cryptocurrency exchanges are eager for the MiCA framework to take effect so that they can build their businesses on a solid legal base. Some national governments, however, do not share this enthusiasm: a growing number of them, including Sweden, are reluctant to issue new licences to crypto businesses.

The two-edged sword of crypto 

Even though such reservations won't prevent MiCA from being implemented, they may well delay it. This reveals the framework's two sides: on one, it provides thorough rules for the crypto market; on the other, it is subject to the varying perspectives and degrees of acceptance among European countries.

The United States Securities and Exchange Commission (SEC) appears to be trudging through its own regulatory minefield as the EU manages similar difficulties. Because Crypto.com operates inside the US, Anders suggests it will likely be the next company under SEC investigation.

Anders likens the SEC's crackdown on Binance and Coinbase to the severe restrictions implemented by the Swedish government, though on a much smaller scale.

Despite these regulatory ambiguities, Anders remains enthusiastic about Bitcoin. He contends that the obstacles governments and banks have placed in its way only strengthen his belief in the virtual currency.

He argues that Bitcoin's advantages in the contest between monetary systems become increasingly clear when set against the flaws of fiat currency and the economic strain of inflation.

Bitcoin appears to be doing well in terms of mining. With the creation of equipment that increases mining efficiency, businesses like Intel have entered the market. Anders claims that the increased use of renewable energy is accelerating the growth of bitcoin mining in Europe.

Given the strong popularity among the younger generation, the future of Bitcoin and other digital currencies appears secure. Their inclination towards these cutting-edge technologies is expected to influence how money and commerce are conducted across the continent and, by extension, around the globe.

The expansion and influence of the cryptocurrency business are unabated, even as the EU and other regulatory authorities struggle to come up with effective regulations.

Elon Musk Withdraws Twitter from EU’s Disinformation Code of Practice


The European Union has confirmed that Twitter has withdrawn from the EU's voluntary code of practice against disinformation.

The news was announced on Twitter by the EU's internal market commissioner, Thierry Breton, who warned that the platform cannot escape the legal consequences to come.

“Twitter leaves EU voluntary Code of Practice against disinformation. But obligations remain. You can run but you can’t hide[…]Beyond voluntary commitments, fighting disinformation will be legal obligation under #DSA as of August 25. Our teams will be ready for enforcement,” Breton wrote.

Herein, he referred to the legal duties that the platform must follow as a "very large online platform" (VLOP) under the EU's Digital Services Act (DSA).

European Union Disinformation Code

A number of tech firms, large and small, have signed up to the EU's disinformation code, including Facebook's parent company Meta, TikTok, Google, Microsoft and Twitch.

The code, introduced in June of last year, seeks to reduce profiteering from fake news and disinformation, increase transparency, and stop the spread of bots and fraudulent accounts. Companies that sign the code are free to choose which commitments they make, such as working with fact-checkers or monitoring political advertising.

Since Elon Musk took over Twitter, the company's moderation efforts have been sharply scaled back, which critics say has led to an increase in the spread of disinformation.

The social media company once had a dedicated team that worked to combat coordinated disinformation campaigns, but experts and former Twitter employees say most of those specialists resigned or were fired.

Last month, the BBC exposed hundreds of Russian and Chinese state propaganda accounts lurking on Twitter. Musk, however, claims there is now “less misinformation rather than more” since he took over.

Alongside its voluntary code, the EU has brought in the Digital Services Act (DSA), a law that will compel firms to do more to tackle illegal content online.

From August 25, platforms with more than 45 million active users per month in the EU—including Twitter—must abide by the DSA's legislative requirements.

Twitter will be required by legislation to implement measures to combat the spread of misinformation, provide users with a way to identify illegal content, and respond "expeditiously" to notifications.

On the issue, the AFP news agency on Friday quoted an EU Commission official as saying, “If (Elon Musk) doesn’t take the code seriously, then it’s better that he quits.”

Apple's Wireless Charging Push: Doing More Harm Than Good

 

The Indian government has mandated the use of a standard USB Type C charging port for all mobile phones beginning in 2025, following the lead of a European Union regulation. 

It's not the first time India has followed the EU's lead; the European regulator also inspired a recent Competition Commission of India (CCI) decision against Google, which required the tech giant to allow third-party app stores within its Play Store. This new ruling, however, derails the plans of just one company: Apple.

Apple has used the Lightning connector on its iPhones since 2012. Android manufacturers relied on micro USB for almost a decade (it is still found on some budget phones), but since 2015 the industry has gradually shifted to Type C, with Apple the lone holdout.

Greg Joswiak, the head of marketing at Apple, acknowledged his annoyance with the EU regulations in an interview with the Wall Street Journal. Additionally, he recognised that Apple will be compelled to follow the ruling, and iPhone models sold in the EU—and likely everywhere else by 2025—will use USB Type C. However, there was some ambiguity in his admission. 

The Wireless Power Consortium declared earlier this month at the Consumer Electronics Show in Las Vegas that Apple would contribute MagSafe to the upcoming Qi wireless charging standard. This is significant because MagSafe, which Apple introduced with the iPhone 12 in 2020, is its own proprietary method of wireless charging.

Why would Apple divulge its trade secrets to the public? Likely because regulators around the world are pressuring it to adopt USB Type C. MagSafe-certified accessories like cases, wireless chargers, wireless power banks, and even magnetic docks bring in a lot of money for Apple. So why is the world's most exclusive company giving up its standard?

The iPhone without ports 

Apple has long imagined an iPhone devoid of all buttons and ports for connecting devices. Users should not be surprised to learn that Apple has made progress over the past ten years toward realising its vision of offering a fully wireless experience. 

The 'Lightning Connector,' which first appeared in 2012 before USB Type C, was its first step. The next significant development was the infamous removal of the 3.5mm headphone jack with the release of the iPhone 7 in 2016. Apple also removed the physical home button from its iPhones in favour of a virtual one that mimicked a real tactile push using the Taptic Engine and its vibration system.

The home button was completely abandoned in the 2017 iPhone X. The next step could come in 2023: the iPhone 15 models, or at least the "Pro" models, are rumoured to replace the mechanical volume and power buttons with solid-state buttons like the iPhone 7's home button, with Apple simulating the tactile feedback.

With the trackpad of the MacBook in 2015, Apple demonstrated its mastery of this technology. By the end of this year, the iPhone might not have any other physical moving parts besides the Lightning or USB Type C ports. 

Theoretically, Apple could circumvent all USB Type C regulations by the time the iPhone 16 is released in 2024 and offer a fully symmetrical iPhone that only accepts wireless charging through the MagSafe. 

Apple's tenacity shows in the fact that wired data transfers on the iPhone are still limited to pitifully slow USB 2.0 rates. Users are instead encouraged to use AirDrop, Apple's wireless peer-to-peer data transfer technology supported across the Mac, iPad, iPhone, and iPod.

Wireless iPhone 

Apple enjoys streamlining its products and getting rid of potential flaws. The mechanical home button was swapped out for a simulated one because the mechanical one's dependability was a problem. Later this year, this might occur with the power and volume buttons.

The tech giant aimed to get rid of a dated part with the 3.5mm headphone jack. It sought to make more room for items like batteries, which some may consider to be hogwash. Additionally, it desired the industrial design symmetry that was long favoured by Sir Jonathan Paul Ive, its former chief design officer. 

There might be advantages to removing the Lightning port. First, the charging port is a potential point of failure: a short circuit could destroy not only the Lightning cable and the port itself but also the iPhone's entire logic board, damage that typically cannot be repaired.

In addition, it will increase sales of MagSafe chargers, which are more costly than regular lightning cables and have few third-party substitutes. 

Furthermore, an iPhone's maximum wireless charging speed is 15W, which is a little less than the 20W wired charging speeds that Apple provides. An average iPhone user won't notice the difference because wireless charging speeds are improving every year.

Futuristic approach 

Apple is planning for the future even as regulators from all over the world continue to discuss the charging plug format. Induction charging will be used for everything within the next ten years, including electric vehicles.

Apple aspires to progress. In 2012, it desired that Lightning become the norm for all devices. But Intel was also working on USB Type C at the same time. Because it included the lightning-fast Thunderbolt protocol, manufacturers backed it. 

The tech giant does not want to see history repeat itself. By contributing significantly to the next-generation Qi wireless charging format, it is waiving its patents, giving every manufacturer access to the technology it pioneered while keeping up with the times. MagSafe entails more than an induction-based charging method and IP centred on high-intensity magnets that let the charging puck adhere to the device; it also includes a microcontroller inside the device that allows the operating system to detect the presence of a MagSafe-compliant accessory.

Given that many manufacturers now offer faster 50W wireless charging through exclusive charging docks, Apple may decide to open-source the technology's core and establish it as the industry standard for wireless charging. MagSafe enjoys greater ubiquity even though it charges more slowly, because it does more than just charging and is widely used by Apple customers.

Apple likely already runs a "Made for iPhone"-style certification programme covering accessories from companies like Belkin, which offers a variety of MagSafe-compliant products. As a result, it will be able to generate more royalties.

Ultimately, regulators around the world have taken too long to impose a standardised charging format. Wireless charging is increasingly common in phones that cost more than Rs 40,000. Apple already offers wireless charging on all of its most recent smartphones, and thanks to MagSafe and the new Qi standard, improvements over the next one to two years will probably yield an iPhone that doesn't require wired charging at all.

Risky Online Behaviour Normalised Among Youth: EU Study


EU Study suggests risky online behaviour rising in Europe 

Criminal and risky online behaviour risks becoming normalized among a generation of young people across Europe. The findings come from European Union (EU) funded research which found that one in four 16- to 19-year-olds have trolled someone online and one in three have engaged in digital piracy.

The EU-funded study found evidence of widespread risky, delinquent, and criminal behaviour among the 16-19 age group in nine European countries, including the UK.

The survey of 8,000 participants studied online trolling

The survey of 8,000 young participants suggests that one in four has trolled someone on the web, one in eight has engaged in online harassment, one in ten has engaged in hacking or hate speech, one in five in sexting, and one in three in digital piracy. Four in ten also reported watching pornography.

Risky and criminal online behaviour has become almost normalized among young people in Europe, said Julia Davidson, a co-author of the research and professor of criminology at the University of East London (UEL).

What do the survey findings suggest?

The research suggests that a large proportion of young people in the EU are engaging in some form of cybercrime, to the extent that committing low-level crimes online and online risk-taking have become almost normalised.

Davidson said the findings point to greater male involvement in criminal or risky behaviour: around three quarters of males admitted to some form of online risk-taking or cybercrime, compared with 65% of females.

The Guardian reports: "The survey asked young people about 20 types of behaviour online, including looking at pornographic material, posting revenge porn, making self-generated sexual images and posting hate speech. According to the survey findings, just under half of participants engaged in behaviour that could be considered criminal in most jurisdictions, such as hacking, non-consensual sharing of intimate images or “money muling” – where someone receives money from a third party and passes it on, in a practice linked to the proceeds of cybercrime."

Youth involved in online harassment in European countries

The survey covered nine countries: the UK, France, Spain, Italy, Germany, the Netherlands, Sweden, Norway and Romania. Spain had the highest proportion of "cyberdeviancy" at 75%, followed by Romania, Germany and the Netherlands; the UK came last at 58%. Cyberdeviancy, as defined in the survey, is a mixture of criminal and non-criminal risky behaviours.

"The survey, conducted by a research agency with previously used sample groups, found that half of 16- to 19-year-olds spent four to seven hours a day online, with nearly four out of 10 spending more than eight hours a day online, primarily on phones. It found that the top five platforms among the group were YouTube, Instagram, WhatsApp, TikTok and Snapchat," Guardian said. 

FancyBear: Hackers Use PowerPoint Files to Deliver Malware

 

Cluster25 researchers have recently detected activity by the threat group APT28, also known as FancyBear, which they attribute to the Russian GRU (Main Intelligence Directorate of the Russian General Staff). The group has used a new code execution technique, triggered by mouse movement in Microsoft PowerPoint, to deliver the Graphite malware.
 
According to the researchers, the campaign has been actively targeting organizations and individuals in the defense and government sectors of the European Union and Eastern European countries. The cyber espionage campaign is believed to be still active.
 

Methodology of Threat Actor

 
The threat actor allegedly entices victims with a PowerPoint file purporting to be associated with the Organisation for Economic Co-operation and Development (OECD).
 
This file contains two slides, with instructions in English and French for accessing the translation feature in Zoom. It also incorporates a hyperlink that acts as a trigger for delivering a malicious PowerShell script, which downloads a JPEG image carrying an encrypted DLL file.
 
The resulting payload, the Graphite malware, comes in Portable Executable (PE) form and allows the malware operator to load additional malware into system memory.
 
“The code execution runs a PowerShell script that downloads and executes a dropper from OneDrive. The latter downloads a payload that extracts and injects in itself a new PE (Portable Executable) file, that the analysis showed to be a variant of a malware family known as Graphite, that uses the Microsoft Graph API and OneDrive for C&C communications.” States Cluster25, in its published analysis.
 
The aforementioned Graphite malware is fileless, deployed in memory only, and is used by its operators to deliver post-exploitation frameworks such as Empire.
 
 
Based on the discovered metadata, Cluster25 believes the hackers prepared the campaign between January and February; however, the URLs used in the attacks were active in August and September.
 
With more hacker groups attempting such malicious cyber campaigns, government and private sector organizations must deploy stronger defences to prevent future breaches and cyber attacks and safeguard their operations.

Austria: Google Breached an EU Court Order

The Austrian advocacy group noyb.eu complained to France's data protection authorities on Wednesday that Google had violated a European Union court judgment by sending unsolicited advertising emails directly to the inbox of Gmail users. 

One of Europe's busiest data regulators, the French CNIL, has imposed some of the largest fines on companies like Google and Facebook. The activist organization gave CNIL screenshots of a user's inbox that displayed advertising messages at the top.

The French word 'annonce,' or 'ad,' and a green box were used to identify the messages. According to the group, that type of marketing was only permitted under EU rules with the users' consent.

Referring to Gmail's anti-spam filters, which place the majority of unsolicited emails in a separate folder, Romain Robert, program director at noyb.eu, said: "It's as if the mailman was paid to eliminate the ads from your inbox and put his own instead."

Google did not immediately respond to requests for comment. A CNIL spokeswoman acknowledged that the authority had received the complaint and was in the process of registering it.

The CNIL was chosen by Vienna-based noyb.eu (None Of Your Business) over other national data privacy watchdogs because it has a reputation for being one of the EU's most outspoken regulators, according to Robert.

Even though any CNIL ruling would be enforceable only in France, it could force Google to re-examine its practices there.

Max Schrems, an Austrian lawyer and privacy activist who won a prominent privacy case before Europe's top court in 2020, formed the advocacy group Noyb.eu.

This year, the CNIL fined Google a record-breaking 150 million euros ($149 million) for making it challenging for people to reject web trackers. Facebook (FB.O), owned by Meta Platforms, was also penalized 60 million euros for the same offense.

The firms are constantly under investigation for their practice of transmitting the private details of EU citizens to databases in the US. Numerous complaints have been made by NOYB to authorities throughout the bloc, claiming that the practice is forbidden.

A crucial tenet of the European Union's data privacy policy and a primary goal for the CNIL is the prior agreement of Internet users for the use of cookies, which are small bits of data that aid in the creation of targeted digital advertising campaigns.