
The Concerning Rise of AI “Undressing” Apps: A Violation of Privacy and Ethics

 

Today, AI can help with a wide variety of tasks, from creating personalised meal plans and offering dating advice to fixing image flaws and optimising workflows.

However, AI technology has also opened the door to more controversial applications, such as the AI nude generators used for so-called AI undressing. These apps, whose popularity is growing on the back of rapid technical advances and the attention they attract, use deep learning algorithms to analyse and edit images, effectively removing clothing from photographs. 

Nevertheless, the use of these apps raises serious legal and ethical concerns. Many of them can infringe privacy rights and be put to malicious use, which may carry legal consequences. Responsible use of AI undressing apps is critical, but the potential for abuse and the difficulty of regulating them remain significant hurdles.

In Israel, for example, there have been debates about implementing regulations similar to those governing revenge pornography, which would criminalise the unauthorised use of AI undressing apps. In addition, Israeli tech businesses and academic institutions are creating educational courses and guidelines to promote the appropriate use of AI. These initiatives aim to mitigate the negative effects of applications such as AI undressing while upholding ethical standards in technology use. 

One of the most pressing questions surrounding AI-powered undressing apps is whether they can be used responsibly at all. This is a complex issue that ultimately depends on individual notions of right and wrong, as well as the willingness to take the measures required to protect oneself and others from the harms these apps can cause. 

The appropriate use of such technology requires a thorough awareness of its ramifications as well as a commitment to ethical principles. As AI evolves, it is critical for society to strike a balance between innovation and ethical responsibility, and to ensure that technological breakthroughs are used to improve our lives while preserving our values and safety. 

This includes establishing strong legal frameworks, raising awareness and educating about the risks, and cultivating an ethical AI culture. By doing so, we can maximise the benefits of AI while minimising its potential risks, resulting in a safer and more responsible technological landscape for everybody.

EU Accuses Microsoft of Secretly Harvesting Children's Data

 

Noyb (None of Your Business), also known as the European Centre for Digital Rights, has filed two complaints against Microsoft under Article 77 of the GDPR, alleging that the tech giant breached schoolchildren's privacy rights through its Microsoft 365 Education service for educational institutions. 

Noyb claims that, through its contracts, Microsoft tried to shift its GDPR responsibilities and privacy obligations onto the institutions, but that these organisations had no reasonable means of complying because they had no real control over the collected data. 

The non-profit argued that as schools and educational institutions in the European Union came to depend more heavily on digital services during the pandemic, large tech companies took advantage of the trend to attract a new generation of committed customers. While noyb supports the modernisation of education, it believes Microsoft has breached several data protection rights by offering educational institutions access to its 365 Education services, leaving students, parents, and institutions with few options. 

Noyb also voiced concern about the market power of software vendors like Microsoft, which allows them to dictate the terms and conditions of their contracts with schools. The organisation claims that this power has enabled IT companies to transfer most of their legal obligations under the GDPR to educational institutions and municipal governments. 

In reality, according to noyb, neither local governments nor educational institutions have any real power to influence how Microsoft handles user data. Instead, they are frequently faced with a "take it or leave it" scenario, in which Microsoft holds all the commercial and decision-making power while the schools are expected to bear all of the associated risks.

“This take-it-or-leave-it approach by software vendors such as Microsoft is shifting all GDPR responsibilities to schools,” stated Maartje de Graaf, a data protection lawyer at noyb. “Microsoft holds all the key information about data processing in its software, but is pointing the finger at schools when it comes to exercising rights. Schools have no way of complying with the transparency and information obligations.” 

Two complaints 

Noyb is representing two complainants against Microsoft over suspected infringement of data privacy rules. The first complaint concerns a father who, on behalf of his daughter, requested the personal data that Microsoft's 365 Education service had collected about her, in accordance with the GDPR. 

Microsoft, however, redirected the parent to the "data controller," and after confirming with Microsoft that the school was the data controller, the parent contacted the school, which responded that it only had access to the email address the student had used to sign up. 

The second complaint states that, despite the complainant never consenting to cookies or tracking technologies, Microsoft 365 Education installed cookies that, according to Microsoft's own documentation, analyse user behaviour and collect browser data, both of which are used for advertising purposes. The non-profit alleges that this kind of invasive profiling was carried out without the school's knowledge or approval. 

noyb has requested that the Austrian data protection authority (DSB) investigate and analyse the data collected and processed by Microsoft 365 Education, as neither Microsoft's own privacy documentation, the complainant's access requests, nor the non-profit's own research could shed light on this process, which it believes violates the GDPR's transparency provisions.

Consent-O-Matic: A Perfect Tool for Blocking Cookie Pop-Ups

 

If you use the internet, you're bound to be greeted by cookie consent pop-ups that seek permission to track you and promise to use the cookies to enhance your browsing experience. The intrusive behaviour of cookies, which track your movements across the web, has long raised privacy concerns. 

Those concerns led to regulation, most notably the General Data Protection Regulation (GDPR), which went into effect in 2018, and to an industry of consent management platforms (CMPs) that collect consent on websites' behalf. Even so, countless sites still outright violate the rules and deceptively track users' activity. 

Cookies were invented in 1994 by Louis J. Montulli II, a 23-year-old engineer who also pioneered elements such as HTTP proxying. He coined the term “cookies” while working at Netscape, the company founded by veterans of Mosaic, one of the internet's first widely used browsers. Soon after the advent of cookies, people started speaking up about the privacy concerns that accompanied them. 
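To make the tracking mechanism concrete, here is a minimal sketch, in TypeScript on Node's built-in http module, of how a hypothetical server can hand a browser a persistent identifier via the Set-Cookie header and then recognise that browser on every later request. The cookie name visitor_id and the logging are illustrative assumptions, not taken from any particular site.

```typescript
// Minimal sketch of cookie-based tracking (hypothetical server, not any
// real site): assign a random visitor_id on the first visit, read it back
// on every later request, and link all of those requests to one visitor.
import * as http from "http";
import { randomUUID } from "crypto";

const server = http.createServer((req, res) => {
  // Look for an existing visitor_id in the Cookie request header.
  const match = /(?:^|;\s*)visitor_id=([^;]+)/.exec(req.headers.cookie ?? "");
  let visitorId = match?.[1];

  if (!visitorId) {
    // First visit: hand the browser a persistent identifier, valid for a year.
    visitorId = randomUUID();
    res.setHeader(
      "Set-Cookie",
      `visitor_id=${visitorId}; Max-Age=31536000; Path=/`
    );
  }

  // Every subsequent page view can now be tied to the same identifier.
  console.log(`visitor ${visitorId} requested ${req.url}`);
  res.end("hello");
});

server.listen(8080);
```

When the same identifier is set by a third party embedded on many sites, the same mechanism lets that party follow a visitor across all of them, which is what consent pop-ups are supposed to ask permission for.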

Cookie blockers: the need of the hour 

The majority of consent pop-ups on the web still do not meet the requirements for legally valid consent that the GDPR laid out four years ago. As a result, users are effectively forced to share their data with site after site. 

In April this year, researchers at Aarhus University publicly released Consent-O-Matic, a browser extension that automatically rejects requests for permission to track you. The Consent-O-Matic extension is free and available for Firefox, Chrome and other Chromium-based browsers, and Safari on macOS and iOS. It already had 22,000 test users from multiple countries before its public release. 
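Roughly speaking, the extension works by recognising the markup of known CMP pop-ups and submitting the user's saved preferences for them. The snippet below is a much-simplified TypeScript sketch of that idea as a browser content script; the rule format, the CSS selectors, and the single “reject all” action are illustrative assumptions rather than Consent-O-Matic's actual per-CMP rules.

```typescript
// Illustrative content-script sketch: detect a known consent pop-up and
// automatically press its "reject all" control. Real tools ship detailed
// per-CMP rule lists; the selectors here are hypothetical examples only.
type BannerRule = {
  cmpName: string;
  bannerSelector: string; // element that indicates the pop-up is present
  rejectSelector: string; // button that refuses optional cookies
};

const rules: BannerRule[] = [
  {
    cmpName: "ExampleCMP",
    bannerSelector: "#example-consent-banner",
    rejectSelector: "#example-reject-all",
  },
];

function rejectConsentPopups(): void {
  for (const rule of rules) {
    const banner = document.querySelector<HTMLElement>(rule.bannerSelector);
    if (!banner) continue; // this CMP is not on the page
    const reject = banner.querySelector<HTMLElement>(rule.rejectSelector);
    if (reject) {
      reject.click(); // refuse optional tracking on the user's behalf
      console.debug(`Auto-rejected consent pop-up from ${rule.cmpName}`);
    }
  }
}

// Consent pop-ups are often injected after page load, so watch for DOM changes.
new MutationObserver(rejectConsentPopups).observe(document.documentElement, {
  childList: true,
  subtree: true,
});
rejectConsentPopups();
```

In practice a real rule set also has to handle multi-step dialogs, per-purpose toggles, and pop-ups rendered inside iframes, which is where most of the complexity lives.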

“The reason I created this Consent-O-Matic extension was that I'd done the research and I'd demonstrated there was a lack of compliance when it came to 'consent' pop-ups on the web,” Midas Nouwens, one of the extension developers and first author of the academic paper introducing it, stated. “I knew from how it'd been in past years that it was going to be a slow process for regulators to pick up on this. Nor was I confident that they even would.”

“So, I figured I'd do something bottom-up, not just relying on authorities to try and enforce but build something users can use now while we wait for this slower, democratic process to happen.”

Shady practices of CMPs 

It seems that consent management platforms are already attempting to bypass the Consent-O-Matic browser extension. Nouwens shared on Twitter a patent application, filed on September 6, 2022, by CMP OneTrust that is aimed at detecting automated cookie rejection. If such automation is identified, the software would discard the automated refusal and present the user with another consent request, even inserting a captcha. 

"By automatically rejecting such consent, the user may not be making an informed decision and the website operator may not be able to ensure the website is in full compliance with applicable privacy laws and regulations,” the warning issued by OneTrust’s patent. 

“The patent is pretty hilarious. The idea it is premised on seems to be that a refusal of consent has to have the same high standards as a granting of consent—that is to be specific, informed, freely given, and unambiguous,” Michael Veale, a professor of digital rights and privacy at UCL Laws, stated. “But that's simply incorrect. Refusing consent is different from giving it, and is not subject to those standards. Furthermore, data protection law specifically recognizes that an individual 'may exercise his or her right to object by automated means using technical specifications'.” 

In 2020, a team of researchers including Nouwens and Veale published a paper entitled “Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence” to highlight the shady practices employed by CMPs. In a survey of 680 of the UK's top sites, 24 percent employed OneTrust, and only 1.8 percent of those sites were even minimally compliant with the GDPR. 

The results illustrate the extent to which illegal practices prevail, with CMP vendors turning a blind eye. Earlier this year, in August, privacy group noyb filed 226 GDPR complaints against websites using OneTrust for failing to comply with GDPR guidelines.