
AI-Powered Dark Patterns: What's Up Next?

 

The rapid growth of generative AI (artificial intelligence) makes it urgent to address the privacy and ethical issues these technologies raise across a range of sectors. Over the past year, data protection conferences have repeatedly emphasised AI's expanding role in the privacy and data protection domains, as well as the pressing need for Data Protection Officers (DPOs) to handle the issues it presents for their organisations. 

These issues include the creation of deepfakes and synthetic content that could sway public opinion or threaten individuals and the public at large; the leakage of sensitive personal information in model outputs; the inherent bias in generative algorithms; and the tendency to overestimate AI capabilities, which leads to reliance on inaccurate, fabricated output (so-called AI hallucinations) that often refers to real individuals. 

So, what are AI-driven dark patterns? They are deceptive user interface strategies that use AI to steer application users into decisions that favour the company rather than the user. These designs exploit user psychology and behaviour in more sophisticated ways than typical dark patterns do. 

Imagine getting a deepfake video call from "your bank manager" informing you of suspicious activity on your account. The AI tailors the call to your branch, your bank manager's voice, and even their appearance, making it highly convincing. Such a call could tempt you to disclose sensitive data or click on malicious links. 

Another alarming example of AI-driven dark patterns is hostile actors creating highly targeted social media profiles that exploit a child's vulnerabilities. The AI can analyse the child's online behaviour and fabricate friendships or relationships that trick them into disclosing personal information, or even their location, to these actors. So the question arises: what can we do now to minimise these harms? How do we prevent future scenarios in which cyber criminals, and even ill-intentioned organisations, reach us and our loved ones through the technologies we have come to rely on for daily activities? 

Unfortunately, the solution is not simple. Mitigating AI-driven dark patterns requires a multifaceted approach involving consumers, developers, and regulators. The globally recognised privacy principles of data quality, collection limitation, purpose specification, use limitation, security, transparency, accountability, and individual participation apply to all systems that handle personal data, including the training of algorithms and generative AI. We must now put these principles to the test and discover whether they can actually protect us from this new, and often thrilling, technology.

Prevention tips 

First and foremost, we must educate people about AI-driven dark patterns and fraudulent techniques. This can be accomplished through public awareness campaigns, educational tools at all levels of the education system, and warnings built into user interfaces, particularly on social media platforms popular with young people. Just as cigarette firms must disclose the risks of their products, so should the AI-powered services to which our children are exposed.

We should also look for ways to encourage users, particularly young and vulnerable ones, to be critical consumers of the information they encounter online, especially when dealing with AI systems. Twenty-first-century educational systems should train members of society to question, far more than they do today, the source and intent of AI-generated content. 

Finally, give the younger generation, and the older ones too, the tools they need to control their data and customise their interactions with AI systems. This might include options that allow users, or parents of young users, to opt out of AI-powered suggestions or data collection (see the sketch below). Governments and regulatory agencies also play an important role in establishing clear rules for AI development and use. The European Union plans to adopt its first such law this summer: the long-awaited EU AI Act puts many of these data protection and ethical concerns into action. This is a positive start.
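
To make the opt-out idea concrete, here is a minimal sketch in TypeScript of what privacy-protective defaults and a parental approval gate might look like. The interface and function names are hypothetical and not drawn from any existing platform or law; this is an illustration of the design principle, not an implementation of it.

```typescript
// Hypothetical sketch only: privacy-protective defaults for a young user's
// account. Names and structure are illustrative, not taken from any real platform.

interface PrivacySettings {
  aiRecommendations: boolean;          // AI-powered suggestions and feeds
  behaviouralDataCollection: boolean;  // usage data collected for profiling
  parentalApprovalRequired: boolean;   // changes need a parent's confirmation
}

// Default to the most protective configuration for minors:
// AI-driven features are opt-in rather than opt-out.
function defaultSettingsForMinor(): PrivacySettings {
  return {
    aiRecommendations: false,
    behaviouralDataCollection: false,
    parentalApprovalRequired: true,
  };
}

// A requested change only takes effect once a parent has confirmed it;
// otherwise the protective defaults are kept.
function applyChange(
  current: PrivacySettings,
  requested: Partial<PrivacySettings>,
  parentConfirmed: boolean
): PrivacySettings {
  if (current.parentalApprovalRequired && !parentConfirmed) {
    return current;
  }
  return { ...current, ...requested };
}
```

A real implementation would of course need secure parental verification and clear consent flows, but the point of the sketch is that the protective choice is the default rather than something buried behind extra clicks.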

Subscription Services Accused of Using 'Dark Patterns' to Manipulate Customers

 


Subscription sites widely manipulate customers' behaviour around subscriptions and personal data in order to influence their decisions, according to a new report by two international consumer protection and privacy enforcement networks. The practice, known as "dark patterns", is defined as guiding, deceiving, coercing, or manipulating consumers through an online user interface in ways that often are not in their best interests. 

The international research effort was conducted by the International Consumer Protection and Enforcement Network together with the Global Privacy Enforcement Network, which carry out consumer protection and privacy enforcement investigations respectively. After reviewing selected websites and apps, the Federal Trade Commission and the two networks reported that a significant portion of the websites and applications examined may manipulate consumers into buying products or services or into revealing personal information to third parties. 

These dark patterns, or deceptive digital design techniques, were found in most of the websites and apps examined, and such strategies can persuade consumers to take actions they would not otherwise take. The sweep analysed the websites and mobile apps of 642 traders and found that 75.7% of them used at least one dark pattern and 66.8% used two or more. 

Dark patterns in an online user interface are defined as subtle strategies used to steer, deceive, coerce, or manipulate users into decisions that are not necessarily in their best interest and may be detrimental to them. The findings come from the annual International Consumer Protection and Enforcement Network (ICPEN) sweep, hosted by ICPEN from January 29 to February 2, 2024. 

The study was carried out by sweepers representing 27 consumer protection enforcement authorities from 26 countries, and for the very first time ICPEN coordinated the sweep with the Global Privacy Enforcement Network (GPEN). GPEN is a membership-based network of over 80 privacy enforcement authorities whose mission is to foster cross-border cooperation among privacy regulators and to protect personal privacy in a world of increasingly global standards, regulations, and technology. 

Consumer protection is increasingly intertwined with other spheres of regulation, privacy among them. The joint assessment of deceptive design patterns by privacy and consumer protection sweepers showed that many of the sites and apps reviewed employ techniques that interfere with individuals' ability to make informed decisions and to protect both their consumer rights and their privacy. 

The sweepers rated the sites and apps against six indicators of dark commercial patterns identified by the Organisation for Economic Co-operation and Development (OECD). The ICPEN sweep found that sneaking practices, such as preventing consumers from turning off the auto-renewal of subscriptions, and interface interference, such as visually highlighting the subscription option that benefits the trader, were particularly frequent during the survey period (see the sketch below). 
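
As a rough illustration of what "interface interference" looks like in practice, the hypothetical snippet below sketches a subscription chooser in which the trader-friendly plan is pre-selected and visually emphasised while the cheaper option is played down. The names and data structure are invented for this example and do not come from the ICPEN or GPEN reports.

```typescript
// Hypothetical sketch of interface interference in a subscription chooser.
// The trader-friendly plan is pre-selected and visually emphasised, while
// the alternative is rendered as an easy-to-miss muted link.

interface PlanOption {
  id: string;
  label: string;
  preselected: boolean;          // pre-ticking nudges users toward this choice
  emphasis: "primary" | "muted"; // visual weight given to the option
}

const darkPatternChooser: PlanOption[] = [
  {
    id: "annual-autorenew",
    label: "Best value! Annual plan (renews automatically)",
    preselected: true,        // user must actively deselect it
    emphasis: "primary",      // large, colourful button
  },
  {
    id: "monthly-cancel-anytime",
    label: "Continue with the basic monthly plan",
    preselected: false,
    emphasis: "muted",        // small grey link, easy to overlook
  },
];

// A neutral design would leave nothing pre-selected and give every option
// the same visual weight, letting the consumer make an unprompted choice.
const neutralChooser: PlanOption[] = darkPatternChooser.map(
  (option): PlanOption => ({ ...option, preselected: false, emphasis: "primary" })
);
```

Spotting this shape, a pre-ticked and heavily promoted option next to a deliberately downplayed one, is essentially what the interface interference indicator is meant to capture.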

ICPEN and GPEN, two networks working to improve consumer protection and privacy for individuals throughout the world, have each released a report outlining their findings, available on their respective websites. GPEN's companion report explores dark patterns that could encourage users to compromise their privacy. The majority of the more than 1,000 websites and apps analyzed in that study used at least one deceptive design practice. 

As many as 89 per cent of these organizations had privacy policies written in complex and confusing language. Through interface interference, 57 per cent of the platforms made the least privacy-protective option the easiest one to pick, and 42 per cent used emotionally loaded wording to influence users' privacy choices. Such subtle cues can push even the most astute individuals towards suboptimal decisions. 

These decisions might be relatively harmless, such as forgetting to cancel an auto-renewing service, or they might pose significant risks by encouraging the disclosure of more personal information than necessary. The reports do not assess whether the dark patterns identified were unlawful; they only confirm their presence. The dual release underscores the importance of digital literacy as an essential skill in the modern age, and it coincides with the Federal Trade Commission (FTC) officially assuming the 2024-2025 presidency of the International Consumer Protection and Enforcement Network (ICPEN).

ICPEN is a global network of consumer protection authorities from over 70 countries, dedicated to safeguarding consumers worldwide by sharing information and fostering global enforcement cooperation. The FTC has long been committed to identifying and combating businesses that utilize deceptive and unlawful dark patterns. In 2022, the FTC published a comprehensive staff report titled "Bringing Dark Patterns to Light," which detailed an extensive array of these deceptive practices. 

The Federal Trade Commission collaborates with counterpart agencies to promote robust antitrust, consumer protection, and data privacy enforcement and policy. The FTC emphasizes that it will never demand money, issue threats, instruct individuals to transfer funds, or promise prizes. For the latest news and resources, individuals are encouraged to follow the FTC on social media and to subscribe to its press releases and the FTC International Monthly.