
Privacy Expert Urges Policy Overhaul to Combat Data Brokers’ Practices

Privacy expert Yael Grauer, known for creating the Big Ass Data Broker Opt-Out List (BADBOOL), has a message for those frustrated with the endless cycle of removing personal data from brokers’ databases: push lawmakers to implement meaningful policy reforms. Speaking at the ShmooCon security conference, Grauer likened the process of opting out to an unwinnable game of Whac-A-Mole, where users must repeatedly fend off new threats to their data privacy. 

Grauer’s BADBOOL guide has served as a resource since 2017, offering opt-out instructions for numerous data brokers. These entities sell personal information to advertisers, insurers, law enforcement, and even federal agencies. Despite such efforts, the sheer number of brokers and their data sources makes it nearly impossible to achieve a permanent opt-out. Commercial data-removal services like DeleteMe offer to simplify this task, but Grauer’s research for Consumer Reports found them less effective than advertised. 

The study, released in August, gave its highest ratings to Optery and EasyOptOuts, but even these platforms left gaps. “None of these services cover everything,” Grauer warned, emphasizing that even privacy experts struggle to protect their data. Grauer stressed the need for systemic solutions, pointing to state-led initiatives like California’s Delete Act. This legislation aims to create a universal opt-out system through a state-run data broker registry. While similar proposals have surfaced at the federal level, Congress has repeatedly failed to pass comprehensive privacy laws. 

Other states have implemented statutes like Maryland’s Online Data Privacy Act, which restricts the sale of sensitive data. However, these laws often allow brokers to deal in publicly available information, such as home addresses found on property-tax sites. Grauer criticized these carve-outs, noting that they undermine broader privacy protections. One promising development is the Consumer Financial Protection Bureau’s (CFPB) proposal to classify data brokers as consumer reporting agencies under the Fair Credit Reporting Act. 

This designation would impose stricter controls on their operations. Grauer urged attendees to voice their support for this initiative through the CFPB’s public-comments form, open until March 3. Despite these efforts, Grauer expressed skepticism about Congress’s ability to act. She warned of political opposition to the CFPB itself, citing calls from conservative groups and influential figures to dismantle the agency. 

Grauer encouraged attendees to engage with their representatives to protect this regulatory body and advocate for robust privacy legislation. Ultimately, Grauer argued, achieving meaningful privacy protections will require collective action, from influencing policymakers to supporting state and federal initiatives aimed at curbing data brokers’ pervasive reach.

California Man Sues Banks Over $986K Cryptocurrency Scam

Ken Liem, a California resident, has filed a lawsuit against three major banks, accusing them of negligence in enabling a cryptocurrency investment scam. Liem claims he was defrauded of $986,000 after being targeted on LinkedIn in June 2023 by a scammer promoting crypto investment opportunities. Over six months, Liem wired substantial funds through Wells Fargo to accounts held by Hong Kong-based entities.

Liem’s ordeal escalated when his cryptocurrency account was frozen on false allegations of money laundering. To regain access to his funds, the scammers demanded he pay a fake IRS tax, a well-worn tactic for extracting as much money as possible from victims before they vanish.

The lawsuit names three financial institutions as defendants:
  • Chong Hing Bank Limited (Hong Kong-based)
  • Fubon Bank Limited (Hong Kong-based)
  • DBS Bank (Singapore-based, with a Los Angeles branch)

Allegations of Negligence and Non-Compliance

Liem accuses these banks of failing to follow mandatory “Know Your Customer” (KYC) and anti-money laundering (AML) protocols as required by the U.S. Bank Secrecy Act. The lawsuit asserts that the banks:
  • Failed to Verify Identities: Inadequate due diligence on account holders allowed fraudsters to operate unchecked.
  • Neglected Business Verification: The nature of the businesses linked to these accounts was not properly investigated.
  • Ignored Complaints: Liem reported the scam in August 2024, but the banks either disregarded his concerns or denied accountability.

The lawsuit contends that these financial institutions enabled the transfer of illicit funds from the U.S. to Asian accounts tied to organized scams by ignoring suspicious transactions.

Liem's case highlights the growing debate over banks' responsibility in preventing fraud. While lawsuits of this nature are uncommon, they are not without precedent. For instance:
  • January 2024: Two elderly victims of IRS impersonation scams sued JPMorgan Chase for allowing large international transfers without adequate scrutiny.

Globally, different approaches are being adopted to address fraud:
  • United Kingdom: New regulations require banks to reimburse scam victims up to £85,000 ($106,426) within five days, though banks have pushed back against raising this cap.
  • Australia: Proposed legislation could fine banks, telecom providers, and social media platforms for failing to prevent scams.
  • United States: The Consumer Financial Protection Bureau (CFPB) has taken legal action against Bank of America, Wells Fargo, and JPMorgan Chase for not preventing fraud on the Zelle platform, which has resulted in $870 million in losses since 2017.

As global authorities and financial institutions grapple with accountability measures, victims like Ken Liem face significant challenges in recovering their stolen funds. This lawsuit underscores the urgent need for stronger fraud prevention policies and stricter enforcement of compliance standards within the banking sector.

Guidelines on What Not to Share with ChatGPT: A Formal Overview

A tool as simple to use as ChatGPT wields remarkable power, and it has profoundly changed how we interact with computers. It does, however, have limitations that are important to understand and bear in mind.

ChatGPT has driven a massive increase in OpenAI's revenue, which reportedly grew to around $200 million in 2023 and was projected to exceed $1 billion by the end of 2024.

ChatGPT is powerful enough to generate almost any text a user asks for, from a simple math problem to a complex question about rocketry, and more besides. As AI-powered chatbots become more prevalent, it is crucial to recognize both their advantages and their shortcomings.

Using AI chatbots safely means understanding the inherent risks that come with them, such as cyber attacks and privacy issues. A recent change to Google's privacy policy made clear that the company may feed data collected from public web posts into its AI tools to train those models.

Equally troubling, ChatGPT retains chat logs to improve the model and the service. There is still a way to address this concern: withhold certain kinds of information from AI-based chatbots altogether. Jeffrey Chester, executive director of the Center for Digital Democracy, a digital-rights advocacy organization, has said that consumers should view these tools with suspicion at the very least, since, like so many other popular technologies, they are heavily influenced by the marketing and advertising industries.

The Limits of ChatGPT


Without browsing enabled (a feature reserved for ChatGPT Plus), the system generates responses from the patterns and information it learned during training, which covered a wide range of internet text up to its knowledge cutoff of September 2021.

Even so, it cannot understand context the way people do and does not "know" anything in any real sense. ChatGPT is famous for producing impressive, relevant responses much of the time, but it is not infallible: its answers can be incorrect or unintelligible for several reasons, and its proficiency depends largely on the quality and clarity of the prompt it is given.

1. Banking Credentials 


The Consumer Financial Protection Bureau (CFPB) published a report on June 6 on how chatbots falter as questions grow more complex. According to the report, poorly implemented chatbot technology can put financial institutions at risk of violating federal consumer protection laws.

The CFPB also reports that consumer complaints have risen over a range of issues, including resolving disputes, obtaining accurate information, receiving good customer service, reaching human representatives, and keeping personal information secure. In light of this, the CFPB advises financial institutions against relying on chatbots alone in their business model.

2. Personally Identifiable Information (PII)


Users should be careful never to share sensitive personal information that can identify them, both to protect their privacy and to minimise the risk of misuse. This category includes a user's full name, home address, social security number, credit card number, and any other detail that can identify them as an individual. Protecting this information is paramount to preserving privacy and preventing harm from unauthorised use.
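To make this advice concrete, here is a minimal sketch of a client-side filter that redacts common PII patterns from a prompt before it ever leaves the user's machine. The patterns and the `redact_pii` helper are hypothetical illustrations, not part of ChatGPT or any real product, and a handful of regexes is nowhere near complete PII coverage.

```python
import re

# Illustrative patterns only; real PII detection needs far broader coverage.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

# Example: scrub a prompt before pasting it into a chatbot.
raw = "I'm Jane, SSN 123-45-6789, card 4111 1111 1111 1111."
print(redact_pii(raw))
# -> I'm Jane, SSN [REDACTED SSN], card [REDACTED CARD].
```

A redaction pass like this is best treated as a safety net; the primary defence is simply not typing such details into a chatbot in the first place.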

3. Confidential information about the user's workplace


Users should exercise caution and refrain from sharing private company information when interacting with AI chatbots. It is crucial to understand the potential risks associated with divulging sensitive data to these virtual assistants. 

Major tech companies like Apple, Samsung, JPMorgan, and Google have even implemented stringent policies to prohibit the use of AI chatbots by their employees, recognizing the importance of protecting confidential information. 

A recent Bloomberg article shed light on an unfortunate incident involving a Samsung employee who inadvertently uploaded confidential code to a generative AI platform while utilizing ChatGPT for coding tasks. This breach resulted in the unauthorized disclosure of private information about Samsung, which subsequently led to the company imposing a complete ban on the use of AI chatbots. 

Such incidents highlight the need for heightened vigilance and adherence to security measures when leveraging AI chatbots. 

4. Passwords and security codes 


If a chatbot asks for passwords, PINs, security codes, or any other confidential access credentials, do not provide them. Even though these chatbots are designed with privacy in mind, it is prudent to prioritise your safety and keep such credentials to yourself.

Keeping your passwords and access credentials to yourself is paramount if your accounts are to remain secure and your personal information protected from unauthorised access or misuse.
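In the same illustrative spirit as the PII filter above, here is a sketch of the opposite policy for credentials: rather than redacting, refuse to send any prompt that looks like it contains a secret. The `looks_like_secret` heuristic and its two patterns are assumptions made for this example, not a real security control.

```python
import re

# Heuristics for credential-like strings; illustrative, not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"(?i)\b(password|passwd|pin|passcode|api[_-]?key|token)\b\s*[:=]\s*\S+"),
    re.compile(r"\b[A-Za-z0-9+/]{32,}\b"),  # long random-looking token
]

def looks_like_secret(prompt: str) -> bool:
    """Return True if the prompt appears to contain a credential."""
    return any(p.search(prompt) for p in SECRET_PATTERNS)

def safe_send(prompt: str) -> None:
    # Block the message entirely instead of trying to clean it.
    if looks_like_secret(prompt):
        raise ValueError("Prompt appears to contain a credential; refusing to send.")
    print("Sending:", prompt)  # stand-in for the real chatbot request

safe_send("How do I reset my router?")            # sent normally
# safe_send("my password = hunter2, please help")  # would raise ValueError
```

Blocking rather than redacting makes sense for credentials, since even a partially scrubbed secret is still a leak.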

As AI chatbot technology advances, carefully protecting personal and sensitive information matters more than ever. The guidance above underscores the need to engage with AI-driven virtual assistants responsibly and cautiously, with privacy and data integrity as the primary objectives. Stay well informed, and exercise prudence when interacting with these powerful tools.