
AI Tools Fueling Global Expansion of China-Linked Trafficking and Scamming Networks

 

A recent report highlights the alarming rise of China-linked human trafficking and scamming networks, now using AI tools to enhance their operations. Initially concentrated in Southeast Asia, these operations trafficked over 200,000 people into compounds in Myanmar, Cambodia, and Laos. Victims were forced into cybercrime activities, such as “pig butchering” scams, impersonating law enforcement, and sextortion. Criminals have now expanded globally, incorporating generative AI for multi-language scamming, creating fake profiles, and even using deepfake technology to deceive victims. 

The growing use of these tools allows scammers to target victims more efficiently and execute more sophisticated schemes. One of the most prominent types of scams is the “pig butchering” scheme, where scammers build intimate online relationships with their victims before tricking them into investing in fake opportunities. These scams have reportedly netted criminals around $75 billion. In addition to pig butchering, Southeast Asian criminal networks are involved in various illicit activities, including job scams, phishing attacks, and loan schemes. Their ability to evolve with AI technology, such as using ChatGPT to overcome language barriers, makes them more effective at deceiving victims. 

Generative AI also plays a role in automating phishing attacks, creating fake identities, and writing personalized scripts to target individuals in different regions. Deepfake technology, which allows real-time face-swapping during video calls, is another tool scammers are using to further convince victims of their fabricated personas. Criminals can now engage with victims in highly realistic conversations and video interactions, making it much more difficult for victims to tell real identities from fake ones. The UN report warns that these technological advancements are lowering the barrier to entry for criminal organizations that lack advanced technical skills but can now participate in lucrative cyber-enabled fraud.

As scamming compounds continue to operate globally, there has also been an uptick in law enforcement seizing Starlink satellite devices used by scammers to maintain stable internet connections for their operations. The introduction of “crypto drainers,” a type of malware designed to steal funds from cryptocurrency wallets, has also become a growing concern. These drainers mimic legitimate services to trick victims into connecting their wallets, allowing attackers to gain access to their funds.  
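To make the drainer pattern concrete, here is a minimal, illustrative Python sketch that inspects raw transaction calldata for two approval requests drainers commonly abuse: an unlimited ERC-20 `approve` and a blanket `setApprovalForAll`. The two function selectors are the standard ones for those signatures, but the sample calldata and the idea of running such a check inside a wallet are assumptions for illustration, not a description of any particular product.

```python
# Illustrative check for one common "crypto drainer" pattern: transactions
# that request an unlimited token allowance or a blanket NFT approval.
# A simplified sketch, not a complete wallet-security tool; the sample
# calldata below is hypothetical.
from typing import Optional

MAX_UINT256 = 2**256 - 1

# Well-known 4-byte selectors (first 4 bytes of keccak256 of the signature).
APPROVE = "095ea7b3"               # approve(address,uint256)        (ERC-20)
SET_APPROVAL_FOR_ALL = "a22cb465"  # setApprovalForAll(address,bool) (ERC-721/1155)

def flag_risky_calldata(calldata_hex: str) -> Optional[str]:
    """Return a warning if the calldata matches a drainer-style approval."""
    data = calldata_hex.lower().removeprefix("0x")
    selector, args = data[:8], data[8:]
    if selector == APPROVE and len(args) >= 128:
        spender = "0x" + args[24:64]    # last 20 bytes of the first 32-byte word
        amount = int(args[64:128], 16)  # second 32-byte word
        if amount == MAX_UINT256:
            return f"Unlimited token allowance requested for spender {spender}"
    if selector == SET_APPROVAL_FOR_ALL and len(args) >= 128:
        operator = "0x" + args[24:64]
        if int(args[64:128], 16) == 1:
            return f"Blanket NFT approval requested for operator {operator}"
    return None

if __name__ == "__main__":
    # Hypothetical calldata: approve(<spender>, MAX_UINT256)
    sample = ("0x095ea7b3"
              + "00000000000000000000000011223344556677889900aabbccddeeff00112233"
              + "f" * 64)
    print(flag_risky_calldata(sample))
```

A wallet or browser extension that surfaced warnings like these before the user signs would blunt the most common drainer flow, in which a fake site requests a far broader approval than the advertised action needs.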

As global law enforcement struggles to keep pace with the rapid technological advances used by these networks, the UN has stressed the urgency of addressing this growing issue. Failure to contain these ecosystems could have far-reaching consequences, not only for Southeast Asia but for regions worldwide. AI tools and the expanding infrastructure of scamming operations are creating a perfect storm for criminals, making it increasingly difficult for authorities to combat these crimes effectively. The future of digital scamming will undoubtedly see more AI-powered innovations, raising the stakes for law enforcement globally.

Supreme Court Directive Mandates Self-Declaration Certificates for Advertisements

 

In a landmark ruling, the Supreme Court of India recently directed every advertiser and advertising agency to submit a self-declaration certificate confirming that their advertisements do not make misleading claims and comply with all relevant regulatory guidelines before broadcasting or publishing. This directive stems from the case of Indian Medical Association vs Union of India. 

To enforce this directive, the Ministry of Information and Broadcasting has issued comprehensive guidelines outlining the procedure for obtaining these certificates, which became mandatory from June 18, 2024, onwards. This move is expected to significantly impact advertisers, especially those using deepfakes generated by Generative AI (GenAI) on social media platforms like Instagram, Facebook, and YouTube. The use of deepfakes in advertisements has been a growing concern. 

A previous op-ed, “Urgently needed: A law to protect consumers from deepfake ads,” highlighted the rising menace of deepfake ads making misleading or fraudulent claims and their adverse effects on consumer rights and public figures. A survey conducted by McAfee revealed that 75% of Indians had encountered deepfake content, with 38% falling victim to deepfake scams and 18% directly affected by such fraudulent schemes. Alarmingly, 57% of those targeted mistook celebrity deepfakes for genuine content. The new guidelines aim to address these issues by requiring advertisers to provide bona fide details and final versions of advertisements to support their declarations. This measure is expected to aid in identifying and locating advertisers, facilitating tracking once complaints are filed.

Additionally, it empowers courts to impose substantial fines on offenders. Despite the potential benefits, industry bodies such as the Internet and Mobile Association of India (IAMAI), the Indian Newspaper Society (INS), and the Indian Society of Advertisers (ISA) have expressed concerns over the additional compliance burden, particularly for smaller advertisers. These bodies argue that while self-certification has merit, the process needs to be streamlined to avoid hampering legitimate advertising activities. The challenge of regulating AI-enabled deepfake ads is further complicated by the sheer volume of digital advertisements, which makes it difficult for regulators to review each one.

Therefore, it is suggested that online platforms be obligated to filter out deepfake ads, leveraging their technology and resources for efficient detection. The Ministry of Electronics and Information Technology highlighted the negligence of social media intermediaries in fulfilling their due diligence obligations under the IT Rules in a March 2024 advisory. 

Although non-binding, the advisory stipulates that intermediaries must not allow unlawful content on their platforms. The Supreme Court is set to hear the matter again on July 9, 2024, when industry bodies are expected to present their views on the new guidelines. This intervention could address the shortcomings of current regulatory approaches and set a precedent for robust measures against deceptive advertising practices. 

As the country grapples with the growing threat of dark patterns in online ads, the apex court’s involvement is crucial in ensuring consumer protection and the integrity of advertising practices in India.

Navigating the Challenges of Personhood Data in the Age of AI

 

In the ever-evolving landscape of technology and data security, the emergence of AI-generated content and deepfake technology has thrust the issue of personal data into the limelight. This has prompted a critical examination of the challenges surrounding personhood verification, a complex topic gaining attention from major tech corporations and regulatory bodies.

Tech titans like Meta, Microsoft, Google, and Amazon are at the forefront of the battle against the rise of deepfakes and deceptive AI content. Meta's recent commitment to labelling AI-generated audio-visual content represents a significant stride in addressing this multifaceted challenge. Still, accurately identifying all instances of AI-generated content remains an intricate and ongoing effort. The voluntary accord reached at the Munich Security Conference outlines fundamental principles for managing the risks associated with deceptive AI election content.

While the framework sets forth noble intentions, questions linger about its effectiveness in the absence of detailed technical plans and robust enforcement mechanisms. Regulatory responses are emerging to fight AI-enabled impersonation, particularly in the United States.

The Federal Trade Commission (FTC) has proposed rule updates to combat AI-enabled impersonation and fraud. With the proliferation of AI tools enabling impersonation at an unprecedented scale, regulatory measures are deemed necessary to protect consumers from malicious actors.

The proposed expansions to the final impersonation rule aim to give consumers recourse against scammers using government seals or business logos to deceive individuals. Yet concerns linger regarding the potential clash between regulatory efforts and constitutional rights, particularly the First Amendment's protection of parody and free speech.

Innovative projects like Worldcoin are reshaping digital personhood and identity verification. Addressing the need for identity verification in an AI-driven world, Worldcoin aims to establish a global digital ID and financial network. Using biometric data and blockchain technology, Worldcoin offers individuals greater control over their online identities, promising a decentralized alternative to traditional identification systems.
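To illustrate the underlying idea in the simplest possible terms, the Python sketch below stores only a one-way hash (a "commitment") derived from a biometric template, so enrolment can be checked for duplicates without retaining the raw biometric. This is a toy model: real biometric templates are noisy and require fuzzy matching, and Worldcoin's actual protocol relies on specialized iris encodings and zero-knowledge proofs. The registry, salt, and template here are all hypothetical.

```python
# A toy illustration of biometric "commitments": only a one-way hash derived
# from a biometric template is stored, never the template itself. Worldcoin's
# real protocol (iris codes, zero-knowledge proofs) is far more involved;
# the registry, salt, and template below are hypothetical.
import hashlib

PROTOCOL_SALT = b"personhood-demo-v1"  # fixed domain separator, not a per-user secret
registry: set[str] = set()             # stand-in for an on-chain set of commitments

def commit(template: bytes) -> str:
    """Derive a one-way identity commitment from a biometric template."""
    return hashlib.sha256(PROTOCOL_SALT + template).hexdigest()

def enrol(template: bytes) -> bool:
    """Enrol a person once; a duplicate commitment is rejected (sybil resistance)."""
    c = commit(template)
    if c in registry:
        return False
    registry.add(c)
    return True

if __name__ == "__main__":
    # Hypothetical template; real biometric readings are noisy, so production
    # systems cannot rely on exact hashing like this toy does.
    template = b"example-iris-template"
    print(enrol(template))  # True: first enrolment succeeds
    print(enrol(template))  # False: second attempt with the same biometric is rejected
```

The design point the toy captures is that the verifier learns only whether a commitment is new, never the biometric itself.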

The multifaceted nature of digital ID applications raises questions about their efficacy and potential risks. While proponents endorse digital self-sovereignty and enhanced privacy protections, skeptics warn of the inherent challenges and vulnerabilities of decentralized platforms. As Worldcoin and similar initiatives gain traction, the debate surrounding the convergence of personal data and decentralized economies intensifies.

As society navigates the intricate intersection of technology, privacy, and identity, the personhood data dilemma persists as a formidable challenge. Regulators, tech companies, and innovators are actively wrestling with these complex issues, emphasizing the need for robust safeguards and transparent governance mechanisms.

In conclusion, as we navigate the challenges of personhood data in the age of AI, it is clear that a delicate balance must be struck between technological innovation, regulatory frameworks, and individual rights. The ongoing evolution of technology will continue to shape the future of personal data security, demanding a thoughtful and collaborative approach from all stakeholders involved.

As Deepfake of Sachin Tendulkar Surfaces, India’s IT Minister Promises Tighter Rules


On Monday, Indian Minister of State for Information Technology Rajeev Chandrasekhar confirmed that the government will notify robust rules under the Information Technology Act to ensure compliance by platforms in the country.

The Union Minister, posting on X, thanked cricketer Sachin Tendulkar for flagging the video and said that AI-generated deepfakes and misinformation are a threat to the safety and trust of Indian users. He noted that platforms must comply with the advisory issued by the Centre.

"Thank you @sachin_rt for this tweet #DeepFakes and misinformation powered by #AI are a threat to Safety&Trust of Indian users and represents harm & legal violation that platforms have to prevent and take down. Recent Advisory by @GoI_MeitY requires platforms to comply wth this 100%. We will be shortly notifying tighter rules under IT Act to ensure compliance by platforms," Chandrasekhar posted on X

On X, Tendulkar cautioned his fans and the public that the video was fake, and asked viewers to report any such applications, videos, and advertisements.

"These videos are fake. It is disturbing to see rampant misuse of technology. Request everyone to report videos, ads & apps like these in large numbers. Social media platforms need to be alert and responsive to complaints. Swift action from their end is crucial to stopping the spread of misinformation and fake news. @GoI_MeitY, @Rajeev_GoI and @MahaCyber1," Tendulkar said on X.

Deepfakes are synthetic media that have been digitally manipulated to swap one person's likeness for another. The alteration of facial features using deep generative techniques is known as a "deepfake." While fabricating information is nothing new, deepfakes use sophisticated AI and machine-learning algorithms to edit or create visual and audio content that can deceive viewers far more easily.
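For readers curious what automated screening can look like, below is a minimal Python sketch of frame-level deepfake scoring with a binary image classifier. The ResNet-18 backbone, the weights file `deepfake_detector.pt`, and the input `clip.mp4` are hypothetical stand-ins; real detectors differ substantially in architecture and accuracy, and none are foolproof.

```python
# A minimal sketch of frame-level deepfake screening with a binary image
# classifier. The weights file and backbone choice are hypothetical stand-ins.
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Backbone with a single-logit head: sigmoid output > 0.5 means "likely fake".
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)
model.load_state_dict(torch.load("deepfake_detector.pt", map_location=device))  # hypothetical weights
model.eval().to(device)

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def score_video(path: str, every_n: int = 30) -> float:
    """Average 'fake' probability over frames sampled every `every_n` frames."""
    cap = cv2.VideoCapture(path)
    scores, i = [], 0
    with torch.no_grad():
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % every_n == 0:
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR
                x = preprocess(rgb).unsqueeze(0).to(device)
                scores.append(torch.sigmoid(model(x)).item())
            i += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

print(f"Mean fake probability: {score_video('clip.mp4'):.2f}")
```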

Last month, the government urged all online platforms to abide by the IT rules and mandated companies to notify users about forbidden content transparently and accurately.

The Centre has asked platforms to take urgent action against deepfakes and ensure that their terms of use and community standards comply with the laws and IT regulations in force. The government has made it abundantly clear that any violation will be taken very seriously and could result in legal action against the offending entity.