
Tamil Nadu Police, DoT Target SIM Card Fraud in SE Asia with AI Tools

 

The Cyber Crime Wing of Tamil Nadu Police, in collaboration with the Department of Telecommunications (DoT), is intensifying efforts to combat online fraud by targeting thousands of pre-activated SIM cards used in South-East Asian countries, particularly Laos, Cambodia, and Thailand. These SIM cards have been linked to numerous cybercrimes involving fraudulent calls and scams targeting individuals in Tamil Nadu. 

According to police sources, investigators employed Artificial Intelligence (AI) tools to identify pre-activated SIM cards registered with fake documents in Tamil Nadu but active in international locations. Scammers commonly used these cards to place fraudulent calls to unsuspecting victims in the State, with schemes ranging from fake online trading opportunities to fraudulent credit or debit card upgrades. A senior official in the Cyber Crime Wing explained that a significant discrepancy was observed between the number of subscribers who had officially activated international roaming and the actual number of SIM cards being used abroad.
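Neither the police nor the DoT have described their actual tooling, but the kind of comparison the official outlines can be illustrated with a minimal sketch: checking Tamil Nadu-registered SIMs against official roaming activations and against the volume of calls they place into the State from abroad (as described below). All field names, numbers, and thresholds here are hypothetical.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class CallRecord:
    caller: str          # number originating the call (hypothetical format)
    origin_country: str  # country code the call was placed from
    callee_state: str    # Indian state of the receiving subscriber

def flag_suspicious_sims(records, roaming_activated, min_calls=100):
    """Flag numbers that call into Tamil Nadu from abroad at high volume
    without an official international-roaming activation on record."""
    volume = Counter(
        r.caller for r in records
        if r.origin_country != "IN" and r.callee_state == "Tamil Nadu"
    )
    return {
        number: count
        for number, count in volume.items()
        if count >= min_calls and number not in roaming_activated
    }

# Toy example: one number calling from Cambodia and Laos with no roaming
# activation is flagged; a domestic caller is not.
records = [
    CallRecord("+919000000001", "KH", "Tamil Nadu"),
    CallRecord("+919000000001", "LA", "Tamil Nadu"),
    CallRecord("+919000000002", "IN", "Tamil Nadu"),
]
print(flag_suspicious_sims(records, roaming_activated={"+919000000002"}, min_calls=2))
```

In practice such a check would run over operator call-detail records, and a flagged number would go to investigators for verification rather than being blocked automatically, as described below.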

The department is now working closely with central agencies to detect and block suspicious SIM cards. AI has proven instrumental in identifying mobile numbers involved in a disproportionately high volume of calls into Tamil Nadu. Numbers flagged by this analysis undergo further investigation, and if credible evidence links them to cybercrimes, the SIM cards are promptly deactivated. The crackdown follows a series of high-profile scams that have defrauded individuals of significant amounts of money.

For example, in Madurai, an advocate lost ₹96.57 lakh in June after responding to a WhatsApp advertisement promoting international share market trading with high returns. In another case, a government doctor was defrauded of ₹76.5 lakh through a similar investment scam. Special investigation teams formed by the Cyber Crime Wing have been successful in arresting several individuals linked to these fraudulent activities. Recently, a team probing ₹38.28 lakh frozen in various bank accounts apprehended six suspects. 

Following their interrogation, two additional suspects, Abdul Rahman from Melur and Sulthan Abdul Kadar from Madurai, were arrested. Authorities are also collaborating with police in North Indian states to apprehend more suspects tied to accounts through which the defrauded money was transacted. Investigations are ongoing in multiple cases, and the police aim to dismantle the network of fraudsters operating both within India and abroad. 

These efforts underscore the importance of using advanced technology like AI to counter increasingly sophisticated cybercrime tactics. By addressing vulnerabilities such as fraudulent SIM cards, Tamil Nadu’s Cyber Crime Wing is taking significant steps to protect citizens and mitigate financial losses.

UIUC Researchers Expose Security Risks in OpenAI's Voice-Enabled ChatGPT-4o API, Revealing Potential for Financial Scams

 

Researchers recently demonstrated that OpenAI's voice-enabled ChatGPT-4o API can be exploited by cybercriminals to carry out financial scams, and that such scams succeed often enough to be a practical concern despite the model's built-in limitations. The finding has raised concerns about the potential for misuse of this advanced language model.

ChatGPT-4o, OpenAI’s latest AI model, combines text, voice, and vision processing. These capabilities are accompanied by security features aimed at detecting and blocking malicious activity, including unauthorized voice replication.

Voice-based scams have become a significant threat, further exacerbated by deepfake technology and advanced text-to-speech tools. Despite OpenAI’s security measures, researchers from the University of Illinois Urbana-Champaign (UIUC) demonstrated how these protections could still be circumvented, highlighting risks of abuse by cybercriminals.

Researchers Richard Fang, Dylan Bowman, and Daniel Kang emphasized that current AI tools may lack sufficient restrictions to prevent misuse. They pointed out the risk of large-scale scams using automated voice generation, which reduces the need for human effort and keeps operational costs low.

Their study examined a variety of scams, including unauthorized bank transfers, gift card fraud, cryptocurrency theft, and social media credential theft. Using ChatGPT-4o’s voice capabilities, the researchers automated key steps of each scam, such as navigating websites, entering data, handling two-factor authentication codes, and following scam-specific instructions.

To bypass ChatGPT-4o’s data protection filters, the team used prompt “jailbreaking” techniques, allowing the AI to handle sensitive information. They simulated interactions with ChatGPT-4o by acting as gullible victims, testing the feasibility of different scams on real websites.

They manually verified each transaction, such as transfers on Bank of America’s site, and found success rates that varied by scam type. For example, Gmail credential theft succeeded 60% of the time, while crypto-related scams succeeded in about 40% of attempts.

Cost analysis showed that carrying out these scams was relatively inexpensive, with successful cases averaging $0.75. More complex scams, like unauthorized bank transfers, cost around $2.51—still low compared to the potential profits such scams might yield.

OpenAI responded by emphasizing that its newer model, o1-preview, includes advanced safeguards to prevent this type of misuse. The company claims that this model significantly outperforms GPT-4o in resisting unsafe content generation and handling adversarial prompts.

OpenAI also highlighted the importance of studies like UIUC’s for enhancing ChatGPT’s defenses. They noted that GPT-4o already restricts voice replication to pre-approved voices and that newer models are undergoing stringent evaluations to increase robustness against malicious use.