
India’s Expanding Digital Reach Brings New Cybersecurity Challenges

 



India’s digital transformation has advanced rapidly over the past decade. With more than 86% of households now online, the Digital India initiative has helped connect citizens, businesses, and services like never before. However, this growing connectivity has also exposed millions to rising cybersecurity risks and financial fraud.

According to official government data, reported cybersecurity incidents have more than doubled, from 10.29 lakh in 2022 to 22.68 lakh in 2024. Experts say this rise not only reflects a more complex threat environment but also improved mechanisms for tracking and reporting attacks.

By February 2025, a cumulative 36.45 lakh complaints had been filed on the National Cyber Crime Reporting Portal (NCRP), revealing the scale of digital financial fraud in the country.


The Changing Face of Cyber Frauds

Cybercriminals are constantly evolving their methods. Traditional scams such as phishing and spoofing, in which fraudsters pretend to represent banks or companies, are giving way to more advanced schemes. Some use artificial intelligence to generate convincing fake voices or videos, making deception harder to detect.

A major area of exploitation involves India’s popular Unified Payments Interface (UPI). Attackers have been using compromised mobile numbers to steal funds. In response, the Department of Telecommunications introduced the Financial Fraud Risk Indicator (FRI), which identifies phone numbers showing suspicious financial activity.

Another serious concern is the surge of illegal online betting and gaming applications. Investigations suggest these platforms have collectively generated over ₹400 crore through deceptive schemes. To address this, the government passed the Promotion and Regulation of Online Gaming Bill, 2025, which bans online money gaming while supporting legitimate e-sports and social gaming activities.

India’s legal and institutional framework for cybersecurity continues to expand. The Information Technology Act, 2000, remains the backbone of cyber law, supported by newer policies such as the Digital Personal Data Protection Act, 2023, which reinforces users’ privacy rights and lawful data handling. The Intermediary Guidelines and Digital Media Ethics Code, 2021, also make digital platforms more accountable for the content they host.

The Union Budget 2025–26 allocated ₹782 crore for national cybersecurity initiatives. The government has already blocked over 9.42 lakh SIM cards and 2.63 lakh IMEIs associated with fraudulent activity. Through the CyTrain portal, over one lakh police officers have received training in digital forensics and cybercrime investigation.


National Coordination and Citizen Awareness

Agencies like CERT-In and the Indian Cyber Crime Coordination Centre (I4C) are central to India’s cyber response system. CERT-In has conducted over 100 cyber drills involving more than 1,400 organizations to assess preparedness. I4C’s “Samanvaya” and “Sahyog” platforms enable coordination across states and assist in removing harmful online content.

The government’s helpline number 1930 and the cybercrime portal cybercrime.gov.in provide citizens with direct channels to report cyber incidents. Awareness campaigns through radio, newspapers, and social media further aim to educate the public on online safety.


A Shared Responsibility

India’s expanding digital frontier holds immense promise, but it also demands shared responsibility. With stronger laws, institutional coordination, and public vigilance, India can continue to drive its digital progress while keeping citizens safe from cyber threats.



AI Fraud Emerges as a Growing Threat to Consumer Technology


 

The advent of generative AI has ushered in a paradigm shift in cybersecurity, transforming the tactics, techniques, and procedures that malicious actors have long relied on. No longer needing to invest heavily in time or resources, threat actors are using generative AI to launch sophisticated attacks with unprecedented pace and efficiency.

With these tools, cybercriminals can scale their operations dramatically while lowering the technical and financial barriers to entry, crafting highly convincing phishing emails and automating malware development. This rapidly growing threat landscape poses a serious challenge to cybersecurity professionals.

Old defence mechanisms and threat models may no longer suffice in an environment where attackers adapt continuously with AI-driven precision. Security teams therefore need to track current trends in AI-enabled threats, understand historical attack patterns, and extract actionable insights from them to stay ahead of the curve.

By learning from previous incidents and anticipating how generative AI will be used next, organisations can improve their readiness to detect, defend against, and respond to this new breed of intelligent cyber threats. There has never been a more urgent time to adopt proactive, AI-aware cybersecurity strategies. With the rapid growth of India's digital economy in recent years, supported by platforms like UPI for seamless payments and Digital India for accessible e-governance, cyber threats have grown increasingly complex, fuelling cybercrime.

Alongside significant convenience and economic opportunity, these technological advances have exposed users to a new generation of cyber risks driven by artificial intelligence (AI). Where AI was once primarily a tool for innovation and efficiency, cybercriminals now use it to carry out highly customised, scalable, and deceptive attacks.

Unlike traditional scams, AI-enabled threats can mimic human behaviour, produce realistic messages, and adapt to targets in real time. Exploiting these capabilities, malicious actors can craft phishing emails that closely resemble official correspondence, use deepfakes to fool the public, and automate large-scale scams at alarming speed.

The impact is particularly severe in India, where millions of users, many of them first-time internet users, may lack the awareness or tools to detect such sophisticated attacks. With global cybercrime losses expected to reach trillions of dollars over the next decade, India's digitally active population is becoming an increasingly attractive target.

Rapid technology adoption combined with gaps in digital literacy has made AI-powered fraud increasingly common. It is therefore imperative that government agencies, private businesses, and individuals coordinate efforts to understand the evolving threat landscape and develop robust, AI-aware cybersecurity strategies.

Artificial Intelligence (AI) is the branch of computer science concerned with building systems capable of tasks that typically require human intelligence, such as reasoning, learning, problem-solving, perception, and language understanding. In its simplest form, AI involves developing algorithms and computational models that can process huge amounts of data, identify meaningful patterns, adapt to new inputs, and make decisions with minimal human intervention.

AI helps machines emulate cognitive functions such as recognising speech, interpreting images, comprehending natural language, and predicting outcomes, enabling them to automate work, improve efficiency, and solve complex real-world problems. Its applications extend across industries, from healthcare and finance to manufacturing, autonomous vehicles, and cybersecurity. Within this broader field, Machine Learning (ML) is a crucial subset that enables systems to learn and improve from experience without being explicitly programmed for every possible scenario.

ML algorithms analyse data, identify patterns, and refine themselves in response to feedback, becoming more accurate over time. Deep Learning (DL), a more advanced subset of machine learning, uses layered neural networks loosely modelled on the human brain and excels at processing unstructured data such as images, audio, and natural language. Technologies like facial recognition, autonomous driving, and conversational AI are all powered by deep learning.

ChatGPT is one of the best-known examples of deep learning in action, using large-scale language models to understand and respond to user queries in a human-like way. As these technologies evolve, their impact across sectors is growing rapidly and offering immense benefits, but they also present new vulnerabilities that cybercriminals are increasingly eager to exploit.

The rise of generative AI technologies, especially large language models (LLMs), has significantly changed the fraud landscape, providing powerful tools for defence as well as new opportunities for exploitation. While these technologies enhance security teams' ability to detect and mitigate threats, they also let cybercriminals devise sophisticated fraud schemes that bypass conventional safeguards and conceal their true identity.

Fraudsters are increasingly using generative AI to craft attacks that are more persuasive and harder to detect. AI-assisted phishing in particular has risen sharply: language models generate emails and messages that mimic the tone, structure, and branding of legitimate communications, eliminating the poor grammar and suspicious formatting that used to give scams away.

A related development is the deployment of deepfake technology, including voice cloning and video manipulation, to impersonate trusted individuals, enabling social engineering attacks that are both persuasive and difficult to dismiss. Attackers can also automate at scale, using generative AI to target multiple victims simultaneously, customise messages, and adjust tactics in real time.

This scalability makes fraudulent campaigns more effective and more widespread. AI also gives bad actors sophisticated evasion techniques: creating synthetic identities, manipulating behavioural biometrics, and adapting rapidly to new defences, all of which make detection harder. The same AI technologies fraudsters exploit, however, are also used by cybersecurity professionals to strengthen defences.

Security teams use generative models to identify anomalies in real time, establishing dynamic baselines of normal behaviour and flagging deviations that may signal fraud. Synthetic data generation also allows the creation of realistic, anonymised datasets for training more accurate and robust fraud detection systems, particularly for identifying unusual or emerging fraud patterns.
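The dynamic-baseline idea can be sketched with a crude rolling z-score check. This is a minimal illustration, not any vendor's actual fraud model; the window size, threshold, and sample amounts are all invented for the example:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, window=10, threshold=3.0):
    """Flag transactions that deviate sharply from a rolling baseline.

    A transaction is flagged when it lies more than `threshold` standard
    deviations above the mean of the preceding `window` transactions --
    a toy stand-in for the dynamic behavioural baselines described above.
    """
    flagged = []
    for i in range(window, len(amounts)):
        baseline = amounts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (amounts[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A user's typical transfers hover around 120-140; one huge outlier appears.
history = [120, 135, 110, 125, 140, 130, 115, 128, 122, 138,
           126, 131, 9500, 124, 119]
print(flag_anomalies(history))  # -> [12]: the 9500 transfer stands out
```

Production systems layer many such signals (device, location, payee history) and learn the baseline statistically rather than from a fixed window, but the flag-on-deviation principle is the same.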

AI also aids investigation: it lets analysts rapidly sift through massive datasets and surface critical connections, patterns, and outliers that might otherwise go undetected. Adaptive defence systems, AI-driven platforms that learn and evolve in response to new threat intelligence, help fraud prevention strategies remain resilient even as attacker tactics shift. Generative AI has thus been integrated into both the offensive and defensive sides of fraud, marking a revolutionary shift in digital risk management.

As the technology advances, fraud prevention will increasingly depend on organisations understanding and using AI, both to anticipate emerging threats and to stay several steps ahead of those looking to exploit them. Even as AI becomes more embedded in daily life and business operations, the risks of its misuse and vulnerability cannot be ignored.

Individuals and organisations alike should adopt a comprehensive, proactive cybersecurity strategy tailored to the unique challenges AI presents. Regularly auditing AI systems is a fundamental first step: organisations must evaluate the trustworthiness, security posture, and privacy implications of these technologies, whether they rely on third-party platforms or internally developed models.

To find weaknesses and minimise potential threats, organisations should conduct periodic system reviews, penetration tests, and vulnerability assessments in cooperation with cybersecurity and AI specialists. Sensitive and personal information must also be handled responsibly: a growing number of individuals unintentionally share confidential information with AI platforms without understanding the ramifications.

Corporations have submitted proprietary information to AI-powered tools such as ChatGPT, and healthcare professionals have disclosed patient information, raising serious concerns about data privacy and regulatory compliance. Because AI interactions may be recorded to improve the systems, users should avoid sharing personal, confidential, or regulated information on such platforms.

Securing data is another important aspect of AI modelling. The integrity of training data is vital to an AI system's functionality, and any manipulation, known as "data poisoning", can corrupt outputs with detrimental consequences for users. Mitigations include strong data governance policies, robust encryption, access controls, and comprehensive backup solutions.
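One concrete governance control against silent tampering is a tamper-evident fingerprint of the training set. The sketch below is a minimal illustration of that idea, not a full data-governance pipeline; the record layout is invented for the example:

```python
import hashlib
import json

def fingerprint_dataset(records):
    """Compute a tamper-evident SHA-256 fingerprint of a training dataset.

    Hashing a canonical serialisation of the records means any silent
    insertion, deletion, or edit -- the hallmark of data poisoning --
    changes the digest and can be caught before retraining.
    """
    canonical = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

clean = [{"txn": 1, "label": "legit"}, {"txn": 2, "label": "fraud"}]
baseline = fingerprint_dataset(clean)

# A poisoned copy flips one label; the fingerprint no longer matches.
poisoned = [{"txn": 1, "label": "legit"}, {"txn": 2, "label": "legit"}]
print(fingerprint_dataset(poisoned) == baseline)  # -> False
```

In practice the baseline digest would be stored and verified out-of-band (e.g. in a signed manifest), so an attacker who can alter the data cannot also alter the fingerprint.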

Firewalls, intrusion detection systems, and secure password protocols further strengthen resilience, as does disciplined software maintenance. Keeping AI frameworks, applications, and supporting infrastructure current with the latest security patches significantly reduces the probability of exploitation, and advanced antivirus and endpoint protection tools help guard against AI-driven malware and other sophisticated threats.

Adversarial training is one of the more advanced ways to harden AI models: training them against simulated attacks and unpredictable inputs increases their robustness against adversarial manipulation in real-world environments. Alongside technological safeguards, employee awareness and preparedness are crucial.

Employees need to be taught to recognise AI-generated phishing attacks, avoid unsafe software downloads, and respond effectively to changing threats as they arise. AI experts can be consulted as part of the risk management process to keep training programmes up to date and aligned with the latest threat intelligence.

Another important practice is AI-specific vulnerability management: continuously identifying, assessing, and remediating security weaknesses within AI systems. Reducing the attack surface lowers the likelihood of breaches that exploit the complex architecture of AI deployments. Finally, even with robust defences, incidents can still occur, so organisations need clear plans for handling AI-related incidents.

A good AI incident response plan includes containment protocols, investigation procedures, communication strategies, and recovery efforts, so that damage is minimised and operations resume as soon as possible after an AI-driven cyber incident. Adopting these multilayered practices is critical for maintaining user trust, ensuring compliance, and defending against the sophisticated threats emerging in a landscape where AI is both a transformative force and a potential risk vector.

As artificial intelligence continues to reshape the technological landscape, all stakeholders must address the risks it brings. Business leaders, policymakers, and cybersecurity experts need to work in concert to develop governance frameworks that balance innovation with security, while cultivating a culture of continuous learning and vigilance among users to reduce the vulnerabilities that increasingly sophisticated AI-driven attacks can exploit.

Building resilient cyber defences will require investing in adaptive technologies that evolve with the threat, while maintaining ethical standards and transparency. Securing the benefits of AI ultimately depends on a forward-looking, integrated approach that embraces both technological advancement and rigorous risk management to protect digital ecosystems now and in the future.

India’s Cyber Scams Create International Turmoil

 


Government reports indicate that high-value cyber fraud cases in India increased more than fourfold in the financial year 2024, resulting in losses totalling more than $20 million. This near-quadrupling demonstrates the escalating threat cybercrime poses in one of the world's fastest-growing economies.

Rapid digitisation has brought hundreds of millions of financial transactions every day through mobile apps, UPI platforms, and online banking systems, making India an ideal target for sophisticated fraud networks. Experts warn that these scams are expanding in size and complexity at an alarming rate, outpacing traditional enforcement and security measures.

Even as the country embraces digital finance, vulnerabilities in its cyber infrastructure pose a growing threat, not only to domestic users but to the global cybersecurity community. 2024 marked a turning point, with an unprecedented surge in online scams fuelled by the rapid integration of artificial intelligence (AI) into criminal operations.

Technology-enabled fraud has reached alarming levels, eroding public confidence in digital systems and causing substantial financial damage across the country. Data from the Indian Cyber Crime Coordination Centre (I4C) paints a grim picture: in the first six months of the year alone, Indians lost more than ₹11,000 crore to cyber fraud.

A staggering number of cybercrime complaints are filed on the National Cyber Crime Reporting Portal every day, suggesting the problem is even larger than it first appears. These figures imply daily losses of approximately ₹60 crore, signalling the need for greater cybersecurity enforcement and public awareness. The integration of artificial intelligence has made these scams more sophisticated and harder to detect, making systemic safeguards for digital financial ecosystems all the more urgent.

India's digital economy, now among the largest in the world, is experiencing a troubling increase in cybercrime, with high-value fraud cases rising dramatically during fiscal year 2024. Official government data indicates that cyber fraud caused financial losses exceeding ₹177 crore (approximately $20.3 million), more than double the previous fiscal year.

Equally alarming is the sharp rise in significant fraud cases, those involving ₹1 lakh or more, which jumped from 6,699 in FY 2023 to 29,082 in FY 2024. With digital financial transactions now numbering in the millions each day, the figures point to growing vulnerabilities in India's digital space. The country's rapid transformation has been driven by affordable internet access, with mobile data costing as little as ₹11 (about $0.13) per gigabyte.

That affordability has helped build a $1 trillion mobile payments market dominated by platforms like Paytm, Google Pay, and the Walmart-backed PhonePe. The market's growth, however, has outpaced cyber literacy, leaving millions vulnerable to increasingly sophisticated fraud schemes. Cybercriminals now employ an array of advanced techniques, including AI tools and deepfake technologies, impersonating authorities, manipulating voice calls, and crafting deceptive messages designed to exploit unsuspecting individuals.

The gap between digital access and cybersecurity awareness continues to endanger individuals and financial institutions alike, raising concerns about consumer safety and long-term digital trust, and illustrating India's growing entanglement in global cybercrime. In February 2024, a federal court in Montana, United States, sentenced a 24-year-old man from Haryana to four years in prison for heading a sophisticated fraud operation that swindled over $1.2 million from elderly Americans, including a staggering $150,000 from a single victim.

The scam used deceptive pop-ups claiming to offer tech support, exploiting victims' trust to trick them into granting remote access to their computers. Once in control, the fraudsters manipulated victims into handing over large sums, which were collected through coordinated in-person pickups across the U.S. The case is indicative of a deeper trend in India's changing digital landscape.

India's technology sector was once regarded as a world-class provider of IT support services, but the industry is facing a profound shift driven by automation, job saturation, and economic pressures exacerbated by the COVID-19 pandemic. Out of these challenges a shadow cybercrime economy has emerged that mirrors the formal outsourcing industry in structure, technical sophistication, and international reach.

These illicit networks, using call centres, messaging platforms, and AI-powered scams, have turned India into a key node in the global cyber fraud chain, raising serious questions about regulation, accountability, and the socio-economic factors driving technology-enabled crime. The rise in the sophistication of online scams in 2024 has been attributed to the misuse of AI, which has dramatically expanded both their scale and their psychological impact.

Fraudsters now use AI-driven tools to create highly convincing content, from manipulated voices and cloned audio clips to realistic images and deepfake videos, designed to deceive and exploit unsuspecting victims. Particularly troubling is the rise of AI-assisted voice cloning, used to fabricate emergency scenarios and extort money by mimicking the voices of family members and close acquaintances.

These synthetic voices achieve remarkable accuracy, making emotional manipulation easier and more effective than ever before. In one high-profile case, scammers impersonated the voice of Sunil Mittal to mislead company executives into transferring money. Deepfake technology is also increasingly used to create fabricated videos of celebrities and business leaders.

Such AI tools, often free of charge, make it easy to spread fraudulent content online. Prominent personalities including Anant Ambani, Virat Kohli, and MS Dhoni were reportedly targeted, with deepfake videos circulated to promote a fake betting application, misinforming thousands of people and damaging public trust.

The growing accessibility and misuse of AI tools mark a dangerous shift in cybercrime tactics, as traditional scams evolve into emotionally manipulative, visually convincing operations. This wave of AI-generated deception is forcing law enforcement agencies and tech platforms to adapt quickly to counter the emerging threats.

Over the last few years, the face of online fraud has evolved radically. Cybercriminals no longer rely solely on poorly written messages or easily identifiable hoaxes; their techniques are now near perfect. Most malicious links circulating today are polished and technically sound, often embedded in well-designed websites with realistic login interfaces closely resembling legitimate platforms, complete with HTTPS encryption.
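A few of the surface signals defenders still look at can be sketched as a toy URL scorer. Everything here is illustrative: the TLD list and brand names are invented for the example, and real phishing detection relies on reputation feeds and machine learning rather than rules this simple.

```python
from urllib.parse import urlparse

SUSPICIOUS_TLDS = {".zip", ".top", ".xyz"}      # assumption: example list only
LOOKALIKE_BRANDS = ("paytm", "phonepe", "sbi")  # hypothetical brand strings

def suspicion_score(url: str) -> int:
    """Score a URL on crude phishing signals. Higher = more suspicious.

    Note that https:// contributes nothing to the score: as the article
    says, modern scam sites routinely carry valid HTTPS certificates.
    """
    parsed = urlparse(url)
    host = parsed.hostname or ""
    score = 0
    if any(host.endswith(tld) for tld in SUSPICIOUS_TLDS):
        score += 1                               # unusual top-level domain
    if host.count("-") >= 2 or host.count(".") >= 3:
        score += 1                               # convoluted hostname
    if any(brand in host and not host.endswith(brand + ".com")
           for brand in LOOKALIKE_BRANDS):
        score += 1                               # brand name in a lookalike domain
    if parsed.username is not None:
        score += 1                               # user@host obfuscation trick
    return score

print(suspicion_score("https://paytm-secure-login.verify-kyc.xyz/otp"))  # -> 3
print(suspicion_score("https://example.com/"))                           # -> 0
```

The point of the sketch is the padlock fallacy: a high score here comes entirely from the hostname, never from the presence or absence of encryption.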

In the past, these threats were relatively easy to identify; now they are insidious and difficult to detect even for digitally literate users. Exposure is astonishing: virtually every person with a mobile phone or internet access receives scam messages almost daily. Some users manage to avoid these sophisticated schemes, but many fall victim, particularly the elderly, those unfamiliar with digital technology, and those caught unaware.

The effects of such fraud are devastating, from the significant financial toll to emotional distress that can be long-lasting. Projections suggest revenue lost to cybercrime in India will rise 75% from 2023 levels by 2025. This alarming trend points to the need for systemic reform and collaborative intervention, starting with a fundamental change in mindset: cyber fraud is not an isolated issue affecting others, but a collective threat to every stakeholder in the digital ecosystem.

An effective response requires telecom operators, financial institutions, government agencies, social media platforms, and over-the-top (OTT) service providers to cooperate actively. Cybercrimes are becoming more sophisticated and more prevalent, and experts believe four key actions can dismantle the infrastructure that supports digital fraud.

First, a centralised fraud intelligence bureau should let all players in the ecosystem report and exchange real-time information about cybercriminals, enabling swift, collective responses. Second, each digital platform should develop and deploy tailored anti-fraud technologies and share these solutions across industries. Third, ongoing public awareness campaigns should educate users about digital hygiene and common fraud tactics.

Lastly, the regulatory framework needs to broaden to cover all digital service providers. Telecommunication companies face strict regulations, but OTT platforms remain almost entirely unregulated, creating loopholes for fraudsters. Through a strong combination of regulation, innovation, education, and cross-sectoral collaboration, Indian citizens can begin to reclaim the integrity of their digital landscape and protect themselves from escalating cybercrime.

In today's digitally accelerated, cyber-vulnerable India, a forward-looking and cohesive strategy against online fraud is imperative. Simply responding to incidents after they occur is no longer enough; a proactive approach is needed in which technology, policy, and public engagement work in tandem to build cyber resilience. That means building security into digital platforms, investing in continuous threat intelligence, and running targeted education campaigns to cultivate a national culture of cyber awareness.

Governments must accelerate the implementation of robust regulatory frameworks that hold all digital service providers accountable, regardless of size or sector, to prevent fraud and safeguard user data. Companies, for their part, must treat cybersecurity not as a compliance checkbox but as a business-critical pillar supported by dedicated infrastructure and real-time monitoring systems.

Anticipating the next moves in the cybercrime playbook will take collective will across industries, institutions, and individuals. To realise the promise of India's digital economy, cybersecurity must be transformed from a reactive measure into a national imperative, ensuring trust is maintained, innovation is protected, and the future of a truly digital Bharat is secured.

Call Merging Scams and Financial Security Risks with Prevention Strategies

 


Fraudsters continually develop new tactics to deceive their targets. One of the latest is the call merging scam, in which scammers trick victims into merging calls in order to gain unauthorised access to their accounts. In many cases, victims suffer substantial financial losses as a result.

Indian authorities have issued a warning about this new scam, in which individuals are manipulated into merging calls and, in the process, unknowingly reveal One-Time Passwords (OTPs). With this deceptive tactic, fraudsters can access victims' financial accounts and carry out fraudulent transactions.

The National Payments Corporation of India (NPCI), which operates the Unified Payments Interface (UPI), has expressed concern about this emerging threat. As a precaution, the official UPI account on X cautioned users about the risks of call merging scams, stressing that they pose a serious threat.

The advisory urged individuals to remain vigilant: "Fraudsters are using call merging tactics to deceive users into giving out OTPs." In its role overseeing UPI, NPCI has expressed significant concern about the growing wave of cyber fraud.

Social engineering scammers aim to deceive unsuspecting victims into disclosing sensitive banking credentials. In most cases, the scam begins with the fraudster contacting the target and falsely claiming to have obtained their phone number through a mutual acquaintance.

The fraudster then persuades the target to merge the call with an incoming call from a different number. In reality, this second call connects the victim to a genuine OTP verification call from their bank. Unaware of the deception, the victim unwittingly gives the scammer access to their banking details.

The scheme relies on social engineering to manipulate individuals into unknowingly divulging their One-Time Password (OTP), a key security safeguard for financial transactions.

Victims commonly receive a phone call from a seemingly trusted source offering a lucrative opportunity, or a message from a trusted contact recommending what appears to be a beneficial scheme.

Engaging with such communications without due diligence poses a significant security risk, given the growing prevalence of this fraud. Financial institutions and regulatory agencies are therefore cautioning individuals to treat unexpected phone calls with suspicion, and to refrain from sharing OTPs or merging calls without first verifying the caller's identity.

As these frauds become increasingly common, UPI has issued an urgent advisory warning users about the dangers of call merging scams. To avoid falling victim to such tactics, individuals need to stay vigilant and take strict security measures to protect their financial information.

The call merging scam is a deceptive technique used by fraudsters to trick people into unknowingly divulging sensitive information such as OTPs. By exploiting it, scammers gain unauthorised access to victims' bank accounts and other secured platforms to commit financial fraud.

Modus Operandi of the Scam


1. The fraudster places a deceptive call, falsely claiming to have obtained the recipient's phone number from an acquaintance or a reliable source.

2. The victim is persuaded to merge the call with another incoming call, with the second caller presented as a friend or a bank representative, depending on the scam.

3. Without the victim's knowledge, the merged call connects them to an automated OTP verification call from their bank, which triggers an OTP to their mobile.

4. The victim is then manipulated into believing that sharing the OTP is required for authentication, giving the scammer access to their financial accounts.

Preventive Measures to Safeguard Against Fraud 


Decline call merging requests from unknown callers right away. Verify the caller's identity: if someone claims to represent a financial institution, hang up and contact the bank directly through its official customer support number. Recognise fraudulent requests: banks never ask customers for an OTP over the phone, so any such request should be treated as a sign of potential fraud and reported promptly. If an unsolicited OTP arrives or fraudulent activity is suspected, notify the bank immediately and call 1930 (the national cybercrime helpline) so the incident can be investigated.

Given the increasing number of such scams, it is imperative to remain vigilant and adopt strict security practices to avoid financial loss. Many viral videos and discussions on social media emphasise a single aspect of these fraudulent transactions: receiving an OTP via a merged call rather than as a text message.

They often overlook an important point, however: an OTP alone is not sufficient to authorise a transaction. A fraudster must first obtain essential banking details, such as a card number, card verification value (CVV), or UPI Personal Identification Number (PIN), before an OTP can serve as the final step in an unauthorised transaction.

To mitigate such risks, the Reserve Bank of India (RBI) has mandated strict security protocols. Since 2021, financial institutions and payment service providers have been required to implement multi-factor authentication (MFA) for electronic transactions, so that a user's identity is verified by more than one factor. This protection combines measures such as OTP verification, mobile device authentication, biometric identification, and hardware security tokens, which together provide a strong defence against unauthorised access.

Digital transactions are typically protected by multiple layers of security, each drawing on a different type of authentication factor: possession factors, meaning something the user has, such as a registered device, card number, or UPI ID; knowledge factors, meaning something the user knows, such as a password, CVV, or PIN; and dynamic factors, such as an OTP, biometric authentication, or device authentication.

Most online banking and card transactions require all three layers for the highest level of security. However, a UPI transaction of up to ₹1 lakh can be authorised with only a UPI ID and PIN, without an OTP. This multi-layered approach greatly reduces the risk of financial fraud and strengthens the security of digital payments.
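The layered model described above can be sketched as a small decision function. This is purely illustrative, not actual NPCI or bank logic; the `UPI_PIN_ONLY_LIMIT` constant and the factor names are assumptions made for the sketch.

```python
# Illustrative sketch of factor-based authorization, assuming a
# hypothetical Rs 1 lakh threshold below which UPI needs no dynamic factor.
UPI_PIN_ONLY_LIMIT = 100_000  # Rs 1 lakh, assumed for this sketch

def required_factors(channel: str, amount: int) -> set[str]:
    """Return the authentication factors needed to authorize a payment."""
    if channel == "upi" and amount <= UPI_PIN_ONLY_LIMIT:
        # UPI pairs possession (registered device / UPI ID) with knowledge (PIN)
        return {"possession", "knowledge"}
    # Card and higher-value flows add a dynamic factor such as an OTP
    return {"possession", "knowledge", "dynamic"}

def is_authorized(channel: str, amount: int, presented: set[str]) -> bool:
    # Authorize only if every required factor was presented
    return required_factors(channel, amount) <= presented

print(is_authorized("upi", 50_000, {"possession", "knowledge"}))   # True
print(is_authorized("card", 50_000, {"possession", "knowledge"}))  # False
```

The point the sketch makes is the one in the text: a stolen OTP (the dynamic factor) is useless on its own, because the possession and knowledge factors must also be satisfied.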

The Future of Payment Authentication: How Biometrics Are Revolutionizing Transactions

 



As business operates at an unprecedented pace, consumers are demanding quick, simple, and secure payment options. The future of payment authentication is here — and it’s centered around biometrics. Biometric payment companies are set to join established players in the credit card industry, revolutionizing the payment process. Biometric technology not only offers advanced security but also enables seamless, rapid transactions.

In today’s world, technologies like voice recognition and fingerprint sensors are sometimes seen as intrusive additions to the payment ecosystem. In the broader context of fintech’s evolution, however, fingerprint payments represent a significant advance in payment processing.

Just 70 years ago, plastic credit and debit cards didn’t exist. Their introduction drastically transformed retail shopping behaviour. The earliest credit cards lacked a magnetic stripe or EMV chip; information was captured from the embossed numbers using carbon copy paper.

In 1950, Frank McNamara, after repeatedly forgetting his wallet, introduced the first "modern" credit card—the Diners Club Card. McNamara paid off his balance monthly, and at that time, he was one of only three people with a credit card. Security wasn’t a major concern, as credit card fraud wasn’t prevalent. Today, according to the Consumer Financial Protection Bureau’s 2023 credit card report, over 190 million adults in the U.S. own a credit card.

Biometric payment systems identify users and authorize fund deductions based on physical characteristics. Fingerprint payments are a common form of biometric authentication. This typically involves two-factor authentication, where a finger scan replaces the card swipe, and the user enters their personal identification number (PIN) as usual.

Biometric technology verifies identity using biological traits such as facial recognition, fingerprints, or iris scans. These methods enhance two-step authentication, offering heightened security. Airports, hospitals, and law enforcement agencies have widely adopted this technology for identity verification.

Beyond security, biometrics are now integral to unlocking smartphones, laptops, and secure apps. During the authentication process, devices create a secure template of biometric data, such as a fingerprint, for future verification. This data is stored safely on the device, ensuring accurate and private access control.
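The enrollment-and-verification flow described above can be sketched in a few lines. Real systems rely on vendor-specific feature extraction and secure hardware; here a "template" is simply a list of feature measurements (a hypothetical simplification), and matching compares a fresh scan against the stored template within a tolerance.

```python
# Simplified sketch of on-device biometric enrollment and matching.
# The feature vectors and tolerance are illustrative assumptions; real
# biometric templates are derived by proprietary algorithms and stored
# in secure hardware, never as raw images.
import math

def enroll(features: list[float]) -> list[float]:
    # Store a template derived from the scan, not the scan itself
    return list(features)

def matches(template: list[float], scan: list[float], tol: float = 0.1) -> bool:
    # Accept if the distance between template and fresh scan is small
    return math.dist(template, scan) <= tol

template = enroll([0.12, 0.87, 0.44])
print(matches(template, [0.13, 0.86, 0.45]))  # near-identical scan: True
print(matches(template, [0.70, 0.20, 0.90]))  # different finger: False
```

Note the design point this illustrates: matching is approximate (a threshold comparison), which is why biometric templates cannot simply be hashed like passwords.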

By 2026, global digital payment transactions are expected to reach $10 trillion, significantly driven by contactless payments, according to Juniper Research. Mobile wallets like Google Pay and Apple Pay are gaining popularity worldwide, with 48% of businesses now accepting mobile wallet payments.

India exemplifies this shift with its Unified Payments Interface (UPI), processing over 8 billion transactions monthly as of 2023. This demonstrates the country’s full embrace of digital payment technologies.

The Role of Governments and Businesses in Cashless Economies

Globally, governments and businesses are collaborating to offer cashless payment options, promoting convenience and interoperability. Initially, biometric applications were limited to high-security areas and law enforcement. Technologies like DNA analysis and fingerprint scanning reduced uncertainties in criminal investigations and helped verify authorized individuals in sensitive environments.

These early applications proved biometrics' precision and security. However, the idea of using biometrics for consumer payments was once limited to futuristic visions due to high costs and slow data processing capabilities.

Technological advancements and improved hardware have transformed the biometrics landscape. Today, biometrics are integrated into everyday devices like smartphones, making the technology more consumer-centric and accessible.

Privacy and Security Concerns

Despite its benefits, the rise of biometric payment systems has sparked privacy and security debates. Fingerprint scanning, traditionally linked to law enforcement, raises concerns about potential misuse of biometric data. Many fear that government agencies might gain unauthorized access to sensitive information.

Biometric payment providers, however, clarify that they do not store actual fingerprints. Instead, they capture precise measurements of a fingerprint's unique features and convert this into encrypted data for identity verification. This ensures that the original fingerprint isn't directly used in the verification process.

Yet, the security of biometric systems ultimately depends on robust databases and secure transaction mechanisms. Like any system handling sensitive data, protecting this information is paramount.

Biometric payment systems are redefining the future of financial transactions by offering unmatched security and convenience. As technology advances and adoption grows, addressing privacy concerns and ensuring data security will be critical for the widespread success of biometric authentication in the payment industry.

Banking Fraud: Jumped Deposit Scam Targets UPI Users


Users of the Unified Payments Interface (UPI) are being targeted by a recent cyber fraud known as the "jumped deposit scam." Scammers first bait victims by making a modest, unsolicited deposit into their bank accounts.

How does it operate? 

A scammer transfers a tiny sum to the victim's bank account via UPI, then immediately sends a request to withdraw a larger amount. Prompted by the sudden deposit, the victim opens their banking app to check the balance and enters their personal identification number (PIN), which inadvertently approves the pending withdrawal request. The fraudsters pocket the difference.

The Hindu reports, “Scammers exploit the recipient’s curiosity over an unsolicited deposit to access their funds.”

The Tamil Nadu Cyber Crime Police had earlier warned the public to exercise caution with such unexpected deposits, noting that the National Cyber Crime Reporting Portal had received numerous complaints about the scam.

What to do?

UPI customers can guard against jumped deposit scams in two ways.

Since withdrawal requests expire after a certain amount of time, wait 15 to 30 minutes after noticing an unexpected credit before checking your balance. If you cannot wait, deliberately enter an incorrect PIN, which rejects the pending transaction.

Additionally, notify your bank of any unexpected or sudden credit in your account to confirm its legitimacy. Scam victims should file a complaint with the cybercrime portal or at the nearest police station.

Banking attacks on the rise

Amid the rise in cybercrime, the State Bank of India recently highlighted several scams, including digital arrests and fake customs claims. The bank advised its customers to report suspicious calls and verify any unexpected financial requests.

It explained scams like the "digital arrest," in which fraudsters pose as law enforcement officers and threaten to interrogate victims over fictitious criminal conduct. Other scammers may offer large sums of money for simple tasks, then request a security deposit.

Cybercriminals Target UPI Payments: How to Stay Safe

 



The Unified Payments Interface (UPI) has transformed the infrastructure of digital transactions in India, providing a fast, easy, and secure method for payments. However, its rapid adoption has also attracted the attention of cybercriminals. This article delves into the tactics used by fraudsters and the measures users can take to protect themselves.

Cybercriminals employ a variety of deceptive methods to exploit UPI users. Vishal Salvi, CEO of Quick Heal Technologies Ltd., explains that these criminals often impersonate familiar contacts or trusted services to trick users into making quick, unverified money transfers. One prevalent technique is phishing, where fraudsters send emails that appear to be from legitimate banks or UPI service providers, prompting users to reveal sensitive information.

Malware and spyware are also common tools in the cybercriminal's arsenal. These malicious programs can infiltrate devices to steal personal information, including UPI details, or even take control of the device to initiate unauthorised transactions. Social engineering tactics, where fraudsters pose as customer service representatives, are another method. They manipulate users into sharing confidential information by pretending to resolve a payment issue.

Protecting oneself from UPI payment fraud is crucial and can be achieved through vigilance and caution. Financial institutions have implemented multi-factor authentication (MFA) and financial literacy programs to enhance security, but users must also take proactive steps. It is essential never to share your UPI PIN or OTP with anyone. Always verify the authenticity of transactions and use official apps or websites. Ensuring a secure connection (https) before entering any information is another critical step. Regularly updating your app and enabling transaction alerts can help monitor for any suspicious activity.
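One of the checks above, confirming a secure connection before entering any information, can be automated in a few lines. This is a minimal sketch: it inspects only the URL scheme, whereas a real client should also validate the certificate and the domain. The `bank.example.com` addresses are placeholders.

```python
# Minimal sketch: refuse to submit credentials unless the target URL
# uses HTTPS. Checks only the scheme and host, not the certificate.
from urllib.parse import urlparse

def is_secure_url(url: str) -> bool:
    parsed = urlparse(url)
    return parsed.scheme == "https" and bool(parsed.netloc)

print(is_secure_url("https://bank.example.com/login"))  # True
print(is_secure_url("http://bank.example.com/login"))   # False
```

A scheme check like this is a floor, not a ceiling; phishing sites increasingly serve valid HTTPS, so verifying that the domain itself belongs to the bank remains essential.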

In the event of a fraudulent transaction, immediate action is vital. The moment you suspect fraud, report the incident to your bank and the UPI platform. Blocking your account can prevent further unauthorised transactions. Filing a complaint with the bank's ombudsman, including all relevant details, and reporting the fraud to local cybercrime authorities are crucial steps. Quick and decisive actions can significantly increase the chances of recovering lost funds.

While UPI has revolutionised digital payments, users must remain vigilant against cyber threats. By following these safety measures and responding quickly to any signs of fraud, users can enjoy the benefits of UPI while minimising the risks.


Nationwide Banking Crisis: Servers Down, UPI Transactions in Jeopardy

 


Several bank servers were reportedly down on Tuesday, affecting Unified Payments Interface (UPI) transactions across the country. Users took to X (formerly Twitter) to report that they were unable to complete UPI transactions. The National Payments Corporation of India (NPCI) confirmed in a tweet that it had suffered an outage, leading to UPI transaction failures at some banks.

Downdetector, a website monitoring service, received reports that UPI services were not working for Kotak Mahindra Bank, HDFC Bank, State Bank of India (SBI), and others, while users flooded social media with details of the disruptions. The outage affected UPI payments made through several banks, with users reporting server problems at HDFC Bank, Bank of Baroda, SBI, and Kotak Mahindra Bank, among others.

Several users also reported difficulty with the "Fund Transfer" facility of their respective banks owing to the technical problems. Meanwhile, UPI transactions hit a new high in January, with a value of Rs 18.41 trillion, up marginally by 1 per cent from Rs 18.23 trillion in December. Transaction volume rose 1.5 per cent to 12.20 billion, from 12.02 billion in December.

In November, there were 11.4 billion transactions worth Rs 17.4 trillion. NPCI data shows that January's transaction volume was 52 per cent higher, and its value 42 per cent higher, than in the same month of the previous financial year.

Earlier, in November 2023, it was reported that the government was considering imposing a minimum time delay on the first transaction between two individuals for payments exceeding an adjustable threshold.

The Indian Express reported, citing government sources, that the proposed plan would impose a four-hour timeframe on the first digital payment between two users, particularly for transactions exceeding Rs 2,000.