
All the recent news you need to know

AI Deepfake Scam Changes Aadhaar Mobile Without OTP


AI-enabled fraudsters are now using deepfake tools to change Aadhaar details, such as the mobile number linked to an account, without victims noticing, enabling identity theft and loan fraud.

In Ahmedabad, cybercrime investigators uncovered a racket that quietly replaced victims’ Aadhaar-linked mobile numbers and then used those new numbers to intercept OTPs and take control of digital services, including DigiLocker and banking apps. The gang reportedly collected Aadhaar numbers, photographs and other personal data from leaks and social media, then used AI software to turn still photos into short “blink” videos that mimic liveness checks and fool verification systems. 

Once the fraudsters changed the registered mobile number, they could receive OTPs and update KYC details, effectively hijacking victims’ digital identities and applying for loans or accessing accounts in their names. Police say the operation was organised with distinct roles: some members sourced data and photos, others used Aadhaar update kits—often through Common Service Centres (CSCs)—to make unauthorised changes, and specialists created deepfake clips to pass biometric checks.

Authorities arrested several suspects after a businessman reported that his Aadhaar-linked number was altered without any OTP or call alerts, revealing how smoothly the criminals combined social engineering, physical update kits, and AI manipulation to bypass safeguards. Reports indicate the attackers exploited weaknesses in offline update workflows and gaps in liveness-detection systems that still accept AI-generated motion as genuine.

Safety recommendations 

To protect yourself, regularly verify the mobile number linked to your Aadhaar and lock your biometrics using official mAadhaar or UIDAI services when not in use. Monitor DigiLocker and bank accounts for unexpected changes and set up transaction alerts with your bank; if you spot unusual activity, report it immediately to local cybercrime units or UIDAI’s helplines. Avoid uploading Aadhaar photos or documents on unfamiliar platforms and be cautious about sharing personal information on social media, which criminals can reuse to create realistic deepfakes. 

Longer-term fixes will require stricter controls around Aadhaar update kits at CSCs, better audit trails for demographic changes, and improved liveness-detection algorithms that can distinguish AI-generated clips from real facial movement. Experts and regulators also urge faster data-breach notification rules and tighter controls on access to identity databases so criminals cannot easily assemble the building blocks for such attacks. Until these systemic changes arrive, vigilance, biometric locks, and immediate reporting remain the best defenses for citizens.

AI Chatbot Training Raises Growing Privacy and Data Security Concerns


Most conversations with AI chatbots carry more behind a simple reply than users realize. While providing answers, some firms quietly collect those exchanges to refine their machine learning models. Personal thoughts, job-related details, or private topics can slip into the data pools shaping tomorrow's algorithms. Experts studying digital privacy point out that people rarely notice how freely they share in routine chatbot conversations, and that a second purpose often sits beneath what feels like casual back-and-forth. Most chatbots rely on what experts call a large language model.

These models grow more capable through exposure to massive volumes of text - pulled from websites, online discussions, video transcripts, published works, and similar open resources. That exposure shapes their ability to spot patterns, suggest fitting answers, and produce dialogue that resembles natural speech. As their training material expands, so does their skill at handling complex questions and producing thorough outputs. Wider input often means smoother interactions.

Public data is not the only thing that fills these models, however. Input from people using the apps now supplies just as much raw material to the tech firms building artificial intelligence. Each message entered into a conversational program may later be saved, studied, and applied to sharpen how future versions respond. That process often runs by default, stopping only if someone actively adjusts their preferences or opts out when given the chance. Worries about digital privacy keep rising as a result.

Talking to artificial intelligence systems often means sharing intimate details - medical issues, money problems, mental health, job conflicts, legal questions, or relationship secrets. Even though firms say the data is stripped of identifying information before it is used for training, skeptics point out that users must rely on assurances they cannot personally verify.
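As a rough illustration of why those assurances are hard to verify from the outside, the sketch below applies the kind of simple pattern-based redaction often used as a first pass over text. The patterns, example message, and clinic name are hypothetical, and the point is that contextual clues slip through even when obvious identifiers are masked.

import re

# Hypothetical first-pass redaction: obvious identifiers are masked,
# but context that can still identify someone is left untouched.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

message = ("I'm the only night-shift nurse at Riverside Clinic; "
           "email me at jane.doe@example.com about my diagnosis.")
print(redact(message))
# The email is masked, but "only night-shift nurse at Riverside Clinic"
# remains and could identify the writer on its own.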

Data considered private or anonymous today might not keep that status. Experts who study system safety often point out that new tools or pattern-matching techniques could link disguised inputs back to real people down the line. Conversations about personal topics stored inside artificial intelligence platforms can therefore pose hidden exposure risks years after they happen. The same concern extends to the workplace, where most jobs now involve some form of digital tool.

As staff turn to AI assistants for tasks like interpreting files, generating scripts, organizing data tables, composing summaries, or solving tech glitches, risks grow quietly. Information meant to stay inside - such as sensitive project notes, client histories, budget figures, unique program logic, compliance paperwork, or strategic plans - can slip out without warning. When typed into an assistant interface, those fragments might linger in remote servers, later shaping how the system responds to others. Hidden patterns emerge where private inputs feed public outputs. 

One concern among privacy experts involves possible legal risks for firms in tightly controlled sectors. When companies send sensitive details - like internal strategies or customer records - to artificial intelligence tools without caution, trouble might follow. Problems may emerge later, such as failing to meet confidentiality duties or drawing attention from oversight authorities. These exposures stem not from malice but from routine actions taken too quickly. 

Because reliance on AI helpers keeps rising, people and companies must reconsider what details they hand over to chatbots. Speedy answers tend to push aside careful thinking, particularly when automated aids respond quickly with helpful outcomes. Still, specialists insist grasping how these learning models are built matters greatly - especially for shielding private data and corporate secrets amid expanding artificial intelligence use.

22-Year-Old Developer Reverse Engineers Claude Mythos Architecture, Shocking the Tech Industry

Earlier this year, AI giant Anthropic unveiled a powerful new model called Claude Mythos. It caused a storm in Silicon Valley and across the tech industry: the general-purpose model could find software bugs that no human had ever discovered.

About Claude Mythos


Anthropic did not launch Mythos to the world at large, however. It offered the model only to cybersecurity experts at big organizations that build or operate critical software infrastructure, asking them to find and patch flaws before Anthropic released it commercially for public use.

Within just two weeks, however, a 22-year-old developer named Kye Gomez inferred the core design choices that make Claude Mythos so advanced and published OpenMythos, an open project that attempts to anticipate Anthropic's breakthrough. Gomez's code sent shockwaves through the AI and tech research community.

If genuine, the incident has serious implications. If a self-taught developer can reverse engineer the core innovation of a billion-dollar AI firm in just a few days, what could threat actors with malicious intent do? It would also mean the debate over keeping AI architectures proprietary fades away, because such secrets cannot be kept for long.

About OpenMythos


OpenMythos allows developers to run and train effective variants of these models on laptops, which also raises questions about long-term dependence on huge data centers that strain the environment and the communities around them.

Boon or curse?


For now, most organizations cannot obtain the AI secrets that only big tech companies such as OpenAI, Anthropic, or Google control.

But if users and small teams across the world can reverse engineer the designs of the biggest AI companies, it will be difficult to maintain a safe technology order. Advanced capabilities will sprout everywhere and become hard to contain.

As for the developer himself, Gomez is not your typical ML engineer. He started coding as a kid, left school early, never attended college, and built his reputation through his code.

Why OpenMythos


OpenMythos is built on Gomez's hypothesis that Claude Mythos uses a distinctive large language model (LLM) design that has been under development since 2022 and proved reliable at training scale at the start of this year. So what makes the design different?

Instead of stacking more neural network layers to give the model depth, the approach reportedly loops data repeatedly through smaller blocks, so the model builds up effective depth over successive passes.
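A minimal sketch of that idea, assuming the looping amounts to reusing one small weight-tied block for several passes instead of stacking many distinct layers; the layer sizes and loop count below are arbitrary illustrations, not details of Claude Mythos or OpenMythos.

import torch
import torch.nn as nn

class LoopedBlock(nn.Module):
    """One small transformer-style block reused for several passes, so
    effective depth comes from iteration rather than stacked layers."""

    def __init__(self, dim: int = 256, heads: int = 4, loops: int = 8):
        super().__init__()
        self.loops = loops
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The same weights are applied on every pass; each iteration refines
        # the representation the way an extra layer would in a deeper model.
        for _ in range(self.loops):
            attn_out, _ = self.attn(x, x, x)
            x = self.norm1(x + attn_out)
            x = self.norm2(x + self.ff(x))
        return x

model = LoopedBlock()
tokens = torch.randn(2, 16, 256)  # (batch, sequence, embedding)
print(model(tokens).shape)        # torch.Size([2, 16, 256])

Weight-tied recurrence of this sort trades parameter count for repeated computation, which is one plausible reason variants could run and train on a laptop.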

Workplace Apps May Be Selling Employee Data Without Consent, Study Warns


A growing number of workplace applications are collecting vast amounts of employee data and, in many cases, sharing or selling that information to third-party companies without workers’ knowledge or permission, according to a recent analysis by privacy-focused tech company Incogni.

The company, which specializes in helping users locate and remove personal information from online databases, examined several employer-provided tools and widely used workplace communication platforms. The findings revealed how deeply integrated data collection has become in modern work environments, raising fresh concerns about employee privacy and cybersecurity.

“Collectively, these apps account for over 12.5 billion downloads on Google Play alone,” the Incogni post on the findings said. “On average, workplace apps collect around 19 data points and share approximately 2 data types [per user]. The three Google and Microsoft apps (Gmail, Google Meet, and Microsoft Teams) cluster at the top of the collection spectrum, each gathering 21–26 data types.”

The report highlighted that common communication platforms such as Gmail, Zoom, and Microsoft Teams often gather extensive user information. However, unlike consumer-focused platforms that sometimes provide opt-out settings, many workplace-mandated tools do not offer employees the ability to refuse data collection.

According to Incogni, productivity tracking and monitoring applications are especially aggressive in sharing information with outside organizations. Beyond standard details such as email addresses, location data, contacts, and app activity, some applications may also collect sensitive financial or health-related information.

The report identified Notion as one of the most data-sharing-intensive platforms reviewed. Using the app as an example, Incogni stated that it “shares the most data with third parties, distributing 8 distinct data types to third parties—including email addresses, names, user IDs, device or other IDs, and app interactions.”

Privacy experts warn that this growing exchange of employee data creates significant risks. Once personal information is transferred to multiple external entities, workers may lose visibility and control over how their data is being used. In addition, broader distribution increases exposure to cyberattacks and data breaches, incidents that platforms like Slack and Zoom have previously experienced.

“People tend to think of workplace apps as safe tools, but they don’t exist in isolation,” Incogni CEO Darius Belejevas told enterprise technology publication No Jitter. “A lot of them are part of much larger data ecosystems. Once information is collected, especially if it’s shared with third parties, it can travel much further than users expect.”

Experts suggest employees can lower some of these risks by limiting personal activity on workplace communication platforms and avoiding the use of personal devices for professional work whenever possible.

At the same time, businesses are being encouraged to prioritize stricter privacy protections when selecting workplace software. Organizations may benefit from requiring vendors to reduce unnecessary data collection and restrict third-party sharing practices before adopting enterprise tools.

“Workplace applications that access and share employee information can pose significant security and privacy risks for organizations,” Sarah McBride told No Jitter. “These risks arise from the sensitive nature of the data involved, the potential for misuse, and vulnerabilities in the applications themselves.”

India’s Cybersecurity Workforce Struggles to Keep Pace as AI and Cloud Systems Expand

India’s fast-growing digital economy is creating an urgent demand for cybersecurity professionals, but companies across the country are finding it increasingly difficult to hire people with the technical expertise required to secure modern systems.

A new study released by the Data Security Council of India and SANS Institute found that businesses are facing a serious shortage of skilled cybersecurity workers as technologies such as artificial intelligence, cloud computing, and API-driven infrastructure become more deeply integrated into daily operations.

According to the Indian Cyber Security Skilling Landscape Report 2025–26, nearly 73 per cent of enterprises and 68 per cent of service providers said there is a limited supply of qualified cybersecurity professionals in the country. The report suggests that organisations are struggling to build teams capable of handling increasingly advanced cyber risks at a time when companies are rapidly digitising services, storing more information online, and adopting AI-powered tools.

The hiring process itself is also becoming slower. Around 84 per cent of organisations surveyed said cybersecurity positions often remain vacant for one to six months before suitable candidates are found. This delay reflects a growing mismatch between industry expectations and the skills available in the job market.

Researchers noted that many applicants entering the cybersecurity workforce lack practical exposure to real-world security environments. Around 63 per cent of enterprises and 59 per cent of service providers said candidates often do not possess sufficient hands-on technical experience. Employers are no longer only looking for basic security knowledge. Companies increasingly require professionals who understand multiple areas at once, including cloud infrastructure, application security, digital identity systems, and access management technologies. Nearly 58 per cent of enterprises and 60 per cent of providers admitted they are struggling to find candidates with this type of cross-functional expertise.

The report connects this shortage to the changing structure of enterprise technology systems. Many organisations are moving away from traditional on-premise setups and shifting toward cloud-native environments, interconnected APIs, and AI-supported operations. As businesses automate more routine tasks, demand is gradually moving away from entry-level operational positions and toward specialised cybersecurity roles that require analytical thinking, threat detection capabilities, and advanced technical decision-making.

Artificial intelligence is now becoming one of the largest drivers of cybersecurity hiring demand. Around 83 per cent of organisations surveyed described AI and generative AI security skills as essential for future operations, while 78 per cent reported strong demand for AI security engineers. The findings also show that nearly 62 per cent of enterprises are already running active AI or generative AI projects, which experts say can create additional security risks if systems are not properly monitored and protected.

As companies deploy AI systems, the attack surface for cybercriminals also expands. Security teams are now expected to defend AI models, protect sensitive datasets, monitor automated systems for manipulation, and secure APIs connecting multiple digital services. Industry experts have repeatedly warned that many organisations are adopting AI tools faster than they are building security frameworks around them.

Some cybersecurity positions remain especially difficult to fill. The report found that almost half of service providers and nearly 40 per cent of enterprises are struggling to recruit security architects, professionals responsible for designing secure digital infrastructure and long-term defence strategies. Demand is also increasing for specialists in operational technology and industrial control system security, commonly known as OT/ICS security. These professionals help protect critical infrastructure such as manufacturing facilities, power systems, transportation networks, and industrial operations from cyberattacks.

At the same time, companies are facing growing retention problems. Around 70 per cent of service providers and 42 per cent of enterprises said employees are frequently leaving for competitors offering better salaries and career opportunities. Limited access to advanced training and upskilling programs is also contributing to workforce attrition across the sector.

The findings point to a larger issue facing the cybersecurity industry globally: technology is evolving faster than workforce development. Experts believe companies, educational institutions, and training organisations may need to work more closely together to create industry-focused learning pathways that prepare professionals for modern cyber threats instead of relying heavily on theoretical instruction alone.

With India continuing to expand digital public infrastructure, cloud adoption, fintech services, AI development, and connected industrial systems, cybersecurity professionals are expected to play a central role in protecting sensitive information, maintaining operational stability, and preserving trust in digital platforms.

AI Polling Reshapes Political Research as Firms Turn Conversations Into Data


Artificial intelligence is rapidly transforming the world of political opinion polling, replacing time-consuming human-led interviews with automated conversational systems capable of analysing public sentiment at scale.

"When you hear the word 'politician', what is the first image or emotion that comes to mind?"

The question is asked not by a human researcher, but by an AI-powered voice assistant. While a respondent shares his views over the phone, multiple AI systems simultaneously analyse the conversation. One verifies whether the person is answering the question correctly, another evaluates the depth of the response, while a third checks for possible fraud or bot-like behaviour.
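The parallel-analysis idea can be pictured with a small sketch. The function names and scoring heuristics below are hypothetical stand-ins for the three systems described above, not Naratis's actual implementation; a real deployment would presumably call dedicated models rather than word-overlap rules.

from concurrent.futures import ThreadPoolExecutor

def checks_relevance(question, answer):
    # Rough proxy: does the answer share vocabulary with the question?
    q_words = set(question.lower().split())
    a_words = set(answer.lower().split())
    overlap = len(q_words & a_words) / max(len(q_words), 1)
    return {"check": "relevance", "score": round(overlap, 2)}

def scores_depth(answer):
    # Rough proxy: longer, more varied answers count as "deeper".
    words = answer.split()
    variety = len(set(words)) / max(len(words), 1)
    return {"check": "depth", "score": round(min(1.0, len(words) / 50 * variety), 2)}

def flags_bot_like(answer):
    # Rough proxy: highly repetitive phrasing is treated as suspicious.
    words = answer.lower().split()
    repetition = 1 - len(set(words)) / max(len(words), 1)
    return {"check": "fraud", "suspicious": repetition > 0.5}

def analyse_turn(question, answer):
    # Run all three checks concurrently on a single interview turn.
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [
            pool.submit(checks_relevance, question, answer),
            pool.submit(scores_depth, answer),
            pool.submit(flags_bot_like, answer),
        ]
        return [f.result() for f in futures]

question = "When you hear the word 'politician', what is the first image or emotion that comes to mind?"
answer = ("Mostly frustration, to be honest, though I feel local politicians "
          "try harder than the ones I see on national television.")
for result in analyse_turn(question, answer):
    print(result)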

The technology is being developed by Naratis, a French start-up focused on bringing artificial intelligence into political opinion research.

"The US has start-ups like Outset, Listen Labs and Hey Marvin that do AI polling like this in the commercial sphere. To my knowledge we're the first to do this for political opinion polling as well," says Pierre Fontaine, the 28-year-old engineer who founded the firm in 2025.

The emergence of AI-led polling marks a major shift for an industry traditionally dependent on manual interviews and extensive human analysis. In countries such as France, polling firms are increasingly exploring automation to reduce costs and speed up research processes.

Naratis specifically targets qualitative research, which is widely regarded as the most expensive and labour-intensive form of polling. Traditionally, these studies involve one-on-one interviews or focus groups that can take weeks to organise and analyse. By using conversational AI, the company says it can significantly reduce both time and cost.

Rather than relying on standard multiple-choice surveys, the platform encourages participants to engage in conversations with AI systems. "We don't ask people to tick boxes - they have a conversation with an AI," Fontaine explains. "That means we can explore not just what people think, but how they think - how they build their opinions, and even when those opinions change."

The company claims its approach is "10 times faster, 10 times cheaper and 90% as accurate as human polling".

According to the firm, projects that previously required weeks and substantial budgets can now be completed within a couple of days, with some responses collected in less than 24 hours. Fontaine describes this advantage as "parallelisation", where numerous AI agents conduct interviews simultaneously instead of relying on individual human researchers.
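As a rough sketch of that parallelisation, the snippet below runs many simulated interview sessions at once; the run_interview coroutine and its artificial delay are placeholders for whatever model and telephony calls a production system would make.

import asyncio
import random

async def run_interview(respondent_id, questions):
    # Placeholder for one AI-led interview; the sleep stands in for model
    # calls and the respondent's replies over the phone.
    answers = []
    for q in questions:
        await asyncio.sleep(random.uniform(0.1, 0.3))
        answers.append(f"respondent {respondent_id}: ...")
    return {"respondent": respondent_id, "answers": answers}

async def run_wave(n_respondents, questions):
    # Every interview in the wave runs at the same time instead of one
    # researcher working through respondents sequentially.
    tasks = [run_interview(i, questions) for i in range(n_respondents)]
    return await asyncio.gather(*tasks)

questions = ["When you hear the word 'politician', what comes to mind?"]
results = asyncio.run(run_wave(100, questions))
print(f"completed {len(results)} interviews")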

The rise of AI polling comes at a challenging time for the polling industry overall. Survey participation rates have dropped sharply over the decades, increasing operational costs and raising concerns about the reliability and representativeness of public opinion studies.

Supporters of AI polling argue that conversational systems may encourage respondents to be more honest, especially when discussing politically sensitive issues. Some researchers believe this could reduce social desirability bias, where people avoid expressing controversial opinions to human interviewers.

However, critics remain cautious about the growing dependence on AI in political research. Concerns include the possibility of AI systems generating inaccurate conclusions, producing overly generic responses, or creating misleading synthetic data.

Questions have also emerged around the use of "digital twins" and "synthetic people" — AI-generated profiles designed to imitate real human behaviour. While some market research firms use such tools for testing and simulations, many organisations remain reluctant to apply them in political polling.

At Ipsos, AI is already used extensively in consumer and behavioural research, including analysing user-recorded videos and studying social media activity. However, major firms continue to maintain human oversight in politically sensitive projects.

At OpinionWay, AI may assist with conducting interviews, but "we would never publish an opinion poll based on AI-generated data," says the firm's CEO, Bruno Jeanbart, citing concerns about trust.

Experts believe the future of polling will likely involve a hybrid approach combining AI efficiency with human supervision. While automation can accelerate research and lower costs, human researchers are still considered essential for validating findings, interpreting nuance and ensuring accountability.

Even AI advocates acknowledge the need for caution. "The goal is end-to-end automation, but today it would be unsafe and socially unacceptable to remove humans entirely," says Le Brun.

As economic pressures continue to push the polling industry toward faster and cheaper methods, companies like Naratis are betting that AI-driven conversations could redefine how public opinion is collected and understood. Whether this transformation strengthens trust in polling or deepens public scepticism may ultimately depend on how responsibly the technology is implemented and regulated.
