
Workplace Apps May Be Selling Employee Data Without Consent, Study Warns

 

A growing number of workplace applications are collecting vast amounts of employee data and, in many cases, sharing or selling that information to third-party companies without workers’ knowledge or permission, according to a recent analysis by privacy-focused tech company Incogni.

The company, which specializes in helping users locate and remove personal information from online databases, examined several employer-provided tools and widely used workplace communication platforms. The findings revealed how deeply integrated data collection has become in modern work environments, raising fresh concerns about employee privacy and cybersecurity.

“Collectively, these apps account for over 12.5 billion downloads on Google Play alone,” the Incogni post on the findings said. “On average, workplace apps collect around 19 data points and share approximately 2 data types [per user]. The three Google and Microsoft apps (Gmail, Google Meet, and Microsoft Teams) cluster at the top of the collection spectrum, each gathering 21–26 data types.”

The report highlighted that common communication platforms such as Gmail, Zoom, and Microsoft Teams often gather extensive user information. However, unlike consumer-focused platforms that sometimes provide opt-out settings, many workplace-mandated tools do not offer employees the ability to refuse data collection.

According to Incogni, productivity tracking and monitoring applications are especially aggressive in sharing information with outside organizations. Beyond standard details such as email addresses, location data, contacts, and app activity, some applications may also collect sensitive financial or health-related information.

The report identified Notion as one of the most data-sharing-intensive platforms reviewed. Using the app as an example, Incogni stated that it “shares the most data with third parties, distributing 8 distinct data types to third parties—including email addresses, names, user IDs, device or other IDs, and app interactions.”

Privacy experts warn that this growing exchange of employee data creates significant risks. Once personal information is transferred to multiple external entities, workers may lose visibility and control over how their data is being used. In addition, broader distribution increases exposure to cyberattacks and data breaches, incidents that platforms like Slack and Zoom have previously experienced.

“People tend to think of workplace apps as safe tools, but they don’t exist in isolation,” Incogni CEO Darius Belejevas told enterprise technology publication No Jitter. “A lot of them are part of much larger data ecosystems. Once information is collected, especially if it’s shared with third parties, it can travel much further than users expect.”

Experts suggest employees can lower some of these risks by limiting personal activity on workplace communication platforms and avoiding the use of personal devices for professional work whenever possible.

At the same time, businesses are being encouraged to prioritize stricter privacy protections when selecting workplace software. Organizations may benefit from requiring vendors to reduce unnecessary data collection and restrict third-party sharing practices before adopting enterprise tools.

“Workplace applications that access and share employee information can pose significant security and privacy risks for organizations,” Sarah McBride told No Jitter. “These risks arise from the sensitive nature of the data involved, the potential for misuse, and vulnerabilities in the applications themselves.”

India’s Cybersecurity Workforce Struggles to Keep Pace as AI and Cloud Systems Expand

 



India’s fast-growing digital economy is creating an urgent demand for cybersecurity professionals, but companies across the country are finding it increasingly difficult to hire people with the technical expertise required to secure modern systems.

A new study released by the Data Security Council of India and SANS Institute found that businesses are facing a serious shortage of skilled cybersecurity workers as technologies such as artificial intelligence, cloud computing, and API-driven infrastructure become more deeply integrated into daily operations.

According to the Indian Cyber Security Skilling Landscape Report 2025–26, nearly 73 per cent of enterprises and 68 per cent of service providers said there is a limited supply of qualified cybersecurity professionals in the country. The report suggests that organisations are struggling to build teams capable of handling increasingly advanced cyber risks at a time when companies are rapidly digitising services, storing more information online, and adopting AI-powered tools.

The hiring process itself is also becoming slower. Around 84 per cent of organisations surveyed said cybersecurity positions often remain vacant for one to six months before suitable candidates are found. This delay reflects a growing mismatch between industry expectations and the skills available in the job market.

Researchers noted that many applicants entering the cybersecurity workforce lack practical exposure to real-world security environments. Around 63 per cent of enterprises and 59 per cent of service providers said candidates often do not possess sufficient hands-on technical experience. Employers are no longer only looking for basic security knowledge. Companies increasingly require professionals who understand multiple areas at once, including cloud infrastructure, application security, digital identity systems, and access management technologies. Nearly 58 per cent of enterprises and 60 per cent of providers admitted they are struggling to find candidates with this type of cross-functional expertise.

The report connects this shortage to the changing structure of enterprise technology systems. Many organisations are moving away from traditional on-premise setups and shifting toward cloud-native environments, interconnected APIs, and AI-supported operations. As businesses automate more routine tasks, demand is gradually moving away from entry-level operational positions and toward specialised cybersecurity roles that require analytical thinking, threat detection capabilities, and advanced technical decision-making.

Artificial intelligence is now becoming one of the largest drivers of cybersecurity hiring demand. Around 83 per cent of organisations surveyed described AI and generative AI security skills as essential for future operations, while 78 per cent reported strong demand for AI security engineers. The findings also show that nearly 62 per cent of enterprises are already running active AI or generative AI projects, which experts say can create additional security risks if systems are not properly monitored and protected.

As companies deploy AI systems, the attack surface for cybercriminals also expands. Security teams are now expected to defend AI models, protect sensitive datasets, monitor automated systems for manipulation, and secure APIs connecting multiple digital services. Industry experts have repeatedly warned that many organisations are adopting AI tools faster than they are building security frameworks around them.

Some cybersecurity positions remain especially difficult to fill. The report found that almost half of service providers and nearly 40 per cent of enterprises are struggling to recruit security architects, professionals responsible for designing secure digital infrastructure and long-term defence strategies. Demand is also increasing for specialists in operational technology and industrial control system security, commonly known as OT/ICS security. These professionals help protect critical infrastructure such as manufacturing facilities, power systems, transportation networks, and industrial operations from cyberattacks.

At the same time, companies are facing growing retention problems. Around 70 per cent of service providers and 42 per cent of enterprises said employees are frequently leaving for competitors offering better salaries and career opportunities. Limited access to advanced training and upskilling programs is also contributing to workforce attrition across the sector.

The findings point to a larger issue facing the cybersecurity industry globally: technology is evolving faster than workforce development. Experts believe companies, educational institutions, and training organisations may need to work more closely together to create industry-focused learning pathways that prepare professionals for modern cyber threats instead of relying heavily on theoretical instruction alone.

With India continuing to expand digital public infrastructure, cloud adoption, fintech services, AI development, and connected industrial systems, cybersecurity professionals are expected to play a central role in protecting sensitive information, maintaining operational stability, and preserving trust in digital platforms.

AI Polling Reshapes Political Research as Firms Turn Conversations Into Data

 

Artificial intelligence is rapidly transforming the world of political opinion polling, replacing time-consuming human-led interviews with automated conversational systems capable of analysing public sentiment at scale.

"When you hear the word 'politician', what is the first image or emotion that comes to mind?"

The question is asked not by a human researcher, but by an AI-powered voice assistant. While a respondent shares their views over the phone, multiple AI systems simultaneously analyse the conversation. One verifies whether the person is actually answering the question asked, another evaluates the depth of the response, and a third checks for possible fraud or bot-like behaviour.
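How these parallel checks are implemented has not been made public. The sketch below is a purely hypothetical illustration of such a per-turn assessment: the field names, heuristics, and thresholds are assumptions, standing in for what would be separate model calls in a real system.

```python
from dataclasses import dataclass
import re

# Hypothetical sketch of a multi-check assessment of one interview turn.
# None of these names or heuristics come from the vendor; a production system
# would use separate model or classifier calls, not regexes and word counts.

@dataclass
class TurnAssessment:
    on_topic: bool         # does the answer engage with the question at all?
    depth_score: float     # 0.0 (one-word reply) to 1.0 (rich, reasoned answer)
    fraud_suspected: bool  # crude stand-in for bot/fraud detection

def assess_turn(question: str, answer: str) -> TurnAssessment:
    q_words = set(re.findall(r"[a-z']+", question.lower()))
    a_words = re.findall(r"[a-z']+", answer.lower())
    on_topic = bool(q_words & set(a_words)) or len(a_words) >= 8   # toy relevance check
    depth_score = min(len(a_words) / 50.0, 1.0)                    # toy depth proxy
    fraud_suspected = len(set(a_words)) < max(3, len(a_words) // 4)  # highly repetitive text
    return TurnAssessment(on_topic, depth_score, fraud_suspected)

print(assess_turn("When you hear the word 'politician', what comes to mind?",
                  "Mostly distrust, though my local mayor has been a pleasant exception."))
```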

The technology is being developed by Naratis, a French start-up focused on bringing artificial intelligence into political opinion research.

"The US has start-ups like Outset, Listen Labs and Hey Marvin that do AI polling like this in the commercial sphere. To my knowledge we're the first to do this for political opinion polling as well," says Pierre Fontaine, the 28-year-old engineer who founded the firm in 2025.

The emergence of AI-led polling marks a major shift for an industry traditionally dependent on manual interviews and extensive human analysis. In countries such as France, polling firms are increasingly exploring automation to reduce costs and speed up research processes.

Naratis specifically targets qualitative research, which is widely regarded as the most expensive and labour-intensive form of polling. Traditionally, these studies involve one-on-one interviews or focus groups that can take weeks to organise and analyse. By using conversational AI, the company says it can significantly reduce both time and cost.

Rather than relying on standard multiple-choice surveys, the platform encourages participants to engage in conversations with AI systems. "We don't ask people to tick boxes - they have a conversation with an AI," Fontaine explains. "That means we can explore not just what people think, but how they think - how they build their opinions, and even when those opinions change."

The company claims its approach is "10 times faster, 10 times cheaper and 90% as accurate as human polling".

According to the firm, projects that previously required weeks and substantial budgets can now be completed within a couple of days, with some responses collected in less than 24 hours. Fontaine describes this advantage as "parallelisation", where numerous AI agents conduct interviews simultaneously instead of relying on individual human researchers.
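As a rough illustration of what Fontaine calls "parallelisation", the hypothetical sketch below runs many interview agents concurrently with Python's asyncio; the conduct_interview stub is an assumption standing in for a full AI-led phone conversation.

```python
import asyncio

# Illustrative only: many interviews run concurrently instead of queuing
# behind individual human researchers.

async def conduct_interview(respondent_id: int) -> str:
    await asyncio.sleep(0.1)  # stands in for an entire AI-led conversation
    return f"transcript-{respondent_id}"

async def run_wave(n_respondents: int) -> list[str]:
    # Launch every interview at once and gather the transcripts as they finish.
    return await asyncio.gather(*(conduct_interview(i) for i in range(n_respondents)))

transcripts = asyncio.run(run_wave(500))
print(len(transcripts))
```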

The rise of AI polling comes at a challenging time for the polling industry overall. Survey participation rates have dropped sharply over the decades, increasing operational costs and raising concerns about the reliability and representativeness of public opinion studies.

Supporters of AI polling argue that conversational systems may encourage respondents to be more honest, especially when discussing politically sensitive issues. Some researchers believe this could reduce social desirability bias, where people avoid expressing controversial opinions to human interviewers.

However, critics remain cautious about the growing dependence on AI in political research. Concerns include the possibility of AI systems generating inaccurate conclusions, producing overly generic responses, or creating misleading synthetic data.

Questions have also emerged around the use of "digital twins" and "synthetic people" — AI-generated profiles designed to imitate real human behaviour. While some market research firms use such tools for testing and simulations, many organisations remain reluctant to apply them in political polling.

At Ipsos, AI is already used extensively in consumer and behavioural research, including analysing user-recorded videos and studying social media activity. However, major firms continue to maintain human oversight in politically sensitive projects.

At OpinionWay, AI may assist with conducting interviews, but "we would never publish an opinion poll based on AI-generated data," says CEO Bruno Jeanbart, citing concerns about trust.

Experts believe the future of polling will likely involve a hybrid approach combining AI efficiency with human supervision. While automation can accelerate research and lower costs, human researchers are still considered essential for validating findings, interpreting nuance and ensuring accountability.

Even AI advocates acknowledge the need for caution. "The goal is end-to-end automation, but today it would be unsafe and socially unacceptable to remove humans entirely," says Le Brun.

As economic pressures continue to push the polling industry toward faster and cheaper methods, companies like Naratis are betting that AI-driven conversations could redefine how public opinion is collected and understood. Whether this transformation strengthens trust in polling or deepens public scepticism may ultimately depend on how responsibly the technology is implemented and regulated.

Ransomware Attacks Reach All-Time High, Over 2.86 Billion Credentials Leaked

 

A recent analysis of last year's (2025) cybercrime data found that the number of ransomware victims rose sharply, up 45% on the previous year. Even more concerning, though, is attackers' heavy reliance on stolen credentials as their primary entry point. Whatever platforms you use and whatever accounts you are trying to protect, it is high time to start paying attention to password security.

State of Cybercrime report 2026


The report from KELA counted over 2.86 billion hacked credentials: passwords, session cookies, and other information that can be used to get around two-factor authentication (2FA). Surprisingly, authentication services and business cloud accounts made up over 30% of the data leaked in 2025.

The analysis also revealed that the infostealer malware harvesting these credentials is indifferent to which operating system you use: “infections on macOS devices increased from fewer than 1,000 cases in 2024 to more than 70,000 in 2025, a 7,000% increase,” the report said.

Expert advice


Security experts writing for Forbes have warned about the risks of infostealer malware countless times. Past coverage has included FBI operations aimed at shutting down cybercrime gangs, millions of Gmail passwords turning up in leaked infostealer logs, and much more. As the KELA analysis shows, the risk continues, and the damage is growing year after year.
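The report itself does not prescribe tooling, but one concrete, free way to act on that advice is to check whether a password already appears in known breach corpora. The sketch below uses the public Have I Been Pwned k-anonymity range API (this example's own choice, not something KELA recommends); only the first five characters of the password's SHA-1 hash ever leave the machine.

```python
import hashlib
import urllib.request

# Hedged example: query the Have I Been Pwned "range" endpoint, which returns
# hash suffixes and breach counts for all passwords sharing the submitted prefix.

def pwned_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)  # number of breaches this password appeared in
    return 0

if __name__ == "__main__":
    print(pwned_count("correct horse battery staple"))
```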

About infostealer


KELA defines infostealer malware as software “designed to exfiltrate sensitive data from compromised machines, including login credentials, authentication tokens, and other critical account information.” What is more troubling is the ubiquity of malware-as-a-service operations on the dark web: the barrier to entry has not merely been lowered, it has been kicked wide open for expert and amateur threat actors alike.

Data compromise in the billions

In 2025, KELA found around “3.9 million unique machines infected with infostealer malware globally, which collectively yielded 347.5 million compromised credentials.” The grand total comes to 2.86 billion hacked credentials across all sources: infostealer log databases and dark web criminal marketplaces.

Tricks used by infostealers:


Phishing-as-a-Service operations use AI-generated, tailored scams delivered over email and messaging apps to get around MFA. In so-called "hack your own password" attacks, users are duped into manually running scripts that circumvent conventional security measures.

Trojanized software is promoted through malicious advertisements and search results, increasing the risk of infection. In supply-chain attacks, poisoned packages and DevTools impersonation target high-privilege credentials. Compromised browser-extension updates enable form-grabbing and cookie theft. Fake software updates and pirated apps also remained effective.

OpenAI Codex Bug Leads to GitHub Token Breach

 

In March 2026, researchers from BeyondTrust showed that a specially crafted GitHub branch name was enough to steal Codex’s OAuth token in cleartext. OpenAI classified the finding as “Critical P1”. Soon after, Anthropic’s Claude Code source code leaked into the public npm registry, and researchers at Adversa found that Claude Code silently ignored its own deny rules once a command exceeded 50 subcommands.

Malicious code in AI

These were not isolated vulnerabilities. They were data points in a nine-month pattern: six research teams revealed exploits against Copilot, Vertex AI, Codex, and Claude Code. Every exploit followed the same strategy. An AI agent held a credential, performed an action, and authenticated to a production system without any human session backing the request.

The attack surface was first showcased at Black Hat USA 2025, where experts hacked ChatGPT, Microsoft Copilot Studio, Gemini, Cursor, and many more on stage, with zero clicks. Nine months later, threat actors were exploiting those same credentials.

How a branch name in Codex compromised GitHub


Researchers at BeyondTrust found that Codex cloned repositories using a GitHub OAuth token embedded in the git remote URL. During cloning, the branch name was passed unsanitised into the setup script, so a backtick subshell and a semicolon were enough to turn a branch name into an exfiltration payload.
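BeyondTrust has not published the exact setup script, so the sketch below is only an illustration of the underlying vulnerability class: interpolating an untrusted branch name into a shell command versus passing it as a validated argument list. The function names are assumptions, not Codex's actual code.

```python
import subprocess

# Illustrative sketch of the injection class, not the real Codex implementation.

def clone_unsafely(repo_url: str, branch: str) -> None:
    # VULNERABLE pattern: the branch name is interpolated into a shell string,
    # so backticks or a semicolon in the branch name execute arbitrary commands
    # while the OAuth token sits in the remote URL.
    subprocess.run(f"git clone --branch {branch} {repo_url}", shell=True, check=True)

def clone_safely(repo_url: str, branch: str) -> None:
    # Safer pattern: validate the branch name with git's own rules, then pass
    # arguments as a list so no shell ever interprets the untrusted string.
    subprocess.run(["git", "check-ref-format", "--branch", branch], check=True)
    subprocess.run(["git", "clone", "--branch", branch, "--", repo_url], check=True)
```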

About the bug


The vulnerability affects the ChatGPT website, Codex CLI, Codex SDK, and the Codex IDE Extension. All reported issues have since been fixed in collaboration with OpenAI's security team.

This vulnerability allows an attacker to inject arbitrary commands through the GitHub branch name parameter, potentially leading to the automated theft of a victim's GitHub User Access Token, the same token Codex uses to authenticate with GitHub.

Vulnerability impact


Because the attack can be automated, the vulnerability can scale to compromise many people interacting with a shared environment or GitHub repository.

“OpenAI Codex is a cloud-based coding agent, accessible through ChatGPT. It allows users to point the tool toward a codebase and submit tasks through a prompt. Codex then spins up a managed container instance to execute these tasks—such as generating code, answering questions about a codebase, creating pull requests, and performing code reviews against the selected repository,” said BeyondTrust.

Spotify Verified Badge Targets AI Music Confusion as Human Artist Authentication Expands

 

Now appearing beside artist profiles, Spotify’s new “Verified by Spotify” badge uses a green checkmark to highlight real human creators. Only accounts meeting the platform’s internal authenticity checks receive the label. Rather than algorithm-built personas, these profiles represent actual musicians behind the music. The rollout is happening gradually, changing how artists appear in searches, playlists, and recommendations. 

The update arrives as concerns continue growing around AI-generated music flooding streaming services. Spotify says verification depends on signals such as active social media accounts, consistent listener activity, merchandise listings, and live performance schedules - indicators suggesting a genuine person is tied to the profile. 
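Spotify has not said how these signals are weighted, but the description amounts to a heuristic score over a handful of profile attributes. The sketch below is a hypothetical illustration only; the field names, weights, and threshold are assumptions, not Spotify's actual criteria.

```python
from dataclasses import dataclass

# Hypothetical signal-based check; Spotify has not published its verification logic.

@dataclass
class ArtistSignals:
    active_social_accounts: bool
    consistent_listener_activity: bool
    has_merchandise: bool
    has_live_dates: bool

def looks_human(signals: ArtistSignals, threshold: int = 3) -> bool:
    # Count how many independent "real person" indicators the profile shows.
    score = sum([
        signals.active_social_accounts,
        signals.consistent_listener_activity,
        signals.has_merchandise,
        signals.has_live_dates,
    ])
    return score >= threshold

print(looks_human(ArtistSignals(True, True, False, True)))  # True
```

A fixed-threshold heuristic like this also illustrates the criticism raised later in the piece: artists who do not tour or sell merchandise start two signals down regardless of how real they are.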

According to the company, these measures are designed to separate human creators from automated content increasingly appearing online.  Spotify says most artists users actively search for will eventually receive verification. Artists recognized for meaningful contributions to music culture are expected to be prioritized ahead of bulk-uploaded or mass-generated accounts. 

Over the coming weeks, the checkmarks will gradually appear across the platform, with influence and authenticity carrying more weight than upload volume. The move comes as streaming platforms face mounting criticism over how they handle AI-generated tracks. While the badge confirms a profile belongs to a real person, some critics quickly pointed out that it does not indicate whether artificial intelligence was used to help create the music itself. 

Questions around what counts as “real” music continue growing as AI tools become more involved in production. Creator-rights advocate and former AI executive Ed Newton-Rex warned that systems like Spotify’s may unintentionally disadvantage independent musicians who do not tour, sell merchandise, or maintain strong social media visibility. 

Instead, he suggested platforms should directly label AI-generated songs rather than relying solely on artist verification. Experts also note that defining AI involvement in music is increasingly difficult. Professor Nick Collins from Durham University described AI-assisted music creation as a broad spectrum rather than a simple divide between human-made and machine-made work. Many songs now involve software-assisted mixing, mastering, composition, or editing, making it far harder to classify music by origin alone. 

Spotify has faced years of criticism over AI-generated audio. Across forums and online communities, users have repeatedly called for clearer labels showing whether tracks were created by humans or algorithms. Some developers have even built independent tools aimed at detecting and filtering AI-generated songs on the platform. Concerns intensified after projects like The Velvet Sundown attracted large audiences despite having no interviews, live performances, or publicly traceable history. 

The group later described itself as a “synthetic music project” supported by artificial intelligence, fueling debate around transparency in digital music spaces. Spotify’s latest verification effort appears aimed at rebuilding trust while balancing support for evolving AI technologies. The move also reflects a broader trend across digital platforms, where companies are introducing verification systems to distinguish human-created content from synthetic material as AI-generated media becomes harder to identify.
