New proposals in the French Parliament would require tech companies to hand over decrypted messages and email. Businesses that fail to comply face heavy fines.
France has proposed a law requiring end-to-end encrypted messaging apps like WhatsApp and Signal, and encrypted email services like Proton Mail, to give law enforcement agencies access to decrypted data on demand.
The move builds on France’s proposed “Narcotraffic” bill, which asks tech companies to hand over the decrypted chats of suspected criminals within 72 hours.
The law has stirred debate in the tech community and among civil society groups because it may require building “backdoors” into encrypted services, which could be abused by threat actors and state-sponsored attackers.
Individuals who fail to comply face fines of €1.5m, and companies may lose up to 2% of their annual global turnover if they cannot hand over encrypted communications to the government.
Security experts maintain that it is not possible to build backdoors into encrypted communications without weakening their security.
According to Computer Weekly’s report, Matthias Pfau, CEO of Tuta Mail, a German encrypted mail provider, said, “A backdoor for the good guys only is a dangerous illusion. Weakening encryption for law enforcement inevitably creates vulnerabilities that can – and will – be exploited by cyber criminals and hostile foreign actors. This law would not just target criminals, it would destroy security for everyone.”
Researchers stress that the French proposals are not technically feasible without “fundamentally weakening the security of messaging and email services.” Like the UK’s Online Safety Act, the proposed French law exposes a serious misunderstanding of what is practically achievable with end-to-end encrypted systems. Experts insist “there are no safe backdoors into encrypted services.”
The law would also allow the use of infamous spyware such as NSO Group’s Pegasus or Paragon’s Graphite, enabling officials to surveil devices remotely. “Tuta Mail has warned that if the proposals are passed, it would put France in conflict with European Union laws, and German IT security laws, including the IT Security Act and Germany’s Telecommunications Act (TKG) which require companies to secure their customer’s data,” reports Computer Weekly.
Online attacks are commonplace in 2025. The rising use of AI has made cyberattacks faster and more sophisticated, and that trend is unlikely to slow down. To help readers, this blog outlines the basics of digital safety.
A good antivirus on your system helps protect you from malware, ransomware, phishing sites, and other major threats.
For starters, Microsoft’s built-in Windows Security antivirus is a must (it is usually active by default, unless you have changed the settings). Microsoft’s antivirus is reliable and runs unobtrusively in the background.
You can also purchase paid antivirus software, which provides extra security and additional features in a single all-in-one interface.
A password manager, whether a standalone service or part of an antivirus suite, is the backbone of login security, protecting your credentials across the web. It also reduces how much of your sensitive data ends up stored on individual websites.
A simple example: to maintain privacy, keep your credit card info in your password manager instead of letting shopping websites store those sensitive details.
You'll be comparatively safer if a threat actor gains unauthorized access to one of your accounts and tries to scam you.
In today's digital world, a standalone password isn't enough to protect you from attackers. Two-factor authentication (2FA), or multi-factor authentication, adds an extra security layer before users can access their account. Even if a hacker has your login credentials, they still won't have everything needed to sign in.
A safer option, where available, is 2FA via app-generated one-time codes; these are safer than codes sent through SMS, which can be intercepted.
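For a sense of how app-generated codes work under the hood, here is a minimal sketch of the TOTP algorithm (RFC 6238) that authenticator apps implement; the shared secret and timestamp come from the RFC's published test vectors, not from any real account:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Derive a time-based one-time code from a Base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    msg = struct.pack(">Q", counter)  # 8-byte big-endian time counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F        # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time = 59s
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59, digits=8))  # → 94287082
```

Because your phone and the server both derive the code from a shared secret plus the current time, nothing usable ever crosses the network, which is exactly why these codes are harder to intercept than SMS.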
If passwords and 2FA feel like a headache, you can use your phone or PC itself as a sign-in method through a passkey.
Passkeys are easy, fast, and simple: you don't have to remember them, because they are stored on your device. Unlike passwords, passkeys are bound to the device you've saved them on, which keeps them from being stolen or misused by hackers. A PIN or biometric check is all it takes to approve a passkey.
In the past, one quick skim was enough to recognize that something was off with an email; incorrect grammar and laughable typos were the usual giveaways. Since scammers now use generative AI language models, most phishing messages have flawless grammar.
But there is hope: generated text can still be identified. Keep an eye out for an unnatural flow of sentences; if everything seems too perfect, chances are it’s AI.
Though AI has made phishing scams harder to spot, they still show some classic behavior, and the usual tips for detecting phishing emails still apply.
In most cases, scammers mimic businesses and hope you won’t notice. For instance, instead of an official “info@members.hotstar.com” address, you may see something like “info@members.hotstar-support.com.” Unrequested links or attachments are another huge tell. Mismatched URLs with subtle typos or extra words and letters are harder to notice, but they are a big tip-off that you are on a malicious website or interacting with a fake business.
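The mismatched-domain tell can even be checked mechanically. Below is a hedged sketch: the domains are the illustrative examples above, not a real blocklist, and the last-two-labels heuristic is a simplification (production code would consult the Public Suffix List):

```python
from urllib.parse import urlparse

def registered_domain(url):
    """Crude last-two-labels heuristic for the registrable domain."""
    host = urlparse(url).hostname or ""
    parts = host.lower().split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def looks_suspicious(url, expected_domain):
    """True when the link's real domain doesn't match the claimed brand."""
    return registered_domain(url) != expected_domain

print(looks_suspicious("https://members.hotstar.com/account", "hotstar.com"))        # False
print(looks_suspicious("https://members.hotstar-support.com/login", "hotstar.com"))  # True
```

The key point the sketch illustrates: "hotstar-support.com" is a completely different registered domain from "hotstar.com", no matter how familiar the subdomain looks.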
The biggest issue these days is combating deepfakes, which are also difficult to spot.
Attackers create realistic video clips from photo and video prompts, then use video-calling apps like Zoom or FaceTime to trick potential victims (especially senior citizens) into giving away sensitive data.
One may think only older people fall for deepfakes, but they have become so sophisticated that even experts fall prey to them. In one famous incident in Hong Kong, scammers deepfaked a company CFO and stole HK$200 million (roughly $25 million).
AI is advancing and growing stronger every day. It is a double-edged sword, and one should tread the ethical lines carefully and hope not to fall to its dark side.
One infamous example is the “tech support scam,” where a fake warning tells the user their device is infected with malware and urges them to call a (fake) support number or install bogus anti-malware software to restore the system. Over the years, users have seen plenty of these fake Microsoft IT support pop-ups.
Recognizing the threat, Microsoft is combating the issue with its new scareware blocker feature in Edge, first announced in November last year at the Ignite conference.
Defender SmartScreen, the existing feature that protects Edge users from scams, only kicks in after a malicious site has been caught and added to its index of abusive web pages, at which point users are protected globally.
The new AI-powered Edge scareware blocker by Microsoft “offers extra protection by detecting signs of scareware scams in real-time using a local machine learning model,” says Bleeping Computer.
Describing the feature, Microsoft says, “The blocker adds a new, first line of defense to help protect the users exposed to a new scam if it attempts to open a full-screen page.” “Scareware blocker uses a machine learning model that runs on the local computer,” it adds.
Once the blocker catches a scam page, it warns users but lets them continue to the webpage if they trust the site.
To activate the blocker, the user first needs to install the Microsoft Edge beta. The beta installs alongside the main release of Edge, so the two versions don’t conflict. On a managed system, the user should make sure previews are enabled by an admin.
“After making sure you have the latest updates, you should see the scareware blocker preview listed under ‘Privacy, Search, and Services,’” Microsoft says. The company also asks users to report scam sites they encounter, saying these reports help “make the feature more reliable to catch the real scams. Beyond just blocking individual scam outbreaks,” its Digital Crimes Unit “goes even further to target the cybercrime supply chain directly.”
According to the FBI, criminals are increasingly using generative artificial intelligence (AI) to make their fraudulent schemes more convincing. This technology enables fraudsters to produce large amounts of realistic content with minimal time and effort, increasing the scale and sophistication of their operations.
Generative AI systems work by synthesizing new content based on patterns learned from existing data. While creating or distributing synthetic content is not inherently illegal, such tools can be misused for activities like fraud, extortion, and misinformation. The accessibility of generative AI raises concerns about its potential for exploitation.
AI offers significant benefits across industries, including enhanced operational efficiency, regulatory compliance, and advanced analytics. In the financial sector, it has been instrumental in improving product customization and streamlining processes. However, alongside these benefits, vulnerabilities have emerged, including third-party dependencies, market correlations, cyber risks, and concerns about data quality and governance.
The misuse of generative AI poses additional risks to financial markets, such as facilitating financial fraud and spreading false information. Misaligned or poorly calibrated AI models may result in unintended consequences, potentially impacting financial stability. Long-term implications, including shifts in market structures, macroeconomic conditions, and energy consumption, further underscore the importance of responsible AI deployment.
Fraudsters have increasingly turned to generative AI to enhance their schemes, using AI-generated text and media to craft convincing narratives. These include social engineering tactics, spear-phishing, romance scams, and investment frauds. Additionally, AI can generate large volumes of fake social media profiles or deepfake videos, which are used to manipulate victims into divulging sensitive information or transferring funds. Criminals have even employed AI-generated audio to mimic voices, misleading individuals into believing they are interacting with trusted contacts.
In one notable incident reported by the FBI, a North Korean cybercriminal used a deepfake video to secure employment with an AI-focused company, exploiting the position to access sensitive information. Similarly, Russian threat actors have been linked to fake videos aimed at influencing elections. These cases highlight the broad potential for misuse of generative AI across various domains.
To address these challenges, the FBI advises individuals to take several precautions. These include establishing secret codes with trusted contacts to verify identities, minimizing the sharing of personal images or voice data online, and scrutinizing suspicious content. The agency also cautions against transferring funds, purchasing gift cards, or sending cryptocurrency to unknown parties, as these are common tactics employed in scams.
Generative AI tools have been used to improve the quality of phishing messages by reducing grammatical errors and refining language, making scams more convincing. Fraudulent websites have also employed AI-powered chatbots to lure victims into clicking harmful links. To reduce exposure to such threats, individuals are advised to avoid sharing sensitive personal information online or over the phone with unverified sources.
By remaining vigilant and adopting these protective measures, individuals can mitigate their risk of falling victim to fraud schemes enabled by emerging AI technologies.
This supply chain campaign shows how cyber threats against developers are advancing, and underscores the urgent need for caution in open-source ecosystems.
Experts have found two malicious packages uploaded to the Python Package Index (PyPI) repository that pretend to be clients for popular artificial intelligence (AI) models like OpenAI ChatGPT and Anthropic Claude while distributing an information stealer known as JarkaStealer.
Named gptplus and claudeai-eng, the packages were uploaded by a user called "Xeroline" last year and drew 1,748 and 1,826 downloads respectively. The two libraries can no longer be downloaded from PyPI. According to Kaspersky, the malicious packages were uploaded by a single author and differed only in name and description.
The packages claimed to offer access to the GPT-4 Turbo and Claude AI APIs, but contained malicious code that, once installed, kicked off the malware's deployment.
Specifically, the "__init__.py" file in these packages contained Base64-encoded data with code to download a Java archive ("JavaUpdater.jar") from a GitHub repository. It also downloads the Java Runtime Environment (JRE) from a Dropbox URL if Java isn't already present on the host, before running the JAR file.
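Hiding the downloader in a Base64 literal is a common obfuscation trick, and one a reviewer can screen for. The toy scanner below flags long Base64-looking strings in source code; it is an illustrative heuristic only, not Kaspersky's actual detection logic:

```python
import base64
import re

# Long runs of the Base64 alphabet, optionally padded, rarely appear
# in honest source code; 80+ characters is an arbitrary toy threshold.
B64_BLOB = re.compile(r"[A-Za-z0-9+/]{80,}={0,2}")

def find_base64_blobs(source):
    """Return decoded previews of suspicious Base64 literals in source code."""
    hits = []
    for match in B64_BLOB.finditer(source):
        blob = match.group()
        try:
            # Repair padding so partial matches still decode.
            decoded = base64.b64decode(blob + "=" * (-len(blob) % 4))
            hits.append(decoded[:40])  # short preview for a human reviewer
        except Exception:
            continue
    return hits

# Harmless demo: a string literal hiding what looks like downloader code.
payload = base64.b64encode(b"import urllib.request  # pretend downloader" * 3).decode()
print(find_base64_blobs(f'data = "{payload}"'))
```

A real package scanner would combine signals like this with sandboxed installation and network monitoring, but even this crude check would have surfaced the encoded payload in these packages' `__init__.py`.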
The JAR file, based on the JarkaStealer information stealer, can steal a variety of sensitive data, including web browser data, system data, session tokens, and screenshots, from a wide range of applications such as Steam, Telegram, and Discord.
In the last step, the stolen data is archived, sent to the attacker's server, and then removed from the target's machine. JarkaStealer is offered under a malware-as-a-service (MaaS) model through a Telegram channel for between $20 and $50, though its source code has also been leaked on GitHub.
ClickPy stats suggest the packages were downloaded over 3,500 times, primarily by users in China, the U.S., India, Russia, Germany, and France. The attack was part of a year-long supply chain attack campaign.
India, with its rapid digital growth and reliance on technology, is squarely on the hit list of cybercriminals. As one of the world's biggest economies, the country faces a distinct digital threat, with cyber-crooks exploiting security holes in businesses, institutions, and personal devices.
India saw a 51 percent surge in ransomware attacks in 2023, according to the Indian Computer Emergency Response Team (CERT-In). Small and medium-sized businesses have been an especially vulnerable target, with more than 300 small banks forced to close briefly in July after falling prey to a ransomware attack. For the millions of Indians who use digital banking for daily purchases and payments, such disruptions underscore the need to improve cybersecurity measures. A report from Kaspersky shows that 53% of SMBs operating in India have experienced ransomware incidents so far this year, with more than 559 million cases reported in just two months, April and May.
Cybercriminals are not only locking up business computers but extending attacks to individuals and their personal devices, stealing sensitive, highly confidential information. Well-organised groups in this wave include Mallox, RansomHub, LockBit, Kill Security, and ARCrypter. These groups exploit weaknesses in Indian infrastructure and lean on ransomware-as-a-service platforms that target Microsoft SQL databases. Recovery costs for affected organisations usually exceeded ₹11 crore and averaged ₹40 crore per incident in India, according to estimates for 2023. The financial sector, in particular the National Payments Corporation of India (NPCI), has been hit hard, making it crystal clear that India's digital financial framework urgently needs strengthening.
Cyber Defence Through AI
Indian organisations are now employing AI to fortify their digital defences. AI-based tools process enormous volumes of data in real time and flag anomalies far faster than any manual system. From finance to healthcare, sectors with high security risks are making AI integral to their cybersecurity strategies. Lenovo's recent AI-enabled security initiatives exemplify how mainstream the technology has become, with 71% of retailers in India adopting or planning to adopt AI-powered security.
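The core idea behind such anomaly flagging can be sketched very simply. The example below scores a new event count against a baseline using a z-score; the login figures are invented for illustration, and real security tools use far richer models than this:

```python
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag `current` if it sits more than `threshold` std devs from the baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu  # flat baseline: any change is notable
    return abs(current - mu) / sigma > threshold

# Hypothetical logins-per-minute baseline for one service.
logins_per_minute = [12, 15, 11, 14, 13, 12, 16, 14]
print(is_anomalous(logins_per_minute, 15))  # ordinary traffic: False
print(is_anomalous(logins_per_minute, 90))  # sudden burst worth alerting on: True
```

Production systems layer many such signals (failed logins, new geographies, unusual processes) and learn the baselines continuously, but the speed advantage over manual review comes from exactly this kind of automatic scoring.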
As India pushes forward on its digital agenda, the threat of ransomware cannot be taken lightly. Meeting it will require close collaboration between government and private entities, investment in AI and cybersecurity education, and the creation of safer environments for digital life. The government's Cyber Commando initiative promises progress, but collective effort will be crucial to safeguarding India's burgeoning digital economy.
Artificial intelligence, once considered a tool for enhancing security measures, has become a threat. Cybercriminals are leveraging AI to orchestrate more sophisticated and pervasive attacks. AI’s capability to analyze vast amounts of data at lightning speed, identify vulnerabilities, and execute attacks autonomously has rendered traditional security measures obsolete.
Sneha Katkar from Quick Heal notes, “The landscape of cybercrime has evolved significantly with AI automating and enhancing these attacks.”
Cybercriminals employed AI-driven tools to bypass security protocols, resulting in the compromise of sensitive data. Such incidents underscore the urgent need for upgraded security frameworks to counter these advanced threats.
The rise of AI-powered malware and ransomware is particularly concerning. These malicious programs can adapt, learn, and evolve, making them harder to detect and neutralize. Traditional antivirus software, which relies on signature-based detection, is often ineffective against such threats. As Katkar pointed out, “AI-driven cyberattacks require an equally sophisticated response.”
One of the critical challenges in combating AI-driven cyberattacks is the speed at which these attacks can be executed. Automated attacks can be carried out in a matter of minutes, causing significant damage before any countermeasures can be deployed. This rapid execution leaves organizations with little time to react, highlighting the need for real-time threat detection and response systems.
Moreover, the use of AI in phishing attacks has added a new layer of complexity. Phishing emails generated by AI can mimic human writing styles, making them indistinguishable from legitimate communications. This sophistication increases the likelihood of unsuspecting individuals falling victim to these scams. Organizations must therefore invest in advanced AI-driven security solutions that can detect and mitigate such threats.
A new variant of the Rhadamanthys information stealer has been identified, posing a further threat to cryptocurrency users by adding AI to seed phrase recognition. Its operators have bolted on optical character recognition (OCR), letting the malware scan images for seed phrases, the key information needed to access cryptocurrency wallets.
According to Recorded Future's Insikt Group, Rhadamanthys can now scan infected devices for images containing seed phrases, extract that information, and use it for further exploitation.
In practice, this means wallets can now be compromised even when seed phrases are stored as images rather than text.
Evolution of Rhadamanthys
First discovered in 2022, Rhadamanthys has proven to be one of the most dangerous information stealers available today. It operates under a MaaS model, a scheme in which the developers rent the malware to other cybercriminals for a subscription fee of around $250 per month. The malware lets attackers steal highly sensitive information, including system details, credentials, browser passwords, and cryptocurrency wallet data.
The malware's author, known as "kingcrete," continues to publish new versions through Telegram and Jabber despite being banned from underground forums such as Exploit and XSS, whose users are mainly from Russia and the former Soviet Union.
The latest release, Rhadamanthys 0.7.0, published in June 2024, is a major structural overhaul. The malware is now equipped with AI-powered recognition of cryptocurrency wallet seed phrases in images, making it an even more effective tool in the hands of hackers. The client- and server-side frameworks were fully rewritten for speed and stability. Additionally, the malware now packs 30 wallet-cracking algorithms and enhanced capabilities for extracting information from PDFs and saved phrases.
Rhadamanthys also has a plugin system that extends its operations with keylogging, cryptocurrency clipping (silently swapping wallet addresses on the clipboard), and reverse proxy setups. These tools let attackers harvest secrets flexibly and stealthily.
Higher Security Risks for Crypto Users
Rhadamanthys is a crucial threat for anyone involved with cryptocurrencies, as its operators target wallet information stored in browsers, PDFs, and images. The worrying use of AI to extract seed phrases from images shows that attackers keep inventing new ways to defeat security measures.
This evolution demands better security practices at both the individual and organisational level, particularly around cryptocurrencies. Even simple habits, like never storing sensitive data in an image or other unprotected file, blunt this malware's most dangerous capability.
Broader Implications and Related Threats
Rhadamanthys' ongoing development is part of a broader trend in malware evolution. Other stealer families, such as Lumma and WhiteSnake, have also released recent updates that add functionality for extracting sensitive information. For instance, the Lumma stealer bypasses new browser security features, while the WhiteSnake stealer has been updated to grab credit card information stored in web browsers.
These persistent updates to stealer malware reflect how mature cyber threats have become. Other attacks, such as the ClickFix campaign, are deceiving users into running malicious code masquerading as CAPTCHA verification.
With cybercrime operations growing more sophisticated and their tools perfected day by day, online security has never faced a greater challenge. Users need to stay alert and informed about emerging threats to keep their personal and financial data out of the wrong hands.