Security vendor BforeAI says roughly 600 phishing campaigns surfaced in the wake of the Bybit heist, aiming to steal cryptocurrency from the exchange’s customers. In the three weeks following news of the biggest crypto theft in history, BforeAI identified 596 suspicious domains registered across 13 different countries.
Dozens of these malicious domains mimicked the cryptocurrency exchange (Bybit) itself, most using typosquatting techniques and keywords like “wallet,” “refund,” “information,” “recovery,” and “check.”
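As a rough illustration of how defenders hunt for this pattern (this is not BforeAI’s actual methodology), a scanner sweeping newly registered domains might simply flag names that combine the brand with the scam keywords above:

```python
# Illustrative sketch only -- not BforeAI's tooling. Flags domain names
# that pair the Bybit brand with the scam keywords reported above.
SCAM_KEYWORDS = {"wallet", "refund", "information", "recovery", "check"}

def looks_like_bybit_phish(domain: str) -> bool:
    name = domain.lower()
    return "bybit" in name and any(kw in name for kw in SCAM_KEYWORDS)

for d in ["bybit-refund-check.com", "bybit.com", "wallet-bybit-recovery.net"]:
    print(d, "->", looks_like_bybit_phish(d))
# bybit-refund-check.com -> True, bybit.com -> False, wallet-bybit-recovery.net -> True
```

A real pipeline would also whitelist the legitimate domain and score typo variants, but the keyword-plus-brand combination is the core signal the report describes.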
According to BforeAI, there were also “instances of popular crypto keywords such as ‘metaconnect,’ ‘mining,’ and ‘airdrop,’ as well as the use of free hosting and subdomain registration services such as Netlify, Vercel, and Pages.dev.”
Free hosting and dynamic subdomain services are a common thread in the dataset: many of the phishing pages sit on platforms that offer anonymous, rapid deployment with no domain purchase required. The UK accounted for the highest number of verified malicious domain registrations.
After the incident, Bybit assured customers that they would not lose any money as a result. But the hackers took advantage of that reassurance, deliberately creating a sense of anxiety and urgency through deceptive tactics such as fake recovery services and phishing schemes. Some phishing websites even posed as the “Bybit Help Center.”
The end goal was to trick victims into entering their Bybit and crypto wallet passwords. A few weeks later, the campaigns shifted from offering “withdrawals, information, and refunds” through spoofed Bybit sites to dangling “crypto and training guides” and special rewards to lure would-be investors.
Despite the pivot to training content, the campaigns preserved a “connection to the earlier withdrawal scams by including ‘how to withdraw from Bybit guides,’” BforeAI explained. The result, it added, is “a flow of traffic between learning resources fakes and withdrawal phishing attempts.”
Bybit has accused North Korean hackers of being behind the attack, which cost the firm a massive $1.5 billion in stolen crypto. The heist helped hand Q1 2025 an infamous record: $1.7 billion stolen in a single quarter, the highest figure in history.
The latest "Qwen2.5-Omni-7B" is a multimodal model- it can process inputs like audio/video, text, and images- while also creating real-time text and natural speech responses, Alibaba’s cloud website reports. It also said that the model can be used on edge devices such as smartphones, providing higher efficiency without giving up on performance.
According to Alibaba, this “unique combination makes it the perfect foundation for developing agile, cost-effective AI agents that deliver tangible value, especially intelligent voice applications.” For instance, the model could help visually impaired users navigate their environment through real-time audio descriptions.
The model is open-sourced on GitHub and Hugging Face, part of a rising trend in China since DeepSeek’s breakthrough open-source R1 model. Open source means the software’s source code is made freely available on the web for modification and redistribution.
Alibaba claims to have open-sourced more than 200 generative AI models in recent years. Amid the jockeying for AI dominance in China, intensified by DeepSeek’s shoestring budget and strong capabilities, Alibaba and its generative AI competitors have been racing to release new, cost-cutting models and services.
Last week, Chinese tech mammoth Baidu launched a new multimodal foundation model and its first reasoning-focused model. Likewise, Alibaba introduced its updated Qwen 2.5 AI model in January and launched a new variant of its AI assistant tool Quark this month.
Alibaba has also made firm commitments to its AI plans: it recently announced that it would put $53 billion into its cloud computing and AI infrastructure over the next three years, more than it has spent in the space over the past decade.
Kai Wang, senior Asia equity analyst at Morningstar, told CNBC that “large Chinese tech players such as Alibaba, which build data centers to meet the computing needs of AI in addition to building their own LLMs, are well positioned to benefit from China's post-DeepSeek AI boom.” According to CNBC, “Alibaba secured a major win for its AI business last month when it confirmed that the company was partnering with Apple to roll out AI integration for iPhones sold in China.”
The digital landscape evolves every day, thanks to innovation and technological advances. That growth, however, comes with roadblocks, and cybercrime is a major one that shows no signs of ending anytime soon. Artificial intelligence, large-scale data breaches, and increasingly refined targeting of businesses, governments, and media platforms have all fed the problem. NordVPN CTO Marijus Briedis believes “prevention alone is insufficient”; what we need is resilience.
VPN provider NordVPN experienced the changing cyber threat landscape first-hand after a spike in cybercrime targeting Lithuania, where the company is based, against the backdrop of the Ukraine conflict.
The last few years have seen the expansion of cybercrime gangs and state-sponsored hackers, along with growing abuse of digital vulnerabilities. Worse still, “with little resources, you can have a lot of damage,” Briedis added. Data breaches reached an all-time high in 2024: the infamous “mother of all data breaches” incident alone resulted in a massive 26 billion-record leak. Overall, more than 1 billion records were leaked throughout the year, according to NordLayer data.
Google’s Cybersecurity Forecast 2025 included Generative AI as a main threat, along with state-sponsored cybercriminals and ransomware.
Amid these increasing cyber threats, companies like NordVPN are widening the scope of their security services, and over the years many countries have enacted laws to guard against cyberattacks as far as possible.
Over the years, governments, individuals, and organizations have also learned to protect their important data with VPNs, antivirus, firewalls, and other security software. Even so, it is not enough. According to Briedis, that is because cybersecurity is not a fixed goal: “We have to be adaptive and make sure that we are learning from these attacks. We need to be [cyber] resilient.”
At a RightsCon panel Briedis attended, the discussion focused on how NGOs, activists, and small businesses can use Nord’s advice to become more cyber-resilient. He puts particular weight on education, stressing that it is the “first thing.”
French startup Twin has introduced its very first AI-powered automation tool to help business owners who use Qonto. Qonto is a digital banking platform that offers financial services to companies across Europe. Many Qonto users spend hours each month gathering invoices from different sources and uploading them. Twin’s new tool does this job faster and with almost no effort from the user.
The tool is called Invoice Operator. It has been designed to save time by automatically finding and attaching invoices to the right transactions in a Qonto account. This means users no longer have to search for documents themselves or waste time uploading files manually.
Usually, companies use tools like Zapier or software like UiPath to automate tasks. These tools often need coding knowledge or work through complex scripts that break if a website changes. Twin uses a smarter method that copies how a person uses a web browser but with the help of artificial intelligence.
Here’s how Invoice Operator works: when a Qonto user starts the tool, it first checks which transactions are missing invoices. Then it opens a browser and prepares to visit the websites where invoices might be stored. If a login is required, the tool stops and asks the user to enter their username and password. After logging in, the AI continues its job, finding the needed documents and uploading them to Qonto automatically.
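As a rough sketch of that flow (not Twin’s actual implementation), a browser-automation script with a human-in-the-loop login step might look like this, assuming Playwright is installed and treating the vendor URL and invoice link text as hypothetical placeholders:

```python
# Illustrative sketch only -- not Twin's code. Requires Playwright
# ("pip install playwright", then "playwright install chromium").
from playwright.sync_api import sync_playwright

def fetch_invoice(vendor_url: str, invoice_link_text: str) -> str:
    """Open the vendor portal, let the user log in, then download the invoice."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=False)  # visible, so the user can log in
        page = browser.new_page()
        page.goto(vendor_url)
        input("Log in to the vendor portal, then press Enter...")  # human-in-the-loop step
        # In the real product an AI model decides what to click; here a fixed,
        # hypothetical link text stands in for that decision.
        with page.expect_download() as download_info:
            page.get_by_text(invoice_link_text).click()
        path = str(download_info.value.path())
        browser.close()
        return path  # the next step would attach this file to the Qonto transaction

# fetch_invoice("https://vendor.example.com/account", "Download invoice")
```

The point of the design is visible here: because the agent drives an ordinary browser, no per-website script has to be written or maintained.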
This method is useful because businesses often use many different platforms to make purchases. It would be too difficult and time-consuming to write special instructions for each website. But Twin’s technology can handle thousands of services without needing extra scripts.
The tool is powered by an advanced AI model developed by OpenAI, which allows the software to operate a browser in the same way a person would. Twin was one of only a few companies allowed to test this AI model before it was released to the public.
What makes Twin’s tool even more helpful is that it’s very easy to use. Business owners don’t have to understand coding or set up anything complicated. Once logged in, the AI handles the process without further input. This makes it ideal for people who want results without dealing with technical steps.
In the long run, Twin believes its technology can be useful for many other tasks in different industries. For example, it could help online stores handle orders or assist customer support teams in finding information quickly.
With this launch, Twin is showing how smart automation can reduce boring and repetitive work. The company hopes to bring its AI tools to more people and businesses in the near future.
New proposals in the French Parliament would mandate that tech companies hand over decrypted messages and email, with heavy fines imposed on businesses that fail to comply.
France has proposed a law requiring end-to-end encrypted messaging apps like WhatsApp and Signal, and encrypted email services like Proton Mail, to give law enforcement agencies access to decrypted data on demand.
The move follows France’s proposed “Narcotraffic” bill, which would require tech companies to hand over the decrypted chats of suspected criminals within 72 hours.
The law has stirred debate in the tech community and among civil society groups because it could force the building of “backdoors” into encrypted systems, which can be abused by threat actors and state-sponsored attackers.
Individuals who fail to comply would face fines of €1.5m, and companies could lose up to 2% of their annual worldwide turnover if they cannot hand over encrypted communications to the government.
Experts maintain that backdoors cannot be added to encrypted communications without weakening their security.
According to Computer Weekly’s report, Matthias Pfau, CEO of Tuta Mail, a German encrypted mail provider, said, “A backdoor for the good guys only is a dangerous illusion. Weakening encryption for law enforcement inevitably creates vulnerabilities that can – and will – be exploited by cyber criminals and hostile foreign actors. This law would not just target criminals, it would destroy security for everyone.”
Researchers stress that the French proposals cannot be implemented without “fundamentally weakening the security of messaging and email services.” Like the UK’s Online Safety Act, the proposed French law betrays a serious misunderstanding of what is practically achievable with end-to-end encrypted systems. Experts insist “there are no safe backdoors into encrypted services.”
The law would also allow the use of infamous spyware such as NSO Group’s Pegasus or Paragon, enabling officials to remotely surveil devices. “Tuta Mail has warned that if the proposals are passed, it would put France in conflict with European Union laws, and German IT security laws, including the IT Security Act and Germany’s Telecommunications Act (TKG) which require companies to secure their customer’s data,” reports Computer Weekly.
Online attacks are commonplace in 2025. Rising AI use has made cyberattacks faster and more sophisticated, and that shift is unlikely to slow down. To help readers, this blog outlines the basics of digital safety.
A good antivirus protects your system from malware, ransomware, phishing sites, and other major threats.
For starters, Microsoft’s built-in Windows Security antivirus is a must (it is usually active by default, unless you have changed the settings). Microsoft’s antivirus is reliable and runs in the background without being nosy.
You can also purchase paid antivirus software, which provides extra security and additional features in a single all-in-one interface.
A password manager, whether a standalone service or part of an antivirus suite, is the backbone of login security, protecting your credentials across the web. It also reduces the amount of your data that ends up saved on individual websites.
A simple example: to maintain privacy, keep your credit card details in your password manager instead of allowing shopping websites to store them. That way, if a threat actor gains unauthorized access to one of your accounts and tries to scam you, there is far less for them to take.
In today’s digital world, a standalone password is no longer a safe bet against attackers. Two-factor authentication (2FA), or multi-factor authentication, adds an extra security layer before an account can be accessed: even if a hacker has your login credentials, they still won’t have everything needed to sign in.
Where possible, the safer option is 2FA via app-generated one-time codes, which are harder to intercept than codes sent through SMS.
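The reason app-generated codes are harder to intercept is that nothing is transmitted: the code is derived locally from a shared secret plus the current time (the TOTP scheme). A minimal sketch using the pyotp library (an assumption here, installed with pip install pyotp) shows the mechanics:

```python
# Minimal TOTP sketch using pyotp ("pip install pyotp"). The secret is
# generated on the spot for illustration; a real service issues it once
# (usually as a QR code) when you enroll in 2FA.
import pyotp

secret = pyotp.random_base32()   # shared once between the service and your app
totp = pyotp.TOTP(secret)        # six-digit code that rotates every 30 seconds

code = totp.now()                # what the authenticator app displays
print("Current code:", code)
print("Server accepts it:", totp.verify(code))  # the service runs the same math
```

Since both sides compute the code independently, there is no SMS message for an attacker to hijack via SIM swapping or interception.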
If passwords and 2FA feel like a headache, you can use your phone or PC itself as the security factor through a passkey. Passkeys are easy, fast, and simple: you don’t have to remember them, because they are stored on your device. Unlike passwords, passkeys are linked to the device you saved them on, which prevents them from being stolen or misused by hackers. Approving a passkey takes nothing more than a PIN or biometric authentication.
In the past, one quick skim was enough to recognize that something was off with an email, with incorrect grammar and laughable typos being the usual giveaways. Since scammers now use generative AI language models, most phishing messages arrive with flawless grammar.
But there is hope: AI-generated text can still be spotted. Keep an eye out for an unnatural flow of sentences; if everything seems a little too perfect, chances are it’s AI.
Though AI has made phishing scams harder to spot, they still show some classic behavior, and the same detection tips apply.
In most cases, scammers mimic businesses and hope you won’t notice. For instance, instead of the official “info@members.hotstar.com” address, you may see something like “info@members.hotstar-support.com.” Unrequested links or attachments are another huge tell. Mismatched URLs with subtle typos or extra words and letters are harder to notice, but they are a big tip-off that you are on a malicious website or interacting with a fake business.
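As a rough sketch of the URL check described above (the brand domain here is just an example), the test is whether a link’s hostname is the expected registered domain or a genuine subdomain of it, and nothing else:

```python
# Stdlib-only sketch of a lookalike-URL check; "hotstar.com" stands in for
# whatever brand the message claims to come from.
from urllib.parse import urlparse

def matches_brand(url: str, brand_domain: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # True only for the domain itself or a real subdomain of it;
    # "hotstar-support.com" fails because it is a different registered domain.
    return host == brand_domain or host.endswith("." + brand_domain)

print(matches_brand("https://members.hotstar.com/account", "hotstar.com"))          # True
print(matches_brand("https://members.hotstar-support.com/account", "hotstar.com"))  # False
```

The same logic explains the email example: “hotstar-support.com” is a completely separate domain, no matter how familiar the part before it looks.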
The biggest issue these days is combating deepfakes, which are also difficult to spot.
Attackers create realistic video clips from photo and video prompts, then use video-calling apps like Zoom or FaceTime to pressure potential victims, especially senior citizens, into giving away sensitive data.
One might think only the elderly fall for deepfakes, but they have grown so sophisticated that even experts fall prey to them. In one famous incident in Hong Kong, scammers deepfaked a company’s CFO and looted HK$200 million (roughly $25 million).
AI is advancing and growing stronger every day. It is a double-edged sword, and anyone using it should tread the ethical lines carefully and stay alert to its darker side.
One infamous example is the “tech support scam,” where a fake warning tells users their device is infected with malware and that they must call a (fake) support number or install bogus anti-malware software to clean things up. Over the years, users have seen plenty of these Microsoft IT support fraud pop-ups.
Realizing the threat, Microsoft is combating the issue with its new scareware blocker feature in Edge, which it first rolled out at the Ignite conference last November.
Defender SmartScreen, the existing feature that shields Edge users from scams, kicks in only after a malicious site has been caught and added to its index of abusive web pages, at which point users are protected globally.
The new AI-powered Edge scareware blocker by Microsoft “offers extra protection by detecting signs of scareware scams in real-time using a local machine learning model,” says Bleeping Computer.
Describing the feature, Microsoft says, “The blocker adds a new, first line of defense to help protect the users exposed to a new scam if it attempts to open a full-screen page.” “Scareware blocker uses a machine learning model that runs on the local computer,” it adds.
Once the blocker catches a scam page, it informs users and allows them to continue using the webpage if they trust the website.
To activate the blocker, the user first needs to install the Microsoft Edge beta. It installs alongside the main release of Edge, so there is no headache of the two versions commingling. Users on a managed system should make sure previews are enabled by their admin.
"After making sure you have the latest updates, you should see the scareware blocker preview listed under "Privacy Search and Services,'" Microsoft says. Talking about reporting the scam site from users’ end for the blocker to work, Microsoft says it helps them “make the feature more reliable to catch the real scams.
Beyond just blocking individual scam outbreaks” their Digital Crimes Unit “goes even further to target the cybercrime supply chain directly.”
According to the FBI, criminals are increasingly using generative artificial intelligence (AI) to make their fraudulent schemes more convincing. This technology enables fraudsters to produce large amounts of realistic content with minimal time and effort, increasing the scale and sophistication of their operations.
Generative AI systems work by synthesizing new content based on patterns learned from existing data. While creating or distributing synthetic content is not inherently illegal, such tools can be misused for activities like fraud, extortion, and misinformation. The accessibility of generative AI raises concerns about its potential for exploitation.
AI offers significant benefits across industries, including enhanced operational efficiency, regulatory compliance, and advanced analytics. In the financial sector, it has been instrumental in improving product customization and streamlining processes. However, alongside these benefits, vulnerabilities have emerged, including third-party dependencies, market correlations, cyber risks, and concerns about data quality and governance.
The misuse of generative AI poses additional risks to financial markets, such as facilitating financial fraud and spreading false information. Misaligned or poorly calibrated AI models may result in unintended consequences, potentially impacting financial stability. Long-term implications, including shifts in market structures, macroeconomic conditions, and energy consumption, further underscore the importance of responsible AI deployment.
Fraudsters have increasingly turned to generative AI to enhance their schemes, using AI-generated text and media to craft convincing narratives. These include social engineering tactics, spear-phishing, romance scams, and investment frauds. Additionally, AI can generate large volumes of fake social media profiles or deepfake videos, which are used to manipulate victims into divulging sensitive information or transferring funds. Criminals have even employed AI-generated audio to mimic voices, misleading individuals into believing they are interacting with trusted contacts.
In one notable incident reported by the FBI, a North Korean cybercriminal used a deepfake video to secure employment with an AI-focused company, exploiting the position to access sensitive information. Similarly, Russian threat actors have been linked to fake videos aimed at influencing elections. These cases highlight the broad potential for misuse of generative AI across various domains.
To address these challenges, the FBI advises individuals to take several precautions. These include establishing secret codes with trusted contacts to verify identities, minimizing the sharing of personal images or voice data online, and scrutinizing suspicious content. The agency also cautions against transferring funds, purchasing gift cards, or sending cryptocurrency to unknown parties, as these are common tactics employed in scams.
Generative AI tools have been used to improve the quality of phishing messages by reducing grammatical errors and refining language, making scams more convincing. Fraudulent websites have also employed AI-powered chatbots to lure victims into clicking harmful links. To reduce exposure to such threats, individuals are advised to avoid sharing sensitive personal information online or over the phone with unverified sources.
By remaining vigilant and adopting these protective measures, individuals can mitigate their risk of falling victim to fraud schemes enabled by emerging AI technologies.