
Cybercriminals Are Now Focusing More on Stealing Credentials Than Using Ransomware, IBM Warns

 



A new report from IBM’s X-Force 2025 Threat Intelligence Index shows that cybercriminals are changing their tactics. Instead of mainly using ransomware to lock systems, more hackers are now trying to quietly steal login information. IBM studied over 150 billion security events each day from 130+ countries and found that infostealers, a type of malware sent through emails to steal data, rose by 84% in 2024 compared to 2023.

This change means that instead of damaging systems right away, attackers are sneaking into networks to steal passwords and other sensitive information. Mark Hughes, a cybersecurity leader at IBM, said attackers are finding ways into complex cloud systems without making a mess. He also advised businesses to stop relying on basic protection methods. Instead, companies should improve how they manage passwords, fix weaknesses in multi-factor authentication, and actively search for hidden threats before any damage happens.

Critical industries such as energy, healthcare, and transportation were the main targets in the past year. About 70% of the incidents IBM helped handle involved critical infrastructure. In around 25% of these cases, attackers got in by taking advantage of known flaws in systems that had not been fixed. Many hackers now prefer stealing important data instead of locking it with ransomware. Data theft was the method in 18% of cases, while encryption-based attacks made up only 11%.

The study also found that Asia and North America were attacked the most, together making up nearly 60% of global incidents. Asia alone saw 34% of the attacks, and North America had 24%. Manufacturing businesses remained the top industry targeted for the fourth year in a row because even short outages can seriously hurt their operations.

Emerging threats related to artificial intelligence (AI) were also discussed. No major attacks on AI systems happened in 2024, but experts found some early signs of possible risks. For example, a serious security gap was found in a software framework used to create AI agents. As AI technology spreads, hackers are likely to build new tools to attack these systems, making it very important to secure AI pipelines early.

Another major concern is the slow pace of fixing vulnerabilities in many companies. IBM found that many Red Hat Enterprise Linux users had not updated their systems properly, leaving them open to attacks. Also, ransomware groups like Akira, Lockbit, Clop, and RansomHub have evolved to target both Windows and Linux systems.

Lastly, phishing attacks that deliver infostealers increased by 180% in 2024 compared to the year before. Even though ransomware still accounted for 28% of malware cases, the overall number of ransomware incidents fell. Cybercriminals are clearly moving towards quieter methods that focus on stealing identities rather than locking down systems.


Explaining AI's Impact on Ransomware Attacks and Business Security

 

Ransomware has always been an evolving menace, as criminal outfits experiment with new techniques to terrorise their victims and gain maximum leverage while making extortion demands. Weaponized AI is the most recent addition to the armoury, allowing high-level groups to launch more sophisticated attacks but also opening the door for rookie hackers. The NCSC has cautioned that AI is fuelling the global threat posed by ransomware, and there has been a significant rise in AI-powered phishing attacks. 

Organisations are facing increasing threats from sophisticated assaults, such as polymorphic malware, which can mutate in real time to avoid detection and allows attackers to strike with more precision and frequency. As AI continues to rewrite the rules of ransomware attacks, businesses that still rely on traditional defences are more vulnerable to the next generation of cyber attacks. 

Ransomware accessible via AI 

Online criminals, like legitimate businesses, are discovering new ways to use AI tools, which makes ransomware attacks more accessible and scalable. By automating crucial attack procedures, fraudsters can launch faster, more sophisticated operations with less human intervention. 

Established and experienced criminal gangs gain the ability to expand their operations. At the same time, because AI is lowering barriers to entry, people with less technical expertise can now use ransomware as a service (RaaS) to carry out advanced attacks that would ordinarily be beyond their capabilities. 

OpenAI, the company behind ChatGPT, stated that it has detected and blocked more than 20 fraudulent operations misusing its generative AI tools. These ranged from creating copy for targeted phishing operations to writing and debugging malware code. 

FunkSec, a RaaS supplier, is a current example of how these tools are enhancing criminal groups' capabilities. The gang is reported to have only a few members, its human-written code is rather simple, and its communications show a very low level of English proficiency. However, since its inception in late 2024, FunkSec has recorded over 80 victims in a single month, thanks to a variety of AI techniques that allow it to punch well above its weight. 

Investigations have revealed evidence of AI-generated code in the gang's ransomware, as well as website and ransom note text that was apparently created by a Large Language Model (LLM). The group also developed a chatbot to assist with its operations using Miniapps, a generative AI platform. 

Mitigation tips against AI-driven ransomware 

With AI fuelling ransomware groups, organisations must evolve their defences to stay safe. Traditional security measures are no longer sufficient, and organisations must match their fast-moving attackers with adaptive, AI-driven methods of their own. 

One critical step is to investigate how to combat AI with AI. Advanced AI-driven detection and response systems may analyse behavioural patterns in real time, identifying anomalies that traditional signature-based techniques may overlook. This is critical for fighting strategies like polymorphism, which have been expressly designed to circumvent standard detection technologies. Continuous network monitoring provides an additional layer of defence, detecting suspicious activity before ransomware can activate and propagate. 
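To make the idea concrete, the following minimal sketch shows behaviour-based anomaly detection on synthetic telemetry using scikit-learn's IsolationForest; the per-host features and thresholds are illustrative assumptions, not any vendor's actual detection logic.

# Minimal sketch of behaviour-based anomaly detection, assuming network telemetry
# has already been aggregated into per-host numeric features (hypothetical names).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical baseline: [logins_per_hour, MB_uploaded, distinct_ports, failed_auths]
baseline = rng.normal(loc=[5, 20, 3, 1], scale=[2, 5, 1, 1], size=(1000, 4))

detector = IsolationForest(contamination=0.01, random_state=42).fit(baseline)

# New observations: one host that looks like the baseline, one exfiltration-like outlier.
new_events = np.array([
    [6, 22, 3, 0],      # close to baseline behaviour
    [40, 900, 45, 30],  # bursty logins, large upload, many ports, failed auths
])
print(detector.predict(new_events))  # 1 = normal, -1 = anomalous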

Beyond detection, AI-powered solutions are critical for avoiding data exfiltration, as modern ransomware gangs almost always use data theft to squeeze their victims. According to our research, 94% of reported ransomware attacks in 2024 involved exfiltration, highlighting the importance of Anti Data Exfiltration (ADX) solutions as part of a layered security approach. Organisations can prevent extortion efforts by restricting unauthorised data transfers, leaving attackers with no choice but to move on.
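As a rough illustration of the restriction idea (not how any commercial ADX product works), the sketch below enforces a hypothetical per-destination daily egress budget; the limit and field names are assumptions.

# Simplified illustration of an egress control: block outbound transfers that
# exceed a per-destination daily budget. Real ADX products are far more involved;
# the threshold and identifiers here are assumptions.
from collections import defaultdict

DAILY_LIMIT_MB = 500
sent_today_mb = defaultdict(float)

def allow_transfer(destination: str, size_mb: float) -> bool:
    """Return True if the transfer stays within today's budget for this destination."""
    if sent_today_mb[destination] + size_mb > DAILY_LIMIT_MB:
        return False  # would exceed budget: block and flag for review
    sent_today_mb[destination] += size_mb
    return True

print(allow_transfer("unknown-host.example", 120.0))  # True
print(allow_transfer("unknown-host.example", 450.0))  # False, budget exceeded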

Building Smarter AI Through Targeted Training


 

In recent years, artificial intelligence and machine learning have been in high demand across a broad range of industries. As a consequence, the cost and complexity of building and maintaining these models have increased significantly. AI and machine learning systems are resource-intensive: they require substantial computational resources and large datasets, and their complexity makes them difficult to manage effectively. 

As a result of this trend, professionals such as data engineers, machine learning engineers, and data scientists are increasingly being tasked with finding ways to streamline models without compromising performance or accuracy. One key aspect of this process is determining which data inputs or features can be reduced or eliminated so that the model operates more efficiently. 

AI model optimization is a systematic effort to improve a model's performance, accuracy, and efficiency so that it achieves better results in real-world applications. The purpose of this process is to improve a model's operational and predictive capabilities through a combination of technical strategies. It is the engineering team's responsibility to improve computational efficiency, reducing processing time, resource consumption, and infrastructure costs, while also enhancing the model's predictive precision and its adaptability to changing datasets. 

Typical optimization tasks include fine-tuning hyperparameters, selecting the most relevant features, pruning redundant elements, and making advanced algorithmic adjustments to the model. Ultimately, the goal is a model that is not only accurate and responsive but also scalable, cost-effective, and efficient. Applied effectively, these optimization techniques ensure the model performs reliably in production environments and remains aligned with the organization's overall objectives. 
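As a concrete, hedged example of two of these tasks, the sketch below combines feature selection and hyperparameter search with scikit-learn on a synthetic dataset; the parameter grid and dataset are purely illustrative.

# Minimal sketch of two optimization tasks mentioned above: feature selection
# and hyperparameter tuning, run on a synthetic dataset (all values illustrative).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=2000, n_features=40, n_informative=8, random_state=0)

pipeline = Pipeline([
    ("select", SelectKBest(score_func=f_classif)),  # drop uninformative inputs
    ("clf", LogisticRegression(max_iter=1000)),
])

search = GridSearchCV(
    pipeline,
    param_grid={"select__k": [5, 10, 20], "clf__C": [0.1, 1.0, 10.0]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))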

ChatGPT's memory feature, which is typically active by default, is designed to retain important details, user preferences, and conversational context so that the system can provide more personalized responses over time. Users who want to manage this functionality can navigate to the Settings menu and select Personalization, where they can check whether memory is active and remove specific saved interactions if needed. 

Because of this, users are advised to periodically review the data stored by the memory feature to ensure its accuracy. In some cases, incorrect information may be retained, such as inaccurate personal details or assumptions made during a previous conversation. For example, the system might incorrectly log information about a user's family or other aspects of their profile based on the context in which it was mentioned. 

In addition, the memory feature may inadvertently store sensitive data shared for practical purposes, such as banking details, account information, or health-related queries, especially if users are attempting to solve personal problems or experiment with the model. It is important to remember that while the memory function contributes to improved response quality and continuity, it also requires careful oversight from the user. Users are strongly advised to audit their saved data points routinely and delete anything they find inaccurate or overly sensitive. This practice helps maintain data accuracy and ensures better, more secure interactions. 

The habit is similar to periodically clearing your browser's cache to protect privacy and keep performance optimal. "Training" ChatGPT for customized usage means providing specific contextual information to the AI so that its responses become more relevant and accurate for the individual. To guide the AI to behave and speak in a way consistent with their needs, users can upload documents such as PDFs, company policies, or customer service transcripts. 

This type of customization is especially useful for people and organizations tailoring interactions around business-related content and customer engagement workflows. For personal use, however, building a custom GPT is usually unnecessary; users can instead share relevant context directly within their prompts or attach files to their messages, achieving effective personalization. 

For example, when crafting a job application, a user can upload their resume along with the job description, allowing the AI to draft a cover letter that accurately represents the user's qualifications and aligns with the position's requirements. This type of user-level customization is significantly different from the traditional model training process, which requires processing large quantities of data and is mainly performed by OpenAI's engineering teams. 
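A minimal sketch of this prompt-level personalization, assuming the OpenAI Python client and locally stored resume and job-description text files (the file paths and model name are illustrative assumptions):

# Hedged sketch: personalizing a response by passing context in the prompt itself,
# rather than training a model. File paths and model name are assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

resume = Path("resume.txt").read_text()
job_description = Path("job_description.txt").read_text()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You write concise, tailored cover letters."},
        {"role": "user", "content": (
            "Write a cover letter for the role below using my resume.\n\n"
            f"Job description:\n{job_description}\n\nResume:\n{resume}"
        )},
    ],
)
print(response.choices[0].message.content)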

Additionally, users can extend ChatGPT's memory-driven personalization by explicitly telling it what details they wish to be remembered, such as a recent move to a new city or specific lifestyle preferences like dietary choices. Once stored, this information allows the AI to keep conversations consistent in the future. While these interactions enhance usability, they also require thoughtful data sharing to protect privacy and accuracy, especially as ChatGPT's memory gradually accumulates over time. 

Optimizing an AI model is essential for improving both performance and resource efficiency. It involves refining a variety of model elements to maximize prediction accuracy while minimizing computational demand. Key steps include removing unused parameters to streamline networks, applying quantization to reduce data precision and speed up processing, and implementing knowledge distillation, which transfers insights from complex models to simpler, faster ones. 
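For illustration, the following sketch applies two of the techniques named above, unstructured pruning and post-training dynamic quantization, to a toy PyTorch network; the layer sizes and pruning ratio are arbitrary assumptions.

# Minimal sketch: prune 30% of the smallest-magnitude weights in a linear layer,
# then quantize linear layers to int8 for faster CPU inference. Toy model only.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Unstructured L1 pruning of the first linear layer's weights.
prune.l1_unstructured(model[0], name="weight", amount=0.3)
prune.remove(model[0], "weight")  # make the pruning permanent

# Post-training dynamic quantization of all linear layers.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10])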

Significant efficiency gains can also come from optimizing data pipelines, deploying high-performance algorithms, using hardware accelerators such as GPUs and TPUs, and employing compression techniques such as weight sharing and low-rank approximation. Balancing batch sizes likewise helps ensure optimal resource use and stable training. 

Accuracy can be improved by curating clean, balanced datasets, fine-tuning hyperparameters with advanced search methods, increasing model complexity with caution, and combining techniques such as cross-validation and feature engineering. Maintaining long-term performance requires not only leveraging pre-trained models but also regular retraining to combat model drift. These techniques are applied strategically to enhance the scalability, cost-effectiveness, and reliability of AI systems across diverse applications. 

Using tailored optimization solutions from Oyelabs, organizations can unlock the full potential of their AI investments. In an age when artificial intelligence continues to evolve rapidly, it is increasingly important to train and optimize models strategically through data-driven optimization. Organizations can implement advanced techniques to improve performance while controlling resource expenditure, from feature selection and algorithm optimization to efficient data handling. 

Professionals and teams that place a high priority on these improvements will be in a much better position to create AI systems that are not only faster and smarter but also more adaptable to real-world demands. By partnering with experts and focusing on how AI achieves value-driven outcomes, businesses can deepen their understanding of AI and improve their scalability and long-term sustainability.

Fake Candidates, Real Threat: Deepfake Job Applicants Are the New Cybersecurity Challenge

 

When voice authentication firm Pindrop Security advertised an opening for a senior engineering role, one resume caught their attention. The candidate, a Russian developer named Ivan, appeared to be a perfect fit on paper. But during the video interview, something felt off—his facial expressions didn’t quite match his speech. It turned out Ivan wasn’t who he claimed to be.

According to Vijay Balasubramaniyan, CEO and co-founder of Pindrop, Ivan was a fraudster using deepfake software and other generative AI tools in an attempt to secure a job through deception.

“Gen AI has blurred the line between what it is to be human and what it means to be machine,” Balasubramaniyan said. “What we’re seeing is that individuals are using these fake identities and fake faces and fake voices to secure employment, even sometimes going so far as doing a face swap with another individual who shows up for the job.”

While businesses have always had to protect themselves against hackers targeting vulnerabilities, a new kind of threat has emerged: job applicants powered by AI who fake their identities to gain employment. From forged resumes and AI-generated IDs to scripted interview responses, these candidates are part of a fast-growing trend that cybersecurity experts warn is here to stay.

In fact, a Gartner report predicts that by 2028, 1 in 4 job seekers globally will be using some form of AI-generated deception.

The implications for employers are serious. Fraudulent hires can introduce malware, exfiltrate confidential data, or simply draw salaries under false pretenses.

A Growing Cybercrime Strategy

This problem is especially acute in cybersecurity and crypto startups, where remote hiring makes it easier for scammers to operate undetected. Ben Sesser, CEO of BrightHire, noted a massive uptick in these incidents over the past year.

“Humans are generally the weak link in cybersecurity, and the hiring process is an inherently human process with a lot of hand-offs and a lot of different people involved,” Sesser said. “It’s become a weak point that folks are trying to expose.”

This isn’t a problem confined to startups. Earlier this year, the U.S. Department of Justice disclosed that over 300 American companies had unknowingly hired IT workers tied to North Korea. The impersonators used stolen identities, operated via remote networks, and allegedly funneled salaries back to fund the country’s weapons program.

Criminal Networks & AI-Enhanced Resumes

Lili Infante, founder and CEO of Florida-based CAT Labs, says her firm regularly receives applications from suspected North Korean agents.

“Every time we list a job posting, we get 100 North Korean spies applying to it,” Infante said. “When you look at their resumes, they look amazing; they use all the keywords for what we’re looking for.”

To filter out such applicants, CAT Labs relies on ID verification companies like iDenfy, Jumio, and Socure, which specialize in detecting deepfakes and verifying authenticity.

The issue has expanded far beyond North Korea. Experts like Roger Grimes, a longtime computer security consultant, report similar patterns with fake candidates originating from Russia, China, Malaysia, and South Korea.

Ironically, some of these impersonators end up excelling in their roles.

“Sometimes they’ll do the role poorly, and then sometimes they perform it so well that I’ve actually had a few people tell me they were sorry they had to let them go,” Grimes said.

Even KnowBe4, the cybersecurity firm Grimes works with, accidentally hired a deepfake engineer from North Korea who used AI to modify a stock photo and passed through multiple background checks. The deception was uncovered only after suspicious network activity was flagged.

What Lies Ahead

Despite a few high-profile incidents, most hiring teams still aren’t fully aware of the risks posed by deepfake job applicants.

“They’re responsible for talent strategy and other important things, but being on the front lines of security has historically not been one of them,” said BrightHire’s Sesser. “Folks think they’re not experiencing it, but I think it’s probably more likely that they’re just not realizing that it’s going on.”

As deepfake tools become increasingly realistic, experts believe the problem will grow harder to detect. Fortunately, companies like Pindrop are already developing video authentication systems to fight back. It was one such system that ultimately exposed “Ivan X.”

Although Ivan claimed to be in western Ukraine, his IP address revealed he was operating from a Russian military base near North Korea, according to the company.

Pindrop, backed by Andreessen Horowitz and Citi Ventures, originally focused on detecting voice-based fraud. Today, it may be pivoting toward defending video and digital hiring interactions.

“We are no longer able to trust our eyes and ears,” Balasubramaniyan said. “Without technology, you’re worse off than a monkey with a random coin toss.”

Payment Fraud on the Rise: How Businesses Are Fighting Back with AI

The threat of payment fraud is growing rapidly, fueled by the widespread use of digital transactions and evolving cyber tactics. At its core, payment fraud refers to the unauthorized use of someone’s financial information to make illicit transactions. Criminals are increasingly leveraging hardware tools like skimmers and keystroke loggers, as well as malware, to extract sensitive data during legitimate transactions. 

As a result, companies are under mounting pressure to adopt more advanced fraud prevention systems. Credit and debit card fraud continues to dominate fraud cases globally: a recent Nilson Report found that global losses due to payment card fraud reached $33.83 billion in 2023, with nearly half of these losses affecting U.S. cardholders. 

While chip-enabled cards have reduced in-person fraud, online or card-not-present (CNP) fraud has surged. Debit card fraud often results in immediate financial damage to the victim, given its direct link to bank accounts. Meanwhile, mobile payments are vulnerable to tactics like SIM swapping and mobile malware, allowing attackers to hijack user accounts. 

Other methods include wire fraud, identity theft, chargeback fraud, and even check fraud—which, despite a decline in paper check usage, remains a threat through forged or altered checks. In one recent case, customers manipulated ATM systems to deposit fake checks and withdraw funds before detection, resulting in substantial bank losses. Additionally, criminals have turned to synthetic identity creation and AI-generated impersonations to carry out sophisticated schemes.  

However, artificial intelligence is not just a tool for fraudsters—it’s also a powerful ally for defense. Financial institutions are integrating AI into their fraud detection systems. Platforms like Visa Advanced Authorization and Mastercard Decision Intelligence use real-time analytics and machine learning to assess transaction risk and flag suspicious behavior. 
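The sketch below illustrates the general idea of machine-learning-based transaction risk scoring on synthetic data; the features, labels, and model choice are assumptions for demonstration and do not reflect any vendor's actual system.

# Illustrative sketch of ML-based transaction risk scoring on synthetic data.
# Feature names, label rule, and thresholds are assumptions, not a real product.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000
# Hypothetical features: [amount_usd, hour_of_day, is_new_device, distance_from_home_km]
X = np.column_stack([
    rng.exponential(80, n),
    rng.integers(0, 24, n),
    rng.integers(0, 2, n),
    rng.exponential(30, n),
])
# Synthetic label: fraud more likely for larger amounts on a new device or far from home.
y = ((X[:, 0] > 150) & ((X[:, 2] == 1) | (X[:, 3] > 60))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)
model = GradientBoostingClassifier().fit(X_train, y_train)

risk = model.predict_proba(X_test[:3])[:, 1]  # probability each transaction is fraudulent
print(np.round(risk, 3))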

AI-driven firms such as Signifyd and Riskified help businesses prevent fraud by analyzing user behavior, transaction patterns, and device data. The consequences of payment fraud extend beyond financial loss. Businesses also suffer reputational harm, resource strain, and operational disruptions. 

With nearly 60% of companies reporting fraud-related losses exceeding $5 million in 2024, preventive action is crucial. From employee training and risk assessments to AI-powered tools and multi-layered security, organizations are now investing in proactive strategies to protect themselves and their customers from the rising tide of digital fraud.

AI Powers Airbnb’s Code Migration, But Human Oversight Still Key, Say Tech Giants

 

In a bold demonstration of AI’s growing role in software development, Airbnb has successfully completed a large-scale code migration project using large language models (LLMs), dramatically reducing the timeline from an estimated 1.5 years to just six weeks. The project involved updating approximately 3,500 React component test files from Enzyme to the more modern React Testing Library (RTL). 

According to Airbnb software engineer Charles Covey-Brandt, the company’s AI-driven pipeline used a combination of automated validation steps and frontier LLMs to handle the bulk of the transformation. Impressively, 75% of the files were migrated within just four hours, thanks to robust automation and intelligent retries powered by dynamic prompt engineering with context-rich inputs of up to 100,000 tokens. 
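The general pattern, validate, retry with a richer prompt, and fall back to manual fixes, can be sketched as below; call_llm and run_tests are hypothetical stand-ins rather than Airbnb's actual tooling.

# Hedged sketch of a validate-and-retry migration loop in the spirit of the pipeline
# described above. call_llm() and run_tests() are hypothetical stand-ins.
from pathlib import Path

MAX_ATTEMPTS = 4

def call_llm(prompt: str) -> str:
    """Stand-in for a frontier LLM call that returns a rewritten test file."""
    raise NotImplementedError

def run_tests(file_path: Path) -> tuple[bool, str]:
    """Stand-in for running the migrated test file; returns (passed, error_log)."""
    raise NotImplementedError

def migrate_file(file_path: Path, extra_context: str = "") -> bool:
    source = file_path.read_text()
    error_log = ""
    for attempt in range(MAX_ATTEMPTS):
        # On each retry, enrich the prompt with the previous failure output
        # (a simple form of dynamic prompt engineering).
        prompt = (
            "Rewrite this Enzyme test to React Testing Library.\n"
            f"{extra_context}\n"
            f"Previous errors:\n{error_log}\n"
            f"Source:\n{source}"
        )
        candidate = call_llm(prompt)
        file_path.write_text(candidate)
        passed, error_log = run_tests(file_path)
        if passed:
            return True
    return False  # leave for manual follow-up, as with the final hand-fixed files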

Despite this efficiency, about 900 files initially failed validation. Airbnb employed iterative tools and a status-tracking system to bring that number down to fewer than 100, which were finally resolved manually—underscoring the continued need for human intervention in such processes. Other tech giants echo this hybrid approach. Google, in a recent report, noted a 50% speed increase in migrating codebases using LLMs. 

One project converting ID types in the Google Ads system—originally estimated to take hundreds of engineering years—was largely automated, with 80% of code changes authored by AI. However, inaccuracies still required manual edits, prompting Google to invest further in AI-powered verification. Amazon Web Services also highlighted the importance of human-AI collaboration in code migration. 

Its research into modernizing Java code using Amazon Q revealed that developers value control and remain cautious of AI outputs. Participants emphasized their role as reviewers, citing concerns about incorrect or misleading changes. While AI is accelerating what were once laborious coding tasks, these case studies reveal that full autonomy remains out of reach. 

Engineers continue to act as crucial gatekeepers, validating and refining AI-generated code. For now, the future of code migration lies in intelligent partnerships—where LLMs do the heavy lifting and humans ensure precision.

600 Phishing Campaigns Emerged After Bybit Heist, Biggest Crypto Scam in History


Recently, the cryptocurrency sector suffered the largest cyberattack to date. The Bybit exchange was hit by the "largest cryptocurrency heist in history, with approximately $1.5 billion in Ethereum tokens stolen in a matter of hours," Forbes said.

After the Bybit hack, phishing campaigns steal crypto

Security vendor BforeAI said around 600 phishing campaigns surfaced after the Bybit heist, intended to steal cryptocurrency from the exchange's customers. In the three weeks after news of the biggest crypto scam in history broke, BforeAI found 596 suspicious domains from 13 different countries. 

Dozens of these malicious domains mimicked the cryptocurrency exchange itself (Bybit), most using typosquatting techniques and keywords like "wallet," "refund," "information," "recovery," and "check." 

According to BforeAI, there were also “instances of popular crypto keywords such as ‘metaconnect,’ ‘mining,’ and ‘airdrop,’ as well as the use of free hosting and subdomain registration services such as Netlify, Vercel, and Pages.dev.” 
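A simplified sketch of how such lookalike domains might be flagged by brand similarity and keyword matching follows; the thresholds and example domains are illustrative, not BforeAI's methodology.

# Simplified sketch: flag suspicious lookalike domains by keyword and string similarity.
from difflib import SequenceMatcher

BRAND = "bybit"
KEYWORDS = {"wallet", "refund", "information", "recovery", "check", "airdrop", "mining"}

def is_suspicious(domain: str) -> bool:
    label = domain.lower().split(".")[0]
    similarity = SequenceMatcher(None, label, BRAND).ratio()
    has_keyword = any(k in label for k in KEYWORDS)
    return BRAND in label or similarity >= 0.8 or has_keyword

for d in ["bybit-refund.example", "byb1t.example", "wallet-check.example", "news.example"]:
    print(d, is_suspicious(d))  # last domain prints False, the rest True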

Malicious free domains used for attacks

The use of free hosting and subdomain registration services is a common practice in this dataset. Many phishing pages are hosted on platforms that offer anonymous, quick deployment without requiring a domain purchase. The highest number of verified malicious domains were registered in the UK.

After the incident, Bybit assured customers that they would not lose any money as a result. But the hackers took advantage of the situation, intentionally creating a sense of anxiety and urgency through deceptive tactics like fake recovery services and phishing schemes. A few phishing websites pretended to be the "Bybit Help Center."

The end goal was to make victims enter their crypto or Bybit passwords. A few weeks later, the campaigns shifted from "withdrawals, information, and refunds" on spoofed Bybit sites to offering "crypto and training guides" and special rewards to lure potential investors. 

Despite the shift to crypto and training guides, the campaigns preserved a "connection to the earlier withdrawal scams by including 'how to withdraw from Bybit' guides," BforeAI explained. This results in "a flow of traffic between learning resources fakes and withdrawal phishing attempts," it added.

Bybit has accused North Korean hackers of being behind the attack, which cost the firm a massive $1.5 billion in stolen crypto. The heist helped give Q1 2025 an infamous record: $1.7 billion stolen in the first quarter, the highest figure in history.

Alibaba Launches Latest Open-source AI Model from Qwen Series for ‘Cost-effective AI agents’


Last week, Alibaba Cloud launched the latest AI model in its Qwen series, as large language model (LLM) competition in China continues to intensify after the launch of the famous DeepSeek models.

The latest "Qwen2.5-Omni-7B" is a multimodal model- it can process inputs like audio/video, text, and images- while also creating real-time text and natural speech responses, Alibaba’s cloud website reports. It also said that the model can be used on edge devices such as smartphones, providing higher efficiency without giving up on performance. 

According to Alibaba, the “unique combination makes it the perfect foundation for developing agile, cost-effective AI agents that deliver tangible value, especially intelligent voice applications.” For instance, the AI can be used to assist visually impaired individuals to navigate their environment via real-time audio description. 

The latest model is open-sourced on the platforms GitHub and Hugging Face, following a rising trend of open-source releases in China after DeepSeek's breakthrough R1 model. Open source means software whose source code is made freely available on the web for modification and redistribution. 
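For readers who want to try the open weights, a minimal sketch using the huggingface_hub client is shown below; the repository ID is inferred from the model name in this article and should be verified against the actual Hugging Face listing.

# Hedged sketch: fetching an open-weight release from Hugging Face.
# The repo ID below is assumed from the model name; check the real listing first.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Qwen/Qwen2.5-Omni-7B")
print("Model files downloaded to:", local_dir)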

Alibaba says it has open-sourced more than 200 generative AI models in recent years. Amid the intensified attention on China's AI capabilities sparked by DeepSeek's shoestring budget and strong performance, Alibaba and its generative AI competitors have been releasing new, cost-cutting models and services at an exceptional pace.

Last week, Chinese tech mammoth Baidu launched a new multimodal foundational model and its first reasoning-based model. Likewise, Alibaba introduced its updated Qwen 2.5 AI model in January and also launched a new variant of its AI assistant tool Quark this month. 

Alibaba has also made strong commitments to its AI plan: it recently announced that it will invest $53 billion in cloud computing and AI infrastructure over the next three years, surpassing its spending in the space over the past decade. 

Kai Wang, a senior Asia equity analyst at Morningstar, told CNBC that "large Chinese tech players such as Alibaba, which build data centers to meet the computing needs of AI in addition to building their own LLMs, are well positioned to benefit from China's post-DeepSeek AI boom." According to CNBC, "Alibaba secured a major win for its AI business last month when it confirmed that the company was partnering with Apple to roll out AI integration for iPhones sold in China."