A promotional campaign at South Korean cryptocurrency exchange Bithumb turned into a large-scale operational incident after a data entry mistake resulted in users receiving bitcoin instead of a small cash-equivalent reward.
Initial reports suggested that certain customers were meant to receive 2,000 Korean won as part of a routine promotional payout. Instead, those accounts were credited with 2,000 bitcoin each. At current market valuations, 2,000 bitcoin represents roughly $140 million per account, transforming what should have been a minor incentive into an extraordinary allocation.
Bithumb later confirmed that the scope of the error was larger than early estimates. According to the exchange, a total of 620,000 bitcoin was mistakenly credited to 695 user accounts. Based on prevailing prices at the time of the incident, that amount corresponded to approximately $43 billion in value. The exchange stated that the issue stemmed from an internal processing mistake and was not connected to external hacking activity or a breach of its security infrastructure. It emphasized that customer asset custody systems were not compromised.
The sudden appearance of large bitcoin balances had an immediate effect on trading activity within the platform. Bithumb reported that the incident contributed to a temporary decline of about 10 percent in bitcoin’s price on its exchange, as some affected users rapidly sold the credited assets. To contain further disruption, the company restricted withdrawals and suspended certain transactions linked to the impacted accounts. It stated that 99.7 percent of the mistakenly issued bitcoin has since been recovered.
The event has revived discussion around the concept often described as “paper bitcoin.” On centralized exchanges, user balances are reflected in internal ledgers rather than always corresponding to coins held in individual blockchain wallets. In practice, exchanges may not maintain a one-to-one on-chain reserve for every displayed balance at every moment. This structural model has previously drawn criticism, most notably during the collapse of Mt. Gox in 2014, which was then the largest bitcoin exchange globally. Its failure exposed major discrepancies between reported and actual holdings.
Data from blockchain analytics firm Arkham Intelligence indicates that Bithumb currently controls digital assets worth approximately $5.3 billion. That figure is substantially lower than the $43 billion temporarily reflected in the erroneous credits, underscoring that the allocation existed within internal accounting records rather than as newly transferred blockchain assets.
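The internal-ledger model described above can be sketched in a few lines. The class and figures below are a hypothetical illustration, not Bithumb's actual system: the point is that crediting a displayed balance is a database update, with nothing inherently tying it to coins held on-chain.

```python
# Hypothetical sketch (not Bithumb's actual system): an exchange's internal
# ledger versus its on-chain reserve. Crediting a user's displayed balance
# only updates an internal record; no coins move on any blockchain.

class ExchangeLedger:
    def __init__(self, on_chain_reserve_btc: float):
        # Coins the exchange actually holds in blockchain wallets.
        self.on_chain_reserve_btc = on_chain_reserve_btc
        # user_id -> displayed BTC balance (an internal accounting entry).
        self.balances: dict[str, float] = {}

    def credit(self, user_id: str, amount_btc: float) -> None:
        # Nothing here validates the credit against the reserve -- the kind
        # of missing safeguard observers asked about after the incident.
        self.balances[user_id] = self.balances.get(user_id, 0.0) + amount_btc

    def total_displayed(self) -> float:
        return sum(self.balances.values())

ledger = ExchangeLedger(on_chain_reserve_btc=100.0)
ledger.credit("alice", 2_000)  # intended: a ~2,000 KRW reward; entered: 2,000 BTC
print(ledger.total_displayed() > ledger.on_chain_reserve_btc)  # True: displayed "paper" balance exceeds the reserve
```

Because the credit is purely an accounting entry, the displayed total can exceed actual holdings by any amount until a reconciliation check catches it.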
Observers on social media platform X questioned how such a large discrepancy could occur without automated safeguards preventing the issuance. Bithumb has faced security challenges in the past. In 2017, an employee’s device was compromised, exposing customer data later used in phishing attempts. In 2018, around $30 million in cryptocurrency was stolen in an attack attributed to the Lazarus Group, an organization widely linked to North Korea. A further breach in 2019 resulted in losses of roughly $20 million and was initially suspected to involve insider participation. In each instance, Bithumb stated that it compensated affected users for lost funds, though earlier incidents included exposure of personal information.
Beyond cybersecurity events, the exchange has also been subject to regulatory scrutiny, including investigations related to alleged fraud, embezzlement, and promotional practices. Reports indicate it was again raided this week over concerns involving misleading advertising.
Bithumb maintains that no customer ultimately suffered a net financial loss from the recent error, though the price movement raised concerns about potential liquidations for leveraged traders. A comparable situation occurred at decentralized exchange Paradex, which reversed trades following a pricing malfunction.
The incident unfolds amid broader market strain, with digital asset prices sharply below their October peaks and political debate intensifying around cryptocurrency-linked business interests connected to U.S. public figures. Recent disclosures from the U.S. Department of Justice concerning Jeffrey Epstein’s early involvement in cryptocurrency ventures have further fueled online speculation and conspiracy narratives across social platforms.
Cybersecurity investigators at Google have confirmed that state-sponsored hacking groups are actively relying on generative artificial intelligence to improve how they research targets, prepare cyber campaigns, and develop malicious tools. According to the company’s threat intelligence teams, North Korea–linked attackers were observed using the firm’s AI platform, Gemini, to collect and summarize publicly available information about organizations and employees they intended to target. This type of intelligence gathering allows attackers to better understand who works at sensitive companies, what technical roles exist, and how to approach victims in a convincing way.
Investigators explained that the attackers searched for details about leading cybersecurity and defense companies, along with information about specific job positions and salary ranges. These insights help threat actors craft more realistic fake identities and messages, often impersonating recruiters or professionals to gain the trust of their targets. Security experts warned that this activity closely resembles legitimate professional research, which makes it harder for defenders to distinguish normal online behavior from hostile preparation.
The hacking group involved, tracked as UNC2970, is linked to North Korea and overlaps with a network widely known as Lazarus Group. This group has previously run a long-term operation in which attackers pretended to offer job opportunities to professionals in aerospace, defense, and energy companies, only to deliver malware instead. Researchers say this group continues to focus heavily on defense-related targets and regularly impersonates corporate recruiters to begin contact with victims.
The misuse of AI is not limited to one actor. Multiple hacking groups connected to China and Iran were also found using AI tools to support different phases of their operations. Some groups used AI to gather targeted intelligence, including collecting email addresses and account details. Others relied on AI to analyze software weaknesses, prepare technical testing plans, interpret documentation from open-source tools, and debug exploit code. Certain actors used AI to build scanning tools and malicious web shells, while others created fake online identities to manipulate individuals into interacting with them. In several cases, attackers claimed to be security researchers or competition participants in order to bypass safety restrictions built into AI systems.
Researchers also identified malware that directly communicates with AI services to generate harmful code during an attack. One such tool, HONESTCUE, requests programming instructions from AI platforms and receives source code that is used to build additional malicious components on the victim’s system. Instead of storing files on disk, this malware compiles and runs code directly in memory using legitimate system tools, making detection and forensic analysis more difficult. Separately, investigators uncovered phishing kits designed to look like cryptocurrency exchanges. These fake platforms were built using automated website creation tools from Lovable AI and were used to trick victims into handing over login credentials. Parts of this activity were linked to a financially motivated group known as UNC5356.
Security teams also reported an increase in so-called ClickFix campaigns. In these schemes, attackers use public sharing features on AI platforms to publish convincing step-by-step guides that appear to fix common computer problems. In reality, these instructions lead users to install malware that steals personal and financial data. This trend was first flagged in late 2025 by Huntress.
Another growing threat involves model extraction attacks. In these cases, adversaries repeatedly query proprietary AI systems in order to observe how they respond and then train their own models to imitate the same behavior. In one large campaign, attackers sent more than 100,000 prompts to replicate how an AI model reasons across many tasks in different languages. Researchers at Praetorian demonstrated that a functional replica could be built using a relatively small number of queries and limited training time. Experts warned that keeping AI model parameters secret is not enough, because every response an AI system provides can be used as training data for attackers.
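The extraction idea can be shown in miniature. In the toy sketch below, a simple linear function stands in for the proprietary model; the attacker queries it as a black box and fits a replica from the responses alone. The function and numbers are illustrative, not taken from the reported campaign.

```python
# Illustrative sketch of model extraction: distillation from query access alone.
# A linear function stands in for the proprietary model; real attacks target
# LLM APIs with hundreds of thousands of prompts, but the principle is the
# same: every (input, output) pair the service returns is free training data.

def proprietary_model(x: float) -> float:
    # Secret parameters the attacker never sees directly.
    return 3.7 * x + 1.2

# 1. Query the black box at many points (the "100,000 prompts" step, in miniature).
queries = [i / 10 for i in range(-50, 51)]
answers = [proprietary_model(x) for x in queries]

# 2. Fit a replica to the observed responses (ordinary least squares).
n = len(queries)
mean_x = sum(queries) / n
mean_y = sum(answers) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(queries, answers)) / \
        sum((x - mean_x) ** 2 for x in queries)
intercept = mean_y - slope * mean_x

# 3. The replica now imitates the secret model without ever seeing its parameters.
def replica(x: float) -> float:
    return slope * x + intercept
```

This is why researchers argue that secrecy of model weights is insufficient: the replica recovers the hidden behavior purely from the responses the service was designed to give out.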
Google, which launched its AI Cyber Defense Initiative in 2024, stated that artificial intelligence is increasingly amplifying the capabilities of cybercriminals by improving their efficiency and speed. Company representatives cautioned that as attackers integrate AI into routine operations, the volume and sophistication of attacks will continue to rise. Security specialists argue that defenders must adopt similar AI-powered tools to automate threat detection, accelerate response times, and operate at the same machine-level speed as modern attacks.
More than six thousand SmarterMail systems remain reachable online and potentially exposed to a serious login vulnerability, according to the nonprofit cybersecurity group Shadowserver. The finding comes as attackers increasingly target outdated corporate mail deployments left unprotected.
According to Microsoft's disclosure of a multi-stage attack, the threat actors used internet-exposed SolarWinds Web Help Desk (WHD) instances to gain initial access and then moved laterally across the organization's network to other high-value assets.
However, it is unclear whether the activity exploited a previously patched vulnerability (CVE-2025-26399, CVSS score: 9.8) or the recently disclosed vulnerabilities (CVE-2025-40551, CVSS score: 9.8, and CVE-2025-40536, CVSS score: 8.1), according to the Microsoft Defender Security Research Team.
"Since the attacks occurred in December 2025 and on machines vulnerable to both the old and new set of CVEs at the same time, we cannot reliably confirm the exact CVE used to gain an initial foothold," the company said in the report.
CVE-2025-40551 and CVE-2025-26399 are both untrusted-data deserialization vulnerabilities that could result in remote code execution, while CVE-2025-40536 is a security control bypass that could allow an unauthenticated attacker to access restricted functionality.
Citing evidence of active exploitation in the wild, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) added CVE-2025-40551 to its Known Exploited Vulnerabilities (KEV) catalog last week. Federal Civilian Executive Branch (FCEB) agencies were required to apply fixes for the flaw by February 6, 2026.
In the attacks Microsoft uncovered, successful exploitation of the exposed SolarWinds WHD instance allowed the threat actors to achieve unauthenticated remote code execution and run arbitrary commands within the WHD application environment.
Microsoft said that in at least one instance, the threat actors carried out a DCSync attack, impersonating a Domain Controller (DC) to request password hashes and other sensitive data from the Active Directory (AD) database.
To mitigate the threat, users are advised to update WHD instances, identify and remove any unauthorized RMM tools, rotate admin and service account credentials, and isolate vulnerable workstations to contain any breach.
"This activity reflects a common but high-impact pattern: a single exposed application can provide a path to full domain compromise when vulnerabilities are unpatched or insufficiently monitored," Microsoft stated.
Two students affiliated with Stanford University have raised $2 million to expand an accelerator program designed for entrepreneurs who are still in college or who have recently graduated. The initiative, called Breakthrough Ventures, focuses on helping early-stage founders move from rough ideas to viable businesses by providing capital, guidance, and access to professional networks.
The program was created by Roman Scott, a recent graduate, and Itbaan Nafi, a current master’s student. Their work began with small-scale demo days held at Stanford in 2024, where student teams presented early concepts and received feedback. Interest from participants and observers revealed a clear gap. Many students had promising ideas but lacked practical support, legal guidance, and introductions to investors. The founders then formalized the effort into a structured accelerator and raised funding to scale it.
Breakthrough Ventures aims to address two common obstacles faced by student founders. First, early funding is difficult to access before a product or revenue exists. Second, students often do not have reliable access to mentors and industry networks. The program responds to both challenges through a combination of financial support and hands-on assistance.
Selected teams receive grant funding of up to $10,000 without giving up ownership in their companies. Participants also gain access to legal support and structured mentorship from experienced professionals. The program includes technical resources such as compute credits from technology partners, which can lower early development costs for startups building software or data-driven products. At the end of the program, founders who demonstrate progress may be considered for additional investment of up to $50,000.
The accelerator operates through a hybrid format. Founders participate in a mix of online sessions and in-person meetups, and the program concludes with a demo day at Stanford, where teams present their progress to potential investors and collaborators. This structure is intended to keep participation accessible while still offering in-person exposure to the startup ecosystem.
Over the next three years, the organizers plan to deploy the $2 million fund to support at least 100 student-led companies across areas such as artificial intelligence, healthcare, consumer products, sustainability, and deep technology. By targeting founders at an early stage, the program aims to reduce the friction between having an idea and building a credible company, while promoting responsible, well-supported innovation within the student community.