As artificial intelligence becomes embedded in daily business functions, concerns are growing over whether the workforce is adequately prepared to manage its risks and responsibilities. EC-Council has announced the launch of four new AI-focused certifications along with an updated Certified CISO v4 program, marking the largest single expansion in the organization’s 25-year history.
The rollout comes amid projections that unmanaged AI-related vulnerabilities could expose the global economy to as much as $5.5 trillion in risk, according to industry estimates attributed to IDC. At the same time, analysis from Bain & Company suggests that approximately 700,000 workers in the United States will require reskilling in AI and cybersecurity disciplines to meet rising demand.
Global institutions including the International Monetary Fund and the World Economic Forum have identified workforce capability as a primary constraint on AI-driven productivity, arguing that the barrier is no longer access to technology but access to trained professionals.
Security threats are escalating in parallel with adoption. Reports indicate that 87 percent of organizations have encountered AI-enabled cyberattacks. Additionally, generative AI-related network traffic has increased by 890 percent, significantly expanding potential attack surfaces. Emerging risks include prompt injection attacks, data poisoning, manipulation of machine learning models, and compromise of AI supply chains.
The new Enterprise AI Credential Suite is structured around EC-Council’s operational framework described as Adopt, Defend, and Govern. The “Adopt” pillar emphasizes structured and safeguarded AI deployment. “Defend” focuses on protecting AI systems from evolving threats. “Govern” integrates oversight, accountability, and risk management mechanisms into AI systems from the design stage.
Artificial Intelligence Essentials serves as the foundational certification, aimed at building practical literacy and responsible AI usage across professional roles. The Certified AI Program Manager credential prepares professionals to convert AI strategy into coordinated implementation, ensuring governance alignment and measurable return on investment.
The Certified Offensive AI Security Professional program trains specialists to identify vulnerabilities in large language models, simulate adversarial techniques, and strengthen AI infrastructure. The Certified Responsible AI Governance and Ethics certification centers on enterprise-scale oversight and compliance, referencing established standards such as those developed by NIST and ISO.
Certified CISO v4 has also been updated to prepare executive leaders for AI-integrated risk environments, where intelligent systems influence operational and strategic decisions. According to EC-Council leadership, security executives must now manage adaptive systems that evolve rapidly and require clear governance accountability.
The initiative aligns with U.S. federal priorities outlined in Executive Order 14179, the July 2025 AI Action Plan’s workforce development pillar, and Executive Orders 14277 and 14278, all of which emphasize expanding AI education pathways and strengthening job-ready skills across professional and skilled trade sectors.
AI expertise remains geographically concentrated, with 67 percent of U.S. AI talent located in just 15 cities, while women account for 28 percent of the workforce, underlining ongoing participation disparities.
Founded in 2001, EC-Council is known for its Certified Ethical Hacker credential. The organization holds ISO/IEC 17024 accreditation and reports certifying more than 350,000 professionals globally, including personnel within government agencies, the Department of Defense under DoD 8140 baseline recognition, and Fortune 100 companies.
As AI transitions from experimentation to infrastructure, workforce readiness and governance capability are increasingly central to secure and sustainable deployment.
A promotional campaign at South Korean cryptocurrency exchange Bithumb turned into a large-scale operational incident after a data-entry error resulted in users receiving bitcoin instead of a small cash-equivalent reward.
Initial reports suggested that certain customers were meant to receive 2,000 Korean won as part of a routine promotional payout. Instead, those accounts were credited with 2,000 bitcoin each. At current market valuations, 2,000 bitcoin represents roughly $140 million per account, transforming what should have been a minor incentive into an extraordinary allocation.
Bithumb later confirmed that the scope of the error was larger than early estimates. According to the exchange, a total of 620,000 bitcoin was mistakenly credited to 695 user accounts. Based on prevailing prices at the time of the incident, that amount corresponded to approximately $43 billion in value. The exchange stated that the issue stemmed from an internal processing mistake and was not connected to external hacking activity or a breach of its security infrastructure. It emphasized that customer asset custody systems were not compromised.
The sudden appearance of large bitcoin balances had an immediate effect on trading activity within the platform. Bithumb reported that the incident contributed to a temporary decline of about 10 percent in bitcoin’s price on its exchange, as some affected users rapidly sold the credited assets. To contain further disruption, the company restricted withdrawals and suspended certain transactions linked to the impacted accounts. It stated that 99.7 percent of the mistakenly issued bitcoin has since been recovered.
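Taken at face value, the figures reported above imply how much remained outstanding. A quick back-of-the-envelope check, using only numbers from the article (the per-coin price is derived, not reported):

```python
# Back-of-the-envelope check using only figures reported in the article.
total_btc_credited = 620_000          # BTC mistakenly credited
total_usd_value = 43_000_000_000      # ~$43 billion at the time
recovered_fraction = 0.997            # 99.7% reported recovered

# Implied per-coin price (derived from the two reported totals)
implied_btc_price = total_usd_value / total_btc_credited
unrecovered_btc = total_btc_credited * (1 - recovered_fraction)
unrecovered_usd = unrecovered_btc * implied_btc_price

print(f"Implied BTC price: ${implied_btc_price:,.0f}")
print(f"Unrecovered: {unrecovered_btc:,.0f} BTC (~${unrecovered_usd / 1e6:,.0f}M)")
```

Even the 0.3 percent that was not recovered corresponds to roughly 1,860 BTC, on the order of $129 million at the implied price.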
The event has revived discussion around the concept often described as “paper bitcoin.” On centralized exchanges, user balances are reflected in internal ledgers rather than always corresponding to coins held in individual blockchain wallets. In practice, exchanges may not maintain a one-to-one on-chain reserve for every displayed balance at every moment. This structural model has previously drawn criticism, most notably during the collapse of Mt. Gox in 2014, which was then the largest bitcoin exchange globally. Its failure exposed major discrepancies between reported and actual holdings.
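The "paper bitcoin" point can be made concrete with a toy model: on a centralized exchange, crediting a user is a row update in an internal database, with no blockchain transaction involved. A minimal sketch (class, names, and numbers are all illustrative, not Bithumb's actual systems):

```python
# Toy internal ledger: displayed balances are database entries, not
# on-chain coins. A mistaken credit changes the ledger instantly while
# the exchange's actual on-chain reserves are untouched.

class ExchangeLedger:
    def __init__(self, on_chain_reserve_btc: float):
        self.on_chain_reserve_btc = on_chain_reserve_btc  # real coins held
        self.balances: dict[str, float] = {}              # displayed balances

    def credit(self, user: str, amount_btc: float) -> None:
        # No blockchain interaction: pure bookkeeping.
        self.balances[user] = self.balances.get(user, 0.0) + amount_btc

    def total_liabilities(self) -> float:
        return sum(self.balances.values())

    def is_fully_backed(self) -> bool:
        return self.total_liabilities() <= self.on_chain_reserve_btc


ledger = ExchangeLedger(on_chain_reserve_btc=50_000)
ledger.credit("alice", 2_000)        # a normal-sized credit
print(ledger.is_fully_backed())      # still within on-chain reserves
ledger.credit("bob", 620_000)        # a bug credits more than the exchange holds
print(ledger.is_fully_backed())      # ledger now exceeds on-chain holdings
```

This is why an erroneous $43 billion credit can exist at all: nothing forces the internal ledger to stay within what the exchange actually controls on-chain unless a reconciliation check like `is_fully_backed` is enforced before credits post.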
Data from blockchain analytics firm Arkham Intelligence indicates that Bithumb currently controls digital assets worth approximately $5.3 billion. That figure is substantially lower than the $43 billion temporarily reflected in the erroneous credits, underscoring that the allocation existed within internal accounting records rather than as newly transferred blockchain assets.
Observers on social media platform X questioned how such a large discrepancy could occur without automated safeguards preventing the issuance. Bithumb has faced security challenges in the past. In 2017, an employee’s device was compromised, exposing customer data later used in phishing attempts. In 2018, around $30 million in cryptocurrency was stolen in an attack attributed to the Lazarus Group, an organization widely linked to North Korea. A further breach in 2019 resulted in losses of roughly $20 million and was initially suspected to involve insider participation. In each instance, Bithumb stated that it compensated affected users for lost funds, though earlier incidents included exposure of personal information.
Beyond cybersecurity events, the exchange has also been subject to regulatory scrutiny, including investigations related to alleged fraud, embezzlement, and promotional practices. Reports indicate it was again raided this week over concerns involving misleading advertising.
Bithumb maintains that no customer ultimately suffered a net financial loss from the recent error, though the price movement raised concerns about potential liquidations for leveraged traders. A comparable situation occurred at decentralized exchange Paradex, which reversed trades following a pricing malfunction.
The incident unfolds amid broader market strain, with digital asset prices well below their October peaks and political debate intensifying around cryptocurrency-linked business interests connected to U.S. public figures. Recent disclosures from the U.S. Department of Justice concerning Jeffrey Epstein’s early involvement in cryptocurrency ventures have further fueled online speculation and conspiracy narratives across social platforms.
Cybersecurity investigators at Google have confirmed that state-sponsored hacking groups are actively relying on generative artificial intelligence to improve how they research targets, prepare cyber campaigns, and develop malicious tools. According to the company’s threat intelligence teams, North Korea–linked attackers were observed using the firm’s AI platform, Gemini, to collect and summarize publicly available information about organizations and employees they intended to target. This type of intelligence gathering allows attackers to better understand who works at sensitive companies, what technical roles exist, and how to approach victims in a convincing way.
Investigators explained that the attackers searched for details about leading cybersecurity and defense companies, along with information about specific job positions and salary ranges. These insights help threat actors craft more realistic fake identities and messages, often impersonating recruiters or professionals to gain the trust of their targets. Security experts warned that this activity closely resembles legitimate professional research, which makes it harder for defenders to distinguish normal online behavior from hostile preparation.
The hacking group involved, tracked as UNC2970, is linked to North Korea and overlaps with a network widely known as Lazarus Group. This group has previously run a long-term operation in which attackers pretended to offer job opportunities to professionals in aerospace, defense, and energy companies, only to deliver malware instead. Researchers say this group continues to focus heavily on defense-related targets and regularly impersonates corporate recruiters to begin contact with victims.
The misuse of AI is not limited to one actor. Multiple hacking groups connected to China and Iran were also found using AI tools to support different phases of their operations. Some groups used AI to gather targeted intelligence, including collecting email addresses and account details. Others relied on AI to analyze software weaknesses, prepare technical testing plans, interpret documentation from open-source tools, and debug exploit code. Certain actors used AI to build scanning tools and malicious web shells, while others created fake online identities to manipulate individuals into interacting with them. In several cases, attackers claimed to be security researchers or competition participants in order to bypass safety restrictions built into AI systems.
Researchers also identified malware that directly communicates with AI services to generate harmful code during an attack. One such tool, HONESTCUE, requests programming instructions from AI platforms and receives source code that is used to build additional malicious components on the victim’s system. Instead of storing files on disk, this malware compiles and runs code directly in memory using legitimate system tools, making detection and forensic analysis more difficult. Separately, investigators uncovered phishing kits designed to look like cryptocurrency exchanges. These fake platforms were built using automated website creation tools from Lovable AI and were used to trick victims into handing over login credentials. Parts of this activity were linked to a financially motivated group known as UNC5356.
Security teams also reported an increase in so-called ClickFix campaigns. In these schemes, attackers use public sharing features on AI platforms to publish convincing step-by-step guides that appear to fix common computer problems. In reality, these instructions lead users to install malware that steals personal and financial data. This trend was first flagged in late 2025 by Huntress.
Another growing threat involves model extraction attacks. In these cases, adversaries repeatedly query proprietary AI systems in order to observe how they respond and then train their own models to imitate the same behavior. In one large campaign, attackers sent more than 100,000 prompts to replicate how an AI model reasons across many tasks in different languages. Researchers at Praetorian demonstrated that a functional replica could be built using a relatively small number of queries and limited training time. Experts warned that keeping AI model parameters secret is not enough, because every response an AI system provides can be used as training data for attackers.
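Model extraction is, at bottom, supervised learning on someone else's outputs: query the target, record its responses, and fit a local model to imitate them. A minimal, self-contained sketch of that pattern (the "target" here is a stand-in function with a hidden decision rule, not a real AI API):

```python
import random

# Stand-in for a proprietary model: the attacker can only query it and
# observe outputs, never see the internals (here, a hidden threshold rule).
def target_model(x: float) -> int:
    return 1 if 0.3 * x + 0.4 > 0.55 else 0

# Step 1: harvest (query, response) pairs, just as an extraction campaign
# harvests prompts and completions.
random.seed(0)
queries = [random.uniform(0, 1) for _ in range(1_000)]
labels = [target_model(x) for x in queries]

# Step 2: fit a crude replica -- pick the threshold that best reproduces
# the observed labels (a stand-in for training a student model).
best_t = max(
    sorted(queries),
    key=lambda t: sum((x >= t) == bool(y) for x, y in zip(queries, labels)),
)

def replica(x: float) -> int:
    return 1 if x >= best_t else 0

# Step 3: measure how often the replica agrees with the target.
test_points = [i / 200 for i in range(200)]
agreement = sum(replica(x) == target_model(x) for x in test_points) / 200
print(f"replica/target agreement: {agreement:.0%}")
```

The sketch illustrates the experts' warning: the attacker never sees the target's parameters, yet every answer it gives leaks enough signal to train an imitation, and more queries only tighten the fit.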
Google, which launched its AI Cyber Defense Initiative in 2024, stated that artificial intelligence is increasingly amplifying the capabilities of cybercriminals by improving their efficiency and speed. Company representatives cautioned that as attackers integrate AI into routine operations, the volume and sophistication of attacks will continue to rise. Security specialists argue that defenders must adopt similar AI-powered tools to automate threat detection, accelerate response times, and operate at the same machine-level speed as modern attacks.
More than six thousand SmarterMail systems are reachable online and potentially exposed to a serious login vulnerability, according to scans by the nonprofit cybersecurity group Shadowserver. The exposure is drawing attention as attackers increasingly target outdated corporate mail deployments left unprotected.
The threat actors used internet-exposed SolarWinds Web Help Desk (WHD) instances to gain initial access and then moved laterally across the organization's network to other high-value assets, according to Microsoft's disclosure of a multi-stage attack.
However, it is unclear if the activity used a previously patched vulnerability (CVE-2025-26399, CVSS score: 9.8) or recently revealed vulnerabilities (CVE-2025-40551, CVSS score: 9.8, and CVE-2025-40536, CVSS score: 8.1), according to the Microsoft Defender Security Research Team.
"Since the attacks occurred in December 2025 and on machines vulnerable to both the old and new set of CVEs at the same time, we cannot reliably confirm the exact CVE used to gain an initial foothold," the company said in the report.
CVE-2025-40551 and CVE-2025-26399 are both deserialization-of-untrusted-data vulnerabilities that could result in remote code execution, while CVE-2025-40536 is a security control bypass that might enable an unauthenticated attacker to access some restricted functionality.
Citing evidence of active exploitation in the wild, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) added CVE-2025-40551 to its Known Exploited Vulnerabilities (KEV) catalog last week, requiring Federal Civilian Executive Branch (FCEB) agencies to apply fixes for the flaw by February 6, 2026.
In the attacks Microsoft discovered, successful exploitation of the exposed SolarWinds WHD instance gave the attackers unauthenticated remote code execution, allowing them to run arbitrary commands within the WHD application environment.
Microsoft claimed that in at least one instance, the threat actors used a DCSync attack, in which they impersonated a Domain Controller (DC) and asked an Active Directory (AD) database for password hashes and other private data.
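DCSync abuses the Active Directory replication protocol: any principal holding replication rights can ask a DC for password hashes as if it were another DC. Defenders commonly hunt it by flagging Windows event 4662 records where the accessed property is a replication-rights GUID and the requester is not a known domain controller. A toy filter over mock log records (the record shape and account names are hypothetical; the GUIDs are the real replication-rights identifiers):

```python
# Toy DCSync hunt: flag event-4662-style records whose accessed property
# is a replication-rights GUID but whose subject is not a domain controller.

REPLICATION_GUIDS = {
    "1131f6aa-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes
    "1131f6ad-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes-All
}
KNOWN_DCS = {"DC01$", "DC02$"}               # hypothetical DC machine accounts

def suspicious_dcsync(events: list[dict]) -> list[dict]:
    return [
        e for e in events
        if e["event_id"] == 4662
        and e["properties"] & REPLICATION_GUIDS   # replication rights accessed
        and e["subject"] not in KNOWN_DCS         # ...by a non-DC principal
    ]

events = [
    {"event_id": 4662, "subject": "DC02$",            # normal DC replication
     "properties": {"1131f6ad-9c07-11d1-f79f-00c04fc2dcd2"}},
    {"event_id": 4662, "subject": "j.doe",            # DCSync-like request
     "properties": {"1131f6aa-9c07-11d1-f79f-00c04fc2dcd2"}},
    {"event_id": 4624, "subject": "j.doe",            # unrelated logon event
     "properties": set()},
]
hits = suspicious_dcsync(events)
print([e["subject"] for e in hits])
```

Real deployments would pull these events from a SIEM rather than a list, but the detection logic is the same: replication rights exercised by anything that is not a DC is the signature Microsoft's mitigation guidance keys on.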
To counter the attack, users are advised to update WHD instances, identify and remove any unauthorized RMM tools, rotate administrator and service accounts, and isolate vulnerable workstations to limit the impact of a breach.
"This activity reflects a common but high-impact pattern: a single exposed application can provide a path to full domain compromise when vulnerabilities are unpatched or insufficiently monitored," the creator of Windows stated.