
Commercial Spy Trackers Breach U.S. Army Networks, Jeopardizing National Security


U.S. Army networks face a hidden invasion from commercial spy technology, compromising soldier data and national security in alarming ways. A groundbreaking study by the Army Cyber Institute at West Point analyzed traffic on military networks, discovering that 21.2% of the most frequently visited websites host tracker domains. These trackers relentlessly collect sensitive information like geolocation, email addresses, and detailed browsing histories from troops during routine online activities.

The infiltration stems from ubiquitous commercial tools embedded in popular sites. Companies such as Adobe, Microsoft, Akamai, and even the banned TikTok deploy these trackers, funneling harvested data to brokers who resell it without regard for buyers' intentions. This surveillance capitalism mirrors civilian web tracking but strikes deeper when targeting military personnel, turning everyday internet use into a potential intelligence leak.

Researchers from Duke University exposed the severity by purchasing dossiers on active-duty service members from data brokers with ease. They acquired names, home addresses, personal emails, and military branch details, often from non-U.S. domains, highlighting how adversaries could exploit this for blackmail, targeting installations, or cyber campaigns. One expert called the process "disturbingly simple," underscoring the broker market's indifference to national security risks.

Persistent vulnerabilities echo the 2018 Strava fitness app scandal, where heatmap data revealed covert base locations worldwide. The latest findings show trackers in 42% of network requests and on 10.4% of sites, a prevalence exceeding that found on mainstream streaming platforms. Cybersecurity professor Alan Woodward of the University of Surrey warns, "If you’re not paying, you are the product," a harsh reality for soldiers navigating the open web.

The Pentagon is responding aggressively through its 2023 Cyber Strategy, implementing Zero Trust architecture, enhanced endpoint detection, and widespread tracker blocking. The National Defense Authorization Act bolsters these efforts with mandates for spyware mitigation and stricter social media vetting. The Army Cyber Institute advocates quantifying trackers and extending blocks to personal devices, elevating data privacy to a core element of force protection in the digital age.
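
The Institute's call to quantify trackers can be made concrete. Below is a minimal Python sketch, assuming a plain-text blocklist of known tracker domains (one per line) and a list of observed request URLs; the file name and data are hypothetical, and a real deployment would draw on curated blocklist feeds and proxy or DNS logs.

```python
from urllib.parse import urlparse

def load_blocklist(path):
    # One tracker domain per line; '#' lines are comments (hypothetical file format)
    with open(path) as f:
        return {line.strip().lower() for line in f
                if line.strip() and not line.startswith("#")}

def is_tracker(url, blocklist):
    # Match the request host and every parent domain, so that
    # metrics.a.tracker.example also matches a blocklist entry "tracker.example"
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    return any(".".join(parts[i:]) in blocklist for i in range(len(parts)))

def tracker_share(request_urls, blocklist):
    # Fraction of observed requests that hit a known tracker domain
    hits = sum(is_tracker(u, blocklist) for u in request_urls)
    return hits / len(request_urls) if request_urls else 0.0

blocklist = load_blocklist("tracker_domains.txt")   # hypothetical blocklist file
requests_seen = ["https://news.example/story", "https://metrics.tracker.example/p.gif"]
print(f"{tracker_share(requests_seen, blocklist):.1%} of requests hit tracker domains")
```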

Hackers Exploit FortiGate Devices to Hack Networks and Credentials


Exploiting network entry points to compromise victims

Cybersecurity experts have warned about a new campaign in which hackers are exploiting FortiGate Next-Gen Firewall (NGFW) devices as entry points into target networks.

The campaign involves abusing recently disclosed security flaws or weak passwords to exfiltrate configuration files. The activity has singled out organizations linked to government, healthcare, and managed service providers.

Attack tactic 

According to experts, “FortiGate network appliances have considerable access to the environments they were installed to protect. In many configurations, this includes service accounts which are connected to the authentication infrastructure, such as Active Directory (AD) and Lightweight Directory Access Protocol (LDAP).”

"This setup can enable the appliance to map roles to specific users by fetching attributes about the connection that’s being analyzed and correlating with the Directory information, which is useful in cases where role-based policies are set or for increasing response speed for network security alerts detected by the device,” the experts added. 

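To see why that configuration is so sensitive, consider a minimal Python sketch of the directory lookup an appliance in this role performs, using the open-source ldap3 library. The hostname, base DN, and service-account name here are hypothetical; the key point is that the binding credentials must be stored on the device, which is exactly what makes a stolen configuration file valuable.

```python
# pip install ldap3
from ldap3 import Server, Connection, ALL

# Hypothetical service account, analogous to the appliance's directory binding
server = Server("ldaps://dc01.corp.example", get_info=ALL)
conn = Connection(
    server,
    user="CORP\\svc_fw_ldap",   # service account name is illustrative
    password="REDACTED",        # stored in the device config -- the weak point
    auto_bind=True,
)

# Fetch attributes for the user behind an observed connection so that
# role-based policies can be mapped to their traffic
conn.search(
    "dc=corp,dc=example",
    "(sAMAccountName=jdoe)",
    attributes=["memberOf", "mail"],
)
for entry in conn.entries:
    print(entry.entry_dn, entry.memberOf)
```
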
Misconfigurations opening doors for hackers 

But the experts noted that this access can be abused by attackers who break into FortiGate devices through vulnerabilities or misconfigurations.

In one attack last November, the hackers breached a FortiGate appliance, created a new local administrator account named “support,” and added four firewall policies that allowed the account to move across all zones without restriction.

The attacker then periodically verified access to the device. “Evidence demonstrates the attacker authenticated to the AD using clear text credentials from the fortidcagent service account, suggesting the attacker decrypted the configuration file and extracted the service account credentials,” SentinelOne reported.
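
Defenders can hunt for this kind of tampering by auditing configuration backups. The Python sketch below assumes a standard FortiGate text-format config export and flags administrator accounts, such as the rogue “support” account, that are missing from an allowlist; the file name and allowlist contents are illustrative.

```python
import re

EXPECTED_ADMINS = {"admin"}  # illustrative allowlist of sanctioned admin accounts

def audit_fortigate_admins(config_text):
    # Admin accounts live in a 'config system admin' stanza of the form:
    #   config system admin
    #       edit "admin"
    #       ...
    #       next
    #   end
    stanza = re.search(r"config system admin\n(.*?)\nend", config_text, re.S)
    if not stanza:
        return []
    admins = re.findall(r'edit "([^"]+)"', stanza.group(1))
    return [a for a in admins if a not in EXPECTED_ADMINS]

with open("fortigate_backup.conf") as f:  # hypothetical config export
    for rogue in audit_fortigate_admins(f.read()):
        print(f"Unexpected admin account: {rogue!r}")
```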

How was the account used?

The attacker then leveraged the service account to enumerate the target's environment and registered rogue workstations in AD for further access. Subsequent network scanning triggered detection of the breach, and lateral movement was contained.

The contents of the NTDS.dit file and SYSTEM registry hive were exfiltrated to an external server ("172.67.196[.]232") over port 443 by the Java malware, which was triggered via DLL side-loading.
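
A quick way to hunt for this indicator on a live host is to check active connections against the reported address. The Python sketch below uses the open-source psutil library; in practice this check belongs in EDR or firewall egress telemetry rather than a one-off script.

```python
# pip install psutil
import psutil

IOC_IP = "172.67.196.232"   # exfiltration server reported above (shown defanged in the article)
IOC_PORT = 443

for conn in psutil.net_connections(kind="inet"):
    # raddr is empty for listening sockets, so guard before comparing
    if conn.raddr and conn.raddr.ip == IOC_IP and conn.raddr.port == IOC_PORT:
        try:
            name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        except psutil.NoSuchProcess:
            name = "exited"
        print(f"ALERT: pid={conn.pid} ({name}) connected to {IOC_IP}:{IOC_PORT}")
```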

SentinelOne said: “While the actor may have attempted to crack passwords from the data, no such credential usage was identified between the time of credential harvesting and incident containment.”

Apple Rolls Out Global Age-Verification System to Protect Kids Online


Apple has rolled out a new global age-verification system across its platforms, aimed at keeping kids safer online while helping developers comply with tightening child safety laws worldwide. The move targets both app downloads and in‑app experiences, with a particular focus on blocking underage access to adult‑rated content without sacrificing user privacy.

Under the new rules, users in countries such as Brazil, Australia and Singapore will be blocked from downloading apps rated 18+ unless Apple can confirm they are adults. Similar protections are being extended to parts of the United States, where states like Utah and Louisiana are introducing strict online age‑assurance laws, pushing platforms to verify whether users are children, teens or adults before allowing access to certain apps or features. This marks one of Apple’s strongest steps yet to align its App Store with regional regulations on children’s digital safety.

At the heart of the initiative is Apple’s privacy‑focused Declared Age Range API, which lets apps learn a user’s age category instead of their exact birthdate. Developers can use this signal to tailor content, enable or disable features, or trigger parental consent flows for younger users, while never seeing sensitive identity details. Apple says this design is meant to minimize data collection and reduce the risk of intrusive ID checks or third‑party age‑verification databases.
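
Apple’s actual interface is exposed to apps in Swift, but the backend logic an age-category signal enables is easy to illustrate. The Python sketch below is a hypothetical server-side mapping from an age category to feature gates; the category names, thresholds, and flags are invented for illustration and are not Apple’s.

```python
from enum import Enum

class AgeRange(Enum):
    # Illustrative categories -- not Apple's exact enumeration
    UNDER_13 = "under13"
    TEEN_13_17 = "13to17"
    ADULT_18_PLUS = "18plus"

def feature_flags(age_range: AgeRange) -> dict:
    # Gate features on the declared category without ever seeing a birthdate
    if age_range is AgeRange.UNDER_13:
        return {"dms": False, "mature_content": False, "needs_parental_consent": True}
    if age_range is AgeRange.TEEN_13_17:
        return {"dms": True, "mature_content": False, "needs_parental_consent": False}
    return {"dms": True, "mature_content": True, "needs_parental_consent": False}

# Example: a client reports the teen category for the signed-in user
print(feature_flags(AgeRange.TEEN_13_17))
```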

For parents, the age‑verification push builds on Apple’s existing child account system and content restrictions. Parents can already set up child profiles, choose age ranges and apply web content filters, and now those settings can flow through to third‑party apps via the new tools. This means a game, social app or streaming service can automatically recognize that a user is a child or teen and adjust what they can see or do without asking for new personal information.

For developers, Apple is introducing an expanded toolkit that includes the updated Declared Age Range API, new age‑rating properties in StoreKit, and improved server notifications to track compliance. These tools will be essential in regions where apps must prove they are screening out underage users from adult content or obtaining parental consent for significant changes. As more governments pass online safety laws, Apple’s global age‑verification framework is likely to become a key part of how the App Store balances regulatory demands with user privacy.

Age Verification Laws for Social Media Raise Privacy Concerns and Enforcement Challenges


Across nations, governments push tighter rules limiting young users’ access to social media. Because of worries over endless scrolling, disturbing material online, or growing emotional struggles in teens, officials demand change. Minimum entry ages - often 13 or 16 - are now common in draft laws shaping platform duties. While debates continue, one thing holds: unrestricted teenage access faces mounting resistance. 

Still, putting such policies into practice stirs up both technological hurdles and concerns about personal privacy. To make sure people are old enough, services need proof - yet proving age typically means gathering private details. Meanwhile, current regulations push firms to keep data collection minimal. That tension forms what specialists call an “age-verification trap,” where tighter control over access can weaken safeguards meant to protect individual information. 

While many rules about age limits demand that services make "reasonable efforts" to block young users, clear guidance on checking someone's actual age is almost never included. In practice, firms fill this gap by leaning on two main methods. The first, identity verification, requires people to prove their age with official ID or online identity tools. 

Although more reliable, retaining such data raises worries about privacy breaches: handling vast collections of private details increases exposure to cyber threats, and security weakens when too much sensitive material gathers in one place. The second method is age estimation. By watching how someone uses a device, or by analyzing video selfies with face-scanning technology, systems try to infer a user's age without asking for ID. 

Still, since these outcomes depend on likelihoods instead of confirmed proof, doubt remains part of the process. Some big tech firms now run these kinds of tools. While Meta applies face-based age checks on Instagram in select regions - asking certain users to send brief video clips if they seem underage - TikTok examines openly shared videos to guess how old someone might be. 
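
Because the output is a likelihood rather than proof, these systems imply a three-way decision rather than a binary one: act on the estimate only when confidence clears a threshold, and escalate to a stronger check otherwise. The Python sketch below illustrates the idea; all thresholds are invented for illustration.

```python
def age_gate_decision(estimated_age: float, confidence: float,
                      min_age: int = 16, confidence_floor: float = 0.90) -> str:
    # Real systems tune these thresholds against false positives
    # (locked-out adults) and false negatives (admitted minors)
    if confidence < confidence_floor:
        return "escalate"   # e.g., ask for official ID or a payment card
    return "allow" if estimated_age >= min_age else "deny"

print(age_gate_decision(estimated_age=22.4, confidence=0.95))  # allow
print(age_gate_decision(estimated_age=15.1, confidence=0.70))  # escalate
```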

Elsewhere, Google and its platform YouTube lean on activity patterns; yet when doubt remains, they can ask for official identification or payment details. These steps aim at confirming ages without relying solely on stated information. Mistakes happen within these systems. Though meant to protect, they occasionally misidentify adults as children - leading to sudden account access issues. 

At times, underage individuals slip through the gaps by using borrowed IDs or setting up multiple profiles, and restrictions fail when credentials are shared. Even a single appeal can expose personal details if systems retain proof materials past their immediate need. Stored face scans, ID photos, or validation logs may linger just to satisfy legal checks, and these files attract digital intrusions simply by existing. Every extra day they remain increases the chance of a breach. 

Where identity infrastructure is weak, the difficulty grows. Biometrics might step in when official systems fall short. Oversight tends to be sparse, even as outside verifiers take on bigger roles. Still, shielding kids on the web without losing grip on private information is far from simple. When authorities roll out tighter rules for confirming age, the tools built to follow these laws could change how identities and personal details move through digital spaces.

AI-Powered Cybercrime Hits 600+ FortiGate Firewalls Across 55 Countries, AWS Warns


Cybercriminals using readily available generative AI tools managed to breach more than 600 internet-facing FortiGate firewalls across 55 countries within a little over a month, according to a recent incident analysis released by Amazon Web Services (AWS).

The operation, active between mid-January and mid-February, did not rely on sophisticated zero-day vulnerabilities. Instead, attackers automated large-scale attempts to access exposed systems by rapidly testing weak or reused credentials—essentially the digital equivalent of trying every unlocked door, but at high speed with the assistance of AI.
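
Defensively, this kind of high-speed credential testing leaves an obvious trail: bursts of failed logins from a small set of sources. Here is a minimal Python sketch that counts failures per source IP in an authentication log; the log line format, file name, and threshold are hypothetical.

```python
import re
from collections import Counter

# Hypothetical log format: "2025-01-17T09:12:01Z login_failed user=admin src=203.0.113.7"
FAILED = re.compile(r"login_failed\s+user=\S+\s+src=(\S+)")

def flag_bruteforce(log_lines, threshold=20):
    # Count failed logins per source IP and flag anything over the threshold
    failures = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            failures[m.group(1)] += 1
    return {ip: n for ip, n in failures.items() if n >= threshold}

with open("firewall_auth.log") as f:  # hypothetical export of auth events
    for ip, n in flag_bruteforce(f).items():
        print(f"{ip}: {n} failed logins -- possible credential stuffing")
```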

AWS investigators believe the operation was carried out by a financially motivated Russian-speaking group. The attackers scanned for publicly accessible FortiGate management interfaces, attempted to log in using commonly reused passwords, and once successful, extracted configuration files that provided detailed insight into the victims’ network environments.

According to AWS’s security team, the threat actors leveraged multiple commercially available AI tools to produce attack playbooks, scripts, and operational documentation. This allowed a relatively small or less technically advanced group to conduct a campaign that would typically require greater manpower and development effort. Analysts also discovered traces of AI-generated code and planning materials on compromised systems, indicating that AI tools were used extensively throughout the operation rather than just for occasional scripting tasks.

"The volume and variety of custom tooling would typically indicate a well-resourced development team," said CJ Moses, CISO at Amazon. "Instead, a single actor or very small group generated this entire toolkit through AI-assisted development."

After gaining access to the firewalls, the attackers retrieved configuration data containing administrator and VPN credentials, network architecture information, and firewall policies. Armed with these details, they attempted deeper intrusions by targeting directory services such as Active Directory, harvesting credentials, and exploring options for lateral movement across compromised networks. Backup infrastructure, including servers running Veeam, was also targeted during the intrusions.

AWS researchers noted that although the tools used in the campaign were functional, they appeared somewhat crude. The scripts showed basic parsing methods and repetitive comments often associated with machine-generated drafts. Despite their imperfections, the tools proved effective enough for large-scale automated attacks. When systems proved difficult to compromise, the attackers often abandoned them and shifted focus to easier targets, suggesting that their strategy prioritized volume over precision.

The affected organizations were spread across several regions, including Europe, Asia, Africa, and Latin America. The activity did not appear to focus on a single sector or country, indicating opportunistic targeting. However, investigators observed clusters of incidents suggesting that some breaches may have provided access to managed service providers or shared infrastructure, potentially increasing the scale of downstream exposure.

AWS emphasized that many of the compromises could have been avoided with standard cybersecurity practices. Preventing management interfaces from being publicly accessible, implementing multi-factor authentication, and avoiding password reuse would have significantly reduced the attackers’ chances of success.
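
The first of those mitigations is also the easiest to verify. The Python sketch below, run from outside the network, checks whether common management ports on a firewall’s public address accept TCP connections; the IP shown is a documentation-range placeholder and the port list is illustrative.

```python
import socket

MGMT_PORTS = (80, 443, 8443)  # ports commonly used for management interfaces

def is_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    # True if a TCP connection to host:port succeeds within the timeout
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

public_ip = "203.0.113.10"  # placeholder from the TEST-NET-3 documentation range
for port in MGMT_PORTS:
    if is_port_open(public_ip, port):
        print(f"WARNING: {public_ip}:{port} is reachable from the internet")
```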

The report comes shortly after Google cautioned that cybercriminal groups are increasingly integrating generative AI technologies—including tools such as Gemini AI—into their operations. These technologies are being used for tasks such as reconnaissance, target profiling, phishing campaign creation, and malware development.


Researchers Find Critical Zero-Day Vulnerabilities in Foxit and Apryse PDF Platforms


PDF files are often seen as simple digital documents, but recent research shows they have evolved into complex software environments that can expose corporate systems to cyber risks. Modern PDF tools now function more like application platforms than basic viewers, potentially giving attackers pathways into private networks. 

A study by Novee Security examined two widely used platforms, Foxit and Apryse. Released on February 18, 2026, the report identified 13 categories of vulnerabilities and 16 potential attack paths that could allow systems to be compromised. 

Researchers say these issues are more than minor bugs. Some zero-day flaws could allow attackers to run commands on backend servers or take over user accounts without needing to compromise a browser or operating system. To find the vulnerabilities, analysts first identified common patterns that signal security weaknesses. These patterns were then used to train an AI system that scanned large volumes of code much faster than manual review alone. 

By combining human insight with automated analysis, the system detected several high-impact issues that conventional scanning tools might miss. One major flaw appeared in Foxit’s digital signature server, which verifies electronically signed documents. Some of the most serious findings involve one-click exploits where simply opening a document or loading a link can trigger malicious activity. Vulnerabilities CVE-2025-70402 and CVE-2025-70400 affect Apryse WebViewer by allowing the software to trust remote configuration files without proper validation, enabling attackers to run malicious scripts. 

Another flaw, CVE-2025-70401, showed that malicious code could be hidden in the “Author” field of a PDF comment and executed when a user interacts with it. Researchers also identified CVE-2025-66500, which affects Foxit browser plugins. In this case, manipulated messages could trick the plugin into running harmful scripts within the application. Testing further showed that certain weaknesses could allow attackers to send a simple request that triggers command execution on a server, granting unauthorized access to parts of the system. 
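
Defenders can triage suspicious PDFs by inspecting the same annotation metadata the CVE-2025-70401 technique abused. The Python sketch below uses the open-source pypdf library to flag script-like content in annotation fields (the "/T" title key is what viewers display as the comment author); the file name and pattern list are illustrative.

```python
# pip install pypdf
from pypdf import PdfReader

SUSPICIOUS = ("<script", "javascript:", "onerror=", "eval(")

reader = PdfReader("suspect.pdf")  # hypothetical input file
for page_no, page in enumerate(reader.pages, start=1):
    for annot_ref in page.get("/Annots") or []:
        annot = annot_ref.get_object()
        # "/T" is the annotation title (shown as the author), "/Contents" the comment body
        for key in ("/T", "/Contents", "/Subj"):
            value = str(annot.get(key, ""))
            if any(marker in value.lower() for marker in SUSPICIOUS):
                print(f"Page {page_no}: suspicious {key} field: {value[:80]!r}")
```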

These vulnerabilities highlight how small interactions or overlooked behaviors can lead to significant security risks. Experts say the core problem lies in how modern PDF platforms are built. Many now rely on web technologies such as iframes and server-side processing, yet organizations still treat PDF files as harmless static documents. This mismatch can create “trust boundary” failures where software accepts external data without sufficient validation. 

Both vendors were notified before the research was published, and the vulnerabilities were assigned official CVE identifiers to support patching efforts. The findings highlight how document-processing systems—often overlooked in security planning—can become complex attack surfaces if not properly secured.

ECB Tightens Oversight of Banks’ Growing AI Sector Risks


The European Central Bank is intensifying its oversight of how eurozone lenders finance the fast‑growing artificial intelligence ecosystem, reflecting concern that the boom in data‑centre and AI‑related infrastructure could hide pockets of credit and concentration risk.

In recent weeks, the ECB has sent targeted requests to a select group of major European banks, asking for granular data on their loans and other exposures to AI‑linked activities such as data‑centre construction, vendor financing and large project‑finance structures. Supervisors want to map where credit is clustering around a small set of hyperscalers, cloud providers and specialized hardware suppliers, amid global estimates of trillions of dollars in planned AI‑related capital spending. Officials stress this is a diagnostic exercise rather than an immediate step toward higher capital charges, but it marks a shift from general discussion to hands‑on information gathering.

The push comes as European banks race to harness AI inside their own operations, from credit scoring and fraud detection to automating back‑office tasks and enhancing customer service. Supervisors acknowledge that these technologies promise sizeable efficiency gains and new revenue opportunities, yet warn that many institutions still lack mature governance for AI models, including robust data‑quality controls, explainability, and clear accountability for automated decisions. The ECB has repeatedly argued that AI adoption must be matched by stronger risk‑management frameworks and continuous human oversight over model life cycles.

Regulators are also increasingly uneasy about systemic dependencies created by the dominance of a handful of mostly non‑EU AI and cloud providers. Heavy reliance on these external platforms raises concerns about operational resilience, data protection, and geopolitical risk that could spill over into financial stability if disruptions occur. At the same time, the ECB’s broader financial‑stability assessments have highlighted stretched valuations in some AI‑linked equities, warning that a sharp correction could transmit stress into bank balance sheets through both direct exposures and wider market channels. 

For now, supervisors frame their AI‑sector review as part of a wider effort to “encourage innovation while managing risks,” aligning prudential expectations with Europe’s new AI Act and digital‑operational‑resilience rules. Banks are being nudged to tighten contract terms, strengthen model‑validation teams and improve documentation before scaling AI‑driven business lines. The message from Frankfurt is that AI remains welcome as a driver of competitiveness in European finance—but only if lenders can demonstrate they understand, measure and contain the new concentrations of credit, market and operational risk that accompany the technology’s rapid rise.

DeepMind Chief Sounds Alarm on AI's Dual Threats


Google DeepMind CEO Sir Demis Hassabis has issued a stark warning on the escalating threats posed by artificial intelligence, urging immediate action from governments and tech firms. In an exclusive BBC interview at the AI Impact Summit in Delhi, he emphasized that more research into AI risks "needs to be done urgently," rather than waiting years. Hassabis highlighted the industry's push for "smart regulation" targeting genuine dangers from increasingly autonomous systems.

The AI pioneer identified two primary threats: malicious exploitation by bad actors and the potential loss of human control over super-capable AI systems. He stressed that current fragmented efforts in safety research are insufficient, with massive investments in AI development far outpacing those in oversight and evaluation. As AI models grow more powerful, Hassabis warned of a "narrow window" to implement robust safeguards before existing institutions are overwhelmed.

Speaking at the summit, which concluded recently in India's capital, Hassabis called for scaled-up funding and talent in AI safety science. He compared the challenge to nuclear safety protocols, arguing that advanced AI now demands societal-level treatment with rigorous testing before widespread deployment. The event brought together global leaders to discuss AI's societal impacts amid rapid advancements.

Hassabis advocated for international cooperation, noting AI's borderless nature means it affects everyone worldwide. He praised forums like those in the UK, Paris, and Seoul for uniting technologists and policymakers, while pushing for minimum global standards on AI deployment. However, tensions exist, as the US delegation at the Delhi summit rejected global AI governance outright.

This comes as AI capabilities surge, with systems learning about physical reality and, by Hassabis's own estimate, approaching artificial general intelligence (AGI) within 5-10 years. Hassabis acknowledged natural constraints like hardware shortages may slow progress, providing time for safeguards, but stressed proactive measures are essential. Industry leaders must balance innovation with risk mitigation to harness AI's potential safely.

Safety recommendations 

To counter AI threats, organizations should prioritize independent safety evaluations and red-teaming exercises before deploying models. Governments must fund public AI safety research grants and enforce "smart regulations" focused on real risks like misuse and loss of control. Individuals can stay vigilant by verifying AI-generated content, using tools like watermark detectors, limiting data shared with AI systems, and supporting ethical AI policies through advocacy.