Showing posts with label Cyber Security.

Windows Telemetry Explained: What Diagnostic Data Microsoft Collects and Why It Matters

 

Years after Windows 10 arrived, one aspect keeps stirring debate - telemetry. This data gathering, which Microsoft labels diagnostic data, pulls details from machines without manual input. Its purpose is to keep systems stable, secure, and running smoothly. Yet reactions split sharply between everyday users and those watching privacy trends. 

Early on, after Windows 10 arrived, observers questioned whether its telemetry might double as surveillance. A few writers argued it collected large amounts of user detail while transmitting data to Microsoft servers. Still, analysts inspecting how the OS handles information report minimal proof backing such suspicions. 

Beginning in 2017, scrutiny from the Dutch Data Protection Authority revealed shortcomings in how Windows presented telemetry consent choices. Although designed to gather system performance details, the setup failed to align with regional privacy expectations due to unclear user permissions. 

Instead of defending the original design, Microsoft adjusted both interface wording and backend configurations. Following these updates, oversight bodies acknowledged improvements, noting no evidence emerged suggesting private information was gathered unlawfully. Independent analysts alongside regulatory teams had previously flagged the configuration, yet after revisions, compliance concerns faded gradually. 

What runs behind the scenes in Windows is a mix of telemetry types - mainly split into essential and optional reporting layers. Most personal computers, especially those outside corporate control, have the basic tier turned on automatically; there is no standard settings option to switch it off entirely. This baseline layer gathers only what Microsoft says is vital for stability and core operations. 

Though hidden from typical adjustments, its presence supports ongoing performance checks across devices. Basic troubleshooting relies on specific diagnostics tied to functions like Windows Update. The information can cover simple fault summaries, hardware configuration traits, software and driver footprints, and records tracking whether updates succeed or fail. 

As noted by Microsoft, insights drawn support better stability fixes, safety patches, app alignment, and smoother running systems. Some diagnostic details go beyond basics, capturing patterns in app use or web habits. These insights might involve deeper system errors, performance signs, or hardware traits. 

While such data helps refine functionality, access remains under user control via Windows options. Those cautious about personal information often choose to turn this off. Control sits within settings, letting choices match comfort levels. Occasionally, memory dumps taken during system failures form part of Optional diagnostic data, according to experts. 
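On managed systems, the tier in force is typically governed by the AllowTelemetry policy value. As a rough sketch, the value-to-tier mapping below follows Microsoft's documented scheme, but treat the exact values as an assumption to verify against current documentation:

```python
# Sketch: mapping Windows diagnostic-data policy values to tiers.
# The AllowTelemetry values below follow Microsoft's documented scheme
# (0 = Security, 1 = Required/Basic, 2 = Enhanced, 3 = Optional/Full);
# verify the exact mapping against current Microsoft documentation.

TELEMETRY_LEVELS = {
    0: "Security (enterprise editions only)",
    1: "Required (Basic) diagnostic data",
    2: "Enhanced diagnostic data (legacy)",
    3: "Optional (Full) diagnostic data",
}

def describe_telemetry_level(value: int) -> str:
    """Return a human-readable name for an AllowTelemetry policy value."""
    return TELEMETRY_LEVELS.get(value, f"Unknown level {value}")
```

On unmanaged home machines the same choice surfaces in Settings as the Basic/Optional toggle described above, rather than as a numeric policy value.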

When a crash happens, pieces of active files might get saved inside these records. Because of this risk, certain groups managing confidential material prefer disabling the setting altogether. In 2018, Microsoft rolled out a feature named Diagnostic Data Viewer to boost openness. This tool gives people access to review what information their machine shares with the company, revealing specifics found in diagnostics and system summaries. 

One billion devices now operate on Windows 11 across the globe. Because of countless variations in hardware and software setups, Microsoft relies on telemetry data - this information reveals issues, shapes update improvements, and supports consistent performance. While tracking user interactions might sound intrusive, it actually guides fixes without exposing personal details; instead, patterns emerge that steer engineering decisions behind the scenes. 

Even though some diagnostic details are essential for basic operations, those worried about personal data might choose to limit what gets sent by turning off non-essential diagnostics in device preferences. Still, full function depends on keeping certain reporting active.

Silent Scam Calls Used to Verify Active Phone Numbers, Cybersecurity Experts Warn

 

Many people have answered calls from unfamiliar numbers only to hear silence on the other end. In some cases, no one speaks at all. In others, there is a short delay before a caller finally responds. While this may appear to be a simple mistake or a wrong number, cybersecurity experts say these calls are often part of a deliberate scam tactic used to verify active phone numbers. 

According to security specialists, these silent calls function as a form of automated reconnaissance. Fraud operations run large-scale calling systems that dial thousands of numbers to determine which ones belong to real people. When someone answers, the system confirms that the number is active and marks it as a potential target for future scams. 
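The reconnaissance step amounts to simple filtering: dial many numbers, keep the ones that answer. A minimal sketch of that logic, using invented numbers and outcomes purely for illustration:

```python
# Illustrative sketch (hypothetical data): how a scam operation's dialer
# might filter a list of numbers down to those confirmed active.
# An "answered" outcome marks the number as a live target for follow-up.

def filter_active_numbers(dial_results: dict[str, str]) -> list[str]:
    """Keep only numbers whose calls were answered by a person."""
    return [number for number, outcome in dial_results.items()
            if outcome == "answered"]

results = {
    "+15550001": "answered",
    "+15550002": "no-answer",
    "+15550003": "disconnected",
    "+15550004": "answered",
}
active = filter_active_numbers(results)  # ["+15550001", "+15550004"]
```

The filtered list is the product being sold: a set of numbers known to reach real people.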

Keeper Security Chief Information Security Officer Shane Barney explained that such calls are rarely accidental. Instead, they help attackers filter out inactive numbers before investing more time and resources into scams. Verified contact information has value in modern cybercrime networks, where data about reachable individuals can be bought, sold, and reused across different fraud campaigns. 

Once a phone number is confirmed as active, it may be used in several ways. In some cases, scammers follow up with phishing calls or messages designed to trick victims into revealing personal or financial information. In more advanced attacks, a verified phone number could be combined with leaked email addresses from data breaches or used in schemes such as SIM-swap fraud, where attackers attempt to gain control of a victim’s mobile account. 

Another variation occurs when callers respond only after a brief pause. This delay is typically caused by predictive dialing systems that automatically place large volumes of calls. These systems detect when a human answers and then route the call to a live operator. The short silence represents the time it takes for the system to transfer the connection. 

Some people also worry that speaking during these calls could allow scammers to clone their voice using artificial intelligence. While voice cloning technology exists, experts say creating a convincing replica generally requires longer and clearer audio samples than a brief greeting. 

However, voice cloning could still become part of larger scams if criminals already possess other personal details about a victim. Security professionals recommend simple precautions when receiving suspicious calls. If an unknown number produces silence, hanging up immediately is usually the safest option. 

Another tactic is answering without speaking, which prevents automated systems from detecting a human voice. Spam-filtering tools can also help reduce nuisance calls. Applications such as Truecaller, RoboKiller, and Hiya identify numbers previously reported as spam. However, experts caution that no filtering system is perfect because scammers frequently change phone numbers. 

Ultimately, while call-blocking tools can reduce the volume of unwanted calls, maintaining strong account security and being cautious with unknown callers remain the most effective ways to avoid phone-based scams.

ShinyHunters Threatens Data Leak After Alleged Salesforce Breach

 

The hacking group ShinyHunters has warned roughly 400 companies that it may publish stolen data online if ransom demands are not met. The group claims it accessed private records through websites built on Salesforce Experience Cloud, a platform companies use to create public portals and customer support sites. 

According to earlier findings by cybersecurity firm Mandiant, the attackers targeted organisations that used Salesforce’s Experience Cloud for external-facing services such as help centres and information portals. 

How the breach allegedly happened

The reported intrusion appears linked to the configuration of public access settings within these websites. 

Salesforce allows websites built on Experience Cloud to include a “guest user” profile so visitors can view limited information without logging in. 

If these settings are configured too broadly, however, the access permissions can expose internal data to the public internet. Investigations suggest the attackers used a modified version of a tool called Aura Inspector to scan websites for such weaknesses. 
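The misconfiguration is essentially a least-privilege failure: the guest profile can read objects it was never meant to see. A hypothetical audit sketch, using an invented allowlist and permission records rather than Salesforce's real metadata APIs:

```python
# Hypothetical sketch: auditing a guest-user profile export for overly
# broad object permissions. The permission records and allowlist below
# are invented for illustration; a real audit would query Salesforce's
# own metadata and permissions APIs.

PUBLIC_ALLOWLIST = {"Knowledge_Article", "FAQ"}  # objects guests may read

def find_overexposed_objects(guest_permissions: dict[str, set[str]]) -> list[str]:
    """Return objects a guest profile can read that are not on the allowlist."""
    return sorted(obj for obj, perms in guest_permissions.items()
                  if "read" in perms and obj not in PUBLIC_ALLOWLIST)

perms = {
    "Knowledge_Article": {"read"},
    "Contact": {"read"},          # would expose names and phone numbers
    "Case": {"read", "create"},
}
risky = find_overexposed_objects(perms)  # ["Case", "Contact"]
```

Anything the audit flags is data a tool like the scanner described below could pull anonymously from the public internet.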

Once vulnerabilities were identified, the hackers were able to extract information including names and phone numbers. Security experts say the stolen data may already be fueling vishing attacks. 

In such scams, attackers contact employees by phone and attempt to trick them into revealing additional confidential information. 

Dispute over the root cause

There is disagreement over whether the problem stems from a software flaw or from how companies configured their systems. Salesforce has said the platform itself remains secure and that the issue is related to customer settings rather than a vulnerability in the product. 

“Our investigation to date confirms that this activity relates to a customer-configured guest user setting, not a platform security flaw,” the company said in a blog post. 

ShinyHunters disputes that explanation, claiming it discovered a previously unknown flaw that allows it to bypass certain protections even on sites that appear properly configured. 

Independent researchers have not yet verified that claim. 

Pressure tactics used by hackers

ShinyHunters is known for using aggressive extortion strategies to pressure victims into paying ransom demands. The group often releases stolen data in stages to increase pressure on organisations that refuse to negotiate. 

A recent example involved Dutch telecommunications provider Odido and its brand Ben. After the company declined to pay a ransom reportedly worth one million euros, the hackers began publishing large quantities of customer data on the dark web. 

Security guidance for companies

Salesforce is urging customers to review their portal configurations and tighten access controls. The company recommends applying a “least privilege” approach, meaning guest users should only have the minimum permissions required to use a site. 

Businesses are also advised to keep data private by default, disable settings that expose internal staff information, and turn off public application programming interfaces where possible. 

These interfaces can allow external systems to exchange data and may create additional entry points if left open. 

The incident highlights the growing risks associated with misconfigured cloud services, which security analysts say have become a common target for cybercriminal groups seeking large volumes of corporate data.

Commercial Spy Trackers Breach U.S. Army Networks, Jeopardizing National Security

 

U.S. Army networks face a hidden invasion from commercial spy technology, compromising soldier data and national security in alarming ways. A groundbreaking study by the Army Cyber Institute at West Point analyzed traffic on military networks, discovering that 21.2% of the most frequently visited websites host tracker domains. These trackers relentlessly collect sensitive information like geolocation, email addresses, and detailed browsing histories from troops during routine online activities.
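The study's 21.2% figure is a share of sites observed loading at least one tracker domain. The same kind of measurement can be sketched over network logs; the tracker list and site data below are invented for illustration:

```python
# Sketch with invented sample data: computing what share of visited sites
# load at least one known tracker domain, in the spirit of the study's
# 21.2% figure. The tracker list and site logs are illustrative only.

TRACKER_DOMAINS = {"tracker.example.net", "metrics.example.com"}

def share_of_sites_with_trackers(site_requests: dict[str, list[str]]) -> float:
    """Fraction of sites whose requests include a known tracker domain."""
    hits = sum(1 for domains in site_requests.values()
               if TRACKER_DOMAINS & set(domains))
    return hits / len(site_requests)

logs = {
    "news.example.org": ["cdn.example.org", "tracker.example.net"],
    "mail.example.org": ["mail.example.org"],
    "maps.example.org": ["tiles.example.org", "metrics.example.com"],
    "docs.example.org": ["docs.example.org"],
}
print(share_of_sites_with_trackers(logs))  # 0.5
```

Real measurements additionally rely on curated tracker blocklists, which is why the researchers advocate quantifying trackers as an ongoing practice rather than a one-off audit.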

The infiltration stems from ubiquitous commercial tools embedded in popular sites. Companies such as Adobe, Microsoft, Akamai, and even the banned TikTok deploy these trackers, funneling harvested data to brokers who resell it without regard for buyers' intentions. This surveillance capitalism mirrors civilian web tracking but strikes deeper when targeting military personnel, turning everyday internet use into a potential intelligence leak.

Researchers from Duke University exposed the severity by purchasing dossiers on active-duty service members from data brokers with ease. They acquired names, home addresses, personal emails, and military branch details, often from non-U.S. domains, highlighting how adversaries could exploit this for blackmail, targeting installations, or cyber campaigns. One expert called the process "disturbingly simple," underscoring the broker market's indifference to national security risks.

Persistent vulnerabilities echo the 2018 Strava fitness app scandal, where heatmap data revealed covert base locations worldwide. The latest findings show trackers in 42% of network requests and 10.4% of sites, exceeding privacy safeguards on mainstream streaming platforms. Cybersecurity professor Alan Woodward of the University of Surrey warns, "If you’re not paying, you are the product," a harsh reality for soldiers navigating the open web.

The Pentagon is responding aggressively through its 2023 Cyber Strategy, implementing Zero Trust architecture, enhanced endpoint detection, and widespread tracker blocking. The National Defense Authorization Act bolsters these efforts with mandates for spyware mitigation and stricter social media vetting. The Army Cyber Institute advocates quantifying trackers and extending blocks to personal devices, elevating data privacy to a core element of force protection in the digital age.

Hackers Exploit FortiGate Devices to Hack Networks and Credentials


Exploiting network entry points to hack victims 

Cybersecurity experts have warned about a new campaign where hackers are exploiting FortiGate Next-Gen Firewall (NGFW) devices as entry points to hack target networks. 

The campaign involves abusing recently disclosed security flaws or weak passwords to exfiltrate configuration files. The activity has singled out organizations linked to government, healthcare, and managed service providers. 

Attack tactic 

According to experts, “FortiGate network appliances have considerable access to the environments they were installed to protect. In many configurations, this includes service accounts which are connected to the authentication infrastructure, such as Active Directory (AD) and Lightweight Directory Access Protocol (LDAP).”

"This setup can enable the appliance to map roles to specific users by fetching attributes about the connection that’s being analyzed and correlating with the Directory information, which is useful in cases where role-based policies are set or for increasing response speed for network security alerts detected by the device,” the experts added. 

Misconfigurations opening doors for hackers 

But the experts noticed that this access could be compromised by hackers who hack into FortiGate devices via flaws or misconfigurations.

In one attack, the hackers breached a FortiGate appliance in November last year, created a new local admin account named “support”, and built four new firewall policies that let the account move across all zones without restriction. 
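Rogue local accounts of this kind can be caught by diffing a device's configuration against an expected admin list. A hedged sketch, using a simplified config format rather than full FortiOS syntax:

```python
# Sketch: flagging unexpected local admin accounts in a FortiGate-style
# configuration dump. The config syntax here is simplified for
# illustration; real FortiOS configs use "config system admin" blocks
# with quoted "edit" entries, as loosely mimicked below.
import re

EXPECTED_ADMINS = {"admin"}  # the accounts provisioned on purpose

def find_rogue_admins(config_text: str) -> list[str]:
    """Return admin account names that are not in the expected set."""
    names = re.findall(r'^\s*edit\s+"([^"]+)"', config_text, re.MULTILINE)
    return [name for name in names if name not in EXPECTED_ADMINS]

config = '''
config system admin
    edit "admin"
    next
    edit "support"
    next
end
'''
rogue = find_rogue_admins(config)  # ["support"]
```

Auditing firewall policies for overly broad zone-to-zone rules follows the same pattern: compare the running config to a known-good baseline.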

The hacker then routinely checked device access. “Evidence demonstrates the attacker authenticated to the AD using clear text credentials from the fortidcagent service account, suggesting the attacker decrypted the configuration file and extracted the service account credentials,” SentinelOne reported. 

How was the account used?

After this, the hacker leveraged the service account to enumerate the target's environment and register rogue workstations in the AD for further access. Network scanning then began, at which point the breach was detected and lateral movement was stopped. 

The contents of the NTDS.dit file and SYSTEM registry hive were exfiltrated to an external server ("172.67.196[.]232") over port 443 by the Java malware, which was triggered via DLL side-loading.

SentinelOne said that “While the actor may have attempted to crack passwords from the data, no such credential usage was identified between the time of credential harvesting and incident containment.”

Apple Rolls Out Global Age-Verification System to Protect Kids Online

 

Apple has rolled out a new global age-verification system across its platforms, aimed at keeping kids safer online while helping developers comply with tightening child safety laws worldwide. The move targets both app downloads and in‑app experiences, with a particular focus on blocking underage access to adult‑rated content without sacrificing user privacy.

Under the new rules, users in countries such as Brazil, Australia and Singapore will be blocked from downloading apps rated 18+ unless Apple can confirm they are adults. Similar protections are being extended to parts of the United States, where states like Utah and Louisiana are introducing strict online age‑assurance laws, pushing platforms to verify whether users are children, teens or adults before allowing access to certain apps or features. This marks one of Apple’s strongest steps yet to align its App Store with regional regulations on children’s digital safety.

At the heart of the initiative is Apple’s privacy‑focused Declared Age Range API, which lets apps learn a user’s age category instead of their exact birthdate. Developers can use this signal to tailor content, enable or disable features, or trigger parental consent flows for younger users, while never seeing sensitive identity details. Apple says this design is meant to minimize data collection and reduce the risk of intrusive ID checks or third‑party age‑verification databases.
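Apple's actual API surfaces in Swift on-device; as a language-neutral illustration, the gating idea can be sketched in Python: an app receives only a coarse age-range label and maps it to feature flags. The range labels and rules below are invented, not Apple's:

```python
# Hypothetical sketch: gating app features on a declared age *range*
# rather than an exact birthdate, mirroring the data-minimization idea
# behind Apple's Declared Age Range API. The range labels and feature
# rules here are invented for illustration.

FEATURE_RULES = {
    "child": {"messaging": False, "mature_content": False},
    "teen":  {"messaging": True,  "mature_content": False},
    "adult": {"messaging": True,  "mature_content": True},
}

def allowed_features(age_range: str) -> dict[str, bool]:
    """Return feature flags for a declared age range (default: child)."""
    return FEATURE_RULES.get(age_range, FEATURE_RULES["child"])
```

Defaulting unknown signals to the most restrictive tier is the conservative choice when the age signal is absent or unrecognized.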

For parents, the age‑verification push builds on Apple’s existing child account system and content restrictions.Parents can already set up child profiles, choose age ranges and apply web content filters, and now those settings can flow through to third‑party apps via the new tools.This means a game, social app or streaming service can automatically recognize that a user is a child or teen and adjust what they can see or do without asking for new personal information.

For developers, Apple is introducing an expanded toolkit that includes the updated Declared Age Range API, new age‑rating properties in StoreKit, and improved server notifications to track compliance. These tools will be essential in regions where apps must prove they are screening out underage users from adult content or obtaining parental consent for significant changes. As more governments pass online safety laws, Apple’s global age‑verification framework is likely to become a key part of how the App Store balances regulatory demands with user privacy.

Age Verification Laws for Social Media Raise Privacy Concerns and Enforcement Challenges

 

Across nations, governments push tighter rules limiting young users’ access to social media. Because of worries over endless scrolling, disturbing material online, or growing emotional struggles in teens, officials demand change. Minimum entry ages - often 13 or 16 - are now common in draft laws shaping platform duties. While debates continue, one thing holds: unrestricted teenage access faces mounting resistance. 

Still, putting such policies into practice stirs up both technological hurdles and concerns about personal privacy. To make sure people are old enough, services need proof - yet proving age typically means gathering private details. Meanwhile, current regulations push firms to keep data collection minimal. That tension forms what specialists call an “age-verification trap,” where tighter control over access can weaken safeguards meant to protect individual information. 
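One partial escape from that trap is to derive only the pass/fail result from a proof of age and discard the underlying document. A minimal sketch of that data-minimization pattern:

```python
# Sketch: verifying a minimum age while retaining only the boolean
# result, not the birthdate - one data-minimization approach to the
# "age-verification trap" described above.
from datetime import date

def age_on(birthdate: date, today: date) -> int:
    """Whole years of age on the given date."""
    return today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day))

def meets_minimum_age(birthdate: date, minimum: int, today: date) -> bool:
    """Return only the pass/fail result; the caller should discard the
    birthdate (and any ID image) once this check completes."""
    return age_on(birthdate, today) >= minimum
```

Systems built this way still see the sensitive input momentarily, but they avoid the long-term retention of ID photos and validation logs that the article identifies as the main breach risk.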

While many rules about age limits demand that services make "reasonable efforts" to block young users, clear guidance on checking someone's actual age is almost never included. In practice, firms lean heavily on two methods. The first is identity checks, which require people to prove their age using official ID or online identity tools. 

Although more reliable, keeping such data creates worries over privacy breaches. Handling vast collections of private details increases exposure to cyber threats, and security weakens when too much sensitive material gathers in one place. The second method is age estimation: by watching how someone uses a device, or analyzing video selfies with face-scanning tech, systems try to judge a person's age without asking for ID cards. 

Still, since these outcomes depend on likelihoods instead of confirmed proof, doubt remains part of the process. Some big tech firms now run these kinds of tools. While Meta applies face-based age checks on Instagram in select regions - asking certain users to send brief video clips if they seem underage - TikTok examines openly shared videos to guess how old someone might be. 

Elsewhere, Google and its platform YouTube lean on activity patterns; yet when doubt remains, they can ask for official identification or payment details. These steps aim at confirming ages without relying solely on stated information. Mistakes happen within these systems. Though meant to protect, they occasionally misidentify adults as children - leading to sudden account access issues. 

At times, underage individuals slip through gaps, using borrowed IDs or setting up more than one profile. Restrictions fail when shared credentials enter the picture. A single appeal can expose personal details when systems retain proof materials past their immediate need. Stored face scans, ID photos, or validation logs may linger just to satisfy legal checks. These files attract digital intrusions simply by existing. Every extra day they remain increases the chance of breach. 

Where identity infrastructure is weak, the difficulty grows. Biometrics might step in when official systems fall short. Oversight tends to be sparse, even as outside verifiers take on bigger roles. Still, shielding kids on the web without losing grip on private information is far from simple. When authorities roll out tighter rules for confirming age, the tools built to follow these laws could change how identities and personal details move through digital spaces.

AI-Powered Cybercrime Hits 600+ FortiGate Firewalls Across 55 Countries, AWS Warns

 

Cybercriminals using readily available generative AI tools managed to breach more than 600 internet-facing FortiGate firewalls across 55 countries within a little over a month, according to a recent incident analysis released by Amazon Web Services (AWS).

The operation, active between mid-January and mid-February, did not rely on sophisticated zero-day vulnerabilities. Instead, attackers automated large-scale attempts to access exposed systems by rapidly testing weak or reused credentials—essentially the digital equivalent of trying every unlocked door, but at high speed with the assistance of AI.

AWS investigators believe the operation was carried out by a financially motivated Russian-speaking group. The attackers scanned for publicly accessible FortiGate management interfaces, attempted to log in using commonly reused passwords, and once successful, extracted configuration files that provided detailed insight into the victims’ network environments.

According to AWS’s security team, the threat actors leveraged multiple commercially available AI tools to produce attack playbooks, scripts, and operational documentation. This allowed a relatively small or less technically advanced group to conduct a campaign that would typically require greater manpower and development effort. Analysts also discovered traces of AI-generated code and planning materials on compromised systems, indicating that AI tools were used extensively throughout the operation rather than just for occasional scripting tasks.

"The volume and variety of custom tooling would typically indicate a well-resourced development team," said CJ Moses, CISO at Amazon. "Instead, a single actor or very small group generated this entire toolkit through AI-assisted development."

After gaining access to the firewalls, the attackers retrieved configuration data containing administrator and VPN credentials, network architecture information, and firewall policies. Armed with these details, they attempted deeper intrusions by targeting directory services such as Active Directory, harvesting credentials, and exploring options for lateral movement across compromised networks. Backup infrastructure, including servers running Veeam, was also targeted during the intrusions.

AWS researchers noted that although the tools used in the campaign were functional, they appeared somewhat crude. The scripts showed basic parsing methods and repetitive comments often associated with machine-generated drafts. Despite their imperfections, the tools proved effective enough for large-scale automated attacks. When systems proved difficult to compromise, the attackers often abandoned them and shifted focus to easier targets, suggesting that their strategy prioritized volume over precision.

The affected organizations were spread across several regions, including Europe, Asia, Africa, and Latin America. The activity did not appear to focus on a single sector or country, indicating opportunistic targeting. However, investigators observed clusters of incidents suggesting that some breaches may have provided access to managed service providers or shared infrastructure, potentially increasing the scale of downstream exposure.

AWS emphasized that many of the compromises could have been avoided with standard cybersecurity practices. Preventing management interfaces from being publicly accessible, implementing multi-factor authentication, and avoiding password reuse would have significantly reduced the attackers’ chances of success.
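The defenses AWS lists are mostly configuration, but the credential-testing half can also be blunted with basic lockout logic. A minimal sketch, with an invented threshold; real appliances implement this natively alongside MFA and IP restrictions:

```python
# Sketch: a minimal failed-login lockout counter of the kind that blunts
# high-volume credential testing. The threshold is illustrative; real
# firewalls implement lockouts natively, plus MFA and IP allowlists.
from collections import defaultdict

LOCKOUT_THRESHOLD = 5  # consecutive failures before the account locks

class LoginGuard:
    def __init__(self) -> None:
        self.failures: dict[str, int] = defaultdict(int)

    def attempt(self, username: str, success: bool) -> str:
        """Record a login attempt and return 'ok', 'denied', or 'locked'."""
        if self.failures[username] >= LOCKOUT_THRESHOLD:
            return "locked"            # already locked out
        if success:
            self.failures[username] = 0
            return "ok"
        self.failures[username] += 1
        if self.failures[username] >= LOCKOUT_THRESHOLD:
            return "locked"
        return "denied"

guard = LoginGuard()
for _ in range(5):
    status = guard.attempt("admin", success=False)
print(status)  # "locked"
```

Against an AI-driven campaign that prioritizes volume over precision, even this crude counter raises the cost enough that, as the report notes, attackers tend to move on to easier targets.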

The report comes shortly after Google cautioned that cybercriminal groups are increasingly integrating generative AI technologies—including tools such as Gemini AI—into their operations. These technologies are being used for tasks such as reconnaissance, target profiling, phishing campaign creation, and malware development.