
Security Flaw Exposes Personal Data on Somalia’s E-Visa System Weeks After Major Breach

 

A recently uncovered weakness in Somalia’s electronic visa system has triggered fresh alarm over the protection of travelers’ personal information, coming just weeks after authorities admitted to a large-scale data breach affecting tens of thousands of applicants. Findings indicate that the Somalia e-visa platform is missing basic security safeguards, allowing unauthorized access to and downloading of sensitive documents with little technical effort.

The vulnerability was confirmed this week by Al Jazeera following a tip from a source with professional web development experience. The source explained that flaws in the e-visa system could be exploited to extract large volumes of visa application files containing highly confidential data. This exposed information reportedly includes passport details, full names, and dates of birth, data that could be abused for criminal activities or intelligence purposes.

According to the source, evidence of the security lapse was shared with Al Jazeera, along with proof that Somali authorities had been formally notified about the vulnerability a week earlier. Despite these warnings, the source said there was no response from officials and no sign that corrective measures had been taken.

Al Jazeera independently confirmed the claims by recreating the flaw as described. During testing, journalists were able to download e-visa documents belonging to dozens of individuals in a short time. The affected records included applicants from multiple countries, such as Somalia, Portugal, Sweden, the United States, and Switzerland.

“Breaches involving sensitive personal data are particularly dangerous as they put people at risk of various harms, including identity theft, fraud, and intelligence gathering by malicious actors,” Bridget Andere, a senior policy analyst at the digital rights organization Access Now, said in comments to Al Jazeera. She added that such incidents go beyond technical shortcomings and can have long-term implications for personal safety and privacy.

New Vulnerability Surfaces After Earlier Mass Data Leak

This latest Somalia e-visa security issue emerges less than a month after officials announced an investigation into a prior cyberattack on the same system. That earlier breach drew warnings from both the United States and the United Kingdom. According to official alerts, personal data belonging to more than 35,000 Somalia e-visa applicants had been exposed. The US Embassy in Somalia previously said the leaked information included names, photographs, dates and places of birth, email addresses, marital status, and home addresses.

Following that incident, Somalia’s Immigration and Citizenship Agency (ICA) shifted the e-visa platform to a new web domain, stating that the move was intended to improve security. On November 16, the agency said it was treating the breach with “special importance” and confirmed that an investigation was underway. However, the emergence of a new vulnerability suggests that deeper security weaknesses may still persist.

Security Praise Contrasts With Legal Responsibilities

Earlier the same week, Somalia’s Defence Minister, Ahmed Moalim Figi, publicly commended the e-visa system, saying it had helped prevent ISIL (ISIS) fighters from entering the country amid ongoing military operations against a regional affiliate in northern Somalia.

“The government's push to deploy the e-visa system despite being clearly unprepared for potential risks, then redeploying it after a serious data breach, is a clear example of how disregard for people's concerns and rights when introducing digital infrastructures can erode public trust and create avoidable vulnerabilities,” Andere said. She also voiced concern that Somali authorities had not issued a public notice regarding the serious data breach reported in November.

Under Somalia’s data protection law, organizations handling personal data are required to inform the national data protection authority when breaches occur. In cases involving high risk, particularly where sensitive personal data is exposed, affected individuals must also be notified. “Extra protections should apply in this case because it involves people of different nationalities and therefore multiple legal jurisdictions,” Andere added.

Al Jazeera stated that it could not publish specific technical details of the newly discovered flaw because it remains unpatched and could be exploited further if disclosed. Any sensitive data accessed during the investigation was destroyed to safeguard the privacy of those impacted.

AuraStealer Malware Uses "Scam-Yourself" Tactics to Steal Sensitive Data

 

A recent investigation by Gen Digital’s Gen Threat Labs has brought attention to AuraStealer, a newly emerging malware-as-a-service offering that has begun circulating widely across underground cybercrime communities. First observed in mid-2025, the malware is being promoted as a powerful data-stealing tool capable of compromising a broad range of Windows operating systems. Despite its growing visibility, researchers caution that AuraStealer’s technical sophistication does not always match the claims made by its developers. 

Unlike conventional malware campaigns that rely on covert infection techniques such as malicious email attachments or exploit kits, AuraStealer employs a strategy that places users at the center of their own compromise. This approach, described as “scam-yourself,” relies heavily on social engineering rather than stealth delivery. Threat actors distribute convincing video content on popular social platforms, particularly TikTok, presenting the malware execution process as a legitimate software activation tutorial. 

These videos typically promise free access to paid software products. Viewers are guided through step-by-step instructions that require them to open an administrative PowerShell window and manually enter commands shown on screen. Instead of activating software, the commands quietly retrieve and execute AuraStealer, granting attackers access to the victim’s system without triggering traditional download-based defenses. 
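From a defensive standpoint, this delivery chain tends to leave traces in PowerShell script block logging (Windows Event ID 4104). The sketch below is illustrative only and assumes a plain-text export of those logs; the patterns are generic download-and-execute idioms seen in "paste this command" lures, not AuraStealer's actual commands, which are not reproduced here.

```python
import re
import sys

# Illustrative patterns only: download-and-execute idioms commonly seen in
# "paste this command" social-engineering chains.
SUSPICIOUS = [
    re.compile(r"(iex|invoke-expression)\s*\(?\s*(irm|iwr|invoke-restmethod|invoke-webrequest)", re.I),
    re.compile(r"downloadstring\s*\(", re.I),
    re.compile(r"-encodedcommand\s+[a-z0-9+/=]{40,}", re.I),
    re.compile(r"bypass\s+-nop", re.I),  # execution-policy bypass with no profile
]

def scan(path: str) -> None:
    """Scan a text export of PowerShell script block logs for suspect commands."""
    with open(path, "r", encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            for pattern in SUSPICIOUS:
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible download-and-execute: {line.strip()[:120]}")
                    break

if __name__ == "__main__":
    for log_file in sys.argv[1:]:
        scan(log_file)
```

A match is only a starting point for triage, but commands that fetch and immediately execute remote content are exactly what these tutorial videos instruct viewers to run.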

From an analysis perspective, AuraStealer incorporates multiple layers of obfuscation designed to complicate both manual and automated inspection. The malware disrupts straightforward code execution paths by dynamically calculating control flow at runtime, preventing analysts from easily tracing its behavior. It also leverages exception-based execution techniques, intentionally generating system errors that are intercepted by custom handlers to perform malicious actions. These tactics are intended to confuse security sandboxes and delay detection. 

Functionally, AuraStealer targets a wide range of sensitive information. Researchers report that it is designed to harvest data from more than a hundred web browsers and dozens of desktop applications. Its focus includes credentials stored in both Chromium- and Gecko-based browsers, as well as data associated with cryptocurrency wallets maintained through browser extensions and standalone software. 

One of the more concerning aspects of the malware is its attempt to circumvent modern browser protections such as Application-Bound Encryption. The malware tries to launch browser processes in a suspended state and inject code capable of extracting encryption keys. However, researchers observed that this technique is inconsistently implemented and fails across multiple environments, suggesting that the malware remains technically immature. 

Despite being sold through subscription-based pricing that can reach several hundred dollars per month, AuraStealer contains notable weaknesses. Analysts found that its aggressive obfuscation introduces detectable patterns and that coding errors undermine its ability to remain stealthy. These shortcomings provide defenders with opportunities to identify and block infections before significant damage occurs. 

While AuraStealer is actively evolving and backed by ongoing development, its emergence highlights a broader trend toward manipulation-driven cybercrime. Security professionals continue to emphasize that any online tutorial instructing users to paste commands into a system terminal in exchange for free software should be treated as a significant warning sign.

LinkedIn Profile Data Among Billions of Records Found in Exposed Online Database

Cybersecurity researchers recently identified a massive online database that was left publicly accessible without any security protections, exposing a vast collection of professional and personal information. The database contained more than 16 terabytes of data, representing over 4.3 billion individual records that could be accessed without authorization.

Researchers associated with Cybernews reported that the exposed dataset is among the largest lead-generation style databases ever discovered online. The information appears to be compiled from publicly available professional profiles, including data commonly found on LinkedIn, such as profile handles, URLs, and employment-related details.

The exposed records included extensive personal and professional information. This ranged from full names, job titles, employer names, and work histories to education records, degrees, certifications, skills, languages, and location data. In some cases, the datasets also contained phone numbers, email addresses, social media links, and profile images. Additional information related to corporate relationships and contract-linked data was also present, suggesting the dataset was built for commercial or business intelligence purposes.

Investigators believe the data was collected gradually over several years and across different geographic regions. The database was stored in a MongoDB instance, a system commonly used by organizations to manage large volumes of information efficiently. While MongoDB itself is widely used, leaving such databases unsecured can expose sensitive information at scale, which is what occurred in this incident.
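The most common form of this misconfiguration is a MongoDB instance reachable from the internet with access control disabled. As a minimal self-check sketch, assuming the standard pymongo driver and a host and port the operator controls, one can test whether the server answers admin commands without credentials:

```python
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def check_requires_auth(host: str = "localhost", port: int = 27017) -> None:
    """Report whether a MongoDB instance answers listDatabases without credentials."""
    client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
    try:
        # listDatabases is rejected when access control is enabled and no user is logged in.
        names = client.list_database_names()
        print(f"WARNING: {host}:{port} allowed an unauthenticated listDatabases "
              f"call and returned {len(names)} database(s).")
    except OperationFailure:
        print(f"OK: {host}:{port} rejected the unauthenticated request.")
    except ServerSelectionTimeoutError:
        print(f"Could not reach {host}:{port}.")
    finally:
        client.close()

if __name__ == "__main__":
    check_requires_auth()
```

A warning from a check like this, on a database bound to a public interface, is essentially the condition the researchers found.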

The exposed database was discovered on November 23 and secured approximately two days later. However, researchers were unable to determine how long the data had been accessible before it was identified. The exposure is believed to have resulted from misconfiguration or human error rather than a deliberate cyberattack, a common issue in cloud-based data storage environments.

Researchers noted that the database was highly organized and structured, indicating the information was intentionally collected and maintained. Based on its format, the data also appears to be relatively current and accurate.

Such large datasets are particularly attractive to cybercriminals. When combined with automated tools or large language models, this information can be used to conduct large-scale phishing campaigns, generate fraudulent emails, or carry out targeted social engineering attacks against individuals and corporate employees.

Security experts recommend that individuals take precautionary measures following incidents like this. This includes updating passwords for professional networking accounts such as LinkedIn, email services, and any connected financial accounts. Users should also remain cautious of unexpected emails, messages, or phone calls that attempt to pressure them into sharing personal information or clicking unknown links.

Although collecting publicly available data is not illegal in many jurisdictions, failing to properly secure a database of this size may carry legal and regulatory consequences. At present, the ownership and purpose of the database remain unclear. Further updates are expected if more information becomes available or accountability is established.

Cybercriminals Exploit Law Enforcement Data Requests to Steal User Information

 

While most major data breaches result from software vulnerabilities, payment card theft, or phishing attacks, identity theft is increasingly carried out through an intermediary that is not immediately apparent. Some of the biggest technology firms are handing over private information to what they believe are lawful authorities, only to discover that the requesters were identity thieves posing as such.

Technology firms such as Apple, Google, and Meta are required by law to disclose limited information about their users to law enforcement agencies in specific situations, such as criminal investigations or emergencies that threaten human life or national security. These requests are usually channeled through formal systems and handled with high priority because they are often urgent. All of these companies hold detailed information about their users, including location history, profile data, and device data, which can be of critical value to law enforcement.

This process, however, has also been exploited by cybercriminals, who evade the safeguards around user data by mimicking law enforcement communications. One recent tactic is the acquisition of typosquatted domains or email addresses that closely resemble law enforcement or government domains, differing by only a character or two. The attackers then send polished emails to companies' compliance or legal departments that look indistinguishable from genuine law enforcement requests.
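One practical countermeasure for a compliance team is to flag sender domains that nearly match, but do not equal, domains on an allow-list of known agencies. The following is a minimal sketch using only the Python standard library; the domain names and the similarity threshold are hypothetical examples, not a production policy:

```python
import difflib

# Hypothetical allow-list of official domains a compliance team already trusts.
KNOWN_AGENCY_DOMAINS = {"fbi.gov", "justice.gov", "police.uk"}

def classify_sender(address: str, threshold: float = 0.85) -> str:
    """Flag sender domains that closely resemble, but do not equal, a trusted domain."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in KNOWN_AGENCY_DOMAINS:
        return "exact match: still verify through the official request portal"
    for known in KNOWN_AGENCY_DOMAINS:
        similarity = difflib.SequenceMatcher(None, domain, known).ratio()
        if similarity >= threshold:
            return f"SUSPICIOUS: '{domain}' closely resembles '{known}' ({similarity:.2f})"
    return "unknown domain: treat as untrusted"

if __name__ == "__main__":
    for sender in ["agent@fbi.gov", "agent@fbl.gov", "legal@fbi-requests.com"]:
        print(sender, "->", classify_sender(sender))
```

String similarity alone will not catch every impersonation attempt (compromised genuine accounts, for example, pass this test), so it should only supplement verification through an official portal or a callback to a known contact.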

In more sophisticated attacks, the perpetrators use business email compromise to break into genuine email accounts belonging to law enforcement or public officials. Requests sent from legitimate addresses appear far more authentic, which greatly increases the chance that companies will comply. These malicious data requests are often framed as emergency disclosure requests, which shortens the time available for verification.

Emergency disclosure requests exist to avert imminent, real-world harm, and attackers exploit that urgency to persuade companies to hand over information quickly. The stolen data can then fuel identity theft, financial fraud, account takeovers, or resale on dark-web markets. In response to these dangers, technology companies have taken steps to prevent abuse of the process. Most major companies now route law enforcement requests through dedicated portals, where each request is reviewed internally for validity, authority, and legal compliance before any data is shared.

These controls have significantly reduced abuse but have not eliminated the risk. As more criminals develop expertise in impersonation schemes that exploit trust-based systems, the problem reflects a larger challenge for the tech industry: balancing legitimate cooperation with law enforcement against the obligation to protect users' privacy. The abuse of law enforcement data request channels underscores how important it is to keep sensitive information out of criminal hands.

CISA Warns of Rising Targeted Spyware Campaigns Against Encrypted Messaging Users

 

The U.S. Cybersecurity and Infrastructure Security Agency has issued an unusually direct warning regarding a series of active campaigns deploying advanced spyware against users of encrypted messaging platforms, including Signal and WhatsApp. According to the agency, these operations are being conducted by both state-backed actors and financially motivated threat groups, and their activity has broadened significantly throughout the year. The attacks now increasingly target politicians, government officials, military personnel, and other influential individuals across several regions. 

This advisory marks the first time CISA has publicly grouped together multiple operations that rely on commercial surveillance tools, remote-access malware, and sophisticated exploit chains capable of infiltrating secure communications without alerting the victim. The agency noted that the goal of these campaigns is often to hijack messaging accounts, exfiltrate private data, and sometimes obtain long-term access to devices for further exploitation. 

Researchers highlighted multiple operations demonstrating the scale and diversity of techniques. Russia-aligned groups reportedly misused Signal’s legitimate device-linking mechanism to silently take control of accounts. Android spyware families such as ProSpy and ToSpy were distributed through spoofed versions of well-known messaging apps in the UAE. Another campaign in Russia leveraged Telegram channels and phishing pages imitating WhatsApp, Google Photos, TikTok, and YouTube to spread the ClayRat malware. In more technically advanced incidents, attackers chained recently disclosed WhatsApp zero-day vulnerabilities to compromise fewer than 200 targeted users. Another operation, referred to as LANDFALL, used a Samsung vulnerability affecting devices in the Middle East. 

CISA stressed that these attacks are highly selective and aimed at individuals whose communications have geopolitical relevance. Officials described the activity as precision surveillance rather than broad collection. Analysts believe the increasing focus on encrypted platforms reflects a strategic shift as adversaries attempt to bypass the protections of end-to-end encryption by compromising the devices used to send and receive messages. 

The tactics used in these operations vary widely. Some rely on manipulated QR codes or impersonated apps, while others exploit previously unknown iOS and Android vulnerabilities requiring no user interaction. Experts warn that for individuals considered high-risk, standard cybersecurity practices may no longer be sufficient. 

CISA’s guidance urges those at risk to adopt stronger security measures, including hardware upgrades, phishing-resistant authentication, protected telecom accounts, and stricter device controls. The agency also recommends reliance on official app stores, frequent software updates, careful permission auditing, and enabling advanced device protections such as Lockdown Mode on iPhones or Google Play Protect on Android.  

Officials stated that the rapid increase in coordinated mobile surveillance operations reflects a global shift in espionage strategy. With encrypted messaging now central to sensitive communication, attackers are increasingly focused on compromising the endpoint rather than the encryption itself—a trend authorities expect to continue growing.

AI Emotional Monitoring in the Workplace Raises New Privacy and Ethical Concerns

 

As artificial intelligence becomes more deeply woven into daily life, tools like ChatGPT have already demonstrated how appealing digital emotional support can be. While public discussions have largely focused on the risks of using AI for therapy—particularly for younger or vulnerable users—a quieter trend is unfolding inside workplaces. Increasingly, companies are deploying generative AI systems not just for productivity but to monitor emotional well-being and provide psychological support to employees. 

This shift accelerated after the pandemic reshaped workplaces and normalized remote communication. Now, industries including healthcare, corporate services and HR are turning to software that can identify stress, assess psychological health and respond to emotional distress. Unlike consumer-facing mental wellness apps, these systems sit inside corporate environments, raising questions about power dynamics, privacy boundaries and accountability. 

Some companies initially introduced AI-based counseling tools that mimic therapeutic conversation. Early research suggests people sometimes feel more validated by AI responses than by human interaction. One study found chatbot replies were perceived as equally or more empathetic than responses from licensed therapists. This is largely attributed to predictably supportive responses, lack of judgment and uninterrupted listening—qualities users say make it easier to discuss sensitive topics. 

Yet the workplace context changes everything. Studies show many employees hesitate to use employer-provided mental health tools due to fear that personal disclosures could resurface in performance reviews or influence job security. The concern is not irrational: some AI-powered platforms now go beyond conversation, analyzing emails, Slack messages and virtual meeting behavior to generate emotional profiles. These systems can detect tone shifts, estimate personal stress levels and map emotional trends across departments. 

One example involves workplace platforms using facial analytics to categorize emotional expression and assign wellness scores. While advocates claim this data can help organizations spot burnout and intervene early, critics warn it blurs the line between support and surveillance. The same system designed to offer empathy can simultaneously collect insights that may be used to evaluate morale, predict resignations or inform management decisions. 

Research indicates that constant monitoring can heighten stress rather than reduce it. Workers who know they are being analyzed tend to modulate behavior, speak differently or avoid emotional honesty altogether. The risk of misinterpretation is another concern: existing emotion-tracking models have demonstrated bias against marginalized groups, potentially leading to misread emotional cues and unfair conclusions. 

The growing use of AI-mediated emotional support raises broader organizational questions. If employees trust AI more than managers, what does that imply about leadership? And if AI becomes the primary emotional outlet, what happens to the human relationships workplaces rely on? 

Experts argue that AI can play a positive role, but only when paired with transparent data use policies, strict privacy protections and ethical limits. Ultimately, technology may help supplement emotional care—but it cannot replace the trust, nuance and accountability required to sustain healthy workplace relationships.

WhatsApp Enumeration Flaw Exposes Data of 3.5 Billion Users in Massive Scraping Incident

 

Security researchers in Austria uncovered a significant privacy vulnerability in WhatsApp that enabled them to collect the personal details of more than 3.5 billion registered users, an exposure they believe may be the largest publicly documented data leak to date. The issue stems from a long-standing feature that allows users to search WhatsApp accounts by entering phone numbers. While meant for convenience, the function can be exploited to automatically compile profiles at scale. 

Using phone numbers generated with a custom tool built on Google’s libphonenumber system, the research team was able to query account details at an astonishing rate—more than 100 million accounts per hour. They reported exceeding 7,000 automated lookups per second without facing IP bans or meaningful rate-limiting measures. Their findings indicate that WhatsApp’s registered user base is larger than previously disclosed, contradicting the platform’s statement that it serves “over two billion” users globally. 
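That throughput was possible because the lookup endpoint applied no meaningful per-client throttling. As a rough illustration of the kind of control that was missing, here is a minimal token-bucket rate limiter sketch; the capacity and refill values are arbitrary examples, and this is server-side pseudologic, not WhatsApp's actual implementation:

```python
import time
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Bucket:
    tokens: float
    last_refill: float = field(default_factory=time.monotonic)

class ContactLookupLimiter:
    """Token bucket per client: a small burst, then a slow refill, so enumeration stalls."""

    def __init__(self, capacity: int = 20, refill_per_second: float = 0.2):
        self.capacity = capacity
        self.refill_per_second = refill_per_second
        self.buckets: dict[str, Bucket] = defaultdict(lambda: Bucket(tokens=capacity))

    def allow(self, client_id: str) -> bool:
        bucket = self.buckets[client_id]
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the bucket capacity.
        bucket.tokens = min(self.capacity,
                            bucket.tokens + (now - bucket.last_refill) * self.refill_per_second)
        bucket.last_refill = now
        if bucket.tokens >= 1:
            bucket.tokens -= 1
            return True
        return False  # Reject: the client must wait for tokens to refill.

if __name__ == "__main__":
    limiter = ContactLookupLimiter()
    allowed = sum(limiter.allow("device-123") for _ in range(10_000))
    print(f"{allowed} of 10000 rapid lookups allowed")  # roughly just the burst size
```

Under a scheme like this, a single client identity is limited to a handful of lookups per minute after its initial burst, which is orders of magnitude below the query rates the researchers reported.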

The scraped records included phone numbers, account names, profile photos, and, in some cases, personal text attached to accounts. Over half of the identified users had public profile images, and a substantial portion contained identifiable human faces. About 29 percent included text descriptions, which researchers noted could reveal sensitive personal information such as sexuality, political affiliation, drug use, professional identities, or links to other platforms—including LinkedIn and dating apps.  

The study also revealed that millions of accounts belonged to phone numbers registered in countries where WhatsApp is restricted or banned, including China, Myanmar, and North Korea. Researchers warn that such exposure could put users in those regions at risk of government monitoring, penalties, or arrest.

Beyond state-level dangers, experts stress that the harvested dataset could be misused by cybercriminals conducting targeted phishing campaigns, fraudulent messaging schemes, robocalling, and identity-based scams. The team emphasized that the persistence of phone numbers poses an ongoing risk: half of the numbers leaked during Facebook’s large-scale 2021 data scraping incident were still active in WhatsApp’s ecosystem. 

Meta confirmed receiving the researchers’ disclosure through its bug bounty process. The company stated that it has since deployed updated anti-scraping defenses and thanked the researchers for responsibly deleting collected data. According to WhatsApp engineering leadership, the vulnerability did not expose private messages or encrypted content. 

The researchers validated Meta’s claim, noting that the original enumeration method is now blocked. However, they highlighted that verifying security completeness remains difficult and emphasized the nearly year-long delay between initial reporting and effective remediation.  

Whether this incident triggers systemic scrutiny or remains an isolated cautionary case, it underscores a critical reality: even services built around encryption can expose sensitive user metadata, creating new avenues for surveillance and exploitation.

Bluetooth Security Risks: Why Leaving It On Could Endanger Your Data

 

Bluetooth technology, widely used for wireless connections across smartphones, computers, health monitors, and peripherals, offers convenience but carries notable security risks—especially when left enabled at all times. While Bluetooth security and encryption have advanced over decades, the protocol remains exposed to various cyber threats, and many users neglect these vulnerabilities, putting personal data at risk.

Common Bluetooth security threats

Leaving Bluetooth permanently on is among the most frequent cybersecurity oversights. Doing so effectively announces your device’s continuous availability to connect, making it a target for attackers. 

Threat actors exploit Bluetooth through methods like bluesnarfing—the unauthorized extraction of data—and bluejacking, where unsolicited messages and advertisements are sent without consent. If hackers connect, they may siphon valuable information such as banking details, contact logs, and passwords, which can subsequently be used for identity theft, fraudulent purchases, or impersonation.

A critical issue is that data theft via Bluetooth is often invisible—victims receive no notification or warning. Further compounding the problem, Bluetooth signals can be leveraged for physical tracking. Retailers, for instance, commonly use Bluetooth beacons to trace shopper locations and gather granular behavioral data, raising privacy concerns.

Importantly, Bluetooth-related vulnerabilities affect more than just smartphones; they extend to health devices and wearables. Although compromising medical Bluetooth devices such as pacemakers or infusion pumps is technically challenging, targeted attacks remain a possibility for motivated adversaries.

Defensive strategies 

Mitigating Bluetooth risks starts with turning off Bluetooth in public or unfamiliar environments and disabling automatic reconnection features when constant use (e.g., wireless headphones) isn’t essential. Additionally, set devices to ‘undiscoverable’ mode as a default, blocking unexpected or unauthorized connections.
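On Linux desktops this setting can be audited from the command line. The sketch below assumes BlueZ's bluetoothctl utility is installed; it parses the output of `bluetoothctl show` and warns if the adapter is both powered on and discoverable. On other platforms the equivalent toggle lives in the system settings.

```python
import subprocess

def adapter_status() -> dict[str, str]:
    """Parse `bluetoothctl show` output (BlueZ) into a simple key/value map."""
    output = subprocess.run(
        ["bluetoothctl", "show"],  # raises FileNotFoundError if BlueZ is not installed
        capture_output=True, text=True, check=True,
    ).stdout
    status = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.strip().partition(":")
            status[key.strip()] = value.strip()
    return status

if __name__ == "__main__":
    info = adapter_status()
    if info.get("Powered") == "yes" and info.get("Discoverable") == "yes":
        print("Warning: Bluetooth adapter is powered on and discoverable to nearby devices.")
    else:
        print("Adapter is not currently advertising itself as discoverable.")
```
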

Regularly updating operating systems is vital, since outdated devices are prone to exploits like BlueBorne—a severe vulnerability allowing attackers full control over devices, including access to apps and cameras. Always reject unexpected Bluetooth pairing requests and periodically review app permissions, as many apps may exploit Bluetooth to track locations or obtain contact data covertly. 

Utilizing a Virtual Private Network (VPN) enhances overall security by encrypting network activity and masking IP addresses, though this measure isn’t foolproof. Ultimately, while Bluetooth offers convenience, mindful management of its settings is crucial for defending against the spectrum of privacy and security threats posed by wireless connectivity.