
All the recent news you need to know

Your Phone Is Being Tracked in Ways You Can’t See: One Click Shows the Truth

 



Many people believe they are safe online once they disable cookies, switch on private browsing, or limit app permissions. Yet these steps do not prevent one of the most persistent tracking techniques used today. Modern devices reveal enough technical information for websites to recognise them with surprising accuracy, and users can see this for themselves with a single click using publicly available testing tools.

This practice is known as device fingerprinting. It collects many small and unrelated pieces of information from your phone or computer, such as the type of browser you use, your display size, system settings, language preferences, installed components, and how your device handles certain functions. None of these details identify you directly, but when a large number of them are combined, they create a pattern that is specific to your device. This allows trackers to follow your activity across different sites, even when you try to browse discreetly.
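As a rough illustration of how these individually harmless signals combine into one stable identifier, the sketch below hashes a handful of device properties together. The attribute names and values are invented placeholders, not any real tracker's code:

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Combine individually harmless device details into one stable ID.

    Each attribute alone matches millions of devices; hashed together,
    the combination is often close to unique.
    """
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical signals a site can read without cookies or permissions
device = {
    "user_agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 17_5 ...)",
    "screen": "390x844@3x",
    "language": "en-GB",
    "timezone": "Europe/London",
    "fonts_hash": "a91f03",
    "canvas_hash": "7cd210",
}

print(fingerprint(device))  # same device -> same ID on every site
```

Because the hash is deterministic, every site that reads the same signals computes the same identifier, with no cookie ever being stored.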

The risk is not just about being observed. Once a fingerprint becomes associated with a single real-world action, such as logging into an account or visiting a page tied to your identity, that unique pattern can then be connected back to you. From that point onward, any online activity linked to that fingerprint can be tied to the same person. This makes fingerprinting an effective tool for profiling behaviour over long periods of time.

Growing concerns around online anonymity are making this issue more visible. Recent public debates about identity checks, age verification rules, and expanded monitoring of online behaviour have already placed digital privacy under pressure. Fingerprinting adds an additional layer of background tracking that does not rely on traditional cookies and cannot be easily switched off.

This method has also spread far beyond web browsers. Many internet-connected devices, including smart televisions and gaming systems, can reveal similar sets of technical signals that help build a recognisable device profile. As more home electronics become connected, these identifiers grow even harder for users to avoid.

Users can test their own exposure through tools such as the Electronic Frontier Foundation’s browser evaluation page. By selecting the option to analyse your browser, you will either receive a notice that your setup looks common or that it appears unique compared to others tested. A unique result means your device stands out strongly among the sample and can likely be recognised again. Another testing platform demonstrates just how many technical signals a website can collect within seconds, listing dozens of attributes that contribute to a fingerprint.
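The "common versus unique" verdict such tools report can be expressed as bits of identifying information. The calculation below is standard information theory, not the EFF's own code: a configuration shared by only one browser in N carries log2(N) bits, and around 33 bits are enough to single out one device among the world's population of devices:

```python
import math

def identifying_bits(one_in_n: float) -> float:
    """Bits of identifying information in a trait shared by 1 in N browsers."""
    return math.log2(one_in_n)

# A trait shared by 1 in 4 browsers reveals exactly 2 bits.
print(identifying_bits(4))  # 2.0

# Roughly 33 bits single out one device among ~8.6 billion.
print(identifying_bits(8_600_000_000))
```

This is why a "unique" result matters: it means the combined traits carry enough bits that no other tested browser shares them.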

Some browsers attempt to make fingerprinting more difficult by randomising certain data points or limiting access to high-risk identifiers. These protections reduce the accuracy of device recognition, although they cannot completely prevent it. A virtual private network can hide your network address, but it cannot block the internal characteristics that form a fingerprint.

Tracking also happens through mobile apps and background services. Many applications collect usage and technical data, and privacy labels do not always make this clear to users. Studies have shown that complex privacy settings and permission structures often leave people unaware of how much information their devices share.

Users should also be aware of design features that shift them out of protected environments. For example, when performing a search through a mobile browser, some pages include prompts that encourage the user to open a separate application instead of continuing in the browser. These buttons are typically placed near navigation controls, making accidental taps more likely. Moving into a dedicated search app places users in a different data-collection environment, where protections offered by the browser may no longer apply.

While there is no complete way to avoid fingerprinting, users can limit their exposure by choosing browsers with built-in privacy protections, reviewing app permissions frequently, and avoiding unnecessary redirections into external applications. Ultimately, the choice depends on how much value an individual places on privacy, but understanding how this technology works is the first step toward reducing risk.

CrowdStrike Fires Insider Who Leaked Internal Screenshots to Hacker Groups, Says No Customer Data Was Breached

 

American cybersecurity company CrowdStrike has confirmed that screenshots taken from its internal systems were shared with hacker groups by a now-terminated employee. 

The disclosure follows the appearance of the screenshots on Telegram, posted by the cybercrime collective known as Scattered Lapsus$ Hunters. 

In a statement to BleepingComputer, a CrowdStrike spokesperson said the company’s security was not compromised as a result of the insider activity and that customers remained fully protected. According to the spokesperson, the employee in question was identified during an internal investigation last month. 

The individual was later terminated and the matter has been reported to law enforcement. CrowdStrike did not clarify which threat group was behind the leak or what drove the employee to share sensitive images. 

However, the company offered the statement after BleepingComputer reached out regarding screenshots of CrowdStrike systems circulating on Telegram. Those screenshots were posted by members of ShinyHunters, Scattered Spider, and the Lapsus$ group, who now operate collectively under the name Scattered Lapsus$ Hunters. ShinyHunters claimed to BleepingComputer that they paid the insider 25,000 dollars for access to CrowdStrike’s network. 

The threat actors claimed they received SSO authentication cookies, but CrowdStrike had already detected the suspicious activity and revoked the employee’s access. 

The group also claimed it attempted to buy internal CrowdStrike reports on ShinyHunters and Scattered Spider but never received them. 

Scattered Lapsus$ Hunters have been responsible for a large-scale extortion campaign against companies using Salesforce. Since the beginning of the year, the group has launched voice phishing attacks to breach Salesforce customers. Their list of known or claimed victims includes Google, Cisco, Allianz Life, Farmers Insurance, Qantas, Adidas, Workday, and luxury brands under LVMH such as Dior, Louis Vuitton, and Tiffany & Co. 

They have also attempted to extort numerous high-profile organizations including FedEx, Disney, McDonald’s, Marriott, Home Depot, UPS, Chanel, and IKEA. 

The group has previously claimed responsibility for a major breach at Jaguar Land Rover that exposed sensitive data and disrupted operations, resulting in losses estimated at more than 196 million pounds. 

Most recently, ShinyHunters asserted that over 280 companies were affected in a new wave of Salesforce-related data theft. Among the names mentioned were LinkedIn, GitLab, Atlassian, Verizon, and DocuSign. 

However, DocuSign has denied being breached, stating that internal investigations have found no evidence of compromise.

Streaming Platforms Face AI Music Detection Crisis

 

Distinguishing AI-generated music from human compositions has become extraordinarily challenging as generative models improve, raising urgent questions about detection, transparency, and industry safeguards. This article explores why even trained listeners struggle to identify machine-made tracks and what technical, cultural, and regulatory responses are emerging.

Why detection is so difficult

Modern AI music systems produce outputs that blend seamlessly into mainstream genres, especially pop and electronic styles already dominated by digital production. Traditional warning signs—slightly slurred vocals, unnatural consonant pronunciation, or "ghost" harmonies that appear and vanish unpredictably—remain only hints rather than definitive proof, and these tells fade as models advance. Music producer insights emphasize that AI recognizes patterns but lacks the emotional depth and personal narratives behind human creativity, yet casual listeners find these distinctions nearly impossible to hear.

Technical solutions and limits

Streaming platform Deezer launched an AI detection tool in January 2024 and introduced visible tagging for fully AI-generated tracks by summer, reporting that over one-third of daily uploads—approximately 50,000 tracks—are now entirely machine-made. The company's research director noted initial detection volumes were so high they suspected a system error. Deezer claims detection accuracy exceeds 99.8 percent by identifying subtle audio artifacts left by generative models, with minimal false positives. However, critics warn that watermarking schemes can be stripped through basic audio processing, and no universal standard yet exists across platforms.

Economic and ethical implications

Undisclosed AI music floods catalogues, distorts recommendation algorithms, and crowds out human artists, potentially driving down streaming payouts. Training data disputes compound the problem: many AI systems learn from copyrighted recordings without consent or compensation, sparking legal battles over ownership and moral rights. Survey data shows 80 percent of listeners want mandatory labelling for fully AI-generated tracks, and three-quarters prefer platforms to flag AI recommendations.

Industry and policy response

Spotify announced support for new DDEX standards requiring AI disclosure in music credits, alongside enhanced spam filtering and impersonation enforcement. Deezer removes fully AI tracks from editorial playlists and algorithmic recommendations. Yet regulatory frameworks lag technological capability, leaving artists exposed as adoption accelerates and platforms develop inconsistent, case-by-case policies. The article concludes that transparent labelling and enforceable standards are essential to protect both creators and listener choice.

AI Emotional Monitoring in the Workplace Raises New Privacy and Ethical Concerns

 

As artificial intelligence becomes more deeply woven into daily life, tools like ChatGPT have already demonstrated how appealing digital emotional support can be. While public discussions have largely focused on the risks of using AI for therapy—particularly for younger or vulnerable users—a quieter trend is unfolding inside workplaces. Increasingly, companies are deploying generative AI systems not just for productivity but to monitor emotional well-being and provide psychological support to employees. 

This shift accelerated after the pandemic reshaped workplaces and normalized remote communication. Now, industries including healthcare, corporate services and HR are turning to software that can identify stress, assess psychological health and respond to emotional distress. Unlike consumer-facing mental wellness apps, these systems sit inside corporate environments, raising questions about power dynamics, privacy boundaries and accountability. 

Some companies initially introduced AI-based counseling tools that mimic therapeutic conversation. Early research suggests people sometimes feel more validated by AI responses than by human interaction. One study found chatbot replies were perceived as equally or more empathetic than responses from licensed therapists. This is largely attributed to predictably supportive responses, lack of judgment and uninterrupted listening—qualities users say make it easier to discuss sensitive topics. 

Yet the workplace context changes everything. Studies show many employees hesitate to use employer-provided mental health tools due to fear that personal disclosures could resurface in performance reviews or influence job security. The concern is not irrational: some AI-powered platforms now go beyond conversation, analyzing emails, Slack messages and virtual meeting behavior to generate emotional profiles. These systems can detect tone shifts, estimate personal stress levels and map emotional trends across departments. 

One example involves workplace platforms using facial analytics to categorize emotional expression and assign wellness scores. While advocates claim this data can help organizations spot burnout and intervene early, critics warn it blurs the line between support and surveillance. The same system designed to offer empathy can simultaneously collect insights that may be used to evaluate morale, predict resignations or inform management decisions. 

Research indicates that constant monitoring can heighten stress rather than reduce it. Workers who know they are being analyzed tend to modulate behavior, speak differently or avoid emotional honesty altogether. The risk of misinterpretation is another concern: existing emotion-tracking models have demonstrated bias against marginalized groups, potentially leading to misread emotional cues and unfair conclusions. 

The growing use of AI-mediated emotional support raises broader organizational questions. If employees trust AI more than managers, what does that imply about leadership? And if AI becomes the primary emotional outlet, what happens to the human relationships workplaces rely on? 

Experts argue that AI can play a positive role, but only when paired with transparent data use policies, strict privacy protections and ethical limits. Ultimately, technology may help supplement emotional care—but it cannot replace the trust, nuance and accountability required to sustain healthy workplace relationships.

IGT Responds to Reports of Significant Ransomware Intrusion

 


A claim by the Russian-linked ransomware group Qilin has raised fresh concerns within the global gaming and gambling industry after the group said it was responsible for a cyber intrusion targeting global gambling giant IGT in recent weeks. 

A dark-web leak site that listed the company on Wednesday stated that the group had exfiltrated around ten gigabytes of internal data, amounting to more than 21,000 files. The posting itself provided few further details. 

The entry is stamped in bright green with the word “Publicated”, a label suggesting that IGT has either not communicated with Qilin or has refused the group’s ransom demands. IGT offers a complete suite of products and services to casinos, retailers, and online operators worldwide, ranging from gaming machines and lottery technology to PlaySports betting platforms and iGaming systems. 

Through this portfolio, IGT supports millions of players every day. The breach has prompted increased scrutiny of the security posture of a leading technology provider and raised questions about the potential impact on its operations and the broader gaming infrastructure it supports. In a recent filing submitted to the Securities and Exchange Commission, International Game Technology (IGT) acknowledged that it is managing a major cyber incident. 

In the filing, IGT confirmed that an unauthorized attempt to access portions of its internal IT systems was detected on November 17. The disclosure notes that the company's incident response procedures were activated immediately after the intrusion. 

These procedures included a number of steps commonly associated with attempts to contain suspected ransomware activities, including taking certain systems offline and engaging external forensic specialists to assist in the investigation. 

While IGT assesses the extent of the disruption, the ransomware group Qilin has listed the company on its dark-web leak portal, claiming that around 10GB of data, or over 21,000 files, has been stolen. Although Qilin has not yet provided proof-of-compromise samples, the group has labeled the archive as published, a term criminals frequently use to indicate that exfiltrated data is now circulating beyond the victim's control. This adds further urgency to IGT's efforts to contain and remediate the incident.

A report from Cybernews says Qilin's leak page also offers a link to an FTP archive believed to contain the full cache of allegedly stolen information, but none of this has been verified, and the details available remain limited. To date, IGT has neither confirmed nor denied the gang's assertions and has not responded to media inquiries seeking clarification. 

As one of the world's biggest gaming companies, IGT offers lottery technology products across more than 100 jurisdictions, including electronic gaming machines, iLottery systems, and sports betting platforms. Its headquarters are in London, with major operations centers in Las Vegas, Rome, and Providence. IGT is the primary technology partner for 26 U.S. lotteries and serves dozens of lottery and casino operators across the country. 

The entire lottery industry has been facing increasing cyber threats; earlier this year, the Ohio Lottery suffered a ransomware attack that disrupted jackpot information, delayed prize claim processing, and exposed sensitive consumer and retailer information. 

Against that backdrop, IGT’s statement to the SEC underscored the company’s commitment to minimizing operational disruption while restoring systems and maintaining transparency with its customers. To ensure service stability while forensic specialists continue their assessment, the company has deployed contingency solutions under its business continuity framework. 

Maintaining trust among lottery operators, casino customers, and millions of daily users is vital as IGT navigates the aftermath of the breach, and the company continues to work to secure that trust as recovery proceeds. The incident also underscores the widening threat landscape facing operators of high-value digital games and lotteries.

To recover fully, IGT will need to reinforce cyber-resilience, accelerate security modernization, and strengthen partnerships with regulators and industry peers. Maintaining transparency, sharing threat intelligence rapidly, and investing in robust incident response capabilities will be crucial not only for restoring confidence, but also for safeguarding interconnected gaming ecosystems from increasingly sophisticated ransomware actors eager to exploit any vulnerability.

PlushDaemon Group Reroutes Software Updates to Deploy Espionage Tools

 



A cyberespionage group known in security research circles as PlushDaemon has been carrying out a long-running operation in which they take advantage of software update systems to secretly install their own tools on targeted computers. According to new analysis by ESET, this group has been active for several years and has repeatedly improved its techniques. Their operations have reached both individuals and organizations across multiple regions, including areas in East Asia, the United States, and Oceania. Victims have included universities, companies that manufacture electronics, and even a major automotive facility located in Cambodia. ESET’s data suggests that this shift toward manipulating software updates has been a consistent part of PlushDaemon’s strategy since at least 2019, which indicates the group has found this method to be reliable and efficient.

The attackers begin by attempting to take control of the network equipment that people rely on for internet connectivity, such as routers or similar devices. They usually exploit security weaknesses that are already publicly known or take advantage of administrators who have left weak passwords unchanged. Once the attackers get access to these devices, they install a custom-built implant researchers call EdgeStepper. This implant is written in the Go programming language and compiled in a format that works comfortably on Linux-based router systems. After deployment, EdgeStepper operates quietly in the background, monitoring how the device handles internet traffic.

What makes this implant dangerous is its ability to interfere with DNS queries. DNS is the system that helps computers find the correct server whenever a user tries to reach a domain name. EdgeStepper watches these requests and checks whether a particular domain is involved in delivering software updates. If EdgeStepper recognizes an update-related domain, it interferes and redirects the request to a server controlled by PlushDaemon. The victim sees no warning sign because the update process appears completely normal. However, instead of downloading a legitimate update from the software provider, the victim unknowingly receives a malicious file from the attackers’ infrastructure.
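Conceptually, the hijack described above amounts to matching update-related domains in DNS queries and answering with an attacker-controlled address. The sketch below is a simplified illustration of that logic only, not EdgeStepper's actual code; the domain names and IP addresses are invented placeholders from documentation ranges:

```python
# Simplified illustration of DNS-level update hijacking.
# Domains and IPs are invented placeholders, not real infrastructure.
UPDATE_DOMAINS = {"update.example-software.com", "dl.example-vendor.net"}
ATTACKER_SERVER = "203.0.113.10"  # RFC 5737 documentation address

def resolve(domain: str, real_lookup) -> str:
    """Answer a DNS query, silently rerouting known update domains."""
    if domain in UPDATE_DOMAINS:
        # The client receives a well-formed answer; only the address
        # differs, so the update check proceeds with no visible warning.
        return ATTACKER_SERVER
    return real_lookup(domain)  # all other traffic passes through

# An update check lands on the attacker's host; normal browsing does not.
upstream = lambda d: "198.51.100.7"
print(resolve("update.example-software.com", upstream))
print(resolve("www.example.org", upstream))
```

Because only a small allowlist of update domains is rerouted while everything else resolves normally, the device keeps working as expected, which is what makes this technique so hard for victims to notice.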

This deceptive update carries the first stage of a layered malware chain. The initial file is a Windows component known as LittleDaemon. It is intentionally disguised as a DLL file to convince the system that it is a harmless library file. Once LittleDaemon runs, it connects to one of the attacker-controlled nodes and downloads the next stage, known as DaemonicLogistics. This second-stage tool is decrypted and executed directly in memory, which makes it more difficult for traditional security products to spot because it avoids writing visible files to disk. DaemonicLogistics is essentially the bridge that loads the final and most important payload.

The last payload is the group’s advanced backdoor, SlowStepper. This backdoor has been documented in earlier incidents, including a case in which users of a South Korean VPN service unknowingly received a trojanized installer from what appeared to be the vendor’s official site. SlowStepper gives the attackers broad access to a compromised machine. It can gather system information, execute various commands, browse and manipulate files, and activate additional spyware tools. Many of these tools are written in Python and are designed to steal browser data, capture keystrokes, and extract stored credentials, giving PlushDaemon a detailed picture of the victim’s activity.

ESET researchers also examined the group’s interference with update traffic for Sogou Pinyin, which is one of the most widely used Chinese input software products. While this example helps illustrate the group’s behavior, the researchers observed similar hijacking patterns affecting other software products as well. This means PlushDaemon is not focused on one specific application but is instead targeting any update system they can manipulate through the network devices they have compromised. Because their technique relies on controlling the network path rather than exploiting a flaw inside the software itself, the group’s approach could be applied to targets anywhere in the world.

The research report includes extensive technical information on every component uncovered in this campaign and offers indicators of compromise for defenders, including associated files, domains, and IP addresses. These findings demonstrate that even a routine process like installing updates can become a highly effective attack vector when network infrastructure is tampered with. The case also reinforces the importance of securing routers and keeping administrator credentials strong, since a compromised device at the network level allows attackers to alter traffic without the user noticing any warning signs.



