Latest News


All the recent news you need to know

Banking Malware Can Hack Communications via Encrypted Apps


Sturnus hacks communications

A new Android banking malware dubbed Sturnus can capture conversations in their entirety from encrypted messaging apps such as Signal, WhatsApp, and Telegram, and can take complete control of the device.

While still under development, the malware is already fully functional and has been programmed to target accounts at financial institutions across Europe using "region-specific overlay templates."

Attack tactic 

Sturnus uses a combination of plaintext, RSA, and AES-encrypted communication with the command-and-control (C2) server, making it a more sophisticated threat than existing Android malware families.
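To make that combination concrete, the following is a generic hybrid-encryption sketch in Python (using the third-party cryptography package). It shows the common pattern of wrapping a fresh AES session key with RSA so that only the private-key holder can read the bulk traffic; it is an illustration of the general technique, not a reconstruction of Sturnus's actual protocol.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Key pair held by whoever operates the receiving server.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# A fresh AES session key protects the bulk traffic for this session.
session_key = AESGCM.generate_key(bit_length=256)

# RSA-OAEP wraps the session key so only the private-key holder can recover it.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# AES-GCM encrypts the actual payload with the session key.
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"example payload", None)

# Receiving side: unwrap the session key, then decrypt the payload.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"example payload"
```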

Sturnus can steal messages from secure messaging apps after they are decrypted on the device by capturing content directly from the screen, according to research from online fraud prevention and threat intelligence firm ThreatFabric. The malware can also harvest banking credentials using HTML overlays and supports full, real-time remote access through a VNC session.

Malware distribution 

The researchers have not determined how the malware is distributed, but they consider malvertising and direct messages to victims plausible delivery methods. Once installed, the malware connects to the C2 server and registers the device through a cryptographic exchange.

For instructions and data exfiltration, it creates an encrypted HTTPS connection; for real-time VNC operations and live monitoring, it opens an AES-encrypted WebSocket channel. By abusing the device's Accessibility services, Sturnus can read on-screen text, record the victim's inputs, inspect the UI structure, detect app launches, press buttons, scroll, inject text, and navigate the phone.

To gain full control of the device, Sturnus obtains Android Device Administrator privileges, which let it monitor password changes and unlock attempts and lock the device remotely. The malware also tries to prevent the user from revoking its privileges or uninstalling it. When the victim opens WhatsApp, Telegram, or Signal, Sturnus uses its permissions to capture message content, typed text, contact names, and entire conversations.
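For readers who want to review their own devices, the short sketch below is an illustrative inspection aid (it assumes the Android platform tools are installed and a handset is connected with USB debugging enabled). It lists which apps currently hold Accessibility and Device Administrator privileges, the two permission classes the research says Sturnus abuses; any entry you do not recognise deserves a closer look.

```python
import subprocess

def adb_shell(command: str) -> str:
    """Run a shell command on the connected device via adb and return its output."""
    result = subprocess.run(
        ["adb", "shell", command], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

# Enabled accessibility services: a colon-separated list of component names
# (or "null" when none are enabled).
accessibility = adb_shell("settings get secure enabled_accessibility_services")
print("Accessibility services:")
for component in filter(None, accessibility.replace("null", "").split(":")):
    print("  ", component)

# Active device administrators appear in the device_policy dump.
policy_dump = adb_shell("dumpsys device_policy")
print("Device admin entries:")
for line in policy_dump.splitlines():
    if "admin" in line.lower():
        print("  ", line.strip())
```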

Nvidia’s Strong Earnings Ease AI Bubble Fears Despite Market Volatility

 

Nvidia (NVDA) delivered a highly anticipated earnings report, and the AI semiconductor leader lived up to expectations.

“These results and commentary should help steady the ship for the AI trade into the end of the year,” Jefferies analysts wrote in a Thursday note.

The company’s late-Wednesday announcement arrived at a critical moment for the broader AI-driven market rally. Over the past few weeks, debate around whether AI valuations have entered bubble territory has intensified, fueled by concerns over massive data-center investments, the durability of AI infrastructure, and uncertainty around commercial adoption.

Thursday’s market swings showed just how unresolved the conversation remains. The Nasdaq Composite surged more than 2% early in the day, only to reverse course and fall nearly 2% by afternoon. Nvidia shares followed a similar pattern—after climbing 5% in the morning, the stock later slipped almost 3%.

Still, Nvidia’s exceptional performance provided some reassurance to investors worried about overheating in the AI sector.

The company reported that quarterly revenue jumped 62% to $57 billion, with expectations for current-quarter sales to reach $65 billion. Margins also improved, and Nvidia projected gross margins would expand further to nearly 75% in the coming quarter.

“Bubbles are irrational, with prices rising despite weaker fundamentals. Nvidia’s numbers show that fundamentals are still strong,” said David Russell, Global Head of Market Strategy at TradeStation.

Executives also addressed long-standing questions about AI profitability, return on investment, and the useful life of AI infrastructure during the earnings call.

CEO Jensen Huang highlighted the broad scope of industries adopting Nvidia hardware, pointing to Meta’s (META) rising ad conversions as evidence that “transitioning to generative AI represents substantial revenue gains for hyperscalers.”

CFO Colette Kress also reassured investors about hardware longevity, stating, “Thanks to CUDA, the A100 GPUs we shipped six years ago are still running at full utilization today.”

Her remarks appeared to indirectly counter claims from hedge fund manager Michael Burry, who recently suggested that tech firms were extending the assumed lifespan of GPUs to downplay data-center costs.

Most analysts responded positively to the report.

“On these numbers, it is very hard to see how this stock does not keep moving higher from here,” UBS analysts wrote. “Ultimately, the AI infrastructure tide is still rising so fast that all boats will be lifted,” they added.

However, not everyone is convinced that the concerns fueling the AI bubble debate have been resolved.

“The AI bubble debate has never been about whether or not NVIDIA can sell chips,” said Julius Franck, co-founder of Vertus. “Their outstanding results do not address the elephant in the room: will the customers buying all this hardware ever make money from it?”

Others suggested that investor scrutiny may only increase from here.

“Many of the risks now worrying investors, like heavy spending and asset depreciation, are real,” noted TradeStation's Russell. “We may see continued weakness in the shares of companies taking on debt to build data centers, even as the boom continues.”

Your Phone Is Being Tracked in Ways You Can’t See: One Click Shows the Truth

 



Many people believe they are safe online once they disable cookies, switch on private browsing, or limit app permissions. Yet these steps do not prevent one of the most persistent tracking techniques used today. Modern devices reveal enough technical information for websites to recognise them with surprising accuracy, and users can see this for themselves with a single click using publicly available testing tools.

This practice is known as device fingerprinting. It collects many small and unrelated pieces of information from your phone or computer, such as the type of browser you use, your display size, system settings, language preferences, installed components, and how your device handles certain functions. None of these details identify you directly, but when a large number of them are combined, they create a pattern that is specific to your device. This allows trackers to follow your activity across different sites, even when you try to browse discreetly.
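To illustrate the idea, here is a minimal conceptual sketch in Python. The attribute values are made-up stand-ins for the kind of data a tracking script reads from a browser; the point is that hashing many weak signals together produces an identifier that is stable for one device and rare across devices.

```python
import hashlib

# Illustrative stand-ins for the signals a tracking script would read
# from the browser (user agent, screen, locale, fonts, and so on).
attributes = {
    "user_agent": "Mozilla/5.0 (Linux; Android 14; Pixel 8) ...",
    "screen": "1080x2400",
    "color_depth": "24",
    "timezone": "Europe/Berlin",
    "languages": "de-DE,de,en-US",
    "platform": "Linux armv8l",
    "touch_points": "5",
    "fonts_sample": "Roboto, Noto Sans, Droid Sans",
}

# Serialise the attributes in a fixed order and hash the result.
canonical = "|".join(f"{key}={value}" for key, value in sorted(attributes.items()))
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()

print(f"{len(attributes)} harmless-looking attributes -> {fingerprint[:16]}...")
# An unchanged device keeps producing the same hash across sites and
# sessions, while almost any other device produces a different one.
```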

The risk is not just about being observed. Once a fingerprint becomes associated with a single real-world action, such as logging into an account or visiting a page tied to your identity, that unique pattern can then be connected back to you. From that point onward, any online activity linked to that fingerprint can be tied to the same person. This makes fingerprinting an effective tool for profiling behaviour over long periods of time.

Growing concerns around online anonymity are making this issue more visible. Recent public debates about identity checks, age verification rules, and expanded monitoring of online behaviour have already placed digital privacy under pressure. Fingerprinting adds an additional layer of background tracking that does not rely on traditional cookies and cannot be easily switched off.

This method has also spread far beyond web browsers. Many internet-connected devices, including smart televisions and gaming systems, can reveal similar sets of technical signals that help build a recognisable device profile. As more home electronics become connected, these identifiers grow even harder for users to avoid.

Users can test their own exposure through tools such as the Electronic Frontier Foundation’s browser evaluation page. By selecting the option to analyse your browser, you will either receive a notice that your setup looks common or that it appears unique compared to others tested. A unique result means your device stands out strongly among the sample and can likely be recognised again. Another testing platform demonstrates just how many technical signals a website can collect within seconds, listing dozens of attributes that contribute to a fingerprint.

Some browsers attempt to make fingerprinting more difficult by randomising certain data points or limiting access to high-risk identifiers. These protections reduce the accuracy of device recognition, although they cannot completely prevent it. A virtual private network can hide your network address, but it cannot block the internal characteristics that form a fingerprint.
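Following on from the sketch above, the toy example below shows, under the same simplified model, why those protections help: generalising an attribute makes different devices collide on the same value, and randomising a noisy attribute stops one device from producing a stable, linkable hash. Real browser defences are considerably more sophisticated than this.

```python
import hashlib
import random

def fingerprint(attrs: dict) -> str:
    """Hash a fixed-order serialisation of the attributes, as in the sketch above."""
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attrs.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device_a = {"screen": "1080x2400", "timezone": "Europe/Berlin", "canvas": "a91f03"}
device_b = {"screen": "1080x2408", "timezone": "Europe/Berlin", "canvas": "a91f03"}

# Without protection, the slightly different screen heights keep the devices apart.
print(fingerprint(device_a) == fingerprint(device_b))  # False

# Generalisation: report a coarse bucket instead of the exact value, so they collide.
def coarsen(attrs: dict) -> dict:
    width, height = map(int, attrs["screen"].split("x"))
    return dict(attrs, screen=f"{width}x{round(height, -2)}")

print(fingerprint(coarsen(device_a)) == fingerprint(coarsen(device_b)))  # True

# Randomisation: perturb a noisy attribute each session, so the same device
# no longer yields one stable identifier that can be tracked over time.
for session in range(3):
    noisy = dict(device_a, canvas=f"{random.getrandbits(24):06x}")
    print("session", session, fingerprint(noisy))
```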

Tracking also happens through mobile apps and background services. Many applications collect usage and technical data, and privacy labels do not always make this clear to users. Studies have shown that complex privacy settings and permission structures often leave people unaware of how much information their devices share.

Users should also be aware of design features that shift them out of protected environments. For example, when performing a search through a mobile browser, some pages include prompts that encourage the user to open a separate application instead of continuing in the browser. These buttons are typically placed near navigation controls, making accidental taps more likely. Moving into a dedicated search app places users in a different data-collection environment, where protections offered by the browser may no longer apply.

While there is no complete way to avoid fingerprinting, users can limit their exposure by choosing browsers with built-in privacy protections, reviewing app permissions frequently, and avoiding unnecessary redirections into external applications. Ultimately, the choice depends on how much value an individual places on privacy, but understanding how this technology works is the first step toward reducing risk.

CrowdStrike Fires Insider Who Leaked Internal Screenshots to Hacker Groups, Says No Customer Data Was Breached

 

American cybersecurity company CrowdStrike has confirmed that screenshots taken from its internal systems were shared with hacker groups by a now-terminated employee. 

The disclosure follows the appearance of the screenshots on Telegram, posted by the cybercrime collective known as Scattered Lapsus$ Hunters. 

In a statement to BleepingComputer, a CrowdStrike spokesperson said the company’s security was not compromised as a result of the insider activity and that customers remained fully protected. According to the spokesperson, the employee in question was identified during an internal investigation last month. 

The individual was later terminated and the matter has been reported to law enforcement. CrowdStrike did not clarify which threat group was behind the leak or what drove the employee to share sensitive images. 

However, the company offered the statement after BleepingComputer reached out regarding screenshots of CrowdStrike systems circulating on Telegram. Those screenshots were posted by members of ShinyHunters, Scattered Spider, and the Lapsus$ group, who now operate collectively under the name Scattered Lapsus$ Hunters. ShinyHunters claimed to BleepingComputer that they paid the insider $25,000 for access to CrowdStrike's network.

The threat actors claimed they received SSO authentication cookies, but CrowdStrike had already detected the suspicious activity and revoked the employee’s access. 

The group also claimed it attempted to buy internal CrowdStrike reports on ShinyHunters and Scattered Spider but never received them. 

Scattered Lapsus$ Hunters have been responsible for a large-scale extortion campaign against companies using Salesforce. Since the beginning of the year, the group has launched voice phishing attacks to breach Salesforce customers. Their list of known or claimed victims includes Google, Cisco, Allianz Life, Farmers Insurance, Qantas, Adidas, Workday, and luxury brands under LVMH such as Dior, Louis Vuitton, and Tiffany & Co. 

They have also attempted to extort numerous high-profile organizations including FedEx, Disney, McDonald’s, Marriott, Home Depot, UPS, Chanel, and IKEA. 

The group has previously claimed responsibility for a major breach at Jaguar Land Rover that exposed sensitive data and disrupted operations, resulting in losses estimated at more than 196 million pounds. 

Most recently, ShinyHunters asserted that over 280 companies were affected in a new wave of Salesforce-related data theft. Among the names mentioned were LinkedIn, GitLab, Atlassian, Verizon, and DocuSign. 

However, DocuSign has denied being breached, stating that internal investigations have shown no evidence of compromise.

Streaming Platforms Face AI Music Detection Crisis

 

Distinguishing AI-generated music from human compositions has become extraordinarily challenging as generative models improve, raising urgent questions about detection, transparency, and industry safeguards. This article explores why even trained listeners struggle to identify machine-made tracks and what technical, cultural, and regulatory responses are emerging.

Why detection is so difficult

Modern AI music systems produce outputs that blend seamlessly into mainstream genres, especially pop and electronic styles already dominated by digital production. Traditional warning signs—slightly slurred vocals, unnatural consonant pronunciation, or "ghost" harmonies that appear and vanish unpredictably—remain only hints rather than definitive proof, and these tells fade as models advance. Producers emphasize that AI recognizes patterns but lacks the emotional depth and personal narratives behind human creativity, yet casual listeners find these distinctions nearly impossible to hear.

Technical solutions and limits

Streaming platform Deezer launched an AI detection tool in January 2024 and introduced visible tagging for fully AI-generated tracks by summer, reporting that over one-third of daily uploads—approximately 50,000 tracks—are now entirely machine-made. The company's research director noted initial detection volumes were so high they suspected a system error. Deezer claims detection accuracy exceeds 99.8 percent by identifying subtle audio artifacts left by generative models, with minimal false positives. However, critics warn that watermarking schemes can be stripped through basic audio processing, and no universal standard yet exists across platforms.

Economic and ethical implications

Undisclosed AI music floods catalogues, distorts recommendation algorithms, and crowds out human artists, potentially driving down streaming payouts. Training data disputes compound the problem: many AI systems learn from copyrighted recordings without consent or compensation, sparking legal battles over ownership and moral rights. Survey data shows 80 percent of listeners want mandatory labelling for fully AI-generated tracks, and three-quarters prefer platforms to flag AI recommendations.

Industry and policy response

Spotify announced support for new DDEX standards requiring AI disclosure in music credits, alongside enhanced spam filtering and impersonation enforcement. Deezer removes fully AI tracks from editorial playlists and algorithmic recommendations. Yet regulatory frameworks lag technological capability, leaving artists exposed as adoption accelerates and platforms develop inconsistent, case-by-case policies. The article concludes that transparent labelling and enforceable standards are essential to protect both creators and listener choice.

AI Emotional Monitoring in the Workplace Raises New Privacy and Ethical Concerns

 

As artificial intelligence becomes more deeply woven into daily life, tools like ChatGPT have already demonstrated how appealing digital emotional support can be. While public discussions have largely focused on the risks of using AI for therapy—particularly for younger or vulnerable users—a quieter trend is unfolding inside workplaces. Increasingly, companies are deploying generative AI systems not just for productivity but to monitor emotional well-being and provide psychological support to employees. 

This shift accelerated after the pandemic reshaped workplaces and normalized remote communication. Now, industries including healthcare, corporate services and HR are turning to software that can identify stress, assess psychological health and respond to emotional distress. Unlike consumer-facing mental wellness apps, these systems sit inside corporate environments, raising questions about power dynamics, privacy boundaries and accountability. 

Some companies initially introduced AI-based counseling tools that mimic therapeutic conversation. Early research suggests people sometimes feel more validated by AI responses than by human interaction. One study found chatbot replies were perceived as equally or more empathetic than responses from licensed therapists. This is largely attributed to predictably supportive responses, lack of judgment and uninterrupted listening—qualities users say make it easier to discuss sensitive topics. 

Yet the workplace context changes everything. Studies show many employees hesitate to use employer-provided mental health tools due to fear that personal disclosures could resurface in performance reviews or influence job security. The concern is not irrational: some AI-powered platforms now go beyond conversation, analyzing emails, Slack messages and virtual meeting behavior to generate emotional profiles. These systems can detect tone shifts, estimate personal stress levels and map emotional trends across departments. 
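As a deliberately toy illustration of the aggregation step (hypothetical: commercial platforms use trained models and far richer signals, not keyword lists), the sketch below averages crude per-message sentiment scores by person and week, which is already enough to produce the kind of "mood trend" described above.

```python
from collections import defaultdict
from statistics import mean

NEGATIVE = {"overwhelmed", "exhausted", "frustrated", "stressed", "behind"}
POSITIVE = {"great", "excited", "thanks", "happy", "progress"}

def score(message: str) -> int:
    """Naive sentiment score: positive keywords minus negative keywords."""
    words = set(message.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

# (author, iso_week, message) triples standing in for harvested chat traffic.
messages = [
    ("alice", "2025-W01", "Great progress on the rollout, thanks everyone"),
    ("alice", "2025-W02", "Feeling exhausted and behind on everything"),
    ("alice", "2025-W02", "Honestly pretty frustrated with the tooling"),
]

weekly = defaultdict(list)
for author, week, text in messages:
    weekly[(author, week)].append(score(text))

for (author, week), scores in sorted(weekly.items()):
    print(author, week, round(mean(scores), 2))
# Even this crude average yields a per-person "mood trend" over time,
# which is exactly why such data is sensitive and easy to misread.
```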

One example involves workplace platforms using facial analytics to categorize emotional expression and assign wellness scores. While advocates claim this data can help organizations spot burnout and intervene early, critics warn it blurs the line between support and surveillance. The same system designed to offer empathy can simultaneously collect insights that may be used to evaluate morale, predict resignations or inform management decisions. 

Research indicates that constant monitoring can heighten stress rather than reduce it. Workers who know they are being analyzed tend to modulate behavior, speak differently or avoid emotional honesty altogether. The risk of misinterpretation is another concern: existing emotion-tracking models have demonstrated bias against marginalized groups, potentially leading to misread emotional cues and unfair conclusions. 

The growing use of AI-mediated emotional support raises broader organizational questions. If employees trust AI more than managers, what does that imply about leadership? And if AI becomes the primary emotional outlet, what happens to the human relationships workplaces rely on? 

Experts argue that AI can play a positive role, but only when paired with transparent data use policies, strict privacy protections and ethical limits. Ultimately, technology may help supplement emotional care—but it cannot replace the trust, nuance and accountability required to sustain healthy workplace relationships.
