
All the recent news you need to know

Hollywood Studios Target AI Video Tool


Hollywood studios are intensifying efforts to curb an "ultra-realistic" AI video generator that produces lifelike clips from simple text prompts. The tool, capable of creating scenes like a fist fight between Tom Cruise and Brad Pitt, has sparked alarm in the entertainment industry over potential job losses and intellectual property misuse. Major players are pushing for regulatory action to protect actors and creators from deepfake disruptions.

The controversy erupted after a viral AI-generated video showcased the tool's prowess, depicting high-profile stars in a convincing brawl that stunned viewers worldwide. Creators behind the technology hail it as innovative, but industry insiders fear it could flood markets with unauthorized content, undermining traditional filmmaking. Hollywood executives have rallied, warning that unchecked AI could "transform or destroy" careers they've built over decades.

Prominent voices in the field have voiced deep concerns. One affected professional noted, "So many people I care about are facing the potential loss of careers they cherish. I myself am at risk." He expressed astonishment at the video's professionalism, shifting from initial nonchalance to genuine apprehension about the industry's future. This reflects broader anxieties as AI blurs lines between real and synthetic media.

Studios are now collaborating on legal strategies, targeting the tool's developers and platforms hosting such content. Discussions include lawsuits for copyright infringement and calls for stricter AI guidelines from governments. While the technology promises creative efficiencies, opponents argue it prioritizes speed over ethical safeguards, potentially devaluing human artistry. Recent viral spreads on social media have amplified the urgency, with calls to remove deceptive videos. 

As AI evolves rapidly, Hollywood's standoff highlights a pivotal clash between innovation and preservation. Balancing advancement with protection will define the sector's resilience amid digital transformation. Stakeholders urge immediate intervention to prevent irreversible damage, positioning this as a landmark battle in the AI era.

Senior Engineers at Spotify Rely on AI Tools Over Direct Code Writing


A long-foreseen confrontation between intelligent machines and human programmers no longer seems theoretical. What was once considered a distant possibility, automation nibbling at the edges of software development, now appears to be playing out at some of the world's most influential technology firms.

As artificial intelligence systems mature from experimental assistants into autonomous collaborators, the concept of writing code is being re-evaluated. Amid accelerating automation and bold predictions about the future of technical work, Spotify has sent one of the clearest signals to date that this shift is operational, not merely conceptual.

Spotify's co-CEO Gustav Söderström has stated that none of the company's best developers have written a single line of code since December. The claim lands amid repeated warnings from industry figures that coding may be losing relevance as a hands-on craft.

These remarks come as Spotify expands its artificial intelligence-driven features, such as Prompted Playlists, Page Match for audiobooks, and About This Song, while simultaneously embedding AI directly into its engineering process.

Elon Musk has gone further, predicting that programming as a profession will largely disappear by 2026. Dramatic as such forecasts sound, the broader industry trajectory suggests they reflect a tangible shift.

Companies such as Anthropic, Google, and Microsoft are increasingly relying on artificial intelligence (AI) to develop and refine complex software. Spotify appears to be part of this movement, with its internal “Honk AI” platform reportedly facilitating significant portions of the development process. 

As part of Spotify's fourth-quarter earnings call, Söderström stressed the importance of AI within Spotify's technical pipeline, pointing out that the company's top engineers have moved away from directly writing code and are now supervising, guiding, and shaping the outputs of intelligent systems. 

During the discussion, executives elaborated on how deeply artificial intelligence is ingrained in Spotify's engineering operations, making the implications of the shift more apparent. Söderström's disclosure was accompanied by a statement highlighting how automation is expediting development across various departments.

Spotify released over 50 new features and updates to its streaming platform during 2025, reflecting what it called a significant improvement in product velocity. Alongside AI-powered Prompted Playlists, Page Match audiobooks, and About This Song, recently launched features demonstrate the company's growing reliance on machine learning to personalize and contextualize the user experience.

In addition to these consumer-facing tools, Spotify has overhauled its in-house engineering. At the core of that overhaul is a platform known as Honk, built on the Claude Code framework and integrated with Slack through a ChatOps workflow.

Using the system, engineers can initiate bug fixes, implement feature changes, and oversee releases using natural language prompts rather than conventional coding interfaces, automating large portions of the build and deployment pipeline. 

According to Söderström, engineers can instruct the AI via Slack during their morning commute to modify the iOS application; once the AI finishes, a revised build is delivered back to the engineer for review and approval, so the change can be deployed to production before the workday officially begins. Spotify credits this architecture with reducing friction between ideation and release and significantly shortening development timelines, and regards the approach as a preliminary step rather than a final destination in a broader AI-driven evolution.
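The prompt-to-build loop described above can be sketched as a minimal ChatOps-style dispatcher. Everything here is hypothetical illustration: Spotify's Honk platform is proprietary and its interface is not public, so the command categories, function names, and the review-record shape below are assumptions, not the actual system.

```python
# Minimal ChatOps-style dispatcher: maps a free-text chat prompt to a
# pipeline action. All names are hypothetical illustrations; the real
# Honk platform's interface is not public.
import re

def classify_prompt(prompt: str) -> str:
    """Map a free-text engineering request to a pipeline action."""
    text = prompt.lower()
    if re.search(r"\bfix\b|\bbug\b", text):
        return "bugfix"
    if re.search(r"\brelease\b|\bdeploy\b", text):
        return "release"
    if re.search(r"\bfeature\b|\badd\b|\bchange\b", text):
        return "feature"
    return "triage"

def handle_message(prompt: str) -> dict:
    """Simulate the round trip: prompt in, reviewable artifact out."""
    action = classify_prompt(prompt)
    # In a real system this would enqueue an AI agent run and a CI build;
    # here we just return a record an engineer could review and approve.
    return {"action": action, "status": "awaiting_review", "prompt": prompt}

result = handle_message("Fix the crash in the iOS settings screen")
print(result["action"], result["status"])
```

The key design point matches the article: the human stays in the loop as reviewer and approver, with the automated step producing an artifact rather than deploying on its own.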

A company executive highlighted what the company views as a competitive advantage, which consists of a proprietary dataset rooted in music behavior, taste preferences, and contextual listening signals that is difficult for general-purpose language models to replicate or commoditize.

Spotify believes its data foundation allows it to extend AI capabilities beyond traditional knowledge retrieval to nuanced, experience-driven domains, such as music discovery and interpretation, where the answers are often subjective rather than factual. As a result of these developments, engineers are less likely to be replaced than re-calibrated. 

Increasingly, generative systems assume the responsibility for syntax, scaffolding, and execution, thereby shifting the focus of software development toward architectural judgment, system thinking, data stewardship, and rigorous supervision. 

Technology leaders must now expand their agenda beyond adoption to governance: establishing validation frameworks, security guardrails, and accountability structures in order to ensure AI-accelerated output meets production-grade requirements. 

Rather than competing against intelligent systems line by line, engineers' competitive advantage will increasingly lie in their ability to orchestrate them. Coding will be defined not by keystrokes but by how effectively humans create, constrain, and direct the machines that write the code.

U.S. Justice Department Seizes $61 Million in Tether Linked to ‘Pig Butchering’ Crypto Scams


The U.S. Department of Justice (DoJ) has revealed that it seized approximately $61 million in Tether connected to fraudulent cryptocurrency operations commonly referred to as “pig butchering” scams.

According to the department, investigators traced the confiscated digital assets to wallet addresses allegedly used to launder funds obtained through cryptocurrency investment fraud schemes. The stolen proceeds were reportedly siphoned from victims who were manipulated into investing in fake platforms promising lucrative returns.

"Criminal actors and professional money launderers use cyber-enabled fraud schemes to swindle their victims and conceal their ill-gotten gains," said HSI Charlotte Acting Special Agent in Charge Kyle D. Burns.

"HSI special agents work diligently to trace the illicit proceeds of crime across the globe to disrupt and dismantle the transnational criminal organizations that seek to defraud hardworking Americans."

Authorities explained that these schemes typically begin with scammers initiating contact through dating platforms or social media messaging applications. The perpetrators build trust by posing as romantic interests or financial advisors before persuading victims to invest in fabricated cryptocurrency opportunities.

Officials further noted that many of these operations are allegedly run from scam compounds based primarily in Southeast Asia. Individuals trafficked under false promises of well-paying jobs are reportedly forced to participate in the schemes. Their passports are confiscated, and they are coerced into deceiving targets online under threats of severe punishment.

Victims are directed to professional-looking but fraudulent investment websites that display falsified portfolios and exaggerated profits. These manipulated dashboards are designed to encourage larger investments. When victims attempt to withdraw their funds, they are often told to pay additional “fees,” resulting in further financial losses.

"Once the victims' money transferred to a cryptocurrency wallet under the scammers’ control, the crooks quickly routed that money through many other wallets to hide the nature, source, control, and ownership of that stolen money," the department added.
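The wallet-hopping the department describes is, at its simplest, a graph problem: investigators follow outgoing transfers from a victim's deposit address across successive wallets. The sketch below uses a fabricated toy transfer list, not real on-chain data, and real blockchain tracing is far more involved; it only illustrates the breadth-first walk.

```python
# Toy illustration of tracing funds routed through many wallets:
# follow outgoing transfer edges from a starting address breadth-first.
# The transfer data is fabricated for illustration only.
from collections import deque

# Hypothetical transfer edges: (from_wallet, to_wallet)
TRANSFERS = [
    ("victim_wallet", "hop1"),
    ("hop1", "hop2"),
    ("hop1", "hop3"),
    ("hop3", "cashout_exchange"),
]

def trace(start, transfers):
    """Return every wallet reachable from `start` via transfer edges."""
    outgoing = {}
    for src, dst in transfers:
        outgoing.setdefault(src, []).append(dst)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in outgoing.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(sorted(trace("victim_wallet", TRANSFERS)))
# prints ['cashout_exchange', 'hop1', 'hop2', 'hop3']
```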

In a related statement, Tether disclosed that it has frozen roughly $4.2 billion in assets tied to unlawful activities so far. The company said that nearly $250 million of that amount has been linked to scam networks since June 2025.

The seizure marks one of the larger enforcement actions targeting cryptocurrency-enabled fraud and reflects ongoing efforts by U.S. authorities to disrupt global cybercrime syndicates exploiting digital assets.

Crazy Ransomware Gang Abuses Net Monitor and SimpleHelp for Stealthy Network Persistence


Not long ago, security analysts at Huntress spotted an actor tied to the Crazy ransomware group using standard employee-surveillance and remote-assistance programs. The attacker relied on common system tools, not custom malware, to stay hidden within company networks, moving quietly through digital environments already familiar to IT teams. What stands out is how ordinary software became part of a stealthy buildup toward data encryption: behind the scenes, the attackers mimicked routine maintenance tasks to avoid suspicion, skipping complex hacking tricks in favor of blending in. Over time, such tactics make detection harder, since alerts resemble routine activity; rather than breaking in, the intruders act like insiders who belong. This approach has recently become more frequent across cybercrime operations, with normal-looking tool usage masking malicious goals deep inside infrastructure.

Throughout several cases reviewed by Huntress, Net Monitor for Employees Professional appeared next to SimpleHelp’s remote access software. Using both together let attackers maintain ongoing, hands-on access to affected machines. This pairing lowered their chances of setting off detection mechanisms. Each tool played a role in staying under the radar. 

A single instance involved deployment of the surveillance software through Windows Installer by running msiexec.exe, enabling the adversaries to pull the agent straight from the official provider's site. Once active, it offered complete remote screen access alongside command execution, file transfer, and live observation of machine activity, delivering control comparable to administrator privileges on compromised devices.

To tighten their hold, the hackers tried turning on the default admin account via "net user administrator /active:yes." Another layer came when they pulled down SimpleHelp using PowerShell scripts. Files were hidden under names that looked real - some copied Visual Studio’s vshost.exe pattern. Others posed as OneDrive components, tucked inside folders like ProgramData. Despite detection of a single remote component, operations persisted due to multiple deployment layers. 

Occasionally, the SimpleHelp executable appeared under altered names, mimicking standard corporate software files. Observed by analysts, these changes helped it evade immediate recognition. At times, Huntress noticed efforts aimed at weakening Microsoft Defender - achieved by halting and removing related system services - to limit detection on infected devices. One breach showed attackers setting up alert triggers inside SimpleHelp, activated whenever machines reached sites tied to digital currency storage or trading. 
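One common defensive counter to this kind of masquerading is to compare an executable's name against where that software is normally installed: a binary named like a known product but sitting in an unusual directory (such as ProgramData) is suspect. The sketch below is illustrative only; the name-to-expected-location table is a simplified assumption, not a vetted detection rule.

```python
# Illustrative masquerade check: flag executables whose file names imitate
# well-known software but which live outside that software's usual install
# path. The expected-location table is a simplified assumption, not a
# production detection rule.
from pathlib import PureWindowsPath

# Hypothetical watchlist: lowercase file name -> expected directory fragments.
EXPECTED_DIRS = {
    "vshost.exe": ["microsoft visual studio"],
    "onedrive.exe": ["microsoft\\onedrive"],
}

def flag_masquerades(observed_paths):
    """Return paths whose file name is on the watchlist but whose
    directory does not match any expected location."""
    flagged = []
    for raw in observed_paths:
        p = PureWindowsPath(raw)
        name = p.name.lower()
        if name not in EXPECTED_DIRS:
            continue
        parent = str(p.parent).lower()
        if not any(frag in parent for frag in EXPECTED_DIRS[name]):
            flagged.append(raw)
    return flagged

hits = flag_masquerades([
    r"C:\ProgramData\OneDrive\vshost.exe",  # wrong home for vshost.exe
    r"C:\Users\a\AppData\Local\Microsoft\OneDrive\OneDrive.exe",  # expected
])
print(hits)
```

Real endpoint products extend this idea with code-signing checks and file hashes, since a path match alone proves nothing about the binary's contents.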

These triggers watched for terms linked to wallet providers, exchange portals, blockchain lookup tools, and online payment systems. Elsewhere, the surveillance tool logged mentions of remote access software like RDP, AnyDesk, TeamViewer, UltraViewer, and VNC, possibly to spot signs of IT staff or security teams logging into affected endpoints. Despite just a single confirmed instance leading to Crazy ransomware activation, Huntress identified shared command servers and repeated file names like “vhost.exe.” These similarities point toward one actor behind both breaches. 

Notably, infrastructure links emerged across the incidents. Although only one attack led to confirmed ransomware impact, consistent execution methods, closely matching file artifacts, and reuse of the same tools and command infrastructure tie the events together, even though execution timing varied. Both security incidents also traced back to stolen SSL VPN login details, showing how shaky remote entry points can open doors.

Instead of assuming safety, defenders should watch for odd patterns, such as trusted remote-management software appearing without warning, since attackers increasingly twist normal tools into stealthy weapons. Requiring extra verification steps for every remote login helps keep stolen passwords from being useful, and because hackers now blend in using common management programs, closely watching network behavior and limiting who can reach key systems remains essential for company security.

AI Coding Platform Orchids Exposed to Zero-Click Hack in BBC Security Test

A BBC journalist has demonstrated an unresolved cybersecurity weakness in an artificial intelligence coding platform that is rapidly gaining users.

The tool, called Orchids, belongs to a new category often referred to as “vibe-coding.” These services allow individuals without programming training to create software by describing what they want in plain language. The system then writes and executes the code automatically. In recent months, platforms like this have surged in popularity and are frequently presented as examples of how AI could reshape professional work by making development faster and cheaper.

Yet the same automation that makes these tools attractive may also introduce new forms of exposure.

Orchids states that it has around one million users and says major technology companies such as Google, Uber, and Amazon use its services. It has also received strong ratings from software review groups, including App Bench. The company is headquartered in San Francisco, was founded in 2025, and publicly lists a team of fewer than ten employees. The BBC said it contacted the firm multiple times for comment but did not receive a response before publication.

The vulnerability was demonstrated by cybersecurity researcher Etizaz Mohsin, who has previously uncovered software flaws, including issues connected to surveillance tools such as Pegasus. Mohsin said he discovered the weakness in December 2025 while experimenting with AI-assisted coding. He reported attempting to alert Orchids through email, LinkedIn, and Discord over several weeks. According to the BBC, the company later replied that the warnings may have been overlooked due to a high volume of incoming messages.

To test the flaw, a BBC reporter installed the Orchids desktop application on a spare laptop and asked it to generate a simple computer game modeled on a news website. As the AI produced thousands of lines of code on screen, Mohsin exploited a security gap that allowed him to access the project remotely. He was able to view and modify the code without the journalist’s knowledge.

At one point, he inserted a short hidden instruction into the project. Soon after, a text file appeared on the reporter’s desktop stating that the system had been breached, and the device’s wallpaper changed to an image depicting an AI-themed hacker. The experiment showed that an outsider could potentially gain control of a machine running the software.

Such access could allow an attacker to install malicious programs, extract private corporate or financial information, review browsing activity, or activate cameras and microphones. Unlike many common cyberattacks, this method did not require the victim to click a link, download a file, or enter login details. Security professionals refer to this technique as a zero-click attack.

Mohsin said the rise of AI-driven coding assistants represents a shift in how software is built and managed, creating new categories of technical risk. He added that delegating broad system permissions to AI agents carries consequences that are not yet fully understood.

Although Mohsin said he has not identified the same flaw in other AI coding tools such as Claude Code, Cursor, Windsurf, or Lovable, cybersecurity academics urge caution. Kevin Curran, a professor at Ulster University, noted that software created without structured review and documentation may be more vulnerable under attack.

The discussion extends beyond coding platforms. AI agents designed to perform tasks directly on a user’s device are becoming more common. One recent example is Clawbot, also known as Moltbot or Open Claw, which can send messages or manage calendars with minimal human input and has reportedly been downloaded widely.

Karolis Arbaciauskas, head of product at NordPass, warned that granting such systems unrestricted access to personal devices can expose users to serious risks. He advised running experimental AI tools on separate machines and using temporary accounts to limit potential damage.

Russia Blocks WhatsApp, Pushes State Surveillance App


Russia has effectively erased WhatsApp from its internet, impacting up to 100 million users in a bold move by regulator Roskomnadzor. On Wednesday, the app was removed from the national directory, severing access without prior slowdown warnings, as reported by the Financial Times and Gizmodo. WhatsApp condemned this as an attempt to force users onto a "state-owned surveillance app," highlighting the isolation of millions from secure communication. 

This crackdown escalates Russia's long-running battle against foreign messaging services amid its push for digital sovereignty. Restrictions began in August 2025 with blocks on voice and video calls, citing WhatsApp's failure to aid fraud and terrorism probes. Courts fined the Meta-owned app repeatedly for not removing banned content or opening a local office; by December, speeds dropped 70%, but full removal came after ongoing non-compliance. Telegram faced similar cuts this week, leaving Russians scrambling.

Enter Max, VK's 2025-launched "superapp" modeled on China's WeChat, now aggressively promoted as the national alternative. Preinstalled on devices and endorsed by celebrities and educators, it offers chats, video calls, file sharing up to 4GB, payments via Russia's Faster Payment System, and government services like digital IDs and e-signatures. Unlike WhatsApp's encryption, Max mandates activity sharing with authorities and lacks apparent privacy safeguards, per The Insider. 

The Kremlin justifies the ban as protecting citizens from scams and terrorism while achieving tech independence under sanctions. Spokesman Dmitry Peskov cited Meta's refusal to follow Russian law, though WhatsApp could return via compliance talks. Critics see it as sweeping speech suppression, building on the post-2022 Ukraine-invasion censorship that Amnesty International has labeled "unprecedented." Yet past efforts, like the failed 2018 Telegram block, exposed the limits of the regime's reach.

Users are turning to VPNs or rivals, but Max's rise could cement state surveillance in daily life. This mirrors global trends—France pushes local apps, and Meta faces U.S. spying claims—but Russia's unencrypted alternative raises alarms for privacy. As Putin eyes indefinite rule, such controls signal deepening authoritarianism, forcing 100 million into monitored chats.
