Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Technology. Show all posts

Indian Teen Enables Apple-Exclusive AirPods Features on Android


 As Apple's AirPods have long been known, they offer a wide range of intelligent features, such as seamless device switching, adaptive noise control, and detailed battery indicators, but only if they are paired with an iPhone. This has left Android users with little more than basic audio functions, despite the fact that they are available to Android users. 


It is now being challenged by an 18-year-old developer from Gurugram, who is regarded as an intentional reinforcement of Apple's closed ecosystem. The latest creation from Kavish Devar, LibrePods, is a significant breakthrough in the field of mobile devices: an open-source, completely free tool designed to replicate the experience of AirPods on Android or even Linux systems with striking accuracy. 

LibrePods removes the limitations previously accepted by Apple that restricted the full potential of AirPods outside Apple's ecosystem, enabling the earbuds to perform almost identically to the way they perform when paired with Apple's iOS devices. With this upgrade, Android users who rely on AirPods will experience a markedly enhanced and seamless user experience, which will include core functionalities, polished integration, and an unexpectedly familiar fluidity that will surprise them. 

The earlier efforts of the community, including OpenPods and MaterialPods, provided limited capabilities, including battery readings, but LibrePods goes a much further than these. With its near-complete control suite, Android users can quickly and easily access the functions normally reserved for Apple devices, effectively narrowing a gap that has existed for many years among Android devices. 

During his high school years, Devar is still a self-taught programmer who developed LibrePods after studying earlier attempts at improving Android users such as OpenPods and MaterialPods, both of whom provided very limited improvements. 

A much more ambitious approach is taken by his project, according to the detailed notes on its GitHub page. As it enables Apple to unlock AirPods' otherwise exclusive features on non-Apple platforms, LibrePods was designed to achieve this purpose. Among the features offered by Apple are noise-control features, adaptive transparency, hearing-assistance functions, ear-detection, personalized transparency settings, and precise battery information, all of which are traditionally exclusive to Apple's ecosystem. 

By making use of an app that emulates the behavior of an authorized Apple endpoint, the app is able to accomplish what it aims to accomplish: Android devices can communicate with AirPods almost exactly as iPhones would if they were connected to an authorized Apple device. 

A full range of features is most effective on the second- and third-generation AirPod Pros that are rooted via the Xposed framework and can be accessed through rooted Android devices. OnePlus and Oppo models running OxygenOS 16 or ColorOS 16 are also able to use LibrePods without rooting, which means Devar has ensured that LibrePods are accessible to a broader range of devices. 

Even though the older models of AirPods are not as customizable as those in the newer generations, they still have the advantage of accurate battery reporting, which makes them a good option for anyone who wants accurate battery data. 

Having these features unlocked will allow users to switch effortlessly between the Noise Cancellation, the Adaptive Audio, and the Transparency modes, rename their earbuds so they can be managed more easily, enable automatic play-and-pause functions, assign long-press actions to toggle ANC or trigger a voice assistant, as well as use head gesture controls to answer calls. This is an entirely new way to experience the AirPods on Android, bringing it to the next level of functionality and convenience. 

A meticulous reverse-engineering effort by Devar enabled AirPods to recognize Android handsets as if they were iPhones or iPads, and enabled them to recognize them as if they were an iPhone or iPad, enabling this level of cross-platform functionality. By using this technical trick, Apple is able to share the status data and advanced controls within the earphones that it typically confines to its own ecosystem. 

LibrePods, however, is not without some conditions, owing to what Devar describes as a persistent limitation in the Android Bluetooth stack, which leads to it currently needing to be connected to a rooted device which runs the Xposed framework, in order to achieve full functionality.

OnePlus and Oppo smartphones running OxygenOS 16 or ColorOS 16 can run the app without rooting, but certain advanced features—such as fine-tuning the Transparency mode adjustments—which require elevated system access are still available to those using these devices. This is a partial exception, but users on OnePlus and Oppo smartphones can still make use of the app without rooting. 

A central priority remains that of ensuring wide compatibility, with support extended across all the AirPods devices, including AirPods Max, the second- and third-generation AirPods Pro, though older models are naturally equipped with a dwindling range of features. The extensive documentation found on the project's GitHub repository may be helpful to those interested in exploring it further, as well as downloading the APK and installing it on their own computers. 

The LibrePods continues to receive widespread attention, and Devar's work reveals a broader shift in how users expect technology to work, namely the ability to choose, be open, and use it in a way that is more useful to them. In addition to restoring functionality lost to Android users who had to settle for a diluted AirPods experience, this project demonstrates the power of community-driven innovation in challenging established norms and challenging established expectations. 

The tool still comes with technical caveats, but its rapid evolution makes it more likely that further refinements will be added in the future. LibrePods, therefore, shows great promise of an improved, more flexible multi-platform audio future, one which is user-centric rather than platform-centric.

Nvidia’s Strong Earnings Ease AI Bubble Fears Despite Market Volatility

 

Nvidia (NVDA) delivered a highly anticipated earnings report, and the AI semiconductor leader lived up to expectations.

“These results and commentary should help steady the ship for the AI trade into the end of the year,” Jefferies analysts wrote in a Thursday note.

The company’s late-Wednesday announcement arrived at a critical moment for the broader AI-driven market rally. Over the past few weeks, debate around whether AI valuations have entered bubble territory has intensified, fueled by concerns over massive data-center investments, the durability of AI infrastructure, and uncertainty around commercial adoption.

Thursday’s market swings showed just how unresolved the conversation remains. The Nasdaq Composite surged more than 2% early in the day, only to reverse course and fall nearly 2% by afternoon. Nvidia shares followed a similar pattern—after climbing 5% in the morning, the stock later slipped almost 3%.

Still, Nvidia’s exceptional performance provided some reassurance to investors worried about overheating in the AI sector.

The company reported that quarterly revenue jumped 62% to $57 billion, with expectations for current-quarter sales to reach $65 billion. Margins also improved, and Nvidia projected gross margins would expand further to nearly 75% in the coming quarter.

“Bubbles are irrational, with prices rising despite weaker fundamentals. Nvidia’s numbers show that fundamentals are still strong,” said David Russell, Global Head of Market Strategy at TradeStation.

Executives also addressed long-standing questions about AI profitability, return on investment, and the useful life of AI infrastructure during the earnings call.

CEO Jensen Huang highlighted the broad scope of industries adopting Nvidia hardware, pointing to Meta’s (META) rising ad conversions as evidence that “transitioning to generative AI represents substantial revenue gains for hyperscalers.”

CFO Colette Kress also reassured investors about hardware longevity, stating, “Thanks to CUDA, the A100 GPUs we shipped six years ago are still running at full utilization today.”
Her remarks appeared to indirectly counter claims from hedge fund manager Michael Burry, who recently suggested that tech firms were extending the assumed lifespan of GPUs to downplay data-center costs.

Most analysts responded positively to the report.

“On these numbers, it is very hard to see how this stock does not keep moving higher from here,” UBS analysts wrote. “Ultimately, the AI infrastructure tide is still rising so fast that all boats will be lifted,” they added.

However, not everyone is convinced that the concerns fueling the AI bubble debate have been resolved.

“The AI bubble debate has never been about whether or not NVIDIA can sell chips,” said Julius Franck, co-founder of Vertus. “Their outstanding results do not address the elephant in the room: will the customers buying all this hardware ever make money from it?”

Others suggested that investor scrutiny may only increase from here.

“Many of the risks now worrying investors, like heavy spending and asset depreciation, are real,” noted TradeStation's Russell. “We may see continued weakness in the shares of companies taking on debt to build data centers, even as the boom continues.”

Streaming Platforms Face AI Music Detection Crisis

 

Distinguishing AI-generated music from human compositions has become extraordinarily challenging as generative models improve, raising urgent questions about detection, transparency, and industry safeguards. This article explores why even trained listeners struggle to identify machine-made tracks and what technical, cultural, and regulatory responses are emerging.

Why detection is so difficult

Modern AI music systems produce outputs that blend seamlessly into mainstream genres, especially pop and electronic styles already dominated by digital production. Traditional warning signs—slightly slurred vocals, unnatural consonant pronunciation, or "ghost" harmonies that appear and vanish unpredictably—remain only hints rather than definitive proof, and these tells fade as models advance. Music producer insights emphasize that AI recognizes patterns but lacks the emotional depth and personal narratives behind human creativity, yet casual listeners find these distinctions nearly impossible to hear.

Technical solutions and limits

Streaming platform Deezer launched an AI detection tool in January 2024 and introduced visible tagging for fully AI-generated tracks by summer, reporting that over one-third of daily uploads—approximately 50,000 tracks—are now entirely machine-made.The company's research director noted initial detection volumes were so high they suspected a system error. Deezer claims detection accuracy exceeds 99.8 percent by identifying subtle audio artifacts left by generative models, with minimal false positives. However, critics warn that watermarking schemes can be stripped through basic audio processing, and no universal standard yet exists across platforms.

Economic and ethical implications

Undisclosed AI music floods catalogues, distorts recommendation algorithms, and crowds out human artists, potentially driving down streaming payouts.Training data disputes compound the problem: many AI systems learn from copyrighted recordings without consent or compensation, sparking legal battles over ownership and moral rights. Survey data shows 80 percent of listeners want mandatory labelling for fully AI-generated tracks, and three-quarters prefer platforms to flag AI recommendations.

Industry and policy response

Spotify announced support for new DDEX standards requiring AI disclosure in music credits, alongside enhanced spam filtering and impersonation enforcement. Deezer removes fully AI tracks from editorial playlists and algorithmic recommendations. Yet regulatory frameworks lag technological capability, leaving artists exposed as adoption accelerates and platforms develop inconsistent, case-by-case policies The article concludes that transparent labelling and enforceable standards are essential to protect both creators and listener choice.

Meta Cleared of Monopoly Charges in FTC Antitrust Case

 

A U.S. federal judge ruled that Meta does not hold a monopoly in the social media market, rejecting the FTC's antitrust lawsuit seeking divestiture of Instagram and WhatsApp. The FTC, joined by multiple states, filed the suit in December 2020, alleging Meta (formerly Facebook) violated Section 2 of the Sherman Act by acquiring Instagram for $1 billion in 2012 and WhatsApp for $19 billion in 2014. 

These moves were part of a supposed "buy-or-bury" strategy to eliminate rivals in "personal social networking services" (PSNS), stifling innovation, increasing ads, and weakening privacy. The agency claimed Meta's dominance left consumers with few alternatives, excluding platforms like TikTok and YouTube from its narrow market definition.

Trial and ruling

U.S. District Judge James Boasberg oversaw a seven-week trial ending in May 2025, featuring testimony from Meta CEO Mark Zuckerberg, who highlighted competition from TikTok and YouTube. In an 89-page opinion on November 18, 2025, Boasberg ruled the FTC failed to prove current monopoly power, noting the social media landscape's rapid evolution with surging apps, new features, and AI content.He emphasized Meta's market share—below 50% and declining in a broader market including Snapchat, TikTok, and YouTube—showed no insulation from rivals.

Key arguments and evidence

The FTC presented internal emails suggesting Zuckerberg feared Instagram and WhatsApp as threats, arguing acquisitions suppressed competition and harmed users via heavier ads and less privacy. Boasberg dismissed this, finding direct evidence like supra-competitive profits or price hikes insufficient for monopoly proof, and rejected the PSNS market as outdated given overlapping uses across apps.Meta countered that regulators approved the deals initially and that forcing divestiture would hurt U.S. innovation.I

Implications

Meta hailed the decision as affirming fierce competition and its contributions to growth, avoiding operational upheaval for its 3.54 billion daily users. The FTC expressed disappointment and is reviewing options, marking a setback amid wins against Google but ongoing cases versus Apple and Amazon. Experts view it as reinforcing consumer-focused antitrust in dynamic tech markets.

Google CEO Flags Irrational Trends in AI Funding Surge

 


Sundar Pichai, CEO of Alphabet, has recently warned that the rapid increase in artificial intelligence investment is exhibiting signs of "irrationality" in at least some sectors of the global economy as he issued a candid assessment that has sharpened the global conversation around the accelerated artificial intelligence economy. 

When Pichai spoke exclusively with the BBC at Google's headquarters in California, he expressed concern about the rapid pace with which capital is flowing into the sector. He also pointed out that any company, regardless of the size or the scope, could suffer from the distortions that may occur when markets expand too quickly, even Google itself.

Despite the intense scrutiny of the AI landscape that is being fueled in part by Alphabet's own rapid rise, his comments come at a time when AI is gaining traction. Despite the company's rapid rise, Alphabet's market value has doubled within seven months, reaching $3.5 trillion. In his remarks, Pichai acknowledged that this transformational period will be a time of growth for the industry, but warned that as with previous technology booms, the market risks "overshooting" in terms of investments. 

By drawing a parallel between the boom and collapse of Internet valuations in the late 1990's, he highlighted the historical pattern in which optimism can lead to instability, resulting in steep corrections, bankruptcy, and widespread job losses, especially when the economy is at a low point. In tempered with caution, Pichai underscored how AI infrastructure is currently being developed at an unprecedented scale, underscoring his optimism. 

A spokesperson for Alphabet commented that the company's annual investment has tripled in just four years, rising from approximately $30 billion to more than $90 billion. This investment is the culmination of commitments from other major players and, taken together, the sector now has more than a trillion dollars in cumulative investment. 

The rapid escalation of technological components has been described by him as part of a broader "scale equation," in which computer technology that was established over the course of several decades is now being replicated at an extraordinary pace within just a few years despite being laid decades ago. The interview included a comprehensive discussion of several challenges shaping the AI landscape during which he addressed such topics as the escalating demand for energy, the impacts this has on climate targets, the UK’s role in investment in the future, concerns about model accuracy, and the long-term outlook for employment in an automated society. 

There is a growing level of scrutiny on the Artificial Intelligence market right now, fueled in part by Alphabet's own dramatic rise, which is causing the market to be scrutinized to a new level. According to investors, the company's valuation has doubled within seven months to reach $3.5 trillion, buoyed by their confidence in its ability to withstand the competitive pressure from OpenAI, bringing its value to $3.5 trillion. 

As part of Alphabet’s efforts to develop specialized AI superchips, analysts have also focused on creating a competitive edge over Nvidia, which recently became the first firm to cross a $5 trillion valuation, which is directly competing with Alphabet. However, in spite of this surge in market value, some observers are skeptical, pointing out that OpenAI is surrounded by an intricate network of approximately $1.4 trillion in investments. 

Even though the company generates a tiny fraction of the investment it receives, it still generates a relatively significant amount of revenue. It's now time to make comparisons to the dot-com era, when optimism fueled runaway valuations before they crashed into widespread losses and corporate failures in the late 1990s. In addition, the issue of ripple effects on jobs, household savings, and retirement assets has been brought to the forefront once more, as have concerns over the ripple effects of history. 


A prominent theme of Pichai's remarks was the company's global expansion, in which he highlighted the firm's commitment to the United Kingdom as a key hub for AI development in the future. The company pledged in September that it would invest £5 billion over the next two years on strengthening UK infrastructure and research, including major investments in its DeepMind Artificial Intelligence arm based in London. 

A few days ago, Pichiai announced that, for the first time, Google plans on training their advanced models within the UK. This ambition, long emphasized by government leaders who believe that domestic model training could be a decisive step towards securing the country's position as the third major AI power in the world, after the United States and China. As for Alphabet's long-term stance, he reiterated the company's commitment to the UK, saying it is "committed to investing a lot of money in the country." 

As well as acknowledging the enormous energy challenges that accompany the rapid expansion of artificial intelligence systems, Pichai also mentioned that the AI industry is facing formidable energy challenges. Using data from the International Energy Agency which shows that artificial intelligence activity consumes roughly 1.5% of global electricity, he warned that nations, including the UK, should act quickly and create new power sources and infrastructure. The failure to do so, he said, could adversely affect economic growth.

It has been acknowledged that some of Alphabet's climate objectives have been delayed as a result of the growing energy demands of the company's AI operations, though he reiterated Alphabet's commitment to achieving net zero emissions by 2030 through continued investment in new energy technologies. Additionally, Pichai also spoke about the wider changes that AI is driving in society, calling it "the most profound technology" that humans have ever developed. 

While he recognized that AI will likely result in significant disruptions to the workplace across sectors, he also stressed that AI will also provide new forms of opportunity. He predicted that advanced systems would have a significant impact on workplaces across industries. According to him, the jobs of the future will be dominated by those who are able to work alongside AI tools, whether in the field of education, medicine, or any other.

Individuals who adapt as soon as possible will benefit most from the coming technological revolution. Amidst a global race to harness AI, Pichai's remarks ultimately serve as both a warning and a roadmap for those seeking to capitalize on its transformative potential: disciplined investment, a stronger infrastructure, and a workforce capable of embracing rapid innovation will all be crucial for AI to become more powerful than ever. 

It is now imperative that policymakers take proactive measures to ensure energy security and thoughtful regulation; investors should take note of the importance of balancing ambitions with caution; and workers should take advantage of this chance to gain new skills that will define the next era in productivity. According to him, the companies and nations that navigate this transition with clarity and foresight will be the ones shaping the future of the artificial intelligence-driven economy.

Digital Deception Drives a Sophisticated Era of Cybercrime


 

Digital technology is becoming more and more pervasive in the everyday lives, but a whole new spectrum of threats is quietly emerging behind the curtain, quietly advancing beneath the surface of routine online behavior. 

A wide range of cybercriminals are leveraging an ever-expanding toolkit to take advantage of the emotional manipulation embedded in deepfake videos, online betting platforms, harmful games and romance scams, as well as sophisticated phishing schemes and zero-day exploits to infiltrate not only devices, but the habits and vulnerabilities of the users as well. 

Google's preferred sources have long stressed the importance of understanding how attackers attack, which is the first line of defence for any organization. The Cyberabad Police was the latest agency to extend an alert to households, which adds an additional urgency to this issue. 

According to the authorities' advisory, Caught in the Digital Web Vigilance is the Only Shield, it is clear criminals are not forcing themselves into homes anymore, rather they are slipping silently through mobile screens, influencing children, youth, and families with manipulative content that shapes their behaviors, disrupts their mental well-being, and undermines society at large. 

There is no doubt that digital hygiene has become an integral part of modern cybercrime and is not an optional thing anymore, but rather a necessary necessity in an era where deception has become a key weapon. 

Approximately 60% of breaches now have been linked to human behavior, according to Verizon Business Business 2025 Data Breach Investigations Report (DBIR). These findings reinforce how human behavior remains intimately connected with cyber risk. Throughout the report, social engineering techniques such as phishing and pretexting, as well as other forms of social engineering, are being adapted across geographies, industries, and organizational scales as users have a tendency to rely on seemingly harmless digital interactions on a daily basis. 

DBIR finds that cybercriminals are increasingly posing as trusted entities, exploiting familiar touchpoints like parcel delivery alerts or password reset prompts, knowing that these everyday notifications naturally encourage a quick click, exploiting the fact that these everyday notifications naturally invite a quick click. 

In addition, the findings of the DBIR report demonstrate how these once-basic tricks have been turned into sophisticated deception architectures where the web itself has become a weapon. With the advent of fake software updates, which mimic the look and feel of legitimate pop-ups, and links that appear to be embedded in trusted vendor newsletters may quietly redirect users to compromised websites, this has become one of the most alarming developments. 

It has been found that attackers are coaxing individuals into pasting malicious commands into the enterprise system, turning essential workplace tools into self-destructive devices. In recent years, infected attachments and rogue sites have been masquerading as legitimate webpages, cloaking attacks behind the façade of security, even long-standing security tools that are being repurposed; verification prompts and "prove you are human" checkpoints are being manipulated to funnel users towards infected attachments and malicious websites. 

A number of Phishing-as-a-Service platforms are available for the purpose of stealing credentials in a more precise and sophisticated manner, and cybercriminals are now intentionally harvesting Multi-Factor Authentication data based on targeted campaigns that target specific sectors, further expanding the scope of credential theft. 

In the resulting threat landscape, security itself is frequently used as camouflage, and the strength of the defensive systems is only as strong as the amount of trust that users place in the screens before them. It is important to point out that even as cyberattack techniques become more sophisticated, experts contend that the fundamentals of security remain unchanged: a company or individual cannot be effectively protected against a cyberattack without understanding their own vulnerabilities. 

The industry continues to emphasise the importance of improving visibility, reducing the digital attack surface, and adopting best practices in order to stay ahead of an expanding number of increasingly adaptive adversaries; however, the risks extend far beyond the corporate perimeter. There has been a growing body of research from Cybersecurity Experts United that found that 62% of home burglaries have been associated with personal information posted online that led to successful break-ins, underscoring that digital behaviour now directly influences physical security. 

A deeper layer to these crimes is the psychological impact that they have on victims, ranging from persistent anxiety to long-term trauma. In addition, studies reveal oversharing on social media is now a key enabler for modern burglars, with 78% of those who confess to breaching homeowner's privacy admitting to mining publicly available posts for clues about travel plans, property layouts, and periods of absence from the home. 

It has been reported that houses mentioned in travel-related updates are 35% more likely to be targeted as a result, and that burglaries that take place during vacation are more common in areas where social media usage is high; notably, it has been noted that a substantial percentage of these incidents involve women who publicly announced their travel plans online. It has become increasingly apparent that this convergence of online exposure and real-world harm also has a reverberating effect in many other areas. 

Fraudulent transactions, identity theft, and cyber enabled scams frequently spill over into physical crimes such as robbery and assault, which security specialists predict will only become more severe if awareness campaigns and behavioral measures are not put in place to combat it. The increase in digital connectivity has highlighted the importance of comprehensive protective measures ranging from security precautions at home during travel to proper management of online identities to combat the growing number of online crimes and their consequences on a real-world basis. 

The line between physical and digital worlds is becoming increasingly blurred as security experts warn, and so resilience will become as important as technological safeguards in terms of resilience. As cybercrime evolves with increasingly complex tactics-whether it is subtle manipulation, data theft, or the exploitation of online habits, which expose homes and families-the need for greater public awareness and more informed organizational responses grows increasingly. 

A number of authorities emphasize that reducing risk is not a matter of isolating isolated measures but of adopting a holistic security mindset. This means limiting what we share, questioning what we click on, and strengthening the security systems that protect both our networks as well as our everyday lives. Especially in a time when criminals increasingly weaponize trust, information and routine behavior, collective vigilance may be our strongest defensive strategy in an age in which criminals are weaponizing trust and information.

Anthropic Introduces Claude Opus 4.5 With Lower Pricing, Stronger Coding Abilities, and Expanded Automation Features

 



Anthropic has unveiled Claude Opus 4.5, a new flagship model positioned as the company’s most capable system to date. The launch marks a defining shift in the pricing and performance ecosystem, with the company reducing token costs and highlighting advances in reasoning, software engineering accuracy, and enterprise-grade automation.

Anthropic says the new model delivers improvements across both technical benchmarks and real-world testing. Internal materials reviewed by industry reporters show that Opus 4.5 surpassed the performance of every human candidate who previously attempted the company’s most difficult engineering assignment, when the model was allowed to generate multiple attempts and select its strongest solution. Without a time limit, the model’s best output matched the strongest human result on record through the company’s coding environment. While these tests do not reflect teamwork or long-term engineering judgment, the company views the results as an early indicator of how AI may reshape professional workflows.

Pricing is one of the most notable shifts. Opus 4.5 is listed at roughly five dollars per million input tokens and twenty-five dollars per million output tokens, a substantial decrease from the rates attached to earlier Opus models. Anthropic states that this reduction is meant to broaden access to advanced capabilities and push competitors to re-evaluate their own pricing structures.

In performance testing, Opus 4.5 achieved an 80.9 percent score on the SWE-bench Verified benchmark, which evaluates a model’s ability to resolve practical coding tasks. That score places it above recently released systems from other leading AI labs, including Anthropic’s own Sonnet 4.5 and models from Google and OpenAI. Developers involved in early testing also reported that the model shows stronger judgment in multi-step tasks. Several testers said Opus 4.5 is more capable of identifying the core issue in a complex request and structuring its response around what matters operationally.

A key focus of this generation is efficiency. According to Anthropic, Opus 4.5 can reach or exceed the performance of earlier Claude models while using far fewer tokens. Depending on the task, reductions in output volume reached as high as seventy-six percent. To give organisations more control over cost and latency, the company introduced an effort parameter that lets users determine how much computational work the model applies to each request.

Enterprise customers participating in early trials reported measurable gains. Statements from companies in software development, financial modelling, and task automation described improvements in accuracy, lower token consumption, and faster completion of complex assignments. Some organisations testing agent workflows said the system was able to refine its approach over multiple runs, improving its output without modifying its underlying parameters.

Anthropic launched several product updates alongside the model. Claude for Excel is now available to higher-tier plans and includes support for charts, pivot tables, and file uploads. The Chrome extension has been expanded, and the company introduced an infinite chat feature that automatically compresses earlier conversation history, removing traditional context window limitations. Developers also gained access to new programmatic tools, including parallel agent sessions and direct function calling.

The release comes during an intense period of competition across the AI sector, with major firms accelerating release cycles and investing heavily in infrastructure. For organisations, the arrival of lower-cost, higher-accuracy systems could further accelerate the adoption of AI for coding, analysis, and automated operations, though careful validation remains essential before deploying such capabilities in critical environments.



Chinese-Linked Hackers Exploit Claude AI to Run Automated Attacks

 




Anthropic has revealed a major security incident that marks what the company describes as the first large-scale cyber espionage operation driven primarily by an AI system rather than human operators. During the last half of September, a state-aligned Chinese threat group referred to as GTG-1002 used Anthropic’s Claude Code model to automate almost every stage of its hacking activities against thirty organizations across several sectors.

Anthropic investigators say the attackers reached an attack speed that would be impossible for a human team to sustain. Claude was processing thousands of individual actions every second while supporting several intrusions at the same time. According to Anthropic’s defenders, this was the first time they had seen an AI execute a complete attack cycle with minimal human intervention.


How the Operators Gained Control of the AI

The attackers were able to bypass Claude’s safety training using deceptive prompts. They pretended to be cybersecurity teams performing authorized penetration testing. By framing the interaction as legitimate and defensive, they persuaded the model to generate responses and perform actions it would normally reject.

GTG-1002 built a custom orchestration setup that connected Claude Code with the Model Context Protocol. This structure allowed them to break large, multi-step attacks into smaller tasks such as scanning a server, validating a set of credentials, pulling data from a database, or attempting to move to another machine. Each of these tasks looked harmless on its own. Because Claude only saw limited context at a time, it could not detect the larger malicious pattern.

This approach let the threat actors run the campaign for a sustained period before Anthropic’s internal monitoring systems identified unusual behavior.


Extensive Autonomy During the Intrusions

During reconnaissance, Claude carried out browser-driven infrastructure mapping, reviewed authentication systems, and identified potential weaknesses across multiple targets at once. It kept distinct operational environments for each attack in progress, allowing it to run parallel operations independently.

In one confirmed breach, the AI identified internal services, mapped how different systems connected across several IP ranges, and highlighted sensitive assets such as workflow systems and databases. Similar deep enumeration took place across other victims, with Claude cataloging hundreds of services on its own.

Exploitation was also largely automated. Claude created tailored payloads for discovered vulnerabilities, performed tests using remote access interfaces, and interpreted system responses to confirm whether an exploit succeeded. Human operators only stepped in to authorize major changes, such as shifting from scanning to active exploitation or approving use of stolen credentials.

Once inside networks, Claude collected authentication data systematically, verified which credentials worked with which services, and identified privilege levels. In several incidents, the AI logged into databases, explored table structures, extracted user account information, retrieved password hashes, created unauthorized accounts for persistence, downloaded full datasets, sorted them by sensitivity, and prepared intelligence summaries. Human oversight during these stages reportedly required only five to twenty minutes before final data exfiltration was cleared.


Operational Weaknesses

Despite its capabilities, Claude sometimes misinterpreted results. It occasionally overstated discoveries or produced information that was inaccurate, including reporting credentials that did not function or describing public information as sensitive. These inaccuracies required human review, preventing complete automation.


Anthropic’s Actions After Detection

Once the activity was detected, Anthropic conducted a ten-day investigation, removed related accounts, notified impacted organizations, and worked with authorities. The company strengthened its detection systems, expanded its cyber-focused classifiers, developed new investigative tools, and began testing early warning systems aimed at identifying similar autonomous attack patterns.




Quantum Computing Moves Closer to Real-World Use as Researchers Push Past Major Technical Limits

 



The technology sector is preparing for another major transition, and this time the shift is not driven by artificial intelligence. Researchers have been investing in quantum computing for decades because it promises to handle certain scientific and industrial problems far faster than today’s machines. Tasks that currently require months or years of simulation – such as studying new medicines, designing materials for vehicles, or modelling financial risks could eventually be completed in hours or even minutes once the technology matures.


How quantum computers work differently

Conventional computers rely on bits, which store information strictly as zeros or ones. Quantum systems use qubits, which behave according to the rules of quantum physics and can represent several states at the same time. An easy way to picture this is to think of a coin. A classical bit resembles a coin resting on heads or tails. A qubit is like the coin while it is spinning, holding multiple possibilities simultaneously.

This ability allows quantum machines to examine many outcomes in parallel, making them powerful tools for problems that involve chemistry, physics, optimisation and advanced mathematics. They are not designed to replace everyday devices such as laptops or phones. Instead, they are meant to support specialised research in fields like healthcare, climate modelling, transportation, finance and cryptography.


Expanding industry activity

Companies and research groups are racing to strengthen quantum hardware. IBM recently presented two experimental processors named Loon and Nighthawk. Loon is meant to test the components needed for larger, error-tolerant systems, while Nighthawk is built to run more complex quantum operations, often called gates. These announcements indicate an effort to move toward machines that can keep operating even when errors occur, a requirement for reliable quantum computing.

Other major players are also pursuing their own designs. Google introduced a chip called Willow, which it says shows lower error rates as more qubits are added. Microsoft revealed a device it calls Majorana 1, built with materials intended to stabilise qubits by creating a more resilient quantum state. These approaches demonstrate that the field is exploring multiple scientific pathways at once.

Industrial collaborations are growing as well. Automotive and aerospace firms such as BMW Group and Airbus are working with Quantinuum to study how quantum tools could support fuel-cell research. Separately, Accenture Labs, Biogen and 1QBit are examining how the technology could accelerate drug discovery by comparing complex molecular structures that classical machines struggle to handle.


Challenges that still block progress

Despite the developments, quantum systems face serious engineering obstacles. Qubits are extremely sensitive to their environments. Small changes in temperature, vibrations or stray light can disrupt their state and introduce errors. IBM researchers note that even a slight shake of a table can damage a running system.

Because of this fragility, building a fault-tolerant machine – one that can detect and correct errors automatically remains one of the field’s hardest problems. Experts differ on how soon this will be achieved. An MIT researcher has estimated that dependable, large-scale quantum hardware may still require ten to twenty more years of work. A McKinsey survey found that 72 percent of executives, investors and academics expect the first fully fault-tolerant computers to be ready by about 2035. IBM has outlined a more ambitious target, aiming to reach fault tolerance before the end of this decade.


Security and policy implications

Quantum computing also presents risks. Once sufficiently advanced, these machines could undermine some current encryption systems, which is why governments and security organisations are developing quantum-resistant cryptography in advance.

The sector has also attracted policy attention. Reports indicated that some quantum companies were in early discussions with the US Department of Commerce about potential funding terms. Officials later clarified that the department is not currently negotiating equity-based arrangements with those firms.


What the future might look like

Quantum computing is unlikely to solve mainstream computing needs in the short term, but the steady pace of technical progress suggests that early specialised applications may emerge sooner. Researchers believe that once fully stable systems arrive, quantum machines could act as highly refined scientific tools capable of solving problems that are currently impossible for classical computers.



Sam Altman’s Iris-Scanning Startup Reaches Only 2% of Its Goal

Sam Altman’s ambitious—and often criticized—vision to scan humanity’s eyeballs for a profit is falling far behind its own expectations. The startup, now known simply as World (previously Worldcoin), has barely made a dent in its goal of creating a global biometric identity network. Despite backing from major venture capital firms, the company has reportedly achieved only two percent of its goal to scan one billion people. According to Business Insider, World has so far enrolled around 17.5 million users, which is far more than many initially expected for a project this unconventional—yet still vastly insufficient for its long-term aims.

World is part of Tools for Humanity, co-founded by Altman, who serves as chairman, and CEO Alex Blania. The concept is straightforward but controversial: individuals visit a World location, where a metallic orb scans their irises and converts the pattern into a unique, encrypted digital identifier. This 12,800-digit binary code becomes the user’s key to accessing World’s digital ecosystem, which includes an app marketplace and its own cryptocurrency, Worldcoin. The broader vision is for World to operate as both a verification layer and a payment identity in an online world increasingly swamped by AI-generated content and bots—many created through Altman’s other enterprise, OpenAI.

Although privacy concerns have followed the project since its launch, a few experts have been surprisingly positive about its security model. Encryption specialist Matthew Greene examined the system and noted in 2023: “As you can see, this system appears to avoid some of the more obvious pitfalls of a biometric-based blockchain system… This architecture rules out many threats that might lead to your eyeballs being stolen or otherwise needing to be replaced.”

Gizmodo’s own reporters tested World’s offerings last year and found no major red flags, though their overall impressions were lukewarm. The outlet contacted Tools for Humanity to ask when the company expects to hit its lofty target of one billion scans—a milestone that appears increasingly distant.

Regulatory scrutiny in several countries has further slowed World’s expansion, highlighting the uphill battle it faces in trying to persuade the global population to participate in its unusual biometric program.

To accelerate adoption, World is reportedly looking to land major identity-verification deals with widely used digital platforms. The BI report highlights a strategy centered on partnering with companies that already require or are moving toward stronger identity verification. It states that World launched a pilot with Match Group to verify Tinder users in Japan, and has struck partnerships with Stripe, Visa, and gaming brand Razer. A Semafor report also noted that Reddit has been in discussions with Tools for Humanity about integrating its verification technology.

Even with these potential partnerships, scaling the project remains a steep challenge. Requiring users to physically appear at an office and wait in line to scan their eyes is unlikely to support rapid growth. To realistically reach hundreds of millions of users, the company will likely need to introduce app-based verification or another frictionless alternative. Sources told the New York Post in September that World is aiming for 100 million sign-ups over the next year, suggesting that a major expansion or product evolution may be in the works.

Google Issues New Security Alert: Six Emerging Scams Targeting Gmail, Google Messages & Play Users

 

Google continues to be a major magnet for cybercriminal activity. Recent incidents—ranging from increased attacks on Google Calendar users to a Chrome browser–freezing exploit and new password-stealing tools aimed at Android—highlight how frequently attackers target the tech giant’s platforms. In response, Google has released an updated advisory warning users of Gmail, Google Messages, and Google Play about six fast-growing scams, along with the protective measures already built into its ecosystem.

According to Laurie Richardson, Google’s vice president of trust and safety, the rise in scams is both widespread and alarming: “57% of adults experienced a scam in the past year, with 23% reporting money stolen.” She further confirmed that scammers are increasingly leveraging AI tools to “efficiently scale and enhance their schemes.” To counter this trend, Google’s safety teams have issued a comprehensive warning outlining the latest scam patterns and reinforcing how its products help defend against them.

Before diving into the specific scam types, Google recommends trying its security awareness game, inspired by inoculation theory, which helps users strengthen their ability to spot fraudulent behavior.

One of the most notable threats involves the misuse of AI services. Richardson explained that “Cybercriminals are exploiting the widespread enthusiasm for AI tools by using it as a powerful social engineering lure,” setting up “sophisticated scams impersonating popular AI services, promising free or exclusive access to ensnare victims.” These traps often appear as fake apps, malicious websites, or harmful browser extensions promoted through deceptive ads—including cloaked malvertising that hides malicious intent from scanners while presenting dangerous content to real users.

Richardson emphasized Google’s strict rules: “Google prohibits ads that distribute Malicious Software and enforces strict rules on Play and Chrome for apps and extension,” noting that Play Store policies allow proactive removal of apps imitating legitimate AI tools. Meanwhile, Chrome’s AI-powered enhanced Safe Browsing mode adds real-time alerts for risky activity.

Google’s Threat Intelligence Group (GTIG) has also issued its own findings in the new GTIG AI Threat Tracker report. GTIG researchers have seen a steady rise in attackers using AI-powered malware over the past year and have identified new strategies in how they try to bypass safeguards. The group observed threat actors “adopting social engineering-like pretexts in their prompts to bypass AI safety guardrails.”

One striking example involved a fabricated “capture-the-flag” security event designed to manipulate Gemini into revealing restricted information useful for developing exploits or attack tools. In one case, a China-linked threat actor used this CTF method to support “phishing, exploitation, and web shell development.”

Google reiterated its commitment to enforcing its AI policies, stating: “Our policy guidelines and prohibited use policies prioritize safety and responsible use of Google's generative AI tools,” and added that “we continuously enhance safeguards in our products to offer scaled protections to users across the globe.”

Beyond AI-related threats, Google highlighted that online job scams continue to surge. Richardson noted that “These campaigns involve impersonating well-known companies through detailed imitations of official career pages, fake recruiter profiles, and fraudulent government recruitment postings distributed via phishing emails and deceptive advertisements across a range of platforms.”

To help protect users, Google relies on features such as scam detection in Google Messages, Gmail’s automatic filtering for phishing and fraud, and two-factor authentication, which adds an additional security layer for user accounts.

How Modern Application Delivery Models Are Evolving: Local Apps, VDI, SaaS, and DaaS Explained

 

Since the early 1990s, the methods used to deliver applications and data have been in constant transition. Today, IT teams must navigate a wider range of options—and a greater level of complexity—than ever before. Because applications are deployed in different ways for different needs, most organizations now rely on more than one model at a time. To plan future investments effectively, it’s important to understand how local applications, Virtual Desktop Infrastructure (VDI), Software-as-a-Service (SaaS), and Desktop-as-a-Service (DaaS) complement each other.

Local Applications

Local applications are installed directly on a user’s device, a model that dominated the 1990s and remains widely used. Their biggest advantage is reliability: apps are always accessible, customizable, and available wherever the device goes.

However, maintaining these distributed installations can be challenging. Updates must be rolled out across multiple endpoints, often leading to inconsistency. Performance may also fluctuate if these apps depend on remote databases or storage resources. Security adds another layer of complexity, as corporate data must move to the device, increasing the risk of exposure and demanding strong endpoint protection.

Virtual Desktop Infrastructure (VDI)

VDI centralizes desktops and applications in a controlled environment—whether hosted on-premises or in private or public clouds. Users interact with the system through transmitted screen updates and input signals, while the data itself stays securely in one place.

This centralization simplifies updates, strengthens security, and ensures more predictable performance by keeping applications near their data sources. On the other hand, VDI requires uninterrupted connectivity and often demands specialized expertise to manage. As a result, many organizations supplement VDI with other delivery models instead of depending on it alone.

Software-as-a-Service (SaaS)

SaaS delivers software through a browser, eliminating the need for local installation or maintenance. Providers apply updates automatically, keeping applications “evergreen” for subscribers. This reduces operational overhead for IT teams and allows vendors to release features quickly.

But the subscription-based model also means customers don’t own the software—access ends when payments stop. Transitioning to a different provider can be difficult, especially when exporting data in a usable form. SaaS can also introduce familiar endpoint challenges, as user devices still interact directly with data.

The model’s rapid growth is evident. According to the Parallels Cloud Survey 2025, 80% of respondents say at least a quarter of their applications run as SaaS, with many reporting significantly higher adoption.

Desktop-as-a-Service (DaaS)

DaaS extends the SaaS model by delivering entire desktops through a managed service. Organizations access virtual desktops much like VDI but without overseeing the underlying infrastructure.

This reduces complexity while providing consolidated management, stable performance, and strong security. DaaS is especially useful when organizations need to scale quickly to support new teams or projects. However, like SaaS, DaaS is subscription-based, and the service stops if payments lapse. The model works best with standardized desktop environments—heavy customization can add complexity.

Another key consideration is data location. If desktops move to DaaS while critical applications or data remain elsewhere, users may face performance issues. Aligning desktops with the data they rely on is essential.

A Multi-Model Reality

Most organizations no longer rely on a single delivery method. They use local apps where necessary, VDI for tighter control, SaaS for streamlined access, and DaaS for scalability.

The Parallels survey highlights this blend: 85% of organizations use SaaS, but only 2% rely on it exclusively. Many combine SaaS with VDI or DaaS. Additionally, 86% of IT leaders say they are considering or planning to shift some workloads away from the public cloud, reflecting the complexity of modern delivery decisions.

What IT Leaders Need to Consider

When determining how these models fit together, organizations must assess:

Security & Compliance: Highly regulated sectors may prefer VDI for data control, while SaaS and DaaS providers offer certifications that may not apply universally.

Operational Expertise: VDI demands specialized skills; companies lacking them may adopt DaaS. SaaS’s isolated data structures may require additional tools or expertise.

Scalability & Agility: SaaS and DaaS typically allow faster expansion, though cloud-based VDI is narrowing this gap.

Geographical Factors: User locations, latency requirements, and regional data regulations influence which model performs best.

Cost Structure: VDI often requires upfront investments, while SaaS and DaaS distribute costs over time. Both direct and hidden operational costs must be evaluated.

Each application delivery model offers distinct benefits: local apps provide control, VDI enhances security, SaaS simplifies operations, and DaaS supports flexibility. Most organizations will continue using a combination of these approaches.

The optimal strategy aligns each model with the workloads it supports best, prioritizes security and compliance, and maintains adaptability for future needs. With clear objectives and thoughtful planning, IT leaders can deliver secure, high-performing access today while staying ready for whatever comes next.


Tesla’s Humanoid Bet: Musk Pins Future on Optimus Robot

 

Elon Musk envisions human-shaped robots, particularly the Optimus humanoid, as a pivotal element in Tesla's future AI and robotics landscape, aiming to revolutionize both industry and daily life. Musk perceives these robots not merely as automated tools but as advanced entities capable of performing complex tasks in the physical world, interacting seamlessly with humans and their environments.

A core motivation behind developing humanoid robots lies in their potential to address various practical challenges, from industrial automation to personal assistance. Musk believes that these robots can work alongside humans in workplaces, handle repetitive or hazardous tasks, and even serve in caregiving roles, thus transforming societal and economic models. Tesla plans include the manufacturing of a large-scale Optimus factory in Fremont, with aims to produce millions of units, emphasizing the strategic importance Musk attaches to this venture.

Technologically, the breakthrough for these robots extends beyond bipedal mechanics. Critical advancements involve sensor fusion—integrating multiple data inputs for real-time decision-making—energy density to ensure longer operational periods, and edge reasoning, which allows autonomous processing without constant cloud connectivity. These innovations are crucial for creating robots that are not only physically capable but also intelligent and adaptable in diverse environments.

The idea of robots interacting with humans in everyday scenarios has garnered significant attention. Musk envisions Optimus playing a major role in daily life, helping with chores, assisting in services like hospitality, and contributing to industries like healthcare and manufacturing. Tesla's ambitious plans include building a factory capable of producing one million units annually, signaling a ratcheting up of competition and investment in humanoid robotics.

Overall, Musk's emphasis on human-shaped robots reflects a strategic vision where AI-powered humanoids are integral to Tesla's growth in artificial intelligence, robotics, and beyond. His goal is to develop robots that are not only functional but also capable of integration into human environments, ultimately aiming for a future where such machines coexist with and assist humans in daily life.

How MCP is preparing AI systems for a new era of travel automation

 




Most digital assistants today can help users find information, yet they still cannot independently complete tasks such as organizing a trip or finalizing a booking. This gap exists because the majority of these systems are built on generative AI models that can produce answers but lack the technical ability to carry out real-world actions. That limitation is now beginning to shift as the Model Context Protocol, known as MCP, emerges as a foundational tool for enabling task-performing AI.

MCP functions as an intermediary layer that allows large language models to interact with external data sources and operational tools in a standardized way. Anthropic unveiled this protocol in late 2024, describing it as a shared method for linking AI assistants to the platforms where important information is stored, including business systems, content libraries and development environments.

The protocol uses a client-server approach. An AI model or application runs an MCP client. On the opposite side, travel companies or service providers deploy MCP servers that connect to their internal data systems, such as booking engines, rate databases, loyalty programs or customer profiles. The two sides exchange information through MCP’s uniform message format.

Before MCP, organizations had to create individual API integrations for each connection, which required significant engineering time. MCP is designed to remove that inefficiency by letting companies expose their information one time through a consolidated server that any MCP-enabled assistant can access.

Support from major AI companies, including Microsoft, Google, OpenAI and Perplexity, has pushed MCP into a leading position as the shared standard for agent-based communication. This has encouraged travel platforms to start experimenting with MCP-driven capabilities.

Several travel companies have already adopted the protocol. Kiwi.com introduced its MCP server in 2025, allowing AI tools to run flight searches and receive personalized results. Executives at the company note that the appetite for experimenting with agentic travel tools is growing, although the sector still needs clarity on which tasks belong inside a chatbot and which should remain on a company’s website.

In the accommodation sector, property management platform Apaleo launched an MCP server ahead of its competitors, and other travel brands such as Expedia and TourRadar are also integrating MCP. Industry voices emphasize that AI assistants using MCP pull verified information directly from official hotel and travel systems, rather than relying on generic online content.

The importance of MCP became even more visible when new ChatGPT apps were announced, with major travel agencies included among the first partners. Experts say this marks a significant moment for how consumers may start buying travel through conversational interfaces.

However, early adopters also warn that MCP is not without challenges. Older systems must be restructured to meet MCP’s data requirements, and companies must choose AI partners carefully because each handles privacy, authorization and data retention differently. LLM processing time can also introduce delays compared to traditional APIs.

Industry analysts expect MCP-enabled bookings to appear first in closed ecosystems, such as loyalty platforms or brand-specific applications, where trust and verification already exist. Although the technology is progressing quickly, experts note that consumer-facing value is still developing. For now, MCP represents the first steps toward more capable, agentic AI in travel.



Google Warns Users to Steer Clear of Public Wi-Fi: Here’s What You Should Do Instead

 

Google has issued a new security alert urging smartphone users to “avoid using public Wi-Fi whenever possible,” cautioning that “these networks can be unencrypted and easily exploited by attackers.” With so many people relying on free networks at airports, cafés, hotels and malls, the warning raises an important question—just how risky are these hotspots?

The advisory appears in Google’s latest “Behind the Screen” safety guide for both Android and iPhone users, released as text-based phishing and fraud schemes surge across the U.S. and other countries. The threat landscape is alarming: according to Google, 94% of Android users are vulnerable to sophisticated messaging scams that now operate like “a sophisticated, global enterprise designed to inflict devastating financial losses and emotional distress on unsuspecting victims.”

With 73% of people saying they are “very or extremely concerned about mobile scams,” and 84% believing these scams harm society at a major scale, Google’s new warning highlights the growing need for simple, practical ways to stay safer online.

Previously, Google’s network-related cautions focused mostly on insecure 2G cellular connections, which lack encryption and can be abused for SMS Blaster attacks—where fake cell towers latch onto nearby phones to send mass scam texts. But stepping into the public Wi-Fi debate is unusual, especially for a company as influential as Google.

Earlier this year, the U.S. Transportation Security Administration (TSA) also advised travelers: “Don’t use free public Wi-Fi” as part of its airport safety guidelines, pairing it with a reminder to avoid public charging stations as well. Both recommendations have drawn their share of skepticism within the cybersecurity community.

Even the Federal Trade Commission (FTC) has joined the discussion. The agency acknowledges that while public Wi-Fi networks in “coffee shops, malls, airports, hotels, and other places are convenient,” they have historically been insecure. The FTC explains that in the past, browsing on a public network exposed users to data theft because many websites didn’t encrypt their traffic. However, encryption is now widespread: “most websites do use encryption to protect your information. Because of the widespread use of encryption, connecting through a public Wi-Fi network is usually safe.”

So what’s the takeaway?
Public Wi-Fi itself isn’t inherently dangerous, but the wrong networks and unsafe browsing habits can put your data at risk. Following a few basic rules can help you stay protected:

How to Stay Safe on Public Wi-Fi

  • Turn off auto-connect for unknown or public Wi-Fi networks.

  • When accessing a network through a captive portal, never download software or submit personal details beyond an email address.

  • Make sure every site you open uses encryption — look for the padlock icon and avoid entering credentials if an unexpected popup appears.

  • Verify the network name before joining to ensure you're connecting to the official Wi-Fi of the hotel, café, airport or store.

  • Use only reputable, paid VPN services from trusted developers; free or unfamiliar VPNs—especially those based in China—can be riskier than not using one at all

Elon Musk Unveils ‘X Chat,’ a New Encrypted Messaging App Aiming to Redefine Digital Privacy

 

Elon Musk, the entrepreneur behind Tesla, SpaceX, and X, has revealed a new messaging platform called X Chat—and he claims it could dramatically reshape the future of secure online communication.

Expected to roll out within the next few months, X Chat will rely on peer-to-peer encryption “similar to Bitcoin’s,” a move Musk says will keep conversations private while eliminating the need for ad-driven data tracking.

The announcement was made during Musk’s appearance on The Joe Rogan Experience, where he shared that his team had “rebuilt the entire messaging stack” from scratch.
“It’s using a sort of peer-to-peer-based encryption system,” Musk said. “So, it’s kind of similar to Bitcoin. I think, it’s very good encryption.”

Musk has repeatedly spoken out against mainstream messaging apps and their data practices. With X Chat, he intends to introduce a platform that avoids the “hooks for advertising” found in most competitors—hooks he believes create dangerous vulnerabilities.

“(When a messaging app) knows enough about what you’re texting to know what ads to show you, that’s a massive security vulnerability,” he said.
“If it knows enough information to show you ads, that’s a lot of information,” he added, warning that attackers could exploit the same data pathways to access private messages.

He emphasized that his approach to digital safety views security on a spectrum rather than a binary system. The goal, according to Musk, is to make X Chat “the least insecure” option available.

When launched, X Chat is expected to rival established encrypted platforms like WhatsApp and Telegram. However, Musk insists that X Chat will differentiate itself by maintaining stricter privacy boundaries.

While Meta states that WhatsApp’s communications use end-to-end encryption powered by the Signal Protocol, analysts note that WhatsApp still gathers metadata—details about user interactions—which is not encrypted. Additionally, chat backups remain unencrypted unless users enable that setting manually.

Musk argues that eliminating advertising components from X Chat’s architecture removes many of these weak points entirely.

A beta version of X Chat is already accessible to Premium subscribers on X. Early features include text messaging, file transfers, photos, GIFs, and other media, all associated with X usernames rather than phone numbers. Audio and video calls are expected once the app reaches full launch. Users will be able to run X Chat inside the main X interface or download it separately, allowing messaging, file sharing, and calls across devices.

Some industry observers believe X Chat could influence the digital payments space as well. Its encryption model aligns closely with the principles of decentralization and data ownership found in blockchain ecosystems. Analysts suggest the app may complement bitcoin-based payroll platforms, where secure communication is essential for financial discussions.

Still, the announcement has raised skepticism. Privacy researchers and cryptography experts are questioning how transparent Musk will be about the underlying encryption system. Although Musk refers to it as “Bitcoin-style,” technical documentation and details about independent audits have not been released.

Experts speculate Musk is referring to public-key cryptography—the same foundational technology used in Bitcoin and Nostr.

Critics argue that any messaging platform seeking credibility in the privacy community must be open-source for verification. Some also note that trust issues may arise due to past concerns surrounding Musk-owned platforms and their handling of user data and content moderation.