
Quantum Computing Moves Closer to Real-World Use as Researchers Push Past Major Technical Limits

 



The technology sector is preparing for another major transition, and this time the shift is not driven by artificial intelligence. Researchers have been investing in quantum computing for decades because it promises to handle certain scientific and industrial problems far faster than today’s machines. Tasks that currently require months or years of simulation – such as studying new medicines, designing materials for vehicles, or modelling financial risks – could eventually be completed in hours or even minutes once the technology matures.


How quantum computers work differently

Conventional computers rely on bits, which store information strictly as zeros or ones. Quantum systems use qubits, which behave according to the rules of quantum physics and can represent several states at the same time. An easy way to picture this is to think of a coin. A classical bit resembles a coin resting on heads or tails. A qubit is like the coin while it is spinning, holding multiple possibilities simultaneously.
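
To make the coin analogy concrete, here is a minimal Python sketch (using NumPy) of a single-qubit state. The equal superposition and the amplitudes chosen are illustrative only and are not tied to any particular quantum platform.

```python
import numpy as np

# A classical bit is definitely 0 or 1. A qubit's state is a pair of complex
# amplitudes; measuring it yields 0 or 1 with probabilities equal to the
# squared magnitudes of those amplitudes.
ket0 = np.array([1, 0], dtype=complex)      # the coin resting on heads
ket1 = np.array([0, 1], dtype=complex)      # the coin resting on tails

# The "spinning coin": an equal superposition of both outcomes.
superposition = (ket0 + ket1) / np.sqrt(2)

probabilities = np.abs(superposition) ** 2  # -> [0.5, 0.5]
outcome = np.random.choice([0, 1], p=probabilities)
print(f"P(0)={probabilities[0]:.2f}  P(1)={probabilities[1]:.2f}  measured: {outcome}")
```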

This ability allows quantum machines to examine many outcomes in parallel, making them powerful tools for problems that involve chemistry, physics, optimisation and advanced mathematics. They are not designed to replace everyday devices such as laptops or phones. Instead, they are meant to support specialised research in fields like healthcare, climate modelling, transportation, finance and cryptography.


Expanding industry activity

Companies and research groups are racing to strengthen quantum hardware. IBM recently presented two experimental processors named Loon and Nighthawk. Loon is meant to test the components needed for larger, error-tolerant systems, while Nighthawk is built to run more complex quantum operations, often called gates. These announcements indicate an effort to move toward machines that can keep operating even when errors occur, a requirement for reliable quantum computing.

Other major players are also pursuing their own designs. Google introduced a chip called Willow, which it says shows lower error rates as more qubits are added. Microsoft revealed a device it calls Majorana 1, built with materials intended to stabilise qubits by creating a more resilient quantum state. These approaches demonstrate that the field is exploring multiple scientific pathways at once.

Industrial collaborations are growing as well. Automotive and aerospace firms such as BMW Group and Airbus are working with Quantinuum to study how quantum tools could support fuel-cell research. Separately, Accenture Labs, Biogen and 1QBit are examining how the technology could accelerate drug discovery by comparing complex molecular structures that classical machines struggle to handle.


Challenges that still block progress

Despite the developments, quantum systems face serious engineering obstacles. Qubits are extremely sensitive to their environments. Small changes in temperature, vibrations or stray light can disrupt their state and introduce errors. IBM researchers note that even a slight shake of a table can damage a running system.

Because of this fragility, building a fault-tolerant machine – one that can detect and correct errors automatically – remains one of the field’s hardest problems. Experts differ on how soon this will be achieved. An MIT researcher has estimated that dependable, large-scale quantum hardware may still require ten to twenty more years of work. A McKinsey survey found that 72 percent of executives, investors and academics expect the first fully fault-tolerant computers to be ready by about 2035. IBM has outlined a more ambitious target, aiming to reach fault tolerance before the end of this decade.


Security and policy implications

Quantum computing also presents risks. Once sufficiently advanced, these machines could undermine some current encryption systems, which is why governments and security organisations are developing quantum-resistant cryptography in advance.

The sector has also attracted policy attention. Reports indicated that some quantum companies were in early discussions with the US Department of Commerce about potential funding terms. Officials later clarified that the department is not currently negotiating equity-based arrangements with those firms.


What the future might look like

Quantum computing is unlikely to solve mainstream computing needs in the short term, but the steady pace of technical progress suggests that early specialised applications may emerge sooner. Researchers believe that once fully stable systems arrive, quantum machines could act as highly refined scientific tools capable of solving problems that are currently impossible for classical computers.



Sam Altman’s Iris-Scanning Startup Reaches Only 2% of Its Goal

Sam Altman’s ambitious—and often criticized—vision to scan humanity’s eyeballs for a profit is falling far behind its own expectations. The startup, now known simply as World (previously Worldcoin), has barely made a dent in its goal of creating a global biometric identity network. Despite backing from major venture capital firms, the company has reportedly achieved only two percent of its goal to scan one billion people. According to Business Insider, World has so far enrolled around 17.5 million users, which is far more than many initially expected for a project this unconventional—yet still vastly insufficient for its long-term aims.

World is part of Tools for Humanity, co-founded by Altman, who serves as chairman, and CEO Alex Blania. The concept is straightforward but controversial: individuals visit a World location, where a metallic orb scans their irises and converts the pattern into a unique, encrypted digital identifier. This 12,800-digit binary code becomes the user’s key to accessing World’s digital ecosystem, which includes an app marketplace and its own cryptocurrency, Worldcoin. The broader vision is for World to operate as both a verification layer and a payment identity in an online world increasingly swamped by AI-generated content and bots—many created through Altman’s other enterprise, OpenAI.
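
World has not published its exact processing pipeline, so the short Python sketch below is only a hypothetical illustration of the general idea: reducing a biometric template to a fixed-length, non-reversible identifier. The 12,800-bit stand-in template and the plain SHA-256 step are assumptions, not the company’s actual scheme.

```python
import hashlib
import secrets

# Hypothetical stand-in for a 12,800-bit iris template from the scanning orb.
iris_template = secrets.randbits(12800).to_bytes(1600, "big")

# Derive a fixed-length, non-reversible identifier from the template.
# A production system would need error-tolerant encoding, since two scans of
# the same eye never match bit-for-bit; this sketch skips that entirely.
identifier = hashlib.sha256(iris_template).hexdigest()
print("derived identifier:", identifier)
```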

Although privacy concerns have followed the project since its launch, a few experts have been surprisingly positive about its security model. Cryptographer Matthew Green examined the system and noted in 2023: “As you can see, this system appears to avoid some of the more obvious pitfalls of a biometric-based blockchain system… This architecture rules out many threats that might lead to your eyeballs being stolen or otherwise needing to be replaced.”

Gizmodo’s own reporters tested World’s offerings last year and found no major red flags, though their overall impressions were lukewarm. The outlet contacted Tools for Humanity to ask when the company expects to hit its lofty target of one billion scans—a milestone that appears increasingly distant.

Regulatory scrutiny in several countries has further slowed World’s expansion, highlighting the uphill battle it faces in trying to persuade the global population to participate in its unusual biometric program.

To accelerate adoption, World is reportedly looking to land major identity-verification deals with widely used digital platforms. The BI report highlights a strategy centered on partnering with companies that already require or are moving toward stronger identity verification. It states that World launched a pilot with Match Group to verify Tinder users in Japan, and has struck partnerships with Stripe, Visa, and gaming brand Razer. A Semafor report also noted that Reddit has been in discussions with Tools for Humanity about integrating its verification technology.

Even with these potential partnerships, scaling the project remains a steep challenge. Requiring users to physically appear at an office and wait in line to scan their eyes is unlikely to support rapid growth. To realistically reach hundreds of millions of users, the company will likely need to introduce app-based verification or another frictionless alternative. Sources told the New York Post in September that World is aiming for 100 million sign-ups over the next year, suggesting that a major expansion or product evolution may be in the works.

Google Issues New Security Alert: Six Emerging Scams Targeting Gmail, Google Messages & Play Users

 

Google continues to be a major magnet for cybercriminal activity. Recent incidents—ranging from increased attacks on Google Calendar users to a Chrome browser–freezing exploit and new password-stealing tools aimed at Android—highlight how frequently attackers target the tech giant’s platforms. In response, Google has released an updated advisory warning users of Gmail, Google Messages, and Google Play about six fast-growing scams, along with the protective measures already built into its ecosystem.

According to Laurie Richardson, Google’s vice president of trust and safety, the rise in scams is both widespread and alarming: “57% of adults experienced a scam in the past year, with 23% reporting money stolen.” She further confirmed that scammers are increasingly leveraging AI tools to “efficiently scale and enhance their schemes.” To counter this trend, Google’s safety teams have issued a comprehensive warning outlining the latest scam patterns and reinforcing how its products help defend against them.

Before diving into the specific scam types, Google recommends trying its security awareness game, inspired by inoculation theory, which helps users strengthen their ability to spot fraudulent behavior.

One of the most notable threats involves the misuse of AI services. Richardson explained that “Cybercriminals are exploiting the widespread enthusiasm for AI tools by using it as a powerful social engineering lure,” setting up “sophisticated scams impersonating popular AI services, promising free or exclusive access to ensnare victims.” These traps often appear as fake apps, malicious websites, or harmful browser extensions promoted through deceptive ads—including cloaked malvertising that hides malicious intent from scanners while presenting dangerous content to real users.

Richardson emphasized Google’s strict rules: “Google prohibits ads that distribute Malicious Software and enforces strict rules on Play and Chrome for apps and extensions,” noting that Play Store policies allow proactive removal of apps imitating legitimate AI tools. Meanwhile, Chrome’s AI-powered enhanced Safe Browsing mode adds real-time alerts for risky activity.

Google’s Threat Intelligence Group (GTIG) has also issued its own findings in the new GTIG AI Threat Tracker report. GTIG researchers have seen a steady rise in attackers using AI-powered malware over the past year and have identified new strategies in how they try to bypass safeguards. The group observed threat actors “adopting social engineering-like pretexts in their prompts to bypass AI safety guardrails.”

One striking example involved a fabricated “capture-the-flag” security event designed to manipulate Gemini into revealing restricted information useful for developing exploits or attack tools. In one case, a China-linked threat actor used this CTF method to support “phishing, exploitation, and web shell development.”

Google reiterated its commitment to enforcing its AI policies, stating: “Our policy guidelines and prohibited use policies prioritize safety and responsible use of Google's generative AI tools,” and added that “we continuously enhance safeguards in our products to offer scaled protections to users across the globe.”

Beyond AI-related threats, Google highlighted that online job scams continue to surge. Richardson noted that “These campaigns involve impersonating well-known companies through detailed imitations of official career pages, fake recruiter profiles, and fraudulent government recruitment postings distributed via phishing emails and deceptive advertisements across a range of platforms.”

To help protect users, Google relies on features such as scam detection in Google Messages, Gmail’s automatic filtering for phishing and fraud, and two-factor authentication, which adds an additional security layer for user accounts.
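
As one concrete illustration of that last layer, the sketch below implements the standard time-based one-time password (TOTP) algorithm from RFC 6238 that authenticator apps use. Google’s own account-security internals are not described in the advisory, and the demo secret here is just an example value.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example secret only; any authenticator app seeded with it shows the same code.
print(totp("JBSWY3DPEHPK3PXP"))
```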

How Modern Application Delivery Models Are Evolving: Local Apps, VDI, SaaS, and DaaS Explained

 

Since the early 1990s, the methods used to deliver applications and data have been in constant transition. Today, IT teams must navigate a wider range of options—and a greater level of complexity—than ever before. Because applications are deployed in different ways for different needs, most organizations now rely on more than one model at a time. To plan future investments effectively, it’s important to understand how local applications, Virtual Desktop Infrastructure (VDI), Software-as-a-Service (SaaS), and Desktop-as-a-Service (DaaS) complement each other.

Local Applications

Local applications are installed directly on a user’s device, a model that dominated the 1990s and remains widely used. Their biggest advantage is reliability: apps are always accessible, customizable, and available wherever the device goes.

However, maintaining these distributed installations can be challenging. Updates must be rolled out across multiple endpoints, often leading to inconsistency. Performance may also fluctuate if these apps depend on remote databases or storage resources. Security adds another layer of complexity, as corporate data must move to the device, increasing the risk of exposure and demanding strong endpoint protection.

Virtual Desktop Infrastructure (VDI)

VDI centralizes desktops and applications in a controlled environment—whether hosted on-premises or in private or public clouds. Users interact with the system through transmitted screen updates and input signals, while the data itself stays securely in one place.

This centralization simplifies updates, strengthens security, and ensures more predictable performance by keeping applications near their data sources. On the other hand, VDI requires uninterrupted connectivity and often demands specialized expertise to manage. As a result, many organizations supplement VDI with other delivery models instead of depending on it alone.

Software-as-a-Service (SaaS)

SaaS delivers software through a browser, eliminating the need for local installation or maintenance. Providers apply updates automatically, keeping applications “evergreen” for subscribers. This reduces operational overhead for IT teams and allows vendors to release features quickly.

But the subscription-based model also means customers don’t own the software—access ends when payments stop. Transitioning to a different provider can be difficult, especially when exporting data in a usable form. SaaS can also introduce familiar endpoint challenges, as user devices still interact directly with data.

The model’s rapid growth is evident. According to the Parallels Cloud Survey 2025, 80% of respondents say at least a quarter of their applications run as SaaS, with many reporting significantly higher adoption.

Desktop-as-a-Service (DaaS)

DaaS extends the SaaS model by delivering entire desktops through a managed service. Organizations access virtual desktops much like VDI but without overseeing the underlying infrastructure.

This reduces complexity while providing consolidated management, stable performance, and strong security. DaaS is especially useful when organizations need to scale quickly to support new teams or projects. However, like SaaS, DaaS is subscription-based, and the service stops if payments lapse. The model works best with standardized desktop environments—heavy customization can add complexity.

Another key consideration is data location. If desktops move to DaaS while critical applications or data remain elsewhere, users may face performance issues. Aligning desktops with the data they rely on is essential.

A Multi-Model Reality

Most organizations no longer rely on a single delivery method. They use local apps where necessary, VDI for tighter control, SaaS for streamlined access, and DaaS for scalability.

The Parallels survey highlights this blend: 85% of organizations use SaaS, but only 2% rely on it exclusively. Many combine SaaS with VDI or DaaS. Additionally, 86% of IT leaders say they are considering or planning to shift some workloads away from the public cloud, reflecting the complexity of modern delivery decisions.

What IT Leaders Need to Consider

When determining how these models fit together, organizations must assess:

Security & Compliance: Highly regulated sectors may prefer VDI for data control, while SaaS and DaaS providers offer certifications that may not apply universally.

Operational Expertise: VDI demands specialized skills; companies lacking them may adopt DaaS. SaaS’s isolated data structures may require additional tools or expertise.

Scalability & Agility: SaaS and DaaS typically allow faster expansion, though cloud-based VDI is narrowing this gap.

Geographical Factors: User locations, latency requirements, and regional data regulations influence which model performs best.

Cost Structure: VDI often requires upfront investments, while SaaS and DaaS distribute costs over time. Both direct and hidden operational costs must be evaluated.

Each application delivery model offers distinct benefits: local apps provide control, VDI enhances security, SaaS simplifies operations, and DaaS supports flexibility. Most organizations will continue using a combination of these approaches.

The optimal strategy aligns each model with the workloads it supports best, prioritizes security and compliance, and maintains adaptability for future needs. With clear objectives and thoughtful planning, IT leaders can deliver secure, high-performing access today while staying ready for whatever comes next.


Tesla’s Humanoid Bet: Musk Pins Future on Optimus Robot

 

Elon Musk envisions human-shaped robots, particularly the Optimus humanoid, as a pivotal element in Tesla's future AI and robotics landscape, aiming to revolutionize both industry and daily life. Musk perceives these robots not merely as automated tools but as advanced entities capable of performing complex tasks in the physical world, interacting seamlessly with humans and their environments.

A core motivation behind developing humanoid robots lies in their potential to address various practical challenges, from industrial automation to personal assistance. Musk believes that these robots can work alongside humans in workplaces, handle repetitive or hazardous tasks, and even serve in caregiving roles, thus transforming societal and economic models. Tesla’s plans include building a large-scale Optimus factory in Fremont, with the aim of producing millions of units, underscoring the strategic importance Musk attaches to this venture.

Technologically, the breakthrough for these robots extends beyond bipedal mechanics. Critical advancements involve sensor fusion—integrating multiple data inputs for real-time decision-making—energy density to ensure longer operational periods, and edge reasoning, which allows autonomous processing without constant cloud connectivity. These innovations are crucial for creating robots that are not only physically capable but also intelligent and adaptable in diverse environments.

The idea of robots interacting with humans in everyday scenarios has garnered significant attention. Musk envisions Optimus playing a major role in daily life, helping with chores, assisting in services like hospitality, and contributing to industries like healthcare and manufacturing. Tesla's ambitious plans include building a factory capable of producing one million units annually, signaling a ratcheting up of competition and investment in humanoid robotics.

Overall, Musk's emphasis on human-shaped robots reflects a strategic vision where AI-powered humanoids are integral to Tesla's growth in artificial intelligence, robotics, and beyond. His goal is to develop robots that are not only functional but also capable of integration into human environments, ultimately aiming for a future where such machines coexist with and assist humans in daily life.

How MCP is preparing AI systems for a new era of travel automation

 




Most digital assistants today can help users find information, yet they still cannot independently complete tasks such as organizing a trip or finalizing a booking. This gap exists because the majority of these systems are built on generative AI models that can produce answers but lack the technical ability to carry out real-world actions. That limitation is now beginning to shift as the Model Context Protocol, known as MCP, emerges as a foundational tool for enabling task-performing AI.

MCP functions as an intermediary layer that allows large language models to interact with external data sources and operational tools in a standardized way. Anthropic unveiled this protocol in late 2024, describing it as a shared method for linking AI assistants to the platforms where important information is stored, including business systems, content libraries and development environments.

The protocol uses a client-server approach. An AI model or application runs an MCP client. On the opposite side, travel companies or service providers deploy MCP servers that connect to their internal data systems, such as booking engines, rate databases, loyalty programs or customer profiles. The two sides exchange information through MCP’s uniform message format.
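
MCP messages follow the JSON-RPC 2.0 format, so a minimal request an MCP client might send to a travel provider’s server looks roughly like the sketch below. The tool name `search_flights` and its arguments are hypothetical stand-ins for whatever a given provider actually exposes; they are not part of the protocol itself.

```python
import json

# Hedged sketch of MCP's uniform message format (JSON-RPC 2.0).
# "tools/call" asks the server to run one of the tools it advertises;
# the tool name and arguments below are invented for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_flights",
        "arguments": {"origin": "VIE", "destination": "LHR", "date": "2026-03-01"},
    },
}

# The client writes this to the server (over stdio or HTTP) and reads back a
# JSON-RPC response carrying the tool's result, e.g. a list of fare options.
print(json.dumps(request, indent=2))
```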

Before MCP, organizations had to create individual API integrations for each connection, which required significant engineering time. MCP is designed to remove that inefficiency by letting companies expose their information one time through a consolidated server that any MCP-enabled assistant can access.

Support from major AI companies, including Microsoft, Google, OpenAI and Perplexity, has pushed MCP into a leading position as the shared standard for agent-based communication. This has encouraged travel platforms to start experimenting with MCP-driven capabilities.

Several travel companies have already adopted the protocol. Kiwi.com introduced its MCP server in 2025, allowing AI tools to run flight searches and receive personalized results. Executives at the company note that the appetite for experimenting with agentic travel tools is growing, although the sector still needs clarity on which tasks belong inside a chatbot and which should remain on a company’s website.

In the accommodation sector, property management platform Apaleo launched an MCP server ahead of its competitors, and other travel brands such as Expedia and TourRadar are also integrating MCP. Industry voices emphasize that AI assistants using MCP pull verified information directly from official hotel and travel systems, rather than relying on generic online content.

The importance of MCP became even more visible when new ChatGPT apps were announced, with major travel agencies included among the first partners. Experts say this marks a significant moment for how consumers may start buying travel through conversational interfaces.

However, early adopters also warn that MCP is not without challenges. Older systems must be restructured to meet MCP’s data requirements, and companies must choose AI partners carefully because each handles privacy, authorization and data retention differently. LLM processing time can also introduce delays compared to traditional APIs.

Industry analysts expect MCP-enabled bookings to appear first in closed ecosystems, such as loyalty platforms or brand-specific applications, where trust and verification already exist. Although the technology is progressing quickly, experts note that consumer-facing value is still developing. For now, MCP represents the first steps toward more capable, agentic AI in travel.



Google Warns Users to Steer Clear of Public Wi-Fi: Here’s What You Should Do Instead

 

Google has issued a new security alert urging smartphone users to “avoid using public Wi-Fi whenever possible,” cautioning that “these networks can be unencrypted and easily exploited by attackers.” With so many people relying on free networks at airports, cafés, hotels and malls, the warning raises an important question—just how risky are these hotspots?

The advisory appears in Google’s latest “Behind the Screen” safety guide for both Android and iPhone users, released as text-based phishing and fraud schemes surge across the U.S. and other countries. The threat landscape is alarming: according to Google, 94% of Android users are vulnerable to sophisticated messaging scams that now operate like “a sophisticated, global enterprise designed to inflict devastating financial losses and emotional distress on unsuspecting victims.”

With 73% of people saying they are “very or extremely concerned about mobile scams,” and 84% believing these scams harm society at a major scale, Google’s new warning highlights the growing need for simple, practical ways to stay safer online.

Previously, Google’s network-related cautions focused mostly on insecure 2G cellular connections, which lack encryption and can be abused for SMS Blaster attacks—where fake cell towers latch onto nearby phones to send mass scam texts. But stepping into the public Wi-Fi debate is unusual, especially for a company as influential as Google.

Earlier this year, the U.S. Transportation Security Administration (TSA) also advised travelers: “Don’t use free public Wi-Fi” as part of its airport safety guidelines, pairing it with a reminder to avoid public charging stations as well. Both recommendations have drawn their share of skepticism within the cybersecurity community.

Even the Federal Trade Commission (FTC) has joined the discussion. The agency acknowledges that while public Wi-Fi networks in “coffee shops, malls, airports, hotels, and other places are convenient,” they have historically been insecure. The FTC explains that in the past, browsing on a public network exposed users to data theft because many websites didn’t encrypt their traffic. However, encryption is now widespread: “most websites do use encryption to protect your information. Because of the widespread use of encryption, connecting through a public Wi-Fi network is usually safe.”
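
The encryption the FTC refers to is TLS, the protocol behind the browser padlock. As a rough illustration, the short Python sketch below opens a TLS connection and prints the negotiated protocol version and certificate details; the hostname is just an example.

```python
import socket
import ssl

hostname = "example.com"                    # example target, not a recommendation
context = ssl.create_default_context()      # verifies the certificate chain and hostname

with socket.create_connection((hostname, 443), timeout=5) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        cert = tls_sock.getpeercert()
        # Traffic on this socket is now encrypted, even on an untrusted network.
        print(tls_sock.version(), cert["subject"], "expires", cert["notAfter"])
```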

So what’s the takeaway?
Public Wi-Fi itself isn’t inherently dangerous, but the wrong networks and unsafe browsing habits can put your data at risk. Following a few basic rules can help you stay protected:

How to Stay Safe on Public Wi-Fi

  • Turn off auto-connect for unknown or public Wi-Fi networks.

  • When accessing a network through a captive portal, never download software or submit personal details beyond an email address.

  • Make sure every site you open uses encryption — look for the padlock icon and avoid entering credentials if an unexpected popup appears.

  • Verify the network name before joining to ensure you're connecting to the official Wi-Fi of the hotel, café, airport or store.

  • Use only reputable, paid VPN services from trusted developers; free or unfamiliar VPNs—especially those based in China—can be riskier than not using one at all.

Elon Musk Unveils ‘X Chat,’ a New Encrypted Messaging App Aiming to Redefine Digital Privacy

 

Elon Musk, the entrepreneur behind Tesla, SpaceX, and X, has revealed a new messaging platform called X Chat—and he claims it could dramatically reshape the future of secure online communication.

Expected to roll out within the next few months, X Chat will rely on peer-to-peer encryption “similar to Bitcoin’s,” a move Musk says will keep conversations private while eliminating the need for ad-driven data tracking.

The announcement was made during Musk’s appearance on The Joe Rogan Experience, where he shared that his team had “rebuilt the entire messaging stack” from scratch.
“It’s using a sort of peer-to-peer-based encryption system,” Musk said. “So, it’s kind of similar to Bitcoin. I think, it’s very good encryption.”

Musk has repeatedly spoken out against mainstream messaging apps and their data practices. With X Chat, he intends to introduce a platform that avoids the “hooks for advertising” found in most competitors—hooks he believes create dangerous vulnerabilities.

“(When a messaging app) knows enough about what you’re texting to know what ads to show you, that’s a massive security vulnerability,” he said.
“If it knows enough information to show you ads, that’s a lot of information,” he added, warning that attackers could exploit the same data pathways to access private messages.

He emphasized that his approach to digital safety views security on a spectrum rather than a binary system. The goal, according to Musk, is to make X Chat “the least insecure” option available.

When launched, X Chat is expected to rival established encrypted platforms like WhatsApp and Telegram. However, Musk insists that X Chat will differentiate itself by maintaining stricter privacy boundaries.

While Meta states that WhatsApp’s communications use end-to-end encryption powered by the Signal Protocol, analysts note that WhatsApp still gathers metadata—details about user interactions—which is not encrypted. Additionally, chat backups remain unencrypted unless users enable that setting manually.

Musk argues that eliminating advertising components from X Chat’s architecture removes many of these weak points entirely.

A beta version of X Chat is already accessible to Premium subscribers on X. Early features include text messaging, file transfers, photos, GIFs, and other media, all associated with X usernames rather than phone numbers. Audio and video calls are expected once the app reaches full launch. Users will be able to run X Chat inside the main X interface or download it separately, allowing messaging, file sharing, and calls across devices.

Some industry observers believe X Chat could influence the digital payments space as well. Its encryption model aligns closely with the principles of decentralization and data ownership found in blockchain ecosystems. Analysts suggest the app may complement bitcoin-based payroll platforms, where secure communication is essential for financial discussions.

Still, the announcement has raised skepticism. Privacy researchers and cryptography experts are questioning how transparent Musk will be about the underlying encryption system. Although Musk refers to it as “Bitcoin-style,” technical documentation and details about independent audits have not been released.

Experts speculate Musk is referring to public-key cryptography—the same foundational technology used in Bitcoin and Nostr.
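
X Chat’s actual design has not been published, so the sketch below only illustrates the general public-key idea those experts are pointing at, using an X25519 key exchange from the Python `cryptography` package: each side keeps a private key, shares the public half, and both derive the same symmetric key without the server ever handling it. All names and parameters here are illustrative assumptions, not X Chat’s implementation.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def message_key(shared_secret: bytes) -> bytes:
    # Stretch the raw Diffie-Hellman output into a usable 32-byte key.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"chat-demo").derive(shared_secret)

# Each user generates a private key and publishes only the public half.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Both sides combine their own private key with the peer's public key and
# arrive at the same secret, without any server learning it.
alice_key = message_key(alice_private.exchange(bob_private.public_key()))
bob_key = message_key(bob_private.exchange(alice_private.public_key()))

assert alice_key == bob_key
print("shared message key:", alice_key.hex()[:16], "...")
```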

Critics argue that any messaging platform seeking credibility in the privacy community must be open-source for verification. Some also note that trust issues may arise due to past concerns surrounding Musk-owned platforms and their handling of user data and content moderation.

The Subtle Signs That Reveal an AI-Generated Video

 


Artificial intelligence is transforming how videos are created and shared, and the change is happening at a startling pace. In only a few months, AI-powered video generators have advanced so much that people are struggling to tell whether a clip is real or synthetic. Experts say that this is only the beginning of a much larger shift in how the public perceives recorded reality.

The uncomfortable truth is that most of us will eventually fall for a fake video. Some already have. The technology is improving so quickly that it is undermining the basic assumption that a video camera captures the truth. Until we adapt, it is important to know what clues can still help identify computer-generated clips before that distinction disappears completely.


The Quality Clue: When Bad Video Looks Suspicious

At the moment, the most reliable sign of a potentially AI-generated video is surprisingly simple: poor image quality. If a clip looks overly grainy, blurred, or compressed, that should raise immediate suspicion. Researchers in digital forensics often start their analysis by checking resolution and clarity.

Hany Farid, a digital-forensics specialist at the University of California, Berkeley, explains that low-quality videos often hide the subtle visual flaws created by AI systems. These systems, while impressive, still struggle to render fine details accurately. Blurring and pixelation can conveniently conceal these inconsistencies.
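
One crude, automatable version of that first check is a focus measure such as the variance of the Laplacian, sketched below with OpenCV. The filename and the threshold of 100 are arbitrary assumptions for illustration; a low score only flags a clip for closer scrutiny, it does not prove anything was generated by AI.

```python
import cv2

def sharpness_score(frame_path: str) -> float:
    """Variance of the Laplacian: a common, rough measure of image clarity."""
    gray = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(frame_path)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

score = sharpness_score("frame.png")  # hypothetical frame pulled from a clip
print(f"sharpness {score:.1f}:",
      "very blurry, inspect further" if score < 100 else "reasonably sharp")
```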

However, it is essential to note that not all low-quality clips are fake. Some authentic videos are genuinely filmed under poor lighting or with outdated equipment. Likewise, not every AI-generated video looks bad. The point is that unclear or downgraded quality makes fakes harder to detect.


Why Lower Resolution Helps Deception

Today’s top AI models, such as Google’s Veo and OpenAI’s Sora, have reduced obvious mistakes like extra fingers or distorted text. The issues they produce are much subtler: unusually smooth skin textures, unnatural reflections, strange shifts in hair or clothing, or background movements that defy physics. When resolution is high, those flaws are easier to catch. When the video is deliberately compressed, they almost vanish.

That is why deceptive creators often lower a video’s quality on purpose. By reducing resolution and adding compression, they hide the “digital fingerprints” that could expose a fake. Experts say this is now a common technique among those who intend to mislead audiences.


Short Clips Are Another Warning Sign

Length can be another indicator. Because generating AI video is still computationally expensive, most AI-generated clips are short, often six to ten seconds. Longer clips require more processing time and increase the risk of errors appearing. As a result, many deceptive videos online are short, and when longer ones are made, they are typically stitched together from several shorter segments. If you notice sharp cuts or changes every few seconds, that could be another red flag.


The Real-World Examples of Viral Fakes

In recent months, several viral examples have proven how convincing AI content can be. A video of rabbits jumping on a trampoline received over 200 million views before viewers learned it was synthetic. A romantic clip of two strangers meeting on the New York subway was also revealed to be AI-generated. Another viral post showed an American priest delivering a fiery sermon against billionaires; it, too, turned out to be fake.

All these videos shared one detail: they looked like they were recorded on old or low-grade cameras. The bunny video appeared to come from a security camera, the subway couple’s clip was heavily pixelated, and the preacher’s footage was slightly zoomed and blurred. These imperfections made the fakes seem authentic.


Why These Signs Will Soon Disappear

Unfortunately, these red flags are temporary. Both Farid and other researchers, like Matthew Stamm of Drexel University, warn that visual clues are fading fast. AI systems are evolving toward flawless realism, and within a couple of years, even experts may struggle to detect fakes by sight alone. This evolution mirrors what happened with AI images, where obvious errors like distorted hands or melted faces have mostly disappeared.

In the future, video verification will depend less on what we see and more on what the data reveals. Forensic tools can already identify statistical irregularities in pixel distribution or file structure that the human eye cannot perceive. These traces act like invisible fingerprints left during video generation or manipulation.

Tech companies are now developing standards to authenticate digital content. The idea is for cameras to automatically embed cryptographic information into files at the moment of recording, verifying the image’s origin. Similarly, AI systems could include transparent markers to indicate that a video was machine-generated. While these measures are promising, they are not yet universally implemented.
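
The sketch below shows the core of that idea with an Ed25519 signature from the Python `cryptography` package: the capture device (or generator) signs a hash of the file, and anyone holding the matching public key can later confirm the footage has not been swapped or altered. This is a simplified stand-in for provenance standards such as C2PA, and the filename is an assumption.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In the envisioned scheme the private key lives inside the camera's secure
# hardware; the public key is published so anyone can verify.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

video_bytes = open("clip.mp4", "rb").read()      # hypothetical recording
digest = hashlib.sha256(video_bytes).digest()
signature = device_key.sign(digest)              # shipped alongside the file

# Verification: raises InvalidSignature if the file or signature was altered.
public_key.verify(signature, digest)
print("provenance check passed")
```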

Experts in digital literacy argue that the most important shift must come from us, not just technology. As Mike Caulfield, a researcher on misinformation, points out, people need to change how they interpret what they see online. Relying on visual appearance is no longer enough.

Just as we do not assume that written text is automatically true, we must now apply the same scepticism to videos. The key questions should always be: Who created this content? Where was it first posted? Has it been confirmed by credible sources? Authenticity now depends on context and source verification rather than clarity or resolution.


The Takeaway

For now, blurry and short clips remain practical warning signs of possible AI involvement. But as technology improves, those clues will soon lose their usefulness. The only dependable defense against misinformation will be a cautious, investigative mindset: verifying origin, confirming context, and trusting only what can be independently authenticated.

In the era of generative video, the truth no longer lies in what we see but in what we can verify.



Professor Predicts Salesforce Will Be First Big Tech Company Destroyed by AI

 

Renowned Computer Science professor Pedro Domingos has sparked intense online debate with his striking prediction that Salesforce will be the first major technology company destroyed by artificial intelligence. Domingos, who serves as professor emeritus of computer science and engineering at the University of Washington and authored The Master Algorithm and 2040, shared his bold forecast on X (formerly Twitter), generating over 400,000 views and hundreds of responses.

Domingos' statement centers on artificial intelligence's transformative potential to reshape the economic landscape, moving beyond concerns about job losses to predictions of entire companies becoming obsolete. When questioned by an X user about whether CRM (Customer Relationship Management) systems are easy to replace, Domingos clarified his position, stating "No, I think it could be way better," suggesting current CRM platforms have significant room for AI-driven improvement.

Salesforce's vulnerability

Online commentators elaborated on Domingos' thesis, explaining that CRM fundamentally revolves around data capture and retrieval—functions where AI demonstrates superior speed and efficiency. 

Unlike creative software platforms such as Adobe or Microsoft where users develop decades of workflow habits, CRM systems like Salesforce involve repetitive data entry tasks that create friction rather than user loyalty. Traditional CRM systems suffer from low user adoption, with less than 20% of sales activities typically recorded in these platforms, creating opportunities for AI solutions that automatically capture and analyze customer interactions.

Counterarguments and Salesforce's response

Not all observers agree with Domingos' assessment. Some users argued that Salesforce maintains strong relationships with traditional corporations and can simply integrate large language models (LLMs) into existing products, citing initiatives like Missionforce, Agent Fabric, and Agentforce Vibes as evidence of active adaptation. Salesforce has positioned itself as "the world's #1 AI CRM" through substantial AI investments across its platform ecosystem, with Agentforce representing a strategic pivot toward building digital labor forces.

Broader implications

Several commentators took an expansive view, warning that every major Software-as-a-Service (SaaS) platform faces disruption as software economics shift dramatically. One user emphasized that AI enables truly customized solutions tailored to specific customer needs and processes, potentially rendering traditional software platforms obsolete. However, Salesforce's comprehensive ecosystem, market dominance, and enterprise-grade security capabilities may provide defensive advantages that prevent complete displacement in the near term.

Smarter Scams, Sharper Awareness: How to Recognize and Prevent Financial Fraud in the Digital Age




Fraud has evolved into a calculated industry powered by technology, psychology, and precision targeting. Gone are the days when scams could be spotted through broken English or unrealistic offers alone. Today’s fraudsters combine emotional pressure with digital sophistication, creating schemes that appear legitimate and convincing. Understanding how these scams work, and knowing how to respond, is essential for protecting your family’s hard-earned savings.


The Changing Nature of Scams

Modern scams are not just technical traps; they are psychological manipulations. Criminals no longer rely solely on phishing links or counterfeit banking apps. They now use social engineering tactics, appealing to trust, fear, or greed. A scam might start with a call pretending to be from a government agency, an email about a limited investment opportunity, or a message warning that your bank account is at risk. Each of these is designed to create panic or urgency so that victims act before they think.

A typical fraud cycle follows a simple pattern: an urgent message, a seemingly legitimate explanation, and a request for sensitive action, such as sharing a one-time password, installing a new app, or transferring funds “temporarily” to another account. Once the victim complies, the attacker vanishes, leaving financial and emotional loss behind.

Experts note that the most dangerous scams often appear credible because they mimic official communication styles, use verified-looking logos, and even operate fake customer support numbers. The sophistication makes these schemes particularly hard to spot, especially for first-time investors or non-technical individuals.


Key Red Flags You Should Never Ignore

1. Unrealistic returns or guarantees: If a company claims you can make quick, risk-free profits or shows charts with consistent gains, it’s likely a setup. Real investments fluctuate; only scammers promise certainty.

2. Pressure to act immediately: Whether it’s “only minutes left to invest” or “pay now to avoid penalties,” urgency is a manipulative tactic designed to prevent logical evaluation.

3. Requests to switch apps or accounts: Authentic businesses never ask customers to transfer funds into personal or unfamiliar accounts or to download unverified applications.

4. Emotional storylines: Fraudsters know how to exploit emotions. They may pretend to be in love, offer fake job opportunities, or issue fabricated legal threats, all aimed at overriding rational thinking.

5. Asking for security codes or OTPs: No genuine financial institution or digital platform will ever ask for these details. Sharing them gives scammers direct access to your accounts.


Simple Steps to Build Financial Safety

Protection from scams starts with discipline and awareness rather than advanced technology.

• Take a moment before responding. Don’t act out of panic. Pause, think, and verify before clicking or transferring money.

• Verify independently. If a message or call appears urgent, reach out to the organization using contact details from their official website, not from the message itself.

• Activate alerts and monitor accounts. Keep an eye on all transactions. Early detection of suspicious activity can prevent larger losses.

• Use multi-layered security. Enable multi-factor authentication on all major financial accounts, preferably using hardware security keys or authentication apps instead of SMS codes.

• Keep your digital environment clean. Regularly update your devices, operating systems, and browsers, and use trusted antivirus software to block potential malware.

• Install apps only from reliable sources. Avoid downloading apps or investment platforms shared through personal messages or unverified websites.

• Educate your family. Many scam victims are older adults who may hesitate to talk about it. Encourage open communication and make sure they know how to recognize suspicious requests.


Awareness Is the New Security

Technology gives fraudsters global reach, but it also equips users with tools to fight back. Secure authentication systems, anti-phishing filters, and real-time transaction alerts are valuable but they work best when combined with personal vigilance.

Think of security like investment diversification: no single tool provides complete protection. A strong defense requires a mix of cautious behavior, verification habits, and awareness of evolving threats.


Your Takeaway

Scammers are adapting faster than ever, blending emotional manipulation with technical skill. The best way to counter them is to slow down, question everything that seems urgent or “too good to miss,” and confirm information before taking action.

Protecting your family’s financial wellbeing isn’t just about saving or investing wisely, it’s about staying alert, informed, and proactive. Remember: genuine institutions will never rush you, threaten you, or ask for confidential information. The smartest investment today is in your awareness.


AI’s Hidden Weak Spot: How Hackers Are Turning Smart Assistants into Secret Spies

 

As artificial intelligence becomes part of everyday life, cybercriminals are already exploiting its vulnerabilities. One major threat shaking up the tech world is the prompt injection attack — a method where hidden commands override an AI’s normal behavior, turning helpful chatbots like ChatGPT, Gemini, or Claude into silent partners in crime.

A prompt injection occurs when hackers embed secret instructions inside what looks like an ordinary input. The AI can’t tell the difference between developer-given rules and user input, so it processes everything as one continuous prompt. This loophole lets attackers trick the model into following their commands — stealing data, installing malware, or even hijacking smart home devices.

Security experts warn that these malicious instructions can be hidden in everyday digital spaces — web pages, calendar invites, PDFs, or even emails. Attackers disguise their prompts using invisible Unicode characters, white text on white backgrounds, or zero-sized fonts. The AI then reads and executes these hidden commands without realizing they are malicious — and the user remains completely unaware that an attack has occurred.

For instance, a company might upload a market research report for analysis, unaware that the file secretly contains instructions to share confidential pricing data. The AI dutifully completes both tasks, leaking sensitive information without flagging any issue.

In another chilling example from the Black Hat security conference, hidden prompts in calendar invites caused AI systems to turn off lights, open windows, and even activate boilers — all because users innocently asked Gemini to summarize their schedules.
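
There is no complete user-side defense against these tricks, but one narrow check suggested by the hidden-character examples above is screening text for characters that render invisibly before handing it to an assistant. The Python sketch below flags Unicode format, private-use, and unassigned characters; it is an illustrative heuristic only and will not catch white-on-white text or other styling tricks.

```python
import unicodedata

SUSPICIOUS_CATEGORIES = {"Cf", "Co", "Cn"}  # format, private-use, unassigned

def invisible_chars(text: str) -> list[tuple[int, str]]:
    """Return positions and names of characters that typically render invisibly."""
    return [(i, unicodedata.name(ch, "UNNAMED"))
            for i, ch in enumerate(text)
            if unicodedata.category(ch) in SUSPICIOUS_CATEGORIES]

# A zero-width space and a bidi override hidden inside otherwise normal text.
document = "Quarterly summary\u200b\u202eignore previous instructions"
print(invisible_chars(document))
```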

Prompt injection attacks mainly fall into two categories:

  • Direct Prompt Injection: Attackers directly type malicious commands that override the AI’s normal functions.

  • Indirect Prompt Injection: Hackers hide commands in external files or links that the AI processes later — a far stealthier and more dangerous method.

There are also advanced techniques like multi-agent infections (where prompts spread like viruses between AI systems), multimodal attacks (hiding commands in images, audio, or video), hybrid attacks (combining prompt injection with traditional exploits like XSS), and recursive injections (where AI generates new prompts that further compromise itself).

It’s crucial to note that prompt injection isn’t the same as “jailbreaking.” While jailbreaking tries to bypass safety filters for restricted content, prompt injection reprograms the AI entirely — often without the user realizing it.

How to Stay Safe from Prompt Injection Attacks

Even though many solutions focus on corporate users, individuals can also protect themselves:

  • Be cautious with links, PDFs, or emails you ask an AI to summarize — they could contain hidden instructions.
  • Never connect AI tools directly to sensitive accounts or data.
  • Avoid “ignore all instructions” or “pretend you’re unrestricted” prompts, as they weaken built-in safety controls.
  • Watch for unusual AI behavior, such as strange replies or unauthorized actions — and stop the session immediately.
  • Always use updated versions of AI tools and apps to stay protected against known vulnerabilities.

AI may be transforming our world, but as with any technology, awareness is key. Hidden inside harmless-looking prompts, hackers are already whispering commands that could make your favorite AI assistant act against you — without you ever knowing.

New Google Study Reveals Threat Protection Against Text Scams


As Cybersecurity Awareness Month comes to an end, we're concentrating on mobile scams, one of the most prevalent digital threats of our day. Over $400 billion in funds have been stolen globally in the past 12 months as a result of fraudsters using sophisticated AI tools to create more convincing schemes. 

Google study about smartphone threat protection 

Android has been at the forefront of the fight against scammers for years, utilizing the best AI to create proactive, multi-layered defenses that can detect and stop scams before they get to you. Every month, over 10 billion suspected malicious calls and messages are blocked by Android's scam defenses. In order to preserve the integrity of the RCS service, Google claims to conduct regular safety checks. It has blocked more than 100 million suspicious numbers in the last month alone.

About the research 

To highlight how fraud defenses function in the real world, Google invited consumers and independent security experts to compare how well Android and iOS protect you from these dangers. Additionally, Google is releasing a new report that describes how contemporary text scams are planned, giving you insight into the strategies used by scammers and how to identify them.

Key insights 

  • Those who reported not receiving any scam texts in the week before the survey were 58% more likely to be Android users than iOS users. The benefit was even greater on Pixel, where users were 96% more likely to report no scam texts than iPhone owners.
  • By contrast, reports of three or more scam texts in a week were 65% more common among iOS users than Android users. When comparing iPhone and Pixel, the disparity was even more noticeable, with 136% more iPhone users reporting a high volume of scam messages.
  • Compared to iPhone users, Android users were 20% more likely to say their device's scam protections were "very effective" or "extremely effective." Additionally, iPhone users were 150% more likely to say their device was completely ineffective at preventing mobile fraud.  

Android smartphones were found to have the strongest AI-powered protections in a recent assessment conducted by the international technology market research firm Counterpoint Research.  

Austria Leads Europe’s Digital Sovereignty Drive with Shift to Nextcloud

 

Even before Azure’s global outage earlier this week, Austria’s Ministry of Economy had already made a major move toward achieving digital sovereignty. The Ministry successfully transitioned 1,200 employees to a Nextcloud-based collaboration and cloud platform hosted entirely on Austrian infrastructure.

This migration marks a deliberate move away from proprietary, foreign-controlled cloud services like Microsoft 365, in favor of an open-source, European alternative. The decision mirrors a broader European shift—where governments and public agencies aim to retain control over sensitive data while reducing dependency on US tech providers.

Supporting this shift is the EuroStack Initiative, a non-profit coalition of European tech companies promoting the idea to “organize action, not just talk, around the pillars of the initiative: Buy European, Sell European, Fund European.”

Explaining Austria’s rationale, Florian Zinnagl, CISO of the Ministry of Economy, Energy, and Tourism (BMWET), stated:

“We carry responsibility for a large amount of sensitive data—from employees, companies, and citizens. As a public institution, we take this responsibility very seriously. That’s why we view it critically to rely on cloud solutions from non-European corporations for processing this information.”

Austria’s example follows a growing list of EU nations and institutions, such as Germany’s Schleswig-Holstein state, Denmark’s government agencies, the Austrian military, and the city of Lyon in France. These entities have all adopted open-source or European-based software solutions to ensure that data storage and processing remain within European borders—strengthening data security, privacy compliance under GDPR, and protection against foreign surveillance.

Advocates like Thierry Carrez, General Manager of the OpenInfra Foundation, emphasize the strategic value of open infrastructure:

“Open infrastructure allows nations and organizations to maintain control over their applications, their data, and their destiny while benefiting from global collaboration.”

However, not everyone is pleased with Europe’s digital independence push. The US government has reportedly voiced concerns, with American diplomats lobbying French and German officials ahead of the upcoming Summit on European Digital Sovereignty in November—an event aimed at advancing Europe’s digital autonomy goals.

Despite these geopolitical tensions, Austria’s migration to Nextcloud was swift and effective—completed in just four months. The Ministry had already started adopting Microsoft 365 and Teams but chose to retain a hybrid system: Nextcloud for secure internal collaboration and data management, and Teams for external communications. Integration with Outlook and calendar tools was handled through Sendent’s Outlook app, ensuring minimal workflow disruption and strong user adoption.

Not all transitions have gone as smoothly. Austria’s Ministry of Justice, for example, faced setbacks while switching 20,000 desktops from Microsoft Office to LibreOffice—a move intended to cut licensing costs. Reports described the project as an “unprofessional, rushed operation,” resulting in compatibility issues and user frustration.

The takeaway is clear: successful digital transformation requires strategic planning and technical support. Austria’s Ministry of Economy proves that, with the right approach, public sector institutions can adopt sovereign cloud solutions efficiently—balancing usability, speed, and security—while preserving Europe’s vision of digital independence.

TP-Link Routers May Get Banned in US Due to Alleged Links With China


TP-Link routers may soon be banned in the US. The likelihood of a ban has grown as various federal agencies have backed the proposal.

Alleged links with China

The news first came in December last year. According to the WSJ, officials at the Departments of Justice, Commerce, and Defense had launched investigations into the company due to national security threats from China. 

Currently, the proposal has gotten interagency approval. According to the Washington Post, "Commerce officials concluded TP-Link Systems products pose a risk because the US-based company's products handle sensitive American data and because the officials believe it remains subject to jurisdiction or influence by the Chinese government." 

But TP-Link's connections to the Chinese government are not confirmed. The company has denied any such ties and maintains that it is not a Chinese company.

About TP-Link routers 

The company was founded in China in 1996. After the October 2024 investigation, the company split into two: TP-Link Systems and TP-Link Technologies. "TP-Link's unusual degree of vulnerabilities and required compliance with [Chinese] law are in and of themselves disconcerting. When combined with the [Chinese] government's common use of [home office] routers like TP-Link to perpetrate extensive cyberattacks in the United States, it becomes significantly alarming," the officials wrote in October 2024.

The company has dominated the US router market since the COVID pandemic, rising from 20% of total router sales to 65% between 2019 and 2025.

Why the investigation?

The US DoJ is investigating whether TP-Link engaged in predatory pricing, artificially lowering its prices to drive out competition.

The potential ban stems from an interagency review and is being handled by the Department of Commerce. Experts say the ban may yet be lifted, given the Trump administration's ongoing negotiations with China.

YouTube TV Loses Disney Channels Amid Ongoing Dispute Over Licensing Fees

 

Subscribers of YouTube TV in the United States have lost access to Disney-owned channels, including ESPN, ABC, National Geographic, and the Disney Channel, as Google and Disney remain locked in a dispute over a new licensing agreement.

According to Disney, YouTube TV, owned by tech giant Google, “refused to pay fair rates for the content”, which led to the suspension of its channels.

In response, YouTube TV said that Disney’s proposed terms would “disadvantage our members while benefiting Disney’s own live TV products.”

The blackout occurred just before midnight on Thursday, the deadline for the companies to finalize a new contract. The outage impacts nearly 10 million YouTube TV subscribers. The platform has announced that if Disney channels remain unavailable for an extended period, users will receive a $20 credit as compensation.

Both YouTube TV and Disney-owned Hulu are major players in the U.S. streaming TV market. Their current standoff mirrors similar disputes YouTube faced earlier this year with other media companies, which also risked reducing available programming for viewers.

Recently, Google managed to reach last-minute agreements with NBCUniversal, Paramount, and Fox, keeping popular content like “Sunday Night Football” on YouTube TV.

Despite both companies stating that they are working toward a resolution to restore Disney content, the primary conflict continues to revolve around fee structures.

A Disney spokesperson remarked, “With a $3 trillion market cap, Google is using its market dominance to eliminate competition and undercut the industry-standard terms we've successfully negotiated with every other distributor.”

Meanwhile, YouTube TV countered that Disney’s offer involves “costly economic terms” that could drive up prices for customers and limit viewing options, ultimately benefiting Disney’s competing service, Hulu + Live TV.

Hackers Exploit AI Stack in Windows to Deploy Malware


The artificial intelligence (AI) stack built into Windows can act as a channel for malware transmission, a recent study has demonstrated.

Using AI in malware

In a year when ingenious and sophisticated prompt-injection tactics have been proliferating, security researcher hxr1 has demonstrated a far more conventional way of weaponizing AI. In a proof-of-concept (PoC) provided exclusively to Dark Reading, he detailed a living-off-the-land (LotL) attack that abuses trusted Open Neural Network Exchange (ONNX) files to slip past security engines.

Impact on Windows

Cybersecurity programs are only as effective as their designers make them. They can flag known signs of suspicious activity, such as unusually large volumes of data leaving a network or an unfamiliar .exe file launching. But if malware arrives on a system through a channel they do not recognize, they are unlikely to notice it.

That is what makes AI so difficult: new software, processes, and systems that incorporate AI capabilities create new, effectively invisible channels through which cyberattacks can spread.

Why AI in malware is a problem

Since 2018, the Windows operating system has gradually added features that let applications run AI inference locally, without requiring a connection to a cloud service. Windows Hello, Photos, and Office use this built-in AI for facial recognition, object identification, and productivity tasks, respectively. They do so by calling the Windows Machine Learning (ML) application programming interface (API), which loads ML models as ONNX files.

ONNX files are implicitly trusted by Windows and by security software. Why wouldn't they be? Malware routinely turns up in EXEs, PDFs, and other formats, but no threat actor in the wild has yet shown the intent or the ability to use neural networks as weapons. There are, however, plenty of ways to make it feasible.

Attack tactic

The simplest way to infect a neural network is to plant a malicious payload in its metadata. The trade-off is that the payload would sit there as plain text, making it much easier for a security tool to detect.

Embedding the malware piecemeal across the model's named nodes, inputs, and outputs would be more challenging but also more covert. Alternatively, an attacker could use steganography to hide a payload inside the neural network's own weights.

All three approaches work as long as a loader is nearby that can call the necessary Windows APIs to unpack the payload, reassemble it in memory, and run it. They are also extremely covert: trying to recover a fragmented payload from a neural network is like trying to reconstruct a needle from bits of it scattered through a haystack.
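
To see why the metadata route is so hard for format-trusting scanners to reason about, here is a minimal, benign sketch using the open-source onnx Python package (not the researcher's PoC). It builds a trivial model and attaches an arbitrary string as metadata, exactly the kind of opaque field a tool that whitelists .onnx files would never inspect. The model, file, and key names are made up for illustration.

```python
# Illustrative sketch only: ONNX metadata can carry arbitrary text that
# format-trusting scanners typically never inspect. Benign demo, assumed names.
import onnx
from onnx import helper, TensorProto

# Build a trivial identity model: one input, one output, one Identity node.
inp = helper.make_tensor_value_info("x", TensorProto.FLOAT, [1])
out = helper.make_tensor_value_info("y", TensorProto.FLOAT, [1])
node = helper.make_node("Identity", ["x"], ["y"])
graph = helper.make_graph([node], "demo_graph", [inp], [out])
model = helper.make_model(graph)

# Attach an arbitrary string as metadata; nothing validates or sanitizes it.
entry = model.metadata_props.add()
entry.key = "notes"
entry.value = "any opaque string can live here"

onnx.save(model, "demo.onnx")
loaded = onnx.load("demo.onnx")
print({p.key: p.value for p in loaded.metadata_props})
```

The point is not that this snippet is dangerous; it is that the format happily carries data no inference engine ever needs, which is precisely the blind spot a living-off-the-land loader would exploit.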

Google Chrome to Show Stronger Warnings for Insecure HTTP Sites Starting October 2025

 

Google is taking another major step toward a safer web experience. In October 2025, the company announced that Chrome will begin displaying clearer and more prominent warnings when users access public websites that do not use HTTPS encryption. The move is part of Google’s ongoing effort to make secure browsing the default for everyone.

At present, Chrome only displays a “Your connection is not private” message when a website’s HTTPS configuration is broken or misconfigured. However, this new update goes beyond that — it will alert users whenever they try to open any HTTP (non-HTTPS) website, emphasizing the risks of sharing personal data on unencrypted pages.

Google initially introduced optional warnings for insecure HTTP sites back in 2021, but users had to manually enable them. Over time, the adoption of HTTPS has skyrocketed — according to Google, between 95% and 99% of web traffic now takes place over secure HTTPS connections. This widespread adoption, the company says, “makes it possible to consider stronger mitigations against the remaining insecure HTTP.”

HTTPS, or Hypertext Transfer Protocol Secure, adds a layer of encryption that prevents malicious actors from intercepting or tampering with the information exchanged between users and websites. Without it, attackers can easily eavesdrop, inject malware, or steal sensitive data such as passwords and payment details.
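
For readers who want to see what that encrypted layer looks like in practice, the short Python sketch below opens a verified TLS connection to an HTTPS site using only the standard library; the hostname is just an example. The same request over plain HTTP would travel as readable text that any on-path attacker could capture or modify.

```python
# Minimal illustration of what HTTPS adds: the socket is wrapped in TLS,
# the server's certificate is verified, and all bytes on the wire are encrypted.
import socket
import ssl

ctx = ssl.create_default_context()  # enables certificate and hostname checks

with socket.create_connection(("example.com", 443)) as raw_sock:
    with ctx.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        print("negotiated:", tls_sock.version())              # e.g. TLSv1.3
        print("server cert subject:", tls_sock.getpeercert()["subject"])
```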

In its official announcement, Google also highlighted that the largest contributor to insecure HTTP traffic comes from private websites — for example, internal business portals or personal web servers — as they often face challenges in obtaining HTTPS certificates. While these sites are “typically less dangerous than their public site counterparts,” Google cautions that HTTP navigation still poses potential risks.

Before the change applies to all users, Google plans to first roll it out to people who have Enhanced Safe Browsing enabled, starting in April 2026. This phased rollout will allow the company to monitor feedback and ensure a smooth transition. Chrome users will still retain control over their browsing experience — they can turn off these alerts by disabling the “Always Use Secure Connections” setting in the browser’s preferences.

This update reinforces Google’s long-term vision of making the internet fully encrypted and secure by default. With the vast majority of web traffic already protected, the company’s focus is now on phasing out the remaining insecure connections and encouraging all website owners to adopt HTTPS.

IBM’s 120-Qubit Quantum Breakthrough Edges Closer to Cracking Bitcoin Encryption

 

IBM has announced a major leap in quantum computing, moving the tech world a step closer to what many in crypto fear most—a machine capable of breaking Bitcoin’s encryption.

Earlier this month, IBM researchers revealed the creation of a 120-qubit entangled quantum state, marking the most advanced and stable demonstration of its kind so far.

Detailed in a paper titled “Big Cats: Entanglement in 120 Qubits and Beyond,” the study showcases genuine multipartite entanglement across all 120 qubits. This milestone is critical in the journey toward fault-tolerant quantum computers—machines powerful enough to run algorithms that could potentially outpace and even defeat modern cryptography.

“We seek to create a large entangled resource state on a quantum computer using a circuit whose noise is suppressed,” the researchers wrote. “We use techniques from graph theory, stabilizer groups, and circuit uncomputation to achieve this goal.”

This achievement comes amid fierce global competition in the quantum computing race. IBM’s progress surpasses Google Quantum AI’s 105-qubit Willow chip, which recently demonstrated a physics algorithm faster than any classical computer could simulate.

In the experiment, IBM scientists utilized Greenberger–Horne–Zeilinger (GHZ) states, also known as “cat states,” a nod to Schrödinger’s iconic thought experiment. In these states, the entire register sits in a superposition of all qubits reading zero and all reading one, so measuring any single qubit instantly fixes the outcome of every other, a correlation with no classical counterpart.
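
For intuition, the snippet below builds a small GHZ state with Qiskit's standard textbook construction: a Hadamard gate on the first qubit followed by a chain of CNOTs. This is only an illustration on a handful of qubits; IBM's 120-qubit result additionally depends on hardware-aware compilation and noise suppression that a few lines of circuit code do not capture.

```python
# Textbook GHZ ("cat") state on a small register using Qiskit.
# Illustration only; not IBM's 120-qubit, noise-suppressed circuit.
from qiskit import QuantumCircuit

n = 4  # small demo register; IBM's experiment entangled 120 qubits
qc = QuantumCircuit(n, n)

qc.h(0)                      # put the first qubit into superposition
for i in range(n - 1):
    qc.cx(i, i + 1)          # CNOT chain spreads the entanglement

qc.measure(range(n), range(n))
print(qc)

# On an ideal simulator the only outcomes are '0000' and '1111',
# each with roughly 50% probability: the qubits behave as one object.
```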

“Besides their practical utility, GHZ states have historically been used as a benchmark in various quantum platforms such as ions, superconductors, neutral atoms, and photons,” the researchers noted. “This arises from the fact that these states are extremely sensitive to imperfections in the experiment—indeed, they can be used to achieve quantum sensing at the Heisenberg limit.”

To reach the 120-qubit benchmark, IBM leveraged superconducting circuits and an adaptive compiler that directed operations to the least noisy regions of the chip. They also introduced a method called temporary uncomputation, where qubits that had completed their tasks were briefly disentangled to stabilize before being reconnected.

The performance was evaluated using fidelity, which measures how closely a quantum state matches its theoretical ideal. A fidelity of 1.0 represents a perfect match, and any value above 0.5 certifies genuine multipartite entanglement; IBM’s experiment achieved 0.56, verifying that all 120 qubits were coherently connected in one unified system.

Direct testing of such a vast quantum state is computationally unfeasible—it would take longer than the age of the universe to analyze every configuration. Instead, IBM used parity oscillation tests and Direct Fidelity Estimation, statistical techniques that sample subsets of the system to verify synchronization among qubits.
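
As a rough, back-of-the-envelope illustration of how such a number is typically assembled, the sketch below combines the two quantities a GHZ experiment actually measures, the population of the all-zeros/all-ones outcomes and the contrast of the parity oscillation, into the standard fidelity estimate. The input values here are hypothetical, not IBM's raw data.

```python
# Hypothetical numbers, for illustration only; not IBM's measured data.
population = 0.62   # combined probability of seeing all-0s or all-1s
coherence = 0.50    # amplitude (contrast) of the parity oscillation

# Standard GHZ fidelity estimate: F = (population + coherence) / 2
fidelity = 0.5 * (population + coherence)
print(f"estimated fidelity: {fidelity:.2f}")   # 0.56 with these inputs

# Any value above 0.5 certifies genuine multipartite entanglement,
# the threshold that IBM's reported 0.56 clears.
```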

Although IBM’s current system does not yet threaten existing encryption, this progress pushes the boundary closer to a reality where quantum computers could challenge digital security, including Bitcoin’s defenses.

According to Project 11, a quantum research group, roughly 6.6 million BTC—worth about $767 billion—could be at risk from future quantum attacks. This includes coins believed to belong to Bitcoin’s creator, Satoshi Nakamoto.

“This is one of Bitcoin’s biggest controversies: what to do with Satoshi’s coins. You can’t move them, and Satoshi is presumably gone,” Project 11 founder Alex Pruden told Decrypt. “So what happens to that Bitcoin? It’s a significant portion of the supply. Do you burn it, redistribute it, or let a quantum computer get it? Those are the only options.”

Once a Bitcoin address’s public key becomes visible on the network, a sufficiently powerful quantum computer could, in theory, derive the corresponding private key and take control of the funds before a transaction is confirmed. While IBM’s 120-qubit experiment cannot do anything of the sort yet, it signals steady advancement toward that level of capability.

With IBM aiming for fault-tolerant quantum systems by 2030, and rivals like Google and Quantinuum pursuing the same goal, the quantum threat to digital assets is no longer a distant speculation—it’s a growing reality.