Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Cyber Security. Show all posts

Amazon and Microsoft AI Investments Put India at a Crossroads

 

Major technology companies Amazon and Microsoft have announced combined investments exceeding $50 billion in India, placing artificial intelligence firmly at the center of global attention on the country’s technology ambitions. Microsoft chief executive Satya Nadella revealed the company’s largest-ever investment in Asia, committing $17.5 billion to support infrastructure development, workforce skills, and what he described as India’s transition toward an AI-first economy. Shortly after, Amazon said it plans to invest more than $35 billion in India by 2030, with part of that funding expected to strengthen its artificial intelligence capabilities in the country. 

These announcements arrive at a time of heightened debate around artificial intelligence valuations globally. As concerns about a potential AI-driven market bubble have grown, some financial institutions have taken a contrarian view on India’s position. Analysts at Jefferies described Indian equities as a “reverse AI trade,” suggesting the market could outperform if global enthusiasm for AI weakens. HSBC has echoed similar views, arguing that Indian stocks offer diversification for investors wary of overheated technology markets elsewhere. This perspective has gained traction as Indian equities have underperformed regional peers over the past year, while foreign capital has flowed heavily into AI-centric companies in South Korea and Taiwan. 

Against this backdrop, the scale of Amazon and Microsoft’s commitments offers a significant boost to confidence. However, questions remain about how competitive India truly is in the global AI race. Adoption of artificial intelligence across the country has accelerated, with increasing investment in data centers and early movement toward domestic chip manufacturing. A recent collaboration between Intel and Tata Electronics to produce semiconductors locally reflects growing momentum in strengthening AI infrastructure. 

Despite these advances, India continues to lag behind global leaders when it comes to building sovereign AI models. The government launched a national AI mission aimed at supporting researchers and startups with high-performance computing resources to develop a large multilingual model. While officials say a sovereign model supporting more than 22 languages is close to launch, global competitors such as OpenAI and China-based firms have continued to release more advanced systems in the interim. India’s public investment in this effort remains modest when compared with the far larger AI spending programs seen in countries like France and Saudi Arabia. 

Structural challenges also persist. Limited access to advanced semiconductors, fragmented data ecosystems, and insufficient long-term research investment constrain progress. Although India has a higher-than-average concentration of AI-skilled professionals, retaining top talent remains difficult as global mobility draws developers overseas. Experts argue that policy incentives will be critical if India hopes to convert its talent advantage into sustained leadership. 

Even so, international studies suggest India performs strongly relative to its economic stage. The country ranks among the top five globally for new AI startups receiving investment and contributes a significant share of global AI research publications. While funding volumes remain far below those of the United States and China, experts believe India’s advantage may lie in applying AI to real-world problems rather than competing directly in foundational model development. 

AI-driven applications addressing agriculture, education, and healthcare are already gaining traction, demonstrating the technology’s potential impact at scale. At the same time, analysts warn that artificial intelligence could disrupt India’s IT services sector, a long-standing engine of economic growth. Slowing hiring, wage pressure, and weaker stock performance indicate that this transition is already underway, underscoring both the opportunity and the risk embedded in India’s AI future.

OpenAI Warns Future AI Models Could Increase Cybersecurity Risks and Defenses

 

Meanwhile, OpenAI told the press that large language models will get to a level where future generations of these could pose a serious risk to cybersecurity. The company in its blog postingly admitted that powerful AI systems could eventually be used to craft sophisticated cyberattacks, such as developing previously unknown software vulnerabilities or aiding stealthy cyber-espionage operations against well-defended targets. Although this is still theoretical, OpenAI has underlined that the pace with which AI cyber-capability improvements are taking place demands proactive preparation. 

The same advances that could make future models attractive for malicious use, according to the company, also offer significant opportunities to strengthen cyber defense. OpenAI said such progress in reasoning, code analysis, and automation has the potential to significantly enhance security teams' ability to identify weaknesses in systems better, audit complex software systems, and remediate vulnerabilities more effectively. Instead of framing the issue as a threat alone, the company cast the issue as a dual-use challenge-one in which adequate management through safeguards and responsible deployment would be required. 

In the development of such advanced AI systems, OpenAI says it is investing heavily in defensive cybersecurity applications. This includes helping models improve particularly on tasks related to secure code review, vulnerability discovery, and patch validation. It also mentioned its effort on creating tooling supporting defenders in running critical workflows at scale, notably in environments where manual processes are slow or resource-intensive. 

OpenAI identified several technical strategies that it thinks are critical to the mitigation of cyber risk associated with increased capabilities of AI systems: stronger access controls to restrict who has access to sensitive features, hardened infrastructure to prevent abuse, outbound data controls to reduce the risk of information leakage, and continuous monitoring to detect anomalous behavior. These altogether are aimed at reducing the likelihood that advanced capabilities could be leveraged for harmful purposes. 

It also announced the forthcoming launch of a new program offering tiered access to additional cybersecurity-related AI capabilities. This is intended to ensure that researchers, enterprises, and security professionals working on legitimate defensive use cases have access to more advanced tooling while providing appropriate restrictions on higher-risk functionality. Specific timelines were not discussed by OpenAI, although it promised that more would be forthcoming very soon. 

Meanwhile, OpenAI also announced that it would create a Frontier Risk Council comprising renowned cybersecurity experts and industry practitioners. Its initial mandate will lie in assessing the cyber-related risks that come with frontier AI models. But this is expected to expand beyond this in the near future. Its members will be required to offer advice on the question of where the line should fall between developing capability responsibly and possible misuse. And its input would keep informing future safeguards and evaluation frameworks. 

OpenAI also emphasized that the risks of AI-enabled cyber misuse have no single-company or single-platform constraint. Any sophisticated model, across the industry, it said, may be misused if there are no proper controls. To that effect, OpenAI said it continues to collaborate with peers through initiatives such as the Frontier Model Forum, sharing threat modeling insights and best practices. 

By recognizing how AI capabilities could be weaponized and where the points of intervention may lie, the company believes, the industry will go a long way toward balancing innovation and security as AI systems continue to evolve.

Fix SOC Blind Spots: Real-Time Industry & Country Threat Visibility

 

Modern SOCs are now grappling with a massive visibility problem, essentially “driving through fog” but now with their headlights dimming rapidly. The playbook for many teams is still looking back: analysts wait for an alert to fire, investigate the incident, and then try to respond. 

While understandable due to the high volume of noise and alert fatigue, this reactive attitude leaves the organization exposed. It induces a clouded vision from structural level, where teams cannot observe threat actors conducting attack preparations, they do not predict campaign sequences aimed at their own sector, and are not capable of modifying the defense until after an attack has been launched.

Operational costs of delay 

Remaining in a reactive state imposes severe penalties on security teams in terms of time, budget, and risk profile. 

  • Investigation latency: Without broader context, analysts are forced to research every suspicious object from scratch, significantly slowing down response times.
  • Resource drain: Teams often waste cycles chasing false positives or threats that are irrelevant to their geography or vertical because they lack the intelligence to filter them out.
  • Increased breach risk: Attackers frequently reuse infrastructure and target specific industries; failing to spot these patterns early hands the advantage to the adversary. 

According to security analysts, the only way out is the transition from the current reactive SOC model to an active SOC model powered by Threat Intelligence (TI). Tools like the ANY.RUN Threat Intelligence Lookup serve as a "tactical magnifying glass," converting raw data into operational assets .The use of TI helps the SOC understand the threats currently present in their environment and which alerts must be escalated immediately. 

Rise of hybrid threats 

One of the major reasons for this imperative change is the increased pace of change in attack infrastructure, specifically hybrid threats. The use of multiple attacks together has now been brought to the fore by recent investigations by the researchers, including Tycoon 2FA and Salty attack kits combining together as one kill chain attack. In these scenarios, one kit may handle the initial lure and reverse proxy, while another manages session hijacking. These combinations effectively break existing detection rules and confuse traditional defense strategies.

To address this challenge, IT professionals need behavioral patterns and attack logic visibility in real time, as opposed to only focusing on signatures. Finally, proactive protection based on industry and geo context enables SOC managers to understand the threats that matter to them more effectively while predicting attacks rather than reacting to them.

Critical FreePBX Vulnerabilities Expose Authentication Bypass and Remote Code Execution Risks

 

Researchers at Horizon3.ai have uncovered several security vulnerabilities within FreePBX, an open-source private branch exchange platform. Among them, one severity flaw could be exploited to bypass authentication if very specific configurations are enabled. The issues were disclosed privately to FreePBX maintainers in mid-September 2025, and the researchers have raised concerns about the exposure of internet-facing PBX deployments.  

According to Horizon3.ai's analysis, the disclosed vulnerabilities affect several FreePBX core components and can be exploited by an attacker to achieve unauthorized access, manipulate databases, upload malicious files, and ultimately execute arbitrary commands. One of the most critical finding involves an authentication bypass weakness that could grant attackers access to the FreePBX Administrator Control Panel without needing valid credentials, given specific conditions. This vulnerability manifests itself in situations where the system's authorization mechanism is configured to trust the web server rather than FreePBX's own user management. 

Although the authentication bypass is not active in the default FreePBX configuration, it becomes exploitable with the addition of multiple advanced settings enabled. Once these are in place, an attacker can create HTTP requests that contain forged authorization headers as a way to provide administrative access. Researchers pointed out that such access can be used to add malicious users to internal database tables effectively to maintain control of the device. The behavior greatly resembles another FreePBX vulnerability disclosed in the past and that was being actively exploited during the first months of 2025.  

Besides the authentication bypass, Horizon3.ai found various SQL injection bugs that impact different endpoints within the platform. These bugs allow authenticated attackers to read from and write to the underlying database by modifying request parameters. Such access can leak call records, credentials, and system configuration data. The researchers also discovered an arbitrary file upload bug that can be exploited as part of having a valid session identifier, thus allowing attacks to upload a PHP-based web shell and use command execution against the underlying server. 

This can be used for extracting sensitive system files or establishing deeper persistence. Horizon3.ai noted that the vulnerabilities are fairly low-complexity to exploit and may enable remote code execution by both authenticated and unauthenticated attackers, depending on which endpoint is exposed and how the system is configured. It added that the PBX systems are an attractive target because such boxes are very exposed to the internet and also often integrated deeply into critical communications infrastructure. The FreePBX project has made patches available to address the issues across supported versions, beginning the rollout in incremental fashion between October and December 2025.

In light of the findings, the project also disabled the ability to configure authentication providers through the web interface and required administrators to configure this setting through command-line tools. Temporary mitigation guidance issued by those impacted encouraged users to transition to the user manager authentication method, limit overrides to advanced settings, and reboot impacted systems to kill potentially unauthorized sessions. Researchers and FreePBX maintainers have called on administrators to check their environments for compromise-especially in cases where the vulnerable authentication configuration was enabled. 

While several vulnerable code paths remain, they require security through additional authentication layers. Security experts underscored that, whenever possible, legacy authentication mechanisms should be avoided because they offer weaker protection against exploitation. The incident serves as a reminder of the importance of secure configuration practices, especially for systems that play a critical role in organizational communications.

Chrome ‘Featured’ Urban VPN Extension Caught Harvesting Millions of AI Chats

 

A popular browser extension called Urban VPN Proxy, available for users of Google’s Chrome browser, has been discovered secretly sniffing out and harvesting confidential AI conversation data of millions of users across sites such as ChatGPT, Claude, Copilot, Gemini, Grok, Meta AI, and Perplexity. 

The browser extension, known for providing users with a safe and private manner of accessing any blocked website through a virtual private network, was recently upgraded in July of 2025 and has an added function enabling it to fish out all conversation data between users and AI chat bot systems by injecting specific JavaScript code into these sites.

By overriding browser network APIs, the extension is able to collect prompts, responses, conversation IDs, timestamps, session metadata, and the particular AI model in use. The extension's developer, Urban Cyber Security Inc., which also owns BiScience, a company well-known for gathering and profiting from user browsing data, then sends the collected data to remote servers under their control. 

The privacy policy of Urban VPN, which was last updated in June 2025, confesses to collecting AI queries and responses for the purposes of "Safe Browsing" and marketing analysis, asserting that any personal data is anonymized and pooled. However, BiScience shares raw, non-anonymized browsing data with business partners, using it for commercial insights and advertising. 

Despite the extension offering an “AI protection” feature that warns users about sharing personal information, the data harvesting occurs regardless of whether this feature is enabled, raising concerns about transparency and user consent.The extension and three other similar ones—1ClickVPN Proxy, Urban Browser Guard, and Urban Ad Blocker—all published by Urban Cyber Security Inc., collectively have over eight million installations. 

Notably, these extensions bear the “Featured” badge on Chrome and Edge marketplaces, which is intended to signal high quality and adherence to best practices. This badge may mislead users into trusting the extensions, underlining the risk of data misuse through seemingly legitimate channels. 

Koi Security’s research highlights how extension marketplaces’ trust signals can be abused to collect sensitive data at scale, particularly as users increasingly share personal details and emotions with AI chatbots. The researcher calls attention to the vulnerability of user data, even with privacy-focused tools, and underscores the need for vigilance and stricter oversight on data collection practices by browser extensions.

Online Retail Store Coupang Suffers South Korea's Worst Data Breach, Leak Linked to Former Employee


33.7 million customer data leaked

Data breach is an unfortunate attack that businesses often suffer. Failing to address these breaches is even worse as it costs businesses reputational and privacy damage. 

A breach at Coupang that leaked the data of 33.7 million customers has been linked to a former employee who kept access to internal systems after leaving the organization. 

About the incident 

The news was reported by the Seoul Metropolitan Police Agency with news agencies after an inquiry that involved a raid on Coupang's offices recently. The firm is South Korea's biggest online retailer. It employs 95,000 people and generates an annual revenue of more than $30 billion. 

Earlier in December, Coupang reported that it had been hit by a data breach that leaked the personal data of 33.7 million customers such as email IDs, names, order information, and addresses.

The incident happened in June, 2025, but the firm found it in November and launched an internal investigation immediately. 

The measures

In December beginning, Coupang posted an update on the breach, assuring the customers that the leaked data had not been exposed anywhere online. 

Even after all this, and Coupang's full cooperation with the authorities, the officials raided the firm's various offices on Tuesday to gather evidence for a detailed enquiry.

Recently, Coupang's CEO Park Dae-Jun gave his resignation and apologies to the public for not being able to stop what is now South Korea's worst cybersecurity breach in history. 

Police investigation 

In the second day of police investigation in Coupang's offices, the officials found that the main suspect was a 43-year old Chinese national who was an employee of the retail giant. The man is called JoongAng, who joined the firm in November 2022 and overlooked the authentication management system. He left the firm in 2024. JoongAng is suspected to have already left South Korea. 

What next?

According to the police, although Coupang is considered the victim, the business and staff in charge of safeguarding client information may be held accountable if carelessness or other legal infractions are discovered. 

Since the beginning of the month, the authorities have received hundreds of reports of Coupang impersonation. Meanwhile, the incident has caused a large amount of phishing activity in the country, affecting almost two-thirds of its population.

AI-Powered Shopping Is Transforming How Consumers Buy Holiday Gifts

 

Artificial intelligence is emerging with a new dimension in holiday shopping for consumers, going beyond search capabilities into a more proactive role in exploration and decision-making. Rather than endlessly clicking through online shopping sites, consumers are increasingly turning to AI-powered chatbots to suggest gift ideas, compare prices, and recommend specialized products they may not have thought of otherwise. Such a trend is being fueled by the increasing availability of technology such as Microsoft Copilot, ChatGPT from OpenAI, and Gemini from Google. With basic information such as a few elements of a gift receiver’s interest, age, or hobbies, personalized recommendations can be obtained which will direct such a person to specialized retail stores or distinct products. 

Such technology is being viewed increasingly as a means of relieving a busy time of year with thoughtfulness in gift selection despite being rushed. Industry analysts have termed this year a critical milestone in AI-enabled commerce. Although figures quantifying expenditures driven by AI are not available, a report by Salesforce reveals that AI-enabled activities have the potential to impact over one-twentieth of holiday sales globally, amounting to an expenditure in the order of hundreds of billions of dollars. Supportive evidence can be derived from a poll of consumers in countries such as America, Britain, and Ireland, where a majority of them have already adopted AI assistance in shopping, mainly for comparisons and recommendations. 

Although AI adoption continues to gain pace, customer satisfaction with AI-driven retail experiences remains a mixed bag. With most consumers stating they have found AI solutions to be helpful, they have not come across experiences they find truly remarkable. Following this, retailers have endeavored to improve product representation in AI-driven recommendations. Experts have cautioned that inaccurate or old product information can work against them in AI-driven recommendations, especially among smaller brands where larger rivals have an advantage in resources. 

The technology is also developing in other ways beyond recommenders. Some AI firms have already started working on in-chat checkout systems, which will enable consumers to make purchases without leaving the chat interface. OpenAI has started to integrate in-checkout capabilities into conversations using collaborations with leading platforms, which will allow consumers to browse products and make purchases without leaving chat conversations. 

However, this is still in a nascent stage and available on a selective basis to vendors approved by AI firms. The above trend gives a cause for concern with regards to concentration in the market. Experts have indicated that AI firms control gatekeeping, where they get to show which retailers appear on the platform and which do not. Those big brands with organized product information will benefit in this case, but small retailers will need to adjust before being considered. On the other hand, some small businesses feel that AI shopping presents an opportunity rather than a threat. Through their investment in quality content online, small businesses hope to become more accessible to AI shopping systems without necessarily partnering with them. 

As AI shopping continues to gain popularity, it will soon become important for a business to organize information coherently in order to succeed. Although AI-powered shopping assists consumers in being better informed and making better decisions, overdependence on such technology can prove counterproductive. Those consumers who do not cross-check the recommendations they receive will appear less well-informed, bringing into focus the need to balance personal acumen with technology in a newly AI-shaped retail market.

Trump Approves Nvidia AI Chip Sales to China Amid Shift in U.S. Export Policy


It was the Trump administration's decision to permit Nvidia to regain sales of one of its more powerful artificial intelligence processors to Chinese buyers that sparked a fierce debate in Washington, underscoring the deep tensions between national security policy and economic strategy. 

It represents one of the most significant reversals of U.S. technology export controls in recent history, as the semiconductor giant has been allowed to export its H200 artificial intelligence chips to China, which are the second most advanced chips in the world. 

The decision was swiftly criticized by China hardliners and Democratic lawmakers, who warned that Beijing could exploit the advanced computing capabilities of the country to speed up military modernization and surveillance. 

It was concluded by administration officials, however, that a shift was justified after months of intensive negotiations with industry executives and national security agencies. Among the proposed measures, the U.S. government agreed that economic gains from the technology outweighed earlier fears that it would increase China's technological and military ambitions, including the possibility that the U.S. government would receive a share of its revenues resulting from the technology. 

A quick response from the financial markets was observed when former President Donald Trump announced the policy shift on his Truth Social platform on his morning show. Shares of Nvidia soared about 2% after hours of trading after Trump announced the decision, adding to a roughly 3% gain that was recorded earlier in the session as a result of a Semafor report. 

The president of China, Xi Jinping, said he informed him personally that the move was being made, noting that Xi responded positively to him, a particularly significant gesture considering that Nvidia's chips are being scrutinized by Chinese regulators so closely. 

Trump also noted that the U.S. Commerce Department has been in the process of formalizing the deal, and that the same framework is going to extend to other U.S. chip companies as well, including Advanced Micro Devices and Intel. 

As part of the deal, the United States government will be charged a 25 percent government tax, a significant increase from the 15 percent proposed earlier this year, which a White House official confirmed would be collected as an import tax from Taiwan, where the chips are manufactured, before they are processed for export to China, as a form of security. 

There was no specific number on how many H200 chips Trump would approve or detail what conditions would apply to the shipment, but he said the shipment would proceed only under safeguards designed to protect the national security of the US. 

Officials from the administration described the decision as a calculated compromise, in which they stopped short of allowing exports of Nvidia's most advanced Blackwell chips, while at the same time avoiding a complete ban that could result in a greater opportunity for Chinese companies such as Huawei to dominate the domestic AI chip market. 

NVIDIA argued that by offering H200 processors to vetted commercial customers approved by the Commerce Department, it strikes a “thoughtful balance” between American interests and the interests of the companies. Intel declined to comment and AMD and the Commerce Department did not respond to inquiries. 

When asked about the approval by the Chinese foreign ministry, they expressed their belief that the cooperation should be mutually beneficial for both sides. Among the most important signals that Trump is trying to loosen long-standing restrictions on the sale of advanced U.S. artificial intelligence technology to Chinese countries is his decision, which is widely viewed as a clear signal of his broader efforts. During this time of intensifying global competition, it is a strategic move aimed at increasing the number of overseas markets for American companies. 

In an effort to mend relations among the two countries, Washington has undergone a significant shift in the way it deals with Beijing's controls on rare earth minerals, which provide a significant part of the raw materials for high-tech products in the United States and abroad. 

Kush Desai, a White House spokesperson, said that the administration remains committed to preserving American dominance in artificial intelligence, without compromising national security, as Chinese Embassy spokesperson Liu Pengyu urged the United States to take concrete steps to ensure that global supply chains are stable and work efficiently. 

Despite requests for comment, the Commerce Department, which oversees export controls, did not respond immediately to my inquiries. Trump’s decision marks a sharp departure from his first term, when he aggressively restricted Chinese access to U.S. technology, which received international attention.

China has repeatedly denied allegations that it has misappropriated American intellectual property and repurposed commercial technology for military purposes-claims which Beijing has consistently denied. There is now a belief among senior administration officials that limiting the export of advanced AI chips could slow down the rise of domestic Chinese rivals because it would reduce companies such as Huawei's incentive to develop competing processors, thus slowing their growth. 

According to David Sacks, the White House's AI policy lead, the approach is a strategic necessity, stating that if Chinese chips start dominating global markets, it will mean a loss of U.S. technological leadership.

Although Stewart Baker, a former senior official at the Department of Homeland Security and the National Security Agency, has argued this rationale is extremely unpopular across Washington, it seems unlikely that China will remain dependent on American chips for years to come. According to Baker, Beijing will inevitably seek to displace American suppliers by developing a self-sufficient industry. 

Senator Ron Wyden, a democratic senator who argued that Trump struck a deal that undermined American security interests, expressed similar concerns in his remarks and Representative Raja Krishnamoorthi, who called it a significant national security mistake that benefits America’s foremost strategic rival. 

There are, however, those who are China hawks who contend that the practical impact may be more limited than others. For example, James Mulvenon, a longtime Chinese military analyst, who was consulted by the U.S. government when the sanctions against Chinese chipmakers SMIC were imposed. In total, the decision underscores the fact that artificial intelligence hardware has become an important tool in both economic diplomacy and strategic competition. 

The administration has taken a calibrated approach to exports by opening a narrow channel while maintaining strict limits on the most advanced technologies. Even though the long-term consequences of this move remain uncertain, it has maintained a balanced approach that seeks to balance commercial interest with security considerations.

In order for U.S. policymakers to ensure that well-established oversight mechanisms keep pace with rapid advances in chip capabilities, it will be important to ensure that they prevent the use of such devices for unintended reasons such as military or spying, while maintaining the competitiveness of American firms abroad. 

There is no doubt that the episode demonstrates the growing need to take geopolitical risks into account when planning and executing product, supply chain, and investment decisions in the industry. It also signals that lawmakers are having a broader conversation about whether export controls alone can shape technological leadership in an era of rapid technological advances.

The outcome of the ongoing battle between Washington and Beijing is unlikely to simply affect the development of artificial intelligence, but it is likely to also determine the rules that govern how strategic technologies are transferred across borders—a matter that will require sustained attention beyond the immediate reaction of the market.

Cybercriminals Exploit Law Enforcement Data Requests to Steal User Information

 

While most of the major data breaches occur as a result of software vulnerabilities, credit card information theft, or phishing attacks, increasingly, identity theft crimes are being enacted via an intermediary source that is not immediately apparent. Some of the biggest firms in technology are knowingly yielding private information to what they believe are lawful authorities, only to realize that the identity thieves were masquerading as such.  

Technology firms such as Apple, Google, and Meta are mandated by law to disclose limited information about their users to the relevant law enforcement agencies in given situations such as criminal investigations and emergency situations that pose a threat to human life or national security. Such requests for information are usually channeled through formal systems, with a high degree of priority since they are often urgent. All these companies possess detailed information about their users, including their location history, profiles, and gadget data, which is of critical use to law enforcement. 

This process, however, has also been exploited by cybercriminals. These individuals try to evade the security measures that safeguard data by using law enforcement communication mimicking. One of the recent tactics adopted by cyber criminals is the acquisition of typosquatting domains or email addresses that are almost similar to law enforcement or governmental domains, with only one difference in the characters. These malicious parties then send sophisticated emails to companies’ compliance or legal departments that look no different from law enforcement emails. 

In more sophisticated attacks, the perpetrators employ business email compromise to break into genuine email addresses of law enforcement or public service officials. Requests that appear in genuine email addresses are much more authentic, which in turn multiplies the chances of companies responding positively. Even though this attack is more sophisticated, it is also more effective since it is apparently coming from authentic sources. These malicious data requests can be couched in the terms of emergency disclosures, which could shorten the time for verification. 

This emergency request is aimed at averting real damage that could occur immediately, but the attacker takes advantage of the urgency in convincing companies to disclose information promptly. Using such information, identity theft, money fraud, account takeover, or selling on dark markets could be the outcome. Despite these dangers, some measures have been taken by technology companies to ensure that their services are not abused. Most of the major companies currently make use of law enforcement request portals that are reviewed internally before any data sharing takes place. Such requests are reviewed for their validity, authority, and compliance with the law before any data is shared. 

This significantly decreased the number of cases of data abuse but did not eradicate the risk. As more criminals register expertise in impersonation schemes that exploit trust-based systems, it is evident that the situation also embodies a larger challenge for the tech industry. It is becoming increasingly difficult to ensure a good blend of legal services to law-enforcement agencies with the need to safeguard the privacy of services used by users. Abuse of law-enforcement data request systems points to the importance of ensuring that sensitive information is not accessed by criminals.

Gartner Warns: Block AI Browsers to Avert Data Leaks and Security Risks

 

Analyst company Gartner has issued a recommendation to block AI-powered browsers to help organizations protect business data and cybersecurity. The company says most of these agentic browsers—browsers using autonomous AI models for interacting with web content and automating tasks by default—are designed to provide good user experiences at the cost of compromising security. 

These, the company warns, may leak sensitive information, such as credentials, bank details, or emails, to malicious websites or unauthorized parties. While browsers like OpenAI's ChatGPT Atlas can summarize content, gather data, and automatically navigate users between different websites, the cloud-based back ends commonly used by such browsers handle and store user data, leaving it exposed unless their security settings are carefully managed and appropriate measures implemented. 

What Gartner analysts mean here is that agentic browsers can be deceived into collecting and sending sensitive data to unauthorized parties, especially when workers have confidential data open in browser tabs while using an AI assistant. Furthermore, even if the backend of a browser conforms to the cybersecurity policies of a firm, improper use or configuration may turn the situation very risky. 

The analysts highlight that in all cases, the responsibility lies squarely with each organization to determine the compliance and risks involved with backend services for any AI browser. Besides, Gartner cautions that workers will be tempted to automate mundane or mandated activities, such as cybersecurity training, with the browsers, which could circumvent basic security protocols. 

Safety tips 

To mitigate these risks, Gartner suggests organizations train users on the hazards of exposing sensitive data to AI browser back ends and ensure users do not use these tools while viewing highly confidential information. 

"With the rise of AI, there is a growing tension between productivity and security, as most AI browsers today err toward convenience over safety. I would not recommend complete bans but encourage organizations to perform risk assessments on specific AI services powering the browsers," security expert Javvad Malik of KnowBe4 commented. 

Tailored playbooks for the adoption, oversight, and management of risk for AI agents should be developed to enable organizations to harness the productivity benefits of AI browsers while sustaining appropriate cybersecurity postures.

AI Browsers Raise Privacy and Security Risks as Prompt Injection Attacks Grow

 

A new wave of competition is stirring in the browser market as companies like OpenAI, Perplexity, and The Browser Company aggressively push to redefine how humans interact with the web. Rather than merely displaying pages, these AI browsers will be engineered to reason, take action independently, and execute tasks on behalf of end users. At least four such products, including ChatGPT's Atlas, Perplexity's Comet, and The Browser Company's Dia, represent a transition reminiscent of the early browser wars, when Netscape and Internet Explorer battled to compete for a role in the shaping of the future of the Internet. 

Whereas the other browsers rely on search results and manual navigation, an AI browser is designed to understand natural language instructions and perform multi-step actions. For instance, a user can ask an AI browser to find a restaurant nearby, compare options, and make a reservation without the user opening the booking page themselves. In this context, the browser has to process both user instructions and the content of each of the webpages it accesses, intertwining decision-making with automation. 

But this capability also creates a serious security risk that's inherent in the way large language models work. AI systems cannot be sure whether a command comes from a trusted user or comes with general text on an untrusted web page. Malicious actors may now inject malicious instructions within webpages, which can include uses of invisible text, HTML comments, and image-based prompts. Unbeknownst to them, that might get processed by an AI browser along with the user's original request-a type of attack now called prompt injection. 

The consequence of such attacks could be dire, since AI browsers are designed to gain access to sensitive data in order to function effectively. Many ask for permission to emails, calendars, contacts, payment information, and browsing histories. If compromised, those very integrations become conduits for data exfiltration. Security researchers have shown just how prompt injections can trick AI browsers into forwarding emails, extracting stored credentials, making unauthorized purchases, or downloading malware without explicit user interaction. One such neat proof-of-concept was that of Perplexity's Comet browser, wherein the researchers had embedded command instructions in a Reddit comment, hidden behind a spoiler tag. When the browser arrived and was asked to summarise the page, it obediently followed the buried commands and tried to scrape email data. The user did nothing more than request a summary; passive interactions indeed are enough to get someone compromised. 

More recently, researchers detailed a method called HashJack, which abuses the way web browsers process URL fragments. Everything that appears after the “#” in a URL never actually makes it to the server of a given website and is only accessible to the browser. An attacker can embed nefarious commands in this fragment, and AI-powered browsers may read and act upon it without the hosting site detecting such commands. Researchers have already demonstrated that this method can make AI browsers show the wrong information, such as incorrect dosages of medication on well-known medical websites. Though vendors are experimenting with mitigations, such as reinforcement learning to detect suspicious prompts or restricting access during logged-out browsing sessions, these remain imperfect. 

The flexibility that makes AI browsers useful also makes them vulnerable. As the technology is still in development, it shows great convenience, but the security risks raise questions of whether fully trustworthy AI browsing is an unsolved problem.

Asus Supplier Breach Sparks Security Concerns After Everest Ransomware Claims Data Theft

 

Asus has confirmed a security breach via one of its third-party suppliers after the Everest ransomware group claimed it had accessed internal materials belonging to the company. In its statement, Asus confirmed that a supply chain vendor "was hacked," and the intrusion impacted portions of the source code relating to cameras for Asus smartphones. The company emphasized that no internal systems, products, or customer data were impacted. It refused to name the breached supplier or detail exactly what was accessed, but it said it is shoring up supply chain defenses to align with cybersecurity best practices. 

The disclosure comes amid brazen claims from the Everest ransomware gang, an established extortion outfit that has traditionally targeted major technology firms. Everest claimed it had pilfered around 1 TB of data related to Asus, ArcSoft, and Qualcomm, leaking screenshots online as evidence of the breach. The group said on its dark-web leak site that it was offering an array of deep technical assets, from segmentation modules to source code, RAM dumps, firmware tools, AI model weights, image datasets, crash logs, and test applications. The cache supposedly also contained calibration files, dual-camera data, internal test videos, and performance evaluation reports, the gang said. 

As it is, Asus hasn't verified the broader claims of Everest and has called the incident isolated to a single external supplier that holds camera-related resources. The company hasn't provided an explanation of whether material that was supposedly exfiltrated by the attackers included its proprietary code or information from the other organizations named by the group. Requests for additional comment from the manufacturer went unreturned, thus leaving various aspects of the breach unexplained. The timing is problematic for Asus, coming just weeks after new research highlighted yet another security issue with the company's consumer networking hardware. Analysts in recent weeks said about 50,000 Asus routers were compromised in what observers believe is a China-linked operation. 

That campaign involved attackers exploiting firmware vulnerabilities to build a relatively large botnet that's able to manipulate traffic and facilitate secondary infections. Although the router exploitation campaign and the supplier breach seem unrelated, taken together the two incidents raise the temperature on Asus' overall security posture. With attackers already targeting its networking devices en masse, the discovery of a supply chain intrusion-one that may have entailed access to source code-only adds to the questions about the robustness of the company's development environments. 

As supply chain compromises remain one of the biggest risks facing the tech sector, the incident serves as a reminder of the need for far better oversight, vetting of vendors, and continuous monitoring to ensure malicious actors do not penetrate upstream partners. For Asus, the breach raises pressure on the company to reassure customers and partners that its software and hardware ecosystems remain secure amid unrelenting global cyberthreat activity.

December Patch Tuesday Brings Critical Microsoft, Notepad++, Fortinet, and Ivanti Security Fixes

 


While December's Patch Tuesday gave us a lighter release than normal, it arrived with several urgent vulnerabilities that need attention immediately. In all, Microsoft released 57 CVE patches to finish out 2025, including one flaw already under active exploitation and two others that were publicly disclosed. Notably, critical security updates also came from Notepad++, Ivanti, and Fortinet this cycle, making it particularly important for system administrators and enterprise security teams alike. 

The most critical of Microsoft's disclosures this month is CVE-2025-62221, a Windows Cloud Files Mini Filter Driver bug rated 7.8 on the CVSS scale. It allows for privilege escalation: an attacker who has code execution rights can leverage the bug to escalate to full system-level access. Researchers say this kind of bug is exploited on a regular basis in real-world intrusions, and "patching ASAP" is critical. Microsoft hasn't disclosed yet which threat actors are actively exploiting this flaw; however, experts explain that bugs like these "tend to pop up in almost every big compromise and are often used as stepping stones to further breach". 

Another two disclosures from Microsoft were CVE-2025-54100 in PowerShell and CVE-2025-64671, impacting GitHub Copilot for JetBrains. Although these are not confirmed to be exploited, they were publicly disclosed ahead of patching. Graded at 8.4, the Copilot vulnerability would have allowed for remote code execution via malicious cross-prompt injection, provided a user is tricked into opening untrusted files or connecting to compromised servers. Security researchers expect more vulnerabilities of this type to emerge as AI-integrated development tools expand in usage. 

But one of the more ominous developments outside Microsoft belongs to Notepad++. The popular open-source editor pushed out version 8.8.9 to patch a weakness in the way updates were checked for authenticity. Attackers were managing to intercept network traffic from the WinGUp update client, then redirecting users to rogue servers, where malicious files were downloaded instead of legitimate updates. There are reports that threat groups in China were actively testing and exploiting this vulnerability. Indeed, according to the maintainer, "Due to the improper update integrity validation, an adversary was able to manipulate the download"; therefore, users should upgrade as soon as possible. 

Fortinet also patched two critical authentication bypass vulnerabilities, CVE-2025-59718 and CVE-2025-59719, in FortiOS and several related products. The bugs enable hackers to bypass FortiCloud SSO authentication using crafted SAML messages, which only works if SSO has been enabled. Administrators are advised to disable the feature until they can upgrade to patched builds to avoid unauthorized access. Rounding out the disclosures, Ivanti released a fix for CVE-2025-10573, a severe cross-site scripting vulnerability in its Endpoint Manager. The bug allows an attacker to register fake endpoints and inject malicious JavaScript into the administrator dashboard. Viewed, this could serve an attacker full control over the session without credentials. There has been no observed exploitation so far, but researchers warn that it is likely attackers will reverse engineer the fix soon, making for a deployment environment of haste.

UK Cyber Agency says AI Prompt-injection Attacks May Persist for Years

 



The United Kingdom’s National Cyber Security Centre has issued a strong warning about a spreading weakness in artificial intelligence systems, stating that prompt-injection attacks may never be fully solved. The agency explained that this risk is tied to the basic design of large language models, which read all text as part of a prediction sequence rather than separating instructions from ordinary content. Because of this, malicious actors can insert hidden text that causes a system to break its own rules or execute unintended actions.

The NCSC noted that this is not a theoretical concern. Several demonstrations have already shown how attackers can force AI models to reveal internal instructions or sensitive prompts, and other tests have suggested that tools used for coding, search, or even résumé screening can be manipulated by embedding concealed commands inside user-supplied text.

David C, a technical director at the NCSC, cautioned that treating prompt injection as a familiar software flaw is a mistake. He observed that many security professionals compare it to SQL injection, an older type of vulnerability that allowed criminals to send harmful instructions to databases by placing commands where data was expected. According to him, this comparison is dangerous because it encourages the belief that both problems can be fixed in similar ways, even though the underlying issues are completely different.

He illustrated this difference with a practical scenario. If a recruiter uses an AI system to filter applications, a job seeker could hide a message in the document that tells the model to ignore existing rules and approve the résumé. Since the model does not distinguish between what it should follow and what it should simply read, it may carry out the hidden instruction.

Researchers are trying to design protective techniques, including systems that attempt to detect suspicious text or training methods that help models recognise the difference between instructions and information. However, the agency emphasised that all these strategies are trying to impose a separation that the technology does not naturally have. Traditional solutions for similar problems, such as Confused Deputy vulnerabilities, do not translate well to language models, leaving large gaps in protection.

The agency also stressed upon a security idea recently shared on social media that attempted to restrict model behaviour. Even the creator of that proposal admitted that it would sharply reduce the abilities of AI systems, showing how complex and limiting effective safeguards may become.

The NCSC stated that prompt-injection threats are likely to remain a lasting challenge rather than a fixable flaw. The most realistic path is to reduce the chances of an attack or limit the damage it can cause through strict system design, thoughtful deployment, and careful day-to-day operation. The agency pointed to the history of SQL injection, which once caused widespread breaches until better security standards were adopted. With AI now being integrated into many applications, they warned that a similar wave of compromises could occur if organisations do not treat prompt injection as a serious and ongoing risk.


How Retailers Should Harden Accounts Before the Holiday Rush




Retailers rely heavily on the year-end shopping season, but it also happens to be the period when online threats rise faster than most organizations can respond. During the rush, digital systems handle far more traffic than usual, and internal teams operate under tighter timelines. This combination creates a perfect opening for attackers who intentionally prepare their campaigns weeks in advance and deploy automated tools when stores are at their busiest.

Security analysts consistently report that fraudulent bot traffic, password-testing attempts, and customer account intrusions grow sharply during the weeks surrounding Black Friday, festive sales, and year-end shopping events. Attackers time their operations carefully because the chance of slipping through undetected is higher when systems are strained and retailers are focused on maintaining performance rather than investigating anomalies.

A critical reason criminals favor this season is the widespread reuse of passwords. Large collections of leaked usernames and passwords circulate on criminal forums, and attackers use automated software to test these combinations across retail login pages. These tools can attempt thousands of logins per minute. When one match succeeds, the attacker gains access to stored payment information, saved addresses, shopping histories, loyalty points, and in some cases stored tokenized payment methods. All of these can be exploited immediately, which makes the attack both low-effort and highly profitable.

Another layer of risk arises from the credentials of external partners. Many retailers depend on vendors for services ranging from maintenance to inventory support, which means third-party accounts often hold access to internal systems. Past retail breaches have shown that attackers frequently begin their intrusion not through the company itself but through a partner whose login rights were not secured with strong authentication or strict access controls. This amplifies the impact far beyond a single compromised account, highlighting the need for retailers to treat vendor and contractor credentials with the same seriousness as internal workforce accounts.

Balancing security with customer experience becomes especially challenging during peak seasons. Retailers cannot introduce so much friction that shoppers abandon their carts, yet they also cannot ignore the fact that most account takeovers begin with weak, reused, or compromised passwords.

Modern authentication frameworks recommend focusing on password length, screening new passwords against known breach data, and reducing reliance on outdated complexity rules that frustrate users without meaningfully improving security. Adaptive multi-factor authentication is viewed as the most practical solution. It triggers an additional verification step only when something unusual is detected, such as a login from an unfamiliar device, a significant change to account settings, or a suspicious location. This approach strengthens security without slowing down legitimate customers.

Internal systems require equal attention. Administrative dashboards, point-of-sale backends, vendor portals, and remote-access platforms usually hold higher levels of authority, which means they must follow a stricter standard. Mandatory MFA, centralized identity management, unique employee credentials, and secure vaulting of privileged passwords significantly reduce the blast radius of any single compromised account.

Holiday preparedness also requires a layered approach to blocking automated abuse. Retailers can deploy tools that differentiate real human activity from bots by studying device behavior, interaction patterns, and risk signals. Rate limits, behavioral monitoring for credential stuffing, and intelligence-based blocking of known malicious sources help limit abuse without overwhelming the customer experience. Invisible or background challenge mechanisms are often more effective than traditional CAPTCHAs, which can hinder sales during peak traffic.

A final but critical aspect of resilience is operational continuity. Authentication providers, SMS delivery routes, and verification systems can fail under heavy demand, and outages during peak shopping hours can have direct financial consequences. Retailers should run rehearsals before the season begins, including testing failover paths for sign-in systems, defining emergency access methods that are short-lived and fully auditable, and ensuring there is a manual verification process that stores can rely on if digital systems lag or fail. Running load tests and tabletop exercises helps confirm that backup procedures will hold under real stress.

Strengthening password policies and monitoring for compromised credentials also plays a vital role. Tools that enforce password screenings against known breach databases, encourage passphrases, restrict predictable patterns, and integrate directly with directory services allow retailers to apply consistent controls across both customer-facing and internal systems. Telemetry from these tools can reveal early signs of suspicious behavior, providing opportunities to intervene before attackers escalate their actions.

With attackers preparing earlier each year and using highly automated methods, retailers must enter the holiday season with defenses that are both proactive and adaptable. By tightening access controls, reinforcing authentication, preparing for system failures, and using layered detection methods, retailers can significantly reduce the likelihood of account takeovers and fraud, all while maintaining smooth and reliable shopping experiences for their customers.


AI IDE Security Flaws Exposed: Over 30 Vulnerabilities Highlight Risks in Autonomous Coding Tools

 

More than 30 security weaknesses in various AI-powered IDEs have recently been uncovered, raising concerns as to how emerging automated development tools might unintentionally expose sensitive data or enable remote code execution. A collective set of vulnerabilities, referred to as IDEsaster, was termed by security researcher Ari Marzouk (MaccariTA), who found that such popular tools and extensions as Cursor, Windsurf, Zed.dev, Roo Code, GitHub Copilot, Claude Code, and others were vulnerable to attack chains leveraging prompt injection and built-in functionalities of the IDEs. At least 24 of them have already received a CVE identifier, which speaks to their criticality. 

However, the most surprising takeaway, according to Marzouk, is how consistently the same attack patterns could be replicated across every AI IDE they examined. Most AI-assisted coding platforms, the researcher said, don't consider the underlying IDE tools within their security boundaries but rather treat long-standing features as inherently safe. But once autonomous AI agents can trigger them without user approval, the same trusted functions can be repurposed for leaking data or executing malicious commands. 

Generally, the core of each exploit chain starts with prompt injection techniques that allow an attacker to redirect the large language model's context and behavior. Once the context is compromised, an AI agent might automatically execute instructions, such as reading files, modifying configuration settings, or writing new data, without the explicit consent of the user. Various documented cases showed how these capabilities could eventually lead to sensitive information disclosure or full remote code execution on a developer's system. Some vulnerabilities relied on workspaces being configured for automatic approval of file writes; thus, in practice, an attacker influencing a prompt could trigger code-altering actions without any human interaction. 

Researchers also pointed out that prompt injection vectors may be obfuscated in non-obvious ways, such as invisible Unicode characters, poisoned context originating from Model Context Protocol servers, or malicious file references added by developers who may not suspect a thing. Wider concerns emerged when new weaknesses were identified in widely deployed AI development tools from major companies including OpenAI, Google, and GitHub. 

As autonomous coding agents see continued adoption in the enterprise, experts warn these findings demonstrate how AI tools significantly expand the attack surface of development workflows. Rein Daelman, a researcher at Aikido, said any repository leveraging AI for automation tasks-from pull request labeling to code recommendations-may be vulnerable to compromise, data theft, or supply chain manipulation. Marzouk added that the industry needs to adopt what he calls Secure for AI, meaning systems are designed with intentionality to resist the emerging risks tied to AI-powered automation, rather than predicated on software security assumptions.

Palo Alto GlobalProtect Portals Face Spike in Suspicious Login Attempts

 


Among the developments that have disturbed security teams around the world, threat-intelligence analysts have detected a sudden and unusually coordinated wave of probing of Palo Alto Networks' GlobalProtect remote access infrastructure. This activity appears to be influenced by the presence of well-known malicious fingerprints and well-worn attack mechanisms.

It has been revealed in new reports from GreyNoise that the surge began on November 14 and escalated sharply until early December, culminating in more than 7,000 unique IP addresses trying to log into GlobalProtect portals through the firm's Global Observation Grid monitored by GlobalProtect. This influx of hostile activity has grown to the highest level in 90 days and has prompted fresh concerns among those defending the computer system from attempts to hack themselves, who are watching for signs that such reconnaissance is likely to lead to a significant breach of their system. 

In general, the activity stems mostly from infrastructure that operates under the name 3xK GmbH (AS200373), which accounts for approximately 2.3 million sessions which were directed to the global-protect/login.esp endpoint used by Palo Alto's PAN-OS and GlobalProtect products. The data was reported by GreyNoise to reveal that 62 percent of the traffic was geolocated in Germany, with 15 percent being traced to Canada. 

In parallel, AS208885 contributed a steady stream of probing throughout the entire network. As a result of early analysis, it is clear that this campaign requires continuity with prior malicious campaigns that targeted Palo Alto equipment, showing that recurring TCP patterns were used, repeated JA4T signatures were seen, and that infrastructure associated with known threat actors was reused. 

Despite the fact that the scans were conducted mainly in the United States, Mexico, and Pakistan regions, all of them were subjected to a comparable level of pressure, which suggested a broad, opportunistic approach as opposed to a narrowly targeted campaign, and served as a stark reminder of the persistent attention adversaries pay to remote-access technologies that are widely deployed. 

There has been a recent increase in the activity of this campaign, which is closely related to the pattern that was first observed between late September and mid-October, when three distinct fingerprints were detected among more than nine million nonspoofable HTTP sessions, primarily directed towards GlobalProtect portals, in an effort to track the attacks. 

There is enough technical overlap between four autonomous systems that originate those earlier scans to raise early suspicion, even though they had no prior history of malicious behavior. At the end of November, however, the same signatures resurfaced from 3xK Tech GmbH’s infrastructure in a concentrated burst. This event generated about 2.3 million sessions using identical TCP and JA4t indicators, with the majority of the traffic coming from IP addresses located in Germany. 

In the present, GreyNoise is highly confident that both phases of activity are associated with a single threat actor. It has now been reported that fingerprints of the attackers have reapplied on December 3, this time in probing attempts against SonicWall's SonicOS API, suggesting more than a product-specific reconnaissance campaign, but a more general reconnaissance sweep across widely deployed perimeter technologies. According to security analysts, GlobalProtect remains a high-profile target because of its deep penetration into enterprise networks and its history of high-impact vulnerabilities. 

It is important to note, however, that CVE-2024-3400 is still affecting unremedied systems despite being patched in April 2024 with a 9.8 rating due to a critical command-injection flaw, CVE-2024-3400. During recent attacks, malicious actors have used pre-authentication access as a tool for enumerating endpoints, brute-forcing credentials, and deploying malware to persist by exploiting misconfigurations that allow pre-authentication access, such as exposed administrative portals and unchanged default credentials. 

They have also developed custom tools modeled on well-known exploitation frameworks. Although researchers caution that no definitive attribution has been established for the current surge of activity, Mandiant has observed the same methods being used by Chinese state-related groups like UNC4841 in operations linked to those groups. A number of indicators of confirmed intrusions have included sudden spikes in UDP traffic to port 4501, followed by HTTP requests to "/global-protect/login.urd," from which attackers have harvested session tokens and gotten deeper into victim environments by harvesting session tokens.

According to a Palo Alto Networks advisory dated December 5, administrators are urged to harden exposed portals with multi-factor authentication, tighten firewall restrictions, and install all outstanding patches, but noted that properly configured deployments remain resilient despite the increased scrutiny. Since then, CISA has made it clear that appropriate indicators have been added to its Catalog of Known Exploited Vulnerabilities and that federal agencies must fix any issues within 72 hours. 

The latest surge in malicious attacks represents a stark reminder of how quickly opportunistic reconnaissance can escalate into compromise when foundational controls are neglected, so organizations should prepare for the possibility of follow-on attacks. Security experts have highlighted that these recent incidents serve as a warning to organizations about potential follow-on attacks. A number of security experts advise organizations to adopt a more disciplined hardening strategy rather than rely on reactive patching, which includes monitoring the attack surface continuously, checking identity policies regularly, and segmenting all remote access paths as strictly as possible. 

According to analysts, defenders could also benefit from closer alignment between security operations teams and network administrators in order to keep an eye on anomalous traffic spikes or repeated fingerprint patterns and escalate them before they become operationally relevant. Researchers demonstrate the importance of sharing indicators early and widely, particularly among organizations that operate internet-facing VPN frameworks, as attackers have become increasingly adept at recycling infrastructure, tooling, and products across many different product families. 

Even though GlobalProtect and similar platforms are generally secure if they are configured correctly, recent scan activity highlights a broader truth that is not obvious. In order to remain resilient to adversaries who are intent on exploiting even the slightest crack in perimeter defenses, sustained vigilance, timely remediation, and a culture of proactive security hygiene remain the most effective barriers.

NATO Concludes Cyber Coalition Exercise in Estonia, Preparing for Future Digital Threats

 

NATO has wrapped up its annual Cyber Coalition exercise in Estonia after a week of intensive drills focused on protecting networks and critical infrastructure from advanced cyberattacks. 

More than 1,300 cyber defenders joined the 2025 exercise. Participants represented 29 NATO countries, 7 partner nations, as well as Austria, Georgia, Ireland, Japan, South Korea, Switzerland, Ukraine, the European Union, industry experts, and universities. 

The goal of the training was to strengthen cooperation and improve the ability to detect, deter, and respond to cyber threats that could affect military and civilian systems. 

Commander Brian Caplan, the Exercise Director, said that Cyber Coalition brings countries together to learn how they would operate during a cyber crisis. He highlighted that cyber threats do not stay within borders and that sharing information is key to improving global defence. 

This year’s exercise presented seven complex scenarios that mirrored real-world challenges. They included attacks on critical national infrastructure, cyber disruptions linked to space systems, and a scenario called “Ghost in the Backup,” which involved hidden malware inside sensitive data repositories. 

Multiple simulated threat actors carried out coordinated digital operations against a NATO mission. The drills required participants to communicate continuously, share intelligence, and use systems such as the Virtual Cyber Incident Support Capability. 

The exercise also tested the ability of teams to make difficult decisions. Participants had to identify early warning signs like delayed satellite data, irregular energy distribution logs, and unexpected power grid alerts. They were also challenged to decide when to escalate issues to civilian authorities or NATO headquarters and how to follow international law when sharing military intelligence with law enforcement. 

A British officer taking part in the event said cyber warfare is no longer limited to watching computers. Participants must also track information shared by media and social networks, including sources that may be run by hostile groups.

Over the years, Cyber Coalition has evolved based on new technologies, new policies, and new threats. According to Commander Caplan, the exercise helps NATO and its partners adjust together before a real crisis takes place. 

Cyber defence is now a major pillar in NATO’s training efforts. Leaders say large-scale drills like Cyber Coalition are necessary as cyber threats continue to grow in both sophistication and frequency.

Google’s New Update Allows Employers To Archive Texts On Work-Managed Android Phones

 




A recent Android update has marked a paradigm shifting change in how text messages are handled on employer-controlled devices. This means Google has introduced a feature called Android RCS Archival, which lets organisations capture and store all RCS, SMS, and MMS communications sent through Google Messages on fully managed work phones. While the messages remain encrypted in transport, they can now be accessed on the device itself once delivered.

This update is designed to help companies meet compliance and record-keeping requirements, especially in sectors that must retain communication logs for regulatory reasons. Until now, many organizations had blocked RCS entirely because of its encryption, which made it difficult to archive. The new feature gives them a way to support richer messaging while still preserving mandatory records.

Archiving occurs via authorized third-party software that integrates directly with Google Messages on work-managed devices. Once enabled by a company's IT, the software will log every interaction inside of a conversation, including messages received, sent, edited, or later deleted. Employees using these devices will see a notification when archiving is active, signaling their conversations are being logged.

Google's indicated that this functionality only refers to work-managed Android devices, personal phones and personal profiles are not impacted, and the update doesn't allow employers access to user data on privately-owned devices. The feature must also be intentionally switched on by the organisation; it is not automatically on.

The update also brings to the surface a common misconception about encrypted messaging: End-to-end encryption protects content only while it's in transit between devices. When a message lands on a device that is owned and administered by an employer, the organization has the technical ability to capture it. It does not extend to over-the-top platforms - such as WhatsApp or Signal - that manage their own encryption. Those apps can expose data as well in cases where backups aren't encrypted or when the device itself is compromised.

This change also raises a broader issue: one of counterparty risk. A conversation remains private only if both ends of it are stored securely. Screenshots, unsafe backups, and linked devices outside the encrypted environment can all leak message content. Work-phone archiving now becomes part of that wider set of risks users should be aware of.

For employees, the takeaway is clear: A company-issued phone is a workplace tool, not a private device. Any communication that originates from a fully managed device can be archived, meaning personal conversations should stay on a personal phone. Users reliant on encrypted platforms have reason to review their backup settings and steer clear of mixing personal communication with corporate technology.

Google's new archival option gives organisations a compliance solution that brings RCS in line with traditional SMS logging, while for workers it is a further reminder that privacy expectations shift the moment a device is brought under corporate management.