Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label AI technology. Show all posts

Network Detection and Response Defends Against AI Powered Cyber Attacks

 

Cybersecurity teams are facing growing pressure as attackers increasingly adopt artificial intelligence to accelerate, scale, and conceal malicious activity. Modern threat actors are no longer limited to static malware or simple intrusion techniques. Instead, AI-powered campaigns are using adaptive methods that blend into legitimate system behavior, making detection significantly more difficult and forcing defenders to rethink traditional security strategies. 

Threat intelligence research from major technology firms indicates that offensive uses of AI are expanding rapidly. Security teams have observed AI tools capable of bypassing established safeguards, automatically generating malicious scripts, and evading detection mechanisms with minimal human involvement. In some cases, AI-driven orchestration has been used to coordinate multiple malware components, allowing attackers to conduct reconnaissance, identify vulnerabilities, move laterally through networks, and extract sensitive data at machine speed. These automated operations can unfold faster than manual security workflows can reasonably respond. 

What distinguishes these attacks from earlier generations is not the underlying techniques, but the scale and efficiency at which they can be executed. Credential abuse, for example, is not new, but AI enables attackers to harvest and exploit credentials across large environments with only minimal input. Research published in mid-2025 highlighted dozens of ways autonomous AI agents could be deployed against enterprise systems, effectively expanding the attack surface beyond conventional trust boundaries and security assumptions. 

This evolving threat landscape has reinforced the relevance of zero trust principles, which assume no user, device, or connection should be trusted by default. However, zero trust alone is not sufficient. Security operations teams must also be able to detect abnormal behavior regardless of where it originates, especially as AI-driven attacks increasingly rely on legitimate tools and system processes to hide in plain sight. 

As a result, organizations are placing renewed emphasis on network detection and response technologies. Unlike legacy defenses that depend heavily on known signatures or manual investigation, modern NDR platforms continuously analyze network traffic to identify suspicious patterns and anomalous behavior in real time. This visibility allows security teams to spot rapid reconnaissance activity, unusual data movement, or unexpected protocol usage that may signal AI-assisted attacks. 

NDR systems also help security teams understand broader trends across enterprise and cloud environments. By comparing current activity against historical baselines, these tools can highlight deviations that would otherwise go unnoticed, such as sudden changes in encrypted traffic levels or new outbound connections from systems that rarely communicate externally. Capturing and storing this data enables deeper forensic analysis and supports long-term threat hunting. 

Crucially, NDR platforms use automation and behavioral analysis to classify activity as benign, suspicious, or malicious, reducing alert fatigue for security analysts. Even when traffic is encrypted, network-level context can reveal patterns consistent with abuse. As attackers increasingly rely on AI to mask their movements, the ability to rapidly triage and respond becomes essential.  

By delivering comprehensive network visibility and faster response capabilities, NDR solutions help organizations reduce risk, limit the impact of breaches, and prepare for a future where AI-driven threats continue to evolve.

Amazon and Microsoft AI Investments Put India at a Crossroads

 

Major technology companies Amazon and Microsoft have announced combined investments exceeding $50 billion in India, placing artificial intelligence firmly at the center of global attention on the country’s technology ambitions. Microsoft chief executive Satya Nadella revealed the company’s largest-ever investment in Asia, committing $17.5 billion to support infrastructure development, workforce skills, and what he described as India’s transition toward an AI-first economy. Shortly after, Amazon said it plans to invest more than $35 billion in India by 2030, with part of that funding expected to strengthen its artificial intelligence capabilities in the country. 

These announcements arrive at a time of heightened debate around artificial intelligence valuations globally. As concerns about a potential AI-driven market bubble have grown, some financial institutions have taken a contrarian view on India’s position. Analysts at Jefferies described Indian equities as a “reverse AI trade,” suggesting the market could outperform if global enthusiasm for AI weakens. HSBC has echoed similar views, arguing that Indian stocks offer diversification for investors wary of overheated technology markets elsewhere. This perspective has gained traction as Indian equities have underperformed regional peers over the past year, while foreign capital has flowed heavily into AI-centric companies in South Korea and Taiwan. 

Against this backdrop, the scale of Amazon and Microsoft’s commitments offers a significant boost to confidence. However, questions remain about how competitive India truly is in the global AI race. Adoption of artificial intelligence across the country has accelerated, with increasing investment in data centers and early movement toward domestic chip manufacturing. A recent collaboration between Intel and Tata Electronics to produce semiconductors locally reflects growing momentum in strengthening AI infrastructure. 

Despite these advances, India continues to lag behind global leaders when it comes to building sovereign AI models. The government launched a national AI mission aimed at supporting researchers and startups with high-performance computing resources to develop a large multilingual model. While officials say a sovereign model supporting more than 22 languages is close to launch, global competitors such as OpenAI and China-based firms have continued to release more advanced systems in the interim. India’s public investment in this effort remains modest when compared with the far larger AI spending programs seen in countries like France and Saudi Arabia. 

Structural challenges also persist. Limited access to advanced semiconductors, fragmented data ecosystems, and insufficient long-term research investment constrain progress. Although India has a higher-than-average concentration of AI-skilled professionals, retaining top talent remains difficult as global mobility draws developers overseas. Experts argue that policy incentives will be critical if India hopes to convert its talent advantage into sustained leadership. 

Even so, international studies suggest India performs strongly relative to its economic stage. The country ranks among the top five globally for new AI startups receiving investment and contributes a significant share of global AI research publications. While funding volumes remain far below those of the United States and China, experts believe India’s advantage may lie in applying AI to real-world problems rather than competing directly in foundational model development. 

AI-driven applications addressing agriculture, education, and healthcare are already gaining traction, demonstrating the technology’s potential impact at scale. At the same time, analysts warn that artificial intelligence could disrupt India’s IT services sector, a long-standing engine of economic growth. Slowing hiring, wage pressure, and weaker stock performance indicate that this transition is already underway, underscoring both the opportunity and the risk embedded in India’s AI future.

AI-Powered Shopping Is Transforming How Consumers Buy Holiday Gifts

 

Artificial intelligence is emerging with a new dimension in holiday shopping for consumers, going beyond search capabilities into a more proactive role in exploration and decision-making. Rather than endlessly clicking through online shopping sites, consumers are increasingly turning to AI-powered chatbots to suggest gift ideas, compare prices, and recommend specialized products they may not have thought of otherwise. Such a trend is being fueled by the increasing availability of technology such as Microsoft Copilot, ChatGPT from OpenAI, and Gemini from Google. With basic information such as a few elements of a gift receiver’s interest, age, or hobbies, personalized recommendations can be obtained which will direct such a person to specialized retail stores or distinct products. 

Such technology is being viewed increasingly as a means of relieving a busy time of year with thoughtfulness in gift selection despite being rushed. Industry analysts have termed this year a critical milestone in AI-enabled commerce. Although figures quantifying expenditures driven by AI are not available, a report by Salesforce reveals that AI-enabled activities have the potential to impact over one-twentieth of holiday sales globally, amounting to an expenditure in the order of hundreds of billions of dollars. Supportive evidence can be derived from a poll of consumers in countries such as America, Britain, and Ireland, where a majority of them have already adopted AI assistance in shopping, mainly for comparisons and recommendations. 

Although AI adoption continues to gain pace, customer satisfaction with AI-driven retail experiences remains a mixed bag. With most consumers stating they have found AI solutions to be helpful, they have not come across experiences they find truly remarkable. Following this, retailers have endeavored to improve product representation in AI-driven recommendations. Experts have cautioned that inaccurate or old product information can work against them in AI-driven recommendations, especially among smaller brands where larger rivals have an advantage in resources. 

The technology is also developing in other ways beyond recommenders. Some AI firms have already started working on in-chat checkout systems, which will enable consumers to make purchases without leaving the chat interface. OpenAI has started to integrate in-checkout capabilities into conversations using collaborations with leading platforms, which will allow consumers to browse products and make purchases without leaving chat conversations. 

However, this is still in a nascent stage and available on a selective basis to vendors approved by AI firms. The above trend gives a cause for concern with regards to concentration in the market. Experts have indicated that AI firms control gatekeeping, where they get to show which retailers appear on the platform and which do not. Those big brands with organized product information will benefit in this case, but small retailers will need to adjust before being considered. On the other hand, some small businesses feel that AI shopping presents an opportunity rather than a threat. Through their investment in quality content online, small businesses hope to become more accessible to AI shopping systems without necessarily partnering with them. 

As AI shopping continues to gain popularity, it will soon become important for a business to organize information coherently in order to succeed. Although AI-powered shopping assists consumers in being better informed and making better decisions, overdependence on such technology can prove counterproductive. Those consumers who do not cross-check the recommendations they receive will appear less well-informed, bringing into focus the need to balance personal acumen with technology in a newly AI-shaped retail market.

AI Browsers Raise Privacy and Security Risks as Prompt Injection Attacks Grow

 

A new wave of competition is stirring in the browser market as companies like OpenAI, Perplexity, and The Browser Company aggressively push to redefine how humans interact with the web. Rather than merely displaying pages, these AI browsers will be engineered to reason, take action independently, and execute tasks on behalf of end users. At least four such products, including ChatGPT's Atlas, Perplexity's Comet, and The Browser Company's Dia, represent a transition reminiscent of the early browser wars, when Netscape and Internet Explorer battled to compete for a role in the shaping of the future of the Internet. 

Whereas the other browsers rely on search results and manual navigation, an AI browser is designed to understand natural language instructions and perform multi-step actions. For instance, a user can ask an AI browser to find a restaurant nearby, compare options, and make a reservation without the user opening the booking page themselves. In this context, the browser has to process both user instructions and the content of each of the webpages it accesses, intertwining decision-making with automation. 

But this capability also creates a serious security risk that's inherent in the way large language models work. AI systems cannot be sure whether a command comes from a trusted user or comes with general text on an untrusted web page. Malicious actors may now inject malicious instructions within webpages, which can include uses of invisible text, HTML comments, and image-based prompts. Unbeknownst to them, that might get processed by an AI browser along with the user's original request-a type of attack now called prompt injection. 

The consequence of such attacks could be dire, since AI browsers are designed to gain access to sensitive data in order to function effectively. Many ask for permission to emails, calendars, contacts, payment information, and browsing histories. If compromised, those very integrations become conduits for data exfiltration. Security researchers have shown just how prompt injections can trick AI browsers into forwarding emails, extracting stored credentials, making unauthorized purchases, or downloading malware without explicit user interaction. One such neat proof-of-concept was that of Perplexity's Comet browser, wherein the researchers had embedded command instructions in a Reddit comment, hidden behind a spoiler tag. When the browser arrived and was asked to summarise the page, it obediently followed the buried commands and tried to scrape email data. The user did nothing more than request a summary; passive interactions indeed are enough to get someone compromised. 

More recently, researchers detailed a method called HashJack, which abuses the way web browsers process URL fragments. Everything that appears after the “#” in a URL never actually makes it to the server of a given website and is only accessible to the browser. An attacker can embed nefarious commands in this fragment, and AI-powered browsers may read and act upon it without the hosting site detecting such commands. Researchers have already demonstrated that this method can make AI browsers show the wrong information, such as incorrect dosages of medication on well-known medical websites. Though vendors are experimenting with mitigations, such as reinforcement learning to detect suspicious prompts or restricting access during logged-out browsing sessions, these remain imperfect. 

The flexibility that makes AI browsers useful also makes them vulnerable. As the technology is still in development, it shows great convenience, but the security risks raise questions of whether fully trustworthy AI browsing is an unsolved problem.

AI IDE Security Flaws Exposed: Over 30 Vulnerabilities Highlight Risks in Autonomous Coding Tools

 

More than 30 security weaknesses in various AI-powered IDEs have recently been uncovered, raising concerns as to how emerging automated development tools might unintentionally expose sensitive data or enable remote code execution. A collective set of vulnerabilities, referred to as IDEsaster, was termed by security researcher Ari Marzouk (MaccariTA), who found that such popular tools and extensions as Cursor, Windsurf, Zed.dev, Roo Code, GitHub Copilot, Claude Code, and others were vulnerable to attack chains leveraging prompt injection and built-in functionalities of the IDEs. At least 24 of them have already received a CVE identifier, which speaks to their criticality. 

However, the most surprising takeaway, according to Marzouk, is how consistently the same attack patterns could be replicated across every AI IDE they examined. Most AI-assisted coding platforms, the researcher said, don't consider the underlying IDE tools within their security boundaries but rather treat long-standing features as inherently safe. But once autonomous AI agents can trigger them without user approval, the same trusted functions can be repurposed for leaking data or executing malicious commands. 

Generally, the core of each exploit chain starts with prompt injection techniques that allow an attacker to redirect the large language model's context and behavior. Once the context is compromised, an AI agent might automatically execute instructions, such as reading files, modifying configuration settings, or writing new data, without the explicit consent of the user. Various documented cases showed how these capabilities could eventually lead to sensitive information disclosure or full remote code execution on a developer's system. Some vulnerabilities relied on workspaces being configured for automatic approval of file writes; thus, in practice, an attacker influencing a prompt could trigger code-altering actions without any human interaction. 

Researchers also pointed out that prompt injection vectors may be obfuscated in non-obvious ways, such as invisible Unicode characters, poisoned context originating from Model Context Protocol servers, or malicious file references added by developers who may not suspect a thing. Wider concerns emerged when new weaknesses were identified in widely deployed AI development tools from major companies including OpenAI, Google, and GitHub. 

As autonomous coding agents see continued adoption in the enterprise, experts warn these findings demonstrate how AI tools significantly expand the attack surface of development workflows. Rein Daelman, a researcher at Aikido, said any repository leveraging AI for automation tasks-from pull request labeling to code recommendations-may be vulnerable to compromise, data theft, or supply chain manipulation. Marzouk added that the industry needs to adopt what he calls Secure for AI, meaning systems are designed with intentionality to resist the emerging risks tied to AI-powered automation, rather than predicated on software security assumptions.

AI-Assisted Cyberattacks Signal a Shift in Modern Threat Strategies and Defense Models

 

A new wave of cyberattacks is using large language models as an offensive tool, according to recent reporting from Anthropic and Oligo Security. Both groups said hackers used jailbroken LLMs-some capable of writing code and conducting autonomous reasoning-to conduct real-world attack campaigns. While the development is alarming, cybersecurity researchers had already anticipated such advancements. 

Earlier this year, a group at Cornell University published research predicting that cybercriminals would eventually use AI to automate hacking at scale. The evolution is consistent with a recurring theme in technology history: Tools designed for productivity or innovation inevitably become dual-use. Any number of examples-from drones to commercial aircraft to even Alfred Nobel's invention of dynamite-demonstrate how innovation often carries unintended consequences. 

The biggest implication of it all in cybersecurity is that LLMs today finally allow attackers to scale and personalize their operations simultaneously. In the past, cybercriminals were mostly forced to choose between highly targeted efforts that required manual work or broad, indiscriminate attacks with limited sophistication. 

Generative AI removes this trade-off, allowing attackers to run tailored campaigns against many targets at once, all with minimal input. In Anthropic's reported case, attackers initially provided instructions on ways to bypass its model safeguards, after which the LLM autonomously generated malicious output and conducted attacks against dozens of organizations. Similarly, Oligo Security's findings document a botnet powered by AI-generated code, first exploiting an AI infrastructure tool called Ray and then extending its activity by mining cryptocurrency and scanning for new targets. 

Traditional defenses, including risk-based prioritization models, may become less effective within this new threat landscape. These models depend upon the assumption that attackers will strategically select targets based upon value and feasibility. Automation collapses the cost of producing custom attacks such that attackers are no longer forced to prioritize. That shift erases one of the few natural advantages defenders had. 

Complicating matters further, defenders must weigh operational impact when making decisions about whether to implement a security fix. In many environments, a mitigation that disrupts legitimate activity poses its own risk and may be deferred, leaving exploitable weaknesses in place. Despite this shift, experts believe AI can also play a crucial role in defense. The future could be tied to automated mitigations capable of assessing risks and applying fixes dynamically, rather than relying on human intervention.

In some cases, AI might decide that restrictions should narrowly apply to certain users; in other cases, it may recommend immediate enforcement across the board. While the attackers have momentum today, cybersecurity experts believe the same automation that today enables large-scale attacks could strengthen defenses if it is deployed strategically.

AI Emotional Monitoring in the Workplace Raises New Privacy and Ethical Concerns

 

As artificial intelligence becomes more deeply woven into daily life, tools like ChatGPT have already demonstrated how appealing digital emotional support can be. While public discussions have largely focused on the risks of using AI for therapy—particularly for younger or vulnerable users—a quieter trend is unfolding inside workplaces. Increasingly, companies are deploying generative AI systems not just for productivity but to monitor emotional well-being and provide psychological support to employees. 

This shift accelerated after the pandemic reshaped workplaces and normalized remote communication. Now, industries including healthcare, corporate services and HR are turning to software that can identify stress, assess psychological health and respond to emotional distress. Unlike consumer-facing mental wellness apps, these systems sit inside corporate environments, raising questions about power dynamics, privacy boundaries and accountability. 

Some companies initially introduced AI-based counseling tools that mimic therapeutic conversation. Early research suggests people sometimes feel more validated by AI responses than by human interaction. One study found chatbot replies were perceived as equally or more empathetic than responses from licensed therapists. This is largely attributed to predictably supportive responses, lack of judgment and uninterrupted listening—qualities users say make it easier to discuss sensitive topics. 

Yet the workplace context changes everything. Studies show many employees hesitate to use employer-provided mental health tools due to fear that personal disclosures could resurface in performance reviews or influence job security. The concern is not irrational: some AI-powered platforms now go beyond conversation, analyzing emails, Slack messages and virtual meeting behavior to generate emotional profiles. These systems can detect tone shifts, estimate personal stress levels and map emotional trends across departments. 

One example involves workplace platforms using facial analytics to categorize emotional expression and assign wellness scores. While advocates claim this data can help organizations spot burnout and intervene early, critics warn it blurs the line between support and surveillance. The same system designed to offer empathy can simultaneously collect insights that may be used to evaluate morale, predict resignations or inform management decisions. 

Research indicates that constant monitoring can heighten stress rather than reduce it. Workers who know they are being analyzed tend to modulate behavior, speak differently or avoid emotional honesty altogether. The risk of misinterpretation is another concern: existing emotion-tracking models have demonstrated bias against marginalized groups, potentially leading to misread emotional cues and unfair conclusions. 

The growing use of AI-mediated emotional support raises broader organizational questions. If employees trust AI more than managers, what does that imply about leadership? And if AI becomes the primary emotional outlet, what happens to the human relationships workplaces rely on? 

Experts argue that AI can play a positive role, but only when paired with transparent data use policies, strict privacy protections and ethical limits. Ultimately, technology may help supplement emotional care—but it cannot replace the trust, nuance and accountability required to sustain healthy workplace relationships.

Google’s High-Stakes AI Strategy: Chips, Investment, and Concerns of a Tech Bubble

 

At Google’s headquarters, engineers work on Google’s Tensor Processing Unit, or TPU—custom silicon built specifically for AI workloads. The device appears ordinary, but its role is anything but. Google expects these chips to eventually power nearly every AI action across its platforms, making them integral to the company’s long-term technological dominance. 

Pichai has repeatedly described AI as the most transformative technology ever developed, more consequential than the internet, smartphones, or cloud computing. However, the excitement is accompanied by growing caution from economists and financial regulators. Institutions such as the Bank of England have signaled concern that the rapid rise in AI-related company valuations could lead to an abrupt correction. Even prominent industry leaders, including OpenAI CEO Sam Altman, have acknowledged that portions of the AI sector may already display speculative behavior. 

Despite those warnings, Google continues expanding its AI investment at record speed. The company now spends over $90 billion annually on AI infrastructure, tripling its investment from only a few years earlier. The strategy aligns with a larger trend: a small group of technology companies—including Microsoft, Meta, Nvidia, Apple, and Tesla—now represents roughly one-third of the total value of the U.S. S&P 500 market index. Analysts note that such concentration of financial power exceeds levels seen during the dot-com era. 

Within the secured TPU lab, the environment is loud, dominated by cooling units required to manage the extreme heat generated when chips process AI models. The TPU differs from traditional CPUs and GPUs because it is built specifically for machine learning applications, giving Google tighter efficiency and speed advantages while reducing reliance on external chip suppliers. The competition for advanced chips has intensified to the point where Silicon Valley executives openly negotiate and lobby for supply. 

Outside Google, several AI companies have seen share value fluctuations, with investors expressing caution about long-term financial sustainability. However, product development continues rapidly. Google’s recently launched Gemini 3.0 model positions the company to directly challenge OpenAI’s widely adopted ChatGPT.  

Beyond financial pressures, the AI sector must also confront resource challenges. Analysts estimate that global data centers could consume energy on the scale of an industrialized nation by 2030. Still, companies pursue ever-larger AI systems, motivated by the possibility of reaching artificial general intelligence—a milestone where machines match or exceed human reasoning ability. 

Whether the current acceleration becomes a long-term technological revolution or a temporary bubble remains unresolved. But the race to lead AI is already reshaping global markets, investment patterns, and the future of computing.

Genesis Mission Launches as US Builds Closed-Loop AI System Linking National Laboratories

 

The United States has announced a major federal scientific initiative known as the Genesis Mission, framed by the administration as a transformational leap forward in how national research will be conducted. Revealed on November 24, 2025, the mission is described by the White House as the most ambitious federal science effort since the Manhattan Project. The accompanying executive order tasks the Department of Energy with creating an interconnected “closed-loop AI experimentation platform” that will join the nation’s supercomputers, 17 national laboratories, and decades of research datasets into one integrated system. 

Federal statements position the initiative as a way to speed scientific breakthroughs in areas such as quantum engineering, fusion, advanced semiconductors, biotechnology, and critical materials. DOE has called the system “the most complex scientific instrument ever built,” describing it as a mechanism designed to double research productivity by linking experiment automation, data processing, and AI models into a single continuous pipeline. The executive order requires DOE to progress rapidly, outlining milestones across the next nine months that include cataloging datasets, mapping computing capacity, and demonstrating early functionality for at least one scientific challenge. 

The Genesis Mission will not operate solely as a federal project. DOE’s launch materials confirm that the platform is being developed alongside a broad coalition of private, academic, nonprofit, cloud, and industrial partners. The roster includes major technology companies such as Microsoft, Google, OpenAI for Government, NVIDIA, AWS, Anthropic, Dell Technologies, IBM, and HPE, alongside aerospace companies, semiconductor firms, and energy providers. Their involvement signals that Genesis is designed not only to modernize public research, but also to serve as part of a broader industrial and national capability. 

However, key details remain unclear. The administration has not provided a cost estimate, funding breakdown, or explanation of how platform access will be structured. Major news organizations have already noted that the order contains no explicit budget allocation, meaning future appropriations or resource repurposing will determine implementation. This absence has sparked debate across the AI research community, particularly among smaller labs and industry observers who worry that the platform could indirectly benefit large frontier-model developers facing high computational costs. 

The order also lays the groundwork for standardized intellectual-property agreements, data governance rules, commercialization pathways, and security requirements—signaling a tightly controlled environment rather than an open-access scientific commons. Certain community reactions highlight how the initiative could reshape debates around open-source AI, public research access, and the balance of federal and private influence in high-performance computing. While its long-term shape is not yet clear, the Genesis Mission marks a pivotal shift in how the United States intends to organize, govern, and accelerate scientific advancement using artificial intelligence and national infrastructure.

Microsoft’s Copilot Actions in Windows 11 Sparks Privacy and Security Concerns

When it comes to computer security, every decision ultimately depends on trust. Users constantly weigh whether to download unfamiliar software, share personal details online, or trust that their emails reach the intended recipient securely. Now, with Microsoft’s latest feature in Windows 11, that question extends further — should users trust an AI assistant to access their files and perform actions across their apps? 


Microsoft’s new Copilot Actions feature introduces a significant shift in how users interact with AI on their PCs. The company describes it as an AI agent capable of completing tasks by interacting with your apps and files — using reasoning, vision, and automation to click, type, and scroll just like a human. This turns the traditional digital assistant into an active AI collaborator, capable of managing documents, organizing folders, booking tickets, or sending emails once user permission is granted.  

However, giving an AI that level of control raises serious privacy and security questions. Granting access to personal files and allowing it to act on behalf of a user requires substantial confidence in Microsoft’s safeguards. The company seems aware of the potential risks and has built multiple protective layers to address them. 

The feature is currently available only in experimental mode through the Windows Insider Program for pre-release users. It remains disabled by default until manually turned on from Settings > System > AI components > Agent tools by activating the “Experimental agentic features” option. 

To maintain strict oversight, only digitally signed agents from trusted sources can integrate with Windows. This allows Microsoft to revoke or block malicious agents if needed. Furthermore, Copilot Actions operates within a separate standard account created when the feature is enabled. By default, the AI can only access known folders such as Documents, Downloads, Desktop, and Pictures, and requires explicit user permission to reach other locations. 

These interactions occur inside a controlled Agent workspace, isolated from the user’s desktop, much like Windows Sandbox. According to Dana Huang, Corporate Vice President of Windows Security, each AI agent begins with limited permissions, gains access only to explicitly approved resources, and cannot modify the system without user consent. 

Adding to this, Microsoft’s Peter Waxman confirmed in an interview that the company’s security team is actively “red-teaming” the feature — conducting simulated attacks to identify vulnerabilities. While he did not disclose test details, Microsoft noted that more granular privacy and security controls will roll out during the experimental phase before the feature’s public release. 

Even with these assurances, skepticism remains. The security research community — known for its vigilance and caution — will undoubtedly test whether Microsoft’s new agentic AI model can truly deliver on its promise of safety and transparency. As the preview continues, users and experts alike will be watching closely to see whether Copilot Actions earns their trust.

Microsoft Sentinel Aims to Unify Cloud Security but Faces Questions on Value and Maturity

 

Microsoft is positioning its Sentinel platform as the foundation of a unified cloud-based security ecosystem. At its core, Sentinel is a security information and event management (SIEM) system designed to collect, aggregate, and analyze data from numerous sources — including logs, metrics, and signals — to identify potential malicious activity across complex enterprise networks. The company’s vision is to make Sentinel the central hub for enterprise cybersecurity operations.

A recent enhancement to Sentinel introduces a data lake capability, allowing flexible and open access to the vast quantities of security data it processes. This approach enables customers, partners, and vendors to build upon Sentinel’s infrastructure and customize it to their unique requirements. Rather than keeping data confined within Sentinel’s ecosystem, Microsoft is promoting a multi-modal interface, inviting integration and collaboration — a move intended to solidify Sentinel as the core of every enterprise security strategy. 

Despite this ambition, Sentinel remains a relatively young product in Microsoft’s security portfolio. Its positioning alongside other tools, such as Microsoft Defender, still generates confusion. Defender serves as the company’s extended detection and response (XDR) tool and is expected to be the main interface for most security operations teams. Microsoft envisions Defender as one of many “windows” into Sentinel, tailored for different user personas — though the exact structure and functionality of these views remain largely undefined. 

There is potential for innovation, particularly with Sentinel’s data lake supporting graph-based queries that can analyze attack chains or assess the blast radius of an intrusion. However, Microsoft’s growing focus on generative and “agentic” AI may be diverting attention from Sentinel’s immediate development needs. The company’s integration of a Model Context Protocol (MCP) server within Sentinel’s architecture hints at ambitions to power AI agents using Sentinel’s datasets. This would give Microsoft a significant advantage if such agents become widely adopted within enterprises, as it would control access to critical security data. 

While Sentinel promises a comprehensive solution for data collection, risk identification, and threat response, its value proposition remains uncertain. The pricing reflects its ambition as a strategic platform, but customers are still evaluating whether it delivers enough tangible benefits to justify the investment. As it stands, Sentinel’s long-term potential as a unified security platform is compelling, but the product continues to evolve, and its stability as a foundation for enterprise-wide adoption remains unproven. 

For now, organizations deeply integrated with Azure may find it practical to adopt Sentinel at the core of their security operations. Others, however, may prefer to weigh alternatives from established vendors such as Splunk, Datadog, LogRhythm, or Elastic, which offer mature and battle-tested SIEM solutions. Microsoft’s vision of a seamless, AI-driven, cloud-secure future may be within reach someday, but Sentinel still has considerable ground to cover before it becomes the universal security platform Microsoft envisions.

Chrome vs Comet: Security Concerns Rise as AI Browsers Face Major Vulnerability Reports

 

The era of AI browsers is inevitable — the question is not if, but when everyone will use one. While Chrome continues to dominate across desktops and mobiles, the emerging AI-powered browser Comet has been making waves. However, growing concerns about privacy and cybersecurity have placed these new AI browsers under intense scrutiny. 

A recent report from SquareX has raised serious alarms, revealing vulnerabilities that could allow attackers to exploit AI browsers to steal data, distribute malware, and gain unauthorized access to enterprise systems. According to the findings, Comet was particularly affected, falling victim to an OAuth-based attack that granted hackers full access to users’ Gmail and Google Drive accounts. Sensitive files and shared documents could be exfiltrated without the user’s knowledge. 

The report further revealed that Comet’s automation features, which allow the AI to complete tasks within a user’s inbox, were exploited to distribute malicious links through calendar invites. These findings echo an earlier warning from LayerX, which stated that even a single malicious URL could compromise an AI browser like Comet, exposing sensitive user data with minimal effort.  
Experts agree that AI browsers are still in their infancy and must significantly strengthen their defenses. SquareX CEO Vivek Ramachandran emphasized that autonomous AI agents operating with full user privileges lack human judgment and can unknowingly execute harmful actions. This raises new security challenges for enterprises relying on AI for productivity. 

Meanwhile, adoption of AI browsers continues to grow. Venn CEO David Matalon noted a 14% year-over-year increase in the use of non-traditional browsers among remote employees and contractors, driven by the appeal of AI-enhanced performance. However, Menlo Security’s Pejman Roshan cautioned that browsers remain one of the most critical points of vulnerability in modern computing — making the switch to AI browsers a risk that must be carefully weighed. 

The debate between Chrome and Comet reflects a broader shift. Traditional browsers like Chrome are beginning to integrate AI features to stay competitive, blurring the line between old and new. As LayerX CEO Or Eshed put it, AI browsers are poised to become the primary interface for interacting with AI, even as they grapple with foundational security issues. 

Responding to the report, Perplexity’s Kyle Polley argued that the vulnerabilities described stem from human error rather than AI flaws. He explained that the attack relied on users instructing the AI to perform risky actions — an age-old phishing problem repackaged for a new generation of technology. 

As the competition between Chrome and Comet intensifies, one thing is clear: the AI browser revolution is coming fast, but it must first earn users’ trust in security and privacy.

Spanish Police Dismantle AI-Powered Phishing Network and Arrest Developer “GoogleXcoder”

 

Spanish authorities have dismantled a highly advanced AI-driven phishing network and arrested its mastermind, a 25-year-old Brazilian developer known online as “GoogleXcoder.” The operation, led by the Civil Guard’s Cybercrime Department, marks a major breakthrough in the ongoing fight against digital fraud and banking credential theft across Spain. 

Since early 2023, Spain has been hit by a wave of sophisticated phishing campaigns in which cybercriminals impersonated major banks and government agencies. These fake websites duped thousands of victims into revealing their personal and financial data, resulting in millions of euros in losses. Investigators soon discovered that behind these attacks was a criminal ecosystem powered by “Crime-as-a-Service” tools — prebuilt phishing kits sold by “GoogleXcoder.” 

Operating from various locations across Spain, the developer built and distributed phishing software capable of instantly cloning legitimate bank and agency websites. His kits allowed even inexperienced criminals to launch professional-grade phishing operations. He also offered ongoing updates, customization options, and technical support — effectively turning online fraud into an organized commercial enterprise. Communication and transactions primarily took place over Telegram, where access to the tools cost hundreds of euros per day. One group, brazenly named “Stealing Everything from Grandmas,” highlighted the disturbing scale and attitude of these cybercrime operations. 

After months of investigation, the Civil Guard tracked the suspect to San Vicente de la Barquera, Cantabria. The arrest led to the seizure of multiple electronic devices containing phishing source codes, cryptocurrency wallets, and chat logs linking him to other cybercriminals. Forensic specialists are now analyzing this evidence to trace stolen funds and identify collaborators. 

The coordinated police operation spanned several Spanish cities, including Valladolid, Zaragoza, Barcelona, Palma de Mallorca, San Fernando, and La Línea de la Concepción. Raids in these locations resulted in the recovery of stolen money, digital records, and hardware tied to the phishing network. Authorities have also deactivated Telegram channels associated with the scheme, though they believe more arrests could follow as the investigation continues. 

The successful operation was made possible through collaboration between the Brazilian Federal Police and the cybersecurity firm Group IB, emphasizing the importance of international partnerships in tackling digital crime. As Spain continues to strengthen its cyber defense mechanisms, the dismantling of “GoogleXcoder’s” network stands as a significant milestone in curbing the global spread of AI-powered phishing operations.

Rise of Evil LLMs: How AI-Driven Cybercrime Is Lowering Barriers for Global Hackers

 

As artificial intelligence continues to redefine modern life, cybercriminals are rapidly exploiting its weaknesses to create a new era of AI-powered cybercrime. The rise of “evil LLMs,” prompt injection attacks, and AI-generated malware has made hacking easier, cheaper, and more dangerous than ever. What was once a highly technical crime now requires only creativity and access to affordable AI tools, posing global security risks. 

While “vibe coding” represents the creative use of generative AI, its dark counterpart — “vibe hacking” — is emerging as a method for cybercriminals to launch sophisticated attacks. By feeding manipulative prompts into AI systems, attackers are creating ransomware capable of bypassing traditional defenses and stealing sensitive data. This threat is already tangible. Anthropic, the developer behind Claude Code, recently disclosed that its AI model had been misused for personal data theft across 17 organizations, with each victim losing nearly $500,000. 

On dark web marketplaces, purpose-built “evil LLMs” like FraudGPT and WormGPT are being sold for as little as $100, specifically tailored for phishing, fraud, and malware generation. Prompt injection attacks have become a particularly powerful weapon. These techniques allow hackers to trick language models into revealing confidential data, producing harmful content, or generating malicious scripts. 

Experts warn that the ability to override safety mechanisms with just a line of text has significantly reduced the barrier to entry for would-be attackers. Generative AI has essentially turned hacking into a point-and-click operation. Emerging tools such as PromptLock, an AI agent capable of autonomously writing code and encrypting files, demonstrate the growing sophistication of AI misuse. According to Huzefa Motiwala, senior director at Palo Alto Networks, attackers are now using mainstream AI tools to compose phishing emails, create ransomware, and obfuscate malicious code — all without advanced technical knowledge. 

This shift has democratized cybercrime, making it accessible to a wider and more dangerous pool of offenders. The implications extend beyond technology and into national security. Experts warn that the intersection of AI misuse and organized cybercrime could have severe consequences, particularly for countries like India with vast digital infrastructures and rapidly expanding AI integration. 

Analysts argue that governments, businesses, and AI developers must urgently collaborate to establish robust defense mechanisms and regulatory frameworks before the problem escalates further. The rise of AI-powered cybercrime signals a fundamental change in how digital threats operate. It is no longer a matter of whether cybercriminals will exploit AI, but how quickly global systems can adapt to defend against it. 

As “evil LLMs” proliferate, the distinction between creative innovation and digital weaponry continues to blur, ushering in an age where AI can empower both progress and peril in equal measure.

Andesite AI Puts Human Analysts at the Center of Cybersecurity Innovation

 

Andesite AI Inc., a two-year-old cybersecurity startup, is reimagining how human expertise and artificial intelligence can work together to strengthen digital defense. Founded by former CIA officers Brian Carbaugh and William MacMillan, the company aims to counter a fragmented cybersecurity landscape that prioritizes technology over the people who operate it. Carbaugh, who spent 24 years at the CIA leading its Global Covert Action unit, said his experience showed him both the power and pitfalls of a technology-first mindset. He noted that true security efficiency comes when teams have seamless access to information and shared intelligence — something still missing in most cybersecurity ecosystems.  

MacMillan, Andesite’s chief product officer, echoed that sentiment. After two decades at the CIA and a leadership role at Salesforce Inc., he observed that Silicon Valley’s focus on building flashy “blinky boxes” has often ignored the needs of cybersecurity operators. He believes defenders should be treated like fighter pilots of the digital age — skilled professionals equipped with the best possible systems, not burdened by cumbersome tools and burnout. 

As generative AI becomes a double-edged sword in cybersecurity, the founders warn that attackers are increasingly using AI to automate exploits and identify vulnerabilities faster than ever. MacMillan cautioned that “the weaponization of gen AI by bad actors is going to be gnarly,” emphasizing the need for defense teams to be equally equipped and adaptable. 

To meet this challenge, Andesite AI has designed a platform that centers on human decision-making. Instead of replacing staff, it provides a “decision layer” that connects with an organization’s existing security tools, harmonizes data, and uses what MacMillan calls “evidentiary AI.” This system explains its reasoning as it correlates alerts, prioritizes threats, and recommends next steps, offering transparency that traditional AI systems often lack. The software can be deployed flexibly — from SaaS models to secure on-premises environments — ensuring adaptability across industries. 

By eliminating the need for analysts to switch between multiple dashboards or write complex queries, Andesite’s technology allows staff to engage with the system in natural language. Analysts can ask questions and receive context-rich insights in real time. The company claims that one workflow, previously requiring 1,000 analyst hours, was reduced to under three minutes using its platform. 

Backed by $38 million in funding, including a $23 million round led by In-Q-Tel Inc., Andesite AI’s client base spans government agencies and private enterprises. Named after a durable igneous rock, the startup plans to expand beyond its AI for Security Operations Centers into areas like fraud detection and risk management. For now, Carbaugh says their focus remains on “delivering absolute white glove excellence” to early adopters as they redefine how humans and AI collaborate in cybersecurity.

AI Adoption Outpaces Cybersecurity Awareness as Users Share Sensitive Data with Chatbots

 

The global surge in the use of AI tools such as ChatGPT and Gemini is rapidly outpacing efforts to educate users about the cybersecurity risks these technologies pose, according to a new study. The research, conducted by the National Cybersecurity Alliance (NCA) in collaboration with cybersecurity firm CybNet, surveyed over 6,500 individuals across seven countries, including the United States. It found that 65% of respondents now use AI in their everyday lives—a 21% increase from last year—yet 58% said they had received no training from employers on the data privacy and security challenges associated with AI use. 

“People are embracing AI in their personal and professional lives faster than they are being educated on its risks,” said Lisa Plaggemier, Executive Director of the NCA. The study revealed that 43% of respondents admitted to sharing sensitive information, including company financial data and client records, with AI chatbots, often without realizing the potential consequences. The findings highlight a growing disconnect between AI adoption and cybersecurity preparedness, suggesting that many organizations are failing to educate employees on how to use these tools responsibly. 

The NCA-CybNet report aligns with previous warnings about the risks posed by AI systems. A survey by software company SailPoint earlier this year found that 96% of IT professionals believe AI agents pose a security risk, while 84% said their organizations had already begun deploying the technology. These AI agents—designed to automate tasks and improve efficiency—often require access to sensitive internal documents, databases, or systems, creating new vulnerabilities. When improperly secured, they can serve as entry points for hackers or even cause catastrophic internal errors, such as one case where an AI agent accidentally deleted an entire company database. 

Traditional chatbots also come with risks, particularly around data privacy. Despite assurances from companies, most chatbot interactions are stored and sometimes used for future model training, meaning they are not entirely private. This issue gained attention in 2023 when Samsung engineers accidentally leaked confidential data to ChatGPT, prompting the company to ban employee use of the chatbot. 

The integration of AI tools into mainstream software has only accelerated their ubiquity. Microsoft recently announced that AI agents will be embedded into Word, Excel, and PowerPoint, meaning millions of users may interact with AI daily—often without any specialized training in cybersecurity. As AI becomes an integral part of workplace tools, the potential for human error, unintentional data sharing, and exposure to security breaches increases. 

While the promise of AI continues to drive innovation, experts warn that its unchecked expansion poses significant security challenges. Without comprehensive training, clear policies, and safeguards in place, individuals and organizations risk turning powerful productivity tools into major sources of vulnerability. The race to integrate AI into every aspect of modern life is well underway—but for cybersecurity experts, the race to keep users informed and protected is still lagging far behind.

Clanker: The Viral AI Slur Fueling Backlash Against Robots and Chatbots

 

In popular culture, robots have long carried nicknames. Battlestar Galactica called them “toasters,” while Blade Runner used the term “skinjobs.” Now, amid rising tensions over artificial intelligence, a new label has emerged online: “clanker.” 

The word, once confined to Star Wars lore where it was used against battle droids, has become the latest insult aimed at robots and AI chatbots. In a viral video, a man shouted, “Get this dirty clanker out of here!” at a sidewalk robot, echoing a sentiment spreading rapidly across social platforms. 

Posts using the term have exploded on TikTok, Instagram, and X, amassing hundreds of millions of views. Beyond online humor, “clanker” has been adopted in real-world debates. Arizona Senator Ruben Gallego even used the word while promoting his bill to regulate AI-driven customer service bots. For critics, it has become a rallying cry against automation, generative AI content, and the displacement of human jobs. 

Anti-AI protests in San Francisco and London have also adopted the phrase as a unifying slogan. “It’s still early, but people are really beginning to see the negative impacts,” said protest organizer Sam Kirchner, who recently led a demonstration outside OpenAI’s headquarters. 

While often used humorously, the word reflects genuine frustration. Jay Pinkert, a marketing manager in Austin, admits he tells ChatGPT to “stop being a clanker” when it fails to answer him properly. For him, the insult feels like a way to channel human irritation toward a machine that increasingly behaves like one of us. 

The term’s evolution highlights how quickly internet culture reshapes language. According to etymologist Adam Aleksic, clanker gained traction this year after online users sought a new word to push back against AI. “People wanted a way to lash out,” he said. “Now the word is everywhere.” 

Not everyone is comfortable with the trend. On Reddit and Star Wars forums, debates continue over whether it is ethical to use derogatory terms, even against machines. Some argue it echoes real-world slurs, while others worry about the long-term implications if AI achieves advanced intelligence. Culture writer Hajin Yoo cautioned that the word’s playful edge risks normalizing harmful language patterns. 

Still, the viral momentum shows little sign of slowing. Popular TikTok skits depict a future where robots, labeled clankers, are treated as second-class citizens in human society. For now, the term embodies both the humor and unease shaping public attitudes toward AI, capturing how deeply the technology has entered cultural debates.

Salesforce Launches AI Research Initiatives with CRMArena-Pro to Address Enterprise AI Failures

 

Salesforce is doubling down on artificial intelligence research to address one of the toughest challenges for enterprises: AI agents that perform well in demonstrations but falter in complex business environments. The company announced three new initiatives this week, including CRMArena-Pro, a simulation platform described as a “digital twin” of business operations. The goal is to test AI agents under realistic conditions before deployment, helping enterprises avoid costly failures.  

Silvio Savarese, Salesforce’s chief scientist, likened the approach to flight simulators that prepare pilots for difficult situations before real flights. By simulating challenges such as customer escalations, sales forecasting issues, and supply chain disruptions, CRMArena-Pro aims to prepare agents for unpredictable scenarios. The effort comes as enterprises face widespread frustration with AI. A report from MIT found that 95% of generative AI pilots does not reach production, while Salesforce’s research indicates that large language models succeed only about a third of the time in handling complex cases.  

CRMArena-Pro differs from traditional benchmarks by focusing on enterprise-specific tasks with synthetic but realistic data validated by business experts. Salesforce has also been testing the system internally before making it available to clients. Alongside this, the company introduced the Agentic Benchmark for CRM, a framework for evaluating AI agents across five metrics: accuracy, cost, speed, trust and safety, and sustainability. The sustainability measure stands out by helping companies match model size to task complexity, balancing performance with reduced environmental impact. 

A third initiative highlights the importance of clean data for AI success. Salesforce’s new Account Matching feature uses fine-tuned language models to identify and merge duplicate records across systems. This improves data accuracy and saves time by reducing the need for manual cross-checking. One major customer achieved a 95% match rate, significantly improving efficiency. 

The announcements come during a period of heightened security concerns. Earlier this month, more than 700 Salesforce customer instances were affected in a campaign that exploited OAuth tokens from a third-party chat integration. Attackers were able to steal credentials for platforms like AWS and Snowflake, underscoring the risks tied to external tools. Salesforce has since removed the compromised integration from its marketplace. 

By focusing on simulation, benchmarking, and data quality, Salesforce hopes to close the gap between AI’s promise and its real-world performance. The company is positioning its approach as “Enterprise General Intelligence,” emphasizing the need for consistency across diverse business scenarios. These initiatives will be showcased at Salesforce’s Dreamforce conference in October, where more AI developments are expected.

New Forensic System Tracks Ghost Guns Made With 3D Printing Using SIDE

 

The rapid rise of 3D printing has transformed manufacturing, offering efficient ways to produce tools, spare parts, and even art. But the same technology has also enabled the creation of “ghost guns” — firearms built outside regulated systems and nearly impossible to trace. These weapons have already been linked to crimes, including the 2024 murder of UnitedHealthcare CEO Brian Thompson, sparking concern among policymakers and law enforcement. 

Now, new research suggests that even if such weapons are broken into pieces, investigators may still be able to extract critical identifying details. Researchers from Washington University in St. Louis, led by Netanel Raviv, have developed a system called Secure Information Embedding and Extraction (SIDE). Unlike earlier fingerprinting methods that stored printer IDs, timestamps, or location data directly into printed objects, SIDE is designed to withstand tampering. 

Even if an object is deliberately smashed, the embedded information remains recoverable, giving investigators a powerful forensic tool. The SIDE framework is built on earlier research presented at the 2024 IEEE International Symposium on Information Theory, which introduced techniques for encoding data that could survive partial destruction. This new version adds enhanced security mechanisms, creating a more resilient system that could be integrated into 3D printers. 

The approach does not rely on obvious markings but instead uses loss-tolerant mathematical embedding to hide identifying information within the material itself. As a result, even fragments of plastic or resin may contain enough data to help reconstruct its origin. Such technology could help reduce the spread of ghost guns and make it more difficult for criminals to use 3D printing for illicit purposes. 

However, the system also raises questions about regulation and personal freedom. If fingerprinting becomes mandatory, even hobbyist printers used for harmless projects may be subject to oversight. This balance between improving security and protecting privacy is likely to spark debate as governments consider regulation. The potential uses of SIDE go far beyond weapons tracing. Any object created with a 3D printer could carry an invisible signature, allowing investigators to track timelines, production sources, and usage. 

Combined with artificial intelligence tools for pattern recognition, this could give law enforcement powerful new forensic capabilities. “This work opens up new ways to protect the public from the harmful aspects of 3D printing through a combination of mathematical contributions and new security mechanisms,” said Raviv, assistant professor of computer science and engineering at Washington University. He noted that while SIDE cannot guarantee protection against highly skilled attackers, it significantly raises the technical barriers for criminals seeking to avoid detection.

Congress Questions Hertz Over AI-Powered Scanners in Rental Cars After Customer Complaints

 

Hertz is facing scrutiny from U.S. lawmakers over its use of AI-powered vehicle scanners to detect damage on rental cars, following growing reports of customer complaints. In a letter to Hertz CEO Gil West, the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation requested detailed information about the company’s automated inspection process. 

Lawmakers noted that unlike some competitors, Hertz appears to rely entirely on artificial intelligence without human verification when billing customers for damage. Subcommittee Chair Nancy Mace emphasized that other rental car providers reportedly use AI technology but still include human review before charging customers. Hertz, however, seems to operate differently, issuing assessments solely based on AI findings. 

This distinction has raised concerns, particularly after a wave of media reports highlighted instances where renters were hit with significant charges once they had already left Hertz locations. Mace’s letter also pointed out that customers often receive delayed notifications of supposed damage, making it difficult to dispute charges before fees increase. The Subcommittee warned that these practices could influence how federal agencies handle car rentals for official purposes. 

Hertz began deploying AI-powered scanners earlier this year at major U.S. airports, including Atlanta, Charlotte, Dallas, Houston, Newark, and Phoenix, with plans to expand the system to 100 locations by the end of 2025. The technology was developed in partnership with Israeli company UVeye, which specializes in AI-driven camera systems and machine learning. Hertz has promoted the scanners as a way to improve the accuracy and efficiency of vehicle inspections, while also boosting availability and transparency for customers. 

According to Hertz, the UVeye platform can scan multiple parts of a vehicle—including body panels, tires, glass, and the undercarriage—automatically identifying possible damage or maintenance needs. The company has claimed that the system enhances manual checks rather than replacing them entirely. Despite these assurances, customer experiences tell a different story. On the r/HertzRentals subreddit, multiple users have shared frustrations over disputed damage claims. One renter described how an AI scanner flagged damage on a vehicle that was wet from rain, triggering an automated message from Hertz about detected issues. 

Upon inspection, the renter found no visible damage and even recorded a video to prove the car’s condition, but Hertz employees insisted they had no control over the system and directed the customer to corporate support. Such incidents have fueled doubts about the fairness and reliability of fully automated damage assessments. 

The Subcommittee has asked Hertz to provide a briefing by August 27 to clarify how the company expects the technology to benefit customers and how it could affect Hertz’s contracts with the federal government. 

With Congress now involved, the controversy marks a turning point in the debate over AI’s role in customer-facing services, especially when automation leaves little room for human oversight.