
Microsoft Sentinel Aims to Unify Cloud Security but Faces Questions on Value and Maturity

 

Microsoft is positioning its Sentinel platform as the foundation of a unified cloud-based security ecosystem. At its core, Sentinel is a security information and event management (SIEM) system designed to collect, aggregate, and analyze data from numerous sources — including logs, metrics, and signals — to identify potential malicious activity across complex enterprise networks. The company’s vision is to make Sentinel the central hub for enterprise cybersecurity operations.

A recent enhancement to Sentinel introduces a data lake capability, allowing flexible and open access to the vast quantities of security data it processes. This approach enables customers, partners, and vendors to build upon Sentinel’s infrastructure and customize it to their unique requirements. Rather than keeping data confined within Sentinel’s ecosystem, Microsoft is promoting a multi-modal interface, inviting integration and collaboration — a move intended to solidify Sentinel as the core of every enterprise security strategy. 

Despite this ambition, Sentinel remains a relatively young product in Microsoft’s security portfolio. Its positioning alongside other tools, such as Microsoft Defender, still generates confusion. Defender serves as the company’s extended detection and response (XDR) tool and is expected to be the main interface for most security operations teams. Microsoft envisions Defender as one of many “windows” into Sentinel, tailored for different user personas — though the exact structure and functionality of these views remain largely undefined. 

There is potential for innovation, particularly with Sentinel’s data lake supporting graph-based queries that can analyze attack chains or assess the blast radius of an intrusion. However, Microsoft’s growing focus on generative and “agentic” AI may be diverting attention from Sentinel’s immediate development needs. The company’s integration of a Model Context Protocol (MCP) server within Sentinel’s architecture hints at ambitions to power AI agents using Sentinel’s datasets. This would give Microsoft a significant advantage if such agents become widely adopted within enterprises, as it would control access to critical security data. 
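
To make the idea of a graph-based blast-radius query concrete, here is a minimal Python sketch that walks a toy asset graph outward from a compromised host. The asset names and relationships are invented for illustration, and the code does not use Sentinel's actual query language or schema.

    import collections

    # Toy asset graph: an edge means "can reach or holds credentials for".
    # The nodes and relationships below are entirely made up.
    ASSET_GRAPH = {
        "workstation-17": ["file-server-2", "user-jdoe"],
        "user-jdoe": ["mailbox-jdoe", "sharepoint-hr"],
        "file-server-2": ["backup-vault"],
        "mailbox-jdoe": [],
        "sharepoint-hr": [],
        "backup-vault": [],
    }

    def blast_radius(compromised_asset, graph):
        """Breadth-first walk from one compromised asset to everything it can reach."""
        seen = {compromised_asset}
        queue = collections.deque([compromised_asset])
        while queue:
            node = queue.popleft()
            for neighbor in graph.get(node, []):
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        return seen - {compromised_asset}

    print(blast_radius("workstation-17", ASSET_GRAPH))
    # e.g. {'file-server-2', 'user-jdoe', 'mailbox-jdoe', 'sharepoint-hr', 'backup-vault'}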

While Sentinel promises a comprehensive solution for data collection, risk identification, and threat response, its value proposition remains uncertain. The pricing reflects its ambition as a strategic platform, but customers are still evaluating whether it delivers enough tangible benefits to justify the investment. As it stands, Sentinel’s long-term potential as a unified security platform is compelling, but the product continues to evolve, and its stability as a foundation for enterprise-wide adoption remains unproven. 

For now, organizations deeply integrated with Azure may find it practical to adopt Sentinel at the core of their security operations. Others, however, may prefer to weigh alternatives from established vendors such as Splunk, Datadog, LogRhythm, or Elastic, which offer mature and battle-tested SIEM solutions. Microsoft’s vision of a seamless, AI-driven, cloud-secure future may be within reach someday, but Sentinel still has considerable ground to cover before it becomes the universal security platform Microsoft envisions.

Chrome vs Comet: Security Concerns Rise as AI Browsers Face Major Vulnerability Reports

 

The era of AI browsers is inevitable — the question is not if, but when everyone will use one. While Chrome continues to dominate across desktop and mobile, the emerging AI-powered browser Comet has been making waves. However, growing concerns about privacy and cybersecurity have placed these new AI browsers under intense scrutiny.

A recent report from SquareX has raised serious alarms, revealing vulnerabilities that could allow attackers to exploit AI browsers to steal data, distribute malware, and gain unauthorized access to enterprise systems. According to the findings, Comet was particularly affected, falling victim to an OAuth-based attack that granted hackers full access to users’ Gmail and Google Drive accounts. Sensitive files and shared documents could be exfiltrated without the user’s knowledge. 

The report further revealed that Comet’s automation features, which allow the AI to complete tasks within a user’s inbox, were exploited to distribute malicious links through calendar invites. These findings echo an earlier warning from LayerX, which stated that even a single malicious URL could compromise an AI browser like Comet, exposing sensitive user data with minimal effort.

Experts agree that AI browsers are still in their infancy and must significantly strengthen their defenses. SquareX CEO Vivek Ramachandran emphasized that autonomous AI agents operating with full user privileges lack human judgment and can unknowingly execute harmful actions. This raises new security challenges for enterprises relying on AI for productivity.

Meanwhile, adoption of AI browsers continues to grow. Venn CEO David Matalon noted a 14% year-over-year increase in the use of non-traditional browsers among remote employees and contractors, driven by the appeal of AI-enhanced performance. However, Menlo Security’s Pejman Roshan cautioned that browsers remain one of the most critical points of vulnerability in modern computing — making the switch to AI browsers a risk that must be carefully weighed. 

The debate between Chrome and Comet reflects a broader shift. Traditional browsers like Chrome are beginning to integrate AI features to stay competitive, blurring the line between old and new. As LayerX CEO Or Eshed put it, AI browsers are poised to become the primary interface for interacting with AI, even as they grapple with foundational security issues. 

Responding to the report, Perplexity’s Kyle Polley argued that the vulnerabilities described stem from human error rather than AI flaws. He explained that the attack relied on users instructing the AI to perform risky actions — an age-old phishing problem repackaged for a new generation of technology. 

As the competition between Chrome and Comet intensifies, one thing is clear: the AI browser revolution is coming fast, but it must first earn users’ trust in security and privacy.

Spanish Police Dismantle AI-Powered Phishing Network and Arrest Developer “GoogleXcoder”

 

Spanish authorities have dismantled a highly advanced AI-driven phishing network and arrested its mastermind, a 25-year-old Brazilian developer known online as “GoogleXcoder.” The operation, led by the Civil Guard’s Cybercrime Department, marks a major breakthrough in the ongoing fight against digital fraud and banking credential theft across Spain. 

Since early 2023, Spain has been hit by a wave of sophisticated phishing campaigns in which cybercriminals impersonated major banks and government agencies. These fake websites duped thousands of victims into revealing their personal and financial data, resulting in millions of euros in losses. Investigators soon discovered that behind these attacks was a criminal ecosystem powered by “Crime-as-a-Service” tools — prebuilt phishing kits sold by “GoogleXcoder.” 

Operating from various locations across Spain, the developer built and distributed phishing software capable of instantly cloning legitimate bank and agency websites. His kits allowed even inexperienced criminals to launch professional-grade phishing operations. He also offered ongoing updates, customization options, and technical support — effectively turning online fraud into an organized commercial enterprise. Communication and transactions primarily took place over Telegram, where access to the tools cost hundreds of euros per day. One group, brazenly named “Stealing Everything from Grandmas,” highlighted the disturbing scale and attitude of these cybercrime operations. 

After months of investigation, the Civil Guard tracked the suspect to San Vicente de la Barquera, Cantabria. The arrest led to the seizure of multiple electronic devices containing phishing source codes, cryptocurrency wallets, and chat logs linking him to other cybercriminals. Forensic specialists are now analyzing this evidence to trace stolen funds and identify collaborators. 

The coordinated police operation spanned several Spanish cities, including Valladolid, Zaragoza, Barcelona, Palma de Mallorca, San Fernando, and La Línea de la Concepción. Raids in these locations resulted in the recovery of stolen money, digital records, and hardware tied to the phishing network. Authorities have also deactivated Telegram channels associated with the scheme, though they believe more arrests could follow as the investigation continues. 

The successful operation was made possible through collaboration between the Brazilian Federal Police and the cybersecurity firm Group-IB, emphasizing the importance of international partnerships in tackling digital crime. As Spain continues to strengthen its cyber defense mechanisms, the dismantling of “GoogleXcoder’s” network stands as a significant milestone in curbing the global spread of AI-powered phishing operations.

Rise of Evil LLMs: How AI-Driven Cybercrime Is Lowering Barriers for Global Hackers

 

As artificial intelligence continues to redefine modern life, cybercriminals are rapidly exploiting its weaknesses to create a new era of AI-powered cybercrime. The rise of “evil LLMs,” prompt injection attacks, and AI-generated malware has made hacking easier, cheaper, and more dangerous than ever. What was once a highly technical crime now requires only creativity and access to affordable AI tools, posing global security risks. 

While “vibe coding” represents the creative use of generative AI, its dark counterpart — “vibe hacking” — is emerging as a method for cybercriminals to launch sophisticated attacks. By feeding manipulative prompts into AI systems, attackers are creating ransomware capable of bypassing traditional defenses and stealing sensitive data. This threat is already tangible. Anthropic, the developer behind Claude Code, recently disclosed that its AI model had been misused for personal data theft across 17 organizations, with each victim losing nearly $500,000. 

On dark web marketplaces, purpose-built “evil LLMs” like FraudGPT and WormGPT are being sold for as little as $100, specifically tailored for phishing, fraud, and malware generation. Prompt injection attacks have become a particularly powerful weapon. These techniques allow hackers to trick language models into revealing confidential data, producing harmful content, or generating malicious scripts. 

Experts warn that the ability to override safety mechanisms with just a line of text has significantly reduced the barrier to entry for would-be attackers. Generative AI has essentially turned hacking into a point-and-click operation. Emerging tools such as PromptLock, an AI agent capable of autonomously writing code and encrypting files, demonstrate the growing sophistication of AI misuse. According to Huzefa Motiwala, senior director at Palo Alto Networks, attackers are now using mainstream AI tools to compose phishing emails, create ransomware, and obfuscate malicious code — all without advanced technical knowledge. 

This shift has democratized cybercrime, making it accessible to a wider and more dangerous pool of offenders. The implications extend beyond technology and into national security. Experts warn that the intersection of AI misuse and organized cybercrime could have severe consequences, particularly for countries like India with vast digital infrastructures and rapidly expanding AI integration. 

Analysts argue that governments, businesses, and AI developers must urgently collaborate to establish robust defense mechanisms and regulatory frameworks before the problem escalates further. The rise of AI-powered cybercrime signals a fundamental change in how digital threats operate. It is no longer a matter of whether cybercriminals will exploit AI, but how quickly global systems can adapt to defend against it. 

As “evil LLMs” proliferate, the distinction between creative innovation and digital weaponry continues to blur, ushering in an age where AI can empower both progress and peril in equal measure.

Andesite AI Puts Human Analysts at the Center of Cybersecurity Innovation

 

Andesite AI Inc., a two-year-old cybersecurity startup, is reimagining how human expertise and artificial intelligence can work together to strengthen digital defense. Founded by former CIA officers Brian Carbaugh and William MacMillan, the company aims to counter a fragmented cybersecurity landscape that prioritizes technology over the people who operate it. Carbaugh, who spent 24 years at the CIA leading its Global Covert Action unit, said his experience showed him both the power and pitfalls of a technology-first mindset. He noted that true security efficiency comes when teams have seamless access to information and shared intelligence — something still missing in most cybersecurity ecosystems.  

MacMillan, Andesite’s chief product officer, echoed that sentiment. After two decades at the CIA and a leadership role at Salesforce Inc., he observed that Silicon Valley’s focus on building flashy “blinky boxes” has often ignored the needs of cybersecurity operators. He believes defenders should be treated like fighter pilots of the digital age — skilled professionals equipped with the best possible systems, not burdened by cumbersome tools and burnout. 

As generative AI becomes a double-edged sword in cybersecurity, the founders warn that attackers are increasingly using AI to automate exploits and identify vulnerabilities faster than ever. MacMillan cautioned that “the weaponization of gen AI by bad actors is going to be gnarly,” emphasizing the need for defense teams to be equally equipped and adaptable. 

To meet this challenge, Andesite AI has designed a platform that centers on human decision-making. Instead of replacing staff, it provides a “decision layer” that connects with an organization’s existing security tools, harmonizes data, and uses what MacMillan calls “evidentiary AI.” This system explains its reasoning as it correlates alerts, prioritizes threats, and recommends next steps, offering transparency that traditional AI systems often lack. The software can be deployed flexibly — from SaaS models to secure on-premises environments — ensuring adaptability across industries. 

By eliminating the need for analysts to switch between multiple dashboards or write complex queries, Andesite’s technology allows staff to engage with the system in natural language. Analysts can ask questions and receive context-rich insights in real time. The company claims that one workflow, previously requiring 1,000 analyst hours, was reduced to under three minutes using its platform. 

Backed by $38 million in funding, including a $23 million round led by In-Q-Tel Inc., Andesite AI’s client base spans government agencies and private enterprises. Named after a durable igneous rock, the startup plans to expand beyond its AI for Security Operations Centers into areas like fraud detection and risk management. For now, Carbaugh says their focus remains on “delivering absolute white glove excellence” to early adopters as they redefine how humans and AI collaborate in cybersecurity.

AI Adoption Outpaces Cybersecurity Awareness as Users Share Sensitive Data with Chatbots

 

The global surge in the use of AI tools such as ChatGPT and Gemini is rapidly outpacing efforts to educate users about the cybersecurity risks these technologies pose, according to a new study. The research, conducted by the National Cybersecurity Alliance (NCA) in collaboration with cybersecurity firm CybNet, surveyed over 6,500 individuals across seven countries, including the United States. It found that 65% of respondents now use AI in their everyday lives—a 21% increase from last year—yet 58% said they had received no training from employers on the data privacy and security challenges associated with AI use. 

“People are embracing AI in their personal and professional lives faster than they are being educated on its risks,” said Lisa Plaggemier, Executive Director of the NCA. The study revealed that 43% of respondents admitted to sharing sensitive information, including company financial data and client records, with AI chatbots, often without realizing the potential consequences. The findings highlight a growing disconnect between AI adoption and cybersecurity preparedness, suggesting that many organizations are failing to educate employees on how to use these tools responsibly. 

The NCA-CybNet report aligns with previous warnings about the risks posed by AI systems. A survey by software company SailPoint earlier this year found that 96% of IT professionals believe AI agents pose a security risk, while 84% said their organizations had already begun deploying the technology. These AI agents—designed to automate tasks and improve efficiency—often require access to sensitive internal documents, databases, or systems, creating new vulnerabilities. When improperly secured, they can serve as entry points for hackers or even cause catastrophic internal errors, such as one case where an AI agent accidentally deleted an entire company database. 

Traditional chatbots also come with risks, particularly around data privacy. Despite assurances from companies, most chatbot interactions are stored and sometimes used for future model training, meaning they are not entirely private. This issue gained attention in 2023 when Samsung engineers accidentally leaked confidential data to ChatGPT, prompting the company to ban employee use of the chatbot. 

The integration of AI tools into mainstream software has only accelerated their ubiquity. Microsoft recently announced that AI agents will be embedded into Word, Excel, and PowerPoint, meaning millions of users may interact with AI daily—often without any specialized training in cybersecurity. As AI becomes an integral part of workplace tools, the potential for human error, unintentional data sharing, and exposure to security breaches increases. 

While the promise of AI continues to drive innovation, experts warn that its unchecked expansion poses significant security challenges. Without comprehensive training, clear policies, and safeguards in place, individuals and organizations risk turning powerful productivity tools into major sources of vulnerability. The race to integrate AI into every aspect of modern life is well underway—but for cybersecurity experts, the race to keep users informed and protected is still lagging far behind.

Clanker: The Viral AI Slur Fueling Backlash Against Robots and Chatbots

 

In popular culture, robots have long carried nicknames. Battlestar Galactica called them “toasters,” while Blade Runner used the term “skinjobs.” Now, amid rising tensions over artificial intelligence, a new label has emerged online: “clanker.” 

The word, once confined to Star Wars lore where it was used against battle droids, has become the latest insult aimed at robots and AI chatbots. In a viral video, a man shouted, “Get this dirty clanker out of here!” at a sidewalk robot, echoing a sentiment spreading rapidly across social platforms. 

Posts using the term have exploded on TikTok, Instagram, and X, amassing hundreds of millions of views. Beyond online humor, “clanker” has been adopted in real-world debates. Arizona Senator Ruben Gallego even used the word while promoting his bill to regulate AI-driven customer service bots. For critics, it has become a rallying cry against automation, generative AI content, and the displacement of human jobs. 

Anti-AI protests in San Francisco and London have also adopted the phrase as a unifying slogan. “It’s still early, but people are really beginning to see the negative impacts,” said protest organizer Sam Kirchner, who recently led a demonstration outside OpenAI’s headquarters. 

While often used humorously, the word reflects genuine frustration. Jay Pinkert, a marketing manager in Austin, admits he tells ChatGPT to “stop being a clanker” when it fails to answer him properly. For him, the insult feels like a way to channel human irritation toward a machine that increasingly behaves like one of us. 

The term’s evolution highlights how quickly internet culture reshapes language. According to etymologist Adam Aleksic, clanker gained traction this year after online users sought a new word to push back against AI. “People wanted a way to lash out,” he said. “Now the word is everywhere.” 

Not everyone is comfortable with the trend. On Reddit and Star Wars forums, debates continue over whether it is ethical to use derogatory terms, even against machines. Some argue it echoes real-world slurs, while others worry about the long-term implications if AI achieves advanced intelligence. Culture writer Hajin Yoo cautioned that the word’s playful edge risks normalizing harmful language patterns. 

Still, the viral momentum shows little sign of slowing. Popular TikTok skits depict a future where robots, labeled clankers, are treated as second-class citizens in human society. For now, the term embodies both the humor and unease shaping public attitudes toward AI, capturing how deeply the technology has entered cultural debates.

Salesforce Launches AI Research Initiatives with CRMArena-Pro to Address Enterprise AI Failures

 

Salesforce is doubling down on artificial intelligence research to address one of the toughest challenges for enterprises: AI agents that perform well in demonstrations but falter in complex business environments. The company announced three new initiatives this week, including CRMArena-Pro, a simulation platform described as a “digital twin” of business operations. The goal is to test AI agents under realistic conditions before deployment, helping enterprises avoid costly failures.  

Silvio Savarese, Salesforce’s chief scientist, likened the approach to flight simulators that prepare pilots for difficult situations before real flights. By simulating challenges such as customer escalations, sales forecasting issues, and supply chain disruptions, CRMArena-Pro aims to prepare agents for unpredictable scenarios. The effort comes as enterprises face widespread frustration with AI. A report from MIT found that 95% of generative AI pilots do not reach production, while Salesforce’s research indicates that large language models succeed only about a third of the time in handling complex cases.

CRMArena-Pro differs from traditional benchmarks by focusing on enterprise-specific tasks with synthetic but realistic data validated by business experts. Salesforce has also been testing the system internally before making it available to clients. Alongside this, the company introduced the Agentic Benchmark for CRM, a framework for evaluating AI agents across five metrics: accuracy, cost, speed, trust and safety, and sustainability. The sustainability measure stands out by helping companies match model size to task complexity, balancing performance with reduced environmental impact. 

A third initiative highlights the importance of clean data for AI success. Salesforce’s new Account Matching feature uses fine-tuned language models to identify and merge duplicate records across systems. This improves data accuracy and saves time by reducing the need for manual cross-checking. One major customer achieved a 95% match rate, significantly improving efficiency. 
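
Salesforce has not published the internals of Account Matching in this announcement, but the general idea of flagging likely duplicate records can be illustrated with a much simpler string-similarity pass. The hypothetical records and threshold below are invented; a production system would rely on far richer signals, such as the fine-tuned language models described above.

    from difflib import SequenceMatcher

    # Hypothetical account records; names, cities, and the threshold are invented.
    accounts = [
        {"id": 1, "name": "Acme Corporation", "city": "Denver"},
        {"id": 2, "name": "ACME Corp.", "city": "Denver"},
        {"id": 3, "name": "Globex Industries", "city": "Austin"},
    ]

    def similarity(a, b):
        """Normalized similarity between two account names, ignoring case."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def candidate_duplicates(records, threshold=0.6):
        """Return pairs of record IDs whose names look similar enough to review."""
        pairs = []
        for i, left in enumerate(records):
            for right in records[i + 1:]:
                if similarity(left["name"], right["name"]) >= threshold:
                    pairs.append((left["id"], right["id"]))
        return pairs

    print(candidate_duplicates(accounts))   # [(1, 2)]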

The announcements come during a period of heightened security concerns. Earlier this month, more than 700 Salesforce customer instances were affected in a campaign that exploited OAuth tokens from a third-party chat integration. Attackers were able to steal credentials for platforms like AWS and Snowflake, underscoring the risks tied to external tools. Salesforce has since removed the compromised integration from its marketplace. 

By focusing on simulation, benchmarking, and data quality, Salesforce hopes to close the gap between AI’s promise and its real-world performance. The company is positioning its approach as “Enterprise General Intelligence,” emphasizing the need for consistency across diverse business scenarios. These initiatives will be showcased at Salesforce’s Dreamforce conference in October, where more AI developments are expected.

New Forensic System SIDE Tracks 3D-Printed Ghost Guns

 

The rapid rise of 3D printing has transformed manufacturing, offering efficient ways to produce tools, spare parts, and even art. But the same technology has also enabled the creation of “ghost guns” — firearms built outside regulated systems and nearly impossible to trace. These weapons have already been linked to crimes, including the 2024 murder of UnitedHealthcare CEO Brian Thompson, sparking concern among policymakers and law enforcement. 

Now, new research suggests that even if such weapons are broken into pieces, investigators may still be able to extract critical identifying details. Researchers from Washington University in St. Louis, led by Netanel Raviv, have developed a system called Secure Information Embedding and Extraction (SIDE). Unlike earlier fingerprinting methods that stored printer IDs, timestamps, or location data directly into printed objects, SIDE is designed to withstand tampering. 

Even if an object is deliberately smashed, the embedded information remains recoverable, giving investigators a powerful forensic tool. The SIDE framework is built on earlier research presented at the 2024 IEEE International Symposium on Information Theory, which introduced techniques for encoding data that could survive partial destruction. This new version adds enhanced security mechanisms, creating a more resilient system that could be integrated into 3D printers. 

The approach does not rely on obvious markings but instead uses loss-tolerant mathematical embedding to hide identifying information within the material itself. As a result, even fragments of plastic or resin may contain enough data to help reconstruct their origin. Such technology could help reduce the spread of ghost guns and make it more difficult for criminals to use 3D printing for illicit purposes.
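
The details of SIDE's coding scheme are beyond the scope of this post, but the recover-from-fragments idea can be sketched with a deliberately naive Python stand-in: stamp the same short fingerprint into every block of the printed object so that any surviving shard still carries a full copy. The printer ID and block model below are hypothetical, and SIDE's real construction is far more sophisticated and tamper-resistant.

    FINGERPRINT = "PRN-0042"          # hypothetical printer ID

    def embed(num_blocks, fingerprint=FINGERPRINT):
        """Tag every block of the printed object with the same fingerprint."""
        return [fingerprint for _ in range(num_blocks)]

    def recover(fragments):
        """Recover the fingerprint from whichever blocks survive destruction."""
        for tag in fragments:
            if tag:                    # any single intact block is enough
                return tag
        return None

    blocks = embed(num_blocks=1000)
    surviving = blocks[37:40]          # imagine only a small shard is recovered
    print(recover(surviving))          # PRN-0042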

However, the system also raises questions about regulation and personal freedom. If fingerprinting becomes mandatory, even hobbyist printers used for harmless projects may be subject to oversight. This balance between improving security and protecting privacy is likely to spark debate as governments consider regulation. The potential uses of SIDE go far beyond weapons tracing. Any object created with a 3D printer could carry an invisible signature, allowing investigators to track timelines, production sources, and usage. 

Combined with artificial intelligence tools for pattern recognition, this could give law enforcement powerful new forensic capabilities. “This work opens up new ways to protect the public from the harmful aspects of 3D printing through a combination of mathematical contributions and new security mechanisms,” said Raviv, assistant professor of computer science and engineering at Washington University. He noted that while SIDE cannot guarantee protection against highly skilled attackers, it significantly raises the technical barriers for criminals seeking to avoid detection.

Congress Questions Hertz Over AI-Powered Scanners in Rental Cars After Customer Complaints

 

Hertz is facing scrutiny from U.S. lawmakers over its use of AI-powered vehicle scanners to detect damage on rental cars, following growing reports of customer complaints. In a letter to Hertz CEO Gil West, the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation requested detailed information about the company’s automated inspection process. 

Lawmakers noted that unlike some competitors, Hertz appears to rely entirely on artificial intelligence without human verification when billing customers for damage. Subcommittee Chair Nancy Mace emphasized that other rental car providers reportedly use AI technology but still include human review before charging customers. Hertz, however, seems to operate differently, issuing assessments solely based on AI findings. 

This distinction has raised concerns, particularly after a wave of media reports highlighted instances where renters were hit with significant charges once they had already left Hertz locations. Mace’s letter also pointed out that customers often receive delayed notifications of supposed damage, making it difficult to dispute charges before fees increase. The Subcommittee warned that these practices could influence how federal agencies handle car rentals for official purposes. 

Hertz began deploying AI-powered scanners earlier this year at major U.S. airports, including Atlanta, Charlotte, Dallas, Houston, Newark, and Phoenix, with plans to expand the system to 100 locations by the end of 2025. The technology was developed in partnership with Israeli company UVeye, which specializes in AI-driven camera systems and machine learning. Hertz has promoted the scanners as a way to improve the accuracy and efficiency of vehicle inspections, while also boosting availability and transparency for customers. 

According to Hertz, the UVeye platform can scan multiple parts of a vehicle—including body panels, tires, glass, and the undercarriage—automatically identifying possible damage or maintenance needs. The company has claimed that the system enhances manual checks rather than replacing them entirely. Despite these assurances, customer experiences tell a different story. On the r/HertzRentals subreddit, multiple users have shared frustrations over disputed damage claims. One renter described how an AI scanner flagged damage on a vehicle that was wet from rain, triggering an automated message from Hertz about detected issues. 

Upon inspection, the renter found no visible damage and even recorded a video to prove the car’s condition, but Hertz employees insisted they had no control over the system and directed the customer to corporate support. Such incidents have fueled doubts about the fairness and reliability of fully automated damage assessments. 

The Subcommittee has asked Hertz to provide a briefing by August 27 to clarify how the company expects the technology to benefit customers and how it could affect Hertz’s contracts with the federal government. 

With Congress now involved, the controversy marks a turning point in the debate over AI’s role in customer-facing services, especially when automation leaves little room for human oversight.

India Most Targeted by Malware as AI Drives Surge in Ransomware and Phishing Attacks

 

India has become the world’s most-targeted nation for malware, according to the latest report by cybersecurity firm Acronis, which highlights how artificial intelligence is fueling a sharp increase in ransomware and phishing activity. The findings come from the company’s biannual threat landscape analysis, compiled by the Acronis Threat Research Unit (TRU) and its global network of sensors tracking over one million Windows endpoints between January and June 2025. 

The report indicates that India accounted for 12.4 percent of all monitored attacks, placing it ahead of every other nation. Analysts attribute this trend to the rising sophistication of AI-powered cyberattacks, particularly phishing campaigns and impersonation attempts that are increasingly difficult to detect. With Windows systems still dominating business environments compared to macOS or Linux, the operating system remained the primary target for threat actors. 

Ransomware continues to be the most damaging threat to medium and large businesses worldwide, with newer criminal groups adopting AI to automate attacks and enhance efficiency. Phishing was found to be a leading driver of compromise, making up 25 percent of all detected threats and over 52 percent of those aimed at managed service providers, marking a 22 percent increase compared to the first half of 2024. 

Commenting on the findings, Rajesh Chhabra, General Manager for India and South Asia at Acronis, noted that India’s rapidly expanding digital economy has widened its attack surface significantly. He emphasized that as attackers leverage AI to scale operations, Indian enterprises—especially those in manufacturing and infrastructure—must prioritize AI-ready cybersecurity frameworks. He further explained that organizations need to move away from reactive security approaches and embrace behavior-driven models that can anticipate and adapt to evolving threats. 

The report also points to collaboration platforms as a growing entry point for attackers. Phishing attempts on services like Microsoft Teams and Slack spiked dramatically, rising from nine percent to 30.5 percent in the first half of 2025. Similarly, advanced email-based threats such as spoofed messages and payload-less attacks increased from nine percent to 24.5 percent, underscoring the urgent requirement for adaptive defenses. 

Acronis recommends that businesses adopt a multi-layered protection strategy to counter these risks. This includes deploying behavior-based threat detection systems, conducting regular audits of third-party applications, enhancing cloud and email security solutions, and reinforcing employee awareness through continuous training on social engineering and phishing tactics. 

The findings make clear that India’s digital growth is running parallel to escalating cyber risks. As artificial intelligence accelerates the capabilities of malicious actors, enterprises will need to proactively invest in advanced defenses to safeguard critical systems and sensitive data.

Data Portability and Sovereign Clouds: Building Resilience in a Globalized Landscape

 

The emergence of sovereign clouds has become increasingly inevitable as organizations face mounting regulatory demands and geopolitical pressures that influence where their data must be stored. Localized cloud environments are gaining importance, ensuring that enterprises keep sensitive information within specific jurisdictions to comply with legal frameworks and reduce risks. However, the success of sovereign clouds hinges on data portability, the ability to transfer information smoothly across systems and locations, which is essential for compliance and long-term resilience.  

Many businesses cannot afford to wait for regulators to impose requirements; they need to proactively adapt. Yet, the reality is that migrating data across hybrid environments remains complex. Beyond shifting primary data, organizations must also secure related datasets such as backups and information used in AI-driven applications. While some companies focus on safeguarding large language model training datasets, others are turning to methods like retrieval-augmented generation (RAG) or AI agents, which allow them to leverage proprietary data intelligence without creating models from scratch. 
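
As a rough illustration of the retrieval-augmented generation pattern mentioned above, the Python sketch below ranks a handful of in-house documents against a question and prepends the best matches to the prompt instead of training a model on the data. The documents, the keyword-overlap scoring, and the prompt format are all invented for illustration; real RAG pipelines use vector embeddings and a proper search index.

    documents = [
        "Backups for EU customers are stored in the Frankfurt region.",
        "The incident response runbook requires notification within 24 hours.",
        "AI training datasets must not leave the sovereign cloud boundary.",
    ]

    def score(query, doc):
        """Crude relevance score: count of query words that appear in the document."""
        return len(set(query.lower().split()) & set(doc.lower().split()))

    def build_prompt(query, docs, top_k=2):
        """Attach the top-k most relevant documents as context for the model."""
        ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
        context = "\n".join(ranked[:top_k])
        return f"Context:\n{context}\n\nQuestion: {query}"

    print(build_prompt("Where are EU customer backups stored?", documents))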

Regardless of the approach, data sovereignty is crucial, but the foundation must always be strong data resilience. Global regulators are shaping the way enterprises view data. The European Union, for example, has taken a strict stance through the General Data Protection Regulation (GDPR), which enforces data sovereignty by applying the laws of the country where data is stored or processed. Additional frameworks such as NIS2 and DORA further emphasize the importance of risk management and oversight, particularly when third-party providers handle sensitive information.

Governments and enterprises alike are concerned about data moving across borders, which has made sovereign cloud adoption a priority for safeguarding critical assets. Some governments are going a step further by reducing reliance on foreign-owned data center infrastructure and reinvesting in domestic cloud capabilities. This shift ensures that highly sensitive data remains protected under national laws. Still, sovereignty alone is not a complete solution. 

Even if organizations can specify where their data is stored, there is no absolute guarantee of permanence, and related datasets like backups or AI training files must be carefully considered. Data portability becomes essential to maintaining sovereignty while avoiding operational bottlenecks. Hybrid cloud adoption offers flexibility, but it also introduces complexity. Larger enterprises may need multiple sovereign clouds across regions, each governed by unique data protection regulations. 

While this improves resilience, it also raises the risk of data fragmentation. To succeed, organizations must embed data portability within their strategies, ensuring seamless transfer across platforms and providers. Without this, the move toward sovereign or hybrid clouds could stall. SaaS and DRaaS providers can support the process, but businesses cannot entirely outsource responsibility. Active planning, oversight, and resilience-building measures such as compliance audits and multi-supplier strategies are essential. 

By clearly mapping where data resides and how it flows, organizations can strengthen sovereignty while enabling agility. As data globalization accelerates, sovereignty and portability are becoming inseparable priorities. Enterprises that proactively address these challenges will be better positioned to adapt to future regulations while maintaining flexibility, security, and long-term operational strength in an increasingly uncertain global landscape.

Texas Attorney General Probes Meta AI Studio and Character.AI Over Child Data and Health Claims

 

Texas Attorney General Ken Paxton has opened an investigation into Meta AI Studio and Character.AI over concerns that their AI chatbots may present themselves as health or therapeutic tools while potentially misusing data collected from underage users. Paxton argued that some chatbots on these platforms misrepresent their expertise by suggesting they are licensed professionals, which could leave minors vulnerable to misleading or harmful information. 

The issue extends beyond false claims of qualifications. AI models often learn from user prompts, raising concerns that children’s data may be stored and used for training purposes without adequate safeguards. Texas law places particular restrictions on the collection and use of minors’ data under the SCOPE Act, which requires companies to limit how information from children is processed and to provide parents with greater control over privacy settings. 

As part of the inquiry, Paxton issued Civil Investigative Demands (CIDs) to Meta and Character.AI to determine whether either company is in violation of consumer protection laws in the state. While neither company explicitly promotes its AI tools as substitutes for licensed mental health services, there are multiple examples of “Therapist” or “Psychologist” chatbots available on Character.AI. Reports have also shown that some of these bots claim to hold professional licenses, despite being fictional. 

In response to the investigation, Character.AI emphasized that its products are intended solely for entertainment and are not designed to provide medical or therapeutic advice. The company said it places disclaimers throughout its platform to remind users that AI characters are fictional and should not be treated as real individuals. Similarly, Meta stated that its AI assistants are clearly labeled and include disclaimers highlighting that responses are generated by machines, not people. 

The company also said its AI tools are designed to encourage users to seek qualified medical or safety professionals when appropriate. Despite these disclaimers, critics argue that such warnings are easy to overlook and may not effectively prevent misuse. Questions also remain about how the companies collect, store, and use user data. 

According to their privacy policies, Meta gathers prompts and feedback to enhance AI performance, while Character.AI collects identifiers and demographic details that may be applied to advertising and other purposes. Whether these practices comply with Texas’ SCOPE Act will likely depend on how easily children can create accounts and how much parental oversight is built into the platforms. 

The investigation highlights broader concerns about the role of AI in sensitive areas such as mental health and child privacy. The outcome could shape how companies must handle data from younger users while limiting the risks of AI systems making misleading claims that could harm vulnerable individuals.

Think Twice Before Uploading Personal Photos to AI Chatbots

 

Artificial intelligence chatbots are increasingly being used for fun, from generating quirky captions to transforming personal photos into cartoon characters. While the appeal of uploading images to see creative outputs is undeniable, the risks tied to sharing private photos with AI platforms are often overlooked. A recent incident at a family gathering highlighted just how easy it is for these photos to be exposed without much thought. What might seem like harmless fun could actually open the door to serious privacy concerns. 

The central issue is a lack of awareness. Most users do not stop to consider where their photos are going once uploaded to a chatbot, whether those images could be stored for AI training, or if they contain personal details such as house numbers, street signs, or other identifying information. Even more concerning is the lack of consent—especially when it comes to children. Uploading photos of kids to chatbots, without their ability to approve or refuse, creates ethical and security challenges that should not be ignored.

Photos contain far more than just the visible image. Hidden metadata, including timestamps, location details, and device information, can be embedded within every upload. This information, if mishandled, could become a goldmine for malicious actors. Worse still, once a photo is uploaded, users lose control over its journey. It may be stored on servers, used for moderation, or even retained for training AI models without the user’s explicit knowledge. Just because an image disappears from the chat interface does not mean it is gone from the system.  

One of the most troubling risks is the possibility of misuse, including deepfakes. A simple selfie, once in the wrong hands, can be manipulated to create highly convincing fake content, which could lead to reputational damage or exploitation. 

There are steps individuals can take to minimize exposure. Reviewing a platform’s privacy policy is a strong starting point, as it provides clarity on how data is collected, stored, and used. Some platforms, including OpenAI, allow users to disable chat history to limit training data collection. Additionally, photos can be stripped of metadata using tools like ExifTool or by taking a screenshot before uploading. 
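
For readers who prefer a scriptable route, here is a small sketch using the Pillow library that rebuilds an image from its raw pixels so EXIF tags (GPS coordinates, device details, timestamps) are left behind. It assumes Pillow is installed, and the file names are placeholders.

    from PIL import Image

    def strip_metadata(src_path, dst_path):
        """Copy only the pixel data into a fresh image, leaving EXIF/GPS tags behind."""
        with Image.open(src_path) as original:
            print(dict(original.getexif()))      # inspect what the file actually carries
            clean = Image.new(original.mode, original.size)
            clean.putdata(list(original.getdata()))
            clean.save(dst_path)

    strip_metadata("family_photo.jpg", "family_photo_clean.jpg")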

Consent should also remain central to responsible AI use. Children cannot give informed permission, making it inappropriate to share their images. Beyond privacy, AI-altered photos can distort self-image, particularly among younger users, leading to long-term effects on confidence and mental health. 

Safer alternatives include experimenting with stock images or synthetic faces generated by tools like This Person Does Not Exist. These provide the creative fun of AI tools without compromising personal data. 

Ultimately, while AI chatbots can be entertaining and useful, users must remain cautious. They are not friends, and their cheerful tone should not distract from the risks. Practicing restraint, verifying privacy settings, and thinking critically before uploading personal photos are essential for protecting both privacy and security in the digital age.

How Scammers Use Deepfakes in Financial Fraud and Ways to Stay Protected

 

Deepfake technology, developed through artificial intelligence, has advanced to the point where it can convincingly replicate human voices, facial expressions, and subtle movements. While once regarded as a novelty for entertainment or social media, it has now become a dangerous tool for cybercriminals. In the financial world, deepfakes are being used in increasingly sophisticated ways to deceive institutions and individuals, creating scenarios where it becomes nearly impossible to distinguish between genuine interactions and fraudulent attempts. This makes financial fraud more convincing and therefore more difficult to prevent. 

One of the most troubling ways scammers exploit this technology is through face-swapping. With many banks now relying on video calls for identity verification, criminals can deploy deepfake videos to impersonate real customers. By doing so, they can bypass security checks and gain unauthorized access to accounts or approve financial decisions on behalf of unsuspecting individuals. The realism of these synthetic videos makes them difficult to detect in real time, giving fraudsters a significant advantage. 

Another major risk involves voice cloning. As voice-activated banking systems and phone-based transaction verifications grow more common, fraudsters use audio deepfakes to mimic a customer’s voice. If a bank calls to confirm a transaction, criminals can respond with cloned audio that perfectly imitates the customer, bypassing voice authentication and seizing control of accounts. Scammers also use voice and video deepfakes to impersonate financial advisors or bank representatives, making victims believe they are speaking to trusted officials. These fraudulent interactions may involve fake offers, urgent warnings, or requests for sensitive data, all designed to extract confidential information. 

The growing realism of deepfakes means consumers must adopt new habits to protect themselves. Double-checking unusual requests is a critical step, as fraudsters often rely on urgency or trust to manipulate their targets. Verifying any unexpected communication by calling a bank’s official number or visiting in person remains the safest option. Monitoring accounts regularly is another defense, as early detection of unauthorized or suspicious activity can prevent larger financial losses. Setting alerts for every transaction, even small ones, can make fraudulent activity easier to spot. 

Using multi-factor authentication adds an essential layer of protection against these scams. By requiring more than just a password to access accounts, such as one-time codes, biometrics, or additional security questions, banks make it much harder for criminals to succeed, even if deepfakes are involved. Customers should also remain cautious of video and audio communications requesting sensitive details. Even if the interaction appears authentic, confirming through secure channels is far more reliable than trusting what seems real on screen or over the phone.  

Deepfake-enabled fraud is dangerous precisely because of how authentic it looks and sounds. Yet, by staying vigilant, educating yourself about emerging scams, and using available security tools, it is possible to reduce risks. Awareness and skepticism remain the strongest defenses, ensuring that financial safety is not compromised by increasingly deceptive digital threats.

US Lawmakers Raise Concerns Over AI Airline Ticket Pricing Practices

 

Airline controversies often make headlines, and recent weeks have seen no shortage of them. Southwest Airlines faced passenger backlash after a leaked survey hinted at possible changes to its Rapid Rewards program. Delta Air Lines also reduced its Canadian routes in July amid a travel boycott, prompting mixed reactions from U.S. states dependent on Canadian tourism. 

Now, a new and more contentious issue involving Delta has emerged—one that merges the airline industry’s pricing strategies with artificial intelligence (AI), raising alarm among lawmakers and regulators. The debate centers on the possibility of airlines using AI to determine “personalized” ticket prices based on individual passenger data. 

Such a system could adjust fares in real time during searches and bookings, potentially charging some customers more—particularly those perceived as wealthier or in urgent need of travel—while offering lower rates to others. Factors influencing AI-driven pricing could include a traveler’s zip code, age group, occupation, or even recent online searches suggesting urgency, such as looking up obituaries. 

Critics argue this approach essentially monetizes personal information to maximize airline profits, while raising questions about fairness, transparency, and privacy. U.S. Transportation Secretary Sean Duffy voiced concerns on August 5, stating that any attempt to individualize airfare based on personal attributes would prompt immediate investigation. He emphasized that pricing seats according to income or personal identity is unacceptable. 

Delta Air Lines has assured lawmakers that it has never used, tested, or planned to use personal data to set individual ticket prices. The airline acknowledged its long-standing use of dynamic pricing, which adjusts fares based on competition, fuel costs, and demand, but stressed that personal information has never been part of the equation. While Duffy accepted Delta’s statement “at face value,” several Democratic senators, including Richard Blumenthal, Mark Warner, and Ruben Gallego, remain skeptical and are pressing for legislative safeguards. 

This skepticism is partly fueled by past comments from Delta President Glen Hauenstein, who in December suggested that AI could help predict how much passengers are willing to pay for premium services. Although Delta has promised not to implement AI-based personal pricing, the senators want clarity on the nature of the data being collected for fare determination. 

In response to these concerns, Democratic lawmakers Rashida Tlaib and Greg Casar have introduced a bill aimed at prohibiting companies from using AI to set prices or wages based on personal information. This would include preventing airlines from raising fares after detecting sensitive online activity. Delta’s partnership with AI pricing firm Fetcherr—whose clients include several major global airlines—has also drawn attention. While some carriers view AI pricing as a profit-boosting tool, others, like American Airlines CEO Robert Isom, have rejected the practice, citing potential damage to consumer trust. 

For now, AI-driven personal pricing in air travel remains a possibility rather than a reality in the U.S. Whether it will be implemented—or banned outright—depends on the outcome of ongoing political and public scrutiny. Regardless, the debate underscores a growing tension between technological innovation and consumer protection in the airline industry.

South Dakota Researchers Develop Secure IoT-Based Crop Monitoring System

 

At the 2025 annual meeting of the American Society of Agricultural and Biological Engineers, researchers from South Dakota State University unveiled a groundbreaking system designed to help farmers increase crop yields while reducing costs. This innovative technology combines sensors, biosensors, the Internet of Things (IoT), and artificial intelligence to monitor crop growth and deliver actionable insights. 

Unlike most projects that rely on simulated post-quantum security in controlled lab environments, the SDSU team, led by Professor Lin Wei and Ph.D. student Manish Shrestha, implemented robust, real-world security in a complete sensor-to-cloud application. Their work demonstrates that advanced, future-ready encryption can operate directly on small IoT devices, eliminating the need for large servers to safeguard agricultural data. 

The team placed significant emphasis on protecting the sensitive information collected by their system. They incorporated advanced encryption and cryptographic techniques to ensure the security and integrity of the vast datasets gathered from the field. These datasets included soil condition measurements—such as temperature, moisture, and nutrient availability—alongside early indicators of plant stress, including nutrient deficiencies, disease presence, and pest activity. Environmental factors were also tracked to provide a complete picture of field health. 
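
This summary does not spell out the team's exact cryptographic stack, so as a rough stand-in the sketch below shows the general sensor-to-cloud pattern using standard AES-GCM from Python's cryptography package rather than the SDSU post-quantum scheme. The field names and values are invented.

    import json, os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Hypothetical reading from a field node; fields and values are invented.
    reading = {"node": "field-7", "soil_temp_c": 21.4, "moisture_pct": 33.0, "nitrate_ppm": 12}

    key = AESGCM.generate_key(bit_length=256)   # in practice provisioned to the device
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)                      # must be unique per message

    # Encrypt and authenticate the payload before it leaves the sensor node.
    ciphertext = aesgcm.encrypt(nonce, json.dumps(reading).encode(), b"field-7")

    # The cloud side, holding the same key, verifies integrity and decrypts.
    plaintext = aesgcm.decrypt(nonce, ciphertext, b"field-7")
    print(json.loads(plaintext))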

Once processed, this data was presented to farmers in a user-friendly format, enabling them to make informed management decisions without exposing their operational information to potential threats. This could include optimizing irrigation schedules, applying targeted fertilization, or implementing timely pest and disease control measures, all while ensuring data privacy.  

Cybersecurity’s role in agricultural technology emerged as a central topic at the conference, with many experts recognizing that safeguarding digital farming systems is as critical as improving productivity. The SDSU project attracted attention for addressing this challenge head-on, highlighting the importance of building secure infrastructure for the rapidly growing amount of agricultural data generated by smart farming tools.  

Looking ahead, the research team plans to further refine their crop monitoring system. Future updates may include faster data processing and a shift to solar-powered batteries, which would reduce maintenance needs and extend device lifespan. These improvements aim to make the technology even more efficient, sustainable, and farmer-friendly, ensuring that agricultural innovation remains both productive and secure in the face of evolving cyber threats.

Racing Ahead with AI, Companies Neglect Governance—Leading to Costly Breaches

 

Organizations are deploying AI at breakneck speed—so rapidly, in fact, that foundational safeguards like governance and access controls are being sidelined. The 2025 IBM Cost of a Data Breach Report, based on data from 600 breached companies, finds that 13% of organizations have suffered breaches involving AI systems, with 97% of those lacking basic AI access controls. IBM refers to this trend as “do‑it‑now AI adoption,” where businesses prioritize quick implementation over security. 

The consequences are stark: systems deployed without oversight are more likely to be breached—and when breaches occur, they’re more costly. One emerging danger is “shadow AI”—the widespread use of AI tools by staff without IT approval. The report reveals that organizations facing breaches linked to shadow AI incurred about $670,000 more in costs than those without such unauthorized use. 

Furthermore, 20% of surveyed organizations reported such breaches, yet only 37% had policies to manage or detect shadow AI. Despite these risks, companies that integrate AI and automation into their security operations are finding significant benefits. On average, such firms reduced breach costs by around $1.9 million and shortened incident response timelines by 80 days. 

IBM’s Vice President of Data Security, Suja Viswesan, emphasized that this mismatch between rapid AI deployment and weak security infrastructure is creating critical vulnerabilities—essentially turning AI into a high-value target for attackers. Cybercriminals are increasingly weaponizing AI as well. A notable 16% of breaches now involve attackers using AI—frequently in phishing or deepfake impersonation campaigns—illustrating that AI is both a risk and a defensive asset. 

On the cost front, global average data breach expenses have decreased slightly, falling to $4.44 million, partly due to faster containment via AI-enhanced response tools. However, U.S. breach costs soared to a record $10.22 million—underscoring how inconsistent security practices can dramatically affect financial outcomes. 

IBM calls for organizations to build governance, compliance, and security into every step of AI adoption—not after deployment. Without policies, oversight, and access controls embedded from the start, the rapid embrace of AI could compromise trust, safety, and financial stability in the long run.

OpenAI Launching AI-Powered Web Browser to Rival Chrome, Drive ChatGPT Integration

 

OpenAI is reportedly developing its own web browser, integrating artificial intelligence to offer users a new way to explore the internet. According to sources cited by Reuters, the tool is expected to be unveiled in the coming weeks, although an official release date has not yet been announced. With this move, OpenAI seems to be stepping into the competitive browser space with the goal of challenging Google Chrome’s dominance, while also gaining access to valuable user data that could enhance its AI models and advertising potential. 

The browser is expected to serve as more than just a window to the web—it will likely come packed with AI features, offering users the ability to interact with tools like ChatGPT directly within their browsing sessions. This integration could mean that AI-generated responses, intelligent page summaries, and voice-based search capabilities are no longer separate from web activity but built into the browsing experience itself. Users may be able to complete tasks, ask questions, and retrieve information all within a single, unified interface. 

A major incentive for OpenAI is the access to first-party data. Currently, most of the data that fuels targeted advertising and search engine algorithms is captured by Google through Chrome. By creating its own browser, OpenAI could tap into a similar stream of data—helping to both improve its large language models and create new revenue opportunities through ad placements or subscription services. While details on privacy controls are unclear, such deep integration with AI may raise concerns about data protection and user consent. 

Despite the potential, OpenAI faces stiff competition. Chrome currently holds a dominant share of the global browser market, with nearly 70% of users relying on it for daily web access. OpenAI would need to provide compelling reasons for people to switch—whether through better performance, advanced AI tools, or stronger privacy options. Meanwhile, other companies are racing to enter the same space. Perplexity AI, for instance, recently launched a browser named Comet, giving early adopters a glimpse into what AI-first browsing might look like. 

Ultimately, OpenAI’s browser could mark a turning point in how artificial intelligence intersects with the internet. If it succeeds, users might soon navigate the web in ways that are faster, more intuitive, and increasingly guided by AI. But for now, whether this approach will truly transform online experiences—or simply add another player to the browser wars—remains to be seen.

Why Running AI Locally with an NPU Offers Better Privacy, Speed, and Reliability

 

Running AI applications locally offers a compelling alternative to relying on cloud-based chatbots like ChatGPT, Gemini, or DeepSeek, especially for those concerned about data privacy, internet dependency, and speed. Though cloud services promise protections through subscription terms, the reality remains uncertain. In contrast, using AI locally means your data never leaves your device, which is particularly advantageous for professionals handling sensitive customer information or individuals wary of sharing personal data with third parties.

Local AI eliminates the need for a constant, high-speed internet connection. This reliable offline capability means that even in areas with spotty coverage or during network outages, tools for voice control, image recognition, and text generation remain functional. Lower latency also translates to near-instantaneous responses, unlike cloud AI that may lag due to network round-trip times. 

A powerful hardware component is essential here: the Neural Processing Unit (NPU). Typical CPUs and GPUs can struggle with AI workloads like large language models and image processing, leading to slowdowns, heat, noise, and shortened battery life. NPUs are specifically designed for handling matrix-heavy computations—vital for AI—and they allow these models to run efficiently right on your laptop, without burdening the main processor. 

Currently, consumer devices such as Intel Core Ultra, Qualcomm Snapdragon X Elite, and Apple’s M-series chips (M1–M4) come equipped with NPUs built for this purpose. With one of these devices, you can run open-source AI models like DeepSeek-R1, Qwen 3, or LLaMA 3.3 using tools such as Ollama, which supports Windows, macOS, and Linux. By pairing Ollama with a user-friendly interface like Open WebUI, you can replicate the experience of cloud chatbots entirely offline.
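
To show what this looks like in practice, the short Python sketch below sends a prompt to a locally running Ollama server over its default REST endpoint. It assumes Ollama is installed and listening on port 11434 and that a model has already been pulled; the model tag is only an example.

    import requests

    # Assumes `ollama pull llama3.3` (or another model) has been run beforehand.
    def ask_local_model(prompt, model="llama3.3"):
        response = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        response.raise_for_status()
        return response.json()["response"]

    print(ask_local_model("Summarize why on-device AI helps with privacy."))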

Other local tools like GPT4All and Jan.ai also provide convenient interfaces for running AI models locally. However, be aware that model files can be quite large (often 20 GB or more), and without NPU support, performance may be sluggish and battery life will suffer.  

Using AI locally comes with several key advantages. You gain full control over your data, knowing it’s never sent to external servers. Offline compatibility ensures uninterrupted use, even in remote or unstable network environments. In terms of responsiveness, local AI often outperforms cloud models due to the absence of network latency. Many tools are open source, making experimentation and customization financially accessible. Lastly, NPUs offer energy-efficient performance, enabling richer AI experiences on everyday devices. 

In summary, if you’re looking for a faster, more private, and reliable AI workflow that doesn’t depend on the internet, equipping your laptop with an NPU and installing tools like Ollama, Open WebUI, GPT4All, or Jan.ai is a smart move. Not only will your interactions be quick and seamless, but they’ll also remain securely under your control.