Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label AI technology. Show all posts

Clanker: The Viral AI Slur Fueling Backlash Against Robots and Chatbots

 

In popular culture, robots have long carried nicknames. Battlestar Galactica called them “toasters,” while Blade Runner used the term “skinjobs.” Now, amid rising tensions over artificial intelligence, a new label has emerged online: “clanker.” 

The word, once confined to Star Wars lore where it was used against battle droids, has become the latest insult aimed at robots and AI chatbots. In a viral video, a man shouted, “Get this dirty clanker out of here!” at a sidewalk robot, echoing a sentiment spreading rapidly across social platforms. 

Posts using the term have exploded on TikTok, Instagram, and X, amassing hundreds of millions of views. Beyond online humor, “clanker” has been adopted in real-world debates. Arizona Senator Ruben Gallego even used the word while promoting his bill to regulate AI-driven customer service bots. For critics, it has become a rallying cry against automation, generative AI content, and the displacement of human jobs. 

Anti-AI protests in San Francisco and London have also adopted the phrase as a unifying slogan. “It’s still early, but people are really beginning to see the negative impacts,” said protest organizer Sam Kirchner, who recently led a demonstration outside OpenAI’s headquarters. 

While often used humorously, the word reflects genuine frustration. Jay Pinkert, a marketing manager in Austin, admits he tells ChatGPT to “stop being a clanker” when it fails to answer him properly. For him, the insult feels like a way to channel human irritation toward a machine that increasingly behaves like one of us. 

The term’s evolution highlights how quickly internet culture reshapes language. According to etymologist Adam Aleksic, clanker gained traction this year after online users sought a new word to push back against AI. “People wanted a way to lash out,” he said. “Now the word is everywhere.” 

Not everyone is comfortable with the trend. On Reddit and Star Wars forums, debates continue over whether it is ethical to use derogatory terms, even against machines. Some argue it echoes real-world slurs, while others worry about the long-term implications if AI achieves advanced intelligence. Culture writer Hajin Yoo cautioned that the word’s playful edge risks normalizing harmful language patterns. 

Still, the viral momentum shows little sign of slowing. Popular TikTok skits depict a future where robots, labeled clankers, are treated as second-class citizens in human society. For now, the term embodies both the humor and unease shaping public attitudes toward AI, capturing how deeply the technology has entered cultural debates.

Salesforce Launches AI Research Initiatives with CRMArena-Pro to Address Enterprise AI Failures

 

Salesforce is doubling down on artificial intelligence research to address one of the toughest challenges for enterprises: AI agents that perform well in demonstrations but falter in complex business environments. The company announced three new initiatives this week, including CRMArena-Pro, a simulation platform described as a “digital twin” of business operations. The goal is to test AI agents under realistic conditions before deployment, helping enterprises avoid costly failures.  

Silvio Savarese, Salesforce’s chief scientist, likened the approach to flight simulators that prepare pilots for difficult situations before real flights. By simulating challenges such as customer escalations, sales forecasting issues, and supply chain disruptions, CRMArena-Pro aims to prepare agents for unpredictable scenarios. The effort comes as enterprises face widespread frustration with AI. A report from MIT found that 95% of generative AI pilots does not reach production, while Salesforce’s research indicates that large language models succeed only about a third of the time in handling complex cases.  

CRMArena-Pro differs from traditional benchmarks by focusing on enterprise-specific tasks with synthetic but realistic data validated by business experts. Salesforce has also been testing the system internally before making it available to clients. Alongside this, the company introduced the Agentic Benchmark for CRM, a framework for evaluating AI agents across five metrics: accuracy, cost, speed, trust and safety, and sustainability. The sustainability measure stands out by helping companies match model size to task complexity, balancing performance with reduced environmental impact. 

A third initiative highlights the importance of clean data for AI success. Salesforce’s new Account Matching feature uses fine-tuned language models to identify and merge duplicate records across systems. This improves data accuracy and saves time by reducing the need for manual cross-checking. One major customer achieved a 95% match rate, significantly improving efficiency. 

The announcements come during a period of heightened security concerns. Earlier this month, more than 700 Salesforce customer instances were affected in a campaign that exploited OAuth tokens from a third-party chat integration. Attackers were able to steal credentials for platforms like AWS and Snowflake, underscoring the risks tied to external tools. Salesforce has since removed the compromised integration from its marketplace. 

By focusing on simulation, benchmarking, and data quality, Salesforce hopes to close the gap between AI’s promise and its real-world performance. The company is positioning its approach as “Enterprise General Intelligence,” emphasizing the need for consistency across diverse business scenarios. These initiatives will be showcased at Salesforce’s Dreamforce conference in October, where more AI developments are expected.

New Forensic System Tracks Ghost Guns Made With 3D Printing Using SIDE

 

The rapid rise of 3D printing has transformed manufacturing, offering efficient ways to produce tools, spare parts, and even art. But the same technology has also enabled the creation of “ghost guns” — firearms built outside regulated systems and nearly impossible to trace. These weapons have already been linked to crimes, including the 2024 murder of UnitedHealthcare CEO Brian Thompson, sparking concern among policymakers and law enforcement. 

Now, new research suggests that even if such weapons are broken into pieces, investigators may still be able to extract critical identifying details. Researchers from Washington University in St. Louis, led by Netanel Raviv, have developed a system called Secure Information Embedding and Extraction (SIDE). Unlike earlier fingerprinting methods that stored printer IDs, timestamps, or location data directly into printed objects, SIDE is designed to withstand tampering. 

Even if an object is deliberately smashed, the embedded information remains recoverable, giving investigators a powerful forensic tool. The SIDE framework is built on earlier research presented at the 2024 IEEE International Symposium on Information Theory, which introduced techniques for encoding data that could survive partial destruction. This new version adds enhanced security mechanisms, creating a more resilient system that could be integrated into 3D printers. 

The approach does not rely on obvious markings but instead uses loss-tolerant mathematical embedding to hide identifying information within the material itself. As a result, even fragments of plastic or resin may contain enough data to help reconstruct its origin. Such technology could help reduce the spread of ghost guns and make it more difficult for criminals to use 3D printing for illicit purposes. 

However, the system also raises questions about regulation and personal freedom. If fingerprinting becomes mandatory, even hobbyist printers used for harmless projects may be subject to oversight. This balance between improving security and protecting privacy is likely to spark debate as governments consider regulation. The potential uses of SIDE go far beyond weapons tracing. Any object created with a 3D printer could carry an invisible signature, allowing investigators to track timelines, production sources, and usage. 

Combined with artificial intelligence tools for pattern recognition, this could give law enforcement powerful new forensic capabilities. “This work opens up new ways to protect the public from the harmful aspects of 3D printing through a combination of mathematical contributions and new security mechanisms,” said Raviv, assistant professor of computer science and engineering at Washington University. He noted that while SIDE cannot guarantee protection against highly skilled attackers, it significantly raises the technical barriers for criminals seeking to avoid detection.

Congress Questions Hertz Over AI-Powered Scanners in Rental Cars After Customer Complaints

 

Hertz is facing scrutiny from U.S. lawmakers over its use of AI-powered vehicle scanners to detect damage on rental cars, following growing reports of customer complaints. In a letter to Hertz CEO Gil West, the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation requested detailed information about the company’s automated inspection process. 

Lawmakers noted that unlike some competitors, Hertz appears to rely entirely on artificial intelligence without human verification when billing customers for damage. Subcommittee Chair Nancy Mace emphasized that other rental car providers reportedly use AI technology but still include human review before charging customers. Hertz, however, seems to operate differently, issuing assessments solely based on AI findings. 

This distinction has raised concerns, particularly after a wave of media reports highlighted instances where renters were hit with significant charges once they had already left Hertz locations. Mace’s letter also pointed out that customers often receive delayed notifications of supposed damage, making it difficult to dispute charges before fees increase. The Subcommittee warned that these practices could influence how federal agencies handle car rentals for official purposes. 

Hertz began deploying AI-powered scanners earlier this year at major U.S. airports, including Atlanta, Charlotte, Dallas, Houston, Newark, and Phoenix, with plans to expand the system to 100 locations by the end of 2025. The technology was developed in partnership with Israeli company UVeye, which specializes in AI-driven camera systems and machine learning. Hertz has promoted the scanners as a way to improve the accuracy and efficiency of vehicle inspections, while also boosting availability and transparency for customers. 

According to Hertz, the UVeye platform can scan multiple parts of a vehicle—including body panels, tires, glass, and the undercarriage—automatically identifying possible damage or maintenance needs. The company has claimed that the system enhances manual checks rather than replacing them entirely. Despite these assurances, customer experiences tell a different story. On the r/HertzRentals subreddit, multiple users have shared frustrations over disputed damage claims. One renter described how an AI scanner flagged damage on a vehicle that was wet from rain, triggering an automated message from Hertz about detected issues. 

Upon inspection, the renter found no visible damage and even recorded a video to prove the car’s condition, but Hertz employees insisted they had no control over the system and directed the customer to corporate support. Such incidents have fueled doubts about the fairness and reliability of fully automated damage assessments. 

The Subcommittee has asked Hertz to provide a briefing by August 27 to clarify how the company expects the technology to benefit customers and how it could affect Hertz’s contracts with the federal government. 

With Congress now involved, the controversy marks a turning point in the debate over AI’s role in customer-facing services, especially when automation leaves little room for human oversight.

India Most Targeted by Malware as AI Drives Surge in Ransomware and Phishing Attacks

 

India has become the world’s most-targeted nation for malware, according to the latest report by cybersecurity firm Acronis, which highlights how artificial intelligence is fueling a sharp increase in ransomware and phishing activity. The findings come from the company’s biannual threat landscape analysis, compiled by the Acronis Threat Research Unit (TRU) and its global network of sensors tracking over one million Windows endpoints between January and June 2025. 

The report indicates that India accounted for 12.4 percent of all monitored attacks, placing it ahead of every other nation. Analysts attribute this trend to the rising sophistication of AI-powered cyberattacks, particularly phishing campaigns and impersonation attempts that are increasingly difficult to detect. With Windows systems still dominating business environments compared to macOS or Linux, the operating system remained the primary target for threat actors. 

Ransomware continues to be the most damaging threat to medium and large businesses worldwide, with newer criminal groups adopting AI to automate attacks and enhance efficiency. Phishing was found to be a leading driver of compromise, making up 25 percent of all detected threats and over 52 percent of those aimed at managed service providers, marking a 22 percent increase compared to the first half of 2024. 

Commenting on the findings, Rajesh Chhabra, General Manager for India and South Asia at Acronis, noted that India’s rapidly expanding digital economy has widened its attack surface significantly. He emphasized that as attackers leverage AI to scale operations, Indian enterprises—especially those in manufacturing and infrastructure—must prioritize AI-ready cybersecurity frameworks. He further explained that organizations need to move away from reactive security approaches and embrace behavior-driven models that can anticipate and adapt to evolving threats. 

The report also points to collaboration platforms as a growing entry point for attackers. Phishing attempts on services like Microsoft Teams and Slack spiked dramatically, rising from nine percent to 30.5 percent in the first half of 2025. Similarly, advanced email-based threats such as spoofed messages and payload-less attacks increased from nine percent to 24.5 percent, underscoring the urgent requirement for adaptive defenses. 

Acronis recommends that businesses adopt a multi-layered protection strategy to counter these risks. This includes deploying behavior-based threat detection systems, conducting regular audits of third-party applications, enhancing cloud and email security solutions, and reinforcing employee awareness through continuous training on social engineering and phishing tactics. 

The findings make clear that India’s digital growth is running parallel to escalating cyber risks. As artificial intelligence accelerates the capabilities of malicious actors, enterprises will need to proactively invest in advanced defenses to safeguard critical systems and sensitive data.

Data Portability and Sovereign Clouds: Building Resilience in a Globalized Landscape

 

The emergence of sovereign clouds has become increasingly inevitable as organizations face mounting regulatory demands and geopolitical pressures that influence where their data must be stored. Localized cloud environments are gaining importance, ensuring that enterprises keep sensitive information within specific jurisdictions to comply with legal frameworks and reduce risks. However, the success of sovereign clouds hinges on data portability, the ability to transfer information smoothly across systems and locations, which is essential for compliance and long-term resilience.  

Many businesses cannot afford to wait for regulators to impose requirements; they need to proactively adapt. Yet, the reality is that migrating data across hybrid environments remains complex. Beyond shifting primary data, organizations must also secure related datasets such as backups and information used in AI-driven applications. While some companies focus on safeguarding large language model training datasets, others are turning to methods like retrieval-augmented generation (RAG) or AI agents, which allow them to leverage proprietary data intelligence without creating models from scratch. 

Regardless of the approach, data sovereignty is crucial, but the foundation must always be strong data resilience. Global regulators are shaping the way enterprises view data. The European Union, for example, has taken a strict stance through the General Data Protection Regulation (GDPR), which enforces data sovereignty by applying the laws of the country where data is stored or processed. Additional frameworks such as NIS2 and DORA further emphasize the importance of risk management and oversight, particularly when third-party providers handle sensitive information.

Governments and enterprises alike are concerned about data moving across borders, which has made sovereign cloud adoption a priority for safeguarding critical assets. Some governments are going a step further by reducing reliance on foreign-owned data center infrastructure and reinvesting in domestic cloud capabilities. This shift ensures that highly sensitive data remains protected under national laws. Still, sovereignty alone is not a complete solution. 

Even if organizations can specify where their data is stored, there is no absolute guarantee of permanence, and related datasets like backups or AI training files must be carefully considered. Data portability becomes essential to maintaining sovereignty while avoiding operational bottlenecks. Hybrid cloud adoption offers flexibility, but it also introduces complexity. Larger enterprises may need multiple sovereign clouds across regions, each governed by unique data protection regulations. 

While this improves resilience, it also raises the risk of data fragmentation. To succeed, organizations must embed data portability within their strategies, ensuring seamless transfer across platforms and providers. Without this, the move toward sovereign or hybrid clouds could stall. SaaS and DRaaS providers can support the process, but businesses cannot entirely outsource responsibility. Active planning, oversight, and resilience-building measures such as compliance audits and multi-supplier strategies are essential. 

By clearly mapping where data resides and how it flows, organizations can strengthen sovereignty while enabling agility. As data globalization accelerates, sovereignty and portability are becoming inseparable priorities. Enterprises that proactively address these challenges will be better positioned to adapt to future regulations while maintaining flexibility, security, and long-term operational strength in an increasingly uncertain global landscape.

Texas Attorney General Probes Meta AI Studio and Character.AI Over Child Data and Health Claims

 

Texas Attorney General Ken Paxton has opened an investigation into Meta AI Studio and Character.AI over concerns that their AI chatbots may present themselves as health or therapeutic tools while potentially misusing data collected from underage users. Paxton argued that some chatbots on these platforms misrepresent their expertise by suggesting they are licensed professionals, which could leave minors vulnerable to misleading or harmful information. 

The issue extends beyond false claims of qualifications. AI models often learn from user prompts, raising concerns that children’s data may be stored and used for training purposes without adequate safeguards. Texas law places particular restrictions on the collection and use of minors’ data under the SCOPE Act, which requires companies to limit how information from children is processed and to provide parents with greater control over privacy settings. 

As part of the inquiry, Paxton issued Civil Investigative Demands (CIDs) to Meta and Character.AI to determine whether either company is in violation of consumer protection laws in the state. While neither company explicitly promotes its AI tools as substitutes for licensed mental health services, there are multiple examples of “Therapist” or “Psychologist” chatbots available on Character.AI. Reports have also shown that some of these bots claim to hold professional licenses, despite being fictional. 

In response to the investigation, Character.AI emphasized that its products are intended solely for entertainment and are not designed to provide medical or therapeutic advice. The company said it places disclaimers throughout its platform to remind users that AI characters are fictional and should not be treated as real individuals. Similarly, Meta stated that its AI assistants are clearly labeled and include disclaimers highlighting that responses are generated by machines, not people. 

The company also said its AI tools are designed to encourage users to seek qualified medical or safety professionals when appropriate. Despite these disclaimers, critics argue that such warnings are easy to overlook and may not effectively prevent misuse. Questions also remain about how the companies collect, store, and use user data. 

According to their privacy policies, Meta gathers prompts and feedback to enhance AI performance, while Character.AI collects identifiers and demographic details that may be applied to advertising and other purposes. Whether these practices comply with Texas’ SCOPE Act will likely depend on how easily children can create accounts and how much parental oversight is built into the platforms. 

The investigation highlights broader concerns about the role of AI in sensitive areas such as mental health and child privacy. The outcome could shape how companies must handle data from younger users while limiting the risks of AI systems making misleading claims that could harm vulnerable individuals.

Think Twice Before Uploading Personal Photos to AI Chatbots

 

Artificial intelligence chatbots are increasingly being used for fun, from generating quirky captions to transforming personal photos into cartoon characters. While the appeal of uploading images to see creative outputs is undeniable, the risks tied to sharing private photos with AI platforms are often overlooked. A recent incident at a family gathering highlighted just how easy it is for these photos to be exposed without much thought. What might seem like harmless fun could actually open the door to serious privacy concerns. 

The central issue is unawareness. Most users do not stop to consider where their photos are going once uploaded to a chatbot, whether those images could be stored for AI training, or if they contain personal details such as house numbers, street signs, or other identifying information. Even more concerning is the lack of consent—especially when it comes to children. Uploading photos of kids to chatbots, without their ability to approve or refuse, creates ethical and security challenges that should not be ignored.  

Photos contain far more than just the visible image. Hidden metadata, including timestamps, location details, and device information, can be embedded within every upload. This information, if mishandled, could become a goldmine for malicious actors. Worse still, once a photo is uploaded, users lose control over its journey. It may be stored on servers, used for moderation, or even retained for training AI models without the user’s explicit knowledge. Just because an image disappears from the chat interface does not mean it is gone from the system.  

One of the most troubling risks is the possibility of misuse, including deepfakes. A simple selfie, once in the wrong hands, can be manipulated to create highly convincing fake content, which could lead to reputational damage or exploitation. 

There are steps individuals can take to minimize exposure. Reviewing a platform’s privacy policy is a strong starting point, as it provides clarity on how data is collected, stored, and used. Some platforms, including OpenAI, allow users to disable chat history to limit training data collection. Additionally, photos can be stripped of metadata using tools like ExifTool or by taking a screenshot before uploading. 

Consent should also remain central to responsible AI use. Children cannot give informed permission, making it inappropriate to share their images. Beyond privacy, AI-altered photos can distort self-image, particularly among younger users, leading to long-term effects on confidence and mental health. 

Safer alternatives include experimenting with stock images or synthetic faces generated by tools like This Person Does Not Exist. These provide the creative fun of AI tools without compromising personal data. 

Ultimately, while AI chatbots can be entertaining and useful, users must remain cautious. They are not friends, and their cheerful tone should not distract from the risks. Practicing restraint, verifying privacy settings, and thinking critically before uploading personal photos is essential for protecting both privacy and security in the digital age.

How Scammers Use Deepfakes in Financial Fraud and Ways to Stay Protected

 

Deepfake technology, developed through artificial intelligence, has advanced to the point where it can convincingly replicate human voices, facial expressions, and subtle movements. While once regarded as a novelty for entertainment or social media, it has now become a dangerous tool for cybercriminals. In the financial world, deepfakes are being used in increasingly sophisticated ways to deceive institutions and individuals, creating scenarios where it becomes nearly impossible to distinguish between genuine interactions and fraudulent attempts. This makes financial fraud more convincing and therefore more difficult to prevent. 

One of the most troubling ways scammers exploit this technology is through face-swapping. With many banks now relying on video calls for identity verification, criminals can deploy deepfake videos to impersonate real customers. By doing so, they can bypass security checks and gain unauthorized access to accounts or approve financial decisions on behalf of unsuspecting individuals. The realism of these synthetic videos makes them difficult to detect in real time, giving fraudsters a significant advantage. 

Another major risk involves voice cloning. As voice-activated banking systems and phone-based transaction verifications grow more common, fraudsters use audio deepfakes to mimic a customer’s voice. If a bank calls to confirm a transaction, criminals can respond with cloned audio that perfectly imitates the customer, bypassing voice authentication and seizing control of accounts. Scammers also use voice and video deepfakes to impersonate financial advisors or bank representatives, making victims believe they are speaking to trusted officials. These fraudulent interactions may involve fake offers, urgent warnings, or requests for sensitive data, all designed to extract confidential information. 

The growing realism of deepfakes means consumers must adopt new habits to protect themselves. Double-checking unusual requests is a critical step, as fraudsters often rely on urgency or trust to manipulate their targets. Verifying any unexpected communication by calling a bank’s official number or visiting in person remains the safest option. Monitoring accounts regularly is another defense, as early detection of unauthorized or suspicious activity can prevent larger financial losses. Setting alerts for every transaction, even small ones, can make fraudulent activity easier to spot. 

Using multi-factor authentication adds an essential layer of protection against these scams. By requiring more than just a password to access accounts, such as one-time codes, biometrics, or additional security questions, banks make it much harder for criminals to succeed, even if deepfakes are involved. Customers should also remain cautious of video and audio communications requesting sensitive details. Even if the interaction appears authentic, confirming through secure channels is far more reliable than trusting what seems real on screen or over the phone.  

Deepfake-enabled fraud is dangerous precisely because of how authentic it looks and sounds. Yet, by staying vigilant, educating yourself about emerging scams, and using available security tools, it is possible to reduce risks. Awareness and skepticism remain the strongest defenses, ensuring that financial safety is not compromised by increasingly deceptive digital threats.

US Lawmakers Raise Concerns Over AI Airline Ticket Pricing Practices

 

Airline controversies often make headlines, and recent weeks have seen no shortage of them. Southwest Airlines faced passenger backlash after a leaked survey hinted at possible changes to its Rapid Rewards program. Delta Air Lines also reduced its Canadian routes in July amid a travel boycott, prompting mixed reactions from U.S. states dependent on Canadian tourism. 

Now, a new and more contentious issue involving Delta has emerged—one that merges the airline industry’s pricing strategies with artificial intelligence (AI), raising alarm among lawmakers and regulators. The debate centers on the possibility of airlines using AI to determine “personalized” ticket prices based on individual passenger data. 

Such a system could adjust fares in real time during searches and bookings, potentially charging some customers more—particularly those perceived as wealthier or in urgent need of travel—while offering lower rates to others. Factors influencing AI-driven pricing could include a traveler’s zip code, age group, occupation, or even recent online searches suggesting urgency, such as looking up obituaries. 

Critics argue this approach essentially monetizes personal information to maximize airline profits, while raising questions about fairness, transparency, and privacy. U.S. Transportation Secretary Sean Duffy voiced concerns on August 5, stating that any attempt to individualize airfare based on personal attributes would prompt immediate investigation. He emphasized that pricing seats according to income or personal identity is unacceptable. 

Delta Air Lines has assured lawmakers that it has never used, tested, or planned to use personal data to set individual ticket prices. The airline acknowledged its long-standing use of dynamic pricing, which adjusts fares based on competition, fuel costs, and demand, but stressed that personal information has never been part of the equation. While Duffy accepted Delta’s statement “at face value,” several Democratic senators, including Richard Blumenthal, Mark Warner, and Ruben Gallego, remain skeptical and are pressing for legislative safeguards. 

This skepticism is partly fueled by past comments from Delta President Glen Hauenstein, who in December suggested that AI could help predict how much passengers are willing to pay for premium services. Although Delta has promised not to implement AI-based personal pricing, the senators want clarity on the nature of the data being collected for fare determination. 

In response to these concerns, Democratic lawmakers Rashida Tlaib and Greg Casar have introduced a bill aimed at prohibiting companies from using AI to set prices or wages based on personal information. This would include preventing airlines from raising fares after detecting sensitive online activity. Delta’s partnership with AI pricing firm Fetcherr—whose clients include several major global airlines—has also drawn attention. While some carriers view AI pricing as a profit-boosting tool, others, like American Airlines CEO Robert Isom, have rejected the practice, citing potential damage to consumer trust. 

For now, AI-driven personal pricing in air travel remains a possibility rather than a reality in the U.S. Whether it will be implemented—or banned outright—depends on the outcome of ongoing political and public scrutiny. Regardless, the debate underscores a growing tension between technological innovation and consumer protection in the airline industry.

South Dakota Researchers Develop Secure IoT-Based Crop Monitoring System

 

At the 2025 annual meeting of the American Society of Agricultural and Biological Engineers, researchers from South Dakota State University unveiled a groundbreaking system designed to help farmers increase crop yields while reducing costs. This innovative technology combines sensors, biosensors, the Internet of Things (IoT), and artificial intelligence to monitor crop growth and deliver actionable insights. 

Unlike most projects that rely on simulated post-quantum security in controlled lab environments, the SDSU team, led by Professor Lin Wei and Ph.D. student Manish Shrestha, implemented robust, real-world security in a complete sensor-to-cloud application. Their work demonstrates that advanced, future-ready encryption can operate directly on small IoT devices, eliminating the need for large servers to safeguard agricultural data. 

The team placed significant emphasis on protecting the sensitive information collected by their system. They incorporated advanced encryption and cryptographic techniques to ensure the security and integrity of the vast datasets gathered from the field. These datasets included soil condition measurements—such as temperature, moisture, and nutrient availability—alongside early indicators of plant stress, including nutrient deficiencies, disease presence, and pest activity. Environmental factors were also tracked to provide a complete picture of field health. 

Once processed, this data was presented to farmers in a user-friendly format, enabling them to make informed management decisions without exposing their operational information to potential threats. This could include optimizing irrigation schedules, applying targeted fertilization, or implementing timely pest and disease control measures, all while ensuring data privacy.  

Cybersecurity’s role in agricultural technology emerged as a central topic at the conference, with many experts recognizing that safeguarding digital farming systems is as critical as improving productivity. The SDSU project attracted attention for addressing this challenge head-on, highlighting the importance of building secure infrastructure for the rapidly growing amount of agricultural data generated by smart farming tools.  

Looking ahead, the research team plans to further refine their crop monitoring system. Future updates may include faster data processing and a shift to solar-powered batteries, which would reduce maintenance needs and extend device lifespan. These improvements aim to make the technology even more efficient, sustainable, and farmer-friendly, ensuring that agricultural innovation remains both productive and secure in the face of evolving cyber threats.

Racing Ahead with AI, Companies Neglect Governance—Leading to Costly Breaches

 

Organizations are deploying AI at breakneck speed—so rapidly, in fact, that foundational safeguards like governance and access controls are being sidelined. The 2025 IBM Cost of a Data Breach Report, based on data from 600 breached companies, finds that 13% of organizations have suffered breaches involving AI systems, with 97% of those lacking basic AI access controls. IBM refers to this trend as “do‑it‑now AI adoption,” where businesses prioritize quick implementation over security. 

The consequences are stark: systems deployed without oversight are more likely to be breached—and when breaches occur, they’re more costly. One emerging danger is “shadow AI”—the widespread use of AI tools by staff without IT approval. The report reveals that organizations facing breaches linked to shadow AI incurred about $670,000 more in costs than those without such unauthorized use. 

Furthermore, 20% of surveyed organizations reported such breaches, yet only 37% had policies to manage or detect shadow AI. Despite these risks, companies that integrate AI and automation into their security operations are finding significant benefits. On average, such firms reduced breach costs by around $1.9 million and shortened incident response timelines by 80 days. 

IBM’s Vice President of Data Security, Suja Viswesan, emphasized that this mismatch between rapid AI deployment and weak security infrastructure is creating critical vulnerabilities—essentially turning AI into a high-value target for attackers. Cybercriminals are increasingly weaponizing AI as well. A notable 16% of breaches now involve attackers using AI—frequently in phishing or deepfake impersonation campaigns—illustrating that AI is both a risk and a defensive asset. 

On the cost front, global average data breach expenses have decreased slightly, falling to $4.44 million, partly due to faster containment via AI-enhanced response tools. However, U.S. breach costs soared to a record $10.22 million—underscoring how inconsistent security practices can dramatically affect financial outcomes. 

IBM calls for organizations to build governance, compliance, and security into every step of AI adoption—not after deployment. Without policies, oversight, and access controls embedded from the start, the rapid embrace of AI could compromise trust, safety, and financial stability in the long run.

OpenAI Launching AI-Powered Web Browser to Rival Chrome, Drive ChatGPT Integration

 

OpenAI is reportedly developing its own web browser, integrating artificial intelligence to offer users a new way to explore the internet. According to sources cited by Reuters, the tool is expected to be unveiled in the coming weeks, although an official release date has not yet been announced. With this move, OpenAI seems to be stepping into the competitive browser space with the goal of challenging Google Chrome’s dominance, while also gaining access to valuable user data that could enhance its AI models and advertising potential. 

The browser is expected to serve as more than just a window to the web—it will likely come packed with AI features, offering users the ability to interact with tools like ChatGPT directly within their browsing sessions. This integration could mean that AI-generated responses, intelligent page summaries, and voice-based search capabilities are no longer separate from web activity but built into the browsing experience itself. Users may be able to complete tasks, ask questions, and retrieve information all within a single, unified interface. 

A major incentive for OpenAI is the access to first-party data. Currently, most of the data that fuels targeted advertising and search engine algorithms is captured by Google through Chrome. By creating its own browser, OpenAI could tap into a similar stream of data—helping to both improve its large language models and create new revenue opportunities through ad placements or subscription services. While details on privacy controls are unclear, such deep integration with AI may raise concerns about data protection and user consent. 

Despite the potential, OpenAI faces stiff competition. Chrome currently holds a dominant share of the global browser market, with nearly 70% of users relying on it for daily web access. OpenAI would need to provide compelling reasons for people to switch—whether through better performance, advanced AI tools, or stronger privacy options. Meanwhile, other companies are racing to enter the same space. Perplexity AI, for instance, recently launched a browser named Comet, giving early adopters a glimpse into what AI-first browsing might look like. 

Ultimately, OpenAI’s browser could mark a turning point in how artificial intelligence intersects with the internet. If it succeeds, users might soon navigate the web in ways that are faster, more intuitive, and increasingly guided by AI. But for now, whether this approach will truly transform online experiences—or simply add another player to the browser wars—remains to be seen.

Why Running AI Locally with an NPU Offers Better Privacy, Speed, and Reliability

 

Running AI applications locally offers a compelling alternative to relying on cloud-based chatbots like ChatGPT, Gemini, or Deepseek, especially for those concerned about data privacy, internet dependency, and speed. Though cloud services promise protections through subscription terms, the reality remains uncertain. In contrast, using AI locally means your data never leaves your device, which is particularly advantageous for professionals handling sensitive customer information or individuals wary of sharing personal data with third parties. 

Local AI eliminates the need for a constant, high-speed internet connection. This reliable offline capability means that even in areas with spotty coverage or during network outages, tools for voice control, image recognition, and text generation remain functional. Lower latency also translates to near-instantaneous responses, unlike cloud AI that may lag due to network round-trip times. 

A powerful hardware component is essential here: the Neural Processing Unit (NPU). Typical CPUs and GPUs can struggle with AI workloads like large language models and image processing, leading to slowdowns, heat, noise, and shortened battery life. NPUs are specifically designed for handling matrix-heavy computations—vital for AI—and they allow these models to run efficiently right on your laptop, without burdening the main processor. 

Currently, consumer devices such as Intel Core Ultra, Qualcomm Snapdragon X Elite, and Apple’s M-series chips (M1–M4) come equipped with NPUs built for this purpose. With one of these devices, you can run open-source AI models like DeepSeek‑R1, Qwen 3, or LLaMA 3.3 using tools such as Ollama, which supports Windows, macOS, and Linux. By pairing Ollama with a user-friendly interface like OpenWeb UI, you can replicate the experience of cloud chatbots entirely offline.  

Other local tools like GPT4All and Jan.ai also provide convenient interfaces for running AI models locally. However, be aware that model files can be quite large (often 20 GB or more), and without NPU support, performance may be sluggish and battery life will suffer.  

Using AI locally comes with several key advantages. You gain full control over your data, knowing it’s never sent to external servers. Offline compatibility ensures uninterrupted use, even in remote or unstable network environments. In terms of responsiveness, local AI often outperforms cloud models due to the absence of network latency. Many tools are open source, making experimentation and customization financially accessible. Lastly, NPUs offer energy-efficient performance, enabling richer AI experiences on everyday devices. 

In summary, if you’re looking for a faster, more private, and reliable AI workflow that doesn’t depend on the internet, equipping your laptop with an NPU and installing tools like Ollama, OpenWeb UI, GPT4All, or Jan.ai is a smart move. Not only will your interactions be quick and seamless, but they’ll also remain securely under your control.

AI and the Rise of Service-as-a-Service: Why Products Are Becoming Invisible

 

The software world is undergoing a fundamental shift. Thanks to AI, product development has become faster, easier, and more scalable than ever before. Tools like Cursor and Lovable—along with countless “co-pilot” clones—have turned coding into prompt engineering, dramatically reducing development time and enhancing productivity. 

This boom has naturally caught the attention of venture capitalists. Funding for software companies hit $80 billion in Q1 2025, with investors eager to back niche SaaS solutions that follow the familiar playbook: identify a pain point, build a narrow tool, and scale aggressively. Y Combinator’s recent cohort was full of “Cursor for X” startups, reflecting the prevailing appetite for micro-products. 

But beneath this surge of point solutions lies a deeper transformation: the shift from product-led growth to outcome-driven service delivery. This evolution isn’t just about branding—it’s a structural redefinition of how software creates and delivers value. Historically, the SaaS revolution gave rise to subscription-based models, but the tools themselves remained hands-on. For example, when Adobe moved Creative Suite to the cloud, the billing changed—not the user experience. Users still needed to operate the software. SaaS, in that sense, was product-heavy and service-light. 

Now, AI is dissolving the product layer itself. The software is still there, but it’s receding into the background. The real value lies in what it does, not how it’s used. Glide co-founder Gautam Ajjarapu captures this perfectly: “The product gets us in the door, but what keeps us there is delivering results.” Take Glide’s AI for banks. It began as a tool to streamline onboarding but quickly evolved into something more transformative. Banks now rely on Glide to improve retention, automate workflows, and enhance customer outcomes. 

The interface is still a product, but the substance is service. The same trend is visible across leading AI startups. Zendesk markets “automated customer service,” where AI handles tickets end-to-end. Amplitude’s AI agents now generate product insights and implement changes. These offerings blur the line between tool and outcome—more service than software. This shift is grounded in economic logic. Services account for over 70% of U.S. GDP, and Nobel laureate Bengt Holmström’s contract theory helps explain why: businesses ultimately want results, not just tools. 

They don’t want a CRM—they want more sales. They don’t want analytics—they want better decisions. With agentic AI, it’s now possible to deliver on that promise. Instead of selling a dashboard, companies can sell growth. Instead of building an LMS, they offer complete onboarding services powered by AI agents. This evolution is especially relevant in sectors like healthcare. Corti’s CEO Andreas Cleve emphasizes that doctors don’t want more interfaces—they want more time. AI that saves time becomes invisible, and its value lies in what it enables, not how it looks. 

The implication is clear: software is becoming outcome-first. Users care less about tools and more about what those tools accomplish. Many companies—Glean, ElevenLabs, Corpora—are already moving toward this model, delivering answers, brand voices, or research synthesis rather than just access. This isn’t the death of the product—it’s its natural evolution. The best AI companies are becoming “services in a product wrapper,” where software is the delivery mechanism, but the value lies in what gets done. 

For builders, the question is no longer how to scale a product. It’s how to scale outcomes. The companies that succeed in this new era will be those that understand: users don’t want features—they want results. Call it what you want—AI-as-a-service, agentic delivery, or outcome-led software. But the trend is unmistakable. Service-as-a-Service isn’t just the next step for SaaS. It may be the future of software itself.

Personal AI Agents Could Become Digital Advocates in an AI-Dominated World

 

As generative AI agents proliferate, a new concept is gaining traction: AI entities that act as loyal digital advocates, protecting individuals from overwhelming technological complexity, misinformation, and data exploitation. Experts suggest these personal AI companions could function similarly to service animals—trained not just to assist, but to guard user interests in an AI-saturated world. From scam detection to helping navigate automated marketing and opaque algorithms, these agents would act as user-first shields. 

At a recent Imagination in Action panel, Consumer Reports’ Ginny Fahs explained, “As companies embed AI deeper into commerce, it becomes harder for consumers to identify fair offers or make informed decisions. An AI that prioritizes users’ interests can build trust and help transition toward a more transparent digital economy.” The idea is rooted in giving users agency and control in a system where most AI is built to serve businesses. Panelists—including experts like Dazza Greenwood, Amir Sarhangi, and Tobin South—discussed how loyal, trustworthy AI advocates could reshape personal data rights, online trust, and legal accountability. 

Greenwood drew parallels to early internet-era reforms such as e-signatures and automated contracts, suggesting a similar legal evolution is needed now to govern AI agents. South added that AI agents must be “loyal by design,” ensuring they act within legal frameworks and always prioritize the user. Sarhangi introduced the concept of “Know Your Agent” (KYA), which promotes transparency by tracking the digital footprint of an AI. 

With unique agent wallets and activity histories, bad actors could be identified and held accountable. Fahs described a tool called “Permission Slip,” which automates user requests like data deletion. This form of AI advocacy predates current generative models but shows how user-authorized agents could manage privacy at scale. Agents could also learn from collective behavior. For instance, an AI noting a negative review of a product could share that experience with other agents, building an automated form of word-of-mouth. 

This concept, said panel moderator Sandy Pentland, mirrors how Consumer Reports aggregates user feedback to identify reliable products. South emphasized that cryptographic tools could ensure safe data-sharing without blindly trusting tech giants. He also referenced NANDA, a decentralized protocol from MIT that aims to enable trustworthy AI infrastructure. Still, implementing AI agents raises usability questions. “We want agents to understand nuanced permissions without constantly asking users to approve every action,” Fahs said. 

Getting this right will be crucial to user adoption. Pentland noted that current AI models struggle to align with individual preferences. “An effective agent must represent you—not a demographic group, but your unique values,” he said. Greenwood believes that’s now possible: “We finally have the tools to build AI agents with fiduciary responsibilities.” In closing, South stressed that the real bottleneck isn’t AI capability but structuring and contextualizing information properly. “If you want AI to truly act on your behalf, we must design systems that help it understand you.” 

As AI becomes deeply embedded in daily life, building personalized, privacy-conscious agents may be the key to ensuring technology serves people—not the other way around.

WhatsApp Under Fire for AI Update Disrupting Group Communication


The new artificial intelligence capability introduced by WhatsApp aims to transform the way users interact with their conversations through sophisticated artificial intelligence. It uses advanced technology from Meta AI to provide a concise summary of unread messages across individual chats as well as group chats, which is referred to as Message Summaries. 

The tool was created to help users stay informed in increasingly active chat environments by automatically compiling key points and contextual highlights, allowing them to catch up in just a few clicks without having to scroll through lengthy message histories to catch up. The company claims all summaries are generated privately, so that confidentiality can be maintained and the process of use is as simple as possible for the user. 

WhatsApp announces its intention of integrating artificial intelligence-driven solutions into its app to improve user convenience as well as reshape communication habits for its global community with this rollout, sparking both excitement and controversy as a result. Despite being announced last month, WhatsApp’s innovative Message Summaries feature has moved from pilot testing to a full-scale rollout after successfully passing pilot testing. 

Having refined the tool and collected feedback from its users, it is now considered to be stable and has been formally launched for wider use. In the initial phase, the feature is only available to US users and is restricted to the English language at this time. This indicates that WhatsApp is cautious when it comes to deploying large-scale artificial intelligence. 

Nevertheless, the platform announced plans to extend its availability to more regions at some point in the future, along with the addition of multilingual support. The phased rollout strategy emphasises that the company is focused on ensuring that the technology is reliable and user-friendly before it is extended to the vast global market. 

It is WhatsApp's intention to focus on a controlled release so as to gather more insights about users' interaction with the AI-generated conversation summaries, as well as to fine-tune the experience before expanding internationally. As a result of WhatsApp's inability to provide an option for enabling or concealing the Message Summaries feature, there has been a significant amount of discontent among users. 

Despite the fact that Meta has refused to clarify the reason regarding the lack of an opt-out mechanism or why users were not offered the opportunity to opt out of the AI integration, they have not provided any explanation so far. As concerning as the technology itself is, the lack of transparency has been regarded equally as a cause for concern by many, raising questions about the control people have over their personal communications. As a result of these limitations, some people have attempted to circumvent the chatbot by switching to a WhatsApp Business account as a response. 

In addition, several users have commented that this strategy removed the AI functionality from Meta AI, but others have noted that the characteristic blue circle, which indicates Meta AI's presence, still appeared, which exacerbated the dissatisfaction and uncertainty. 

The Meta team hasn’t confirmed whether the business-oriented version of WhatsApp will continue to be exempt from AI integration for years to come. This rollout also represents Meta’s broader goal of integrating generative AI into all its platforms, which include Facebook and Instagram, into its ecosystem. 

Towards the end of 2024, Meta AI was introduced for the first time in Facebook Messenger in the United Kingdom, followed by a gradual extension into WhatsApp as part of a unified vision to revolutionise digital interactions. However, many users have expressed their frustration with this feature because it often feels intrusive and ultimately is useless, despite these ambitions. 

The chatbot appears to activate frequently when individuals are simply searching for past conversations or locating contacts, which results in obstructions rather than streamlining the experience. According to the initial feedback received, AI-generated responses are frequently perceived as superficial, repetitive, or even irrelevant to the conversation's context, as well as generating a wide range of perceptions of their value.

A Meta AI platform has been integrated directly into WhatsApp, unlike standalone platforms such as ChatGPT and Google Gemini, which are separately accessible by users. WhatsApp is a communication application that is used on a daily basis to communicate both personally and professionally. Because the feature was integrated without explicit consent and there were doubts about its usefulness, many users are beginning to wonder whether such pervasive AI assistance is really necessary or desirable. 

It has also been noted that there is a growing chorus of criticism about the inherent limitations of artificial intelligence in terms of reliably interpreting human communication. Many users have expressed their scepticism about AI's ability to accurately condense even one message within an active group chat, let alone synthesise hundreds of exchanges. It is not the first time Apple has faced similar challenges; Apple has faced similar challenges in the past when it had to pull an AI-powered feature that produced unintended and sometimes inaccurate summaries. 

As of today, the problem of "hallucinations," which occur in the form of factually incorrect or contextually irrelevant content generated by artificial intelligence, remains a persistent problem across nearly every generative platform, including commonly used platforms like ChatGPT. Aside from that, artificial intelligence continues to struggle with subtleties such as humour, sarcasm, and cultural nuance-aspects of natural conversation that are central to establishing a connection. 

In situations where the AI is not trained to recognise offhand or joking remarks, it can easily misinterpret those remarks. This leads to summaries that are alarmist, distorted, or completely inaccurate, as compared to human recipients' own. Due to the increased risk of misrepresentation, users who rely on WhatsApp for authentic, nuanced communication with colleagues, friends, and family are becoming more apprehensive than before. 

A philosophical objection has been raised beyond technical limitations, stating that the act of participating in a conversation is diminished by substituting real engagement for machine-generated recaps. There is a shared sentiment that the purpose of group chats lies precisely in the experience of reading and responding to the genuine voices of others while scrolling through a backlog of messages. 

However, there is a consensus that it is exhausting to scroll through such a large backlog of messages. It is believed that the introduction of Message Summaries not only threatens clear communication but also undermines the sense of personal connection that draws people into these digital communities in the first place, which is why these critics are concerned. 

In order to ensure user privacy, WhatsApp has created the Message Summaries feature using a new framework known as Private Processing, which is designed to safeguard user privacy. Meta and WhatsApp are specifically ensuring that neither the contents of their conversations nor the summaries that the AI system produces are able to be accessed by them, which is why this approach was developed. 

Instead of sending summaries to external servers, the platform is able to generate them locally on the users' devices, reinforcing its commitment to privacy. Each summary, presented in a clear bullet point format, is clearly labelled as "visible only to you," emphasising WhatsApp's privacy-centric design philosophy behind the feature as well. 

Message Summaries have shown to be especially useful in group chats in which the amount of unread messages is often overwhelming, as a result of the large volume of unread messages. With this tool, users are able to remain informed without having to read every single message, because lengthy exchanges are distilled into concise snapshots that enable them to stay updated without having to scroll through each and every individual message. 

The feature is disabled by default and needs to be activated manually, which addresses privacy concerns. Upon activating the feature, eligible chats display a discreet icon, signalling the availability of a summary without announcing it to other participants. Meta’s confidential computing infrastructure is at the core of its system, and in principle, it is comparable to Apple’s private cloud computing architecture. 

A Trusted Execution Environment (TEE) provides a foundation for Private Processing, ensuring that confidential information is handled in an effective manner, with robust measures against tampering, and clear mechanisms for ensuring transparency are in place.

A system's architecture is designed to shut down automatically or to generate verifiable evidence of the intrusion whenever any attempt is made to compromise the security assurances of the system. As well as supporting independent third-party audits, Meta has intentionally designed the framework in such a way that it will remain stateless, forward secure, and immune to targeted attacks so that Meta's claims about data protection can be verified. 

Furthermore, advanced chat privacy settings are included as a complement to these technical safeguards, as they allow users to select the conversations that will be eligible for AI-generated summaries and thus offer granular control over the use of the feature. Moreover, when a user decides to enable summaries in a chat, no notification is sent to other participants, allowing for greater discretion on the part of other participants.

There is currently a phase in which Message Summaries are being gradually introduced to users in the United States. They can only be read in English at the moment. There has been confirmation by Meta that the feature will be expanded to additional regions and supported in additional languages shortly, as part of their broader effort to integrate artificial intelligence into all aspects of their service offerings. 

As WhatsApp intensifies its efforts to embed AI capabilities deeper and deeper into everyday communication, Message Summaries marks a pivotal moment in the evolution of relationships between technology and human interaction as the company accelerates its ambition to involve AI capabilities across the entire enterprise. 

Even though the company has repeatedly reiterated that it is committed to privacy, transparency, and user autonomy, the response to this feature has been polarised, which highlights the challenges associated with incorporating artificial intelligence in spaces where trust, nuance, and human connection are paramount. 

It is a timely reminder that, for both individuals and organisations, the growth of convenience-driven automation impacts the genuine social fabric that is a hallmark of digital communities and requires a careful assessment. 

As platforms evolve, stakeholders would do well to remain vigilant with the changes to platform policies, evaluate whether such tools align with the communication values they hold dear, and consider offering structured feedback in order for these technologies to mature with maturity. As artificial intelligence continues to redefine the contours of messaging, users will need to be open to innovation while also expressing critical thought about the long-term implications on privacy, comprehension, and even the very nature of meaningful dialogue as AI use continues to grow in popularity.

Security Teams Struggle to Keep Up With Generative AI Threats, Cobalt Warns

 

A growing number of cybersecurity professionals are expressing concern that generative AI is evolving too rapidly for their teams to manage. 

According to new research by penetration testing company Cobalt, over one-third of security leaders and practitioners admit that the pace of genAI development has outstripped their ability to respond. Nearly half of those surveyed (48%) said they wish they could pause and reassess their defense strategies in light of these emerging threats—though they acknowledge that such a break isn’t realistic. 

In fact, 72% of respondents listed generative AI-related attacks as their top IT security risk. Despite this, one in three organizations still isn’t conducting regular security evaluations of their large language model (LLM) deployments, including basic penetration testing. 

Cobalt CTO Gunter Ollmann warned that the security landscape is shifting, and the foundational controls many organizations rely on are quickly becoming outdated. “Our research shows that while generative AI is transforming how businesses operate, it’s also exposing them to risks they’re not prepared for,” said Ollmann. 
“Security frameworks must evolve or risk falling behind.” The study revealed a divide between leadership and practitioners. Executives such as CISOs and VPs are more concerned about long-term threats like adversarial AI attacks, with 76% listing them as a top issue. Meanwhile, 45% of practitioners are more focused on immediate operational challenges such as model inaccuracies, compared to 36% of executives. 

A majority of leaders—52%—are open to rethinking their cybersecurity strategies to address genAI threats. Among practitioners, only 43% shared this view. The top genAI-related concerns identified by the survey included the risk of sensitive information disclosure (46%), model poisoning or theft (42%), data inaccuracies (40%), and leakage of training data (37%). Around half of respondents also expressed a desire for more transparency from software vendors about how vulnerabilities are identified and patched, highlighting a widening trust gap in the AI supply chain. 

Cobalt’s internal pentest data shows a worrying trend: while 69% of high-risk vulnerabilities are typically fixed across all test types, only 21% of critical flaws found in LLM tests are resolved. This is especially alarming considering that nearly one-third of LLM vulnerabilities are classified as serious. Interestingly, the average time to resolve these LLM-specific vulnerabilities is just 19 days—the fastest across all categories. 

However, researchers noted this may be because organizations prioritize easier, low-effort fixes rather than tackling more complex threats embedded in foundational AI models. Ollmann compared the current scenario to the early days of cloud adoption, where innovation outpaced security readiness. He emphasized that traditional controls aren’t enough in the age of LLMs. “Security teams can’t afford to be reactive anymore,” he concluded. “They must move toward continuous, programmatic AI testing if they want to keep up.”

Meta.ai Privacy Lapse Exposes User Chats in Public Feed

 

Meta’s new AI-driven chatbot platform, Meta.ai, launched recently with much fanfare, offering features like text and voice chats, image generation, and video restyling. Designed to rival platforms like ChatGPT, the app also includes a Discover feed, a space intended to showcase public content generated by users. However, what Meta failed to communicate effectively was that many users were unintentionally sharing their private conversations in this feed—sometimes with extremely sensitive content attached. 

In May, journalists flagged the issue when they discovered public chats revealing deeply personal user concerns—ranging from financial issues and health anxieties to legal troubles. These weren’t obscure posts either; they appeared in a publicly accessible area of the app, often containing identifying information. Conversations included users seeking help with medical diagnoses, children talking about personal experiences, and even incarcerated individuals discussing legal strategies—none of whom appeared to realize their data was visible to others. 

Despite some recent tweaks to the app’s sharing settings, disturbing content still appears on the Discover feed. Users unknowingly uploaded images and video clips, sometimes including faces, alongside alarming or bizarre prompts. One especially troubling instance featured a photo of a child at school, accompanied by a prompt instructing the AI to “make him cry.” Such posts reflect not only poor design choices but also raise ethical questions about the purpose and moderation of the Discover feed itself. 

The issue evokes memories of other infamous data exposure incidents, such as AOL’s release of anonymized user searches in 2006, which provided unsettling insight into private thoughts and behaviors. While social media platforms are inherently public, users generally view AI chat interactions as private, akin to using a search engine. Meta.ai blurred that boundary—perhaps unintentionally, but with serious consequences. Many users turned to Meta.ai seeking support, companionship, or simple productivity help. Some asked for help with job listings or obituary writing, while others vented emotional distress or sought comfort during panic attacks. 

In some cases, users left chats expressing gratitude—believing the bot had helped. But a growing number of conversations end in frustration or embarrassment when users realize the bot cannot deliver on its promises or that their content was shared publicly. These incidents highlight a disconnect between how users engage with AI tools and how companies design them. Meta’s ambition to merge AI capabilities with social interaction seems to have ignored the emotional and psychological expectations users bring to private-sounding features. 

For those using Meta.ai as a digital confidant, the lack of clarity around privacy settings has turned an experiment in convenience into a public misstep. As AI systems become more integrated into daily life, companies must rethink how they handle user data—especially when users assume privacy. Meta.ai’s rocky launch serves as a cautionary tale about transparency, trust, and design in the age of generative AI.

How Generative AI Is Accelerating the Rise of Shadow IT and Cybersecurity Gaps

 

The emergence of generative AI tools in the workplace has reignited concerns about shadow IT—technology solutions adopted by employees without the knowledge or approval of the IT department. While shadow IT has always posed security challenges, the rapid proliferation of AI tools is intensifying the issue, creating new cybersecurity risks for organizations already struggling with visibility and control. 

Employees now have access to a range of AI-powered tools that can streamline daily tasks, from summarizing text to generating code. However, many of these applications operate outside approved systems and can send sensitive corporate data to third-party cloud environments. This introduces serious privacy concerns and increases the risk of data leakage. Unlike legacy software, generative AI solutions can be downloaded and used with minimal friction, making them harder for IT teams to detect and manage. 

The 2025 State of Cybersecurity Report by Ivanti reveals a critical gap between awareness and preparedness. More than half of IT and security leaders acknowledge the threat posed by software and API vulnerabilities. Yet only about one-third feel fully equipped to deal with these risks. The disparity highlights the disconnect between theory and practice, especially as data visibility becomes increasingly fragmented. 

A significant portion of this problem stems from the lack of integrated data systems. Nearly half of organizations admit they do not have enough insight into the software operating on their networks, hindering informed decision-making. When IT and security departments work in isolation—something 55% of organizations still report—it opens the door for unmonitored tools to slip through unnoticed. 

Generative AI has only added to the complexity. Because these tools operate quickly and independently, they can infiltrate enterprise environments before any formal review process occurs. The result is a patchwork of unverified software that can compromise an organization’s overall security posture. 

Rather than attempting to ban shadow IT altogether—a move unlikely to succeed—companies should focus on improving data visibility and fostering collaboration between departments. Unified platforms that connect IT and security functions are essential. With a shared understanding of tools in use, teams can assess risks and apply controls without stifling innovation. 

Creating a culture of transparency is equally important. Employees should feel comfortable voicing their tech needs instead of finding workarounds. Training programs can help users understand the risks of generative AI and encourage safer choices. 

Ultimately, AI is not the root of the problem—lack of oversight is. As the workplace becomes more AI-driven, addressing shadow IT with strategic visibility and collaboration will be critical to building a strong, future-ready defense.