
Personal AI Agents Could Become Digital Advocates in an AI-Dominated World

 

As generative AI agents proliferate, a new concept is gaining traction: AI entities that act as loyal digital advocates, protecting individuals from overwhelming technological complexity, misinformation, and data exploitation. Experts suggest these personal AI companions could function similarly to service animals—trained not just to assist, but to guard user interests in an AI-saturated world. From scam detection to helping navigate automated marketing and opaque algorithms, these agents would act as user-first shields. 

At a recent Imagination in Action panel, Consumer Reports’ Ginny Fahs explained, “As companies embed AI deeper into commerce, it becomes harder for consumers to identify fair offers or make informed decisions. An AI that prioritizes users’ interests can build trust and help transition toward a more transparent digital economy.” The idea is rooted in giving users agency and control in a system where most AI is built to serve businesses. Panelists—including experts like Dazza Greenwood, Amir Sarhangi, and Tobin South—discussed how loyal, trustworthy AI advocates could reshape personal data rights, online trust, and legal accountability. 

Greenwood drew parallels to early internet-era reforms such as e-signatures and automated contracts, suggesting a similar legal evolution is needed now to govern AI agents. South added that AI agents must be “loyal by design,” ensuring they act within legal frameworks and always prioritize the user. Sarhangi introduced the concept of “Know Your Agent” (KYA), which promotes transparency by tracking the digital footprint of an AI. 

With unique agent wallets and activity histories, bad actors could be identified and held accountable. Fahs described a tool called “Permission Slip,” which automates user requests like data deletion. This form of AI advocacy predates current generative models but shows how user-authorized agents could manage privacy at scale. Agents could also learn from collective behavior. For instance, an AI noting a negative review of a product could share that experience with other agents, building an automated form of word-of-mouth. 
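
To make the panel's "Know Your Agent" idea more concrete, the sketch below shows, in Python, one hypothetical shape such a registry could take: each agent gets a wallet-style identifier and an append-only activity log that can later be audited. Nothing here reflects an actual implementation discussed by the panelists; the class and field names are invented for illustration.

```python
# Hypothetical sketch of a "Know Your Agent" (KYA) style registry:
# each agent gets a wallet-like identifier and an append-only activity log,
# so actions can later be traced back to a specific agent and its operator.
import hashlib
import time
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str                # wallet-style identifier for the agent
    operator: str                # who is accountable for the agent
    activity_log: list = field(default_factory=list)

class KYARegistry:
    def __init__(self):
        self._agents: dict[str, AgentRecord] = {}

    def register(self, operator: str, public_key: str) -> str:
        # Derive a stable agent ID from the operator and the agent's public key.
        agent_id = hashlib.sha256(f"{operator}:{public_key}".encode()).hexdigest()[:16]
        self._agents[agent_id] = AgentRecord(agent_id, operator)
        return agent_id

    def log_action(self, agent_id: str, action: str) -> None:
        # Unknown agents raise KeyError: unregistered actors are rejected.
        record = self._agents[agent_id]
        record.activity_log.append({"ts": time.time(), "action": action})

    def audit(self, agent_id: str) -> list:
        return list(self._agents[agent_id].activity_log)

# Example: register an agent, log an action, and audit its footprint.
registry = KYARegistry()
aid = registry.register(operator="alice@example.com", public_key="pk_demo_123")
registry.log_action(aid, "requested data deletion from retailer.example")
print(registry.audit(aid))
```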

This idea of agents pooling experiences, said panel moderator Sandy Pentland, mirrors how Consumer Reports aggregates user feedback to identify reliable products. South emphasized that cryptographic tools could ensure safe data-sharing without blindly trusting tech giants. He also referenced NANDA, a decentralized protocol from MIT that aims to enable trustworthy AI infrastructure. Still, implementing AI agents raises usability questions. “We want agents to understand nuanced permissions without constantly asking users to approve every action,” Fahs said. 

Getting this right will be crucial to user adoption. Pentland noted that current AI models struggle to align with individual preferences. “An effective agent must represent you—not a demographic group, but your unique values,” he said. Greenwood believes that’s now possible: “We finally have the tools to build AI agents with fiduciary responsibilities.” In closing, South stressed that the real bottleneck isn’t AI capability but structuring and contextualizing information properly. “If you want AI to truly act on your behalf, we must design systems that help it understand you.” 

As AI becomes deeply embedded in daily life, building personalized, privacy-conscious agents may be the key to ensuring technology serves people—not the other way around.

WhatsApp Under Fire for AI Update Disrupting Group Communication


WhatsApp's new AI capability, Message Summaries, aims to change how users keep up with their conversations. Built on Meta AI, it provides concise summaries of unread messages in both individual and group chats. 

The tool is designed to help users stay informed in increasingly active chats by automatically compiling key points and contextual highlights, letting them catch up in a few taps instead of scrolling through lengthy message histories. The company says all summaries are generated privately, preserving confidentiality while keeping the feature simple to use. 

With this rollout, WhatsApp signals its intention to integrate AI-driven features into the app to improve convenience and reshape communication habits for its global community, a move that has sparked both excitement and controversy. Announced last month, Message Summaries has now moved from pilot testing to a full-scale rollout. 

After refining the tool and collecting user feedback, WhatsApp now considers it stable and has formally launched it for wider use. In the initial phase, the feature is available only to US users and only in English, reflecting a cautious approach to deploying AI at scale. 

The platform has announced plans to extend availability to more regions and add multilingual support in the future. The phased rollout underlines the company's focus on making the technology reliable and user-friendly before extending it to its vast global market. 

WhatsApp intends to use the controlled release to gather insights into how users interact with AI-generated conversation summaries and to fine-tune the experience before expanding internationally. Meanwhile, the lack of an option to disable or hide the AI integration has generated significant discontent among users. 

Meta has so far offered no explanation for the lack of an opt-out mechanism or why users were not given the chance to decline the AI integration. For many, this lack of transparency is as concerning as the technology itself, raising questions about how much control people have over their personal communications. In response, some users have tried to circumvent the chatbot by switching to a WhatsApp Business account. 

Several users report that this workaround removed the Meta AI functionality, but others note that the characteristic blue circle indicating Meta AI's presence still appeared, adding to the dissatisfaction and uncertainty. 

Meta has not confirmed whether the business-oriented version of WhatsApp will remain exempt from AI integration in the long term. The rollout also reflects Meta's broader goal of embedding generative AI across its ecosystem, including Facebook and Instagram. 

Towards the end of 2024, Meta AI was first introduced in Facebook Messenger in the United Kingdom, followed by a gradual extension into WhatsApp as part of a unified vision to revolutionise digital interactions. Despite these ambitions, many users have expressed frustration with the feature, finding it intrusive and ultimately of little use. 

The chatbot often activates when people are simply searching for past conversations or looking up contacts, getting in the way rather than streamlining the experience. Initial feedback suggests AI-generated responses are frequently perceived as superficial, repetitive, or irrelevant to the conversation's context, leaving users divided on their value.

Unlike standalone platforms such as ChatGPT and Google Gemini, which users access separately, Meta AI has been integrated directly into WhatsApp, an application people rely on daily for both personal and professional communication. Because the feature was added without explicit consent, and given the doubts about its usefulness, many users are beginning to question whether such pervasive AI assistance is necessary or desirable. 

There is also a growing chorus of criticism about AI's inherent limitations in reliably interpreting human communication. Many users are sceptical that AI can accurately condense even a single message in an active group chat, let alone synthesise hundreds of exchanges. WhatsApp is not the first to face such challenges: Apple previously had to pull an AI-powered feature that produced unintended and sometimes inaccurate summaries. 

The problem of "hallucinations", where AI generates factually incorrect or contextually irrelevant content, remains persistent across nearly every generative platform, including widely used tools like ChatGPT. AI also continues to struggle with subtleties such as humour, sarcasm, and cultural nuance, aspects of natural conversation that are central to establishing a connection. 

When the AI is not equipped to recognise offhand or joking remarks, it can easily misinterpret them, producing summaries that are alarmist, distorted, or simply inaccurate compared with how a human recipient would read the same messages. This risk of misrepresentation is making users who rely on WhatsApp for authentic, nuanced communication with colleagues, friends, and family increasingly apprehensive. 

Beyond the technical limitations, a philosophical objection has been raised: substituting machine-generated recaps for real engagement diminishes the act of participating in a conversation. Many feel that the point of group chats lies precisely in reading and responding to the genuine voices of others, even when that means scrolling through a backlog of messages. 

At the same time, most would agree that wading through a large backlog of messages is exhausting. Critics worry that Message Summaries not only threatens clear communication but also undermines the sense of personal connection that draws people into these digital communities in the first place. 

To protect user privacy, WhatsApp built Message Summaries on a new framework called Private Processing. The approach is designed so that neither Meta nor WhatsApp can access the contents of users' conversations or the summaries the AI produces. 

Rather than exposing summaries to external servers, the platform generates them within this protected environment, reinforcing its commitment to privacy. Each summary, presented in a clear bullet-point format, is labelled "visible only to you", underlining the privacy-centric design philosophy behind the feature. 

Message Summaries has proven especially useful in group chats, where the volume of unread messages can be overwhelming. Lengthy exchanges are distilled into concise snapshots, allowing users to stay informed without reading or scrolling through every individual message. 

To address privacy concerns, the feature is disabled by default and must be activated manually. Once enabled, eligible chats display a discreet icon signalling that a summary is available, without announcing it to other participants. At the core of the system is Meta's confidential computing infrastructure, comparable in principle to Apple's private cloud computing architecture. 

Private Processing is built on a Trusted Execution Environment (TEE), ensuring that confidential information is handled securely, with robust protection against tampering and clear mechanisms for transparency.

The architecture is designed to shut down automatically, or to produce verifiable evidence of the intrusion, whenever an attempt is made to compromise its security guarantees. Meta has also designed the framework to be stateless, forward secure, and resistant to targeted attacks, and it supports independent third-party audits so that the company's data-protection claims can be verified. 
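
As a purely conceptual illustration of the attestation idea behind such confidential-computing designs (and not Meta's actual Private Processing protocol), the Python sketch below shows how a client might refuse to send data unless a TEE proves it is running an audited, non-debug build. All names, fields, and measurement values are hypothetical.

```python
# Conceptual sketch only: illustrates the *idea* of client-side remote
# attestation before sending data to a TEE. This is NOT Meta's actual
# Private Processing protocol; names and fields are hypothetical.
from dataclasses import dataclass

# Hypothetical allow-list of TEE code measurements the client trusts
# (e.g. hashes of audited enclave builds).
TRUSTED_MEASUREMENTS = {
    "9f2c...audited-build-1",
    "41ab...audited-build-2",
}

@dataclass
class AttestationReport:
    measurement: str       # hash of the code running inside the TEE
    signature_valid: bool  # result of verifying the hardware vendor's signature
    is_debug: bool         # debug enclaves must never receive user data

def client_accepts(report: AttestationReport) -> bool:
    """Accept only if the enclave proves it runs audited, production code."""
    return (
        report.signature_valid
        and not report.is_debug
        and report.measurement in TRUSTED_MEASUREMENTS
    )

def maybe_send_messages(report: AttestationReport, messages: list[str]) -> None:
    if not client_accepts(report):
        # Fail closed: no valid attestation, no data.
        raise RuntimeError("TEE attestation rejected; refusing to send messages")
    print(f"Sending {len(messages)} messages to the attested enclave for summarisation")

# Example: a valid report from an audited build is accepted.
maybe_send_messages(
    AttestationReport("9f2c...audited-build-1", signature_valid=True, is_debug=False),
    ["msg 1", "msg 2"],
)
```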

Complementing these technical safeguards, advanced chat privacy settings let users choose which conversations are eligible for AI-generated summaries, offering granular control over the feature. When a user enables summaries in a chat, no notification is sent to other participants, preserving the user's discretion.

Message Summaries is currently being introduced gradually to users in the United States and is available only in English for now. Meta has confirmed that the feature will soon expand to additional regions and languages as part of its broader effort to integrate AI across its services. 

As WhatsApp pushes AI capabilities deeper into everyday communication, Message Summaries marks a pivotal moment in the evolving relationship between technology and human interaction. 

Although the company has repeatedly stressed its commitment to privacy, transparency, and user autonomy, the polarised response to the feature highlights the challenges of introducing AI into spaces where trust, nuance, and human connection are paramount. 

It is a timely reminder, for individuals and organisations alike, that the growth of convenience-driven automation affects the genuine social fabric of digital communities and deserves careful assessment. 

As platforms evolve, stakeholders would do well to stay alert to changes in platform policies, evaluate whether such tools align with the communication values they hold dear, and offer structured feedback so these technologies can mature responsibly. As artificial intelligence continues to redefine messaging, users will need to remain open to innovation while thinking critically about the long-term implications for privacy, comprehension, and the very nature of meaningful dialogue.

Security Teams Struggle to Keep Up With Generative AI Threats, Cobalt Warns

 

A growing number of cybersecurity professionals are expressing concern that generative AI is evolving too rapidly for their teams to manage. 

According to new research by penetration testing company Cobalt, over one-third of security leaders and practitioners admit that the pace of genAI development has outstripped their ability to respond. Nearly half of those surveyed (48%) said they wish they could pause and reassess their defense strategies in light of these emerging threats—though they acknowledge that such a break isn’t realistic. 

In fact, 72% of respondents listed generative AI-related attacks as their top IT security risk. Despite this, one in three organizations still isn’t conducting regular security evaluations of their large language model (LLM) deployments, including basic penetration testing. 

Cobalt CTO Gunter Ollmann warned that the security landscape is shifting, and the foundational controls many organizations rely on are quickly becoming outdated. “Our research shows that while generative AI is transforming how businesses operate, it’s also exposing them to risks they’re not prepared for,” said Ollmann. 
“Security frameworks must evolve or risk falling behind.” The study revealed a divide between leadership and practitioners. Executives such as CISOs and VPs are more concerned about long-term threats like adversarial AI attacks, with 76% listing them as a top issue. Meanwhile, 45% of practitioners are more focused on immediate operational challenges such as model inaccuracies, compared to 36% of executives. 

A majority of leaders—52%—are open to rethinking their cybersecurity strategies to address genAI threats. Among practitioners, only 43% shared this view. The top genAI-related concerns identified by the survey included the risk of sensitive information disclosure (46%), model poisoning or theft (42%), data inaccuracies (40%), and leakage of training data (37%). Around half of respondents also expressed a desire for more transparency from software vendors about how vulnerabilities are identified and patched, highlighting a widening trust gap in the AI supply chain. 

Cobalt’s internal pentest data shows a worrying trend: while 69% of high-risk vulnerabilities are typically fixed across all test types, only 21% of critical flaws found in LLM tests are resolved. This is especially alarming considering that nearly one-third of LLM vulnerabilities are classified as serious. Interestingly, the average time to resolve these LLM-specific vulnerabilities is just 19 days—the fastest across all categories. 

However, researchers noted this may be because organizations prioritize easier, low-effort fixes rather than tackling more complex threats embedded in foundational AI models. Ollmann compared the current scenario to the early days of cloud adoption, where innovation outpaced security readiness. He emphasized that traditional controls aren’t enough in the age of LLMs. “Security teams can’t afford to be reactive anymore,” he concluded. “They must move toward continuous, programmatic AI testing if they want to keep up.”

Meta.ai Privacy Lapse Exposes User Chats in Public Feed

 

Meta’s new AI-driven chatbot platform, Meta.ai, launched recently with much fanfare, offering features like text and voice chats, image generation, and video restyling. Designed to rival platforms like ChatGPT, the app also includes a Discover feed, a space intended to showcase public content generated by users. However, what Meta failed to communicate effectively was that many users were unintentionally sharing their private conversations in this feed—sometimes with extremely sensitive content attached. 

In May, journalists flagged the issue when they discovered public chats revealing deeply personal user concerns—ranging from financial issues and health anxieties to legal troubles. These weren’t obscure posts either; they appeared in a publicly accessible area of the app, often containing identifying information. Conversations included users seeking help with medical diagnoses, children talking about personal experiences, and even incarcerated individuals discussing legal strategies—none of whom appeared to realize their data was visible to others. 

Despite some recent tweaks to the app’s sharing settings, disturbing content still appears on the Discover feed. Users unknowingly uploaded images and video clips, sometimes including faces, alongside alarming or bizarre prompts. One especially troubling instance featured a photo of a child at school, accompanied by a prompt instructing the AI to “make him cry.” Such posts reflect not only poor design choices but also raise ethical questions about the purpose and moderation of the Discover feed itself. 

The issue evokes memories of other infamous data exposure incidents, such as AOL’s release of anonymized user searches in 2006, which provided unsettling insight into private thoughts and behaviors. While social media platforms are inherently public, users generally view AI chat interactions as private, akin to using a search engine. Meta.ai blurred that boundary—perhaps unintentionally, but with serious consequences. Many users turned to Meta.ai seeking support, companionship, or simple productivity help. Some asked for help with job listings or obituary writing, while others vented emotional distress or sought comfort during panic attacks. 

In some cases, users left chats expressing gratitude—believing the bot had helped. But a growing number of conversations end in frustration or embarrassment when users realize the bot cannot deliver on its promises or that their content was shared publicly. These incidents highlight a disconnect between how users engage with AI tools and how companies design them. Meta’s ambition to merge AI capabilities with social interaction seems to have ignored the emotional and psychological expectations users bring to private-sounding features. 

For those using Meta.ai as a digital confidant, the lack of clarity around privacy settings has turned an experiment in convenience into a public misstep. As AI systems become more integrated into daily life, companies must rethink how they handle user data—especially when users assume privacy. Meta.ai’s rocky launch serves as a cautionary tale about transparency, trust, and design in the age of generative AI.

How Generative AI Is Accelerating the Rise of Shadow IT and Cybersecurity Gaps

 

The emergence of generative AI tools in the workplace has reignited concerns about shadow IT—technology solutions adopted by employees without the knowledge or approval of the IT department. While shadow IT has always posed security challenges, the rapid proliferation of AI tools is intensifying the issue, creating new cybersecurity risks for organizations already struggling with visibility and control. 

Employees now have access to a range of AI-powered tools that can streamline daily tasks, from summarizing text to generating code. However, many of these applications operate outside approved systems and can send sensitive corporate data to third-party cloud environments. This introduces serious privacy concerns and increases the risk of data leakage. Unlike legacy software, generative AI solutions can be downloaded and used with minimal friction, making them harder for IT teams to detect and manage. 

The 2025 State of Cybersecurity Report by Ivanti reveals a critical gap between awareness and preparedness. More than half of IT and security leaders acknowledge the threat posed by software and API vulnerabilities. Yet only about one-third feel fully equipped to deal with these risks. The disparity highlights the disconnect between theory and practice, especially as data visibility becomes increasingly fragmented. 

A significant portion of this problem stems from the lack of integrated data systems. Nearly half of organizations admit they do not have enough insight into the software operating on their networks, hindering informed decision-making. When IT and security departments work in isolation—something 55% of organizations still report—it opens the door for unmonitored tools to slip through unnoticed. 

Generative AI has only added to the complexity. Because these tools operate quickly and independently, they can infiltrate enterprise environments before any formal review process occurs. The result is a patchwork of unverified software that can compromise an organization’s overall security posture. 

Rather than attempting to ban shadow IT altogether—a move unlikely to succeed—companies should focus on improving data visibility and fostering collaboration between departments. Unified platforms that connect IT and security functions are essential. With a shared understanding of tools in use, teams can assess risks and apply controls without stifling innovation. 
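
As a rough illustration of that visibility step, the sketch below compares an observed software inventory against an approved list and flags likely generative AI tools for review. The tool names and keywords are invented examples, not a recommended rule set.

```python
# Simple sketch of the visibility step described above: compare observed
# applications (e.g. from endpoint or network inventory) against an approved
# list and flag likely shadow AI tools for review. Names are examples only.
APPROVED = {"slack", "office365", "github-copilot"}
GENAI_KEYWORDS = ("gpt", "copilot", "gemini", "claude", "llama")

def triage_inventory(observed: dict[str, int]) -> list[str]:
    """observed maps app name -> number of hosts where it was seen."""
    findings = []
    for app, hosts in sorted(observed.items(), key=lambda kv: -kv[1]):
        if app in APPROVED:
            continue
        likely_ai = any(k in app for k in GENAI_KEYWORDS)
        label = "unapproved genAI tool" if likely_ai else "unapproved software"
        findings.append(f"{app}: {label}, seen on {hosts} hosts -> review")
    return findings

print("\n".join(triage_inventory({
    "office365": 950, "chatgpt-desktop": 120, "random-pdf-tool": 8,
})))
```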

Creating a culture of transparency is equally important. Employees should feel comfortable voicing their tech needs instead of finding workarounds. Training programs can help users understand the risks of generative AI and encourage safer choices. 

Ultimately, AI is not the root of the problem—lack of oversight is. As the workplace becomes more AI-driven, addressing shadow IT with strategic visibility and collaboration will be critical to building a strong, future-ready defense.

AI Skills Shortage Deepens as Enterprise Demand Grows Faster Than Talent Supply

 

The shortage of skilled professionals in artificial intelligence is becoming a major concern for enterprises, as organizations race to adopt the technology without a matching increase in qualified talent. The latest Harvey Nash Digital Leadership report, released by Nash Squared in May, highlights a sharp rise in demand for AI skills across industries—faster than any previous tech trend tracked in the last 16 years. 

Based on responses from over 2,000 tech executives, the report found that more than half of IT leaders now cite a lack of AI expertise as a key barrier to progress. This marks a steep climb from just 28% a year ago. In fact, AI has jumped from the sixth most difficult skill to hire for to the number one spot in just over a year. Interest in AI adoption continues to soar, with 90% of surveyed organizations either investing in or piloting AI solutions—up significantly from 59% in 2023. Despite this enthusiasm, a majority of companies have not yet seen measurable returns from their AI projects. Many remain stuck in early testing phases, unable to deploy solutions at scale. 

Numerous challenges continue to slow enterprise AI deployment. Besides the scarcity of skilled professionals, companies face obstacles such as inadequate data infrastructure and tight budgets. Without the necessary expertise, organizations struggle to transition from proof-of-concept to full integration. Bev White, CEO of Nash Squared, emphasized that enterprises are navigating uncharted territory. “There’s no manual for scaling AI,” she explained. “Organizations must combine various strategies—formal education, upskilling of tech and non-tech teams, and hands-on experimentation—to build their AI capabilities.” She also stressed the need for operational models that naturally embed AI into daily workflows. 

The report’s findings show that the surge in AI skill demand has outpaced any other technology shift in recent memory. Sectors like manufacturing, education, pharmaceuticals, logistics, and professional services are all feeling the pressure to hire faster than the talent pool allows. Supporting this trend, job market data shows explosive growth in demand for AI roles. 

According to Indeed, postings for generative AI positions nearly tripled year-over-year as of January 2025. Unless companies prioritize upskilling and talent development, the widening AI skills gap could undermine the long-term success of enterprise AI strategies. For now, the challenge of turning AI interest into practical results remains a steep climb.

AI Agents Raise Cybersecurity Concerns Amid Rapid Enterprise Adoption

 

A growing number of organizations are adopting autonomous AI agents despite widespread concerns about the cybersecurity risks they pose. According to a new global report released by identity security firm SailPoint, this accelerated deployment is happening in a largely unregulated environment. The findings are based on a survey of more than 350 IT professionals, revealing that 84% of respondents said their organizations already use AI agents internally. 

However, only 44% confirmed the presence of any formal policies to regulate the agents’ actions. AI agents differ from traditional chatbots in that they are designed to independently plan and execute tasks without constant human direction. Since the emergence of generative AI tools like ChatGPT in late 2022, major tech companies have been racing to launch their own agents. Many smaller businesses have followed suit, motivated by the desire for operational efficiency and the pressure to adopt what is widely viewed as a transformative technology.  

Despite this enthusiasm, 96% of survey participants acknowledged that these autonomous systems pose security risks, while 98% stated their organizations plan to expand AI agent usage within the next year. The report warns that these agents often have extensive access to sensitive systems and information, making them a new and significant attack surface for cyber threats. Chandra Gnanasambandam, SailPoint’s Executive Vice President of Product and Chief Technology Officer, emphasized the risks associated with such broad access. He explained that these systems are transforming workflows but typically operate with minimal oversight, which introduces serious vulnerabilities. 

Further compounding the issue is the inconsistent implementation of governance controls. Although 92% of those surveyed agree that AI agents should be governed similarly to human employees, 80% reported incidents where agents performed unauthorized actions or accessed restricted data. These incidents underscore the dangers of deploying autonomous systems without robust monitoring or access controls. 

Gnanasambandam suggests adopting an identity-first approach to agent management. He recommends applying the same security protocols used for human users, including real-time access permissions, least privilege principles, and comprehensive activity tracking. Without such measures, organizations risk exposing themselves to breaches or data misuse due to the very tools designed to streamline operations. 
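
A minimal sketch of that identity-first idea, assuming a simple grants table rather than any particular vendor's product: the agent is treated as its own identity, every action is checked against explicit least-privilege grants, and every decision is written to an audit log.

```python
# Minimal sketch (not SailPoint's product) of treating an AI agent as a
# first-class identity: least-privilege permission checks plus an audit trail.
import time

# Hypothetical permission grants: agent identity -> allowed (action, resource) pairs.
GRANTS = {
    "agent:invoice-bot": {("read", "erp:invoices"), ("write", "erp:payment-drafts")},
}

AUDIT_LOG = []

def authorize(agent_id: str, action: str, resource: str) -> bool:
    """Allow an agent action only if an explicit grant exists (least privilege)."""
    allowed = (action, resource) in GRANTS.get(agent_id, set())
    AUDIT_LOG.append({
        "ts": time.time(),
        "agent": agent_id,
        "action": action,
        "resource": resource,
        "allowed": allowed,
    })
    return allowed

# Example: the payment-draft write is allowed; the HR read is denied and logged.
print(authorize("agent:invoice-bot", "write", "erp:payment-drafts"))  # True
print(authorize("agent:invoice-bot", "read", "hr:salaries"))          # False
```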

As AI agents become more deeply embedded in business processes, experts caution that failing to implement adequate oversight could create long-term vulnerabilities. The report serves as a timely reminder that innovation must be accompanied by strong governance to ensure cybersecurity is not compromised in the pursuit of automation.

Foxconn’s Chairman Warns AI and Robotics Will Replace Low-End Manufacturing Jobs

 

Foxconn chairman Young Liu has issued a stark warning about the future of low-end manufacturing jobs, suggesting that generative AI and robotics will eventually eliminate many of these roles. Speaking at the Computex conference in Taiwan, Liu emphasized that this transformation is not just technological but geopolitical, urging world leaders to prepare for the sweeping changes ahead. 

According to Liu, wealthy nations have historically relied on two methods to keep manufacturing costs down: encouraging immigration to bring in lower-wage workers and outsourcing production to countries with lower GDP. However, he argued that both strategies are reaching their limits. With fewer low-GDP countries to outsource to and increasing resistance to immigration in many parts of the world, Liu believes that generative AI and robotics will be the next major solution to bridge this gap. He cited Foxconn’s own experience as proof of this shift. 

After integrating generative AI into its production processes, the company discovered that AI alone could handle up to 80% of the work involved in setting up new manufacturing runs—often faster than human workers. While human input is still required to complete the job, the combination of AI and skilled labor significantly improves efficiency. As a result, Foxconn’s human experts are now able to focus on more complex challenges rather than repetitive tasks. Liu also announced the development of a proprietary AI model named “FoxBrain,” tailored specifically for manufacturing. 

Built using Meta’s Llama 3 and 4 models and trained on Foxconn’s internal data, this tool aims to automate workflows and enhance factory operations. The company plans to open-source FoxBrain and deploy it across all its facilities, continuously improving the model with real-time performance feedback. Another innovation Liu highlighted was Foxconn’s use of Nvidia’s Omniverse to create digital twins of future factories. These AI-operated virtual factories are used to test and optimize layouts before construction begins, drastically improving design efficiency and effectiveness. 

In addition to manufacturing, Foxconn is eyeing the electric vehicle sector. Liu revealed the company is working on a reference design for EVs, a model that partners can customize—much like Foxconn’s strategy with PC manufacturers. He claimed this approach could reduce product development workloads by up to 80%, enhancing time-to-market and cutting costs. 

Liu closed his keynote by encouraging industry leaders to monitor these developments closely, as the rise of AI-driven automation could reshape the global labor landscape faster than anticipated.

Google’s AI Virtual Try-On Tool Redefines Online Shopping Experience

 

At the latest Google I/O developers conference, the tech giant introduced an unexpected innovation in online shopping: an AI-powered virtual try-on tool. This new feature lets users upload a photo of themselves and see how clothing items would appear on their body. By merging the image of the user with that of the garment, Google’s custom-built image generation model creates a realistic simulation of the outfit on the individual. 

While the concept seems simple, the underlying AI technology is advanced. In a live demonstration, the tool appeared to function seamlessly. The feature is now available in the United States and is part of Google’s broader efforts to enhance the online shopping experience through AI integration. It’s particularly useful for people who often struggle to visualize how clothing will look on their body compared to how it appears on models.  

However, the rollout of this tool raised valid questions about user privacy. AI systems that involve personal images often come with concerns over data usage. Addressing these worries, a Google representative clarified that uploaded photos are used exclusively for the try-on experience. The images are not stored for AI training, are not shared with other services or third parties, and users can delete or update their photos at any time. This level of privacy protection is notable in an industry where user data is typically leveraged to improve algorithms. 

Given Google’s ongoing development of AI-driven tools, some expected the company to utilize this photo data for model training. Instead, the commitment to user privacy in this case suggests a more responsible approach. Virtual fitting technology isn’t entirely new. Retail and tech companies have been exploring similar ideas for years. Amazon, for instance, has experimented with AI tools in its fashion division. Google, however, claims its new tool offers a more in-depth understanding of diverse body types. 

During the presentation, Vidhya Srinivasan, Google’s VP of ads and commerce, emphasized the system’s goal of accommodating different shapes and sizes more effectively. Past AI image tools have faced criticism for lacking diversity and realism. It’s unclear whether Google’s new tool will be more reliable across the board. Nevertheless, their assurance that user images won’t be used to train models helps build trust. 

Although the virtual preview may not always perfectly reflect real-life appearances, this development points to a promising direction for AI in retail. If successful, it could improve customer satisfaction, reduce returns, and make online shopping a more personalized experience.

Klarna Scales Back AI-Led Customer Service Strategy, Resumes Human Support Hiring

 

Klarna Group Plc, the Sweden-based fintech company, is reassessing its heavy reliance on artificial intelligence (AI) in customer service after admitting the approach led to a decline in service quality. CEO and co-founder Sebastian Siemiatkowski acknowledged that cost-cutting took precedence over customer experience during a company-wide AI push that replaced hundreds of human agents. 

Speaking at Klarna’s Stockholm headquarters, Siemiatkowski conceded, “As cost unfortunately seems to have been a too predominant evaluation factor when organizing this, what you end up having is lower quality.” The company had frozen hiring for over a year to scale its AI capabilities but now plans to recalibrate its customer service model. 

In a strategic shift, Klarna is restarting recruitment for customer support roles — a rare move that reflects the company’s need to restore the quality of human interaction. A new pilot program is underway that allows remote workers — including students and individuals in rural areas — to provide customer service on-demand in an “Uber-like setup.” Currently, two agents are part of the trial. “We also know there are tons of Klarna users that are very passionate about our company and would enjoy working for us,” Siemiatkowski said. 

He stressed the importance of giving customers the option to speak to a human, citing both brand and operational needs. Despite dialing back on AI-led customer support, Klarna is not walking away from AI altogether. The company is continuing to rebuild its tech stack with AI at the core, aiming to improve operational efficiency. It is also developing a digital financial assistant designed to help users secure better interest rates and insurance options. 

Klarna maintains a close relationship with OpenAI, a collaboration that began in 2023. “We wanted to be [OpenAI’s] favorite guinea pig,” Siemiatkowski noted, reinforcing the company’s long-term commitment to leveraging AI. Klarna’s course correction follows a turbulent financial period. After peaking at a $45.6 billion valuation in 2021, the company saw its value drop to $6.7 billion in 2022. It has since rebounded and aims to raise $1 billion via an IPO, targeting a valuation exceeding $15 billion — though IPO plans have been paused due to market volatility. 

The company’s 2024 announcement that AI was handling the workload of 700 human agents disrupted the call center industry, leading to a sharp drop in shares of Teleperformance SE, a major outsourcing firm. While Klarna is resuming hiring, its overall workforce is expected to shrink. “In a year’s time, we’ll probably be down to about 2,500 people from 3,000,” Siemiatkowski said, noting that attrition and further AI improvements will likely drive continued headcount reductions.

Quantum Computing Could Deliver Business Value by 2028 with 100 Logical Qubits

 

Quantum computing may soon move from theory to commercial reality, as experts predict that machines with 100 logical qubits could start delivering tangible business value by 2028—particularly in areas like material science. Speaking at the Commercialising Quantum Computing conference in London, industry leaders suggested that such systems could outperform even high-performance computing in solving complex problems. 

Mark Jackson, senior quantum evangelist at Quantinuum, highlighted that quantum computing shows great promise in generative AI applications, especially machine learning. Unlike traditional systems that aim for precise answers, quantum computers excel at identifying patterns in large datasets—making them highly effective for cybersecurity and fraud detection. “Quantum computers can detect patterns that would be missed by other conventional computing methods,” Jackson said.  

Financial services firms are also beginning to realize the potential of quantum computing. Phil Intallura, global head of quantum technologies at HSBC, said quantum technologies can help create more optimized financial models. “If you can show a solution using quantum technology that outperforms supercomputers, decision-makers are more likely to invest,” he noted. HSBC is already exploring quantum random number generation for use in simulations and risk modeling. 
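
To illustrate where a quantum random number generator could slot into such work, here is a minimal Monte Carlo value-at-risk sketch in Python with a pluggable randomness source. The model and parameters are arbitrary illustrations, not HSBC's methodology.

```python
# Minimal Monte Carlo value-at-risk sketch with a pluggable randomness source.
# Illustrative only: the Gaussian return model and parameters are arbitrary.
import random

def simulate_var(rng: random.Random, mu: float = 0.0005, sigma: float = 0.02,
                 horizon_days: int = 10, paths: int = 20_000,
                 confidence: float = 0.99) -> float:
    """Estimate value-at-risk from simulated Gaussian daily returns."""
    pnl = sorted(sum(rng.gauss(mu, sigma) for _ in range(horizon_days))
                 for _ in range(paths))
    # VaR is the loss at the (1 - confidence) quantile of the P&L distribution.
    return -pnl[int((1 - confidence) * paths)]

# Any source of high-quality randomness exposing the same interface
# (including a QRNG-backed generator) could replace this seeded one.
print(f"99% 10-day VaR: {simulate_var(random.Random(42)):.4f}")
```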

In a recent collaborative study published in Nature, researchers from JPMorgan Chase, Quantinuum, Argonne and Oak Ridge national labs, and the University of Texas showcased Random Circuit Sampling (RCS) as a certified-randomness-expansion method, a task only achievable on a quantum computer. This work underscores how randomness from quantum systems can enhance classical financial simulations. Quantum cryptography also featured prominently at the conference. Regulatory pressure is mounting on banks to replace RSA-2048 encryption with quantum-safe standards by 2035, following recommendations from the U.S. National Institute of Standards and Technology. 

Santander’s Mark Carney emphasized the need for both software and hardware support to enable fast and secure post-quantum cryptography (PQC) in customer-facing applications. Gerard Mullery, interim CEO at Oxford Quantum Circuits, stressed the importance of integrating quantum computing into traditional enterprise workflows. As AI increasingly automates business processes, quantum platforms will need to support seamless orchestration within these ecosystems. 

While only a few companies have quantum machines with logical qubits today, the pace of development suggests that quantum computing could be transformative within the next few years. With increasing investment and maturing use cases, businesses are being urged to prepare for a hybrid future where classical and quantum systems work together to solve previously intractable problems.

AI Can Create Deepfake Videos of Children Using Just 20 Images, Expert Warns

 

Parents are being urged to rethink how much they share about their children online, as experts warn that criminals can now generate realistic deepfake videos using as few as 20 images. This alarming development highlights the growing risks of digital identity theft and fraud facing children due to oversharing on social media platforms.  

According to Professor Carsten Maple of the University of Warwick and the Alan Turing Institute, modern AI tools can construct highly realistic digital profiles, including 30-second deepfake videos, from a small number of publicly available photos. These images can be used not only by criminal networks to commit identity theft, open fraudulent accounts, or claim government benefits in a child’s name but also by large tech companies to train their algorithms, often without the user’s full awareness or consent. 

New research conducted by Perspectus Global and commissioned by Proton surveyed 2,000 UK parents of children under 16. The findings show that on average, parents upload 63 images to social media every month, with 59% of those being family-related. A significant proportion of parents—21%—share these photos multiple times a week, while 38% post several times a month. These frequent posts not only showcase images but also often contain sensitive data like location tags and key life events, making it easier for bad actors to build a detailed online profile of the child. Professor Maple warned that such oversharing can lead to long-term consequences. 

Aside from potential identity theft, children could face mental distress or reputational harm later in life from having a permanent digital footprint that they never consented to create. The problem is exacerbated by the fact that many parents are unaware of how their data is being used. For instance, 48% of survey respondents did not realize that cloud storage providers can access the data stored on their platforms. In fact, more than half of the surveyed parents (56%) store family images on cloud services such as Google Drive or Apple iCloud. On average, each parent had 185 photos of their children stored digitally—images that may be accessed or analyzed under vaguely worded terms and conditions.  

Recent changes to Instagram’s user agreement, which now allows the platform to use uploaded images to train its AI systems, have further heightened privacy concerns. Additionally, experts have warned about the use of personal images by other Big Tech firms to enhance facial recognition algorithms and advertising models. To protect their children, parents are advised to implement a range of safety measures. These include using secure and private cloud storage, adjusting privacy settings on social platforms, avoiding public Wi-Fi when sharing or uploading data, and staying vigilant against phishing scams. 

Furthermore, experts recommend setting boundaries with children regarding online activity, using parental controls, antivirus tools, and search filters, and modeling responsible digital behavior. The growing accessibility of AI-based image manipulation tools underscores the urgent need for greater awareness and proactive digital hygiene. What may seem like harmless sharing today could expose children to significant risks in the future.

AI in Cybersecurity Market Sees Rapid Growth as Network Security Leads 2024 Expansion

 

The integration of artificial intelligence into cybersecurity solutions has accelerated dramatically, driving the global market to an estimated value of $32.5 billion in 2024. This surge—an annual growth rate of 23%—reflects organizations’ urgent need to defend against increasingly sophisticated cyber threats. Traditional, signature-based defenses are no longer sufficient; today’s adversaries employ polymorphic malware, fileless attacks, and automated intrusion tools that can evade static rule sets. AI’s ability to learn patterns, detect anomalies in real time, and respond autonomously has become indispensable. 

Among AI-driven cybersecurity segments, network security saw the most significant expansion last year, accounting for nearly 40% of total AI security revenues. AI-enhanced intrusion prevention systems and next-generation firewalls leverage machine learning models to inspect vast streams of traffic, distinguishing malicious behavior from legitimate activity. These solutions can automatically quarantine suspicious connections, adapt to novel malware variants, and provide security teams with prioritized alerts—reducing mean time to detection from days to mere minutes. As more enterprises adopt zero-trust architectures, AI’s role in continuously verifying device and user behavior on the network has become a cornerstone of modern defensive strategies. 
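
As a toy illustration of that approach (not a production network-detection system), the sketch below trains an Isolation Forest on features of "normal" flows and flags a port-scan-like connection as anomalous. The features and values are invented for the example.

```python
# Toy illustration of ML-based traffic anomaly detection: train an Isolation
# Forest on features of "normal" flows, then flag connections whose behaviour
# deviates from that baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per flow: [bytes sent, packets, distinct destination ports]
normal_flows = np.column_stack([
    rng.normal(50_000, 10_000, 5_000),   # typical byte volume
    rng.normal(400, 80, 5_000),          # typical packet count
    rng.integers(1, 4, 5_000),           # few destination ports
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

# A flow touching hundreds of ports with little data looks like a port scan.
suspect = np.array([[2_000, 600, 300]])
print("anomaly" if model.predict(suspect)[0] == -1 else "normal")
```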

Endpoint security followed closely, representing roughly 25% of the AI cybersecurity market in 2024. AI-powered endpoint detection and response (EDR) platforms monitor processes, memory activity, and system calls on workstations and servers. By correlating telemetry across thousands of devices, these platforms can identify subtle indicators of compromise—such as unusual parent‑child process relationships or command‑line flags—before attackers achieve persistence. The rise of remote work has only heightened demand: with employees connecting from diverse locations and personal devices, AI’s context-aware threat hunting capabilities help maintain comprehensive visibility across decentralized environments. 
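
One of those heuristics, flagging unusual parent-child process pairs, can be sketched in a few lines. Real EDR platforms correlate far richer telemetry; the pairs and indicators below are illustrative assumptions only.

```python
# Simplified sketch of one EDR heuristic: flag unusual parent-child process
# pairs (e.g. an Office app spawning a shell) and suspicious command lines.
SUSPICIOUS_PARENT_CHILD = {
    ("winword.exe", "powershell.exe"),
    ("excel.exe", "cmd.exe"),
    ("outlook.exe", "wscript.exe"),
}

def check_process_event(parent: str, child: str, cmdline: str) -> list[str]:
    findings = []
    if (parent.lower(), child.lower()) in SUSPICIOUS_PARENT_CHILD:
        findings.append(f"suspicious parent-child pair: {parent} -> {child}")
    # Encoded PowerShell command lines are another common indicator.
    if "-encodedcommand" in cmdline.lower():
        findings.append("encoded PowerShell command line")
    return findings

print(check_process_event("WINWORD.EXE", "powershell.exe",
                          "powershell -EncodedCommand JABz..."))
```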

Identity and access management (IAM) solutions incorporating AI now capture about 20% of the market. Behavioral analytics engines analyze login patterns, device characteristics, and geolocation data to detect risky authentication attempts. Rather than relying solely on static multi‑factor prompts, adaptive authentication methods adjust challenge levels based on real‑time risk scores, blocking illicit logins while minimizing friction for legitimate users. This dynamic approach addresses credential stuffing and account takeover attacks, which accounted for over 30% of cyber incidents in 2024. 
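
A hedged sketch of the adaptive-authentication idea follows: simple login signals are combined into a risk score that selects a challenge level. The weights and thresholds are invented for illustration and do not reflect any vendor's model.

```python
# Sketch of adaptive authentication: combine simple login signals into a
# risk score and choose a challenge level. Weights and thresholds are
# illustrative assumptions, not any vendor's actual model.
def login_risk_score(known_device: bool, usual_country: bool,
                     impossible_travel: bool, failed_attempts_last_hour: int) -> float:
    score = 0.0
    if not known_device:
        score += 0.3
    if not usual_country:
        score += 0.2
    if impossible_travel:
        score += 0.4
    score += min(failed_attempts_last_hour, 5) * 0.05
    return min(score, 1.0)

def challenge_for(score: float) -> str:
    if score < 0.3:
        return "allow"            # low friction for clearly legitimate logins
    if score < 0.7:
        return "step-up MFA"      # extra factor for moderately risky attempts
    return "block and alert"      # likely credential stuffing / account takeover

s = login_risk_score(known_device=False, usual_country=False,
                     impossible_travel=False, failed_attempts_last_hour=3)
print(s, challenge_for(s))
```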

Cloud security, covering roughly 15% of the AI cybersecurity spend, is another high‑growth area. With workloads distributed across public, private, and hybrid clouds, AI-driven cloud security posture management (CSPM) tools continuously scan configurations and user activities for misconfigurations, vulnerable APIs, and data‑exfiltration attempts. Automated remediation workflows can instantly correct risky settings, enforce encryption policies, and isolate compromised workloads—ensuring compliance with evolving regulations such as GDPR and CCPA. 
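
The CSPM pattern can be sketched as a small rules-over-configuration scan. The resource format and rules below are made-up examples rather than any cloud provider's actual API.

```python
# Minimal CSPM-style sketch: scan declarative resource configs for common
# misconfigurations and emit remediation suggestions. The resource format is
# a made-up example, not any specific cloud provider's API.
FINDINGS_RULES = [
    ("public storage bucket",
     lambda r: r["type"] == "bucket" and r.get("public_access", False),
     "set public_access = false"),
    ("unencrypted storage",
     lambda r: r["type"] == "bucket" and not r.get("encryption", False),
     "enable default encryption"),
    ("open admin port",
     lambda r: r["type"] == "firewall" and 22 in r.get("open_to_world", []),
     "restrict SSH to a bastion or VPN range"),
]

def scan(resources: list[dict]) -> list[str]:
    report = []
    for res in resources:
        for name, predicate, fix in FINDINGS_RULES:
            if predicate(res):
                report.append(f"{res['id']}: {name} -> {fix}")
    return report

resources = [
    {"id": "bucket-logs", "type": "bucket", "public_access": True, "encryption": False},
    {"id": "fw-edge", "type": "firewall", "open_to_world": [443, 22]},
]
print("\n".join(scan(resources)))
```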

Looking ahead, analysts predict the AI in cybersecurity market will exceed $60 billion by 2028, as vendors integrate generative AI for automated playbook creation and incident response orchestration. Organizations that invest in AI‑powered defenses will gain a competitive edge, enabling proactive threat hunting and resilient operations against a backdrop of escalating cyber‑threat complexity.

Agentic AI Is Reshaping Cybersecurity Careers, Not Replacing Them

 

Agentic AI took center stage at the 2025 RSA Conference, signaling a major shift in how cybersecurity professionals will work in the near future. No longer a futuristic concept, agentic AI systems—capable of planning, acting, and learning independently—are already being deployed to streamline incident response, bolster compliance, and scale threat detection efforts. These intelligent agents operate with minimal human input, making real-time decisions and adapting to dynamic environments. 

While the promise of increased efficiency and resilience is driving rapid adoption, cybersecurity leaders also raised serious concerns. Experts like Elastic CISO Mandy Andress called for greater transparency and stronger oversight when deploying AI agents in sensitive environments. Trust, explainability, and governance emerged as recurring themes throughout RSAC, underscoring the need to balance innovation with caution—especially as cybercriminals are also experimenting with agentic AI to enhance and scale their attacks. 

For professionals in the field, this isn’t a moment to fear job loss—it’s a chance to embrace career transformation. New roles are already emerging. AI-Augmented Cybersecurity Analysts will shift from routine alert triage to validating agent insights and making strategic decisions. Security Agent Designers will define logic workflows and trust boundaries for AI operations, blending DevSecOps with AI governance. Meanwhile, AI Threat Hunters will work to identify how attackers may exploit these new tools and develop defense mechanisms in response. 

Another critical role on the horizon is the Autonomous SOC Architect, tasked with designing next-generation security operations centers powered by human-machine collaboration. There will also be growing demand for Governance and AI Ethics Leads who ensure that decisions made by AI agents are auditable, compliant, and ethically sound. These roles reflect how cybersecurity is evolving into a hybrid discipline requiring both technical fluency and ethical oversight. 

To stay competitive in this changing landscape, professionals should build new skills. This includes prompt engineering, agent orchestration using tools like LangChain, AI risk modeling, secure deployment practices, and frameworks for explainability. Human-AI collaboration strategies will also be essential, as security teams learn to partner with autonomous systems rather than merely supervise them. As IBM’s Suja Viswesan emphasized, “Security must be baked in—not bolted on.” That principle applies not only to how organizations deploy agentic AI but also to how they train and upskill their cybersecurity workforce. 
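
As one example of the human-AI collaboration patterns mentioned above, the generic sketch below gates an agent's high-impact actions on analyst approval while letting low-risk steps run autonomously. It is a pattern sketch, not a LangChain or vendor-specific API.

```python
# Illustrative sketch of one human-AI collaboration pattern for security
# agents: the agent proposes actions, but anything outside a low-risk
# allow-list requires explicit analyst approval.
LOW_RISK_ACTIONS = {"enrich_ioc", "open_ticket", "tag_alert"}

def run_agent_step(proposed_action: str, target: str, approve) -> str:
    """Execute a proposed agent action, gating high-impact steps on a human."""
    if proposed_action in LOW_RISK_ACTIONS:
        return f"auto-executed {proposed_action} on {target}"
    if approve(proposed_action, target):
        return f"executed {proposed_action} on {target} with analyst approval"
    return f"blocked {proposed_action} on {target}"

def analyst_says_yes(action: str, target: str) -> bool:
    # Stand-in for a real approval workflow (ticket, chat prompt, etc.).
    return True

# Example: enrichment runs autonomously; host isolation waits for a human.
print(run_agent_step("enrich_ioc", "203.0.113.7", analyst_says_yes))
print(run_agent_step("isolate_host", "laptop-042", analyst_says_yes))
```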

The future of defense depends on professionals who understand how AI agents think, operate, and fail. Ultimately, agentic AI isn’t replacing people—it’s reshaping their roles. Human intuition, ethical reasoning, and strategic thinking remain vital in defending against modern cyber threats. 

As HackerOne CEO Kara Sprague noted, “Machines detect patterns. Humans understand motives.” Together, they can form a faster, smarter, and more adaptive line of defense. The cybersecurity industry isn’t just gaining new tools—it’s creating entirely new job titles and disciplines.

Agentic AI and Ransomware: How Autonomous Agents Are Reshaping Cybersecurity Threats

 

A new generation of artificial intelligence—known as agentic AI—is emerging, and it promises to fundamentally change how technology is used. Unlike generative AI, which mainly responds to prompts, agentic AI operates independently, solving complex problems and making decisions without direct human input. While this leap in autonomy brings major benefits for businesses, it also introduces serious risks, especially in the realm of cybersecurity. Security experts warn that agentic AI could significantly enhance the capabilities of ransomware groups. 

These autonomous agents can analyze, plan, and execute tasks on their own, making them ideal tools for attackers seeking to automate and scale their operations. As agentic AI evolves, it is poised to alter the cyber threat landscape, potentially enabling more efficient and harder-to-detect ransomware attacks. In contrast to the early concerns raised in 2022 with the launch of tools like ChatGPT, which mainly helped attackers draft phishing emails or debug malicious code, agentic AI can operate in real time and adapt to complex environments. This allows cybercriminals to offload traditionally manual processes like lateral movement, system enumeration, and target prioritization. 

Currently, ransomware operators often rely on Initial Access Brokers (IABs) to breach networks, then spend time manually navigating internal systems to deploy malware. This process is labor-intensive and prone to error, often leading to incomplete or failed attacks. Agentic AI, however, removes many of these limitations. It can independently identify valuable targets, choose the most effective attack vectors, and adjust to obstacles—all without human direction. These agents may also dramatically reduce the time required to carry out a successful ransomware campaign, compressing what once took weeks into mere minutes. 

In practice, agentic AI can discover weak points in a network, bypass defenses, deploy malware, and erase evidence of the intrusion—all in a single automated workflow. However, just as agentic AI poses a new challenge for cybersecurity, it also offers potential defensive benefits. Security teams could deploy autonomous AI agents to monitor networks, detect anomalies, or even create decoy systems that mislead attackers. 
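
As a minimal illustration of the decoy idea (far simpler than real deception platforms), the sketch below runs a fake service that logs any connection attempt; touching it is suspicious by definition. Host, port, and banner are arbitrary choices for the example.

```python
# Minimal decoy-service sketch: a fake "service" that accepts connections,
# logs the source, and never does real work. It blocks while waiting for
# connections and exits after a few events.
import socket
import datetime

def run_decoy(host: str = "127.0.0.1", port: int = 2222, max_events: int = 3) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        print(f"decoy listening on {host}:{port}")
        for _ in range(max_events):
            conn, addr = srv.accept()
            with conn:
                # Any touch of the decoy is suspicious by definition: log and alert.
                print(f"{datetime.datetime.now().isoformat()} decoy touched by {addr[0]}:{addr[1]}")
                conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")  # banner to look plausible

if __name__ == "__main__":
    run_decoy()
```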

While agentic AI is not yet widely deployed by threat actors, its rapid development signals an urgent need for organizations to prepare. To stay ahead, companies should begin exploring how agentic AI can be integrated into their defense strategies. Being proactive now could mean the difference between falling behind or successfully countering the next wave of ransomware threats.

SBI Issues Urgent Warning Against Deepfake Scam Videos Promoting Fake Investment Schemes

 

The State Bank of India (SBI) has issued an urgent public advisory warning customers and the general public about the rising threat of deepfake scam videos. These videos, circulating widely on social media, falsely claim that SBI has launched an AI-powered investment scheme in collaboration with the Government of India and multinational corporations—offering unusually high returns. 

SBI categorically denied any association with such platforms and urged individuals to verify investment-related information through its official website, social media handles, or local branches. The bank emphasized that it does not endorse or support any investment services that promise guaranteed or unrealistic returns. 

In a statement published on its official X (formerly Twitter) account, SBI stated: “State Bank of India cautions all its customers and the general public about many deepfake videos being circulated on social media, falsely claiming the launch of an AI-based platform showcasing lucrative investment schemes supported by SBI in association with the Government of India and some multinational companies. These videos misuse technology to create false narratives and deceive people into making financial commitments in fraudulent schemes. We clarify that SBI does not endorse any such schemes that promise unrealistic or unusually high returns.” 

Deepfake technology, which uses AI to fabricate convincing videos by manipulating facial expressions and voices, has increasingly been used to impersonate public figures and create fake endorsements. These videos often feature what appear to be real speeches or statements by senior officials or celebrities, misleading viewers into believing in illegitimate financial products. This isn’t an isolated incident. Earlier this year, a deepfake video showing India’s Union Finance Minister allegedly promoting an investment platform was debunked by the government’s PIB Fact Check. 

The video, which falsely claimed that viewers could earn a steady daily income through the platform, was confirmed to be digitally altered. SBI’s warning is part of a broader effort to combat the misuse of emerging technologies for financial fraud. The bank is urging everyone to remain cautious and to avoid falling prey to such digital deceptions. Scammers are increasingly using advanced AI tools to exploit public trust and create a false sense of legitimacy. 

To protect themselves, customers are advised to verify any financial information or offers through official SBI channels. Any suspicious activity or misleading promotional material should be reported immediately. SBI’s proactive communication reinforces its commitment to safeguarding customers in an era where financial scams are becoming more sophisticated. The bank’s message is clear: do not trust any claims about investment opportunities unless they come directly from verified, official sources.

Silicon Valley Crosswalk Buttons Hacked With AI Voices Mimicking Tech Billionaires

 

A strange tech prank unfolded across Silicon Valley this past weekend after crosswalk buttons in several cities began playing AI-generated voice messages impersonating Elon Musk and Mark Zuckerberg.  

Pedestrians reported hearing bizarre and oddly personal phrases coming from audio-enabled crosswalk systems in Menlo Park, Palo Alto, and Redwood City. The altered voices were crafted to sound like the two tech moguls, with messages that ranged from humorous to unsettling. One button, using a voice resembling Zuckerberg, declared: “We’re putting AI into every corner of your life, and you can’t stop it.” Another, mimicking Musk, joked about loneliness and buying a Cybertruck to fill the void.  

The origins of the incident remain unknown, but online speculation points to possible hacktivism—potentially aimed at critiquing Silicon Valley’s AI dominance or simply poking fun at tech culture. Videos of the voice spoof spread quickly on TikTok and X, with users commenting on the surreal experience and sarcastically suggesting the crosswalks had been “venture funded.” Beneath the humor, though, the incident raises serious security concerns. 

Local officials confirmed they’re investigating the breach and working to restore normal functionality. According to early reports, the tampering may have taken place on Friday. These crosswalk buttons aren’t new—they’re part of accessibility technology designed to help visually impaired pedestrians cross streets safely by playing audio cues. But this incident highlights how vulnerable public infrastructure can be to digital interference. Security researchers have warned in the past that these systems, often managed with default settings and unsecured firmware, can be compromised if not properly protected. 

One expert, physical penetration specialist Deviant Ollam, has previously demonstrated how such buttons can be manipulated using unchanged passwords or open ports. Polara, a leading manufacturer of these audio-enabled buttons, did not respond to requests for comment. The silence leaves open questions about how widespread the vulnerability might be and what cybersecurity measures, if any, are in place. This AI voice hack not only exposed weaknesses in public technology but also raised broader questions about the blending of artificial intelligence, infrastructure, and data privacy. 

What began as a strange and comedic moment at the crosswalk is now fueling a much larger conversation about the cybersecurity risks of increasingly connected cities. With AI becoming more embedded in daily life, events like this might be just the beginning of new kinds of public tech disruptions.

Generative AI Fuels Identity Theft, Aadhaar Card Fraud, and Misinformation in India

 

A disturbing trend is emerging in India’s digital landscape as generative AI tools are increasingly misused to forge identities and spread misinformation. One user, Piku, revealed that an AI platform generated a convincing Aadhaar card using only a name, birth date, and address—raising serious questions about data security. While AI models typically do not use real personal data, the near-perfect replication of government documents hints at training on real-world samples, possibly sourced from public leaks or open repositories. 

This AI-enabled fraud isn’t occurring in isolation. Criminals are combining fake document templates with authentic data collected from discarded paperwork, e-waste, and old printers. The resulting forged identities are realistic enough to pass basic checks, enabling SIM card fraud, bank scams, and more. Tools that began as aids for entertainment and productivity now pose serious risks. Misinformation tactics are evolving too. 

A recent incident involving playback singer Shreya Ghoshal illustrated how scammers exploit public figures to push phishing links. These fake stories led users to malicious domains targeting them with investment scams under false brand names like Lovarionix Liquidity. Cyber intelligence experts traced these campaigns to websites built specifically for impersonation and data theft. The misuse of generative AI also extends into healthcare fraud. 

In a shocking case, a man impersonated renowned cardiologist Dr. N John Camm and performed unauthorized surgeries at a hospital in Madhya Pradesh. At least two patient deaths were confirmed between December 2024 and February 2025. Investigators believe the impersonator may have used manipulated or AI-generated credentials to gain credibility. Cybersecurity professionals are urging more vigilance. CertiK founder Ronghui Gu emphasizes that users must understand the risks of sharing biometric data, like facial images, with AI platforms. Without transparency, users cannot be sure how their data is used or whether it’s shared. He advises precautions such as using pseudonyms, secondary emails, and reading privacy policies carefully—especially on platforms not clearly compliant with regulations like GDPR or CCPA. 

A recent HiddenLayer report revealed that 77% of companies using AI have already suffered security breaches. This underscores the need for robust data protection as AI becomes more embedded in everyday processes. India now finds itself at the center of an escalating cybercrime wave powered by generative AI. What once seemed like harmless innovation now fuels identity theft, document forgery, and digital misinformation. The time for proactive regulation, corporate accountability, and public awareness is now—before this new age of AI-driven fraud becomes unmanageable.

How GenAI Is Revolutionizing HR Analytics for CHROs and Business Leaders

 

Generative AI (GenAI) is redefining how HR leaders interact with data, removing the steep learning curve traditionally associated with people analytics tools. When faced with a spike in hourly employee turnover, Sameer Raut, Vice President of HRIS at Sunstate Equipment, didn’t need to build a custom report or consult data scientists. Instead, he typed a plain-language query into a GenAI-powered chatbot: 

“What are the top reasons for hourly employee terminations in the past 12 months?” Within seconds, he had his answer. This shift in how HR professionals access data marks a significant evolution in workforce analytics. Tools powered by large language models (LLMs) are now integrated into leading analytics platforms such as Visier, Microsoft Power BI, Tableau, Qlik, and Sisense. These platforms are leveraging GenAI to interpret natural language questions and deliver real-time, actionable insights without requiring technical expertise. 
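The pattern behind these assistants is straightforward to sketch: a plain-language question is translated into a structured query, which is then run against the HR data store. The snippet below illustrates that flow under stated assumptions: the table layout and column names are invented, and the translation step is stubbed out where a real platform would call its hosted LLM.

```python
# Rough sketch of a natural-language-to-insight flow. The table, column names,
# and the stubbed translation step are illustrative assumptions, not any
# vendor's actual API.
import sqlite3

def question_to_sql(question: str) -> str:
    # Placeholder for the LLM call a real platform would make; a fixed query
    # is returned here so the sketch runs end to end.
    return (
        "SELECT reason, COUNT(*) AS n FROM terminations "
        "WHERE type = 'hourly' AND term_date >= date('now', '-12 months') "
        "GROUP BY reason ORDER BY n DESC LIMIT 5"
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE terminations (reason TEXT, type TEXT, term_date TEXT)")
conn.executemany(
    "INSERT INTO terminations VALUES (?, ?, ?)",
    [("pay", "hourly", "2025-01-10"), ("schedule", "hourly", "2025-02-03"),
     ("pay", "hourly", "2025-03-15"), ("relocation", "salaried", "2025-01-20")],
)

question = "What are the top reasons for hourly employee terminations in the past 12 months?"
for reason, count in conn.execute(question_to_sql(question)):
    print(f"{reason}: {count}")
```

The value of the commercial tools lies in doing this translation reliably against messy, permissioned enterprise data rather than a toy table, but the shape of the interaction is the same.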

One of the major advantages of GenAI is its ability to unify fragmented HR data sources. It streamlines data cleansing, ensures consistency, and improves the accuracy of workforce metrics like headcount growth, recruitment gaps, and attrition trends. As Raut notes, tools like Visier’s GenAI assistant “Vee” allow him to make quick decisions during meetings, helping HR become more responsive and strategic. This evolution is particularly valuable in a landscape where 39% of HR leaders cite limited analytics expertise as their biggest challenge, according to a 2023 Aptitude Research study. 

GenAI removes this barrier by enabling intuitive data exploration across familiar platforms like Slack and Microsoft Teams. Frontline managers who may never open a BI dashboard can now access performance metrics and workforce trends instantly. Experts believe this transformation is just beginning. While some analytics platforms are still improving their natural language processing capabilities, others are leading with more advanced and user-friendly GenAI chatbots. 

These tools can even create automated visualizations and summaries tailored to executive audiences, enabling CHROs to tell compelling data stories during high-level meetings. However, this transformation doesn’t come without risk. Data privacy remains a top concern, especially as GenAI tools engage with sensitive workforce data. HR leaders must ensure that platforms offer strict entitlement management and avoid training AI models on private customer data. Providers like Visier mitigate these risks by training their models solely on anonymized queries rather than real-world employee information. 
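As a rough illustration of what training on anonymized queries can mean in practice, the snippet below scrubs obvious identifiers from a query before it is stored or reused. The regex patterns and placeholder tokens are assumptions for illustration, not a description of Visier’s or any other vendor’s pipeline.

```python
# Illustrative anonymization pass over query text before logging or reuse.
# The patterns and placeholder tokens are assumptions, not a vendor's pipeline.
import re

PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),   # email addresses
    (re.compile(r"\bEMP-\d{4,}\b"), "<EMPLOYEE_ID>"),      # assumed employee ID format
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),       # US-style SSNs
]

def anonymize(query: str) -> str:
    """Replace identifiable substrings with neutral tokens."""
    for pattern, token in PATTERNS:
        query = pattern.sub(token, query)
    return query

print(anonymize("Why did EMP-10234 (jane.doe@example.com) leave in March?"))
# -> "Why did <EMPLOYEE_ID> (<EMAIL>) leave in March?"
```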

As GenAI continues to evolve, it’s clear that its role in HR will only expand. From democratizing access to HR data to enhancing real-time decision-making and storytelling, this technology is becoming indispensable for organizations looking to stay agile and informed.

AI Powers Airbnb’s Code Migration, But Human Oversight Still Key, Say Tech Giants

 

In a bold demonstration of AI’s growing role in software development, Airbnb has successfully completed a large-scale code migration project using large language models (LLMs), dramatically reducing the timeline from an estimated 1.5 years to just six weeks. The project involved updating approximately 3,500 React component test files from Enzyme to the more modern React Testing Library (RTL). 

According to Airbnb software engineer Charles Covey-Brandt, the company’s AI-driven pipeline used a combination of automated validation steps and frontier LLMs to handle the bulk of the transformation. Impressively, 75% of the files were migrated within just four hours, thanks to robust automation and intelligent retries powered by dynamic prompt engineering with context-rich inputs of up to 100,000 tokens. 
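Airbnb has not published its pipeline code, but the validate-and-retry pattern it describes can be sketched roughly as follows. The helper names, the fixed retry schedule, and the idea of widening the context window on each attempt are assumptions for illustration only.

```python
# Rough sketch of a validate-and-retry migration loop. migrate_with_llm,
# tests_pass, and the widening context schedule are illustrative assumptions,
# not Airbnb's actual implementation.
import subprocess
from pathlib import Path

CONTEXT_STEPS = [10_000, 40_000, 100_000]  # tokens of context per attempt (assumed)

def migrate_with_llm(source: str, context_tokens: int) -> str:
    """Placeholder for the frontier-model call that rewrites an Enzyme test to RTL."""
    raise NotImplementedError("wire up your model provider here")

def tests_pass(test_file: Path) -> bool:
    """Run the project's test command against one file and report success."""
    result = subprocess.run(["npx", "jest", str(test_file)], capture_output=True)
    return result.returncode == 0

def migrate_file(test_file: Path) -> str:
    original = test_file.read_text()
    for context_tokens in CONTEXT_STEPS:   # retry with more context each time
        candidate = migrate_with_llm(original, context_tokens)
        test_file.write_text(candidate)
        if tests_pass(test_file):
            return "migrated"
    test_file.write_text(original)         # roll back so a human can take over
    return "needs_manual_review"
```

Files that exhaust their retries end up flagged for manual review, which is the kind of status tracking that let Airbnb whittle the remaining failures down to a number small enough for engineers to resolve by hand.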

Despite this efficiency, about 900 files initially failed validation. Airbnb employed iterative tools and a status-tracking system to bring that number down to fewer than 100, which were finally resolved manually—underscoring the continued need for human intervention in such processes. Other tech giants echo this hybrid approach. Google, in a recent report, noted a 50% speed increase in migrating codebases using LLMs. 

One project converting ID types in the Google Ads system—originally estimated to take hundreds of engineering years—was largely automated, with 80% of code changes authored by AI. However, inaccuracies still required manual edits, prompting Google to invest further in AI-powered verification. Amazon Web Services also highlighted the importance of human-AI collaboration in code migration. 

Its research into modernizing Java code using Amazon Q revealed that developers value control and remain cautious of AI outputs. Participants emphasized their role as reviewers, citing concerns about incorrect or misleading changes. While AI is accelerating what were once laborious coding tasks, these case studies reveal that full autonomy remains out of reach. 

Engineers continue to act as crucial gatekeepers, validating and refining AI-generated code. For now, the future of code migration lies in intelligent partnerships—where LLMs do the heavy lifting and humans ensure precision.