Local applications are installed directly on a user’s device, a model that dominated the 1990s and remains widely used. Their biggest advantage is reliability: apps are always accessible, customizable, and available wherever the device goes.
However, maintaining these distributed installations can be challenging. Updates must be rolled out across multiple endpoints, often leading to inconsistency. Performance may also fluctuate if these apps depend on remote databases or storage resources. Security adds another layer of complexity, as corporate data must move to the device, increasing the risk of exposure and demanding strong endpoint protection.
VDI centralizes desktops and applications in a controlled environment—whether hosted on-premises or in private or public clouds. Users interact with the system through transmitted screen updates and input signals, while the data itself stays securely in one place.
This centralization simplifies updates, strengthens security, and ensures more predictable performance by keeping applications near their data sources. On the other hand, VDI requires uninterrupted connectivity and often demands specialized expertise to manage. As a result, many organizations supplement VDI with other delivery models instead of depending on it alone.
SaaS delivers software through a browser, eliminating the need for local installation or maintenance. Providers apply updates automatically, keeping applications “evergreen” for subscribers. This reduces operational overhead for IT teams and allows vendors to release features quickly.
But the subscription-based model also means customers don’t own the software—access ends when payments stop. Transitioning to a different provider can be difficult, especially when exporting data in a usable form. SaaS can also introduce familiar endpoint challenges, as user devices still interact directly with data.
The model’s rapid growth is evident. According to the Parallels Cloud Survey 2025, 80% of respondents say at least a quarter of their applications run as SaaS, with many reporting significantly higher adoption.
DaaS extends the SaaS model by delivering entire desktops through a managed service. Organizations access virtual desktops much like VDI but without overseeing the underlying infrastructure.
This reduces complexity while providing consolidated management, stable performance, and strong security. DaaS is especially useful when organizations need to scale quickly to support new teams or projects. However, like SaaS, DaaS is subscription-based, and the service stops if payments lapse. The model works best with standardized desktop environments—heavy customization can add complexity.
Another key consideration is data location. If desktops move to DaaS while critical applications or data remain elsewhere, users may face performance issues. Aligning desktops with the data they rely on is essential.
Most organizations no longer rely on a single delivery method. They use local apps where necessary, VDI for tighter control, SaaS for streamlined access, and DaaS for scalability.
The Parallels survey highlights this blend: 85% of organizations use SaaS, but only 2% rely on it exclusively. Many combine SaaS with VDI or DaaS. Additionally, 86% of IT leaders say they are considering or planning to shift some workloads away from the public cloud, reflecting the complexity of modern delivery decisions.
When determining how these models fit together, organizations must assess:
Security & Compliance: Highly regulated sectors may prefer VDI for data control, while SaaS and DaaS providers offer certifications that may not apply universally.
Operational Expertise: VDI demands specialized skills; companies lacking them may adopt DaaS. SaaS’s isolated data structures may require additional tools or expertise.
Scalability & Agility: SaaS and DaaS typically allow faster expansion, though cloud-based VDI is narrowing this gap.
Geographical Factors: User locations, latency requirements, and regional data regulations influence which model performs best.
Cost Structure: VDI often requires upfront investments, while SaaS and DaaS distribute costs over time. Both direct and hidden operational costs must be evaluated.
Each application delivery model offers distinct benefits: local apps provide control, VDI enhances security, SaaS simplifies operations, and DaaS supports flexibility. Most organizations will continue using a combination of these approaches.
The optimal strategy aligns each model with the workloads it supports best, prioritizes security and compliance, and maintains adaptability for future needs. With clear objectives and thoughtful planning, IT leaders can deliver secure, high-performing access today while staying ready for whatever comes next.
Most digital assistants today can help users find information, yet they still cannot independently complete tasks such as organizing a trip or finalizing a booking. This gap exists because the majority of these systems are built on generative AI models that can produce answers but lack the technical ability to carry out real-world actions. That limitation is now beginning to shift as the Model Context Protocol, known as MCP, emerges as a foundational tool for enabling task-performing AI.
MCP functions as an intermediary layer that allows large language models to interact with external data sources and operational tools in a standardized way. Anthropic unveiled this protocol in late 2024, describing it as a shared method for linking AI assistants to the platforms where important information is stored, including business systems, content libraries and development environments.
The protocol uses a client-server approach. An AI model or application runs an MCP client. On the opposite side, travel companies or service providers deploy MCP servers that connect to their internal data systems, such as booking engines, rate databases, loyalty programs or customer profiles. The two sides exchange information through MCP’s uniform message format.
Before MCP, organizations had to create individual API integrations for each connection, which required significant engineering time. MCP is designed to remove that inefficiency by letting companies expose their information one time through a consolidated server that any MCP-enabled assistant can access.
Support from major AI companies, including Microsoft, Google, OpenAI and Perplexity, has pushed MCP into a leading position as the shared standard for agent-based communication. This has encouraged travel platforms to start experimenting with MCP-driven capabilities.
Several travel companies have already adopted the protocol. Kiwi.com introduced its MCP server in 2025, allowing AI tools to run flight searches and receive personalized results. Executives at the company note that the appetite for experimenting with agentic travel tools is growing, although the sector still needs clarity on which tasks belong inside a chatbot and which should remain on a company’s website.
In the accommodation sector, property management platform Apaleo launched an MCP server ahead of its competitors, and other travel brands such as Expedia and TourRadar are also integrating MCP. Industry voices emphasize that AI assistants using MCP pull verified information directly from official hotel and travel systems, rather than relying on generic online content.
The importance of MCP became even more visible when new ChatGPT apps were announced, with major travel agencies included among the first partners. Experts say this marks a significant moment for how consumers may start buying travel through conversational interfaces.
However, early adopters also warn that MCP is not without challenges. Older systems must be restructured to meet MCP’s data requirements, and companies must choose AI partners carefully because each handles privacy, authorization and data retention differently. LLM processing time can also introduce delays compared to traditional APIs.
Industry analysts expect MCP-enabled bookings to appear first in closed ecosystems, such as loyalty platforms or brand-specific applications, where trust and verification already exist. Although the technology is progressing quickly, experts note that consumer-facing value is still developing. For now, MCP represents the first steps toward more capable, agentic AI in travel.
Google has issued a new security alert urging smartphone users to “avoid using public Wi-Fi whenever possible,” cautioning that “these networks can be unencrypted and easily exploited by attackers.” With so many people relying on free networks at airports, cafés, hotels and malls, the warning raises an important question—just how risky are these hotspots?
The advisory appears in Google’s latest “Behind the Screen” safety guide for both Android and iPhone users, released as text-based phishing and fraud schemes surge across the U.S. and other countries. The threat landscape is alarming: according to Google, 94% of Android users are vulnerable to sophisticated messaging scams that now operate like “a sophisticated, global enterprise designed to inflict devastating financial losses and emotional distress on unsuspecting victims.”
With 73% of people saying they are “very or extremely concerned about mobile scams,” and 84% believing these scams harm society at a major scale, Google’s new warning highlights the growing need for simple, practical ways to stay safer online.
Previously, Google’s network-related cautions focused mostly on insecure 2G cellular connections, which lack encryption and can be abused for SMS Blaster attacks—where fake cell towers latch onto nearby phones to send mass scam texts. But stepping into the public Wi-Fi debate is unusual, especially for a company as influential as Google.
Earlier this year, the U.S. Transportation Security Administration (TSA) also advised travelers: “Don’t use free public Wi-Fi” as part of its airport safety guidelines, pairing it with a reminder to avoid public charging stations as well. Both recommendations have drawn their share of skepticism within the cybersecurity community.
Even the Federal Trade Commission (FTC) has joined the discussion. The agency acknowledges that while public Wi-Fi networks in “coffee shops, malls, airports, hotels, and other places are convenient,” they have historically been insecure. The FTC explains that in the past, browsing on a public network exposed users to data theft because many websites didn’t encrypt their traffic. However, encryption is now widespread: “most websites do use encryption to protect your information. Because of the widespread use of encryption, connecting through a public Wi-Fi network is usually safe.”
Turn off auto-connect for unknown or public Wi-Fi networks.
When accessing a network through a captive portal, never download software or submit personal details beyond an email address.
Make sure every site you open uses encryption — look for the padlock icon and avoid entering credentials if an unexpected popup appears.
Verify the network name before joining to ensure you're connecting to the official Wi-Fi of the hotel, café, airport or store.
Use only reputable, paid VPN services from trusted developers; free or unfamiliar VPNs—especially those based in China—can be riskier than not using one at all
Artificial intelligence is transforming how videos are created and shared, and the change is happening at a startling pace. In only a few months, AI-powered video generators have advanced so much that people are struggling to tell whether a clip is real or synthetic. Experts say that this is only the beginning of a much larger shift in how the public perceives recorded reality.
The uncomfortable truth is that most of us will eventually fall for a fake video. Some already have. The technology is improving so quickly that it is undermining the basic assumption that a video camera captures the truth. Until we adapt, it is important to know what clues can still help identify computer-generated clips before that distinction disappears completely.
The Quality Clue: When Bad Video Looks Suspicious
At the moment, the most reliable sign of a potentially AI-generated video is surprisingly simple, poor image quality. If a clip looks overly grainy, blurred, or compressed, that should raise immediate suspicion. Researchers in digital forensics often start their analysis by checking resolution and clarity.
Hany Farid, a digital-forensics specialist at the University of California, Berkeley, explains that low-quality videos often hide the subtle visual flaws created by AI systems. These systems, while impressive, still struggle to render fine details accurately. Blurring and pixelation can conveniently conceal these inconsistencies.
However, it is essential to note that not all low-quality clips are fake. Some authentic videos are genuinely filmed under poor lighting or with outdated equipment. Likewise, not every AI-generated video looks bad. The point is that unclear or downgraded quality makes fakes harder to detect.
Why Lower Resolution Helps Deception
Today’s top AI models, such as Google’s Veo and OpenAI’s Sora, have reduced obvious mistakes like extra fingers or distorted text. The issues they produce are much subtler, unusually smooth skin textures, unnatural reflections, strange shifts in hair or clothing, or background movements that defy physics. When resolution is high, those flaws are easier to catch. When the video is deliberately compressed, they almost vanish.
That is why deceptive creators often lower a video’s quality on purpose. By reducing resolution and adding compression, they hide the “digital fingerprints” that could expose a fake. Experts say this is now a common technique among those who intend to mislead audiences.
Short Clips Are Another Warning Sign
Length can be another indicator. Because generating AI video is still computationally expensive, most AI-generated clips are short, often six to ten seconds. Longer clips require more processing time and increase the risk of errors appearing. As a result, many deceptive videos online are short, and when longer ones are made, they are typically stitched together from several shorter segments. If you notice sharp cuts or changes every few seconds, that could be another red flag.
The Real-World Examples of Viral Fakes
In recent months, several viral examples have proven how convincing AI content can be. A video of rabbits jumping on a trampoline received over 200 million views before viewers learned it was synthetic. A romantic clip of two strangers meeting on the New York subway was also revealed to be AI-generated. Another viral post showed an American priest delivering a fiery sermon against billionaires; it, too, turned out to be fake.
All these videos shared one detail: they looked like they were recorded on old or low-grade cameras. The bunny video appeared to come from a security camera, the subway couple’s clip was heavily pixelated, and the preacher’s footage was slightly zoomed and blurred. These imperfections made the fakes seem authentic.
Why These Signs Will Soon Disappear
Unfortunately, these red flags are temporary. Both Farid and other researchers, like Matthew Stamm of Drexel University, warn that visual clues are fading fast. AI systems are evolving toward flawless realism, and within a couple of years, even experts may struggle to detect fakes by sight alone. This evolution mirrors what happened with AI images where obvious errors like distorted hands or melted faces have mostly disappeared.
In the future, video verification will depend less on what we see and more on what the data reveals. Forensic tools can already identify statistical irregularities in pixel distribution or file structure that the human eye cannot perceive. These traces act like invisible fingerprints left during video generation or manipulation.
Tech companies are now developing standards to authenticate digital content. The idea is for cameras to automatically embed cryptographic information into files at the moment of recording, verifying the image’s origin. Similarly, AI systems could include transparent markers to indicate that a video was machine-generated. While these measures are promising, they are not yet universally implemented.
Experts in digital literacy argue that the most important shift must come from us, not just technology. As Mike Caulfield, a researcher on misinformation, points out, people need to change how they interpret what they see online. Relying on visual appearance is no longer enough.
Just as we do not assume that written text is automatically true, we must now apply the same scepticism to videos. The key questions should always be: Who created this content? Where was it first posted? Has it been confirmed by credible sources? Authenticity now depends on context and source verification rather than clarity or resolution.
The Takeaway
For now, blurry and short clips remain practical warning signs of possible AI involvement. But as technology improves, those clues will soon lose their usefulness. The only dependable defense against misinformation will be a cautious, investigative mindset: verifying origin, confirming context, and trusting only what can be independently authenticated.
In the era of generative video, the truth no longer lies in what we see but in what we can verify.
Fraud has evolved into a calculated industry powered by technology, psychology, and precision targeting. Gone are the days when scams could be spotted through broken English or unrealistic offers alone. Today’s fraudsters combine emotional pressure with digital sophistication, creating schemes that appear legitimate and convincing. Understanding how these scams work, and knowing how to respond, is essential for protecting your family’s hard-earned savings.
The Changing Nature of Scams
Modern scams are not just technical traps, they are psychological manipulations. Criminals no longer rely solely on phishing links or counterfeit banking apps. They now use social engineering tactics, appealing to trust, fear, or greed. A scam might start with a call pretending to be from a government agency, an email about a limited investment opportunity, or a message warning that your bank account is at risk. Each of these is designed to create panic or urgency so that victims act before they think.
A typical fraud cycle follows a simple pattern: an urgent message, a seemingly legitimate explanation, and a request for sensitive action, such as sharing a one-time password, installing a new app, or transferring funds “temporarily” to another account. Once the victim complies, the attacker vanishes, leaving financial and emotional loss behind.
Experts note that the most dangerous scams often appear credible because they mimic official communication styles, use verified-looking logos, and even operate fake customer support numbers. The sophistication makes these schemes particularly hard to spot, especially for first-time investors or non-technical individuals.
Key Red Flags You Should Never Ignore
1. Unrealistic returns or guarantees: If a company claims you can make quick, risk-free profits or shows charts with consistent gains, it’s likely a setup. Real investments fluctuate; only scammers promise certainty.
2. Pressure to act immediately: Whether it’s “only minutes left to invest” or “pay now to avoid penalties,” urgency is a manipulative tactic designed to prevent logical evaluation.
3. Requests to switch apps or accounts: Authentic businesses never ask customers to transfer funds into personal or unfamiliar accounts or to download unverified applications.
4. Emotional storylines: Fraudsters know how to exploit emotions. They may pretend to be in love, offer fake job opportunities, or issue fabricated legal threats, all aimed at overriding rational thinking.
5. Asking for security codes or OTPs: No genuine financial institution or digital platform will ever ask for these details. Sharing them gives scammers direct access to your accounts.
Simple Steps to Build Financial Safety
Protection from scams starts with discipline and awareness rather than advanced technology.
• Take a moment before responding. Don’t act out of panic. Pause, think, and verify before clicking or transferring money.
• Verify independently. If a message or call appears urgent, reach out to the organization using contact details from their official website, not from the message itself.
• Activate alerts and monitor accounts. Keep an eye on all transactions. Early detection of suspicious activity can prevent larger losses.
• Use multi-layered security. Enable multi-factor authentication on all major financial accounts, preferably using hardware security keys or authentication apps instead of SMS codes.
• Keep your digital environment clean. Regularly update your devices, operating systems, and browsers, and use trusted antivirus software to block potential malware.
• Install apps only from reliable sources. Avoid downloading apps or investment platforms shared through personal messages or unverified websites.
• Educate your family. Many scam victims are older adults who may hesitate to talk about it. Encourage open communication and make sure they know how to recognize suspicious requests.
Awareness Is the New Security
Technology gives fraudsters global reach, but it also equips users with tools to fight back. Secure authentication systems, anti-phishing filters, and real-time transaction alerts are valuable but they work best when combined with personal vigilance.
Think of security like investment diversification: no single tool provides complete protection. A strong defense requires a mix of cautious behavior, verification habits, and awareness of evolving threats.
Your Takeaway
Scammers are adapting faster than ever, blending emotional manipulation with technical skill. The best way to counter them is to slow down, question everything that seems urgent or “too good to miss,” and confirm information before taking action.
Protecting your family’s financial wellbeing isn’t just about saving or investing wisely, it’s about staying alert, informed, and proactive. Remember: genuine institutions will never rush you, threaten you, or ask for confidential information. The smartest investment today is in your awareness.
A prompt injection occurs when hackers embed secret instructions inside what looks like an ordinary input. The AI can’t tell the difference between developer-given rules and user input, so it processes everything as one continuous prompt. This loophole lets attackers trick the model into following their commands — stealing data, installing malware, or even hijacking smart home devices.
Security experts warn that these malicious instructions can be hidden in everyday digital spaces — web pages, calendar invites, PDFs, or even emails. Attackers disguise their prompts using invisible Unicode characters, white text on white backgrounds, or zero-sized fonts. The AI then reads and executes these hidden commands without realizing they are malicious — and the user remains completely unaware that an attack has occurred.
For instance, a company might upload a market research report for analysis, unaware that the file secretly contains instructions to share confidential pricing data. The AI dutifully completes both tasks, leaking sensitive information without flagging any issue.
In another chilling example from the Black Hat security conference, hidden prompts in calendar invites caused AI systems to turn off lights, open windows, and even activate boilers — all because users innocently asked Gemini to summarize their schedules.
Prompt injection attacks mainly fall into two categories:
Direct Prompt Injection: Attackers directly type malicious commands that override the AI’s normal functions.
Indirect Prompt Injection: Hackers hide commands in external files or links that the AI processes later — a far stealthier and more dangerous method.
There are also advanced techniques like multi-agent infections (where prompts spread like viruses between AI systems), multimodal attacks (hiding commands in images, audio, or video), hybrid attacks (combining prompt injection with traditional exploits like XSS), and recursive injections (where AI generates new prompts that further compromise itself).
It’s crucial to note that prompt injection isn’t the same as “jailbreaking.” While jailbreaking tries to bypass safety filters for restricted content, prompt injection reprograms the AI entirely — often without the user realizing it.
Even though many solutions focus on corporate users, individuals can also protect themselves:
Android has been at the forefront of the fight against scammers for years, utilizing the best AI to create proactive, multi-layered defenses that can detect and stop scams before they get to you. Every month, over 10 billion suspected malicious calls and messages are blocked by Android's scam defenses. In order to preserve the integrity of the RCS service, Google claims to conduct regular safety checks. It has blocked more than 100 million suspicious numbers in the last month alone.
To highlight how fraud defenses function in the real world, Google invited consumers and independent security experts to compare how well Android and iOS protect you from these dangers. Additionally, Google is releasing a new report that describes how contemporary text scams are planned, giving you insight into the strategies used by scammers and how to identify them.
Android smartphones were found to have the strongest AI-powered protections in a recent assessment conducted by the international technology market research firm Counterpoint Research.
The news first came in December last year. According to the WSJ, officials at the Departments of Justice, Commerce, and Defense had launched investigations into the company due to national security threats from China.
Currently, the proposal has gotten interagency approval. According to the Washington Post, "Commerce officials concluded TP-Link Systems products pose a risk because the US-based company's products handle sensitive American data and because the officials believe it remains subject to jurisdiction or influence by the Chinese government."
But TP-Link's connections to the Chinese government are not confirmed. The company has denied of any ties with being a Chinese company.
The company was founded in China in 1996. After the October 2024 investigation, the company split into two: TP-Link Systems and TP-Link Technologies. "TP-Link's unusual degree of vulnerabilities and required compliance with [Chinese] law are in and of themselves disconcerting. When combined with the [Chinese] government's common use of [home office] routers like TP-Link to perpetrate extensive cyberattacks in the United States, it becomes significantly alarming" the officials wrote in October 2024.
The company dominated the US router market since the COVID pandemic. It rose from 20% of total router sales to 65% between 2019 and 2025.
The US DoJ is investigating if TP-Link was involved in predatory pricing by artificially lowering its prices to kill the competition.
The potential ban is due to an interagency review and is being handled by the Department of Commerce. Experts say that the ban may be lifted in future due to Trump administration's ongoing negotiations with China.
Security researcher hxr1 discovered a far more conventional method of weaponizing rampant AI in a year when ingenious and sophisticated quick injection tactics have been proliferating. He detailed a living-off-the-land attack (LotL) that utilizes trusted files from the Open Neural Network Exchange (ONNX) to bypass security engines in a proof-of-concept (PoC) provided exclusively to Dark Reading.
Programs for cybersecurity are only as successful as their designers make them. Because these are known signs of suspicious activity, they may detect excessive amounts of data exfiltrating from a network or a foreign.exe file that launches. However, if malware appears on a system in a way they are unfamiliar with, they are unlikely to be aware of it.
That's the reason AI is so difficult. New software, procedures, and systems that incorporate AI capabilities create new, invisible channels for the spread of cyberattacks.
The Windows operating system has been gradually including features since 2018 that enable apps to carry out AI inference locally without requiring a connection to a cloud service. Inbuilt AI is used by Windows Hello, Photos, and Office programs to carry out object identification, facial recognition, and productivity tasks, respectively. They accomplish this by making a call to the Windows Machine Learning (ML) application programming interface (API), which loads ML models as ONNX files.
ONNX files are automatically trusted by Windows and security software. Why wouldn't they? Although malware can be found in EXEs, PDFs, and other formats, no threat actors in the wild have yet to show that they plan to or are capable of using neural networks as weapons. However, there are a lot of ways to make it feasible.
Planting a malicious payload in the metadata of a neural network is a simple way to infect it. The compromise would be that this virus would remain in simple text, making it much simpler for a security tool to unintentionally detect it.
Piecemeal malware embedding among the model's named nodes, inputs, and outputs would be more challenging but more covert. Alternatively, an attacker may utilize sophisticated steganography to hide a payload inside the neural network's own weights.
As long as you have a loader close by that can call the necessary Windows APIs to unpack it, reassemble it in memory, and run it, all three approaches will function. Additionally, both approaches are very covert. Trying to reconstruct a fragmented payload from a neural network would be like trying to reconstruct a needle from bits of it spread through a haystack.