Local applications are installed directly on a user's device, a model that dominated the 1990s and remains widely used. Their biggest advantage is availability: apps remain accessible even without a network connection, can be customized per user, and travel wherever the device goes.
However, maintaining these distributed installations can be challenging. Updates must be rolled out across multiple endpoints, often leading to inconsistency. Performance may also fluctuate if these apps depend on remote databases or storage resources. Security adds another layer of complexity, as corporate data must move to the device, increasing the risk of exposure and demanding strong endpoint protection.
VDI centralizes desktops and applications in a controlled environment—whether hosted on-premises or in private or public clouds. Users interact with the system through transmitted screen updates and input signals, while the data itself stays securely in one place.
This centralization simplifies updates, strengthens security, and ensures more predictable performance by keeping applications near their data sources. On the other hand, VDI requires uninterrupted connectivity and often demands specialized expertise to manage. As a result, many organizations supplement VDI with other delivery models instead of depending on it alone.
SaaS delivers software through a browser, eliminating the need for local installation or maintenance. Providers apply updates automatically, keeping applications “evergreen” for subscribers. This reduces operational overhead for IT teams and allows vendors to release features quickly.
But the subscription-based model also means customers don’t own the software—access ends when payments stop. Transitioning to a different provider can be difficult, especially when exporting data in a usable form. SaaS can also introduce familiar endpoint challenges, as user devices still interact directly with data.
The model’s rapid growth is evident. According to the Parallels Cloud Survey 2025, 80% of respondents say at least a quarter of their applications run as SaaS, with many reporting significantly higher adoption.
DaaS extends the SaaS model by delivering entire desktops through a managed service. Organizations access virtual desktops much like VDI but without overseeing the underlying infrastructure.
This reduces complexity while providing consolidated management, stable performance, and strong security. DaaS is especially useful when organizations need to scale quickly to support new teams or projects. However, like SaaS, DaaS is subscription-based, and the service stops if payments lapse. The model works best with standardized desktop environments—heavy customization can add complexity.
Another key consideration is data location. If desktops move to DaaS while critical applications or data remain elsewhere, users may face performance issues. Aligning desktops with the data they rely on is essential.
Most organizations no longer rely on a single delivery method. They use local apps where necessary, VDI for tighter control, SaaS for streamlined access, and DaaS for scalability.
The Parallels survey highlights this blend: 85% of organizations use SaaS, but only 2% rely on it exclusively. Many combine SaaS with VDI or DaaS. Additionally, 86% of IT leaders say they are considering or planning to shift some workloads away from the public cloud, reflecting the complexity of modern delivery decisions.
When determining how these models fit together, organizations must assess the following factors (a simple scoring sketch follows the list):
Security & Compliance: Highly regulated sectors may prefer VDI for direct data control, while SaaS and DaaS providers offer compliance certifications that may not cover every regulation or jurisdiction.
Operational Expertise: VDI demands specialized skills; companies lacking them may adopt DaaS instead. SaaS applications often keep data in isolated silos, which may require additional integration tools or expertise.
Scalability & Agility: SaaS and DaaS typically allow faster expansion, though cloud-based VDI is narrowing this gap.
Geographical Factors: User locations, latency requirements, and regional data regulations influence which model performs best.
Cost Structure: VDI often requires upfront investments, while SaaS and DaaS distribute costs over time. Both direct and hidden operational costs must be evaluated.
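To show how these criteria can be weighed together, here is a minimal, purely illustrative Python sketch of a weighted scoring matrix. The criteria weights, model names, and per-criterion scores are hypothetical placeholders that each organization would replace with its own assessments.

```python
# Illustrative weighted scoring matrix for comparing delivery models.
# All weights and scores below are hypothetical examples, not benchmarks.

CRITERIA_WEIGHTS = {          # relative importance, summing to 1.0
    "security_compliance": 0.30,
    "operational_expertise": 0.20,
    "scalability": 0.20,
    "geography_latency": 0.15,
    "cost_structure": 0.15,
}

# Each model scored 1 (poor fit) to 5 (strong fit) per criterion.
MODEL_SCORES = {
    "local": {"security_compliance": 3, "operational_expertise": 4,
              "scalability": 2, "geography_latency": 5, "cost_structure": 3},
    "vdi":   {"security_compliance": 5, "operational_expertise": 2,
              "scalability": 3, "geography_latency": 3, "cost_structure": 2},
    "saas":  {"security_compliance": 3, "operational_expertise": 5,
              "scalability": 5, "geography_latency": 4, "cost_structure": 4},
    "daas":  {"security_compliance": 4, "operational_expertise": 4,
              "scalability": 5, "geography_latency": 3, "cost_structure": 4},
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores into a single weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

# Rank the models by weighted fit, highest first.
for model, scores in sorted(MODEL_SCORES.items(),
                            key=lambda kv: weighted_score(kv[1]),
                            reverse=True):
    print(f"{model:5s} -> {weighted_score(scores):.2f}")
```

A matrix like this does not replace judgment; it simply forces the trade-offs between the five factors above to be made explicit and comparable across workloads.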
Each application delivery model offers distinct benefits: local apps provide control, VDI enhances security, SaaS simplifies operations, and DaaS supports flexibility. Most organizations will continue using a combination of these approaches.
The optimal strategy aligns each model with the workloads it supports best, prioritizes security and compliance, and maintains adaptability for future needs. With clear objectives and thoughtful planning, IT leaders can deliver secure, high-performing access today while staying ready for whatever comes next.
A recent report reveals that more than a third of healthcare organisations are unprepared for cyberattacks, despite a clear rise in such incidents: more than 30% of these organisations have faced an attack in the past three years. The HHS Office for Civil Rights has reported a 256% increase in large data breaches involving hacking over the last five years, highlighting the sector's growing vulnerability.
Sensitive Data at High Risk
Healthcare organisations manage vast amounts of sensitive data, predominantly in digital form. This makes them prime targets for cybercriminals, especially since many operators have not sufficiently encrypted their data at rest or in transit. This lack of security is alarming, considering the high value of protected health information (PHI), which includes patient data, medical records, and insurance details. Such information is often sold on the dark web or used to ransom healthcare providers, forcing them to pay up to avoid losing critical patient data.
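As one illustration of closing the encryption gap, the sketch below protects a record at rest using the Python cryptography package's Fernet recipe (authenticated symmetric encryption). The record contents are placeholders, and a real deployment would source keys from a key management service rather than generating them in application code.

```python
# Minimal example of encrypting a PHI record at rest with authenticated
# encryption. Requires: pip install cryptography. Key handling here is
# simplified; production systems should use a key management service (KMS).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, fetch from a KMS; never hard-code
fernet = Fernet(key)

record = b'{"patient_id": "12345", "diagnosis": "example"}'  # placeholder data
token = fernet.encrypt(record)     # ciphertext is safe to write to disk

restored = fernet.decrypt(token)   # raises InvalidToken if data was tampered with
assert restored == record
```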
In response to the surge in cyberattacks, federal regulators and lawmakers have taken notice. The HHS recently released voluntary cybersecurity guidelines and is considering the introduction of enforceable standards to enhance the sector's defences. However, experts stress that healthcare systems must take proactive measures, such as conducting regular risk analyses, to better prepare for potential threats. Notably, the report found that 37% of healthcare organisations lack a contingency plan for cyberattacks, even though half have experienced such incidents.
To address these challenges, healthcare organisations need to implement several key strategies:
1. Assess Security Risks in IT Infrastructure
Regular cyber risk assessments and security evaluations are essential. These assessments should be conducted annually to identify new vulnerabilities, outdated policies, and security gaps that could jeopardise the organisation. Comprehensive cybersecurity audits, whether internal or by third parties, provide a thorough overview of the entire IT infrastructure, including network, email, and physical device security.
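A small slice of such an assessment can be automated. The following Python sketch, using only the standard library, flags commonly risky open ports on a host; the address and port list are illustrative assumptions, and a real audit would rely on dedicated vulnerability scanners.

```python
# Toy exposure check for a single host using only the standard library.
# A real assessment would use dedicated scanning tools; the host address
# and port list here are illustrative placeholders.
import socket

RISKY_PORTS = {23: "telnet", 445: "SMB", 3389: "RDP", 5900: "VNC"}

def check_host(host: str, timeout: float = 0.5) -> list:
    """Return a list of findings for ports that accept connections."""
    findings = []
    for port, service in RISKY_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the port accepted
                findings.append(f"{host}:{port} ({service}) is reachable")
    return findings

for finding in check_host("192.0.2.10"):          # placeholder address
    print("FLAG:", finding)
```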
2. Implement Network Segmentation
Network segmentation is an effective practice that divides an organisation's network into smaller, isolated subnetworks. This approach limits data access and makes it difficult for hackers to move laterally within the network if they gain access. Each subnetwork has its own security rules and access privileges, enhancing overall security by preventing unauthorised access to the entire network through a single vulnerability.
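To make the idea concrete, here is a hedged Python sketch that encodes a segmentation policy as allowed subnet-to-subnet flows and checks whether a given connection should be permitted. The segment names, subnet ranges, and rules are hypothetical; actual enforcement happens in firewalls or VLAN ACLs, not application code.

```python
# Illustrative segmentation policy check: which subnets may talk to which.
# Segment names, subnets, and rules are hypothetical placeholders.
from ipaddress import ip_address, ip_network

SEGMENTS = {
    "clinical_workstations": ip_network("10.10.0.0/24"),
    "ehr_servers":           ip_network("10.20.0.0/24"),
    "guest_wifi":            ip_network("10.99.0.0/24"),
}

# Only these (source, destination) segment pairs are permitted.
ALLOWED_FLOWS = {("clinical_workstations", "ehr_servers")}

def segment_of(ip: str) -> str | None:
    """Map an IP address to its segment, or None if unknown."""
    addr = ip_address(ip)
    for name, net in SEGMENTS.items():
        if addr in net:
            return name
    return None

def flow_allowed(src_ip: str, dst_ip: str) -> bool:
    """Default deny: a flow passes only if its segment pair is allowed."""
    src, dst = segment_of(src_ip), segment_of(dst_ip)
    return (src, dst) in ALLOWED_FLOWS

print(flow_allowed("10.10.0.5", "10.20.0.7"))   # True: workstation -> EHR
print(flow_allowed("10.99.0.8", "10.20.0.7"))   # False: guest wifi blocked
```

The key design choice is the default-deny stance: any flow not explicitly listed, including traffic from unrecognized addresses, is rejected, which is what prevents a single compromised subnetwork from reaching the rest of the network.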
3. Enforce Cybersecurity Training and Education
Human error is a growing factor in data breaches. To mitigate this, healthcare organisations must provide comprehensive cybersecurity training to their staff. This includes educating employees on secure password creation, safe internet browsing, recognizing phishing attacks, avoiding unsecured Wi-Fi networks, setting up multi-factor authentication, and protecting sensitive information such as social security numbers and credit card details. Regular updates to training programs are necessary to keep pace with the evolving nature of cyber threats.
By adopting these measures, healthcare organisations can significantly bolster their defences against cyberattacks, safeguarding sensitive patient information and maintaining compliance with HIPAA standards.
According to a recent IDC survey of more than 500 CIOs across more than 20 industries worldwide, 46 percent of respondents reported experiencing at least one ransomware attack in the last three years. Ransomware has thus surpassed natural disasters as the main reason organizations need to be skilled at handling large data restorations. Years ago, disk system failure, which frequently required a complete restore from scratch, was the primary cause of such restores.
That changed with the introduction of RAID and erasure coding, which made individual disk failures survivable and pushed terrorism and natural disasters to the forefront. Even so, unless a company operated in a disaster-prone area, the likelihood that it would actually experience a natural disaster was fairly low.
Is the Company Prepared for an Attack?
Maybe not.
The survey suggests that organizations that have experienced cyberattacks or data loss are confident in their ability to respond to such events in the future. Supporting this, 85 percent of respondents, when asked about their security plans, claimed to have a cyber-recovery playbook covering intrusion detection, prevention, and response.
However, ransomware attacks are ever-evolving, with threat actors continually changing their tactics. It is therefore difficult to conclude that today's data resiliency tools will remain effective against every future ransomware attack.
These tools should nevertheless share one key objective: recovering breached data so that the organization neither pays an enormous ransom nor loses the data outright. Since ransomware attacks are all but inevitable, a data resiliency tool should at least limit the damage they cause.
Minimizing Attack Damage
Detecting, responding to, and recovering from a ransomware attack requires several crucial steps and tactics, outlined below.
• IT infrastructure can be designed to limit the damage of an attack, for example by blocking connections to newly registered or unknown domains (disrupting command and control) and by restricting internal lateral movement (limiting the malware's ability to spread). Once ransomware has struck, however, you must employ numerous tools, many of which can be automated for greater efficiency (see the domain-filter sketch after this list).
• Limit lateral movement by halting IP traffic from infected systems all at once; if infected systems cannot communicate, no further damage can occur. Once the infected systems are identified and shut down, the disaster recovery phase of bringing those systems back online can begin, with care taken to ensure that the recovery systems are not themselves infected (see the isolation sketch that follows).
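As a rough illustration of the "forbid new domains" control mentioned above, the Python sketch below applies a deny-by-default domain allowlist, the simplest form of egress domain filtering. The allowlist entries are placeholders; production controls would live in a DNS firewall or secure web gateway, often backed by domain-age and reputation feeds.

```python
# Deny-by-default egress domain filter (illustrative allowlist approach).
# Entries are placeholders; real controls belong in a DNS firewall or
# secure web gateway rather than application code.
ALLOWED_DOMAINS = {"example-erp.com", "update.vendor.example", "mail.example.org"}

def domain_allowed(hostname: str) -> bool:
    """Allow a hostname only if it matches, or is a subdomain of, an allowed entry."""
    hostname = hostname.lower().rstrip(".")
    return any(hostname == d or hostname.endswith("." + d)
               for d in ALLOWED_DOMAINS)

for name in ("update.vendor.example", "cdn.update.vendor.example",
             "freshly-registered-c2.xyz"):
    verdict = "ALLOW" if domain_allowed(name) else "BLOCK"
    print(f"{verdict}: {name}")
```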
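And as a sketch of the isolation step, the snippet below takes a list of suspected-infected hosts and generates firewall block rules for review rather than executing them. The IP addresses are placeholders, and the iptables syntax assumes a Linux gateway.

```python
# Generate isolation rules for suspected-infected hosts. IPs are placeholders;
# the iptables syntax assumes a Linux gateway. Rules are printed for operator
# review rather than executed, since cutting traffic is a disruptive action.
SUSPECTED_INFECTED = ["10.10.0.23", "10.10.0.41"]   # fed in by detection tooling

def isolation_rules(hosts: list) -> list:
    """Build DROP rules that cut all forwarded traffic to and from each host."""
    rules = []
    for ip in hosts:
        rules.append(f"iptables -I FORWARD -s {ip} -j DROP")  # block outbound
        rules.append(f"iptables -I FORWARD -d {ip} -j DROP")  # block inbound
    return rules

for rule in isolation_rules(SUSPECTED_INFECTED):
    print(rule)
```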