However, AI can only operate effectively if the data feeding it is sound. If models are not trained on comprehensive, objective, and high-quality data, they can produce unreliable outcomes. AI has nonetheless turned out to be a lucrative prospect for healthcare institutions. Yet it is quite challenging for them to gather and use information while also adhering to privacy and confidentiality regulations because of the sensitivity of the patient data involved.
This is where the idea of ‘synthetic data’ comes into play.
The U.S. Census Bureau defines synthetic data as artificial microdata that is created with computer algorithms or statistical models to replicate the statistical characteristics of real-world data. It can supplement or replace actual data in public health, health information technology, and healthcare research, sparing companies the headache of obtaining and utilizing real patient data.
One of the reasons synthetic data is preferred over real-world information is the privacy it provides.
Synthetic data is created in a way that maintains the dataset's analytical usefulness while replacing any personally identifiable information (PII) with de-identified values. This ensures that identities cannot be traced back to particular records or re-identified, while still allowing data to be easily used and exchanged internally.
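As a rough illustration of that idea, the Python sketch below samples synthetic records from per-column distributions fitted to a real dataset and assigns fresh, unlinkable record IDs. The column names and the input DataFrame `real_df` are hypothetical; production generators are far more sophisticated, but the principle is the same: preserve aggregate statistics, discard identifiers.

```python
# Minimal sketch of synthetic record generation (not a production generator).
# Assumes a hypothetical real dataset `real_df` with the columns named below.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

def synthesize(real_df: pd.DataFrame, n_rows: int) -> pd.DataFrame:
    synthetic = pd.DataFrame()
    # Numeric columns: approximate each with a simple normal fit.
    for col in ["age", "systolic_bp", "hba1c"]:
        mu, sigma = real_df[col].mean(), real_df[col].std()
        synthetic[col] = rng.normal(mu, sigma, n_rows).round(1)
    # Categorical columns: resample according to observed frequencies.
    for col in ["diagnosis_code", "sex"]:
        freq = real_df[col].value_counts(normalize=True)
        synthetic[col] = rng.choice(freq.index, size=n_rows, p=freq.values)
    # Identifiers are never copied: assign new, meaningless record IDs.
    synthetic.insert(0, "record_id", [f"SYN-{i:06d}" for i in range(n_rows)])
    return synthetic
```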
Using synthetic data as a stand-in for PII helps ensure that organizations remain compliant with regulations such as GDPR and HIPAA throughout the process.
In addition to protecting privacy, synthetic datasets can help save the time and money that businesses often spend obtaining and managing real-world data through conventional techniques. They faithfully reproduce the characteristics of the original data without requiring businesses to enter into complicated data-sharing agreements or navigate privacy legislation and data access restrictions.
Even though synthetic data has a lot of advantages over real data, it should never be treated carelessly.
For example, if the statistical models and algorithms used to generate the data are faulty or biased in any way, the output may be less dependable and accurate than anticipated and could affect downstream applications. In a similar vein, a malicious actor may be able to re-identify the data if it is only partially safeguarded.
Such a case can arise if the synthetic data includes outliers and unique data points, such as a rare disease found in only a small number of records, which can be linked back to the original dataset with ease. Records in the synthetic data can also be re-identified through adversarial machine learning techniques, particularly when the attacker has access to both the generative model and the synthetic data.
These situations can be mitigated by using techniques such as differential privacy, which adds calibrated noise to the data, and statistical disclosure control, which perturbs and alters the information during the generation process.
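For a concrete, if simplified, picture of what "adding noise" means, the following Python sketch applies the Laplace mechanism from differential privacy to a single aggregate count before it is released or used to drive generation. The `epsilon` and `sensitivity` values are illustrative assumptions, not recommended settings.

```python
# Minimal sketch of the Laplace mechanism from differential privacy:
# blur an aggregate (here, a count) so outliers are harder to trace.
import numpy as np

rng = np.random.default_rng()

def dp_count(true_count: int, epsilon: float = 1.0, sensitivity: float = 1.0) -> float:
    """Return a noisy count; smaller epsilon means more noise and stronger privacy."""
    scale = sensitivity / epsilon
    return true_count + rng.laplace(loc=0.0, scale=scale)

# Example: a rare-disease cohort of 7 records becomes a blurred value,
# making it harder to tie that unusual group back to specific individuals.
noisy = dp_count(7, epsilon=0.5)
```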
Generating synthetic data can be tricky and may also compromise transparency and reproducibility. Researchers and teams are therefore advised to apply these safeguards carefully and to consistently document and share the procedures used to produce their synthetic data.
Henry Schein is a Fortune 500 healthcare products and services provider with operations and affiliates in 32 countries and approximately $12 billion in revenue reported in 2022.
The company first disclosed on October 15 that, following a cyberattack the day before, it had to take some systems offline in order to contain the threat.
On November 22, more than a month later, the company announced that some of its applications and its e-commerce platform had once again been taken offline due to another attack, attributed to the BlackCat ransomware gang.
"Certain Henry Schein applications, including its ecommerce platform, are currently unavailable. The Company continues to take orders using alternate means and continues to ship to its customers," the announcement said.
"Henry Schein has identified the cause of the occurrence. The threat actor from the previously disclosed cyber incident has claimed responsibility."
Today, the company released a statement, noting that it has restored its U.S. e-commerce platform and that it is expecting its platforms in Canada and Europe to be back online shortly.
The healthcare services company is apparently still taking orders through alternate methods and distributing them to customers in the affected areas.
Following the breach, the BlackCat ransomware gang added Henry Schein to its dark web leak site, taking responsibility for breaching the company’s network and claiming to have stolen 35 terabytes of the company’s critical data.
The cybercrime organization claims that it re-encrypted the company's devices just as Henry Schein was about to restore its systems, following a breakdown in negotiations toward the end of October.
This would make this month's incident the third time that BlackCat has compromised Henry Schein's network and encrypted its computers, after first doing so on October 15.
"Despite ongoing discussions with Henry's team, we have not received any indication of their willingness to prioritize the security of their clients, partners, and employees, let alone protect their own network," the threat actors said.
The ransomware group further warned that it would release the company's internal payroll data and shareholder folders on its leak blog by midnight.
Initially discovered in November 2021, BlackCat is believed to be a rebrand of the notorious DarkSide/BlackMatter gang. DarkSide earlier gained global notoriety for its attack on Colonial Pipeline, which prompted extensive law enforcement probes.
Moreover, the FBI has linked the ransomware group to over 60 breaches between November 2021 and March 2022, affecting companies globally.
The personal data of 8.5 million American patients was at risk due to a data breach that occurred recently at Welltok, a well-known supplier of healthcare solutions. Since cybersecurity specialists found the intrusion, the organization has been attempting to resolve the issue and minimize any possible harm.
According to reports from Bleeping Computer, the breach has exposed a vast amount of sensitive data, including patients' names, addresses, medical histories, and other confidential information. This breach not only raises concerns about the privacy and security of patient data but also highlights the increasing sophistication of cyber threats in the healthcare sector.
Welltok has promptly responded to the incident, acknowledging the breach through a notice posted on their official website. The company has assured affected individuals that it is taking necessary steps to investigate the breach, enhance its security measures, and collaborate with law enforcement agencies to identify the perpetrators.
The impact of this breach extends beyond the United States, as reports from sources suggest that the compromised data includes patients from various regions. This global reach amplifies the urgency for international cooperation in addressing cyber threats and fortifying data protection measures in the healthcare industry.
Cybersecurity analysts estimate that the breach may have affected up to 11 million patients, emphasizing the scale and severity of the incident. The potential consequences of such a breach are far-reaching, ranging from identity theft to unauthorized access to medical records, posing serious risks to individuals' well-being.
This incident underscores the critical need for organizations, especially those handling sensitive healthcare data, to continuously assess and strengthen their cybersecurity protocols. As technology advances, so do the methods employed by malicious actors, making it imperative for companies to stay vigilant and proactive in safeguarding the privacy and security of their users.
The Welltok data breach sharply underscores the ongoing risks to the healthcare sector. The company's efforts to contain the breach and safeguard the affected parties serve as a reminder of the larger difficulties businesses face in preserving the confidentiality of sensitive data in an increasingly interconnected digital world.
The recent Truepill data breach has generated significant questions regarding the security of sensitive patient data and the vulnerability of digital platforms in the rapidly changing field of digital healthcare.
The breach, reported by TechCrunch on November 18, 2023, highlights the exposure of millions of patients' data through PostMeds, a pharmacy platform relying on Truepill's services. The scope of the breach underscores the urgency for healthcare organizations to reevaluate their cybersecurity protocols in an era where digital health is becoming increasingly integrated into patient care.
Truepill, a prominent player in the digital health space, has been a key facilitator for various healthcare startups looking to build or buy telehealth infrastructure. The incident prompts a reassessment of the risks associated with outsourcing healthcare services and infrastructure. As explored in a TechCrunch article from May 17, 2021, the decision for startups to build or buy telehealth infrastructure requires careful consideration of the potential security implications, especially in light of the Truepill breach.
One striking revelation from the recent breach is the misconception surrounding the Health Insurance Portability and Accountability Act (HIPAA). Contrary to popular belief, as noted by Consumer Reports, HIPAA alone does not provide comprehensive protection for medical privacy. The article highlights the gaps in the current legal framework, emphasizing the need for a more robust and nuanced approach to safeguarding sensitive healthcare data.
The Truepill data breach serves as a wake-up call for the entire healthcare ecosystem. It underscores the importance of continuous vigilance, stringent cybersecurity measures, and a comprehensive understanding of the evolving threat landscape. Healthcare providers, startups, and tech companies alike must prioritize the implementation of cutting-edge security protocols to protect patient confidentiality and maintain the trust that is integral to the doctor-patient relationship.
As the digital transformation of healthcare accelerates, the industry must learn from incidents like the Truepill data breach. This unfortunate event should catalyze a collective effort to fortify the defenses of digital health platforms, ensuring that patients can confidently embrace the benefits of telehealth without compromising the security of their sensitive medical information.
According to a recent report published on Wednesday by Proofpoint, an email security company, around 90% of healthcare organizations experienced at least one cybersecurity incident in the past year.
In the past two years, more than half of the healthcare organizations surveyed reported experiencing an average of four ransomware attacks, and 68% of them noted that the attacks “negatively impacted patient safety and care.”
The Proofpoint report draws on a survey of more than 650 IT and cybersecurity professionals in the US healthcare sector, highlighting the sector's ongoing susceptibility to common attack methods. It comes as the Cybersecurity and Infrastructure Security Agency works to provide greater assistance to small, rural hospitals that are underfunded and wilting under constant cyberattacks.
As healthcare organizations struggle to find alternatives to their outdated technology so they can keep providing services, these efforts are using up more and more of their resources. Between 2022 and 2023, the cost of the time spent minimizing the attacks' consequences on patient care rose by 50%, from around $660,000 to $1 million.
In the case of a ransomware attack on hospital systems, where computer networks shut down, the impact is rapid and extensive.
During a congressional hearing in September, Stephen Leffler, president and chief operating officer of the University of Vermont Medical Center, described how a ransomware attack in October 2020 brought about a catastrophe at his facility. For 28 days, senior physicians had to train junior physicians on how to use paper records as the National Guard assisted the IT department in a round-the-clock operation to wipe and reconfigure every computer in the network.
Leffler remarked, "We literally went to Best Buy and bought every walkie-talkie they had," because the hospital's internet-based phone system was offline.
Leffler, who has been an emergency medicine doctor for 30 years, further commented “I've been a hospital president for four years. The cyberattack was much harder than the pandemic by far.”
McLaren Health Care, a major healthcare provider, was hit by a ransomware attack. This type of cyberattack encrypts a victim's data and demands a ransom to decrypt it. The hackers stole sensitive patient data and threatened to release it if McLaren didn't pay them. This incident highlights the need for strong cybersecurity measures in the healthcare industry.
Residents received messages from McLaren Health Care on October 6, 2023, alerting them to the cyber threat that had put patient data confidentiality at risk. This incident serves as a sobering reminder of the growing cyber threats facing healthcare organizations around the world.
Ransomware attacks involve cybercriminals encrypting an organization's data and demanding a ransom for its release. In this case, McLaren Health Care's patient data is at stake. The attackers aim to exploit the highly sensitive nature of healthcare information, which includes medical histories, personal identification details, and potentially even financial data.
The implications of this breach are far-reaching. Patient trust, a cornerstone of healthcare, is at risk. Individuals rely on healthcare providers to safeguard their private information, and breaches like this erode that trust. Furthermore, the exposure of personal medical records can have severe consequences for individuals, leading to identity theft, insurance fraud, and emotional distress.
This incident emphasizes the urgency for healthcare organizations to invest in state-of-the-art cybersecurity measures. Robust firewalls, up-to-date antivirus software, regular security audits, and employee training are just a few of the essential components of a comprehensive cybersecurity strategy.
Additionally, there should be a renewed emphasis on data encryption and secure communication channels within the healthcare industry. This not only protects patient information but also ensures that in the event of a breach, the data remains unintelligible to unauthorized parties.
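As a minimal sketch of what encryption at rest can look like in practice, the Python example below uses the widely available `cryptography` package to encrypt a hypothetical patient record with a symmetric key. Real deployments would add key management (for example a KMS or HSM), access controls, and TLS for data in transit, all of which are omitted here.

```python
# Minimal sketch of symmetric encryption at rest using the `cryptography`
# package (pip install cryptography). Field names are hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store in a secrets manager, never in code
fernet = Fernet(key)

record = b'{"patient_id": "12345", "diagnosis": "E11.9"}'
token = fernet.encrypt(record)       # ciphertext safe to write to disk or a database

# Only a holder of the key can recover the plaintext; a breach of the
# storage layer alone yields unintelligible data.
assert fernet.decrypt(token) == record
```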
Regulatory bodies and governments must also play a role in strengthening cybersecurity in the healthcare sector. Strict compliance standards and hefty penalties for negligence can serve as powerful deterrents against lax security practices.
As McLaren Health Care grapples with the aftermath of this attack, it serves as a powerful warning to all healthcare providers. The threat of cyberattacks is real and pervasive, and the consequences of a breach can be devastating. It is imperative that the industry acts collectively to fortify its defenses and safeguard the trust of patients worldwide. The time to prioritize cybersecurity in healthcare is now.
Artificial intelligence (AI) is rapidly transforming healthcare, with the potential to revolutionize the way we diagnose, treat, and manage diseases. However, as with any emerging technology, there are also ethical concerns that need to be addressed.
AI systems are often complex and opaque, making it difficult to understand how they work and make decisions. This lack of transparency can make it difficult to hold AI systems accountable for their actions. For example, if an AI system makes a mistake that harms a patient, it may be difficult to determine who is responsible and what steps can be taken to prevent similar mistakes from happening in the future.
AI systems are trained on data, and if that data is biased, the AI system will learn to be biased as well. This could lead to AI systems making discriminatory decisions about patients, such as denying them treatment or recommending different treatments based on their race, ethnicity, or socioeconomic status.
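As a minimal illustration of how such bias can be surfaced, the Python sketch below compares positive-prediction rates across demographic groups, a simple demographic-parity check. The predictions and group labels are entirely hypothetical, and this is only one of many fairness metrics, not the method any particular healthcare system uses.

```python
# Minimal demographic-parity check on a model's outputs (hypothetical data).
import numpy as np

def selection_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Share of positive predictions (e.g., 'recommend treatment') per group."""
    return {g: predictions[groups == g].mean() for g in np.unique(groups)}

preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])           # 1 = treatment recommended
grps = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
rates = selection_rates(preds, grps)
parity_gap = max(rates.values()) - min(rates.values())  # a large gap warrants investigation
```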
AI systems collect and store large amounts of personal data about patients. This data needs to be protected from unauthorized access and use. If patient data is compromised, it could be used for identity theft, fraud, or other malicious purposes.
AI systems could potentially make decisions about patients' care without their consent. This raises concerns about patient autonomy and informed consent. Patients should have a right to understand how AI is being used to make decisions about their care and to opt out of AI-based care if they choose.
In addition to the aforementioned factors, it's critical to be mindful of how AI could exacerbate already-existing healthcare disparities. AI systems might be utilized, for instance, to create novel medicines that are only available to wealthy patients. Alternatively, AI systems might be applied to target vulnerable people for the marketing of healthcare goods and services.
New York has recently passed a new provision in its state budget that prohibits advertisers from geofencing healthcare facilities. This law, which was passed in May, has made it increasingly difficult for advertisers who want to use location or healthcare data to maintain performance while still abiding by the law.
Under this new law, corporations are prohibited from creating a geofence within 1,850 feet of hospitals in New York state to deliver an advertisement, build consumer profiles, or infer health status. This means that advertisers can no longer target ads based on the location of potential customers near healthcare facilities.
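To make the mechanics concrete, the Python sketch below checks whether a device's reported location falls within 1,850 feet of a healthcare facility using a standard haversine distance calculation. The coordinates are hypothetical, and real ad platforms implement such checks in their own geospatial systems.

```python
# Minimal sketch of a geofence radius check: is a reported location
# within 1,850 feet of a known healthcare facility?
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_FT = 20_902_231  # mean Earth radius in feet

def distance_ft(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in feet."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_FT * asin(sqrt(a))

hospital = (40.7644, -73.9560)   # hypothetical facility location
device = (40.7690, -73.9570)     # hypothetical user location
within_prohibited_radius = distance_ft(*hospital, *device) <= 1850
```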
The implications of this law are far-reaching, particularly because of how densely packed New York City is. Theoretically, an advertiser could geofence around another business that is proximate to a health care facility and still fall within the law’s prohibited radius, even if the advertiser had no interest in healthcare.
The law defines healthcare facilities as any governmental or private entity providing medical care or services, which could encompass many establishments on a New York City block.
This means that many businesses could potentially fall within the prohibited radius, making it difficult for advertisers to target their ads effectively.
This legislation comes at a time when the federal government is also scrutinizing how businesses use healthcare data for advertising. As privacy concerns continue to grow, we can expect more regulations like this in the future.
Advertisers will need to adapt their strategies and find new ways to reach their target audience without infringing on privacy laws.
New York's ban on geofencing near health care facilities is a significant development in the advertising industry. It highlights the increasing importance of privacy and the need for advertisers to adapt their strategies accordingly.
As we move forward, it will be interesting to see how this law impacts advertising strategies and whether other states will follow suit.
Artificial intelligence (AI) has reached another milestone in its quest to mimic human sensory perception. Recent breakthroughs in AI technology have demonstrated its ability to identify odors with remarkable precision, surpassing the capabilities of human noses. This development promises to revolutionize various industries, from healthcare to environmental monitoring.
Researchers from a Google startup have unveiled an AI system that can describe smells more accurately than humans. This innovative technology relies on machine learning algorithms and a database of molecular structures to discern and articulate complex scent profiles. The system's proficiency is not limited to simple odors; it can distinguish between subtle nuances, making it a potential game-changer in fragrance and flavor industries.
One of the key advantages of AI in odor identification is its ability to process vast amounts of data quickly. Human olfaction relies on a limited number of odor receptors, while AI systems can analyze a multitude of factors simultaneously, leading to more accurate and consistent results. This makes AI particularly valuable in fields such as healthcare, where it can be used to detect diseases through breath analysis. AI's unmatched sensitivity to odor compounds could potentially aid in the early diagnosis of conditions like diabetes and cancer.
Moreover, AI's odor identification capabilities extend beyond the human sensory range. It can detect odors that are imperceptible to us, such as certain gases or chemical compounds. This attribute has significant implications for environmental monitoring, as AI systems can be employed to detect pollutants and dangerous substances in the air more effectively than traditional methods.
In addition to its practical applications, AI's prowess in odor identification has opened up new avenues for creative exploration. Perfumers and chefs are excited about the possibilities of collaborating with AI to design unique fragrances and flavors that were previously unimaginable. This fusion of human creativity with AI precision could lead to groundbreaking innovations in the world of scents and tastes.
However, there are ethical considerations to be addressed as AI continues to advance in this field. Questions about privacy and consent arise when AI can detect personal health information from an individual's scent. Striking the right balance between the benefits and potential risks of AI-powered odor identification will be crucial.
Varian, a subsidiary of Siemens Healthineers, specializes in therapeutic and diagnostic oncology services and provides software for oncology applications. The California-based corporation had more than 10,000 employees as of 2021 and an annual profit of £269 million.
While it is still unclear how LockBit gained access to Varian's systems or how much data was stolen, the ransomware gang warned readers of its "victim blog" that if the company did not meet its demands within two weeks, its private databases and patient medical data would be made public. Varian apparently has until 17 August to meet the negotiation demands and recover its stolen data if it wishes to prevent ‘all databases and patient data’ from being exposed on LockBit’s blog.
The attack is most likely to be a part of ‘triple extortion,’ a strategy usually used by ransomware actors. The strategy involves a three-part attack on an organization that starts with the theft of data that appears to be sensitive before it is encrypted. The corporate victim of the breach can only get their data back and keep it private if they pay a ransom, following which they will receive – in theory – a decryption key from the hackers.
Regarding the breach, Siemens Healthineers, Varian's parent company, confirmed that an internal investigation is ongoing. However, it did not provide any further details of the breach.
“Siemens Healthineers is aware that a segment of our business is allegedly affected by the Lockbit ransomware group[…]Cybersecurity is of utmost importance to Siemens Healthineers, and we are making every effort to continually improve our security and data privacy,” said a spokesperson.
Recent months have witnessed a good many cyberattacks conducted by LockBit against major companies. According to a report by the US Cybersecurity and Infrastructure Security Agency, the ransomware gang had already targeted 1,653 companies in the first quarter of 2023. The group frequently repurposed freeware and open-source tools for use in network reconnaissance, remote access, tunnelling, credential dumping, and file exfiltration.
Notable LockBit victims include the Port of Nagoya, whose disruption paralyzed supply chains for the Japanese automaker Toyota; SpaceX, from which the gang claims to have hauled 3,000 proprietary schematics; and Taiwanese chip maker TSMC, which it attempted to extort for $70 million.
Experts have expressed alarm about a worrying trend in the surveillance of people seeking abortions and gender-affirming medical care in a recent report that has received widespread attention. The research, released by eminent healthcare groups and publicized by numerous news outlets, shines a light on the potential risks and privacy violations vulnerable individuals face when they make these critical healthcare decisions.
The report, titled "Surveillance of Abortion and Gender-Affirming Care: A Growing Threat," brings to the forefront the alarming implications of surveillance on patient confidentiality and personal autonomy. It emphasizes the importance of safeguarding patient privacy and confidentiality in all healthcare settings, particularly in the context of sensitive reproductive and gender-affirming services.
According to the report, surveillance can take various forms, including electronic monitoring, data tracking, and unauthorized access to medical records. This surveillance can occur at different levels, ranging from individual hackers to more sophisticated state-sponsored efforts. Patients seeking abortions and gender-affirming care are at heightened risk due to the politically sensitive nature of these medical procedures.
The report highlights that such surveillance not only compromises patient privacy but can also have serious real-world consequences. Unwanted disclosure of sensitive medical information can lead to stigmatization, discrimination, and even physical harm to the affected individuals. This growing threat has significant implications for the accessibility and inclusivity of reproductive and gender-affirming healthcare services.
The authors of the report stress that this surveillance threat is not limited to any specific region but is a global concern. Healthcare providers and policymakers must address this issue urgently to protect patient rights and uphold the principles of patient-centered care.
Dr. Emily Roberts, a leading researcher and co-author of the report, expressed her concern about the findings: "As healthcare professionals, we have a duty to ensure the privacy and safety of our patients. The increasing surveillance of those seeking abortions or gender-affirming care poses a grave threat to patient autonomy and trust in healthcare systems. It is crucial for us to implement robust security measures and advocate for policies that protect patient privacy."
The research makes a number of recommendations for legislators, advocacy groups, and healthcare professionals to address the growing issue of surveillance. To ensure the secure management of patient information, it urges increased funding for secure healthcare information systems, stricter data security regulations, and better training for healthcare staff.
In reaction to the findings, a number of healthcare organizations and patient advocacy groups have banded together to spread the word about the problem and call on lawmakers to take appropriate action. They stress the significance of creating a healthcare system that respects patient autonomy and privacy, irrespective of the medical treatments they require.
As this important research gets more attention, it acts as a catalyst for group effort to defend patient rights and preserve the privacy of those seeking abortions and gender-affirming care. Healthcare stakeholders may cooperate to establish a more egalitarian, secure, and compassionate healthcare environment for all patients by tackling the growing surveillance threat.
The healthcare sector increasingly depends on technology to improve patient care and operational efficiency in today's rapidly evolving digital environment. Cybersecurity threats are a major worry that comes with this digital transition. The demand for qualified cybersecurity specialists becomes more critical than ever as healthcare organizations adopt digital systems and connected medical devices. Leading publications and industry experts have noted that demand for these specialists is expected to soar in the coming years.
Healthcare cybersecurity experts are predicted to experience an extraordinary rise in demand, according to a recent Forbes article. The article highlights the urgent need for specialists who can secure connected medical equipment, safeguard essential healthcare infrastructure, and protect sensitive patient data. The potential hazards and vulnerabilities grow as healthcare systems become more networked and reliant on digital technologies.
The World Economic Forum acknowledges the critical role of data in improving healthcare, but it also emphasizes the importance of robust cybersecurity measures. The integration of data analytics and artificial intelligence in healthcare presents immense potential for optimizing patient outcomes. However, it also introduces new avenues for cyberattacks, underscoring the necessity for skilled professionals who can counteract these threats effectively.
Government entities, such as the U.S. Department of Health and Human Services (HHS), have recognized the rising threat of cyberattacks in the healthcare sector. The HHS Cybersecurity Task Force has recently released new resources to address this challenge. In their official statement, the task force emphasizes the need for proactive cybersecurity measures and acknowledges the critical role of healthcare cybersecurity specialists in protecting patient data and ensuring public health safety.
The growing need for healthcare cybersecurity experts is also discussed in the Journal of the American Medical Association (JAMA). The article emphasizes the need for professionals who can reduce these dangers while highlighting how susceptible medical devices are to cyberattacks. The potential repercussions of a cybersecurity attack in the healthcare industry are worrisome given how connected and dependent on network connectivity medical devices are becoming.
The U.S. Bureau of Labor Statistics (BLS) forecasts that this profession will grow significantly faster than average given the rising demand for healthcare cybersecurity experts. According to the BLS, cybersecurity employment is projected to increase by 31% between 2019 and 2029, making it one of the fastest-growing fields. The ever-increasing reliance on technology across industries, including healthcare, is credited for this growth.
The Food and Drug Administration (FDA) also recognizes the importance of medical device cybersecurity. In a consumer update, the FDA highlights the risks associated with medical device vulnerabilities and advises healthcare organizations to prioritize cybersecurity measures. This reinforces the need for healthcare cybersecurity specialists who possess the expertise to protect medical devices and ensure patient safety.
A widely used diabetes management app recently experienced a serious technical failure, stunning its users and leaving them angry and scared. The app, which is essential for helping people with diabetes monitor and manage their blood sugar levels, abruptly stopped functioning, alarming its devoted users. The incident has raised concerns about the dependability and security of healthcare apps, as well as the possible repercussions of such failures.
According to reports from BBC News, the app's malfunctioning was first brought to light by distressed users who took to social media platforms to express their frustration. The app's sudden failure meant that users were unable to access critical features, including blood glucose monitoring, insulin dosage recommendations, and personalized health data tracking. This unexpected disruption left many feeling vulnerable and anxious about managing their condition effectively.
The Daily Mail highlighted the severity of the situation, emphasizing how the app's failure posed a potential threat to the lives of its users. Many individuals with diabetes rely on the app to regulate their insulin levels, ensuring they maintain stable blood sugar readings. With this vital tool out of commission, users were left in a state of panic, forced to find alternative methods to track their glucose levels and administer appropriate medication.
The incident has triggered an outpouring of anger and fear from the affected users, who feel let down by the app's developers. One user expressed their frustration, stating, "I have come to depend on this app for my daily diabetes management. Its sudden breakdown has left me feeling helpless and anxious about my health." Others echoed similar sentiments, emphasizing the app's importance in their daily routines and the detrimental impact of its sudden unavailability.
The situation has also raised broader concerns regarding the reliability and security of healthcare apps. As these digital tools increasingly become a fundamental part of managing chronic conditions, their dependability and robustness are of paramount importance. This incident serves as a reminder of the potential risks associated with relying solely on technology for critical health-related tasks.
Furthermore, the incident sheds light on the need for developers to prioritize thorough testing and regular maintenance of healthcare apps to prevent such disruptions. App developers and healthcare providers must collaborate closely to ensure the seamless functioning of these tools, considering the impact they have on the well-being of individuals with chronic conditions.
The data of more than 3.1 million US patients has apparently been shared by telehealth startup Cerebral with advertisers and social media giants like Facebook, Google, and TikTok.
In a notice published on its website, the company addressed the matter, admitting that the tracking technologies it had been using exposed patient data dating as far back as October 2019.
The telehealth startup rose to prominence in the wake of the COVID-19 pandemic, when lockdowns made online-only virtual health services commonplace, and it is now disclosing the security lapse in its systems.
In a filing with the federal government pertaining to the security lapse, the company revealed that it had shared personal and health-related information of patients who were seeking therapy or other mental health care services via its app.
The collected and distributed data includes names, phone numbers, email addresses, dates of birth, IP addresses, and other demographic information. It also includes data obtained from Cerebral's online mental health self-assessment, which may have covered the services the patient selected, assessment responses, and other related health information.
Reportedly, Cerebral was using trackers and other data-collecting programs embedded in its apps to share patient data with these digital giants in real time.
In most cases, online users have no idea whether they are opting into tracking in these apps; they simply accept the app's terms of use and privacy policies, which they typically do not read.
According to Cerebral, the data could vary from patient to patient based on different factors, like “what actions individuals took on Cerebral’s Platforms, the nature of the services provided by the Subcontractors, the configuration of Tracking Technologies,” and more. The company added that it will notify the affected users, regardless of “how an individual interacted with the Cerebral’s platform.”
Moreover, it claims that no Social Security numbers, credit card credentials, or bank account information have been exposed. Following the data breach in January, the company says it has “disabled, reconfigured, and/or removed any of the tracking pixels on the platform to prevent future exposures, and has enhanced its information security practices and technology vetting processes.”
It added that it has removed the tracking code from its apps. However, the tech giants are under no obligation to take down the exposed data that Cerebral has shared.
Because Cerebral handles sensitive patient information, it is covered by the HIPAA health privacy regulation in the United States. The U.S. Department of Health and Human Services, which oversees and enforces HIPAA, maintains a list of health-related security violations under investigation, and Cerebral's data leak is the second-largest compromise of health data in 2023.