
23andMe Faces Uncertainty After Data Breach

 

DNA and genetic testing firm 23andMe is grappling with significant challenges following a 2023 data breach and its ongoing financial downturn. Once a leader in the industry, the company now faces an uncertain future as it considers going private, raising concerns about the security of genetic data for its 15 million customers.

Known for its saliva-based genetic ancestry tests, 23andMe has seen its market value plummet by over 99% since its $6 billion high in 2021, largely due to unprofitability. This lack of profit is attributed to declining consumer interest in its one-time-use test kits and sluggish growth in its subscription services. Compounding these issues was a prolonged data breach in 2023, in which hackers stole genetic data from nearly 7 million users. In September, the company agreed to pay $30 million to settle a lawsuit related to the breach.

Shortly after the settlement, 23andMe CEO Anne Wojcicki mentioned the possibility of third-party takeover offers but later clarified her intent to take the company private. The initial statement, however, led to the immediate resignation of the company's independent board members, amplifying concerns about the future handling of customer data.

Many customers may assume their genetic data is protected by health privacy laws, but 23andMe is not bound by the Health Insurance Portability and Accountability Act (HIPAA). Instead, the company follows its own privacy policies, which it can alter at any time. According to a company spokesperson, 23andMe believes its data management practices are more appropriate and transparent compared to the traditional healthcare model under HIPAA.

The lack of strict federal oversight and varying state privacy laws means that in the event of a sale, the genetic data of millions could be up for grabs. Wojcicki has signaled a shift in the company's business strategy, halting costly drug development programs to focus on monetizing its customer data for pharmaceutical research.

While 23andMe asserts its data privacy policies would remain unchanged even if sold, privacy advocates have raised alarms. The Electronic Frontier Foundation (EFF) has warned that selling the company to entities with law enforcement ties could lead to misuse of sensitive genetic information.

For those concerned about the future of their data, 23andMe allows users to delete their accounts, though some data may still be retained under legal and compliance requirements.

Florida Medical Lab Data Breach Exposes 300,000 Individuals’ Sensitive Information

 

Florida-based medical laboratory American Clinical Solutions (ACS) recently experienced a significant data breach that exposed the sensitive information of approximately 300,000 individuals. The hacking incident, attributed to the criminal group RansomHub, resulted in the theft of 700 gigabytes of data, which has since been published on the dark web. The exposed data includes Social Security numbers, addresses, drug test results, medical records, insurance information, and other highly sensitive personal details.

ACS specializes in patient testing for both prescription and illicit narcotics, offering its services to healthcare providers. On July 24, ACS reported the breach to the U.S. Department of Health and Human Services’ Office for Civil Rights. The stolen data encompasses lab testing results from January 2016 through May 2024, when the hacking incident allegedly occurred. Privacy attorney David Holtzman, of the consulting firm HITprivacy LLC, expressed concerns over the nature of the exposed information, highlighting the potential for reputational harm, financial compromise, and extortion due to the sensitivity of drug testing data.

Despite the severity of the breach, ACS has not yet issued a public statement about the incident on its website, nor has it responded to requests for further details. This lack of communication has raised concerns among legal and regulatory experts, who warn that failing to alert patients about the breach may compound the potential harm. Holtzman emphasized the importance of transparency in such situations, suggesting that the absence of a breach notification may prompt investigations by HHS or state attorneys general to determine whether ACS has complied with the Health Insurance Portability and Accountability Act (HIPAA) and other relevant state laws. 

The delay in notifying affected individuals may stem from various factors, including the possibility that law enforcement advised ACS to wait or that the total number of impacted individuals has not yet been determined. Regulatory attorney Rachel Rose pointed out that drug testing data, while not subject to the stringent federal 42 CFR Part 2 privacy regulations that govern substance use disorder treatment facilities, is still considered highly sensitive. Rose compared the compromised information to reproductive health records, mental health records, and data related to diseases like AIDS.

RansomHub, the group behind the attack, has rapidly gained notoriety within the cybersecurity community since its emergence in February. The gang has claimed responsibility for several major hacks across the healthcare sector, including a June attack on the drugstore chain Rite Aid, which compromised the data of 2.2 million individuals. Security firm Rapid7 recently identified RansomHub as one of the most notable new ransomware groups, underscoring the growing threat it poses to organizations worldwide.

Safeguarding Your Work: What Not to Share with ChatGPT

 

ChatGPT, a popular AI language model developed by OpenAI, has gained widespread usage in various industries for its conversational capabilities. However, it is essential for users to be cautious about the information they share with AI models like ChatGPT, particularly when using it for work-related purposes. This article explores the potential risks and considerations for users when sharing sensitive or confidential information with ChatGPT in professional settings.

Potential Risks and Concerns:

  1. Data Privacy and Security: When sharing information with ChatGPT, there is a risk that sensitive data could be compromised or accessed by unauthorized individuals. While OpenAI takes measures to secure user data, it is important to be mindful of the potential vulnerabilities that exist.
  2. Confidentiality Breach: ChatGPT is an AI model trained on a vast amount of data, and there is a possibility that it may generate responses that unintentionally disclose sensitive or confidential information. This can pose a significant risk, especially when discussing proprietary information, trade secrets, or confidential client data.
  3. Compliance and Legal Considerations: Different industries and jurisdictions have specific regulations regarding data privacy and protection. Sharing certain types of information with ChatGPT may potentially violate these regulations, leading to legal and compliance issues.

Best Practices for Using ChatGPT in a Work Environment:

  1. Avoid Sharing Proprietary Information: Refrain from discussing or sharing trade secrets, confidential business strategies, or proprietary data with ChatGPT. It is important to maintain a clear boundary between sensitive company information and AI models.
  2. Protect Personally Identifiable Information (PII): Be cautious when sharing personal information, such as Social Security numbers, addresses, or financial details, as these can be targeted by malicious actors or result in privacy breaches; see the sketch after this list for one way to scrub such data before it leaves your environment.
  3. Verify the Purpose and Security of Conversations: If using a third-party platform or integration to access ChatGPT, ensure that the platform has adequate security measures in place. Verify that the conversations and data shared are stored securely and are not accessible to unauthorized parties.
  4. Be Mindful of Compliance Requirements: Understand and adhere to industry-specific regulations and compliance standards, such as GDPR or HIPAA, when sharing any data through ChatGPT. Stay informed about any updates or guidelines regarding the use of AI models in your particular industry.
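
As a concrete illustration of point 2 above, the minimal sketch below shows one way to scrub obvious PII from text before it is ever sent to an external AI service. It is illustrative only: the redact_pii helper, the regular-expression patterns, and the placeholder tokens are hypothetical rather than part of any OpenAI tooling, and a real deployment would need far more thorough detection (for example, a dedicated data loss prevention tool).

  import re

  # Hypothetical patterns for a few common PII formats; illustrative, not exhaustive.
  PII_PATTERNS = {
      "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
      "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
      "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
  }

  def redact_pii(text: str) -> str:
      """Replace anything matching a known PII pattern with a labeled placeholder."""
      for label, pattern in PII_PATTERNS.items():
          text = pattern.sub(f"[REDACTED {label.upper()}]", text)
      return text

  # Redact locally first, then send only the sanitized text to the chatbot.
  draft = "Customer Jane Roe (jane.roe@example.com, SSN 123-45-6789) reported a billing issue."
  print(redact_pii(draft))
  # -> Customer Jane Roe ([REDACTED EMAIL], SSN [REDACTED SSN]) reported a billing issue.
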
While ChatGPT and similar AI language models offer valuable assistance, it is crucial to exercise caution and prudence when using them in professional settings. Users must prioritize data privacy, security, and compliance by refraining from sharing sensitive or confidential information that could potentially compromise their organizations. By adopting best practices and maintaining awareness of the risks involved, users can harness the benefits of AI models like ChatGPT while safeguarding their valuable information.

Telehealth Startup Reveals It Exposed Private Data of Millions of Its Patients


Telehealth startup Cerebral, which specializes in mental health care, has revealed that it exposed patients’ private information, including mental health assessments.

The data of more than 3.1 million patients in the US has apparently been shared with advertisers and social media giants such as Facebook, Google, and TikTok.

In a notice published on its website, the company admitted that the tracking technologies it had been using exposed patient data dating back to October 2019.

The telehealth startup rose to prominence in the wake of the COVID-19 pandemic, when lockdowns made online-only virtual health services commonplace, and it is only now disclosing a security lapse that persisted in its systems throughout that time.

In a filing with the federal government about the security lapse, the company revealed that it had shared personal and health-related information of patients who sought therapy or other mental health care services through its app.

The collected and distributed data includes names, phone numbers, email addresses, dates of birth, IP addresses, and other demographic details, as well as information drawn from Cerebral's online mental health self-assessment, which may have included the services a patient chose, their assessment responses, and other related health information.

Reportedly, Cerebral had embedded trackers and other data-collecting programs in its apps that shared patient data with the digital giants in real time.

In most cases, users have no idea they are opting into tracking in these apps; they simply accept an app’s terms of use and privacy policy without reading them.

According to Cerebral, the data could vary from patient to patient based on different factors, like “what actions individuals took on Cerebral’s Platforms, the nature of the services provided by the Subcontractors, the configuration of Tracking Technologies,” and more. The company added that it will notify the affected users, regardless of “how an individual interacted with the Cerebral’s platform.” 

Moreover, the company says that patients’ Social Security numbers, credit card credentials, and bank account information were not exposed. Following the data breach in January, the company says it has “disabled, reconfigured, and/or removed any of the tracking pixels on the platform to prevent future exposures, and has enhanced its information security practices and technology vetting processes.”

It added that it has removed the tracking code from its apps. However, the tech giants are under no obligation to take down the exposed data that Cerebral shared with them.

Because Cerebral handles sensitive patient information, it is covered by HIPAA, the U.S. health privacy regulation. The U.S. Department of Health and Human Services, which oversees and enforces HIPAA, maintains a list of health-related security violations under investigation, and Cerebral's data leak is the second-largest compromise of health data reported in 2023.

Consenting to Cookies is Not Sufficient

 


While most companies spend a great deal of time implementing cookie consent notices, privacy-related lawsuits and regulatory developments continue to grow in number and scale. Unsurprisingly, these notices rarely protect companies or their customers.

Transparency is a worthwhile endeavor, but companies remain vulnerable to a number of threats that are often beyond their direct control.

The recent lawsuits involving the Meta Pixel, which affect many U.S. healthcare companies and physicians, are an ideal example of this issue.

Part of the problem lies in the way websites are designed and built. Outside of a few of the biggest tech companies, virtually every website is assembled from third-party cloud services: CRM systems, analytics, form builders, and advertiser trackers that take advantage of these functions. These third parties have a great deal of autonomy over how data is handled, yet they are not properly regulated.

Many kinds of tracking pixels are in use across the web, and most serve some purpose: marketers typically use the data they collect to target advertisements to potential customers and to measure how effectively those ads reach them. In the process, however, these trackers also collect highly specific and detailed personal data, which is folded into existing data portfolios.
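
To make the mechanism concrete, the short sketch below shows roughly what a single tracking pixel request can carry: the "pixel" is just a tiny image or script fetch whose URL smuggles the payload out as query parameters. The endpoint and parameter names here are hypothetical and do not belong to any real ad platform.

  from urllib.parse import urlencode

  # Hypothetical payload a tracking pixel might attach to one 1x1 image request.
  pixel_payload = {
      "pid": "123456789",                     # site owner's pixel/account ID
      "event": "PageView",
      "url": "https://clinic.example.com/conditions/depression-treatment",
      "referrer": "https://www.google.com/",
      "uid": "3a7f9c2e",                      # first-party cookie identifying the visitor
      "form_fields": "name,email,insurance",  # form fields present on the page
  }

  pixel_url = "https://tracker.example.net/collect?" + urlencode(pixel_payload)
  print(pixel_url)
  # The browser fetches this URL as an invisible image; the third party receives every
  # parameter above and can fold it into the profile it already holds on the visitor,
  # whether or not the visitor ever read the cookie notice.
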

Financial and Healthcare Data are Being Misused 

The risks associated with visiting a healthcare website are much higher than for most other sites. The medical conditions you research are not something you want shared with Facebook or added to your social graph. The crux of these lawsuits can therefore be summarized this way: Protected Health Information (PHI) is protected by HIPAA (the Health Insurance Portability and Accountability Act), and sharing it with advertising platforms violates that protection. Viewing digital advertising through the lens of healthcare shines a light on just how troubling tracking can be.

The same rules apply to financial services. If an unauthorized party gains access to personally identifiable information (PII) or financial data, such as Social Security numbers or credit card numbers, and it is not handled correctly, the consequences can be dire. Privacy is crucial to safety: details about your private life should be kept private, and modern advertising practices do not mesh well with these significant aspects of our lives.

In addition to the Meta Pixel case, two other recent lawsuits provide a deeper understanding of how complex and far-reaching the problem is.

Analyzing Sensitive Data From a Different Perspective 

In a recent lawsuit, Oracle was accused of using the roughly 4.5 billion records it currently holds as a proxy system for tracking sensitive consumer data it has deliberately chosen not to share with any third parties. For comparison, the global population is about 8 billion. Re-identification of de-identified data is hardly a new idea, but it is a clear example of why gathering all these pieces of data matters, no matter how random they may seem: anyone with access to enough of Oracle's data, or whoever ultimately gets hold of it, can infer most of the details of a person's life with astonishing accuracy, and it is a near certainty that the data will end up being used that way.

In another recent case, web testing tools were used to record users' sessions on a website so that the site owner could see how well users navigated through each step. It is extremely common for web developers and marketers to use these tools to make their user interfaces more usable.

In short, some companies are being accused of wiretapping under wiretap laws for using these tools to gather information, because the tools can transmit a considerable amount of information without the knowledge of either the user or the website owner. That may seem like a minor issue, but it becomes very clear once you look at it through the lens of sensitive data.

A New Regulation Seeks to Secure Non-HIPAA Digital Health Apps

 

A guideline developed and distributed by several healthcare stakeholder groups aims to secure digital health technologies and mobile health apps, the overwhelming majority of which fall outside of HIPAA regulation.

The Digital Health Assessment Framework was launched on May 2 by the American College of Physicians, the American Telemedicine Association, and the Organization for the Review of Care and Health Applications. The framework is intended to guide the use of digital health technologies while helping healthcare leaders and patients assess which online health tools to adopt. Under the HIPAA Rules, covered entities must also adopt the administrative, physical, and technical safeguards necessary to preserve the confidentiality, integrity, and availability of electronic protected health information.

Healthcare data security has never been more critical, with cyberattacks on healthcare businesses on the rise and hackers developing increasingly sophisticated tools and tactics to attack healthcare firms. Before HIPAA, the healthcare field lacked a universally accepted set of security standards or general obligations for protecting patient information. At the same time, new technologies were advancing, and the healthcare industry began to rely more heavily on electronic information systems to pay claims, answer eligibility questions, provide health information, and perform a variety of other administrative and clinical duties.

Furthermore, the Office for Civil Rights at the Department of Health and Human Services has stepped up HIPAA enforcement, and settlements with covered entities for HIPAA violations are being reached at a faster rate than ever before.

"Digital health technologies can provide safe, effective, and interacting access to personalized health and assistance, as well as more convenient care, improve patient-staff satisfaction and achieve better clinical outcomes," said Ann Mond Johnson, ATA CEO, in a statement. "Our goal is to provide faith that the health and wellness devices reviewed in this framework meet quality, privacy, and clinical assurance criteria in the United States," she added. 

Several health apps share personal information with third parties, leaving that data vulnerable to exposure. Over 86 million people in the US use a health or fitness app; such apps are praised for helping patients manage their health outside of the doctor's office. Yet HIPAA does not apply to a health app unless it is recommended for use by a healthcare provider.

The problem is that the evidence strongly suggests app developers engage in less-than-transparent practices that compromise patient privacy. A study published in JAMA in April 2019, based on a cross-sectional assessment of top-tier apps for depression and smoking cessation in the US and Australia, found that the majority of health apps share data with third parties, but only a couple disclosed the practice to consumers in their privacy policies.

Only 16 of the evaluated applications mentioned these additional uses of data sharing, despite the fact that the majority of the apps were forthright about the primary use of their data.

According to the study, nearly half of the apps sent data to a third party yet didn't have a privacy policy, and in more than 80% of cases data was shared with Google and Facebook for marketing purposes.

Another study, published in the British Medical Journal in March 2019, found that the majority of the top 24 health education Android applications in the US shared user data without explicitly informing users. In 2021, a study conducted by Knight Ink and Approov found that the 30 most popular mHealth apps are highly vulnerable to API attacks, which could result in the exploitation of health data. Only a few app developers were found in violation of the Federal Trade Commission's health breach rule.

The guideline from ACP, ATA, and ORCHA aims to help the healthcare industry better comprehend product safety. "There has been no clear means to establish if a product is safe to use in a field of 365,000 goods, where the great majority fall outside of existing standards, such as medical device regulations, federal laws, and government counsel," as per the announcement. 

The adoption of digital health tools, covering condition management, clinical risk assessment, and decision support, has been hampered by a lack of guidance. The guide is a crucial step toward identifying and developing digital health technologies that deliver benefits while protecting patient safety, according to ACP President Ryan D. Mire, MD. The guidelines were developed using the clinical expertise of ACP and ATA members, along with ORCHA's app assessment experience.

Alongside the new framework, ACP launched a pilot test of digital health solutions evaluated against it. Mire hopes the trial will help providers identify the most effective features for recommending high-value digital health technologies to patients and surface potential impediments to wider digital health adoption.

FTC: Health App and Device Makers Should Comply With Health Breach Notification Rule

 

On September 15, the Federal Trade Commission approved a policy statement reminding makers of health applications and connected devices that gather health-related data to follow a ten-year-old data breach notification rule. The move is part of the agency's push toward more robust technology enforcement under Chair Lina Khan, who hinted that more scrutiny of the data-based ecosystems around such apps and devices could be on the way.

In written remarks, Chair Lina Khan stated, "The Commission will enforce this Rule with vigour." According to the FTC, the law applies to a range of vendors, as well as their third-party service providers, who are not covered by the HIPAA breach notification rule but are held liable when clients' sensitive health data is breached. 

After being charged with studying and establishing strategies to protect health information as part of the American Recovery and Reinvestment Act in 2009, the FTC created the Health Breach Notification Rule. 

The rule requires suppliers of personal health records and PHR-related companies to notify U.S. consumers and the FTC when unsecured identifiable health information is breached, or risk civil penalties, according to the FTC. "In practical terms, this means that entities covered by the Rule who have experienced breaches cannot conceal this fact from those who have entrusted them with sensitive health information," the FTC says. 

Since the rule's inception, there has been a proliferation of apps for tracking anything from fertility and menstruation to mental health, as well as linked gadgets that collect health-related data, such as fitness trackers. 

The FTC's warning comes after the agency and fertility mobile app maker Flo Health reached an agreement in June over data-sharing privacy concerns. According to the FTC, the start-up company misled millions of women about how it shared their sensitive health data with third-party analytics firms like Facebook and Google, in violation of the FTC Act. 

According to privacy attorney Kirk Nahra of the law firm WilmerHale, the FTC's actions on the Health Breach Notification Rule "are an interesting endeavour to widen how that rule has been understood since it was implemented."

"It is focusing attention on a much larger group of health-related companies, and changing how the FTC has looked at that rule and how the industry has perceived it. I expect meaningful challenges to this 'clarification' if it is put into play," he notes. 

Failure to comply might result in "monetary penalties of up to $43,792 per violation per day," according to the new policy statement.

1.2 Million People Affected by Practicefirst's Supply Chain Ransomware Breach

 

One of the largest health data breaches disclosed to federal regulators so far this year is a supply chain ransomware attack that affected over 1.2 million people. Practicefirst, a medical management services company based in Amherst, New York, disclosed the breach to federal officials on July 1. According to the company's breach notification statement, it paid a ransom in exchange for the attackers' promise to destroy and not further expose the files seized in the incident.

The HIPAA Breach Reporting Tool, a website run by the Department of Health and Human Services that lists health data breaches impacting 500 or more people, says that Practicefirst reported the event affecting more than 1.2 million people. The Practicefirst hack was the sixth-largest health data breach reported on the HHS website so far in 2021 as of Tuesday.

According to Practicefirst's breach notification statement, on December 30, 2020, "an unauthorized actor who attempted to deploy ransomware to encrypt our systems copied several files from our system, including files that include limited patient and employee personal information." When the corporation learned of the situation, it says it shut down its systems, changed passwords, notified law enforcement, and hired privacy and security specialists to help.

"The information copied from our system by the unauthorized actor before it was permanently deleted, included name, address, email address, date of birth, driver’s license number, Social Security number, diagnosis, laboratory and treatment information, patient identification number, medication information, health insurance identification and claims information, tax identification number, employee username with password, employee username with security questions and answers, and bank account and/or credit card/debit card information," Practicefirst says. 

"We are not aware of any fraud or misuse of any of the information as a result of this incident," the company says. "The actor who took the copy has advised that the information is destroyed and was not shared." Many security experts believe that such promises made by hackers are untrustworthy. "Cybercriminals who infiltrate information systems are not reputable or reliable. By their nature, they will lie, cheat and steal," says privacy attorney David Holtzman of consulting firm HITprivacy LLC. 

"Vendors to healthcare organizations should be transparent to the public and to the organizations contracted with those providers to make clear statements as to what happened, what data may have been compromised and what steps they are taking to notify the organizations they serve of the data that was put at risk."

Ransomware Attack Leaks GenRx’s Data

 

GenRx Pharmacy, based in Scottsdale, AZ, is notifying individuals of a data breach incident that may affect the security of their information. While the pharmacy is not aware of any actual harm to individuals as a result of the incident, it is sending potentially affected people information by First Class mail describing the steps it has taken and what they can do to further protect themselves against possible misuse.

On September 28, 2020, the pharmacy discovered evidence of ransomware on its systems and promptly launched an investigation, retaining independent information security and technology experts to assist with incident response and forensic analysis. During the ransomware attack, the pharmacy retained full access to its data through unaffected backups and was able to maintain continuous business operations while it investigated. Working with forensic experts, the pharmacy cut off the cybercriminals' access to its systems the same day and confirmed that an unauthorized third party had deployed the ransomware just one day earlier. On November 11, 2020, the pharmacy confirmed that the cybercriminals had exfiltrated some files containing certain health-related data that the pharmacy used to process and ship prescribed products to patients.

According to the notification, the cybercriminals accessed health data of certain former GenRx patients: patient ID, transaction ID, first and last name, address, telephone number, date of birth, sex, allergies, medication list, health plan data, and prescription data. The pharmacy does not collect patient Social Security numbers ("SSNs") or maintain financial data, so it is extremely unlikely that the cybercriminals could have accessed that information about GenRx patients during this incident.

An entry on the U.S. Department of Health and Human Services HIPAA breach portal shows that more than 137,000 GenRx patients are being notified about the incident. GenRx Pharmacy has upgraded its firewall firmware, added additional anti-virus and web-filtering software, implemented multi-factor authentication, expanded Wi-Fi network traffic monitoring, provided additional training to employees, updated internal policies and procedures, and installed real-time intrusion detection and response software on all workstations and servers that access the network.

The pharmacy is evaluating further options to improve its protocols and controls, technology, and training, including strengthening encryption. Although SSNs and financial data were not affected by this incident, the pharmacy recommends, as a general best practice, that individuals monitor account statements and free credit reports to spot potential errors.