
Stop Using AI for Medical Diagnosis: Experts

Artificial intelligence (AI) has become an important tool in many spheres of life, including education, work, and medical research. However, concerns have grown about patients turning to AI on their own for answers to medical questions, and the issue has become a hot topic. Today it is easy to sit at your desk and, with a few taps on your phone or computer, search online for a diagnosis of your own health problems. Experts, however, warn against relying on AI for medical diagnosis. Here is why.

Using AI for Medical Queries

AI tools such as ChatGPT from OpenAI or Copilot from Microsoft are built on language models trained on vast amounts of internet text. You can ask questions, and the chatbot responds based on patterns learned from that data. The technology can generate many responses that sound helpful, but they are not always accurate.
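The idea of "responding based on learned patterns" can be illustrated with a toy bigram model. This is a drastic simplification, not how ChatGPT actually works, but it shows why such systems produce fluent text without any notion of whether the text is medically correct (the tiny corpus here is made up for illustration):

```python
import random
from collections import defaultdict

# Toy "language model": learn which word tends to follow which word.
corpus = (
    "headaches are usually harmless headaches can signal serious illness "
    "rest and fluids help headaches"
).split()

# Count word-to-next-word transitions observed in the training text.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length=5, seed=0):
    """Emit words by repeatedly sampling a statistically likely next word.

    Nothing here checks whether the resulting claim is true -- the model
    only reproduces patterns it has seen.
    """
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("headaches"))
```

Every output is plausible-sounding because each word pair appeared in the training text, yet the model has no mechanism for judging accuracy — the same limitation, at vastly larger scale, underlies the experts' warning.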

The incorporation of AI into healthcare raises substantial regulatory and ethical concerns. There are significant gaps in the regulation of AI applications, which raises questions regarding liability and accountability when AI systems deliver inaccurate or harmful advice.

No Personalized Care

One of the main drawbacks of AI in medicine is the lack of individualized care. AI systems discover patterns in large databases, but healthcare is highly individualized. AI cannot comprehend the finer nuances of a patient's history or condition, which are frequently required for accurate diagnosis and successful treatment planning.

Bias and Data Quality

The efficacy of AI is strongly contingent on the quality of the data used to train it. AI's results can be misleading if the data is skewed or inaccurate. For example, if an AI model is largely trained on data from a single ethnic group, its performance may suffer when applied to people from other backgrounds. This can result in misdiagnoses or improper medical recommendations.
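The effect of skewed training data can be shown with a minimal sketch using made-up numbers: a decision threshold tuned only on one group misclassifies a second group whose healthy baseline differs.

```python
# Hypothetical readings (illustrative only, not real clinical data).
group_a_healthy = [36.5, 36.6, 36.7, 36.8]  # group the model was "trained" on
group_b_healthy = [37.2, 37.3, 37.4, 37.5]  # healthy group with a different baseline

# "Train" a crude rule: flag anything above the highest value seen in
# group A, plus a small margin. Group B was never represented in training.
threshold = max(group_a_healthy) + 0.1

def flag_abnormal(reading):
    return reading > threshold

# Works for the group it was tuned on...
print([flag_abnormal(r) for r in group_a_healthy])
# ...but labels every healthy member of the unrepresented group "abnormal".
print([flag_abnormal(r) for r in group_b_healthy])
```

Real diagnostic models are far more sophisticated, but the failure mode is the same: performance degrades on populations that were underrepresented in the training data.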

Misuse

The ease of access to AI for medical advice may result in misuse or misinterpretation of the information it delivers. Quick, AI-generated responses may be taken out of context or applied inappropriately by people without medical training. Such events can delay necessary medical intervention or lead to ineffective self-treatment.

Privacy Concerns

Using AI in healthcare usually requires entering sensitive personal information. This creates serious privacy and data security concerns, as breaches could allow unauthorized access to or misuse of user data.
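One practical mitigation is to strip obvious identifiers before pasting any text into an online AI tool. The sketch below is illustrative only — the regexes are simplistic, and real de-identification (e.g. under HIPAA's Safe Harbor rules) requires far more care (note it does not catch names, for instance):

```python
import re

# Simplistic patterns for common identifiers; a real de-identification
# pipeline would cover many more categories (names, addresses, dates...).
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

msg = "Patient Jane, jane@example.com, 555-123-4567, reports chest pain."
print(redact(msg))
```

Redaction reduces, but does not eliminate, the risk that sensitive details end up stored or shared by a third-party service.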

Telehealth Companies Monetizing and Sharing Health Data

Reports of data sharing have emerged despite company promises to prospective patients that their user data, including information about mental health and addiction treatment, would remain confidential.

Senators Amy Klobuchar (D-Minn.), Susan Collins (R-Maine), Maria Cantwell (D-Wash.), and Cynthia Lummis (R-Wyo.) expressed concern over how well-known telehealth companies protect patients' sensitive health information.

They referenced an investigation by STAT and The Markup that uncovered the deliberate sharing of patient data by telehealth companies with advertising platforms run by tech giants such as Meta (Facebook), Google, TikTok, Microsoft, and Twitter.

It has been reported that these digital health companies are monitoring and distributing the personally identifiable health information of their clients, including their contact information, financial details, and more. 

“Telehealth…has become a popular and effective way for many Americans to receive care.  One-fifth of the U.S. population resides in rural or medically-underserved communities where access to virtual care is vital. This access should not come at the cost of exposing personal and identifiable information to the world’s largest advertising ecosystems,” the senators added. 

The four senators recently sent letters to telehealth companies Monument, Workit Health, and Cerebral, inquiring about their data-sharing practices.

“Recent reports highlight how your company shares users’ contact information and health care data that should be confidential. This information is reportedly sent to advertising platforms, along with the information needed to identify users. This data is extremely personal, and it can be used to target advertisements for services that may be unnecessary or potentially harmful physically, psychologically, or emotionally,” the letter reads.

Telehealth involves the provision of healthcare services and information through the use of electronic communication and information technologies. It enables remote patient-provider communication to provide services including consultation, education, monitoring, intervention, and even admission for treatment, overcoming the barriers of distance.

Medical Device Cybersecurity: What Next in 2022?

A survey report on medical device cybersecurity, along with trends and predictions for 2022, was published by Cybellum. The report makes clear that securing medical devices has become a very challenging task.

Medical devices are increasingly software-driven machines, and cybersecurity risk emerges rapidly from new vulnerabilities, complex supply chains, new suppliers, and new product lines. Keeping an entire product portfolio secure and compliant at all times can seem impossible, which makes learning from peers and identifying the best path forward more crucial than ever.

In the survey, security experts from hundreds of medical device manufacturers were asked what their biggest challenges are and how they plan to tackle them in 2022 and beyond. Some of the intriguing findings about manufacturers' security readiness include: 
  • The top security difficulty for respondents is managing an expanding number of tools and technologies, which is partially explained by a lack of high-level ownership. 
  • Seventy-five percent of respondents said they don't have a dedicated senior manager in charge of device security. 
  • Almost 90% of respondents acknowledged that their companies need to improve in critical areas, including software bill of materials (SBOM) analysis and compliance readiness. 
  • In 2022, nearly half of companies increased their cybersecurity spending by more than 25%. 
  • More than 55% of medical device makers do not have a dedicated product security incident response team (PSIRT). 
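The SBOM analysis mentioned in the survey can be sketched in miniature: parse a software bill of materials and list each component with its version, so known-vulnerable versions can be cross-checked. The field names below follow the CycloneDX JSON layout, though the tiny inline document is made up for illustration and real SBOM tooling does far more:

```python
import json

# A miniature CycloneDX-style SBOM (illustrative component list).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"type": "library", "name": "openssl", "version": "1.1.1k"},
    {"type": "library", "name": "zlib", "version": "1.2.11"}
  ]
}
"""

def list_components(document: str):
    """Return (name, version) pairs for every component in the SBOM."""
    bom = json.loads(document)
    return [(c["name"], c["version"]) for c in bom.get("components", [])]

for name, version in list_components(sbom_json):
    print(f"{name} {version}")
```

An inventory like this is the starting point for the compliance readiness the survey flags: without knowing which components ship in a device, a manufacturer cannot tell which disclosed vulnerabilities apply to it.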
David Leichner, CMO at Cybellum said, “We embarked on this survey to gain a more comprehensive understanding of the main challenges facing product security teams at medical device manufacturers, as part of our effort to help to better secure the devices. Some of our findings were quite surprising and highlight serious gaps that exist both in processes for securing medical devices and in regulation compliance.”