Fear of Artificial Intelligence in Healthcare Can Delay Benefits

Fear of artificial intelligence in healthcare needs to be managed so that the industry can take advantage of AI's benefits for patients.


By Fred Donovan

Fear of artificial intelligence in healthcare needs to be managed so that the industry can take advantage of AI's benefits for patients, said Michael Abramoff, founder and CEO of IDx Technologies.

Abramoff, who is also the Robert C. Watzke Professor at the University of Iowa, told a Federal Trade Commission (FTC) hearing this week that he faced fear and concern in his effort to develop an AI product to detect diabetic retinopathy, the leading cause of blindness in the United States.

The product, an autonomous diagnostic AI system, can be used at the point of care without requiring a human reviewer or oversight. This shifts specialty diagnostics from the academic setting to the primary care setting, increasing the number of patients who can be tested and reducing testing costs, he said.

Abramoff explained that he first proposed the product in 2000, when he discovered an algorithm that could be used for diagnosing diabetic retinopathy. But he faced opposition from colleagues and the FDA.

Abramoff’s colleagues nicknamed him the “Retinator” because “he will destroy jobs and also he’s not being safe for patients. Now, they think differently,” he related.

“The fear of AI is not new. It’s there, and it’s real. So, we need to manage that,” he said.

Abramoff said that he had to raise $22 million to develop his system and get FDA approval, which he just received this year, almost two decades after he first came up with the idea.

“It took a long time to do this, but now essentially the rules are set for how you approve autonomous AI,” he said.

“Technology used in a lab does not directly transfer to what we do in healthcare. Patient safety is paramount. If we don’t do it right, there will be pushback and we will lose all of the advantages that AI can provide in healthcare for better quality, lower costs, and better accessibility,” he concluded.

ONC Chief Scientist Teresa Zayas Caban told the FTC hearing that her agency has seen a recent surge in clinical applications of AI, including a tool developed by Google to detect whether a patient’s cancer has metastasized.

“These tools have the potential to improve care but may require adaptation for successful clinical use,” she said.

Caban laid out criteria that AI tools need to meet in order for them to be widely used in healthcare (a brief illustrative sketch follows the list):

  • Demonstrate technical soundness of the algorithms 
  • Perform at least as well as current standards of clinical care
  • Test across a wide range of situations
  • Provide improvement in patient outcomes, practicality of use, or reduced costs
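
To make the second criterion concrete, here is a minimal, purely illustrative Python sketch of how a developer might check an AI screening tool against prespecified performance endpoints on an independently graded validation set. The file name, column names, and threshold values are hypothetical and are not drawn from the hearing, the IDx system, or any FDA submission.

    # Purely illustrative: check an AI screening tool's sensitivity and specificity
    # against hypothetical prespecified endpoints on a held-out validation set.
    import csv

    # Hypothetical endpoints reflecting the "at least as well as the current
    # standard of clinical care" criterion.
    REQUIRED_SENSITIVITY = 0.85
    REQUIRED_SPECIFICITY = 0.82

    def screening_metrics(labels, outputs):
        """Return (sensitivity, specificity) for binary ground truth vs. AI output."""
        tp = sum(1 for y, p in zip(labels, outputs) if y == 1 and p == 1)
        fn = sum(1 for y, p in zip(labels, outputs) if y == 1 and p == 0)
        tn = sum(1 for y, p in zip(labels, outputs) if y == 0 and p == 0)
        fp = sum(1 for y, p in zip(labels, outputs) if y == 0 and p == 1)
        sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
        specificity = tn / (tn + fp) if (tn + fp) else 0.0
        return sensitivity, specificity

    # Hypothetical validation file: one row per patient, graded independently of the AI.
    with open("validation_set.csv", newline="") as f:
        rows = list(csv.DictReader(f))  # columns: ground_truth, ai_output (0 or 1)

    labels = [int(row["ground_truth"]) for row in rows]
    outputs = [int(row["ai_output"]) for row in rows]
    sens, spec = screening_metrics(labels, outputs)

    print(f"Sensitivity {sens:.3f} (endpoint >= {REQUIRED_SENSITIVITY})")
    print(f"Specificity {spec:.3f} (endpoint >= {REQUIRED_SPECIFICITY})")
    if sens >= REQUIRED_SENSITIVITY and spec >= REQUIRED_SPECIFICITY:
        print("Meets the prespecified performance endpoints")
    else:
        print("Does not meet the prespecified performance endpoints")

In practice, such endpoints would typically be agreed on with regulators and clinical experts before the validation study begins, rather than chosen after the results are in.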

Caban related that her agency worked with AHRQ and the Robert Wood Johnson Foundation on a report on AI in healthcare conducted by JASON.

The JASON report, published in December last year, looked at how AI can shape the future of public health, community health, and healthcare delivery over the next ten years.

The report concluded that the time is ripe for the adoption of AI for healthcare for three reasons: frustration with the existing medical system among patients and healthcare professionals, ubiquity of networked smart devices, and comfort with at-home services provided by tech companies like Amazon.

Caban explained that the report identified six areas where there remain significant challenges to AI adoption in healthcare and offered recommendations to address those challenges:

Challenge: Acceptance of AI applications in clinical practice will require validation

Recommendation: Support work to prepare AI applications for rigorous approval procedures and create testing and validation approaches under conditions different from those used for the training data sets

Challenge: Ability to leverage the confluence of personal network devices and AI tools

Recommendation: Support development of AI applications that can enhance the performance of mobile monitoring devices and apps, and of the data infrastructure needed to capture the data those devices generate to support AI applications

Challenge: Availability of and access to high quality training data from which to build and maintain AI applications in healthcare

Recommendation: Support development and access to research databases of labeled and unlabeled health data for development of AI applications in healthcare

Challenge: Executing large-scale data collection to include missing data streams

Recommendation: Collect data that are relevant to health but are not systematically collected or integrated into clinical care, such as environmental exposure data

Challenge: Building on the success in other domains

Recommendation: Support AI competitions and sharing of data in public forums

Challenge: Understanding the limitations of AI methods in healthcare applications

Recommendation: Support development of safeguards against misinformation and hype about AI in healthcare

ONC’s role in promoting AI in healthcare is to ensure that data are interoperable enough to support the development of AI, and to understand the data infrastructure issues and the standards that are needed, Caban concluded.
