Radiology is facing multiple challenges: a global shortage of radiologists1, a growing population, and an increasing demand for imaging and diagnostic services, to name a few. Together, these reduce the time radiologists have available to evaluate each scan.
This pressure has been shown to significantly and negatively influence the well-being of radiologists, pushing many to burnout4, and consequently reducing departmental productivity and the quality of medical care given to patients. The use of artificial intelligence (AI) might be one way to relieve the radiologists' workload, but what are the challenges for clinical implementation of AI, and how can we face them?
Before we go into the challenges, let's first review some of the many benefits of clinical implementation of AI.
Clinical AI software is said to improve workflow efficiency without loss of accuracy, thereby keeping radiology sustainable and accessible. AI can be applied to otherwise lengthy, labour-intensive tasks, such as volume measurements or structural segmentations, which can accelerate the diagnostic pathway while maintaining a consistent quality of care. In other words, it can give radiologists more time for complex cases, expedite simple ones, and increase standardization5.
Such increased standardization, for tasks like structured reporting, can have positive effects throughout the entire diagnostic and treatment pathway, mitigating communication barriers and optimizing therapeutic decision-making. In the case of prostate cancer, an AI-aided MRI assessment early in the diagnostic pathway may support physicians further along it: urologists with MRI-guided biopsies6, pathologists with tumor staging, and (radiation) oncologists with MRI-guided treatment7.
Nevertheless, there remain several challenges that slow or even block the clinical implementation of AI in radiology.
The topic of trustworthy AI has been discussed extensively by the scientific community, from journals to conferences8–10. Distrust of AI is said to stem from uncertainty about how much value it can actually bring to clinical practice11. That uncertainty feeds on a long list of common myths, a perceived lack of transparency, and a lack of formal education surrounding AI. Let's tackle each of these.
Some of the most well-known myths about AI in healthcare are that it might take over the job of radiologists, or even that AI would dehumanize interactions with patients8. These misconceptions, observed especially in medical students, might stem from an education gap: a recent systematic review reported a general lack of formal AI training, while still showing an overall positive attitude towards AI in radiology9. However, universities and healthcare associations are slowly implementing formal training, adding to the already extensive catalogue of resources available for self-education, such as position papers, conference workshops, and blogs dedicated to explaining AI.
Lastly, the perceived lack of transparency can stem from radiologists' doubts about aspects such as which AI is used, how it was trained and validated, or what steps the company takes in terms of security11. However, there are many systems in place to ensure the AI software is well developed: IEC and ISO standards for medical software; regulatory bodies, like the FDA, that ensure the product can be sold and therefore used in the clinical setting; and specific regulations and laws on the protection of patient information, like HIPAA, that manufacturers need to comply with. We recommend you ask the vendor about the mechanisms they have in place, so you only implement AI software you deem safe by your own standards.
Another challenge AI software has encountered on its way to clinical implementation is the scarcity of clinical proof, that is, studies that evaluate the software in a clinical setting using real-life use cases. Although software manufacturers provide validation data to regulatory bodies to obtain approval (FDA clearance or CE marking, for example), running larger-scale clinical trials can prove difficult. We therefore encourage joint research between clinics (and/or universities) and established companies to assess AI's real-world (clinical) utility, encourage better software, and thus close this gap in clinical proof. We also believe this challenge can be overcome as more peer-reviewed evaluations following newly created guidelines, such as DECIDE-AI, become available.
If large- or medium-scale structured studies are not available to you and your institution, many companies offer test runs of their software on your data. These will allow you to see and use the software in action within your population and use cases, and will provide the information, or proof, needed to evaluate clinical utility at your institution.
Another reason for hesitancy in some centers is the return on investment (ROI), both in terms of time and money. When it is unclear how long preparation, installation, training, and possibly workflow adaptation will take, potential users will understandably worry.
To alleviate those timing doubts, whether they concern the installation process or the time radiologists must invest in training and getting used to the software, we encourage potential users to keep a direct line of communication with the vendor. The vendor will be able to assess the hospital's needs on both fronts and set everything in motion to make the transition as smooth as possible, with no wasted time.
On the hospital/clinic side, there are metrics, including turnaround time (TAT), that can be used to measure productivity in radiology objectively and, consequently, to quantify the time savings enabled through AI4.
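As a minimal sketch of how such a TAT comparison might look in practice, the snippet below computes mean order-to-report turnaround time from timestamp pairs. The field layout and all exam data are hypothetical; a real analysis would pull timestamps from the RIS/PACS.

```python
from datetime import datetime

def mean_tat_minutes(exams):
    """Mean turnaround time (order-to-report) in minutes for a list of
    (ordered_at, reported_at) ISO-8601 timestamp pairs."""
    total = 0.0
    for ordered_at, reported_at in exams:
        delta = datetime.fromisoformat(reported_at) - datetime.fromisoformat(ordered_at)
        total += delta.total_seconds() / 60
    return total / len(exams)

# Entirely hypothetical exams before and after AI-assisted triage.
before = [("2024-03-01T08:00", "2024-03-01T09:30"),
          ("2024-03-01T10:00", "2024-03-01T11:00")]
after = [("2024-03-02T08:00", "2024-03-02T08:45"),
         ("2024-03-02T10:00", "2024-03-02T10:40")]

print(mean_tat_minutes(before))  # 75.0
print(mean_tat_minutes(after))   # 42.5
```

Comparing the two means over matched exam mixes gives an objective, if simplified, estimate of time savings; a rigorous evaluation would also control for case complexity and reader.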
Regarding the monetary concerns, the hospital/clinic can always run different analyses to determine the AI's financial ROI. These are often run in the form of a health technology assessment (HTA), which provides the most transparent way to promote value for money in health. Under the umbrella term of HTA, a budget impact analysis (BIA) assesses the short- to medium-term financial consequences of introducing a new technology12. BIAs are usually presented alongside other economic evaluations, such as cost-effectiveness analysis, which assesses both the costs and, most importantly, the effects of alternative health interventions (such as survival or quality-adjusted life years, QALYs).
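To make the cost-effectiveness comparison concrete: such analyses commonly report an incremental cost-effectiveness ratio (ICER), the extra cost per extra QALY gained by the new intervention. The sketch below uses entirely hypothetical per-patient figures for illustration only.

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical per-patient figures for an AI-aided vs. standard pathway.
cost_ai, qaly_ai = 12_500.0, 8.25
cost_std, qaly_std = 11_000.0, 8.0

print(icer(cost_ai, qaly_ai, cost_std, qaly_std))  # 6000.0 (cost per QALY gained)
```

The resulting figure is then compared against a willingness-to-pay threshold to judge whether the intervention offers value for money.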
A recent article calls for AI governing bodies: interdisciplinary organizations fostering collaboration among key stakeholders along the whole AI care pathway to facilitate successful clinical implementation of AI resources13. Such a governing body could create a roadmap for which tools to implement, how to assess them for their population, how to implement them and, not unimportantly, how to monitor and maintain these implementations13.
This can be important because, while AI offers multiple avenues of clinical benefit, hospital- and patient-level differences may lead to different implementation priorities. For example, centres with a high patient volume and long, fatigue-prone care pathways may prioritize the time savings enabled through streamlined workflows. Other, financially constrained centres may instead aim to improve accuracy and reduce overtreatment and redundant second-line diagnostic workups, relieving overworked clinicians and improving patients' quality of life.
It is well known that AI solutions can support radiologists and decrease their workload, as they have the power to improve efficiency, increase accuracy and standardization, and improve the lives of patients. However, the clinical implementation of AI is still happening at a small scale. We believe the challenges AI implementation faces in radiology can certainly be overcome: by introducing better mechanisms for software evaluation, by increasing the number of studies performed with AI, and by developing governance bodies inside hospitals to perform all the assessments necessary to evaluate a given AI solution.
We also believe the healthcare outcomes and benefits of AI can outweigh the challenges and costs of its implementation. Nevertheless, not all available solutions will be right for your institution. As a radiologist or healthcare professional looking to implement AI in your practice, some hesitancy is healthy: it helps ensure you end up with the solution that brings the most value to your work and to your patients. We encourage you to assess whichever of these challenge factors matter most in your setting.