Artificial intelligence has been mystified to the point of near satire by films such as I, Robot and Her. And while we can never be too sure of what the future holds, understanding the current advancements and limitations of AI, particularly in healthcare and on medical-grade computers, can help us sidestep potential pitfalls and improve how it’s applied.

Artificial intelligence is a complex system of algorithms used to process vast amounts of data and imitate human cognition. Essentially, computer algorithms ingest enormous amounts of data and reach conclusions without the direct input of a human.

This modern form of machine learning has inspired unique applications across several industries. Farms have begun using it to predict weather and water conditions for optimized yields, the marketing sector uses it to analyze customer feedback and adjust campaigns, and schools have even begun using it to create hyper-personalized curricula based on each student’s learning style.

AI’s use cases seem endless across hundreds of different industries, but no other sector has embraced this versatility more than healthcare.

According to a report by IDC FutureScape, within the next five years 30% of business and clinical decisions made by health and life science organizations will be informed by AI insights. The report goes on to say that AI-moderated collaboration between humans and machines will transform the way one in three health systems operate as early as 2023.

Healthcare’s Infatuation with AI

On paper, healthcare’s love affair with artificial intelligence makes a lot of sense. Doctors and nurses deal with large amounts of complex information and variables. Everything from allergies and medications to past injuries plays an active role in how treatment is administered to each patient.

AI has done wonders in processing this information. It has even leveraged that information to make predictive diagnoses for patients without doctor input. Moorfields Eye Hospital NHS Foundation Trust tested this capability in its collaboration with DeepMind.

The group developed an AI capable of identifying over 50 different types of eye disease and built software that could recommend patients for the treatments they were ideal candidates for. The system was trained on over 15,000 eye scans from 7,500 different patients. Using this data, it recommended the same treatments as a panel of 8 doctors with a success rate of 94%.

Unfortunately, while a 94% may earn you some kudos on tests and papers, when dealing with a patient’s vision or, more serious still, their life, even a 1% inaccuracy warrants double- or triple-checking the results. And therein lies the main problem with healthcare AI as it’s implemented today.

There’s no real way of “double checking” an AI’s answer because the system isn’t transparent about how it reaches that answer.

The “Black Box” Conundrum

AI isn’t human, despite how adamantly we try to blur that line. It cannot explain the rationale behind its point of view; all it can do is provide that point of view.

This is because the algorithms an AI system goes through when coming up with an answer are often so complex that a through-line between the proposed answer and the steps taken to come up with that answer can hardly be drawn. This is why many refer to AI as a “Black Box” operation, one in which the process behind the provided answer is hidden behind a curtain and all the human operator sees is the result. 
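For a sense of what that opacity looks like in practice, here’s a toy sketch in Python. The network, its weights, and the patient features are all invented for illustration; the point is that the model produces a score, but nothing in its weight matrices reads as a rationale a clinician could inspect.

```python
# Toy illustration of the "black box" problem: a tiny neural network turns
# patient features into a score, but its weight matrices offer no
# human-readable rationale. All numbers here are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 8))   # input features -> hidden layer
W2 = rng.normal(size=(8, 1))   # hidden layer -> output score

def predict(features: np.ndarray) -> float:
    hidden = np.tanh(features @ W1)          # 8 opaque intermediate values
    score = (hidden @ W2).item()             # a single opaque number
    return float(1 / (1 + np.exp(-score)))   # sigmoid -> pseudo-probability

# Three made-up, normalized patient features (e.g. age, blood pressure, glucose).
print(predict(np.array([0.4, 0.7, 0.2])))    # just a number, no explanation
```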

Black Box AI is surely useful in cases like the DeepMind project, but it’s far from a perfect solution. Doctors will only be able to use these systems to either double check conclusions they’ve already reached or to obtain a lead on a diagnosis that they hadn’t yet considered.

Another issue with AI’s incorporation into healthcare is the difference in how doctors and AI are trained. A doctor is specifically trained to notice outliers in patients – exceptions to standard treatment methods that would otherwise be effective. If, for example, standard protocol calls for the use of a certain drug but a doctor knows their patient is allergic to it, they use their human reasoning to come up with an alternative solution.

AI, on the other hand, can’t pick up on these outliers if it isn’t trained with the appropriate data. Moreover, doctors can’t even check to see if an AI system picked up on an outlier because it doesn’t show its work. 

But what if it could?


Explainable AI: Removing the Curtain

As we stand right now, the holy grail of AI advancement is the long-awaited explainable AI system. As the name implies, explainable AI (XAI) is a set of algorithms that CAN explain the reasoning behind its answers. XAI systems already exist today, but they’re limited to simpler algorithms that are easier to trace. Decision trees, for example, aren’t very complex at all, and a very clear line can be drawn between the answer they spit out and the way that answer was attained. Unfortunately, more complicated and powerful algorithms such as neural networks sacrifice explainability and transparency for power and accuracy.
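To make that contrast concrete, here’s a minimal, purely illustrative sketch using scikit-learn. The symptom features and training data are made up for demonstration; the point is only that a decision tree’s prediction can be dumped as a chain of readable rules.

```python
# Minimal, illustrative sketch: a decision tree's prediction can be traced
# as a chain of threshold checks. Feature names and data are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["fever_temp_c", "headache", "fatigue"]
X = [
    [39.1, 1, 1],
    [36.8, 0, 0],
    [38.5, 1, 0],
    [37.0, 1, 1],
]
y = [1, 0, 1, 0]  # 1 = flu-like illness, 0 = not

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the exact rules the model applies, so any answer can
# be traced back to the thresholds that produced it.
print(export_text(tree, feature_names=features))
```

A neural network offers no equivalent printout of human-readable rules – its knowledge lives in thousands or millions of numeric weights – which is exactly the trade-off described above.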

And with that in mind, it’s likely that the next big breakthrough in AI will be “interpretable” AI systems, ones in which complex and powerful algorithms can also be backtracked and observed for their reasoning. Many are attempting to reach XAI by implementing a “Reasoning Engine” – a built-in operation within an AI designed to provide links between the small bits of transparent information we know the system is pulling and the actual proposed solution.

To give a very simplified example, imagine you show an AI a picture of an operating room and the system is able to identify it as such. You’re able to see that the AI noticed a few key things in the picture, including surgical tools, a medical cart computer, and sterilization equipment. These are the small bits of transparent information we know the system picked up on. Using the proposed reasoning engine, the system would be able to correlate its solution with these bits of information, providing an answer along the lines of:

“Surgical tools, a medical cart computer, and sterilization equipment were noticed in the photo. Based on data, it is observed that 95% of operating rooms include one or all of these items. Based on their appearance in this photo, it can be concluded with high certainty that this is a photo of an operating room.”
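As a rough illustration of how such a reasoning engine might stitch detections together into prose, here’s a hypothetical sketch. The object labels, the 95% figure, and the wording are taken from the example above rather than from any real system, and a real implementation would sit on top of an actual object-detection model.

```python
# Hypothetical sketch of a "reasoning engine": correlate detected objects
# with a known statistic and phrase the conclusion. The labels, the 95%
# figure, and the wording come from the example above, not a real system.
SCENE_EVIDENCE = {
    "surgical tools": 0.95,
    "medical cart computer": 0.95,
    "sterilization equipment": 0.95,
}

def explain_scene(detected_objects: list[str], label: str) -> str:
    relevant = [obj for obj in detected_objects if obj in SCENE_EVIDENCE]
    if not relevant:
        return f"No supporting evidence was found for the label '{label}'."
    items = ", ".join(relevant)
    return (
        f"{items} were noticed in the photo. Based on data, roughly 95% of "
        f"{label}s include one or all of these items, so it can be concluded "
        f"with high certainty that this is a photo of a {label}."
    )

print(explain_scene(
    ["surgical tools", "medical cart computer", "sterilization equipment"],
    "operating room",
))
```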

It’s quite easy to see why a more fleshed-out answer like this could help the healthcare space. And naturally, several brilliant minds are already on the hunt to create XAI systems capable of this kind of reasoning. After all, the potential breakthroughs such a system could provide are too enticing to sleep on.

AI Assisted Precision Medicine

Precision medicine refers to treatment catered and personalized to a patient based on everything from their lifestyle and habits to their environment and even their genetics. Statistical data on all of these variables is vast and wide-ranging, making them perfect candidates for processing by a deep learning AI program.

With explainable AI, doctors can receive insight into not only the cause of ailments in a patient, but into specialized combinations of drugs, treatments, and procedures tailored to that specific patient as well. 

The best part about AI-assisted precision medicine is that doctors will be able to trust the treatment plans the algorithms deliver, because the program will provide exact reasoning for each proposed treatment.

AI Assisted Diagnosis

Simply saying a model is 95% accurate is not enough for a trained doctor. A professional needs to see how a machine comes up with its answer so they can reverse-engineer it and double-check the results.

Let’s take the case of someone who might have come down with the flu. An XAI system may tell a doctor, “No, this patient does not have the flu,” going on to justify that statement (using its Reasoning Engine) by adding, “the patient exhibits sneezing and headaches but not fatigue, a major and very common symptom of the flu.” 
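A heavily simplified, rule-based sketch of how that kind of justification could be generated is shown below. The symptom lists are invented for illustration; a real XAI system would derive its reasoning from clinical data rather than hard-coded rules.

```python
# Simplified, rule-based sketch of the flu example. The symptom lists are
# invented; a real XAI system would learn this reasoning from clinical data.
MAJOR_FLU_SYMPTOMS = {"fatigue", "fever"}
MINOR_FLU_SYMPTOMS = {"sneezing", "headache", "cough"}

def assess_flu(observed: set[str]) -> str:
    missing_major = MAJOR_FLU_SYMPTOMS - observed
    present_minor = MINOR_FLU_SYMPTOMS & observed
    if missing_major:
        return (
            "No, this patient likely does not have the flu: the patient "
            f"exhibits {', '.join(sorted(present_minor))} but not "
            f"{', '.join(sorted(missing_major))}, major and very common "
            "symptoms of the flu."
        )
    return "The observed symptoms are consistent with the flu."

print(assess_flu({"sneezing", "headache"}))
```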

From here, the practicing physician may agree, now more confident in their diagnosis after having it confirmed by an AI whose answer can itself be double-checked. Conversely, they could disagree with the AI, perhaps because they noticed something the system missed, such as a different symptom or a facet of the patient’s health history that raises the likelihood of contracting the flu.

Regardless, because the AI system is now explaining itself, even if it’s incorrect, a human caregiver is able to make a more educated decision on how to proceed with treatment. 

Moving Towards Transparent and Accurate AI

AI will continue to grow in versatility and application as we continue to fine-tune it over the years. While there are certainly limitations at the moment, there’s no denying that its prospective future applications are incredibly exciting and well worth the investments of time and money being put into the industry. To learn more about these exciting applications and what they mean for the healthcare space, contact a professional from Cybernet today.