
Artificial intelligence (AI), the ability of computers to process and learn from data to make accurate predictions, is shaping up to have a significant impact on how we use technology. AI is showing tremendous promise in healthcare in general, and in pharmacy specifically. Here, we’ll take a look at the current state of AI in pharmacy, and at what the future holds.

Khoa Nguyen, Pharm.D., Clinical Assistant Professor, Department of Pharmacotherapy and Translational Research, University of Florida College of Pharmacy

Khoa Nguyen, Pharm.D., is a clinical assistant professor in the department of pharmacotherapy and translational research at the University of Florida College of Pharmacy, with additional expertise in pharmacy informatics. He has worked on developing technology for the University of Florida’s electronic health record (EHR), including implementing AI technology as part of the university’s larger AI initiative. This initiative involves faculty with expertise in various areas of AI, both within the pharmacy school and in other areas of healthcare, all working to develop algorithms and implement AI into clinical services across the health system. Nguyen reports that the College of Pharmacy is currently working on an AI implementation to assess opioid overdose risk.

While there are different definitions of artificial intelligence, Nguyen defines AI as the capability of a computer to discover meaning in data, generalize from it, and, most importantly, learn from past experience. “AI systems are able to perform a task similar to the way humans do,” says Nguyen, “and in some cases even mimic human intellectual characteristics.”


What goes into creating an AI application? First, AI requires data. Just as with human decision-making, good data is central to good outcomes. The data trains an algorithm, and the algorithm can be integrated into software such as an EHR or pharmacy management system. “Our goal in the current project,” says Nguyen, “is that when a patient comes in for care, the AI algorithm will assess whether that patient has a high risk of opioid overdose based on multiple factors. In cases where the risk is high, there’s clinical decision support so that the provider can make the optimal decision to reduce this risk for the patient.”

Next, developers of an AI application need to use different techniques to learn from the data and create an algorithm that produces accurate predictions. “Algorithm” is perhaps as significant a term in tech these days as AI is, and it’s simply a process or a set of rules for solving a problem. Nguyen gives several examples of the methods that the University of Florida uses to develop AI algorithms. “Machine learning is the technique that we use most commonly,” he says, “especially in this overdose risk project.” There are multiple types of machine learning techniques, notes Nguyen. In one blog post, an IBM analytics expert breaks machine learning down into three broad categories: supervised, unsupervised, and semi-supervised. The main difference between supervised and unsupervised learning is labeled data: supervised learning uses labeled input and output data, while unsupervised learning does not. In semi-supervised learning, only some of the data in a training set is labeled.
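The labeled-versus-unlabeled distinction can be sketched with toy code. This is a purely illustrative example in plain Python, not the University of Florida’s actual pipeline; the data values and function names are invented:

```python
# The same toy data, used with and without labels.

def supervised_fit(points, labels):
    """Nearest-centroid classifier: trains on labeled input/output pairs."""
    centroids = {}
    for label in set(labels):
        members = [p for p, l in zip(points, labels) if l == label]
        centroids[label] = sum(members) / len(members)
    return centroids

def predict(centroids, x):
    """Assign x to the label whose centroid is closest."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

def unsupervised_fit(points, k=2, iters=10):
    """1-D k-means: discovers k groups with no labels at all."""
    centers = [min(points), max(points)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(k), key=lambda i: abs(centers[i] - p))
            clusters[idx].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

doses = [1.0, 1.2, 0.9, 5.1, 4.8, 5.3]                 # toy feature values
risk = ["low", "low", "low", "high", "high", "high"]   # labels

model = supervised_fit(doses, risk)
print(predict(model, 4.9))              # classifies using the learned labels
print(sorted(unsupervised_fit(doses)))  # recovers two groups, no labels used
```

Here the supervised model needs the “low”/“high” labels to classify a new value, while the unsupervised pass recovers the same two groups from the numbers alone, which is exactly the difference the IBM post describes.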


Among the other approaches available, according to Nguyen, are artificial neural networks, which Wikipedia describes as loosely modeling the neurons in a biological brain, and deep neural networks, which are artificial neural networks with multiple layers between the input and output layers. Then there’s natural language processing (NLP), which Wikipedia describes as giving a computer the ability of “‘understanding’ the contents of documents, including the contextual nuances of the language within them.” “NLP is important when working with EHR systems,” says Nguyen, “because a lot of the data that we want to use comes from the notes that are written by providers. Which approach is best really depends on the data and the practical solution that we want to create. We usually use multiple techniques in the AI algorithm development process and try to figure out which one gives us the optimal solution in our clinical setting.”
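To see why free-text notes need NLP, consider a minimal keyword pass over a provider note. A real system would use trained NLP models rather than a hand-written term list; the terms, the crude negation rule, and the sample note below are all invented for illustration:

```python
import re

# Hypothetical sketch: structured EHR fields don't capture what providers
# write in free text, so even finding a drug mention requires text processing.

OPIOID_TERMS = {"oxycodone", "hydrocodone", "morphine", "fentanyl"}
NEGATIONS = {"no", "denies", "without"}

def flag_opioid_mentions(note):
    """Return opioid terms mentioned in a note, skipping simple negations."""
    tokens = re.findall(r"[a-z]+", note.lower())
    hits = []
    for i, tok in enumerate(tokens):
        if tok in OPIOID_TERMS:
            # Crude negation check: look at the two preceding words.
            if not any(t in NEGATIONS for t in tokens[max(0, i - 2):i]):
                hits.append(tok)
    return hits

note = "Patient denies fentanyl use. Currently taking oxycodone 5 mg PRN."
print(flag_opioid_mentions(note))  # ['oxycodone']
```

Even this toy version has to handle negation (“denies fentanyl”), hinting at the contextual nuance that the Wikipedia definition quoted above calls out.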


When it comes to implementing an AI solution and even commercializing it, Nguyen sees three key areas: techniques, data, and implementation. “First,” he says, “we’ve already talked about finding the best technique to develop the algorithm. Generally, AI researchers and developers have gotten to be really good at using the different techniques we’ve talked about that are applicable to and useful for a healthcare algorithm development process.” This is the key first step in AI development and implementation in a clinical setting.

The second important area is data quality. AI, notes Nguyen, needs clean, reliable, and relevant data to train on. “This area is more difficult,” says Nguyen. “Most of the time when we are working through the best solutions for developing an AI algorithm, we are working in what we call an optimistic data environment. What this means is that we are assuming, for example, that the patient always takes the medication, but in reality, the data is not like that. Usually the data is not that clean; it also might not be as relevant as we want it to be, and it can be biased, depending on the population.”

For example, data sets often contain more adverse events and outcomes than positive ones, since data enters the record when there’s a side effect or other issue for which a patient seeks care. “We have a lot of data on the patients who experience issues in their care,” says Nguyen, “but not as much data on the healthy patient, because they rarely go in to see the pharmacist or to the clinic or hospital. So we see how the data might be skewed into one area compared to the other.”

Then there’s another human element. The primary use of healthcare data is to support clinicians in their care of patients, notes Nguyen. “We do not create data for the sake of research or for the sake of training AI on the data,” he says.
“That is a secondary use of the data.” This means that there are commonly accepted ways providers enter data that can cause serious issues for AI. The example Nguyen provides is blood pressure readings. “It’s quite common for a staff member to enter zero for blood pressure,” he says. “To other staff this clearly just means that the blood pressure is unavailable or irrelevant. But a machine doesn’t understand that. It will read the blood pressure for that patient record as actually zero, and that is definitely going to skew the algorithm.” What it boils down to is that the quality of the data sets available for training AI creates a real difference between expectation and reality. “Blood pressure is just one common example,” says Nguyen, “but there is a lot of this kind of data in our real patient records. We have to be aware of these issues and make our best efforts to clean up those data to prevent the bias.”

Even in cases where the data is clean and carefully recorded, the success of an AI application can run up against the fact that, as Nguyen notes, clinical settings vary enough that an algorithm trained on data from one setting may not work in a second setting, because of the uniqueness of the populations, differences in the way the data is handled, or institutional policies around that data. Researchers and developers are aware of this, but it has caused problems even for such major AI initiatives as IBM Watson. “It’s a struggle,” says Nguyen. “You can have an algorithm that’s providing useful results in one region of the country, but when you apply it to data from a patient population in another region, the prediction is not that accurate anymore.”

And finally, the third area is the process of implementing AI in clinical practice.
“We spend a lot of time developing multiple algorithms to support clinicians and pharmacists,” says Nguyen, “but the literature shows that only 2% or 3% of all of these models are being implemented into clinical practice.” The reason for this, according to Nguyen, is often that when people develop AI solutions, they do so without sufficient consideration of the real problems that need to be solved or of who will actually use the tool. “If we want to develop a product that is practical and that people will use,” says Nguyen, “then we have to understand the need and the workflow, and we have to be able to make it clear to users how they will benefit from the AI product. When we don’t take the time to really develop an understanding of the need and the ability to explain the benefit, we have a real tendency to overestimate the uptake of AI technology.”
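Nguyen’s zero-blood-pressure example shows the kind of cleanup a training pipeline needs before clinical data is usable. A minimal sketch, assuming invented record fields and a simple plausibility range:

```python
# Sketch of the data-cleaning issue described above: a recorded blood
# pressure of 0 means "not taken" to clinicians, but a model would read
# it literally. Field names and the valid range here are hypothetical.

records = [
    {"patient": "A", "systolic_bp": 142},
    {"patient": "B", "systolic_bp": 0},    # staff shorthand for "unavailable"
    {"patient": "C", "systolic_bp": 118},
]

def clean_bp(records, low=50, high=260):
    """Replace physiologically implausible readings with None (missing)."""
    cleaned = []
    for rec in records:
        bp = rec["systolic_bp"]
        cleaned.append({**rec, "systolic_bp": bp if low <= bp <= high else None})
    return cleaned

cleaned = clean_bp(records)
valid = [r["systolic_bp"] for r in cleaned if r["systolic_bp"] is not None]
print(sum(valid) / len(valid))   # mean over real readings only: 130.0
```

Without the cleaning step, patient B’s zero would pull the mean systolic reading down to about 87 and bias anything trained on it.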


While there are pharmacy technology vendors that have either implemented AI in their solutions or are working on it, on the whole AI is still a work in progress as far as the typical community or institutional pharmacy setting goes. DrFirst, for example, is one vendor applying AI to e-prescriptions to accurately translate patient directions. “From my experience here at the University of Florida,” says Nguyen, “we are still in the early stages with AI. We are still trying to find good test cases that show a benefit when we implement AI into our EHR system.”

So is there truly a future for AI in pharmacy? Nguyen certainly thinks so. He sees AI growing into a role that makes pharmacy, and healthcare overall, more proactive. “This means using AI to give us the ability to prevent negative outcomes,” he says. “That is where I think AI can have the biggest impact in pharmacy over the next five years. AI is going to power our ability to predict and prevent the harmful use of a medication, to avoid side effects.” For example, notes Nguyen, AI is a powerful way to make use of pharmacogenomics in personalized medicine, predicting whether a medication will be useful before starting a patient on a drug.

“AI has become something of a buzzword,” he continues, “but we are in fact gaining a lot more momentum with AI and gaining a great deal of experience with how to use it effectively in a clinical setting.” The challenge at the moment is to keep making advances while also keeping an eye on how AI needs to fit into the care model overall. “Again,” says Nguyen, “we need to make sure we are paying attention to the whole workflow and not just the algorithms or techniques. We have to be able to answer the question: How can we actually implement an AI tool into the clinical setting in a way that benefits both the provider and the patient?” CT