Mar 29, 2024
Join Dr. Bermeo in a conversation with Dr. Ezequiel Gleichgerrcht, Dr. Erik Kaestner, and Dr. Peter Widdess-Walsh, as they discuss the article, "More Than Meets the Eye: Human Versus Computer in the Neuroimaging of Temporal Lobe Epilepsy".
This podcast was sponsored by the American Epilepsy Society.
We’d also like to acknowledge contributing editor Dr. Rohit Marawar, and the team at Sage Publishing.
Summary
This episode of the Epilepsy Currents podcast delves into a study on the application of artificial intelligence (AI) to differentiating temporal lobe epilepsy from Alzheimer's disease and healthy controls using MRI-based deep learning. The study, co-authored by Dr. Ezequiel Gleichgerrcht and Dr. Erik Kaestner, demonstrated AI's potential to significantly improve the accuracy of neuroimaging analysis in epilepsy. The commentary by Dr. Peter Widdess-Walsh provided a critical examination of these findings, highlighting the practical implications for clinical practice and the limitations of current methodologies. The discussion framed AI not only as a tool for enhancing diagnostic accuracy but also as a means to uncover subtle neurobiological differences between diseases, potentially leading to more personalized treatment approaches.
Transcript
Adriana Bermeo, MD (Host): Hello and welcome to episode five of Epilepsy Currents podcast. Today, we will be talking about the use of artificial intelligence and machine learning in the study of temporal lobe epilepsy. I am your host, Adriana Bermeo. I am the senior podcast editor for Epilepsy Currents, the official journal of the American Epilepsy Society.
Today, I am joined by a team of experts in this very exciting neurodiagnostic arena. I want to first welcome contributing editor Dr. Peter Widdess-Walsh, who wrote the commentary "More Than Meets the Eye: Human Versus Computer in the Neuroimaging of Temporal Lobe Epilepsy." The commentary was published in the December 2023 issue of Epilepsy Currents. Dr. Widdess-Walsh is a consultant neurologist and Clinical Associate Professor at Beaumont Hospital in Dublin, Ireland. Peter, welcome to the Epilepsy Currents podcast.
Peter Widdess-Walsh, MD: Thank you very much, Adriana. I'm delighted to be here.
Host: Thank you for being here. It is also my pleasure to welcome Dr. Ezequiel "Zeke" Gleichgerrcht, who authored the work that inspired this commentary, titled "MRI-Based Deep Learning Can Discriminate Between Temporal Lobe Epilepsy, Alzheimer's Disease, and Healthy Controls." The paper was published in Nature Communications Medicine in 2023. Dr. Gleichgerrcht is an Assistant Professor of Neurology at Emory University. Zeke, welcome.
Ezequiel Gleichgerrcht, MD: Thank you. Thanks for having me.
Host: We are also joined by Dr. Erik Kaestner, co-author on the original paper and lead author on other related publications addressing the use of artificial intelligence in epilepsy. Dr. Kaestner is a postdoctoral scholar at University of California, San Diego. Erik, thanks for being with us today.
Erik Kaestner, PhD: Thank you. Excited for the conversation.
Host: Very good. I can tell you that artificial intelligence is one of the most requested topics when we ask our listeners and the epilepsy community. The challenges and opportunities presented by the use of AI in almost every aspect of our daily life, but particularly in healthcare, are intriguing, exciting, and potentially concerning to some. But before we start discussing AI, Peter, can you share with our listeners what is the added value of expanded neuroradiology in the care of patients with epilepsy?
Peter Widdess-Walsh, MD: Thanks, Adriana. And thank you for highlighting this very interesting and topical article for the podcast. And many thanks to Dr. Gleichgerrcht and Dr. Kaestner for being here with us. We know that MRI is a key tool in finding the underlying cause of focal epilepsy. However, up to 30% of MRIs are non-lesional. That is, there is no visible lesion responsible for the patient's epilepsy. We can sometimes lower this non-lesional percentage by using additional MRI techniques such as volumetry, various additional sequences, or higher-strength magnets. But these are not universally available or have limitations, so there's a gap there. We know that there is a focal epileptogenic network, but we cannot see it. That is partly due to limitations of MRI technology, which is improving all the time, but it's also partly due to limitations of what we can see visually with the human eye. There are also human error rates in analyzing MRI, even among skilled radiologists; these are known as perceptual errors.
A shrunken, bright hippocampus is easier to see than subtle alterations in grey matter volumes. Particularly in surgically remediable epilepsies, finding a lesion or visualizing the epileptic network can make a huge difference in a better patient outcome after surgery. And even in non-surgical patients, it's very reassuring to have an imaging correlate of the epilepsy that you know is clinically present.
I'm not an AI expert and I wrote the commentary from the perspective of an epileptologist in clinical practice like many of our listeners. So, it was fascinating to learn more about AI and how it might fit into our daily practice in this area. The authors used the term computer vision in this article, and I'm excited to learn more about how computer vision can help us in the diagnosis of epilepsy.
Host: Thank you very much. We are certainly excited to see how we can advance our field further than what we are practicing at the moment. Zeke, could you paint a picture of how AI is revolutionizing the field of radiology, particularly when it comes to neuroimaging analysis?
Ezequiel Gleichgerrcht, MD: Yeah, absolutely. And I think we start by acknowledging that AI is becoming a true transformative force in medicine. Like you said, neuroimaging is by no means the exception. So, for instance, if we think about AI and how it gives us frameworks to analyze huge amounts of imaging data, the pace and precision with which it can achieve that is unattainable by humans alone, right?
That is, by humans not using this computer vision that Peter was mentioning. So, let's talk about neuroimaging specifically. AI algorithms can sift through MRIs and CT scans and almost do a first pass at detecting abnormalities: brain tumors, aneurysms, areas affected by stroke, you name it. Of course, our interest is in epilepsy, but we know how crucial early and accurate detection of some of these findings can be. So, you know, when trying to answer this question of how it is revolutionizing the field of radiology, I think it's really about how the accurate detection of these findings can be instrumental in patient care. AI applied to neuroimaging can help us scale detection of brain diseases and have tremendous impact on outcomes.
But it's not just about detection of disease, right? AI can now support decisions about care tailoring, or personalization of patient care, because if you can combine neuroimaging and health data, you can offer insights that lead to more tailored strategies. For example, in other studies, we have shown that using AI, we can predict who becomes seizure-free after epilepsy surgery based exclusively on preoperative imaging. And we found accuracies that are 30 percentage points higher than what humans achieve based on their combination of health data alone. You know, when I think about the revolution of AI, it's also paving the way for prediction, or predictive analysis: things like forecasting disease progression or identifying when seizures will happen for a given patient. So, it's certainly a very exciting time for AI and neuroimaging and epilepsy.
Host: Wow, that's a lot of changes expected for the future in the way we practice. Erik, I want to bring it to you for those of us who are not tech wizards. Can you break down this idea of a convolutional neural network, or CNN: what it is and how it could transform the way we interpret neuroimaging, particularly MRI scans?
Erik Kaestner, PhD: So at its heart, a convolutional neural network is a way to learn patterns in data. And so for this particular project, we gave it a task. We said, "Can you differentiate patients with temporal lobe epilepsy from patients with Alzheimer's disease from healthy controls?" And we provided T1 images of the brain to do this. And so, in a lot of ways, it's very similar to what a neuroradiologist is doing. You present a neuroradiologist with an image of the brain and you ask for some sort of call on what they're seeing. To understand what the neuroradiologist is doing, I'm going to have to defer to my colleagues in neuroscience. But it's a little more straightforward for the CNN.
So, a CNN is made up of a series of layers, and each layer basically performs a small set of computations. The first layer, which is given access to the image of the brain, takes little pieces of the image, performs its calculations, and passes the result on to the next layer, which performs its own little calculations and passes that on to the next layer. The number of layers is up to the person building it, so you decide how many layers there are. And eventually, all of this data arrives at the final layer, which makes a decision. It says, "I think this is a patient with TLE," "I think this is a patient with Alzheimer's," or "I think this is a healthy control." As for how this can transform how we interpret MRI scans, I've actually discovered over the last year that this is an extremely robust debate that's ongoing in the field. Because, like I mentioned, there's a lot of similarity between the tasks that neuroradiologists are being asked to do and what we can ask a convolutional neural network to do.
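To make the layered structure concrete, here is a minimal sketch of a small 3D CNN classifier in PyTorch. This is not the model from the paper; the layer count, channel sizes, and cubic input size are illustrative assumptions, but the flow of data from convolutional layers to a final three-way decision mirrors what Dr. Kaestner describes.

```python
# A minimal sketch, not the authors' model: a tiny 3-D CNN in PyTorch that maps a
# T1-weighted volume to one of three labels (TLE, Alzheimer's, healthy control).
# The layer count, channel sizes, and 64-voxel cubic input are illustrative choices.
import torch
import torch.nn as nn

class TinyBrainCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            # Each block: convolve small 3-D patches, apply a nonlinearity, downsample.
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        # Final layer: turns the accumulated features into one score per diagnosis
        # ("I think this is TLE," "I think this is Alzheimer's," "I think this is a control").
        self.classifier = nn.Linear(32 * 8 * 8 * 8, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)                  # each layer computes on the previous layer's output
        return self.classifier(h.flatten(1))  # flatten the features, then decide

model = TinyBrainCNN()
t1 = torch.randn(1, 1, 64, 64, 64)            # stand-in for one preprocessed T1 volume
print(model(t1).softmax(dim=-1))              # the model's confidence in each of the three labels
```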
And so, I was giving a talk recently and a clinician came up afterwards and said, "Well, why would I trust this? I have no idea what these numbers mean, or where they're coming from, or what the model's doing," which I think is a very fair question. Another person on the panel gave a good answer, which I'll crib from a little bit, which was, "Well, when you go to the doctor, can you really see what the doctor's brain is doing before they give you their advice?" And so, I think this is a great way to think about what these convolutional neural networks need to produce in order to gain the trust of the clinicians and of the patients. At a kind of bedrock level, there's the behavior: does the model get it right? Can we trust it to give us the right answer over many patients? Then beyond that, we also need to think about how sure the model is. What information is it using? Is it using the proper information that we as clinicians and researchers would want it to use? And we also need to present the outputs in a way that is interpretable by humans.
Host: Thank you very much. We will certainly discuss a little more about why this difference is so remarkable and how we can trust it better. But I am curious to know why your team decided to choose these particular control groups. What is similar or different about the Alzheimer's patients, and what is their relationship to the healthy subjects? Dr. Gleichgerrcht, how did you decide to use these groups?
Ezequiel Gleichgerrcht, MD: Yeah. Thanks for the question, because we thought about it a lot. And I'd say the rationale for having an Alzheimer's disease group competing against the TLE group and the healthy controls for the machine to classify was at least twofold. On the one hand, we have literature, I would say for the past two decades or so, showing that patients with temporal lobe epilepsy, at least at the group level, have patterns of atrophy that are widespread, right? So, it's not that we just see atrophy in the hippocampus or even in the temporal lobes. At the group level, the patterns of atrophy for TLE are sometimes extratemporal and even contralateral to the side of seizures. So, that was one side of the rationale.
On the other hand, you know, in a prior paper, I believe it was in 2021, we had already shown that AI could identify who had temporal lobe epilepsy solely from the clinical MRI. And we had that benchmark to ask ourselves this question, almost as the next natural step: Is the machine detecting TLE simply based on detecting hippocampal atrophy, or is it detecting widespread epilepsy-related patterns that humans are not typically paying attention to? So, including Alzheimer's disease as a third group offered a competing disorder with prominent hippocampal atrophy, right? That is the core of the pathology in Alzheimer's disease. We were trying to answer the question: is it, again, just the hippocampal atrophy driving the classification? Hippocampal atrophy is pervasive in Alzheimer's disease, but also in so many TLE patients. And I want to say that when we asked this question, the natural reaction was, "Well, we also have to make sure that the machine is not just detecting older versus younger brains." Because if it's detecting just age-related patterns, of course it's going to do a great job at distinguishing Alzheimer's disease from temporal lobe epilepsy, because the mean ages of those populations are so radically different.
So, a lot of our method, and I invite the audience to go and read about this, was about removing the effect of age on the brain images before we entered those images into the machine to have it classify disease. We put a lot of thought into it. And I think we built a model that can be replicated for other diseases where you're training a machine to try and detect the disease independently of all these other confounding factors like age and overlapping patterns of atrophy.
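The paper details its own procedure; as one standard illustration of the general idea, the sketch below fits a voxel-wise linear model of age on healthy controls and subtracts the age-predicted component from each image before classification. All names and array sizes here are hypothetical.

```python
# A hedged sketch of one common way to remove an age confound (the paper describes
# its own procedure): fit a voxel-wise linear model of age on healthy controls only,
# then subtract the age-predicted component from every scan. Toy data throughout.
import numpy as np

rng = np.random.default_rng(0)
n_controls, n_voxels = 50, 1000
ctrl_imgs = rng.normal(size=(n_controls, n_voxels))     # flattened control images
ctrl_ages = rng.uniform(20, 80, size=n_controls)

# Least-squares fit of intercept + age slope per voxel, on controls only,
# so disease effects never leak into the age model.
X = np.column_stack([np.ones(n_controls), ctrl_ages])
beta, *_ = np.linalg.lstsq(X, ctrl_imgs, rcond=None)    # beta: (2, n_voxels)

def residualize(imgs: np.ndarray, ages: np.ndarray) -> np.ndarray:
    """Remove the age-predicted signal from each image, keeping the voxel means."""
    Xn = np.column_stack([np.ones(len(ages)), ages])
    return imgs - Xn @ beta + beta[0]                    # add the intercept back

# The same call would then be applied to the TLE and Alzheimer's cohorts
# before the images are given to the classifier.
age_corrected = residualize(ctrl_imgs, ctrl_ages)
```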
Host: So much to discern from all of these conditions that may have multiple targets in the brain. To go a little deeper into the technicalities of the study, and to help our listeners read and understand the paper, Erik, can you please unpack some of the jargon? Could you explain terms like accuracy, precision, recall, and the F1 score? What do they mean when someone reads the paper? How should they read these variables?
Erik Kaestner, PhD: Yeah. So, in the analogy I was using earlier of how the model gains our trust, these are the bedrock elements where we're just asking, "Can you do the task? Can you get it right?" For something like accuracy, you're just saying, "All right, patient by patient, control by control, overall, are you able to get these right to some degree?" Then we can go a little deeper and look at each individual condition, and that's where something like the F1 score comes in, where we can start to ask, "How are you performing on patients with TLE? How are you performing on patients with Alzheimer's?" The F1 score basically combines two of the other measures you mentioned, precision and recall. Precision asks: when the model claims that someone is a patient with TLE, is it right often? Is it just saying "this is a patient with TLE" to everyone, so we can't really trust it? Or is it being more conservative, giving you a high level of accuracy within that specific patient group? Then you have recall. Recall asks: of the patients with TLE in the cohort, were you able to identify a large proportion of them or not? Often with these two measures, you can maximize one, but it hurts you in the other. If you just guess "patient with TLE" for every single image you come across, then your recall will be quite high, because you will identify every single patient with TLE, but your precision will be quite low because, basically, every time you say it's a patient with TLE, you're going one for three. The F1 score is very useful because it combines these two. So if a model is cheating in some way, the F1 score will reveal it by showing, "Okay, I don't think it's actually being quite truthful with us, essentially. I don't think it's being fair." And, like I said, once you've interrogated the model's behavior in this way and said, "Okay, it seems to be doing the task quite well," then you can go on to the next step and start to interrogate it in other ways.
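For readers who want to see these definitions in action, here is a short worked example on made-up labels for the three classes, using scikit-learn's standard implementations; all numbers are illustrative only.

```python
# A worked example of accuracy, precision, recall, and F1 on made-up labels for
# the three classes (TLE, AD, healthy control); the data are illustrative only.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = ["TLE", "TLE", "TLE", "AD", "AD", "HC", "HC", "HC", "HC", "TLE"]
y_pred = ["TLE", "TLE", "AD",  "AD", "AD", "HC", "TLE", "HC", "HC", "TLE"]

print("accuracy:", accuracy_score(y_true, y_pred))  # overall fraction correct: 0.8

# Per-class precision ("when the model says TLE, is it right?"), recall ("of the
# true TLE cases, how many did it find?"), and F1 (their harmonic mean).
p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, labels=["TLE", "AD", "HC"])
for cls, pi, ri, fi in zip(["TLE", "AD", "HC"], p, r, f1):
    print(f"{cls}: precision={pi:.2f} recall={ri:.2f} F1={fi:.2f}")
# For TLE: precision = recall = F1 = 0.75, since three of the four scans the model
# called TLE were truly TLE, and it found three of the four true TLE cases.
```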
Host: I want to go to Peter. You know, in your commentary, you highlighted that the AI showed a success rate in detecting temporal lobe epilepsy of 90%, compared to the human eye's 47%, which is humbling. I wonder what your take is on this AI outperforming the human eye, and what you think AI-assisted temporal lobe epilepsy detection will mean for our clinical practice.
Peter Widdess-Walsh, MD: Adriana, that's a great question. I would emphasize that with AI tools, we have to be careful about what exactly the tool was built on. The AI tool used in this study was given carefully selected patients with unilateral temporal lobe epilepsy and their T1-weighted MRI images. Patients with distorting structural lesions such as tumors or malformations were excluded.
Most of the scans with abnormalities visible to the human eye had mesial temporal sclerosis. So we see that the AI tool the authors developed was quite a bit better at picking up the other cases that were not visible to the human eye. As I mentioned, there are limits to the human vision processing system, and we can only spontaneously analyze so many areas at once.
I gave the example in the commentary of the invisible gorilla study, an experiment where an image of an angry gorilla was placed over the lung area of a chest CT, and radiologists looking for lung nodules missed the gorilla image 83 percent of the time due to attentional and perceptual error. So, I think AI imaging tools can help us see what we cannot see and also reduce this perceptual error.
Although I still think we'll need the radiologist or clinician to incorporate information from AI into the interpretation and clinical management of the patient. I wrote the title to be somewhat thought-provoking, but perhaps a better title would have been not "human versus computer," but "human with computer."
AI is there to assist us in our decision making, not make the decision for us. AI tools will typically have a very specific purpose, such as in this study, to detect an MRI signature in unilateral temporal lobe epilepsy. And in its current form, it doesn't read the whole scan, or pick up unexpected findings, or incorporate the results into the whole clinical picture or with EEG or discuss it with the patient and their family.
So we're not out of a job yet, but it has the potential to enhance the delivery of care to our patients.
Adriana Bermeo, MD (Host): Thank you so much. Dr. Gleichgerrcht, I would love for you to take that on: how do you see the future of this artificial intelligence aid in identifying lesions in neuroimaging in patients with epilepsy? And how do you see the role of radiologists and neurologists in the immediate future?
Ezequiel Gleichgerrcht, MD: Yeah, well, you know, as computer processing power increases, our speed and capacity to process more data will also increase. And these advanced computer setups will get more affordable; in fact, they already are. And as they become more affordable, they also become more scalable. So we'll see almost an explosion, right, of applications across medicine.
Of course, that includes epilepsy, but I really love the way Peter described this: I really do not think of AI as a replacement for humans, but as a superpowered assistant, if you will, helping humans be more efficient and precise at their jobs. AI will direct our attention to possible subtle lesions that expert human eyes may miss on the first pass.
We're busy clinically, and our volumes are increasing across the country, so getting this superpowered assistant certainly helps. And humans will be the ones vetting whether the AI detection is relevant and accurate and, like Peter said, integrating that into all the other elements of clinical care and assessment.
So AI will help predict clinical trajectories with greater accuracy, but clinicians will still be the ones working out what that trajectory means for their specific patient and having the discussions about that trajectory with the patient. And so I really want us to think about it as this superpowered assistant.
Definitely not a replacement.
Adriana Bermeo, MD (Host): I like that perspective too. Peter, in your commentary, you highlight the fact that areas of the brain beyond temporal lobe structures were found to be involved in temporal lobe epilepsy. And your take on what that can mean for our understanding of the condition is quite interesting and thought-provoking. I would love it if you could elaborate on that.
Peter Widdess-Walsh, MD: Yeah, it is incredibly interesting to see what computer vision sees over what we see. Subtle patterns in volume and architecture probably reflect the epileptic network changes and its connections, as well as the underlying epileptic substrate. We know from prior voxel-based morphometry studies and studies of cortical thickness, for example from the ENIGMA study group, that both temporal and extratemporal structural network alterations are present in patients with temporal and extratemporal epilepsy.
This is supported by studies looking at functional connectivity, such as those from intracranial EEG. The authors have taken the next step to develop and train a model that we can plug MRI data into and get a meaningful and accurate output. Does this patient have a pattern consistent with temporal lobe epilepsy that is different from other limbic system pathologies such as Alzheimer's disease?
It would be really interesting to know if there were signature patterns within temporal lobe epilepsy such as which patients will be medication resistant, which will have more cognitive impairment, and who will have better outcomes after temporal lobectomy.
Adriana Bermeo, MD (Host): Wonderful. I have one more question for all of you, which is stepping out of the paper a little bit and just thinking about the future and the use of AI in epilepsy care, even beyond radiology. Where do you think we're going? We have touched on it briefly. Erik, we'd love to start with your take.
What is next for the use of artificial intelligence in epilepsy care?
Erik Kaestner, PhD: I mean, I think that for me personally, where my mind has been going lately is really the question of how you get an interface between the human and the model. Because, as the others pointed out, it really isn't that you receive a definitive answer, put it in the chart, and move on. The clinician really has to be able to explain, not only in the chart but also to the patient, why they think the patient is presenting with this particular condition or another. And so having some sort of interface where the clinician can go back and forth with the model and say, "Oh, really? You think this is a patient with TLE? I wasn't so sure. What is leading you in that direction?" and being able to engage in that. And I think some of the large language models that we've probably all played with a little bit could be one avenue into that. But really, we're ending up in almost a user-interface set of questions.
Adriana Bermeo, MD (Host): Zeke, where are we going with AI in epilepsy?
Ezequiel Gleichgerrcht, MD: So before we move on beyond the neuroimaging, let me just put in a plug for our group, and definitely a lot of other groups, that are creating a lot of applications: from lateralizing the side of epilepsy, to detecting focal cortical dysplasia, to even cool stuff that I never thought about. So, you know, 3T MRI is not available everywhere, and there are these new portable scanners, I think they're 0.55 Tesla. So there are even AI applications now to augment those lower-magnet-power MRIs and try to bring them to the quality of a 3T. So there's still a lot to discover in the neuroradiology aspect. But beyond that, there are a lot of algorithms trying to predict, for example, when the next seizure will happen.
Our team actually published last year on using paper seizure diaries to predict at the individual level when the next seizure will happen for a patient. And you can imagine the kinds of consequences that would have as we get better. So, there are a lot of teams now working with wearables, chronic EEG, even chronic ECoG from systems like the RNS, and using those data to predict at the patient level. There are applications in development for tailored drugs and serum biomarkers, and they're all relying on AI algorithms to achieve in very short time frames things that would have taken years just a few years ago.
So it's definitely a very exciting time for AI, well beyond neuroradiology and epilepsy.
Adriana Bermeo, MD (Host): Peter, how do you imagine your AI-integrated epilepsy practice?
Peter Widdess-Walsh, MD: I think AI is here to stay. There are hurdles before it's available to the wider epilepsy community to use in clinical practice. For example, the AI tool used in this study was developed in house, as they typically are, and these tools then have to be validated on external data sets to ensure they are generalizable.
And then there are hurdles around regulation, software development, and maintenance. Because these tools would be, or can be, classified as medical devices if used for clinical decision-making, there is FDA or equivalent approval to obtain. So there are steps before it's available at our desks, you know, for clinical practice, but I think AI is here to stay.
Adriana Bermeo, MD (Host): Thank you, everybody. It is certainly the picture of a hopeful future for our practice and, most importantly, for our patients. I want to sincerely thank our guests and thank our listeners. I want to give special thanks to the American Epilepsy Society, the sponsor of episode five of our podcast, and to Dr. Rohit Marawar and the SAGE production team. We look forward to having you join our next episode, and I want to remind you, as always, to subscribe to the Epilepsy Currents podcast wherever you get your podcasts, and send us your feedback, suggestions, or questions through our website, EpilepsyCurrents.org. And don't forget to follow us on X, formerly Twitter, @AESCurrents. See you next time, everybody.