
Interview with Dr. Thomas Yankeelov

  • Writer: isaacfjung
  • Dec 20, 2024
  • 13 min read


Dr. Thomas Yankeelov

Dr. Thomas Yankeelov is the W.A. "Tex" Moncrief Professor of Computational Oncology and a professor of Biomedical Engineering and Medicine at the University of Texas at Austin. He directs the Center for Computational Oncology and cancer imaging research within the Livestrong Cancer Institutes. His research focuses on improving patient care through advanced imaging and predictive biophysical models of tumor growth, aiming to personalize cancer treatment.


Isaac

Thank you for agreeing to speak with me today, Dr. Yankeelov. Could you share a little bit about what initially drew you to the field of mathematics and physics, and how they led you to biomedical engineering?


Dr. Yankeelov

Sure. Well, when I was younger, I always just enjoyed math. It made a lot of sense to me. Like a lot of people, I liked that you could get an answer and know if you were right or wrong at the end of it — that definitive aspect. As I got older, I started to see the beauty in it — how you can use mathematics to explain things that happen in nature. People often refer to it as the unreasonable effectiveness of mathematics to describe the natural world, which I thought was really cool. So, I just really liked it. When I went to college, I actually went because I wanted to be a high school English teacher. I wanted to be a literature teacher in high school, so I wasn’t really thinking about math, physics, or engineering at all. I just enjoyed it, so that’s what I was studying in college. Then, I saw the movie Jurassic Park.


Isaac

Oh yeah, I know that one.


Dr. Yankeelov

Yeah. They talk about chaos theory. I read the book afterward, and my brother, who was an electrical engineer, recommended a book called Chaos by James Gleick. I read that, and I became really interested in mathematical modeling. That led me to start thinking about it more seriously.


When I went to graduate school, we had to choose an area of application, so I chose applied math. I took a course in quantum mechanics and thought it was the coolest thing ever. I straddled both the math and physics departments for a few years, then did an internship at Brookhaven National Lab in New York, where I saw a seminar on Magnetic Resonance Imaging (MRI). I thought it was fascinating, with its heavy reliance on math, modeling, physics, and engineering. That’s where I began using it to understand biological processes, and eventually, I applied it to studying tumors.


At the time, mathematical modeling wasn’t part of cancer research. There were equations about how tumors grew, but they were all very theoretical. Nobody had really linked those equations to measurable data. I realized that my background could help bridge that gap, so I thought we could apply more rigorous, mathematical approaches to understanding tumor growth and therapy responses. I remember being at the beach and reading a historical review of meteorology. It talked about how 100 years ago there was no math or physics involved—just guesswork—and it struck me that oncology felt similar. There wasn’t a lot of math or physics in oncology.


When I got back to the lab, I gathered our team and we made a list of everything we could measure using medical imaging. We began developing mathematical models to incorporate those measurements. Meteorology has satellites, weather balloons, and radar to gather data, while in oncology, we don’t have those types of measurements. But we do have medical imaging, which can provide valuable data. So, we started linking medical imaging data with mathematical models to make predictive models of how tumors would grow and respond to therapy. That’s how it all came together.


Isaac

So you mentioned that Jurassic Park and your brother were big influences early on. Were there other experiences and mentors, perhaps at Vanderbilt or UT Austin, that helped guide the direction of your research?


Dr. Yankeelov

Yes, it started even earlier. My high school English teacher, Mr. Ricketts, was the first to really teach us that we could say whatever we wanted in our essays, but we had to back it up. That idea of trusting your own ideas but supporting them with evidence was something new for a lot of us at that age. It was a big leap. Even before that, my fifth-grade science teacher, Mrs. Souza, introduced me to the solar system, the galaxy, and the universe. That was the first time I was really blown away by science and nature.


In college, I had an English professor, Dr. Dale Billingsley, who taught me how to write and communicate ideas more clearly. That really helped with research papers, grant writing, and communicating ideas to larger audiences. In graduate school, I had a professor, Dr. Surat, who was a nuclear physicist. He taught quantum mechanics, and he was a phenomenal instructor. He made difficult concepts accessible with clarity, lots of examples, and different explanations.

When I was a postdoc at Vanderbilt, my advisor, Dr. John Gore, who still directs the Imaging Center at Vanderbilt, was a real visionary. He saw imaging science as a unified field, not separated into subfields. MRI, PET, ultrasound—they were all part of imaging science. His holistic view and ability to bring together different ideas opened a lot of doors for us.


Within mathematical oncology, I was also influenced early on by two people: Sandy Anderson, who established the Integrative Mathematical Oncology Program at Moffitt Cancer Center in Tampa, and Kristen Swanson, who I believe is now at the Mayo Clinic in Arizona. Their early work really inspired me and, by extension, our team.


Isaac

You mentioned that MRI and PET imaging are really useful for mathematical oncology. What makes them particularly helpful in studying cancer? Are there other imaging techniques that are also useful?


Dr. Yankeelov

Medical imaging is invaluable in this field because it allows you to take measurements in three dimensions, at multiple time points, and mostly non-invasively—without cutting into the patient. Of course, some imaging techniques, like CT and PET, do involve ionizing radiation, but even so, they are still less invasive than other methods.


For example, MRI is great for high-resolution anatomical imaging, especially for soft tissues. But it can also measure things like blood flow, cell density, hypoxia, and other characteristics. PET and other techniques can provide true molecular imaging, telling us about things like glucose metabolism, cell proliferation, and hypoxia. These measurements are critical because they help us understand how tumors initiate, grow, invade, and respond to therapy.


These imaging measurements are important for creating predictive mathematical models. In fact, some of the data—like cell density or proliferation—can statistically separate responders from non-responders. This gives us a lot of motivation to use that data in predictive models.


Isaac

You’ve also mentioned weather modeling as an inspiration for your work. How do the spatial-temporal dynamics from weather research play a role in your models? You also mentioned this in your article about designing clinical trials for patients who are not average.


Dr. Yankeelov

The key models we use are partial differential equations (PDEs). These equations track the dynamics of a system over both space and time. For example, in our work, the quantity of interest is often the number of tumor cells. PDEs allow us to track how these tumor cells move in space and time—along the X, Y, and Z dimensions, as well as in time (T).


The equations have two components: the left side describes how the tumor cells are moving through space and time, and the right side includes terms for things like how the tumor cells migrate, interact with tissue mechanical properties, proliferate, and respond to therapy. These are the three things we care about: migration, proliferation, and response to therapy.
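As an illustration, a commonly used form of such a model in the mathematical-oncology literature is a reaction-diffusion equation with logistic growth. This is a minimal sketch, not the group's exact formulation; their models add further terms, for example coupling migration to tissue mechanical properties:

```latex
\frac{\partial N(\mathbf{x},t)}{\partial t} =
\underbrace{\nabla \cdot \big( D \, \nabla N(\mathbf{x},t) \big)}_{\text{migration}}
+ \underbrace{k \, N(\mathbf{x},t)\left(1 - \frac{N(\mathbf{x},t)}{\theta}\right)}_{\text{proliferation}}
- \underbrace{\alpha(t) \, N(\mathbf{x},t)}_{\text{response to therapy}}
```

Here \(N(\mathbf{x},t)\) is the number of tumor cells at position \(\mathbf{x}\) and time \(t\), \(D\) is a diffusion coefficient governing migration, \(k\) is a proliferation rate, \(\theta\) is the carrying capacity of the tissue, and \(\alpha(t)\) captures cell death due to therapy.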


We use medical imaging data to populate the model with real measurements. Any parameters we can’t measure directly need to be calibrated to the data. Since these are dynamic models, we usually need at least two time points: one before therapy starts and one early in the course of therapy. These two measurements allow us to calibrate the model.


Once the calibration process is complete, the model can predict how the tumor will evolve over time, which allows us to forecast if the tumor is shrinking, growing, or behaving in some other way. These predictions can then be compared to later measurements to see if the model is accurate.
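To make the two-time-point calibration concrete, here is a toy sketch in Python. It collapses the spatial model to a single voxel with simple logistic growth and uses hypothetical imaging-derived cell counts; the actual pipeline calibrates a full spatiotemporal PDE, not a scalar ODE:

```python
import math

def calibrate_logistic(n0, n1, dt, theta):
    """Recover the proliferation rate k from two tumor-cell measurements.

    Logistic growth: N(t) = theta / (1 + (theta/n0 - 1) * exp(-k t)).
    Rearranging at t = dt gives
    (theta/n1 - 1) = (theta/n0 - 1) * exp(-k dt), which we invert for k.
    """
    return -math.log((theta / n1 - 1.0) / (theta / n0 - 1.0)) / dt

def forecast(n0, k, theta, t):
    """Predict the tumor cell count t days after the first measurement."""
    return theta / (1.0 + (theta / n0 - 1.0) * math.exp(-k * t))

# Hypothetical measurements: one pre-therapy scan and one early-therapy
# scan seven days later, in a voxel that can hold at most 1e6 cells.
theta = 1.0e6
n_pre, n_early = 2.0e5, 3.0e5

k = calibrate_logistic(n_pre, n_early, dt=7.0, theta=theta)

# With the model calibrated, forecast out to day 28 and compare the
# prediction against the measurement actually taken then.
n_day28 = forecast(n_pre, k, theta, t=28.0)
```

The calibrated model reproduces the day-7 measurement by construction; the test of its usefulness is whether the day-28 forecast matches the later scan.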


Isaac

Just to confirm, these are essentially digital twins of the tumor, right?


Dr. Yankeelov

Yes, you’re absolutely right to ask about digital twins. What we’re doing is essentially a component of a digital twin. The idea of a digital twin is that you have a mathematical model that describes the physical processes of the system you’re studying, in this case, tumor growth. You use this model to predict the system's behavior and optimize interventions.


The key difference with a digital twin is that there’s a back-and-forth between the digital model and the real system. For a tumor, you calibrate the model with real patient data, make a prediction, and then intervene. You treat the patient, which changes the system, and then you measure again. This creates a feedback loop between the real system and its digital counterpart.


We’re not fully there yet in oncology. While we can make predictions and compare them to real patient outcomes, implementing this feedback loop in a clinical setting is still challenging. But that’s the next step: using the model to optimize treatment for each individual patient, rather than relying on a one-size-fits-all approach.


Isaac

So, in a way, it’s moving from population-based treatments to personalized ones.


Dr. Yankeelov

Exactly. Right now, oncology is still largely based on population-level interventions. Everyone with the same diagnosis gets the same treatment, but we know that this isn’t optimal for each individual patient. Mathematical modeling could help us move away from that.


As an analogy, when we launch a satellite, we don’t send up thousands of satellites and hope one of them lands in the right orbit. Thanks to Newton and 300 years of orbital mechanics, we can calculate the trajectory of one satellite and usually get it right. Oncology needs to reach that point. We can’t keep relying on trial and error; we need to use data and predictive models to determine the best intervention for each patient. That’s how we’ll get to personalized treatments, which is the future of the field.


Isaac

So you mentioned in your articles that we can't use large data sets or AI models to truly individualize treatment for patients. How would you form the equations for these models?


Dr. Yankeelov

Yeah, this is something I feel quite strongly about, but I’m in the minority on this. There are two extremes in computational modeling. On one side, you have methods driven by artificial intelligence and big data. You hear a lot about this in the press. On the other side, there are mechanism-based models—sometimes called physics-based or biology-based models—where you explicitly incorporate known biology, physics, or chemistry into the problem.


AI models, on the other hand, rarely account for the underlying biology. Instead, they look for patterns in data. In AI, you always need a training set—a large population of data to train the model. For instance, a machine learning algorithm would require a population to train on, and if Miss Jane Doe’s condition is similar to that population, the model can predict how she might respond to treatment.

The problem with cancer, though, is that it’s so heterogeneous. It’s not just one disease. Depending on how you classify it, cancer can have more than 100 subtypes. For example, if Jane Doe is diagnosed with triple-negative breast cancer, that’s just one of several subtypes. So in order for an AI algorithm to work, it needs a training set specifically representing her subtype—and that’s not going to happen. The diseases are getting more specific, and treatments are following suit. It’s not just about whether a person has breast cancer anymore; it's about the specific molecular or genetic subtype.


Also, even if you do have a population that represents Jane’s subtype, the treatments would need to be tailored to her, and that would involve a huge variety of treatments, which would also need to have been tested on people with that same subtype. These training sets simply don’t exist, nor will they, because diseases and treatments are constantly becoming more specialized.

Isaac

So it’s a bit like trying to predict how someone will respond to treatment in a situation that’s constantly changing?


Dr. Yankeelov

Exactly. Cancer behaves very differently in different people, even if they have the same diagnosis. Why does one person respond well to treatment, while another person doesn’t? Why does one patient get cured and another dies? These differences are due to the heterogeneity of the disease, and AI struggles when there are many degrees of freedom, like in cancer.


AI is great for problems with relatively low complexity—like predicting how many Toyota Camrys should be shipped to Des Moines, Iowa next year. There are only so many types of Toyota Camrys, and the purchasing patterns in Des Moines are likely similar to other cities. But predicting how to treat a cancer patient is a different matter. AI might work for making bulk predictions in simpler scenarios, but for something as life-critical as cancer treatment, where the stakes are high and the system is highly sensitive to small differences, AI has its limits.


Isaac

Is it something like the butterfly effect, where small differences lead to big outcomes in a dynamic system?


Dr. Yankeelov

I’m not sure that analogy directly applies here, but I see what you mean. The butterfly effect refers to a dynamical system being highly sensitive to initial conditions—small changes at the beginning can lead to huge differences in the outcome. Our models are somewhat like that. They’re also sensitive to initial conditions, and once you intervene in a system like cancer, it changes. So, if we make an initial prediction, we need to update it as the system evolves. For instance, after a treatment like radiation or chemotherapy, the model needs to be recalibrated based on new measurements. So we can't predict indefinitely. We can typically predict for up to one to four weeks, after which we need to make new measurements.


Isaac

So what are the biggest challenges in implementing these models for clinical practice?


Dr. Yankeelov

Right now, the major challenge is figuring out how to prospectively test the "digital twin" approach in a clinical setting. Ideally, you’d conduct a clinical trial where patients are divided into two groups—one receiving standard care and the other receiving individualized treatment optimized through a digital twin. For example, in a brain cancer trial, one group would get the standard therapy, while the other group would get treatment tailored using our model.


For the second group, we’d calibrate the model based on their measurements, make predictions, and then optimize the delivery of radiation therapy or chemotherapy. After each treatment, the patient would return for additional measurements to update the model, and their next treatment would be adjusted accordingly. The goal would be to see if these personalized interventions outperform the standard of care.


The issue we face is how to get this kind of clinical trial off the ground. Physicians tend to be very conservative when it comes to changing therapy mid-treatment, especially in such high-risk situations as cancer. And that’s understandable—physicians need to be cautious because they’re dealing with people’s lives. But we need to figure out a way to test this approach systematically to see if the individualized treatment actually improves outcomes.


Isaac

So it’s a matter of overcoming the hesitancy to change treatments on an individual basis during therapy?


Dr. Yankeelov

Exactly. The idea of dynamically changing a treatment plan during the course of therapy is a tough sell, but we’ll have to tackle that hurdle at some point. We have to demonstrate that individualized treatments can outperform the current standard of care, and that requires prospective clinical trials. That’s where we’re focused as a field right now—figuring out how to make this work and prove its benefits. Once that happens, we could begin to see real improvements in patient outcomes.


Isaac

How does collaboration in mathematical oncology look for your research?


Dr. Yankeelov

Collaboration is essential. It doesn’t happen unless we collaborate across disciplines. We need people trained in engineering, math, biology, and biomedical engineering. But that’s not enough; we also need to work with clinical oncologists, radiation oncologists, medical oncologists, and surgical oncologists. They're the ones who are responsible for the patients and the ones who give us access to patient data. Without their involvement, our work would be largely academic.


They also help us identify what the actual problems are. Sometimes, we think we’re addressing a critical issue, but when we work with clinicians, they might tell us that it's not as important as something else. This feedback is invaluable.

At the end of the day, they’re the ones treating patients, so they have to be part of the process.


Getting digital twins into clinical practice is much more of a logistical challenge than a technological one. It’s about explaining to practicing oncologists—who are in the trenches every day—that we need to guide their interventions using mathematical models. That’s a tough sell, and just as importantly, how do we communicate this to patients? Explaining that their treatment will be guided by a mathematical model can sound bizarre.


There are also logistical issues like getting patient data to our models quickly, calibrating it, making predictions, and sending the results back to the electronic medical record for the treating team. How do we construct a digital twin report, similar to how radiologists produce reports for imaging or pathologists produce pathology reports? These are questions we’re still working through with our clinical colleagues to make sure the reports are useful and actionable.


Isaac

So, you really need clinician buy-in for this to work effectively.


Dr. Yankeelov

Absolutely. Clinician champions are crucial. They’re the end-users, and the model has to be useful to them. If it’s just an academic exercise, it limits the impact we can have. The real impact comes when it benefits patient care.


Isaac

What advice would you give to high school students interested in a career combining math and medical sciences?


Dr. Yankeelov

First, keep a notebook. Jot down things you find interesting, whether it’s from a class, a YouTube video, or something you read. Later, research it—watch videos, read books, or talk to someone in that field. It’s important to explore different disciplines. I ended up in math because I was interested in so many areas of science and couldn’t choose just one, so I picked something central, like math, and figured it out from there. I read about anthropology, meteorology, archaeology, and I even explored medicine for a long time.


I also suggest talking to people in various fields—doctors, physicists, biologists—and asking them what they like and don’t like about their jobs. Understanding their daily life can give you a real sense of what the work entails.


Additionally, I encourage you to have interests outside of math and science. It’s important to develop the creative side of your brain, whether that’s through the arts, improv classes, or any other creative outlet. It helps you think more creatively and communicate better, which is essential in science and medicine. You’ll need to explain your ideas to people who don’t have a technical background, like policymakers, so communication skills are just as important as technical knowledge.


Isaac

If you could have one breakthrough in your research in the next several years, what would it be?


Dr. Yankeelov

I’d like to see two breakthroughs, one logistical and one scientific.

Scientifically, we need to get much better at both imaging and modeling the effects of immunotherapy. Immunotherapy is a powerful tool against cancer, but we don’t fully understand why it works for some patients and not others. Our current biomarkers aren’t sufficient, and the side effects can be severe. We need to develop better ways to predict which patients will respond and understand why some don’t.


Logistically, I hope we can conduct the prospective study I mentioned earlier—a trial comparing individualized treatment guided by digital twins with the standard population-based approach. If we can show that individualized care outperforms the standard approach, it could change the way we think about treating cancer patients.


Isaac

Thank you so much for speaking with me today.


Dr. Yankeelov

Thank you for having me. It was a pleasure.


