SBI Podcast

Economic Impact of AI on Breast Imaging

Society of Breast Imaging

This episode highlights the Economic Impact of AI on Breast Imaging. It features Lauren Nicola, MD, American College of Radiology Ultrasound Commission Chair.


This quarterly podcast will cover different breast imaging themes each year. The theme for the first year is: Breast Imaging Economics.

Speaker 1:

Welcome back to the SBI podcast, where we discuss all things breast imaging and hot topics relevant to the breast imaging world. I'm Carla Sepulveda, professor at Baylor College of Medicine, and I'm joined by my co-host, Dr Alyssa Cubison, assistant professor of radiology at Ohio State University. This is episode three in our inaugural year, where we are exploring economic issues in breast imaging. Before I introduce our guest for today, I want to take a moment to thank the audience for the positive feedback. When starting this podcast, my goal was to create a platform for thoughtful and informative discussion on the issues we are all facing in our practices. I appreciate that listeners have found the podcast valuable. Please keep the feedback coming. So on that note, on to today's guest. I'm very excited to welcome Dr Lauren Nicola. Dr Nicola is a board-certified radiologist with subspecialties in breast and pediatric imaging. She is the Chief Executive Officer at Triad Radiology Associates in North Carolina. She serves as the ACR Ultrasound Commission Chair and is a member of the ACR Board of Chancellors. Dr Nicola is the alternate radiology member on the RVS Update Committee, also known as the RUC. For those of you who do not follow payment policy closely, the RUC is a volunteer group of 31 physicians who advise Medicare on how to value a physician's work. Welcome, Dr Nicola. Thank you, Carla, happy to be here. Thank you so much.

Speaker 1:

As Lauren and I discussed possible topics for today's podcast, we decided to focus specifically on economic issues around AI and breast imaging practice. I hope this conversation provides listeners with useful information on how AI can be introduced and maintained in your practice in a way that makes economic sense. With that background, there is certainly plenty to cover here, so let's jump in. As I thought about AI and economic issues, I realized just how multifaceted this topic is. We can talk about AI improving practice efficiency, workflow, the complete organization of how our practices are running, but there is a separate economic impact that we need to think about in terms of how this affects the cost of cancer care and diagnosing cancers at earlier stages. Also, I think there's a human element for our workforce: does this improve workforce productivity and potentially minimize burnout for our group? And then there's one of the main barriers to adoption: who's going to pay for AI in our practices? Lauren, anything else you think I need to add into that multifaceted view of AI?

Speaker 2:

I think you had enough topics to cover about a week's worth of podcasts.

Speaker 1:

I think what will be helpful for the historical view of where we are today is a JBI article that was published in 2022. Dr Nicola co-wrote it with Dr Smetherman, Dr Moy and Dr Rubin, and when they provided that historical background, CAD was discussed. So first I want to start with: since 2022 and that article, how far do you think we've come in terms of the discussion of payment for AI?

Speaker 2:

How far have we advanced since 2022? So it seems like, what, three years? It doesn't seem like that would be very long, but in the health policy world that's an eternity. So there have been a lot of advancements. In 2022, we wrote that article.

Speaker 2:

In some ways, AI was just kind of getting its feet under it. There have been some developments in the actual tools that are available and on the payment policy side of things. CAD was first FDA approved in 1998, I think, so it's really a long time ago. So if you think about where this journey started, it's been there for a while. It kind of stalled for a long time.

Speaker 2:

That's kind of what we've traditionally thought of as the CAD tools that we had. But the way things evolved from a policy standpoint, a reimbursement standpoint, really, I think, is important to understand. So when CAD was first developed, it was recognized as potentially an additional piece of physician work and an additional expense for the practice; the practice had to pay for the software. So there was a payment and a code assigned to CAD when used with mammography. We considered that a big win for our practices because they were able to bill for the CAD. Then there's this thing that happens in payment policy called bundling. Unfortunately, if you do one procedure with another procedure more than 75% of the time, those codes will get bundled together. Usually what happens is you lose a lot of value. So instead of being able to bill two codes separately, you can only bill one, and that's kind of what we saw with mammography plus CAD. So now you get paid, quote unquote, for using CAD, but you get that on every code whether you use it or not. There's one payment for screening and diagnostic mammography that includes CAD if it's done. So that's one of the downfalls of technology percolating through the system: once it becomes a standard of care, then often that incentive for doing it goes away. So that's sort of the history of payment.

Speaker 2:

We'll talk about how that applies to more modern pieces of AI. But really since 2022, we have seen a couple of algorithms come out that are a little more advanced than what we had back then. There are some breast tools that are detection related, more sophisticated CAD, AI-type CAD, and then there are some CADx tools that have a little more of a diagnostic or quantitative piece to them. So in my mind I think about the AI in two buckets. One is detection: is it highlighting something that a radiologist could see, but helping them do it maybe faster or not miss it? And then the other is quantification.

Speaker 2:

The FDA calls it CADx, and that's doing something that a radiologist can't do. So maybe for an ultrasound it's giving a lesion a malignancy score. There's a tool out there that does that for breast lesions and other lesions. I can't look at a nodule on an ultrasound and just give it a malignancy score in my head. I can only look at its size and the features that I know of. So that's giving the radiologist additional information that we can't really generate on our own. And what we've seen is that Medicare, at least, is more willing to pay for those types of algorithms that do things radiologists can't do. So we've seen that progression a little bit. There are a couple, five or six, of those types of quantification codes that do have payment in the Medicare system, not in the fee schedule but in the hospital outpatient prospective payment system.

Speaker 2:

So that's, I think, one change that we've seen since 2022: a couple of the algorithms have gotten an assigned payment. We still have not seen payment really for the detection algorithms, outside of the old school CAD that gets bundled into the mammogram payment.

Speaker 1:

Well, with you mentioning that, I think it brings up the question: if we're talking about AI that does detection, how differently do payers view it? How differently do they see it from CAD? And is there a risk that those algorithms we use for detection just replace CAD within that bundled payment?

Speaker 2:

Yeah, that's a good question. In general, the CPT panel, when they create codes, has been really firmly against creating new codes for work that's already done by physicians. So, for example, an algorithm that helps you find pulmonary nodules or detects intracranial hemorrhage, those don't have payment. They probably never will have payment, because we already have codes that pay a radiologist to do those things, so they're not going to have a duplicative code. That is, you know, billing for the chest CT where I look for pulmonary nodules and also billing for an AI tool that helps me look for pulmonary nodules. They've been pretty firm about not wanting those types of duplicative payments in the system.

Speaker 2:

Now, really, at least from the ACR standpoint, our job is to figure out what algorithms add value to patient care, so what really is going to move the needle for patients and improve the quality of care. And if algorithms do that, if they actually make a difference and improve care for patients, then our job is to advocate for payment for them. So that's kind of been our litmus test: does this add value for patient care, or does it just make the radiologist's work easier, faster, more efficient? And that's really been our deciding line from an advocacy standpoint, what we're trying to get behind for reimbursement.

Speaker 1:

Okay, okay. Speaking of payment, how have you seen various AI companies structure how they want to get paid? How are the companies approaching it? I've heard of various models; can we discuss that?

Speaker 2:

Yeah, I think there's been some evolution of that as well. Initially some of the vendors were doing subscription plans: you paid a set fee per year and you could use that software on whatever patient you wanted to. The downside of that comes from a nuance of the physician fee schedule, which is that there are two types of practice expense. So if you own your own practice, you can bill for the technical component, and there's an indirect practice expense and a direct practice expense. The indirect is all the things it costs you to keep your doors open: your rent, your staff fees, your MRI and CT scanner costs, those types of things. Generally those are not paid very well.

Speaker 2:

The direct expense is things that you can attribute per patient. So whatever supplies, equipment, and time you need per patient, that's usually reimbursed better. So the vendors got smart and realized that if they can have a per-patient cost, then it should fare better for them. I'm sure they have other reasons for doing it too, but they should fare better in the reimbursement formula. So we've seen a transition from a subscription model, and most people now are doing a per-case or per-use fee. Most of the vendors that I'm familiar with have recently switched to, you know, every time you run this algorithm on a patient, you pay the vendor a per-click type fee, and that tracks better with the direct practice expense in the fee schedule.

Speaker 1:

Okay, okay. So I think we're going to move to sort of the current landscape for where we're at.

Speaker 3:

Sure, sure, and I think we've touched on it a little bit too. So I guess my question will be more from a general sense. The article that you published does a beautiful job of outlining where we were in 2022 in the payment landscape of AI, and also juxtaposing that with where we're headed in the future and what some predictions might be. So, I guess, for the audience, practically speaking, where do you feel like we stand now in terms of our current payment applications for AI?

Speaker 2:

Yeah. So, like I mentioned before, we have seen some AI algorithms be successful in getting reimbursement. Typically those are the ones that do something additional, value added, something that a radiologist can't do: the quantification type algorithms. For breast, the main one is that ultrasound tool that will give a malignancy risk, or that you can use to kind of upgrade or downgrade your suspicion for a mass. So those types of tools that do things other than just detection.

Speaker 2:

I think there are six of them that have gotten payment in the hospital outpatient payment system. Generally those are really high payments. They're higher than the actual ultrasound itself, the idea being to incentivize people to take up that technology. The payments are usually self-limited: the way they're built into the fee schedules, they only last for three or four years, and then those companies are stuck going back to the CPT world to try to get a category one code. But where I think we're going, there's an interesting conundrum, I guess, on how much of the payment should go to the technical side. The software costs money, right? The practice has to pay for the software. And then how much of it is physician work? It shouldn't be zero.

Speaker 2:

There is some additional work required to evaluate the output of the algorithm, decide whether you agree or disagree with its results and incorporate it into your report. The problem with having a CPT code with physician work for every single AI algorithm that will be brought onto the landscape is that it's just extremely impractical. You would have thousands of new codes, and then that would trigger all of the base codes to come back to the RUC and be revalued. It would just be a mess altogether. So our team is trying to think about the possibility of an algorithm-agnostic code for the physician work associated with using any AI tool. Whether it's breast or chest or whatever, there's a discrete packet of work that is involved in assessing the output, adjudicating the findings and putting that into your report. So that's the way we're leaning as a policy team at the ACR: trying to put together a billable code for when we use AI that's not specific to every algorithm out there, one that can be used generically. We just feel like that makes the most sense from a practical standpoint.

Speaker 2:

Obviously, it's not just our decision. There are a bunch of other people in the world, including Medicare, that would have to weigh in on that, but that's the direction we're heading, and that's evolved over time. I think in 2022 we didn't really foresee that there would be so very many possible algorithms out there and how that would look. So that's one thing that's evolved. And then, you know, I think just in general there are some other trends, like the evolution to some platform-type products. Instead of having individual algorithms that your institution would have to implement and integrate into your PACS and into your workflow on a one-by-one basis, there are companies out there that now have platform offerings where you can get a bundle of 12 algorithms, or however many algorithms, all at once. So that's another new thing we've seen.

Speaker 1:

And can I ask you, from that perspective, I recognize that for radiology, I think it was published in January that, of the 1,000 or slightly more FDA-approved algorithms, more than 75% are radiology related. That said, there are other fields that are looking to integrate AI: ophthalmology, cardiology, et cetera. And to your point of how to approach it, do we do the agnostic code? Do you see some synergy around that approach with the other fields?

Speaker 2:

Yeah, that's a great question. In fact, one of the reasons we designed it that way is because we anticipated we would get support from some of the other fields. One of the downsides to having, you know, thousands of new AI codes that radiology uses, because we're in a budget neutral system, is that everybody else loses, right? It makes everything else lose some value. But I mean, you hit the nail on the head: there are plenty of other specialties that may be able to use AI algorithms in their own field, and if we have an agnostic code, then any time a neurologist or urologist or cardiologist used an AI tool, they would be able to do it the same way, and so in that way, we think that there would be multi-specialty support for that type of payment.

Speaker 2:

And the other thing, we may talk about it later: there's the work of evaluating a per-patient AI output, but what about the work of making sure that the AI is doing its job overall, the governance and oversight of the algorithms? I mean, that's real work that has to be done. It's probably not a good idea to leave it to the vendors to take care of that. It's going to fall on the doctors who are ultimately responsible. So that's another piece of the puzzle that probably needs to be incentivized if we want to do this right.

Speaker 3:

Yeah, that's a really good point. I think I was intending to mention it later, but I guess, to build on that, it was just regarding the professional fee that's incurred and how that, I'm sure, will look different as we apply AI. I know the article referenced one of the AI applications for workflow efficiency, so not necessarily purely diagnostic, but if there are screening mammograms that are deemed to be negative and may have minimal to no oversight by a physician, how might that translate to having a professional fee? And what are the ethics of charging a professional fee if there really isn't one? Or, I guess, is there a conversation happening about an overall AI fee in lieu of a professional fee?

Speaker 2:

Yeah, those are interesting thought experiments. I think right now, and in the foreseeable future, the world's not ready for autonomous AI. I don't think patients are ready for autonomous AI. Really, at least from what we've seen in the data that's out there now, we still need a physician. Maybe the AI can screen it or triage it or detect the initial findings, but you still need that physician arbitrator at the end. Some of that's for liability reasons. I mean, who would you sue if the AI misses it, but no doctor was ever responsible for looking at it? So some of it's liability, some of it's patient trust and credibility. But it would be hard to argue against letting it do that if you had a ton of data showing the computer is a thousand times better than a mere human at diagnosing cancer. Ten years from now, 20 years from now, that may be the world we live in. So I think we have to evolve our thoughts on that.

Speaker 2:

From a payment standpoint, you know, we're paid based on the time it takes you to do something and the intensity of the work that you're doing; that's how the physician work RVUs are assigned. So if you imagine a world where you take out all the normal screening mammograms and the doctor's left interpreting all the ones that aren't normal, that are more complex, harder, theoretically the payment you would get per mammogram would have to go up, because the intensity is going up. You're losing the easy, fatty, really quick and easy mammograms, and you're getting the more complex ones that you have to think about harder. So you've got that kind of balance of the time and the intensity that would work in our favor. Now, the net of that may be that you lose the 70% of mammograms that are normal, and your payment still goes down if you're not billing for all of the normals, but there would be some offset in the intensity of the work that's left.

Speaker 1:

Yeah, interesting perspective.

Speaker 3:

Yeah, absolutely. And then I guess, maybe to build on that too, and as we've been kind of alluding to it and referencing it, do you have predictions about Medicare's stance and how it might evolve over the next, I guess, short time frame, three to five years?

Speaker 2:

Oh my gosh, you're asking me to predict with a crystal ball. I think they like to just change it up to keep us guessing sometimes. In the short term, in the three to five years (and obviously we have a new administration and anything can happen), I don't see any huge changes.

Speaker 2:

One thing is that there's really no mechanism for innovation in the fee schedule, in how we pay physicians. They have come up with different ways, which we mentioned in the article. There's a new tech add-on payment in the inpatient world and a new tech APC in the hospital outpatient world; they both have avenues that can incentivize AI. Those payments I was talking about earlier are in HOPPS. The fee schedule doesn't have an equivalent to that, and because the fee schedule is budget neutral, any time you would add a payment to incentivize technology and innovation, you're, by definition, hurting other people. So there's really a perverse disincentive for innovation in the fee schedule. Yeah, there's plenty of legislative talk about potentially adding more money to the fee schedule, tying it to the Medicare Economic Index or having it go up with inflation, but as long as we're tied to budget neutrality in the fee schedule, I just don't anticipate any major changes in how AI is reimbursed, because there's not going to be enough money to go around.

Speaker 1:

Lauren, can I ask, in terms of breast imaging modalities, I feel like by far most of the algorithms are focused on mammography. Are you seeing increases for ultrasound and MRI as well, or what's sort of the landscape there?

Speaker 2:

Yeah, I think you're right. Most of the algorithms are for mammography. I just think it's on a volume basis. So if you're a company and you want to make the most money, you're going to figure out where the addressable market is.

Speaker 2:

And yeah, there are tens of millions of mammograms done compared to the number of ultrasounds and MRs done. But we are seeing more characterization and quantification type tools in the ultrasound and MRI world. I think we talked about radiologist burnout, and radiologists probably aren't getting that burned out from the number of ultrasounds they're reading. They're getting burned out because they have a stack of 200 screening mammograms they're supposed to get through in a day. So if we look at where the market pressures are, it's probably more on the mammogram side of things in the breast imaging world. But we've certainly seen advances across all modalities, not just mammo: ultrasound, MRI and CT.

Speaker 1:

Okay, you brought up the NTAP, the new technology add-on payment, and I'm sorry, I didn't realize that it is only for inpatient and HOPPS.

Speaker 2:

Well, the NTAP is for inpatient, so it doesn't really apply to the breast world, since we don't do a lot of inpatient work. Its equivalent in HOPPS is called a new tech APC payment, kind of a similar idea, where there's this extra incentive payment for technology that Medicare deems to be valuable to its beneficiaries. So there's an equivalent in HOPPS. Okay.

Speaker 1:

I'm just thinking more broadly of our practice in the United States, and the vast majority of breast imaging centers, I believe, are outpatient. So that point you're making about not having that potential avenue is an important one, I think, for breast imaging practice in the US.

Speaker 2:

Yeah, unless they're hospital outpatient departments, right. I think most of them probably are not. There are some, but yes.

Speaker 1:

So for those centers that are outpatient centers, what's going to be the equivalent of the NTAP, or the other one you referred to for HOPPS?

Speaker 2:

So I don't know that there will be one. I mean, like I said, as long as we're bound by budget neutrality, I'm not sure there will be one. So the adoption will depend on, one, it just becomes standard of care, everyone's doing it, it's a cost of doing business; two, a marketing incentive. Some practices are offering these tools because they can get patients to come in to their center versus the one across the street, because they have the latest, greatest AI tool. And yeah, of course, as doctors we're altruistic. We want to do the right thing for our patients. So if we believe the tool is helping us be better, practices will buy it because it's the right thing to do. They're probably not going to buy it at an enormous loss, but if it's a reasonable expense and it's a workflow advantage, I think practices will buy it. Again, those are just things that are considered a cost of doing business rather than a reimbursable service. I don't anticipate seeing reimbursable services in the detection world, in the outpatient world, in the near future.

Speaker 1:

That's a really interesting separation you're making there, in terms of the cost of doing practice, just adopting it for all its benefits and because you think it's the right thing for patients, versus its ability to get reimbursed. It's a really interesting point.

Speaker 3:

And the NTAP, and this is, I guess, for my own and potentially a listener's clarification, the NTAP was the one that was set to expire after three years or so?

Speaker 2:

Yeah, that's correct. So the first radiology tool that got the NTAP was a large vessel occlusion detector for stroke. It would detect the stroke and then it kind of triggered the communication pathway to get the patient to the angio suite for treatment. And they were paying over $1,000 per use of that algorithm, so it was really kind of shocking to the radiology world, like, oh my gosh. But that payment is time limited by statute, so once it expired, there are no radiology tools that have an NTAP out right now. The pathway still exists; we just don't have any existing radiology algorithms that are behind a payment.

Speaker 1:

You brought up the stroke example. Actually, I heard you speak once about that code and its successful approval, and you made such a good point that AI must prove its value, that we need to be able to demonstrate that it can improve the quality of care and/or reduce costs, and that that's really the pathway as we bring these algorithms forward, making that argument in order to achieve payment. It was an interesting conversation because you were talking about how they framed it in terms of how it advanced the care of stroke patients and outcomes, real outcomes in terms of patient survival or less morbidity for these patients. And it really made me stop to think about how we do that in breast imaging, the types of applications where we have to prove value in order to get payment.

Speaker 2:

I think that's a really good point. One of the kind of poster children of the AI algorithms is HeartFlow. I don't know if you do any chest or cardiac imaging, but it's an algorithm that does fractional flow reserve for cardiac imaging, and they kind of checked all the boxes. They got commercial payer coverage. They got a new tech APC payment in HOPPS. Now they have a category one code, and I think the reason behind it was because this algorithm avoided caths. It really kept people out of the invasive angio lab and substituted for caths. So the payers were like, oh my gosh, this is a huge cost saving: good for patients, but it also saves us on the bottom line. It was a very convincing story, so it was really easy for them. I don't mean it actually was easy, but from the outside it looked easy for them to meet all the reimbursement milestones that other companies work really, really hard for years and years to meet.

Speaker 2:

Proving value, I think, is that key piece. How can we prove, with outcomes, that we're making a difference in patient care? And it's a huge bonus if you can do it at a lower cost. That is something absolutely to think about for breast imaging. It's not that big of a stretch if we have tools that can find cancer early. You know, outcomes research is hard. It's maybe hard to put a dollar amount on how much you save, but clearly finding cancer early is important for patients, their overall well-being and the cost. And the opposite is true too. If you can say that a breast lesion is very unlikely to be cancer, you don't have to BI-RADS 3 it, you don't have to biopsy it, you don't have to put the patient through that. So I think both sides of that, reducing unnecessary follow-up and finding things earlier: if you can prove that, then the battle's basically won.

Speaker 1:

Yeah, and I think this segues nicely into talking about AI and breast imaging as a big picture, mammography in the context of today's environment of the cost of treatments for advanced breast cancer. The fact that we have these wonderful new treatments that are extremely expensive actually improves the equation of the benefit of screening mammography, because with early detection, if we can catch it before patients need the chemotherapy, as these treatments become more and more expensive and personalized and so forth, we're saving the system money, right?

Speaker 1:

It was such an interesting way to look at the economic benefit of screening. And you're bringing that up in terms of how we sell this. Obviously this is not just a conversation for breast imaging, this is cancer imaging, right, or screening for cancer. And I agree with you that there's a great opportunity there to frame that argument, to make the case you talk about, demonstrating the quality or the cost savings. We in breast imaging, I think, have a great opportunity to lead, one, because there are so many algorithms that are mammography-based, but also, just more broadly, with that idea of screening and how we can demonstrate the value of screening. I think there's a great opportunity for us.

Speaker 2:

Yeah, the other reason I think we're in a position to lead as breast imagers is that it's one of the few things in radiology where we kind of own the management of the patient. In a lot of the other fields of radiology, we do things because they're ordered. We can recommend follow-up, recommend CT or MR, whatever, but ultimately it's the doctor who orders it. In breast imaging, we're the ones who are managing that. So if we had a tool that could prevent unnecessary follow-ups, that's kind of in our hands, right? We can say it's not a BI-RADS 3 anymore, it's a BI-RADS 2. So I think we have kind of a front row seat to that ownership and the management of the patient in a way that's unique compared to the other areas of radiology.

Speaker 1:

That's a great point. Great point. Can I tie this to value-based care and payment models around that? Do we see an avenue there?

Speaker 2:

Yeah, I mean, we talked about the constraints of the budget-neutral system and the physician fee schedule. All that goes away if you leave fee-for-service and you go to value-based care, right? Because any practice or hospital or health system would want a tool that's going to improve the care for their population. So, even if it's not reimbursed on a per-click level, if it's going to find the cancers early and reduce the need for expensive chemotherapy, or prevent unnecessary follow-up, if it's got those benefits to it, then it's immediately valuable, whether it's got a code to it or not. So the question is, how quickly is that going to come? And it's

Speaker 2:

certainly not a switch that flips on and off, but I think a lot of it's regional, from what I understand. Some areas of the country have progressed pretty far along, and the payers are pushing risk-based contracts on the big health organizations and they're learning how to do that. Other places, you know, where I live, it's really still in its infancy. We have a small, clinically integrated network, a couple of really small risk-based contracts that are really mostly all primary care-based. So I think it depends on where you are in the country.

Speaker 2:

Medicare still says they're committed to transitioning to value-based care. They want to phase out MIPS, which was kind of a bridge to value-based care, and move everybody into these MIPS Value Pathways that look more like APMs. But I think it's a lot harder than they thought it was going to be. The early value-based care models, with the shared savings plans and things like that, didn't turn out to be wildly successful, so they had to go back to square one and reimagine it in different ways. Turns out it's hard to measure quality, and it's really hard to reduce cost in a way that doesn't negatively impact patients. So yeah, kind of a long-winded answer: there's, I think, been slow progress that's sort of variably implemented across the country, but certainly nothing earth-shattering like we may have thought if you had asked me five, six years ago.

Speaker 3:

Dr Nicola, I'm thinking just of potential AI implementations, maybe for a smaller practice, or potentially in a rural underserved area, or really for anyone wanting to implement it. Do you think that there are maybe some hidden or less obvious costs associated with AI and its implementation that maybe we haven't mentioned or that most might not be aware of?

Speaker 2:

Yeah, I mean, certainly the implementation and integration costs are there. So if you're a very small practice and you don't have a sophisticated IT team and you're not plugged into a health system, it's probably going to be pretty hard or expensive for you to build in algorithms that seamlessly work with your PACS and have output go into your PowerScribe or dictation system. All that's sophisticated for a small practice. So the integration costs, the maintenance costs, we talked about this a little bit earlier: who's in charge of making sure that, once you turn it on, it's still doing its job correctly?

Speaker 2:

We use a couple of algorithms in our practice, and you can say, oh, anecdotally it feels like the AI is missing a little more, but do I have data on that? Is someone actually making sure it's still working? There can be all sorts of issues, and really, so far in the US we don't have any real oversight mandated by the FDA. The FDA doesn't allow algorithms to change, so they can't learn, but they can drift, and other things can happen to them. So I think that's another hidden cost.

Speaker 2:

That is, the work and the expense of maintaining the AI and making sure it's still doing its job, and even little things like evaluating which AI you want.

Speaker 2:

I mean, if you're a three-person radiology group and you're doing all you can do to just keep the list at bay and do your procedures and your mammo, which one of the three of you is out there looking at the 12 different nodule detector algorithms and deciding which one they like the best? There are all sorts of, I think, limitations that make this hard to do on a large scale. The ACR has worked on this; there are a lot of tools they have through the Data Science Institute. There are websites that will compare different algorithms and show you the FDA approvals and indications for them. The vendors are great, but sometimes they try to shade around what they're indicated for and whatnot, so it's nice to have that at your disposal. So there are tools out there, but it's a little bit of a daunting task, especially with the current workforce issues, where nobody has a ton of time on their hands during the day just to be tooling around on websites.

Speaker 3:

Sure, and at the very minimum there will always be updates to things, even for the tools you've already decided on. So it's certainly a commitment, with someone who needs to at least be somewhat knowledgeable.

Speaker 2:

Yeah, and one thing we've seen in our practice is this problem with inertia. There may be an algorithm out there that we think would save us time and be awesome, but there's the inertia of stopping what we're doing for even 30 minutes to learn something new. Why? Because you're going to get further behind on the list.

Speaker 1:

Right, right, that's very true, very true. Well, I think that sort of leads to one of the things we were wondering: have you rolled out AI in your practice?

Speaker 2:

So we don't have any mammo AI yet. Our hospital does use some detection algorithms that will look for PE and ICH and cervical spine fracture. What else do we have? Intracranial aneurysm, and then we have the LVO detector that had the NTAP initially. So the hospital has some tools that we use. We've looked at some breast tools, but haven't purchased anything yet.

Speaker 1:

And may I ask, did the hospital ask you to pay, or did they pay?

Speaker 2:

No, the hospital bought it. The marketing advantage, I think: our hospital wanted to be the first in the area to have the tools, so that was important to them. So that was one of the reasons that they purchased it. We like it. I don't know that we like it enough that we would pay for it. It's interesting. The detection algorithms supposedly save you time, but in my personal opinion, if you disagree with it, the time that it takes you to really think about it and say, oh my gosh, if I was in front of a jury of my peers, could I stand up and say that I don't think that's a PE?

Speaker 2:

You know, that might take me five minutes where, if I was just reading it by myself, I would make my own decision and move on. So the net difference in time, I'm not sure it's much of a time savings. It's nice to have a second set of eyes on things, but it's hard to say that, oh my gosh, it's a huge time savings.

Speaker 1:

Yeah, wow, my gosh, what an interesting conversation. I so appreciate how you have a way of explaining things, I think, in a very clear and easily understandable way. I really appreciate that. I always close these podcasts with a little bit more of a personal question, nothing too far out there. But I will say I've wondered, and again, I think I've heard you speak at ACR over many years, and in a couple of those presentations you've mentioned that you've called yourself a health care coding nerd. I want to know, how did your interest in health care policy and payment arise, and how has it influenced your path?

Speaker 2:

We don't have enough time to talk about all this. So I was an economics major in college. I knew I wanted to go into medicine, but I was interested in economics and business and that sort of thing. So I thought, I'm going to get plenty of science classes in med school, let me just do something different in undergrad. So from my college years I was interested in economics in general.

Speaker 2:

Once I was out in practice for a couple of years, in 2015, is when the MACRA legislation dropped, and I found it interesting. I was reading all about it, trying to learn about it. It turned out nobody in my practice had even heard of it or had any idea what to do with it. So I kind of became the de facto expert on MIPS and MACRA for my practice. From that standpoint, I got in touch with people at the ACR and they're like, oh, you know a lot about this, we should put you on the committee. So that kind of snowballed into my involvement with the ACR economics team and then the RUC. I just find it really fascinating. It is absolutely nerdy, and I don't have normal-people hobbies, I just do this on the side, but it's fascinating. It makes you feel like you really can make a difference, sometimes at least, with our advocacy for fair radiology payments, to do what we can for the ACR members and radiologists across the country. So it's rewarding to me and I find it oddly interesting. And it even led to a matchmaking.

Speaker 1:

Yeah, it did.

Speaker 2:

My husband is also an economics expert, so that's always a bonus. Our dinner conversations are fascinating, as you can imagine. We have five daughters, and they all know way more than they should about the payment system. One of them was asking me something about MIPS the other day. Oh my Lord. That's definitely the coolest kid.

Speaker 1:

Yeah, absolutely. Well, again, thank you so very much, Lauren, for your time. We really appreciate your expertise here, and I know there's just so much interest around AI and how we're going to do this, so I really appreciate having this episode for this series on economic issues. Thank you very much. We're so, so grateful. Thank you so much. Thank you both, and hopefully we'll be seeing many of you at the SBI meeting in April. Thank you for joining us, and we'll reconvene for the fourth episode, the fourth and last for this inaugural year.