
Grouping Investigative Sites into Risk Groups for “Fit for Purpose” Monitoring

Updated: Sep 28

Transcript of the MCC Podcast with TRI's Tammy Finnigan and Elizabeth Robertson, and MCC’s Linda Sullivan


SUMMARY KEYWORDS

sites, data, monitoring, clinical trial, clinical research, monitor, starting, risk, organizations, important, people, industry, approach, process, studies, cto, areas, trial, support, technology


Linda Sullivan

There are more than 344,000 clinical trials underway in the United States and in more than 200 other countries. These clinical trials are all different, of course, but they share one goal in common: to be as productive as they can be.


Linda

Welcome to Episode 25 of CTO, the Clinical Trial Optimization podcast. CTO is a twice-monthly podcast that brings together clinical research stakeholders to exchange ideas, share knowledge, and think creatively about how to oversee, manage, and optimize clinical trial planning and execution. The podcast includes discussions with clinical research industry thought leaders and practitioners about how the industry is transforming clinical research design and operations to speed up the delivery of life-changing therapies. I'm your host, Linda Sullivan, and I want to thank you for joining me on this exciting journey to raise the bar on clinical trials and provide an interactive forum for discussing what we do professionally every day. We hope you'll subscribe to CTO on your favourite podcast platform, so you'll automatically get every episode in your feed for free. In this episode, we're going to be exploring an aspect of risk-based quality management: specifically, the grouping, or tiering, of investigational sites into risk groups that align with fit-for-purpose monitoring approaches. I'm joined today by two guests: Tammy Finnigan, Chief Operating Officer of Triumph Research Intelligence, also known as TRI, and her colleague Liz Robertson, who's a Risk-Based Monitoring Operations Consultant. So welcome to CTO, Tammy and Liz.


Tammy Finnigan

Thank you. It’s nice to be here.


Liz Robertson

Thank you for having us.


Linda

So, Tammy, tell us a little bit about yourself. How did you get into clinical research? And why did you decide to be part of TRI?


Tammy

Oh, that's a big question. I've been in clinical research really my entire career. I didn't take to lab work after university, and got into clinical research as an alternative. I've worked as a clinical trial monitor, as a data manager, as a clinical project manager, and in some line management and training, so I've done the whole bit with CROs and pharma. Then, about 10 years or so ago, I moved into consultancy, looking at process development and process improvement in clinical trials: how we run our trials, how we select our sites, how we manage patient engagement. As part of that there was a big technology component, where you're starting to look at ways of capturing more metadata around clinical trials. At the same time, we were starting to see some regulatory changes happening within the industry around risk and risk management, and around using data more intelligently to inform trial decision making. In my consulting background that had been a big missing piece; some organizations had really great data strategies, others less so, but we were seeing a real push within the industry to move towards that. And that's really how I got involved with TRI, bringing the consulting and technology background to the organization.


Linda

What does TRI do today in terms of the services provided? Obviously, you're still there, so they must have addressed some of that data and technology side of things.


Tammy

Our background is consulting, so we did do a lot of consulting around process improvement and implementing technology to support process improvement. But around 2013, risk-based monitoring was starting to come to the fore with the FDA, the MHRA, and the EMA as well, as they set out their strategies for risk-based approaches. We saw that there was really a gap in the market for process and technology in that area, and it had always been a passion of mine and of Duncan Hall, my business partner, to bring both of those things together. I think we could see a real shift happening in the industry, where there was no real process or technology to support it. It wasn't like EDC, where you were changing a paper process into an electronic process; this was really something quite different. We took the opportunity at that point in time to look at process around the regulatory shift, and also technology to support it. That's how TRI really started in 2013. Since then, we've continued to grow our consulting experience with a real focus on risk-based quality management. We also have a product called OPRA, which supports the data visualizations for risk-based quality management. It allows you to perform your risk assessment and look at data visualizations that will help inform your risk review, and also to identify sites that maybe are not performing as well, or are struggling, or whose data quality doesn't look as good, by bringing those big data sets in and turning them into visualizations. So that's the main focus of TRI, and really the only focus of TRI!


Linda

Great, thank you. So Liz, tell us a little bit about yourself, how did you get into clinical research and what is your role at TRI.


Liz

I started off on a similar route: I did a degree and then went into academic research. I did that for a little while, and then an opportunity came up to work on a clinical trial in the NHS, so I made the transition into clinical trials. There we were out visiting sites, looking at all the different setups and problems. Then the opportunity came up at TRI, where I could use my experience of academic research, seeing all the different areas that people are investigating, and also the practical experience from the clinical trial work of going out and talking to sites, to understand how we can look at the risks involved, help improve the process, and help sites get involved in research without being waylaid by all the processes involved. So I'm now at TRI, and I work as an operations consultant. We work with the different teams to set up the KRIs, evaluate the risks and the best way of monitoring sites, and get that information and make it more useful where possible.


Linda

That's a great lead in for the discussion that we're going to have today. Before we get into lots of details, I thought it would be helpful if we explored the concept of fit for purpose monitoring. What exactly is it? And why are we talking about it right now? Why is it a topic of interest?


Tammy

I think it's a topic that's been around for a while, since the concept of risk-based monitoring came to light in that 2011–2013 period. But I think we're now at the point where people are quite used to visualizing data and using metrics to identify issues with the trial and with the sites. Now we can start to use that information to help us tailor the monitoring strategy, both for the study and for the site, based on what we know of the sites, or don't know, in some situations. It's really about making more practical use of the data at the beginning of the trial, not just during the trial, which I think we're all getting more comfortable with. One of the other things we've been talking about recently, Linda, is that we are seeing more and more clinical trials happening year on year, and we do need to look at diversifying our sites and our patient populations. I think there's also a need to target our resources to new sites that probably need more support in the beginning, and lessen our resources on sites that we know well and are very comfortable with. So bringing in new research sites doesn't mean we have to add a lot of additional overhead to our clinical trial processes and budgets; it allows us to really focus our resources on the sites that need assistance the most.


Linda

That's a great point. And it's one of the things that piques my interest. Certainly, I know when the whole idea of risk-based monitoring started to formulate, MCC did a survey and we had lots of different models of what risk-based monitoring might look like. I think some of those concepts have not been adopted and others have, and this particular one, can we identify and match up the right resources for the right sites, is one that was always of interest. And as you say, now that we have some new sites, or less experienced sites, maybe coming in, it's even more important. Surprisingly, there are still some organizations using the old monitoring model, where every site gets the same number of site visits on a set schedule, but certainly the industry has been moving towards more risk-based approaches. It'll be interesting to see, as we talk further here, whether people are actually tiering sites, or putting them into groups that align with certain monitoring resources. Is that something that you're starting to see, that organizations are actually using that data to tier sites?


Tammy

I wouldn't say that we're seeing it being utilized to any great degree at this point in time, but it's definitely an area that has gained a lot more traction, and people do want to start looking at it. There have been advances in technology that support it: there's targeted SDV and those types of things in EDC, and then you have tools like the OPRA application that allow you to see site profiles at a point in time, but also historically. So we've got more of that metric-driven approach that allows us to make some of those decisions. But I think one of the challenges, again, comes back to resources and budgets; it's much easier to work with a very defined strategy. Here, we're saying, okay, we don't necessarily have a bucket of visits, for example, or hours that are going to apply across the board to every site; we're going to make adjustments. That can be a practical challenge to implement from a resource and budget management perspective. But there are ways to implement it, and you reap the benefits of taking that approach because, as we just said, you can target your resources to the areas that need them most, rather than spreading them evenly across the board.


Linda

Have you found that some of your customers that adopted risk-based monitoring early on, the early adopters, are maybe evolving towards being more comfortable with fit-for-purpose monitoring? Is it part of a maturity model?


Tammy

I think there's a maturity model both on an organizational level and an industry level. We've got some organizations we work with that are maybe new to taking a truly risk-based approach. They've maybe been doing risk assessments on their protocols, and maybe some visualizations on their site data, but they haven't necessarily been adjusting their monitoring strategy. So there is that maturing process within an organization. But we've got others that have maybe just been doing risk assessment and haven't really looked at using data to target their monitoring; as they now start to look at that, they're going straight in for the tiered approach, because the industry has been talking about it a little bit more and has more guidance around how to implement it. I think those people coming in a little bit later are reaping the benefits of what the trailblazers have gone through, so they can take a few leaps over that more step-by-step process and get to this type of approach much earlier.


Linda

Liz, I know you're out there helping people implement on the operations side. What are your thoughts? Are people really thinking about fit-for-purpose monitoring in a way that would allow you to tier sites and give them different monitoring approaches, or are we not quite there yet?


Liz

I think it's definitely a work in progress. You see lots of different sites and lots of different studies. The thing is, like we've been talking about, there are so many more studies being done, and you've got so much more data, that we're going to have to think of a better way of managing the data while also maintaining patient safety as well as the quality of data. So being able to move to a more fit-for-purpose approach, where you're giving the sites the support that they need, is definitely something that's going to continue to develop in the industry. It's interesting, when you see studies come in, they've all initially been working on the 100% SDV model. To see them evolve and use centralized monitoring, and some of the remote monitoring techniques as well, to keep on top of that data is really interesting. And it's definitely needed with the increasing number of research studies that are going to be happening going forward.


Linda

I think, in my experience, as I've talked to organizations and to other guests on this show, we're seeing that people need to be comfortable that the data is going to tell them the things they need to know. They've got to get comfortable with the data before they really think about, and are ready to explore, how to use the data to drive activities. It's almost like they run traditional monitoring side by side with the data for a bit to make sure they're comfortable, and then they're more willing to open the door to using data to help make decisions and drive strategy. Is that the sense of what you're seeing as well?


Liz

Definitely. I think we're seeing a lot of cases where they've been using these traditional sorts of approaches and monitoring everything. But with increasing work demands, that's going to become more challenging. So it's interesting to see people coming around to the idea that they're still maintaining data quality, they're still maintaining patient care, but they are able to monitor the sites, and the monitoring is focused on giving the sites the support they need. If a site is quite new to research, then they're going to need more support; if they're quite experienced, then they may need less, depending on the project they're working on and the other projects they have going on at the same time. So you can see the process evolving from 100% SDV to a more adaptive approach. It's definitely something that is going to be taken up more across the industry as people explore wider areas of research and diversify the patient population. It's an interesting story to watch, as we're actively seeing it evolve.


Tammy

One of the other areas that I think is starting to drive a move away from a traditional monitoring approach is data sources. A large proportion of our research still has the concept of patients coming into clinic, where they see a healthcare provider and go through assessments that are recorded and put into an EDC. But with more technology, we're seeing more patient-reported outcomes and ERTs being used, and there's definitely a growing area of decentralized trials, where patients are coming into clinics less, and sometimes even not at all, so we're getting more data direct from patients. That concept of monitoring, I think, is going to change as a result of technology, along with the more traditional approaches of reviewing medical records, etc. Once you get beyond the inclusion criteria process, the data is going to come more directly from patients, so we need to think about other ways of looking at data quality and supporting sites around the processes. I think those things are going to drive a change in the monitoring strategy as well.


Liz

Definitely. And as a lot of the data that we're collecting is more electronic now, that's going to mean sites using different systems, which will have a massive impact on how data is monitored. So yeah, you're right. And it's interesting that it is becoming more centralized.


Linda

I want to follow up on a point here. You've talked a little bit about how monitoring itself is changing, and we've had several other podcast episodes about how we find people to do monitoring, what skills they need, and so on. The flip side of fit-for-purpose monitoring could be that, as we get new monitors coming on board, perhaps a site that's not very experienced in clinical research should not be getting the new monitor either. Have you seen anything related to making sure the more experienced monitors, who know what should be happening at a site, are matched to the sites that need them? The type of monitoring oversight needed at a newer site may be very different from what an experienced site needs, not only in the frequency of visits, or what you're doing when you're there, but really in the skill set that a monitor needs to bring to the table. Has that come up at all?


Tammy

It hasn’t come up for a while. In the early days, when we were starting to look at data, we did see some situations where you’d have really poorly performing sites. And this might not be the most politically correct thing to say, but when we looked at sites that were really off the wall in terms of their performance, there was often a correlation with the performance of the monitor as well. We did some very early, I wouldn’t call them case studies, but as we dived into the data and the reasons behind site performance, monitoring performance definitely came up. So I think it's something that we should be taking into consideration as we move forward with this type of tailored or fit-for-purpose monitoring approach, because if both parties are going through training, that just presents a bigger risk. If you've got a site that's being educated and trained up on clinical trial processes, you should definitely want to put your more experienced monitoring staff with those sites. I think that's a really good point and something to consider. There's definitely a correlation between how a site performs and how a monitor is performing as well.


Liz

Is that something that you're tracking in your system? Can you look at the performance of sites and tier them by the monitor?


Tammy

We haven't got anything like that up and running at the moment. I think there's often a lot of sensitivity in collecting that type of performance data in a system that is visible to multiple people within an organization, because it's often data that's controlled by HR, so there are data security and data confidentiality aspects around that. But when we are looking at data for a site, for the things that are going to influence or drive decisions on whether to look at a site more closely or apply more resources, we try to focus on things that are much more driven by the site and less influenced by third parties, like your monitor or your data manager, because those can be quite heavily skewed based on how those individuals are performing or the processes they're following. It's not necessarily a performance issue, but a process issue: what are they being asked to measure? How are they being asked to track? What is the focus behind what they are doing? Things like site issues, for example, or manual queries can often be heavily influenced by the people raising them, and not as much by the site themselves.


Linda

That's an interesting point. What are some of the things that you find organizations are measuring, around site performance that could be used to tier them into different risk groups?


Tammy

The areas that we've been looking at: you have the sites that we have experience of, and those are the ones where we try to use historical data to tier them at the beginning of a trial, and then look at their actual data while the trial is in flight, making adaptations or adjustments to which tier they sit in. Then you've got sites that are research-naïve, or that we or our clients have not used before as an organization, and they pretty much automatically get put into the high-tier monitoring bucket, because until we can see how they're performing, we're going to put them into that higher-risk bucket and then adjust from there. For sites that we have experience of and have data from, there's probably nothing surprising: it's things like, how quickly are they giving us data? What do their recruitment and patient retention look like? What kind of protocol deviations are we seeing, particularly those that are considered important, so related to consent or to inclusion/exclusion criteria? Are we seeing lots of missing data from that site? It's the typical site performance data that we measure right now, and have been measuring for quite some time, and it's utilizing that to say, okay, based on the last x studies, or the average we've seen in the last x studies, where should this site sit in terms of our monitoring strategy? Sometimes you don't want to go too far back either, because there's a lot of rotation of staff at sites, and staffing can have a big influence on how a site performs. So it's also about the population of data that you take into consideration when you are making those decisions. And then there's always the aspect of the engagement between the monitor and the site during the selection process.
I think, talking about monitoring skills, monitors are always building on that auditor-type skill set, where they're inquiring about the site's processes, the site's staffing, and the site's experience, so that they are building up a picture to say, okay, since the last study there's been this type of change at the site. So even though the data on the last study was great, we might want to put them in a slightly higher tier, because there's staff we're not used to, there are new processes in place at that site, there are new systems, and we'd probably want to keep a closer eye on them in the first few months of the trial to make sure things are going well. There's always a human element to it, as well as the data that we have available, so it's not just a straightforward metric that we're looking at; there are other aspects that should be considered.
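The tiering logic Tammy describes, historical metrics plus a human-judgment override for staff or process changes, can be sketched in code. This is purely an illustrative sketch, not TRI's or OPRA's actual algorithm; the metric names, thresholds, and tier labels are all hypothetical assumptions chosen for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SiteHistory:
    """Hypothetical historical performance metrics for one site."""
    data_entry_lag_days: float   # average time from visit to data entry
    retention_rate: float        # fraction of enrolled patients retained
    important_deviations: int    # e.g. consent or inclusion/exclusion deviations
    missing_data_rate: float     # fraction of expected data points missing

def tier_site(history: Optional[SiteHistory], recent_staff_turnover: bool) -> str:
    """Assign a monitoring tier; thresholds below are illustrative only."""
    if history is None:
        # Research-naive sites with no track record start in the high-risk bucket.
        return "high"
    score = 0
    if history.data_entry_lag_days > 7:
        score += 1
    if history.retention_rate < 0.85:
        score += 1
    if history.important_deviations > 0:
        score += 1
    if history.missing_data_rate > 0.05:
        score += 1
    tier = "high" if score >= 3 else ("medium" if score >= 1 else "low")
    # The human element: new staff or processes bump a low-risk site up a tier
    # so the monitor can keep a closer eye on the first few months of the trial.
    if recent_staff_turnover and tier == "low":
        tier = "medium"
    return tier
```

A real implementation would of course draw these metrics from EDC and CTMS data and revisit the tier while the trial is in flight, as discussed above; the point here is simply that the historical data and the qualitative override both feed the same decision.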


Linda

You raise a great point. There could, in fact, be something about the protocol itself, where there are some new testing procedures, or something that nobody has a lot of experience with, that would put all the sites into a ‘we need closer monitoring’ kind of category. So maybe it's not even related to actual site performance; there could be some studies where, given the work being done, it wouldn't even make sense to try to tier the sites. Have you seen that at all?


Tammy

Yeah, I think one of the things with the E8 guidance as well, and what is so important as an industry, is that we really treat the sites as stakeholders in our risk assessment process and give them a voice in that process. When we go out and do protocol feasibility with a small number of sites and some key opinion leaders, it's about thinking of the intent behind that and carrying it on as we go into site selection, so that we're really building up a picture of what the site views as risks to our protocol, both within the therapy area and indication, and also what they view within their own set-up as being a risk. Are there things for them that are not standard care? Those are going to be higher-risk areas, where maybe they're going to have challenges. That dialogue is really important to help inform the tiering, and hopefully improve the protocol overall; the voice of the site matters to the risk assessment and prioritization process. We're collecting more and more data on trials; the amount of endpoint data, and data points in general, is increasing significantly year on year. It becomes a question of how we best monitor and utilize that, and also of providing the site with some guidance: if they are stretched and have to let something go, which is not beyond the bounds of possibility, what should they be prioritizing? What is most important for this study, and are those things they have supporting processes in place to manage? That sort of dialogue in the early part of the trial, even before sites are selected or initiated, is an important part of this best practice and of the risk-based quality approach in general.


Linda

I couldn't agree more. Thank you so much for sharing your experience and your views. I want to let our listeners know that the two of you are going to be running a mini workshop on this topic at the upcoming WCG MCC Clinical Trial Risk and Performance Management Virtual Summit taking place September 28–30. I'm hoping I get to sit in on the session. I look forward to hearing more about the approach, and from participants in the group about their experience and where this might be going in the future. Do you have anything else to add before we end the show?


Tammy

I think we’d just like to thank you for this opportunity to be part of the MCC summit. It's a topic that we are very passionate about. While we like our data and our data visualizations, how we engage with sites is incredibly important to ensure data quality, and we're really excited to be speaking about this topic. Thank you.


Liz

Absolutely. It's an important topic, and with the increasing amount of data being generated, it's definitely going to continue to be important. So yes, it's been interesting to have this discussion.


Linda

Great. Well, thank you. To learn more about the workshop as well as other interactive collaborative sessions, please visit www.MCC-summit.com

I'd also like to thank our producer, Michael Levin Epstein. And finally, we want to hear from you, so rate us on iTunes. Are there topics you would like to hear us discuss? Guests we should have on the show? Let us know by emailing me at lsullivan@metricschampion.org

This is Linda Sullivan and we'll see you next time on CTO, the clinical trial optimization podcast.