Why not look at the positive side?

When it comes to Risk-Based Quality Management (RBQM) and Centralized Monitoring, people often suggest comparing sites to averages: sites above average are deemed to be doing well, and sites below average are said to be performing poorly.


This struck a chord with me. I asked myself, “What in my life would I like to be average?” “Not much” was the answer! But it did make me think about monitoring sites. Should a monitor ever say to a site, “Get better, because you’re below the study average”? The statistician in me would love to reply, “What’s your point? By definition, someone has to be below average.”


There has to be a better way. Instead of thinking about better or worse than average, especially when those averages move frequently, what happens if we set achievable targets for our sites? Targets such as how long it takes to return a CRF, or how quickly a site responds to queries. Setting these kinds of targets means that, in principle, every site can meet them. Now the conversation between the monitor and the site becomes much more productive.


Monitor: “We see that you’re struggling to return your CRFs within seven days. Has something happened that you need help with?”


Site: “We’ve had an issue with administrative staff due to COVID. We had a run of AEs and decided to prioritize those first, and we are now working through the backlog of CRFs.”


Monitor: “Great, from the trend graph I can tell that most of the time you were meeting expectations, so I’m sure you will get back on track soon. Thanks for prioritizing subject safety first.”


We use graphs like the ones below to give a clear indication of performance against set targets. Helping sites improve reduces the workload for both sites and central monitors.
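To make the idea concrete, here is a minimal sketch of target-based tracking. The site names, CRF return times, and the seven-day target are all hypothetical; the point is simply that every site is judged against the same fixed, achievable bar rather than a moving average.

```python
# Sketch: checking each site against a fixed, achievable target instead of
# the study average. All site names and figures below are hypothetical.
from statistics import median

TARGET_DAYS = 7  # assumed target: CRFs returned within seven days

# Days taken to return each CRF, per site (illustrative numbers only).
crf_return_days = {
    "Site A": [3, 5, 4, 6, 2],
    "Site B": [8, 12, 9, 15, 11],
    "Site C": [6, 7, 5, 7, 6],
}

for site, days in crf_return_days.items():
    med = median(days)
    status = "on target" if med <= TARGET_DAYS else "needs a conversation"
    print(f"{site}: median return {med} days -> {status}")
```

Because the target is fixed, it is possible for every site to be “on target” at once, which is exactly the property a study average can never have.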




One of the other things that central monitors ask us for is the ability to track sites that are doing exceptionally well. In a slight contradiction to the above, sometimes we do have to compare against the average (looking about three standard deviations away from the mean), because we don’t always have a clear target, for example for the number of protocol deviations a site should stay below. Using event graphs, we can display some sites in purple to highlight those demonstrating great practice that other sites can learn from.



After all, we're not machines. All our organizations rely on people working together to produce better results. And isn’t that the point of RBQM and central monitoring: to improve clinical trials? Whatever central monitoring tool you choose, it should facilitate helpful, insightful conversations between the monitor and the site, not just produce ‘nagging stats’ that provoke “get better” conversations. Use it to look on the positive side.