A story about a multi-national sponsor that may resonate with your experiences in implementing Risk-based Quality Management.
The Sponsor has been working on initial risk management for 18 months. In that time they have focused mainly on operational risk. Most of their studies are outsourced, so they have looked to their CROs to provide risk and quality management at the study level.
Unfortunately, the CROs they work with haven’t inspired confidence in this area. They’ve shown little evidence of data surveillance, trend analysis, risk escalation or adaptation of the monitoring approach, either internally or to the Sponsor. For this reason, studies have retained a routine monitoring and 100% SDV approach to quality management.
The Sponsor uses a well-known data tool to visualise some core risk indicators, such as AE reporting rates. These indicators are common across all studies but have not been used to influence the monitoring approach.
The Sponsor knows that RBQM is important and has appointed a Central Risk Manager to work with the study teams, helping with the initial risk assessment and then the ongoing monitoring of risk.
The Sponsor, like the majority in the sector, is naturally cautious and wants to take a measured approach to implementing RBQM. They want to build on the work already undertaken and start to use the outputs from the process to positively influence their monitoring approach and improve trial quality.
They want to run pilots on low-risk studies so that they can compare the benefits of RBQM against a traditional monitoring approach. That will allow their team to build confidence in the process and its outputs before starting to adapt their current monitoring approach.
In line with the regulatory guidance, the Sponsor wants to show more effective oversight of their CRO partners and more collaboration around the central monitoring process. This will drive compliance with the GCP regulations and allow greater flexibility in the way they engage with their CROs.
What happened next?
The Sponsor approached several organisations, all of whom immediately tried to sell software solutions. That really wasn’t what the Sponsor wanted or needed.
Then they came to talk to TRI. We took the time to listen to the Sponsor’s team and understand their challenges and aspirations. Only after those conversations did we suggest the following approach.
The first thing would be to review the existing processes and roles. This would involve thoroughly evaluating the Sponsor’s existing approach to protocol risk assessment, quality management planning and strategic monitoring plan development. Only then would we be in a position to identify any gaps in roles or processes and make appropriate suggestions for change. If required, we’d be able to help the Sponsor fill in those gaps from our extensive library of processes and training materials.
As processes and roles are updated, TRI would apply the revised processes to one or two pilot studies to demonstrate their value and outputs. This would extend to basic, ongoing central monitoring of the study data, showing the ability of RBQM to identify areas of site and subject risk.
TRI would work alongside the Sponsor’s risk managers and site monitors (and CROs if appropriate) to evaluate data on an ongoing basis. This includes recommending to the site monitors which areas of risk to investigate and their likely causes, then working with them to confirm observations and, where needed, take corrective action. This would run alongside the traditional monitoring approach and so provide a ‘soft’ confirmation of the power of RBQM without any long-term commitment from the Sponsor.
Only when all that work is complete would TRI look to work with the Sponsor to evaluate the findings from the pilot(s) and provide support in determining whether they need technology to support their RBQM implementation.
Now, isn’t that better than someone just trying to sell you software?