The ICH Executive Working Committee’s (EWC) recent presentation on E6(R3) updates was noteworthy for several reasons, but perhaps the most interesting was the introduction of a new phrase: the “Quality Continuum”.
As with all guidance, E6(R3) will tell you ‘what’ to do, but not ‘how’ to do it. Having given it a lot of thought, below is my interpretation of what the EWC means by a quality continuum and, hopefully more usefully for you the reader, what it means in practice and how you could create a quality continuum for your organization, both to improve your studies and to demonstrate GCP compliance.
What we know about E6(R3) and E8(R1) is that they heavily emphasise the need for quality, both in trial design and trial execution. Much of that quality is expected to come from what we already know: from lessons learned in trials you’ve run before (both successful and unsuccessful); from earlier phases of the trial; from trials conducted in the same therapeutic area; from relevant information already in the public domain; and from subject matter experts’ (SMEs’) industry experience and knowledge. All of that seems sensible, but what does it mean in practice?
Part of the challenge in using the term “continuum” is that it implies a linear approach. This perception is reinforced by E6(R2) section 5 on Quality Management, which sets out the approach to quality in seven steps.
While I think the seven steps are all appropriate, I believe there’s a problem with approaching quality as a linear process: it inhibits both the concept and the practice of implementing a quality continuum. That’s because quality is not something you start doing and then end up with ‘quality’. To borrow a well-known phrase, quality is a journey, not a destination.
If we approach quality as a ‘virtuous circle’, it makes much more sense. That’s because learning from a trial isn’t ‘one and done’. We’re learning new things all the time during trial conduct: as new data comes in, as new risks appear, as controls are implemented and tested, and as tolerance limits are challenged. It seems obvious that this information should be used as soon as it’s available, not just at the end of the trial. The diagram below shows what we think the ‘quality continuum virtuous circle’ looks like in practice.
The top half relates to quality in clinical trials: the design, planning, management, and reporting of a trial. The bottom half relates to quality in learning: how an organization captures, stores, manages, and uses lessons learned.
It’s interesting when you overlay the requirements of E6(R3) and E8(R1). E8 is focused on quality by design and the quality approach; E6 is focused on the conduct of the clinical trial. The diagram below shows the overlap between E6 and E8 around quality planning, but we feel there are some important gaps. These gaps concern the lessons generated during the management and reporting phases, and they represent a great opportunity to increase learning and therefore the quality of future trials.
Learning lessons and reusing them seems like obvious common sense, and many organizations claim they do it. But our anecdotal experience of working with many CROs and sponsors of all shapes and sizes is that capturing and mining knowledge is difficult. For example, lessons learned about specifics, such as which quality tolerance limits (QTLs) were triggered and what we did about them, either aren’t captured or reside in the heads of a few individuals and are never systematically shared. Yet these are very valuable sources of knowledge when designing your trial and your associated monitoring plan.
E6(R2) didn’t explicitly mention extracting data from our clinical systems, building it back into our knowledge systems, and systematically reusing it to inform current and future trials. The capability to do this already exists; indeed, we built OPRA 5 explicitly to do that. But my hope is that as E6(R3) becomes better defined, it will make following this ‘continuum cycle’ approach more of a requirement, whatever tools you use.