Why 100% SDV is not the way forward

As a recent university graduate starting a career in clinical trials, there's a lot I've had to learn. While getting to grips with the world of good clinical practice, I stumbled upon an article by Dean Gittleman (2020) on some of the bad habits the clinical trial community is struggling to break. One of the more pernicious is the tendency for trials to rely too heavily on Source Data Verification (SDV) as a means of quality control. For those who don't know, SDV is the process of confirming that source data matches what has been entered into the electronic data capture (EDC) system. In 1988 the FDA regarded this as an effective means of monitoring; since then, however, a combination of tools, such as Central Monitoring paired with Source Data Review, has come to be seen as a far better approach.


Gittleman's critique highlighted that while many clinical trials seek to minimize error, there is at times an over-reliance on tried-and-tested but now antiquated methods. In the British Journal of Clinical Pharmacology, Jeppe Ragnar Andersen and colleagues (2015) commented on both the high cost and the limited effectiveness of studies using the 100% SDV approach: not even complete SDV will generate 100% error-free data, yet it is one of the costliest methods to adhere to (Andersen et al., 2015, p. 665). A study by Funning et al. (2009) supported this view, finding that, on average, this approach could account for a quarter of a clinical trial's entire budget. With the cost of running effective clinical trials rising over time, it matters greatly, both to the economic health of pharmaceutical firms and to the mission of delivering effective medicines to those in need, that funds are not wasted on only partially effective quality control techniques.


Andersen's answer to this problem is the use of different monitoring strategies for the most important categories of clinical trial review. In a subsequent paper he co-authored (Olsen et al., 2016), the authors argue that there is empirical evidence to suggest that reduced SDV monitoring, combined with a centralized, risk-based approach, "may be the ideal solution to reduce monitoring costs while improving essential data quality" (Olsen et al., 2016).


The discussion above points towards a blended approach of on-site, remote, and central monitoring strategies as the most effective means of conducting quality control in a clinical trial. Many of TRI's clients demonstrated this throughout the pandemic, when companies had to adapt their monitoring approaches to cope with the problems caused by global lockdowns. Sponsors are now asking 'why go back to 100% SDV?' when a blended approach is cheaper and arguably produces better quality data. Even with my limited experience, it seems like a very good question.



References:


Andersen, J. R., Byrjalsen, I., Bihlet, A., Kalakou, F., Hoeck, H. C., Hansen, G., Hansen, H. B., Karsdal, M. A. and Riis, B. J. (2015), Impact of source data verification on data quality in clinical trials: an empirical post hoc analysis of three phase 3 randomized clinical trials. Br J Clin Pharmacol, 79, 660–668.


Gittleman, D. (2020), 5 Bad Habits The Clinical Trial Community Needs To Break ASAP. Clinical Leader. [Online, available at: https://www.clinicalleader.com/doc/bad-habits-the-clinical-trial-community-needs-to-break-asap-0002]


Olsen, R., Bihlet, A. R., Kalakou, F., et al. (2016), The impact of clinical trial monitoring approaches on data integrity and cost—a review of current literature. Eur J Clin Pharmacol, 72, 399–412.

