ICH GCP FAQs

No. 1 (GCP Ref. 1.63)

Q: What qualifications are needed, or who is considered acceptable, to certify copies?

A: To answer this question, you first need to consider what the certification process will be.

For example, if the certification is of a document generated from a system through a non-validated process, how would a person confirm that the copy has the same information, including the data, context, content and structure, as the original data in the system? This may be more straightforward with some systems than others: is all information visible on a screen, or is some information hidden? Does all data in the system print out on the report, and if not, is the missing data relevant? Does the system hold an audit trail, and is it available for the data being exported/copied? This may require someone with a different skill set than, say, someone making a scan or photocopy of a paper document, where the process is much more straightforward because it is a direct comparison, and they may also need to assess legibility before certifying.

Either way, the process needs to be defined and the organization must train those individuals who will be responsible for the process.

No. 10 (GCP Ref. 5.0, 5.0.2)

Q: Risks are always identified as potential facts, right? I mean, errors made with the ICF, for example (not approved in time by the EC although used), are already deviations, not risks.

A: I like to think of this using the simple worked example below.
Fact (known information): the study drug has a short shelf life due to a lack of stability data.
Risk (what could go wrong?): the study duration may exceed the shelf life. What would be the impact? Cost of re-labelling, cost of manufacturing additional supplies, suspension of subject recruitment and/or treatment, safety concerns, and possible impact on results due to treatment interruption. What could be done to mitigate? Continue long-term stability research; budget for additional supplies; put a plan and forecast in place to detect and monitor when additional supplies/re-labelling will be required, to avoid treatment interruption or recruitment suspension.
Issue (what has gone wrong?): it is confirmed that the study term will exceed the shelf life. What has been affected? Nothing at this time (as we have forecast and detected the date when mitigations need to become effective). Was it a previously identified risk? Yes. What has been done to mitigate? Budget secured for additional supplies/re-labelling; the date for this to become effective was forecast in advance; the process for shipping additional supplies/conducting re-labelling is in place, so it is time to put the plan into effect. Act to resolve: implement the re-supply/re-labelling plan.
What risk assessment allows us to do is be proactive and better prepared to deal with issues before the impact escalates.

No. 11 (GCP Ref. 5.0)

Q: How are the regulators viewing non-brick-and-mortar sites (i.e. telehealth)? How does this fit in the new quality management system?

A: Quality management still applies; in fact, more so in this scenario. The risks to the critical variables should still be assessed and controlled, and the provision for centralized monitoring activities should very much support the requirements of this scenario. We are expecting similar requirements to be added to E8 (General Considerations for Clinical Trials) in the next 18 months, so all types of research will be expected to follow this approach.

No. 12 (GCP Ref. 5.0)

Q: What is the definition of 'safety' in 'protecting subject safety'? I believe this really means AEs/SAEs, etc. However, as a CRA we often hear it used to refer to 'safety' labs, e.g. general chemistry, haematology, etc. These are referred to as safety labs and may have no bearing on critical data. Isn't it the PI's responsibility to review these 'safety labs' and track his/her subjects' safety?

A: One of the core principles of GCP is to protect subject rights and well-being; this is often paraphrased as 'subject safety', but that is only one aspect. All parties involved in clinical trials are responsible for collectively protecting the rights and well-being of subjects. This begins with the sponsor in protocol development, evaluating the risk-benefit of the protocol procedures that the subject will undertake, ensuring that we are not exposing subjects to risk unnecessarily, and putting controls in place where risk is necessary. You are correct that the PI is responsible for the subjects' care and for ensuring that any decision on a subject's medical care has the subject's well-being at its core. As a monitor (CRA), you are overseeing how the PI/site is conducting the protocol, ensuring they are following protocol procedures, which are in place not only to meet the objectives of the study but to protect the subjects' well-being. Not all data collected in a clinical trial is 'critical'; what we need to do is determine what is, and ensure that our activities focus on those areas. Some things, such as safety labs, are routine clinical care and are the responsibility of the PI and site staff, but there may be key elements of the safety labs, e.g. ALTs, that are of particular interest because of the current safety profile of the drug and that must be a focus of monitoring activities. There are also some changes being put forward by industry bodies around the difference between SDV and SDR: you may review (SDR) and verify (SDV) specific tests in a series of labs, looking for specific values/AEs for all subjects, but only review all labs for a small percentage of subjects to determine whether there is any persistent behaviour/practice evident at the site that would put the subjects, or the data the site is producing, at risk.

No. 13 (GCP Ref. 5.0.1)

Q: When should risk identification take place: during the development of the protocol, or when the protocol is finalized? Who/what expertise would you expect to be at the risk identification meeting?

A: It is an ongoing process, but it should be initiated during the protocol development cycle, as early as possible, ideally with a robust synopsis in which the objectives can be evaluated. The extent of the assessment will expand, and more functions will become involved, as more detail is added to the protocol. Any changes from the draft to the final version should be assessed, as control measures may change and risks may close depending on the changes.
During the conduct phase, processes should be in place to support the identification of emerging risks, which should go through the same evaluation process to determine the appropriate controls.

No. 14 (GCP Ref. 5.0.1)

Q: Would you generally consider an "important" process like consent a "critical" process from a risk management perspective? For example, if you have an optimized process for consenting, no historical issues with the consent process, and nothing unique about the study that would increase risk (vulnerable subjects or multiple consents), could you omit the consent process from your list of critical processes?

A: Great question. I would still consider it a critical process in terms of subject well-being and protection, but in terms of evaluation I might rate it low probability, medium-high impact and easy to detect. If I have good processes in place, there have been no issues with consent deviations or CAPAs in the past, and there is nothing unusual about the consent process in the protocol (e.g. vulnerable populations), I might state that there is low/negligible risk to this critical process and that it is controlled by my standard processes. It is acceptable to state that there is no or negligible risk associated with a critical process; just because it is critical does not mean there is risk associated with it.

No. 15 (GCP Ref. 5.0.1, 5.0.2)

Q: Where would secondary objectives fall as a structural risk: essential or non-essential?

A: Each objective should be evaluated in terms of its importance to the program and the drug safety profile. It would not be accurate to state that all secondary or exploratory objectives are non-essential, but they should be questioned/challenged in terms of their necessity to the program and profile, and the risk-benefit should be considered in terms of subject protection and the burden to subject and site.
It is also important to note that non-essential risk can be present in the processes and data related to the primary objectives, so these should be scrutinized too, to ensure that we are doing what is necessary.

No. 16 (GCP Ref. 5.0.1, 5.0.2, 5.0.3, 5.0.4)

Q: What is best practice for implementing risk management: should it be a new process or integrated into existing processes?

A: Best practice would be both: a risk assessment and management SOP that sets out the process, responsible functions etc., and then integration of that process into existing processes where applicable, e.g. project management/study management, protocol development, study planning.

No. 17 (GCP Ref. 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.18.1)

Q: What is the current status of establishing a widely recognized risk assessment tool for clinical monitoring? I have the impression that countries are individually setting up their own tools for evaluating risk criteria on the basis of available literature, but there is no harmonized approach globally yet.

A: There is no harmonized approach, and I am not sure there ever will be. We are starting to see best-practice processes emerge for risk assessment and evaluation that are being accepted by the main agencies, in the sense that they have not been challenged during inspection. Similarly for clinical monitoring, I think the question is referring to a standard tool for risk detection as part of centralized monitoring. Some industry organizations, such as MCC and TransCelerate, have set out common data to review centrally across all trials, but risk, to a greater or lesser extent depending on the trial design, will always be specific to the trial, so it is unlikely to get to the point of being 'one size fits all'. However, standards are emerging and, as above, we do have examples of central monitoring plans and systems, such as OPRA, being used in studies that have been subject to inspection. What the agencies are looking for is a traceable process: that you have identified what is critical, assessed the risks, and tailored the study management plans to address those risks, some of which may include centralized monitoring techniques.

No. 18 (GCP Ref. 5.0.1, 5.0.2, 5.0.4, 5.0.6, 5.0.7)

Q: How do you see the implementation of Quality Tolerance Limits happening and being aligned with the risks assessed for the trial?

A: This is a great question. Quality Tolerance Limits should be set for critical data/processes that could jeopardise the overall integrity of the trial data, or that point to issues related to subject rights/well-being, which would also impact the trial's integrity.
This is likely to involve a combination of data/indicators that are common to most studies and some that are specific to the study. To that end, they may not always relate to specific risks in the risk assessment but may be more standard indicators of the quality health of a study. Examples common to most trials are eligibility violations and consent deviations; examples specific to a study are missing endpoint data, early termination and mis-stratification. During the trial they should be reviewed as part of the risk review/escalation and central monitoring processes.

No. 19 (GCP Ref. 5.0.1, 5.0.2, 5.0.3)

Q: Who should identify the critical data/processes for a study? Should it be science?

A: Best practice would be for this step to also be a cross-functional activity performed at the beginning of the risk assessment process. It is essential to have science involved, but I would not recommend that this task be performed solely by science, or by the protocol owner(s); as they have written the protocol, it will be more challenging for them to remain objective (it is like critiquing your own work). The views of other parties can help to examine whether data/processes are truly essential to the study, or to evaluate whether data/processes are missing. If science initiates the process, other functions should have the ability to review and comment, but I would recommend it being a cross-functional activity.

No. 2 (GCP Ref. 1.65, 5.5.3)

Q: During the discussion surrounding electronic data and ensuring validation, compliance with 21 CFR Part 11, etc., does this requirement apply to the use of Adobe for the creation of electronic signatures? Adobe is not validated; this is why I ask.

A: It depends on how Adobe is being used. If your process states that you are using electronic signatures for controlled documents, that these e-signed documents will be part of your 'system of record', and that this is implemented through Adobe, this would not fulfil the requirements for electronic signatures or validation. Documents generated through this process would therefore not be considered to meet the requirements for electronic signatures or control, and would not be considered source or the system of record.

No. 20 (GCP Ref. 5.0.3)

Q: Your description of assessing probability is at odds with your earthquake insurance example - should you also assess likelihood of recurrence? Maybe something has occurred in a previous trial but you've installed mitigations for it?

A: The point of the earthquake example is that risk management is often emotive: through personal experience, people are more likely to try to mitigate a risk even though they objectively understand that the likelihood of occurrence is much lower (due to the nature of earthquakes in this example).
In a clinical trial, we are quite likely to think first of risks which have previously occurred, those we have experience of. We must evaluate those risks objectively and not put disproportionate risk management measures in place just because they occurred previously. If risk controls are already in place from previous experience, there is likely to be an impact on the likelihood of the risk occurring again, and this will need to be taken into account.
We also have the opportunity to learn from any previous occurrence and should consider whether the management measures we put in place on the last trial were effective or not, and potentially consider changing our approach if they were not.

No. 21 (GCP Ref. 5.0.3)

Q: Re quantification of risk: the variables are orthogonal. Wouldn't a calculation based on a sum of squares be more reasonable?

A: You are correct that the variables are uncorrelated and can hence be treated independently, and your approach would make good sense. At this stage in the industry's maturity in this area we are trying to keep things simple and accessible, basically using the scoring as a means of prioritization and to assist with determining proportionate responses. We have no doubt that more sophisticated algorithms will develop over time as acceptance and experience grow.
At TRI we are always researching new techniques to be more accurate in the way we measure and detect risk, but we also have to be careful not to run ahead of our core audience and confuse people more than we help them!
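
As an illustrative sketch only (the 1-5 scale, the example scores and the multiplicative scheme are assumptions made for this example, not something prescribed by ICH E6 R2), the two ways of combining the variables might compare as follows:

```python
import math

# Hypothetical 1-5 scores for a single risk (assumed scale, not from ICH E6 R2)
likelihood, impact, detectability = 4, 3, 2

# Simple approach often used for prioritization: multiply the scores
simple_score = likelihood * impact * detectability  # 24 on a 1-125 scale

# The questioner's suggestion: treat the variables as orthogonal and combine
# them as a root sum of squares (the Euclidean length of the score vector)
rss_score = math.sqrt(likelihood**2 + impact**2 + detectability**2)  # ~5.39

print(f"multiplicative score: {simple_score}")
print(f"root-sum-of-squares score: {rss_score:.2f}")

# Either number is only a ranking aid: risks are compared against each other,
# or against a pre-agreed action threshold, to decide where controls are focused.
```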

No. 22 (GCP Ref. 5.0.3, 5.0.4, 5.0.6)

Q: When detecting risk, we are detecting risk only on the critical data that was identified, correct? We are not looking at non-critical data for risks, e.g. the fluff.

A: Risk is typically multivariate; sometimes we will look at many indicators to get an idea of quality, and there is not always a 1:1 relationship. What we are trying to achieve is to focus our trial activities on those processes and data that are critical to the study's success and the protection of subjects, and where there is the greatest risk of something going wrong.
Sometimes looking at specific critical data could risk unblinding, so detection methods will often look around the critical variable, e.g. at the processes associated with its collection. There may well be risks that do not relate directly to critical data but that could impact the study or subjects; this is where past experience is useful, either from historic data or from the cross-functional team, but start with what is critical first.

No. 23 (GCP Ref. 5.0.4)

Q: What would be an example of a risk control at an organizational level? You stated that risk control can be at a study, site or organization level.

A: This is typically a change at a procedural level, similar to a CAPA: putting preventative measures in place to avoid a risk, perhaps as a result of a persistent breach of QTLs or the same risk emerging in many trials.
It may also be related to vendors: if root cause analysis has pinpointed a risk with a particular vendor, they may be removed from the 'approved vendor' list.

No. 24 (GCP Ref. 5.0.4, 5.0.7)

Q: Can you please explain the difference between QTLs and risk indicators/thresholds? How does your interpretation align with or differ from the recent TransCelerate paper?

A: Quality Tolerance Limits (QTLs) are set at a study/protocol level and are typically a subset of the KRIs that are being used to look at site quality. The intent of a QTL is to determine whether either subject well-being/safety or data integrity is being put at risk to such a level that the overall study could be in jeopardy. The tolerance limits or thresholds set at a site level are likely to be different, possibly multiple, each with different actions, e.g. watch and wait versus act (see the sketch below). A QTL should be set to a level that allows proactive recovery steps to be implemented, should be set based on experience and expertise (medical and statistical), and is likely to be specific to the study or therapy area/indication. Common indicators where QTLs are implemented are subject consent errors (particularly re-consent), protocol deviations (particularly eligibility), AE rates, AEs/SAEs of special interest, subject visits out of window, and others specific to the study, e.g. incomplete/missing endpoint data such as images.
I think our definition is similar to the recently published TransCelerate paper.
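
A minimal, hypothetical sketch of the tiered-threshold idea described above; the indicator, the limits and the actions are invented for illustration and are not taken from ICH E6 R2 or the TransCelerate paper:

```python
# Hypothetical example: a site-level KRI (consent deviation rate) checked against
# tiered thresholds with different actions, and a study-level QTL checked against
# a single pre-defined tolerance limit. All numbers and labels are illustrative.

SITE_THRESHOLDS = [  # (limit, action) pairs, checked from highest to lowest
    (0.10, "act: trigger for-cause visit and CAPA"),
    (0.05, "watch: raise at next risk review"),
]
STUDY_QTL = 0.07     # pre-defined quality tolerance limit for the whole study

def site_action(deviation_rate: float) -> str:
    """Return the monitoring action for a site's consent deviation rate."""
    for limit, action in SITE_THRESHOLDS:
        if deviation_rate >= limit:
            return action
    return "no action: within expected range"

def qtl_breached(study_deviation_rate: float) -> bool:
    """Important deviations from a QTL must be summarized, with remedial
    actions, in the clinical study report (ICH E6 R2 5.0.7)."""
    return study_deviation_rate >= STUDY_QTL

print(site_action(0.06))   # watch: raise at next risk review
print(qtl_breached(0.08))  # True -> summarize the deviation and remedial action
```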

No. 25 (GCP Ref. 5.0.7)

Q: I may have misunderstood, but did you say that the Quality Risk Management Plan needs to be included in the CSR?

A: ICH E6 R2 5.0.7, Risk Reporting: 'The sponsor should describe the quality management approach implemented in the trial and summarize important deviations from the predefined quality tolerance limits and remedial actions taken in the clinical study report (ICH E3, Section 9.6 Data Quality Assurance).'
Our recommendation is that the QRMP is included as an addendum or summarized in section 9.6 to meet the first requirement, and that, to meet the second, deviations from QTLs are also summarized together with the actions taken and any changes made to the QRMP as a result of the deviations.

No. 26 (GCP Ref. 5.0.3, 5.0.4)

Q: When we are assessing risks, should our scoring be with or without mitigation?

A: Good question; I would advocate both, because the mitigations can make a significant difference to how you monitor and review the risk, and the score is not static: as the process is an ongoing cycle, your risk scores may change based on real-world evidence as the trial progresses.

Also, starting without mitigation and re-scoring after mitigation is a good sense check on whether the mitigation/control is sufficient: if it has not changed one or more of the likelihood, impact or detectability scores, and has not reduced the risk to an acceptable level, one would have to ask whether it is the right control.
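
A minimal worked illustration of that sense check, using invented scores on an assumed 1-5 scale and a hypothetical multiplicative scoring scheme and acceptance threshold:

```python
# Hypothetical pre-/post-mitigation scoring; all scores and the threshold are invented.
def risk_score(likelihood: int, impact: int, detectability: int) -> int:
    return likelihood * impact * detectability

before = risk_score(likelihood=4, impact=4, detectability=3)  # 48, unmitigated
# After adding a control (e.g. a targeted central-monitoring check) the team judges
# the risk easier to detect and slightly less likely; the impact is unchanged.
after = risk_score(likelihood=3, impact=4, detectability=1)   # 12, residual

ACCEPTABLE = 20  # acceptance level agreed by the team (invented for the example)
if after >= ACCEPTABLE or after == before:
    print("The control has not reduced the risk enough: is it the right control?")
else:
    print(f"Residual score {after} (was {before}): control judged sufficient.")
```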

No. 27 (GCP Ref. 5.0.5)

Q: If monitoring is done by an outside vendor (aside from the CRO doing data management and statistics, for example), who holds the vendor management plan?

A: This will depend on who holds the contract and the terms of the contract.
I would suggest that if the sponsor holds the contract with the monitoring vendor, the sponsor should be responsible for the vendor oversight/management plan. Some activities may be delegated to the DM/biostatistics vendor within the scope of central monitoring or statistical monitoring; in such cases, that vendor should ensure these are listed in the relevant functional plan.

No. 28 (GCP Ref. 5.18.1)

Q: What are the regulatory authorities' thoughts regarding the introduction of bias, or the influencing of go/no-go decisions, if data is being evaluated on an ongoing basis?

A: The intent of central data review/monitoring is not to evaluate the data from the point of view of the objectives; this would still be done by the normal methods, e.g. adjudication, IDMC. The type of review being positioned in the guidance relates to data quality and to the surrounding processes being conducted accurately and in a timely manner, so as to identify problems early and rectify them before they impact the integrity of the data and/or subject well-being. This type of review should not influence go/no-go decisions on the IMP or the overall success or otherwise of the trial objectives.

No. 29 (GCP Ref. 5.2)

Q: Does root cause analysis (RCA) have to be documented for all CAPAs, and to what extent?

A: RCA is only required for non-compliance that impacts, or has the potential to impact, subject rights and/or the reliability of trial results/data integrity. RCA should be part of the CAPA process, and the process should state when RCA is required and reference or describe the preferred method of RCA, including responsibilities and timelines.

No. 3 (GCP Ref. 1.65, 5.5.3)

Q: eData handling: what is the impact beyond IT? E.g. is DM responsible for checking all updates that the EDC vendor makes?

A: No, but the EDC vendor and their SDLC/CSV SOPs should be assessed as part of the vendor assessment/qualification process conducted by QA.

No. 30 (GCP Ref. 8.1)

Q: If the PI needs continuous access to CRF data, how do we address the fact that many PIs do not seem to access the EDC until the end of the trial (just to sign)?

A: The two requirements are not related.
On the first point, a PI not completing training to access the EDC until the end of the trial is poor practice by the PI, and the oversight and supervision responsibilities in section 4 provide grounds for taking action against site(s) where the PI is not demonstrating appropriate oversight/supervision.
On the second point, about access: it is the site, under the supervision of the PI, that requires continuous access and control, and while they have access to the EDC they have that access and control. The concern arises when we remove access. The important point is that the site must verify access to the copies of the data before access is removed. While I take the point that continuous access cannot be assured if the PI has not been trained and therefore does not have access during the trial, other site personnel do have access, so the sponsor does not have exclusive control of the data.

No. 31 (GCP Ref. 8.1)

Q: If a site does not have the means to access CRF data/essential document data on CDs, could memory sticks be used?

A: Yes; the important point is that we check with the site that they will be able to access the data on the media provided and that the site confirms access.
Memory sticks have been used, as have file drops to secure sites and document portals that remain active following study close, all of which are acceptable provided the site confirms access.

No. 32 (GCP Ref. FDA version of E6(R2), Section 9)

Q: Can you please comment on the FDA release of E6(R2) in March 2018, and why section 9, Paperwork Reduction Act of 1995, was added? Also, why does ich.org not even have this version posted?

A: We don't claim to be experts on the Paperwork Reduction Act of 1995, but I think in this instance it is a disclaimer by the FDA: the E6 guidance refers to the collection of information, which could be collected on paper, which in some circumstances MAY contravene some of the terms of the 1995 Act, and the FDA is not advocating this by recognising the E6 guidance. Section 9 in the FDA version does not impact the principles of, or the intent behind, ICH E6(R2).
On the second point, the Paperwork Reduction Act of 1995 is legislation that applies only in the USA, and ICH E6(R2) is a global guidance document; section 9 was added by the FDA, not by the ICH Committee, and therefore does not appear in the final version published by ICH.

No. 33 (GCP Ref. NA)

Q: Is R2 implemented in national legislation?

A: Not directly, but if it has been adopted by the regulatory agency in your country, you can be cited against it. What this means in terms of law will vary by country.

No. 4 (GCP Ref. 1.65, 5.5.3)

Q: Do the eData handling requirements apply to all vendors, e.g. regulatory vendors we might use for electronic publishing of regulatory submissions (eCTD format)?

A: Yes. Best practice would be to review what the vendor is required to do and whether their scope could potentially impact subject protection and/or data integrity. Based on the outcome of this assessment, the scope of the validation activities expected of the vendor should be determined, and this should form part of the vendor assessment.

No. 5 (GCP Ref. 1.65, 5.5.3)

Q: Do you see the changes in CSV as new or just catching up with the current validation process that the FDA guidelines cover?

A: Mainly catching up with the FDA guidelines and the GAMP guidelines; GAMP 5 allows for a risk-based approach to validation, which is supported in E6(R2).

No. 6 (GCP Ref. 5.0, 5.0.1 - 5.0.7, 5.2.2)

Q: Is the task of quality management now implicit as part of the study management activities delegated to a CRO?

A: Yes. As more regulatory authorities adopt R2, the expectation will be that this process is accounted for in the QMS of all vendors involved in trial conduct, and it may be conducted independently or in conjunction with the sponsor (I would advocate that it is conducted as a joint activity). The sponsor is ultimately responsible, but as the sponsor's representative your processes should cover this.
Interestingly, in a recent presentation at the Global QA meeting in Edinburgh (02 Nov), the EMA representative, Sophia Mylona, publicly stated that the EMA would expect sponsors (and their representatives) to implement R2 principles retroactively to ongoing studies, where it made sense and was practicable. I would suggest this means that if you have protocol amendments, the Q(R)M processes should be applied, but Sophia also indicated this could apply to monitoring findings, e.g. applying root cause analysis methods, assessing the overall risk to other sites and implementing changes to monitoring plans as necessary. So if you are working in areas under EMA jurisdiction, or the study data may be submitted to the EMA, this is something to be thinking about, as the guideline has been in effect since 14 June 2017.

No. 7 (GCP Ref. 5.0, 5.0.1 - 5.0.7, 5.2.2)

Q: How does the CRO receive the sponsor's delegation of this quality management activity? Is any specific message mandatory for us (the CRO) to be responsible?

A: This should be worked out as part of the contract, but it is worthwhile to have your own SOPs that cover the quality (risk) management process. I often recommend that it is good practice to perform an early critical variable and risk assessment as part of the RFP process. As the sponsor's representative for the areas that are delegated to you (the CRO), you will have some level of responsibility for identifying and managing the risks in conjunction with the sponsor. I would recommend as best practice that you (the CRO) have a risk assessment and management process in place and conduct your own risk assessment of the protocol; this can also be done in conjunction with the sponsor (ideally a joint process would be conducted). The outcome of the risk assessment, as you will see as the presentation progresses, should inform the functional activities and functional plans, e.g. monitoring, data management, supplies etc.

No. 8 (GCP Ref. 5.0, 5.0.1 - 5.0.7, 5.2.2)

Q: What if the risk is outside the CRO's management/control, though it still has a probability and magnitude, for example the budget assigned to sites?

A: This is important from the CRO's perspective. I think documenting the risk and agreeing the control mechanisms with the sponsor is critical in this scenario; ultimately the controls will be agreed to by the sponsor, and if they do not agree to the controls, you could argue this increases the risk. Documenting the risk, and its likelihood and magnitude, should help the sponsor understand the rationale behind the recommended controls.

No. 9 (GCP Ref. 5.0, 5.0.1 - 5.0.7, 5.2.2)

Q: What is the responsibility if the CRO is in charge of drafting the protocol?

A: The same process would apply: a risk assessment (ideally conducted jointly with the sponsor) should be performed, beginning with evaluating what is critical and identifying/evaluating the risks around it, where possible agreeing risk reduction/elimination with the sponsor and, if not, agreeing the risk controls to implement and monitor (in the wider sense of the word) during the trial. I would recommend that the functions involved in the trial are involved in the risk assessment process.