Webinar & Video Transcripts

Risk Assessment Series - Risk Controls

Hello, and welcome to the fourth

in the series of risk assessment webinars.

Throughout these webinars, we will walk you

through the protocol risk assessment

process in bite sized chunks.

My name is Macarena Sahores

and I am an RBQM operations

consultant at TRI.

I work alongside our customers,

pretty much focusing on RBQM

implementation in clinical trials.

I moved into clinical research

after holding a postdoctoral fellow role.

I started as a central monitoring associate

and then worked as a central monitor lead

manager of CMAs, risk manager

and also an associate director

focusing on RBQM implementation

across studies and partnerships.

This has been a rewarding journey

seeing how the industry

started really to embrace

all things RBQM.

Today I will be talking

about risk control as part of the

risk quality management process.

After this session, you will be confident

in continuing with the quality management process.

If you haven't seen the first webinars

on critical variable identification,

risk identification and risk evaluation,

I recommend you do so.

Today, we focus on controlling risks.

Here you can see the topics

we will discuss again

the intent of quality management,

risk control as part of the

RBQM process.

Common misconceptions and best practices.

So what is required to implement

an effective, risk based

quality management process?

This process, it starts early.

You need to build quality into the trial.

Use risk assessment

to inform a well-designed

and clearly articulated protocol.

Conduct early and ongoing

risk assessments of the protocol

and use protocol risk assessment

to create and inform the monitoring plans.

If you have followed

the previous webinars,

you would notice that we have covered

E6(R2) and E8(R1) in more detail there,

so I will just briefly comment on

E8 and E6.

ICH E8(R1), or General

Considerations for Clinical Studies,

is pretty much focused on

clinical trial design principles.

So what are the key messages of E8?

Identify risks and focus designs

and processes to mitigate those risks.

Build quality into the trial from the beginning.

Involve external stakeholders

in trial design,

such as patients and sites.

E8(R1) introduces

the quality by design and critical

to quality factors ideas.

Together with the concept

of a risk proportionate approach.

It consolidates the principles

introduced in ICH GCP E6(R2).

Finalization of E8(R1) has been delayed.

Now, moving on to E6(R2)

which introduced the principles of RBQM,

mainly the risk assessment process

and risk based approach to monitoring.

This slide shows E6(R2) section

five, quality management,

which is a seven step

process whereby proactive

identification and prioritization of risk

to critical data and processes

will improve the quality

of the clinical trial.

As we have discussed

in previous webinars, this is an

ongoing process.

For example, following

a protocol amendment, new critical variables

or risks might be identified,

which means you need

to follow the same process

over and over again.

E6(R3) is currently being developed

to acknowledge the diversity

of trial science, data sources

and a different context

in which clinical trials

can be conducted,

and it is expected to be more

aligned with E8.

So we can say that E6(R3) is coming soon.

Now we move on to the

quality management process

and the actual focus of this webinar,

which is the fourth step of the risk

assessment process: how to control risks.

So you should not start

the process by deciding

which controls you will be implementing.

First, you will need to identify

your critical data and processes,

then identify your risks

and evaluate them.

Only then will you be able to move on

to risk control.

As always, you need to ensure

cross-functional representation

during the process, as well as

sponsors and CROs working

as part of a partnership.

So if we examine E6(R2),

what does it tell us about risk control?

It tells us that

the sponsor should decide

which risks to reduce

and/or which risks to accept.

The approach used to reduce

risk to an acceptable level

should be proportionate

to the significance of the risk.

This slide

is about common misconceptions.

Examples of 

things we have experienced.

And the first one is

something I've heard, for example:

I know which controls I need.

I do not have to identify

critical variables or risks

for this protocol.

And as we know, the intent is to control,

either minimize and/or avoid the risk

whenever possible.

So if you implement a formal risk

assessment process,

then you'll be able to determine

which are the critical variables

and the risks to patient

safety and data integrity

specific to your protocol.

And only then you can determine

the most appropriate controls.

Another common misconception is around

monitoring, right?

For example,

we will be implementing 100% SDV,

so we do not need

any other functions involved

to accomplish monitoring.

And actually, you have to think about

which functional roles will own

the mitigations or controls

you will be putting in place.

Yes, you have to think

clinical operations,

but you also need to consider

data management, medical management,

safety, central monitoring and so forth.

Of course, it would depend

on your organizational processes

and functional roles,

but also the requirements

of this specific protocol you're

looking at.

Another example of a common

misconception is

nothing has changed in the study,

so why do I need to review the controls

we agreed during startup?

And as you may have experienced,

controls agreed early on

may not be enough once

a study is ongoing.

Things change.

You may realize you need

additional controls

or conversely

the risk levels may have changed.

You have to remember that E6(R2)

says the approach

used to reduce risks to an acceptable

level should be proportionate

to the significance of the risk.

Again, you need to avoid

having one single model across

all the studies.

You have to tailor

your monitoring strategy

to your protocol, right?

You need to make sure you have controls

and mitigations in place

which are fit for your protocol.

So next, we are going to have

a few slides on

which items

we need to consider when determining

which controls to implement.

And you have to ask yourself

questions, right?

What actions are we going

to take to control the risk?

To what extent can we or do

we want to control each risk?

How much risk are we willing to tolerate?

What level of control is appropriate

or proportionate?

Who will be responsible

for this control?

What risks can we manage

before the trial starts?

Other considerations:

identified risks can be reduced

prior to or during the trial, or both.

Right?

You may have to put different

controls in place.

Where possible, we should aim

to eliminate as much risk as possible

through redesign of the protocol.

And for those risks we have to accept

or those we can reduce,

we need ongoing management.

And as we discussed already,

you need to make sure

you review those risks

which have been eliminated to make sure

that nothing has changed since.

And when discussing which controls

to implement, often only

likelihood and impact are considered,

however, detection

may be the easiest way to reduce a risk,

because better detectability

gives us the opportunity

to reduce impact, right?

Why does detectability matter?

If we can detect the cause 

and stop it leading

to an event, the impact can be small

or even zero, right?

And also, we need to detect

with enough time to take action.

So either we will try to stop the cause early on,

you know, the cause

leading to the event

and negative consequences.

Or to put a contingency plan in place

to reduce the impact.

So yes, detectability

has to be considered

when you are trying

to implement controls.

And now this slide

is just to have a word

about centralized monitoring,

and centralized monitoring

being used as a risk control.

As you should be aware,

right here is where E6(R2)

addresses the extent and nature

of monitoring to be implemented.

It acknowledges that onsite

monitoring is performed at the sites

where the clinical trial

is being conducted.

And it also mentions

that centralized monitoring

is a remote evaluation

of accumulating data.

And it is clear from there that we need

to broaden our approach

to monitoring study

conduct and forget the one

size fits all approach, right. 

And continuing with E6(R2),

and the concepts within E6(R2) 

centralized monitoring processes provide

additional monitoring capabilities

that can complement and reduce the extent

and or frequency of onsite monitoring,

and also help distinguish

between reliable data

and potentially unreliable data.  

So this leads to the use

of statistical data monitoring,

trend analysis, key risk

indicators and performance metrics.

And all of these

these are all forms of detection.

So when you are discussing

which controls to implement,

you should be considering

central monitoring as well.

And now what we're going to do

is, as we've done in the previous webinars,

we'll use some examples to illustrate

how to identify appropriate controls,

and we will be using the same mock protocol.

So it's the same Phase II

Oncology study looking at dose

limiting toxicity

and efficacy of IMP compared

with standard treatments.

And when we started,

in our first webinar,

when we were identifying

the critical variables,

we established that for

the secondary objective,

which is, you know, safety,

our critical variable was toxicity.

More specifically,

identification and management.

And then during the

second webinar we established

that one of the risks was

if sites are not fully aware

of how to reduce or delay drug doses

in response to toxicity events

due to inadequate

understanding of the protocol,

this could lead to patient

safety being compromised.

And then on our third webinar,

we assessed the risk

and it came out with a risk score of 18,

which is the second highest

possible risk score.

How likely was it

for this risk to occur?

We determined it was quite possible:

a 20 to 60% chance of occurring.

We also determined

that the impact was significant

because you would have

an impact on patient safety.

And also that it would

be difficult to detect.

So this gave us a score of 18

and this is the second highest

possible risk score.
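The scoring just described can be sketched as a simple multiplicative calculation. This is a hypothetical illustration, assuming each dimension is rated on a 1-to-3 scale; the webinar does not state the exact scale, but this assumption is consistent with the quoted scores of 18 and 6 and a second-highest score below a maximum of 27.

```python
# Hypothetical sketch of a multiplicative risk score. The 1-3 scale per
# dimension is an assumption consistent with the scores quoted in the webinar.

def risk_score(likelihood: int, impact: int, detectability: int) -> int:
    """Return likelihood x impact x detectability, each rated 1 (low) to 3 (high)."""
    for value in (likelihood, impact, detectability):
        if value not in (1, 2, 3):
            raise ValueError("each dimension must be rated 1, 2 or 3")
    return likelihood * impact * detectability

# Toxicity-management risk: possible (2), significant impact (3),
# difficult to detect (3) -> 18, the second-highest possible score after 27.
print(risk_score(2, 3, 3))  # 18
```

Under this assumed scale, a risk that is unlikely (1), significant in impact (3), and moderately detectable (2) would score 1 x 3 x 2 = 6, matching the later PET-CT example.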

And this means we need to put

mitigation in place

for this particular risk.

This risk needs specific management.

The risk is of sufficient significance

to the safety of trial subjects

to warrant specific

mitigation and management activities.

So which controls do

we need to implement?

And here we have

an example of a pre-study

risk mitigation, such as: ensure site

staff are trained in the dose

delay or reduction process

to understand what is required

for this protocol.

An example of a risk mitigation during

study conduct is:

appropriate source data review

of patient medical notes

conducted by monitors, to identify

any dose delays or escalations that

should have occurred

in response to patient events

or stages.

And now we will have a second example.

Again, at the time,

the primary objective

was linked to efficacy:

progression-free survival, or PFS.

We determined that

one of the critical variables

linked to this primary objective

was a PET-CT scan.

And on the second webinar,

we determined a risk was the following:

If imaging assessments are not completed

or of good enough quality,

then there may not be enough data

to assess primary

and secondary end points.

And on the third webinar,

which was the previous webinar,

we also determined

what the risk score was

for this particular risk

related to the PET-CT scan.

So we determined that

it was very unlikely, right?

Less than 20% chance of occurring.

However,

the impact was quite significant

because you would have

an impact on data integrity

and reliability of primary

and secondary endpoints analysis

as well as subject safety related to

disease progression.

And regarding detectability,

we determined detection was moderate,

as assessments would be going

to a third-party vendor.

So this risk actually

has a score of six,

which is a low to moderate risk score.

However, we still put

a mitigation in place

due to the high impact on both data

integrity and patient safety.

So as we have done

for the other risk,

this risk needs specific management.

The risk is of sufficient

significance to the reliability

and integrity of trial results

and patient safety to warrant specific

mitigation and management activities.

So again, we need to think

which controls do we need to implement?

And here we have a few examples

of a pre-study risk mitigation.

Such as, ensure site staff

are trained in imaging requirements

and transmission process

to understand what is required

for this protocol.

Another example is ensure sites

have the correct equipment,

and that equipment will

be available to perform

imaging as per protocol,

and that an imaging

manual is available addressing

all accepted equipment.

A third example is

confirm with the vendor being used,

what reporting they have, for example,

will they report

if an image was expected

but not received?

And here we have

a few examples of risk mitigations.

I will read this, but

I think the idea here is:

you may have one control,

or you may have more than one control.

That would depend on your protocol.

So these are some examples of risk

mitigations during study conduct.

One is related to the checks

to be programmed into the EDC

to auto-query missing

imaging information.

Another example would be

remote data review

of vendor portal reports.

Another one is quality

control of first images before

allowing the second subject

to be enrolled.

And our last example here

is implementing key risk indicators

for missing and missed assessments.
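A key risk indicator for missing and missed assessments could be computed roughly as follows. This is a hypothetical sketch: the site counts, the function names, and the 10% threshold are all illustrative assumptions, not part of the webinar.

```python
# Hypothetical KRI sketch: flag sites whose rate of missed imaging
# assessments exceeds a threshold. All data and the threshold are assumed.

def missed_assessment_rate(expected: int, received: int) -> float:
    """Fraction of expected imaging assessments that were not received."""
    if expected <= 0:
        return 0.0
    return (expected - received) / expected

def flag_sites(site_counts, threshold=0.10):
    """Return site IDs whose missed-assessment rate exceeds the threshold."""
    return [site for site, (expected, received) in site_counts.items()
            if missed_assessment_rate(expected, received) > threshold]

# Illustrative counts of (expected, received) scans per site.
sites = {"site-001": (40, 39), "site-002": (25, 20), "site-003": (30, 30)}
print(flag_sites(sites))  # ['site-002']: 20% missed, above the 10% threshold
```

In practice a KRI like this would run on accumulating data from the EDC or the vendor portal, feeding the central monitoring review discussed earlier.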

And as you can see here,

these risk mitigations

could be assigned

to functions which are not,

or may not be clinical operations.

It could be data management,

it could be central

monitoring and so forth.

OK, now we're going to discuss

what are the best practices.

You have to make sure

the controls you

put in place are informed

by a formal risk assessment process.

You must engage

the cross-functional team.

Control strategies

may be standard or specific to the study.

You should discourage the use of

the one size fits

all approach.

They have to be customized

to the protocol.

Once your controls are in place

with the study ongoing,

you must review and re-evaluate the risks

and confirm that controls

are sufficient.

Also, you should be documenting

and communicating agreed

controls and responsibilities.

You can document these in your risk

assessments, in functional plans,

and also as part of the overarching

Integrated Strategic Monitoring Plan.

If you haven't heard

of this strategy before,

we'll be discussing

the overarching Integrated

Strategic Monitoring Plan

or ISMP on the next webinar,

so make sure you register for it.

So going back to the quality

management process,

as we discussed,

only after you have identified

the critical variables

and risks and evaluated them,

as well as identified your controls,

then you are ready to move on

to risk communication.

Remember to document

the critical variables and risks

you have identified and evaluated

along with the justification

and their corresponding controls.
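The documentation step described here could be captured in a simple structured record. This is a minimal, hypothetical sketch of a risk-register entry; the field names and values are illustrative assumptions, not a prescribed format.

```python
# Hypothetical sketch of a risk-register entry holding the items the webinar
# says to document: the risk, its evaluation, justification, and controls
# with their owning functions. All field names are assumptions.

from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    critical_variable: str
    risk_description: str
    score: int
    justification: str
    controls: list = field(default_factory=list)
    owners: list = field(default_factory=list)

entry = RiskEntry(
    critical_variable="PET-CT scan",
    risk_description="Imaging assessments missing or of insufficient quality",
    score=6,
    justification="Unlikely, but high impact on data integrity and safety",
    controls=["EDC auto-queries for missing imaging",
              "Remote review of vendor portal reports"],
    owners=["Data Management", "Central Monitoring"],
)
print(entry.score)  # 6
```

A register like this can then feed functional plans and an overarching monitoring plan, keeping controls and responsibilities visible to the cross-functional team.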

We will be covering

risk communication

and beyond on our next webinar.

As always, you have to ensure

cross-functional representation

during the process.

And sponsors and CROs must work

as part of a partnership.

So, what did we cover today?

We discussed the intent

of quality management.

We reviewed risk controls

as part of the RBQM process,

as well as discussed

common misconceptions and best practices.

As I already mentioned,

we will be covering risk communication

and beyond on our next webinar

on the 26th of August

so please make sure you enrol.

And now I would like

to thank you for watching.

Enjoy the rest of your day.

ICH E6 & ICH E8 - How to make data quality a virtuous circle

Well, good afternoon or good morning, everybody.

Wherever you are joining us from in the world, it's

just after three o'clock

in the UK in the afternoon.

So let's get started.

So welcome to today's webinar,

on ICH E6 and E8 and how to make data

quality a virtuous circle.

Thank you very much for taking the time

to join us today.

My name is Ben Brummitt,

and I'm delighted to be able to co-present

today's webinar with my colleague,

who is the CEO and founder

of TRI, Duncan Hall. Hi Duncan.

Good afternoon, Ben.

Good to be here. Thanks.

Thanks, everybody.

Absolutely, we're glad

everybody could join us today

and we're sure

you'll find the next forty-five

minutes very, very valuable.

It's certainly a hot topic

around the industry at the minute,

with all of the regulatory updates.

So just before we get into it,

we've got a little bit of housekeeping

just before we start.

No need to make notes

or take screenshots.

The slides will be available

at the end of the webinar today.

We are happy to take questions.

So if you're able to put your questions

in the chat box, then we can answer them

at the end of the webinar.

We do also have a

unique offer for anybody

that's attended today with regards

to some supports around

what we're going to be going through.

But we'll come to that

at the end of the webinar today.

So who are we? Who are TRI?

Well, we've been running in this industry

for approximately eight years now.

We're a complete solution

provider for Risk-Based

Quality Management

or RBQM for short,

ICH E6 compliance, and Central Monitoring.

So we help CROs

and sponsors implement

a risk-based approach

to running clinical trials

more efficiently, to achieve

better data quality

and to comply with regulatory guidance.

So we do this through

a range of training,

consulting and technology

and all of our solutions are developed

through experience of

operational delivery,

customer feedback, and importantly,

regular engagement

with the regulatory authorities.

So we know that every

customer is different,

whether you're just starting

to implement a risk-based approach

or you're already further

advanced down that journey.

We have solutions to enable you

to achieve your goals.

So that's who we are.

So why are we running today's webinar?

Well, let's start off with a statement,

if you're running a clinical trial

in 2021,

you're going to be doing

that under ICH E6(R2)

otherwise known as GCP.

But at the moment,

the ICH are in the process

of updating E8,

which will become the new E8(R1)

as well as updating E6(R2)

which will soon be E6(R3).

So we're still speaking

with companies that are

dealing with (R2) adoption.

Now, from a regulatory view,

it's only going to get more complex.

But on the upside,

there is a great opportunity to cater

for both today's guidance

and the upcoming guidance in one go.

And today's webinar is going to help

you really navigate through

all of this while

trying to understand the relationship

between all of these different sets

of regulatory guidance.

So from a regulatory standpoint, then,

this is our current

understanding of the guidance.

So, currently, we should all be working under

E6(R2) which started

its adoption in 2017.

It was adopted by the EMA in 2017

and by the FDA in 2018.

E8, which is the overarching

general considerations

for clinical trials,

is under revision currently,

which has been delayed

by the pandemic, amongst other things.

The latest revision of the work

plan on the website showed it

as due to be done by May 2020.

But we're obviously expecting that to be finalized

towards the end of this year

or into next year.

We then have the third revision,

E6(R3).

So I’ve pulled these

dates from the ICH

work plan

which again can be found online.

I'd encourage

any of you to go and have a look at that

if you are interested in

learning a little bit more.

But with all of these updates,

there's certainly some confusion

as to how you can apply

the principles from all of the guidance into

day-to-day working practices.

So in order to help

with understanding the relationship

between all of these sets of guidance,

we want to start off today

by really breaking down some key points

for each of the bits

of guidance, in our opinion.

So, Duncan, I'm going to

hand over to you at this point.

Thank you, Ben.

So

with E8 and the upcoming revision

one of E8, we are pretty much there.

We've got full draft guidance on that,

so we've actually got

some quite good content

as to what is expected

to be coming up with that revision.

With E6, the third revision of E6,

we are very much

just down to the sort of early

working group outputs,

the sort of early discussion papers

and the general principles.

So we're not actually into the draft

final guidance yet,

but there's definitely enough

information to go on so far,

I think, to give us a good feel for what

the intent of that third

revision of GCP is going to be.

And so obviously,

we can only go with what we know

and certainly

some of the discussions that we've had

with customers

around the industry as well.

To summarize some of the key messages

from E8(R1) then.

So let's focus on E8 first.

Really, what it boils down to is:

it's about quality by design.

It's about building quality

into clinical trials to ensure

that we get the absolute highest quality

outcomes from any given trial.

So starting by just

looking up at the top left here,

the objectives of the study: making sure

that the objectives

of the study are clear.

That the endpoints of the study

are clearly defined

and therefore, there's

a very clear relationship

between those endpoints that we're gathering

during the course of the trial

and the question that we're

trying to answer with that trial.

And I think of clinical trials

as being a scientific experiment.

We start with an endpoint, the question

we try to answer,

and we're then gathering

the information to help us

answer that during

the course of the study.

And I think that fits

very well with the quality

by design principles

that if we understand

what we're trying to achieve,

that we understand what's required

in terms of data, and we can see a clear

relationship between the two,

we're going to get a much better outcome.

The second point

then is around what we already know,

and I think this is

something that is poorly served

in the current GCP guidance

and definitely something

that the new E8

update really, really tries to stress

is that we shouldn't

be running studies in isolation.

We should really be thinking about

what we've run previously.

And, you know,

what were the critical success

factors for those studies?

Do we know what sort of risks

occurred during those previous studies

which were identified,

and which actually were triggered?

Do we know what controls

we put in place?

Were those controls effective?

Which controls did we put in place

that just never had any value?

Where is the data going to come from?

How are we going to track

that quality management process?

And how do we access

that data, that historic data?

We're generating a huge amount of data,

not just the clinical data now,

but we're also generating

quality management data.

Now, previously,

that was really just

sort of high level monitoring data.

What we were trying to do was to prove

that we were monitoring

and overseeing the study.

Now we're looking much more

at the quality management process

and what are we doing to manage quality

during the course of the clinical study.

But a lot of that information

gets lost at the end of the study.

So E8 is pushing us

to really start to think

about that information, and

how do we capture that information

in a format that allows us to look at it,

build up some history and

factor that in as

we move to new study designs.

We also want to be thinking about

what we know about the drug

and making sure that we understand,

you know, what

information is out there

in the public domain about the drug,

what information is out there

that we know within, you know,

within the sponsor organizations

that have run earlier phase

studies in this particular area.

And how can that be factored

into the protection of the patients

in the clinical trial?

So there are lots of public

data sources that

we should be factoring in as well.

And then,

not only are we talking about the data

that we know at the start of the study,

but there's also emerging data

that's becoming

apparent and available as

we go through the study.

And that's where as we start

to get on in this presentation,

you'll see that we're trying

to get much more cyclical

about our approach and our use of data,

because as this data emerges

during the course of a clinical trial,

we should be looking at that

and making ongoing considerations

as to whether we need to be adapting

the trial design

in the way that we're conducting

that clinical trial based

on that emergent data,

as well as the data that's available

internally to any organization,

as well as publicly before

we start the study.

Patient centricity is also very much

a commonly heard term at the moment.

It was especially prevalent

during lockdown.

But patient centricity

is not just about using the patients

as a source of data

and sort of using the patients

as a data gathering mechanism

rather than relying more

on investigators.

And, you know, I think

there was a real fast move

towards the use

of wearables and patient diaries

and other mechanisms

to sort of collect data to enable

that data collection to continue,

but without obviously the need

for quite so much human to human contact.

But patient centricity

is more than that.

It's not just about

using them as a mechanism.

It's about putting them front and center

in trial design.

It's really about understanding

their views on areas

such as the treatment schedules,

the assessment procedures,

informed consent forms,

all those sorts of things.

Getting that early consideration

from patients during

the study design is going to give us

a much better feel for

whether or not that we're going

to get successful engagement

with those patients

during the clinical trial.

And overall, improved

engagement is going to improve

our recruitment ability

in the first place.

It's definitely going to improve

our patient retention

during the course of the study.

And, of course, the overall

patient experience. And again,

I think part of patient centricity

is much more about

thinking of patients as human beings

rather than subjects.

And certainly,

when I'm talking about this stuff,

I've tried to move away

from the term subject.

I think it's a term that

makes it very cold

and very scientific.

I think we are trying much more

to think about patients as human beings

and thinking about their experience

and what's it going to be like.

What's it like already with, you know,

whatever the issue is, whatever

the disease or whatever

the problem is that patient

is suffering from?

What's that

going to be like?

What's the experience going to be like?

And then, what's

putting them

through a clinical trial

going to really be like

for that person, on top of whatever it is

that they're

already suffering from?

So that, really,

again, is made very clear in E8:

it's something we should really be

taking a lot more into account

during the study design process.

Also, thinking about

the current standard of care.

So, again, obviously, depending on

what the therapy area is and

the disease that we're thinking about

or the condition

that we're thinking about.

There's almost certainly

going to be some sort of current

standard of care

and starting to think about

how far is this study

that we're about to conduct

and therefore the process

that that patient is going to go through,

how different is that

from the current standard of care?

Obviously, the further from

that standard, the bigger the gap,

I guess, between

the current standard of care

and the trial that we're designing,

the bigger

the risk, and potentially the bigger

the impact on the patient.

And so, again, taking

that into consideration

is really important. Now obviously,

if we're trying

to break new ground

with clinical research,

and we're trying new novel

approaches, there is going to be a void.

But we need to be very cognizant

of what that void is and

how do we manage the risks

around that void?

And, of course,

how do we make the process

of being part of that clinical

trial as comfortable

as we possibly can for

the patients?

So when we look at all

those things together: in

E6(R2), the big new phrase

that was really introduced

was quality management,

that whole new quality management

section five that appeared.

With E8(R1),

I really think the buzzword

is quality by design.

It's a phrase that's used over

and over again in E8(R1).

Or certainly in the draft guidance

that we

currently have access to.

And all of those pieces

that I've just talked

about all fall under

that banner of quality by design.

Now.

We've covered some of the high-level

overarching principles

there with E8(R1).

Let’s now start to think about

E6(R3), which is

really some of the more

recent information

that's been made available.

As I said earlier, the details on

GCP Revision 3 are much lighter.

They really are just

a set of guiding

principles at this stage.

But there's been

a lot of documentation

on the reasons for the need

for a revision to GCP.

And when you think about

how long it took for the last revision,

the elapsed time between

E6(R1) and

E6(R2) was nearly 20 years.

But the time that

it's looking like

it's going to be between

(R2) and (R3)

is going to be closer to five years.

So I think that's a

really good indication

of the rate of change

in clinical research within the industry.

The fact that we're having

another revision of GCP

relatively soon after (R2), I say.

It has been five years,

but, you know, as Ben said

right at the beginning,

for a lot of companies,

they're still in the process

of implementing GCP (R2).

And so, of course,

(R3) and E8(R1) are going to create

further change burden

on those organizations,

which is really what

we're trying to highlight today.

I think a lot of the change

that we've seen in the industry

was already in motion

a couple of years ago, but definitely

our experience has been

that the COVID 19 pandemic

and the adaptations

we've had to make

to clinical research during that time

has really accelerated

the rate of change.

And again, that could be partly

the reason that we're trying

to move to this revision

sooner rather than later.

So I'm talking about

sort of four principles

now that are very prevalent

in the documentation

that's out there on

E6(R3).

And the first principle, again, talks

directly about quality by design.

And it's a very clear and tangible link

between E6 and E8.

And it's really trying

to make it clear that quality by design

was always part of the intent of GCP.

And I think many of the changes

that came in with E6(R2)

were really just clarifications of things

that were assumed to be clear in (R1)

but were in many cases either

misinterpreted or just ignored.

And what it's really saying

is that as both technology

and trial designs

evolve, quality by design

is as relevant as ever.

So it really is just a restating of that

need for quality by design.

The principles do

make specific references

to the impact of the pandemic

and obviously the challenges around

human to human contact

during that period.

But it also makes

very clear reference

to the fact that (R2) really didn't

cover the scope of some of the advances

in the emerging practices

and technologies

that are prevalent today,

and certainly, in terms

of technologies,

ones that have become available

in the last few years.

Now, it's always interesting

when you read

something that stands out

a little bit in regulatory guidance,

and a new phrase that

has been introduced in the updated

guidance documentation

that we're talking about

is “thoughtful”.

Now, thoughtful is a

very broad-based term.

But again, I think really

what it's harking back to

is both those quality

by design principles

that we've talked about

and that patient centricity

that I've just talked about.

You know, it's

not just about collecting data

now; we're really being asked to consider

how we collect that data

and what that means to the stakeholders

for that particular trial.

And when I talk about stakeholders,

of course, first and foremost,

it is the patients.

They are the ones

that are absolutely vital

to the trial.

But also, you know,

the investigators

that are involved in the study

conduct and in the assessments

of those patients,

and also the people

that are analyzing

and monitoring that data.

And I think

as we build

more and more studies

in our RBQM platform,

what we're seeing is

that we're getting more and more involved

earlier in the study design,

because we're starting to think about

not just “is the data

that we're collecting

during the course of this study

going to help us answer the question?”

as I talked about previously. You know,

is it the right data to be collecting,

and is it actually going to answer

the exam question

or the scientific intent of that study?

But we're also starting to think about

how we're going to analyze this data.

What does good data look like?

What does an inlier look like?

What does an outlier look like?

How are we going to segment this data?

How are we going to slice and dice it

to answer some of the ongoing questions

that we're going to be

posing of that data?

And as I said earlier,

talking about that emergent data

that we should be using

as we go through a study

to actually determine

the path: is the path

that we've set through

this trial the right path,

or is that emerging data

telling us something else?

Now, if you can think

in that way,

in that sort of frame of mind,

at the start of the study,

as we're starting to set things up

and think about, you know,

our CRF designs and the sorts of data

that we're going to be collecting,

and start to think about

the analysis of that data

as the study goes on,

we can really give ourselves

a big head start

in actually getting the data right,

so that we can perform analysis

and get early indications

as to where the study's going well,

but also where the risks are

and whether our quality management

approach is working.

You know, sometimes just catching

one more piece of demographic data

may allow us to completely segment

our data in a much more meaningful way

when we start to think about

that ongoing process of risk

based quality management.

So that's what we mean by that

thoughtful process

and thoughtful study design.

It’s not just about the patients,

which is really important,

but it's about what we’re going to do

with the data as well.

The third principle we can see here

is where we see this new phrase,

which was the heading for the webinar today:

“Quality Continuum”.

It is starting to be spelt out

in the new guidance documentation.

And it's actually

one of the first areas that’s covered;

it really is right up front and central.

And again, I think it sets

a very clear expectation

that quality by design

will flow into quality management.

In fact, this appears as a diagram

in some of the discussion papers

that we've seen, and in some of the slide decks

that have been produced by the group

that is analyzing the feedback

on (R2) at the moment.

But that's also a backward cycle as well.

So there is a true cycle

between these two,

and it's not just an end-to-end process.

We see in that same section

the use of the term proportionality.

Now, again, this is not a new concept.

The concept of proportionality was covered

in the second revision, (R2),

but again, I think

it is going to be stated further.

I think it's one of those things

where the intent

was there in (R2),

but it probably wasn't made clear enough

that we are expected

to be proportionate

in the way that we manage

our clinical research.

And that really is the basis of risk

based quality management.

And I've said this on

just about every webinar

that I've been part of

in the last four

or five years: risk-based

quality management

isn't about taking risks.

It's about the identification of areas

of likely risk and managing

those areas in a manner

that is proportionate to the risk level,

or the perceived risk level.

The higher the risk

level or perceived risk level,

the greater our focus

should be on that area

and the more controls

and management

we should be putting in place.

And of course, conversely,

the quid pro quo here is that for areas

where we see very low risk,

we don't need to be plowing valuable

R&D resources into

monitoring those areas.

Regulators don't want to see lots of effort

being spent on areas

of very low risk,

or areas that are not closely related

to critical-to-quality factors.
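To make that proportionality idea concrete, here is a minimal sketch in Python of how a risk score might be mapped to a monitoring tier. The score ranges and tier names are hypothetical assumptions for illustration, not values from ICH E6 or E8.

```python
# Illustrative sketch only: the score thresholds and tier names below
# are hypothetical assumptions, not taken from any regulatory guidance.
def monitoring_intensity(risk_score: int) -> str:
    """Map a study-area risk score to a proportionate oversight tier."""
    if risk_score >= 15:
        return "high"    # heavy controls: frequent review, targeted on-site visits
    if risk_score >= 8:
        return "medium"  # routine central monitoring with periodic checks
    return "low"         # light-touch oversight; avoid spending scarce resource here

for score in (20, 10, 3):
    print(score, "->", monitoring_intensity(score))
```

The point is simply that the mapping is explicit and shared, so effort scales with the perceived risk level instead of being spread evenly across the study.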

And then finally,

we start to see some references

to technology as well.

And of course, as we know, more

and more data is being collected

in clinical research from more and more

disparate sources.

And the use of technology

is absolutely essential

for the collection, analysis,

decision making and evidencing

of that risk-based

quality management approach.

And of course, now as we

start to think about this concept

of a quality continuum,

there's an even greater burden to show

that you are following a process

and that that process is cyclical,

that you are using information

and learnings

from the study itself

and, of course, from previous

studies and public information

as part of that process.

And of course, that presents

both a challenge and an opportunity.

We need technology

to make information available

in a format that's usable.

But we also need to demonstrate

that we're following

that quality continuum.

And our technology

should be a key part

in demonstrating that

we have a clear and coherent story

about that quality by design

and quality management process

that we followed.

Now, we also see for the first time

a real reference to information security.

Now, as a software company,

information security

is always on our minds.

It's something that we take

incredibly seriously,

for obvious reasons.

But it's really the first time

I've seen much of a reference to it

in regulatory guidance.

Again, I think that just shows

how the regulatory authorities

are sort of catching up

with the importance

of not just technology,

but the security around technology.

And, you know,

this is just

a principle at this stage.

I'm sure that (R3) is going

to go into a lot more specificity

around the processes

of data transfer, collection, processing,

and the security levels

that need to sit around that.

And I do wonder, and I certainly hope,

that we may finally see the final nail

in the coffin for non-validated,

unsecured desktop solutions

like Word and Excel

through this process.

And certainly, let me put

a question to the audience.

How would you feel if your bank

was managing all your personal

finance data in Excel

and emailing copies

of those spreadsheets back and forth

between different branches?

I'm sure you wouldn't

be too excited about that,

but you'd be amazed at the amount

of important clinical data

that still gets managed

in spreadsheets and Word documents

and emailed around

in a relatively unsecure manner.

So I really do hope

that (R3) will

put more pressure

on companies to actually use properly

validated and secured systems

for clinical research.

With all that said,

hopefully that's given you

a bit of a feel for some of

the key principles

of E8(R1) and E6(R3).

Ben, I'm going to hand back to you

and perhaps you could just tell us

a little bit more

about what you're seeing,

some of the advances that we have seen

in quality management

and a bit more about what

sort of today's world looks like.

Absolutely.

Thank you, Duncan.

Certainly, from what

we're seeing out there

at the minute,

and certainly from some of the conversations

I'm having with customers,

it's very clear from (R2),

if we look visually

at how the quality management section,

Section 5 of (R2), is laid out,

that it's laid out in this

linear format.

And, you know, what

we're seeing is that quite

a lot of companies are treating quality

as a linear process, as it's being presented.

And you can understand why that is,

because that's how it appears

in the guidance.

And people are asking

themselves, you know,

what is the path that is running through

all of the guidance and more importantly,

what is the most efficient

pathway through it?

You know, maybe the reason why

people haven't got this down pat just yet

is that there are a

lot of challenges along the way.

You know, on a path you often divert

because of different opportunities

or different pitfalls that you encounter.

You could equate it to being like

a game of snakes and ladders.

Now, snakes and ladders,

I'm pretty sure everybody

here remembers. It's certainly

a game we play

in our household,

and I'm sure everybody knows

the concept of it.

You start off at the bottom.

If you jump on

a ladder, you advance more quickly.

If you jump on a snake,

you find yourself further back.

But we can really relate

this linear idea

to our clinical trials.

So I'm going to go through

some of the things that we're seeing

that are potentially the ladders,

and some that are potentially going to be snakes.

So if we start with a prerequisite

for jumping on the board,

the fundamental key prerequisite

has to be leadership buy-in.

Now, you can't even start the game

without leadership buy-in;

otherwise you know

that you're going to be fighting

an uphill battle.

You've really got to get

that buy-in to start with.

But that’s something

we're going to speak

about a little bit later on

in the webinar.

So moving on to some of the ladders, then.

Some of them include

defining critical variables,

defining your risk controls,

relating risks to variables,

defining your risk review schedule,

and, as Duncan said before,

minimizing exploratory variables.

And certainly reviewing and updating

risks and controls.

That's a big one.

So all of these are laid out in our tool

which gives you a really good advantage

and advances

you along the game, so to speak.

However, there are obviously some snakes

in there as well.

So among these snakes,

the pitfalls we often see are

risk controls not being proportionate

to risk levels, ignoring historic data,

improper use of quality

tolerance limits,

a lack of cross-functional engagement,

certain parties working in isolation,

and so on and so forth.

And a lack of standardized

scoring can be a big one as well.

And, as we said before,

the lack of leadership engagement

comes in again at the end.

You can do all of this great work,

but if you haven't got that leadership

engagement, at the end

you're going to end up

right back at the start.

So those are some of

the snakes that we see.

From that linear model, though,

are we missing something?

So we can understand

why people have adopted

this more linear attitude.

However, you know, what

that concept really misses

is a key point.

And as Duncan stressed, looking back

really has to be a key factor

in improvement for the future.

So instead of that traditional game

of snakes and ladders,

quality really is, as Duncan said,

far more of a cyclical process.

So what we need is a better visual

to showcase the nature,

the cyclical nature of quality.

Which obviously leads us on

to our interpretation of what

the quality continuum looks like.

So Duncan, back over to you.

Thanks a lot, Ben.

So I’ve tried to be pretty ambitious here,

and hopefully helpful.

I'm very much a visual person

and a visual learner myself.

I like infographics.

I like visual guidance.

And so what we've tried to do here

is really for the benefit

of everyone on the webinar.

And I will say that we will provide

this infographic to anybody

who wants it after the webinar,

absolutely free of charge.

So please,

Ben and my colleague

Jo will be reaching out to you

after the webinar to offer

you copies of the slides

and this infographic specifically.

So, again, please

don't worry about taking screenshots

or making notes.

What I tried to do here

is just to kind of tie

these things together,

tie the principles from E8(R1)

and E6(R3) and (R2) together,

to show my interpretation,

or our interpretation as a company,

of what that quality continuum

could look like.

And I can certainly guarantee

you'll get nothing like this

in the regulatory guidance.

You rarely get anything

as specific as this.

So it's really just to help

give a feel for what

that could look like.

And I'm trying to sort of

be a little bit predictive,

look into the future a little bit here

and give everyone a model

that they can start to think

about as they start to cater

for their change management

plans over the next few years,

as we really start to gear up for

improving quality

in clinical trials.

Starting, then:

I think at the core of the process,

we've got this sort

of four-step process going from design

through to final reporting.

And you can see again that,

in each of these steps, quality

is the key word.

The quality of design piece

we've talked about quite a bit.

And this is really about the protocol.

It's about getting

the protocol fit for purpose

that we're building quality in

right from the get go,

that we've got, you know,

clear objectives.

We understand what the endpoints are.

And, but again, that we're

we're building this 

this previous experience,

industry knowledge into the protocol.

And that's where really

the first part of our continuum comes in,

because as we're conducting

more and more studies,

we should be collecting

data along the way.

I'll come to this in a minute,

but you can see that

I've got data falling

out of the quality management process

into our knowledge base.

That is something

that we should be looking at

as we go through

this protocol design process.

We shouldn't just be thinking

about the overall objectives

of the study, but what else do we know?

You know, what risks

have we seen in these areas before?

How have we controlled those risks?

Which of those risks were triggered?

What QTLs were used,

and which of those were breached?

All of these areas.

And if we can start to think

about these quality management assets

and build them into our design,

we will start to really drive

a process of continual improvement.

And that gets us away

from Ben's kind of snakes

and ladders model, which we see

being so prevalent

in the industry right now,

and into more of this cyclical

process of continual

improvement, or a quality continuum.

But once we've

optimized the design itself,

we then move into the planning stage.

Here, we're really starting to think

about the protocol risk factors.

And again, in this area,

you know, we're trying to encourage companies

to come up with

protocol ranking models

where they can ask

challenging questions

of the protocol

to get an overall risk score

for the protocol.

That really starts to consider,

you know, areas like:

how close is this protocol design

to the current standard of care?

You know,

is this cutting edge, or is it

really just incremental gain?

And what do we know?

What do we know about the patients?

What do we know about the

previous treatments?

Really thinking about those critical

to quality factors.

Now, that, again, is a big component

of E8(R1), this concept

of critical to quality.

So what are the things that,

you know, are going to diminish our ability

to answer the exam question,

or compromise data quality,

or compromise patient safety?

Those are the things

we're talking about when we say

critical-to-quality factors.

And what are the risks to those?

So if we identify what those factors are,

what are the risks

and what are those risks

going to look like?

And again, one of the snakes

that Ben talked about in his slides

just now was not using standardized

scoring mechanisms. OK,

if we don't use standardized

scoring mechanisms across

all of our studies,

how can we possibly get any idea

of whether any one risk or any one

protocol is high,

medium, low or indifferent?

If we don't have that view,

how can we possibly be expected

to create proportionate controls,

if every time we run a study

we're coming up with

a new scoring mechanism,

or it's a different team doing it

with a different interpretation

of the scoring mechanism?

So we would need to think

about that proportionality.
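As one illustration of what a standardized scoring mechanism could look like, here is a small Python sketch using a common FMEA-style pattern (impact times likelihood times detectability). The 1-to-5 scales and the category cut-offs are assumptions made for the example, not figures from any regulatory guidance.

```python
# Hypothetical FMEA-style scoring sketch: the 1-5 rating scales and
# the category cut-offs are illustrative assumptions only.
def risk_score(impact: int, likelihood: int, detectability: int) -> int:
    """Combine three 1-5 ratings (5 = worst) into a single comparable score."""
    for rating in (impact, likelihood, detectability):
        if not 1 <= rating <= 5:
            raise ValueError("each dimension is rated from 1 (best) to 5 (worst)")
    return impact * likelihood * detectability

def risk_category(score: int) -> str:
    """Bucket a score so risks are comparable across studies and teams."""
    if score >= 60:
        return "high"
    if score >= 20:
        return "medium"
    return "low"

print(risk_category(risk_score(5, 4, 3)))  # 5 * 4 * 3 = 60 -> "high"
```

Because every study and every team applies the same function, a "high" on one protocol means the same thing on another, which is exactly what makes proportionate controls possible.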

And then, of course, thinking about:

OK, now that we understand

what's critical and what the risks are,

how are we going to control those?

How are we going to approach them?

Who's going to monitor those controls?

How often are they going to monitor them?

To what level?

Where are we going to use

remote monitoring?

Where are we going to use

central monitoring?

Where are we going to use

onsite monitoring?

Where does source

data review come into this,

where does source data

verification come into this?

Hopefully not very much,

but that's what we're talking about here.

And again,

talking about our quality

continuum, talking about this process

of continual improvement:

as we start

to put our quality

management plans together,

we should have access to previous

relevant risk assessments.

So what studies have we run

in this area,

or that were similar to this in design?

You know, what were the quality plans

that we put together?

Were those plans

effective or not?

Did we catch issues?

Did our controls work?

Which risks became issues?

What QTLs were breached? Again,

what were the common risks?

All of that information

should be fed into that process.

So we're not just looking

at the protocol at this point.

We're looking at the protocol,

but we're

pulling further information into it.

Now, as we start to manage quality,

the quality management cycle itself

is a cyclical process.

So we are looking at our quality plan,

which we've created here,

where that plan will inform us as to what

we need to be looking at,

what data we're looking at,

who needs to be looking at

that data and when, what controls

we should be looking at.

And those controls

could be all sorts of things,

from a simple phone

call to an investigator or to a patient,

or something mechanical,

or it could be something like

a key risk indicator or a key

performance indicator, or a QTL,

some sort of more analytical model

or a statistical monitoring approach,

which is actually

looking at data

and telling us where we think

there might be concerns.
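As a rough sketch of what one of those analytical controls might look like, here is a hypothetical key risk indicator check in Python that flags sites whose query rate sits well above the study-wide average. The site IDs, rates, and the 1.5x tolerance are invented for illustration; a real KRI would be defined in the quality management plan.

```python
from statistics import mean

# Hypothetical KRI sketch: flag sites whose query rate exceeds a
# tolerance multiple of the study-wide mean. All numbers are invented.
def flag_sites(query_rates, tolerance=1.5):
    """Return the site IDs whose rate exceeds tolerance * the study mean."""
    average = mean(query_rates.values())
    return [site for site, rate in query_rates.items()
            if rate > tolerance * average]

rates = {"site-001": 0.8, "site-002": 1.1, "site-003": 3.4}
print(flag_sites(rates))  # ['site-003']
```

A flagged site would then feed back into the monitoring approach, prompting a targeted review rather than uniform effort across every site.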

Once we've looked at that data, that's

going to inform our monitoring approach.

So, again, this piece back here,

the monitoring approach

and monitoring levels:

once we've looked at our controls,

that will tell us

what our approach

is going to be at this point in time.

This is a great example

of that data emergence

coming out and informing the study.

We're not just

sleepwalking our way

through a routine monitoring plan.

We're using the data,

we're using the review

of those controls to inform us

what type of monitoring

we should be doing

and where we should be monitoring

and how we should be

focusing our attention

in that proportionate manner.

Once we've gone through that process

and we've got the feedback

from that process,

we can then update our quality management

plan if we need to.

And that

might be just updating

the fact that we've done that review

and that we've observed these things.

Or it might be that we're saying, hey,

we've identified a bunch of new risks

that we hadn't really thought of.

Or this control

doesn't seem to be working,

we need to adjust it

or whatever that might be.

So that the next time

we come to our review,

we're already learning.

We're already building that knowledge

into the next review cycle.

So we get these virtuous circles

starting to build up.

Now, all the way through this process,

not only are we feeding

back into this particular

study that we're working on,

but data is starting to drop out

of the process into our knowledge base.

When I talk about the knowledge base,

that can be one of many things. You know,

from our perspective with our customers,

it is the OPRA 5 technology platform.

That's where we're storing

all this information; it's where

we're performing the study conduct.

But it's also where we're collecting

a lot of information

that can then be analyzed

during these stages of the process.

So we're starting to look at the risk

statements, the scores, the activities.

You know, what are we doing?

You know, if we trigger an activity

during this cycle here,

was that activity effective?

Did it result in the outputs

or the pieces of information

that we expected?

What were our

SDV and SDR levels going into the study

and how did those change

during the course of study?

That's all incredibly

valuable information

to help us determine how we proceed.

And then as we move down

to the quality reporting side of things,

of course, this is now

where we need to report on all of this.

We need to show evidence of having

taken quality by design seriously up

front, of having fed all this information

into the process,

of having put together

a solid quality management plan,

and of having followed that plan

and adjusted that plan.

There is absolutely no expectation

whatsoever in any regulatory guidance

that we create the perfect plan

and execute that plan

to the letter, perfectly.

What's important

is that we are constantly

challenging that plan and we document

those changes.

All that information needs to be

pulled out into our quality reports.

And all of that information

is absolutely vital

for the final CSR,

that final clinical study report.

And if we do all those things,

if we follow this process,

we will absolutely end up

with a high quality output

and a high quality clinical trial.

All that information

will then be stored

in our knowledge base,

all that historic information,

and every time we start

a clinical trial,

we should be mining that information

and building those back in.

So that is what we think.

That's my interpretation at this stage

of what that quality continuum

should look like.

Now, none of this is stuff

that we can't do today.

This is exactly what we are doing

with our customers.

This is, you know,

what the OPRA 5 platform

is all about.

But that is our quality continuum

and hopefully this infographic

will be really helpful

as you start to plan your clinical trials

and think about what

this looks like for you.

Just to show

a bit of coverage, then: ICH E8

really is all about this up front.

It's all about those

general considerations,

and there's a lot of focus

on quality by design

and the planning of a quality

conduct approach.

E6 is much more

about the actual conduct

of clinical research.

There's definitely some overlaps

between them.

I also think there are some gaps.

I don't think that E6(R3),

or certainly E6(R2),

really talks about

how we get data out of our systems

and how we feed that

back into that knowledge base.

And I do think that's a big gap,

and I hope that with the

E6(R3) revisions, as they start

to become better defined,

we will see

more of that information being pulled out,

and more of a requirement for companies

to really follow this true

continuum cycle.

OK, well, I'm going to hand

back over to Ben now and

Ben, please

pick back up from there.

Thank you, Duncan.

I think the infographic

will be really useful,

and yeah, very happy

to send that out afterwards.

And I think, you know,

the question is:

what are we doing to help?

You know,

what are we doing as a business?

So we've been saying this

for years, but,

you know, change management

is the greatest challenge

to RBQM being accepted.

You know, it was recently

highlighted in the ICH presentation

as the biggest challenge to adoption.

So what does that really mean?

You know, what does change

management really mean?

We break it down into three parts:

people,

process and technology.

So, on the people side, then:

we've found there really is

a sensible route

for change management.

I know it's really tempting

to jump straight into technology,

but it's not the ideal approach.

If you throw technology at the issue

without the proper backup,

then things only get

a little more tricky.

What we've found is that

education is the first step.

So we offer a range

of training options to build knowledge

and understanding of ICH E6

and risk-based quality management

through e-learning,

through free webinars like this one.

We also run workshops

and instructor led training.

So training

your people really is essential.

So following on from that,

if we move on to the process part.

Embedding the concepts of a quality

continuum really relies on

you having robust processes

and also holding people accountable

to those processes.

So having supported dozens of companies

through this change management,

we've actually developed

a set of standardized

RBQM SOPs that we normally use

in conjunction with our gap analysis.

So what that allows companies to do

is to really be compliant with both E6

and then to prepare for the upcoming E8(R1).

So we're also working with companies

to put a number of change

management processes in place,

and we've developed a set of specific

change management

programs which allow companies

to adopt RBQM.

So once a company has laid

the foundation of education

and compliant processes,

that's a good place

to introduce technology.

So, as Duncan's mentioned,

we've developed our OPRA technology

to enable that full

end-to-end continuum for any clinical trial.

We just released our version 5,

which is really exciting.

We've had some fantastic

feedback already.

But just to break it down, OPRA

comprises two modules.

So we have OPRA RAM,

which is our risk assessment

and management piece,

and then OPRA CM,

which is our central monitoring piece.

Together they provide that single environment

for teams to collaborate in a shared view,

and obviously enable

a company to have a full end-to-end

quality continuum.

So our business is all about

helping people through this process.

You know, we know that

it can be a very big investment.

And so we've come up with a couple of offers

to make things as easy

as possible for you.

So for anybody

that's attended the webinar today,

those of you

who are happy

to contract for this before September,

we will give you these packages.

So we have a people kickstarter pack.

That's our e-learning,

covering risk assessment

and also GCP, at a 25 percent discount,

along with our SOP and job descriptions

pack for free.

And then we have a technology

kickstarter pack as well.

That is your first risk assessment

study license fee for free

on OPRA RAM for 12 months,

and also the SOP

and job descriptions pack

for free as well. So that,

as I say, applies

before the end of September.

So that is a unique offer for everybody

who has attended today.

So, as we said at the start,

we're happy to field questions

from the audience.

I'm just looking here, I can see three

that have come through here.

So if I start with the first one.

So why do you think change

management is the biggest issue?

Duncan, do you want to take that one?

Yeah.

Thanks, Ben.

I guess it's a big, big question,

and probably not one I'm going to answer

in the next couple of minutes.

But I think the thing

with change management

here, and certainly with RBQM,

as you just said

in the last couple of slides,

is that it impacts

all areas of the business.

It impacts a lot of roles

within the business.

It does drive process

change and process improvements.

It certainly mandates the need

for new and improved technology.

And I think a lot of companies

have been used to more iterative

change over time.

So, for example, if you take the move

from paper-based trials

to electronic data capture,

you know, that was a kind of automation

of an existing process.

So it wasn't that we weren't

capturing data previously

and only now are starting to;

the processes were already in place.

We were just implementing new technology.

But with RBQM, in many cases,

it is a wholesale change.

And I think for a lot of companies,

that's just a lot to take on.

And knowing where to start

can be really challenging.

And, you know, again, hence

some of the offers

that we've just made, and

our desire

to help people with that

change management piece.

Fantastic. Second question.

So quality management is perceived

as being a dry

and boring topic.

Have you got any tips or recommendations

for getting engagement from people,

especially senior management?

Yeah. So, quality management:

is it a dry and boring piece?

I mean, I think at the

end of the day, it's absolutely at

the heart of everything we do

in this industry.

I think, you know,

often people's experience

with SOPs

and what have you is that

it's a lot of reading,

a lot to deal with.

I think in terms of engagement,

to me, it's looking at the upside.

You know, whenever you're

trying to engage

someone in change management,

to get them to change

their views on something

or to change

their working practices, it's

about looking at the upside.

And I think

the upside in

RBQM and this quality

continuum approach is

in a number of areas.

You know,

of course, it's going to benefit

the patients themselves.

If we're focusing on the patient,

we're being patient centric,

we're being considerate.

It's going to have

a very positive influence

on the patients.

You know, if we are thinking

about data quality

right from the beginning,

we're building it into

the protocol design.

And we're constantly thinking

about managing quality.

You know, we are going to get

a much more accurate

outcome from the study.

You know, we will know

whether that study is successful

or not much sooner.

And if it is successful,

we will have much

better evidence to prove

that it is successful

and hopefully get

regulatory approval

much more easily.

And that has huge

value, obviously,

both for the sufferers

of whatever that disease

or therapy area is,

but

of course, for the company running

the clinical trial themselves.

And thirdly, efficiency.

You know, whilst

it seems like there's

a lot to take on here,

what this is all about

is focusing our resources

on what matters.

And by focusing our resources

on what matters, it means that we are

saving huge amounts of human resource

and time and money,

which is currently, in many cases,

being wasted on stuff

that just doesn't matter.

And so there's no reason

why any of this should be considered

a cost increase in clinical research.

It absolutely shouldn't,

and we've got plenty of examples

where it has a hugely positive

impact on both the timeliness

and the cost of a clinical trial.

And my advice to anyone trying to engage

their organization is

focus on those things

and then suddenly this stuff

seems a bit less boring

because it is enabling huge improvements.

Fantastic.

I'm just looking at the questions;

we've got quite a few.

Just to say, if we don't

get around to answering them today,

we will come back to you

in due course with the answers.

So we will answer all the questions

that are coming through.

I'm just looking at the time.

We'll go for one more

if that's OK.

Sure.

So what's the biggest blocker

to making the virtuous circle work?

I think factoring in the concept

of the virtuous circle

right from the beginning,

so that you're thinking about

how am I going to pull

data out of this process

and how am I going to store

it and learn from it is really important.

And being prepared

to do that thinking when,

you know, certainly

at the beginning of a study,

it's pretty hectic.

You've got lots going on

and you've got recruitment going

on, you've got site initiation going on,

and everyone's excited

and fired up and wants to get on with it.

But taking a bit of a step back

and thinking about that is important.

And then I think the other thing

is having the right technology, you know,

when a lot of companies

are still doing quality management in

in BI tools and spreadsheets

and what have you,

which are very much

about the here and now,

it's very difficult

to go back and look at historic data

and actually get any

real value from that.

And so I think, again,

we're looking for real change

in mindset around

the technologies

that are used, that actually

not only allow us to conduct that quality

management cycle, but also

to provide that historical story

and evidence and be able to go back

and look at that data

in a sensible manner,

you know, at an enterprise level,

and make sure that we're really factoring

those learnings in

as we start new studies.

And for a lot of companies, that's

that's just technology

they're not using yet.

Fantastic.

Like I say, that's

all we've got time

for in terms of questions,

but we will come back to any questions

that we haven't answered today.

I can see there is quite a few

that have come through,

which is great,

but just to round off then,

at the end of the hour:

following on from this webinar

today, copies of the slides

and the infographics will be available,

and myself and my colleague Jo

will be following up

with everybody that's attended today.

We'll obviously follow up

with regards to the offer as well.

And if there's any interest there,

then it would be great

to have some engagement

from there.

If you would like any

historic RBQM guidance,

then I highly recommend that you go and

look at our website,

which you can see on the screen

www.tritrials.com/videos

If anybody is interested

in a personal demo of our OPRA

RBQM platform

then that's something we're very happy

to provide as well.

And any links to any additional

webinars will be coming up

from here as well.

So

it's been great to 

speak to all of you.

Thank you.

Thank you Duncan for your time today as well.

Thank you, Ben.

And thanks to everyone for giving up

their valuable time.

And I really hope

people have got good value.

It's always great when we see everyone

staying to the end of a webinar.

We've still got most of the people

that joined today on right now.

So thanks, everyone.

And I wish you a great end to the week

and look forward to engaging

with you further.

Absolutely.

Thank you, everybody.

Bye now.

Big Industry Challenges that OPRA 5 Addresses

There's a big industry drive

certainly at the minute around quality.

Now RBQM, risk based quality

management is a foundation

for quality in clinical trials.

And to quote from E8(R1)

quality is a primary

consideration for the design, planning,

conduct and analysis of clinical studies.

And to summarize

some of the key messages around E8(R1)

it's about building quality

into a trial design.

And that's the best way

of ensuring quality outcomes.

We need to kind of break away

from the standardized checks.

What is important

today may not be as important

as it was yesterday,

or over the course of a month.

And we need to take time

to really identify

the critical factors in our trials.

So what we're seeing is that

sponsors are driving

a move to a more balanced

mix of central, on-site

and remote monitoring,

rather than just

a general monitoring plan.

So how does this relate to OPRA?

So OPRA covers Section

five, the quality management

section of ICH E6(R2).

So right from risk assessment:

you know, using risk assessment

to inform a well-designed,

clearly articulated protocol,

and using the protocol risk assessment

to create and inform

the monitoring and functional plans.

That's where OPRA comes in, really.

OPRA follows the step-by-step

process specified in E6(R2).

And it provides a single

environment for teams to collaborate,

to have a shared view and to manage

trial quality efficiently.

So it creates an ongoing record

of all quality management

activities and decisions, which is really

what the regulatory authorities

want to see as well.

Duncan Hall - Why OPRA 5 for RBQM

Good afternoon. My name is Duncan Hall. I'm the CEO and founder of TRI, and I'm recording this short video clip to talk about OPRA 5. OPRA 5 is the latest release of our RBQM platform, which has just gone live in the last month. Now, we spent more than 12 months developing OPRA 5, and when you spend that amount of time and invest that amount of money and resource into developing a software platform, you've got to ask yourself the question: why? And this video is all about answering that question. So why did we develop OPRA 5? Well, fundamentally, it boils down to three reasons. And those three reasons have come from extensive conversations with our existing customers, people that were using the previous version of OPRA; our prospects, companies that we're already engaged in sales cycles or earlier sales discussions with, who often give us great feedback as to what they're looking for from a software platform for RBQM; and, of course, the regulatory authorities as well, the ultimate customer for RBQM. And as you all know, there are upcoming regulatory changes, with ICH E8, the first revision of that coming out this year, and ICH E6 revision three, which is expected sometime next year. So we've been watching very closely what's going on in the process of updating that guidance, and how that's going to impact the product and the need for technology support for risk based quality management going forward. So, I talked about those three things: what are the three drivers that have really caused us to want to spend the time and effort developing our OPRA 5 platform? Well, the first thing is quality by design. Now, quality by design is becoming more and more prevalent in clinical research.
It is talked about extensively in that first revision of E8, E8(R1), and the concept of building quality into a clinical trial, and then managing that quality throughout the lifecycle of the trial, is really the foundation of risk based quality management. And what we're starting to see in some of the early guidance and drafts and webinars that have been released recently about E6(R3) is this concept of a quality continuum, which really means end-to-end quality management, right from the initial protocol design all the way through to final study reporting. OPRA is a platform that enables that initial quality by design, the support of the analysis of a protocol, right through to the risk assessment, risk management, central monitoring, adaptive monitoring, and changing the operational management of a clinical trial based on the quality and the risks, all the way through that interim process. So that was the first driver: that quality continuum and support for quality by design. The second piece is around data and data standardisation. Now, as you know, when you're looking at risk based quality management, you're looking at data on an ongoing basis, centrally, and you're using that data to make decisions about how you're going to monitor the study going forward. You're consuming a huge amount of data, and that can take some setting up and management. Nobody wants to be spending time and money on study set-up and builds; they want to be getting on with execution. And so what we have created is a platform that allows standardisation where standardisation is appropriate. And we know that all clinical trials are different. You cannot standardise a clinical trial. You cannot standardise the data you look at in a clinical trial.
You cannot standardise the data visualisations. But there are elements within that which you can standardise, and where that's possible, we have. The real challenge from a technical perspective is how you standardise where appropriate, but create a platform that allows configuration and customisation where needed as well, so that you can factor in the nuances of that particular study: the study design, the way that you're going to execute that study, the assessments, what data is being collected, the patient visit schedules, all those components that are specific to any given study. So we've created a platform that truly allows us to standardise where possible and to be bespoke and flexible where we need to be.

And then finally, one of our mantras all the way through the OPRA lifecycle, going back five years now to when the product was first released, is that it's got to be operational. It has always been our view that OPRA is an operational platform: it's going to be used by operational users to make operational decisions, and we've really continued that through. So with all of that, that quality by design, that continuum of quality, our mantra has always been: how do we make the information that's being gathered and assessed, and the outputs of that process, operationally valuable? How do we really drive operational decisions based on that data? How do the operators of the system know what they're looking for, know how to easily read the signals that they're getting from the platform, and know what to do next when they see a signal? How do we combine all of that into one platform, so that all of that information, all that decision making, and all those resulting activities are captured in one place, and can be exported as a series of reports to be included in the final CSR that show that true quality continuum? You know: what was my protocol design? What was the initial risk assessment? What did I adjust about the design? What were the critical-to-quality factors? What were the risks to those factors? How did I manage them? What were the decisions and observations that I made along the way, and what was the end result? That is operational management in an RBQM environment. That's the third driver for the OPRA 5 design. Thank you very much. It's been a pleasure talking to you today. I hope this has really helped you understand why we've done what we've done, and what makes OPRA 5 such an exciting and valuable software platform. Thank you. Bye bye.