International Clinical Trials

Doing More with Less


As the processing and managing of clinical data continues to evolve with the advent of new technologies, the approach taken to risk-based monitoring is being challenged and modernised.

More than 50 years ago, the clinical trial industry was shaken to its very core by the thalidomide research disaster. The reactions were widespread and included a renewed focus on data quality, patient source notes, investigator management and, in particular, a demand for extensive processes to obviate the collection and reporting of fraudulent data. The follow-up to the disaster also marked the birth of modern-day monitoring as we now know it. Safety in checking led to confidence, confidence led to best practice, and best practice led to an all-consuming acceptance that monitors must go to sites every four to six weeks, check every data point against the source notes, and ‘guarantee’ that data is valid and correct at all times.

During the latter stages of the 20th century, with profits at a record high, financial constraints took second place to speed and inherited best practice. People accepted historical processes as standard practice and did not seek more timely solutions. Clinical operations and data management worked two different jobs, handling the same pieces of paper but rarely combining their efforts beyond a transitory collaboration around database lock. There is, of course, a touch of sarcasm in this position, but having spent 15 years working in data management, I feel confident that many of you will agree with me. Today, we find this paradigm under extreme threat from many directions: increasing investigator fees; tightening regulatory requirements; the impact of new technology; and the growing pressure on R&D time and cost. We should also not forget the patient, who today demands lower costs for treatments and strives for unprecedented levels of personalised medicine. The net impact is that we need to do much more, with much less, much faster.

In August 2011, the US Food and Drug Administration (FDA) released the ‘Guidance for Industry Oversight of Clinical Investigations – A Risk-Based Approach to Monitoring’ (1). This draft guidance addresses the misconception that 100 per cent source document verification (SDV) is required by the FDA, encourages sponsors to tailor monitoring plans to the needs of the trial, and even describes strategies for monitoring activities that reflect a modern, risk-based approach. The guidance encourages sponsors to focus on critical study parameters and rely on well-defined, yet flexible processes to oversee a study effectively. The European Medicines Agency (EMA) released a similar paper, also in August 2011.

Although risk-based site monitoring is a hot topic, the counter-considerations are equally strong – we operate in a risk-averse industry, where innovation is stifled by restrictive business practices, preconceived ideas and incorrect perceptions. The resulting failure to evolve processes, and the resistance to new approaches and technologies, is the subject of this article, which examines some major industry concerns in adopting a risk-based approach and showcases how technology paves the way for clinical researchers to take it on.

The Challenge: How Do We Ensure Trial Integrity and Optimise Trial Costs?

Traditionally, four of the most significant considerations when initiating a trial are:

  • Trial design (and statistical planning)
  • Site selection and patient recruitment
  • Monitoring
  • Data collection and management

These considerations can all be intrinsically linked by consolidating our focus on the lowest common denominator: data. If we accept that the main objective of a clinical trial is to collect the data we require to prove our hypothesis and confirm the safe and efficacious long-term usage of the solution under examination (chemical entity or device), we can then review our clinical trial best practices from a more holistic perspective.

Indeed, by examining these four trial components, we can trace the data through these phases, reviewing how, when and why we interact with the data generated. We can then truly identify how a streamlined strategy can not only simplify trial execution, but actually enhance quality by focusing on what truly matters, and consequently deliver upon a critical theme in clinical trial execution – putting the right people in the right place, at the right time. It is through this analysis that we will address the true challenges to ensuring trial integrity and optimising trial costs.

Trial Design (and Statistical Planning)

The key question every study team should be asking is ‘are we collecting the data we need and only the data we need?’ More often than not, the truth is that we are collecting too much data – data that we will not even analyse in a final study report.

A recent analysis completed by the Tufts Center for the Study of Drug Development revealed that 15 to 30 per cent of all data collected by some sponsors are never used or incorporated into a new drug approval submission (2). If we condense the volume of data collected to what is truly important, there are fewer data points to verify, and we therefore reduce the time, resources and money required to manage that trial.

However, reviewing the trial design also gives us the chance to prioritise that data – most typically key efficacy and safety data, or data linked directly to a primary or secondary objective. This analysis also allows us to identify the less important data – the data upon which we can make a conscious decision to expend less energy. This is the first step in our risk-based, or targeted, data strategy.
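
To make this concrete, a targeted data strategy can be expressed as a simple mapping from data criticality to verification effort. The sketch below is a minimal illustration in Python; the field names, tiers and percentages are assumptions for demonstration only, not figures from any real trial.

```python
# A minimal sketch of a tiered, risk-based data strategy: CRF fields are
# classified by how critical they are to the trial's objectives, and each
# tier is assigned a target level of source document verification (SDV).
# Field names and percentages are illustrative assumptions only.

SDV_TARGETS = {
    "critical": 1.00,   # key efficacy/safety data: verify every value
    "important": 0.50,  # secondary-objective data: verify a sample
    "routine": 0.10,    # low-impact data: spot-check only
}

CRF_FIELDS = {
    "primary_endpoint_score": "critical",
    "serious_adverse_event": "critical",
    "concomitant_medication": "important",
    "visit_date": "important",
    "smoking_history": "routine",
}

def sdv_target(field_name: str) -> float:
    """Return the fraction of values for this field that should be source-verified."""
    tier = CRF_FIELDS.get(field_name, "routine")
    return SDV_TARGETS[tier]

if __name__ == "__main__":
    for field, tier in CRF_FIELDS.items():
        print(f"{field}: tier={tier}, SDV target={sdv_target(field):.0%}")
```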

Site Selection and Patient Recruitment

As a clinical development programme progresses, we can learn from one study to the next, gaining insight into common mistakes, challenges and high value data points. Sites participating in early phase studies might be expected to carry forward their lessons learnt and experiences in a positive way. That means we can consider categorising sites according to their relative experience with the clinical development programme under discussion.

Does this knowledge offer us a further decision on our risk-based data strategy? Put simply, yes. If we select the most appropriate sites and train them appropriately, we can expect significantly higher data quality in return. Using technology to gain feedback on data issues in ‘real-time’ early in the process is key; and of course, sharing feedback across sites, so that mistakes are not replicated across the entire site base, is fundamentally important.
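
One simple way to operationalise this is to score sites on programme experience and early data-quality signals. The following sketch is purely illustrative; the thresholds, query-rate metric and site records are invented for demonstration, not drawn from any real categorisation scheme.

```python
# A hedged sketch of categorising sites by prior programme experience and
# observed data quality, so that monitoring effort can be weighted towards
# higher-risk sites. Thresholds and site records are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Site:
    site_id: str
    prior_studies_in_programme: int  # experience with this development programme
    query_rate: float                # data queries per 100 data points entered

def risk_category(site: Site) -> str:
    """Classify a site as low/medium/high risk from experience and query rate."""
    if site.prior_studies_in_programme >= 2 and site.query_rate < 1.0:
        return "low"
    if site.query_rate > 5.0:
        return "high"
    return "medium"

sites = [
    Site("GB-001", prior_studies_in_programme=3, query_rate=0.6),
    Site("DE-014", prior_studies_in_programme=0, query_rate=7.2),
    Site("US-102", prior_studies_in_programme=1, query_rate=2.4),
]

for s in sites:
    print(f"{s.site_id}: {risk_category(s)} risk")
```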

Monitoring, Data Collection and Management

The wide adoption of data management technologies such as electronic data capture (EDC) in the last decade has certainly made its mark in the area of processing and managing data. It is the advanced use of these technologies, in conjunction with trial design and site selection, that can enable further step changes in the classic site monitoring strategy. Perhaps the simplest example of this is site-based data entry: as sites enter data, the values are immediately subject to validation checks, giving instant feedback to the site, which in turn promotes the immediate correction of basic data errors. Anyone viewing the data can assess its quality at a glance without having to review the content manually. This is just one simple example of technology performing in seconds a task that traditionally could take minutes or even hours. Can we challenge other activities in the same manner?
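
To illustrate, an entry-time edit check can be as simple as a range rule that fires the moment a value is keyed in. The sketch below is a generic illustration, not any specific EDC vendor's API; the field, limits and query wording are assumptions.

```python
# A minimal sketch of the kind of validation (edit) check an EDC system can
# run the moment a site enters a value, returning feedback immediately rather
# than weeks later at a monitoring visit. The field, range and logic are
# illustrative assumptions.

def check_systolic_bp(value: float) -> list[str]:
    """Return a list of query messages for an entered systolic BP value."""
    queries = []
    if not 50 <= value <= 250:
        queries.append(
            f"Systolic BP {value} mmHg outside plausible range (50-250); "
            "please confirm against source."
        )
    return queries

# Example: feedback is generated at entry time, in seconds.
for entered in (122, 310):
    issues = check_systolic_bp(entered)
    print(entered, "->", issues or "clean")
```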

Changing Paradigm

It is important to challenge the classic functional labels of monitoring and data management, and instead consider the tasks that need to be performed. In my experience, I have always looked at monitors/clinical research associates (CRAs) as requiring two very different skill sets:

  • Site motivators: skilled at site management, motivation and patient recruitment
  • Data monitors: skilled at reviewing data and finding the errors

Monitors with a preference towards site motivation tend to deliver lots of patients and lots of data, but might lack the attention to detail when it comes to reviewing that data. By comparison, monitors with a preference towards the data generally deliver fewer patients (and data), but what they deliver is immaculate. This neatly describes the clinical challenge: how to find enough patients to meet recruitment targets while also delivering perfect data. In an ideal world, we would do both.

If we also overlay the advent of technology onto this role map – in particular EDC – we can identify a further fundamental change in our strategy. With a paper case report form (CRF), the first person to see the data was the monitor, who would then arrange for that data to be shipped to data management after performing the required SDV. Over time, this indirect data transmission evolved to enable sites to pull pages and submit them directly to data management (direct data transmission).

Under direct CRF or data transmission, the data manager would review the data, generate queries and provide direct site feedback, often before the next monitoring visit would occur. Today, using EDC, it is almost always the data manager who reviews the data first, enabling us to take this process even further, challenging the traditional division of responsibilities and allowing us to further streamline data checking.

Are we merging the roles of data management and clinical monitoring, or should we now simply decide that data management is responsible for the quality of the data? I certainly do not think the former is true; it is more a case of maximising the strategy. The site management component is absolutely key and cannot be overestimated. Perhaps a more valid approach is to re-examine the real steps involved and align our data strategy around them. In essence, we combine the classic monitoring plan with the classic data management and validation plan, and allocate full responsibility for each activity to a single individual.

New Approach

There are two major considerations when we build and execute our monitoring plan: the volume of SDV and the frequency of visits. By controlling and understanding the volume of SDV, we can allow the accrual of data to drive events at the site. Instead of defaulting to monitoring visits every four to six weeks, the data can dictate that frequency. High-enrolling sites generating greater volumes of data would therefore likely need more frequent or longer visits. With these considerations in mind, we can employ a very different mentality to scheduling and managing standard monitoring visits.

Site Management Activities

Each time a monitor visits a site, there are certain activities that must be performed. If we review these activities, we can anticipate they will take a fixed time (for example, four hours) for the monitor to complete – assuming that there are no major deviations from normal practice. Therefore, our goal is to optimise our return on investment (ROI) for this visit by ensuring there is sufficient work required in other areas to allow the CRA to spend a full day (or days) on site.
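
As a rough illustration of this ROI logic, the sketch below decides whether enough verifiable data has accrued at a site to fill a full day, on top of the fixed site-management workload. The SDV throughput and hour figures are assumptions for demonstration only.

```python
# A simple sketch of letting data accrual, rather than the calendar, trigger
# a monitoring visit: a visit is scheduled once enough unverified data has
# accumulated to fill a CRA's day on site. All figures are assumptions.

UNVERIFIED_POINTS_PER_HOUR = 60   # assumed SDV throughput for one monitor
SITE_MANAGEMENT_HOURS = 4         # fixed site-management tasks per visit
TARGET_DAY_HOURS = 8              # aim for a full productive day on site

def visit_due(unverified_points: int) -> bool:
    """A visit is worthwhile once SDV work fills out a full day on site."""
    sdv_hours = unverified_points / UNVERIFIED_POINTS_PER_HOUR
    return SITE_MANAGEMENT_HOURS + sdv_hours >= TARGET_DAY_HOURS

print(visit_due(120))  # False: only ~2h of SDV work waiting
print(visit_due(300))  # True: ~5h of SDV plus site management fills the day
```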

Data Handling and Review (for Example, SDV and Query Management)

If we can track the volume of data that requires SDV in real-time, we can project when there will be sufficient data to maximise the monitor's time on site. By examining a patient's visit schedule, with an understanding of the data flow and our SDV requirements, we can project when this optimal time will be. Furthermore, the monitor can work with data management to ensure that all data-related activities are current in time for that visit, again maximising the ROI for that visit. In terms of a monitoring plan, there are other factors we must take into account – such as serious adverse events, deaths, protocol violations and site personnel changes – all of which may require a visit sooner than the data projection suggests, but using the data as an overall projector brings significant benefits, saving time and effort.
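
Building on that, one could project the date of the optimal visit by walking forward through the expected patient visit schedule. The sketch below is a hypothetical illustration; the data-point counts, SDV fraction and backlog threshold are assumptions, not figures from any real monitoring plan.

```python
# A hedged sketch of projecting when a site will have accumulated enough
# data requiring SDV to justify a monitoring visit, using the expected
# patient visit schedule. All figures are illustrative assumptions.

from datetime import date, timedelta

POINTS_PER_PATIENT_VISIT = 80   # assumed data points generated per visit
SDV_FRACTION = 0.4              # assumed share of points needing SDV
VISIT_THRESHOLD = 400           # SDV backlog that justifies a trip

def project_visit_date(backlog: int,
                       scheduled_patient_visits: list[date]) -> date | None:
    """Walk forward through scheduled patient visits until the projected
    SDV backlog crosses the threshold; return that date, or None."""
    for visit_day in sorted(scheduled_patient_visits):
        backlog += int(POINTS_PER_PATIENT_VISIT * SDV_FRACTION)
        if backlog >= VISIT_THRESHOLD:
            return visit_day
    return None

schedule = [date(2012, 7, 2) + timedelta(weeks=w) for w in range(12)]
print(project_visit_date(backlog=150, scheduled_patient_visits=schedule))
```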

With this type of planning, it is now common for sponsors to estimate that monitors will be on site (post-initiation and pre-lock), on average, every eight to 12 weeks (as opposed to every four to six weeks). The reduction in direct (labour) and indirect (travel) costs is significant.

Does This Safeguard Our Data?

As an industry, our mission remains unchanged – to deliver safe and effective treatments to patients and those in need of clinical assistance. It is clearly stated and understood that sponsors of clinical investigations are required to provide oversight to ensure adequate protection of the rights, welfare and safety of human subjects and the quality and integrity of the resulting data submitted to regulatory authorities. With the advent of technology and an updated regulatory landscape, our standard approach is being challenged and modernised.

It is important that we recognise the difference between quality assurance and quality checking. Risk-based or targeted site monitoring enables a systematic, total data quality assurance approach to be implemented – one that considers the overall responsibilities shared by clinical project management, clinical monitoring, data management and other reviewers, and then re-deploys them in a manner that obviates duplication and streamlines clinical data acquisition and management. The ability to identify data trends early and to share these learnings enables us to re-deploy study personnel, placing the right person in the right place at the right time. The industry must stop performing site visits with no data, and stop burdening high-performing sites with unnecessary visits on a fixed schedule, as these serve only to distract and delay the process.

By using the data to drive those decisions, we promote our chances of catching errors early and learning from mistakes first time around. Technology plays a fundamental role in every current clinical trial and brings with it the ability for real-time visibility. If we can view real-time data, and most importantly utilise that data in a standardised format, we can direct field operations in a logical and appropriate manner.

Sites will be happy because they will not be visited as often, or perhaps their average visit duration will be reduced. Monitors will be happy because they can perform more centralised management of sites, spending less time on travel and more time on value-added activities. They can also prioritise appropriately, visiting sites that need assistance in meeting enrolment targets or data quality standards, rather than visiting low-recruiting sites 'just because my last visit was six weeks ago'. Data management assumes more of a daily role in site management activities, supporting the site, the monitor and the rest of the clinical study team.

Conclusion

None of this is a replacement for thinking. The data does not always tell the whole story. In addition to data-driven factors, we must also look for other signals – serious adverse events, deaths, protocol violations, site feedback – events that can require more immediate action. The data helps us identify global trends, as well as localised issues, but the data cannot be used in isolation to drive every critical decision. You can, however, use the data to challenge decisions and verify the need, or lack thereof, for activity.

The key to all of this is technology. Real-time, complete and coherent data flow shows us where we are being successful and where we need to invest our time and effort. The advent of technology has promoted these ideas from potential to standard procedure, turning the risk-based approach into reality. The critical piece of this puzzle was, and remains, a flexible platform that centralised clinical researchers can use to dynamically adjust the SDV level in real-time. Based on the observed quality, these researchers can effectively manage sites and their field-based operations.
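
As a final illustration, a platform of this kind might adjust a site's SDV level along the following lines. The error-rate bands, step sizes and levels in this sketch are purely illustrative assumptions, not any platform's actual algorithm.

```python
# A minimal sketch of dynamically adjusting a site's SDV level from observed
# data quality: clean sites earn reduced verification, error-prone sites are
# stepped back up to full verification. All bands and levels are assumptions.

def adjust_sdv_level(current_level: float, error_rate: float) -> float:
    """Return a new SDV fraction (0.1-1.0) based on the site's recent
    error rate (errors found per data point verified)."""
    if error_rate > 0.05:           # quality concern: verify everything again
        return 1.0
    if error_rate < 0.01:           # consistently clean: step down one notch
        return max(0.1, current_level - 0.25)
    return current_level            # otherwise hold steady

level = 1.0
for observed in (0.004, 0.003, 0.02, 0.08):
    level = adjust_sdv_level(level, observed)
    print(f"error rate {observed:.1%} -> SDV level {level:.0%}")
```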


References

  1. Guidance for Industry: Oversight of Clinical Investigations – A Risk-Based Approach to Monitoring, US Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research, Center for Biologics Evaluation and Research, Center for Devices and Radiological Health, August 2011
  2. Getz K, Assessing the downstream impact of protocol design complexity, Tufts Center for the Study of Drug Development, August 2009


Richard Young joined Medidata Solutions in 2010 and is currently Director of Regional Sales, with responsibility for EMEA. Richard started his career in data management and spent almost 15 years designing, managing and reporting clinical trials for pharma, CRO and technology organisations in both the US and Europe. 