International Clinical Trials

Prime Numbers

Major forces are exerting increasing pressure on the clinical research industry – we are facing a turning point unlike any other in our history. Regulatory agencies are demanding smarter, more efficient risk-based methodologies to improve the quality of data. The costs of bringing innovative new compounds to market are rising due to generics, lower approval rates, increased scrutiny and testing regulations. And the general public are becoming increasingly educated and invested in their healthcare.

So that ‘data-driven’ healthcare decisions can be made, organisations are being established, and gaining momentum, in response to the demand for openness and transparency from the pharmaceutical industry.

Both the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) are advocating the use of risk-based methodologies in an effort to decrease the resource and cost burden of ineffective, traditional monitoring approaches (1,2). They recommend using data to target resources onto high-value, high-impact tasks that ensure data quality, protecting both the safety of the patient population and the integrity of the clinical trial protocol.

With the clinical research industry preparing itself to respond to pressures for better quality data, efficiencies and transparency, the focus is starting to turn inwards as organisations endeavour to innovate and adapt. As we shine this self-appraising spotlight on ourselves, it is becoming apparent that, as data-driven companies, there is enormous scope for increasing the transparency and availability of our own data reservoirs in order to improve efficiencies and quality.

Continuum of Quality

Most organisations employ resource-intensive and reactive approaches to quality management, coupled with outdated manual processes that add little or no value in addressing the quality issues affecting patient safety or the integrity of the protocol.

A clinical trial is a virtually perfect data ecosystem; its flaws lie only in the interpretation of, and adherence to, the rules and structures that define and govern it. Data is collected from an appointed community of highly trained, professional contributors (investigators), in rigidly defined structures (electronic case report form, electronic and manual edit specifications) and against an exhaustive set of operating instructions (protocol).

The quality of a clinical database exists on a finite continuum. At one extreme, there is an end-point of 100 per cent clean and valid data, gradually retreating backwards towards an undefined point beyond which the data is of sufficiently poor quality to be either unusable for accurate analysis or a significant risk to patient safety.

Getting Smarter, Quicker

Traditional models dictate that in the pursuit of ever-greater quality, increasingly large sums of money must be spent; whereas pressures from regulatory authorities and the global economic markets dictate that costs must be reduced as quality levels increase – resulting in a quality-cost paradox. Many organisations fail to recognise there is a zone of tolerance within which the data is accurate and valid, supports the integrity of the protocol, and protects the safety of the patient population.

The industry needs to become more comfortable with achieving acceptable levels of quality versus desired levels, and thus begin to recognise the associated reduction in costs and effort in producing a clean, analysis-ready clinical trial database.

To adopt this new paradigm, the research industry needs to shift its focus from identifying and fixing errors once they have occurred to investigating and establishing causal factors earlier in the clinical trial process, resolving issues before they manifest themselves in poor-quality clinical data. To do this, we need a different approach to looking at data. We need to get smarter, quicker.

Data Reservoirs

‘Big data’ is a catch-all phrase that is used to describe the sudden explosion and availability of data over recent years. Some 90 per cent of the world’s data has been created in the last two to three years (3). As a result of rapid technology expansion over the last decade or so, many organisations are faced with a multitude of disparate, unconnected systems that provide disjointed data reservoirs.

Data is everywhere, and the rate of data generation has long outpaced our ability to structure, model and use it effectively in any form of intelligence-led decision-making. It is not just the volume of data that has increased, but also the velocity of data generation and the variety of data sources available to us.

Technological advances have made strides in addressing the velocity and volume. However, dealing with the variety of data continues to be the main source of frustration in providing cohesive, integrated and useful data reservoirs.

Many solutions involve some sort of ‘data warehouse’ or integration, with a business intelligence layer above that. Unless you have an agile, flexible integration methodology, the variety of clinical data sources in a modern clinical research organisation will always throw challenges in the path of usable data reservoirs.

Within the clinical research industry, there is a vast reservoir of potential information that remains untapped, currently providing limited, if any, insight into trial conduct and quality. This gap exists between the amount of data available to us and our ability and willingness to utilise it.

Multitude of Sources

Clinical trials create data. Not just clinical, but huge amounts of operational data are created in vast data reservoirs. A typical trial will have a multitude of isolated or partially connected data sources. This is further compounded by research organisations that utilise multiple systems from different vendors in order to fulfil client demands for choice and flexibility, such as:
  • Electronic case report forms
  • Medical records
  • Electronic patient-reported outcomes
  • eLab/central laboratories
  • Local laboratories
  • Pharmacokinetic laboratories
  • Safety databases
There are also other data sources which are not directly involved in collecting, cleaning or analysing a trial, but are or could be available to most clinical research organisations, including:
  • Recruiting and personnel management data
  • Personnel performance data
  • Quality assurance
  • Business operations
  • Operational process metrics
  • Clinical trial management systems
  • Finance and legal systems
  • Regulatory systems and repositories
  • Electronic health records
  • Social and community media
Couple all these systems and data sources together, and a picture emerges of how extensive these data reservoirs are. Furthermore, much of this data is segregated in silos, providing sponsors and research organisations with very little opportunity to use this for intelligence or decision-making.

When efforts have been made to utilise this data, we tend to sail around on these reservoirs, occasionally throwing in a line or net to see if anything interesting surfaces. So, despite generating and collecting vast amounts of data, we are currently unable to use it efficiently to identify areas of risk, poor quality or performance, potential safety concerns, or issues that may affect the integrity of clinical study protocols.

Managing the Flood

We have our data reservoirs – vast, deep and typically isolated. With new technology, we are able to consider opening the gates and tapping into this available resource in order to drive better business intelligence, better quality, more efficiencies, lower costs and higher margins.

But is it as simple as implementing new technology? Opening the reservoir gates will only lead to a tsunami of unintelligent, unconnected and irrelevant data – more noise than signal – and serve only to confuse and obfuscate decision-making for project team leadership. Without the right infrastructure, processes and technologies, this flood of data could begin to overwhelm and paralyse project teams as they struggle to make sense of everything.

The answer is as simple and as complicated as anything the industry has faced before. We need to put data at the heart of everything we do, and become data driven. Get the right information, to the right people, at the right time for them to make informed, calculated, accurate and timely decisions. In an industry that is traditionally conservative in its approach to quality control methodologies, this is a major paradigm shift.

There are technology burdens in implementing satisfactory data-driven quality processes, but even with existing platforms and some minimal changes to process, huge advantages can be gained. Using data is not just a technological solution – it needs to be process driven, with the data at the core. In its simplest form, it is heuristic, holistic and analytic. These principles should be central to any big data strategy.

Heuristic – from the Greek heuriskein, meaning ‘to discover, to find out’

If we are to use the data available to us to make empowered and informed decisions, it needs to start with an inquiry: a question, an issue that needs identifying and resolving. There is little point in IT departments embarking on huge transformational projects to integrate multiple data sources and serve up interactive reports and data if they are not going to answer the fundamental questions being posed by the business – if they are not going to provide actionable intelligence.

Holistic – from the Greek holos, meaning ‘whole, entire, total’

In order for us to gain an overall perspective of quality, performance and risk indicators, we need to expose and utilise as many of the data sources as are available, and ensure they are relevant to the questions the business is posing. Although insight can be gleaned from traditional reporting and metrics, this often produces a singular view, potentially leading to poorly informed decision-making.

The holistic approach needs to be flexible and agile in its deployment, as data sources may change as the study moves from feasibility, through enrolment, study conduct and study close-out, to final statistical analysis and reporting. These data may come from already integrated systems, or custom integrations may need to be developed as part of the data harvesting and presentation. Whatever systems, technology and processes are adopted, the outcome should be relevant, harvested data in analysis-ready formats, as in the sketch that follows.
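
To make ‘analysis-ready’ concrete, the short Python sketch below shows one possible way of harvesting three hypothetical source extracts (an EDC query report, a CTMS site listing and a safety event listing; the file and column names are assumptions for illustration only) into a single site-level view. It is a minimal sketch of the principle, not a prescribed integration design.

# Minimal sketch: harvest three hypothetical source extracts into one
# analysis-ready, site-level table. File and column names are illustrative.
import pandas as pd

edc = pd.read_csv("edc_queries.csv")       # site_id, open_queries, pages_entered
ctms = pd.read_csv("ctms_sites.csv")       # site_id, enrolled, monitoring_visits
safety = pd.read_csv("safety_events.csv")  # site_id, sae_count

# Join on a common site identifier to build one holistic view per site
site_view = (
    edc.merge(ctms, on="site_id", how="outer")
       .merge(safety, on="site_id", how="outer")
       .fillna(0)
)

# Derive simple, analysis-ready indicators from the combined sources
site_view["queries_per_page"] = (
    site_view["open_queries"] / site_view["pages_entered"].clip(lower=1)
)
site_view["saes_per_subject"] = (
    site_view["sae_count"] / site_view["enrolled"].clip(lower=1)
)

# Persist the harvested output for whatever analytics layer sits above it
site_view.to_csv("site_view.csv", index=False)

Repeating this pattern per study phase keeps the harvesting step decoupled from the analytics built on top of it, so sources can be added or swapped as the study progresses.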

Analytic – from the Greek analutikos, meaning ‘to resolve’

To gain advantage from exposing multiple data reservoirs and exploit the value and differentiators they may hold, an organisation needs to design an appropriate, relevant and agile analytics layer. These analytics must be designed to frame the entire study process and support the questions being posed by the business.

Careful consideration should be given to designing a reusable and reflective suite of analytics that provides the necessary intelligence, not only for individual studies or programmes, but across the organisation as a whole. We may also be able to introduce forward-looking, predictive analytics that utilise the power of retrospective analysis and complex event processing.
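
As one illustration of what such a reusable analytic might look like, the sketch below flags sites whose quality indicators deviate markedly from the study-wide distribution. It is a retrospective building block only; the indicator names and the two-standard-deviation threshold are assumptions carried over from the earlier harvesting sketch, and predictive models or complex event processing could later be layered on the same analysis-ready inputs.

# Minimal sketch of a reusable retrospective analytic: flag sites whose
# indicators sit far from the study-wide distribution. Column names and
# the threshold are illustrative assumptions, not a prescribed methodology.
import pandas as pd

def flag_outlier_sites(site_view, indicators, z_threshold=2.0):
    """Return sites where any indicator lies more than z_threshold
    standard deviations from the study mean."""
    flags = site_view[["site_id"]].copy()
    for col in indicators:
        mean, std = site_view[col].mean(), site_view[col].std(ddof=0)
        flags[col + "_z"] = 0.0 if std == 0 else (site_view[col] - mean) / std
    z_cols = [c for c in flags.columns if c.endswith("_z")]
    flags["at_risk"] = (flags[z_cols].abs() > z_threshold).any(axis=1)
    return flags[flags["at_risk"]]

# Reuse the analysis-ready output from the earlier harvesting sketch
site_view = pd.read_csv("site_view.csv")
at_risk = flag_outlier_sites(site_view, ["queries_per_page", "saes_per_subject"])
print(at_risk)

Because the indicator list is a parameter, the same analytic can be pointed at different studies or programmes without modification.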

Cultural Shift

One of the biggest challenges in utilising analytics and non-traditional data sources will be cultural – moving away from the ingrained insistence within the industry that, in order to ensure quality, every data point must be checked and every error fixed.

Using analytics in a historical, current and predictive manner, we can begin to build new, additional and improved quality into the design and conduct of clinical trials. We can start to increase risk levels and tolerance, with mitigation and management strategies, by focusing resources onto high-impact, high-value tasks around patient safety and the integrity of the trial protocol. Analytics will allow us to model and shape data in ways not implemented before. We will be able to identify risks and quality issues not encompassed or defined in current data collection and review methodologies, and resolve issues much earlier in the data collection and cleaning process.

References
1. FDA, Guidance for industry: Oversight of clinical investigations – a risk-based approach to monitoring, 2013
2. EMA, Reflection paper on risk based quality management in clinical trials, EMA/269011/2013, 18 November 2013
3. SINTEF, Big data, for better or worse: 90% of world's data generated over last two years, ScienceDaily, 2013. Visit: www.sciencedaily.com/releases/2013/05/130522085217.htm


Gareth Adams is Senior Director of Central Analytics at PRA. In his 19 years of professional experience, Gareth has held several senior management positions within the contract research industry and, since 2010, has been deeply involved in process optimisation, strategic visioning, talent management and change leadership. He graduated with honours in Biological Sciences from the University of Plymouth before starting his career working as a Data Analyst.