International Clinical Trials

Non-Response: Issue or Inconvenience?



The occurrence of missing data has been a prevalent issue for clinicians, pharmacists and statisticians for many years. While there are requirements in place to reduce non-response and missing data, one question that has continually arisen is whether there is information contained within the non-response.

Missing or incomplete responses are a common feature of many clinical trials and observational studies. The problem created by trial non-response is that data values intended by the trial design to be observed are, in fact, missing. These missing values mean not only less efficient estimates due to the reduction in sample size, but also that standard complete-data methods cannot be used to analyse the data. Furthermore, bias could become an issue in the analysis because responses are often systematically different from non-responses.

What are the Regulators Saying?

Currently, regulatory agencies such as the FDA in the US and the EMA in Europe are exploring various options to approach missing data in clinical trials. In 2011, the EMA released a report expressing its views on handling missing data in confirmatory clinical trials (1). Similarly, in 2010, the FDA commissioned a special panel of statisticians from the US National Academy of Sciences (NAS) to draw up “a report with recommendations that would be useful for FDA’s development of a guidance for clinical trials on appropriate study design and follow-up methods to reduce missing data and appropriate statistical methods to address missing data analysis of results” (2). Additionally, there was a report by the International Conference on Harmonisation (ICH) in 1998 addressing the issue of non-response in clinical trials and asking some tough questions about how it needs to be dealt with (3). Both the FDA and the EMA recognise how missing data can affect the analysis and any interpretations that are made, by reducing sample size and injecting potential bias.

In the NAS report, the panel makes 18 recommendations to the FDA for consideration, spanning areas such as: trial objectives; reducing dropout through trial design and trial conduct; treating missing data; and understanding the causes of dropouts in clinical trials (2). Of these 18 recommendations, seven are focused on reducing missing data and a further seven are aimed at dealing with missing data. Furthermore, both the NAS and the EMA reports suggest similar approaches to handling missing data (1,2). As a result, it appears that a particular direction in this field is beginning to gain traction among the regulatory agencies.

What are the Statisticians Saying?

Historically, many statistical techniques were developed in an attempt to gain a wealth of knowledge from very limited sources of information, hence the birth of the ‘sample’. By taking a sample of an overall population, it is possible to make certain inferences about that population. A natural progression, then, was to ask whether any information could be salvaged from the missing data that can so easily creep into any clinical trial. This debate led to the question of whether the observed data are biased due to the lack of complete data. In other words, if the missing data were observed, would the results of any analysis differ?

Traditionally, there have been various ways of dealing with non-response and missing data, the most common of which is referred to as ‘complete case analysis’. This is where all the cases with incomplete responses are systematically deleted before any analysis is performed. One opinion that has gained a lot of momentum and support is the idea that deleting the incomplete cases is wrong for two main reasons: firstly, by deleting the incomplete cases you are throwing away information that you already have; and secondly, you may be injecting bias into the sample you have collected. To tackle this dilemma, statisticians and clinicians decided to replace the missing data with the worst possible (or best possible) response to purposely bias the data. Therefore, if you could prove that a particular drug or device performs even under the worst-case (or best-case) scenario, then that result can be extrapolated to the wider population.
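
To make these two approaches concrete, here is a minimal sketch (Python with pandas; the toy dataset and its 0-10 ‘worst pain’ coding are invented for illustration) contrasting complete case analysis with worst-case single imputation:

```python
import numpy as np
import pandas as pd

# Invented toy trial data: a 0-10 pain score, with some responses missing
df = pd.DataFrame({
    "patient": [1, 2, 3, 4, 5],
    "score":   [3.0, np.nan, 5.0, np.nan, 4.0],
})

# Complete case analysis: drop every patient with a missing response.
# Whatever else patients 2 and 4 told us is discarded entirely.
complete_cases = df.dropna(subset=["score"])
print("Complete case mean:", complete_cases["score"].mean())

# Worst-case single imputation: replace each missing value with the worst
# possible response (here 10 = worst pain) to deliberately bias against the drug.
worst_case = df.fillna({"score": 10.0})
print("Worst-case mean:", worst_case["score"].mean())
```

If the treatment still shows a benefit under the deliberately pessimistic fill, the conclusion is considered robust to the missingness.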

In longitudinal trials, or trials conducted over time, a single imputation approach is usually implemented to fill in the missing data, using the last value recorded, or the last observation made about a particular case (where single imputation can be defined as the procedure of entering a single value for a specific data point that is missing). This common method is referred to as Last Value Carried Forward (LVCF), Last Observation Carried Forward (LOCF) or Baseline Observation Carried Forward (BOCF), depending on the values in question. Both the EMA and NAS reports were critical of these single imputation methods, warning that “confidence intervals for the treatment effect … may be too narrow and give an artificial impression of precision” and that they do “not conform to well-recognised statistical principles for drawing inferences” (1,2).
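
As a quick illustration of LOCF, the sketch below (Python with pandas; the visit data are invented for the example) carries each patient’s last observed score forward into later, missing visits:

```python
import numpy as np
import pandas as pd

# Invented longitudinal data: one score per patient per visit
visits = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2, 2],
    "visit":   [1, 2, 3, 1, 2, 3],
    "score":   [4.0, 5.0, np.nan, 6.0, np.nan, np.nan],
})

# Sort by visit within patient, then forward-fill within each patient:
# the last observation is carried into every subsequent missing visit.
visits = visits.sort_values(["patient", "visit"])
visits["score_locf"] = visits.groupby("patient")["score"].ffill()
print(visits)
```

Note how patient 2 contributes the same baseline-adjacent value three times, which is precisely why the regulators warn that the resulting precision is artificial.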

A more modern approach that is currently gaining much interest in clinical and pharmaceutical circles is multiple imputation. Originally proposed by Donald Rubin of Harvard University, multiple imputation is a technique that replaces each missing or deficient value with two or more acceptable values that represent a distribution of the possibilities (4). Using this method, you start out with one incomplete database and end up with multiple complete databases.
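
As a rough sketch of the mechanics (Python with NumPy; the data are invented, and a real implementation would draw from a fitted model of the data rather than the raw observed values), each missing point is filled with a plausible random draw, and the procedure is repeated to produce several completed databases:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented response vector with two missing values
y = np.array([3.0, np.nan, 5.0, np.nan, 4.0, 6.0])
observed = y[~np.isnan(y)]

m = 5  # number of imputed databases; generally the more the better
completed = []
for _ in range(m):
    filled = y.copy()
    # Replace each missing point with a random plausible draw,
    # not a single fixed value, so the fills vary across databases
    filled[np.isnan(filled)] = rng.choice(observed, size=np.isnan(y).sum())
    completed.append(filled)

for d in completed:
    print(d, "mean:", d.mean())
```

The variation in the filled-in values across the five databases is the point: it encodes how uncertain we are about what the missing responses would have been.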



Why so Many and What to Do with Them?

An important question to pose is whether non-response is an issue or an inconvenience. In terms of the inconvenience, when incorporating multiple imputation, we start with one database containing all of the information gathered throughout the trial and, subsequently, we generate several – and generally the more the better. An overview of the multiple imputation process is displayed in Figure 1. Beginning with the original data collected from the study, the missing data points are displayed in red. Inserting multiple possible values for each missing data point generates a number of databases. Finally, the various characteristics of the databases are examined.

The reason for this course of action is that inserting a single value to replace a missing data point can lack the depth required to fully encompass all the information that is missing. In other words, inserting one value may be too extreme, while inserting a whole range of values generates a more complete picture. Once we have generated several complete databases, we approach each of them in the same way as any other database. This is where the inconvenience comes into play: where there was only one database and one set of analyses to be done, we now have to look at and analyse several databases. Once this is done, the results are compared and combined across all the newly created databases. So when adopting multiple imputation there is a considerable amount of extra work involved, but the return is far greater than the investment.
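
The standard way to combine the per-database results is Rubin’s combining rules (4): average the estimates, and pool the variances so that the spread between databases inflates the final uncertainty. Below is a minimal sketch (Python with NumPy; the per-database estimates and variances are invented for illustration):

```python
import numpy as np

# Invented results from analysing m = 5 imputed databases:
# one treatment-effect estimate and its variance (squared SE) per database
estimates = np.array([2.1, 2.4, 2.0, 2.3, 2.2])
variances = np.array([0.30, 0.28, 0.31, 0.29, 0.30])
m = len(estimates)

pooled_estimate = estimates.mean()          # average of the m estimates
within = variances.mean()                   # average within-imputation variance
between = estimates.var(ddof=1)             # between-imputation variance
total_variance = within + (1 + 1 / m) * between  # Rubin's total variance

print("Pooled estimate:", pooled_estimate)
print("Pooled standard error:", np.sqrt(total_variance))
```

Because the between-imputation term grows when the imputed databases disagree, the pooled confidence interval honestly reflects the extra uncertainty created by the missing data, which is exactly what the single imputation methods fail to do.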

So Why Bother with It?

Turning to the ‘issue’ side of the story, the argument for dealing with missing data, rather than simply ignoring it, is that there could be substantial bias attributed to the absence of the data in question. This means that there could be a very specific reason (or several reasons) why certain information is missing, and to simply ignore that could be detrimental to a study and its outcomes. For example, if a measurement could not be taken for a particular type of patient, then this group of patients is essentially excluded from the study; had their measurements been recorded, they might have influenced the outcome significantly. Making accurate imputations based on the observed data can help to achieve a much more representative insight into the group being studied. Table 1 summarises the main advantages and disadvantages of incorporating multiple imputation into the data analysis and study process. Bias reduction, increased study power and more efficient use of the sample size are clear advantages of implementing multiple imputation. The side effects of using this method are the additional analysis time required and technological issues such as computation speed and memory.

Conclusion

In statistical terms, multiple imputation is still a relatively young methodology and certainly a very new approach to handling data in the clinical trial setting. It is a novel approach and appears to be gaining a significant amount of momentum with both regulatory agencies and pharmaceutical companies. As outlined above, non-response is a very prevalent issue in clinical trials because missing data can occur so easily. Using multiple imputation as a solution to handling missing data may at first appear an inconvenience due to the extra analysis required; however, advancements in technology and software are reducing that inconvenience at an incredible pace. Therefore, to answer the question of whether non-response in clinical trials is an issue or an inconvenience, the answer is a bit of both, but while the inconvenience will subside, the issue will always remain.

References

  1. European Medicines Agency, Guideline on Missing Data in Confirmatory Clinical Trials, www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2010/09/WC500096793.pdf, accessed 11th May 2012
  2. National Research Council, The Prevention and Treatment of Missing Data in Clinical Trials, 2010
  3. International Conference on Harmonisation, Topic E9: Statistical Principles for Clinical Trials, www.emea.europa.eu/docs/en_GB/document_library/Scientific_guideline/2009/09/WC500002928.pdf, accessed 11th May 2012
  4. Rubin DB, Multiple Imputation for Nonresponse in Surveys, Wiley, 1987



Andrew Grannell is Senior Statistician at Statistical Solutions, a company that provides and develops statistical software for the pharmaceutical industry. He received his BEng in Microelectronic Engineering from the National University of Ireland, Cork, in 2008. He went on to receive his HDip in Applied Statistics and MSc in Statistics from the National University of Ireland, Cork, in 2009 and 2011 respectively. His thesis research focused on medical and social science applications of statistics. His more recent topics of interest include missing data analysis, and he is involved in the development of the SOLAS for Missing Data Analysis software package.