International Clinical Trials

Use and Principles of Automated Measurements

Even before the approval of the ICH E14 guideline, automated electrocardiographic measurements and interpretations were available from electrocardiogram (ECG) machine manufacturers, and they appear on the printed ECG. However, each manufacturer advised caution when using these readings. The final arbiter for ECG measurement and interpretation, and thus the gold standard, has always been the cardiologist.

In the era of the E14, the number of ECGs collected in early phase drug development has grown dramatically. Previously, the standard was to collect only safety ECGs at the Phase 1 unit. Now, however, even a simple crossover Thorough QT (TQT) trial can consist of four periods, more than 40 subjects and multiple timepoints, with ECGs collected in triplicate, which can lead to approximately 10,000 ECGs. It has also become regular practice for sponsors to collect multiple ECGs in early development, to give some indication of the drug’s effect on the ECG measurements.
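As a rough illustration of how the volume mounts up, consider the design above; the timepoint count below is an assumption made purely for the sake of the arithmetic.

```python
# Rough ECG count for a four-period crossover TQT trial.
# The number of timepoints per period is an assumed figure for illustration.
periods = 4
subjects = 42          # "more than 40 subjects"
timepoints = 20        # assumed baseline and post-dose timepoints per period
replicates = 3         # ECGs collected in triplicate

total_ecgs = periods * subjects * timepoints * replicates
print(total_ecgs)      # 10,080 -- on the order of 10,000 ECGs
```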

The large number of ECGs now collected over the short timecourse of a Phase 1 study places a great burden on the reader, who must review all of the ECGs in a timely manner. The E14 therefore allows more than one person to read the ECGs from a TQT trial, as long as each subject is assigned to a single “skilled reader”. Because of concerns over inter-reader variability, it is best to keep the number of readers to a minimum. Spreading the work across readers reduces the time to database lock, but there is an alternative – the use of automated readings.

Automated Algorithms

Automated algorithms come from two main sources: ECG machine manufacturers and third-party vendors. Manufacturers such as GE, Mortara, Philips and Schiller can have the ECG measurements and interpretation printed on the unconfirmed copy of the ECG, as well as stored in the database.

Newer, third-party algorithms from companies such as AMPS, OBS, Monebo and NewCardio can measure digital ECGs (usually in an XML format) after collection. Because the algorithm is not connected to an ECG machine, these programs usually process the data in a batch format, typically at the end of the trial; the readings are therefore not available at collection time. Regarding the measurement of the ECG, the ICH E14 states: “At present, this would usually involve the measurement by a few skilled readers (whether or not assisted by computer) operating from a centralised ECG laboratory. If well characterised data validating the use of fully-automated technologies become available, the recommendations in the guidance for the measurement of ECG intervals could be modified.”
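A minimal sketch of this batch-processing pattern is shown below. The directory layout and XML tag names are hypothetical, invented for the example; real digital ECG formats (such as the HL7 annotated ECG format) are considerably more involved.

```python
# Sketch: batch-process a directory of digital ECG XML files after the trial.
# Tag names (subjectId, qtInterval, etc.) are hypothetical placeholders.
import xml.etree.ElementTree as ET
from pathlib import Path

def read_intervals(xml_path):
    """Pull interval measurements and identifiers from one ECG file."""
    root = ET.parse(xml_path).getroot()
    return {
        "subject": root.findtext("subjectId"),
        "timepoint": root.findtext("timepoint"),
        "qt_ms": float(root.findtext("qtInterval")),
        "rr_ms": float(root.findtext("rrInterval")),
    }

def process_batch(ecg_dir):
    """Process every ECG in the trial directory in one pass at study end."""
    return [read_intervals(p) for p in sorted(Path(ecg_dir).glob("*.xml"))]
```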

Currently, automated measurements are being used in some situations. Regardless of the method used to measure the ECG, the sponsor is responsible for the data, and must therefore be comfortable with, and confident in, the final database.

Using Automated Measurements

Under the ICH E14 guidelines, traditional ECG measurements are made by cardiologists who place electronic calipers (or annotations) on the fiducial points of the electronic ECG. Intervals are then calculated and stored in the computerised system.

The process of measuring an ECG takes a finite amount of time, which varies with the technical quality of the ECG, the number of annotations to be placed, and how readily the onsets and offsets of the ECG waveforms can be discriminated. Depending on the number of ECGs in a clinical trial – sometimes upwards of 10,000 – the time it takes to measure all of the ECGs can be considerable.

Variability Issues

With the time constraints of a clinical trial database lock, more than one reader is needed, and a few skilled readers may be used. However, because each reader has his or her own opinion as to where the fiducial points are located on the ECG, and consistency is important, inter- and intra-reader variability must be addressed. ECG core laboratories have dedicated significant resources to benchmarking and measuring inter- and intra-reader variability.

In addition, ECGs in TQT trials are usually collected in triplicate – that is, three ECGs are collected within about a five-minute window at a nominal timepoint relative to dosing. The triplicate ECGs are used to reduce the normal physiologic variability inherent in ECG measurements. The standard deviation of the QTcF within a triplicate is often used as a measure of data quality, and manufacturers of automated algorithms often tout the fact that their algorithms can lower it, thus producing a higher-quality database.
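As a concrete illustration, the sketch below computes the Fridericia-corrected QT (QTcF = QT/RR^(1/3), with RR in seconds) and the within-triplicate standard deviation; the interval values are made up for the example.

```python
# QTcF (Fridericia correction) and the within-triplicate standard deviation,
# a common data-quality metric. Interval values below are illustrative only.
import statistics

def qtcf(qt_ms, rr_ms):
    """Fridericia-corrected QT: QTcF = QT / (RR)^(1/3), with RR in seconds."""
    return qt_ms / (rr_ms / 1000.0) ** (1.0 / 3.0)

# One triplicate: three (QT, RR) pairs collected within ~5 minutes
triplicate = [(402, 980), (398, 1010), (405, 995)]
qtcf_values = [qtcf(qt, rr) for qt, rr in triplicate]
print(statistics.stdev(qtcf_values))  # lower SD -> higher-quality data
```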

With these considerations in mind, the fully-automated approach has been shown to be effective in many cases: the computer takes far less time per ECG than a human reader, reducing the time to a locked database, and it has little to no intra-reader variability, so the readings are highly consistent.

Creating a Global Median

In general, automated measurements are made on a representative beat derived from all 10 seconds of the digitally-acquired ECG. The process begins by taking each lead of the ECG and creating a median complex from the primary, or dominant, normal beat; all of the beats with the primary morphology are used to create the median beat.

After medians are created for each lead, a global median is created by aligning the individual median beats. From there, each interval (PR, QRS and QT) is measured from the earliest onset to the latest offset across the leads. In many cases, using the earliest onset and the latest offset will yield a longer QT-interval measurement than the human reading. Heart rates for automated algorithms are usually calculated over the entire 10-second ECG.
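The sketch below illustrates the global-median idea in simplified form. Beat detection and fiducial-point location are the hard parts and are assumed to have been done already; here each lead simply supplies aligned dominant beats plus Q-onset and T-offset annotations, all hypothetical values.

```python
# Simplified sketch of the global-median approach described above.
import numpy as np

def median_beat(beats):
    """Per-lead median complex: sample-wise median of aligned dominant beats."""
    return np.median(np.stack(beats), axis=0)

def global_qt(onsets_ms, offsets_ms):
    """Global QT: earliest Q onset to latest T offset across all leads."""
    return max(offsets_ms) - min(onsets_ms)

# Three dominant-morphology beats from one lead (toy samples), then the median
lead_beats = [np.array([0.0, 1.0, 0.2]),
              np.array([0.1, 0.9, 0.2]),
              np.array([0.0, 1.1, 0.3])]
print(median_beat(lead_beats))          # sample-wise median complex

# Hypothetical per-lead annotations (ms from a common alignment point)
q_onsets = [102, 100, 104]              # earliest onset wins
t_offsets = [492, 498, 495]             # latest offset wins
print(global_qt(q_onsets, t_offsets))   # 398 ms -- tends to run longer

# Heart rate over the full 10-second strip: beats counted x 6
beats_in_10s = 12
print(beats_in_10s * 6)                 # 72 bpm
```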

Need for Standardisation

However, not all algorithms work the same way. Kligfield et al published a comparison of the performance of two algorithms from two different manufacturers (1). The authors collected two sets of ECGs (one on each manufacturer’s digital electrocardiograph) at the same time; the acquisition was not simultaneous because pre-recording processing operations differ between the ECG machines. Not only did the two algorithms measure the QT interval differently, but automated measurements of the QT interval also differed from one version to another within the same algorithm family by up to 24ms. This study highlights the importance of standardising the equipment, including software versions, across all of the sites and subjects if automated readings from the ECG machine are to be used.

Data Quality

Automated algorithms are being used in many clinical trials, offering a cost- and time-effective alternative to human readings in the right circumstances. However, two particular issues need to be considered: ECG quality, and adjudication of substandard readings.

Poor ECG quality affects the accuracy of ECG measurements, whether made by a human reader or an automated algorithm. High-frequency noise at the baseline interferes with the ability to determine, with precision, where one feature of the ECG ends and another begins. Unfortunately, there is no adequate remedy once the ECG has been collected, as ECG filters can slightly alter the morphology; the only adequate solution is to collect good-quality ECGs.

Since not all ECGs are of the highest quality, and the computer reading sometimes fails, an adjudication process should be built into the clinical trial whenever automated algorithms are used. Adjudication should be considered for ECG values that fall outside the mainstream of the normal QT/RR relationship (so-called outliers), and when certain quality control metrics are not met by triplicate ECGs.

Finding outliers in the dataset is not difficult. These are values that exceed either a predefined absolute threshold (such as a QTc greater than 450ms, or a change from baseline of more than 30ms) or a predefined relative threshold (such as a value more than two standard deviations from the mean).
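A minimal outlier screen using the thresholds just mentioned might look as follows; applying the two-standard-deviation rule within a subject’s own set of QTc values is an assumption made for the example.

```python
# Outlier screen using the absolute and relative thresholds described above.
import statistics

def flag_outliers(qtc_values, baseline, abs_limit=450.0, delta_limit=30.0):
    """Return indices of QTc values (ms) breaching any predefined limit."""
    mean = statistics.mean(qtc_values)
    sd = statistics.stdev(qtc_values)
    flagged = []
    for i, v in enumerate(qtc_values):
        if v > abs_limit:                 # predefined absolute value
            flagged.append(i)
        elif v - baseline > delta_limit:  # change from baseline
            flagged.append(i)
        elif abs(v - mean) > 2 * sd:      # predefined relative value
            flagged.append(i)
    return flagged
```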

Another set of data quality issues arises when values appear normal but are not accurate. These values can negatively affect the overall quality of the database. To find them, we must look at metrics such as the within-triplicate standard deviation, relative changes from baseline as compared with other subjects, and within-triplicate ranges. By running data quality metrics, database quality can be assured, regardless of reading methodology.
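A triplicate-level check along these lines is sketched below; the acceptance limits are illustrative assumptions, not regulatory values.

```python
# Triplicate-level quality metrics; sd_limit and range_limit are assumed
# example values, chosen only to make the sketch concrete.
import statistics

def triplicate_qc(qtcf_triplet, sd_limit=12.0, range_limit=30.0):
    """Flag a triplicate whose within-triplicate SD or range looks suspect."""
    sd = statistics.stdev(qtcf_triplet)
    spread = max(qtcf_triplet) - min(qtcf_triplet)
    return {"sd_ms": sd, "range_ms": spread,
            "pass": sd <= sd_limit and spread <= range_limit}
```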

New Approach

The US Food and Drug Administration (FDA) was not in favour of automated ECG measurements until relatively recently. Now, however, it seems to accept them, with the caveat that the sponsor is responsible for the data.

It is widely recognised that automated measurements are best used on ECGs from normal, healthy volunteers, whose ECG intervals are easy to read. This generally means that automated measurements are best suited to early phase clinical trials. Fortunately, these are also the trials in which ECG data is most often collected digitally and can therefore be read with automated algorithms.

Automated algorithms are available from both third-party vendors and ECG machine manufacturers, and both types have been used in clinical trials. Regardless of which algorithm is used, standardisation is important: all ECGs should be measured with the same version of the software, and an adjudication process should be employed.

Reference

1. Kligfield P, Hancock EW, Helfenbein ED, Dawson EJ, Cook MA, Lindauer JM, Zhou SH and Xue J, Relation of QT interval measurements to evolving automated algorithms from different manufacturers of electrocardiographs, American Journal of Cardiology 98: pp88-92, 2006



Timothy Callahan PhD is a senior healthcare executive and researcher, with extensive expertise in cardiac research design, development, implementation and analysis. As the Chief Scientific Officer of Biomedical Systems, he is a liaison between company clients and the FDA, as well as managing consulting personnel. He is also responsible for scientific support of new and ongoing projects, developing scientific standards and protocols, and writing client summary reports.