
International Clinical Trials, Summer 2010

Press the REC Button

Hugh Davies of the National Research Ethics Service examines the ways to research the decisions and deliberations of Research Ethics Committees

Research Ethics Committees (RECs) are established to protect the rights, dignity, safety and wellbeing of research participants, while also facilitating ethical research. Whether they do so, and how this is achieved, is of obvious importance, particularly for countries with limited resources that are being encouraged to develop capacity to review research.

Is it simply the existence of RECs that alters researchers’ behaviour and ensures research conforms to accepted ethical standards, or do their processes and deliberations have a part to play? These are difficult questions to answer. Given the common view that research is a risky venture (rightly or wrongly), and the plausibility that the presence of REC review raises ethical standards and protects research subjects, a cluster randomised trial of research with or without such review seems unlikely to be accepted. We are, therefore, some distance from answering this question through evidence and research itself.

If REC processes themselves contribute to research standards, which ones can be identified as effective, and which are redundant? How detailed does a review need to be? Whether a different cluster randomised trial, applying varying levels of review to the same project, would be acceptable is difficult to gauge, but it might be more palatable. However, we may have to accept that some issues cannot, as yet, be addressed directly; in this area we do not have answerable questions. “If you cannot attain to knowledge without torturing your dog, you must do without knowledge”, wrote George Bernard Shaw (1).

Robust evaluation of the consequences of REC processes, deliberations and decisions eludes us (2). There is much criticism of RECs/IRBs in the journals (one author recently went as far as to offer his readers a $50 reward if they could provide published evidence of RECs’ effectiveness), but while published evidence may question whether RECs meet their aims, it is anecdotal and predominantly written by aggrieved researchers. Systematic reviews that would provide a balanced picture are sadly lacking (and much needed).

Why is this so? Both pragmatic and methodological problems have hindered analysis. RECs have had a reputation for idiosyncrasy, each developing its own modus operandi once established, but over the last decade the UK landscape, for one, has changed. The establishment of the Central Office for Research Ethics Committees and then the National Research Ethics Service (NRES), the drafting of the Governance Arrangements for Research Ethics Committees (GAfREC) and the subsequent Standard Operating Procedures for RECs (SOPs) have established a unified process for RECs. The more recent web-based Integrated Research Application System, which provides a single data-entry portal for applicants, and the linked electronic Research Ethics Database, in which all applications, correspondence and decisions are recorded, have simplified application and eased the problems of data collection. Perhaps more importantly, NRES and RECs have developed a non-critical, supportive partnership in their quality assurance programme, on which outcome analysis could now be founded.

Methodological issues have also presented problems. REC deliberations, their decisions, and the remedies or suggestions they offer ultimately rest upon moral judgment – itself a balance of the personal values of REC members, professional guidance, ethical debate and the application of published evidence. Any decision resting on the REC’s judgment is therefore difficult to evaluate in a way that will command broad consensus.


So if RECs are expected to protect the rights, dignity, safety and wellbeing of research participants, what answerable questions can we pose?

Do RECs and NRES Protect Research Participants?

Given the seeming impossibility of conducting a randomised trial (research with no REC review versus research with REC review) and the questionable validity of historical comparisons, this question is difficult to answer. If we are to undertake this we will need to develop a ‘Delphi-like’ procedure to reach accepted consensus. Possible approaches might be:

  • Analysis of REC letters, along the lines of the SIMPSON project previously commissioned by NRES (3)
  • Direct comparison of the application first presented to the committee with the final, approved study

Questions to be asked might be:

  • Does review reduce unnecessary research – research asking a question that has already been answered?
  • Does it prevent research studying a treatment that has already been proven to be no better than, or worse than, current care?
  • Does it prevent participants being allocated to care that has already been proved ineffective?
  • Has review reduced the risk of the research?

Are there Lenient and Strict Committees (Hawks and Doves)?
The Research Ethics Database records all REC decisions and hence allows comparison of the types of decision different RECs make (colloquially, ‘hawks or doves’). There may be value in looking at outliers – those whose decision rates fall outside the normal range – but such a method gives little other information about outcome. NRES has undertaken this analysis but has yet to report on its feasibility and value.
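One way to make the ‘hawks and doves’ comparison concrete is a funnel-plot-style screen: compare each committee’s favourable-opinion rate with the pooled rate across all committees, and flag those falling outside binomial control limits. The sketch below is illustrative only – the committee names, counts and the two-standard-error threshold are assumptions for the example, not NRES practice or real Research Ethics Database output.

```python
from math import sqrt

def flag_outlier_recs(decisions, z_limit=2.0):
    """Flag committees whose favourable-opinion rate falls outside
    control limits around the pooled rate.

    decisions: dict mapping REC name -> (favourable, total) counts.
    Returns a list of (name, rate, z) for committees with |z| > z_limit.
    """
    pooled = (sum(f for f, _ in decisions.values())
              / sum(t for _, t in decisions.values()))
    outliers = []
    for name, (fav, total) in decisions.items():
        rate = fav / total
        # binomial standard error of a rate under the pooled proportion
        se = sqrt(pooled * (1 - pooled) / total)
        z = (rate - pooled) / se
        if abs(z) > z_limit:
            outliers.append((name, round(rate, 2), round(z, 2)))
    return outliers

# Invented decision counts for five committees
sample = {
    "REC A": (90, 100),   # 90% favourable - possible 'dove'
    "REC B": (70, 100),
    "REC C": (68, 95),
    "REC D": (72, 110),
    "REC E": (45, 100),   # 45% favourable - possible 'hawk'
}
print(flag_outlier_recs(sample))
```

On real data, small committees would need exact binomial limits rather than this normal approximation, and a flagged outlier would be a prompt for scrutiny, not a verdict.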

How do Local and Central Review Compare?
Many studies will be conducted in more than one site and reviewed both centrally and locally. Collection, comparison and analysis of these views would seem straightforward, and would offer the opportunity to see how different committees view the same project.

Do RECs Understand the Application?
This question is closer to an analysis of process than of outcome, and consequently is more easily answered, but it is nevertheless important to outcome: any ethical analysis based on a mistaken understanding of the project will most likely be flawed. Critical analysis of the project (getting a detailed picture or story, determining the facts of the matter, and establishing how it relates to, and may change, treatment) is the first step in ethical analysis. Quality assurance feedback from applicants could explore this, although the current method provides us with an unreliable, biased group. A structured programme could address this bias and give an indication of the frequency of any misunderstanding. Another approach would be an extension of the past SIMPSON project, which analysed letters sent from RECs to researchers after their study had been reviewed (3). That work established the feasibility of the method, and the structure for such work therefore exists; all that is required is modification of the analytical tool.

Are RECs Consistent in their Decisions, the Issues they Raise and the Remedies they Offer?
The decisions of RECs are more easily analysed than their deliberations, yet they may be of less concern than the issues raised and the remedies expected. This has been addressed by submitting applications to more than one committee and comparing the committees’ decisions, although within current procedures this is difficult to do without the REC knowing it is being studied. A further problem is that some inconsistency in ethical debate and REC conclusions may be, and probably can be, justified; no method has yet been defined to separate acceptable from unacceptable inconsistency. Current literature provides some answers to the first part of this question, but leaves the last two problems unaddressed.

Recognising and acknowledging these limitations, NRES has established a process of duplicate review in its Shared Ethical Debate (ShED). An application is sent to up to 20 committees, which are asked to review it within their routine agenda. Responses are collated for a workshop in which the issues are debated. A final report on differences, agreements and how the deliberations and decisions align with public guidance and published evidence is drawn up and re-circulated to RECs for comment; their comments are then collated by NRES.

One criticism is that the REC is not blind to the process, which imposes methodological limitations. Interestingly, it was noted at a recent workshop that several REC members would have no objection to a duplicate application being placed in their agenda without their knowledge, with ‘substitute researchers’ sent to their meeting. Practical constraints would probably mean we could only send the researcher to two or three RECs.
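Agreement across the committees taking part in a ShED round could, in principle, be quantified with a chance-corrected statistic such as Fleiss’ kappa, treating each application as a subject and each committee as a rater. The sketch below is a hypothetical illustration: the decision categories and counts are invented, and this is not a statistic the ShED reports described above currently use.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for agreement between committees.

    ratings: list of dicts, one per application, mapping a decision
    category -> number of committees returning that decision. Every
    application must be reviewed by the same number of committees.
    """
    n = len(ratings)
    m = sum(ratings[0].values())  # committees per application
    categories = sorted({c for r in ratings for c in r})
    # mean per-application observed agreement
    p_bar = sum(
        (sum(cnt * cnt for cnt in r.values()) - m) / (m * (m - 1))
        for r in ratings
    ) / n
    # chance agreement from the marginal category proportions
    p_e = sum(
        (sum(r.get(c, 0) for r in ratings) / (n * m)) ** 2
        for c in categories
    )
    return (p_bar - p_e) / (1 - p_e)

# Invented ShED-style data: 3 applications, each reviewed by 10 committees
shed = [
    {"favourable": 7, "provisional": 2, "unfavourable": 1},
    {"favourable": 3, "provisional": 6, "unfavourable": 1},
    {"favourable": 8, "provisional": 2},
]
print(round(fleiss_kappa(shed), 3))
```

A kappa near zero, as in this invented example, would indicate agreement little better than chance; a value approaching one indicates near-unanimity. It would not, of course, distinguish justified from unjustified inconsistency.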

Do RECs Reflect Published Guidance in their Decisions and Deliberations?
There are libraries of guidance from many different national and international bodies. These usually provide broad principles that require interpretation and application to the peculiarities of the study before the REC. Whether RECs adhere to these documents when making their decisions is therefore not straightforward. We currently explore this in our ShED.

Coleman and Bouësseau
Coleman and Bouësseau add further questions into the mix (2). First, they consider: does a review result in more research that is responsive to the local community’s self-identified needs? It would be difficult to get an overview, but a Delphi group could compare the submitted application with the final study, although it may have difficulty in determining the community’s needs.

Other key questions should also be considered. Does a review change participants’ subjective experiences in studies? Does a REC review improve participants’ understanding of the risks and potential benefits of studies, or change their attitudes about research? And does the process affect prospective participants’ decisions about whether to participate in research? The first question is difficult to address, as we have no control, but the second and third could be answered by public and participant interview, to establish their understanding of the workings of RECs and whether they would, or do, feel safer in a study that has been reviewed by a REC. It could also be explored in more formal research in which volunteers are asked to read the information as first presented in the application to the REC and in the version finally given a favourable opinion; analysis of their feedback might give a view on the level of reassurance they felt. The technique of user testing, already employed for drug package inserts and recently applied to the information sheets used in a Phase I study, could be followed.


If researchers and reviewers are to avoid Mary Warnock’s ‘public house discussion’, there is a clear need to ‘research research’ and develop an evidence base on which dialogue can be founded (4). There are, however, constraints upon us; resources are limited and consequently priorities need to be set (5). Considering these questions (and others) to identify those that are key and those that are most easily answered is one way forward. The National Research Ethics Service, in conjunction with the WHO, is therefore proposing to hold a meeting to examine the issues raised and possible solutions. We hope this will help us respond to the criticism of burdensome regulation in the UK, while also identifying how limited resources can best be spent in countries trying to establish capacity to review research.


1. Shaw GB, The Doctor’s Dilemma, Penguin, 1906

2. Coleman CH and Bouësseau MC, How do we know that research ethics committees are really working? The neglected role of outcomes research, BMC Medical Ethics 9: p6, 2008

3. Dixon-Woods M, Angell E, Ashcroft R and Bryman A, Written work: The social function of Research Ethics Committee letters, Social Science and Medicine, doi:10.1016/j.socscimed.2007.03.046, 2007

4. Warnock M, The Intelligent Person’s Guide to Ethics, Duckbacks, 2001

5. Davies H, Wells F and Czarkowski M, Standards for Research Ethics Committees, Journal of Medical Ethics 35: p382, 2009

Hugh Davies is a consultant paediatrician working at the Oxford Radcliffe Hospital and Research Ethics Advisor at the National Research Ethics Service in London, UK. After post graduate training in respiratory paediatric medicine, completing an MD in the application of radionuclide lung scanning in childhood pulmonary disease, he was appointed as a consultant paediatrician in St Mary’s and Central Middlesex Hospitals, London in 1989. In 2008 he moved to the Oxford Radcliffe Hospitals. From appointment he developed an interest in research ethics and took a place on the St Mary’s local research ethics committee. From 1994 to 1997 he was chairman of the Brent research ethics committee up until his appointment to the chair of the North Thames multi-centre research ethics committee. He held this until 2002 when he was seconded to the Central Office of Research Ethics Committees as ethics and training advisor. He is currently the research ethics advisor at the National Research Ethics Service providing support and training to RECs in the UK. He has also worked in Europe within the EU to provide similar training.