
A decision tool to guide the ethics review of a challenging breed of emerging genomic projects


Recent projects conducted by the International Cancer Genome Consortium (ICGC) have raised the important issue of distinguishing quality assurance (QA) activities from research in the context of genomics. Research was historically defined as a systematic effort to expand a shared body of knowledge, whereas QA was defined as an effort to ascertain whether a specific project met desired standards. However, the two categories increasingly overlap due to advances in bioinformatics and the shift toward open science. As few ethics review policies take these changes into account, it is often difficult to determine the appropriate level of review. Mislabeling can result in unnecessary burdens for the investigators or, conversely, in underestimation of the risks to participants. Therefore, it is important to develop a consistent method of selecting the review process for genomics and bioinformatics projects. This paper begins by discussing two case studies from the ICGC, followed by a literature review on the distinction between QA and research and a comparative analysis of ethics review policies from Canada, the United States, the United Kingdom, and Australia. These results are synthesized into a novel two-step decision tool for researchers and policymakers, which uses traditional criteria to sort clearly defined activities while requiring the use of actual risk levels to decide more complex cases.


This paper was composed by members of the Ethics and Policy Committee (EPC) of the International Cancer Genome Consortium (ICGC), a large-scale genomics consortium that coordinates 74 research projects across 17 jurisdictions to investigate over 50 cancer types and sub-types.1, 2 The ICGC also conducts a variety of benchmarking initiatives, which are increasingly used for standardization in genomic research. However, two recent activities of this type posed a dilemma for the EPC.

The first activity, the Somatic Variant Calling Pipeline Benchmark, involved sharing cancer patients’ whole-genome sequence data and associated metadata with participating centers in order to improve the data generation and analysis methods. Each group analyzed the data separately and reported results back to a central analysis team. Some centers expressed interest in publishing their findings,3 which at the time raised some concern that the benchmarking activity might be better categorized as research.

The second activity, the DREAM Somatic Mutation Calling Challenge, was a global competition meant to define standard methods for identifying cancer-induced mutations in whole-genome sequencing data. Data representing pairs of normal and tumor genomes were stored on a cloud computing repository and results were also returned to the cloud. Multiple metrics, including balanced accuracy, specificity, and sensitivity, were used to determine the challenge’s winner. The global results were then published in a collaborative scientific paper.4
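To make the scoring concrete, the sketch below illustrates how metrics such as sensitivity, specificity, and balanced accuracy are typically computed when a variant caller's output is compared against a truth set. This is our illustration of standard definitions, not code or thresholds from the challenge itself; the example counts are invented.

```python
# Illustrative sketch (not from the challenge itself): standard accuracy
# metrics used to score a somatic variant caller against a truth set.

def benchmark_metrics(tp, fp, tn, fn):
    """Compute benchmark metrics from confusion-matrix counts.

    tp: true somatic variants the caller detected
    fp: sites the caller flagged that are not in the truth set
    tn: non-variant sites the caller correctly left uncalled
    fn: true somatic variants the caller missed
    """
    sensitivity = tp / (tp + fn)            # fraction of true variants found
    specificity = tn / (tn + fp)            # fraction of non-variant sites left uncalled
    balanced_accuracy = (sensitivity + specificity) / 2
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "balanced_accuracy": balanced_accuracy,
    }

# Hypothetical example: a caller that finds 90 of 100 true mutations,
# with 20 false calls among 1000 non-variant sites.
print(benchmark_metrics(tp=90, fp=20, tn=980, fn=10))
```

Balanced accuracy averages sensitivity and specificity, which keeps a caller from scoring well simply by calling very few (or very many) variants when the two classes are of very different sizes.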

Access to the ICGC data is overseen by the Data Access Compliance Office (DACO) in order to maintain robust privacy standards. Both activities were initially presented to DACO as quality assurance (QA) projects, but shared some common features of research, such as confidentiality risks and the documentation of novel bioinformatics techniques. After considerable debate, the EPC decided that for data protection purposes both activities should be considered human subjects research. The EPC also recognized that ethics oversight should not pose a ‘burden’ so large as to dissuade investigators from having their simple QA activities reviewed.5 Therefore, a strict controlled access procedure was applied: the DACO committee reviewed each applicant’s credentials, collaborators, research plan, ethics approval, and potential risks to data privacy in the same manner as conventional research applications before granting them access to the ICGC data. However, given the limitations of an international consortium, the choice to obtain local ethics approval from an IRB or adopt additional oversight procedures was left up to each participant in accordance with their jurisdiction’s laws and policies. This experience convinced the EPC that a more systematic approach was needed to help investigators and policymakers assess complex projects and select the appropriate oversight mechanisms.

In theory, QA requires less rigorous ethics review because it differs substantially from traditional biomedical research.6 Research is generally defined as ‘an undertaking intended to extend knowledge through a disciplined inquiry or systematic investigation’,7 whereas QA is the ‘systematic monitoring and evaluation of the various aspects of a project, service or facility to ensure that standards of quality are being met’.8 International standards indicate that best practices in biobanking and genetic data collection should include QA mechanisms.9, 10, 11, 12, 13 These are especially important in light of current emphases on personalized medicine and open science, which have led to massive amounts of genomic data being stored in biobanks and shared with the scientific community.14 Although such data sharing generally involves only minor psychosocial risks, in practice nobody can be aware of all future challenges raised by these projects.11, 15, 16, 17, 18 They can pose different risks from clinical research, such as privacy infringements and future sharing of the data in unanticipated ways.19, 20 Indeed, research and QA undertaken at the same biobank often have similar risks of confidentiality breach.11, 21

Given these considerations, it can be unclear how to review QA projects that share similarities with research, especially in the context of data-intensive activities. Both can begin with a clear question or problem, use systematic methods of data gathering, generate questions to inform future research, and use the same large data sets.22, 23 QA misclassified as research may lack rigor and fail to comply with requirements regarding study design, participants’ rights, and other applicable laws and policies.24, 25, 26 In addition to unnecessary bureaucratic delays, this sort of confusion has caused ‘criticism by regulatory authorities, rejection of manuscripts by journals for lack of informed consent procedures, and feelings of considerable frustration’.27 Conversely, classifying complex projects as QA risks offering too little protection for participants. As such, classifying ambiguous projects poses an increasingly important dilemma, one which has historically been hampered by a lack of established guidelines.21

Our goal therefore involved three steps: a literature review to determine the relevant factors for differentiating QA and research in genomics; an international comparison of policy frameworks and the review pathways they allocate to each type of project; and the integration of those results in the development of an effective decision-making tool to help researchers and policymakers alike classify genomics projects into the appropriate ethics review streams.

Scholarship on the distinction between research and QA

Our literature review began with a keyword search for ‘quality assurance’ or ‘quality improvement’ and ‘research’ using the tools Google Scholar and Web of Science. Further papers were identified from their references through a snowballing method until no further relevant papers were found. One challenge for this review was that ‘quality assurance’, ‘quality improvement’, ‘quality activities’, ‘quality studies’, and even ‘audit’ are often used interchangeably to describe any kind of routine knowledge-generating process used to mitigate risky practices.15, 17, 28 Some papers compare these categories (see, for example, refs 6, 26, 29, 30, 31, 32), whereas others group them as functionally equivalent (see, for example, refs 28, 33, 34). We describe them all more generally as QA, as many of the distinctions referred to clinical care and were not relevant to our purposes. Indeed, nearly all of the sources comparing QA and research do so in a clinical context,35 including significant documents like the United States’ Common Rule.36 This may be attributable to the fact that much of the existing literature is over a decade old, predating the present importance of genomics. After excluding criteria considered to be outdated or inapplicable to the ICGC, like sample selection, rigor of analysis, and choice to publish, we were left with six primary criteria.


Intent and generalizability

QA and research are often defined by their ‘intent’ or ‘purpose’, based on whether they aim at producing local improvements or generalizable knowledge.19, 22, 23, 25, 28, 31, 32, 34, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50 According to the Council for International Organizations of Medical Sciences, ‘the defining attribute of research is that it is designed to produce new generalizable knowledge as distinct from knowledge pertaining to a particular individual or program’.51 Indeed, several sources describe generalizability as the consensus criterion.32, 38, 42 A similar formulation asks who benefits directly from the activity: the institution or system, the participants, or society in general.6, 19, 23, 25, 37, 52, 53 However, generalizability on its own fails to consider important factors like risks, methodologies or the types and sources of the data collected.26, 42 Furthermore, intent is unenforceable and subjective, and can vary over the course of a single project.6, 26, 45

Risks and benefits

Many authors classify projects based on their levels of or increases in risks, burdens, and chances of failure, with research being more risky than QA.6, 20, 23, 31, 32, 35, 37, 44, 45, 47, 53, 54, 55 A related distinction asks whether risks are limited to privacy or confidentiality breaches or extend to broader emotional, psychological, social, financial, or physical harms.19, 20, 22, 37, 45, 56


Novelty

Although several authors suggest considering whether an activity departs from standard practice,52, 54, 55 a similar criterion asks whether it builds on previous research, such as by comparing performance to best practices, or whether it investigates a new question or area, such as by testing a new technology.22, 23, 28, 32, 37, 39, 43, 44, 54 QA is often described as occurring in response to a problem, whereas research is more forward thinking.32, 34, 37, 43, 45 Knowledge gained through research is used to confirm standards and best practices, whereas QA measures practices against those standards once they are established.22, 23, 24, 28, 33, 37, 44, 47, 54, 57, 58

Speed of implementation

Another common criterion asks whether the project’s results are applied immediately, in the case of QA, or after a delay, in the case of research.19, 22, 32 Reasons for this disparity include differences in methodology and intended outcome as well as the fact that research is typically disseminated to the broader scientific community for critique and replication before being put into action.19, 23, 40 In comparison, QA often functions as an ongoing process, consisting of rapid small-scale cycles in a sort of feedback loop.25, 26, 46, 48, 59

Theory and method

Research tends to generate or test a theory using the scientific method and a formal and explicit hypothesis.19, 20, 22, 25, 32, 33, 34, 37, 38, 39, 40, 43, 44, 45, 46, 47, 48 Yet, although QA methods are seen as more flexible and iterative,19, 23, 37, 43, 48 QA increasingly uses cohorts and other methods typical of research.27, 46 GWAS and other types of large-scale genomic research can also lack clear hypotheses, which can make this particular criterion challenging to use in this context.

Involvement of researchers

QA and research may be conducted by different people:39, 57 QA data collection is more typically performed by those with routine access to the data, who may be investigating their own practices,25, 32, 45, 47 whereas research data are collected by trained investigators whose involvement may end at publication.32, 34, 39, 44 Research is also more likely to have external funding, along with the associated requirements and risks of bias.20, 39, 44

Country-specific approaches to the review of QA projects

In order to identify potential review pathways for QA, we decided to compare models from various countries, since genomics projects often involve collaboration on an international level. We selected Canada, the United States, the UK, and Australia since each had previously considered the QA-research issue and their relevant ethics documents were available in English.


Canada

The Tri-Council Policy Statement (TCPS2) is the national guide mandating ethics review and approval for all human subjects research funded by Canadian agencies. It distinguishes research that requires IRB review from ‘non-research’ activities, including QA ‘used exclusively for assessment, management, or improvement purposes’.7 Although the TCPS2 does not describe a clear assessment process, Canada’s Interagency Advisory Panel on Research Ethics suggests that concerned researchers consult an IRB or base their decision on study generalizability, intent to publish, or other elements considered essential to QA.8 In Canada, QA projects presenting minimal risk generally receive either an exemption or expedited review. In an expedited review, the IRB chair and another delegated member review the proposal and can recommend a full review, if appropriate.60 Administrative approval is used if QA proposals ‘raise ethical issues that would benefit from careful consideration by an individual or body capable of providing independent guidance’ other than an IRB.7 This allows QA to be evaluated by department heads or other institutional representatives, but its specifics are not described in the TCPS2.

According to the Alberta-based network A pRoject Ethics Community Consensus Initiative (ARECCI), projects with human participants should first be sorted by ‘primary purpose’. Research aims to ‘contribute to the growing body of knowledge regarding health and/or health systems that is generally accessible through standard search procedures of academic literature’, whereas QA aims to ‘assess or improve the quality of a treatment, service or program’.61 Each project is then screened by risk level: QA projects posing more than minimal risk are vetted through the full IRB process, whereas the rest receive an exemption or expedited review. Although ARECCI does not define minimal risk, the TCPS2 describes it as when ‘the probability and magnitude of possible harms implied by participation in the research is no greater than those encountered by participants in those aspects of their everyday life that relate to the research’.7

United States

Regulations in the United States also focus on the purpose of an activity. The National Bioethics Advisory Commission describes research as ‘undertaken to test a new, modified or untested intervention, service or program’, whereas QA assesses the quality of an established program.62 The Common Rule defines research as ‘a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge’,36 which does not necessarily exclude QA. It further requires that federally funded or affiliated projects with human participants undergo IRB review. However, the IRB may use an expedited procedure for research with ‘no more than minimal risk’.36 Although ‘quality assurance methodologies’ are also eligible for expedited review,63 a more precise definition of QA is not provided in the Common Rule. According to the Office for Human Research Protections (OHRP), QA is not subject to Common Rule regulations if its purpose is limited to delivering, improving, or collecting performance data for health care. However, some types of QA may be considered research if, for instance, they are also meant to establish proof of efficacy. Intent to publish is not considered a decisive factor in classifying these activities.64

United Kingdom

In the UK, research falling outside the National Health Service (NHS) generally requires review by an independent or institutional IRB, whereas research involving patients and users of the NHS or other services from the Department of Health requires review by an NHS IRB. Health Research Authority (HRA) guidelines stipulate that NHS IRBs should not consider activities like clinical audit, service evaluation, or public health surveillance as research.65 Audits and service evaluation projects that impose minimal additional risks are defined as QA and subjected to administrative approval. Recognizing that QA may overlap with research, the HRA also provides clear guidance for distinguishing them using four key determinants: intent; treatment/service; allocation; and randomization. The intent of research is ‘to find out what you should be doing’, whereas the intent of QA is to find out ‘whether it is working’.66 The HRA emphasizes that this criterion depends on the primary purpose, and even provides an online tool to help make the distinction.67 QA projects that do not require the involvement of an NHS IRB may be reviewed by research ethics committees from universities or other institutions. Although IRB members may be consulted during this process, those performing the QA are responsible for considering ethical issues themselves.65


Australia

Australia’s 1999 National Statement on Ethical Conduct in Human Research has been revised to include guidance for review of low-risk projects. It acknowledges that ‘there is a great deal of uncertainty about the appropriate levels of governance for such activity’, that review pathways are unclear, and that processes meant for research may be too onerous to apply to QA.68 Indeed, Australia’s Human Research Ethics Committee (HREC) review process for QA activities has been criticized as overly time consuming.69 Although activities involving ‘more than low risk’ require review by an HREC,68 QA is described mostly as having negligible or low risks.69 ‘Negligible risk’ projects have no foreseeable risks greater than inconvenience and ‘use existing collections of data or records that contain only non-identifiable data’; these may be exempted from review.68 ‘Low-risk’ projects may cause foreseeable discomfort68 and do not use non-identifiable existing data. The draft document recommends that these be reviewed by non-HREC-level review bodies, which include department heads, departmental committees, delegated review groups that report to an HREC, or HREC subcommittees.69


Comparative analysis

The laws and policies that guide ethical review and oversight of human research in Canada, Australia, the UK, and the United States are consistent on the need for a nuanced, tailored approach to the review of QA projects in the context of research. Depending on the project, they recommend one or more of four broad pathways: full review; exemption; expedited review; and administrative approval. Unless otherwise exempted by national laws and policies, it was agreed that clear ‘research on human participants’ should undergo ethics review by an IRB or similar committee authorized to review medical research projects. The IRB should have the authority to approve, reject, require modifications, terminate, or suspend approval of any proposed or ongoing research.

Although each country also presented criteria for the exemption of certain studies, requiring instead only a waiver or a letter of exemption, there was no consensus among them on those criteria. Full exemption generally depended on the types of data collected and the levels of risk to participants. Even in such cases, all countries recommend careful consideration of ethical issues through administrative approval by a departmental committee chair, or minimally by the investigators before and during the activity. It was not seen as problematic for the investigators’ peers to review QA projects given the lack of significant ethical issues.

Although projects using QA methodologies that pose minimal risk were generally seen as fit for expedited review, there was little agreement on the types of QA that should qualify for a less onerous process or how they should be evaluated. Given the increasing variety of sophisticated QA in biomedical research, there is a need for greater coherence between the approaches described. Furthermore, these laws and policies are mostly directed towards the clinical context. More thought should be given to QA in the context of non-clinical research.

The decision tool

These reviews of the literature and policy environment were followed by a discussion among the members of the EPC. On the basis of our research, we decided that the most difficult borderline cases ought to be resolved using the specific criterion of risk levels. Although international consortia like the ICGC have very limited influence over review at the national level, they can insist that member projects meet minimal ethics requirements and can use a controlled access strategy like DACO to ensure greater oversight over data sharing activities with privacy risks. With these considerations in mind, we developed a tool that can help researchers and policymakers determine how activities should be reviewed locally while resolving some of the difficulties posed by differing standards of project classification (Figure 1).

Figure 1

A decision tool for assessing proposed QA projects in a research context. A decision tree to facilitate the classification of activities undertaken in genomics projects. The colored bar represents the spectrum between QA and research, with activities of uncertain type falling in the middle. The six criteria determining where a project falls are listed above the bar. Below each extreme of the spectrum, the flowchart indicates the appropriate level of ethics review, whereas the middle section proceeds to a second level of review. This step uses the criterion of risk level to divide projects between the use of exemption, expedited review, or administrative approval and the use of an independent ethics review.

The first step of the tool uses the six criteria discussed above (generalizability, risk, novelty, speed of implementation, methodology, and scope of involvement) to identify which projects clearly represent QA, which are clearly research, and which share characteristics of both. Generally, those which compare interventions between groups, extend biomedical knowledge, incur some risks to the participants, and produce generalizable results should fall on the research side. Those which aim at measuring and more immediately improving local performance with respect to a standard of quality, and have minimal risks and burdens, should fall on the QA side. Projects that exhibit overlap are then sorted based on whether they have low foreseeable risk and a favorable risk-benefit ratio (as in ref. 6). In such cases, the amount of ethics oversight should correspond to the degree of actual, rather than hypothetical, risk to participants,70 independent of the project’s other characteristics. As complex projects of this type are more properly envisaged on a spectrum, this sorting is not meant to suggest that only binary classification is possible; it is only meant to indicate the appropriate level of ethics review.
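The two-step logic described above can be sketched in code. This is our illustration of the decision flow, not an implementation endorsed by the EPC: the boolean criteria names and the numeric thresholds used to mark a project as 'clearly' QA or research are our assumptions, since the paper treats the criteria qualitatively along a spectrum rather than as a score.

```python
# A minimal sketch (our illustration, not the EPC's tool) of the two-step
# decision flow in Figure 1. Step 1 checks the six criteria; step 2 resolves
# borderline cases by actual rather than hypothetical risk.

from dataclasses import dataclass

@dataclass
class Project:
    generalizable_results: bool   # aims at generalizable knowledge
    more_than_minimal_risk: bool  # risks and burdens beyond minimal
    novel_question: bool          # tests a new question or technology
    delayed_implementation: bool  # results applied only after dissemination
    formal_hypothesis: bool       # scientific method, explicit hypothesis
    external_investigators: bool  # conducted by outside, funded researchers
    low_actual_risk: bool         # step 2: low foreseeable risk and a
                                  # favorable risk-benefit ratio

def review_pathway(p: Project) -> str:
    # Step 1: count how many criteria point toward research.
    research_score = sum([
        p.generalizable_results, p.more_than_minimal_risk, p.novel_question,
        p.delayed_implementation, p.formal_hypothesis, p.external_investigators,
    ])
    if research_score >= 5:   # clearly research (threshold is illustrative)
        return "full IRB review"
    if research_score <= 1:   # clearly QA (threshold is illustrative)
        return "exemption, expedited review, or administrative approval"
    # Step 2: hybrid project -> classify by actual risk, not by intent.
    if p.low_actual_risk:
        return "exemption, expedited review, or administrative approval"
    return "independent ethics review"
```

A hybrid project with, say, generalizable results and a novel question but low actual risk would still route to the lighter pathway, which is the point of the second step: for borderline cases, oversight tracks actual risk rather than the project's other characteristics.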

Given the lack of policy uniformity on this topic, parts of this framework may be difficult to reconcile with some countries’ existing laws or ethics guidelines. In these countries, and in those with insufficiently developed ethics frameworks, our tool and the broad approach it proposes could be used to help guide policy reform, promote normalization, and achieve greater international coherence on QA review.


Conclusion

Although many projects are clearly QA and should not be subjected to unnecessarily lengthy ethical review, others share aspects of both QA and human subjects research. It can be difficult to place these activities clearly in either category, and categorizing them arbitrarily can result in a suboptimal level of protection or in excessively burdensome regulation for projects with little likelihood of harm.55 As members of the ICGC’s multidisciplinary EPC, we were recently confronted with several projects of this type. On the basis of an international comparative policy review and a scoping literature review, we decided to devise an assessment tool that could satisfy members of the consortium while helping international research groups, academic institutions and researchers meet their responsibilities for ethical review and oversight of these activities. Our proposed framework uses a two-step approach that enables investigators and policymakers to classify complex projects as research or QA using the traditional characteristics of both as well as the evaluation of the actual risk posed by more hybrid projects. Although projects will not necessarily need to be evaluated by a formally designated IRB, they will always require independent ethical debate regarding the risks involved and the protection of human participants.


References

1. International Cancer Genome Consortium, Hudson TJ, Anderson W, et al: International network of cancer genome projects. Nature 2010; 464: 993–998.
2. International Cancer Genome Consortium. 2012 (updated 21 January 2015). Available at (accessed on 25 May 2015).
3. Alioto TS, Buchhalter I, Derdiak S, et al: A comprehensive assessment of somatic mutation detection in cancer using whole-genome sequencing. Nat Commun 2015; 6: 10001.
4. Ewing AD, Houlahan KE, Hu Y, et al: Combining tumor genome simulation with crowdsourcing to benchmark somatic single-nucleotide-variant detection. Nat Methods 2015; 12: 623–630.
5. Kass N, Pronovost PJ, Sugarman J, Goeschel CA, Lubomski LH, Faden R: Controversy and quality improvement: lingering questions about ethics, oversight, and patient safety research. Jt Comm J Qual Patient Saf 2008; 34: 349–353.
6. Casarett D, Karlawish JHT, Sugarman J: Determining when quality improvement initiatives should be considered research: proposed criteria and potential implications. JAMA 2000; 283: 2275–2280.
7. Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, Social Sciences and Humanities Research Council of Canada: Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans. Ottawa, Canada: CIHR, NSERC & SSHRC, 2014.
8. Interagency Advisory Panel on Research Ethics: Definition of quality assurance studies, performance review and research. Ottawa, Canada: Government of Canada, 2003.
9. International Organization for Standardization/International Electrotechnical Commission: ISO/IEC 17025:2005 General requirements for the competence of testing and calibration laboratories. Geneva, Switzerland: ISO/IEC, 2005.
10. Organisation for Economic Co-operation and Development: OECD Best Practice Guidelines for Biological Resource Centres. Paris, France: OECD, 2007.
11. Organisation for Economic Co-operation and Development: OECD Guidelines on Human Biobanks and Genetic Research Databases. Paris, France: OECD, 2009.
12. International Organization for Standardization: ISO 9001:2008 Quality management systems - requirements. Geneva, Switzerland: ISO, 2008.
13. International Society for Biological and Environmental Repositories: 2012 best practices for repositories: collection, storage, retrieval, and distribution of biological materials for research. Biopreserv Biobank 2012; 10: 79–161.
14. Joly Y, Dove ES, Knoppers BM, Bobrow M, Chalmers D: Data sharing in the post-genomic world: the experience of the International Cancer Genome Consortium (ICGC) Data Access Compliance Office (DACO). PLoS Comput Biol 2012; 8: e1002549.
15. National Health & Medical Research Council: When Does Quality Assurance in Health Care Require Independent Ethical Review? Canberra, Australia: Australian Government, 2003.
16. MacDonald S, Mardis ER, Ota D, Watson MA, Pfeifer JD, Green JM: Comprehensive genomic studies: emerging regulatory, strategic, and quality assurance challenges for biorepositories. Am J Clin Pathol 2012; 138: 31–41.
17. Rhodes R, Azzouni J, Baumrin SB, Benkov K, Blaser MJ, Brenner B, et al: De minimis risk: a proposal for a new category of research risk. Am J Bioeth 2011; 11: 1–7.
18. Cretin S, Keeler EB, Lynn J, Batalden PB, Berwick DM, Bisognano M: Should patients in quality-improvement activities have the same protections as participants in research studies? JAMA 2000; 284: 1786–1788.
19. Ogrinc G, Nelson WA, Adams SM, O’Hara AE: An instrument to differentiate between clinical research and quality improvement. IRB 2013; 35: 1–8.
20. Lynn J: When does quality improvement count as research? Human subject protection and theories of knowledge. Qual Saf Health Care 2004; 13: 67–70.
21. Pritchard IA: Searching for ‘research involving human subjects’: what is examined? What is exempt? What is exasperating? IRB 2001; 23: 5–13.
22. Closs SJ, Cheater FM: Audit or research—what is the difference? J Clin Nurs 1996; 5: 249–256.
23. Kring DL: Research and quality improvement: different processes, different evidence. Medsurg Nurs 2008; 17: 162–169.
24. Black N: The relationship between evaluative research and audit. J Public Health Med 1992; 14: 361–366.
25. Newhouse RP, Pettit JC, Poe S, Rocco L: The slippery slope: differentiating between quality improvement and research. J Nurs Adm 2006; 36: 211–219.
26. Kofke WA, Rie MA: Research ethics and law of healthcare system quality improvement: the conflict of cost containment and quality. Crit Care Med 2003; 31 (3 Suppl): S143–S152.
27. Grady C: Quality improvement and ethical oversight. Ann Intern Med 2007; 146: 680–681.
28. Fain JA: What is the relationship between the continuous quality improvement process and the research process? Diabetes Educ 2005; 31: 461.
29. Eastes L: Quality assurance vs quality improvement: the new challenge in health care. J Air Med Transp 1991; 10: 5–6.
30. Fainter J: Quality assurance ≠ quality improvement. J Qual Assur 1991; 13: 8–9, 36.
31. Amdur RJ, Speers M, Bankert EA: Identifying intent: is this project research? In: Bankert EA, Amdur RJ (eds): Institutional Review Board: Management and Function, 2nd edn. London, UK: Jones and Bartlett, 2006; 101–105.
32. Hill SL, Small N: Differentiating between research, audit and quality improvement: governance implications. Clin Gov 2006; 11: 98–107.
33. Martin PA: Is it research? Appl Nurs Res 1995; 8: 199–201.
34. Vogelsang J: Quantitative research versus quality assurance, quality improvement, total quality management, and continuous quality improvement. J Perianesth Nurs 1999; 14: 78–81.
35. Lynn J, Baily MA, Bottrell M, et al: The ethics of using quality improvement methods in health care. Ann Intern Med 2007; 146: 666–673.
36. Federal policy for the protection of human subjects, 45 CFR Part 46, 1991.
37. Reinhardt AC, Ray LN: Differentiating quality improvement from research. Appl Nurs Res 2003; 16: 2–8.
38. Nerenz DR, Stoltz PK, Jordan J: Quality improvement and the need for IRB review. Qual Manag Health Care 2003; 12: 159–170.
39. Rix G, Cutting K: Clinical audit, the case for ethical scrutiny? Int J Health Care Qual Assur 1996; 9: 18–20.
40. Koschnitzke L, McCraken SC, Pranulis MF: Ethical considerations for quality assurance versus scientific research. West J Nurs Res 1992; 14: 392–396.
41. Dokholyan RS, Muhlbaier LH, Falletta JM, et al: Regulatory and ethical considerations for linking clinical and administrative databases. Am Heart J 2009; 157: 971–982.
42. Amoroso PJ, Middaugh JP: Research vs public health practice: when does a study require IRB review? Prev Med 2003; 36: 250–253.
43. Beyea SC, Nicoll LH: Is it research or quality improvement? AORN J 1998; 68: 117–119.
44. Paxton R, Whitty P, Zaatar A, Fairbarn A, Lothian J: Research, audit and quality improvement. Int J Health Care Qual Assur Inc Leadersh Health Serv 2006; 19: 105–111.

  45. 45

    Johnson N, Vermeulen L, Smith KM : A survey of academic medical centers to distinguish between quality improvement and research activities. Qual Manag Health Care 2006; 15: 215–220.

    Article  Google Scholar 

  46. 46

    Bellin E, Dubler NN : The quality improvement-research divide and the need for external oversight. Am J Pulic Health 2001; 91: 1512–1517.

    CAS  Article  Google Scholar 

  47. 47

    Bull AR : Audit and research: complementary but distinct. Ann R Coll Surg Engl. 1993; 75: 308–311.

    CAS  PubMed  PubMed Central  Google Scholar 

  48. 48

    Saunders MJ : Director of quality improvement research. J Nurs Care Qual. 1993; 7: 39–43.

    CAS  Article  Google Scholar 

  49. 49

    Snider Jr DE, Stroup DF : Defining research when it comes to public health. Public Health Rep. 1997; 112: 29–32.

    PubMed  PubMed Central  Google Scholar 

  50. 50

    Perneger TV : Why we need ethical oversight of quality improvement projects. Int J Qual Health Care. 2004; 16: 343–344.

    Article  Google Scholar 

  51. 51

    Council for International Organizations of Medical Sciences International Guidelines For Ethical Review of Epidemiological Studies. Geneva, Switzerland: CIOMS, 1991.

  52. 52

    Brett A, Grodin M : Ethical aspects of human experimentation in health services research. JAMA 1991; 265: 1854–1857.

    CAS  Article  Google Scholar 

  53. 53

    Truog RD, Robinson W, Randolph A, Morris A : Is informed consent always necessary for randomized, controlled trials? N Engl J Med 1999; 340: 804–807.

    CAS  Article  Google Scholar 

  54. 54

    Thurston NE, Watson LA, Reimer MA : Research or quality improvement? Making the decision. J Nurs Adm. 1993; 23: 46–49.

    CAS  Article  Google Scholar 

  55. 55

    Lo B, Groman M : Oversight of quality improvement: focusing on benefits and risks. Arch Intern Med. 2003; 163: 1481–1486.

    Article  Google Scholar 

  56. 56

    Parker M, Ashcroft R, Wilkie AOM, Kent A : Ethical review of research into rare genetic disorders. BMJ 2004; 329: 288–289.

    CAS  Article  Google Scholar 

  57. 57

    Wilson A, Grimshaw G, Baker R, Thompson J : Differentiating between audit and research: postal survey of health authorities’ views. BMJ 1999; 319: 1235.

    CAS  Article  Google Scholar 

  58. 58

    Diamond LH, Kliger AS, Goldman RD, Palevsky PM : Commentary: quality improvement projects: how do we protect patients’ rights? Am J Med Qual 2004; 19: 25–27.

    Article  Google Scholar 

  59. 59

    Smith R : Audit and research. BMJ 1992; 305: 905–906.

    CAS  Article  Google Scholar 

  60. 60

    Health Canada Research Ethics Board Ethics Review of Research Involving Humans. Ottawa, Canada: Health Canada, 2009.

  61. 61

    Alberta Research Ethics Community Consensus Initiative Protecting People While Increasing Knowledge: Recommendations for a Province-wide Approach to Ethics Review of Knowledge-generating Projects (Research, Program Evaluation, and Quality Improvement) in Healthcare. Edmonton: ARECCI, 2005.

  62. 62

    National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Bethesda, MD, USA: National Institutes of Health, 1979.

  63. 63

    Department of Health and Human Services: Categories of research that may be reviewed by the Institutional Review Board (IRB) through an expedited review procedure. Fed Regist 1998; 63: 60364–60367.

    Google Scholar 

  64. 64

    Quality Improvement FAQs. US Department of Health & Human Services; 2011. Available at (accessed on 3 December 2014).

  65. 65

    Department of Health Governance arrangements for research ethics committees: a harmonized edition. London, UK: UK Health Departments, 2011.

  66. 66

    Health Research Authority Defining Research: NRES Guidance to Help You Decide If Your Project Requires Review by a Research Ethics Committee. London, UK: National Health Service, 2009.

  67. 67

    Is my study research? Medical Research Council & National Health Service; 2014. Available at (accessed on 3 December 2014).

  68. 68

    National Health & Medical Research Council National Statement on Ethical Conduct in Human Research. Canberra, Australia: Australian Government, 2007.

  69. 69

    National Health & Medical Research Council Using The National Statement: Ethical Review of Quality Improvement Activities in Health Services. Canberra, Australia: Australian Government, 2012.

  70. 70

    Goldman B, Dixon LB, Adler DA, et al: Rational protection of subjects in research and quality improvement activities. Psychiatr Serv. 2010; 61: 180–183.

    Article  Google Scholar 


Acknowledgements
We thank David Parry, Pilar Nicolas and Francis Ouellette for their valuable contributions to this article. We also thank the Ontario Institute for Cancer Research for their funding of the ICGC project, as well as the Fonds de Recherche du Québec—Santé (Grant # 24463), the Réseau de Médecine Génétique Appliqué, and the Canadian Institutes of Health Research (Grant # TGF-96109) for their financial support.

Author information

Corresponding author

Correspondence to Yann Joly.

Ethics declarations

Competing interests

The authors declare no conflict of interest.


About this article


Cite this article

Joly, Y., So, D., Osien, G. et al. A decision tool to guide the ethics review of a challenging breed of emerging genomic projects. Eur J Hum Genet 24, 1099–1103 (2016).

