INTRODUCTION

Genomic medicine is an emerging field that promises greater diagnostic accuracy, targeted treatment options, and, ultimately, improved patient outcomes. However, realizing this promise depends on skilled health professionals. Quality, evidence-based education that demonstrably improves health professionals’ competence in genomics is essential to ensure that genomics is used appropriately in patient care and, ultimately, that the promise of genomic medicine translates into improved patient outcomes.1,2,3,4,5

Genetics/genomics education initiatives—including programs, learning activities, and resources, collectively referred to as “education interventions”6—have been developed to improve health professionals’ genomic literacy. Recent reviews of these efforts7,8 highlight the difficulty of judging individual and overall quality, owing to inconsistent reporting of interventions and their evaluations. This inconsistency limits not only replication and comparison7 but also the evidence for which educational strategies are most effective, and in which settings. Consistent descriptions would also help those developing genomics education interventions to learn from previous efforts, and would support a stronger evidence base for effective genomics education, for example by enabling meta-analysis.

Standards clarify and define technical terms to provide a consistent way of describing a craft or profession.9 Reporting standards foster transparent communication and, in turn, facilitate appraisal and comparison of value and quality, systematic review, and replication.9 Widely adopted standards have improved the appraisal and quality of studies reporting diagnostic accuracy (STARD),10 randomized trials (CONSORT),11 observational studies (STROBE),12 qualitative studies (COREQ),13 systematic reviews and meta-analyses (PRISMA),14 and program evaluation.9 As genomic testing expands into routine clinical practice, reporting standards are emerging to help establish consistent and equitable provision of genomic medicine.15,16,17 Despite an identified need,7 there are currently no standards for reporting genetics or genomics education interventions or their evaluation.

We have addressed the lack of standards in genomics education by developing a recommended minimum set of information to support consistent descriptions of the design, development, and delivery of genomics education interventions for health professionals, as well as their evaluation. To ensure international relevance, a co-design process18 was undertaken with international education and evaluation experts to develop these reporting standards.

MATERIALS AND METHODS

The consensus methodology applied was adapted from those used in the development of reporting standards for diagnostic accuracy studies,10 randomized controlled trials,11 observational epidemiological studies,12 systematic reviews,14 educational interventions for evidence-based practice,19 and educational program evaluation.20

Figure 1 summarizes the three-stage approach we employed to develop and refine the reporting standards, described below. This process was overseen by a Project Working Group comprising C.G., H.J., M. Martyn, A.N., and M.J.

Fig. 1: Study design when developing reporting standards for genomics education and evaluation.

There were three stages: (1) a literature review to inform (2) drafting of the reporting standards, followed by (3) a modified Delphi process to review and refine the standards.

Stage 1: literature review

A review of the genetics/genomics education literature was conducted to generate an initial bank of items for inclusion in the standards. Here we define “item” as any metric or term used to describe an educational intervention or its evaluation. We applied the principles of scoping studies21 to map the relevant literature both broadly and in detail. Publications identified in our previous review of genetics/genomics education for physicians22 formed the basis of the subsequent in-depth literature review; forward and backward citation searches of these nine publications6,23,24,25,26,27,28,29,30 identified other relevant publications on continuing genetics/genomics education. Papers were eligible for inclusion if they (1) were published in English, (2) were published between January 2000 and May 2019, and (3) included comprehensive descriptions of continuing education interventions. This process identified papers on both genetics/genomics education and continuing education for medical professionals more broadly. Publications were iteratively reviewed using inductive content analysis to identify items that were reported when describing the education intervention itself and/or any evaluation. An item was classified as “reported” in a publication only if the term was both used and detailed. For example, if a publication noted that key stakeholders were consulted, the item was counted as reported only if the publication also specified who those stakeholders were.
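The eligibility screen above amounts to three filters applied to each candidate publication. As an illustration only (the actual screening was performed manually; the record fields and entries below are hypothetical), the logic can be sketched as:

```python
# Illustrative sketch of the stage 1 eligibility screen.
# Screening in the study was manual; these records are hypothetical.
records = [
    {"title": "Genomics CME pilot", "language": "English",
     "year": 2015, "comprehensive_description": True},
    {"title": "Kurzbericht Genetik", "language": "German",
     "year": 2012, "comprehensive_description": True},
    {"title": "Early genetics module", "language": "English",
     "year": 1998, "comprehensive_description": False},
]

def eligible(record: dict) -> bool:
    """Apply the three inclusion criteria from the stage 1 review."""
    return (
        record["language"] == "English"          # criterion 1
        and 2000 <= record["year"] <= 2019       # criterion 2 (Jan 2000 to May 2019)
        and record["comprehensive_description"]  # criterion 3
    )

included = [r["title"] for r in records if eligible(r)]
print(included)  # -> ['Genomics CME pilot']
```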

Stage 2: draft reporting standards

To produce the first draft, all items identified from the literature review were collated and cross-referenced with a published program logic model for genomics education interventions, which describes how interventions are intended to achieve desired outcomes and can be used to plan their development and evaluation.6 The logic model encompasses the four key components of the program cycle—planning, development, delivery, and outcomes—with goals, stakeholder engagement, and evaluation spanning all stages. The draft standards were then also compared with general evaluation standards.9,31,32 This process ensured that the reporting standards would align with accepted practice.

Stage 3: modified Delphi review

The draft reporting standards were refined using a modified, reactive Delphi process.18 A Delphi cohort was purposively recruited to include international experts with a breadth of expertise in evaluation, medical or genetics/genomics education research or delivery, and implementation science or knowledge translation, as well as representatives from education committees of relevant national and international human/medical genetics societies. Invitations were sent to experts who had participated in the development of the genomics education program logic model (n = 24)6 and to key collaborators within the Project Working Group members’ professional networks (n = 18). An open invitation was sent to members of the Genomics Education Network of Australia (n = 116 at the time).6

Data collection

Over three rounds of Delphi review, each version of the standards was distributed by email with accompanying feedback templates that included a mix of closed and open questions. Experts were also invited to edit items directly in each version of the standards, using tracked changes. Iterative refinements were made based on cohort feedback after each round. The purpose of round 1 was to obtain consensus on relevance and to review the clarity and comprehensiveness of items through comments and direct edits. In round 2, experts were asked to comment on the utility and design of that version of the standards overall. In round 3, experts were asked to identify items that could be considered optional and to confirm whether they were satisfied with the proposed set of reporting standards. As each round had a specific purpose, all experts were invited to participate in all rounds regardless of whether they had completed previous rounds.

Data analysis

Microsoft Excel 2013 was used to collate, clean, and analyze the data. In round 1, relevance was determined using descriptive statistics with a threshold of 80% consensus; this threshold was based on previous Delphi consensus levels ranging from 70% to 80%.33,34,35 Open-text comments were coded using inductive content analysis36 to identify common themes in the feedback on specific items. Comments indicating lack of relevance were coded as “do not retain”; “retain with modification” was applied if a comment suggested modification or lower priority. Throughout all rounds, suggestions for additional items were reviewed and items were amended, merged, or added accordingly. The Project Working Group reviewed the reporting standards after each round and resolved any conflicts, drawing on collective knowledge and experience in program evaluation and in developing, delivering, and evaluating genomics education interventions.
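Although the analysis was conducted in Microsoft Excel, the round 1 consensus rule can be expressed compactly. The sketch below, with hypothetical item names and ratings, illustrates how the 80% threshold separates items to retain from items requiring review of open-text comments:

```python
# Illustrative sketch of the round 1 consensus rule (the study used
# Microsoft Excel 2013; item names and ratings here are hypothetical).
CONSENSUS_THRESHOLD = 0.80  # 80% agreement, per the study design

# 1 = expert rated the item as relevant, 0 = not relevant
ratings = {
    "target audience": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],  # 100% agreement
    "program logic":   [1, 1, 1, 0, 1, 0, 1, 0, 1, 1],  # 70% agreement
}

for item, votes in ratings.items():
    agreement = sum(votes) / len(votes)
    if agreement >= CONSENSUS_THRESHOLD:
        decision = "retain"
    else:
        # Below threshold: open-text comments determine whether the item
        # is "retained with modification" or "not retained".
        decision = "review open-text comments"
    print(f"{item}: {agreement:.0%} -> {decision}")
```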

RESULTS

Literature review

The literature review identified 21 publications for detailed analysis: 13 original papers and eight systematic reviews describing genetics/genomics education for health professionals or nongenomic continuing medical education (CME; Fig. 2, Supplementary Table S1). A total of 25 items were identified: 15 describing the intervention, nine related to evaluation, and one regarding stakeholders (Supplementary Table S2). All 21 publications reported on three education intervention items (target audience, mode of delivery, and content) and three evaluation items (type of evaluation, study design, and outcome measures); however, the level of detail provided for study design and for other items varied, and only one publication reported on key stakeholders. Few original papers described a theoretical framework for either the educational intervention (n = 3) or its evaluation (n = 3), and fewer still described the use of a program logic approach (n = 1).

Fig. 2: Results of stage 1: literature review of genetics/genomics education, continuing medical education, and evaluation.

The review built on a literature review conducted in July 2018 that focused on continuing education for internal medicine physicians.22 The search terms for that review can be accessed in the Supplementary Materials of Crellin et al.22

Draft reporting standards

Analysis of the items identified from the literature review highlighted that some items related to the development and delivery of genomics education interventions and others to their evaluation. While ideally these would be reported together, we decided to retain this distinction to encourage use of the standards as broadly and as early as possible in reporting. The Project Working Group mapped the 25 items against the program logic model for genomics education6 and evaluation standards.9,31,32 Three items in the education standards were duplicated into the evaluation standards (objective, program logic approach, key stakeholders/partners; Supplementary Table S3), three further items were added to the education standards (learning objectives, required prior knowledge or skills, structure of the intervention) and two to the evaluation standards (evaluation questions, evaluation subtype), and one item in the evaluation standards was split into two (“comparator group” as distinct from “study design”). This resulted in a total of 34 items (25 + 3 + 3 + 2 + 1) in version 1 of the reporting standards (Fig. 3): 19 related to education interventions and 15 to evaluation.

Fig. 3: Results of stages 2 and 3 to draft, review, and refine the reporting standards.

Each panel describes the focus and outcomes of each stage or round of Delphi review, showing the number of items reviewed, amended, merged, or added for each version.

Delphi cohort

Of the 158 people invited, 38 agreed to participate in the Delphi review (Supplementary Table S4), with the highest response rate from direct invitations through professional networks (15/18 invited). The Delphi cohort comprised experts from 11 countries across five continents, with expertise in education (n = 31; teaching at university, continuing professional development [CPD], or a combination), evaluation (n = 4), clinical practice (n = 25), policy (n = 2), and/or implementation science/knowledge translation (n = 2). The group included members of the education committees of four human/medical genetics societies in North America, Europe, and Australasia, as well as individuals who collectively participate in 16 national or international genetics/genomics initiatives.

Delphi review

The results of the Delphi process and outcomes of each round of review are presented in Fig. 3. Supplementary materials provide a summary of the evolution of all items during the Delphi process (Supplementary Table S3), with an illustrative example provided (Supplementary Table S5).

Round 1: relevance, clarity, and comprehensiveness

Thirty-six of the 38 experts (95%) completed round 1. Twenty-seven items reached the 80% threshold and were retained. Initially, there was no consensus on whether to retain a further seven items (key stakeholders, program logic, development process, theoretical framework, audience size, evaluation subtype, and evaluation program logic approach), but after reviewing open-text comments, these items were categorized as “retain with modification.” Based on six comments, the item “evaluation subtype” was merged into the item “type(s) of evaluation.”

Experts collectively made 54 suggestions for new items: 16 were incorporated as modifications to seven existing items; 22 were collated into eight new items (developer/host characteristics, access, assessment, evaluation plan, revision strategy, recruitment, and dissemination strategy, for both the education intervention and the evaluation); and 11 were considered by the Project Working Group to be sufficiently addressed by existing items. The remaining five suggestions related to general information expected in any publication (human research ethics approval, impact of the intervention, results or impact of any evaluation, and any limitations of the study) and so were not included at this stage (see footnote to Table 1); this was consistent with approaches used for similar standards.11 Version 2 of the reporting standards therefore comprised a total of 41 items.

Table 1 Standards for consistent reporting of genomics education intervention development, delivery, and evaluationa.

Round 2: utility and design

Twenty-nine experts (76%) completed round 2. Eleven of these experts (38%) approved version 2, suggesting no further changes. Overall, comments received in round 2 indicated that the draft reporting standards had reasonably high utility. Specific comments related to relevance (n = 18), clarity (n = 25), merging or splitting items (n = 2), or modifying wording or format (n = 36). In response, the Project Working Group merged ten items into seven existing items and added three new items (qualification, year of delivery, and evaluation results), resulting in 34 items for version 3.

Round 3: final review and optional items

Thirty-one experts (82%) completed round 3, and 13 (42%) approved version 3 with no further changes. In this round the Delphi cohort was primarily asked to reflect on whether items should be essential or optional. Fifteen experts (48%) collectively indicated that 18 items could be optional. For 14 of these items, only one or two reviewers per item suggested recategorization as “optional,” with all other experts recommending “essential”; these items were therefore retained as essential. The remaining four items were rated as “optional” by three or more experts: qualification (n = 5 comments), needs assessment (n = 3), theoretical framework for the intervention (n = 7), and theoretical framework for the evaluation (n = 6). After review by the Project Working Group, these four items were merged into three existing items (see Supplementary Tables S3 and S5). One expert also suggested adding an item (evaluator), which the Project Working Group approved. Lastly, the five additional items suggested but excluded in round 1 were reviewed, with two (“impact of intervention” and “results or impact of any evaluation data”) incorporated into the items “evaluation results” and “evaluation impact.” The final standards (version 4) therefore included 31 items.
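As a cross-check of the counts reported above (derived from the figures in the text, not an additional analysis), the item tallies reconcile across versions as follows:

$$
\begin{aligned}
\text{version 1: } & 34 \text{ items}\\
\text{version 2: } & 34 - 1\ (\text{merged}) + 8\ (\text{new}) = 41\\
\text{version 3: } & 41 - 10\ (\text{merged}) + 3\ (\text{new}) = 34\\
\text{version 4: } & 34 - 4\ (\text{merged}) + 1\ (\text{evaluator}) = 31
\end{aligned}
$$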

Overview of Reporting Item Standards for Education and its Evaluation in Genomics (RISE2 Genomics)

The final 31 standards are provided in Table 1, with a simplified checklist available to download in Supplementary Table S6. The standards provide guidance on reporting the development and delivery of genomics education interventions (18 standards), the evaluation of those interventions (12 standards), and how stakeholders are identified and engaged (one standard). We have deliberately duplicated three items across the education intervention and evaluation parts (aim, approach, and funding) so that the two parts of the standards can be used either independently or jointly. In response to requests from the Delphi cohort, more detailed descriptions of each item, and in some cases examples, are also provided. For example, while some authors may be familiar with items related to theoretical frameworks or program logic approaches and consider them best practice, these concepts may be outside the scope of some education providers’ professional qualifications and experience. Some terms may also vary across settings; for example, a genomics education intervention may have “learning objectives” or “learning outcomes,” or be mapped to professional “competencies” or “capabilities.”

DISCUSSION

Through a rigorous consensus process with international experts, we have developed reporting standards to guide the preparation and review of reports and manuscripts on genomics education interventions and their evaluation. The aim of these reporting standards is to support the development of an evidence base for genomics education by facilitating both transparency and appraisal of interventions. As education interventions may be reported separately from their evaluation (which may or may not be reported at all), we present the standards in two parts to encourage their use early in the program life cycle. These standards build on previously published reporting standards for health professional education19,37 by placing much greater emphasis on evaluation, in addition to the education intervention itself. Our inclusion of evaluation experts, as well as experts from different countries, in our Delphi cohort further differentiates these standards from others. We designed the standards to be accessible to all professionals involved in the genomics education of health professionals. Ideally, genomics educators would have some formal training; however, many professionals who develop, deliver, and/or evaluate genomics education interventions have no formal training in education or evaluation.38 Consequently, they may find the terminology of more generic education reporting standards impenetrable. Encouragingly, standards published very recently on the EQUATOR Network site (www.equator-network.org) used more accessible language.37

In response to the growing emphasis on genomic workforce literacy,1 genomics education interventions, and their evaluation, may be commissioned and subsequently reported in the gray literature (e.g., stakeholder and technical reports). Not all reporting items will be applicable to all interventions or appropriate to include in all types of literature. To encourage use across a broad range of contexts, we have not classified any items as “essential,” nor do we require authors to provide page citations for items when submitting manuscripts. Nevertheless, authors are encouraged to provide reasoning if an item is not reported, to maintain comprehensiveness and consistency within the broader genomics education literature. These are necessary points of difference from some existing reporting standards, where all items are essential10,11,12,13,14 or page number citations are required,10,11,13,14 but they align with the Joint Committee on Standards for Educational Evaluation’s “open standards,” which guide rather than prescribe.9

A further point of difference relates to quality. While all standards aim to encourage “quality of reporting” through transparency and disclosure of valid and reliable information, some, such as the evaluation standards reviewed,9,31,32 also aim to improve the quality of design and delivery. That was not the aim of these reporting standards for genomics education and evaluation. Consequently, the few suggestions from the Delphi cohort that were prescriptive of quality were not incorporated; for example, “strive to achieve maximum learning outcomes using Bloom’s taxonomy39” (learning objectives), “use mixed methods for evaluation” (data collection modality), and “collect evaluation data at all stages of the intervention development, delivery and impact” (data collection timing). We envisage that the reporting standards will be used as part of a suite of tools, including a program logic model6 and an evaluation framework (in development), that will collectively encourage and support quality practice and provide an evidence base for genomics education.

Consistent with a previous review,7 we found that the use of theoretical frameworks and program logic approaches is not widely reported in the literature on genetics or genomics education; this was also an area of divergent views within our Delphi cohort. Theoretical frameworks can help examine assumptions or limits in the design of an education intervention or evaluation study and connect it to the broader literature, and program logic models can be used to describe how an education intervention is intended to work, thus helping to define hypotheses to be tested in the evaluation.40 However, some members of our Delphi cohort were not familiar with these approaches and felt the terminology could be a barrier to uptake of the standards. Items relating to theoretical frameworks or program logic models were therefore deliberately merged into the items “approach to development” and “approach to evaluation.”

A strength of this study was the combined expertise and diverse backgrounds of the Delphi cohort, which contributed to a robust development process. The cohort included those directly involved in genomics education of health professionals, those with general health professional education expertise, and, importantly, those with a background in evaluation. This is a unique feature of our standards compared with previous health professional education reporting standards, which were developed without evaluation expertise.19,37 As intended, items identified in the initial literature review prompted the Delphi cohort to identify additional items at each stage of the review process. The Delphi process revealed some confusion about the purpose of the reporting standards, and some participants identified potential uses beyond reporting (e.g., to guide development of education interventions or to assist policy makers who commission evaluations). While synchronous review methods, such as face-to-face workshops,6,10,11,12,14,20 might have identified and resolved this confusion quickly, the asynchronous nature of contribution to a Delphi process meant that queries were documented for in-depth discussion and resolution, clarifying the purpose and utility of the standards.

There is no single governing organization to disseminate or mandate the use of educational reporting standards for genomics. However, there are several international networks and consortia in genetics/genomics education, such as the Global Genomic Medicine Collaborative (G2MC; https://g2mc.org), the Global Genomics Nursing Alliance (G2NA; https://g2na.org/), and the Genetics/Genomics Competency Center (G2C2; https://genomicseducation.net). As with other groups that developed reporting standards in the absence of a governing body,10,11,12,13,14,19,37 our Delphi cohort members are now important early adopters or “champions” who can facilitate wider dissemination and adoption of the standards throughout their professional networks, societies, and communities. We purposively invited leaders in the development, provision, and evaluation of genetics/genomics education, within their countries and internationally, to join our expert Delphi cohort. Some experts noted that the draft reporting standards were already prompting them to reflect on their practice, demonstrating the value of the standards across all stages of planning, development, delivery, evaluation, and reporting.

The RISE2 Genomics standards have the potential to transform the evidence base of genomics education, making it more transparent, consistent, and comprehensive. In turn, this will enable more robust, high-quality genomics education interventions and evaluations across a range of settings. Although our methodology focused on genomics education for health professionals, the resulting standards appear sufficiently generic to be used in settings beyond this, such as genomics education for patients, communities, and the public, or health professional education more generally. As the standards are adopted and applied, iterations may be necessary. The standards are certainly timely: genomics education efforts are increasing globally, an evidence base is imperative, and early adoption of these standards will greatly strengthen the ability of educators to identify effective education strategies in the future.