INTRODUCTION

Genomic medicine and its impact on care are actively evolving, offering the promise of improved and more reliable health care [1]. In the United States, the American College of Medical Genetics and Genomics (ACMG) lists 1,278 genetic clinics that provide genetic services, including specialty clinics, hospitals, and cancer centers [2]. The genetic testing utilized by these institutions can be applied to more than 15,000 conditions. Currently, more than 34,000 clinical genetic tests are offered for various purposes, including screening, diagnosis, and therapeutic management, and these numbers are growing [3,4,5,6]. Genetic test reports are the primary means of communicating test information and results from the testing lab to hospitals and clinics.

Genetic testing labs utilize computational systems to interpret results, compose final test reports, and return these to their clients. However, the returned information sometimes has limited computational availability once it reaches hospitals and clinics, a problem that is particularly common for the interpretation sections. In many cases, labs send genetic test reports as PDFs [7] or scanned images, thereby limiting the primary and secondary use of valuable information for clinical decision support (CDS) systems and clinical genetic research.

Health-care (HC) interoperability standards offer a common language for representing and communicating medical information between labs and hospitals [8]. Many US hospitals have electronic health records (EHRs) that incorporate interoperability standards to support a wide range of clinical activities [9]. Moreover, health-care data interoperability is considered a national goal in the United States and has been evaluated in many clinical settings [10, 11]. Although genetic information is part of the national interoperability roadmap, the perspective of genetic testing labs in that setting has been underresearched.

We interviewed staff associated with US-based genetic testing labs to identify their perspectives on adopting HC interoperability standards within their laboratory information management systems (LIMS). We asked specific questions and analyzed the answers regarding the expected benefits, challenges, and motivations for implementing HC interoperability standards. Another part of the study examined the interoperability standards already implemented, the processes of test report generation, and the communication between the labs and their clients; those results are reported in a separate publication [12]. The results of the current study may help inform decision makers, LIMS vendors, and genetic testing labs regarding the adoption of interoperability standards within related LIMS.

MATERIALS AND METHODS

Throughout this paper, we use the word “standards” to refer to HC interoperability standards, unless otherwise specified.

We employed a qualitative approach using applied thematic analysis [13, 14] with semistructured interviews and a discussion with a panel of content experts to further explore and validate the themes. Qualitative methods have been used to study health information exchange (HIE) in various clinical settings [15, 16]. This approach is well suited to subjects with little prior study, where the perspectives, experiences, and attitudes of stakeholders need to be explored [17, 18]. The following sections describe the steps we followed in chronological order.

Review of labs and study invitation

A review of US-based genetic testing labs was conducted to characterize the market landscape and identify candidates for the interviews. All US-based labs listed in the National Center for Biotechnology Information–Genetic Testing Registry (NCBI-GTR) [6] were retrieved and reviewed to identify their business descriptions, such as university-affiliated, hospital-based, or commercial. Additional labs were added to the list through an Internet search. The lab review was completed between October and December 2018.

We reviewed the content of the website listed in each lab’s NCBI-GTR entry to determine whether the lab currently performed clinical genetic testing or research testing only. The business description was characterized, and contact persons were identified. The initial business descriptions were categorized as university-affiliated, hospital-based, commercial, or reference labs. The descriptions were further extended during the lab review to include blood banks, registries, governmental labs, nonprofit organizations, nonuniversity research organizations, and health systems. A given lab may have more than one business description; for example, a lab may be both university-affiliated and hospital-based. Reference labs were identified according to the labs’ self-descriptions. If two or more labs were affiliated with the same organization or were considered units within a general lab, they were described individually based on their NCBI-GTR entries.

Labs meeting at least one of the following criteria were excluded from the list of candidate participating labs:

  • Research and development-oriented labs with no clinical services provided

  • No available webpage for the lab

  • No services or Clinical Laboratory Improvement Amendments (CLIA) certification, according to NCBI-GTR

  • No longer in operation

  • The NCBI-GTR entry listed a consortium, registry, clinical trial, or research project

  • Focused on paternity testing or direct-to-consumer genetic tests

  • Duplicate entry for the same lab

The business descriptions, compiled list, and exclusion criteria were discussed and reviewed periodically by the study team.

Interviews

To optimize use of the interview time for exploring themes, a preinterview survey was developed to capture discrete information about the interviewees and their labs. The preinterview survey collected information about the interviewee, the lab’s business description, the information systems implemented for authoring and communicating genetic test reports, and the standards used, if any. The preinterview survey was administered through the University of Utah REDCap platform [19, 20]. Branching logic was used to avoid asking irrelevant questions based on responses to previous questions; a simplified sketch of this skip-pattern behavior follows.
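
As a minimal illustration of such branching (hypothetical field names; REDCap itself expresses skip patterns declaratively in its own branching-logic syntax rather than in code), the following sketch mirrors the behavior of showing standards-related questions only when a prior answer indicates that standards are used:

    # Illustrative only: hypothetical field names, not the actual survey instrument.
    def fields_to_show(responses):
        """Return the preinterview survey fields to display, given prior answers."""
        fields = ["interviewee_role", "lab_business_description", "uses_standards"]
        # Skip the standards-detail questions when the lab reports using no standards.
        if responses.get("uses_standards") == "yes":
            fields += ["standards_implemented", "standards_usage_scope"]
        return fields

    print(fields_to_show({"uses_standards": "yes"}))  # includes the detail questions
    print(fields_to_show({"uses_standards": "no"}))   # detail questions are skipped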

The semistructured interview questions included customized sections to confirm and elaborate on the preinterview responses and on the process of report authoring and subsequent communication of the reports to hospitals and clinics. The interviewee was also asked to identify and rank the top three benefits, challenges, motivations, and lessons learned concerning standards implementation. The interview format encouraged discussion, elicited additional information, and supported the development and exploration of emergent themes relevant to the subject. The structured interview script (Supplemental Material) provides the exact wording used to ask each question, e.g., “Please mention and rank the top three benefits that may be realized by your lab from using biomedical informatics interoperability standards.”

Study invitations were sent to the identified labs’ representatives or, if no contact person was available on the website, through the website’s direct-messaging option. If the contact person agreed to participate, an individualized invitation was sent requesting completion of the preinterview survey using REDCap [19, 20]. All of the interviews were conducted by video conference or telephone. The preinterview survey and the generic semistructured interview script are provided in the Supplemental Material.

Thematic analysis

Coding

We followed a theoretical thematic analysis approach, in which the study questions guided the coding process [13, 14]. The interviewer transcribed and de-identified all of the interviews and imported them into ATLAS.ti version 8 for Windows and Mac (Berlin, Germany) [21]. Two coders applied an open-coding technique to the interviews, coding only segments relevant to the research questions. The two initial coders coded each interview independently and then met face to face to discuss and reach consensus on all flagged content and associated codes. Audio recordings were consulted when needed, and S.M.H. followed the progress to ensure the validity and consistency of codes and adjudicated conflicts. Codes were developed iteratively, both while coding parts of the same interview and while coding different interviews; the iterative process included removing, adding, merging, and renaming codes. Notes were taken in ATLAS.ti [21], and descriptions were added to non-self-explanatory codes, or to differentiate similar codes, to ensure consistent coding across interviews. The coders defined a coding scheme (Supplemental Material) and strategy to ensure consistency in coding and to support future use.

The initial semistructured interview questions covered the categories of benefits, challenges, motivations, and lessons learned. Interviewees were also encouraged to expound on their answers. Accordingly, some codes were identified within answers to noncorresponding questions. For example, some codes relevant to challenges were identified and labeled within the answer to the question regarding benefits.

Some “benefits” and “motivations” codes overlapped, because the expected “benefit” of implementing standards may also be considered the “motivation” for implementing those standards. For example, “regulatory requirements” may be considered a motivation to implement standards, but at the same time, meeting those requirements may be considered a direct benefit by some stakeholders, i.e., they consider “meeting regulatory requirements” a benefit in itself. To clarify this, we followed a transparent coding scheme in which “benefits” were defined as positive direct results of implementing standards for patients, health-care workers, researchers, and information system specialists, such as “reduce ambiguity and errors.” In contrast, “motivations” were defined as factors that encourage implementation, such as “providing financial incentives.” If “benefits” codes can be considered the “pulling” factors for implementing standards, then “motivation” codes are the “pushing” ones.

Identifying themes and supporting literature

After the interview coding was completed, the final codes were reviewed, duplicates were merged, and descriptions were added as needed to explicitly define each code. A total of 294 codes were identified. Codes were then grouped into themes through an iterative process of review and modification. Themes were further categorized and ranked according to the related study question. We identified themes from the interview transcripts in their entirety, not just from the answers to the specific questions about benefits, challenges, and other categories. However, the ranking was based on the number of interviewees who mentioned a corresponding theme within their answers to the corresponding specific question (e.g., the benefits question), i.e., the theme frequency. If two or more themes had the same frequency, the ranking was based on how the interviewees ranked these themes relative to other themes of the same category (i.e., benefits, challenges, motivations, and lessons learned), as sketched below.
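
As a minimal sketch of this ranking step (hypothetical data, not the actual analysis code; the tie-break shown here, mean interviewee-assigned rank, is one plausible reading of the rule described above), themes within a category could be ordered as follows:

    from collections import defaultdict

    # Hypothetical records: (interviewee, theme, rank the interviewee assigned),
    # restricted to answers for a single category (e.g., the benefits question).
    # Each interviewee is assumed to mention a given theme at most once.
    mentions = [
        ("lab01", "data availability and accessibility", 1),
        ("lab02", "data availability and accessibility", 2),
        ("lab02", "reduce ambiguity and errors", 1),
        ("lab03", "reduce ambiguity and errors", 3),
    ]

    frequency = defaultdict(int)  # number of interviewees mentioning each theme
    rank_sum = defaultdict(int)   # sum of the ranks interviewees assigned

    for interviewee, theme, rank in mentions:
        frequency[theme] += 1
        rank_sum[theme] += rank

    # Order by frequency (descending); break ties by mean assigned rank
    # (ascending, so themes ranked nearer the top come first).
    ordered = sorted(frequency, key=lambda t: (-frequency[t], rank_sum[t] / frequency[t]))
    print(ordered)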

A targeted literature review was conducted to associate the identified themes with previously reported benefits, challenges, and motivations of interoperability standards and HIE in general, which do not necessarily focus on genetic test reporting.

Panel discussion

The study team reviewed the results, which were then presented to a panel of subject matter experts for discussion to ensure their validity. The panel consisted of seven experts from four states (California, Ohio, Pennsylvania, and Utah) with more than 130 years of cumulative experience. The panelists’ expertise included clinical genetics, genetic testing, genetic counseling, genomic medicine, clinical informatics, laboratory information systems, biomedical informatics (BMI), interoperability standards, and HIE. Panelist perspectives included academic, laboratory, and delivery-system viewpoints.

RESULTS

Laboratory identification and review

Three hundred two US-based genetic testing labs were identified for potential participation. Two hundred fifty-eight labs were retrieved from NCBI-GTR, while the remainder were found through online searching. Two hundred seven labs were CLIA certified. Table 1 lists the lab categories and their frequencies.

Table 1 Lab frequencies according to business categories or affiliations (a lab may belong to one or more categories).

Participating labs

Application of the exclusion criteria eliminated 92 labs from further participation, leaving 210 eligible labs. Invitations were extended to 188 of the 210 labs, and 8 invitees opted out of the study.

Thirteen of the remaining 180 labs completed the preinterview survey, and 10 labs participated in the remote interviews. One of the interviewees was available for only 30 minutes, which covered the first part of the interview but was not sufficient to cover the questions related to standards, benefits, implementation challenges, and motivations. Nine labs completed the full interview (a 5% response rate). The interviewed labs were affiliated with companies, universities, and research organizations that provide either general testing services or specialized genetic tests. The labs were located in Alabama, California, Massachusetts, Nevada, Ohio, Utah, Vermont, and Washington. Most of the labs had information systems that adopted one or more of the following standards [12]:

  • Logical Observation Identifiers Names and Codes (LOINC)

  • Systematized Nomenclature of Medicine–Clinical Terms (SNOMED-CT)

  • International Classification of Diseases, Ninth and Tenth Revisions, Clinical Modification (ICD-9 and ICD-10-CM)

  • Current Procedural Terminology (CPT)

  • Health Level Seven Version 2.x (HL7 V2.x) and HL7 V3

  • HL7 Fast Healthcare Interoperability Resources (FHIR)

  • Human Phenotype Ontology (HPO)

  • RxNorm

The companion part of our study sheds light on these standards and their usage, the information system models, and other system characteristics of the participating labs [12]. In particular, that part of the study found that, of the ten interviewed labs, one had no current implementation of standards but aimed to implement some of them in the future; another was implementing FHIR to support medical applications under development; and the remaining labs were implementing other standards, such as LOINC and SNOMED-CT, to describe the performed tests and diagnoses (but not specific genetic information such as identified single-nucleotide variants [SNVs]). In the majority of cases, the genetic lab test reports were delivered to hospitals as scanned images or PDF files [12].
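
For orientation, the sketch below shows what a computable, standards-based representation of a single reported variant might look like, in contrast to a scanned image or PDF: a simplified HL7 FHIR Observation whose components carry LOINC-coded fields. This is an illustrative construction only, not drawn from any participating lab’s system; the specific codes, system URIs, and gene/variant values are examples and should be verified against current LOINC releases and the HL7 genomics reporting guidance.

    import json

    # Simplified, illustrative FHIR Observation for one reported variant.
    # Codes, display strings, and system URIs are examples only and should be
    # checked against current LOINC and HL7 genomics reporting guidance.
    variant_observation = {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org", "code": "69548-4",
                             "display": "Genetic variant assessment"}]},
        "valueCodeableConcept": {"coding": [{"system": "http://loinc.org",
                                             "code": "LA9633-4", "display": "Present"}]},
        "component": [
            {   # gene studied, identified with an HGNC code (illustrative)
                "code": {"coding": [{"system": "http://loinc.org", "code": "48018-6",
                                     "display": "Gene studied [ID]"}]},
                "valueCodeableConcept": {"coding": [{"system": "http://www.genenames.org",
                                                     "code": "HGNC:1100", "display": "BRCA1"}]},
            },
            {   # DNA change expressed in HGVS nomenclature (illustrative variant)
                "code": {"coding": [{"system": "http://loinc.org", "code": "48004-6",
                                     "display": "DNA change (c.HGVS)"}]},
                "valueCodeableConcept": {"coding": [{"system": "http://varnomen.hgvs.org",
                                                     "code": "NM_007294.4:c.68_69del"}]},
            },
        ],
    }

    # JSON such as this can travel through a FHIR API or accompany the
    # human-readable report, making the variant available to CDS systems.
    print(json.dumps(variant_observation, indent=2))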

Excluding the interview that was terminated prematurely, the interview lengths ranged from 38 to 82 minutes, with a mean duration of 56 minutes. The interviewed labs were a mixture of university-affiliated, hospital-based, reference, and commercial labs, as categorized in Table 1. The interviewees’ roles included one lab president, three directors, two professors, two PhD scientists, and one senior bioinformatics scientist. The Supplemental Material contains detailed information about the interviewees’ roles, backgrounds, experience, and tenure at their current labs.

Identified themes

Of 295 codes, we identified 24 themes within 5 domains, categorized as follows: expected benefits (6 themes), challenges (5 themes), motivations (5 themes), future directions (4 themes), and lessons learned (4 themes), all relating to the adoption of HC interoperability standards by genetic testing labs. Tables 2–5 describe the identified themes. The Supplemental Material includes detailed theme tables that provide illustrative quotations and references to relevant literature.

Table 2 Benefits of implementing biomedical informatics interoperability standards.
Table 3 Challenges of implementing biomedical informatics interoperability standards.
Table 4 Motivations for implementing biomedical informatics interoperability standards.
Table 5 Labs’ future directions for laboratory information management systems (LIMS) and lessons learned from implementing BMI interoperability standards and informatics solutions.

Some interviewees clearly stated that increased data availability and accessibility is one of the main expected benefits of implementing interoperability standards. For example, one interviewee said, “we obviously want it to go back to our EHR electronically and seamlessly.” However, achieving this would be hard because of the complexity of these standards, as another interviewee stated: “It does take some manual work and some expertise in order to apply a standard like LOINC. With HL7, I would say, again, it requires building interfaces, requires fairly specialized expertise and so you have staff that are very experienced with HL7 interfaces. So, it’s not plug-and-play, it requires a lot of work.” In addition, another interviewee commented that even when two organizations implement the same standard, their methods and outputs may not be equivalent: “they can both be implementing the same standard but they are doing it very, very differently.” (See Tables 2 and 3 for additional themes and Tables A and B in the Supplemental Material for additional quotations.)

DISCUSSION

We identified the key benefits, challenges, and motivations for the implementation of interoperability standards as perceived by representatives of genetic testing labs. Interviewees repeatedly mentioned factors that may be slowing the adoption of interoperability standards by genetic testing labs, including a lack of motivation (i.e., a lack of practical demand from their customers, the hospitals and clinics), high cost combined with a lack of financial incentives (e.g., of the kind provided under the Health Information Technology for Economic and Clinical Health [HITECH] Act [22] and Meaningful Use), and a lack of regulatory and legal requirements to implement a specific set of standards for genetic test reporting.

Among all of the motivations, interviewees reported that increased clinical demand might be the most crucial for advancing the adoption of standards by genetic testing labs. Some interviewees clearly highlighted this point; e.g., one stated, “so that is why we tend to be very cautious about implementing them, because we want to wait to see that our customers really need it first, and not just because the informatics community tells us that they are a good thing.” The four main stakeholder groups, i.e., the labs, LIMS vendors, standards development organizations (SDOs), and regulators, share the goal of improving patient care. More clinical pilot projects, similar to Sync for Genes [23], may need to be conducted to clearly demonstrate the value of standards in health care and to guide clinical genetic data interoperability. Interviewees reported that financial incentives for the use of explicit standards tied to improved patient outcomes could also encourage labs to provide their data in standards-based formats. The HITECH Act has been reported to have had some success in improving general clinical interoperability over time [22, 24], and a similar approach focused on genetic data could potentially help.

Using standards to represent and transfer the content of genetic lab test reports may be more straightforward than in some other domains, e.g., anatomic pathology, physical exam, or clinical visit notes, because computational tools are heavily used in the analysis and interpretation of genetic results. However, it may be more challenging for tailored reports and for genetic results with unclear interpretations, e.g., variants of unknown significance or rare variants lacking substantial evidence of effect, because the interpretation section must then communicate the uncertainty associated with the result. In addition, genetic testing may range from single-variant detection to exome or genome sequencing. Thus, clinical information systems need to be able to receive and process data of differing nature and volume to avoid development and operational challenges [25].

Some of those interviewed mentioned that clinicians prefer a tailored report over standard formats based on reusable templates, because a customized, individualized report reflects the unique characteristics of the patient’s case. For many genetic tests, however, such as variant detection for carrier screening, pharmacogenomics, or familial variant confirmation, templated reports may be adequate. An important point to consider while standardizing genetic test reports is their volume, their complexity, and how the included information is intended to be used, e.g., to be read by clinicians or to be computationally available to informatics tools (e.g., CDS systems). Ideally, the information would be provided in both forms so that it is both clinician-friendly and computable.

Although the consistent sharing and use of genetic information is part of the HealthIT.gov milestone “A learning health system enabled by nationwide interoperability,” targeted for 2021–2024 [10], it is essential to consider in more detail which data should be standardized first and which standards should be used. From a business point of view, the global genetic testing market was valued at $13.1 billion in 2019, with North America accounting for 58% of the market [26], and is projected to reach $29 billion by 2026 [26]. Therefore, current technical and financial investments in the exchange of clinical genetic data are expected to pave the way for better health care once such exchange is proven beneficial in health-care settings [11].

This study used a rigorous qualitative method to investigate an important and underresearched area of clinical genetics interoperability. The participating labs were located across the United States and represented a range of business models and specialties. The interviewees and panelists had extensive experience and diverse backgrounds that enabled them to analyze and respond to the research questions critically. The study was limited by a low participation rate despite many individual invitations, reminders, and personal outreach to ensure the greatest possible participation of labs. Another limitation is that the sample was not random; the results may nevertheless be informative even though they may not be fully generalizable. The reasons for low participation may have included time constraints and concerns about disclosing what respondents perceived as proprietary information. While we are confident that we identified the major themes, we cannot be certain that thematic saturation was achieved, potentially resulting in a less rich interpretation of the data. It is also possible that the participating labs were more enthusiastic about interoperability standards than other organizations. Nevertheless, these study results may help guide stakeholders in increasing the adoption of interoperability standards for genetic testing across the United States and worldwide. Our future research plans include confirming and quantifying the current themes and stratifying them according to labs’ specialties and business models.

In conclusion, this study identified the expected benefits, challenges, and motivations of implementing interoperability standards in the setting of the genetic testing laboratory. Interviewees frequently reported that increased motivation through clinical demand is critical to accelerating adoption. As hospitals, clinics, and other end users realize the benefits of improved health-care services, more robust research, and greater accuracy, they will be more motivated to increase their demand, resulting in more rapid adoption. Interviewees also reported that an incentive program, with reasonable technical specifications and proper regulation, may foster the adoption of BMI interoperability standards by genetic testing labs.