In recent years, the number of digital health tools with the potential to significantly improve delivery of healthcare services has grown tremendously. However, the use of these tools in large, complex health systems remains comparatively limited. The adoption and implementation of digital health tools at an enterprise level is a challenge; few strategies exist to help tools cross the chasm from clinical validation to integration within the workflows of a large health system. Many previously proposed frameworks for digital health implementation are difficult to operationalize in these dynamic organizations. In this piece, we put forth nine dimensions along which clinically validated digital health tools should be examined by health systems prior to adoption, and propose strategies for selecting digital health tools and planning for implementation in this setting. By evaluating prospective tools along these dimensions, health systems can determine which existing digital health solutions are worthy of adoption, ensure they have sufficient resources for deployment and long-term use, and devise a strategic plan for implementation.
The digital health industry has expanded rapidly over the past several years. In the past decade, more than 1200 US digital health companies have attracted a cumulative 33 billion USD in investment1, with annual funding rising from 1.1 billion USD in 2011 to 14 billion USD in 2020. The Food and Drug Administration (FDA) has also recently begun approving digital therapeutics, and options for consumer-facing digital health tools are rapidly growing2,3.
While large health systems are increasingly looking to invest in digital health tools, the landscape of this field is complex4,5. The market for health-system-based digital health tool development includes not only many start-up companies but also entrenched incumbents such as large electronic health-record (EHR) vendors; there are innumerable regulatory-compliance considerations; and there are several unique barriers to the successful adoption of new digital health tools within these large, complex organizations6,7. There is no clear framework by which health systems can evaluate the realities of selecting, building, and deploying a digital health tool in a complex system8.
Many frameworks exist for clinical validation of these tools3,9,10,11, but few strategies describe how to integrate these validated tools in health systems; the literature on digital health implementation within large organizations is limited12,13,14. Existing frameworks for digital health implementation are difficult to operationalize for these dynamic organizations as they do not clearly define which stakeholders to involve, their roles, or the sequence of recommended steps15. The need to more critically evaluate this process is increasingly evident: the COVID-19 pandemic illustrated the value of many digital health tools16, but also revealed that there are adoption and deployment strategies that lead to minimal value creation, large amounts of frustration and inefficiency, and in some cases even patient harm17.
In this piece, we highlight the unique dimensions along which health systems should evaluate digital health tools to anticipate the potential benefits and challenges of adopting them. Key stakeholders who draw from experience in four large organizations—Brigham and Women’s Hospital (Boston, MA), Beth Israel Deaconess Medical Center (Boston, MA), Atrium Health (Charlotte, NC), and Intermountain Healthcare (Salt Lake City, UT)—convened to identify common lessons from implementation of digital health tools within these systems. We define digital health tools as software applications developed for the purpose of improving healthcare services, such as providing clinical decision support or automating administrative or research processes within health systems14.
What is the optimal product selection approach?
Digital health tools can be built in a number of different ways (Fig. 1). Considering how a tool will be built (or has been built) may offer some insight into its strengths and weaknesses and can help a health system identify which product-selection approach is best in the context of the problem they are trying to solve. Depending on the needs and constraints of the institution, it may be best to purchase a complete product, configure an existing tool, or build a tool internally.
The advantages and limitations of each product-selection approach depend largely on the type of problem that the tool is intended to address, and the characteristics of the institution. Thus, an intimate understanding of the organization’s specific need or problem in context is essential before considering any tool or selection approach. If the tool is being built to address a site-specific problem unique to one health system, then an internally developed tool may be best. If the problem is universal, an externally built tool that has been validated elsewhere may be better. Internal custom development may also be preferable when there is no externally built tool currently available and a solution is needed immediately.
During the early months of the COVID-19 pandemic, US health systems leveraged many different development pathways to solve different types of problems. The need to rapidly expand telemedicine capacity was felt almost universally by health systems, so many quickly identified externally built tools from third-party vendors18. Similarly, some health systems turned to existing external SMS-based or chatbot-based vendor solutions to help screen patients and direct them to the appropriate resources19. In contrast, each state enacted slightly different restrictions on postponing elective surgeries; since the criteria varied from state to state, many health systems built customized tools, either internally or through vendor configuration, to comply with state rules. For example, to comply with Wisconsin-specific vaccine distribution and prioritization criteria, ThedaCare (a Wisconsin-based health system) turned to their EHR vendor to build a customized tool18. The Mass General Brigham health system (Boston, MA) internally developed a tool to comply with specific guidance from the Massachusetts Department of Public Health (MDPH) on screening visitors and employees for respiratory symptoms before entering healthcare facilities (Table 1)20.
Internally built tools often reflect a deeper understanding of the health system’s problem in context, and are tailored to the IT configuration and security standards of the health system. Internally developed tools can also be frequently updated to reflect the specific needs of the organization. A drawback, however, is that an internal department may not have the bandwidth to troubleshoot or update the tool frequently or rapidly. While this drawback of internally developed tools is a strength of third-party tools, integration and interoperability with a health system’s existing IT infrastructure may be a challenge for third-party products. Further, external tools are often updated less frequently and have their own development priorities that may not match the needs of the organization. When evaluating a third-party tool, a health system must ask itself the following question: could a tool from an existing vendor (e.g., the current EHR vendor), or one that already exists internally, accomplish the same task? If so, these options may be preferable to adopting a new third-party tool. Duplicative efforts should be avoided at this stage by identifying and repurposing existing tools within the health system that may already address the same problem.
Another advantage of externally built tools is that they may already be in use at other health systems. Health systems should seek out such case studies or references when evaluating third-party tools. Assessing the experience of these other health systems in using the tool may reveal valuable evidence for or against its adoption. For this reason, there is an important distinction between digital health tools built by the EHR vendor as existing products versus those built as site-specific configurations. Products offered by the EHR vendor may be more likely to have a track record of deployment and use at other health systems, which can be used to assess the tool’s value. In some cases, health systems use varying configurations of the tool; this enables some EHR vendors to offer libraries in which other health-system customers can share their tool configurations. For example, in the case of a health system seeking to address a common problem such as poor operating-room (OR) utilization, their EHR vendor may offer an OR-scheduling tool as an optional add-on (Table 1).
Increasingly, an additional model of product development is also being adopted by large academic medical centers in which they form private-sector partnerships to codevelop digital health tools14,21. This approach has many advantages for health systems, including the ability to tailor tools for ease of adoption and institution-specific needs, the ability to scale the tool beyond one health system, and the ability to generate nonclinical revenue through IP licensing4. It may be advantageous for vendors as well, as it increases the likelihood of adoption of the product they are codeveloping.
Given the complementary opportunities and challenges associated with each method of digital health-tool selection, we suggest giving all four methods full and equal consideration before selecting one. Once there is a deep understanding of the need that the health system is trying to meet, first consider if the organization’s EHR vendor or vendor for an existing digital health tool offers a solution. Second, consider if there is another external, validated solution on the market. Third, consider identifying a vendor who is willing to codevelop the solution. Finally, also consider internal custom development of the desired tool. In Fig. 1, we highlight considerations for each of these approaches. The ideal product-selection approach may vary by a health system’s specific needs.
Is there a clear demonstration of ROI?
While the Centers for Medicare & Medicaid Services (CMS) has not yet issued clear guidance on reimbursement for use of digital health tools, there are a number of other ways these tools can generate a return on investment (ROI)2; understanding the framework by which the tool generates this value is important. ROI in health systems is driven by a number of factors, and it should be clear which factor the tool addresses. Some frameworks and examples by which tools can generate financial ROI are shown in Box 1. For example, a tool that increases OR utilization through more efficient scheduling of cases would generate financial value in a fee-for-service model by increasing procedural volume.
In considering the financial value of the tool, it is also important to estimate how much the tool costs to acquire and maintain and who will be paying for it. A tool may have a time-limited capital expense associated with it, an operational expense that incurs costs in perpetuity, or, in most cases, both. In adopting a digital health tool, there is often not only an upfront cost associated with implementation, but also ongoing costs and staffing requirements associated with support, maintenance, and hosting. Once these upfront and long-term costs are estimated, it is also important to consider whether there is an existing line item in the health system’s budget for the tool, or whether a new source of funding is required.
Finally, the overall business plan of the tool should be examined by comparing the financial value it generates against costs associated with long-term maintenance. Once the tool has been implemented, it will likely continue to incur operational expenses and also require support personnel, computer and storage space, and project managers to oversee updates and maintenance. These responsibilities may be distributed in various ways across internal staff at the health system, the vendor, or even third parties to whom the health system can outsource support tasks.
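This comparison of upfront and recurring costs against the value generated reduces to simple arithmetic over a planning horizon. The sketch below is illustrative only, with entirely hypothetical figures for an OR-scheduling tool:

```python
# Illustrative sketch of a multi-year cost/value comparison for a digital
# health tool. All figures are hypothetical and for demonstration only.

def net_value(upfront_cost, annual_operating_cost, annual_value, years):
    """Cumulative net value of a tool over a planning horizon."""
    total_cost = upfront_cost + annual_operating_cost * years
    total_value = annual_value * years
    return total_value - total_cost

def roi(upfront_cost, annual_operating_cost, annual_value, years):
    """Return on investment, expressed as a fraction of total cost."""
    total_cost = upfront_cost + annual_operating_cost * years
    return net_value(upfront_cost, annual_operating_cost,
                     annual_value, years) / total_cost

# Hypothetical OR-scheduling tool: 200k USD to implement, 50k USD/year in
# support and hosting, generating 150k USD/year in added procedural revenue.
print(net_value(200_000, 50_000, 150_000, 5))        # 300000 over 5 years
print(round(roi(200_000, 50_000, 150_000, 5), 2))    # 0.67
```

Even a back-of-the-envelope model like this makes explicit whether the tool's annual value covers its perpetual operating expense, not just its upfront cost.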
Is there a clear demonstration of clinical value?
There are a number of clinical dimensions along which to evaluate a digital health tool’s value and risks22. Identifying a clear, meaningful outcome metric that would be improved by adoption of the tool in the short term is essential when considering adoption. A meaningful metric is one that affects an aspect of the Institute for Healthcare Improvement (IHI)’s Quadruple Aim: improving the health of populations, enhancing the experience of care for individuals, reducing the per-capita cost of health care, and improving the experience of clinicians and staff23. Examples of such metrics include provider burnout, time savings, patient outcomes, and patient satisfaction. In the case of externally developed tools, the vendor sometimes may be able to provide customer case studies or scientific evidence showing that the use of their product is associated with improvements in some meaningful outcome of interest. Measuring a tool’s direct impact on a granular outcome—rather than a more high-level one—may be a better way to capture its value. For example, it may be difficult to measure a tool’s “total provider time saved,” and easier instead to measure its “reduction in number of unnecessary consultations.” The Mass General Brigham health system, for instance, measured the impact of their COVID Pass screening tool in terms of the number of symptomatic individuals successfully prevented from entering the facility, the fraction of individuals tested for COVID-19 within 14 days of a positive screen, and the fraction of individuals with a positive screen who truly had a COVID-19 infection20,24.
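Granular metrics of this kind are typically simple ratios over counts that a health system already logs. As an illustrative sketch, with entirely hypothetical counts rather than actual COVID Pass data:

```python
# Illustrative sketch: computing screening-program metrics of the kind
# described above, from hypothetical counts (not actual reported data).

def screening_metrics(positive_screens, tested_within_14d,
                      confirmed_infections):
    """Fraction of positive screens followed by testing within 14 days,
    and the screen's positive predictive value (fraction of positive
    screens that reflected a true infection)."""
    fraction_tested = tested_within_14d / positive_screens
    ppv = confirmed_infections / positive_screens
    return fraction_tested, ppv

# Hypothetical month of screening: 400 positive screens, 320 tested
# within 14 days, 48 confirmed infections.
fraction_tested, ppv = screening_metrics(400, 320, 48)
print(fraction_tested)  # 0.8
print(ppv)              # 0.12
```

Tracking such ratios over time gives the health system a concrete, auditable signal of whether the tool is delivering its intended clinical value.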
A tool may be designed to improve a specific outcome, but whether or not it is actually capable of achieving this goal in clinical practice ultimately dictates its true clinical value. A digital health tool that is useful in practice is one that is able to trigger a specific workflow: the insight that the tool provides must be translated into an actual human intervention in order to realize the tool’s value25. Prior work suggests that predictive models have the capacity to move the needle on clinical outcomes when their output is coupled with tailored human interventions26. Thus, both the tool’s target-outcome metric and its ability to affect this outcome metric must be considered.
Off-the-shelf tools sometimes have a track record of use at other health systems that may offer insight into their clinical value. However, in the case of tools with no prior use history (e.g., those that have been internally developed by the organization), a limited pilot among a small sample of the total intended user population may help the organization assess if the new technology actually affects the desired outcome metric or not. Validated resources for designing effective pilots are described elsewhere14.
What data assets are required for product functionality?
Digital health tools are typically informed by one or more sources of data when generating outputs such as prognostic or diagnostic predictions, automated administrative tasks, or documentation recommendations. Common sources of structured and unstructured data include the EHR, scheduling or administrative systems, and administrative claims and billing systems. The health system must ensure that the new tool has access to the appropriate stream of data at the appropriate interval (e.g., real time, weekly, or monthly) for full functionality. The tool must also be fully embedded within the health system’s information ecosystem: it should be able to send and retrieve data from other applications as needed for functionality and accessibility. Encouragingly, a growing number of tools are being built that directly connect to EHRs and use standards-based application programming interfaces (APIs) such as Fast Healthcare Interoperability Resources (FHIR)27. Some organizations have also begun curating these products based on their interoperability with existing systems28. Ensuring that the tool shares a common data standard with other tools in the organization is essential. In addition, for off-the-shelf tools built by external vendors that implement machine-learning algorithms, local training data may be required to tune or recalibrate the model for the environment in which it will be deployed. Finally, in order to share these data with a third-party vendor, the organization must ensure that there is a data-governance structure in place, including a data-use agreement (DUA) that allows the vendor to access the minimum necessary data for tool functionality. Importantly, the DUA should specify how patient-level data are to be protected (e.g., a DUA might specify that any data shared with the vendor must be anonymized immediately prior to use), and should define what—if any—secondary use of the data is permitted.
The Department of Health and Human Services (HHS) has proposed best practices for DUAs29. In the case of our OR scheduling tool example, local training data were needed to inform the tool of how long various procedures at this health system typically take. Fortunately, a previously established DUA between the EHR vendor and the hospital covered this data sharing (Table 1).
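To illustrate what standards-based connectivity looks like in practice, the sketch below builds a FHIR search request and parses a searchset Bundle. The server URL and the Appointment resource shown are hypothetical, and the network call itself is omitted; this is a minimal sketch of general FHIR REST conventions, not any specific vendor's API.

```python
# Minimal sketch of FHIR REST conventions: construct a standard search URL
# and extract resources from a searchset Bundle. The server base URL and
# sample data are hypothetical; no network request is made here.
import json
from urllib.parse import urlencode

FHIR_BASE = "https://ehr.example.org/fhir"  # hypothetical FHIR server

def build_search_url(resource_type, **params):
    """Construct a FHIR search URL, e.g., for Appointment resources."""
    return f"{FHIR_BASE}/{resource_type}?{urlencode(params)}"

def extract_resources(bundle_json):
    """Pull the individual resources out of a FHIR searchset Bundle."""
    bundle = json.loads(bundle_json)
    return [entry["resource"] for entry in bundle.get("entry", [])]

url = build_search_url("Appointment", date="ge2021-01-01", status="booked")

# A minimal example Bundle, shaped as a FHIR server would return it.
sample_bundle = json.dumps({
    "resourceType": "Bundle",
    "type": "searchset",
    "entry": [
        {"resource": {"resourceType": "Appointment", "id": "or-123",
                      "status": "booked"}},
    ],
})
appointments = extract_resources(sample_bundle)
print(url)
print(appointments[0]["id"])  # or-123
```

Because the request and response shapes are standardized, a tool written against these conventions can, in principle, be pointed at any conformant EHR endpoint rather than a single vendor's proprietary interface.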
Is there an internal champion?
An internal champion is someone who has an intimate understanding of how the tool meets a specific need of the organization, the bandwidth and motivation to orchestrate adoption of the tool (including IT integration and personnel training), and the leverage to encourage people to use it, and who is similar enough to the typical user to envision how it should be integrated into existing workflows. The internal champion must be able to engage a representative sample of prospective users (including clinical, management, and administrative users) prior to, during, and after implementation, who can provide continual feedback on both the new product and the new process created by the product30,31. To improve the likelihood of successful implementation of health IT products, prior literature supports obtaining feedback from representative samples of all distinct groups that interact with the tool, such as nurses and physicians, and finding inclusive solutions that address each group’s needs32,33,34. Important feedback on the product may include comments on the interpretability, reliability, and usability of its output or the algorithm it uses; feedback on the process may address how disruptive or time-consuming the workflows triggered by the tool’s various outputs are.
We recommend identifying an internal champion who resembles a potential user as closely as possible. For example, an OR-scheduling tool should have an OR-scheduling administrative staff member as its internal champion. Nurses, physicians, advanced practice providers (APPs), physical/occupational therapists, and administrative staff are all examples of possible internal champions, as they are in a unique position to identify digital health tools that are suitable for adoption, have an intimate knowledge of the health system’s daily processes, and have the leverage to encourage their colleagues to use the tool and provide feedback. In the case of digital health tools that are being implemented for the purposes of research or data collection, having an internal research champion—such as the principal investigator of a research laboratory within the organization—is also important.
Are there executive sponsors?
The adoption and implementation of digital health tools must be considered from the perspective of two executive-level strategies. One is the organization’s digital strategy: ensuring that new tools are compatible with the organization’s IT infrastructure and platforms, information-security requirements, and more. The other is the organization’s broader healthcare-delivery strategy: ensuring that new tools align with the general purpose, direction, and needs of the health system. To address these two strategies, two or more executive sponsors may be needed.
A senior health-system executive such as the chief executive officer, chief medical officer, chief nursing officer, or senior vice president of clinical services may be in a good position to ensure that the tool aligns with the organization’s healthcare-delivery strategy, and also that the tool’s business plan justifies its adoption. This executive sponsor may also be able to support the tool’s adoption—even if they neither engage directly with the tool nor resemble the tool’s typical user—by providing any necessary resources identified by the internal champion. The second executive sponsor should address the organization’s digital strategy. A chief digital officer or chief medical informatics officer may be able to ensure that the solution being selected will adhere to the organization’s digital strategy for interoperability and security.
Does the tool align with institutional priorities?
A number of existing digital health tools have the ability to demonstrate ROI and clinical value. Only the ones that align with institutional priorities, however, are likely to attract the momentum necessary for widespread adoption and deployment. A tool—and the outcomes it targets—must align with the goals and strategy of the health system’s executive leadership, such as expanding to new markets or addressing specific conditions. An understanding of these goals can help identify tools that are most likely to gain the support of executive leadership. As discussed in the prior section, a key role of the executive sponsor should be to ensure this institutional alignment of priorities.
Regulatory compliance is another way a tool can meet institutional priorities. Compliance with upcoming legislative deadlines is often a catalyst for adoption of tools that help health systems comply. For example, the Office of the National Coordinator for Health Information Technology (ONC)’s Final Rule, which requires increased electronic health information (EHI) sharing between health systems and patients, will likely catalyze adoption of new digital health tools that enable this sharing among health systems as they race to meet upcoming deadlines. In the case of COVID Pass, the institution’s urgent need to limit unnecessary COVID-19 exposures to patients and staff—as well as orders issued by the State of Massachusetts—prompted them to develop and deploy the tool within two weeks of conception.
What resources are required for implementation?
Once a tool has been selected or built, the resources necessary for implementation should be identified. Whether a tool’s implementation will be resource- and effort-intensive can be estimated based on the scope of its domain and its relationship to existing workflows—namely, how many workflows it affects and whether it burdens or improves these workflows, as described recently by Morse et al.8. For example, a general inpatient-deterioration prediction tool may burden the workflow of multiple care teams by giving them information that they would normally not evaluate and are unsure how to respond to. Such a model would also require large numbers of staff—primary teams, consulting teams, and nurses—to be trained on how to react to its output. In contrast, an OR-scheduling tool designed for the very narrow purpose of scheduling procedures in a time-efficient manner may only affect the workflow of perioperative schedulers, increase the efficiency of the scheduling workflow, and only require scheduling staff to be trained in its use.
We recommend identifying the resources necessary to address five issues: (1) training, (2) IT integration, (3) information security, (4) human capital investment, and (5) adapting existing real-world workflows and EHR workflows to incorporate the new tool. Important considerations for each of these five issues are listed in Box 2. The US Office of the National Coordinator for Health Information Technology (ONC) has published guides that offer further detail on ensuring information security, interoperability, and preparing the necessary hardware and software infrastructure for health IT implementation35,36. Other research groups have offered further detail on preparing for training and adapting workflows32,37.
Given these multiple simultaneous considerations and the overall complexity of implementation, there is some risk of developing incomplete implementation plans or losing momentum while planning the implementation process. Some institutions have developed process maps to help standardize the pilot and clinical validation processes for tools they are considering implementing—a similar methodical approach to implementation may be beneficial14,38. We suggest using the nine key considerations outlined in this piece as a checklist to standardize digital health-tool selection and to prepare for implementation. Health systems may customize the considerations listed in this piece to their organization’s needs, to ensure that all aspects of implementation are considered for each tool, that they are considered in the right order, and that teams responsible for implementation are able to promptly connect with appropriate contacts in each area to maintain momentum.
Does the tool have a long-term operational home?
Digital health tools in large health systems are sometimes viewed as innovation projects and receive short-term funding for adoption and deployment. At some point, however, these projects must graduate to an operational home. Before a tool is adopted, it is important to identify who will be responsible for “owning” its functionality, financial sustainability, and clinical impact in the long term.
One dimension of long-term ownership that should be considered is whether the tool will have a “technical owner” who has the ability to provide technical support and maintenance into the future. For internally developed tools, there must be capacity within the health system for informatics staff to identify, understand, and develop solutions to future issues with the tool; there must also be capacity for technical IT staff to implement these proposed solutions, host the tool, and handle other technical issues. For externally developed tools, technical support contracts with vendors should be maintained. Contingency plans should be made as well: what if the vendor gets acquired or goes out of business and can no longer offer ongoing technical support? As more workflows and functions in healthcare are digitized, health systems will likely need to expand their own technical teams to support these tools.
Another dimension of long-term ownership that should be considered is who the tool’s “business owner” will be. An operational group should be responsible for overseeing the tool, continuing to fund it, and performing ongoing quality assessments: the tool should be continually reexamined to ensure it is delivering its intended value, and iteratively refined if it is not. This may require assigning specific project managers or business analysts to this long-term task. COVID Pass remained under the ownership of the in-house digital health development and adoption team, who continued to solicit feedback from Human Resources and Occupational Health on new features such as informing users of COVID-19 clinical trials and enabling scheduling of vaccine and testing appointments. The OR-scheduling tool was placed under the ownership of the Department of Perioperative Services, whose responsibilities included overseeing the tool, incorporating it in their budget, and assigning business analysts to regularly assess OR-utilization metrics (Table 1).
Large health systems face unique, complex challenges in adopting, implementing, and operationalizing digital health tools, which stand in the way of significant potential improvements in healthcare services. In this piece, we identify nine key considerations to help these organizations identify and implement digital health tools strategically. By evaluating prospective tools along the dimensions described in this piece and summarized in this table—the product-selection approach, the ROI and clinical value, data assets required for functionality, internal champions and executive sponsors, alignment with institutional priorities, requirements for implementation, and long-term operations—health systems can decide how best to select or develop a digital health solution, evaluate whether an existing tool is worthy of adoption, ensure they have sufficient resources for deployment and long-term use, and devise a plan for implementation.
RockHealth. Q1 2021 Funding Report: digital health is all grown up. https://rockhealth.com/reports/q1-2021-funding-report-digital-health-is-all-grown-up/ (2021).
Patel, N. A. & Butte, A. J. Characteristics and challenges of the clinical pipeline of digital therapeutics. npj Digital Med. 3, 1–5 (2020).
Powell, A. C., Landman, A. B. & Bates, D. W. In search of a few good apps. JAMA 311, 1851–1852 (2014).
Gordon, W. J., Landman, A., Zhang, H. & Bates, D. W. Beyond validation: getting health apps into clinical practice. npj Digital Med. 3, 1–6 (2020).
Abraham, T. Hospitals look to venture capital as R&D extension. https://www.healthcaredive.com/news/hospitals-look-to-venture-capital-as-rd-extension/549854/ (2019).
PWC. PwC US Health Top Health Issues 2021. https://www.pwc.com/us/en/industries/health-industries/assets/pwc-us-health-top-health-issues.pdf (2021).
Flannery, D. & Jarrin, R. Building a regulatory and payment framework flexible enough to withstand technological progress. Health Aff. https://doi.org/10.1377/hlthaff.2018.05151 (2018).
Morse, K. E., Bagley, S. C. & Shah, N. H. Estimate the hidden deployment cost of predictive models to improve patient care. Nat. Med. 26, 18–19 (2020).
Henson, P., David, G., Albright, K. & Torous, J. Deriving a practical framework for the evaluation of health apps. Lancet Digit. Health 1, e52–e54 (2019).
Mathews, S. C. et al. Digital health: a path to validation. npj Digital Med. 2, 1–9 (2019).
Van Velthoven, M. H., Smith, J., Wells, G. & Brindley, D. Digital health app development standards: a systematic review protocol. BMJ Open 8, e022969 (2018).
Cresswell, K. & Sheikh, A. Organizational issues in the implementation and adoption of health information technology innovations: an interpretative review. Int. J. Med. Inform. 82, e73–e86 (2013).
Li, R. C., Asch, S. M. & Shah, N. H. Developing a delivery science for artificial intelligence in healthcare. npj Digital Med. 3, 1–3 (2020).
Tseng, J., Samagh, S., Fraser, D. & Landman, A. B. Catalyzing healthcare transformation with digital health: Performance indicators and lessons learned from a Digital Health Innovation Group. Healthcare 6, 150–155 (2018).
Verma, A. A. et al. Implementing machine learning in medicine. CMAJ 193, E1351–E1357 (2021).
Reeves, J. J. et al. Rapid response to COVID-19: health informatics support for outbreak management in an academic health system. J. Am. Med. Inform. Assoc. 27, 853–859 (2020).
Brodwin, E., Aguilar, M., STAT staff, Ross, C. & Palmer, K. ‘It’s really on them to learn’: How the rapid rollout of AI tools has fueled frustration among clinicians - STAT. https://www.statnews.com/2020/12/17/artificial-intelligence-doctors-nurses-frustrated-ai-hospitals/ (2020).
Adams, K. Which digital health tools helped hospitals weather the pandemic? 4 CIOs discuss. https://www.beckershospitalreview.com/digital-transformation/which-digital-health-tools-helped-hospitals-weather-the-pandemic-4-cios-discuss.html.
Massgeneral. Mass General Provides Telemedicine in Response to COVID-19. https://advances.massgeneral.org/research-and-innovation/article.aspx?id=1201 (2020).
Zhang, H. et al. A web-based, mobile-responsive application to screen health care workers for COVID-19 symptoms: rapid design, deployment, and usage. JMIR Form. Res. 4, e19533 (2020).
Shah, R. N., Nghiem, J. & Ranney, M. L. The rise of digital health and innovation centers at academic medical centers: time for a new industry relationship paradigm. JAMA Health Forum 2, e210339 (2021).
Perakslis, E. & Ginsburg, G. S. Digital health-the need to assess benefits, risks, and value. JAMA 325, 127–128 (2021).
Bodenheimer, T. & Sinsky, C. From triple to quadruple aim: care of the patient requires care of the provider. Ann. Fam. Med. 12, 573–576 (2014).
Kim, E. et al. COVID-19 screening system utilizing daily symptom attestation helps identify hospital employees who should be tested to protect patients and co-workers. Infect. Control Hosp. Epidemiol. https://doi.org/10.1017/ice.2021.461 (2021).
Jung, K. et al. A framework for making predictive models useful in practice. J. Am. Med. Inform. Assoc. 28, 1149–1158 (2020).
Golas, S. B. et al. Predictive analytics and tailored interventions improve clinical outcomes in older adults: a randomized controlled trial. npj Digit. Med. 4, 1–10 (2021).
Barker, W. & Johnson, C. The ecosystem of apps and software integrated with certified health information technology. J. Am. Med. Inform. Assoc. 28, 2379–2384 (2021).
Ross, C., Palmer, K., Feuerstein, A. & Sheridan, K. Hospitals launch new venture to build a better digital health marketplace. https://www.statnews.com/2021/10/05/graphite-health-digital-marketplace-civicarx/ (2021).
Department of Health and Human Services. Data Use Agreement (DUA) Practices Guide. https://www.hhs.gov/sites/default/files/ocio/eplc/EPLC%20Archive%20Documents/55-Data%20Use%20Agreement%20%28DUA%29/eplc_dua_practices_guide.pdf (2012).
Kappelman, L. A. & McLean, E. R. User engagement in the development, implementation, and use of information technologies. in Proceedings of the Twenty-Seventh Hawaii International Conference on System Sciences (HICSS-94) (IEEE, 1994). https://doi.org/10.1109/hicss.1994.323467.
Cresswell, K. M. et al. Sustained user engagement in health information technology: the long road from implementation to system optimization of computerized physician order entry and clinical decision support systems for prescribing in hospitals in England. Health Serv. Res. 52, 1928 (2017).
Cresswell, K. M., Bates, D. W. & Sheikh, A. Ten key considerations for the successful implementation and adoption of large-scale health information technology. J. Am. Med. Inform. Assoc. 20, e9 (2013).
Cresswell, K., Morrison, Z., Crowe, S., Robertson, A. & Sheikh, A. Anything but engaged: user involvement in the context of a national electronic health record implementation. Inform. Prim. Care 19, 191–206 (2011).
Dagroso, D. et al. Implementation of an obstetrics EMR module: overcoming user dissatisfaction. J. Healthc. Inf. Manag. 21, 87–94 (2007).
SAFER. SAFER System Configuration. https://www.healthit.gov/sites/default/files/safer/guides/safer_system_configuration.pdf (2016).
SAFER. General instructions for the SAFER self-assessment guides. https://www.healthit.gov/sites/default/files/safer/guides/safer_system_interfaces.pdf (2016).
NEJM Catalyst. How to Avoid Common Pitfalls of Health IT Implementation. https://catalyst.nejm.org/doi/full/10.1056/CAT.20.0048 (2020).
Brigham Digital Innovation Hub. Can a Simple Checklist Help Effectively Launch Pilots for Digital Health? https://bwhihub.org/news/2018-3-5-can-a-simple-checklist-help-effectively-launch-pilots-for-digital-health/ (2018).
J.S.M. is supported by T15LM007092 from the US National Library of Medicine/National Institutes of Health, and the Biomedical Informatics and Data Science Research Training (BIRT) Program of Harvard University.
W.J.G. reports consulting income from the Office of the National Coordinator for Health IT and Novocardia Inc., both outside the scope of this work. A.B.L. was previously a member of the Abbott Medical Device Cybersecurity Council. J.S.M., G.A.B., and T.D. declare no competing interests.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Marwaha, J.S., Landman, A.B., Brat, G.A. et al. Deploying digital health tools within large, complex health systems: key considerations for adoption and implementation. npj Digit. Med. 5, 13 (2022). https://doi.org/10.1038/s41746-022-00557-1