The integration of research with service is core to clinical psychological science. Thirty years ago, McFall decried the rift between clinical practice and scientific evidence1, spurring the adoption of a ‘clinical scientist’ approach to training practice-informed researchers and research-informed practitioners. Relatedly, the scientist–practitioner clinical training model aims to prepare individuals to merge research with service provision by studying the nature of the problems they aim to treat, and by evaluating efforts to reduce those problems2. In both training models, clinical psychology researchers are tasked with conducting science that, proximally or distally, aims to serve individuals and communities — especially those living with or at risk of mental illness.

Traditionally, the service-related aims of clinical psychology research have been viewed as referencing service through clinical practice (for example, using research to inform the delivery of psychotherapy). However, just as clinical psychologists work beyond clinical practice settings — and just as mental health is shaped by myriad factors outside formal treatment — our professional obligation to ‘serve’ through research extends beyond therapists’ offices. Indeed, this narrow definition of ‘service’ through clinical research has yielded missed opportunities to support the mental health needs, knowledge, and literacy of study participants, patients, and communities.

Defining service in clinical psychology

Service in clinical psychology research can be conceptualized in two ways: first, according to the profession’s stated goals of integrating science with clinical practice to understand and reduce mental illness1,2; and second, according to an intentional ethics of reciprocation3,4, whereby scientists give back both knowledge ownership and material benefit to research participants and their communities. Combining these perspectives, service across basic (mechanism-focused) and applied (treatment-focused) clinical psychology research may be broadly defined as intentional efforts by researchers to confer mental health-relevant knowledge and benefits to study participants, future patients, and/or individuals or communities potentially affected by their work.

For example, at the participant level, knowledge and benefits might be conferred by increasing participants’ awareness of their own mental health, understanding of study results, or access to treatment resources. At the future patient level, knowledge and benefits might be conferred by using methods that are sufficiently rigorous to inform clinical care, or through sharing of study materials such as treatment programs or protocols. At the community level, knowledge and benefits might be conferred by integrating community perspectives during study design, and later communicating results back to the communities involved. A given clinical psychology study involving human subjects might be positioned to ‘serve’ at one or multiple levels, depending on its scope and goals. However, service at some level is virtually always possible.

Where clinical research fails to serve

In the absence of a definition of service and guidelines for conducting service-centred science, psychology research cannot reach its potential to give back to participants, patients, or communities. In many basic clinical research reports, allusions to individual or societal benefit appear briefly in Discussion or Future Directions sections, reflecting perfunctory afterthoughts rather than core professional tenets. Separately, applied clinical psychology research is often characterized by aspiration–method mismatches5, whereby methodological shortcomings render stated goals to improve clinical care unachievable. For example, an underpowered study with inflated, unreliable estimated effects cannot differentiate helpful from unhelpful treatments, or clinically useful prognostic predictors from statistical noise. At best, such work produces equivocal findings with limited practical utility; at worst, it yields endorsements or dissemination of questionable treatment approaches, or spurs the misallocation of resources towards unreliable leads.

Youth psychotherapy research exemplifies how aspirations to identify evidence-based treatments are routinely misaligned with methods, and how this mismatch undermines our work’s utility. Over the past six decades, the average sample size of youth psychotherapy trials has been approximately 69 participants, giving the average trial 47.5% power to detect overall treatment effects (estimated, optimistically, at Cohen’s effect size (d) = 0.46)6. To achieve 80% power to detect treatment effects would require twice as many participants5. Likewise, aspirations to identify treatments for diverse populations are undermined when participants in clinical trials are overwhelmingly white and middle-income, as in much youth psychotherapy research5,6.
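The power shortfall described above can be reproduced with a back-of-the-envelope calculation. The sketch below (Python standard library only; it uses a normal approximation to the two-sample t-test, so figures differ slightly from exact values) estimates power at the field’s average sample size and the sample required for 80% power:

```python
from math import ceil, sqrt
from statistics import NormalDist

def power_two_sample(n_per_group, d, alpha=0.05):
    """Approximate power of a two-sided, two-sample t-test
    (normal approximation; exact t-based power is slightly lower)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    ncp = d * sqrt(n_per_group / 2)  # noncentrality parameter
    return z.cdf(ncp - z_crit)

def n_per_group_for_power(d, power=0.80, alpha=0.05):
    """Smallest per-group n giving the target power for effect size d."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    z_pow = z.inv_cdf(power)
    return ceil(2 * ((z_crit + z_pow) / d) ** 2)

# Average youth psychotherapy trial: ~69 participants total (~34.5 per group)
print(round(power_two_sample(34.5, 0.46), 2))  # ≈ 0.48
print(n_per_group_for_power(0.46))             # ≈ 75 per group
```

At roughly 34–35 participants per group and d = 0.46, power lands near 0.48, consistent with the ~47.5% cited above; reaching 80% power requires about 75 participants per group, roughly double the historical average trial size.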

Moreover, although community perspectives are commonly incorporated into the planning of community and counselling psychology research7, this is rarely the case in clinical psychology research. This decreases the odds that clinical psychology will address scientific questions that matter to those we aim to serve. Even efforts to benefit participants themselves in a given study (for example, by connecting participants with treatment, providing feedback reports, or sharing plain-language study results) are seldom documented in scientific publications, leaving the nature and depth of missed opportunities impossible to gauge.

Importantly, individual researchers are not to blame for these challenges and omissions. Clinical psychologists are not asked to provide evidence of service at any level when designing studies, applying for funding, or publishing results. Indeed, clear standards for conducting service-centred science simply do not exist, despite the profession’s goal of service–research integration.

Recommendations for scientists

Clinical psychology researchers can take several immediate steps toward conducting service-centred research. For example, service to participants might be achieved by offering individual assessment reports that summarize participants’ self-reported symptoms or emotions during the study (generating such reports can be partially automated, for example, using open-access code that produces personalized assessment and feedback reports8), or by sharing plain-language summaries of results after the study has been completed. To better serve communities, scientists might organize community advisory boards, through which they can learn about and incorporate community perspectives into their research questions and measurement approaches.
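As one illustration of the partially automated reports mentioned above, the sketch below assembles a plain-language participant summary from questionnaire totals. The participant label, scores, and field names are illustrative placeholders (PHQ-9 ≥ 10 and GAD-7 ≥ 10 are commonly cited screening thresholds, but a real report should follow each instrument’s published scoring guidance):

```python
def feedback_report(participant_label, scores, cutoffs):
    """Build a short plain-language feedback report.
    `scores` maps measure name -> total score; `cutoffs` maps measure name ->
    screening threshold (both illustrative; use published values in practice)."""
    lines = [f"Assessment summary for {participant_label}", "-" * 30]
    for measure, score in scores.items():
        if score >= cutoffs[measure]:
            status = "above the commonly used cutoff; consider discussing with a provider"
        else:
            status = "below the commonly used cutoff"
        lines.append(f"{measure}: score {score} ({status}).")
    lines.append("This summary is informational and is not a diagnosis.")
    return "\n".join(lines)

report = feedback_report(
    "Participant 0421",  # hypothetical ID
    scores={"PHQ-9 (depression)": 12, "GAD-7 (anxiety)": 4},
    cutoffs={"PHQ-9 (depression)": 10, "GAD-7 (anxiety)": 10},
)
print(report)
```

Templated text of this kind can be batch-generated at the end of data collection and reviewed by study staff before distribution, keeping the per-participant cost of feedback low.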

To serve future patients, clinical scientists can minimize aspiration–method mismatches in their work. This goal might be advanced by, for instance, pooling data across studies; prioritizing large-scale trials; and conducting rigorous power analyses to gauge the feasibility of new ideas5. Aspiration–method alignment poses challenges and difficult choices. For instance, researchers might need to forgo projects in which aspiration-aligned methods are infeasible; pursuing multi-site studies to ensure appropriate sample size and diversity requires substantial funding for staff, community outreach, and participant compensation. However, aspiration–method alignment remains a necessary precondition for serving patients.

Clinical intervention scientists can also serve future patients by sharing treatment materials alongside trial results. Surprisingly, intervention materials are rarely provided by research teams, even when studies identify a treatment as effective. Per a review of mental health treatment trials conducted in low- and middle-income countries, only 7% of study teams (2 of 27) provided access to the intervention materials needed for implementation9 despite the minimal cost and effort required to do so. To adequately serve patients through intervention research, individual scientists must facilitate access to treatment materials.

Recommendations for journals and funders

The above recommendations for scientists might prove unsustainable without structural support. Thus, journal editors and funders should consider several steps to prioritize service–research integration in clinical psychology research. For example, journal editors might require authors to clearly report their efforts to serve participants, future patients, and/or communities. This information might be described in the text, alongside other required ethics-related information, such as IRB approval and trial registration; such statements could take the following form: “towards participant knowledge/benefit, participants received individualized reports describing their symptom levels; towards future patient knowledge/benefit, the treatment manual is accessible at XYZ website; towards community knowledge/benefit, the authors presented results at XYZ community centre.” If certain levels of service are unaddressed, authors could explicitly justify why (for example, mechanism-focused researchers will have no treatment materials to share). Likewise, funders that support clinical research might require applicants to describe plans for service–research integration and, crucially, ask grantees to report the outcomes of those plans in annual progress reports. In addition, publishers and funders might create mechanisms that require clinical trial researchers to share treatment materials, and ensure that this material is freely available (for instance, published outside the journal paywall).

Achieving and sustaining science–service integration in clinical psychology research will require sustained changes in perspective and priorities, at both the scientist and structural levels. Some changes might prove challenging to realize, but the benefits justify the effort. A service-centred core to our scientific pipelines is essential for advancing clinical psychology’s stated objective: to serve society by elucidating and reducing mental illness.