Background

The engagement of stakeholders in research is increasingly viewed as a major factor in maximising desired outcomes in terms of knowledge translation, impact and implementation in policy and practice. Engaging stakeholders in research can legitimise research findings (Opoku et al., 2014), is considered to bring 'significant benefits to the process of knowledge production' (Phillipson et al., 2012) and is viewed as an important approach to promoting impact (Buxton and Hanney, 1996; Innvær et al., 2002; Lavis et al., 2005; Hanney et al., 2007; Bullock et al., 2012; Kok et al., 2016). Stakeholder engagement (SE) is increasingly promoted across the board by health research funding organisations, and indeed by many researchers themselves, as an important pathway to achieving impact (Boaz et al., 2008).

Support for claims about the importance of researchers engaging with potential users of their research comes from a wide range of fields (Weiss, 1977). Specifically in relation to health research, there are some early examples (Glaser and Taylor, 1973), and SE was explored in depth in a study of the English Health Department's R&D Division conducted at Brunel University (Kogan and Henkel, 1983). Engagement was also identified early on as an important approach to promoting impact (Buxton and Hanney, 1996). Several reviews point to interaction between health researchers and potential users in policy or managerial fields as one of the key factors most often associated with impact being achieved (Innvær and Trommald, 2002; Hanney et al., 2003; Lavis et al., 2005; Bullock et al., 2012). Most of the examples, however, involve retrospective analysis, and it is not always clear in the review articles how much of the evidence about interactions refers solely to the more limited concept of contact after the completion of the research, and how much relates to contact during the conduct of the research itself.

This paper presents a prospective, as opposed to retrospective, approach to exploring SE in health research. The potential and actual challenges and benefits of retrospective and prospective research have been written about extensively across a wide range of disciplines. Prospective studies produce more accurate data collection but are less efficient, being expensive and time-consuming. Conversely, retrospective studies are time-efficient but can work only with data that have already been gathered and measured, often for a purpose other than the one under investigation (Euser et al., 2009). A further challenge of retrospective research is that it can conceal flaws in sampling and data collection, relating to individual reasoning and cognitive bias, which lead to the production of biased interpretations (Bitektine, 2008). An advantage of prospective research is that it can reduce measurement error, an issue that can arise in retrospective research due to poor or incomplete recall (White et al., 1998). Prospective research is also appropriate for investigating the expectations of participants and comparing these data with data collected at a later point in time (Van Ness et al., 2011), which allows for 'scrutiny of the making of opinion and attitude' (Holder, 2016) and for understanding change. However, there are also many challenges to this approach relating to the recruitment of participants, sampling, sample sizes and attrition (Plano Clark et al., 2015). It is not unusual for researchers to encounter methodological challenges that require a shift from the methods originally intended to some adaptation as the research progresses (e.g., Thomson and Holland, 2003), and it is increasingly viewed as good practice for studies to publish their protocol as a journal article so that any changes in a study's methods occur in an open, transparent way.

Conversely, little detailed knowledge has been generated to date about the processes, outcomes and challenges of conducting an empirical study of another research project. However, the authors of a protocol paper for a current longitudinal prospective project, which explores partnership working in a large, multi-stakeholder health research programme, set out some of the envisaged, potential limitations of conducting this type of study (Greenhalgh et al., 2017). These include human and financial resources, and access to key data held by the research team.

The longitudinal research presented in this paper is an impact study. Its two distinct methodological features are that it uses a prospective approach and that it researches SE in another research project. The impact study explores EQUIPT (European-study on Quantifying Utility of Investment in Protection from Tobacco), a 3-year EU-funded research project that used SE in the development, testing and dissemination of its tobacco control return on investment (ROI) tool. EQUIPT sought to implement a ROI tool across five EU sample countries (Germany, Hungary, the Netherlands, Spain and the UK) and to investigate its transferability beyond those five countries. The engagement of stakeholders from each country in the co-production of the ROI tool was a key element in EQUIPT's programme of research, and the project anticipated that SE would make a major contribution to its success in creating an impact.

The impact study, SEE-Impact (Stakeholder Engagement in EQUIPT for Impact), received funding from the Medical Research Council (MRC) to examine specifically the engagement of stakeholders in the EQUIPT project (Boaz, 2017) and proceeded alongside the EQUIPT project over the 3-year period (October 2013 to September 2016) to ‘track’ SE as it occurred. The study aimed to provide researchers and research funders with an improved evidence base on which to decide whether and how to apply what is thought to be a key mechanism (SE) to enhance the adoption, and hence impact, of health research.

Questions have been raised about whether stakeholder co-production in research produces greater usefulness and relevance, and the costs associated with co-production, as well as the benefits, have to date received little attention (Oliver et al., 2019; Oliver and Boaz, 2019). There are also calls for methods by which to evaluate the impact of evidence on policy practice and change, amidst claims that research should be targeted towards questions of direct interest to policy makers and practitioners (Oliver et al., 2019; Oliver and Boaz, 2019; Amy et al., 2015). The SEE-Impact study sought to learn about the challenges and benefits of SE, how co-production in research is experienced by those involved, and to what extent co-production and impact are achieved. The study, however, encountered challenges of its own in terms of factors that hindered and facilitated its progress, relating in particular to the need to reassess and modify data collection methods as the study developed in order to enhance understanding around SE. By highlighting these factors, this paper aims to contribute to closing the gap in the literature about the difficulties associated with examining SE in research and about SE in the co-production of research.

In the methods section below, we describe the processes around the identification, recruitment and categorisation of stakeholders for both the EQUIPT project and SEE-Impact study. We then set out EQUIPT’s planned methods of SE at the beginning of the project, and SEE-Impact’s planned methods of gathering data on SE in EQUIPT. The findings section goes on to present the actual methods in each case.

Methods

Stakeholders

EQUIPT team members from the five participating countries identified and recruited stakeholders to the EQUIPT project. A research advisory group (RAG) was created comprising nine stakeholders with a wide range of expertise covering health policy and practice, health economics, and research relating to smoking cessation and tobacco control. Wider stakeholders fell into five pre-defined categories: decision makers, purchasers of services/pharma products, professional service providers, evidence generators and advocates of health promotion. The EQUIPT team anticipated that stakeholders would be involved in defining the end product (the ROI tool), would provide feedback on the applicability and relevance of the tool, and would discuss and agree policy proposals and the dissemination of project results.

For the SEE-Impact study, two categories of stakeholders were identified—engaged and unengaged—informed by Kok and Schuit’s work (2012) mapping the contribution of key actors and users involved in research projects.

  • Engaged stakeholders constituted two groups:

    (i) EQUIPT team members and RAG members. EQUIPT team members had distinct roles and were involved in different work packages around the development of the ROI tool.

    (ii) Linked stakeholders who had been identified and contacted by EQUIPT team researchers and had become involved in the project. These stakeholders each occupied one of EQUIPT's five pre-defined categories stated above.

  • Unengaged stakeholders constituted two groups:

    (i) Unlinked stakeholders who had also been identified and contacted by the EQUIPT team but had declined the invitation to take part in the project.

    (ii) Unlinked people who had not been contacted by the EQUIPT team to take part in the project but who the SEE-Impact team identified as potential stakeholders.

EQUIPT’s original plans for stakeholder engagement

At the design stage of the project, EQUIPT's SE methods were to include (a) surveys with 75–100 stakeholders, one at baseline and another at a later time point during the project; (b) interviews with stakeholders; and (c) 18 workshops/meetings with stakeholders, comprising four events with EQUIPT team members and RAG members, and 14 with wider stakeholders. The objectives of engaging stakeholders in these events were to gain feedback on the use of the ROI tool, to gain support for the validation of the tool, and to discuss and disseminate findings about the development of the tool.

SEE-Impact’s original plans for data collection of stakeholder engagement in EQUIPT

SEE-Impact study data collection methods were developed on the basis of EQUIPT's strategy for SE, as set out above and in the project's published protocol (Pokhrel et al., 2014), and in discussion with key members of the EQUIPT team. The SEE-Impact study was therefore designed to include:

    (i) Surveys with stakeholders from across the five participating countries (Germany, Hungary, the Netherlands, Spain and the UK). A SEE-Impact survey with stakeholders was intended at baseline, with a second survey taking place towards the end of the EQUIPT project. The baseline survey was designed to comprise both open and closed questions and would enable SEE-Impact researchers to establish stakeholders' expectations, levels of understanding and planned intensity of engagement for later comparison with data from the follow-up survey.

    (ii) Interviews with stakeholders. The SEE-Impact study anticipated that at least 35 stakeholders from across all five countries would take part in an interview at least once and that a smaller subset (n = 5–10) would participate in repeat interviews at a later point during their participation in the EQUIPT project.

    (iii) Observations. SEE-Impact researchers would carry out observations of all 18 EQUIPT stakeholder events in order to explore the interactions of stakeholders and the nature and circumstances of their input to the project. Observations would focus in particular on the level of SE and would be supplemented by detailed field notes completed in each case.

It was anticipated that these methods of data collection would enable the SEE-Impact study to build a picture of the level and nature of SE, the timing of engagement, the types of stakeholders involved in EQUIPT and their motivations for becoming involved, and the impact of their involvement.

Ethics approval

Ethics approval for the SEE-Impact study was granted by the Faculty Research Ethics Committee, Faculty of Health, Social Care and Education, St George's, University of London and Kingston University, on 18 March 2014.

Findings

Our findings are based on a comparison of the methods of SE that actually occurred in the EQUIPT project, and the SE data collection methods that were actually used in the SEE-Impact study, with those intended in each case. Table 1 gives a summary of the intended and actual data collection methods. Figure 1 shows the different types of data collection and the stages at which they occurred over the 3-year study.

Table 1 Intended and actual SEE-Impact data collection methods.

What type of stakeholder engagement actually occurred in EQUIPT?

EQUIPT's two surveys with stakeholders, and its interviews with stakeholders, took place as originally intended. The stakeholder events, however, were reduced from the intended 18 to just six. These comprised the originally planned four events for EQUIPT team and RAG members and two of the planned 14 events for wider stakeholders. The reasons for EQUIPT reducing the number of stakeholder events, and what this meant for the SEE-Impact study of SE, are discussed below.

What type of stakeholder engagement data did the SEE-Impact study actually collect?

Surveys

At the request of EQUIPT, the SEE-Impact baseline survey with stakeholders did not go ahead in its entirety. Instead, one question from the survey was incorporated into the EQUIPT stakeholder baseline survey. The SEE-Impact question was 'What would you like to get out of involvement in the EQUIPT project?' We anticipated that this question would generate responses that could be compared with data collected during the second, follow-up survey. The EQUIPT stakeholder survey was conducted, interview-style, by the project's researchers in each of the countries (Cheung et al., 2016). A total of 93 stakeholders from across all five countries participated (17 from Germany, 16 from Hungary, 28 from the Netherlands, 18 from Spain and 14 from the UK).

The second, follow-up SEE-Impact stakeholder survey was also incorporated into EQUIPT's second survey, although on this occasion several SEE-Impact questions were included rather than one. These questions asked about stakeholder involvement and expectations, and about stakeholder communication with the EQUIPT project team. A total of 66 stakeholders from across all countries participated: 14 from Germany, 16 from Hungary, 15 from the Netherlands, 14 from Spain and 7 from the UK. Responses to the SEE-Impact questions in both EQUIPT surveys (baseline and follow-up) were collated and presented in graph form by an EQUIPT team member.

Interviews

In total, 45 SEE-Impact study interviews took place with stakeholders, including two follow-up interviews conducted one year later. These comprised 6 in Germany, 8 in Hungary, 13 in the Netherlands, 9 in Spain and 9 in the UK. Across the two groups of stakeholders (engaged and unengaged), the interviews included 16 with EQUIPT team members, 19 with wider, engaged stakeholders and 10 with unengaged (potential) stakeholders.

Interview questions were open-ended and investigated the circumstances around stakeholders’ awareness of and involvement in EQUIPT, expectations of involvement in the project, the type and level of interaction with the EQUIPT team, benefits gained through working with EQUIPT, the perceived influence of SE on the project, and barriers to effective engagement.

An additional five interviews were carried out with EQUIPT researchers, one from each of the participating countries, soon after the EQUIPT baseline survey had taken place. These researchers had conducted the survey with stakeholders in which the single SEE-Impact study question was asked. Our objective was to gain an understanding of stakeholder recruitment and of attitudes to EQUIPT more generally. These interviews proved useful for understanding more about stakeholders, including why some people who had been identified as potential stakeholders did not wish to take part in EQUIPT, what sorts of questions stakeholders had asked about EQUIPT, and stakeholders' attitudes to the project more generally.

Observations

EQUIPT reduced the number of stakeholder events from 18 to six. SEE-Impact researchers carried out observations of all six SE events that did take place. These comprised four events for EQUIPT team and RAG members, and just two events for wider stakeholders. The number of stakeholders who took part in the six events ranged from 22 to 60. There was a broad spread of the types of key and wider stakeholders in terms of the five categories identified by the EQUIPT project team. Locations for the events were agreed by the EQUIPT team based on venue availability, and on convenience and practicality for stakeholders and EQUIPT team members. The events comprised one in Maastricht, two in Brussels, one in Budapest, one in London and one in Zagreb.

An additional six observations of EQUIPT project team teleconference meetings, which were held approximately monthly, were carried out. Attempts to observe more of these meetings met with a number of challenges: unmanageable timing; details of meeting dates and times on occasion not reaching the SEE-Impact team; and technical difficulties with equipment, which affected the quality of the connection or prevented it altogether. Observing EQUIPT team meetings enabled us to learn promptly of future plans for SE. Both teams of researchers recognised that any amendments to those plans would necessarily affect SEE-Impact's prospective data collection activities, and early awareness was therefore advantageous.

Fig. 1: SEE-Impact data collection: different stages and types of activities over the 3-year study.

Each top panel denotes the year in which the activities beneath took place. The first (upper) horizontal line below the top panel marks EQUIPT activities in relation to each year. The second (lower) horizontal line depicts the various SEE-Impact data collection activities that took place. The numbered arrows show the type of data collection in relation to EQUIPT activities and each year of the project.

Discussion

SEE-Impact's methods for collecting data on SE in EQUIPT had to be modified, which resulted in a smaller quantity of data than had been envisaged. The need for these modifications unfolded during the course of the study as the EQUIPT project progressed. The overarching, determining factor was that SEE-Impact was wholly dependent on EQUIPT for its data. Of particular significance to SEE-Impact was EQUIPT's reduction in the number of its stakeholder events. The circumstances and implications of the modifications to SEE-Impact data collection are discussed below.

Stakeholder engagement in EQUIPT

The difference between EQUIPT's intentions for SE and what SE looked like in reality was a noteworthy finding for the SEE-Impact study in its exploration of how, and to what extent, SE can influence the use and impact of research. Another key finding was that the actual (as opposed to intended) type of SE in the project was not at the high end recognised by engagement models. INVOLVE (http://www.invo.org.uk/) sets out different levels of public involvement in research: engagement, the lowest level of public activity in research, in which information and knowledge about research is disseminated to research participants, colleagues or members of the public; participation, in which members of the public take part in a research project, for example by completing a questionnaire or participating in a focus group; and involvement, the highest level of activity, in which members of the public are actively involved in research projects and in research organisations, for example as joint grant holders or co-applicants on a research project. SEE-Impact found that although co-production was the intended role of SE in EQUIPT, in reality SE most closely fitted INVOLVE's participation. EQUIPT stakeholders gave feedback on models of the ROI tool by taking part in surveys and interviews. Details of this issue and of the contribution of SE to the EQUIPT project are not elaborated on here; rather, the key findings of the SEE-Impact study are presented in Boaz et al. (2018, 2021).

Constraints within the structure of relationships

SEE-Impact data collection methods were initially designed on the basis of EQUIPT's original intentions to engage stakeholders in particular ways and through particular events during the project. To gather data on SE, SEE-Impact was dependent on EQUIPT's arrangements for when, how and the extent to which it would engage its stakeholders; it was not our intention, nor our role, to influence these arrangements. When the nature and number of stakeholder events underwent revision during the EQUIPT project, so too did SEE-Impact data collection activities. Dependency implies that one party is in a position, to some degree, to grant or deny, facilitate or hinder, the other's gratification (Emerson, 1962). SEE-Impact was dependent on SE occurring in EQUIPT's work in order to have something to explore and report on, and in order to work with stakeholders to learn more about their involvement in the EQUIPT project. Both of these needs were to some extent compromised, however, because EQUIPT significantly reduced SE in the co-production of the ROI tool and because the SEE-Impact surveys with stakeholders were reduced (and conducted) by EQUIPT. Thus, SEE-Impact's opportunities for gathering data were somewhat hindered.

The dependency of one actor on another can engender an imbalance of power relations. Emerson (1962), for example, claims that 'power resides implicitly in the other's dependency' (p. 32). Power imbalances between professionals and members of the public have been found to exist around stakeholder or patient and public engagement in healthcare service development, where the views and knowledge of professionals are seen as having greater value or legitimacy (e.g., O'Shea et al., 2019). Lunde et al. (2012) point out that 'it is in the relationship between the individual and the institution that power operates most clearly' (p. 207). Conspicuous power relations are also believed to exist in interdisciplinary collaborative research (Lunde et al., 2012). SEE-Impact possessed little control over its data collection by the very nature of the study: it was prospectively exploring SE in another research project. To have control would have necessitated influencing SE, which in turn would have defeated SEE-Impact's objective of tracking SE in EQUIPT.

However, despite this lack of control, we did not experience an imbalance of power relations between SEE-Impact and EQUIPT. On the contrary, EQUIPT formed bridges between the two teams, which served SEE-Impact well in terms of recruiting stakeholders to its study. SEE-Impact researchers were treated by EQUIPT as equals, as an extension of the EQUIPT team. Some members of the EQUIPT team joined SEE-Impact team meetings as a way of providing updates on the development of the project, especially with regard to SE and the changes that were taking place. SEE-Impact researchers, in turn, gave presentations to EQUIPT updating it on their work.

We did, however, observe power relations between EQUIPT and its funder, which directly influenced the reduction of SE in EQUIPT. The EQUIPT project had a responsibility to its funder (which had invested heavily in the project) and had entered into an agreement to produce an outcome, offering value for money, within a stipulated period of time and with no flexibility; EQUIPT was answerable to and bound by the rules of that agreement.

The non-negotiable final end date for the EQUIPT project was very much a priority focus of the team as it strove to meet deadlines and provide deliverables. There had been delays at the start of the project, and a further delay was imposed by the complexity of modelling inter-country decision support tools, which needed to be made available within 36 months of the start of the project. Stakeholder engagement was strategically organised around the objectives and timing of each work package, and the amount of time required to facilitate 18 stakeholder events would not have allowed EQUIPT to keep within its timelines. The commitment of time and resources is among the costs attached to co-productive research that Oliver et al. (2019) argue are often overlooked in studies of SE, along with professional and personal risks for stakeholders and researchers, conflict, and differences of opinion about the purpose and role of co-production; these issues were evident within the EQUIPT project (Boaz et al., 2021).

In a sense there was a domino effect at play here within a hierarchical structure, with the EQUIPT funder positioned at the top (the navigator), EQUIPT occupying the position below (the driver) and SEE-Impact at the bottom (the passenger), each position signifying the level of control over the research process that its occupant possessed. The EQUIPT project and the SEE-Impact study were each affected by the requirements or actions of the position above.

In stark contrast to the EQUIPT project, there were no overt power imbalances or control issues between SEE-Impact and its funder that influenced any aspect of the research process. No regular deliverables beyond annual and end-of-award reporting were required; SEE-Impact study methods could have been questioned, but they were not.

Reflections on the data collected

The SEE-Impact methodological design evolved into something resembling a 'bricolaged' approach (Vandenbussche et al., 2019), involving the use of 'the tools and means "at hand" to accomplish knowledge work' (Kincheloe, 2004, cited in Vandenbussche et al., 2019) and adopting a flexible, emergent construction and readjustment of research design whereby '[…] new tools or techniques have to be invented or pieced together […]' (Denzin and Lincoln, 2011, cited in Vandenbussche et al., 2019). The SEE-Impact research design underwent various modifications, or some piecing together, which involved tapping into the data collection activities that were available or 'at hand', including interviews with EQUIPT researchers and observations of EQUIPT team meetings.

Without doubt, the SEE-Impact study did not gain the quantity of data on SE in EQUIPT that was expected from the follow-up interviews, the two surveys and the observations of SE events. The follow-up interviews were subject to participant attrition, a well-documented challenge that constitutes a major problem in longitudinal research. Relocation, changing schedules and cost implications are some of the factors associated with participant attrition (Plano Clark et al., 2015; Barry, 2005). SEE-Impact achieved just two follow-up interviews with stakeholders; furthermore, perhaps unsurprisingly, both were with EQUIPT team or RAG members rather than wider stakeholders, which produced limited and specific data.

The other two areas where the quantity of data dropped, surveys and observations, were directly related to EQUIPT's own methods of SE and to SEE-Impact's dependency on EQUIPT for these data. With the surveys, SEE-Impact questions were incorporated into EQUIPT's surveys because of EQUIPT's concerns about overburdening stakeholders with two sets of surveys around the same time. Most notably, the baseline survey contained only a single SEE-Impact question. The result was that SEE-Impact did not gain the amount of data that had been anticipated, but rather a large response to one question. An advantage was the 100 per cent response rate, which would likely not have been achieved if the full SEE-Impact baseline survey had gone ahead independently of EQUIPT. However, while this take-up demonstrates the good reach of the SEE-Impact survey question, a disadvantage was that, because only one question was asked, the depth of our learning was limited.

For the second survey, several SEE-Impact questions were included in EQUIPT's survey, which, together with a high response rate, produced a larger quantity of data than the baseline. Overall, the surveys produced meaningful data. We acknowledge, however, the potential for bias given that both SEE-Impact surveys with stakeholders were undertaken by EQUIPT researchers, which presented a potential conflict of interest because the survey questions related to stakeholders' experience of being involved in EQUIPT. It is therefore possible that social desirability affected responses: stakeholders might have answered questions more favourably, or felt less able to respond as openly and honestly about their experience of involvement in EQUIPT, than if SEE-Impact researchers had been asking the questions.

It was the substantial reduction in the number of stakeholder events, however, that presented the greatest challenge to SEE-Impact. It is reasonable to suppose that this development could potentially have created an even greater problem for EQUIPT. Whether a larger number of stakeholder events would have made a significant difference to EQUIPT's outcomes is unknown. Fewer and fewer resources were allocated to SE in terms of stakeholder events; in theory SE was important, but in reality it was not the main priority. EQUIPT had a responsibility to its funder to meet the agreed timelines for the project, as discussed above, and resources did not allow all the planned stakeholder events to go ahead.

Had all 18 events gone ahead as EQUIPT originally planned, SEE-Impact observations would arguably have gained broader insights into how SE can operate at a higher level, or in the co-production of a ROI tool. This was a disappointing and unexpected development, and we had to reassess our methods and find other 'tools and means at hand' to compensate for this deficit. The other means identified were interviewing EQUIPT researchers and, at the team's invitation, observing EQUIPT project team meetings.

SEE-Impact itself was fundamentally a victim of its own findings. The study found that SE in EQUIPT was not as prevalent as had been intended, and as a result the quantity of data SEE-Impact could collect was impeded. The quality of the data, however, was not compromised; the first round of interviews with stakeholders provided a large volume of rich data. These interviews were conducted by SEE-Impact researchers at least a year after the baseline survey took place, which enabled us to gather data that the surveys did not. The additional interviews carried out with the EQUIPT researchers who had conducted the baseline survey also proved useful for filling some of the gaps in the baseline data. The observations of EQUIPT team meetings provided valuable insight into the vision for SE; the challenges associated with SE and how these were experienced and managed; and differences in attitudes towards SE among team members, along with the related tensions that emerged on occasion (Boaz et al., 2021).

Cross-team relationships

One of the EQUIPT team members occupied the role of gatekeeper for the SEE-Impact study, enabling access to data sources. Reeves (2010) emphasises the importance of researchers’ relationships with gatekeepers in terms of negotiating access to participants and gaining consent to contact them. The gatekeeper not only reinforced links between the two research teams more widely and facilitated SEE-Impact contact with stakeholder participants, but also shared various papers relating to the project, including meeting papers, and the data and analysis from stakeholder engagement in the EQUIPT project (in line with the combined ethical approval obtained for the two studies).

A relationship with the gatekeeper had been formed prior to the start of the SEE-Impact study because some members of the two research teams had collaborated on previous projects. These relationships had a long history, and the part this played in the SEE-Impact study should not be underestimated. At times the connection involved SEE-Impact researchers walking a tightrope between maintaining a social relationship and being critical about what we were observing; on one occasion, for example, as SE was reducing, one of the EQUIPT team asked SEE-Impact researchers to reflect on how they were 'doing' with SE. In our view, this did not compromise the study. We have reflected, however, on the potential for the relationship between members of the two teams to generate bias in our findings, for example whether the connection influenced our view of SE in EQUIPT, and whether there was an element of one team wanting to please the other. We believe there was mutual respect and consideration for each team's research, rather than a need to work in a way that would be seen in a favourable light by the other. There was a significant degree of influence from EQUIPT on the SEE-Impact study, but we consider this related to the inevitable dependency, and to the type and level of SE data we consequently had access to, as discussed above, rather than to pre-existing links or any sense of obligation between the two teams that may have influenced our interpretation of the findings. Before and during the study, however, regular discussions with members of EQUIPT about plans for data collection did take place; these discussions were useful to SEE-Impact in terms of accessing participants and of transparency around the need to modify our data collection methods, and they might not have occurred if these relationships had not existed. In our experience, investing in relationships between the two research teams reaped substantial rewards.

The prospective approach

Longitudinal, prospective research can present challenges, not least around gaining access to data (Greenhalgh et al., 2017). Retrospective research is considered to have many advantages over prospective research (Song et al., 2010), including the peace of mind of knowing the type and volume of data that exist, and that those data are available and accessible to researchers. A retrospective approach to the SEE-Impact study, however, would likely have presented challenges of a different nature, for example in relation to poor recall (White et al., 1998), which may have resulted in fewer and/or less robust data with which to achieve the study's objectives.

There are some important strengths to prospective research, which we believe benefited SEE-Impact. The rationale behind the prospective approach was to interpret 'developments as they occur, on the basis that change can be best understood contemporaneously rather than retrospectively' (Mason, 2002, p. 31). The longitudinal, prospective nature of SEE-Impact enabled us to capture first-hand, in detail and in 'real time', how SE evolved and what it looked like, from the start of the EQUIPT project to the end. Observing EQUIPT team meetings, for example, allowed us to gain 'knowledge from "within" and "in-between"', enabling us to 'become familiar with the research context: its protagonists, the collaborative set-up and atmosphere' (Vandenbussche et al., 2019, p. 10), which greatly contributed to our learning of what SE engendered, what it involved and what it required, from the perspectives of stakeholders and EQUIPT researchers.

The EQUIPT project set out with clear and ambitious plans for high-level SE, but these gradually became diluted, and it was the prospective nature of the SEE-Impact study that enabled us to witness the circumstances surrounding these developments as they unfolded. The prospective approach facilitated understanding of how, and the conditions under which, SE can best help maximise the impact of health research; insight into the ways in which, and extent to which, stakeholders shape knowledge translation processes (Borst et al., 2019); and the production of a set of indicators that could be used to identify SE with potential for impact (Boaz et al., 2018).

Conclusion

The aim of this paper has been to contribute to understanding of the key challenges of data collection when using a prospective approach to explore SE in health research. The SEE-Impact study of SE in the EQUIPT project demonstrates the unpredictability of the challenges to data collection methods. These challenges related mainly to the need to gather data while, at the same time, the project being researched (EQUIPT) was making modifications to aspects of its own data collection.

It is important, however, to acknowledge that many of the challenges presented in this paper, and many features of SEE-Impact, are peculiar to this study, and generalisability is therefore limited. Nevertheless, some characteristics of this study are likely to have resonance for other prospective studies. With increasing emphasis on, and calls for, public engagement in research and for engagement to be evaluated for impact, we hope that the challenges SEE-Impact and EQUIPT both experienced in relation to SE will provide some insight for future research that seeks to respond to those calls.

A key lesson learned from the SEE-Impact study relates to the lack of control over data collection methods and its subsequent impact on the quantity of data. In our experience, lack of control is a feature of the prospective approach, but it is also specific to research that studies research. This paper highlights why, despite the challenges, we would still favour a prospective approach to exploring SE in research over a retrospective one. It was also the case that decisions around SE were to a large extent beyond the control of EQUIPT or, at the very least, heavily influenced by the restrictions the project faced. In relation to this point, the EQUIPT project illustrates that funders need to allow flexibility in research that involves SE and to recognise the related challenges that may impact on a project's deliverables and timelines. The availability of resources is essential if SE in research is to occur with maximum effectiveness.

A further key point is the importance of viewing the planning of methods in a study like SEE-Impact as an ongoing process, one that requires flexibility, open-mindedness and opportunity so that any necessary amendments can be made with the least disruption to participants and to the methodological rigour and validity of the research. Research design involves factoring in as many potential pitfalls or challenges around data collection as possible and, where relevant, sharing these plans as a way of checking for potential difficulties. As we discovered, however, careful and collaborative planning does not necessarily insure against challenges to data collection arising. In theory, forewarned may be forearmed; in practice, prospectively researching research can nonetheless present unexpected challenges.