Introduction

The World Health Organisation declared that COVID-19 was no longer a public health emergency of international concern on 5th May 2023, but many people remain affected by the long-term sequelae of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) infection1. Post-COVID-19 condition, or Long COVID, is defined as the presence of relevant symptoms at least three months after infection, with symptoms lasting for at least two months and with no other explanation2. It is associated with a wide range of symptoms and symptom clusters, including fatigue, breathlessness, brain fog, and pain3,4. These symptoms often have significant impacts on health, quality of life (QoL) and work capability5. A recent study estimated that 6.2% of individuals who had symptomatic SARS-CoV-2 infection experienced long-term symptoms6, which would amount to over 40 million cases of Long COVID worldwide6,7. The United Kingdom’s Office for National Statistics estimated that there were 1.9 million cases (2.9%) in the UK1,8.

Despite the high prevalence of Long COVID, there are currently few evidence-based treatments available9,10. Existing treatment approaches are based on expert consensus rather than on evidence from randomised clinical trials. A systematic review of non-pharmacological treatments for post-viral syndromes, including Long COVID, found few relevant clinical trials11. It highlighted the urgent need for such studies to improve the evidence base for the support being delivered to the growing number of affected people.

As part of the NIHR-funded Therapies for Long COVID (TLC) Study, we held a research priority setting consensus workshop involving patients, clinicians and other relevant stakeholders12,13. One key outcome of this workshop was the prioritisation of a decentralised clinical trial (DCT) platform to evaluate the effectiveness and cost-effectiveness of pacing and other non-pharmacological interventions to support people with symptoms associated with Long COVID. A DCT is a trial conducted without in-person contact between the study team and participants14. Since the COVID-19 pandemic, there has been an increase in digitally enabled research in many clinical specialties, including Long COVID15. Decentralised RCTs have been shown to increase inclusivity16 and to allow participation from a wider pool of participants in settings that are more representative of the context in which interventions will be used17. This is particularly relevant to people with Long COVID, who are often homebound and have difficulties attending research centres.

DCTs have the potential to evaluate remotely delivered interventions at scale, across large geographies, and at potentially lower cost than traditional clinical trials17. Their use has been fuelled by the implementation challenges brought on by the COVID-19 pandemic18, but there remains much to be learned about the factors influencing the successful implementation of these trial designs. We therefore undertook a study to assess the feasibility of conducting a DCT of non-pharmacological interventions for Long COVID in adults using a digital study platform. As an exemplar, we assessed pacing interventions, which guide the balancing of rest and activity to prevent fatigue and post-exertional malaise. We aimed to assess the feasibility of recruiting adults with self-reported Long COVID through social media and of delivering DCT processes, including the interventions, via a digital study platform (Aparito Atom5™). We also aimed to assess participants’ engagement with the interventions, retention and attrition rates, data completeness, the feasibility of capturing data relevant to an economic evaluation, and participants’ views on all aspects of platform trial design and implementation.

Results

Recruitment

A total of 85 people with self-reported Long COVID were recruited to the study. Seventy participants (82.3%) were female and 79 (92.9%) were of white ethnicity. The mean age was 46.1 years (range 25–71). Further participant characteristics can be found in Table 1.

Table 1 Participant characteristics.

On average, 14 participants per week were recruited to the study over a 6-week period between October and November 2022, through the recruitment routes described in the Methods. The recruitment advert was reposted on social media at weeks 2 and 4, which explains the slight surges in recruitment after those weeks (Fig. 1).

Figure 1. Number of participants recruited in the study.

Challenges with recruitment

At the start of the third week of recruitment, the research team noticed an unusually high number of people (n = 111) joining the study within the same day. Of these 111 participants, 28 were randomised to one of the four intervention arms, completed study questionnaires, and received vouchers. The research team suspected that these were imposter participants who had joined the study to receive the vouchers. After examining the participant data more closely, the research team made several observations that confirmed their initial suspicions:

1. All 111 of these participants were from a single location, Lagos, Nigeria.

2. They went through the process of registering onto the study much more quickly than other participants (within seconds rather than several minutes).

3. They included an unusually high proportion of men compared with the typical Long COVID population, which includes a higher proportion of women19.

4. The email addresses used to join the study were all Gmail accounts.

5. Five of the 28 suspected imposter participants who had been randomised had the same surname.

After discussion, the research team stopped vouchers from being sent automatically to anyone who joined the study and was randomised, and agreed to remove the imposter participants from the dataset.
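These checks were applied manually; as an illustration only, the sketch below shows how such heuristics could be encoded as automated screening flags. All field names and the review threshold are hypothetical assumptions, not part of the Atom5™ data model or the study’s actual procedure.

```python
from collections import Counter

def imposter_flags(p, surname_counts):
    """Return heuristic flags for one registration record, mirroring the
    five observations above. All field names are hypothetical."""
    flags = []
    if p["location"] == "Lagos":                   # 1. single shared location
        flags.append("shared_location")
    if p["registration_seconds"] < 60:             # 2. signed up within seconds
        flags.append("implausibly_fast_signup")
    # 3. (sex imbalance is a cohort-level check, handled outside this function)
    if p["email"].lower().endswith("@gmail.com"):  # 4. uniform email domain
        flags.append("uniform_email_domain")
    if surname_counts[p["surname"]] >= 5:          # 5. repeated surnames
        flags.append("repeated_surname")
    return flags

def flag_for_review(participants, threshold=2):
    """Flag registrations matching two or more heuristics for manual review
    (the threshold of 2 is an arbitrary illustrative choice)."""
    surname_counts = Counter(p["surname"] for p in participants)
    return [p for p in participants
            if len(imposter_flags(p, surname_counts)) >= threshold]
```

In practice such flags would prompt manual review rather than automatic exclusion, since legitimate participants can easily match any single heuristic.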

Randomisation

Overall, 230 people visited the study website, 133 consented to take part, and 85 were randomised (85% of the initial target sample size), as shown in Fig. 2. Two female participants formally withdrew from the study (one did not like the pacing app and the other did not have sufficient time to participate).

Figure 2. CONSORT diagram.

Twenty-one participants were randomly allocated to each of the pacing video, pacing book, and usual care arms, and 22 participants were randomly allocated to receive the pacing app (Fig. 2). Participant characteristics were well balanced in terms of age, sex, and fatigue severity (Table 1).

Data collection

Patient reported outcomes measures (PROMs)

All participants completed the ‘Questions about you’ questionnaire (a questionnaire covering health and demographic questions) and the Symptom Burden Questionnaire™-Long COVID (SBQ™-LC) at baseline, as this was a requirement for randomisation to one of the four study arms. Seventy-eight (91.7%) completed the remaining patient-reported outcome measures (PROMs) (Table 2). Following baseline data collection and randomisation, just over half of participants completed the PROMs at weeks 4 and 8 (n = 44) and just under half completed them at week 12 (n = 38).

Table 2 Percent of participants with complete data over time.

Healthcare resource use

The healthcare resource use questionnaire asked whether participants had used any types of healthcare services (either via a face-to-face visit, or via telephone or video call) in the last three months. In total, 79 of the 85 participants (92.9%) completed the healthcare services use questionnaire at baseline. Completion decreased by over 50% at follow-up, with 38 of the 85 participants (44.7%) completing the questionnaire at week 12. Most of the participants who completed the questionnaire at baseline had used healthcare services at least once (n = 75, 94.9%). Healthcare resource use data, covering both primary and secondary care, were collected. The volume of healthcare service use (i.e., how many times a service had been used) was not fully captured, with 13% of participants at baseline and 11% at follow-up reporting having used healthcare services but not providing further details. GP visits/calls and Long COVID clinics were the services most frequently accessed.

Similarly, 79 of the 85 participants (92.9%) completed the prescription use questionnaire and the questionnaire on private costs (non-prescription medication use/work productivity) at baseline; 38 of the 85 participants (44.7%) completed these questionnaires at week 12. At baseline, over half of participants reported prescription use and private costs (n = 45, 52.9% and n = 59, 69.4%, respectively) (Table 3). Data on lost working hours and time off work can be used to estimate the cost of sickness absence, as illustrated below.

Table 3 Prescription and private costs associated with Long COVID among participants.
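The study did not specify how sickness absence would be costed; one common option for a future economic evaluation is the human capital approach, which values lost working hours at the gross wage rate. A minimal sketch follows, with an entirely hypothetical wage rate:

```python
def sickness_absence_cost(lost_hours, hourly_wage=16.0):
    """Human capital approach: value lost working hours at the gross wage.
    The default hourly_wage is hypothetical, not a value from the study;
    a real analysis would use national age- and sex-specific earnings data."""
    return lost_hours * hourly_wage

# e.g., a participant reporting 14 lost working hours over three months
cost = sickness_absence_cost(14)  # 224.0, in the currency of the wage rate
```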

Feasibility questionnaire

The feasibility questionnaire was completed by 28 participants at week 12 and showed that most of those who completed it (n = 24, 85.7%) had used the interventions at least once. Fewer participants allocated to the pacing app used the intervention than in the other intervention arms. The pacing book was read three or more times over a period of at least 2 weeks by eight of the nine participants who had used it (Table 4). Of the participants who had watched the video (n = 10), three had done so over a period of at least 2 weeks, on average 1.8 times. The pacing app was used daily for 3 months by four of the five participants (80%) who had used it.

Table 4 Feasibility questionnaire exploring intervention use.

Qualitative findings

Qualitative interviews were used to explore participant views on all aspects of platform trial design and implementation. Nineteen participants in the study and two people who consented but were not randomised were interviewed. Participants were generally motivated to take part in the study to support research, even if the outcome was not successful:

We don’t know what’s going on. So the more research, even if it’s negative, the more research the better (Participant 20)

When asked whether the vouchers received after each timepoint were a reason for taking part, the consensus was that they were appreciated but were not the main motivation for participation.

In terms of onboarding onto the study via Atom5™, all but one interviewee managed to download the app without any technical problems. The interviewee who did not manage to download the app attributed this to a faulty mobile device (which “kept crashing”) rather than to the Atom5™ platform. Other technical issues related to the QR codes provided on the participant information sheet to navigate participants to the Atom5™ download page. The QR codes reportedly did not always work on tablet computers (though they did on smartphones), sometimes required multiple scans, and could not always be reissued after an application crash that required reinstallation.

Participants in the qualitative interviews said that the randomisation process through Atom5™ was seamless and they were satisfied with their intervention allocation. This included participants who were randomised to the usual care arm, although they would have preferred to be allocated to one of the pacing interventions. Only one participant withdrew from the trial because of the study arm allocation (pacing app). This participant explained that they had used the pacing app before and disliked it. They therefore felt that they would not be able to contribute to the study and formally withdrew:

That’s the reason why I decided um to drop out is I got the app, and then I realized I’d use the app before… I had tried that app before, and I really didn’t get on with it. (Participant 87)

Among individuals who had consented but had not been randomised and therefore did not take part in the trial (n = 48), reasons for not continuing included forgetting that they had downloaded the app, being unclear about what to do after downloading it, and the app malfunctioning following phone software updates.

All participants reported that they could relate to the included PROMs and believed the questions were relevant to Long COVID. Most participants said that the PROMs and the other study questionnaires were easy to complete and took approximately 20–25 min on Atom5™ at each study timepoint. Several said that they would be happy to complete the PROMs more often than specified in the study protocol (e.g., weekly or fortnightly), as their symptoms were perceived to change more quickly than the follow-up intervals, and over a longer follow-up period (12 months). Two participants felt that there were too many questionnaires and that they might not have taken part in the study had they known how many they were required to complete.

Qualitative data alongside the feasibility questionnaire provided insights into the use of interventions during the trial. Reasons for not using the allocated intervention included not receiving the intervention by post (pacing book) and using an alternative pacing app. Amongst the nine participants who used the pacing book, one used it only partially because of a lack of time and social circumstances:

Don’t think pacing is working for me because of my job and living alone. (Participant 61)

All but one of the 19 interviewees who took part in the qualitative study reported using the intervention they had been allocated to at least once throughout the 12-week follow-up period. The participant who did not use it said they had no interest in pacing and had taken part in the trial only to find a cure for Long COVID.

Most of the interviewees said that they used the interventions mainly in the early parts of the study (the first 2 to 3 weeks) and less so towards the middle and end. One interviewee who was randomised to the pacing book reported using it before each follow-up timepoint. Most interviewees had no technical issues with Atom5™ when accessing the interventions. However, one participant reported that the link to the pacing video disappeared from Atom5™ after the first follow-up timepoint (week 4); as a result, they were not able to access the video after week 4.

Participants were particularly positive about the clinical alert functionality within the digital platform, which notified the clinical research team to contact them with advice if concerning symptoms were reported. Participants found the phone call reassuring and felt acknowledged. One interviewee found the phone call unnecessary as they felt well, and another would have liked to be warned beforehand about receiving a phone call, although this information was in the participant information sheet:

It was useful, and it wasn’t a burden at all, but maybe with informing people beforehand. I’ve got the long Covid clinic, GP, and other health professionals that may have contacted me. It might not have been an immediate link that somebody would make that was speaking to a nurse (Participant 136)

Participants were generally positive about their experiences of using Atom5™, but several interviewees reported that after receiving notifications to complete questionnaires, they had to wait over a minute before the questionnaires appeared or had to close and reopen the app for them to appear.

When asked about the reasons for not completing the follow up PROMs, some participants mentioned that they had not received app notifications and that they otherwise would have completed those measures. However, it was unclear whether the interviewees had turned on notifications on their device, although they had been reminded to do so by the research team. Other reasons mentioned by some interviewees included forgetting that they had downloaded Atom5™ and completed the PROMs at baseline, and questionnaires not being received despite notifications being turned on.

Several interviewees reported that they found the health resource use questionnaire too long and difficult to complete.

Overall, interviewees were enthusiastic about the Atom5™ app, finding it easy to use and beneficial to research. They liked having all the study questionnaires in one place; however, some noted that a synchronisation pop-up message that appeared regularly on screen was distracting.

Areas for improvement for the delivery of future digital platform trials as suggested by participants and supplemented by reflections of the research team can be found in Box 1.

Box 1 Suggestions to improve delivery of DCTs.

Discussion

This study explored the feasibility of using a digital study platform to undertake a DCT for assessing the effectiveness and cost-effectiveness of non-pharmacological interventions for adults with Long COVID, using pacing interventions as an exemplar. Our study has shown that while DCTs present several advantages, they also face a range of challenges that can affect their feasibility.

The study demonstrated that eligible participants could be recruited, e-consented, and randomised within the study platform, and that baseline data, including PROMs, could be successfully captured for most of the recruited participants. Furthermore, the study interventions were delivered successfully to participants, with reasonable levels of use following treatment allocation. Data relevant to an economic evaluation were also successfully captured from participants who completed the study, although additional information may need to be collected regarding appointment length and mode (face-to-face, phone call or video call).

There was generally positive feedback from participants on the study platform, with good user engagement. Digital research platforms such as Atom5™ are increasingly used in research, especially as restrictions on movement during the COVID-19 pandemic increased the use of digital technology for many tasks that would previously have been conducted in person17. These platforms have been found acceptable to users experiencing Long COVID symptoms20 and to individuals with other conditions21,22,23,24,25, a trend that aligns with the findings from our study.

However, the study also highlighted important challenges for this study design, including limited recruitment. The study demonstrated some of the potential shortcomings of using social media and email correspondence for trial recruitment and the need for alternative recruitment strategies. In the context of DCTs, concerns often arise around the exclusion of underserved populations16,17,26, which may explain why only 6 of the 85 participants (7.1%) were from ethnic minorities. A notable barrier for certain population groups is the digital divide in access20. Whilst DCTs have the potential to allow rapid recruitment, they may not always capture a representative target population17. Further research is needed to develop and implement inclusive online recruitment strategies and to empirically assess their impact on the recruitment of representative study populations.

A second challenge was imposter participants, highlighting the need for robust processes to prevent and swiftly identify such individuals before they have an opportunity to join a study. Using social media for recruitment offers access to a substantial pool of participants, thereby potentially shortening the recruitment period27. However, this approach may attract individuals who join studies primarily to benefit from associated financial incentives28,29. Although the motivations of most participants interviewed in our study did not revolve around financial rewards, we suspect that they were a key incentive for imposter participants28. Ways to reduce the risk of imposter participants joining a study could include not providing incentives, minimising incentives, or delaying sending the incentives while the research team checks that participants are genuine. However, as exploring this was outside the remit of this study, it is difficult to say whether this would have made a difference. The issue of incentives is a complex one that requires further research in the context of decentralised trials30.

Attention therefore needs to be paid to the way participants are recruited, especially via social media. This may include improving the way the desired study population is targeted, such as through the use of relevant and precise hashtags. Researchers and software developers also need to collaborate on technical solutions to prevent imposter participants from creating fraudulent profiles, for example by restricting participation according to geographic location. While verifying participants’ identity would be one way to decrease the number of imposter participants, it raises further ethical issues that would need to be considered with the local Research Ethics Committee27.

Another important challenge was the high attrition rate and the consequent loss of data on study outcomes. This was perceived by participants to be because they had forgotten that they had downloaded the study app and signed up to the study. This may have been a consequence of brain fog and other cognitive deficits associated with Long COVID, which requires further exploration. Participants may also not have received notification reminders to complete follow-up questionnaires.

More robust processes are needed to improve participant follow-up to reduce attrition rates, including incorporating appropriate levels of communication between the research team and participants throughout the study to enhance participant experience and encourage completion of follow up questionnaires. Other suggested improvements included providing confirmation emails to participants following study onboarding, sending frequent reminders to complete follow up questionnaires via various means (app, email, text messages), and adding multimedia content to the study website to explain the study aims and procedures.

Despite issues with recruitment, most participants joined our feasibility trial for altruistic reasons, to help and contribute to research. However, some studies have shown that another factor influencing individuals’ willingness to take part in trials may be the severity of their condition31,32. It is important to consider the demographics of those recruited and the generalisability of findings to the target population.

The main strength of this feasibility study is its design, which included both a quantitative and a qualitative evaluation, including an assessment of the feasibility of data capture for a health economic evaluation. This design allowed the research team to obtain a more complete picture of the feasibility of a fully powered trial and of participants’ views and experience of using the study platform. Furthermore, the study was co-produced with individuals with lived experience of Long COVID, ensuring that it was patient-centred. The study also recruited participants who shared similar demographic characteristics to those with Long COVID in the general population, who are mostly aged 35 to 69 years, although females were overrepresented in our study (82.3% versus 58.2% in the general population)1.

Our study also has several limitations. Firstly, our sample was self-selected and data were self-reported, without verification from a clinician or electronic health record. Whilst this could be a limitation, patient-reported outcomes arguably provide important information on symptom burden, the primary target of the pacing intervention. Secondly, owing to the high loss to follow-up, only 28 participants completed the feasibility questionnaire, which provided a snapshot of participants’ views on usage of the interventions during the feasibility trial and on attempting to pace after using the interventions. In addition, our inclusion criteria required a positive SARS-CoV-2 PCR or lateral flow antigen test result. This meant that many people who caught SARS-CoV-2 but were unable to obtain a diagnostic test were not eligible to take part, which could have excluded a significant number of otherwise eligible participants and thus limited recruitment. Finally, it was not possible to state which recruitment methods worked best, although we have shown that recruitment increased each time the study advert was posted on social media.

Conclusion

Based on the experience of our feasibility study, a decentralised clinical trial platform can technically conduct most study procedures needed to evaluate non-pharmacological interventions for Long COVID. However, important challenges, such as limited recruitment, imposter participants, and high attrition rates, threaten the feasibility of conducting fully powered trials through this approach. Strategies to mitigate these challenges would need to be implemented before a fully powered decentralised clinical trial is attempted.

Methods and analysis

Study design

This was a feasibility study of a four-arm, parallel, open-label, randomised DCT investigating the feasibility of assessing the effectiveness and cost-effectiveness of three pacing interventions compared with usual care using a digital study platform (Aparito Atom5™), with a three-month follow-up period (Fig. 3). A qualitative evaluation was conducted to explore participants’ views on the trial design, including acceptability and adverse event reporting. We also assessed the feasibility of capturing cost and outcome data relevant to a health economic evaluation to inform the design of a future definitive RCT. The study was co-produced with individuals with lived experience of Long COVID13.

Figure 3. Study design.

This study was registered on the ISRCTN registry (Trial registration number 1567490).

Digital study platform (Atom5™)

The Atom5™ platform is an electronic data capture system developed by Aparito Ltd, a UK-based medical technology company. Atom5™ consists of two interfaces (Supplementary File 1):

(i) A clinician dashboard, accessed via a web browser, on which research team members manage participant information and undertake data downloads.

(ii) A participant-facing interface, accessed via an app on Android/iOS devices or via a web portal, through which trial participants input their data, including PROMs.

Participant recruitment

Participants were adults with a self-reported diagnosis of Long COVID (positive polymerase chain reaction (PCR) or lateral flow antigen test) who reported experiencing significant fatigue and had no record of hospitalisation within 28 days of the diagnosis12. Inclusion criteria can be found in Box 2. The TLC study was funded to research Long COVID in non-hospitalised adults, as previous studies had already focussed on hospitalised patients and the vast majority of people with Long COVID had not been hospitalised with acute COVID-19.

Box 2 Participant inclusion criteria.

Participants were recruited through the following strategies:

1. People who contacted the TLC Study team expressing a wish to participate in Long COVID research were sent the participant information sheet and a link to the feasibility study website.

2. Study advertisements were posted on social media (e.g., Twitter and Facebook), in Long COVID support groups (e.g., LongCOVIDSOS)33, on the TLC Study website34, and in relevant newsletters.

3. Study advertisements were distributed as posters to community groups identified by patient and public involvement members13.

All participants were invited to take part in a qualitative interview at the end of the study. People who initially consented to take part in the study but had not been randomised and participants who had consented but withdrew from the study were also invited for an interview.

Participants received both verbal and written explanations of the experimental protocol, which was in accordance with the Declaration of Helsinki, and signed the informed consent document. All experimental protocols were approved by the West Midlands Solihull Research Ethics Committee (21/WM/0203).

Baseline data collection

Baseline data collection included participants’ demographic characteristics (“Questions about you” questionnaire), anxiety (Generalised Anxiety Disorder 2-item [GAD-2])35, quality of life (EQ-5D-5L)36, shortness of breath (MRC Dyspnoea scale)37, fatigue (Functional Assessment of Chronic Illness Therapy—Fatigue [FACIT-F])38, COVID-19 recovery (COVID-19 core outcome measure for recovery), and healthcare resource use (Resources Use questionnaire). Data were also collected on Long COVID symptoms using the Symptom Burden Questionnaire™-Long COVID (SBQ™-LC)39. Further information on each PROM and baseline data collection can be found in Supplementary File 3.

Intervention

Participants were randomly allocated to one of three intervention arms or the control arm in a 1:1:1:1 allocation ratio:

  • Arm 1—Pacing app (Spoonie Day40)

  • Arm 2—Pacing video (The Why, When and How of Pacing | Long Covid’s Most Important Lesson41)

  • Arm 3—Pacing book (The Pocket Book of Pacing42)

  • Arm 4—Comparator (NHS Your COVID Recovery website43)

Participants in all study arms were signposted to the NHS Your COVID Recovery website. A detailed description of the four trial arms can be found in Supplementary Table 2.

Participants in arms 1 (pacing app) and 2 (pacing video) received a URL for the corresponding pacing intervention immediately after randomisation had taken place via a notification on Atom5™. The pacing book (arm 3) was only available in paper format and was therefore posted to participants’ home address within 72 h of randomisation.

Intervention arm participants were given brief instructions on how to use their respective pacing interventions. Reminders were automatically sent twice, on days one and three of the week after randomisation. In addition, all participants reporting red flag symptoms, such as chest pain, on the SBQ™-LC received a telephone call from a research nurse, although they were reminded to contact a healthcare professional if they were concerned about their well-being. The research nurses ascertained whether participants felt that their symptoms had been exacerbated by the intervention and reported any significant adverse events. A sketch of how such a red-flag rule might be expressed is shown below.
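The alert configuration within Atom5™ is not described in the paper; the following is a minimal sketch of how such a red-flag rule could be expressed, using hypothetical symptom codes rather than actual SBQ™-LC items:

```python
# Hypothetical red-flag symptom codes; the actual SBQ(TM)-LC items and
# Atom5(TM) alert configuration are not described in the paper.
RED_FLAGS = {"chest_pain", "severe_breathlessness", "fainting"}

def needs_clinical_alert(reported_symptoms):
    """True if any reported symptom is on the red-flag list, triggering
    the research nurse telephone call described above."""
    return bool(RED_FLAGS & set(reported_symptoms))

needs_clinical_alert(["fatigue", "chest_pain"])  # True
```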

Vouchers were sent electronically to participants after completing the PROMs at each study time point.

Randomisation

Randomisation was undertaken within Atom5™ using a computer-generated sequence at the level of the individual, following baseline data collection. A minimisation algorithm with a random element was used to achieve balance between trial arms on age (< 40, ≥ 40 years), sex at birth (male, female), and fatigue severity (severe versus mild/moderate fatigue, as measured on the SBQ™-LC). A minimal sketch of this type of algorithm is shown below.
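The exact algorithm implemented in Atom5™ is not described; the sketch below illustrates minimisation with a random element in the general style of Pocock and Simon, using a simple marginal-count imbalance score. The probability of choosing the minimising arm (0.8) and all encodings are illustrative assumptions, not the study’s settings.

```python
import random

ARMS = ["pacing_app", "pacing_video", "pacing_book", "usual_care"]
FACTORS = ["age_group", "sex_at_birth", "fatigue_severity"]

# counts[arm][factor][level] = participants already allocated to `arm`
# sharing that factor level
counts = {arm: {f: {} for f in FACTORS} for arm in ARMS}

def imbalance(arm, participant):
    """Simple marginal imbalance score: how many participants with the
    same factor levels are already in `arm`."""
    return sum(counts[arm][f].get(participant[f], 0) for f in FACTORS)

def allocate(participant, p_minimise=0.8):
    """Choose the least-imbalanced arm with probability p_minimise,
    otherwise a uniformly random arm (the 'random element')."""
    best = min(ARMS, key=lambda arm: imbalance(arm, participant))
    arm = best if random.random() < p_minimise else random.choice(ARMS)
    for f in FACTORS:  # update the running counts
        level = participant[f]
        counts[arm][f][level] = counts[arm][f].get(level, 0) + 1
    return arm

# Example allocation for one participant
new_arm = allocate({"age_group": "<40", "sex_at_birth": "female",
                    "fatigue_severity": "severe"})
```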

Participants were notified of the study arm they had been allocated to via a push notification through Atom5™. The allocation sequence was generated by Atom5™ and was unavailable to, and could not be influenced by, members of the research team.

Sample size

A formal power calculation was not required as this was a feasibility trial. The research team determined that 25 participants per arm would be sufficient to meet the study objectives, and therefore our total target sample size was set at 100 participants.

Qualitative evaluation

Participants who took part in the feasibility trial were invited to take part in qualitative interviews at Week 12. In addition, participants who formally withdrew and individuals who consented but did not take part in the study were also invited for a qualitative interview.

Outcomes

The outcomes of this feasibility DCT were:

1. The ability to achieve the recruitment target.

2. The feasibility of undertaking trial processes, including e-consent, baseline data collection, randomisation, delivery of the interventions, and capture of follow-up data through the digital study platform (Atom5™).

3. Participants’ engagement with the interventions.

4. Retention and attrition rates.

5. Data completeness, including the feasibility of capturing data relevant for an economic evaluation.

6. Participants’ views on all aspects of the trial design, including acceptability of randomisation to the various intervention arms, and adverse event reporting.

Follow-up

Follow-up data comprised the following PROMs: SBQ™-LC39, FACIT-F, COVID-19 core outcome measure for recovery, MRC Dyspnoea scale, GAD-2, and EQ-5D-5L. In addition, feasibility (Feasibility questionnaire) and healthcare resource use (Resources Use questionnaire) data were also collected (Supplementary File 4). All outcomes were captured on Atom5™ at baseline and at weeks 4, 8, and 12 post-randomisation (Supplementary File 4). Participants were reimbursed with a £10 gift voucher via Atom5™ for completing each set of questionnaires at each timepoint.

Qualitative evaluation

Semi-structured, one-to-one interviews were conducted with a sub-sample of participants who took part in the study. These interviews were conducted at the 12-week follow-up (end of study) to ensure participants had had enough time to use the pacing interventions and experience the study processes, so they could reflect on them. Interviews were conducted online (via Zoom/Microsoft Teams using University of Birmingham secure accounts) by a member of the research team (CM), an experienced qualitative researcher from a non-clinical background. Semi-structured topic guides (tailored to the different participant groups) were used to ensure key topics were consistently covered. The topic guides remained flexible and evolved based on findings.

The researcher offered participants breaks during the interview and participants were able to request breaks when they wished to do so or to reschedule a follow up interview if they became tired. Interviews were audio recorded using an encrypted digital recorder or on Zoom, except during the breaks. Recordings were transcribed verbatim automatically in Zoom and checked by the researcher for accuracy. The transcripts were anonymised and assigned a unique identification number. Participants who agreed to take part in a qualitative interview received a £10 Amazon voucher.

Ethical approval

Ethical approval was granted by the West Midlands Solihull Research Ethics Committee (21/WM/0203).

Data analysis

Trial feasibility

Simple descriptive statistics, including frequencies and percentages, were used to present each trial and feasibility outcome measure (e.g., the number and percentage of eligible participants enrolled, the number and percentage of participants randomised to each intervention arm, and the number and percentage of participants completing baseline data collection and follow-up questionnaires). A minimal illustration is given below.
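As an illustration only, the completion percentages reported in the Results reduce to simple proportions; the sketch below uses the PROM completion counts reported above (85 randomised; 44 completing at weeks 4 and 8; 38 at week 12):

```python
# PROM completion counts taken from the Results section.
randomised = 85
completed = {"baseline": 85, "week 4": 44, "week 8": 44, "week 12": 38}

for timepoint, n in completed.items():
    print(f"{timepoint}: {n}/{randomised} ({100 * n / randomised:.1f}%)")

# Attrition at week 12: 100 * (1 - 38/85), i.e. approximately 55.3%
attrition_week12 = 100 * (1 - completed["week 12"] / randomised)
```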

Given the nature of the interventions being assessed, there was no need to include stopping guidelines.

Qualitative evaluation

The computer-aided qualitative data analysis software NVivo was used to manage, sort, and code the transcribed data from the qualitative interviews. Reflexive thematic analysis was conducted with the aim of exploring and developing the main themes in the data. Two researchers independently coded two transcripts to cross-check the coding strategy and data interpretation. Data analysis was carried out simultaneously with data collection.

Feasibility of economic evaluation

The feasibility of collecting cost and outcome data to inform the methods for a future definitive economic evaluation was assessed. Issues assessed included trial processes for obtaining cost data and the acceptability and completion of questions within the questionnaires. As this was a feasibility study of an economic evaluation, the purpose was not to produce a definitive result with respect to the cost-effectiveness of the intervention, but to undertake exploratory analyses to refine the methods for an economic evaluation alongside a future definitive trial.

Patient and public involvement and engagement (PPIE)

Members of the PPIE group contributed to the design of the feasibility study, including the co-production of a virtual intervention and participant resources, advising on recruitment strategies, and informing the design and content of the Atom5™ platform13. The protocol was reviewed and approved by PPIE members12. The Guidance for Reporting Involvement of Patients and the Public (GRIPP2) checklist was used for the reporting and evaluation of PPIE for the TLC Study44.