Article Text

Population-level impact of a pulse oximetry remote monitoring programme on mortality and healthcare utilisation in people with COVID-19 in England: a national analysis using a stepped wedge design
  1. Thomas Beaney1,2,
  2. Jonathan Clarke3,
  3. Ahmed Alboksmaty1,2,
  4. Kelsey Flott2,
  5. Aidan Fowler4,
  6. Jonathan Benger5,
  7. Paul P Aylin1,2,
  8. Sarah Elkin6,
  9. Ana Luisa Neves2,
  10. Ara Darzi2
  1. 1 Department of Primary Care and Public Health, Imperial College London, London, UK
  2. 2 Patient Safety Translational Research Centre, Institute of Global Health Innovation, Imperial College London, London, UK
  3. 3 Centre for Mathematics of Precision Healthcare, Department of Mathematics, Imperial College London, London, UK
  4. 4 NHS England and Improvement, London, UK
  5. 5 NHS Digital, Leeds, UK
  6. 6 Imperial College Healthcare NHS Trust, London, UK
  1. Correspondence to Dr Thomas Beaney, Department of Primary Care and Public Health, Imperial College London, London, UK; thomas.beaney@imperial.ac.uk

Abstract

Background This study aimed to identify the population-level impact of a national pulse oximetry remote monitoring programme for COVID-19 (COVID Oximetry @home (CO@h)) in England on mortality and health service use.

Methods We conducted a retrospective cohort study using a stepped wedge pre-implementation and post-implementation design, including all 106 Clinical Commissioning Groups (CCGs) in England implementing a local CO@h programme. All symptomatic people with a positive COVID-19 PCR test result from 1 October 2020 to 3 May 2021 who were aged ≥65 years or identified as clinically extremely vulnerable were included. Care home residents were excluded. A pre-intervention period before implementation of the CO@h programme in each CCG was compared with a post-intervention period after implementation. Five outcome measures were assessed within 28 days of a positive COVID-19 test: (i) death from any cause; (ii) any ED attendance; (iii) any emergency hospital admission; (iv) critical care admission; and (v) total length of hospital stay.

Results 217 650 people were eligible and included in the analysis. Total enrolment onto the programme was low, with enrolment data received for only 5527 (2.5%) of the eligible population. The period of implementation of the programme was not associated with mortality or length of hospital stay. The period of implementation was associated with increased health service utilisation with a 12% increase in the odds of ED attendance (95% CI: 6% to 18%) and emergency hospital admission (95% CI: 5% to 20%) and a 24% increase in the odds of critical care admission in those admitted (95% CI: 5% to 47%). In a secondary analysis of CO@h sites with at least 10% or 20% of eligible people enrolled, there was no significant association with any outcome measure.

Conclusion At a population level, there was no association with mortality before and after the implementation period of the CO@h programme, and small increases in health service utilisation were observed. However, lower than expected enrolment is likely to have diluted the effects of the programme at a population level.

  • COVID-19


This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.


Key messages

What is already known on this topic

  • The COVID Oximetry @home (CO@h) programme was implemented in November 2020 to provide pulse oximeters to people with confirmed or suspected COVID-19 infection to support self-monitoring.

  • A pilot of the programme indicated that the pathway was safe for patients, but the effectiveness of the programme remains unknown.

What this study adds

  • Overall enrolment onto the programme in eligible people was low (2.5%).

  • At a population level in England, implementation of the programme was not associated with a change in mortality, and small increases in ED attendances and emergency hospital admissions were observed.

How this study might affect research, practice or policy

  • Our findings suggest the CO@h programme is a safe pathway for patients with COVID-19, but due to low total enrolment at a population level, further research is needed to identify whether the programme is effective at an individual level.

Background

Since the start of the COVID-19 pandemic, asymptomatic (‘silent’) hypoxaemia has complicated the assessment and care of patients with COVID-19.1 Hypoxaemia has been shown to be an important predictor of mortality and the need for hospital admission in patients with COVID-19, yet those patients with asymptomatic hypoxaemia may be unaware of dangerously low blood oxygen saturations.2 3 Pulse oximetry allows patients and clinicians to regularly monitor a patient’s oxygen saturation and promptly initiate escalation of care should deterioration occur, such as triggering hospital assessment or admission.1 Health systems across the world introduced remote monitoring pathways, including the use of pulse oximetry, to support the care of patients with COVID-19 outside hospital.4 5

In November 2020, NHS England and Improvement introduced the COVID Oximetry @home (CO@h) programme, recommending that all Clinical Commissioning Groups (CCGs; responsible for local healthcare commissioning) in England provide services to monitor patients with a diagnosis of COVID-19 at home using pulse oximetry.6 The service built on local remote monitoring services provided by individual CCGs and hospital trusts earlier in the pandemic. CCGs were responsible for establishing services in their area, although these could be shared between CCGs, and more than one could operate within a single CCG.7 People enrolled were provided with a pulse oximeter and encouraged to record regular oxygen saturation readings with advice to call emergency services for readings of 92% or less, or to contact primary care services for readings of 93%–94%. There was no single model for CO@h, with differences across sites in how readings were recorded and reported (eg, via an app or via paper and telephone) and in the frequency of staff contact.4 7 8
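To make these escalation thresholds concrete, the sketch below encodes the advice tiers described above (readings of 92% or less: call emergency services; 93%–94%: contact primary care services). This is a minimal illustration only; the function name, the behaviour above 94% and the absence of any trend-based logic are assumptions, not part of the published pathway specification.

```python
def co_at_home_advice(spo2_percent: int) -> str:
    """Map a single oxygen saturation reading to the CO@h escalation advice
    described in the text. Behaviour above 94% is assumed (routine monitoring)."""
    if spo2_percent <= 92:
        return "call emergency services"
    elif spo2_percent <= 94:  # readings of 93%-94%
        return "contact primary care services"
    else:  # 95% or above: no escalation specified in the text
        return "continue regular self-monitoring"


# Example: a reading of 91% triggers advice to call emergency services
print(co_at_home_advice(91))
```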

Patients were eligible for the CO@h programme if they had symptomatic COVID-19 and were aged 65 years or older or at high risk from COVID-19, although some sites adopted broader eligibility criteria and criteria could vary over time.7 9 Additionally, clinical judgement could be applied to consider other individual risk factors, including pregnancy, learning disability and socioeconomic deprivation. The programme accepted patients from primary care, NHS Test and Trace, ambulance services and A&E departments, in contrast to ‘COVID-19 Virtual Wards’ which aimed to support discharge of patients with COVID-19 from hospital.10

The clinical effectiveness of the CO@h programme on mortality and secondary care utilisation was unknown. The primary aim of this analysis is to identify differences in mortality and use of healthcare services at a population level after implementation of the CO@h programme among eligible people. A secondary aim is to identify the impact of the programme in sites with a high total enrolment onto the programme among the eligible population.

Methods

This study used a retrospective cohort of people eligible for the CO@h programme, comparing outcomes at a CCG level using a stepped wedge pre-implementation and post-implementation design. A population approach was chosen to reduce the impact of biases in patient selection which may occur at an individual level. Eligibility was defined as the population resident in England, with a positive COVID-19 PCR test result, who were symptomatic at the time of testing, from 1 October 2020 to 3 May 2021. Due to differing eligibility across sites and over time, and the role of clinical judgement, we selected for analysis the group of people who would have met the minimum eligibility criteria throughout: people aged 65 years or older, or those who were at high risk (see box 1). Those at high risk were identified through the NHS Digital Shielded Patient List as clinically extremely vulnerable (CEV).9 The conditions and risk factors determining CEV status are shown in the online supplemental appendix (p3 and box A1). Care home residents were excluded from the analysis, as previous work has suggested significantly higher mortality in this group.11
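As a concrete illustration of this cohort construction, the sketch below filters a hypothetical person-level table to the analysis population (symptomatic positive PCR test within the study window, aged 65 years or over or CEV, not resident in a care home). All table and column names are illustrative; the actual cohort was built from the linked national datasets described under 'Data sources and processing'.

```python
import pandas as pd

# Hypothetical person-level table assembled from linked records (illustrative columns)
tests = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "test_date": pd.to_datetime(["2020-11-20", "2021-02-02", "2020-10-05"]),
    "symptomatic": [True, True, False],
    "age": [70, 48, 82],
    "cev": [False, True, False],
    "care_home_resident": [False, False, True],
})

STUDY_START, STUDY_END = pd.Timestamp("2020-10-01"), pd.Timestamp("2021-05-03")

eligible = tests[
    tests["test_date"].between(STUDY_START, STUDY_END)
    & tests["symptomatic"]
    & (tests["age"].ge(65) | tests["cev"])  # aged >=65 years or clinically extremely vulnerable
    & ~tests["care_home_resident"]          # care home residents excluded
]
```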


Box 1

Eligibility criteria for the COVID Oximetry @home programme

  • Diagnosed with COVID-19: either clinically or positive test result AND

  • Symptomatic AND EITHER

  • Aged 65 years or older OR

  • Under 65 years and at higher risk from COVID-19 or where clinical judgement applied considering individual risk factors such as pregnancy, learning disability, caring responsibilities and/or deprivation. Pregnant women being referred to a COVID Oximetry @home service should also be asked to contact their maternity team for specific advice around pregnancy and COVID-19.

A lighter touch pathway should be available to any adult aged 18–64 years who has tested positive and has not been double vaccinated. This pathway is fully self-managed and escalated.

Five outcomes were selected to capture the impact on mortality and healthcare utilisation. Outcomes were defined as occurring within 28 days of the date of a positive COVID-19 test, for consistency with government-reported metrics12 (a worked data sketch follows the list below):

  1. Death from any cause.

  2. One or more A&E department attendances.

  3. One or more emergency hospital admissions.

  4. One or more critical care admissions (of those admitted to hospital).

  5. Total hospital length of stay in days, of those admitted who did not die within 28 days.
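The sketch below illustrates how two of these outcomes (28-day mortality and any ED attendance) could be derived with pandas; the remaining outcomes follow the same pattern. The data frames and column names are hypothetical and are not taken from the underlying datasets.

```python
import pandas as pd

FOLLOW_UP_DAYS = 28  # outcome window, measured from the date of the first positive test

# Hypothetical linked person-level tables with illustrative columns
cohort = pd.DataFrame({
    "patient_id": [1, 2],
    "test_date": pd.to_datetime(["2020-11-01", "2021-01-15"]),
    "death_date": pd.to_datetime([pd.NaT, "2021-01-30"]),
})
ed_attendances = pd.DataFrame({
    "patient_id": [1, 1],
    "attend_date": pd.to_datetime(["2020-11-05", "2020-12-20"]),
})

# (i) Death from any cause within 28 days of the positive test
cohort["died_28d"] = (cohort["death_date"] - cohort["test_date"]).dt.days.between(0, FOLLOW_UP_DAYS)

# (ii) One or more ED attendances within 28 days of the positive test
merged = ed_attendances.merge(cohort[["patient_id", "test_date"]], on="patient_id")
days_to_attendance = (merged["attend_date"] - merged["test_date"]).dt.days
attended_ids = merged.loc[days_to_attendance.between(0, FOLLOW_UP_DAYS), "patient_id"].unique()
cohort["ed_28d"] = cohort["patient_id"].isin(attended_ids)
```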

Data sources and processing

COVID-19 testing data were provided through the Second Generation Surveillance System,13 which collates the results of positive COVID-19 tests conducted in laboratories across England. Data were available from 1 October 2020 to 3 May 2021. This analysis used PCR tests, with symptoms documented at the time of testing.14 Where more than one test was recorded, only the date of the first test was used. Data on the number of patients enrolled onto the CO@h programme were submitted from CO@h sites via NHS Digital’s Strategic Data Collection Service.15 Primary care data were sourced from the General Practice Extraction Service Data for Pandemic Planning and Research (GDPPR).16 CEV status was sourced from NHS Digital’s Shielded Patient List linked to GDPPR. Hospital Episode Statistics (HES) data17 and the Emergency Care Data Set (ECDS)18 provided data on hospital admissions and ED (24-hour consultant-led or specialist) attendances up to 31 May 2021. Data on registration of deaths were sourced from the Office for National Statistics, with data available up to 5 July 2021. Datasets were linked using a deidentified NHS patient ID.

The study population were assigned to the CCG they were resident in when the test was performed. Patient demographic data, including age, sex, ethnicity and lower layer super output area (LSOA) of residence, were derived from GDPPR or, if missing, from HES or ECDS. LSOA was linked to measures of socioeconomic deprivation based on deciles of the Index of Multiple Deprivation (IMD) 2019 for England.19 Data on care home residence, body mass index (BMI) and smoking status were available from GDPPR only. Information on 12 chronic conditions was included, extracted from GDPPR (online supplemental appendix). For demographics and chronic conditions, the most recent codes up to and including the date of a positive COVID-19 test were used, to exclude codes which may have resulted from COVID-19 infection. For age, sex and ethnicity only, if no data were available prior to the date of the positive test, the earliest data recorded after the test were used. Full details of the datasets and cleaning approach are provided in the online supplemental appendix, with a link to the code lists.
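The rule for selecting covariate codes relative to the test date (most recent code on or before the date of the positive test, with a fallback to the earliest subsequent code for age, sex and ethnicity only) can be sketched as follows. The function and column names are hypothetical.

```python
import pandas as pd

def select_covariate_value(codes: pd.DataFrame, test_date: pd.Timestamp,
                           allow_fallback: bool = False):
    """Return the most recent coded value dated on or before the positive test.
    If none exists and fallback is allowed (age, sex and ethnicity only),
    return the earliest value recorded after the test; otherwise None.
    Expects columns 'code_date' and 'value' (illustrative names)."""
    before = codes[codes["code_date"] <= test_date]
    if not before.empty:
        return before.sort_values("code_date").iloc[-1]["value"]
    if allow_fallback:
        after = codes[codes["code_date"] > test_date]
        if not after.empty:
            return after.sort_values("code_date").iloc[0]["value"]
    return None

# Example: ethnicity codes for one person, with a positive test on 2021-01-15;
# the most recent pre-test code (2019-06-01) is selected
ethnicity_codes = pd.DataFrame({
    "code_date": pd.to_datetime(["2019-06-01", "2021-02-01"]),
    "value": ["White British", "White British"],
})
print(select_covariate_value(ethnicity_codes, pd.Timestamp("2021-01-15"), allow_fallback=True))
```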

Statistical analysis

The pre-implementation and post-implementation periods were defined for each CO@h site, with implementation start dates for each site provided by the NHS England @home team. A stepped wedge design was used: all eligible people in each of the 106 CCGs in England were allocated to the control group before implementation of the CO@h programme in their CCG and to the intervention group afterwards, irrespective of whether enrolment data were received for an individual. Two-level hierarchical regression models were run for each outcome, incorporating random intercepts for CCG. Logistic regression was used for the four binary end points and negative binomial regression was used for length of stay (in days). Analyses of length of stay excluded patients who died within the 28-day time window.
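A minimal sketch of the stepped wedge allocation is shown below: each eligible person is assigned to the control or intervention group by comparing the date of their positive test with their CCG's implementation start date. The data frame, example dates and variable names are illustrative; the published models themselves were fitted in Stata rather than Python.

```python
import pandas as pd

# Hypothetical inputs: one row per eligible person, plus an implementation
# (go-live) date for each CCG's CO@h site
people = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "ccg": ["A", "A", "B"],
    "test_date": pd.to_datetime(["2020-11-15", "2021-01-20", "2020-12-01"]),
})
go_live = pd.Series(
    pd.to_datetime(["2020-12-07", "2021-01-10"]),
    index=["A", "B"],
    name="implementation_date",
)

people = people.join(go_live, on="ccg")
# Assign to the intervention (post-implementation) group if the positive test
# falls on or after go-live; treating the go-live day itself as "post" is an assumption
people["post_implementation"] = (people["test_date"] >= people["implementation_date"]).astype(int)
```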

To account for possible changes in the baseline risk of each outcome over time, the primary models for each outcome incorporated fixed effects for the month of positive COVID-19 test. To account for potential differences in the at-risk population before and after implementation, the primary models adjusted for patient-level risk factors. Final covariates in each model included age category (years), sex, ethnicity, IMD score, BMI category, month of COVID-19 test, CEV status and clinical conditions. Intraclass correlation coefficients were calculated for each model. Sensitivity analyses were conducted to explore the robustness of results to adjustment for time and patient risk factors (online supplemental appendix).
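For the logistic random-intercept models, the intraclass correlation coefficient can be calculated on the latent scale as the CCG-level variance divided by the total variance, taking the residual variance of the standard logistic distribution to be π²/3. A minimal sketch, assuming the CCG-level variance has already been estimated (for example, from the melogit output):

```python
import math

def latent_scale_icc(sigma2_ccg: float) -> float:
    """ICC for a logistic random-intercept model on the latent scale:
    between-CCG variance over (between-CCG variance + pi^2 / 3)."""
    return sigma2_ccg / (sigma2_ccg + math.pi ** 2 / 3)

# Example: a hypothetical CCG-level variance of 0.03 corresponds to an ICC of
# about 0.9%, of the same order as the <1% values reported in the sensitivity analyses
print(f"{latent_scale_icc(0.03):.1%}")
```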

A secondary analysis was performed on the subset of sites with a higher proportion of eligible people enrolled. Two thresholds were defined a priori, at 10% or more and 20% or more across the whole study period.
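The subset of higher-enrolment sites can be identified by computing, for each CCG, the proportion of the eligible population with enrolment data submitted over the whole study period and applying the pre-specified thresholds. A sketch with hypothetical data:

```python
import pandas as pd

# Hypothetical per-person table: CCG of residence and whether CO@h enrolment
# data were received for that person (1) or not (0)
eligible = pd.DataFrame({
    "ccg":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "enrolled": [1,   1,   0,   0,   1,   0,   0,   0],
})

enrolment_rate = eligible.groupby("ccg")["enrolled"].mean()
high_10 = enrolment_rate[enrolment_rate >= 0.10].index  # sites with >=10% enrolment
high_20 = enrolment_rate[enrolment_rate >= 0.20].index  # sites with >=20% enrolment

# Restrict the secondary analysis cohorts to the higher-enrolment sites
cohort_10 = eligible[eligible["ccg"].isin(high_10)]
cohort_20 = eligible[eligible["ccg"].isin(high_20)]
```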

Analyses were conducted in the Big Data and Analytics Unit Secure Environment, Imperial College. Python V.3.9.5 and Pandas V.1.2.3 were used for data manipulation. Regression models were fitted in Stata V.17.0, using the melogit and menbreg commands.

Patient and public involvement

Patients or the public were not involved in the design, conduct or reporting of our research.

Results

A total of 1 714 182 people resident in 106 CCGs in England had a positive PCR COVID-19 test between 1 October 2020 and 3 May 2021, and were symptomatic at the time of the test. A total of 223 429 (13.0%) were at least 65 years of age or CEV and were eligible for the analysis. Of these, 5779 (2.6%) were living in a care home and were excluded. A total of 217 650 people were included in the analysis (figure 1).

Figure 1

Flow diagram of eligibility criteria for the evaluation of the COVID Oximetry @home programme.

The mean (SD) age of participants was 62.4 (15.9) years, with 53.4% between 65 and 79 years. More women than men were included (54.4% vs 43.9%). The majority were of white ethnicity (72.5%) and resident in the most socioeconomically deprived half of LSOAs in England (57.5%). More of the population were obese (39.4%) than overweight (33.6%) or had a healthy weight (20.8%). Just over half (54.9%) were never smokers. Hypertension (40.6%), diabetes (31.4%; type 1 and type 2) and chronic respiratory disease (29.3%) were the most common comorbidities. A total of 5616 (2.6%) of the study population died within 28 days of a positive COVID-19 test, 19.9% attended ED at least once, 12.2% were admitted at least once and, of those admitted, 16.1% required critical care. There were significant differences in distributions of most of the predictor and outcome variables in the eligible population before and after implementation in each site (table 1). In the pre-implementation period, 77.6% were of white ethnic backgrounds, compared with 69.8% in the period after implementation. The percentages of deaths, ED attendances and admissions within 28 days of a positive test were all significantly higher in the post-implementation period.

Table 1

Characteristics of people eligible for the CO@h programme from 1 October 2020 to 3 May 2021, before and after implementation at each site

Data were received via submissions from CO@h sites for 5527 patients enrolled onto the programme, giving an overall enrolment rate based on submitted data of 2.5%. There was considerable variation in uptake across the 106 CCGs, ranging from 0.0% to 33.0% total enrolment, with a median of 2.2% (figure 2). The earliest date a CO@h site became operational was 20 November 2020, with all sites operational from 10 January 2021.

Figure 2

Percentage of the eligible population enrolled onto the COVID Oximetry @home programme in each Clinical Commissioning Group (CCG) from date of implementation based on submissions from sites. Hashed areas represent CCGs with no patient enrolment data submitted.

Mixed effects logistic regression was run separately for each outcome, with CCG of residence as a random intercept. Table 2 shows the results for the primary analysis for each outcome, adjusted for month of COVID-19 test and patient-level covariates. There was no significant difference in the adjusted odds of 28-day mortality in the period following implementation of the CO@h programme (adjusted OR (aOR)=1.06, p=0.405). There was evidence of a small increase in both ED attendances (aOR=1.12, p<0.001) and emergency hospital admissions (aOR=1.12, p<0.001) within 28 days. Of those patients admitted to hospital in the period after implementation, there was a 24% increase in the adjusted odds of requiring critical care (aOR=1.24, p=0.012). There was no significant difference in the length of stay of those admitted (p=0.588).

Table 2

Effect estimates for the implementation period of the CO@h programme, from mixed effects regression models, adjusted for month of test and patient-level covariates

Sensitivity analyses

Sensitivity analyses comparing alternative model specifications are given in the online supplemental tables A1–A5. Naïve models (unadjusted for time) showed significant increases in 28-day mortality, ED attendance and admissions associated with the programme and a weak association with lower odds of critical care admission. Little meaningful difference was seen between models unadjusted or adjusted for patient-level covariates, or with the addition of random time-by-CCG interactions. The intraclass correlation coefficients for both CCG and CCG-by-time interactions for the mortality, ED attendance and hospital admission models were all <1%, suggesting minimal variation between CCGs that might be accounted for by time-varying CCG factors.

Secondary analysis of high enrolment CCGs

Secondary analyses were performed for 16 CCGs with 10% or more enrolment (table 3), and for 5 CCGs with 20% or more enrolment (table 4), representing 9.4% and 2.4% of the eligible population, respectively. In the 10% enrolment group, there was a 9% lower odds of mortality, 10% higher odds of ED attendance and 23% higher odds of critical care admission after implementation, but these effects were statistically non-significant. There was evidence of 27% higher odds of admission (p=0.046). In the 20% enrolment group, effect sizes were larger, but none was statistically significant.

Table 3

Effect estimates for CO@h sites with 10% or more enrolment, from mixed effects logistic/negative binomial regression models adjusted for month of test and patient factors

Table 4

Effect estimates for CO@h sites with 20% or more enrolment, from mixed effects logistic/negative binomial regression models adjusted for month of test and patient factors

Discussion

At a population level, there was no change in mortality in the period after implementation of the CO@h programme. The period of implementation was associated with statistically significant 12% increases in ED attendances and emergency hospital admissions and there was some evidence of an increase in the odds of receiving critical care in patients admitted. Although not statistically significant, the findings from the secondary analysis of high enrolment sites demonstrated a trend towards lower mortality and higher healthcare utilisation after the period of implementation.

Overall enrolment onto the programme was lower than expected, with enrolment data received for only 2.5% of the eligible population. This may be due to incomplete data submissions or may reflect genuinely low enrolment onto the programme; the figure therefore represents a lower bound on true enrolment. If 2.5% reflects the true proportion enrolled onto CO@h, then there is likely to be a dilutional effect on our population-level analysis, and the effect estimates are likely to reflect changes external to the programme rather than the programme itself.

A separate study using the same data sources showed significant increases in hospitalisation and fatality risk from COVID-19 over the time period CO@h sites were becoming operational.20 These trends may relate to winter effects on mortality, new treatments, hospital pressures, changes to admission criteria and the alpha COVID-19 variant which became the dominant strain in England during December 2020 and has been linked to significantly higher mortality compared with earlier variants.21 22 Introduction of the vaccination programme in December 2020 in England is likely to have had a protective effect on hospitalisation and mortality, particularly for higher-risk people, who were among the first eligible for vaccination.23 Despite our analysis incorporating time as an adjusting covariate, there may be residual confounding between sites becoming operational, and changes in the underlying hospitalisation and mortality rates.

The increases seen in ED attendances, emergency hospital admissions and critical care use following implementation of CO@h may reflect early recognition of silent hypoxaemia in COVID-19. The magnitude of increase in ED attendances was similar to the magnitude of increase in hospital admissions, suggesting that implementation did not cause a large increase in ED attendances not requiring admission. However, early intervention might be expected to decrease length of stay and mortality, which was not found here and could reflect changes to disease severity across the time of implementation.

Remote monitoring technologies have been widely used in the management of chronic diseases, but with mixed evidence of their effectiveness24 and limited evidence for their use in COVID-19 with which to compare our findings.25 A pilot study of four NHS COVID-19 pulse oximetry programmes in England indicated the pathway was safe, but did not include a control group.5 A study in the USA of patients with COVID-19 referred to a remote patient monitoring pathway after discharge from hospital found lower odds of ED or hospital reattendance in those enrolled, but did not assess mortality.26

In a separate study of the CO@h programme by the same authors, patients with COVID-19 enrolled to CO@h after assessment in ED had lower odds of mortality and critical care use, and higher odds of subsequent ED use and admission, compared with matched controls, although selection bias may limit the generalisability of the findings.27 Collectively, results from the two studies suggest that although there was no impact on mortality at a population level, there is some evidence for the effectiveness of the CO@h programme at an individual level (although within a narrower eligibility cohort of patients assessed in ED), but indicate that the programme could not be scaled up to provide a benefit to all eligible people nationally. Neither of the studies indicates that the programme causes harm, but it is unclear whether the results are generalisable to other forms of remote monitoring in COVID-19. There is a need for further research into patient and staff experience of the programme, and into the barriers and facilitators to implementation that may explain the significant variation in enrolment across CCGs; such research may aid policymakers and commissioners in implementing remote monitoring programmes in the future. There is also a need to understand the equity of access to CO@h, and to evaluate user experience and cost implications.

Strengths and limitations

A strength of this study is the availability of comprehensive data on COVID-19 testing. Through linkage to primary and secondary care records, we were able to identify the eligible population and adjust for risk factors pre-implementation and post-implementation. Eligibility for the programme was not absolute, with a recommendation to extend to those aged 50 years and over from February 2021 and further variation across sites. Our analysis focused on a narrower subgroup of people aged 65 years or over or CEV, who would have remained eligible over the full study period, but findings might not be generalisable to all those included in the programme or to lower risk cohorts.

The stepped wedge design was chosen in part as it is robust to selection biases in enrolment of patients (eg, if there were systematic enrolment of patients with higher or lower risk or severity), which would impact individual-level study designs. The effect estimates are also not affected by incomplete submission of patient-level data on patients enrolled onto the programme. However, the fact that we do not know whether the low numbers represent incomplete data or true enrolment affects our ability to judge whether the study was adequately powered to detect a difference in mortality, should one exist. Although our analysis accounted for underlying risk, we could not account for disease severity at diagnosis. Furthermore, some areas may have had existing remote pulse oximetry services prior to the roll-out of the CO@h programme, which were either replaced or relabelled as CO@h; this may have diluted effect sizes.

Incorporation of additional CCG-level and hospital-level covariates, such as hospital bed and intensive care occupancy, was considered but estimates of the residual variation explained by clustering at the CCG-level (intraclass correlation) were small, suggesting these would have limited impact. Sensitivity analyses considered time-varying CCG-level effects, with almost identical results compared with the primary analysis, suggesting minimal impact of time-varying differences between CCGs. Across England, CO@h sites implemented different types of model, run by different sectors of the healthcare system, and with different recommendations for the frequency of monitoring.4 8 National population effect estimates as presented here may therefore mask variation in the effectiveness between sites. Our approach assumed that each site represents a discrete unit, but some sites may not cover an entire CCG, while others may provide services across boundaries.

Conclusion

Implementation of the CO@h programme across England had no impact on mortality at a population level and was associated with small increases in ED attendances, hospitalisations and critical care use in people with COVID-19 aged 65 years or over or CEV, which may indicate early recognition of hypoxaemia and escalation. Lower than expected enrolment of eligible people may have diluted the effects of the programme at a population level. There is a need for further research into the uptake and effectiveness of remote monitoring programmes for COVID-19.

Data availability statement

Data may be obtained from a third party and are not publicly available. The patient-level data used in this study are not publicly available but are available to applicants meeting certain criteria through application to the Data Access Request Service (DARS) and approval from the Independent Group Advising on the Release of Data.

Ethics statements

Patient consent for publication

Ethics approval

The work was conducted as a national service evaluation of the CO@h programme, approved by Imperial College Health Trust on 3 December 2020. Data access was approved by the Independent Group Advising on the Release of Data (IGARD; DARS-NIC-421524-R0Y3P) on 15 April 2021.

Acknowledgments

The authors would like to thank Hutan Ashrafian, Gianluca Fontana, Saira Ghafur, Melanie Leis and Mahsa Mazidi for their input and support. Data management was provided by the Big Data and Analytical Unit (BDAU) at the Institute of Global Health Innovation (IGHI), Imperial College London. The authors would like to thank the other evaluation teams collaborating on the CO@h programme: the National Institute for Health Research (NIHR) BRACE and RSET team, the Nuffield Trust and the Improvement Analytics Unit, a partnership between The Health Foundation and NHS England. The authors also extend their thanks to NHS Digital for data support and advice throughout the project.

References

Supplementary materials

  • Supplementary Data

    This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

Footnotes

  • Handling editor Edward Carlton

  • Twitter @drtombeaney

  • Contributors All authors were involved in the conceptualisation of the study, interpretation of the findings and in the reviewing and editing of the manuscript. TB, JC, KF, JB, PPA, SE, ALN and AD developed the study design and methodology. TB and JC conducted the data management and formal analysis, supported by ALN. TB and JC wrote the first draft of the manuscript. TB is the guarantor and accepts full responsibility for the work and the conduct of the study, had access to the data and controlled the decision to publish. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

  • Funding This work was funded by NHS England (award number: P012627) and supported by the NIHR Imperial Patient Safety Translation Research Centre. Infrastructure support was provided by the NIHR Imperial Biomedical Research Centre (BRC). TB received support from the NIHR Applied Research Collaboration Northwest London. JC was supported by the Wellcome Trust (215938/Z/19/Z).

  • Disclaimer The study funder(s) did not play a role in study design; in the collection, analysis and interpretation of data; in the writing of the report and in the decision to submit the article for publication. In addition, researchers were independent from funders, and all authors had full access to all of the data included in this study and can take responsibility for the integrity of the data and the accuracy of the data analysis.

  • Map disclaimer The depiction of boundaries on this map does not imply the expression of any opinion whatsoever on the part of BMJ (or any member of its group) concerning the legal status of any country, territory, jurisdiction or area or of its authorities. This map is provided without any warranty of any kind, either express or implied.

  • Competing interests JC has received fees from Philips UK for consultancy services outside of the submitted work. SE has received fees for an educational lecture sponsored by AstraZeneca and is co-clinical director for the NHS England and Improvement London Respiratory Network. All other authors report no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous 3 years; no other relationships or activities that could appear to have influenced the submitted work. All authors have completed the ICMJE uniform disclosure form at: http://www.icmje.org/disclosure-of-interest/

  • Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
