Best-practice statements are useful in theory but their effectiveness may be compromised by varied development methods and a lack of implementation evaluation
Best-practice statements aim to facilitate evidence-based practice and improve care quality. They may improve care provision when developed using systematic, rigorous methods but there is no standardised approach for this and the support available may be limited. Methods used to develop, implement and monitor statements are not always transparent, and evaluation of their impact is inadequate. This article discusses their use as tools to facilitate evidence-based practice and improve care, and the factors that can constrain or promote this.
Citation: Tolmie EP, Rice AM (2015) Best practice statements: should we use them? Nursing Times; 111: 42, 12-14.
Authors: Elizabeth P Tolmie is lecturer; Anne Marie Rice is senior university teacher; both at Nursing Health and Care School, University of Glasgow.
- This article has been double-blind peer reviewed
The concept of integrating the best available evidence from systematic research with individual clinical expertise to help practitioners make decisions and guide patient care was introduced as evidence-based medicine (EBM) in the early 1990s (Sackett et al, 1996). It was later extended to nursing and professions allied to medicine, and the term “evidence-based practice” (EBP) was adopted.
A distinguishing feature of EBM, less evident in the wider EBP literature, is that clinical decisions are based on the results of population-based research. These results are then applied to individuals, after taking into account the context of the clinical situation and the person concerned (Greenhalgh, 2014). Meta-analysis and systematic reviews, such as those by the Cochrane Collaboration and the Campbell Collaboration, combine results from several studies to identify the best treatment or intervention and reduce uncertainty.
A robust review can identify whether the quality of the available evidence is sufficient to develop a clinical guideline, or whether more research needs to be conducted before the evidence can be confirmed or refuted. Where the evidence is sparse or of poor quality, a statement of best practice may be appropriate.
Best-practice statements (BPSs) aim to guide the practice of nurses, midwives, health professionals and support staff. They have been described as protocols that give practitioners guidance on the best and most comprehensive care, and as something to which practitioners should aspire (Cayless and Wengström, 2008). Conversely, they have also been described as “realistic but challenging rather than aspirational” (Booth et al, 2005). The former implies an optimal service will be provided if the recommendations are followed, but fails to acknowledge the well-documented problems associated with implementation (Gichuhi and Gomersall, 2013; Teresi et al, 2013). The latter, while more pragmatic, may dampen, rather than generate, enthusiasm among those keen to develop new initiatives to positively influence practice.
Developing best-practice statements
The methods used to develop and produce statements vary. As with clinical guidelines, their content is based on current literature and the combined knowledge and experience of clinicians, academics and other stakeholders. Contributors are considered to be experts or specialists in the subject, and conversant with relevant, current evidence.
The quality of the BPS is influenced by the combined expertise of those involved, along with the resources available to support its development. When limited resources and time restrict the scope of the review, a rapid evidence assessment (REA) method may be the most feasible one to use.
The REA method originated from the need to respond quickly to public health emergencies (Fitch et al, 2004), so it does not enable a complete and exhaustive review of the literature. This means there is a risk that some relevant literature may be overlooked and the time available for quality assessment may be restricted. An additional limitation is that many existing BPSs do not comply with the World Health Organization’s (2012) recommendation that they be reviewed, updated or invalidated within a specified time frame. In the context of a public health emergency this is justifiable, but it is less so in other situations. Nevertheless, BPSs can - and do - offer some guidance where this has previously been lacking or inconsistent.
Some BPSs are intentionally discipline-specific, while others are multidisciplinary and take into account different perspectives, including those of people who have personal experience of the issue. Informing all stakeholders about any proposed, and current, work is essential if appropriate contributions and support are to be secured (Tolmie and Stanley, 2015). However, when using the REA method to develop a BPS, the limited timescale (2-6 months) can make it difficult to secure meaningful involvement.
Freedom from bias and political influence is also important but might be difficult to achieve when contributors to the BPS include those who are expected to implement the recommendations, as well as those who provide or receive the service or intervention to which the recommendations relate. Nevertheless, pooled resources, collegiality and wider involvement have the potential to lever change at a local level and beyond. Working with those who have different but relevant perspectives, expertise and aspirations may help generate the creativity and innovation needed to overcome some of the challenges likely to occur, particularly with regard to implementation.
Careful attention needs to be paid to the needs of lay contributors who may not have had the opportunity to contribute to health recommendations, and those whose previous attempts have been restricted by others because of assigned roles, or lack of training and support (Snape et al, 2014). Although public contributions are increasingly sought, some health professionals have been reluctant to embrace the idea. Pollock et al (2015) acknowledged the discomfort experienced by physiotherapists when handing over the updating of a systematic review to non-academics and survivors of stroke; notably, a positive outcome was reported.
A wide range of quality-assessment instruments are available to help researchers and guideline developers assess the quality of existing literature and ensure the review is conducted systematically. Examples are given in Box 1; most are free of charge but as there is no agreed “gold standard”, once the type of instrument needed has been identified, selection is a matter of personal or group choice.
Box 1. Quality assessment instruments
- Scottish Intercollegiate Guidelines Network critical appraisal notes and checklists
- Critical Appraisal Skills Programme checklists
- Joanna Briggs Institute Critical Appraisal Checklist for Systematic Reviews and Research Syntheses
- National Institute for Health and Care Excellence methodology checklists. The guidelines manual: appendices B-I
- The Cochrane Collaboration’s tool for assessing risk of bias: Higgins JPT, Green S (eds) (2011) Table 8.5.a. Assessing risk of bias in included studies. In: Cochrane Handbook for Systematic Reviews of Interventions, Version 5.1.0.
Implementation of best-practice guidance
In contrast to the quality-assessment aspect of developing practice guidelines, implementation guidance lacks clarity and tends to be descriptive and intervention specific (Powell et al, 2015). Identified barriers to implementation include:
- Gaps in knowledge;
- Inadequate dissemination;
- Transition from primary to secondary care;
- Geographical area;
- Patient non-adherence (Cayless and Wengström, 2008).
A toolkit published by the Registered Nurses’ Association of Ontario (2012) provides some helpful information about implementation and other aspects of best-practice guidance. Its strengths are:
- Comprehensive coverage;
- Emphasis on the need for training and development, publication and dissemination, monitoring and evaluation;
- A focus on nursing (albeit a limitation from a multidisciplinary perspective).
Notably, with regard to implementation, the complexity of many healthcare interventions cautions against a blanket or unidisciplinary approach. A comprehensive assessment may determine that adherence to the recommendations is inappropriate for a particular patient, at a particular time, or in a particular context. Moreover, as experts do not always agree, even when robust evidence is available, some practitioners and potential recipients may be less willing, or able, to adhere to the recommendations.
Geographical, organisational and financial constraints may also make it difficult - and perhaps impossible - to fully meet the recommendations. In such circumstances, a degree of flexibility is required to accommodate individual or cultural differences - at least until an appropriate alternative or compromise is found. This may require reconfiguration of a service, local adaptation of the recommendations or additional research evidence.
Monitoring and evaluation
Monitoring and evaluating the impact of BPSs and similar initiatives is imperative. Schouten et al (2008) proposed that quality initiatives should be considered in terms of their effectiveness, efficacy and economy. In the current economic climate, in which health professionals are expected to do more with less, this is difficult to contest.
In 2003, NHS Quality Improvement Scotland commissioned an independent evaluation of the first five BPSs it developed. Responses (n=537) to 1,278 questionnaires indicated that fewer than half of respondents (n=250; 46.6%) knew about the BPSs and only two of them (pressure ulcers and continence) were being used in full by around 25% of clinical respondents (Ring and Finnie, 2004).
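The reported proportions can be checked against the raw figures. A minimal sketch (the questionnaire and response counts are those reported in the evaluation above):

```python
# Figures reported from Ring and Finnie's (2004) evaluation
questionnaires_sent = 1278
responses_received = 537
aware_of_bpss = 250

# Overall survey response rate and awareness among respondents
response_rate = responses_received / questionnaires_sent * 100
awareness_rate = aware_of_bpss / responses_received * 100

print(f"Survey response rate: {response_rate:.1f}%")        # ≈ 42.0%
print(f"Respondents aware of BPSs: {awareness_rate:.1f}%")  # ≈ 46.6%
```

The low response rate (around 42%) is itself worth noting, as it limits how far the awareness and usage findings can be generalised.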
On a more positive note, 15 nurses interviewed as part of the evaluation said the BPSs had a positive impact on practice; however, as five of the interviewees were project leaders and two had been involved in the development of the statement, their views are likely to be biased and may not be representative of nurses in general.
The evaluators concluded that, while the development of BPSs should continue, there was a need to review the process and conduct a more detailed evaluation. To date, there is no evidence that the development and implementation of these statements have been further evaluated by NHS QIS or any other organisation.
Challenges and considerations
Best-practice statements rely on the commitment and motivation of the individuals who voluntarily give their time and expertise to produce them. This means they are not cost neutral, either for the organisation(s) involved or for the individual contributors. Failure to monitor and evaluate their impact could result in serious, costly consequences that misuse limited resources and potentially disadvantage patients - this was demonstrated by the implementation of the Liverpool Care Pathway and its subsequent withdrawal amid rising concerns about its misuse (Neuberger et al, 2013). When faced with such results, should we continue to do what we have always done, or withdraw the intervention that is in place?
The National Institute for Health and Care Excellence (2015) indicated that some wound care treatments have been withdrawn because there was insufficient evidence about their effectiveness. Logically, such actions have the potential to secure substantial cost savings that could be re-invested in more effective treatments (Carter, 2010) but lack of transparency about decisions regarding which interventions should or should not be withdrawn has led to accusations that inappropriate treatment decisions may be made (Beldon, 2014).
While the withdrawal of life-saving drugs attracts much media attention, the removal of wound care treatments and other interventions - such as specialist nursing input, patient education and rehabilitation - attracts far less. Clearly, we must be cautious about the vested interests that may influence such debates, but we also need to question whether it is ethical to continue using an intervention simply so that it can be evaluated. If recipients are not aware that this is the case, and cannot be assured the evaluation will be free from bias and rigorous in terms of its development, implementation, monitoring and evaluation, it is not ethical. Applying similar criteria to BPSs and acknowledging their limitations would be a positive move.
Implications for research
Randomised controlled trials (RCTs) and meta-analyses are considered by many to be the “gold standard” and the only reliable source of evidence on which to base practice. Monitoring and evaluation are extensive, and continue for a number of years after implementation.
While acknowledging that not all RCTs are robust, or produce confirmatory evidence, guidelines based on those that are can be implemented with relative confidence. However, systematic reviews of RCTs frequently conclude that there is insufficient evidence on which to base recommendations, and that further research needs to be conducted.
BPSs may fill the gap and improve care when they are developed using systematic and rigorous methods, but there is insufficient evidence to indicate whether, and when, this condition has been met.
Implications for nursing
At a time when healthcare and public sector services are experiencing increasing financial pressures, it is tempting to disregard the “aspirational” aspect of improving practice; doing so, however, is likely to limit what it might be possible to achieve. Working with others who have different but relevant perspectives, expertise and aspirations could help generate the creativity and innovation needed to overcome the challenges associated with initiating and sustaining changes in practice. Pooled resources, collegiality, transparency and wider involvement may help minimise unwarranted political influence and bias.
Given the financial and opportunity cost of producing and implementing BPSs, and the consequences of doing so, it is imperative that the methods used to develop them are transparent and that their impact and sustainability are independently evaluated.
- Best-practice statements (BPSs) may improve care when they are developed using systematic, rigorous methods
- Many tools are available to help researchers and guideline developers assess the quality of existing literature
- Telling relevant stakeholders about proposed and current work is essential to secure appropriate contributions and support
- The cost of producing and implementing a BPS is high
- Transparent methods must be used to develop BPSs and their impact and sustainability must be independently evaluated
Beldon P (2014) The role of ethics in the wound care setting. Wounds UK; 10: 3, 72-75.
Booth J et al (2005) Implementing a best practice statement in nutrition for frail older people: part 1. Nursing Older People; 16: 10, 26-28.
Carter MJ (2010) Cost-effectiveness research in wound care: definitions, approaches, and limitations. Ostomy Wound Management; 56: 11, 22.
Cayless S, Wengström Y (2008) Scoping Exercise to Review Best Practice Statements: Skincare of Patients Receiving Radiotherapy; and, the Management of Pain in Patients with Cancer. Stirling: Cancer Care Research Centre, University of Stirling.
Fitch C et al (2004) Rapid assessment: an international review of diffusion, practice and outcomes in the substance use field. Social Science & Medicine; 59: 9, 1819-1830.
Gichuhi MM, Gomersall JCS (2013) Implementation of best practice for dyspepsia management in an outpatient hospital setting in Kenya. International Journal of Evidence-Based Healthcare; 11: 3, 187-193.
Greenhalgh T (2014) How to Read a Paper: The Basics of Evidence-based Medicine. London: Wiley Blackwell.
National Institute for Health and Care Excellence (2015) Wound Care Products.
Neuberger J et al (2013) More Care, Less Pathway: A Review of the Liverpool Care Pathway.
Pollock A et al (2015) User involvement in a Cochrane systematic review: using structured methods to enhance the clinical relevance, usefulness and usability of a systematic review update. Systematic Reviews; 4: 55.
Powell BJ et al (2015) A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implementation Science; 10: 21.
Registered Nurses’ Association of Ontario (2012) Toolkit: Implementation of Best Practice Guidelines.
Ring N, Finnie A (2004) Best Practice Statements. Report of the Impact Evaluation Study.
Sackett DL et al (1996) Evidence based medicine: what it is and what it isn’t. BMJ; 312: 7023, 71-72.
Schouten LMT et al (2008) Evidence for the impact of quality improvement collaboratives: systematic review. BMJ; 336: 7659, 1491-1494.
Snape D et al (2014) Exploring perceived barriers, drivers, impacts and the need for evaluation of public involvement in health and social care research: a modified Delphi study. BMJ Open; 4: 6, e004943.
Teresi JA et al (2013) Comparative effectiveness of implementing evidence-based education and best practices in nursing homes: effects on falls, quality-of-life and societal costs. International Journal of Nursing Studies; 5: 4, 448-463.
Tolmie E, Stanley J (2015) Vision problems following stroke: developing a best practice statement. British Journal of Health Care Management; 21: 7, 326-330.
World Health Organization (2012) Handbook for Guideline Development.