DISCUSSION

Best-practice statements: should we use them?

Best-practice statements are useful in theory but their effectiveness may be compromised by varied development methods and a lack of implementation evaluation

Abstract

Best-practice statements aim to facilitate evidence-based practice and improve care quality. They may improve care provision when developed using systematic, rigorous methods but there is no standardised approach for this and the support available may be limited. Methods used to develop, implement and monitor statements are not always transparent, and evaluation of their impact is inadequate. This article discusses their use as tools to facilitate evidence-based practice and improve care, and the factors that can constrain or promote this.

Citation: Tolmie EP, Rice AM (2015) Best practice statements: should we use them? Nursing Times; 111: 42, 12-14.

Authors: Elizabeth P Tolmie is lecturer; Anne Marie Rice is senior university teacher; both at Nursing Health and Care School, University of Glasgow.

Introduction

The concept of integrating the best available evidence from systematic research with individual clinical expertise to help practitioners make decisions and guide patient care was introduced as evidence-based medicine (EBM) in the early 1990s (Sackett et al, 1996). It was later extended to nursing and professions allied to medicine, and the term “evidence-based practice” (EBP) was adopted.

A distinguishing feature of EBM, less evident in the wider EBP literature, is that clinical decisions are based on the results of population-based research. These results are then applied to individuals, after taking into account the context of the clinical situation and the person concerned (Greenhalgh, 2014). Meta-analyses and systematic reviews, such as those produced by the Cochrane Collaboration and the Campbell Collaboration, combine results from several studies to identify the best treatment or intervention and reduce uncertainty.

A robust review can identify whether the quality of the available evidence is sufficient to develop a clinical guideline, or whether more research needs to be conducted before the evidence can be confirmed or refuted. Where the evidence is sparse or of poor quality, a statement of best practice may be appropriate.

Best-practice statements

Best-practice statements (BPSs) aim to guide the practice of nurses, midwives, health professionals and support staff. They have been described as protocols that give practitioners guidance on the best and most comprehensive care, and as something to which practitioners should aspire (Cayless and Wengström, 2008). Conversely, they have also been described as “realistic but challenging rather than aspirational” (Booth et al, 2005). The former implies an optimal service will be provided if the recommendations are followed, but fails to acknowledge the well-documented problems associated with implementation (Gichuhi and Gomersall, 2013; Teresi et al, 2013). The latter, while more pragmatic, may dampen, rather than generate, enthusiasm among those keen to develop new initiatives to positively influence practice.

Developing best-practice statements

The methods used to develop and produce statements vary. As with clinical guidelines, their content is based on current literature and the combined knowledge and experience of clinicians, academics and other stakeholders. Contributors are considered to be experts or specialists in the subject, and conversant with relevant, current evidence.

The quality of the BPS is influenced by the combined expertise of those involved, along with the resources available to support its development. When limited resources and time restrict the scope of the review, a rapid evidence assessment (REA) method may be the most feasible one to use.

The REA method was originally derived from the need to respond quickly to public health emergencies (Fitch et al, 2004), so it does not enable a complete and exhaustive review of the literature. This means there is a risk that some relevant literature may be overlooked and that the time available for quality assessment may be restricted. An additional limitation is that many existing BPSs do not comply with the World Health Organization’s (2012) recommendation that they be reviewed or updated (or invalidated) within a specified time frame. In the context of a public health emergency this is justifiable, but it is less so in other situations. Nevertheless, BPSs can - and do - offer some guidance where this has previously been lacking or inconsistent.

Stakeholder involvement

Some BPSs are intentionally discipline-specific, while others are multidisciplinary and take into account different perspectives, including those of people who have personal experience of the issue. Informing all stakeholders about any proposed, and current, work is essential if appropriate contributions and support are to be secured (Tolmie and Stanley, 2015). However, when using the REA method to develop a BPS, the limited timescale (2-6 months) can make it difficult to secure meaningful involvement.

Freedom from bias and political influence is also important but might be difficult to achieve when contributors to the BPS include those who are expected to implement the recommendations, as well as those who provide or receive the service or intervention to which the recommendations relate. Nevertheless, pooled resources, collegiality and wider involvement have the potential to lever change at a local level and beyond. Working with those who have different but relevant perspectives, expertise and aspirations may help generate the creativity and innovation needed to overcome some of the challenges likely to occur, particularly with regard to implementation.

Careful attention needs to be paid to the needs of lay contributors who may not have had the opportunity to contribute to health recommendations, and those whose previous attempts have been restricted by others because of assigned roles, or lack of training and support (Snape et al, 2014). Although public contributions are increasingly sought, some health professionals have been reluctant to embrace the idea. Pollock et al (2015) acknowledged the discomfort experienced by physiotherapists when handing over the updating of a systematic review to non-academics and survivors of stroke; notably, a positive outcome was reported.

Quality assessment

A wide range of quality-assessment instruments are available to help researchers and guideline developers assess the quality of existing literature and ensure the review is conducted systematically. Examples are given in Box 1; most are free of charge but, as there is no agreed “gold standard”, selection - once the type of instrument needed has been identified - is a matter of personal or group choice.

Implementation of best-practice guidance

In contrast to the quality-assessment aspect of developing practice guidelines, implementation guidance lacks clarity and tends to be descriptive and intervention specific (Powell et al, 2015). Identified barriers to implementation include:

  • Gaps in knowledge;
  • Inaccessibility;
  • Inadequate dissemination;
  • Transition from primary to secondary care;
  • Geographical area;
  • Patient non-adherence (Cayless and Wengström, 2008).

A toolkit published by the Registered Nurses’ Association of Ontario (2012) provides some helpful information about implementation and other aspects of best-practice guidance. Its strengths are:

  • Comprehensive coverage;
  • Emphasis on the need for training and development, publication and dissemination, monitoring and evaluation;
  • A focus on nursing (albeit a limitation from a multidisciplinary perspective).

Notably, with regard to implementation, the complexity of many healthcare interventions cautions against a blanket or unidisciplinary approach. A comprehensive assessment may determine that adherence to the recommendations is inappropriate for a particular patient, at a particular time, or in a particular context. Moreover, as experts do not always agree, even when robust evidence is available, some practitioners and potential recipients may be less willing, or able, to adhere to the recommendations.

Geographical, organisational and financial constraints may also make it difficult - and perhaps impossible - to fully meet the recommendations. In such circumstances, a degree of flexibility is required to accommodate individual or cultural differences - at least until an appropriate alternative or compromise is found. This may require reconfiguration of a service, local adaptation of the recommendations or additional research evidence.

Monitoring and evaluation

Monitoring and evaluating the impact of BPSs and similar initiatives is imperative. Schouten et al (2008) proposed that quality initiatives should be considered in terms of their effectiveness, efficacy and economy. In the current economic climate, in which health professionals are expected to do more with less, this is difficult to contest.

In 2003, NHS Quality Improvement Scotland (NHS QIS) commissioned an independent evaluation of the first five BPSs it developed. Responses (n=537) to 1,278 questionnaires indicated that fewer than half of respondents (n=250; 46.6%) knew about the BPSs, and only two of them (pressure ulcers and continence) were being used in full by around 25% of clinical respondents (Ring and Finnie, 2004).
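For clarity, the 46.6% figure appears to be calculated against the 537 respondents rather than the full 1,278 questionnaires distributed, which also implies a survey response rate of roughly 42%:

\[
\frac{537}{1278} \approx 42\% \ \text{(response rate)}, \qquad \frac{250}{537} \approx 46.6\% \ \text{(respondents aware of the BPSs)}
\]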

On a more positive note, 15 nurses interviewed as part of the evaluation said the BPSs had a positive impact on practice; however, as five of the interviewees were project leaders and two had been involved in the development of the statement, their views are likely to be biased and may not be representative of nurses in general.

The evaluators concluded that, while the development of BPSs should continue, there was a need to review the process and conduct a more detailed evaluation. To date, there is no evidence that the development and implementation of these statements have been further evaluated by NHS QIS or any other organisation.

Challenges and considerations

Best-practice statements rely on the commitment and motivation of the individuals who voluntarily give their time and expertise to produce them. This means they are not cost neutral, either for the organisation(s) involved or for the individual contributors. Failure to monitor and evaluate their impact could result in serious, costly consequences that waste limited resources and potentially disadvantage patients - as demonstrated by the implementation of the Liverpool Care Pathway and its subsequent withdrawal amid rising concerns about its misuse (Neuberger et al, 2013). When faced with such results, should we continue to do what we have always done, or withdraw the intervention that is in place?

The National Institute for Health and Care Excellence (2015) indicated that some wound care treatments have been withdrawn because there was insufficient evidence about their effectiveness. Logically, such actions have the potential to secure substantial cost savings that could be re-invested in more effective treatments (Carter, 2010) but lack of transparency about decisions regarding which interventions should or should not be withdrawn has led to accusations that inappropriate treatment decisions may be made (Beldon, 2014).

While the withdrawal of life-saving drugs secures much media attention, the removal of wound care treatments and other interventions - such as specialist nursing input, patient education and rehabilitation - attracts less. Clearly, we must be cautious about the vested interests that may influence such debates, but we also need to question whether it is ethical to continue using an intervention simply because doing so permits it to be evaluated. If recipients are not aware that this is the case, and cannot be assured that the evaluation will be rigorous and free from bias in terms of its development, implementation, monitoring and evaluation, it is not ethical. Applying similar criteria to BPSs and acknowledging their limitations would be a positive move.

Implications for research

Randomised controlled trials (RCTs) and meta-analyses are considered by many to be the “gold standard” and the only reliable source of evidence on which to base practice. Monitoring and evaluation is extensive, and occurs over a number of years after implementation.

While acknowledging that not all RCTs are robust, or produce confirmatory evidence, guidelines based on those that are can be implemented with relative confidence. However, systematic reviews of RCTs frequently conclude that there is insufficient evidence on which to base recommendations, and that further research needs to be conducted.

BPSs may fill the gap and improve care when they are developed using systematic and rigorous methods, but there is insufficient evidence to indicate whether, and when, this condition has been met.

Implications for nursing

At a time when healthcare and public sector services are experiencing increasing financial pressures, it is tempting to disregard the “aspirational” aspect of improving practice; doing so, however, is likely to limit what it might be possible to achieve. Working with others who have different but relevant perspectives, expertise and aspirations could help generate the creativity and innovation needed to overcome the challenges associated with attempts to initiate and sustain changes in practice. Pooled resources, collegiality, transparency and wider involvement may help minimise unwarranted political influence and bias.

Given the financial and opportunity cost of producing and implementing BPSs, and the consequences of doing so, it is imperative that the methods used to develop them are transparent and that their impact and sustainability are independently evaluated.

Key points

  • Best-practice statements (BPSs) may improve care when they are developed using systematic, rigorous methods
  • Many tools are available to help researchers and guideline developers assess the quality of existing literature
  • Telling relevant stakeholders about proposed and current work is essential to secure appropriate contributions and support
  • The cost of producing and implementing a BPS is high
  • Transparent methods must be used to develop BPSs and their impact and sustainability must be independently evaluated