VOL: 99, ISSUE: 13, PAGE NO: 63
Nicola Waters RGN, DipN, is wound care nurse (nursing homes), Brighton General Hospital, Sussex
The Braden scale is a widely used pressure risk assessment tool and it is, therefore, essential to ensure that the tool is reliable and valid. Several studies have questioned the predictive validity of the Braden scale (Nixon and McGough, 2001). Bergstrom et al (1998) used a quantitative research paradigm to evaluate the effectiveness of the Braden scale in predicting which patients are at risk of developing pressure ulcers in three different clinical settings in the USA. The study aimed to determine the score at which a pressure ulcer is likely to develop (the critical cut-off point) and whether this cut-off point can be duplicated. The authors also hoped to establish the optimum timing for risk assessments.
A computer-randomised sample of 843 inpatients in three care settings was selected for this study. Subjects had to be free from pressure damage on admission to hospital; however, pressure damage to underlying tissue and structures can occur up to three days before it becomes visible (Bergstrom et al, 1998).
Data were collected using two tools: the Braden scale for predicting pressure sore risk and a skin assessment tool that identifies the bony prominences of the body and allows the presence or absence of skin damage to be assessed and recorded at each site. Staff received initial and ongoing training in using the Braden scale, staging ulcers and recording data.
To check that data collection was reliable, the inter-rater reliability of research staff was assessed regularly; it ranged from 95 per cent to 100 per cent.
Two research nurses were used at each site: one to score the Braden scale and one to assess the skin. Participants were assessed on day one and every 48 to 72 hours thereafter, for one to four weeks. The nurses at each centre were blind to each other’s findings, which ruled out the potential for them to be influenced by each other’s results.
Data collected on admission (time 1), 48-72 hours after admission (time 2) and at the observation before the first recorded pressure ulcer were used in the data analysis. Independent-sample t-tests were used to analyse the differences in age and Braden scale score between participants who did and those who did not develop pressure ulcers.
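The independent-samples comparison described above can be sketched as follows. The ages are hypothetical, not data from the study, and the Welch form of the t statistic is an assumption, as the study does not state which variant was used.

```python
# Sketch of an independent-samples (Welch) t statistic, of the kind used to
# compare mean age between those who did and did not develop pressure ulcers.
# The ages below are hypothetical illustrations, not study data.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a), variance(b)          # sample variances
    se = (va / len(a) + vb / len(b)) ** 0.5    # standard error of the difference
    return (mean(a) - mean(b)) / se

ulcer_ages    = [78, 81, 84, 79, 86, 82]   # hypothetical: developed ulcers
no_ulcer_ages = [70, 74, 68, 77, 72, 75]   # hypothetical: remained ulcer-free

print(round(welch_t(ulcer_ages, no_ulcer_ages), 2))
```

A large t value suggests a statistically significant difference in means, but, as discussed later, statistical significance does not guarantee clinical significance.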
To examine the critical cut-off point and the optimal time for identifying risk, the authors considered the sensitivity, specificity and predictive value of positive and negative tests. Sensitivity refers to the proportion of patients who went on to develop pressure sores who had been predicted to develop them. Specificity refers to the proportion of patients deemed not to be at risk who did not develop pressure sores (Dealey, 1999). Sensitivity and specificity are closely related to the predictive values of positive and negative tests. There are four possible decision states:
- True positive: the patient has a condition, a diagnosis is confirmed;
- True negative: the patient does not have a condition, diagnosis is rejected;
- False positive: the patient does not have a condition, but a diagnosis is confirmed;
- False negative: the patient has the condition, but diagnosis is rejected.
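The four decision states above determine sensitivity and specificity directly. A minimal sketch, using hypothetical counts rather than figures from the Bergstrom et al (1998) study:

```python
# Sensitivity and specificity calculated from the four decision states:
# true/false positives and true/false negatives. Counts are hypothetical.

def sensitivity(tp: int, fn: int) -> float:
    """Proportion of patients who developed ulcers who were flagged at risk."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of patients who stayed ulcer-free who were not flagged."""
    return tn / (tn + fp)

# Hypothetical counts: 40 true positives, 10 false negatives,
# 120 true negatives, 30 false positives.
print(sensitivity(40, 10))   # 0.8 - 40 of the 50 who ulcerated were flagged
print(specificity(120, 30))  # 0.8 - 120 of the 150 ulcer-free were not flagged
```

Note that both measures depend on where the cut-off point is set, which is why the choice of threshold matters so much in what follows.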
Anthony (1996) discusses the limitations of thresholds or cut-off points. If the cut-off point is set too high, the majority of the population will not be diagnosed; if it is set too low, many people will be diagnosed who do not have the condition. In terms of pressure damage prevention this could mean patients using expensive pressure-relieving equipment which they do not need.
In an attempt to discover a critical cut-off point and reduce the problems identified above, the authors used a receiver operating characteristic (ROC) curve. This tool assesses the classification potential of a procedure and makes it possible to identify the optimum cut-off point for the Braden scale at time 1, time 2 and pre-breakdown. In practice, this may assist the assessor to make a more accurate clinical decision relating to the possible risk of pressure ulceration and the need for preventive strategies (Anthony, 1996). Success is dependent on the individual assessor’s ability to complete and score the Braden scale assessment accurately.
Bergstrom et al (1998) state that the optimum cut-off point is located in the region where the ROC curve changes direction. However, from studying the graphs produced by the authors, this information is not clear. Anthony (1996) reports that problems can occur when trying to identify the optimum threshold from the ROC plot and that there are available techniques to assist with this procedure.
Exactly how the authors of the Braden scale study identified the optimum cut-off point is not clear from the text, yet accuracy is essential if a clinically useful critical cut-off point is to be established. The ROC curve was also used to calculate the optimum time at which to predict potential pressure ulcer development.
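One established technique for identifying a threshold from ROC data is Youden’s index (sensitivity + specificity − 1), maximised over the candidate cut-off scores. The sketch below illustrates this approach with hypothetical Braden scores; it is an assumption, not something stated in the study, that Bergstrom et al used this or any comparable method.

```python
# Sketch of one technique for choosing a cut-off from ROC data: Youden's
# index (sensitivity + specificity - 1), maximised over candidate scores.
# Scores and outcomes below are hypothetical, not study data.
# A patient is classed 'at risk' when their Braden score <= cutoff
# (lower Braden scores indicate higher risk).

def youden_cutoff(scores, developed_ulcer):
    """Return the cut-off score that maximises Youden's index."""
    best_cutoff, best_j = None, float("-inf")
    for cutoff in sorted(set(scores)):
        pairs = list(zip(scores, developed_ulcer))
        tp = sum(1 for s, u in pairs if s <= cutoff and u)
        fn = sum(1 for s, u in pairs if s > cutoff and u)
        tn = sum(1 for s, u in pairs if s > cutoff and not u)
        fp = sum(1 for s, u in pairs if s <= cutoff and not u)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1
        if j > best_j:
            best_cutoff, best_j = cutoff, j
    return best_cutoff

scores = [12, 14, 15, 16, 17, 18, 19, 20, 21, 22]  # hypothetical Braden scores
ulcer  = [True] * 6 + [False] * 4                  # hypothetical outcomes

print(youden_cutoff(scores, ulcer))  # 18 with these illustrative data
```

Moving the chosen cut-off up or down trades false negatives against false positives, which is exactly the tension the authors describe when discussing over-prediction.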
Bergstrom et al (1998) used tables and graphs to display the data collected in the study. However, these graphs provide a crude representation of the data and do not relate well to information in the text.
The majority of the results indicate a high level of statistical significance. However, such results may not always be clinically significant (Nieswiadomy, 1998). For example, Bergstrom et al (1998) report that the mean age of those who developed pressure sores was higher than that of those who did not. This was statistically significant in two of the care settings. However, the mean age difference between those who did not develop ulcers and those who developed stage 1 ulcers was only six months, which is unlikely to have an impact on clinical practice.
The authors gained verbal consent from participants at the beginning of the study. However, the study does not discuss pressure-relieving strategies implemented once a participant was identified as being at risk of pressure ulceration. It would not be ethical to leave a vulnerable participant to develop pressure ulcers once they had been identified as being at risk of skin breakdown. However, nursing these participants on pressure-relieving mattresses with the intention of reducing the risk of skin damage could have a direct effect on the results.
Bergstrom et al (1998) established a critical cut-off point when using the Braden scale: a score of 18 indicates pressure ulcer risk. The authors note that raising the cut-off point will result in a greater degree of over-prediction but will reduce the number of false negatives, as fewer patients at risk of pressure ulcers will go unidentified (Anthony, 1996). This study aimed to establish an optimal cut-off point in three care settings across the USA and, while it has established such a point, in practice this may not be sensitive and specific for all care settings. This should be a consideration when applying these findings to practice, as the clinical significance of the identified cut-off point may be questionable.
Results relating to the optimal timing of assessment indicate the benefits of assessment at the time of admission and 48 hours later. The authors provide a rationale for this. They argue that admission assessment, being highly predictive of pressure ulcer development, allows for early identification of those at risk and early intervention with preventive strategies. A further assessment 48 hours later proved more predictive than the admission assessment. The authors suggest this may be because factors not always apparent on initial assessment, for example the degree of incontinence and restricted mobility, become more apparent. The authors identify cost benefits of further reassessment during the patients’ hospital stay through preventing pressure ulcer development; however, they do not suggest a frequency for reassessment.
Critical appraisal of this study indicates the authors selected an appropriate methodology, method of data collection and data analysis. Some valuable findings can be related to clinical practice. The authors aimed to establish critical cut-off points for the Braden scale and investigate whether they can be duplicated across settings. Although the authors suggest a cut-off score of 18 could be used, this was not duplicated precisely in the various care centres. Therefore, optimum sensitivity and specificity may not be established in all areas. More multisite studies testing cut-off points are needed to address these concerns and ensure the validity of the Braden scale.
If pressure sore risk assessment tools are to be used in clinical practice it is essential to monitor their validity. However, it is important to remember that the tool should not be used exclusively to identify risk but should form part of a complete holistic assessment (Nixon and McGough, 2001).