
Study reveals difficulty of comparing hospitals on safety

Significant variations in the way intensive care nurses collect information on central line infections across the country may compromise attempts to monitor patient safety, researchers have warned.

They claim the variation in information collection raises serious questions about how effectively the performance of different hospitals can be compared for a range of clinical factors.

Academics spent more than 900 hours observing how nurses and other clinicians in 19 intensive care units measured and collected data on bloodstream infections in central venous catheters (CVCs).

Most of the units were taking part in the National Patient Safety Agency’s “Matching Michigan” project, which provided guidance on data collection and definitions to try to standardise their approach.

But, despite the attempt at standardisation, the study found substantial differences in approach that were likely to have had a significant effect on the results.

Researchers found ICUs differed over which patients with CVCs and which infections were deemed eligible for the study, with some excluding patients at high or low risk of infection.

They also found CVC tips that were supposed to be sent for testing were sometimes lost or thrown away by staff who had not been briefed on the project.

In addition, microbiology departments differed in their methods of analysis and definitions of whether an infection could be attributed to a CVC.

The research was commissioned by the Health Foundation. Its assistant director Elaine Maxwell told Nursing Times the findings demonstrated the limitations of measuring clinical outcomes.

“This study happens to be about infections but it could be about any safety measurement. Most people would think measuring infections in intensive care is fairly unambiguous, but this study shows that is not the case,” she said.

There has been a move towards measuring harm to NHS patients in recent years, most notably through the introduction of a tool called the patient safety thermometer. Data from the thermometer is published online and can be used to make comparisons between trusts.

Ms Maxwell, a former nursing director, said the research suggested measuring for improvement – where an organisation compares its own results with previous years – was valuable, but there were “difficulties” in making comparisons with other organisations.

The researchers also concluded the differences in approach were not the result of deliberate attempts to manipulate figures, often referred to as “gaming”.

Ms Maxwell said: “Where there have been differences, a lot of people have attributed it to gaming. The explanation has often been given that staff actively interpret the definitions differently in order to show themselves in a good light.”

But she said the research had shown this was not the case. The variation was “because if you have different resources available to you, you will automatically get different ways of doing things”, she said.
