
Awareness and prevention of error in clinical decision-making.

This is the final paper in a series of four discussing judgement and decision-making in nursing. The first three highlighted the importance of judgement and decision-making in nursing practice (Thompson et al, 2004), the utility of decision analysis as a way of structuring decisions (Dowding and Thompson, 2004a), and the nature of judgement in nursing (Dowding and Thompson, 2004b).

Abstract

VOL: 100, ISSUE: 23, PAGE NO: 40

Carl Thompson, DPhil, RN, is senior research fellow, Department of Health Sciences, University of York.

Dawn Dowding, PhD, RN, is senior lecturer, Department of Health Sciences and Hull-York Medical School, University of York.

This paper discusses error in judgement and decision-making. It highlights areas where common errors occur and practical ways in which they can be avoided.

Understanding error

Adverse events occur in around 10 per cent of hospital admissions, affecting over 850,000 patients per year (Department of Health, 2000). These events have significant social and financial consequences:

- Each year 400 people die or are seriously injured in incidents involving medical devices;

- 1,150 people who have had recent contact with mental health services commit suicide;

- Hospital-acquired infection (15 per cent of which is avoidable) costs the NHS around £1bn per year.

Such events - defined as events or omissions arising during clinical care and causing physical or psychological injury to a patient (DoH, 2000) - will always happen in a complex system such as health care delivery. However, at least 50 per cent are the result of error. In recognising this, health systems have developed a range of defences against such events:

- Automated prescribing systems;

- Early warning indicators based around routinely collected information;

- Compulsory continuing professional development for some health care professionals;

- Clinical governance;

- Risk assessment procedures.

However, even the best systems have the potential for failure as a result of human error. Box 1 describes three clinical situations in which systems were present but in which human error contributed to their failure. In order to understand these events we need to know something about the anatomy of errors and adverse events.

The nature of error in decision-making

Making a decision involves choosing between discrete options and acting on that choice; decisions are therefore intentional. It is not possible to make an error unless such an intention is in place. Reason (1990) defines error as ‘a generic term encompassing all those occasions in which a planned sequence of mental or physical activities fails to achieve its intended outcome, and when these failures cannot be attributed to the intervention of some chance agency’.

There are three kinds of error or decision failure: slips, lapses, and mistakes. Slips or lapses occur when the actions associated with a decision do not proceed as planned - for example, a practitioner forgets to gain consent from a patient despite going to their bedside to do so (Reason, 1990). Mistakes occur when a practitioner’s thinking about a decision is faulty - for example, thinking that a patient’s chest pain is a symptom of indigestion when they are actually having a heart attack (Reason, 1990). Mistakes can be further subdivided into:

- Failures of expertise: the practitioner brings a pre-established plan or solution to a decision based on their expertise in the area but does so inappropriately;

- Failures due to a lack of expertise: the practitioner has no pre-formed plan (perhaps never having encountered this kind of patient before), and has to develop a solution or plan from whatever knowledge is available and perceived as relevant.

Of course, if actions proceed as intended and achieve the desired ends, this is a successful decision and there is no problem.

Most nurses will recognise instances in their practice where slips, lapses, and mistakes have occurred. In general, slips and lapses take place at the level of skills (and the application of those skills to patients), while mistakes operate on two levels: using decision rules and handling knowledge.

Slips and lapses: skills-based errors

There are different categories of skills-based error, including inattention, distractions, interruptions, and reduced intentionality (when there is a gap between intending to do something and carrying it out). Another common skills-based error is perceptual confusion - when the ‘matching’ of action to routinely recognised situations is confused. This tends to happen when the use of information to inform a decision becomes so automatic that rough approximations suffice, for example, ‘I meant to get the water ampoule but grabbed sodium chloride instead… they look so similar’.

Similarly, interference errors occur when a number of tasks are being carried out at the same time and planned action sequences get mixed up, for example, ‘I was desperate to finish the drugs round and this patient kept going on about wanting a drink so I ended up pouring his medicine into his water cup’.

A number of strategies can be put in place to try to avoid skills-based errors, many of which have been recognised and acted upon at national level. These include examination of the labelling of IV fluids and certain types of cytotoxic drugs to avoid perceptual confusion (DoH, 2000).

Good environmental management can also help reduce the number of slips and lapses. For instance, reducing interruptions from external sources, avoiding frequent changes of activity, and not rushing through - and into - tasks without first quickly reflecting on the importance of each to the patient or practitioner will all reduce the likelihood of skills-based errors.

Mistakes: rule-based errors

When people try to make sense of situations and decide what to do, especially in familiar situations, they often apply a series of decision rules. Commonly these are in the form of ‘if… a situation matches X… then… it merits action Y’. Often people have many rules that could apply. Decision rules exist on a continuum ranging from good (match the decision situation, have worked in the past) to bad (do not match the decision situation, may not have worked in the past).

Errors may occur when ‘good’ rules are used inappropriately or when ‘bad’ rules are used to make a decision. Good decision rules may be used in the wrong situations in a number of circumstances. The situation may be an exception to the general rule but, because it is so similar to those seen previously, the decision rule is still used. It may be that the decision contains a lot of competing information, making it difficult to identify which rule is most appropriate. Rules with a proven track record of success also tend to be given a higher weighting in decisions. Reason (1990) uses the analogy of a horse race: the rule with the best form - previous history - has the highest chance of winning future races. However, sometimes these ‘superstrong’ rules are applied without regard to other factors determining usefulness, such as the degree of ‘fit’ with the situation or support from other decision rules.
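To make this concrete, the toy sketch below (in Python; the rule names, actions, and weights are invented for illustration and do not come from the paper) represents decision rules as condition-action pairs scored by past success (‘form’) and by how well they fit the current situation. If past success alone drives the choice, a ‘superstrong’ but poorly fitting rule wins - the failure Reason describes.

```python
from dataclasses import dataclass

@dataclass
class DecisionRule:
    name: str            # hypothetical rule label
    action: str          # what the rule recommends
    past_success: float  # 0-1: how often the rule has worked before ("form")
    fit: float           # 0-1: how well the rule matches the current situation

# Invented rules for illustration only
rules = [
    DecisionRule("chest pain -> indigestion", "give antacid",
                 past_success=0.9, fit=0.3),
    DecisionRule("chest pain + sweating -> cardiac", "urgent ECG, call doctor",
                 past_success=0.6, fit=0.9),
]

# Error-prone strategy: weight only by past success ("superstrong" rules always win)
by_form = max(rules, key=lambda r: r.past_success)

# Safer strategy: weight past success by degree of fit to this situation
by_fit_and_form = max(rules, key=lambda r: r.past_success * r.fit)

print("Chosen on form alone:", by_form.action)           # give antacid
print("Chosen on form x fit:", by_fit_and_form.action)   # urgent ECG, call doctor
```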

Other decision rules can be seen as good for lots of situations - they are general rules, while others are more specific. Commonly people are attracted to general decision rules rather than specific ones but in some circumstances it is more appropriate to use a specific decision rule.

Mistakes based on the application of bad rules may be due to individuals focusing on only one aspect of the task rather than the task as a whole. This means that a rule may be used for one small part of the decision, ignoring other, more pertinent areas. Mistakes on the basis of bad rules may also be due to the use of ‘intuitive’ reasoning that is often erroneous. Humans often use this approach when making judgements about the physical world, and as nursing depends heavily on observing the physical world - for example, looking at a patient’s colour or listening to chest sounds - this misuse of decision rules is important. Reason (1990) points to the example of college students watching a ball being fired from a coiled tube: they reasoned intuitively, but wrongly, that the trajectory of the ball was curved because the tube was coiled. The use of intuitive decision choices in the physical world can often lead to wrong decisions. Box 2 gives some examples of the inappropriate use of decision rules.

Heuristics or bias

Almost all professional practice involves applying knowledge to a problem. This is a positive thing in that knowledge of the right sort - for example, research combined with clinical experience - brings potentially better patient outcomes. However, in processing and applying knowledge in practice, health professionals are prone to a different set of errors from those highlighted so far; these are known as heuristics or biases.

Heuristics are often referred to as ‘mental short cuts’ that have developed as a way of helping people to process large amounts of information when facing decision tasks. Most of the time their use leads to efficient decision-making. However, if used inappropriately they can lead to consistent and serious errors in judgement. Research has identified a number of heuristics and associated biases (Lichtenstein, 1982; Kahneman and Tversky, 1973):

- Overconfidence;

- Hindsight bias;

- Base rate neglect;

- Anchoring.

Individuals are often overconfident when assessing the correctness of their knowledge (Baumann et al, 1991). Ironically, this often occurs in the situations where we have least knowledge - precisely when uncertainty, and an appropriately cautious lack of confidence, should be greatest. It is important for practitioners to be aware of their own knowledge, or lack of knowledge, so they have insight into when they may be overconfident in their assessments.

Hindsight bias refers to the effect that knowledge of the outcome of a particular decision has on subsequent judgements. For example, when teaching diagnostic skills, if the diagnosis is presented first by the expert and the nurse (student) is asked to ‘work backwards’, what emerges is the set of symptoms the nurse thinks should be present, rather than the picture that would have emerged had the nurse been asked to reason forwards from the symptoms to the diagnosis. Knowing the outcome of events makes it impossible to be impartial when reflecting on their causes: people will search for detailed causal connections, selectively recall key events, and reconstruct scenarios different from those they would have constructed had the outcome been unknown (Poulton, 1994).

Jones (1995) has suggested that nurses are prone to hindsight bias, with knowledge of a patient’s medical diagnosis affecting their judgement. A clinical parallel is the use of clinical supervision as a device for improving the quality of practitioners’ decisions. If supervision takes place according to a framework that is naive and unsophisticated - one that does not try to counter hindsight bias - the power of the technique is likely to be limited.

Health professionals have a tendency to neglect the underlying base rates of diseases or symptoms when diagnosing or treating illnesses (Thompson, 2003), but base rates are important. For example, a test for a particular disease - ordering tests is one of ten key tasks for nurses (Mullally, 2002) - may have a high false positive rate (where the test is positive but the person does not have the disease), while the disease base rate in the population is low. What this means is that most people who test positive do not have the disease, and the test is of little use.

Ignoring the base rates in a population runs the risk of misinterpreting the real chances of a patient having a disease or the significance of the signs observed.
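In more formal terms (this formulation uses standard screening terminology and is an addition for clarity, not something taken from the paper itself), the probability that a person with a positive test actually has the disease - the positive predictive value - combines the test’s properties with the base rate:

```latex
P(\text{disease} \mid \text{positive})
  = \frac{\text{sensitivity} \times \text{base rate}}
         {\text{sensitivity} \times \text{base rate}
           + (1 - \text{specificity}) \times (1 - \text{base rate})}
```

When the base rate is small, the second term in the denominator dominates, so most positive results are false positives even for an apparently accurate test.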

When people make decisions they draw on anchors to help them. These are cognitive reference points from which people work outwards to reach a judgement. Cioffi (1997) uses the example of a midwife who describes the colour of Caucasian babies following an assessment for jaundice: ‘every baby has a colour of his own… hopefully every baby is pink, or if they’re a couple of days old, they may be a little yellow’. This midwife’s initial anchors are her colour values of ‘pink’ and ‘yellow’, from which she can work in order to describe the jaundice.

Cioffi implies this kind of heuristic is valuable and desirable in professional practice. Approaches to professional expertise such as those promoted by Benner and Tanner (1987) and Benner (1984) imply these anchors are legitimate and useful. Experts are commonly classified as such because they are skilled in deploying these anchors, often as a result of considerable clinical experience. As long as the anchors are of good quality and reliable, this heuristic may be an effective way of making decisions. The problem with such anchors is that other kinds of heuristic and types of error can distort their construction, and that it is difficult to amass enough experiences of a similar nature to construct them for every situation.

Combating mistakes

Practitioners can use a number of relatively simple strategies to try to combat these tendencies towards bias in reasoning and reduce the chances of error. They can try to minimise the chances of mistakes by using validated decision rules. National evidence-based clinical guidelines, such as those issued by the National Institute for Clinical Excellence, are good examples.

Locally, decision-support tools such as evidence-based protocols may provide the best source of support in times of uncertainty. Such tools should be critically appraised to ascertain their quality, implications of use, and applicability to the patient. Individual decision rules can be successfully challenged via reflection conducted ‘on the hoof’ or ‘reflection in action’. Questioning our assumptions about patients when making a decision can be useful, as can imagining the ‘long run’ consequences of a decision choice - what would happen if you took the same decision for other similar patients over time.

Other strategies can also be used when applying knowledge to help avoid knowledge-based errors. In the absence of high-quality diagnostic tests, base rates can be the best indication of a particular disease or condition. It is easy to make the classic mistake of confusing the probability of disease given symptoms with the probability of symptoms given the presence of disease. Consider a scenario in which you need to explain to an asymptomatic woman the probability of breast cancer after a routine screening. The relevant information (for women aged 40) can be summarised as follows:

- The frequency of breast cancer is one per cent in the whole population; 

- The probability of a positive test given breast cancer is 80 per cent;

- The probability of a positive test given no breast cancer is 10 per cent.

What is the probability that a woman who tests positive actually has breast cancer: 100 per cent, 90 per cent, 80 per cent, 72 per cent, 7.5 per cent, or less than one per cent? The answer is 7.5 per cent (0.075).

If you got this answer right, do you know why? It is best explained by imagining 1,000 women: 10 of them will have breast cancer; eight of those 10 will test positive, and 99 of the 990 women without breast cancer will also test positive. The proportion of women who test positive and actually have breast cancer is therefore 8/(8+99) = 0.075, or 7.5 per cent.
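The arithmetic can be checked directly. The minimal sketch below (Python, using only the figures quoted above) computes the positive predictive value from the natural frequencies:

```python
# Figures from the worked example above (women aged 40, routine screening)
prevalence = 0.01            # 1 per cent have breast cancer
sensitivity = 0.80           # P(positive test | cancer)
false_positive_rate = 0.10   # P(positive test | no cancer)

# Natural-frequency version: imagine 1,000 women
with_cancer = 1000 * prevalence                                 # 10 women
true_positives = with_cancer * sensitivity                      # 8 women
false_positives = (1000 - with_cancer) * false_positive_rate    # 99 women

ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 3))  # 0.075, i.e. 7.5 per cent
```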

This example illustrates the need to be aware of a test’s properties, such as sensitivity and specificity. Critical appraisal of the tests ordered can also be useful, as test accuracy is not the same in every environment and depends on the prevalence of the target disease in the population to which the test is applied. Looking at the evidence for a test enables practitioners to ascertain whether it was developed and tested in settings that make it unsuitable for their needs. For example, if a blood sugar test was evaluated in settings in which the prevalence of diabetes is much higher (say, acute medical wards), it is less likely to help identify people with diabetes where the prevalence is much lower (say, in the community).
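To illustrate how prevalence alone changes what a positive result means, the sketch below applies a hypothetical blood sugar test in a high-prevalence and a low-prevalence setting (the 90 per cent sensitivity and specificity, and both prevalence figures, are assumptions for illustration, not data about any real test):

```python
def ppv(prevalence, sensitivity, specificity):
    """Probability of disease given a positive test, at a given prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical blood sugar test: 90% sensitivity, 90% specificity
print(round(ppv(0.30, 0.9, 0.9), 2))  # acute medical ward, 30% prevalence -> 0.79
print(round(ppv(0.03, 0.9, 0.9), 2))  # community, 3% prevalence -> 0.22
```

The test itself is unchanged; only the population differs, yet a positive result is far less informative in the low-prevalence setting.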

Practitioners can also combat overconfidence by questioning whether they are really 90 per cent sure, or if it is more like 75 per cent. They can then revise their estimates of correctness. This technique also helps them decide whether they really know the answer or whether further evidence-gathering is necessary. The information that jumps to the front of the mind when trying to grapple with the uncertainties in a clinical decision may not necessarily be the best evidence and the information should be actively questioned and alternatives sought.

Recognising that just because something sounds plausible, or fits a stereotype, does not make it more likely should challenge anchoring heuristics. Nursing has a culture of individualised care, but most nurses can think of examples where stereotypes determine the approach to a patient’s management: for example, the ‘wimp’ in bed four who is taking too long to get out of bed after a hernia repair, or the unemployed father whose daughter’s broken arm was ‘probably’ a non-accidental injury. Both these examples are drawn from our own experiences in practice. Probability indicates the chances of things happening; stereotypes are qualitative markers of bias, not of likelihood. Stereotypical views of patients are one of the most unacceptable faces of health care delivery and a major block on quality decision-making.

Conclusion

This paper has argued that when making decisions, nurses, like all people, are subject to uncertainty, error, and heuristic short-cuts. Unfortunately, it has shown that these heuristics are fallible and can introduce unhelpful bias into decision-making. The need to prevent harm to patients demands that professionals learn from mistakes and take corrective action.

The techniques described in this series are designed to combat these tendencies in human decision-makers. However, not all decisions are amenable to the potential benefits these tools offer. Most ‘real-life’ nursing and midwifery decisions require readily accessible techniques that can be applied at the bedside, in the ward, or in the treatment room.

This paper has highlighted a few simple techniques nurses can use to reduce bias and error, and common pitfalls to avoid. Most of these skills can be picked up quickly, discussed with colleagues, and, with practice, mastered and embedded in clinical decision-making - even the most advanced require no more than simple mathematical skills.

SERIES ON CLINICAL DECISION-MAKING

This is the fourth in a four-part series on decision-making:

1. Strategies for avoiding the pitfalls in clinical decision-making;

2. Using decision trees to structure clinical decisions;

3. How to use information ‘cues’ accurately when making clinical decisions;

4. Tools for handling information in clinical decision-making.

This article has been double-blind peer-reviewed.
