VOL: 103, ISSUE: 17, PAGE NO: 32-33
Jane Fitzpatrick, DEd, PGCEA, MSc, RGN, RHV, RM
Senior lecturer in the faculty of health and social care, University of the West of England
Abstract Fitzpatrick, J. (2007) Finding the research for evidence-based practice
This is the second of three articles exploring what is involved in developing effective evidence-based practice processes. The first article addressed the context of EBP, effective question identification and search strategies, while this looks at selecting and retrieving credible sources of evidence and the critical evaluation of research, including primary sources of evidence, systematic reviews and clinical guidelines.
The three articles in this series aim to facilitate understanding of:
- Effective question identification;
- Search strategies;
- Selecting and retrieving credible sources of evidence;
- The skills required for critical evaluation of sources of evidence;
- The application of research evidence to nursing practice.
Although evidence-based practice (EBP) was first described in the mid-1990s (Cluett, 2002), the approach had evolved over a number of years. Research developed alongside other types of data collection, in parallel with a rapid expansion in communication technologies. A mass of information became available to practitioners, and its sheer volume made it difficult to access and assimilate evidence effectively.
In response to this problem, Archie Cochrane argued in the early 1970s for collating data from randomised controlled trials (RCTs) in order to streamline access to information (Cochrane, 1972). His work led to the development of the systematic review and, subsequently, the Cochrane database. The database evolved to provide healthcare practitioners with registers of research that could offer information about the efficacy of care options, and offered critical evaluations of RCTs indicating whether interventions were effective, ineffective or of unknown value. The emergence of the internet led to the development of the Cochrane Library, which is available online.
Nurses looking for information via the internet will find a huge amount of material, much of which is irrelevant. A number of questions are helpful in assessing the quality and credibility of a source:
- Who is the author?
- Where is it from?
- Is what it says true or false?
- How long has it been there?
- Is it likely to change suddenly or be removed?
If evidence is retrieved using a search engine the first step is to decide if it is from a reputable source. Anyone can upload material onto the web and it may or may not be well researched and presented. Its credibility can be determined by looking for clues, for example whether it is published by a government source such as the Department of Health or an academic institution.
How does this compare with a library source such as a book or journal article? If it is from a university or NHS library:
- The author will have been required to check their work;
- An editor will have checked it;
- A publisher will have decided it is good enough to publish;
- A librarian or academic member of staff will have recommended it for the library;
- If it is a journal article it is likely to have been peer reviewed by experts in the field.
Articles accessed via an electronic library database will have met the criteria for library sources.
The OVID, EBSCO or DataStar databases, available via university or NHS libraries, offer a range of resources that have been selected by an approval process. For example, articles in the research section of Nursing Times and journals such as the Journal of Advanced Nursing and Nurse Researcher will have been subject to peer review, which means they are reviewed by experts in the field. Authors are required to address issues such as the audience, presentation style of the journal and, most importantly, the accuracy of the content. Journals accessed via BNI and CINAHL will have been peer-reviewed by experts in health and social care, including nurses.
Selecting and retrieving credible sources of evidence
After retrieving articles from a reputable source the next step is to assess their relevance. The database will contain links to the abstract, and the full text of the article if this is available. A quick scan of the abstract on screen should give a sense of the topic and the approach to the question covered. This enables researchers to decide on its relevance to their question. It is essential to develop skills in this initial phase of selecting and retrieving evidence as the volume of material available can be overwhelming.
Hierarchies of evidence
Hierarchies of evidence refer to the ranking systems developed by researchers and practitioners to identify the ‘best’ evidence to apply to practice. These hierarchies are not absolute. They are often depicted as a pyramid with three, four or five levels (Fig 1). Research whose findings can be generalised (applied to whole populations), such as RCTs, is usually depicted at the apex of the pyramid, while evidence that is not generally applicable, such as that obtained from qualitative research and expert opinion, is usually placed at the bottom.
Initially RCTs were perceived as the ‘gold standard’ in levels of evidence. However, Guyatt et al (1995) described the following hierarchy, which puts systematic reviews and meta-analyses - which constitute a comparative analysis of research - at the apex and excludes qualitative research and expert opinion:
- Systematic reviews and meta-analyses;
- Randomised controlled trials with definitive results;
- Randomised controlled trials with non-definitive results;
- Cohort studies;
- Case control studies;
- Cross-sectional studies;
- Case reports.
Polit et al (2001) and Hek et al (2006) described these types of research in greater detail.
Page and Meerabeau (2004) argued that there are hierarchies of ‘prestige’ in research that reflect the status of the knowledge base of the occupation presenting the hierarchy. They suggested that hierarchies of evidence that rate quantitative research approaches highly are developed when the occupation creating and using them is technical and specialised. Such occupations tend to dismiss qualitative studies as anecdotal or needlessly complex. In scientific fields such as medicine, therefore, the RCT is seen as the ‘gold standard’. Under this definition of the hierarchy of evidence, research using social science approaches, such as studies exploring aspects of patient experience or change management, may be undervalued.
Evans (2003) argued that hierarchies focusing on research methods and validity of findings are inappropriate for judging whether a study addresses appropriateness or feasibility, and suggested that a hierarchy is needed that enables evidence from a range of methodologies to be graded in order to rank interventions and to develop protocols for systematic review. Nurses draw on research from a range of methodological approaches to explore the implications for care and care management. It is therefore important to develop the skills to critically evaluate sources of evidence and to articulate clearly why our sources are credible when suggesting changes to practice.
Critical appraisal of the evidence
Greenhalgh (1997a) reported that many peer-review journal articles describing original research are presented using the IMRAD format:
- Introduction (Why the authors carried out the study);
- Methods (How they did it);
- Results (What they found);
- Discussion (What the results mean).
To make a quick assessment of an article it is worth looking at the methods section to identify whether they are appropriate to address the question posed. For example, what methods might be used to answer the question: How many teenage children in the UK smoke? A reasonable way to address this question would be to conduct a survey of a large cohort of children within a specific age range. The analysis would involve using statistical tests to determine the reliability of the data.
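As an illustration of the statistical step in such a survey, the sketch below shows one common calculation: a prevalence estimate with a 95% confidence interval using the normal approximation. The function name and all figures are invented for illustration and are not from the article or any real survey.

```python
import math

def prevalence_ci(smokers, sample_size, z=1.96):
    """Point estimate and normal-approximation 95% CI for a survey proportion."""
    p = smokers / sample_size
    se = math.sqrt(p * (1 - p) / sample_size)  # standard error of the proportion
    return p, (p - z * se, p + z * se)

# Invented figures: 450 smokers reported in a survey of 3,000 teenagers
p, (low, high) = prevalence_ci(450, 3000)
print(f"Estimated prevalence: {p:.1%} (95% CI {low:.1%} to {high:.1%})")
```

A wider interval signals a less reliable estimate; larger samples narrow the interval, which is why this kind of question calls for a large cohort.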
In contrast, if the question was asking about the experiences of teenagers who smoke (including an exploration of why they smoke), a study might use a qualitative approach. Data collection might involve semi-structured interviews or focus groups, and analysis might be undertaken using grounded theory or discourse analysis to allow the researcher to draw out themes from the data.
An exercise in the first article in this series located the article: Luker, K.A. et al (2003) The role of district nursing: perspectives of cancer patients and their carers before and after hospital discharge. European Journal of Cancer Care; 12: 4, 308-316.
As an exercise, locate a copy for yourself and find out about the researcher’s methods by looking at the data collection and data analysis sections. In data collection the author states that ‘interviews were conducted using an interview guide composed of questions that allowed the researcher to explore interviewees’ experiences and clarify issues that arose’. This suggests the researcher used a semi-structured questionnaire or interview schedule, which seems a reasonable way to find out about the perspectives of cancer patients and their carers about the role of the district nurse.
The section on findings contains direct quotes from the research participants. What does this mean in terms of the representation of the results? Is it likely to include statistical analysis of the data? In the conclusions section the author does not claim to make generally applicable statements about the research but does make some tentative suggestions. This seems a reasonable approach for this type of research question.
The key to understanding the link between the research approach and the methodology is to judge whether they match. Nurses who are unsure of the approach and methods may find it helpful to have a good source, such as a web link or textbook that describes research terms, to help them decide whether a source is credible.
Framework for critiquing research articles
There are many frameworks for critiquing research. Caldwell et al (2005) developed one that enables key questions pertinent to both quantitative and qualitative studies to be reviewed and provides a comprehensive series of questions to ask about a research article (Fig 2). In order to develop skills in analysing research papers it is advisable to keep an introductory text to hand. Nursing Times also publishes a range of articles on research approaches, which are available on nursingtimes.net.
Systematic reviews summarise the results of high-quality healthcare studies, often RCTs. They are developed methodically and include:
- A clear statement of objectives;
- A description of the methods of collecting and collating the studies;
- The criteria for inclusion and exclusion of studies from the review;
- How the studies were judged with respect to their contribution to the question posed;
- How the separate studies are brought together to present an overall measure of effectiveness. If statistical methods are used to combine the results this is called a meta-analysis.
Greenhalgh (1997b) suggested the following questions can be used to evaluate a systematic review:
1. Can you find an important clinical question that the review has addressed? The question should be precise since the reviewer has to decide whether or not the paper should be included in the review.
2. Was there a thorough search of the appropriate databases and were other potentially important sources explored? Have the reviewers only used one database such as Medline? Have they looked for ‘grey literature’, for example in other fields such as physiotherapy? Have they asked for raw data from the researcher(s) that may not have been included in a published paper?
3. Was the methodological quality assessed and the trials weighted accordingly? How did the reviewers establish the generic (pertinent to all research studies) and particular (relevant to this topic) criteria for inclusion/exclusion?
4. How sensitive are the results to the way the review was conducted? Greenhalgh gave an example from the Christmas issue of the British Medical Journal in 1994, in which the authors simulated clinical trials of ‘dice therapy’ for acute stroke by throwing red, white and green dice. Overall, the dice throws had no relationship with outcome. However, when the authors excluded red dice as harmful therapy and excluded other results on grounds of methodological quality, there was an apparent significant benefit of dice therapy in acute stroke. This demonstrated how entirely arbitrary decisions can affect the results and conclusions of a poorly conducted review.
Greenhalgh suggested the ‘what ifs’ should also be considered in evaluating systematic reviews. For example, what if the reviewers changed their inclusion criteria? What if unpublished work is excluded? What if the quality weightings were assigned differently?
5. Have the numerical results been interpreted with common sense and due regard to the broader aspects of the problem? Whatever the numerical result this must be contextualised with reference to the question addressed in the review. Nurses must then decide whether the result should influence patient care.
Returning to the topic of discharge planning for the transfer of a patient suffering from cancer to the care of the district nursing team, I was unable to locate a systematic review that directly addresses this issue. However, I found two related reviews via the Cochrane database. The first (Shepperd et al, 2000) examines factors associated with discharge planning. It is located at Cochrane Database of Systematic Reviews; 4: CD000313, and was updated in 2004. The authors concluded that: ‘The impact of discharge planning on readmission rates, hospital length of stay, health outcomes and cost is uncertain. This reflects a lack of power as the degree to which we could pool data was restricted by the different reported measures of outcome. It is possible that even a small reduction in length of stay, or readmission rate, could have an impact on the timeliness of subsequent admissions in a system where there is a shortage of acute hospital beds.’
The second review (Jeffery et al, 2007), which addresses follow-up strategies for patients treated for non-metastatic colorectal cancer, is also from the Cochrane Database of Systematic Reviews. The reviewers concluded that: ‘The results of our review suggest that there is an overall survival benefit for intensifying the follow-up of patients after curative surgery for colorectal cancer. Because of the wide variation in the follow-up programmes used in the included studies it is not possible to infer from the data the best combination and frequency of clinic (or family practice) visits, blood tests, endoscopic procedures and radiological investigations to maximise the outcomes for these patients.
‘Nor is it possible to estimate the potential harms or costs of intensifying follow-up for these patients in order to adopt a cost-effective approach in this clinical area. Large clinical trials underway or about to commence are likely to contribute valuable further information to clarify these areas of clinical uncertainty.’
These conclusions from two systematic reviews demonstrate the difficulty in locating specific examples of reviews contributing to the evidence base that practitioners can draw on to support developments in nursing practice. However, they may also guide researchers to raise questions about the primary sources they locate and suggest issues worthy of further exploration before nurses consider changing their practice.
Framework for critiquing clinical guidelines
Clinical guidelines, protocols and policies need to be subjected to comparable levels of scrutiny as other sources of evidence. Nurses need to know, for example, if members of the team writing a clinical guideline are credible in their field, represent the range of people who may be involved in the particular area or aspect of practice and have based their recommendations on current available research evidence.
The AGREE instrument (2001) was developed through international collaboration to provide a framework for assessing clinical guidelines. It is a generic tool to help both guideline developers and users to assess the quality of the guideline and addresses six domains:
- Scope and purpose;
- Stakeholder involvement;
- Rigour of development;
- Clarity of presentation;
- Applicability;
- Editorial independence.
The full AGREE instrument can be found at: www.agreecollaboration.org/
The Oncology Nursing Society (2007) suggests that clinical guidelines should make explicit recommendations and be based upon evidence. Citing Hayward et al (1995) and Brown (1999), the society asserts that evaluation of clinical guidelines should include appraisal of the following items:
- The guideline specificity and population to whom it will be applicable;
- All relevant options and outcomes are specified with decision-making points apparent;
- Process to identify, select and combine evidence is described and makes sense;
- Includes most recent findings (meaning it is current);
- Process of peer review and evaluation specified;
- Recommendations are practical and clinically relevant;
- Recommendations are strong (strength of evidence described);
- Guideline responds to a clinical problem;
- Recommendations are applicable to patients in your current setting;
- Use of recommendations would lead to identifiable outcomes that could be measured.
Returning to the topic of planning the discharge of patients with cancer to the care of district nurses, these items can be applied to the NHS Cancer Plan, to establish the credibility of the section on developments required for service provision in community nursing. The plan is located at: www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_4009609.
It is also possible to locate local practice guidelines developed by NHS trusts, such as those from Brent Teaching Primary Care Trust (www.brenttpct.org/doxpixandgragix/DNGUIDELINES.pdf), or to refer to one of the national service frameworks.
Parker (2001) described the key attributes of an expert nurse as an aptitude for ‘reflection and critical thinking’. The expert nurse also appreciates the relationship between theory and practice. Many expert nurses and nurse consultants are now employed in the NHS in areas such as infection control, tissue viability or specialist areas of practice. They are often a good starting point for nurses beginning to investigate an area of concern. They have a wealth of expertise and background literature which can inform search strategies.
This article has introduced the processes involved in critically evaluating sources of evidence, offered an overview of critiquing frameworks and outlined how to make an informed judgement about sources of evidence. It has also described the nature of hierarchies of evidence and the importance of clearly identifying how sources contribute to the literature informing developments in nursing practice. The third article in this series will consider how EBP can be applied in practice. Developing these skills will enable nurses to become competent in reviewing sources of evidence, to present a case to influence developments in care and to contribute to an evidence-based practice culture in the workplace.
AGREE collaboration (2001) The Agree Instrument: Appraisal of Guidelines for Research and Evaluation. www.agreecollaboration.org
Brown, S.J. (1999). Knowledge for Health Care Practice: A Guide to Using Research Evidence. Philadelphia, PA: WB Saunders.
Caldwell, K. et al (2005) Developing a framework for critiquing health research. Journal of Health, Social and Environmental Issues; 6: 1, 45-54.
Cluett, E.R. (2002) Evidence-based practice. In: Cluett, E.R., Bluff, R. (2002) Principles and Practice of Research in Midwifery. London: Churchill Livingstone.
Cochrane, A. (1972) Effectiveness and Efficiency. London: Nuffield Provincial Hospitals Trust.
Evans, D. (2003) Hierarchy of evidence: a framework for ranking evidence evaluating healthcare interventions. Journal of Clinical Nursing; 12: 1, 77-84.
Greenhalgh, T. (1997a) How to read a paper: getting your bearings. British Medical Journal; 315: 243-246.
Greenhalgh, T. (1997b) How to read a paper: Papers that summarise other papers (Systematic reviews and meta-analysis). British Medical Journal; 315: 672-675.
Guyatt, G. et al (1995) Users’ guides to the medical literature, IX: A method of grading health care recommendations. Journal of the American Medical Association; 274: 1800-1804.
Hayward, R.S.A. et al (1995) Users’ guides to the medical literature VIII: How to use clinical practice guidelines A. Are the recommendations valid? Journal of the American Medical Association; 274: 7, 570-574.
Hek, G. et al (2006) Making Sense of Research (3rd ed.). London: Sage Publications.
Jeffery, M. et al (2007) Follow-up strategies for patients treated for non-metastatic colorectal cancer. Cochrane Database of Systematic Reviews; 1: CD002200.
Luker, K.A. et al (2003) The role of district nursing: perspectives of cancer patients and their carers before and after hospital discharge. European Journal of Cancer Care; 12: 4, 308-316.
Oncology Nursing Society (2007) Critiquing Clinical Practice Guidelines. http://onsopcontent.ons.org/Toolkits/evidence/Process/guidelines.shtml
Page, S., Meerabeau, E. (2004) Hierarchies of evidence and hierarchies of education: reflections on a multiprofessional education initiative. Learning in Health and Social Care; 3: 3, 118-128.
Parker, M. (2001) Nursing Theories and Nursing Practice. Philadelphia, PA: FA Davis.
Polit, D. et al (2001) Essentials of Nursing Research: Methods, Appraisals and Utilization (5th ed.). Philadelphia, PA: Lippincott.
Shepperd, S. et al (2000) Discharge planning from hospital to home. Cochrane Database of Systematic Reviews; 1: CD000313.