

Lifecycle of a research project 3: reading research findings


This article, the third in a series on the lifecycle of a research project, explains how to read research results using the Creating Learning Environments for Compassionate Care study as an example


The Creating Learning Environments for Compassionate Care programme aims to foster a culture of compassionate care among hospital nursing teams working in inpatient settings for older people. The study aimed to determine whether the programme could be made to work and its impact could be measured. This third article in a six-part series explains what can and cannot be concluded from the results, using the study as an example of how to read research results when considering implementing an intervention in clinical practice.

Citation: Bridges J, Griffiths P (2019) Lifecycle of a research project 3: reading research findings. Nursing Times [online]; 115: 3, 40-42.

Authors: Jackie Bridges is professor of older people’s care; Peter Griffiths is chair of Health Services Research; both at the University of Southampton.

  • This article has been double-blind peer reviewed


Creating Learning Environments for Compassionate Care (CLECC) is a practice development programme for nursing teams on older people’s hospital wards that aims to build a culture in which the members feel able to practise compassionately (Bridges and Fuller, 2015). It begins with a four-month period facilitated by a practice development nurse, after which the ward team should take ownership and carry on with the new ways of working. The CLECC study used mixed methods to determine whether CLECC could be made to work on busy wards in acute hospitals and whether it was possible to measure its impact using an experimental design.

All Nursing and Midwifery Council registrants have a duty to ensure their practice is evidence-based, even if they do not undertake research themselves, so they must be able to critically appraise research findings. In this article, we explain how to make sense of research findings and make decisions about their use in clinical practice. We briefly summarise key results from the CLECC study and use them as an example of what to look for.

This article is the third in a six-part series on the trajectory of a research project. Part 1 described the call for research proposals by the National Institute for Health Research (NIHR) that prompted the CLECC study; part 2 looked at the study's background, rationale and design.

Methods and results summary

As you read this next section, it could be useful to mark any terms you do not understand and look them up. When you are a relative newcomer to reading about research methods, it is common to have to do this, as research papers explaining all the technical terms would be too long. Getting hold of a good all-purpose textbook on health research can be helpful.

The main methods of the CLECC study were a pilot cluster randomised controlled trial (RCT) and a qualitative evaluation. Six wards from two NHS hospitals in England were involved. It is described as a pilot because the main purpose was to test methods to see whether using the CLECC programme could affect patient care. If successful, these methods could be used to conduct a larger study (see part 4).

Pilot RCT

The main measurement involved researcher-rated observations of the quality of staff–patient interactions. To record this we used the Quality of Interaction Schedule (QuIS), details of which are outlined by McLean et al (2017) and Dean et al (1993). Researchers observed interactions between patients and staff at random times over a three-week period from Monday to Friday, 8am-10pm. They spent two hours at a time observing and repeated this 10 times per ward. Interactions between staff and patients were rated as positive social, positive care, neutral, negative protective or negative restrictive (Barker et al, 2016; Dean et al, 1993).

We gathered baseline data on all six wards before starting the trial. The study statistician then randomly allocated wards to either intervention or control using specialist randomisation software. Four wards received the four-month facilitated introduction to the CLECC programme (intervention wards); two did not (control wards). Randomisation was used to increase confidence that any changes found would be due to CLECC and not to other influences. We then waited another four months to test whether any changes lasted beyond the introductory period, gathering follow-up data using the same collection methods as at baseline.
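The cluster randomisation described above can be illustrated with a short sketch. This is a simplified analogy only: the study used specialist randomisation software handled by the study statistician, and the ward names below are hypothetical.

```python
import random

# Illustrative sketch of cluster randomisation: allocate six wards
# to intervention (4) and control (2) groups. The actual CLECC study
# used specialist randomisation software; ward names are invented.
def randomise_wards(wards, n_intervention, seed=None):
    rng = random.Random(seed)       # seeded for reproducibility
    shuffled = wards[:]             # copy so the input list is untouched
    rng.shuffle(shuffled)
    return shuffled[:n_intervention], shuffled[n_intervention:]

wards = ["Ward A", "Ward B", "Ward C", "Ward D", "Ward E", "Ward F"]
intervention, control = randomise_wards(wards, 4, seed=1)
print("Intervention:", intervention)
print("Control:", control)
```

The key point the sketch captures is that allocation happens at the level of whole wards (clusters), not individual patients, which is why the design is called a *cluster* randomised trial.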

Among the patients we approached, 93% agreed to take part; 25% of those whose care we observed had evidence of cognitive impairment, so we were confident we had recruited a sample representative of NHS hospital populations. We concluded from this that having their care observed was acceptable to hospital patients.

Table 1 shows the QuIS observations on the intervention and control wards at baseline and follow-up. In total, 3,109 interactions were rated. Most were rated positive (positive social or positive care) and a substantial minority were rated negative (negative protective or negative restrictive).

At follow-up, there were more positive (78% versus 74%) and fewer negative (8% versus 11%) interactions in intervention wards than controls. It was important to know whether these differences were likely to be due to chance, so we undertook statistical testing (chi-square); this showed that the differences were statistically significant (p=0.017) and, therefore, unlikely to be due to chance. Further analysis taking into account differences in other variables, such as patient age, was conducted. Although there was still less chance of negative interactions occurring on intervention wards, the difference was no longer statistically significant; this meant we could not say with confidence that the change was due to CLECC rather than other factors, such as differences between patients in the groups observed or other natural variations over time.
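For readers unfamiliar with chi-square testing, the sketch below computes the statistic from first principles for a 2x2 table (group by interaction type). The counts are hypothetical and chosen only to mirror the reported percentages; they are not the study's actual data, and the study's own analysis was more elaborate.

```python
# Hedged sketch: chi-square test of independence on a 2x2 table,
# computed from observed vs expected counts. Counts are HYPOTHETICAL,
# not the CLECC study's data.
def chi_square_2x2(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # expected count if group and outcome were independent
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# rows: intervention, control; columns: positive, negative interactions
observed = [[780, 80], [740, 110]]
stat = chi_square_2x2(observed)
# df = 1 for a 2x2 table; critical value at p = 0.05 is 3.841
print(f"chi-square = {stat:.2f}, significant at 0.05 = {stat > 3.841}")
```

A statistic above the critical value means the observed difference between groups would be unlikely if group and outcome were unrelated, which is what "statistically significant" conveys in the paragraph above.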

Table 1. QuIS observations on intervention and control wards at baseline and follow-up

Qualitative evaluation

To help us assess whether CLECC was workable and could be integrated into existing practice (that is, its feasibility), we undertook qualitative interviews with nursing staff and managers during implementation and follow-up (n=33), observations of staff learning activities (n=7) and ward manager questionnaires (n=12). We found that staff were generally keen to participate in the CLECC programme and thought CLECC had made a positive contribution not only to their own wellbeing, but also to patient care.

During the four-month facilitated period, reflection, learning, mutual support and innovation had become more accepted ways of working in all wards to a greater or lesser extent. Staff struggled to find the time to engage in all activities, but it had been possible to implement much of CLECC as originally planned. After the initial four months, however, we found differences between the wards regarding whether or not they continued with CLECC. All teams had to contend with high workloads and changing staff, but we also found factors that influenced whether or not CLECC continued. It was more likely to carry on if there was:

  • Ownership by manager and team members for making CLECC happen;
  • Support from matron and other managers for the team to engage with CLECC;
  • Transmission of a wider organisational culture emphasising staff wellbeing and person-based care as opposed to task-based care.

Two quotations illustrated the differences in staff experiences between wards:

“My matron’s been very supportive the whole way through. We’ve kept in regular contact. She’s been asking for updates, she’s known about the interventions that we’ve done on the ward.” (Ward manager)

“Some of the staff felt a little bit disappointed that they’d made these suggestions and took their time to do them and then no one really followed it through or said ‘yes, we can use that’ or ‘no we can’t’.” (Staff nurse)

We concluded from this that if frontline workers do not perceive the ways of working promoted by CLECC as valued or managers do not make that value clear, these ways of working will not routinely occur. It is possible to introduce practices at a local level that promote relational ways of working within a team, but the impact may be limited by wider-system factors.

What questions to ask of a research article

This next section provides guidance on how to find and critique research results. We suggest you use this to re-read the results described above and draw conclusions about what you could learn from this study for your own practice.

When researching a particular topic, the first step is to choose which research article(s) to read. Two main questions will help you decide how to search for, and select, the research you should look at:

  • What information do I need to guide my practice?
  • Does this research look likely to give me that information?

Titles, abstracts and even authors’ qualifications or job titles may help you spot potentially useful papers. There is a wealth of guidance on searching the literature, and a health services librarian can be an important ally in the search for good-quality research.

Once you have chosen the research paper(s), key questions you need to ask yourself are:

  • What are the results?
  • How much weight do I give them when considering changing my practice?

In both cases, the focus should be on the results that answer your original question, not necessarily the questions the authors set out to address. The authors' conclusions might help you to understand their results, but they are not the same as the conclusions you need to draw for yourself.

You also need to ensure that the authors’ conclusions are supported by the results. For example, if the authors conclude that “this intervention is effective”, they need to demonstrate a result such as “patients who experienced the new treatment were 25% more likely to be satisfied”.

How do you decide how much weight to give results? This is mostly a question about judging the quality of research. Different types of research – at its most simple, quantitative or qualitative – need to be judged according to different criteria. There are many critical appraisal or risk-of-bias assessment guides; these can help, but we suggest you use guidance that is specific to the research methods used by the authors.

When reading a research article, key questions to ask yourself are:

  • How many people participated in the research, and in how many places was it undertaken? Large samples and multiple settings are likely to provide more robust evidence for generalising the findings. The relative importance of sample size varies: qualitative research typically uses smaller samples, as it is often more concerned with explaining phenomena than with making generalisations about outcomes.
  • Was the method used suited to the question asked? Qualitative research generally provides a richer picture of patient experience than surveys, but surveys are better for measuring the frequency or extent of a problem in a population. Questions about treatment effectiveness need to be answered by trials comparing outcomes in two different groups of people – an RCT is generally the most suitable design to address them.
  • How do the results accord with other research? It is rare that a single study delivers enough evidence to influence practice. A well-written paper will give a good overview of existing research and try to offer conclusions by combining existing knowledge with the new evidence. Many authors do not do a good job with this and tend to overestimate the novelty and significance of their findings.

Does CLECC work?

In the case of CLECC, the question likely to be useful for you in practice is whether or not it is effective at promoting compassionate care. The focus of our research was slightly different, as our aim was to test the feasibility of implementing CLECC and measuring its impact. However, we did generate findings that can help answer the question of effectiveness, albeit not definitively. The qualitative findings certainly suggest that CLECC is workable and was appreciated by the staff who used it, although it might need to be developed further to increase its impact and sustainability. A trial design is generally the best way to answer the question of effectiveness, and our findings lend some support to staff views regarding benefits for patient care. However, our research was not designed to give a full answer.

Before the study, we had no way of knowing how many wards we would need to involve to address the question of CLECC's effectiveness, but it was never likely that six would be enough. We cannot be certain that any apparent positive effects are not produced by chance alone, rather than by the CLECC intervention. In our study report (Bridges et al, 2018), we concluded that small but non-statistically significant differences in the quality of interactions between intervention and control wards at follow-up were promising findings – particularly in the context of qualitative findings indicating that staff thought there were benefits to patient care. A practitioner or manager reading these findings may, therefore, conclude that there may be merit in introducing CLECC in their organisation, but that it would be important to explore whether potential benefits for patients and staff were actually delivered. Ideally, formal research would be needed, but local evaluation could be enough as CLECC is unlikely to cause harm.

Part 4 looks at how the research team took the project further after completing the initial study.  

Key points

  • Nurses need to be able to critically appraise research findings
  • To make use of research results, nurses need to ask themselves the right questions
  • Creating Learning Environments for Compassionate Care (CLECC) fosters compassion among nursing staff who care for older people in hospital
  • A mixed-methods study explored whether CLECC is feasible and whether its impact can be measured
  • The CLECC study was not designed to assess the effectiveness of the intervention
  • For full CLECC study findings, read open-access articles by Gould et al (2018), Bridges et al (2017) and Bridges et al (2018).
  • Funding – The CLECC study was funded by the National Institute for Health Research (NIHR) (Health Services and Delivery Research programme, project number 13/07/48). The views and opinions expressed in this article are those of the authors and do not necessarily reflect those of the Health Services and Delivery Research programme, NIHR, NHS or Department of Health and Social Care.
