
Success can be made to measure

Evaluating projects that concern services can be difficult, but there’s a way to do it

In these times of great financial pressure, there is more need than ever for health services to be clear about the impact of new initiatives and approaches.

We do not have the luxury to carry on with practices that cannot show their worth, and have to fight harder than ever for funding to try ideas out.

For those of us trained in clinical specialties, a randomised controlled trial would provide the level of evidence needed to convince us that a treatment would be worth new investment.

Ideally, we would apply such a rigorous methodology to initiatives that focus on managing people and organising services.

In reality, this is difficult if not impossible to do for such projects because of the range of variables and the ever-changing organisational and policy environment in which they are based.

In addition, we may not be clear at the outset about what we hope to achieve, and stakeholders will have different (and potentially contrasting) views of what success would look like.

This uncertain picture, combined with a need to get on, means that we often fail to identify the result we want and how it will be measured.

An approach that has proved successful in addressing these issues is the logic model (see http://tinyurl.com/logicmodel).

This requires stakeholders to spend time at the beginning of a project discussing and agreeing the short-, medium- and long-term outcomes that are expected, and the connected timescales.

Having agreed the outcomes, attention then returns to the beginning of the process: the inputs, such as staff time, training and available equipment, and how these will be combined to achieve the outcomes.

This is often a messy business at first as differing views are expressed and debated.

But, if discussions are facilitated well, they will result in a clearer and agreed view of not only what should be achieved but also the expected process and the assumptions that underpin it.

Evaluation can then be tailored towards capturing these outcomes and be more realistic about what can be achieved in the early stages. Without this, an initiative can be written off before it has had the chance to deliver.

Such an approach will not replicate the type of evidence produced by a clinical trial but will provide the basis for an objective evaluation of an initiative’s actual impact. Furthermore, clarity about the process will allow you to use the parts that worked for future projects.

How to measure the impact of change

● Securing the views of stakeholders provides clarity of outcomes and generates early support
● Short-term outcomes are often easier to measure but need to be linked clearly to long-term outcomes if they are to be seen as evidence of success
● Evaluating impact will require resources; these need to be built into the project plan and the evaluation followed through

Robin Miller is a senior fellow at the Health Services Management Centre at the University of Birmingham. He has worked as a manager and commissioner in health and social care services.
