Little voice: giving young patients a say


How a patient experience questionnaire for young outpatients was developed

In this article…

  • Parental views are not a good measure of children’s experience of healthcare
  • Careful development and testing can lead to a useful questionnaire for young people
  • The results can be used to improve services

 

Authors

Bridget Hopwood is senior project manager and Amy Tallett is project manager, both in the children and young people research team at Picker Institute Europe.

Abstract

Hopwood B, Tallett A (2011) Little voice: giving young patients a say. Nursing Times; 107: 49/50, 18-20.
Background Patient experience is widely measured in healthcare settings, but few tools exist that gather feedback directly from young patients.
Aim To develop a paediatric questionnaire to gain meaningful feedback from young hospital outpatients.
Method Two paper questionnaires were designed and tested - one for parents and one for young people. These were piloted in 2009 with 1,200 recent outpatients and their parents/carers from Sheffield Children’s Foundation Trust. Rollout to 14 acute NHS trusts in England achieved an average response rate of 33%.
Results The main problems for young outpatients related to waiting, pre-appointment information and communication. Questionnaire validation showed that both surveys were accessible and reliable measures of patient experience.
Conclusions It is important to give children and young people a say in their healthcare via tested methods that are appropriate to their needs and abilities. Results can help hospital outpatient departments to identify the main issues and problems experienced by their young patients.

Keywords: Survey/Children/Patient experience

  • This article has been double-blind peer reviewed
  • Figures and tables can be seen in the attached print-friendly PDF file of the complete article

 

5 key points

  1. Parent/carer views are not always a reliable proxy measure of children’s experiences
  2. Few questionnaires have been developed to gather feedback directly from children about hospital care
  3. Surveys for younger patients must be developed with them and must be thoroughly tested
  4. Findings can be used to prioritise quality improvement work
  5. Priority areas identified in a recent paediatric outpatient survey were waiting, pre-appointment information and communication


Every Child Matters said all children and young people should make a positive contribution to decision making that affects their lives, and that “real service improvement is only attainable through involving children and young people, and listening to their views” (Department for Education, 2004).
The NHS patient survey programme, undertaken on behalf of the Care Quality Commission, enables NHS trusts to gather feedback about patients’ experiences. National surveys assessing adult experience are done annually in England, but there has only ever been one national survey of children’s experience - conducted in 2004 for young inpatients and their parents/carers (CQC and Picker Institute Europe, 2004).
It is important to consult children and young people directly about their experiences of healthcare, but designing a standard approach is difficult. A child is defined as being aged 0-17 years and, within this age range, there are huge differences in cognitive ability, including memory and the accuracy of recall, reading ability and comprehension.
Worldwide, there is little available evidence of children’s experiences of healthcare, and few specifically designed measuring tools. Magaret et al (2002) gained feedback on children’s satisfaction with - rather than experience of - an emergency hospital department in the US. Lindeke et al (2009) adapted and validated an existing paediatric tool for inpatient and outpatient quality improvement with children, based on limited sample sizes. In Europe, children’s survey tools have been developed for young patients in emergency and inpatient settings (Chappuis et al, 2011; Pelander et al, 2008).
Parents often act as proxies in completing child-specific questionnaires (Kam et al, 2008; Battrick and Glasper, 2004). However, parents’ evaluations of children’s care do not always accurately represent children’s and young people’s perceptions, often being more positive (Mah et al, 2006; Chesney et al, 2005; Naar-King, 2001).

Aim

We set out to develop and validate a child-friendly self-completion questionnaire to allow younger outpatients in England to describe their experiences of healthcare and to provide feedback to hospitals that could be acted upon. This work was initiated, commissioned and funded by Sheffield Children’s Foundation Trust.

Method

Two paediatric outpatient questionnaires were developed based on an existing adult outpatient questionnaire, two existing paediatric inpatient questionnaires, and previous qualitative research with young patients and their carers carried out in 2004 and 2007. One survey (P) was for parents/carers of young outpatients and the other (YP) for young patients to complete themselves.
The target age range of the P and YP surveys was determined from a 2008 questionnaire that showed an increase in child involvement from the ages of 7-8 years (Fig 1).
We decided to set the lower age limit to eight years for the YP survey. By doing so, we hoped to maximise child involvement and minimise the likelihood that parents/carers would complete the survey on their child’s behalf. Consequently, the P survey was aimed at parents/carers of outpatients aged seven and under.
The surveys asked specific factual questions about what happened during all aspects of the outpatient appointment including booking, arrival and waiting, hospital facilities, seeing a doctor or nurse, communication, tests and X-rays, new medication, information and overall impression.
The questionnaires also incorporated some open-ended questions about the visit. The YP survey was designed to appeal to younger children without patronising older ones, and to account for different reading and comprehension levels.
Cognitive interviews tested overall content, structure, flow and length, focusing particularly on clarity of language and question comprehension. In response to this testing, the questionnaire was shortened and redesigned with illustrations and colour. The length, wording and/or response options to some questions were also amended and long, multifaceted questions removed entirely.
Children found it difficult to recall information about booking and arrival at hospital, so these questions were incorporated into a parents’ section within the children’s questionnaire.
The P survey contained 61 experience questions, four demographic questions and four open-ended, free-text questions. The YP survey had two sections. The first section was for children and included 34 experience questions, three demographic questions and two free-text questions; the second parent section had a further 25 questions (including four demographic questions and one free-text question).
The questionnaires were initially posted to 1,200 recent young outpatients at Sheffield Children’s Hospital, with two reminders sent to non-responders. The response rate was 37% (YP: 35%; P: 39%); 65% of the YP surveys were completed by the child alone, with a further 26% completed by both the child and parent.
Only 11% were filled out by the parent alone, showing that children had a high level of engagement. There was also a considerable increase in child-only completion relative to the previous inpatient surveys (50% in 2004; 54% in 2008; Fig 2), a likely result of cognitive testing, modifying the questionnaire and adjusting the target age range of the YP survey.
After this pilot, the survey was rolled out to 14 acute NHS trusts in England. At each trust, a random sample of 850 patients aged 17 and under was selected in a specified month, ensuring a representative sample across all clinics.
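As a purely illustrative aside, the kind of clinic-stratified random draw described above can be sketched in a few lines of code. Nothing below comes from the study itself: the file name outpatient_attendances.csv and the patient_id and clinic columns are hypothetical, and the article does not say what software or sampling procedure the trusts actually used.

    # Illustrative sketch only: draws roughly 850 patients for one month,
    # sampling each clinic in proportion to its share of attendances so that
    # all clinics are represented. Data file and column names are hypothetical.
    import pandas as pd

    SAMPLE_SIZE = 850

    attendances = pd.read_csv("outpatient_attendances.csv")   # hypothetical monthly extract
    patients = attendances.drop_duplicates("patient_id")      # one record per patient
    fraction = SAMPLE_SIZE / len(patients)                     # overall sampling fraction

    sample = (
        patients
        .groupby("clinic", group_keys=False)
        .apply(lambda clinic: clinic.sample(frac=fraction, random_state=1))
    )

    print(f"Sampled {len(sample)} patients across {sample['clinic'].nunique()} clinics")

Because each clinic is sampled at the same fraction, the resulting list mirrors the clinic mix of the month’s attendances rather than over-representing the busiest clinics.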

Results

The rollout to 14 acute trusts yielded an overall response rate of 33% (3,783 completed surveys; P response rate 34%; YP response rate 32%). Two thirds of returned YP questionnaires were completed by the child unaided, while a quarter were completed by parent and child together. Only 9% of questionnaires were completed by the parent alone (Fig 2).
The most positive responses related to hospital cleanliness (only 3% felt the department was unclean and 8% said toilets were unclean) and overall ratings of care (95% of parents said their child’s care was excellent, very good or good; 96% of children said they were looked after very or fairly well).
Communication about waiting times in hospital was the biggest problem area: 65% of those who waited more than five minutes for their appointment were not told about the wait. Pre-appointment information was also inadequate, with 62% of parents and 57% of children not fully aware before the appointment of what would happen.
The surveys also found that:

  • 55% of parents whose child was given new medication did not receive full explanations about side-effects;
  • 48% of all respondents felt that there were not enough age-relevant activities to do when waiting to be seen - this was higher for children (63%) than for parents (35%);
  • 38% of parents could not find a convenient place to park;
  • 38% of parents did not have access to suitable food and drinks during their hospital visit;
  • 35% of children reported that doctors did not speak to them in a way they could fully understand;
  • 35% of children said they were not fully involved in decisions about what happened to them during their appointment;
  • 24% of YP respondents felt that they were not given enough privacy when being treated or examined, compared with only 12% of P survey respondents.

Both surveys were accessible to their respective target audiences and showed high levels of completion. In each survey, 99% of respondents reached the end of the “experience” questions.
Statistical analysis of the YP survey responses showed that the questionnaire reliably measures the patient experience and consistently discriminates between different levels of experience.
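The article does not specify which statistics were used for this validation. As an illustration only, one common check of whether a set of questionnaire items forms a reliable scale is an internal-consistency coefficient such as Cronbach’s alpha; the sketch below assumes a hypothetical matrix of numerically coded responses to the experience questions and is not the study’s own analysis.

    # Illustrative sketch only: Cronbach's alpha for a (respondents x items)
    # matrix of numerically coded answers. The example data are made up.
    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Return Cronbach's alpha for a respondents-by-items score matrix."""
        n_items = scores.shape[1]
        sum_item_variances = scores.var(axis=0, ddof=1).sum()  # per-item variances, summed
        total_variance = scores.sum(axis=1).var(ddof=1)        # variance of respondents' totals
        return (n_items / (n_items - 1)) * (1 - sum_item_variances / total_variance)

    # Made-up example: 5 respondents answering 4 items coded on a 1-3 scale.
    scores = np.array([
        [3, 3, 2, 3],
        [2, 2, 2, 2],
        [3, 2, 3, 3],
        [1, 1, 2, 1],
        [2, 3, 2, 2],
    ])
    print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")

Values of alpha above roughly 0.7 are conventionally taken to indicate acceptable internal consistency, although the article does not report which thresholds or additional discriminative analyses were applied.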

Discussion

Our work confirmed that parent/carer views are not necessarily accurate or appropriate as proxy measures for the experiences of young patients, and that it is possible and practical to measure younger outpatients’ experiences directly.
The work emphasised that questionnaires designed for children must be tested with the target age group before they are finalised.
Cognitive testing of the YP outpatient survey revealed that children found it easy to recall information relating to people and their characteristics, to the hospital environment and to entertainment. Pelander and Leino-Kilpi (2010) reported that people, activities and environment were among children’s best and worst experiences during hospitalisation, and therefore these areas may be better remembered by children.
In contrast, children found it difficult to recall information about booking and arrival at the hospital, and did not accurately answer time-specific questions. These types of question were therefore incorporated into a parents’ section, which may also have given parents a sense of inclusion, reducing the extent to which they intervened in the completion of the children’s section. The survey rollout showed a high level of child involvement.
The most significant problem areas arising from the survey results related to parking, pre-appointment information, and waiting (concerning both age-appropriate things to do when waiting, and communication about waiting times). Communication between healthcare professionals and young patients was also highlighted as an area that could be improved.
In measuring patients’ experiences, it is essential that survey tools and methods provide feedback that is sufficiently specific and can be acted on. Sheffield Children’s Hospital used its 2009 survey results to develop an action plan to address concerns. The survey was repeated in 2010, and considerable improvements on many of the measures were found.
Communication about waiting times and organisation of the outpatients department were tackled by introducing a bleep system, so that young patients waiting to be seen could leave the department (for example to visit the hospital cafe) and be notified when it was time to return to the clinic. In the 2010 survey, the percentage of patients saying the outpatients department was not well organised had fallen from 41% to 36%.
Handheld games consoles were introduced to counter boredom while waiting. Dedicated space was provided for entertainment facilities, and a youth worker employed to spend time with children. In the 2009 survey, nearly half (49%) of respondents felt there was not enough to do when waiting to be seen - this fell to 41% in 2010.
The signs to the outpatient department were redesigned because 52% felt it was not easy to find the right department; this fell to 44% in 2010. These findings show how the young outpatient questionnaire can be used to effectively improve services for this population, and how any changes in performance can be monitored over time (see tinyurl.com/sheffield-survey).
There is a need to develop instruments and methods that allow children and young people to provide feedback on all aspects of their healthcare. An inpatient questionnaire has also been developed using a similar methodology. Other priority areas include paediatric hospice and end-of-life care, primary care and cancer care.
There is also a need to trial alternative methodologies for administering the surveys, including digital data-capture methods that obtain real-time feedback near the point of care. These should be tested with children and young people to establish whether they are more effective in terms of cost, response rates and child-only completion.
Finally, measures of self-completion for children are typically targeted at those aged eight years and above (Eiser and Morse, 2001), so methods of obtaining feedback from younger children, in addition to those with learning difficulties, need to be researched and developed.

Conclusion

This research highlights the importance of consulting and involving children in the development of children’s surveys. Such surveys allow young people to give accurate feedback on issues that are important to them, and inform healthcare organisations of problem areas and priorities for improving services for their young patient population.
